This was caused by uninitialized netlink attrs in the bond-member
netlink parse API.
PS: It was caught by the upstream topotests on ARM8 (passed everywhere
else).
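For context, a minimal sketch of the kind of fix this implies, assuming the bond-member parse fills an rtattr table; the attribute constant and the parse helper named in the comment are illustrative stand-ins, not necessarily the exact code:
```c
#include <string.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

/* Sketch only: zero the attribute table before the nested parse so that
 * attributes the kernel did not include read as NULL instead of
 * uninitialized stack memory (the failure mode seen on ARM8).
 */
static void bond_member_parse_sketch(struct rtattr *nested)
{
	struct rtattr *attr_tb[IFLA_BOND_SLAVE_MAX + 1];

	memset(attr_tb, 0, sizeof(attr_tb));
	/* ... e.g. a netlink_parse_rtattr_nested()-style call over `nested`,
	 * followed by the per-attribute handling ...
	 */
	(void)nested;
	(void)attr_tb;
}
```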
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
This is needed as the kernel currently doesn't allow a MAC replace if the dst
changes from an L2NHG to a single VTEP and vice versa.
Ticket: CM-31561
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When an ES-bond is in bypass state, MACs learnt on it are linked to the
access port instead of the ES. When LACP converges on the bond, it moves
out of bypass and the MACs previously learnt on it are flushed to force
a re-learn on new traffic.
Ticket: CM-31326
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When an ES-bond comes out of bypass, FRR needs to flush the local MACs learnt
while the bond was in bypass. To do that efficiently, local MACs are linked
to the destination access port. This linking only happens if the access port
is in LACP-bypass or if it is non-ES.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Feature overview:
=================
An 802.3ad bond can be set up to allow lacp-bypass. This is done to enable
servers to PXE boot without a LACP license, i.e. it allows the bond to go oper
up (with a single link) without LACP converging.
If an ES-bond is oper-up in an "LACP-bypass" state, MH treats it as a non-ES
bond. This involves the following special handling -
1. If the bond is in a bypass-state the associated ES is placed in a
bypass state.
2. If an ES is in a bypass state -
a. DF election is disabled (i.e. assumed DF)
b. SPH filter is not installed.
3. MACs learnt via the host bond are advertised with a zero ESI.
When the ES moves out of "bypass" the MACs are moved from the zero ESI to
the correct non-zero ESI. This is treated as a local station move.
Implementation:
===============
When (a) an ES is detached from a hostbond or (b) an ES-bond goes into
LACP bypass, zebra deletes all the local MACs (with that ES as destination)
in the kernel and in its local db. BGP re-sends any imported MAC-IP routes
that may exist with this ES destination as remote routes, i.e. zebra can
end up programming a MAC that was previously local as remote, pointing
to a VTEP-ECMP group.
When an ES is attached to a hostbond or an ES-bond goes
LACP-up (out of bypass), zebra again deletes all the local MACs in the
kernel and in its local db. At this point BGP resends any imported MAC-IP
routes that may exist with this ES destination as sync routes, i.e.
zebra can end up programming a MAC that was previously remote
as local, pointing to an access port.
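A rough sketch of the flush described above; the helper, type, and field names here are hypothetical stand-ins for zebra's EVPN-MH code, not the actual implementation:
```c
/* Hypothetical sketch: when the ES attach/bypass state of an access bond
 * changes, walk the local MACs pointing at that destination and delete
 * them from the kernel and zebra's local db so they get re-programmed
 * with the new origin (BGP then re-sends any imported MAC-IP routes).
 */
static void es_flush_local_macs_sketch(struct zebra_evpn_es *es)
{
	struct zebra_mac *mac;
	struct listnode *node, *nnode;

	for (ALL_LIST_ELEMENTS(es->mac_list, node, nnode, mac)) {
		if (!CHECK_FLAG(mac->flags, ZEBRA_MAC_LOCAL))
			continue;
		/* zebra_evpn_local_mac_del(mac); -- hypothetical name */
	}
}
```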
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
VNI configuration in the default VRF is done without the NB layer. This leads
to the following problems:
```
vtysh -c "conf" -c "vni 1"
vtysh -c "conf" -c "vrf default" -c "no vni"
```
Second command does nothing, because the NB node is not created by the
first command.
```
vtysh -c "conf" -c "vrf default" -c "vni 1"
vtysh -c "conf" -c "no vni 1"
```
Second command doesn't delete the NB node created by the first command.
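A minimal sketch of the direction of the fix, routing the command through the northbound layer so that `vni` and `no vni` operate on the same NB node; the xpath below is a placeholder, not the real schema path:
```c
/* Sketch only: the xpath is a placeholder. */
DEFPY_YANG (vni_sketch,
	    vni_sketch_cmd,
	    "[no] vni (1-16777215)$vni",
	    NO_STR
	    "VNI\n"
	    "VNI id\n")
{
	/* Enqueue the change against the NB node so create and delete
	 * both go through the same path, in the default VRF as well.
	 */
	nb_cli_enqueue_change(vty, "./l3vni-id",
			      no ? NB_OP_DESTROY : NB_OP_MODIFY,
			      no ? NULL : vni_str);
	return nb_cli_apply_changes(vty, NULL);
}
```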
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This is causing problems with VM move, i.e. the transition from remote
neigh to local neigh. This transition involves changing the NUD state
from NUD_NOARP to NUD_STALE, and the weak-override flag prevents changing
the state from connected (REACHABLE, NOARP, PERMANENT) to STALE.
PS: Weak-override was originally used to prevent race conditions where
FRR can end up making a REACHABLE neigh STALE. We may need to revisit
and address that case at a later point.
Ticket: CM-30273
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Start reorg of zebra nexthop-resolution so that we can use the
resolution logic for nexthop-groups as well as routes. Change
the signature of the core nexthop_active() api so that it does
not require a route-entry or route-node. Move some of the logic
around so that nexthop-specific logic is in nexthop_active(),
while route-oriented logic is in nexthop_active_check().
Signed-off-by: Mark Stapp <mjs@voltanet.io>
For MH the SVI MAC is advertised to prevent flooding of ARP replies.
But because of a bug the SVI MAC was being added to the zebra database
but not sent to bgpd for advertising.
Ticket: CM-33329
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
As a part of FRR shutdown, interfaces are force flushed (in an arbitrary
order). Interfaces are already down at that point, i.e. resources like the
SVI MAC have already been released. Attempting to clean them up again
as a part of the force-flush was resulting in access of freed memory -
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
==26457== Thread 1:
==26457== Invalid read of size 8
==26457== at 0x1AE6B0: zebra_evpn_acc_bd_svi_set (zebra_evpn_mh.c:606)
==26457== by 0x1B1460: zebra_evpn_if_cleanup (zebra_evpn_mh.c:1040)
==26457== by 0x13CA69: if_zebra_delete_hook (interface.c:244)
==26457== by 0x48A0E34: hook_call_if_del (if.c:59)
==26457== by 0x48A0E34: if_delete_retain (if.c:290)
==26457== by 0x48A2F94: if_delete (if.c:313)
==26457== by 0x48A3169: if_terminate (if.c:1217)
==26457== by 0x48E0024: vrf_delete (vrf.c:254)
==26457== by 0x48E0024: vrf_delete (vrf.c:225)
==26457== by 0x48E02FE: vrf_terminate (vrf.c:551)
==26457== by 0x1442E1: sigint (main.c:203)
==26457== by 0x1442E1: sigint (main.c:141)
==26457== by 0x48CF862: quagga_sigevent_process (sigevent.c:103)
==26457== by 0x48DD324: thread_fetch (thread.c:1404)
==26457== by 0x48A926A: frr_run (libfrr.c:1122)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
(gdb) bt
(gdb) fr 5
1037 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p zif->ifp->name
$2 = "vlan131", '\000' <repeats 12 times>
(gdb) p zif->link->info
$5 = (void *) 0x1
(gdb) p/x zif->ifp->flags
$7 = 0x1002
(gdb)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-32435
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
A zebra crash is seen while cleaning up an EVPN interface
during the shutdown event.
The EVPN interface cleanup is called from the vrf_delete callback.
(gdb) frame 4
(is_up=false, br_zif=0x0, vlan_zif=0x557f31fb36f0) at zebra/zebra_evpn_mh.c:614
614 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p tmp_br_zif
$1 = (struct zebra_if *) 0x0
(gdb) p vlan_zif->link
$2 = (struct interface *) 0x557f31fb2d40
(gdb) p vlan_zif->link->info
$3 = (void *) 0x0
(gdb) p zebra_if->ifp->name
No symbol "zebra_if" in current context.
(gdb) p vlan_zif->ifp->name
$4 = "peerlink-3.4094\000\000\000\000"
Ticket: CM-32435
Reviewed By: CCR-10957
Testing Done:
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Added support for advertising the SVI MAC if EVPN-MH is enabled.
In the case of EVPN MH, ARP replies from an attached server can be sent to
the ES peer. To prevent flooding of the reply, the SVI MAC needs to be
advertised by default.
Note:
advertise-svi-ip could have been used as an alternate way to advertise the
SVI MAC. However, that config cannot be turned on if SVI IPs are
re-used (which is done to avoid wasting IP addresses in a subnet).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
The SVI IP is being advertised unconditionally, i.e. even if the advertisement
is disabled (which is the default config). This can be problematic when the
SVI address is re-used across racks.
Added the user config condition in all the relevant places where the
SVI advertisement is triggered.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When looking up the conversion from kernel protocol to
internal protocol, make sure we key on the family the kernel
uses, AF_INET, instead of AFI_IP (which is what FRR uses).
Otherwise routes from OSPF will show up from the kernel as OSPF6
instead of OSPF, which will cause mayhem.
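A self-contained illustration of why the mix-up maps IPv4 onto IPv6; the AFI values are copied into the example for clarity:
```c
#include <stdio.h>
#include <sys/socket.h>

/* FRR's internal address-family indexes (copied here for illustration)
 * do not match the kernel's socket families: AFI_IP is 1 while the
 * kernel's AF_INET is 2, which happens to equal AFI_IP6. Comparing the
 * kernel-supplied family against AFI_IP therefore never matches for
 * IPv4, so an OSPF kernel route fell through and came back as OSPF6.
 */
enum { EX_AFI_IP = 1, EX_AFI_IP6 = 2 };

int main(void)
{
	printf("AFI_IP=%d vs AF_INET=%d; AFI_IP6=%d vs AF_INET6=%d\n",
	       EX_AFI_IP, AF_INET, EX_AFI_IP6, AF_INET6);
	return 0;
}
```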
Ticket: CM-33306
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Neither tabs nor newlines are acceptable in syslog messages. They also
break line-based parsing of file logs.
Signed-off-by: David Lamparter <equinox@diac24.net>
The old VXLAN function for local MAC deletion still existed and was
being called from the VXLAN code, whilst the new generic function was
not being called at all. Resolve this so the generic function matches
the old function and is called exclusively.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Move the pbr hash creation to be after the update release
and dplane install. Now that rules are installed in a separate
dplane pthread, we can have scenarios where an interface is
flapping and we install/remove rules fast enough that we
issue what we think is an update for an identical rule and
end up releasing the rule right after we created it and sent it to
the dplane. This solves the problem of receiving duplicate rules
during interface flapping.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Disallow resolution to nexthops that are marked duplicate.
When we are resolving to an ECMP group, it's possible this
group has duplicates.
I found this when I hit a bug where we can have groups resolving
to each other and cause the resolved->next->next pointer to increase
exponentially. With sufficiently large ECMP, zebra will grind to a halt.
Like so:
```
D> 4.4.4.14/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:02
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 4.4.4.1 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.2 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.3 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.4 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.5 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.6 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.7 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.8 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.9 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.10 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.11 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.12 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.13 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.15 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.16 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
D> 4.4.4.15/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:09
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 4.4.4.1 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.2 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.3 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.4 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.5 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.6 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.7 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.8 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.9 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.10 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.11 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.12 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.13 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.14 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.16 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
D> 4.4.4.16/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:19
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:19
via 4.4.4.1 (recursive), weight 1, 00:00:19
via 1.1.1.1, dummy1, weight 1, 00:00:19
via 4.4.4.2 (recursive), weight 1, 00:00:19
...............
................
and on...
```
You can repro the above via:
```
kernel routes:
1.1.1.1 dev dummy1 scope link
4.4.4.0/24 via 1.1.1.1 dev dummy1
==============================
config:
nexthop-group doof
nexthop 1.1.1.1
nexthop 4.4.4.1
nexthop 4.4.4.10
nexthop 4.4.4.11
nexthop 4.4.4.12
nexthop 4.4.4.13
nexthop 4.4.4.14
nexthop 4.4.4.15
nexthop 4.4.4.16
nexthop 4.4.4.2
nexthop 4.4.4.3
nexthop 4.4.4.4
nexthop 4.4.4.5
nexthop 4.4.4.6
nexthop 4.4.4.7
nexthop 4.4.4.8
nexthop 4.4.4.9
!
===========================
Then use sharpd to install 4.4.4.16 -> 4.4.4.1 pointing to that nexthop
group in descending order.
```
These changes prevent the growing ECMP above by disallowing
duplicates in the resolution decision. These nexthops are not
installed anyway, so why should we be resolving to them?
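A rough sketch of the guard, abbreviated from the recursive-resolution loop (surrounding context omitted):
```c
/* Sketch: nexthops already marked as duplicates within the matched
 * group are never installed, so skip them as resolution candidates too.
 */
for (nexthop = match->nhe->nhg.nexthop; nexthop; nexthop = nexthop->next) {
	if (CHECK_FLAG(nexthop->flags, NEXTHOP_FLAG_DUPLICATE))
		continue;
	/* ... existing handling that copies this nexthop onto the
	 * resolving nexthop's resolved list ...
	 */
}
```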
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Description: When we get a new vrf add and a vrf with the same name, but a
different vrf-id, already exists in the database, we should treat the vrf add
as an update.
This happens mostly when a lot of vrf and other configuration is being replayed.
There may be a stale vrf delete followed by a new vrf add. This can cause a
timing race condition where the vrf delete could be missed and the subsequent
vrf add for the same vrf would get rejected instead of being treated as an
update.
Treat a vrf add for an existing vrf as an update:
1. Implicitly disable this VRF to clean up routes and other state as part of
vrf disable.
2. Update the vrf_id for the vrf and update the vrf_id tree.
3. Re-enable the VRF so that all routes are freshly installed.
The above 3 steps are mandatory since, with a config reload, stale routes
installed in the vrf-1 table might contain routes from the older vrf-0 table,
which might have been deleted due to vrf-0 missing in the new configuration.
Signed-off-by: sudhanshukumar22 <sudhanshu.kumar@broadcom.com>
valgrind is reporting:
2448137-==2448137== Thread 5 zebra_apic:
2448137-==2448137== Syscall param writev(vector[...]) points to uninitialised byte(s)
2448137:==2448137== at 0x4D6FDDD: __writev (writev.c:26)
2448137-==2448137== by 0x4D6FDDD: writev (writev.c:24)
2448137-==2448137== by 0x48A35F5: buffer_flush_available (buffer.c:431)
2448137-==2448137== by 0x48A3504: buffer_flush_all (buffer.c:237)
2448137-==2448137== by 0x495948: zserv_write (zserv.c:263)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Address 0x720c3ce is 62 bytes inside a block of size 4,120 alloc'd
2448137:==2448137== at 0x483877F: malloc (vg_replace_malloc.c:307)
2448137-==2448137== by 0x48D2977: qmalloc (memory.c:110)
2448137-==2448137== by 0x48A30E3: buffer_add (buffer.c:135)
2448137-==2448137== by 0x48A30E3: buffer_put (buffer.c:161)
2448137-==2448137== by 0x49591B: zserv_write (zserv.c:256)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Uninitialised value was created by a stack allocation
2448137:==2448137== at 0x43E490: zserv_encode_vrf (zapi_msg.c:103)
Effectively we are sending `struct vrf_data` without ensuring
data has been properly initialized.
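A minimal sketch of the kind of fix, assuming the encode routine fills a stack-allocated `struct vrf_data`; the function name and body here are illustrative:
```c
/* Sketch: zero the structure before populating it so the unused union
 * bytes written to the stream are initialized rather than stack garbage.
 */
static void zserv_encode_vrf_sketch(struct stream *s, struct zebra_vrf *zvrf)
{
	struct vrf_data data;

	memset(&data, 0, sizeof(data));
	/* ... populate the table id / netns name as appropriate, then
	 * write `data` (and the vrf name) onto the stream ...
	 */
	(void)s;
	(void)zvrf;
}
```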
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Send the results of daemons' nhg updates asynchronously,
after the update has actually completed. Capture additional
info about the source daemon in order to locate the correct
zapi session. Simplify the result types considered by the
zebra_nhg module.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The raw zapi apis to encode and decode NHGs don't need to be
public; also add a little more validity-checking.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
When calling fpm_nl_enqueue we should expect a did-it-fit-or-not
return value on the outgoing stream. This is not necessary
to check here because the while loop where we are checking this
has already ensured that the data being written will fit.
CID -> 1499854
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Setting `zebra route-map delay-timer 0` completely turns off any
route-map processing in zebra, which is wrong. A timer
of 0 means `do it now`.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If we are running with a delayed timer to handle route-map changes
in zebra and another route-map change is made via the CLI, push
out the timer instead of leaving it untouched. This allows
a large set of route-maps to be read in by
the system without the timer popping in the middle of it
while new route-map changes are still being read in.
Additionally convert to use THREAD_OFF, preventing a possible
use-after-free as well as aligning the thread API usage
with what we consider correct.
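A sketch of the reschedule pattern; the timer, thread pointer, and callback names are illustrative, while THREAD_OFF and thread_add_timer are the standard lib calls:
```c
/* Sketch: cancel any pending route-map processing timer and re-arm it,
 * so back-to-back route-map changes keep pushing the window out instead
 * of having the timer pop mid-read. THREAD_OFF also NULLs the thread
 * pointer, avoiding a stale/freed reference.
 */
THREAD_OFF(zebra_t_rmap_update);
thread_add_timer(zrouter.master, zebra_route_map_update_timer, NULL,
		 zebra_rmap_update_timer, &zebra_t_rmap_update);
```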
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, when a route-map changes, the code schedules a rerun of all routes
in the particular table. So if you modify route-map `FOO` used by
`ip protocol XX route-map FOO`, all routes will be rechecked. This is
extremely expensive.
Modify zebra to only update the routes associated with the route-map. So
if we have 800k BGP routes and 50 OSPF routes and we are route-map'ing
the OSPF routes, we'll only look at 50 routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we need to cause a reprocessing of data, the code currently
marks all routes as needing to be looked at. Modify the
rib_update_table code to allow us to specify a specific route
type that we want to reprocess. At this point none
of the code behaves differently; this is just setup
for a future code change.
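A rough sketch of the narrowed table walk with the filter parameter added (details abbreviated; the loop body is simplified):
```c
/* Sketch: rib_update_table() takes a route type; ZEBRA_ROUTE_ALL keeps
 * the old behaviour, anything else limits the re-queue to that type.
 */
for (rn = route_top(table); rn; rn = srcdest_route_next(rn)) {
	RNODE_FOREACH_RE (rn, re) {
		if (rtype != ZEBRA_ROUTE_ALL && re->type != rtype)
			continue;
		rib_queue_add(rn);
		break; /* node queued once */
	}
}
```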
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use nl_pid from the netlink socket used for programming the kernel
(netlink_dplane) in netlink route messages sent by the 'fpm' module.
This makes 'fpm' consistent with 'dplane_fpm_nl' which already
behaves this way, and allows FPM server implementations to determine
route origin via nlmsg_pid.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Create a function that can dump the mac->flags in human readable
output and convert all debugs to use it.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The re->flags and re->status in debugs were being dumped as hex values.
I can never quickly decode this. Here is an idea. Let's let FRR do
it for me.
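A small sketch of the idea, decoding a couple of the status bits into text; the helper name is illustrative and the flag list abbreviated:
```c
/* Sketch: build a readable token string from re->status instead of
 * printing raw hex; only a few flags shown here.
 */
static const char *re_status2str_sketch(uint32_t status, char *buf, size_t len)
{
	buf[0] = '\0';
	if (CHECK_FLAG(status, ROUTE_ENTRY_INSTALLED))
		strlcat(buf, "Installed ", len);
	if (CHECK_FLAG(status, ROUTE_ENTRY_QUEUED))
		strlcat(buf, "Queued ", len);
	if (CHECK_FLAG(status, ROUTE_ENTRY_FAILED))
		strlcat(buf, "Failed ", len);
	return buf;
}
```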
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In the case where a route's nexthops cannot be resolved as part
of route processing, immediately notify the upper-level protocol
that its routes failed to install, if it is interested in
being informed about this issue.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The zebra route-map delay timer value is a global value,
not a per-vrf setting. As such we should only print it
out one time.
We are seeing this:
zebra route-map delay-timer 33
exit-vrf
zebra route-map delay-timer 33
when we have 2 vrfs configured.
Fix the code to only write it out for the default vrf.
Ticket: CM-32888
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When checking if there is a "hole" behind the current reservation
marker, the calculation of whether the hole is big enough to satisfy
the requested chunk is off by one. This could result in returning a label
which has already been allocated.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
If the requested chunk size was less than 16 then a chunk
within the reserved block would be returned. Make sure that
we never return labels that are below MPLS_LABEL_UNRESERVED_MIN.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
When dplane_fpm_nl is used the "Please add this protocol(n) to proper
rt_netlink.c handling" debug message is emitted for any route of type
kernel or connected.
This severely reduces performance of dplane_fpm_nl when large numbers
of these routes are present in the RIB.
The messages are not observed when using the original fpm module since
this uses a custom function, netlink_proto_from_route_type().
zebra2proto() now returns RTPROT_KERNEL for ZEBRA_ROUTE_CONNECT and
ZEBRA_ROUTE_KERNEL. This should only impact dplane_fpm_nl's use of
the common netlink routines since these routes are generally ignored via
checking of RSYSTEM_ROUTE().
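Roughly the fragment of the zebra2proto() switch described above:
```c
/* connected/kernel routes: report them as kernel-originated so the
 * common netlink encoder does not warn; RSYSTEM_ROUTE() keeps them
 * from being programmed via this path anyway.
 */
case ZEBRA_ROUTE_CONNECT:
case ZEBRA_ROUTE_KERNEL:
	proto = RTPROT_KERNEL;
	break;
```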
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
fpm_nl_process() now ensures that the dataplane thread is rescheduled
if it hits the work limit while processing its incoming work queue.
This would probably already occur due to some other event, such as
fpm_process_queue() enqueuing completed work to the output queue,
however it does no harm to add this explicit reschedule.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
If the dataplane thread hits the work limit while processing the
output queue for any given provider, we now explicitly reschedule
the thread.
Otherwise, if the number of items in the output queue is greater than
the work limit, draining of that output queue is dependent on new
dataplane work.
Routes which are not drained from the output queue are stuck with
the 'q' flag, so this is a similar issue to that observed in
164d8e8608.
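A sketch of the added check, with illustrative names for the master, event callback, thread pointer, and limit:
```c
/* Sketch: if we stopped draining this provider's output queue because
 * we hit the per-run work limit, requeue the dataplane thread event
 * ourselves rather than waiting on new incoming work.
 */
if (counter >= limit)
	thread_add_event(dplane_master, dplane_thread_loop, NULL, 0,
			 &t_dplane_update);
```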
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
zebra maintains a pseudo interface for hanging user config off of after
the interface is deleted in the kernel. If a user tried to config
an ES against such an interface, zebra would crash with the following
call stack -
at zebra/zebra_evpn_mh.c:2095
sysmac=sysmac@entry=0x55cfbadd3160) at zebra/zebra_evpn_mh.c:2258
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>, es_lid_str=<optimized out>, es_lid=1, no=0x0, vty=0x55cfbaf4c7b0)
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>) at ./zebra/zebra_evpn_mh_clippy.c:202
vty=vty@entry=0x55cfbaf4c7b0, cmd=cmd@entry=0x0, filter=FILTER_RELAXED)
at lib/command.c:1073
Ticket: CM-31702
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a local MAC or local neigh is not active locally it is not sent to BGP.
At that point, if BGP rxes a remote route it accepts it and installs it in
zebra. Zebra was rejecting BGP's update if it had a higher-seq local (inactive)
entry. This would result in bgpd and zebra falling out of sync.
In some cases zebra would delete the local-inactive entries after some time
(as a part of the dplane/kernel garbage collection). This would leave zebra
with missing remote entries (which were still present in bgpd).
This change allows lower-seq BGP updates to overwrite zebra's local entry if
that entry happens to be local-inactive.
Note: This logic was already in use for sync-mac-ip updates. Extended the
same logic to remote-mac-ip updates.
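A simplified sketch of the acceptance check after this change (variable names simplified, not the exact code):
```c
/* Sketch: a lower-seq remote update is still accepted when the existing
 * entry is local but local-inactive, so zebra converges to what bgpd
 * has instead of silently diverging.
 */
if (remote_seq < local_seq && !local_inactive) {
	/* genuine stale update: ignore it, as before */
	return;
}
/* otherwise take the remote update, overwriting the local-inactive entry */
```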
Ticket: CM-31626
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a VNI was deleted as a part of FRR/zebra shutdown, the zevpn entry
was being freed without removing its reference in the access VLAN
entry (i.e. without clearing the VLAN->VNI mapping) used by MH.
Ticket: CM-31197
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a netlink/dp notification is rxed for a neigh without the peer-sync
flag FRR re-installs the entry with the right flags. This change is
needed to handle cases where the dataplane and FRR may fall out of
sync because of neigh learning on the network ports (i.e. via
the VxLAN).
Ticket: CM-30693
The problem was found during VM mobility "torture" tests where 100s
of extended VM moves were done.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a remote MAC update is rxed from BGP with a lower sequence number than
the local one, zebra ignores the MAC update. This typically happens if
there is a race condition (where updates are in flight from zebra to BGP).
There was a bug in zebra because of which the dest ES was being updated
before this check. That left the local MAC pointing to a remote ES.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Relevant Dumps:
===============
root@leaf21:mgmt:~# net show evpn mac vni 101101 mac 00:93:00:00:00:01
MAC: 00:93:00:00:00:01
ESI: 03:00:00:00:77:01:03:00:00:0d
Intf: - VLAN: 101
Sync-info: neigh#: 1 peer-proxy
Local Seq: 3 Remote Seq: 0
Neighbors:
21.1.13.1 Active
root@leaf21:mgmt:~# net sho evpn es
Type: L local, R remote, N non-DF
ESI Type ES-IF VTEPs
03:00:00:00:77:01:02:00:00:0c R - 6.0.0.10,6.0.0.11
03:00:00:00:77:01:03:00:00:0d R - 6.0.0.10,6.0.0.11,6.0.0.12
03:00:00:00:77:01:04:00:00:0e R - 6.0.0.10,6.0.0.11,6.0.0.12,6.0.0.13
03:00:00:00:77:02:02:00:00:16 LR bondP2-H2 6.0.0.15
03:00:00:00:77:02:03:00:00:17 LR bondP2-H3 6.0.0.15,6.0.0.16
03:00:00:00:77:02:04:00:00:18 LR bondP2-H4 6.0.0.15,6.0.0.16,6.0.0.17
root@leaf21:mgmt:~#
Relevant logs:
===============
2020/07/29 15:41:27.110846 ZEBRA: Recv MACIP ADD VNI 101101 MAC 00:93:00:00:00:01 IP 21.1.13.1 flags 0x0 seq 2 VTEP 0.0.0.0 ESI 03:00:00:00:77:01:03:00:00:0d from bgp
2020/07/29 15:41:27.110867 ZEBRA: Ignore remote MACIP ADD VNI 101101 MAC 00:93:00:00:00:01 IP 21.1.13.1 as existing MAC has higher seq 3 flags 0x401
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-30273
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
With EVPN-MH, Type-2 routes are also used for MAC-IP syncing between
ES peers, so a change was made to only treat REACHABLE local neigh
entries as local-active and advertise them as Type-2 routes, i.e. STALE
neigh entries are no longer advertised as Type-2s.
This however exposed some unexpected problems with MLAG where a
secondary reboot followed by a primary reboot left a lot of neighs
in STALE state (on the primary), resulting in them not being
advertised and in remote routed traffic to those hosts being
blackholed in a sym-IRB setup.
This commit is a workaround to fix the regression (it doesn't fix
the underlying problems with entries not becoming REACHABLE, which
may be a day-1 problem). The workaround is to continue advertising
STALE neighbors if EVPN-MH is not enabled.
Ticket: CM-30303
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
zebra was crashing when the command was run on a non-existent VNI.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 json
[
]
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail json
[
]
root@torm-12:mgmt:~#
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-30232
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
In rib_handle_nhg_replace, do not use `new` as a parameter name, to
allow compilation of C++ code that includes zebra headers.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
The way a couple of clauses were placed in a loop meant that
some info might not be collected - re-order things just a bit.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Derive the rule family from src if available, otherwise from
dst if available, otherwise assume IPv4. We only support
IPv4/IPv6 currently, so if we can't tell from the src/dst
it must be IPv4 and likely a dsfield match.
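A short sketch of the derivation; the rule/filter field names are illustrative:
```c
/* Sketch: prefer the src prefix family, then dst, else assume IPv4
 * (e.g. a dsfield-only match carries no prefixes at all).
 */
uint8_t family = AF_INET;

if (rule->filter.src_ip.family)
	family = rule->filter.src_ip.family;
else if (rule->filter.dst_ip.family)
	family = rule->filter.dst_ip.family;
```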
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Maintain the count of contexts which have been processed in a local
variable, and perform a single atomic update after we have consumed
all queued contexts.
Generally this results in at least one less atomic operation per
context.
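A sketch of the pattern; the queue and counter names are illustrative, and the atomic wrapper is the C11-style call FRR's frratomic layer follows:
```c
/* Sketch: count processed contexts locally while draining the queue,
 * then publish the total with a single atomic add per batch instead of
 * one per context.
 */
uint64_t processed = 0;

while ((ctx = dplane_ctx_dequeue(&fnc->ctxqueue)) != NULL) {
	/* ... encode and send the context ... */
	processed++;
}

atomic_fetch_add_explicit(&fnc->counters.dplane_contexts, processed,
			  memory_order_relaxed);
```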
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Don't use an atomic operation to determine whether fpm_process_queue()
needs to be re-scheduled. Instead we can simply use a local variable
to determine if we stopped processing because we ran out of buffers.
In the case where we would have re-scheduled due to new context objects
in the queue (enqueued after we stopped processing), fpm_nl_process()
will schedule us (or will have done already).
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Maintain the peak ctxqueue length in a local variable, and perform
a single atomic update after processing all contexts.
Generally this results in at least one less atomic operation per
context.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Reduce code in the critical sections of fpm_nl_process() and
fpm_process_queue() to the bare minimum - basically only enqueue
and dequeue operations on the shared ctxqueue.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
We don't need to use the 'force' flag when processing the
resolve-via-default clis for ip and ipv6: we can just do normal
nht processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
After removal of L3VNI config, the VNI should become an L2VNI if a VxLAN
interface is present for the VNI. This case is not handled in the code.
Changes:
1. After unconfiguring the L3VNI, create an L2VNI if a VxLAN interface is
present for the VNI.
2. Trigger an update to BGP.
3. Read MAC and ARP entries from the kernel.
This PR fixes the issue only for route types 2, 3 and 5. It does not address
state regarding route types 1 and 4 and the multicast group for the VxLAN
interface.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
When a new ES is created it is held in a non-DF state for 3 seconds,
as specified by RFC 7432. This gives the switch time to import
the Type-4 routes from the peers, and the peers time to rx the new
Type-4 route.
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: non-df
DF delay: 00:00:01
DF preference: 50000
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: df
DF preference: 50000
root@torm-11:mgmt:~#
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>