The zebra route-map delay-timer value is a global value, not a
per-VRF setting. As such we should only print it out one time.
We are seeing this:
zebra route-map delay-timer 33
exit-vrf
zebra route-map delay-timer 33
when we have two VRFs configured.
Fix the code to only write it out for the default VRF.
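A minimal sketch of the intended write path, as standalone C with
illustrative names (not zebra's actual config-write code), assuming the
delay-timer is stored in a single global:

#include <stdio.h>

/* Simplified stand-ins for zebra's VRF id and the global timer value. */
#define VRF_DEFAULT 0
static int zebra_rmap_update_timer = 33;

/* Emit the delay-timer line only while writing the default VRF's
 * configuration; non-default VRFs contribute nothing. */
static void route_map_delay_config_write(int vrf_id)
{
    if (vrf_id != VRF_DEFAULT)
        return;

    printf("zebra route-map delay-timer %d\n", zebra_rmap_update_timer);
}

int main(void)
{
    /* Two VRFs configured: the line is now printed exactly once. */
    route_map_delay_config_write(0); /* default VRF */
    route_map_delay_config_write(1); /* second VRF: skipped */
    return 0;
}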
Ticket: CM-32888
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When checking whether there is a "hole" behind the current reservation
marker, the calculation of whether the hole is big enough to satisfy
the requested chunk is off by one. This could result in returning a
label which has already been allocated.
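An illustrative, standalone sketch of the corrected arithmetic
(simplified chunk type; not the actual label-manager code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for an allocated label chunk; 'end' is inclusive. */
struct chunk {
    uint32_t start;
    uint32_t end;
};

/* Number of free labels strictly between 'prev' and 'next'. */
static uint32_t hole_size(const struct chunk *prev, const struct chunk *next)
{
    /* e.g. prev->end = 99, next->start = 110 => labels 100..109 => 10.
     * Using "next->start - prev->end" here would over-count by one and
     * could hand out prev->end again. */
    return next->start - prev->end - 1;
}

static bool hole_fits(const struct chunk *prev, const struct chunk *next,
                      uint32_t requested)
{
    return hole_size(prev, next) >= requested;
}

int main(void)
{
    struct chunk prev = { .start = 50, .end = 99 };
    struct chunk next = { .start = 110, .end = 150 };

    printf("fits 10: %d, fits 11: %d\n",
           hole_fits(&prev, &next, 10), hole_fits(&prev, &next, 11));
    return 0;
}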
Signed-off-by: Pat Ruddy <pat@voltanet.io>
If the requested chunk size was less than 16, a chunk within the
reserved label block could be returned. Make sure that we never return
labels below MPLS_LABEL_UNRESERVED_MIN.
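A hedged sketch of the clamp (standalone C; only the constant name
MPLS_LABEL_UNRESERVED_MIN comes from FRR, the rest is illustrative):

#include <stdint.h>
#include <stdio.h>

/* Labels 0-15 are reserved by MPLS; FRR names this boundary
 * MPLS_LABEL_UNRESERVED_MIN. */
#define MPLS_LABEL_UNRESERVED_MIN 16

/* Illustrative only: choose the first label of a new chunk, never
 * dipping into the reserved block regardless of the requested size. */
static uint32_t chunk_start(uint32_t search_base)
{
    if (search_base < MPLS_LABEL_UNRESERVED_MIN)
        return MPLS_LABEL_UNRESERVED_MIN;

    return search_base;
}

int main(void)
{
    /* A small request starting the search at 0 must begin at 16. */
    printf("start=%u\n", chunk_start(0));
    return 0;
}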
Signed-off-by: Pat Ruddy <pat@voltanet.io>
When dplane_fpm_nl is used, the "Please add this protocol(n) to proper
rt_netlink.c handling" debug message is emitted for any route of type
kernel or connected.
This severely reduces performance of dplane_fpm_nl when large numbers
of these routes are present in the RIB.
The messages are not observed when using the original fpm module since
this uses a custom function, netlink_proto_from_route_type().
zebra2proto() now returns RTPROT_KERNEL for ZEBRA_ROUTE_CONNECT and
ZEBRA_ROUTE_KERNEL. This should only impact dplane_fpm_nl's use of
the common netlink routines, since these routes are generally ignored
via the RSYSTEM_ROUTE() check.
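Roughly, the mapping change looks like this (standalone sketch; the
ZEBRA_ROUTE_* values are stand-ins for zebra's route-type enum):

#include <stdio.h>

/* RTPROT_* values as in <linux/rtnetlink.h>; ZEBRA_ROUTE_* values here
 * are stand-ins only. */
#define RTPROT_KERNEL 2
#define RTPROT_ZEBRA  11
enum { ZEBRA_ROUTE_KERNEL = 1, ZEBRA_ROUTE_CONNECT, ZEBRA_ROUTE_STATIC };

/* Sketch of the change: kernel and connected routes now map to
 * RTPROT_KERNEL instead of falling through to the noisy default case. */
static int zebra2proto_sketch(int proto)
{
    switch (proto) {
    case ZEBRA_ROUTE_KERNEL:
    case ZEBRA_ROUTE_CONNECT:
        return RTPROT_KERNEL;
    /* ... other protocol cases elided ... */
    default:
        /* previously also hit for kernel/connected, emitting the
         * "Please add this protocol(n) ..." debug message */
        return RTPROT_ZEBRA;
    }
}

int main(void)
{
    printf("%d %d\n", zebra2proto_sketch(ZEBRA_ROUTE_KERNEL),
           zebra2proto_sketch(ZEBRA_ROUTE_CONNECT));
    return 0;
}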
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
fpm_nl_process() now ensures that the dataplane thread is rescheduled
if it hits the work limit while processing its incoming work queue.
This would probably already occur due to some other event, such as
fpm_process_queue() enqueuing completed work to the output queue;
however, it does no harm to add this explicit reschedule.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
If the dataplane thread hits the work limit while processing the
output queue for any given provider, we now explicitly reschedule
the thread.
Otherwise, if the number of items in the output queue is greater than
the work limit, draining of that output queue is dependent on new
dataplane work.
Routes which are not drained from the output queue are stuck with
the 'q' flag, so this is a similar issue to that observed in
164d8e8608.
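The pattern, sketched standalone (simplified queue and stand-in names;
the real code uses the dataplane's context lists and event loop):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a provider's output queue. */
struct queue {
    int len;
};

/* Drain up to 'limit' items and ask to be rescheduled if anything is
 * left, rather than waiting for unrelated dataplane work to trigger
 * the next pass. */
static void drain_output_queue(struct queue *q, int limit,
                               bool *need_resched)
{
    int done = 0;

    while (q->len > 0 && done < limit) {
        q->len--;   /* stand-in for handing a context back to zebra */
        done++;
    }

    /* New behaviour: explicitly reschedule if we stopped early. */
    *need_resched = (q->len > 0);
}

int main(void)
{
    struct queue q = { .len = 250 };
    bool resched;

    drain_output_queue(&q, 100, &resched);
    printf("remaining=%d resched=%d\n", q.len, resched);
    return 0;
}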
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Zebra maintains a pseudo interface to hang user config off of after
the interface is deleted in the kernel. If a user tried to configure
an ES against such an interface, zebra would crash with the following
call stack -
at zebra/zebra_evpn_mh.c:2095
sysmac=sysmac@entry=0x55cfbadd3160) at zebra/zebra_evpn_mh.c:2258
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>, es_lid_str=<optimized out>, es_lid=1, no=0x0, vty=0x55cfbaf4c7b0)
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>) at ./zebra/zebra_evpn_mh_clippy.c:202
vty=vty@entry=0x55cfbaf4c7b0, cmd=cmd@entry=0x0, filter=FILTER_RELAXED)
at lib/command.c:1073
Ticket: CM-31702
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a local-MAC or local-neigh is not active locally, it is not sent to
BGP. At this point, if BGP rxes a remote route it accepts it and
installs it in zebra. Zebra was rejecting BGP's update if it had a
higher-seq local (inactive) entry. This would result in bgp and zebra
falling out of sync.
In some cases zebra would eventually delete the local-inactive entries
(as a part of the dplane/kernel garbage collection). This would leave
zebra with missing remote entries (which were still present in bgpd).
This change allows lower-seq BGP updates to overwrite zebra's local entry if
that entry happens to be local-inactive.
Note: This logic was already in use for sync-mac-ip updates. Extended the
same logic to remote-mac-ip updates.
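A standalone sketch of the acceptance check (illustrative helper, not
zebra's actual function):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Decide whether a remote MAC-IP update may overwrite the existing
 * local entry, even with a lower sequence number. */
static bool accept_remote_update(uint32_t local_seq, bool local_active,
                                 uint32_t remote_seq)
{
    if (remote_seq >= local_seq)
        return true;

    /* Lower seq: only acceptable if the local entry is inactive, since
     * an inactive entry was never advertised to BGP in the first place. */
    return !local_active;
}

int main(void)
{
    printf("%d\n", accept_remote_update(3, false, 2)); /* 1: accept */
    printf("%d\n", accept_remote_update(3, true, 2));  /* 0: reject */
    return 0;
}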
Ticket: CM-31626
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a VNI was deleted as a part of FRR/zebra shutdown, the zevpn
entry was being freed without removing its reference in the access
vlan entry used by MH (i.e. without clearing the VLAN->VNI mapping).
Ticket: CM-31197
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a netlink/dp notification is rxed for a neigh without the peer-sync
flag, FRR re-installs the entry with the right flags. This change is
needed to handle cases where the dataplane and FRR may fall out of
sync because of neigh learning on the network ports (i.e. via
the VxLAN).
Ticket: CM-30693
The problem was found during VM mobility "torture" tests where 100s
of extended VM moves were done.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
If a remote MAC update is rxed from BGP with a lower sequence number
than the local one, zebra ignores the MAC update. This typically
happens when there is a race condition (where updates are in flight
from zebra to BGP).
There was a bug in zebra whereby the dest ES was being updated before
this check. This left the local MAC pointing to a remote ES.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Relevant Dumps:
===============
root@leaf21:mgmt:~# net show evpn mac vni 101101 mac 00:93:00:00:00:01
MAC: 00:93:00:00:00:01
ESI: 03:00:00:00:77:01:03:00:00:0d
Intf: - VLAN: 101
Sync-info: neigh#: 1 peer-proxy
Local Seq: 3 Remote Seq: 0
Neighbors:
21.1.13.1 Active
root@leaf21:mgmt:~# net sho evpn es
Type: L local, R remote, N non-DF
ESI Type ES-IF VTEPs
03:00:00:00:77:01:02:00:00:0c R - 6.0.0.10,6.0.0.11
03:00:00:00:77:01:03:00:00:0d R - 6.0.0.10,6.0.0.11,6.0.0.12
03:00:00:00:77:01:04:00:00:0e R - 6.0.0.10,6.0.0.11,6.0.0.12,6.0.0.13
03:00:00:00:77:02:02:00:00:16 LR bondP2-H2 6.0.0.15
03:00:00:00:77:02:03:00:00:17 LR bondP2-H3 6.0.0.15,6.0.0.16
03:00:00:00:77:02:04:00:00:18 LR bondP2-H4 6.0.0.15,6.0.0.16,6.0.0.17
root@leaf21:mgmt:~#
Relevant logs:
===============
2020/07/29 15:41:27.110846 ZEBRA: Recv MACIP ADD VNI 101101 MAC 00:93:00:00:00:01 IP 21.1.13.1 flags 0x0 seq 2 VTEP 0.0.0.0 ESI 03:00:00:00:77:01:03:00:00:0d from bgp
2020/07/29 15:41:27.110867 ZEBRA: Ignore remote MACIP ADD VNI 101101 MAC 00:93:00:00:00:01 IP 21.1.13.1 as existing MAC has higher seq 3 flags 0x401
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
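A simplified, standalone sketch of the corrected ordering (stand-in
types; the real code deals with zebra_mac and ES structures):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the MAC entry and the remote update. */
struct mac_entry {
    uint32_t seq;
    int es_id;      /* destination ES reference, reduced to an id */
};

struct remote_update {
    uint32_t seq;
    int es_id;
};

/* Perform the sequence-number check before touching the destination
 * ES, so a rejected update cannot leave a local MAC pointing at a
 * remote ES. */
static bool process_remote_mac(struct mac_entry *mac,
                               const struct remote_update *upd)
{
    if (upd->seq < mac->seq)
        return false;       /* ignore stale update; ES untouched */

    mac->es_id = upd->es_id;    /* only now update the dest ES */
    mac->seq = upd->seq;
    return true;
}

int main(void)
{
    struct mac_entry mac = { .seq = 3, .es_id = 1 };
    struct remote_update upd = { .seq = 2, .es_id = 13 };

    printf("accepted=%d es=%d\n", process_remote_mac(&mac, &upd),
           mac.es_id);
    return 0;
}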
Ticket: CM-30273
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
With EVPN-MH, Type-2 routes are also used for MAC-IP syncing between
ES peers, so a change was done to treat only REACHABLE local neigh
entries as local-active and advertise them as Type-2 routes, i.e.
STALE neigh entries are no longer advertised as Type-2s.
This however exposed some unexpected problems with MLAG, where a
secondary reboot followed by a primary reboot left a lot of neighs
in STALE state (on the primary), resulting in them not being
advertised and in remote routed traffic to those hosts being
blackholed in a sym-IRB setup.
This commit is a workaround to fix the regression (it doesn't fix
the underlying problem of entries not becoming REACHABLE, which
may be a day-1 problem). The workaround is to continue advertising
STALE neighbors if EVPN-MH is not enabled.
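The local-activity check now roughly looks like this (standalone
sketch; the NUD_* values come from <linux/neighbour.h>, the helper
name is illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <linux/neighbour.h>    /* NUD_REACHABLE, NUD_STALE */

/* Illustrative only: decide whether a local neigh entry should be
 * treated as local-active (and hence advertised as a Type-2 route). */
static bool neigh_is_local_active(unsigned int state, bool evpn_mh_enabled)
{
    if (state & NUD_REACHABLE)
        return true;

    /* Workaround: without EVPN-MH, keep advertising STALE neighbors so
     * remote routed traffic to those hosts is not blackholed. */
    if (!evpn_mh_enabled && (state & NUD_STALE))
        return true;

    return false;
}

int main(void)
{
    printf("%d\n", neigh_is_local_active(NUD_STALE, false)); /* 1 */
    printf("%d\n", neigh_is_local_active(NUD_STALE, true));  /* 0 */
    return 0;
}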
Ticket: CM-30303
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Zebra was crashing when the "show evpn es-evi vni" command was run on
a non-existent VNI.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 json
[
]
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail json
[
]
root@torm-12:mgmt:~#
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
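A minimal sketch of the guard (standalone; a stand-in lookup and plain
printf instead of zebra's vty/json output):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for a VNI's EVPN instance. */
struct evpn {
    int vni;
};

static struct evpn *evpn_lookup(int vni)
{
    (void)vni;
    return NULL;    /* stand-in for "VNI not configured" */
}

static void show_es_evi_vni(int vni, bool use_json)
{
    struct evpn *evpn = evpn_lookup(vni);

    if (!evpn) {
        /* Bail out before dereferencing; keep JSON output valid. */
        if (use_json)
            printf("[\n]\n");
        else
            printf("VNI %d doesn't exist\n", vni);
        return;
    }
    /* ... walk the ES-EVI entries for this VNI ... */
}

int main(void)
{
    show_es_evi_vni(16777215, false);
    show_es_evi_vni(16777215, true);
    return 0;
}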
Ticket: CM-30232
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
In rib_handle_nhg_replace(), do not use "new" as a parameter name, to
allow compilation of C++ code that includes zebra headers.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
The way a couple of clauses were placed in a loop meant that
some info might not be collected - re-order things just a bit.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Derive the rule family from src if available, otherwise from dst if
available, otherwise assume IPv4. We only support IPv4/IPv6 currently,
so if we can't tell from the src/dst it must be IPv4 and is likely a
dsfield match.
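Roughly, with stand-in prefix structs (the real code inspects the
rule's src/dst prefixes):

#include <stdio.h>
#include <sys/socket.h>    /* AF_UNSPEC, AF_INET, AF_INET6 */

/* Simplified prefix stand-in: family is AF_UNSPEC when not set. */
struct prefix_stub {
    int family;
};

/* Selection order: src family if set, else dst family, else default
 * to IPv4 (e.g. a dsfield-only match). */
static int rule_family(const struct prefix_stub *src,
                       const struct prefix_stub *dst)
{
    if (src && src->family != AF_UNSPEC)
        return src->family;
    if (dst && dst->family != AF_UNSPEC)
        return dst->family;
    return AF_INET;
}

int main(void)
{
    struct prefix_stub unset = { .family = AF_UNSPEC };
    struct prefix_stub v6 = { .family = AF_INET6 };

    printf("%d %d\n", rule_family(&unset, &v6),    /* AF_INET6 */
           rule_family(&unset, &unset));           /* AF_INET  */
    return 0;
}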
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Maintain the count of contexts which have been processed in a local
variable, and perform a single atomic update after we have consumed
all queued contexts.
Generally this results in at least one less atomic operation per
context.
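The shape of the change, as a standalone sketch (C11 atomics; the real
counters live in the fpm_nl context):

#include <stdatomic.h>
#include <stdio.h>

/* Shared counter of processed contexts (simplified). */
static atomic_uint processed_total;

/* Count locally while draining the queue and fold the result into the
 * shared counter with one atomic add at the end. */
static void process_contexts(unsigned int queued)
{
    unsigned int processed = 0;

    while (queued--) {
        /* ... handle one dataplane context ... */
        processed++;    /* local, no atomic per iteration */
    }

    atomic_fetch_add_explicit(&processed_total, processed,
                              memory_order_relaxed);
}

int main(void)
{
    process_contexts(128);
    printf("%u\n", atomic_load(&processed_total));
    return 0;
}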
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Don't use an atomic operation to determine whether fpm_process_queue()
needs to be re-scheduled. Instead we can simply use a local variable
to determine if we stopped processing because we ran out of buffers.
In the case where we would have re-scheduled due to new context
objects in the queue (enqueued after we stopped processing),
fpm_nl_process() will schedule us (or will already have done so).
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Maintain the peak ctxqueue length in a local variable, and perform
a single atomic update after processing all contexts.
Generally this results in at least one less atomic operation per
context.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Reduce code in the critical sections of fpm_nl_process() and
fpm_process_queue() to the bare minimum - basically only enqueue
and dequeue operations on the shared ctxqueue.
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
We don't need to use the 'force' flag when processing the
resolve-via-default CLIs for ip and ipv6: we can just do normal
NHT processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
After removal of L3VNI config, the VNI should become an L2VNI if a VxLAN
interface is present for the VNI. This case is not handled in the code.
Changes:
1. After unconfiguring L3VNI, create an L2VNI if VxLAN interface is present
for the VNI.
2. Trigger an update to BGP.
3. Read MAC and ARP entries from kernel.
This PR fixes the issue only for route types 2, 3 and 5. It does not
address state for route types 1 and 4 or the multicast group for the
VxLAN interface.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
When a new ES is created, it is held in a non-DF state for 3 seconds,
as specified by RFC 7432. This gives the switch time to import the
Type-4 routes from the peers, and the peers time to rx the new
Type-4 route.
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: non-df
DF delay: 00:00:01
DF preference: 50000
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: df
DF preference: 50000
root@torm-11:mgmt:~#
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When all the uplinks go down, the VTEP is disconnected from the VxLAN
overlay; this was handled by proto-downing the ES bonds. When the
uplinks come up again we need to re-enable the ES bonds, but that
needs to be done after a delay to allow the EVPN network to converge.
That is done by firing off the startup-delay timer on the first
uplink-up.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
1. When a bond is associated with an ES we may need to re-sync the
dplane protodown state (which may be stale or set by some other
app).
2. Also change the uplink state display to avoid confusion with the
protodown reason code (both used to show "uplink-up").
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
protodown state is a combination of the dplane and zebra states.
protodown reason is maintained exclusively by zebra. Display this
information on two separate lines to make that ownership clearer.
Also display n/a for bonds as the dplane doesn't support protodowning
the bond device.
Sample output -
==============
root@torm-11:mgmt:~# vtysh -c "show interface hostbond1"|grep -i protodown
protodown: off (n/a)
protodown reasons: (uplinks-down)
root@torm-11:mgmt:~# vtysh -c "show interface swp5"|grep -i protodown
protodown: on
protodown reasons: (uplinks-down)
root@torm-11:mgmt:~#
PS: Cosmetic changes only, no functional change.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
The code for this was already there but was not kicking in because of
a zebra-local reason-code dup check. Even if the reason-code is the
same, if the dplane and zebra disagree about the protodown state,
zebra will need to re-program the dplane.
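The corrected duplicate check roughly becomes (standalone sketch with
illustrative field and helper names):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the interface's protodown bookkeeping. */
struct ifp_stub {
    bool dplane_protodown;  /* state reported by the dataplane */
    bool zebra_protodown;   /* state zebra wants */
    unsigned int reason;    /* zebra-owned reason code */
};

/* Matching reason codes alone are no longer enough to skip
 * programming; the dplane and zebra states must also agree. */
static bool need_reprogram(const struct ifp_stub *ifp, bool new_protodown,
                           unsigned int new_reason)
{
    if (ifp->dplane_protodown != new_protodown)
        return true;    /* dplane out of sync: always re-program */

    return ifp->zebra_protodown != new_protodown ||
           ifp->reason != new_reason;
}

int main(void)
{
    struct ifp_stub ifp = { .dplane_protodown = false,
                            .zebra_protodown = true, .reason = 1 };

    /* Same reason code, but the dplane disagrees -> re-program. */
    printf("%d\n", need_reprogram(&ifp, true, 1));
    return 0;
}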
Fixed a couple of spelling errors in the protodown logs to make greps
easy.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>