The code for this was already there but was not kicking in because of a
zebra-local reason-code dup check. Even if the reason-code is the same,
if the dplane and zebra disagree about the protodown state, zebra will
need to re-program the dplane.
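As a minimal sketch of the intended check, with hypothetical names (the
real logic lives in zebra's interface handling, not in these exact
symbols):

/* Sketch only; the fields and the update call are illustrative,
 * not the actual zebra symbols.
 */
if (new_protodown_rc == zif->protodown_rc &&
    new_protodown == zif->protodown) {
	/* Truly a duplicate: reason-code and state both match. */
	return;
}
/* Same reason-code but the dplane disagrees about the state:
 * re-program the dplane anyway.
 */
zebra_if_update_protodown(ifp, new_protodown, new_protodown_rc);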
Also fixed a couple of spelling errors in the protodown logs to make
greps easier.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When the ospfv3 interface is disabled by the command
"no interface <eth> area <area-id>", the linked interface prefixes
do not get flushed.
Signed-off-by: Yash Ranjan <ranjany@vmware.com>
evpn route-map match (filter) on vni is not working
at the origin of the routes.
The evpn vni match checks that the route's encap type is vxlan,
but this attribute was not being set on the source (locally
originated) routes, so the match filter would never hit.
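For reference, a simplified sketch of the matcher's shape (approximating
bgpd's route-map code, not the verbatim implementation):

/* Simplified sketch of an EVPN vni matcher. */
static enum route_map_cmd_result_t
route_match_vni(void *rule, const struct prefix *prefix, void *object)
{
	vni_t vni = *(vni_t *)rule;
	struct bgp_path_info *path = object;

	/* The match requires the vxlan encap type; the fix ensures
	 * locally originated EVPN routes carry it in their attribute.
	 */
	if (path->attr->encap_tunneltype != BGP_ENCAP_TYPE_VXLAN)
		return RMAP_NOMATCH;

	if (vni == label2vni(&path->extra->label[0]))
		return RMAP_MATCH;

	return RMAP_NOMATCH;
}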
Ticket:CM-32554
Reviewed By:CCR-11056
Testing Done:
At the source, have a match vni plus a set statement in the route-map.
Validate that the origin's outbound advertisement of the route
correctly applies the 'set' statement based on the match vni filter.
At origin:
route-map RM-EVPN-TE-Matches permit 10
match evpn vni 4001
set large-community 10:10:119
Receiving end:
Route [5]:[0]:[24]:[78.41.1.0] VNI 4001
5550
27.0.0.15 from TORS1(downlink-5) (27.0.0.15)
Origin incomplete, metric 0, valid, external, bestpath-from-AS 5550, best (First path received)
Extended Community: RT:5550:4001 ET:8 Rmac:00:02:00:00:00:4d
Large Community: 10:10:119 <--- Large community stamped
Last update: Thu Dec 10 22:19:26 2020
Signed-off-by: Chirag Shah <chirag@nvidia.com>
When an interface goes up/down we need to pay attention to this
in PBR. In the past we were relying *only* on the nht events,
but this is not sufficient for cases where an interface is flapping
up and down. If that is happening fast enough, zebra may not send
nht events at all, because from its perspective they are consolidated
into a single event, and that is the right thing for it to do.
This commit will allow us to back out commit:
0aaa722883
as that commit introduced extra processing in zebra that is actually
causing issues in other places. The problem that commit was trying
to solve should always have been handled in pbrd instead of making
zebra do work that is unnatural to its actual flow.
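The pbrd side then looks roughly like the following sketch (function
names are illustrative rather than the exact pbrd symbols):

/* Sketch: react to interface up/down directly in pbrd instead of
 * depending solely on nht events.
 */
static int pbr_ifp_updown(struct interface *ifp)
{
	/* Re-walk nexthop groups resolving over this interface so a
	 * fast flap cannot leave stale PBR state behind.
	 */
	pbr_nht_nexthop_interface_update(ifp);
	return 0;
}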
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the new nested NDA_FDB_EXT_ATTRS attribute to control per-fdb
notifications.
PS: The attributes were updated as part of the kernel upstreaming,
hence the change.
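For illustration, building the nested attribute looks roughly like this
(iproute2-style helpers assumed; zebra's own netlink utilities differ
in name):

/* Sketch using iproute2-style attribute helpers. */
struct rtattr *nest;

nest = addattr_nest(&req.n, sizeof(req),
		    NDA_FDB_EXT_ATTRS | NLA_F_NESTED);
/* Request a notification when this fdb entry becomes active. */
addattr8(&req.n, sizeof(req), NFEA_ACTIVITY_NOTIFY, FDB_NOTIFY_BIT);
addattr_nest_end(&req.n, nest);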
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
New work enqueued to the dplane_fpm_nl provider is initially de-queued
and re-enqueued, in fpm_nl_process(), to be processed by the provider's
own thread.
After performing this initial de-queue/enqueue we return to
dplane_thread_loop() and check the dplane_fpm_nl output queue for any
work which has been completed.
Since this work is being processed in another thread it is very likely
that there will be some (or all) work still outstanding at this point.
The dataplane thread finishes up any other tasks and then waits until
it is next scheduled. In the meantime the dplane_fpm_nl thread is
processing its work queue until completion.
The issue arises here as the dataplane thread is not explicitly
re-scheduled once dplane_fpm_nl has drained its work queue and
populated its output queue with completed work.
This completed work can sit in the output queue for an indeterminate
period of time, depending upon when the dataplane thread is next
scheduled for other work. If the RIB has reached a stable state then
this could be a significant period of time. During this period zebra
marks these routes as queued, even though they have actually been
processed by all dataplane providers.
An unrelated RIB change which triggers a FIB update will result in
the dataplane thread being scheduled and this completed work then
being processed. At this point the routes will then no longer be
marked as queued by zebra. However this new FIB update might itself
then fall victim to the same scenario!
We can observe the above behaviour in these detailed dplane logs.
11:24:47 zebra[7282]: dplane: incoming new work counter: 2
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:47 zebra[7282]: dplane provider 'Kernel': processing
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:47 zebra[7282]: dplane dequeues 1 completed work from provider dplane_fpm_nl
11:24:47 zebra[7282]: dplane has 1 completed, 0 errors, for zebra main
2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
1 completed context was de-queued, so there is outstanding work.
11:24:58 zebra[7282]: dplane: incoming new work counter: 2
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:58 zebra[7282]: dplane provider 'Kernel': processing
11:24:58 zebra[7282]: ID (193) Dplane nexthop update ctx 0x55c429b6fed0 op NH_INSTALL
11:24:58 zebra[7282]: 0:5.5.5.5/32 Dplane route update ctx 0x55c429b79690 op ROUTE_INSTALL
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:24:58 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
A further 2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
2 completed contexts were de-queued, which sounds good as that is what we en-queued.
However, there is an outstanding context from earlier, so there is still outstanding
work.
Indeed the new 5.5.5.5/32 route is marked as queued:
O>q 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:01:19
This remains the case until we trigger a FIB update, e.g. by
installation of the 10.10.10.10/32 route:
11:26:41 zebra[7282]: dplane: incoming new work counter: 2
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:26:41 zebra[7282]: dplane provider 'Kernel': processing
11:26:41 zebra[7282]: ID (195) Dplane nexthop update ctx 0x55c429b78ce0 op NH_INSTALL
11:26:41 zebra[7282]: 0:10.10.10.10/32 Dplane route update ctx 0x55c429b7a040 op ROUTE_INSTALL
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:26:41 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
11:26:41 zebra[7282]: zebra2proto: Please add this protocol(2) to proper rt_netlink.c handling
11:26:41 zebra[7282]: Nexthop dplane ctx 0x55c429b6fed0, op NH_INSTALL, nexthop ID (193), result SUCCESS
11:26:41 zebra[7282]: default(0:254):5.5.5.5/32 Processing dplane result ctx 0x55c429b79690, op ROUTE_INSTALL result SUCCESS
We observe the same 2 enqueues and 2 dequeues as before, but the
completed contexts de-queued here are the earlier outstanding ones
(nexthop 193 and 5.5.5.5/32), so the new work is itself still
outstanding.
As expected, the 5.5.5.5/32 route is no longer marked as queued:
O>* 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:02:06
But the 10.10.10.10/32 route is, as we have not yet processed the completed
context:
C>q 10.10.10.10/32 is directly connected, lo, 00:26:05
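The direction of the fix is for the provider to poke the dataplane
thread whenever completed work remains in its output queue, roughly (a
sketch; the queue-length accessor is introduced in the companion commit
below):

/* Sketch: at the end of the provider thread's processing pass,
 * reschedule the dataplane thread if completed contexts remain.
 */
if (dplane_provider_out_ctx_queue_len(prov) > 0)
	dplane_provider_work_ready();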
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Returns the current number of (completed) contexts in the provider's
output queue (dp_ctx_out_q), allowing access to this data from the
provider itself.
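A sketch of the accessor's shape (the exact prototype may differ):

/* Illustrative prototype. */
uint32_t dplane_provider_out_ctx_queue_len(struct zebra_dplane_provider *prov);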
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Reference: https://www.cmand.org/communityexploration
                   --y2--
                  /  |  \
c1 ---- x1 ---- y1   |   z1
                  \  |  /
                   --y3--
1. z1 announces 192.168.255.254/32 to y2, y3.
2. y2 and y3 tag this prefix at ingress with the appropriate
communities 65004:2 (y2) and 65004:3 (y3).
3. x1 filters all communities at the egress to c1.
4. Shutdown the link between y1 and y2.
5. y1 will generate a BGP UPDATE message regarding the next-hop change.
6. x1 will generate a BGP UPDATE message regarding community change.
To avoid sending duplicate BGP UPDATE messages we should make sure
we send only actual route updates. In this example, x1 will skip
BGP UPDATE to c1 because the actual route is the same
(filtered communities - nothing changes).
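A minimal sketch of the comparison involved, using bgpd's
attribute-equality helper (the real adj-out handling is more involved):

/* Sketch: before building an UPDATE for a peer, compare the new
 * attribute set against what was last advertised on this adj-out.
 */
if (adj->attr && attrhash_cmp(adj->attr, attr))
	return; /* Identical advertisement: skip the duplicate UPDATE. */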
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Some prefixes were not shown in the link database
show command, due to issues with pointer calculation.
Signed-off-by: Yash Ranjan <ranjany@vmware.com>
Some prefixes were not shown in the intra-prefix database
show command, due to issues with pointer calculation.
Signed-off-by: Yash Ranjan <ranjany@vmware.com>
1. Enhanced framework to
   a. Verify fib active routes (lib/common_config.py).
   b. Verify bgp multi-path routes (lib/bgp.py).
   c. Create mininet nodes with different names (lib/topojson.py).
2. 12 test cases of static routing with ibgp.
   Test suite execution time is ~30 minutes.
3. 12 test cases of static routing with ebgp.
   Test suite execution time is ~30 minutes.
Signed-off-by: naveen <nguggarigoud@vmware.com>
On top of the recent `bgp suppress-fib-pending` command, which
was at the BGP_NODE level, add this command at the CONFIG_NODE
level as well and allow it to apply to all running instances
of bgp.
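For example, after this change the following should be accepted at the
top level of the configuration and apply to every bgp instance:

configure terminal
 bgp suppress-fib-pending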
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The following functions, which are part of the label-manager
implementation, are not called from outside their file. All of the
label-manager code currently lives in zebra/label_manager.c, so these
functions should be unexposed.
Functions:
- create_label_chunk
- assign_label_chunk
- delete_label_chunk
- release_label_chunk
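Concretely, the declarations come out of the header and the definitions
in zebra/label_manager.c gain the static keyword, e.g. (sketch, with
parameter lists elided):

static struct label_manager_chunk *create_label_chunk(/* ... */);
static int release_label_chunk(/* ... */);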
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>