Some vendors advertise only EAD-per-ES routes, i.e. they do not
advertise EAD-per-EVI routes. To interoperate with these vendors we
need to relax the dependency on EAD-per-EVI routes for activating a
remote ES-PE.
Sample config -
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
router bgp 5553
address-family l2vpn evpn
disable-ead-evi-rx
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-31177
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
New work enqueued to the dplane_fpm_nl provider is initially de-queued
and re-enqueued, in fpm_nl_process(), to be processed by the provider's
own thread.
After performing this initial de-queue/enqueue we return to
dplane_thread_loop() and check the dplane_fpm_nl output queue for any
work which has been completed.
Since this work is being processed in another thread, it is very
likely that some (or all) of it will still be outstanding at this point.
The dataplane thread finishes up any other tasks and then waits until
it is next scheduled. In the meantime the dplane_fpm_nl thread is
processing its work queue until completion.
The issue arises here as the dataplane thread is not explicitly
re-scheduled once dplane_fpm_nl has drained its work queue and
populated its output queue with completed work.
This completed work can sit in the output queue for an indeterminate
period of time, depending upon when the dataplane thread is next
scheduled for other work. If the RIB has reached a stable state then
this could be a significant period of time. During this period zebra
marks these routes as queued, even though they have actually been
processed by all dataplane providers.
An unrelated RIB change which triggers a FIB update will result in
the dataplane thread being scheduled and this completed work then
being processed. At that point the routes will no longer be marked
as queued by zebra. However, this new FIB update might itself then
fall victim to the same scenario!
We can observe the above behaviour in the following detailed dplane logs:
11:24:47 zebra[7282]: dplane: incoming new work counter: 2
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:47 zebra[7282]: dplane provider 'Kernel': processing
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:47 zebra[7282]: dplane dequeues 1 completed work from provider dplane_fpm_nl
11:24:47 zebra[7282]: dplane has 1 completed, 0 errors, for zebra main
2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
Only 1 completed context was de-queued, so there is still outstanding work.
11:24:58 zebra[7282]: dplane: incoming new work counter: 2
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:58 zebra[7282]: dplane provider 'Kernel': processing
11:24:58 zebra[7282]: ID (193) Dplane nexthop update ctx 0x55c429b6fed0 op NH_INSTALL
11:24:58 zebra[7282]: 0:5.5.5.5/32 Dplane route update ctx 0x55c429b79690 op ROUTE_INSTALL
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:24:58 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
A further 2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
2 completed contexts were de-queued, which sounds good as that is what we en-queued.
However, there is an outstanding context from earlier, so there is still outstanding
work.
Indeed the new 5.5.5.5/32 route is marked as queued:
O>q 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:01:19
This remains the case until we trigger a FIB update, e.g. by installation
of the 10.10.10.10/32 route:
11:26:41 zebra[7282]: dplane: incoming new work counter: 2
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:26:41 zebra[7282]: dplane provider 'Kernel': processing
11:26:41 zebra[7282]: ID (195) Dplane nexthop update ctx 0x55c429b78ce0 op NH_INSTALL
11:26:41 zebra[7282]: 0:10.10.10.10/32 Dplane route update ctx 0x55c429b7a040 op ROUTE_INSTALL
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:26:41 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
11:26:41 zebra[7282]: zebra2proto: Please add this protocol(2) to proper rt_netlink.c handling
11:26:41 zebra[7282]: Nexthop dplane ctx 0x55c429b6fed0, op NH_INSTALL, nexthop ID (193), result SUCCESS
11:26:41 zebra[7282]: default(0:254):5.5.5.5/32 Processing dplane result ctx 0x55c429b79690, op ROUTE_INSTALL result SUCCESS
We observe the same 2 enqueues and 2 dequeues as before, but the de-queued
contexts are the previously outstanding ones (nexthop ID 193 and 5.5.5.5/32),
so the 2 new contexts are again outstanding.
As expected, the 5.5.5.5/32 route is no longer marked as queued:
O>* 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:02:06
But the 10.10.10.10/32 route is, as we have not yet processed the completed
context:
C>q 10.10.10.10/32 is directly connected, lo, 00:26:05
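The shape of the fix this points to is for the provider to explicitly wake
the dataplane pthread once it has drained its queue and enqueued the
completed contexts. A minimal sketch of that idea, assuming a worker shaped
like dplane_fpm_nl's processing task (locking and error handling are elided,
and field/helper names such as fnc->ctxqueue and fpm_nl_enqueue() are
assumptions, not copied from the patch):

/* context: zebra/dplane_fpm_nl.c (sketch) */
static int fpm_process_queue(struct thread *t)
{
    struct fpm_nl_ctx *fnc = THREAD_ARG(t);
    struct zebra_dplane_ctx *ctx;

    /* Drain this provider's input queue (mutex handling elided). */
    while ((ctx = dplane_ctx_dequeue(&fnc->ctxqueue)) != NULL) {
        /* Write the update to the FPM socket (assumed helper). */
        fpm_nl_enqueue(fnc, ctx);

        dplane_ctx_set_status(ctx, ZEBRA_DPLANE_REQUEST_SUCCESS);
        dplane_provider_enqueue_out_ctx(fnc->prov, ctx);
    }

    /*
     * Explicitly signal the dataplane pthread that completed work is
     * available; without this the contexts above can sit in the
     * output queue until the dataplane thread runs for other work.
     */
    dplane_provider_work_ready();

    return 0;
}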
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Returns the current number of (completed) contexts in the provider's
output queue (dp_ctx_out_q), allowing access to this data from the
provider itself.
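A sketch of what such an accessor could look like (the function name and
locking details here are assumptions, not necessarily what the commit adds):

/* context: zebra/zebra_dplane.c (sketch) */
uint32_t dplane_provider_out_ctx_queue_len(struct zebra_dplane_provider *prov)
{
    uint32_t len = 0;
    struct zebra_dplane_ctx *ctx;

    /* dp_ctx_out_q is shared with the dataplane pthread, so count
     * the queued contexts under the provider's lock. */
    dplane_provider_lock(prov);
    TAILQ_FOREACH(ctx, &prov->dp_ctx_out_q, zd_q_entries)
        len++;
    dplane_provider_unlock(prov);

    return len;
}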
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Reference: https://www.cmand.org/communityexploration
--y2--
/ | \
c1 ---- x1 ---- y1 | z1
\ | /
--y3--
1. z1 announces 192.168.255.254/32 to y2, y3.
2. y2 and y3 tag this prefix at ingress with the appropriate
communities: 65004:2 (y2) and 65004:3 (y3).
3. x1 filters all communities at the egress to c1.
4. Shut down the link between y1 and y2.
5. y1 will generate a BGP UPDATE message regarding the next-hop change.
6. x1 will generate a BGP UPDATE message regarding community change.
To avoid sending duplicate BGP UPDATE messages we should make sure
we send only actual route updates. In this example, x1 should skip
the BGP UPDATE to c1 because the effective route is unchanged
(the communities are filtered out, so nothing changes).
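A sketch of the check this implies, assuming bgpd's attrhash_cmp()
attribute-equality helper and an adj-out entry that stores the last
advertised attribute (illustrative only, not the literal patch):

    /* context: bgpd update advertisement path (sketch).
     * If the attribute we are about to advertise is identical to what
     * this peer already received, suppress the duplicate UPDATE. */
    if (adj->attr && attr && attrhash_cmp(adj->attr, attr))
        return;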
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Some prefixes were not shown in the link database
show command, due to an incorrect pointer calculation.
Signed-off-by: Yash Ranjan <ranjany@vmware.com>
Some prefixes were not shown in the intra-prefix database
show command, due to an incorrect pointer calculation.
Signed-off-by: Yash Ranjan <ranjany@vmware.com>
1. Enhanced framework to
   a. Verify FIB active routes (lib/common_config.py).
   b. Verify BGP multipath routes (lib/bgp.py).
   c. Create mininet nodes with different names (lib/topojson.py).
4. 12 test cases of static routing with iBGP.
   Test suite execution time is ~30 minutes.
5. 12 test cases of static routing with eBGP.
   Test suite execution time is ~30 minutes.
Signed-off-by: naveen <nguggarigoud@vmware.com>
On top of the recent `bgp suppress-fib-pending` command, which was
at the BGP_NODE level, add this command at the CONFIG_NODE level as
well and allow it to apply to all running instances of bgp.
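For example, assuming the global command keeps the same syntax as the
per-instance one:

! per-instance form (BGP_NODE):
router bgp 65001
 bgp suppress-fib-pending
!
! new global form (CONFIG_NODE), applying to all instances:
bgp suppress-fib-pending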
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The following functions, which are part of the label-manager
implementation, are not called from outside their own file. All of
the label-manager code currently lives in zebra/label_manager.c, so
these functions should be unexposed, as sketched after the list below.
Functions:
- create_label_chunk
- assign_label_chunk
- delete_label_chunk
- release_label_chunk
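The change is essentially to drop the prototypes from the header and mark
the definitions static, e.g. (return type and arguments abbreviated;
illustrative only):

/* before, in zebra/label_manager.h -- visible zebra-wide */
struct label_chunk *create_label_chunk(/* args */);

/* after, in zebra/label_manager.c only */
static struct label_chunk *create_label_chunk(/* args */)
{
    /* ... body unchanged ... */
}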
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
Use the user-provided AD for local (aggregate) routes.
address-family ipv4 unicast
distance bgp 20 200 210
network 47.2.2.8/30
aggregate-address 51.1.0.0/16
Testing Done:
Before: the aggregate route used the default AD of 200 even though the
user provided a local AD.
B>* 51.1.0.0/16 [200/0] unreachable (blackhole), weight 1, 00:01:14
After:
B>* 51.1.0.0/16 [210/0] unreachable (blackhole), weight 1, 00:00:01
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Removing the obsolete ldp-sync periodic 'hello' message.
When ldp-sync is configured, IGPs take action if the LDP process goes down.
The IGPs have been updated to use the zapi client close callback to detect
the LDP process going down.
Signed-off-by: Karen Schoener <karen@voltanet.io>
In some extraordinary circumstances an LSP might not have any
TLVs. Add a null check to prevent a crash when that happens.
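A minimal sketch of the guard, assuming the parsed TLVs hang off
lsp->tlvs as in isisd (illustrative):

    /* An LSP received with no TLVs leaves lsp->tlvs NULL; bail out
     * before dereferencing it. */
    if (lsp->tlvs == NULL)
        return;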
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
In ldpd, the child processes send IPC messages to the main process to
perform logging on their behalf (access to the file descriptor used
for logging needs to be serialized). This commit fixes a problem
that was preventing the printfrr format specifiers from working in
the child processes, since vsnprintf() was being used instead of
vsnprintfrr() before sending the log messages to the parent process.
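A sketch of the change in the child-side logging path (the surrounding
helper is assumed; vsnprintfrr() is the printfrr-aware counterpart
declared in lib/printfrr.h):

    char buf[1024];

    /* was: vsnprintf(buf, sizeof(buf), fmt, ap);
     * plain vsnprintf() does not understand printfrr extensions
     * such as %pI4 or %pFX */
    vsnprintfrr(buf, sizeof(buf), fmt, ap);

    /* ... buf is then sent to the parent process over IPC ... */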
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
When ldp-sync is configured, IGPs take action if the LDP process goes down.
Currently, IGPs detect the LDP process is down if they do not receive a
periodic 'hello' message from LDP within 1 second.
Intermittently, this heartbeat mechanism causes false topotest failures.
When the failure occurs, LDP is busy receiving messages from zebra for a
few seconds. During this time, LDP does not send the expected periodic
message.
With this change, IGPs detect LDP down via zapi client close message.
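A sketch of the detection on the IGP side, assuming the zapi client-close
notification API from lib/zclient.h (handler registration and the helper
name are assumptions):

/* context: an IGP daemon's zebra interface (sketch) */
static int igp_ldp_sync_client_close_notify(ZAPI_CALLBACK_ARGS)
{
    struct zapi_client_close_info info;

    if (zapi_client_close_notify_decode(zclient->ibuf, &info) < 0)
        return -1;

    /* Only react when the zebra client that closed was LDP. */
    if (info.proto == ZEBRA_ROUTE_LDP)
        igp_ldp_sync_handle_client_close(&info); /* assumed helper */

    return 0;
}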
Signed-off-by: Karen Schoener <karen@voltanet.io>
In the case the namespace pointer is already available, feed it at VRF
creation. This prevents a crash if netlink parsing has already begun
while vrf-lite is not yet enabled.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>