The multicast group IP address for BUM traffic is configurable per L2 VNI.
One way to configure it is to set up a VxLAN device per L2 VNI and
specify the address against that VxLAN device -
root@TORS1:~# vtysh -c "show interface vx-1000" |grep -i vxlan
Interface Type Vxlan
VxLAN Id 1000 VTEP IP: 27.0.0.15 Access VLAN Id 1000 Mcast 239.1.1.100
root@TORS1:~# vtysh -c "show evpn vni 1000" |grep Mcast
Mcast group: 239.1.1.100
root@TORS1:~#
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Found that zebra_rnh_apply_nht_rmap would set the
NEXTHOP_FLAG_ACTIVE if not blocked by the route-map, even
if the flag was not active prior to the check. This fix
changes the flag used to denote the nexthop is filtered so
that proper active state can be retained. Additionally,
found two cases where we would send invalid nexthops via
send_client, which would also cause this crash. All three
fixed in this commit.
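To make the intent concrete, here is a minimal, self-contained sketch (mock types and flag values, not zebra's actual definitions) of keeping the route-map verdict in a separate "filtered" bit so the pre-existing active state is never overwritten:

#include <stdint.h>

/* Simplified stand-ins; the real flags and values live in the zebra/lib
 * nexthop code. */
#define MOCK_NEXTHOP_ACTIVE   0x01u  /* nexthop is usable for resolution */
#define MOCK_NEXTHOP_FILTERED 0x02u  /* rejected by a route-map */

struct mock_nexthop {
	uint8_t flags;
};

/* Apply the route-map verdict without touching the ACTIVE bit; the buggy
 * code effectively set or cleared ACTIVE here, losing the real state. */
void apply_rmap_verdict(struct mock_nexthop *nh, int permitted)
{
	if (permitted)
		nh->flags &= (uint8_t)~MOCK_NEXTHOP_FILTERED;
	else
		nh->flags |= MOCK_NEXTHOP_FILTERED;
}

/* Only nexthops that are both active and unfiltered are reported to
 * interested clients. */
int usable_for_clients(const struct mock_nexthop *nh)
{
	return (nh->flags & MOCK_NEXTHOP_ACTIVE) &&
	       !(nh->flags & MOCK_NEXTHOP_FILTERED);
}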
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
Update the nexthop flag output for the route entry dump so that
all possible flag states are output.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
We currently run nexthop_active_check multiple times. Make the
code run once and figure out state from that.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The nexthop_active_update function looks at each individual
nexthop and decides if it has changed. If any nexthop
has changed we will set ROUTE_ENTRY_CHANGED
and ROUTE_ENTRY_NEXTHOPS_CHANGED in re->status.
Additionally, the test for old_nh_num != curr_active
makes no sense: suppose we are processing several events
at the same time with a total ECMP of 16, where 14 nexthops
are active at the start and 14 are active at the end, but on
different interfaces. The count is unchanged even though the
set of active nexthops is not.
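As a rough, self-contained illustration of why a count comparison is insufficient (mock types; the real logic lives in nexthop_active_update()), two sets of 14-of-16 active nexthops compare equal by count even though the active set differs:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, trimmed-down nexthop representation. */
struct mock_nexthop {
	int ifindex;  /* outgoing interface */
	bool active;  /* result of the activation check */
};

/* Count-based test: misses the case where the number of active nexthops is
 * unchanged but *which* nexthops are active differs (14 of 16 active before
 * and after, on different interfaces). */
bool changed_by_count(const struct mock_nexthop *old_set,
		      const struct mock_nexthop *cur_set, size_t n)
{
	size_t old_active = 0, cur_active = 0;

	for (size_t i = 0; i < n; i++) {
		old_active += old_set[i].active;
		cur_active += cur_set[i].active;
	}
	return old_active != cur_active;
}

/* Per-nexthop test: report a change whenever any single nexthop flipped
 * state or moved interface -- the property that setting ROUTE_ENTRY_CHANGED
 * and ROUTE_ENTRY_NEXTHOPS_CHANGED per nexthop relies on. */
bool changed_per_nexthop(const struct mock_nexthop *old_set,
			 const struct mock_nexthop *cur_set, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (old_set[i].active != cur_set[i].active ||
		    old_set[i].ifindex != cur_set[i].ifindex)
			return true;
	return false;
}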
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The NEXTHOP_FLAG_FILTERED went away when we started treating
static routes like every other route in the system. This was
a special case in the static route handling code that just
never got fully cleaned up.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
We are effectively calling nexthop_active_update() on every
route entry being processed for installation at least twice.
This is a bit ridiculous. We need to resolve the nexthops
when we know a route has changed in some manner, so do so.
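A small sketch of the intent, using hypothetical stand-in names rather than the actual zebra call sites: resolve nexthops once, at the point we know the route entry changed, instead of on every pass through the install path:

#include <stdbool.h>

/* Hypothetical, simplified route entry. */
struct mock_route_entry {
	unsigned int status;
	bool nexthops_resolved;
};

#define MOCK_ROUTE_ENTRY_CHANGED 0x01u

/* Stand-in for the (expensive) nexthop resolution pass. */
void mock_resolve_nexthops(struct mock_route_entry *re)
{
	re->nexthops_resolved = true;
}

/* Install path sketch: instead of resolving unconditionally here *and* again
 * later in the same run, resolve exactly once, when the entry is known to
 * have changed. */
void mock_process_for_install(struct mock_route_entry *re)
{
	if (re->status & MOCK_ROUTE_ENTRY_CHANGED)
		mock_resolve_nexthops(re);

	/* ... continue installation with the already-resolved nexthops ... */
}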
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
zlog() should be part of the public logging API as it's useful in
the cases where the logging priority isn't known at compile time
(i.e. it depends on a variable).
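For illustration, a standalone mock of the calling convention this enables (FRR's real zlog() lives in lib/log.h; the mock below only mirrors the usage, it is not the library implementation):

#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>

/* Mock of a zlog(priority, fmt, ...)-style entry point; the real function is
 * part of FRR's logging library, this stand-in only mirrors the usage. */
void mock_zlog(int priority, const char *format, ...)
{
	va_list ap;

	va_start(ap, format);
	fprintf(stderr, "<%d> ", priority);
	vfprintf(stderr, format, ap);
	fputc('\n', stderr);
	va_end(ap);
}

int main(void)
{
	int install_failed = 1;

	/* The priority is only known at run time -- exactly the case the
	 * fixed-priority zlog_err()/zlog_debug() wrappers cannot express. */
	mock_zlog(install_failed ? LOG_ERR : LOG_DEBUG,
		  "route install %s", install_failed ? "failed" : "succeeded");
	return 0;
}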
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
During route recovery, because of the route installation
cycling and the nexthop label check, it could happen that the PW
never got recovered. The original code shows the intention of retrying,
but the retry call was missing. The fix adds the call that programs the
timer for the recovery attempt (a rough sketch follows the reproduction
steps below).
Example for reproducing the issue:
|P1| <-> |P2| <-> |P3|
- P1, P2, P3 are nodes using IS-IS as the IGP, with a pseudowire
  between P1 and P3 (P1, P2, P3 all have LDP daemons configured).
- After 60 seconds, kill the IS-IS daemon in P2.
- Wait 30 seconds.
- Start the IS-IS daemon in P2 again.
- The bug is that, after P1 <-> P3 connectivity recovers, the PW is
  sometimes not recovered, for the reason explained in the first paragraph.
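The sketch below is a very rough, self-contained mock of the retry idea (hypothetical names; the actual fix re-arms the existing pseudowire timer): when the install-time check fails, schedule a retry instead of giving up:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified pseudowire state. */
struct mock_pw {
	const char *name;
	bool installed;
	bool retry_scheduled;
};

/* Stand-in for the nexthop/label reachability check. */
bool mock_pw_check_ok(const struct mock_pw *pw)
{
	(void)pw;
	return false; /* pretend the check fails during re-convergence */
}

/* Stand-in for programming a one-shot retry timer (the real code uses the
 * library's timer/event API). */
void mock_schedule_retry(struct mock_pw *pw)
{
	pw->retry_scheduled = true;
	printf("%s: check failed, retry scheduled\n", pw->name);
}

void mock_pw_install(struct mock_pw *pw)
{
	if (!mock_pw_check_ok(pw)) {
		/* The missing piece: without re-arming the timer here, a
		 * failed check during route churn left the PW down forever. */
		mock_schedule_retry(pw);
		return;
	}
	pw->installed = true;
}

int main(void)
{
	struct mock_pw pw = { "pw-P1-P3", false, false };

	mock_pw_install(&pw);
	return 0;
}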
Signed-off-by: F. Aragon <paco@voltanet.io>
In the zebra terminate path, the node was removed twice
from the RB tree table, which led to a crash during
zebra shutdown: zebra_router_free_table already calls RB_REMOVE
to remove the node from the RB tree table.
siginfo=0x7fffd9134a30, context=<optimized out>) at lib/sigevent.c:249
rbt=<optimized out>, t=<optimized out>) at lib/openbsd-tree.c:226
t=0x56296965ff50 <zebra_router_table_head_RB_INFO>) at lib/openbsd-tree.c:383
rbt=rbt@entry=0x562969669bd0 <zrouter+16>, elm=elm@entry=0x56296afcf810)
at lib/openbsd-tree.c:393
(elm=0x56296afcf810, head=0x562969669bd0 <zrouter+16>) at zebra/zebra_router.h:46
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
We were memsetting the zebra_pbr_rule struct after
we had already put some information in it. Also updated
the initialization of the struct to use braces instead of a
memset.
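A small self-contained illustration of this bug class and the fix (mock struct, not the real zebra_pbr_rule): a memset() placed after fields have been filled in wipes them, while brace initialization zeroes the struct before anything is assigned:

#include <string.h>

/* Hypothetical, trimmed-down rule struct (not the real zebra_pbr_rule). */
struct mock_pbr_rule {
	unsigned int seq;
	unsigned int priority;
	int ifindex;
};

/* Buggy ordering: the memset() erases the field assigned just above it. */
void fill_rule_buggy(struct mock_pbr_rule *rule, unsigned int seq)
{
	rule->seq = seq;
	memset(rule, 0, sizeof(*rule)); /* oops: seq is zeroed again */
	rule->priority = 100;
}

/* Fixed pattern: start from a zero-initialized value, then fill it in. */
void fill_rule_fixed(struct mock_pbr_rule *out, unsigned int seq)
{
	struct mock_pbr_rule rule = { 0 }; /* brace init instead of memset */

	rule.seq = seq;
	rule.priority = 100;
	*out = rule;
}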
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
The `show ipv[4|6] <nht|import-check> ...` commands are starting
to produce a lot of output now that multiple daemons
use the code. Allow the specification of a v4 or v6 address
so that the show command only displays the nht entry of interest.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
This fix covers the case where two or more events are processed but only one
becomes effective. E.g. when mixing a synchronous label request from the LDP
daemon and an asynchronous request from the BGP daemon, it could happen that
BGP got its label chunk while LDP was left waiting for the response.
Given e.g.:

  ldpd  <--(sync label request)-->   Zebra (label proxy)  <-->  Zebra (shared label manager)
  bgpd  <--(async label request)-->  Zebra (label proxy)

Sequence:

  LDP label request  ----->  Zebra (label proxy, forwarded)  ---->  Zebra (LM)
  BGP label request  ----->  Zebra (label proxy, forwarded)  ---->  Zebra (LM)
                     <-----  Zebra (LM) reply for LDP
                     <-----  Zebra (LM) reply for BGP
Signed-off-by: F. Aragon <paco@voltanet.io>
We don't use the vrf-level VRF_RIB_SCHEDULED flag any longer;
remove it and collapse the zebra_vrf flags' values.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The current code path for registration does this:
a) Lookup or create the rnh.
b) Register the client with the rnh for callbacks.
   If this is a new rnh, send a response to the client that only
   includes the rnh data that it has (nothing, so no path).
   If this is an existing rnh, send the actual path to the client,
   if it exists.
c) If a new client was added or a flag has changed, refigure and send
   the result to all clients.
This is problematic when the rnh is new: clients
will receive two answers:
1) A callback with no nexthops.
2) A callback with the resolved set of nexthops.
Imagine pim, which depends on nht to handle this: pim will create
a mroute (because it does a hard lookup of the RPF as it registers
the nexthop), then it will receive the first callback, causing it
to tear down the mroute, and then receive the second callback,
causing it to put it right back. This is obviously not very
good for mroutes.
This code moves the send to the new client until after the new
client has been registered, thus allowing only one callback to the
new client, containing the actual answer.
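A condensed, self-contained sketch of the ordering change (mock types and names, not zebra's actual rnh code): evaluate first where needed, then send exactly one notification, so a brand-new rnh no longer produces an empty answer followed by the real one:

#include <stdbool.h>
#include <stdio.h>

/* Trimmed-down, hypothetical stand-in for zebra's rnh state. */
struct mock_rnh {
	bool is_new;      /* created by this registration */
	int nexthop_num;  /* 0 until an evaluation has run */
};

void mock_send_to_client(const char *client, const struct mock_rnh *rnh)
{
	printf("notify %s: %d nexthop(s)\n", client, rnh->nexthop_num);
}

void mock_evaluate(struct mock_rnh *rnh)
{
	rnh->nexthop_num = 2; /* pretend resolution found two paths */
}

/* Old flow: a brand-new rnh answered immediately (zero nexthops), then
 * answered again after the refigure -- two callbacks, the first of which
 * makes pim tear down the mroute it just created. */
void mock_register_old(const char *client, struct mock_rnh *rnh)
{
	mock_send_to_client(client, rnh);   /* premature, empty answer */
	if (rnh->is_new)
		mock_evaluate(rnh);
	mock_send_to_client(client, rnh);   /* the real answer */
}

/* New flow: finish registering/evaluating first, then send one callback
 * carrying the actual answer. */
void mock_register_new(const char *client, struct mock_rnh *rnh)
{
	if (rnh->is_new)
		mock_evaluate(rnh);
	mock_send_to_client(client, rnh);
}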
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Routing protocols are allowed (and even encouraged) to modify
the flags that influence nexthop tracking. As such, when
we modify the tracking of a nexthop, say from connected-force
to not (or vice versa), we must re-evaluate the nexthop and send
the results up to the interested parties.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
After we have evaluated the rnh for an import-check type
and copied the re, we know that the state has changed
and we should notify the end user about it.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
LSP processing was driven by a zvrf flag set when a connected route
came or went. But that flag did not let us know that LSP processing
was needed at any point other than after the meta-queue
processing had finished.
Eventually we moved the meta-queue processing of do_nht_processing
to after the dataplane sent the main pthread some results.
This of course left us with a timing hole: if a connected
route came in and we received a dataplane response *before*
the meta-queue was processed, we would not do the necessary work.
Move the LSP processing to a flag on the rib_dest_t. If it
is marked, then we need to process LSPs.
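A bare-bones sketch of the mechanism with hypothetical flag and helper names (the real flag lives on zebra's rib_dest_t): mark the destination itself when a connected route comes or goes, and let the dataplane-result handler act on that mark, independent of meta-queue timing:

/* Hypothetical, trimmed-down stand-in for zebra's rib_dest_t. */
struct mock_rib_dest {
	unsigned int flags;
};

#define MOCK_DEST_NEEDS_LSP_WORK 0x01u

/* Connected route change: record the pending LSP work on the destination
 * itself instead of in a per-VRF flag that is only consulted after the
 * meta-queue runs. */
void mock_connected_route_change(struct mock_rib_dest *dest)
{
	dest->flags |= MOCK_DEST_NEEDS_LSP_WORK;
}

/* Dataplane result handling: whenever this destination's result comes back,
 * the pending mark is visible regardless of meta-queue timing. */
void mock_handle_dplane_result(struct mock_rib_dest *dest)
{
	if (dest->flags & MOCK_DEST_NEEDS_LSP_WORK) {
		dest->flags &= ~MOCK_DEST_NEEDS_LSP_WORK;
		/* ... process LSPs affected by this destination ... */
	}
}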
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Add a detailed debugging command for NHT tracking and add
detailed output to the log about why we make the decisions
we do. I tried to model this on the detailed rib-processing
debugs that we added a few months back.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Currently nexthop tracking is performed for all nexthops that
are being tracked after a group of contexts are passed back
from the data plane for post install processing.
This is inefficient and leaves us sending nexthop tracking
changes at an accelerated pace whenever we think we've changed
a route. Additionally, every route change causes us
to re-examine all the nexthops we are tracking, regardless of
whether they are related to the route change or not.
Let's modify the code base to track the rnhs off of the rib
table's rn, via its `rib_dest_t`. Then, after we process a node and
install it into the data plane, rib_process_result can
look at the `rib_dest_t` associated with the rn and see whether
a nexthop depended on this route node. If so, refigure it.
Additionally, we will store rnhs that are not resolved on the
0.0.0.0/0 node's nexthop tracking list. As such, when a route node
changes we can quickly walk up the rib tree and notice that
these need to be reprocessed as well.
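An abstracted sketch of the bookkeeping described above (mock structures and hypothetical names): each node's rib_dest_t keeps the list of rnh entries resolving through it, and post-install processing only touches those:

/* Hypothetical, trimmed-down stand-ins for zebra's structures. */
struct mock_rnh {
	struct mock_rnh *next;  /* next tracked nexthop resolving via this dest */
	int needs_refigure;
};

struct mock_rib_dest {
	struct mock_rnh *rnh_list;  /* nexthops currently resolved through here */
};

/* Post-install processing (the rib_process_result step in the text): only the
 * rnh entries hanging off this particular destination are re-evaluated. */
void mock_post_install(struct mock_rib_dest *dest)
{
	for (struct mock_rnh *rnh = dest->rnh_list; rnh; rnh = rnh->next)
		rnh->needs_refigure = 1;
}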
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Add a default route_node for our routing tables. This will allow us
to know that we can hang data off the default route for processing.
We will be hanging the nexthop tracking data structures off the rib_dest_t
so that we can know which nexthops we need to handle. Effectively
nexthops that we are tracking that are unresolved will be stored on the
default route. When something changes in the rib tree we can
work up the rn->parent pointer checking for nexthops we need to re-evaluate.
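A simplified sketch of the parent walk (mock node type; the real nodes come from the library route-table code): starting at the changed node, follow rn->parent toward the default route and flag any tracked nexthops found along the way, including the unresolved ones parked on 0.0.0.0/0:

/* Hypothetical, trimmed-down route node: a parent pointer toward the
 * covering prefix (the default route sits at the top) and whatever tracked
 * nexthops hang off this node's rib_dest_t. */
struct mock_rnh_list {
	int pending;
};

struct mock_route_node {
	struct mock_route_node *parent;
	struct mock_rnh_list *tracked;  /* NULL when nothing is tracked here */
};

/* When a node changes, every covering node up to and including the default
 * route may hold nexthops that now need to be re-resolved. */
void mock_reevaluate_covering(struct mock_route_node *changed)
{
	for (struct mock_route_node *rn = changed; rn; rn = rn->parent)
		if (rn->tracked)
			rn->tracked->pending = 1;
}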
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The resolved_route is the prefix we are using in the routing table
to resolve this particular nexthop we are tracking. Add code
to better track its changes.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The prn value, as passed in, may be NULL; as such, do not
allow it to be dereferenced (even though it works now).
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
We have several route types, KERNEL and CONNECT, that are handled as special
cases in the code. This was causing a lot of work keeping the two different
classes of route types special (SYSTEM or not). Put the dplane
in charge of the code that sets the bits signalling route install/failure.
This greatly simplifies the calling path and makes all route types
be handled exactly the same. Additionally, code that we want to run
post data-plane install can then just work as per normal, instead
of having to know that we need to run it for a special type
of route.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we get a route install failure from the kernel, actually
indicate in the rib the status of the routes.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When switching a route from one route type to another, actually
unset the old route's enqueued flag.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When shutting down, the individual vrfs own the shutdown of the table
and the subsequent removal of the routes from the kernel.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When shutting down, and we have a very large table to shut down,
close the zebra zserv client socket after we've intentionally
closed all the client connections.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>