a) Use appropriate %p modifiers for output
b) Display vrf name in addition to vrf id (see the sketch below)
c) Remove now unused function
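A minimal before/after sketch of items a) and b), assuming printfrr's
%pFX extension and vrf_id_to_name(); rn/re are illustrative variable
names, not the exact code touched by this commit:

    /* Before: prefix rendered by hand, only the numeric vrf id shown. */
    char buf[PREFIX_STRLEN];
    prefix2str(&rn->p, buf, sizeof(buf));
    zlog_debug("%s: %s vrf(%u)", __func__, buf, re->vrf_id);

    /* After: let printfrr's %pFX render the prefix and show the vrf
     * name next to its id. */
    zlog_debug("%s: %pFX vrf %s(%u)", __func__, &rn->p,
               vrf_id_to_name(re->vrf_id), re->vrf_id);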
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
During quick ifdown / ifup events from the linux kernel there
exists a situation where a prefix that has both a kernel route
and a static route can be queued up on the meta-q. This matters
when the static route points at a connected route for nexthop
resolution and we receive a series of quick up/down events
*after* the static route and kernel route are queued up for rib
reprocessing. Since the static route, the kernel route and the
connected route are all queued on meta-q 1, the connected route
can end up being resolved after the static route fails to resolve,
leaving the static route in an unresolved state.
Add a new queue level and put connected routes on their own level,
since they are the fundamental building blocks of pretty much
all the other routes.
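A minimal sketch of the idea (the enum and helper here are
illustrative, not the exact zebra code): connected routes get their
own, earlier meta-queue level so they are always processed before the
routes that may resolve through them.

    /* Illustrative sketch, not the exact zebra implementation. */
    enum meta_queue_level {
        META_QUEUE_CONNECTED = 0, /* new: connected routes first */
        META_QUEUE_KERNEL,        /* kernel and static routes */
        META_QUEUE_IGP,           /* OSPF, IS-IS, RIP, ... */
        META_QUEUE_BGP,
        META_QUEUE_OTHER,
        META_QUEUE_MAX,
    };

    static enum meta_queue_level meta_queue_level_for(int route_type)
    {
        switch (route_type) {
        case ZEBRA_ROUTE_CONNECT:
            return META_QUEUE_CONNECTED;
        case ZEBRA_ROUTE_KERNEL:
        case ZEBRA_ROUTE_STATIC:
            return META_QUEUE_KERNEL;
        case ZEBRA_ROUTE_BGP:
            return META_QUEUE_BGP;
        default:
            return META_QUEUE_IGP;
        }
    }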
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When zebra is processing routes to determine what to send
to the rib, suppose we have two routes: (a) a route processed
earlier for which none of its nexthops were active, and (b)
a route that has good nexthops but a worse admin distance.
rib_process would not relook at (a)'s nexthops because
the ROUTE_ENTRY_CHANGED flag was not set, and (a) would
win when compared to (b) because its admin distance
was better, leaving us in a state where we would
attempt and fail to install route (a) because it
was not valid.
Modify the code to consider the number of active nexthops
a route has as a determiner of whether we can use the route.
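A simplified sketch of that selection rule (field and function names
are illustrative, not the exact rib_process() code): a candidate with
zero active nexthops must not win on admin distance alone.

    static struct route_entry *rib_choose(struct route_entry *current,
                                          struct route_entry *alternate)
    {
        /* A route whose nexthops all failed activation is unusable,
         * even if its admin distance is better. */
        if (current->nexthop_active_num == 0
            && alternate->nexthop_active_num > 0)
            return alternate;
        if (alternate->nexthop_active_num == 0
            && current->nexthop_active_num > 0)
            return current;

        /* Otherwise fall back to the usual distance comparison. */
        return (alternate->distance < current->distance) ? alternate
                                                         : current;
    }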
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Make sure the all-protocols test_isis_interfaces testcase uses
a regex substitution that includes all the hex characters.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
This should never happen; there is no need to debug-guard it, and it
is not a mere warning: if this isn't working then NHT is not working
at all.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
This function returns true on success and false otherwise. Returning -1
on error is equivalent to returning true.
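A small self-contained illustration of why: in C, any nonzero value
converts to true when assigned to a bool, so a caller testing the
return value cannot distinguish the -1 "error" path from success.

    #include <stdbool.h>
    #include <stdio.h>

    /* Returning -1 from a bool function converts to true. */
    static bool do_thing(int arg)
    {
        if (arg < 0)
            return -1; /* intended as an error, but becomes true */
        return true;
    }

    int main(void)
    {
        if (do_thing(-5))
            printf("caller thinks it succeeded\n"); /* this prints */
        return 0;
    }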
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
rib_process_update_fib is passed two route entries: the old
(previously installed) one and the new (the one to install).
When the function detects that the new route is unusable because
it has zero usable nexthops, we uninstall the old route. The
problem here is that we should not attempt to uninstall any
route that is not owned by FRR. Modify the code to not attempt
this behavior.
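A minimal sketch of the guard, assuming zebra's RIB_SYSTEM_ROUTE()
test and an uninstall helper; the variable names are illustrative:

    /* Kernel/system routes were never installed by FRR, so do not
     * ask the kernel to remove them from the FIB. */
    if (!RIB_SYSTEM_ROUTE(old))
        rib_uninstall_kernel(rn, old);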
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
* add files to vtysh_scan when building only fabricd
* don't add isisd/fabricd commands when daemon build is disabled
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When receiving old copies (e.g. originated before the local ospf6d was
restarted) of supposedly self-originated LSAs which we previously tried to
flush from the network (by setting them to MaxAge), neither flood them nor
add them to our LSDB. Instead, keep the MaxAge version until we actually
(re-)originate them.
Possible fix for #7030. Testcase in #7168
(tests/topotests/ospf6-dr-no-netlsa-bug7030).
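A hedged sketch of what the receive-path check might look like; the
placement and variable names are illustrative, not the exact ospf6d
change:

    if (new->header->adv_router == ospf6->router_id && old
        && OSPF6_LSA_IS_MAXAGE(old) && !OSPF6_LSA_IS_MAXAGE(new)) {
        /* A stale copy of an LSA we are flushing: neither flood it
         * nor install it; keep the MaxAge copy until we actually
         * (re-)originate the LSA. */
        ospf6_lsa_delete(new);
        return;
    }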
Signed-off-by: Martin Buck <mb-tmp-tvguho.pbz@gromit.dyndns.org>
The pim-basic suite uses some private python scripts to
send and receive mcast traffic: revise them to support
both py2 and py3.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
b0e9567ed1 fixed an issue whereby
zebra would abort while building an update for a blackhole route.
The same issue, `assert(data_len)` failing in
`zfpm_build_route_updates()`, can be observed when building updates
for unreachable and prohibit routes.
To address this, `netlink_route_info_fill()` is updated so that it
does not indicate failure, due to lack of nexthops, for any
blackhole-type route.
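A hedged sketch of the idea, loosely following the fpm netlink code
(struct and field names are illustrative): map each blackhole variant
to its netlink route type and report the route info as valid even
though it carries no nexthops.

    switch (nexthop->bh_type) {
    case BLACKHOLE_REJECT:
        ri->rtm_type = RTN_UNREACHABLE;
        break;
    case BLACKHOLE_ADMINPROHIB:
        ri->rtm_type = RTN_PROHIBIT;
        break;
    case BLACKHOLE_NULL:
    default:
        ri->rtm_type = RTN_BLACKHOLE;
        break;
    }
    return 1; /* route info is valid even with zero nexthops */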
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
When debugging why a route was not successfully installed into the
rib, it would be preferable that the end user only have to turn
on `debug zebra rib detail` as that is what we have been telling
people to do for the last couple of years. Consolidate *back*
to this.
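For illustration, a diagnostic keyed off the single flag behind
`debug zebra rib detail`; the message text and variables here are
only an example, not the exact log lines this commit moves:

    if (IS_ZEBRA_DEBUG_RIB_DETAILED)
        zlog_debug("%s: %pFX not installed: no usable nexthops",
                   __func__, &rn->p);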
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
An l2vni flap can lead to duplicate entries being created in the
l3vni's l2vni list.
Use a sorted list add that rejects duplicates.
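A minimal sketch of such a sorted, duplicate-rejecting insert; the
helper name is illustrative of the list operation the fix relies on:

    static bool list_add_sort_nodup(struct list *l, void *val)
    {
        struct listnode *n;
        void *data;

        for (ALL_LIST_ELEMENTS_RO(l, n, data)) {
            int cmp = l->cmp(val, data);

            if (cmp == 0)
                return false;   /* already present: skip */
            if (cmp < 0)
                break;          /* insert before this node */
        }
        if (n)
            listnode_add_before(l, n, val);
        else
            listnode_add(l, val);
        return true;
    }

Without the fix, duplicates accumulate on every flap: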
root@TORC11:mgmt:~# show evpn vni 4001
VNI: 4001
Type: L3
Tenant VRF: vrf1
State: Up
...
L2 VNIs: 1000 1000 1000 0 0 1002
root@TORC11:mgmt:~# ip link set down vx-1002
root@TORC11:mgmt:~# ip link set up vx-1002
root@TORC11:mgmt:~# show evpn vni 4001
VNI: 4001
Type: L3
Tenant VRF: vrf1
State: Up
...
L2 VNIs: 1000 1000 1000 0 0 1002 1002
Ticket: CM-31545
Reviewed By:
Testing Done:
With Fix:
After multiple flaps the VNI counts remained the same.
root@TORC11:mgmt:~# ip link set down vx-1002
root@TORC11:mgmt:~# ip link set up vx-1002
root@TORC11:mgmt:~# ip link set down vx-1002
root@TORC11:mgmt:~# ip link set up vx-1002
root@TORC11:mgmt:~# net show evpn vni 4001
VNI: 4001
Type: L3
Tenant VRF: vrf1
State: Up
...
L2 VNIs: 1000 1002
Signed-off-by: Chirag Shah <chirag@nvidia.com>