The global vrf in zebra is always non-NULL. It is normally bound to the
default vrf by `zebra_vrf_init()`, and at other times to some specific
vrf; either way it is non-NULL.
So remove all redundant checks on the return value of
`zebra_vrf_get_evpn()`.
Additionally, remove the unnecessary check for `zvrf` in
`zebra_vxlan_cleanup_tables()`.
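An illustrative sketch of the change at a typical call site; the surrounding
code is hypothetical, only `zebra_vrf_get_evpn()` is the real function:

struct zebra_vrf *zvrf;

/* Before: the return value was needlessly checked */
zvrf = zebra_vrf_get_evpn();
if (!zvrf)
        return;

/* After: use the pointer directly, since it is always non-NULL */
zvrf = zebra_vrf_get_evpn();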
Signed-off-by: anlan_cs <vic.lan@pica8.com>
RFC 7471 Section 4.2.7:
It is possible for min delay and max delay to be the same value.
Prior to this change, the code required min < avg < max. This
change allows min == avg and avg == max.
test case:
interface eth-rt1
link-params
delay 8000 min 8000 max 8000
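A minimal sketch of the relaxed check, using illustrative variable names
rather than the exact ones in the link-params code:

/* Before: strictly increasing values were required */
if (min >= avg || avg >= max)
        return CMD_WARNING_CONFIG_FAILED;

/* After: equal values are accepted, per RFC 7471 Section 4.2.7 */
if (min > avg || avg > max)
        return CMD_WARNING_CONFIG_FAILED;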
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
First, calls to `hash_get()` with a NULL `alloc_func` are left
unchanged; only calls with a non-NULL `alloc_func` are addressed here.
Since `hash_get()` with a non-NULL `alloc_func` cannot fail, its return
value is never NULL. In this case, remove the unnecessary NULL check on
the return value and, where the result is unused, cast the call to
`(void)`.
Importantly, two other patterns with a non-NULL `alloc_func` are also
left unchanged:
1) `assert(<returned_data> == <searching_data>)` is used to ensure the
   node was newly created rather than found. See
   `isis_vertex_queue_insert()`; isisd has many examples of this case.
2) `<returned_data> != <searching_data>` is used to detect that an
   existing node was found, in which case <searching_data> is freed. See
   `aspath_intern()`; bgpd has many examples of this case.
Here, <returned_data> is the value returned by `hash_get()`, and
<searching_data> is the data being inserted into the hash table.
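A short sketch of the three patterns described above (identifiers are
illustrative):

/* Non-NULL alloc_func: the result cannot be NULL, so the NULL check
 * is dropped and the unused return value is explicitly ignored. */
(void)hash_get(table, data, some_alloc_func);

/* Kept unchanged, case 1: assert the entry was newly created. */
returned_data = hash_get(table, searching_data, some_alloc_func);
assert(returned_data == searching_data);

/* Kept unchanged, case 2: an existing entry was found, so the
 * candidate data is freed. */
returned_data = hash_get(table, searching_data, some_alloc_func);
if (returned_data != searching_data)
        some_free_func(searching_data);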
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Don't rely on the OS interface name length definition; use the FRR
definition instead.
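For illustration, this is the kind of substitution involved, assuming FRR's
own size macro (`INTERFACE_NAMSIZ` from lib/if.h) as the replacement:

/* Before: OS-defined length */
char ifname[IFNAMSIZ];

/* After: FRR-defined length */
char ifname[INTERFACE_NAMSIZ];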
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
There's a common pattern of "get VRF context for CLI node" here, which
first got a helper macro in zebra that then permeated into pimd.
Unfortunately the pimd copy wasn't quite adjusted correctly and thus
caused two coverity warnings (CID 1517453, CID 1517454).
Fix the PIM one, and clean up by providing a common base macro in
`lib/vty.h`.
Also rename the macros (add `_VRF`) to make it clearer what they do.
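A rough sketch of what such a common base macro can look like; the name
follows the renaming described above, but the exact definition in
`lib/vty.h` may differ:

/* Resolve the VRF for the current CLI node: fall back to the default
 * VRF at CONFIG_NODE, otherwise take the VRF from the vty context. */
#define VTY_DECLVAR_CONTEXT_VRF(vrfptr)                                \
        struct vrf *vrfptr;                                            \
        if (vty->node == CONFIG_NODE)                                  \
                vrfptr = vrf_lookup_by_id(VRF_DEFAULT);                \
        else                                                           \
                vrfptr = VTY_GET_CONTEXT(vrf);                         \
        assert(vrfptr)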
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
1. Add a family field to the existing ZEBRA_IPMR_ROUTE_STATS message so
that both IPv4 and IPv6 traffic stats can be exchanged between pim and zebra.
2. Modify the debug output to print both v4/v6 prefixes.
pimd: pim6d: Modify pim_zlookup_sg_statistics to get ipv6 stats
Modify the pim_zlookup_sg_statistics API to get IPv4/IPv6 stats from
zebra, making the API common.
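A hedged sketch of what carrying the family in the message can look like;
the stream calls are FRR's stream API, but the exact layout is not taken
from the patch:

/* pimd side: prefix the (S,G) with the address family so zebra can
 * return either IPv4 or IPv6 statistics. */
stream_putl(s, family);                 /* AF_INET or AF_INET6 */
stream_put(s, &sg.src, sizeof(sg.src));
stream_put(s, &sg.grp, sizeof(sg.grp));

/* zebra side: read the family back before decoding the addresses. */
family = stream_getl(s);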
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Modify the mcast_route_data structure to store the IPv4/IPv6 address and
the lastused multicast information from the kernel.
Adjust the related APIs to parse IPv4/IPv6 information.
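A sketch of the reworked structure, assuming FRR's `struct ipaddr` is used
to hold either address family (field names are illustrative):

struct mcast_route_data {
        struct ipaddr src;              /* IPv4 or IPv6 source */
        struct ipaddr grp;              /* IPv4 or IPv6 group */
        unsigned long long lastused;    /* from the kernel cache entry */
};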
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Change this API call to use a `struct ipaddr`, which encodes the type of
IP address along with it. (And drop the `IPV4` from the command name
accordingly.)
Also add a comment explaining that this function call is going to be
obsolete in the long run since pimd needs to move to proper MRIB NHT.
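For reference, `struct ipaddr` (lib/ipaddr.h) carries the address family
alongside the address, which is what makes the call family-agnostic:

struct ipaddr addr = {};

addr.ipa_type = IPADDR_V6;       /* or IPADDR_V4 */
addr.ipaddr_v6 = some_in6_addr;  /* union member for the chosen family */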
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Add initial zebra tracepoint support infrastructure, as well as a
frr_zebra:netlink_interface callback.
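For context, FRR tracepoints are LTTng-UST tracepoints; a minimal sketch of
what a frr_zebra:netlink_interface definition can look like (the traced
fields here are assumptions, not necessarily the ones added):

TRACEPOINT_EVENT(
        frr_zebra,
        netlink_interface,
        TP_ARGS(int, ifindex, const char *, ifname),
        TP_FIELDS(
                ctf_integer(int, ifindex, ifindex)
                ctf_string(ifname, ifname)
        )
)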
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When an interface has multiple addresses, zebra creates a single
connected route for them.
The issue is that the rib entry is not created if the addresses were
added before the interface was running.
In the typical flow, an address is added to an already running
interface, so the route and the rib entry are created as part of a
single ADD event.
In the opposite case, the route entries are created without being
activated, yet they are still considered active because ZEBRA_IFC_DOWN
is not set.
On the following interface UP, the same ADDR_ADD is ignored because it
overlaps with the existing prefixes, so the rib entry is never created.
A minimal setup to reproduce the issue:
-----------------------------------------
ip link add name dummy0 type dummy
ip addr flush dev dummy0
ip link set dummy0 down
ip addr add 192.168.1.7/24 dev dummy0
ip addr add 192.168.1.8/24 dev dummy0
ip link set dummy0 up
vtysh -c 'show ip route' | grep dummy0
Signed-off-by: Volodymyr Huti <v.huti@vyos.io>
When `zebra nexthop proto only` is turned on, operators are seeing:
Mar 28 07:19:37 kingpin zebra[418]: [TZANK-DEMSE] netlink_nexthop_msg_encode: nhg_id 68 (zebra): proto-based nexthops only, ignoring
Mar 28 07:19:37 kingpin zebra[418]: [TZANK-DEMSE] netlink_nexthop_msg_encode: nhg_id 68 (zebra): proto-based nexthops only, ignoring
Mar 28 07:19:37 kingpin zebra[418]: [YXPF5-B2CE0] netlink_route_multipath_msg_encode: RTM_DELROUTE 2804:4d48:4000::/42 vrf 0(254)
Mar 28 07:19:37 kingpin zebra[418]: [YXPF5-B2CE0] netlink_route_multipath_msg_encode: RTM_NEWROUTE 2804:4d48:4000::/42 vrf 0(254)
Mar 28 07:19:37 kingpin zebra[418]: [TVM3E-A8ZAG] _netlink_route_build_singlepath: (single-path): 2804:4d48:4000::/42 nexthop via fe80::b6fb:e4ff:fe26:c5d5 if 2 vrf default(0)
Mar 28 07:19:37 kingpin zebra[418]: [HYEHE-CQZ9G] nl_batch_send: netlink-dp (NS 0), batch size=140, msg cnt=2
Mar 28 07:19:37 kingpin zebra[418]: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (68) into the kernel
Effectively, zebra intentionally does not do the nexthop group
installation, but the dplane notification handling in zebra_nhg.c
assumes it was a failure and prints an error message. Since skipping
the installation was intentional, recognize that and do not report it
as a failure.
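A hedged sketch of the kind of check this implies in the dplane result
handling; the helper names are assumptions based on the message, not
necessarily the ones in the actual patch:

/* If zebra is configured for proto-based nexthops only, skipping the
 * kernel installation of a zebra-owned NHG was intentional; don't log
 * it as an installation failure. */
if (zebra_nhg_proto_nexthops_only() && !PROTO_OWNED(nhe))
        return;

flog_err(EC_ZEBRA_DP_INSTALL_FAIL,
         "Failed to install Nexthop ID (%u) into the kernel", nhe->id);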
Signed-off-by: Donald Sharp <sharpd@nvidia.com>