In places where we do a pim_ecmp_nexthop_search, also
use pim_ecmp_nexthop_lookup instead of the single-path
pim_nexthop_lookup.
This is in preparation for more serious surgery to fix
the weird API of pim_find_or_track_nexthop.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Both pim_ecmp_nexthop_lookup and pim_ecmp_fib_lookup_if_vif_index
are passed the address twice. Make the function calls consistent
and just pass in the src once.
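Roughly the shape of the calling convention after the change, as a
sketch (the names and types here are illustrative, not the actual
FRR signatures): the lookup takes the source address a single time
and derives any other representation it needs internally.

    /* Illustrative only: accept the source once rather than in two
     * forms, converting inside the function as needed. */
    #include <netinet/in.h>

    struct lookup_args {
        struct in_addr src;   /* passed in exactly once */
        struct in_addr grp;
    };

    static int do_lookup(const struct lookup_args *args)
    {
        /* ... would hand args->src to zebra exactly once ... */
        return args->src.s_addr != 0 ? 0 : -1;
    }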
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The pim_ecmp_fib_lookup_if_vif_index function does practically
the same work as pim_ecmp_nexthop_lookup. Refactor it to
use that function so that we do not have yet more code
that must parse the results from zclient_lookup_nexthop.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When doing nexthop lookups, do not permanently allocate
memory in zebra and pim to track the nexthop specified
on the CLI.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we are looking up an RPF with an ECMP path, there
are situations where we fail to find a path change
because we were not considering the actual number of neighbors
available to us at the start of the loop.
Example:
Suppose 2-way ECMP with a neighbor on each path, and
multiple upstreams strewn across both paths.
If we lose the PIM neighbor on one of the paths, we
initiate a rescan of the upstreams. If the neighbor
we lost happened to be on the last ECMP path we rescanned,
we would not successfully find a new path and would leave
the upstream stranded.
This code change looks at the number of available neighbors
we have -vs- the number of paths we have and chooses
the smaller of the two when figuring out what to do.
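Roughly the idea, as a sketch (function and variable names are
illustrative, not the actual pim code): cap the number of candidate
entries by the live neighbor count before walking the paths.

    /* Illustrative only: the number of ECMP entries worth walking is
     * bounded by both the path count and the number of usable PIM
     * neighbors, so a rescan after losing a neighbor can still settle
     * on one of the remaining paths. */
    static int usable_path_count(int num_paths, int num_neighbors)
    {
        return (num_neighbors < num_paths) ? num_neighbors : num_paths;
    }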
There probably exist other failure scenarios that I am
missing here, and quite frankly the current code muddies
the water between an RPF lookup that failed -vs- an RPF lookup
that succeeded but returned no paths. Further work is needed here imo.
Additionally, this idea of having both pim_ecmp_nexthop_lookup and
pim_ecmp_nexthop_search is bogus. They are the same function and
should be merged at some point in time.
Ticket: CM-21599
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
There is no reason that an IGMP src should need an upstream
PIM neighbor when doing an RPF lookup.
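A rough sketch of the idea (the flag and struct names are
illustrative assumptions, not the exact FRR API): lookups triggered
by an IGMP source simply drop the requirement that a PIM neighbor
exist on the chosen interface.

    /* Illustrative only: the RPF check takes a flag saying whether a
     * PIM neighbor on the outgoing interface is required; lookups for
     * an IGMP source would pass false. */
    #include <stdbool.h>

    struct rpf_result {
        int ifindex;
        bool have_neighbor;
    };

    static bool rpf_usable(const struct rpf_result *r, bool neighbor_needed)
    {
        if (neighbor_needed && !r->have_neighbor)
            return false;
        return r->ifindex > 0;
    }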
Ticket: CM-21599
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Sometimes the file under /var/run/netns may not be readable
(for instance, because the frr user has no read permission on it),
so it is good to know what happened.
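Something along these lines, as a sketch (the logging call and
wording are illustrative; the real change uses FRR's own logging
helpers): report errno when the netns file cannot be opened instead
of failing silently.

    /* Illustrative only: surface the reason (e.g. EACCES when the frr
     * user lacks read permission) when the open fails. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    static int ns_open_reporting(const char *path)
    {
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            fprintf(stderr, "NS %s: failed to open: %s\n",
                    path, strerror(errno));
        return fd;
    }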
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This commit fixes two issues:
- memcpy() using containers of different sizes when using addr2sa(), mixing
'struct sockaddr_storage' and 'union sockunion'.
- addr2sa() not being thread-safe (it used a local static variable as
the container).
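A minimal sketch of the reentrant shape (the signature below is an
assumption for illustration, not necessarily the final FRR API): the
caller owns the storage, so there is no shared static buffer, and
each copy is sized to the container actually in use.

    /* Illustrative only: write into caller-provided sockaddr_storage
     * instead of a function-local static, and copy only the size of
     * the address family actually being converted. */
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static struct sockaddr *addr_to_sa(int family, const void *addr,
                                       in_port_t port,
                                       struct sockaddr_storage *out)
    {
        memset(out, 0, sizeof(*out));

        if (family == AF_INET) {
            struct sockaddr_in *sin = (struct sockaddr_in *)out;

            sin->sin_family = AF_INET;
            sin->sin_port = port;
            memcpy(&sin->sin_addr, addr, sizeof(sin->sin_addr));
        } else {
            struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)out;

            sin6->sin6_family = AF_INET6;
            sin6->sin6_port = port;
            memcpy(&sin6->sin6_addr, addr, sizeof(sin6->sin6_addr));
        }
        return (struct sockaddr *)out;
    }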
Signed-off-by: F. Aragon <paco@voltanet.io>
* Fix broken citations
* Remove trailing whitespace
* Rewrap to 80 columns
* Tweak capitalization of section headers
* Clean up a few indented blocks
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
In the symmetric routing case, the L3VNI stores EVPN MAC_IP routes
as IP_PREFIX routes in the associated bgp_vrf and the FIB.
When the VxLAN device for the L3VNI goes down, this triggers an
L3VNI delete in bgp.
As part of the L3VNI delete, the EVPN IP prefix routes associated
with the VNI need to be withdrawn from zebra, and the bgp_info
needs to be freed.
bgp_delete does not free the bgp_info associated
with EVPN IP prefix routes (linked to the bgp_vrf).
Calling uninstall_evpn_route_entry_in_vrf() properly
cleans up the bgp_info and triggers the appropriate updates.
Ticket: CM-21443
Testing Done:
On the DUT, bring up a symmetric routing configuration and learn
EVPN Type-2 and Type-3 routes.
Type-2 MAC_IP routes are stored as ip_prefix routes in the VRF table
during L3VNI bring-up.
Removing the L3VNI deletes all ip_prefix routes from the zebra and
kernel VRF route tables, and the bgp_info is freed.
Check the show bgp memory stats for bgp_info after the L3VNI flap.
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
While ZAPI I/O threads make a best effort to kill any scheduled tasks on
their threadmasters, after death another pthread can continue to
schedule onto the threadmaster. This isn't a problem per se since the
tasks will never run, but it also means that asserting that it hasn't
happened is pointless.
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
The warning given by PVS-Studio is related to per-element overflow (there is
no real overflow, because of how elements are mapped in the union). This
same warning is typically reported by Coverity, too.
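A small illustration of the pattern that trips these analyzers (the
types below are simplified stand-ins, not the FRR sockunion
definition): copying sizeof(union) bytes through a pointer to one
member looks like a per-element overflow, while copying
union-to-union keeps the sizes consistent.

    /* Illustrative only: 'sa' is smaller than the union, so a memcpy
     * of the whole union through &u->sa gets flagged even though the
     * underlying storage is large enough. */
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    union su_example {
        struct sockaddr sa;
        struct sockaddr_in sin;
        struct sockaddr_in6 sin6;
    };

    static void su_copy(union su_example *dst, const union su_example *src)
    {
        /* Flagged: destination element is smaller than the copy size.
         * memcpy(&dst->sa, &src->sa, sizeof(*src));
         */

        /* Clean: both sides are the full union. */
        memcpy(dst, src, sizeof(*dst));
    }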
Signed-off-by: F. Aragon <paco@voltanet.io>