The RMAC now keeps a list of nexthops to track its existence;
remove the old host-prefix mapping.
Ticket: #2798406
Reviewed By:
Testing Done:
TORS1# show evpn rmac vni 4001 mac 44:38:39:ff:ff:01
MAC: 44:38:39:ff:ff:01
Remote VTEP: 36.0.0.11
Refcount: 0
Prefixes:
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Keep the list of remote VTEPs/nexthops in the
zebra RMAC db.
Problem:
In a CLAG deployment there can be a situation
where the CLAG secondary sends an individual IP as the nexthop
along with the anycast MAC as the RMAC. This combination
is recorded in zebra's RMAC cache.
Upon recovery, the CLAG secondary sends a withdrawal
of the incorrect RMAC-to-nexthop mapping, but the
RMAC entry's mapping to the nexthop is not cleaned up
properly in the zebra RMAC cache.
Fix:
The zebra RMAC db needs to maintain a list of nexthops.
When a BGP withdrawal for an RMAC-to-nexthop mapping
is received, remove the old nexthop from the RMAC's nexthop
list; if a host reference still remains for the RMAC,
fall back to the nexthop remaining in the list.
At most two nexthops are expected to map to an RMAC
(in a CLAG deployment).
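A minimal sketch of the idea (structure and function names here are
illustrative, not the actual zebra code):

    #include <netinet/in.h>
    #include <stdbool.h>

    #define MAX_RMAC_NH 2 /* at most two nexthops expected (CLAG pair) */

    struct rmac_entry {
            unsigned char macaddr[6];
            struct in_addr nh_list[MAX_RMAC_NH]; /* remote VTEP IPs */
            int nh_count;
            unsigned int refcnt; /* host references to this RMAC */
    };

    /* On a BGP withdraw of an rmac->nh mapping, drop that nexthop
     * from the list; returns true when hosts still reference the
     * RMAC and a fallback nexthop remains. */
    static bool rmac_nh_del(struct rmac_entry *rmac, struct in_addr nh)
    {
            for (int i = 0; i < rmac->nh_count; i++) {
                    if (rmac->nh_list[i].s_addr != nh.s_addr)
                            continue;
                    /* swap-remove the withdrawn nexthop */
                    rmac->nh_list[i] = rmac->nh_list[--rmac->nh_count];
                    return rmac->nh_count > 0 && rmac->refcnt > 0;
            }
            return false;
    }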
Ticket: 2798406
Reviewed By:
Testing Done:
CLAG primary and secondary have advertise-pip enabled and
advertise a type-5 route (default route) with an
individual IP as the nexthop and the individual SVI MAC as the RMAC.
- Disable advertise-pip on both CLAG devices; this
results in advertisement of routes with the anycast IP as the nexthop
and the anycast MAC as the RMAC.
- Disable the peerlink on the CLAG primary; this triggers
the CLAG secondary to (transiently) send a BGP update with an
individual IP as the nexthop and the anycast MAC as the RMAC.
- At the remote VTEP:
check that zebra's RMAC cache/nexthop mapping is correct
and points to the anycast RMAC, with the anycast IP of the
CLAG system as the nexthop.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
IPv4 uses "struct ip" and IPv6 uses "struct ip6_hdr" as
headers. Extract the src and dst addresses into a pim_sgaddr;
the new API pim_sgaddr_from_iphdr does so.
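A rough sketch of such a helper for the IPv4 case, assuming pim_sgaddr
carries grp/src fields (in FRR, pim_addr is typedef'd per address
family, so the real helper is compiled once per AF):

    #include <netinet/ip.h>

    typedef struct in_addr pim_addr; /* IPv4 build */

    typedef struct pim_sgaddr {
            pim_addr grp;
            pim_addr src;
    } pim_sgaddr;

    static void pim_sgaddr_from_iphdr(const void *iphdr, pim_sgaddr *sg)
    {
            const struct ip *ip = iphdr;

            sg->src = ip->ip_src; /* packet source */
            sg->grp = ip->ip_dst; /* multicast group (destination) */
    }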
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Constraints on host routes are too strict in the current code:
host routes whose destination address equals their nexthop address
are forbidden even across VRFs.
Currently, host routes with different destination and nexthop addresses
can cross VRFs, which is fine. But host routes with identical destination
and nexthop addresses are forbidden from crossing VRFs, which is wrong.
Since different VRFs can carry the same addresses, leaking a specific
host route whose nexthop address equals its destination address
into other VRFs is a normal case.
This commit relaxes the constraint: host routes with the same destination
and nexthop address are forbidden only within a single VRF.
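A minimal sketch of the relaxed check (names hypothetical):

    #include <netinet/in.h>
    #include <stdbool.h>

    /* A host route whose destination equals its nexthop is rejected
     * only when both live in the same VRF; leaking such a route into
     * another VRF is legitimate. */
    static bool host_route_allowed(struct in_addr dest, struct in_addr nh,
                                   unsigned int dest_vrf_id,
                                   unsigned int nh_vrf_id)
    {
            if (dest.s_addr == nh.s_addr && dest_vrf_id == nh_vrf_id)
                    return false; /* self-referencing within one VRF */
            return true;
    }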
Signed-off-by: anlan_cs <vic.lan@pica8.com>
When an interface goes down, it signals any related NHGs to
re-validate themselves. During zebra shutdown, ensure we remove
any NHGs we've installed.
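A hypothetical sketch of the shutdown pass (structure and hook names
are illustrative; the real zebra code walks its NHG hash table and
issues the removals through the dataplane):

    #include <stdbool.h>

    struct nhg_entry {
            unsigned int id;
            bool installed;
            struct nhg_entry *next;
    };

    void dplane_nhg_uninstall(struct nhg_entry *nhg); /* hypothetical hook */

    /* On shutdown, uninstall every NHG we pushed to the kernel so
     * nothing is left dangling after zebra exits. */
    static void nhg_shutdown_cleanup(struct nhg_entry *head)
    {
            for (struct nhg_entry *nhg = head; nhg; nhg = nhg->next)
                    if (nhg->installed)
                            dplane_nhg_uninstall(nhg);
    }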
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Added this API to fill the all-multicast group prefix based on the
IP version: for PIMv4 it is 224.0.0.0/4, for PIMv6 it is FF00::/8.
Changed the code where it is currently used.
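A sketch of the idea with simplified types (the real API fills FRR's
prefix structure):

    #include <arpa/inet.h>

    struct mcast_prefix {
            int family;
            unsigned char prefixlen;
            union {
                    struct in_addr v4;
                    struct in6_addr v6;
            } u;
    };

    /* Fill the all-multicast prefix for the given address family:
     * 224.0.0.0/4 for PIMv4, ff00::/8 for PIMv6. */
    static int fill_all_mcast_prefix(int family, struct mcast_prefix *p)
    {
            p->family = family;
            if (family == AF_INET) {
                    p->prefixlen = 4;
                    return inet_pton(AF_INET, "224.0.0.0", &p->u.v4);
            }
            p->prefixlen = 8;
            return inet_pton(AF_INET6, "ff00::", &p->u.v6);
    }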
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Pass pim_addr as the parameter for the RP address to accommodate IPv6.
Modify the pim_rp_cmd_worker and pim_no_rp_cmd_worker function
parameters from in_addr to pim_addr.
The caller functions are changed as well to make this work
for IPv6.
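A simplified sketch of the signature change (parameter lists here are
illustrative; pim_addr is typedef'd per address family, so one worker
body serves both):

    #include <netinet/in.h>

    #if PIM_IPV == 6
    typedef struct in6_addr pim_addr;
    #else
    typedef struct in_addr pim_addr;
    #endif

    struct pim_instance; /* opaque here */

    /* before: took struct in_addr, IPv4 only */
    int pim_rp_cmd_worker(struct pim_instance *pim, pim_addr rp_addr,
                          const char *plist);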
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
1. The return value of pim_rp_del_config is not used anywhere,
so it is made a void function.
2. The parameter const char *rp is first converted from prefix to
string in the caller and then back to a prefix inside pim_rp_del_config.
Fixed by passing the address directly instead of a string.
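A sketch of the round-trip being removed (function and variable names
illustrative):

    #include <arpa/inet.h>

    /* before: the caller stringified the prefix, the callee parsed it
     * back into an address */
    static void del_config_old(const char *rp)
    {
            struct in_addr addr;

            inet_pton(AF_INET, rp, &addr); /* redundant re-parse */
            /* ... delete RP config for addr ... */
    }

    /* after: the already-parsed address is passed directly */
    static void del_config_new(struct in_addr rp_addr)
    {
            /* ... delete RP config for rp_addr ... */
            (void)rp_addr;
    }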
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
1. Move the processing of the above command to an API.
2. Change DEFUN to DEFPY (sketched below).
3. Make the API common to PIMv4 and PIMv6 processing.
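A hedged illustration of the DEFPY shape (the command string, names,
and default-group handling here are examples, not necessarily the
exact code); DEFPY parses and types the arguments itself, so the
handler reduces to a call into the common API:

    DEFPY (ip_pim_rp,
           ip_pim_rp_cmd,
           "ip pim rp A.B.C.D$rp [A.B.C.D/M]$gp",
           IP_STR
           "pim multicast routing\n"
           "Rendezvous Point\n"
           "ip address of RP\n"
           "Group Address range to cover\n")
    {
            /* gp_str is NULL when the optional group is omitted */
            return pim_process_rp_cmd(vty, rp_str,
                                      gp_str ? gp_str : "224.0.0.0/4");
    }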
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Modify the APIs pim_process_rp_cmd and pim_process_no_rp_cmd
to accommodate IPv4 as well as IPv6.
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The pattern defined for ipv6-multicast-group-prefix is wrong,
leading to a mismatch for all valid IPv6 multicast
addresses.
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The vtysh live logs don't try to buffer messages when vtysh isn't
reading them fast enough. Either the kernel has space and can accept
messages without delay, or it doesn't and we continue on.
While this is intentional (otherwise a slow vtysh could block a routing
daemon), at least give the user an indication if messages were dropped.
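A conceptual sketch of the behavior (not the actual vtysh/zlog code):
attempt a non-blocking write and, when the kernel socket buffer is
full, count the message as dropped so the user can be told later:

    #include <errno.h>
    #include <stddef.h>
    #include <sys/socket.h>

    static unsigned long live_log_dropped;

    static void live_log_send(int fd, const void *msg, size_t len)
    {
            if (send(fd, msg, len, MSG_DONTWAIT) < 0 &&
                (errno == EAGAIN || errno == EWOULDBLOCK))
                    live_log_dropped++; /* surfaced to the user later */
    }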
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
This provides direct raw log output with full metadata directly at
startup regardless of configuration details.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>