Move the common MLD/IGMP deletion code into the API pim_gm_interface_delete.
The IPv6 deletion code (gm_ifp_teardown) for MLD was missing in this flow;
making the code common fixes that as well.
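A rough sketch of the shape of the common helper (not the exact diff; the gm_enable field, the signature, and the IGMP-side call are assumptions based on pimd's existing IPv4/IPv6 split):

/* Sketch only: one deletion path shared by IGMP (PIM_IPV == 4) and
 * MLD (PIM_IPV == 6), so the IPv6 gm_ifp_teardown() is no longer skipped.
 */
void pim_gm_interface_delete(struct interface *ifp)
{
        struct pim_interface *pim_ifp = ifp->info;

        if (!pim_ifp)
                return;

        pim_ifp->gm_enable = false;     /* assumed field name */
        pim_if_membership_clear(ifp);

#if PIM_IPV == 4
        igmp_sock_delete_all(ifp);      /* existing IGMP teardown */
#else
        gm_ifp_teardown(ifp);           /* the MLD teardown that was missing */
#endif
}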
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Rename pim_cmd_interface_delete to pim_pim_interface_delete
and move the API to pimd/pim_iface.c.
Also change the return type of the API from int to void.
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Move the pim_if_membership_clear API from pimd/pim_nb_config.c
to pimd/pim_iface.c.
Also fix the checkpatch curly-braces warning:
WARNING: braces {} are not necessary for single statement blocks
1773: FILE: /tmp/f1-127504/pim_iface.c:1773:
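For reference, the style checkpatch flags and the fix look roughly like this (hypothetical condition and call, not the actual line 1773):

/* before: braces around a single-statement block */
if (pim_ifp->pim_enable) {
        pim_if_addr_add_all(ifp);
}

/* after: braces dropped */
if (pim_ifp->pim_enable)
        pim_if_addr_add_all(ifp);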
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Receiver---LHR---RP
Problem:
On the LHR, IPv6 PIM state remains after an MLD prune is received.
Root Cause:
When the LHR receives a join, it creates the (*,G) channel oil with
oil_ref_count = 2: the channel_oil is referenced by both gm_sg sg->oil
and upstream->channel_oil.
When the LHR receives a prune, currently only upstream->channel_oil is
deleted while gm_sg sg->oil is still present. Because of this, the
channel_oil remains with oil_ref_count = 1.
Fix:
When the LHR receives a prune, both upstream->channel_oil and
gm_sg sg->oil need to be deleted.
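A minimal sketch of the idea; the helper name is hypothetical and the exact hook point in the MLD prune path is simplified, with pim_channel_oil_del() assumed to be pimd's refcounted release:

/* Sketch only: both holders of the (*,G) channel oil must drop their
 * reference on prune, otherwise oil_ref_count stays at 1 and the entry
 * (and its mroute) lingers.
 */
static void lhr_release_oil_on_prune(struct gm_sg *sg,
                                     struct pim_upstream *up)
{
        /* upstream side: this release already happened before the fix */
        if (up->channel_oil) {
                pim_channel_oil_del(up->channel_oil, __func__);
                up->channel_oil = NULL;
        }

        /* gm_sg side: the missing release this change adds */
        if (sg->oil) {
                pim_channel_oil_del(sg->oil, __func__);
                sg->oil = NULL;
        }
}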
Issue: #11249
Signed-off-by: Sarita Patra <saritap@vmware.com>
Currently, in the PIM northbound, when a path to the RP is not found during config apply, we treat this as NB_ERR_INCONSISTENCY.
However, there are two issues with this approach:
1. Once OSPF or IGP convergence completes, it is possible that the RPF check will succeed, so the failure is only transient.
2. If we have multiple groups and RPs (e.g. 50 RPs), we will receive 50 logs with inconsistency errors.
Example:
2023/05/27 22:57:45 PIM: [G822R-SBMNH] config-from-file# ip pim rp 192.168.100.1 239.100.0.0/28
2023/05/27 22:57:45 PIM: [VAKV3-NMY7B][EC 100663337] error processing configuration change: error [internal inconsistency] event [apply]
operation [create] xpath [/frr-routing:routing/control-plane-protocols/control-plane-protocol[type='frr-pim:pimd']
[name='pim'][vrf='default']/frr-pim:pim/address-family[address-family='frr-routing:ipv4']/frr-pim-rp:rp/static-rp/rp-list
[rp-address='192.168.100.1']/group-list[.='239.100.0.0/28']] message: No Path to RP address specified: 192.168.100.1
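One way to make this less noisy is sketched below. This is the general approach, not necessarily the literal change here; the callback name, the RPF-check variable, and the parsing step are placeholders:

/* Placeholder names: do not fail the whole config transaction just
 * because the RPF lookup for the RP has not converged yet.
 */
static int pim_rp_group_list_create(struct nb_cb_create_args *args)
{
        if (args->event != NB_EV_APPLY)
                return NB_OK;

        pim_addr rp_addr = PIMADDR_ANY; /* parsed from args->dnode in real code */
        bool have_path = false;         /* placeholder for the RPF lookup result */

        if (!have_path)
                /* previously: return NB_ERR_INCONSISTENCY; */
                zlog_warn("No path to RP address %pPA yet; RPF will be re-evaluated after IGP convergence",
                          &rp_addr);

        return NB_OK;
}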
Issue: #13620
Signed-off-by: Rajesh Varatharaj <rvaratharaj@nvidia.com>
Problem:
Executing the below commands makes pim6d core:
interface ens193
ip address 69.0.0.2/24
ipv6 address 8000::1/120
ipv6 mld
ipv6 pim
The crash is seen only when the interface is not configured and
PIM/MLD commands are executed on it.
Root Cause:
Interface ens193 is not configured, so it has
ifindex = 0 and mroute_vif_index = -1.
Currently, we don't enable MLD on an interface if
mroute_vif_index < 0, so pim_ifp->MLD = NULL.
In pim_if_membership_refresh(), we dereference the NULL
pim_ifp->MLD pointer, which leads to the crash.
Fix:
Add a NULL check before accessing the pim_ifp->MLD pointer in
pim_if_membership_refresh().
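The fix is essentially an early-return guard; roughly (excerpt-style sketch inside pim_if_membership_refresh(), the exact field spelling in struct pim_interface may differ):

        struct pim_interface *pim_ifp = ifp->info;

        /* If the interface only exists in the config (ifindex 0,
         * mroute_vif_index -1), MLD was never enabled and the MLD state
         * pointer is NULL: bail out instead of dereferencing it.
         */
        if (!pim_ifp->MLD)
                return;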
Issue: #13385
Signed-off-by: Sarita Patra <saritap@vmware.com>
When entering some show commands that use json in pimd,
if the interface cannot be found, do not output non-json
format in that case.
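Roughly the pattern (not the literal diff), assuming json is the json_object allocated earlier in the command when JSON output was requested:

        ifp = if_lookup_by_name(ifname, vrf->vrf_id);
        if (!ifp) {
                if (json)
                        vty_json(vty, json);    /* keep the output valid JSON */
                else
                        vty_out(vty, "%% No such interface name %s\n",
                                ifname);
                return CMD_WARNING;
        }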
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Some interfaces are special: they have the same `ifindex` as pimreg.
Use a macro for the `ifindex` of pimreg,
and adjust the log message.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
`setsockopt()` with `MRT_TABLE` should only be called once, in the
"enable" case; otherwise it will fail. In the current code, the
`mroute_socket` of a VRF-bound pim instance can't be correctly
closed.
Skip the call in the "disable" case so that the `mroute_socket` can
be closed safely.
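A simplified, IPv4-only sketch of the intended control flow (not the exact FRR function; field names follow pimd's pim_instance/vrf structures as an assumption):

/* MRT_TABLE may only be issued once, on a fresh socket and before
 * MRT_INIT, so it is restricted to the "enable" case and skipped
 * entirely on "disable".
 */
static int pim_mroute_set(struct pim_instance *pim, int enable)
{
        int opt = enable ? MRT_INIT : MRT_DONE;
        int val = enable ? 1 : 0;

        if (enable && pim->vrf->vrf_id != VRF_DEFAULT) {
                uint32_t table_id = pim->vrf->data.l.table_id;

                if (setsockopt(pim->mroute_socket, IPPROTO_IP, MRT_TABLE,
                               &table_id, sizeof(table_id)) < 0)
                        return -1;
        }

        if (setsockopt(pim->mroute_socket, IPPROTO_IP, opt, &val,
                       sizeof(val)) < 0)
                return -1;

        return 0;
}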
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Coverity complains about these being tainted/untrusted loop boundaries.
The way the code works, it's counting up groups/sources, but keeps
checking against remaining data length in the packet - which is
perfectly fine IMHO. Except Coverity doesn't understand it :(
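The pattern in question looks roughly like this (variable names are illustrative, not the actual parser): the group/source count comes from the wire, but the loop also checks the remaining packet length each iteration, so a bogus count cannot walk past the buffer.

        for (i = 0; i < n_sources; i++) {
                if (remaining < (ssize_t)sizeof(pim_addr))
                        break;  /* packet shorter than advertised: stop */

                /* ... consume one source address ... */
                remaining -= sizeof(pim_addr);
        }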
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Coverity shows a path where the pim instance may
be NULL. In this code path, if we have no pim
vrf there is nothing to do anyway, so just return.
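The change amounts to an early return, along these lines (the lookup call is assumed from pimd's instance API; treat it as a sketch):

        pim = pim_get_pim_instance(vrf->vrf_id);
        if (!pim)
                /* no pim instance for this vrf: nothing to do */
                return;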
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently the function `pim_parse_nexthop_update()` clearly
updates nexthop interfaces even when they are PIM-disabled.
Just correct the wrong comments about those PIM-disabled interfaces
to avoid misleading readers.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
ZEBRA_IMPORT_CHECK_UPDATE has been gone for more than a year; remove
some leftover dead references to it.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
1. Added interface name, group address and detail option to the existing
"show ip igmp groups" command so that the user can retrieve all the groups
or a particular group for an interface. The detail option shows the source
information for the group. With that, the show command
looks like:
"show ip igmp [vrf NAME$vrf_name] groups [INTERFACE$ifname [GROUP$grp_str]] [detail$detail] [json$json]"
2. Changed pim_cmd_lookup_vrf() to return empty JSON if the VRF is not present.
3. Changed the "detail" option to print non-pretty JSON.
4. Added interface name and group address to the existing
"show ip igmp sources" command so that the user can retrieve all the sources for
all the groups, or all the sources for a particular group on an
interface. With that, the show command looks like:
"show ip igmp [vrf NAME$vrf_name] sources [INTERFACE$ifname [GROUP$grp_str]] [json$json]"
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
Topology Used:
=============
Cisco---FRR4----FRR2
Initially the PIM neighbor between FRR4----FRR2 is down from the FRR2 side.
Cisco is sending BSR packets to FRR4.
Problem Statement:
=================
Do a "no shutdown" of the PIM neighbor interface on FRR2 towards FRR4.
FRR2 receives a BSR packet immediately as the new neighbor
comes up. This BSR packet has the no-forward bit set.
FRR2 is not able to process the BSR packet and drops it.
Root Cause:
==========
When pimd comes up, we start the BSM timer for 60 seconds.
Here, accept_nofwd_bsm is set to false.
When FRR2 receives the no-forward BSR packet, it finds
accept_nofwd_bsm set to false,
so it drops the no-forward BSM packet.
Fix:
===
Set accept_nofwd_bsm to false only after the first BSM packet is received.
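Sketched below is the intent (an excerpt-style sketch, not the literal diff; "scope" stands in for the BSM scope structure carrying accept_nofwd_bsm): the flag is no longer pre-cleared when the startup BSM timer is armed, but only after a BSM has actually been accepted.

        /* on receipt of a BSM */
        if (no_fwd && !scope->accept_nofwd_bsm) {
                /* old behaviour hit this branch right after the neighbor
                 * came up, because accept_nofwd_bsm had already been
                 * cleared when the startup timer was started
                 */
                return -1;
        }

        /* ... validate and install the BSM ... */

        /* only after the first BSM has been processed do we stop
         * accepting no-forward BSMs
         */
        scope->accept_nofwd_bsm = false;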
Signed-off-by: Sarita Patra <saritap@vmware.com>
Effectively a massive search and replace of
`struct thread` to `struct event`. Using the
term `thread` gives people the impression that
this event system is a pthread when it is not.
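For illustration, on a generic callback (not a specific pimd function), the mechanical change per declaration is just:

/* before */
static void on_sock_read(struct thread *t);

/* after */
static void on_sock_read(struct event *t);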
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This is the first in a series of commits whose goal is to rename
the thread system in FRR to an event system. There is a continual
problem where people confuse `struct thread` with a true
pthread. In reality, our entire thread.c is an event system.
In this commit, rename the thread.[ch] files to event.[ch].
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After doing "no ipv6 pim" and "ipv6 pim" on the receiver interface of the LHR,
mroutes were not created.
When we enable pim after disabling it, we refresh the membership on the pim interface:
first we clear the membership on the interface, then we add it back.
For PIMv6 we were only clearing it; the membership was not added back, so mroutes were not created.
Now add code to fetch all the local membership information back into PIM.
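Very roughly, the added step looks like the sketch below; the function name, iteration helper, and field names are hypothetical stand-ins for pimd's MLD (gm) state, not the actual API:

/* hypothetical sketch: after pim_if_membership_clear(), re-inject the
 * MLD-learned memberships so PIM rebuilds its state and the mroutes
 * get re-created
 */
static void pim6_membership_readd(struct interface *ifp)
{
        struct pim_interface *pim_ifp = ifp->info;
        struct gm_sg *sg;

        if (!pim_ifp || !pim_ifp->mld)
                return;

        /* walk the (S,G)/(*,G) entries learned via MLD on this interface
         * and feed each one back into PIM membership (names are placeholders)
         */
        frr_each (gm_sgs, pim_ifp->mld->sgs, sg)
                pim_if_membership_add(ifp, sg->sgaddr.src, sg->sgaddr.grp);
}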
Fixes: #12005, #12820
Signed-off-by: Abhishek N R <abnr@vmware.com>
We have this valgrind trace:
==1125== Invalid read of size 4
==1125== at 0x170A7D: pim_if_delete (pim_iface.c:203)
==1125== by 0x170C01: pim_if_terminate (pim_iface.c:80)
==1125== by 0x174F34: pim_instance_terminate (pim_instance.c:68)
==1125== by 0x17535B: pim_vrf_terminate (pim_instance.c:260)
==1125== by 0x1941CF: pim_terminate (pimd.c:161)
==1125== by 0x1B476D: pim_sigint (pim_signals.c:44)
==1125== by 0x4910C22: frr_sigevent_process (sigevent.c:133)
==1125== by 0x49220A4: thread_fetch (thread.c:1777)
==1125== by 0x48DC8E2: frr_run (libfrr.c:1222)
==1125== by 0x15E12A: main (pim_main.c:176)
==1125== Address 0x6274d28 is 1,192 bytes inside a block of size 1,752 free'd
==1125== at 0x48369AB: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1125== by 0x174FF1: pim_vrf_delete (pim_instance.c:181)
==1125== by 0x4925480: vrf_delete (vrf.c:264)
==1125== by 0x4925480: vrf_delete (vrf.c:238)
==1125== by 0x49332C7: zclient_vrf_delete (zclient.c:2187)
==1125== by 0x4934319: zclient_read (zclient.c:4003)
==1125== by 0x492249C: thread_call (thread.c:2008)
==1125== by 0x48DC8D7: frr_run (libfrr.c:1223)
==1125== by 0x15E12A: main (pim_main.c:176)
==1125== Block was alloc'd at
==1125== at 0x4837B65: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1125== by 0x48E80AF: qcalloc (memory.c:116)
==1125== by 0x1750DA: pim_instance_init (pim_instance.c:90)
==1125== by 0x1750DA: pim_vrf_new (pim_instance.c:161)
==1125== by 0x4924FDC: vrf_get (vrf.c:183)
==1125== by 0x493334C: zclient_vrf_add (zclient.c:2157)
==1125== by 0x4934319: zclient_read (zclient.c:4003)
==1125== by 0x492249C: thread_call (thread.c:2008)
==1125== by 0x48DC8D7: frr_run (libfrr.c:1223)
==1125== by 0x15E12A: main (pim_main.c:176)
and you do this series of events:
a) Create a vrf, put an interface in it
b) Turn on pim on that interface and turn on pim in that vrf
c) Delete the vrf
d) Do anything with the interface, in this case shutdown the system
The move of the interface to a new vrf leaves the pim_ifp->pim pointer pointing
at the old pim instance, which was just deleted, so the instance pointer was freed.
Let's clean up the pim pointer in the interface structure as well.
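A sketch of the cleanup (the surrounding vrf teardown is simplified): when the pim instance of a vrf goes away, drop the back-pointer from each interface so later interface handling cannot dereference freed memory.

static int pim_vrf_delete(struct vrf *vrf)
{
        struct pim_instance *pim = vrf->info;
        struct interface *ifp;

        if (!pim)
                return 0;

        /* clear the dangling back-pointer before the instance is freed */
        FOR_ALL_INTERFACES (vrf, ifp) {
                struct pim_interface *pim_ifp = ifp->info;

                if (pim_ifp)
                        pim_ifp->pim = NULL;
        }

        pim_instance_terminate(pim);
        vrf->info = NULL;

        return 0;
}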
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add a hash_clean_and_free() function as well as convert
the code to use it. This function also takes a double
pointer to the hash so it can set it to NULL. It also cleanly
does nothing if the pointer is NULL (as a bunch of
code tested for).
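The helper presumably looks along these lines, a sketch built from the description above and the existing hash_clean()/hash_free() API:

void hash_clean_and_free(struct hash **hash, void (*free_func)(void *))
{
        if (!*hash)
                return;

        hash_clean(*hash, free_func);
        hash_free(*hash);

        *hash = NULL;
}

Call sites then collapse from the open-coded "check, hash_clean(), hash_free(), set to NULL" pattern to a single hash_clean_and_free(&x->foo_hash, free_fn).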
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After restarting pim6d, in some cases the ifindex is 0 for an interface,
so the vif index is also assigned as 0.
This causes the interface name to be reported as pim6reg.
Fix:
If the ifindex is 0 and the interface name is not "pimreg" or "pim6reg",
the function returns without assigning the vif index and logs an error message.
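The guard described above looks roughly like this (placement inside pimd's vif-index assignment path is simplified; the log text is illustrative):

        if (ifp->ifindex == 0 && strcmp(ifp->name, "pimreg") &&
            strcmp(ifp->name, "pim6reg")) {
                zlog_err("%s: not assigning vif index for %s: ifindex 0 is reserved for pimreg/pim6reg",
                         __func__, ifp->name);
                /* bail out without assigning a vif index */
                return;
        }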
Issue: #12744
Signed-off-by: Sai Gomathi N <nsaigomathi@vmware.com>