Changed PIM_DEBUG_IGMP_TRACE to PIM_DEBUG_GM_TRACE and
PIM_DEBUG_IGMP_TRACE_DETAIL to PIM_DEBUG_GM_TRACE_DETAIL.
Hence, these macros can be used for both v4 (IGMP) and v6 (MLD).
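A minimal sketch of what the renamed check macros might look like, assuming
pimd keeps its debug flags in a shared bitmask on the global router
structure (the struct, global, and bit values here are illustrative):

    #include <stdint.h>

    /* Hypothetical stand-ins for pimd's globals and mask bits. */
    struct pim_router { uint32_t debugs; };
    static struct pim_router *router;

    #define PIM_MASK_GM_TRACE        (1 << 4) /* illustrative bit */
    #define PIM_MASK_GM_TRACE_DETAIL (1 << 5) /* illustrative bit */

    /* The renamed check macros: one test now covers IGMP and MLD. */
    #define PIM_DEBUG_GM_TRACE        (router->debugs & PIM_MASK_GM_TRACE)
    #define PIM_DEBUG_GM_TRACE_DETAIL (router->debugs & PIM_MASK_GM_TRACE_DETAIL)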
Issue: #11895
Co-authored-by: Sai Gomathi N <nsaigomathi@vmware.com>
Signed-off-by: Abhishek N R <abnr@vmware.com>
Changing
PIM_DO_DEBUG_IGMP_EVENTS to PIM_DO_DEBUG_GM_EVENTS
PIM_DO_DEBUG_IGMP_PACKETS to PIM_DO_DEBUG_GM_PACKETS
PIM_DO_DEBUG_IGMP_TRACE to PIM_DO_DEBUG_GM_TRACE
PIM_DO_DEBUG_IGMP_TRACE_DETAIL to PIM_DO_DEBUG_GM_TRACE_DETAIL
PIM_DONT_DEBUG_IGMP_EVENTS to PIM_DONT_DEBUG_GM_EVENTS
PIM_DONT_DEBUG_IGMP_PACKETS to PIM_DONT_DEBUG_GM_PACKETS
PIM_DONT_DEBUG_IGMP_TRACE to PIM_DONT_DEBUG_GM_TRACE
PIM_DONT_DEBUG_IGMP_TRACE_DETAIL to PIM_DONT_DEBUG_GM_TRACE_DETAIL
PIM_MASK_IGMP_EVENTS to PIM_MASK_GM_EVENTS
PIM_MASK_IGMP_PACKETS to PIM_MASK_GM_PACKETS
PIM_MASK_IGMP_TRACE to PIM_MASK_GM_TRACE
PIM_MASK_IGMP_TRACE_DETAIL to PIM_MASK_GM_TRACE_DETAIL
so that they can be used for both IGMP and MLD debugs.
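A rough sketch of the set/clear side of the rename, assuming the same shared
router->debugs bitmask (only the EVENTS pair is shown):

    /* Sketch: the DO/DONT macros flip the common GM bits, so one
     * knob controls both IGMP and MLD debugging. */
    #define PIM_DO_DEBUG_GM_EVENTS   (router->debugs |= PIM_MASK_GM_EVENTS)
    #define PIM_DONT_DEBUG_GM_EVENTS (router->debugs &= ~PIM_MASK_GM_EVENTS)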
Signed-off-by: Sai Gomathi N <nsaigomathi@vmware.com>
Added Robustness value, Query interval, Query response timer
and Last member query interval fields to the JSON output.
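A sketch of how the fields might be emitted, using FRR's json helpers; the
key names and the local variables shown are assumptions, not the exact ones:

    /* Illustrative: add the interface query timers to the json row. */
    json_object_int_add(json_row, "robustnessValue", robustness);
    json_object_int_add(json_row, "queryInterval", query_interval);
    json_object_int_add(json_row, "queryResponseTimer", query_response_timer);
    json_object_int_add(json_row, "lastMemberQueryInterval", lmqi);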
Issue: #11891
Signed-off-by: Abhishek N R <abnr@vmware.com>
Changing the macros to be common so that they can be used for PIMv6
debugs as well, i.e. for both IGMP and MLD debugs.
Signed-off-by: Sai Gomathi N <nsaigomathi@vmware.com>
When the configuration of last_member_query_interval
or last_member_query_count is updated, call gm_ifp_update().
This updates cur_query_intv_trig and cur_lmqc in the gm_ifp structure.
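A sketch of the shape of the fix; the handler name and interface fields are
hypothetical, gm_ifp_update() is the function named above:

    /* Hypothetical config-change handler: store the new values, then
     * re-run gm_ifp_update() so the gm_ifp copies are refreshed. */
    static void gm_lmq_config_set(struct interface *ifp, unsigned int lmqi,
                                  unsigned int lmqc)
    {
            struct pim_interface *pim_ifp = ifp->info;

            pim_ifp->gm_last_member_query_intv = lmqi;  /* hypothetical */
            pim_ifp->gm_last_member_query_count = lmqc; /* hypothetical */

            gm_ifp_update(ifp); /* recomputes cur_query_intv_trig, cur_lmqc */
    }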
Issue: #11901
Signed-off-by: Sarita Patra <saritap@vmware.com>
show ip pim state should show IGMP Report while
show ipv6 pim state should show MLD Report.
Output After Fix:
frr# do sh ip pim state
Codes: J -> Pim Join, I -> IGMP Report, S -> Source, * -> Inherited from (*,G), V -> VxLAN, M -> Muted
Active Source Group RPT IIF OIL
frr# do sh ipv6 pim state
Codes: J -> Pim Join, I -> MLD Report, S -> Source, * -> Inherited from (*,G), V -> VxLAN, M -> Muted
Active Source Group RPT IIF OIL
frr#
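One way to get the per-family legend is a compile-time string, since pimd and
pim6d are built from the same source with PIM_IPV set; the GM_TYPE macro name
here is illustrative:

    #if PIM_IPV == 4
    #define GM_TYPE "IGMP"
    #else
    #define GM_TYPE "MLD"
    #endif

    /* at the point where the legend is printed: */
    vty_out(vty,
            "Codes: J -> Pim Join, I -> " GM_TYPE " Report, S -> Source, "
            "* -> Inherited from (*,G), V -> VxLAN, M -> Muted\n");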
Issue: #11249
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Renaming igmp_group_count in struct pim_instance
to gm_group_count, which is to be used for both IGMP and MLD.
Signed-off-by: Sai Gomathi N <nsaigomathi@vmware.com>
When calling time(NULL), FRR is intentionally throwing
away the upper 32 bits of the returned value. Let's explicitly
call it out so that Coverity understands this is intentional
and ok.
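A sketch of the pattern; the explicit cast plus comment is what signals to
Coverity that the truncation is deliberate (the helper is illustrative):

    #include <stdint.h>
    #include <time.h>

    static uint32_t now_lower32(void)
    {
            /* Coverity: only the low 32 bits of the time are wanted
             * here; discarding the upper 32 bits is intentional. */
            return (uint32_t)(time(NULL) & 0xFFFFFFFF);
    }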
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
IPv4 and IPv6 behave a little differently with respect to socket
options.
The IPPROTO_RAW socket protocol is only valid for IPv4.
Therefore the register packet was not being properly encapsulated
for PIMv6, while it worked fine for PIMv4.
So IPPROTO_PIM is now used for PIMv6.
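A sketch of the distinction (error handling omitted; 103 is the IANA
protocol number for PIM, and the helper name is illustrative):

    #include <stdbool.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static int reg_sock_sketch(bool v6)
    {
            if (!v6)
                    /* IPv4: IPPROTO_RAW lets pimd supply the outer IP
                     * header of the register itself. */
                    return socket(AF_INET, SOCK_RAW, IPPROTO_RAW);

            /* IPv6: IPPROTO_RAW does not work this way, so open the
             * raw socket as IPPROTO_PIM and let the kernel build the
             * IPv6 header. */
            return socket(AF_INET6, SOCK_RAW, IPPROTO_PIM);
    }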
Fixes: #11846
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The register socket for PIMv6 was being created with AF_INET.
Since the API pim_reg_sock() is common to both PIMv4 and PIMv6,
it needs to use PIM_AF instead of AF_INET.
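Roughly, PIM_AF is a compile-time alias following the address family the
daemon is built for, which keeps pim_reg_sock() free of per-AF #ifdefs
(sketch):

    #if PIM_IPV == 4
    #define PIM_AF AF_INET
    #else
    #define PIM_AF AF_INET6
    #endif

    /* in pim_reg_sock(): */
    fd = socket(PIM_AF, SOCK_RAW, protocol); /* protocol per the fix above */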
Fixes: #11815
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The vrf pointer returned by pim_cmd_lookup_vrf() may be NULL,
and dereferencing it before ensuring that it is non-NULL
is a good way to crash.
A crash can be initiated in pim:
eva# show ip msdp vrf NOEXIST mesh-group
vtysh: error reading from pimd: Permission denied (13)Warning: closing connection to pimd because of an I/O error!
eva# 2022/08/15 11:47:38 [PHJDC-499N2][EC 100663314] STARVATION: task vtysh_rl_read (560b77f76de6) ran for 16777ms (cpu time 0ms)
eva#
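The shape of the fix is a guard before the first dereference, inside the
command handler; the signature shown for pim_cmd_lookup_vrf() is approximate:

    struct vrf *vrf = pim_cmd_lookup_vrf(vty, argv, argc, &idx);

    /* e.g. "vrf NOEXIST": bail out before touching vrf->info */
    if (!vrf)
            return CMD_WARNING;

    pim = vrf->info;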
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When calling pim_upstream_add(), the lookup for the upstream
or the creation of the upstream cannot fail. As such,
up is never NULL.
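So the caller cleanup amounts to dropping the dead branch; the call and the
field use below are a sketch, not the exact signature:

    up = pim_upstream_add(pim, &sg, ifp, flags, __func__, NULL);

    /* Dead code, removed: pim_upstream_add() cannot return NULL.
     *   if (!up)
     *           return;
     */
    up->reg_state = PIM_REG_JOIN; /* use up directly */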
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In pim_ifchannel.c there exist several spots where
ch->upstream is checked as if it could be NULL. This is not
possible.
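A sketch of the kind of check being removed (the callee shown is
illustrative):

    /* Before: guards a pointer that is always set for a live channel. */
    if (ch->upstream)
            pim_upstream_update_join_desired(pim, ch->upstream);

    /* After: ch->upstream cannot be NULL here, so call directly. */
    pim_upstream_update_join_desired(pim, ch->upstream);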
Signed-off-by: Donald Sharp <sharpd@nvidia.com>