1. Added interface name, group address and detail options to the existing
"show ip igmp groups" command so that the user can retrieve all the groups,
or a particular group, for an interface. The detail option shows the source
information for the group. With that, the show command looks like
(see the sketch after this list):
"show ip igmp [vrf NAME$vrf_name] groups [INTERFACE$ifname [GROUP$grp_str]] [detail$detail] [json$json]"
2. Changed pim_cmd_lookup_vrf() to return empty JSON if the VRF is not present
3. Changed the "detail" option to print non-pretty-printed JSON
4. Added interface name and group address to the existing
"show ip igmp sources" command so that the user can retrieve all the sources
for all the groups, or all the sources for a particular group, for an
interface. With that, the show command looks like:
"show ip igmp [vrf NAME$vrf_name] sources [INTERFACE$ifname [GROUP$grp_str]] [json$json]"
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
Earlier, a daemon parameter was passed to
start_topology(); it is no longer needed,
as new code has been implemented to start
the feature-specific daemons.
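A rough before/after sketch of the call site; start_topology() is the
topotest helper from lib/common_config.py, and the old form with a daemon
argument is assumed rather than quoted:

    # Illustrative call-site change only.
    from lib.common_config import start_topology

    def start_routers(tgen):
        # Before (assumed form): start_topology(tgen, daemon), with the
        # daemon list passed in by each test.
        # After: no daemon parameter; the new code determines and starts
        # the feature-specific daemons itself.
        start_topology(tgen)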
Signed-off-by: Kuldeep Kashyap <kashyapk@vmware.com>
1. Renamed the PIM APIs to generic names; the same APIs will be used for
PIMv4 and PIMv6 verification (see the sketch after this list)
2. Modified all affected scripts and ran them multiple times locally to
avoid CI failures
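The commit does not quote the new names, so the following is a purely
hypothetical illustration of the pattern: one generic helper serving both
address families instead of a v4-only name:

    # Hypothetical illustration only; the real helpers live in
    # tests/topotests/lib/pim.py and their actual names/signatures differ.
    def verify_pim_neighbors_generic(tgen, addr_type="ipv4"):
        """One generic API body serving both PIMv4 and PIMv6 verification."""
        cmd = ("show ip pim neighbor json" if addr_type == "ipv4"
               else "show ipv6 pim neighbor json")  # PIMv6 command assumed
        for rname, router in tgen.routers().items():
            neighbors = router.vtysh_cmd(cmd, isjson=True)
            if not neighbors:
                return "no PIM neighbors found on {}".format(rname)
        return True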
Signed-off-by: Kuldeep Kashyap <kashyapk@vmware.com>
The test checks whether interface flaps cause
the appropriate PIM reactions. Unfortunately the test
turns off the multicast stream, and it also
has a keep-alive timer of 15 seconds set on all routers,
which of course means the test has at most 15 seconds
to finish testing. This is not always possible given system load.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The test_multicast_pim_sm_topo3.py test is both spending extra time
looking for state that will never occur and generating a support
bundle when it doesn't find it. Fix the test to reach the correct
conclusion faster (see the sketch below).
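A hedged sketch of the shape such a fix can take, assuming the topotest
verify helpers accept the usual "expected" keyword; the helper name,
router, interface and addresses below are illustrative:

    from lib.pim import verify_upstream_iif  # helper name illustrative

    def check_state_absent(tgen):
        """Illustrative only: assert quickly that a state never appears."""
        # expected=False tells the retry machinery that failure is the
        # expected outcome, so it returns right away instead of retrying
        # for its full timeout and producing a support bundle.
        result = verify_upstream_iif(
            tgen, "l1", "l1-i1-eth1", "10.0.5.2", "226.1.1.1",
            expected=False)
        assert result is not True, \
            "state unexpectedly present: {}".format(result)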
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The test case test_PIM_hello_tx_rx_p1 fails randomly because
sometimes the hello packet has been received and sometimes it has not
by the time the stats data is gathered.
When the hello packet is received, HelloRx is incremented to 1; then the
interface is shut down, which resets the stats to 0,
and when "no shutdown" of the interface is done, the stats are incremented to 1 again.
After the "no shutdown" the test case checks whether the stats have incremented,
but in this case, although the stats did get incremented, the before and
after values are the same. Hence the test case failed.
Add the correct expectations to the test case (see the sketch below).
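A hedged sketch of what the corrected expectation can look like when the
flap resets the counter; the "helloRx" field name is assumed from the
"show ip pim interface traffic json" output and may differ:

    def hello_rx_ok_after_flap(stats_before, stats_after):
        """Illustrative only: judge PIM hello counters across a flap.

        'shutdown' zeroes the interface stats, so after 'no shutdown' the
        counter can legitimately land on the same value it had before the
        flap.  A strictly-greater comparison therefore fails even though
        hellos are flowing; checking that the counter restarted and is
        counting again matches what actually happens.
        """
        after = stats_after.get("helloRx", 0)   # field name assumed
        # Not: after > stats_before.get("helloRx", 0)
        # (that breaks whenever the post-reset value equals the old one)
        return after > 0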
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The test is doing this:
a) gather interface data about packets sent
b) shut interface
c) no shut interface
d) gather interface data about packets sent
e) compare a) to d) and fail if packets sent/received have not incremented
The problem is, of course, that under heavy system load sufficient time
might not have passed for packets to be sent between c) and d). Add up to
35 seconds of waiting for the packet counters to increment, or else heavily
loaded systems may never show that data is being sent (see the sketch below).
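A minimal sketch of that retry, using the topotest run_and_expect helper;
the interface name, the "helloTx" field and the 7 x 5 second split of the
~35 seconds are illustrative:

    from lib import topotest

    def wait_for_hello_tx_increment(router, intf, before_count):
        """Illustrative only: poll the traffic counters after 'no shutdown'."""

        def _incremented():
            out = router.vtysh_cmd("show ip pim interface traffic json",
                                   isjson=True)
            return out.get(intf, {}).get("helloTx", 0) > before_count

        # Retry for up to ~35 seconds (7 tries, 5 seconds apart) so a
        # heavily loaded system has time to actually send packets after
        # the flap.
        ok, _ = topotest.run_and_expect(_incremented, True, count=7, wait=5)
        return ok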
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
verify_pim_interface_traffic *fetches* the PIM
traffic data. Rename the function to what it
actually does (see the sketch below).
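The new name is not spelled out here; a sketch of the renamed helper's
gist, with the name, arguments and return shape assumed:

    def get_pim_interface_traffic(tgen, routers):
        """Illustrative only: fetch (rather than verify) PIM traffic counters."""
        traffic = {}
        for rname in routers:
            router = tgen.gears[rname]
            # Per-interface counters, keyed by interface name in the JSON.
            traffic[rname] = router.vtysh_cmd(
                "show ip pim interface traffic json", isjson=True)
        return traffic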
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
- The PIM tests do not need kernel routes to help them bind joins and
sources to specific interfaces; they should do that themselves directly
(see the sketch below). Also, do not change the system-wide "rp_filter"
sysctl away from the value required by everyone else.
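A minimal sketch of binding the join and the source to a chosen interface
directly with the socket API, no kernel route required; the group, port
and interface address values are illustrative:

    import socket

    def join_group_on_interface(group, ifaddr):
        """Illustrative only: IGMP-join 'group' via the interface whose
        local address is 'ifaddr', instead of letting a kernel route
        pick the interface."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        mreq = socket.inet_aton(group) + socket.inet_aton(ifaddr)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock

    def send_to_group_via_interface(group, port, ifaddr, payload=b"data"):
        """Illustrative only: source traffic out of a specific interface
        by setting IP_MULTICAST_IF rather than depending on routing."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                        socket.inet_aton(ifaddr))
        sock.sendto(payload, (group, port))
        return sock

For example, join_group_on_interface("226.1.1.1", "10.0.0.1") joins on the
interface that owns 10.0.0.1 regardless of what the routing table says.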
Signed-off-by: Christian Hopps <chopps@labn.net>
Change every `-` to `_` in directory names. This avoids mixing `_` and `-`,
keeps naming consistent, and makes the directories sort properly.
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>