Added a new show command, "show ip multicast", which displays multicast data
packets in and out at the interface level.
Signed-off-by: Sarita Patra <saritap@vmware.com>
Replace sprintf with snprintf where straightforward to do so.
- sprintf's into local scope buffers of known size are replaced with the
equivalent snprintf call
- snprintf's into local scope buffers of known size that use the buffer
size expression now use sizeof(buffer)
- sprintf(buf + strlen(buf), ...) replaced with snprintf() into temp
buffer followed by strlcat
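For the last pattern, the rewrite looks roughly like this (a minimal sketch,
assuming FRR's strlcat() compat helper; the buffer names are illustrative):

    /* Before: sprintf(buf + strlen(buf), " %s", "sparse-mode");
     * After: format into a bounded temporary, then append safely. */
    char buf[256] = "flags:";
    char tmp[64];

    snprintf(tmp, sizeof(tmp), " %s", "sparse-mode");
    strlcat(buf, tmp, sizeof(buf));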
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
And again for the name. Why on earth would we centralize this, just so
people can forget to update it?
Signed-off-by: David Lamparter <equinox@diac24.net>
Same as before, instead of shoving this into a big central list we can
just put the parent node in cmd_node.
Signed-off-by: David Lamparter <equinox@diac24.net>
There is really no reason to not put this in the cmd_node.
And while we're at it, rename the pointless ".func" to ".config_write".
[v2: fix forgotten ldpd config_write]
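With the name, parent node and config writer all carried in cmd_node itself,
a node definition ends up looking roughly like this (a hedged sketch with
illustrative values, not a specific node from the tree):

    static struct cmd_node interface_node = {
            .name = "interface",
            .node = INTERFACE_NODE,
            .parent_node = CONFIG_NODE,
            .prompt = "%s(config-if)# ",
            .config_write = interface_config_write,
    };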
Signed-off-by: David Lamparter <equinox@diac24.net>
The only nodes that have this as 0 don't have a "->func" anyway, so the
entire thing is really just pointless.
Signed-off-by: David Lamparter <equinox@diac24.net>
Issue: the "no ip msdp mesh-group <word> source" command was deleting the
whole mesh group, which might still be in use by its members.
Solution: "no ip msdp mesh-group <word> source" now deletes only the
mesh-group source.
Add a new CLI command, "no ip msdp mesh-group <word>", to delete the
mesh group itself.
Signed-off-by: Sarita Patra <saritap@vmware.com>
JSON output for the IGMP group display is modified as follows for easier
Python handling:
1. Under each interface, make the groups an array of group objects.
2. Include source and group inside each group object.
These improvements are needed for a set of topotest cases that will be
upstreamed shortly.
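A hedged sketch of the resulting structure, using FRR's json-c wrappers
(json_iface, src_str and grp_str are illustrative names, not the actual
pimd code):

    json_object *json_groups = json_object_new_array();
    json_object *json_grp = json_object_new_object();

    json_object_string_add(json_grp, "source", src_str);
    json_object_string_add(json_grp, "group", grp_str);
    json_object_array_add(json_groups, json_grp);
    json_object_object_add(json_iface, "groups", json_groups);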
Signed-off-by: Saravanan K <saravanank@vmware.com>
This CLI allows the user to configure an IGMP group limit that generates a
watermark warning when reached.
Though a watermark may not make sense without an enforced limit, this
implementation serves as a base for implementing a limit in the future and
helps track a particular scale for now.
Testing:
=======
ip igmp watermark-warn <10-60000>
On reaching the configured number of groups, pim will issue a warning:
2019/09/18 18:30:55 PIM: SCALE ALERT: igmp group count reached watermak limit: 210(vrf: default)
Also added the group count and the configured watermark limit to
"show ip igmp groups [json]":
<snip>
Sw3# sh ip igmp groups json
{
"Total Groups":221, <=====
"Watermark limit":210, <=========
"ens224":{
"name":"ens224",
"state":"up",
"address":"40.0.0.1",
"index":6,
"flagMulticast":true,
"flagBroadcast":true,
"lanDelayEnabled":true,
"groups":[
{
"source":"40.0.0.1",
"group":"225.1.1.122",
"timer":"00:03:56",
"sourcesCount":1,
"version":2,
"uptime":"00:00:24"
<\snip>
<snip>
Sw3(config)# do sh ip igmp group
Total IGMP groups: 221
Watermark warn limit(Set) : 210
Interface Address Group Mode Timer Srcs V Uptime
ens224 40.0.0.1 225.1.1.122 ---- 00:04:06 1 2 00:13:22
ens224 40.0.0.1 225.1.1.144 ---- 00:04:02 1 2 00:13:22
ens224 40.0.0.1 225.1.1.57 ---- 00:04:01 1 2 00:13:22
ens224 40.0.0.1 225.1.1.210 ---- 00:04:06 1 2 00:13:22
<\snip>
Signed-off-by: Saravanan K <saravanank@vmware.com>
Problem: output is cut short when all octets of the prefix string are 3
digits.
RCA: the buffer was sized to hold only the IP address string.
Fix: added 3 more bytes to hold the '/' and the prefix length.
Modified the buffers in 'show ip pim bsrp-info' and 'show ip pim bsm
database'.
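A minimal sketch of the sizing (the variable names are illustrative): the
longest IPv4 prefix string, "255.255.255.255/32", needs the address part
(INET_ADDRSTRLEN, which already counts the NUL) plus 3 more bytes for the
'/' and two prefix-length digits.

    char prefix_str[INET_ADDRSTRLEN + 3];
    prefix2str(&rp_prefix, prefix_str, sizeof(prefix_str));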
Signed-off-by: Saravanan K <saravanank@vmware.com>
Issue: Client1------LHR-----(int-1)RP(int-2)------Client2
Client2 sends an IGMP join for group G.
Client1 sends an IGMP join for group G.
Verify "show ip mroute" on the RP; the OIL will have 2 interfaces.
Client2 sends an IGMP leave.
Verify "show ip mroute" on the RP; the OIL will still have 2.
Root cause: When the RP receives the IGMP join from Client2, it creates an
(S,G) channel oil, adds the interface int-2 to the OIL and sets the flag
PIM_OIF_FLAG_PROTO_IGMP on int-2.
When Client1 sends its IGMP join, the LHR sends a (*,G) join to the RP. The
RP adds the interface int-1 to the OIL of the (S,G) channel_oil, sets the
flags PIM_OIF_FLAG_PROTO_IGMP and PIM_OIF_FLAG_PROTO_PIM on int-1, and sets
PIM_OIF_FLAG_PROTO_PIM on int-2 as well. This happens because
pim_upstream_inherited_olist_decide() and forward_on() walk all the OIFs
and update the flag wrongly.
So when Client2 sends an IGMP prune, the RP does not remove int-2 from the
OIL, since both PIM_OIF_FLAG_PROTO_PIM and PIM_OIF_FLAG_PROTO_IGMP are set;
it only unsets the flag PIM_OIF_FLAG_PROTO_IGMP.
Fix: Introduce new flags in if_channel: PIM_IF_FLAG_MASK_PROTO_PIM and
PIM_IF_FLAG_MASK_PROTO_IGMP. If an if_channel is created because a PIM join
or PIM (S,G,rpt) prune was received, set the flag
PIM_IF_FLAG_MASK_PROTO_PIM. If an if_channel is created because an IGMP
join was received, set the flag PIM_IF_FLAG_MASK_PROTO_IGMP.
When an interface needs to be added to the OIL, check whether
PIM_IF_FLAG_MASK_PROTO_PIM or PIM_IF_FLAG_MASK_PROTO_IGMP is set and update
the OIL flag accordingly.
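A hedged sketch of that last check (the flag names come from this commit;
the surrounding if_channel/OIL code is only assumed):

    uint32_t proto = 0;

    if (ch->flags & PIM_IF_FLAG_MASK_PROTO_PIM)
            proto |= PIM_OIF_FLAG_PROTO_PIM;
    if (ch->flags & PIM_IF_FLAG_MASK_PROTO_IGMP)
            proto |= PIM_OIF_FLAG_PROTO_IGMP;
    /* add ch->interface to the OIL with 'proto' instead of a fixed mask */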
Signed-off-by: Sarita Patra <saritap@vmware.com>
Issue 1: "show ip pim interface traffic" not show prune TX/RX
Rootcause : not added the variable in the json
Fix : add prune TX/RX in show ip pim interface traffic json
Issue 2: "show ip pim rp-info" not shows the key iAmRp when it is false
Rootcause: Only display the key when the value is true.
Fix: add iAmRp as false in show ip pim rp-info json
Issue 3: "show ip pim rp-info" not showing outbound interface if it is empty
Rootcause: Only display when there is any OIL
Fix: When RP is not reachable then, the outbound interface is Unknown
The command "show ip pim rp-info json" not displaying the outbound
interafce if it is unknown
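A hedged sketch of the Issue 2 change, using FRR's json helpers (the
rp_info/json_row names are illustrative): emit the key unconditionally.

    if (rp_info->i_am_rp)
            json_object_boolean_true_add(json_row, "iAmRp");
    else
            json_object_boolean_false_add(json_row, "iAmRp");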
Signed-off-by: Sarita Patra <saritap@vmware.com>
S - Sparse Mode
C - A member of the group is directly connected to the router.
R - Set on an (S,G) by receipt of an (S,G) RP-bit prune message.
F - This router is the FHR and sends register messages to the RP to inform
    it of this active source.
P - The OIL is NULL, meaning the router will send a prune.
T - At least one packet was received via the SPT.
Signed-off-by: Sarita Patra <saritap@vmware.com>
RCA: When more than 32 (MAXVIFS) interfaces are configured, the interfaces
configured after the 32nd get the value MAXVIFS.
This value is used as the index into the free-vif tracker, an array of size
32 (MAXVIFS).
So the channel oil list pointer, which is the next field in the pim
structure, gets corrupted when the free vif is updated.
That pointer is accessed during the RPF update, resulting in a crash.
Fix: Refrain from allocating the mcast interface structure and throw a
config error when more than MAXVIFS interfaces are configured.
The max-vif check is exempted for the VRF device and pimreg: the VRF device
is the first interface and is not expected to fail, and pimreg has a
reserved vif.
VXLAN tunnel termination device creation already has this check and throws
a warning on max vif.
All other creations come through the CLI.
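A hedged sketch of the guard (the counter name is an assumption; MAXVIFS
comes from the kernel mroute API):

    if (pim->mcast_if_count >= MAXVIFS) {
            zlog_warn("Max multicast interfaces (%d) reached, cannot configure %s",
                      MAXVIFS, ifp->name);
            return NULL;
    }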
Signed-off-by: Saravanan K <saravanank@vmware.com>
Root cause: The header display was common code, outside the vtysh/json
if-else.
Fix: brought it inside the vtysh condition.
Signed-off-by: Saravanan K <saravanank@vmware.com>
Issue: "show ip mroute" displays the mroute uptime (the time when the
mroute was installed into the kernel) per OIF, which is confusing.
Fix: display the mroute uptime per (S,G) mroute entry.
Signed-off-by: Sarita Patra <saritap@vmware.com>
When pim receives a register packet, we apply the received source to the
prefix list. If accepted, normal processing continues. If denied, we send a
register-stop message to the source.
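A hedged sketch of the check, using lib/plist.h helpers (where the
prefix-list name is stored is an assumption, and the register-stop call is
elided):

    struct prefix_list *plist;
    struct prefix src; /* filled from the register packet's source address */

    plist = prefix_list_lookup(AFI_IP, pim->register_plist);
    if (plist && prefix_list_apply(plist, &src) == PREFIX_DENY) {
            /* denied: send a register-stop back to the source and skip
             * normal register processing */
            return;
    }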
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The memory type PIM_SPT_PLIST_NAME is specific to the SPT, but we are going
to store more prefix-list names in pim; make it generic to allow for less
confusion in the future.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Some code was missed during the upstreaming process due to a code squash.
Identify it and put it into a commit to keep the code consistent and
correct.
Signed-off-by: Satheesh Kumar K <sathk@cumulusnetworks.com>
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Issue 1:
1. Enable pim on an interface.
2. Configure the query-interval or query max response time, which results
in a pimd crash.
Root cause:
1. When pim is enabled on an interface, it creates an igmp socket with
querier_timer and other_querier_timer as NULL.
2. When query-interval/max_response_time is configured, it calls
igmp_sock_query_reschedule() to reschedule the query. This function expects
either the querier_timer or the other_querier timer to be running. Since
both are NULL in this case, it results in a crash.
Issue 2:
1. Enable pim on an interface.
2. Execute no ip igmp query-interval or query max response time, which
results in a pimd crash.
Root cause:
1. When pim is enabled on an interface, it creates a pim interface with
querier_timer and other_querier_timer as NULL.
2. When no ip igmp query-interval/max_response_time is executed, it checks
that either the querier_timer or the other_querier timer is running. Since
both are NULL in this case, it results in a crash.
Fix:
When pim is enabled on an interface, it creates the igmp socket with
mtrace_only as true. So add a check: if mtrace_only is true, don't
reschedule the query.
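A minimal sketch of that guard at the top of igmp_sock_query_reschedule()
(the field name follows the commit text; the surrounding code is assumed):

    if (igmp->mtrace_only)
            return; /* mtrace-only socket: no querier timers to reschedule */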
Signed-off-by: Sarita Patra <saritap@vmware.com>
It's been a year; search and destroy.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
1. show ip pim mlag summary
provides MLAG session information and stats
2. show ip pim mlag upstream
displays the upstream entries synced across the MLAG switches
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
A local membership is created on the vxlan termination device ipmr-lo. This
is done to -
1. Pull multicast vxlan tunnel traffic to the VTEP for termination by
triggering JoinDesired on the BUM multicast group.
2. Include the OIF in the mroute to signal to the dataplane component
that the flow needs to be vxlan terminated.
Earlier we were overloading PIM_UPSTREAM_FLAG_MASK_SRC_IGMP for this local
membership creation, but that was creating confusion both in the state
machine and in the show outputs. To avoid that, we use the more apparent
PIM_UPSTREAM_FLAG_MASK_SRC_VXLAN_TERM. With this change -
1. We get LHR functionality for VXLAN_TERM mroutes
2. OIF is populated with PIM_OIF_FLAG_PROTO_PIM only
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
1. Upstream entries associated with tunnel termination mroutes are
synced to the MLAG peer via the local MLAG daemon.
2. These entries are installed in the peer switch (via an upstream
ref flag).
3. DF (Designated Forwarder) election is run per upstream entry by both
MLAG switches (decision rule sketched after this list) -
   a. The switch with the lowest RPF cost is the DF winner.
   b. If both switches have the same RPF cost, the MLAG role is used as a
      tie breaker, with the MLAG primary becoming the DF winner.
4. The DF winner terminates the multicast traffic by adding the tunnel
termination device to the OIL. The non-DF suppresses the termination
device from the OIL.
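A hedged sketch of the per-entry decision only (the variable and enum names
are illustrative assumptions, not the pimd code):

    /* inputs: our RPF cost, the peer's RPF cost, and our MLAG role */
    bool i_am_df;

    if (local_cost != peer_cost)
            i_am_df = (local_cost < peer_cost);
    else
            i_am_df = (mlag_role == MLAG_ROLE_PRIMARY);

    /* DF keeps the termination device in the OIL; non-DF suppresses it */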
Note: Before the PIM-MLAG interface was available, hidden config was
used to test the EVPN-PIM functionality with MLAG. I have removed the
code to persist that config to avoid confusion. The hidden commands are
still available.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Convert the upstream_list and hash to an RB tree. Significant time was
being spent in listnode_add_sort(); this change reduces that time greatly.
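A hedged sketch of the shape of the conversion, using FRR's typesafe
RB-tree macros from lib/typesafe.h (member and compare-function names here
are illustrative):

    PREDECL_RBTREE_UNIQ(rb_pim_upstream);

    struct pim_upstream {
            struct rb_pim_upstream_item upstream_rb; /* replaces list node */
            /* ... remaining fields unchanged ... */
    };

    static int pim_upstream_compare(const struct pim_upstream *a,
                                    const struct pim_upstream *b);

    DECLARE_RBTREE_UNIQ(rb_pim_upstream, struct pim_upstream, upstream_rb,
                        pim_upstream_compare);

Entries stay sorted by the tree itself, so inserts no longer pay the
listnode_add_sort() cost.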
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>