Running zebra after commit 888756b208
under valgrind produces this report:
==17102== Invalid read of size 8
==17102== at 0x44D84C: rib_dest_from_rnode (rib.h:375)
==17102== by 0x4546ED: rib_process_result (zebra_rib.c:1904)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102== Address 0x83bd468 is 88 bytes inside a block of size 96 free'd
==17102== at 0x4A35F54: free (vg_replace_malloc.c:530)
==17102== by 0x4CCAC00: qfree (memory.c:129)
==17102== by 0x4D03DC6: route_node_destroy (table.c:501)
==17102== by 0x4D039EE: route_node_free (table.c:90)
==17102== by 0x4D03971: route_node_delete (table.c:382)
==17102== by 0x44D82A: route_unlock_node (table.h:256)
==17102== by 0x454617: rib_process_result (zebra_rib.c:1882)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102== Block was alloc'd at
==17102== at 0x4A36FF6: calloc (vg_replace_malloc.c:752)
==17102== by 0x4CCAA2D: qcalloc (memory.c:110)
==17102== by 0x4D03D88: route_node_create (table.c:489)
==17102== by 0x4D0360F: route_node_new (table.c:65)
==17102== by 0x4D034F8: route_node_set (table.c:74)
==17102== by 0x4D03486: route_node_get (table.c:327)
==17102== by 0x4CFB700: srcdest_rnode_get (srcdest_table.c:243)
==17102== by 0x4545C1: rib_process_result (zebra_rib.c:1872)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102==
This is happening because of this order of events:
1) Route is deleted in the main thread and scheduled for rib processing.
2) Rib garbage collection is run and we remove the route node since it
is no longer needed.
3) Data plane returns from the deletion in the kernel and we call
the srcdest_rnode_get function to get the prefix that was deleted.
This recreates the route node with a lock count of 1, which we
then freed via the route_unlock_node call while continuing to use
the rn pointer, leaving us with a use after free.
The solution is, of course, to simply move the unlock of the node
to the end of the function, if we have a route_node.
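As a rough sketch of the reordering (the helper names follow the backtrace
above, but the surrounding body and variables are illustrative, not the
literal zebra_rib.c code):

    struct route_node *rn = srcdest_rnode_get(table, dest_pfx, src_pfx);

    /*
     * The unlock used to happen here.  Since garbage collection had
     * already freed the original node, srcdest_rnode_get() recreated
     * it with a lock count of 1, so unlocking early freed it while rn
     * was still used below.
     */

    /* ... process the dataplane result using rn ... */

    /* Fix: drop our reference only once we are done with rn. */
    if (rn)
            route_unlock_node(rn);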
Fixes: #3854
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
An L3VNI keeps a reference to its SVI interface (ifp).
When a netlink change is received there is no flag indicating
that the MAC has changed; we currently simply overwrite the
interface's (ifp) hw_addr (MAC) field.
To originate EVPN type-2 and type-5 routes on a VNI MAC change,
a comparison is required between the existing MAC and the MAC
carried in the netlink change.
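A minimal sketch of the intended comparison, assuming the netlink handler
has the interface (ifp) and the new hardware address in hand; the
notification hook at the end is hypothetical:

    /* Compare the stored SVI MAC against the MAC from netlink before
     * overwriting it, so a change can drive EVPN route re-origination. */
    bool mac_changed = (ifp->hw_addr_len != new_hw_addr_len)
                       || memcmp(ifp->hw_addr, new_hw_addr, new_hw_addr_len);

    memcpy(ifp->hw_addr, new_hw_addr, new_hw_addr_len);
    ifp->hw_addr_len = new_hw_addr_len;

    if (mac_changed) {
            /* Hypothetical hook: let the L3VNI code update the RMAC and
             * re-originate the affected type-2/type-5 routes. */
            zebra_vxlan_svi_mac_change(ifp);
    }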
Ticket:CM-23850
Reviewed By:CCR-8283
Testing Done:
Validated that EVPN type-5 routes are originated upon changing the
MAC address of the L3VNI's SVI interface via the ip link set command.
Checked show bgp l2vpn evpn route and verified that the Rmac field
contains the new MAC address.
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
FRR is reporting an lsp deletion event as a failure
in the log messages. This will lead to confusion and
lots of fun debugging.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
CLANG was throwing an error because struct rtadv_rdnss (and
rtadv_dnssl) was being initialized with = {0} instead of {}.
Change it to the latter.
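For illustration, the change is of this shape (the struct contents are
irrelevant here):

    /* Before: clang complained about the = {0} zero initializer. */
    struct rtadv_rdnss rdnss = {0};

    /* After: use the empty initializer list instead. */
    struct rtadv_rdnss rdnss = {};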
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
When we schedule a packet for future handling, list the packet
type so that we can see what we are getting with debugs.
Also note which client and how many packets we received from that
client.
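Something along these lines (the message wording is illustrative;
zserv_command_string() and zebra_route_string() are assumed here as the
helpers that name the packet type and the client):

    if (IS_ZEBRA_DEBUG_PACKET)
            zlog_debug("Schedule %s packet for future handling (%u packets from client %s)",
                       zserv_command_string(hdr.command), packets,
                       zebra_route_string(client->proto));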
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
In asymmetric and symmetric routing scenarios in EVPN,
each VTEP pair may have a different set of addresses
for the SVIs.
This knob allows reachability (ping connectivity) of the
SVI IPs and ARP resolution for them across VTEPs in
different racks.
This knob should not be used when the same SVI IPs are
configured on VTEPs across racks or when advertise default
gateway is configured.
Ticket:CM-23782
Testing Done:
Bring up an EVPN symmetric routing topology with different
SVI IPs on different VTEPs. Enable advertise svi ip
at each VTEP; remote VTEPs install ARP entries for the
SVI IPs via EVPN type-2 route exchange.
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
With the data plane changes that were made, we are now running
nexthop tracking two times: once at the end of meta-queue insertion
and once at the end of receiving a batch of data from the dataplane.
The addition of the data plane code caused flags to not be set
fully for the resolved routes (since we do not know the answer yet).
This in turn caused the nexthop tracking run after the meta-queue
to think that the route was not `good`, which would cause it to
tell all interested parties that there was no nexthop.
After the dataplane insertion we are also now running the nht code.
This was re-figuring out the nexthop correctly and also
correctly reporting to interested parties that there was a path again.
Example:
donna.cumulusnetworks.com(config)# do show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, f - failed route
K>* 0.0.0.0/0 [0/103] via 10.50.11.1, enp0s3, 00:06:47
S>* 4.5.6.7/32 [1/0] via 192.168.209.1, enp0s8, 00:04:47
C>* 10.50.11.0/24 is directly connected, enp0s3, 00:06:47
C>* 192.168.209.0/24 is directly connected, enp0s8, 00:06:47
C>* 192.168.210.0/24 is directly connected, enp0s9, 00:06:47
donna.cumulusnetworks.com(config)# ip route 4.5.6.7/32 192.168.210.1
donna.cumulusnetworks.com(config)# do show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, f - failed route
K>* 0.0.0.0/0 [0/103] via 10.50.11.1, enp0s3, 00:07:06
S>* 4.5.6.7/32 [1/0] via 192.168.209.1, enp0s8, 00:00:04
* via 192.168.210.1, enp0s9, 00:00:04
C>* 10.50.11.0/24 is directly connected, enp0s3, 00:07:06
C>* 192.168.209.0/24 is directly connected, enp0s8, 00:07:06
C>* 192.168.210.0/24 is directly connected, enp0s9, 00:07:06
donna.cumulusnetworks.com(config)#
Log files for sharp, which is watching 4.5.6.7:
2019/02/04 15:20:54.844288 SHARP: Received update for 4.5.6.7/32
2019/02/04 15:20:54.844820 SHARP: Received update for 4.5.6.7/32
2019/02/04 15:20:54.844836 SHARP: Nexthop 192.168.209.1, type: 2, ifindex: 3, vrf: 0, label_num: 0
2019/02/04 15:20:54.844853 SHARP: Nexthop 192.168.210.1, type: 2, ifindex: 4, vrf: 0, label_num: 0
As you can see we have received an update with no nexthops (an invalid
route) and a second update immediately after it with 2 nexthops.
What's the big deal you say? Well we have code in other daemons that reacts
to not having a path for a nexthop. In BGP this will cause us to tear
down the peer. In staticd we'll remove the recursively resolved route.
In pim we'll remove all paths to the mroute. This is not desirable.
The fix is to remove the meta-queue run of nexthop tracking.
While running nexthop tracking only after the data plane notifies us
of routes to handle is not ideal, we will be fixing this in the future
with the nexthop group code, which should know which nexthops are
affected by a nexthop group change.
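Conceptually the change looks like this (only rib_process_dplane_results
comes from the backtraces above; the other names are placeholders, not
the literal diff):

    static void rib_meta_queue_finish(void)    /* placeholder */
    {
            /*
             * The nexthop tracking evaluation that used to run here is
             * removed: the dataplane has not answered yet, so resolved
             * routes do not have their flags fully set and NHT would
             * tell clients there is no nexthop.
             */
    }

    static int rib_process_dplane_results(struct thread *thread)
    {
            /* ... dequeue and process the dataplane results ... */

            /*
             * Evaluate nexthop tracking only here, once the answer is
             * known, so interested parties see a single, correct update.
             */
            do_nht_processing();    /* placeholder for the NHT run */

            return 0;
    }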
Debug output with the fixed code:
donna.cumulusnetworks.com(config)# do show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, f - failed route
K>* 0.0.0.0/0 [0/103] via 10.50.11.1, enp0s3, 00:00:46
S>* 4.5.6.7/32 [1/0] via 192.168.209.1, enp0s8, 00:00:02
C>* 10.50.11.0/24 is directly connected, enp0s3, 00:00:46
C>* 192.168.209.0/24 is directly connected, enp0s8, 00:00:46
C>* 192.168.210.0/24 is directly connected, enp0s9, 00:00:46
donna.cumulusnetworks.com(config)# ip route 4.5.6.7/32 192.168.210.1
donna.cumulusnetworks.com(config)# do show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, f - failed route
K>* 0.0.0.0/0 [0/103] via 10.50.11.1, enp0s3, 00:00:59
S>* 4.5.6.7/32 [1/0] via 192.168.209.1, enp0s8, 00:00:02
* via 192.168.210.1, enp0s9, 00:00:02
C>* 10.50.11.0/24 is directly connected, enp0s3, 00:00:59
C>* 192.168.209.0/24 is directly connected, enp0s8, 00:00:59
C>* 192.168.210.0/24 is directly connected, enp0s9, 00:00:59
2019/02/04 15:26:20.656395 SHARP: Received update for 4.5.6.7/32
2019/02/04 15:26:20.656440 SHARP: Nexthop 192.168.209.1, type: 2, ifindex: 3, vrf: 0, label_num: 0
2019/02/04 15:26:33.688251 SHARP: Received update for 4.5.6.7/32
2019/02/04 15:26:33.688322 SHARP: Nexthop 192.168.209.1, type: 2, ifindex: 3, vrf: 0, label_num: 0
2019/02/04 15:26:33.688329 SHARP: Nexthop 192.168.210.1, type: 2, ifindex: 4, vrf: 0, label_num: 0
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The restricting of data about interfaces was both inconsistent
in application and allowed protocol developers to get into states where
they did not have the expected data about an interface that they
thought they would have. These restrictions and inconsistencies
keep causing bugs that have to be sorted through.
The latest iteration of this bug was that commit
f20b478ef3
caused pim to not receive interface up notifications (although
it knows the interface is back in the vrf and it knows the
relevant ip addresses on the interface, as they were changed
as part of an ifdown/ifup cycle).
Remove this restriction and allow the interface events to
be propagated to all clients.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Made changes to update the routemap applied counter in the following flows:
1. Increment when a route map is attached to a list.
2. Decrement when a route map is removed from / modified on a list.
3. Increment/decrement when the route map create/delete callback is triggered.
4. The counter need not be updated when a route map itself gets updated,
i.e. when a match value is changed or added on the existing routemap.
In zebra, the same update api is called for all three add/delete/update
operations, but this counter has to be updated only for routemap addition.
Addressed this specific case by identifying the routemap operation based
on the routemap pointer, as sketched below.
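A sketch of the idea (names are illustrative; route_map_counter_increment()
and route_map_counter_decrement() stand in for whatever bumps the applied
counter):

    /* Common hook fired for add/delete/update: only adjust the counter
     * when the attachment actually changes, i.e. when the old and new
     * route-map pointers differ. */
    static void routemap_applied_update(struct route_map *old_map,
                                        struct route_map *new_map)
    {
            if (old_map == new_map)
                    return;         /* same route-map updated in place */

            if (old_map)
                    route_map_counter_decrement(old_map);   /* detached */
            if (new_map)
                    route_map_counter_increment(new_map);   /* attached */
    }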
Signed-off-by: RajeshGirada <rgirada@vmware.com>
The zebra.sock data is the listener socket for the zapi protocol.
The rest of the zebra router does not need to see this data.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The client_list should be owned by the zebra_router data structure,
as it is part of the global state information.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The master thread handler is really part of the zrouter structure.
So let's move it over to that. Eventually zserv.h will only be
used for zapi messages.
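Taken together with the previous two moves, the intent is roughly this
(a sketch of the consolidation, not the full structure):

    /* zebra_router.h (sketch): global zebra state lives on zrouter,
     * while zserv.h shrinks toward only the zapi message handling. */
    struct zebra_router {
            /* Thread master for the zebra main pthread. */
            struct thread_master *master;

            /* All connected zapi clients. */
            struct list *client_list;

            /* Listener socket for the zapi protocol; the rest of zebra
             * does not need to see it. */
            int sock;

            /* ... tables and other global state ... */
    };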
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we get into rib_process_result and the operation we are handling
is DPLANE_OP_ROUTE_UPDATE *and* the route entry being looked at
is a route replace, we currently have no way to decode the old_re
and the re separately due to how we have stored the context. As such
they are the same pointer.
The route replace for the same route type is therefore causing the re
to set the installed flag and then immediately unset the installed
flag, leaving us in a state where the kernel has the route but
the rib thinks we are not installed.
Since the true old_re (the one being replaced by the update operation)
is going away (zebra already deletes the old one for us),
this fix is not optimal but will get us moving forward.
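To make the failure concrete, the sequence is roughly this (variable and
flag names here are illustrative):

    /* Both decode from the same stored context, so they end up equal: */
    struct route_entry *re = dplane_ctx_route_entry;      /* illustrative */
    struct route_entry *old_re = dplane_ctx_route_entry;  /* same pointer */

    SET_FLAG(re->status, ROUTE_ENTRY_INSTALLED);
    /* ... the "old" entry cleanup then clears the flag just set ... */
    UNSET_FLAG(old_re->status, ROUTE_ENTRY_INSTALLED);
    /* Result: the kernel has the route, the rib thinks it is not installed. */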
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Read the onlink flag from the kernel for routes and pass it
up and through to zebra so that we are consistent with what
the kernel is telling us.
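A sketch of the netlink side, assuming the kernel's RTNH_F_ONLINK bit and
a corresponding zebra nexthop flag (the zebra-side flag name is an
assumption):

    /* RTNH_F_ONLINK comes from <linux/rtnetlink.h>; while decoding the
     * nexthop(s) of an RTM_NEWROUTE message, carry the kernel's onlink
     * bit over onto the zebra nexthop. */
    if (rtnh->rtnh_flags & RTNH_F_ONLINK)
            SET_FLAG(nexthop->flags, NEXTHOP_FLAG_ONLINK);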
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
If we receive a valid message from the kernel that
is either a kernel or system route, we should trust
that the route is legit and just use it.
Old behavior:
K * 172.22.0.0/15 [0/0] via 172.22.2.254, eva_dummy1 inactive, 00:00:16
New Behavior:
K>* 172.22.0.0/15 [0/0] via 172.22.2.254, eva_dummy1, 00:02:35
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The route entry being displayed in debugs was displaying
the originating route type as a number. While numbers
are cool, I for one am not terribly interested in
memorizing them. Modify the (type %d) to a (%s) to
just list the string type of the route.
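For example, a change of this shape (the surrounding message text is
illustrative; zebra_route_string() maps the route type number to its name):

    /* Before */
    zlog_debug("%s: route entry (type %d)", __func__, re->type);

    /* After */
    zlog_debug("%s: route entry (%s)", __func__,
               zebra_route_string(re->type));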
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Apparently 'f' means both OpenFabric and a Failed kernel
route installation.
Let's switch the 'f' for the failed kernel route installation
to 'r - rejected route'.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>