Discovered in testing that if a static route in the default table
was entered immediately after a vrf static block, the static route
intended for the default table was put in the vrf instead. This
fix retains the "exit-vrf" statement which causes the following
static routes to appear in the default table correctly.
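For illustration, the configuration shape involved looks roughly like this (the VRF name, prefixes, and nexthops are made up):
```
vrf blue
 ip route 10.2.0.0/24 192.168.1.2
 exit-vrf
!
ip route 10.3.0.0/24 192.168.2.2
```
Without the retained "exit-vrf", the second route would be absorbed into vrf blue instead of landing in the default table.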
Ticket: CM-23985
Signed-off-by: Don Slice <dslice@cumulusnetwork.com>
Problem caused when nclu is used to create "ip route 1.1.1.0/24
blackhole" because frr-reload.py changed the line to Null0 instead
of blackhole. If nclu tries to delete it using the same line as
entered, the commit fails since it doesn't match.
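Roughly, the mismatch looks like this (the comments are illustrative):
```
# line entered via nclu:
ip route 1.1.1.0/24 blackhole
# line frr-reload.py rewrote it to:
ip route 1.1.1.0/24 Null0
# a later delete of the "blackhole" form then finds no matching line
```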
Ticket: CM-23986
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
peer_flag_modify() will always return BGP_ERR_INVALID_FLAG because
no action was defined for the PEER_FLAG_IFPEER_V6ONLY flag.
```
global PEER_FLAG_IFPEER_V6ONLY = 16384;
global BGP_ERR_INVALID_FLAG = -2;

probe process("/usr/lib/frr/bgpd").statement("peer_flag_modify@/root/frr/bgpd/bgpd.c:3975")
{
	if ($flag == PEER_FLAG_IFPEER_V6ONLY && $action->type == 0)
		printf("action not found for the flag PEER_FLAG_IFPEER_V6ONLY\n");
}

probe process("/usr/lib/frr/bgpd").function("peer_flag_modify").return
{
	if ($return == BGP_ERR_INVALID_FLAG)
		printf("return BGP_ERR_INVALID_FLAG\n");
}
```
Running:
$ vtysh -c 'conf t' -c 'router bgp 20' -c 'neighbor eth1 interface v6only remote-as external'
produces:
action not found for the flag PEER_FLAG_IFPEER_V6ONLY
return BGP_ERR_INVALID_FLAG
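To make the failure mode concrete, here is a minimal, self-contained sketch of the lookup pattern the probes expose; the table and helper below are illustrative, not FRR's actual peer_flag_modify() internals:
```
#include <stdio.h>

#define PEER_FLAG_IFPEER_V6ONLY (1U << 14)   /* 16384, as probed above */
#define BGP_ERR_INVALID_FLAG    (-2)

struct flag_action {
	unsigned int flag;
	int type;               /* 0 == no action defined */
};

/* a flag with no usable entry here makes the modify call bail out */
static const struct flag_action actions[] = {
	/* PEER_FLAG_IFPEER_V6ONLY is missing, so the lookup below fails */
	{ 0, 0 },
};

static int flag_modify(unsigned int flag)
{
	for (size_t i = 0; i < sizeof(actions) / sizeof(actions[0]); i++)
		if (actions[i].flag == flag && actions[i].type != 0)
			return 0;
	return BGP_ERR_INVALID_FLAG;
}

int main(void)
{
	printf("%d\n", flag_modify(PEER_FLAG_IFPEER_V6ONLY));  /* prints -2 */
	return 0;
}
```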
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
When creating an OSPF VRF-based instance, allow it to work
if the VRF has been created *before* we create the OSPF
instance.
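An illustrative reproduction of the ordering this covers (the VRF name and table id are made up):
```
# the VRF exists first ...
ip link add blue type vrf table 10
ip link set blue up
# ... and only then is the OSPF instance created against it:
vtysh -c 'conf t' -c 'router ospf vrf blue'
```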
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When the RP gets deleted, find all the (*, G) upstreams whose group belongs
to the deleted RP and release each upstream from pnc->upstream_hash in the
function pim_delete_tracked_nexthop().
Signed-off-by: Sarita Patra <saritap@vmware.com>
When an RP gets deleted, find all the (*, G) upstreams whose
group belongs to the deleted RP.
Case 1: if the group belongs to any other RP, then call
pim_upstream_update() to update the upstream addr and rpf
information.
Case 2: if no RP is found for the group, then clear the pim
upstream address and rpf information.
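A hedged sketch of that per-upstream decision; the types and helpers below are stand-ins, not FRR's pim_upstream/pim_rp API (pim_upstream_update() is the real function named above):
```
#include <netinet/in.h>
#include <stdbool.h>

struct upstream {
	struct in_addr group;
	struct in_addr upstream_addr;   /* RP, or INADDR_ANY when unknown */
	bool rpf_valid;
};

/* stand-in: consult the remaining RP configuration for this group */
extern bool rp_lookup(struct in_addr group, struct in_addr *rp_out);
/* stand-in for pim_upstream_update(): refresh upstream addr + rpf */
extern void upstream_update(struct upstream *up, struct in_addr new_rp);

static void on_rp_deleted(struct upstream *up)
{
	struct in_addr new_rp;

	if (rp_lookup(up->group, &new_rp)) {
		/* case 1: another RP still covers this group */
		upstream_update(up, new_rp);
	} else {
		/* case 2: no RP left for the group */
		up->upstream_addr.s_addr = INADDR_ANY;
		up->rpf_valid = false;
	}
}
```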
Signed-off-by: Sarita Patra <saritap@vmware.com>
When a new RP is configured, find all the (*, G) upstreams whose
group belongs to the new RP and then update each upstream as follows:
1. De-register for the old RP.
2. Set the upstream address to the new RP.
3. Register for the new RP.
4. Update the upstream rpf information and the kernel multicast forwarding
cache (MFC), if the new RP is reachable.
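A hedged sketch of the ordering of those four steps; every type and helper here is a stand-in for the corresponding FRR operation, not its real name:
```
#include <netinet/in.h>
#include <stdbool.h>

struct upstream { struct in_addr upstream_addr; };

extern void nht_deregister(struct upstream *up, struct in_addr rp);
extern void nht_register(struct upstream *up, struct in_addr rp);
extern bool rp_reachable(struct in_addr rp);
extern void update_rpf_and_mfc(struct upstream *up);

static void on_new_rp(struct upstream *up, struct in_addr new_rp)
{
	nht_deregister(up, up->upstream_addr);  /* 1. de-register from the old RP */
	up->upstream_addr = new_rp;             /* 2. upstream address = new RP   */
	nht_register(up, new_rp);               /* 3. register for the new RP     */
	if (rp_reachable(new_rp))
		update_rpf_and_mfc(up);         /* 4. refresh RPF + kernel MFC    */
}
```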
Signed-off-by: Sarita Patra <saritap@vmware.com>
When the route to the RP gets modified, FRR receives a notification from
zebra and calls the function pim_resolve_upstream_nh() to compute the
nexthop and update the upstream->rpf structure.
Issue: when the RP becomes unreachable, FRR only uninstalls
the mroute from the kernel but does not update the upstream->rpf structure.
Fix: when FRR receives a notification from zebra saying the RP has become
unreachable, update the following fields.
1. Update the channel_oil incoming interface to MAXVIFS.
2. Un-install the mroute from the kernel.
3. Switch the upstream state from JOINED to NOTJOINED.
4. Clear the nexthop information of the upstream.
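A hedged sketch of those four steps; the structures and the mroute_uninstall() helper are stand-ins, not FRR's definitions, and MAXVIFS is assumed to be the kernel's VIF limit used as the "no incoming interface" marker:
```
#include <stdbool.h>

#define MAXVIFS 32   /* assumption: kernel mroute VIF limit */

enum join_state { NOTJOINED, JOINED };

struct channel_oil { int iif; };
struct upstream {
	enum join_state state;
	struct channel_oil *oil;
	bool nh_valid;
};

extern void mroute_uninstall(struct channel_oil *oil);   /* stand-in */

static void on_rp_unreachable(struct upstream *up)
{
	up->oil->iif = MAXVIFS;      /* 1. invalidate the incoming interface */
	mroute_uninstall(up->oil);   /* 2. remove the mroute from the kernel */
	up->state = NOTJOINED;       /* 3. JOINED -> NOTJOINED               */
	up->nh_valid = false;        /* 4. clear the cached nexthop          */
}
```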
Signed-off-by: Sarita Patra <saritap@vmware.com>
When the route to the RP gets modified, FRR receives a notification from
zebra and calls the function pim_update_rp_nh() to compute the
new nexthop and update the source_nexthop information of
rp_info. This does not work for the case when the RP becomes
unreachable.
Fix: when FRR receives a notification from zebra saying the RP has become
unreachable, delete the source_nexthop information of rp_info.
Signed-off-by: Sarita Patra <saritap@vmware.com>
In this commit, we are creating a dummy upstream & dummy channel_oil
for (*, G) when RP is not configured or not reachable.
Dummy upstream: <upstream_addr = INADDR_ANY, rpf = Unknown>
Dummy channel oil: <iif = MAXVIFS>
Signed-off-by: Sarita Patra <saritap@vmware.com>
When FRR receives IGMP/PIM (*, G) join and RP is not configured or not
reachable, then we are creating a dummy upstream with incoming interface
as NULL and upstream address as INADDR_ANY.
Added upstream address and incoming interface validation where it is necessary,
before doing any operation on the upstream.
Signed-off-by: Sarita Patra <saritap@vmware.com>
When FRR receives IGMP/PIM (*, G) join and RP is not configured or
not reachable, then we are creating a dummy upstream with incoming
interface as NULL.
Added some null checks for the incoming interface, while displaying
the pim upstream information in the cli command "show ip pim upstream".
Signed-off-by: Sarita Patra <saritap@vmware.com>
Added comments which explain the new values for existing fields
and the new fields in the upstream and channel_oil data structures.
Following is a summary of the behaviour changes in the PIM code.
Scenario 1: RP doesn't exist / RP not reachable
Event: Join received
Current behaviour:
No upstream gets created
Changed behaviour:
Upstream data structure created with below info
upstream_addr: INADDR_ANY
channel_oil iif: MAXVIFS
channel_oil is_valid: FALSE (flag introduced to indicate if this entry is valid to get installed in hardware)
RPF details: Not valid
Join state: NOT_JOINED
Kernel installed: FALSE
Scenario 2: Dummy upstream exists
Event: RP configured
Current Behaviour:
upstream address updated for the dummy upstream created.
Changed Behaviour:
upstream_addr: RP address
channel_oil iif: MAXVIFS
channel_oil is_valid: FALSE
RPF details: only RP address updated
Join state: NOT_JOINED
Kernel installed: FALSE
Scenario 3: Dummy upstream exists
Event: RP becomes reachable
Current Behaviour:
Update channel oil, rpf details in the upstream and install in hardware
Changed Behaviour:
upstream_addr: RP address
channel_oil iif: MAXVIFS
channel_oil is_valid: FALSE
RPF details: RPF details updated via NHT callback
Join state: JOINED
Kernel installed: TRUE
Scenario 4: MRoute exists
Event: RP gets deleted
Current behaviour:
Nothing got updated in the pim upstream and channel oil,
join timer still runs. Mroute still exists in kernel.
Changed behaviour:
upstream_addr: INADDR_ANY
channel_oil iif: MAXVIFS
channel_oil is_valid: FALSE
RPF details: Not valid
Join state: NOT_JOINED (also sent prune towards deleted RPF nbr)
Kernel installed: FALSE
Scenario 5: MRoute Exists
Event: RP unreachable
Current behaviour:
Nothing got updated in the pim upstream and channel oil,
join timer still runs. Mroute is deleted from the kernel.
Changed behaviour:
upstream_addr: RP address
channel_oil iif: MAXVIFS
channel_oil is_valid: FALSE
RPF details: only RP address updated
Join state: NOT_JOINED (also sent prune towards deleted RPF nbr)
Kernel installed: FALSE
Scenario 6: Mroute exists
Event: Better RP configured with a more specific group range & reachable.
Current behaviour:
No effect on existing route.
Changed behaviour:
Upstream address: Better RP
RPF interface: towards the better RP
Join state: JOINED (Send a prune towards the old RP and send a join
towards the better RP)
Scenario 7: Mroute exists
Event: RP deleted and another reachable RP with a broader group range covers this group
Current behaviour:
No effect on current behaviour
Changed behaviour:
Upstream address: next available RP
RPF interface: towards the next available RP
Join state: JOINED (Send a prune towards the old RP and send a join
towards the next available RP)
Signed-off-by: Sarita Patra <saritap@vmware.com>
Display only ipv4 neighbors when 'show bgp ipv4 neighbors' command is issued.
Display only ipv6 neighbors when 'show bgp ipv6 neighbors' command is issued.
Take the address family of the peer address into account while displaying the neighbors.
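A minimal sketch of the filter this implies; the structure and helper are illustrative, not bgpd's actual show-neighbor code:
```
#include <sys/socket.h>
#include <stdbool.h>

struct peer_addr { int family; };   /* AF_INET or AF_INET6 */

static bool show_this_peer(const struct peer_addr *peer, int requested_family)
{
	/* 'show bgp ipv4 neighbors' -> AF_INET, 'show bgp ipv6 neighbors' -> AF_INET6 */
	return peer->family == requested_family;
}
```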
Signed-off-by: Akhilesh Samineni <akhilesh.samineni@broadcom.com>
Running zebra after commit 888756b208
in valgrind produces this item:
==17102== Invalid read of size 8
==17102== at 0x44D84C: rib_dest_from_rnode (rib.h:375)
==17102== by 0x4546ED: rib_process_result (zebra_rib.c:1904)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102== Address 0x83bd468 is 88 bytes inside a block of size 96 free'd
==17102== at 0x4A35F54: free (vg_replace_malloc.c:530)
==17102== by 0x4CCAC00: qfree (memory.c:129)
==17102== by 0x4D03DC6: route_node_destroy (table.c:501)
==17102== by 0x4D039EE: route_node_free (table.c:90)
==17102== by 0x4D03971: route_node_delete (table.c:382)
==17102== by 0x44D82A: route_unlock_node (table.h:256)
==17102== by 0x454617: rib_process_result (zebra_rib.c:1882)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102== Block was alloc'd at
==17102== at 0x4A36FF6: calloc (vg_replace_malloc.c:752)
==17102== by 0x4CCAA2D: qcalloc (memory.c:110)
==17102== by 0x4D03D88: route_node_create (table.c:489)
==17102== by 0x4D0360F: route_node_new (table.c:65)
==17102== by 0x4D034F8: route_node_set (table.c:74)
==17102== by 0x4D03486: route_node_get (table.c:327)
==17102== by 0x4CFB700: srcdest_rnode_get (srcdest_table.c:243)
==17102== by 0x4545C1: rib_process_result (zebra_rib.c:1872)
==17102== by 0x45436D: rib_process_dplane_results (zebra_rib.c:3295)
==17102== by 0x4D0902B: thread_call (thread.c:1607)
==17102== by 0x4CC3983: frr_run (libfrr.c:1011)
==17102== by 0x4266F6: main (main.c:473)
==17102==
This is happening because of this order of events:
1) Route is deleted in the main thread and scheduled for rib processing.
2) Rib garbage collection is run and we remove the route node since it
is no longer needed.
3) Data plane returns from the deletion in the kernel and we call
the srcdest_rnode_get function to get the prefix that was deleted.
This recreates the route node with a lock count of 1, which we then
freed via the route_unlock_node call, but we continued to use the rn
pointer afterwards, leaving us with a use after free.
The solution is, of course, to just move the unlock of the node to the
end of the function, if we have a route_node.
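A hedged sketch of that reordering; do_more_work_with() stands in for the remainder of rib_process_result() and is not a real function:
```
struct route_node;

extern void route_unlock_node(struct route_node *node);   /* lib/table.h */
extern void do_more_work_with(struct route_node *rn);      /* stand-in */

static void rib_process_result_sketch(struct route_node *rn)
{
	/* Before the fix: unlocking here could drop the last reference and
	 * free rn, yet the code below kept dereferencing the pointer. */

	do_more_work_with(rn);

	/* After the fix: unlock only at the end, once rn is no longer needed. */
	if (rn)
		route_unlock_node(rn);
}
```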
Fixes: #3854
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The community_delete and lcommunity_delete functionality was
creating a special string that needed to be specially parsed.
Remove all this string creation and just pass the pertinent
data into the appropriate functions.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Commit: 6005fe55bc
Introduced a crash with zebra looking up either the
nbr structure or the mac structure. This is because
the zvni used is NULL and we eventually make a hash_lookup
call that would cause a NULL dereference. Partially
revert this commit to original behavior.
Problems found via clang Static Analyzer.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The struct prefix *prefix is really a const struct prefix *.
This was causing compile warnings (promoted to errors) on some compilers.
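A minimal illustration of the class of diagnostic being fixed (not the actual FRR code):
```
struct prefix;   /* opaque here; the real definition lives in the lib */

void takes_nonconst(struct prefix *p);

void caller(const struct prefix *prefix)
{
	/* passing a const pointer where a non-const one is expected discards
	 * the qualifier: a warning, and an error under -Werror; the fix is to
	 * declare the parameter as const struct prefix * */
	takes_nonconst(prefix);
}
```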
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
L3VNI keeps a reference to the SVI interface (ifp).
When a netlink change is received, there is no flag indicating
that the MAC has changed; currently we simply overwrite the
interface's (ifp) hw_addr (MAC) field.
To originate EVPN type-2 and type-5 routes on a VNI
MAC change, a comparison is required between the existing MAC
and the MAC in the netlink change.
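A hedged sketch of the comparison being added; the structure and the trigger helper are stand-ins for zebra's interface/L3VNI handling, not its actual fields:
```
#include <string.h>

#define MAC_LEN 6

struct iface { unsigned char hw_addr[MAC_LEN]; };

extern void reoriginate_evpn_routes(struct iface *ifp);   /* stand-in */

static void on_netlink_addr_change(struct iface *ifp, const unsigned char *new_mac)
{
	/* compare before overwriting, so a genuine MAC change can re-originate
	 * EVPN type-2/type-5 routes with the new RMAC */
	if (memcmp(ifp->hw_addr, new_mac, MAC_LEN) != 0) {
		memcpy(ifp->hw_addr, new_mac, MAC_LEN);
		reoriginate_evpn_routes(ifp);
	}
}
```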
Ticket:CM-23850
Reviewed By:CCR-8283
Testing Done:
Validated that EVPN type-5 routes are originated upon changing the MAC address
of the L3VNI's SVI interface via the ip link set cmd.
Checked show bgp l2vpn evpn route and the Rmac field contains the new
MAC address.
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
In order to iterate over MPLS VPN routes, it's necessary to use
two nested loops (the outer loop iterates over the MPLS VPN RDs,
and the inner loop iterates over the VPN routes from that RD).
The add-path code wasn't doing this, which was leading to lots of
crashes when add-path was enabled for the MPLS VPN SAFI. This patch
fixes the problem.
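A hedged sketch of the two-level walk; the table/node types and iteration helpers below are stand-ins, not bgpd's actual table API:
```
struct vpn_table;
struct vpn_node;

extern struct vpn_node *table_top(struct vpn_table *t);
extern struct vpn_node *table_next(struct vpn_node *n);
extern struct vpn_table *node_subtable(struct vpn_node *n);   /* per-RD table */
extern void visit_route(struct vpn_node *n);

static void walk_mpls_vpn(struct vpn_table *rd_table)
{
	/* outer loop: one node per MPLS VPN Route Distinguisher */
	for (struct vpn_node *rd = table_top(rd_table); rd; rd = table_next(rd)) {
		struct vpn_table *routes = node_subtable(rd);

		if (!routes)
			continue;

		/* inner loop: the VPN routes under that RD */
		for (struct vpn_node *rn = table_top(routes); rn; rn = table_next(rn))
			visit_route(rn);
	}
}
```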
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
If path->net is NULL in the bgp_path_info_free() function, then
bgpd would crash in bgp_addpath_free_info_data() with the following
backtrace:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ff7b267a42a in __GI_abort () at abort.c:89
#2 0x00007ff7b39c1ca0 in core_handler (signo=11, siginfo=0x7ffff66414f0, context=<optimized out>) at lib/sigevent.c:249
#3 <signal handler called>
#4 idalloc_free_to_pool (pool_ptr=pool_ptr@entry=0x0, id=3) at lib/id_alloc.c:368
#5 0x0000560096246688 in bgp_addpath_free_info_data (d=d@entry=0x560098665468, nd=0x0) at bgpd/bgp_addpath.c:100
#6 0x00005600961bb522 in bgp_path_info_free (path=0x560098665400) at bgpd/bgp_route.c:252
#7 bgp_path_info_unlock (path=0x560098665400) at bgpd/bgp_route.c:276
#8 0x00005600961bb719 in bgp_path_info_reap (rn=rn@entry=0x5600986b2110, pi=pi@entry=0x560098665400) at bgpd/bgp_route.c:320
#9 0x00005600961bf4db in bgp_process_main_one (safi=SAFI_MPLS_VPN, afi=AFI_IP, rn=0x5600986b2110, bgp=0x560098587320) at bgpd/bgp_route.c:2476
#10 bgp_process_wq (wq=<optimized out>, data=0x56009869b8f0) at bgpd/bgp_route.c:2503
#11 0x00007ff7b39d5fcc in work_queue_run (thread=0x7ffff6641e10) at lib/workqueue.c:294
#12 0x00007ff7b39ce3b1 in thread_call (thread=thread@entry=0x7ffff6641e10) at lib/thread.c:1606
#13 0x00007ff7b39a3538 in frr_run (master=0x5600980795b0) at lib/libfrr.c:1011
#14 0x000056009618a5a3 in main (argc=3, argv=0x7ffff6642078) at bgpd/bgp_main.c:481
Add a null-check protection to fix this problem.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>