Under heavy system load with many peers in passive mode and a large
number of routes, bgpd can enter an infinite loop while processing
timed-out BGP_OPEN messages, which prevents it from accepting new
connections. The following log entries illustrate the issue:
>bgpd[6151]: [VX6SM-8YE5W][EC 33554460] 3.3.2.224: nexthop_set failed, resetting connection - intf 0x0
>bgpd[6151]: [P790V-THJKS][EC 100663299] bgp_open_receive: bgp_getsockname() failed for peer: 3.3.2.224
>bgpd[6151]: [HTQD2-0R1WR][EC 33554451] bgp_process_packet: BGP OPEN receipt failed for peer: 3.3.2.224
... repeating
The issue occurs when bgpd handles a massive number of routes in the RIB
while receiving numerous BGP_OPEN packets. If bgpd is overloaded, it
fails to process these packets promptly, leading the remote peer to
close the connection and resend BGP_OPEN packets.
When bgpd eventually starts processing these timed-out BGP_OPEN
packets, it finds the TCP connection already closed by the remote peer,
resulting in "bgp_stop()" being called. For each timed-out peer, bgpd
must iterate through the routing table, which is time-consuming and
causes new incoming BGP_OPEN packets to time out, perpetuating the
infinite loop.
To address this issue, the code is modified to check if the peer has
been established at least once before calling "bgp_clear_route_all()".
This ensures that routes are only cleared for peers that had a
successful session, preventing unnecessary iterations over the routing
table for peers that never established a connection.
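A minimal sketch of the guard, assuming a hypothetical established_once
flag on a simplified peer structure (the real bgpd code tracks this
state on its own peer structure):

#include <stdbool.h>

/* Simplified stand-ins for the real bgpd structures. */
struct peer_sketch {
	bool established_once; /* set the first time the session reaches Established */
};

static void bgp_clear_route_all_sketch(struct peer_sketch *peer)
{
	(void)peer; /* would walk the RIB and clear this peer's routes */
}

static void bgp_stop_sketch(struct peer_sketch *peer)
{
	/* Only walk the routing table for peers that actually had a
	 * session; a peer that never reached Established has nothing
	 * to clear, so skipping the walk avoids the expensive loop. */
	if (peer->established_once)
		bgp_clear_route_all_sketch(peer);

	/* ... rest of the teardown ... */
}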
With this change, BGP_OPEN timeout messages may still occur, but even
in the worst case bgpd will stabilize. Before this patch, bgpd could
enter a loop in which it was unable to accept any new connections.
Signed-off-by: Loïc Sang <loic.sang@6wind.com>
The backup_nexthop entry list was being populated by mistake and
should not have been. Fix this by reverting the introduced behavior.
Fixes: 237ebf8d45 ("bgpd: rework bgp_zebra_announce() function, separate nexthop handling")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
In the case of an EVPN MH bond, when a member port goes into protodown
state due to an external reason (one case being linkflap), FRR updates
the state correctly. However, manually clearing the external reason
triggers FRR to reinstate protodown without any reason code.
The fix is to ensure that if the protodown reason was external and the
new state is protodown 'off', protodown is not reinstated.
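A minimal sketch of the check, with illustrative names (the real zebra
code works on its interface and protodown-reason structures):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative external-reason bit; zebra defines its own set of
 * protodown reason flags. */
#define PDOWN_REASON_EXTERNAL (1U << 0)

/* Return false when the recorded reason was external and the new
 * request turns protodown off: in that case zebra must not
 * re-program protodown on the bond member. */
static bool should_reinstate_protodown(uint32_t old_reason,
				       bool new_protodown)
{
	if (!new_protodown && (old_reason & PDOWN_REASON_EXTERNAL))
		return false;

	return true;
}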
Ticket: #3947432
Testing:
switch:#ip link show swp1
4: swp1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9216 qdisc
pfifo_fast master bond1 state DOWN mode DEFAULT group default qlen
1000
link/ether 1c:34:da:2c:aa:68 brd ff:ff:ff:ff:ff:ff protodown on
protodown_reason <linkflap>
switch:#ip link set swp1 protodown off protodown_reason linkflap off
switch:#ip link show swp1
4: swp1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9216 qdisc
pfifo_fast master bond1 state DOWN mode DEFAULT group default qlen
1000
link/ether 1c:34:da:2c:aa:68 brd ff:ff:ff:ff:ff:ff
Signed-off-by: Chirag Shah <chirag@nvidia.com>
In a scaled EVPN + IPv4/IPv6 unicast route sync to zebra, some of the
IPv4/IPv6 routes skipped reinstallation due to a local variable's
stale value.
Once the local variable is reset in each loop iteration, all previously
skipped routes are synced to zebra properly.
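The pattern of the fix, as an illustrative sketch (the names are
hypothetical; the real code lives in the EVPN route sync path):

#include <stdbool.h>
#include <stddef.h>

struct route_sketch {
	bool needs_skip; /* stand-in for the per-route condition */
};

static size_t sync_routes_sketch(struct route_sketch *routes, size_t n)
{
	size_t synced = 0;

	for (size_t i = 0; i < n; i++) {
		/* The fix: reset the flag on every iteration so a stale
		 * value from a previous route cannot suppress the sync
		 * of later routes. */
		bool skip = false;

		if (routes[i].needs_skip)
			skip = true;

		if (!skip)
			synced++; /* would send this route to zebra here */
	}

	return synced;
}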
Ticket: #3948828
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Use the name when putting out debugs in bgp_zebra.c.
Additionally, add an evpn flag for announce_route_actual.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The following table is not compliant with camelCase format when
displayed in JSON:
> ttable_add_row(
> tt,
> "Vertex|Type|Metric|Next-Hop|Interface|Parent");
>
> ttable_json(tt, "ssdsss");
output observed:
> [..]
> {
> "Vertex":"r1",
> "Type":"",
> "Metric":0,
> "Next-Hop":"",
> "Interface":"",
> "Parent":""
> }
output expected:
> [..]
> {
> "vertex":"r1",
> "type":"",
> "metric":0,
> "nextHop":"",
> "interface":"",
> "parent":""
> }
Override the ttable_json() function with a new function that takes an
extra parameter: this parameter redefines the initial row value used
for the JSON output:
> ttable_json_with_json_text(tt,
> "vertex|type|metric|nextHop|interface|parent");
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When using weighted ECMP, the weight for a non-recursive next-hop
should be inherited from the recursive next-hop.
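An illustrative sketch of the inheritance, using a simplified nexthop
type (zebra's nexthop structure carries a weight field used for
weighted ECMP):

#include <stdint.h>

struct nh_sketch {
	struct nh_sketch *next;
	uint8_t weight;
};

/* When a recursive next-hop resolves to one or more non-recursive
 * next-hops, copy its configured weight onto each resolved entry
 * instead of leaving them with a default weight. */
static void inherit_weight_sketch(const struct nh_sketch *recursive,
				  struct nh_sketch *resolved)
{
	for (struct nh_sketch *nh = resolved; nh; nh = nh->next)
		nh->weight = recursive->weight;
}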
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
In the near future, some daemons may only register SIDs; this may be
the case for the pathd daemon when creating SRv6 binding SIDs.
When a locator is deleted at the zebra level, the daemon needs an easy
way to find out which SIDs to unregister.
This commit proposes adding the locator name to the SID_SRV6_NOTIFY
message whenever possible. Only when an allocation failure happens will
the locator not be present. In all other cases, the notify API at the
protocol level carries the locator name as an extra parameter.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
1. Router A is configured with "is-type level-1-2", while Router B is configured with "is-type level-1". Only level 1 neighbor entries are present on Router A.
2. After configuring Router B with "is-type level-2-only", both level 1 and level 2 neighbor entries exist on Router A. The state of these entries is UP, and the level 1 neighbor entry is currently aging.
3. Before the level 1 neighbor entry on Router A ages out, Router B is configured with "is-type level-1"; both level 1 and level 2 neighbor entries still exist on Router A. The level 2 neighbor entry is UP and will age out normally. However, the level 1 neighbor entry remains in the Initializing state, preventing the establishment of a level 1 neighbor adjacency between Router A and Router B.
When the adjacency type of the link is switched in
isis_circuit_is_type_set(), circuit_resign_level() is called to delete
the old level's circuit->u.bc.lan_neighs linked list, and if the old
level is not level-1-2, circuit_commence_level() is called to create
the new level's circuit->u.bc.lan_neighs linked list. Neither of these
functions handles the circuit->u.bc.adjdb linked list. This leads to a
situation where, if hello packets are received again before the
circuit->u.bc.adjdb entries age out, the circuit->u.bc.lan_neighs list
is not rebuilt from the circuit->u.bc.adjdb list. As a result, the
hello packets sent will consistently lack an SNPA, and the neighbor
remains unable to establish an adjacency upon receiving them.
Signed-off-by: zhou-run <166502045+zhou-run@users.noreply.github.com>
When removing a large number of routes, the Linux kernel can hold the
CPU for an extended amount of time, leaving a situation where FRR
detects a starvation event.
r1# sharp install routes 10.0.0.0 nexthop 192.168.44.33 1000000 repeat 10
2024-06-14 12:55:49.365 [NTFY] sharpd: [M7Q4P-46WDR] vty[5]@# sharp install routes 10.0.0.0 nexthop 192.168.44.33 1000000 repeat 10
2024-06-14 12:55:49.365 [DEBG] sharpd: [YP4TQ-01TYK] Inserting 1000000 routes
2024-06-14 12:55:57.256 [DEBG] sharpd: [TPHKD-3NYSB] Installed All Items 7.890085
2024-06-14 12:55:57.256 [DEBG] sharpd: [YJ486-NX5R1] Removing 1000000 routes
2024-06-14 12:56:07.802 [WARN] zebra: [QH9AB-Y4XMZ][EC 100663314] STARVATION: task dplane_thread_loop (634377bc8f9e) ran for 7078ms (cpu time 220ms)
2024-06-14 12:56:25.039 [DEBG] sharpd: [WTN53-GK9Y5] Removed all Items 27.783668
2024-06-14 12:56:25.039 [DEBG] sharpd: [YP4TQ-01TYK] Inserting 1000000 routes
2024-06-14 12:56:32.783 [DEBG] sharpd: [TPHKD-3NYSB] Installed All Items 7.743524
2024-06-14 12:56:32.783 [DEBG] sharpd: [YJ486-NX5R1] Removing 1000000 routes
2024-06-14 12:56:41.447 [WARN] zebra: [QH9AB-Y4XMZ][EC 100663314] STARVATION: task dplane_thread_loop (634377bc8f9e) ran for 5175ms (cpu time 179ms)
Let's modify the loop in dplane_thread_loop so that after a provider
has been run, we check whether the event should yield; if so, stop and
reschedule the remaining work for the future.
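The shape of the change, as a hedged sketch (the provider type, the
yield check, and the reschedule helper are placeholders for the zebra
dataplane and event-library equivalents):

#include <stdbool.h>

struct provider_sketch {
	struct provider_sketch *next;
};

static void run_provider(struct provider_sketch *prov) { (void)prov; }
static bool task_should_yield(void) { return false; /* time-slice check */ }
static void reschedule_dplane_work(void) { /* re-queue remaining work */ }

static void dplane_thread_loop_sketch(struct provider_sketch *providers)
{
	struct provider_sketch *prov;

	for (prov = providers; prov; prov = prov->next) {
		run_provider(prov);

		/* After each provider, check whether this task has used
		 * up its time slice; if so, stop here and let the rest
		 * run on a future scheduling instead of starving other
		 * tasks. */
		if (task_should_yield()) {
			reschedule_dplane_work();
			break;
		}
	}
}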
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Instead of keeping a counter that is independent of the queue's data
structure, just use the queue's built-in counter. Ensure that it is
pthread safe by keeping accesses wrapped inside the mutex used for
adding to and deleting from the queue.
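A small sketch of the idea, with placeholder names (the real code uses
the dataplane context queue and its list's own count):

#include <pthread.h>
#include <stdint.h>

struct ctx_queue_sketch {
	pthread_mutex_t lock;
	uint32_t count; /* stands in for the list's built-in counter */
};

/* Read the queue's own counter under the same mutex that guards
 * enqueue/dequeue, instead of maintaining a separate counter that can
 * drift from the queue's real length. */
static uint32_t ctx_queue_len_sketch(struct ctx_queue_sketch *q)
{
	uint32_t len;

	pthread_mutex_lock(&q->lock);
	len = q->count;
	pthread_mutex_unlock(&q->lock);

	return len;
}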
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, when a locator is deleted in zebra, zebra notifies only the
zclient that owns the locator.
With the introduction of SID Manager, the locator is no longer owned by
any client. Instead, the locator is owned by Zebra, and clients can
allocate and release SIDs from the locator using the ZAPI
ZEBRA_SRV6_MANAGER_GET_SID and ZEBRA_SRV6_MANAGER_RELEASE_SID.
Therefore, when a locator is removed in Zebra, we need to notify all
daemons so that they can release/uninstall the SIDs allocated by that
locator.
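Conceptually, the notification fan-out looks like this sketch (the
client list and send helper are illustrative placeholders, not zebra's
actual structures):

struct zclient_sketch {
	struct zclient_sketch *next;
};

static void send_locator_delete_notify(struct zclient_sketch *client,
				       const char *locator_name)
{
	(void)client;
	(void)locator_name; /* would encode and send the ZAPI notification */
}

/* Walk every connected client, not just the locator's former owner,
 * so each daemon can release/uninstall the SIDs it allocated from the
 * deleted locator. */
static void notify_locator_delete_sketch(struct zclient_sketch *clients,
					 const char *locator_name)
{
	for (struct zclient_sketch *c = clients; c; c = c->next)
		send_locator_delete_notify(c, locator_name);
}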
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
Send asynchronous notifications to zclients when an SRv6 SID is
allocated/released and when a SID alloc/release operation fails.
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
Add a new ZAPI command `ZEBRA_SRV6_SID_NOTIFY` used by zebra to send
asynchronous SRv6 SIDs notifications to zclients.
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
Previous commits introduced two new ZAPI operations,
`ZEBRA_SRV6_MANAGER_GET_SRV6_SID` and
`ZEBRA_SRV6_MANAGER_RELEASE_SRV6_SID`. These operations allow a daemon
to interact with the SRv6 SID Manager to get and release an SRv6 SID,
respectively.
This commit extends the SID Manager by adding logic to process the
requests `ZEBRA_SRV6_MANAGER_GET_SRV6_SID` and
`ZEBRA_SRV6_MANAGER_RELEASE_SRV6_SID`, and allocate/release SIDs to
requesting daemons.
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
Add functions to allocate/release SRv6 SIDs. SIDs can be allocated
either explicitly (allocate a specific SID) or dynamically (allocate any
available SID).
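A sketch of the two allocation modes, with hypothetical helpers (the
real SID Manager allocates from zebra's locator structures):

#include <stdbool.h>
#include <netinet/in.h>

static bool sid_is_free(const struct in6_addr *sid)
{
	(void)sid;
	return true; /* would look the SID up in the locator's block */
}

static bool pick_free_sid(struct in6_addr *out)
{
	(void)out;
	return true; /* would pick any unused SID from the locator */
}

/* Explicit allocation: the caller asks for a specific SID and the
 * request fails if it is already taken. Dynamic allocation: any
 * available SID from the locator is handed out. */
static bool srv6_sid_alloc_sketch(bool explicit_alloc,
				  const struct in6_addr *requested,
				  struct in6_addr *out)
{
	if (explicit_alloc) {
		if (!sid_is_free(requested))
			return false;
		*out = *requested;
		return true;
	}

	return pick_free_sid(out);
}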
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
The previous commits introduced a new operation,
`ZEBRA_SRV6_MANAGER_GET_LOCATOR`, allowing a daemon to request
information about a specific SRv6 locator from the SRv6 SID Manager.
This commit extends the SID Manager to respond to a
`ZEBRA_SRV6_MANAGER_GET_LOCATOR` request and provide the requested
locator information.
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>