When running the bgp_bmp_2 vrf test, VRF peer up/down events from the
pre- and post-policy feeds are received with a wrong peer distinguisher
value.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy",
> "ipv6": true, "peer_ip": "192:168::2", "peer_distinguisher": "0:0",
> "peer_asn": 65502, "peer_bgp_id": "192.168.0.2", "timestamp":
> "2024-10-16 21:59:53.111962", "bmp_log_type": "peer up", "local_ip":
> "192:168::1", "local_port": 179, "remote_port": 50836, "seq": 5}
RFC 7854, section 4.2, states that if the peer is an "RD Instance
Peer", the peer distinguisher is set to the route distinguisher of the
particular instance the peer belongs to.
Fix this in the BMP client by filling in the peer distinguisher value
in the bmp_peerstate() function.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy",
> "ipv6": true, "peer_ip": "192:168::2", "peer_distinguisher": "444:1",
> "peer_asn": 65502, "peer_bgp_id": "192.168.0.2", "timestamp":
> "2024-10-16 21:59:53.111962", "bmp_log_type": "peer up", "local_ip":
> "192:168::1", "local_port": 179, "remote_port": 50836, "seq": 5}
Add a test to check that the peer_distinguisher value is not 0:0 when
an RD instance is set.
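For reference, the 8-byte peer distinguisher field simply carries the
route distinguisher of the instance. A minimal, self-contained sketch
(not the actual FRR code) of how a type-0 RD such as 444:1 maps onto
those bytes:
```
#include <stdint.h>
#include <stdio.h>

/* Encode a type-0 route distinguisher (2-byte ASN : 4-byte value),
 * e.g. "444:1", into the 8-byte BMP peer distinguisher field. */
static void rd_type0_encode(uint16_t asn, uint32_t value, uint8_t out[8])
{
	out[0] = 0x00;			/* RD type 0, high byte */
	out[1] = 0x00;			/* RD type 0, low byte */
	out[2] = asn >> 8;
	out[3] = asn & 0xff;
	out[4] = value >> 24;
	out[5] = (value >> 16) & 0xff;
	out[6] = (value >> 8) & 0xff;
	out[7] = value & 0xff;
}

int main(void)
{
	uint8_t pd[8];

	rd_type0_encode(444, 1, pd);	/* RD 444:1 as in the test above */
	for (int i = 0; i < 8; i++)
		printf("%02x ", pd[i]);
	printf("\n");
	return 0;
}
```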
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running the bgp_bmp_2 vrf test, peer up/down events from the pre-
and post-policy feeds are received with a wrong peer type value.
> {"peer_type": "global instance", "policy": "pre-policy", "ipv6": false,
> "peer_ip": "192.168.0.2", "peer_distinguisher": "0:0", "peer_asn": 65502,
> "peer_bgp_id": "192.168.0.2", "timestamp": "2024-10-16 21:59:53.111962",
> "bmp_log_type": "peer up", "local_ip": "192.168.0.1", "local_port": 179,
> "remote_port": 50710, "seq": 4}
RFC 7854 defines the RD instance peer type, and section 4.2 requests
that the peer distinguisher be set to a non-zero value when the peer
type is not global. This is the case for peers in VRF instances.
Fix this in the BMP client by setting the correct peer type when
sending peer up/down messages.
Add a check in the bgp_bmp_2 test to ensure that the peer type is
correctly set.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy",
> "ipv6": true, "peer_ip": "192:168::2", "peer_distinguisher": "0:0",
> "peer_asn": 65502, "peer_bgp_id": "192.168.0.2", "timestamp":
> "2024-10-16 21:59:53.111962", "bmp_log_type": "peer up", "local_ip":
> "192:168::1", "local_port": 179, "remote_port": 50836, "seq": 5}
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running the bgp_bmp_2 vrf test, peer route messages from the pre-
and post-policy feeds are received with a wrong peer distinguisher
value.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy", "ipv6": false,
> "peer_ip": "192.168.0.2", "peer_distinguisher": "0:0", "peer_asn": 65502,
> "peer_bgp_id": "192.168.0.2", "timestamp": "2024-10-31 08:19:58.111963",
> "bmp_log_type": "update", "origin": "IGP", "as_path": "65501 65502",
> "bgp_nexthop": "192.168.0.2", "ip_prefix": "172.31.0.15/32", "seq": 15}
RFC 7854, section 4.2, states that if the peer is an "RD Instance
Peer", the peer distinguisher is set to the route distinguisher of the
particular instance the peer belongs to.
Fix this by modifying the BMP client:
- fill in the peer distinguisher value instead of leaving it zeroed;
this change impacts monitoring messages.
- add the peer distinguisher computation for mirror messages.
- update the peer_distinguisher value expected by the bgp_bmp_2 vrf test.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy", "ipv6": false,
> "peer_ip": "192.168.0.2", "peer_distinguisher": "444:1", "peer_asn": 65502,
> "peer_bgp_id": "192.168.0.2", "timestamp": "2024-10-31 08:19:58.111963",
> "bmp_log_type": "update", "origin": "IGP", "as_path": "65501 65502",
> "bgp_nexthop": "192.168.0.2", "ip_prefix": "172.31.0.15/32", "seq": 15}
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running the bgp_bmp_2 vrf test, peer route messages from the pre-
and post-policy feeds are received with a wrong peer type value.
> {"peer_type": "global instance", "policy": "pre-policy", "ipv6": false,
> "peer_ip": "192.168.0.2", "peer_distinguisher": "0:0", "peer_asn": 65502,
> "peer_bgp_id": "192.168.0.2", "timestamp": "2024-10-31 08:19:58.111963",
> "bmp_log_type": "update", "origin": "IGP", "as_path": "65501 65502",
> "bgp_nexthop": "192.168.0.2", "ip_prefix": "172.31.0.15/32", "seq": 15}
In addition to global instance peers, RFC 7854 defines RD instance
peers. This type applies to peers that are in a BGP VRF instance, for
example with an L3VPN setup. When a BGP VRF instance is configured,
its peers should be reported as RD instance peers.
Fix this by modifying the BMP client:
- update the peer type for VRF mirror and monitoring messages.
- modify the bgp_bmp_2 vrf test to check the peer_type value.
> {"peer_type": "route distinguisher instance", "policy": "pre-policy", "ipv6": false,
> "peer_ip": "192.168.0.2", "peer_distinguisher": "0:0", "peer_asn": 65502,
> "peer_bgp_id": "192.168.0.2", "timestamp": "2024-10-31 08:19:58.111963",
> "bmp_log_type": "update", "origin": "IGP", "as_path": "65501 65502",
> "bgp_nexthop": "192.168.0.2", "ip_prefix": "172.31.0.15/32", "seq": 15}
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
All BMP peer up/down messages currently send a 0:0 peer distinguisher.
This will no longer be correct once the RD instance peer type is added.
Add code to get the peer distinguisher value:
- modify the API to pass the BGP instance instead of the BMP instance.
- implement the error cases: an unknown VRF identifier, or a peer type
with the local type value.
- handle the error return of the API; consequently, handle the
bmp_peerstate() error return in the calling functions.
There is no functional change, as the peer type value is either loc-rib
or global, and both cases are already handled.
The next commit will handle the RD instance case.
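A rough sketch of what such a helper could look like (the name, the
signature and the flat uint64_t RD representation are illustrative
only, not the actual FRR API):
```
#include <stdint.h>

#define VRF_UNKNOWN 0xffffffffu

/* Sketch: given the BGP instance's VRF id, its RD and the BMP peer
 * type, compute the peer distinguisher or report an error. */
static int bmp_peer_distinguisher_get(uint32_t vrf_id, uint64_t vrf_rd,
				      uint8_t peer_type, uint64_t *pd)
{
	switch (peer_type) {
	case 0: /* global instance peer: distinguisher stays 0:0 */
		*pd = 0;
		return 0;
	case 1: /* RD instance peer */
	case 3: /* loc-rib instance peer of a VRF */
		if (vrf_id == VRF_UNKNOWN)
			return -1;	/* error: unknown VRF identifier */
		*pd = vrf_rd;		/* RD of the L3VRF, 0:0 for default */
		return 0;
	default: /* local instance peer type is not supported */
		return -1;
	}
}
```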
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When a given L3VRF instance requests a peer distinguisher for a peer
up/down message, the AFI_UNSPEC afi parameter is used, and no RD is
chosen for this AFI.
Fix this by prioritizing the AFI_IP value over the AFI_IP6 value. For
instance, on a router with an RD configured for each address-family,
peer up/down messages will be sent with the RD of AFI_IP.
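A minimal sketch of the intended ordering (illustrative names only):
```
enum { AFI_UNSPEC = 0, AFI_IP = 1, AFI_IP6 = 2 };

/* Illustrative only: when a peer up/down message asks for an RD
 * without a specific address-family, prefer the IPv4 RD over the
 * IPv6 one. */
static int rd_afi_for_peerstate(int rd_set_ipv4, int rd_set_ipv6)
{
	if (rd_set_ipv4)
		return AFI_IP;
	if (rd_set_ipv6)
		return AFI_IP6;
	return AFI_UNSPEC;	/* no RD configured for either family */
}
```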
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The BMP implementation currently only supports the global and loc-rib
instance peer types. When loc-rib is selected, the peer_distinguisher
is set to the route distinguisher of the L3VRF where the BGP instance
resides. This functionality has not been exercised until now, because
the peer distinguisher value had been explicitly omitted from the BMP
messages.
Expose the peer distinguisher value in all BMP messages. This change
requires modifying the expected output for loc-rib when the BGP
instance is in an L3VRF.
The handling of the peer distinguisher value for RD instances will
follow in the next commits.
Link: https://www.rfc-editor.org/rfc/rfc7854.html#section-4.2
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running the bgp_bmp test, peer up messages from the loc-rib are
received with a wrong peer type.
> {"peer_type": "global instance", "policy": "pre-policy", "ipv6": false, "peer_ip": "0.0.0.0",
> "peer_distinguisher": "0:0", "peer_asn": 0, "peer_bgp_id": "0.0.0.0",
> "timestamp": "2024-10-16 21:59:53.111963", "bmp_log_type": "peer up", "local_ip": "0.0.0.0",
> "local_port": 0, "remote_port": 0, "seq": 1}
RFC 9069, section 5.1, says that the peer address must be set to
0.0.0.0 and the peer type value must be set to 3 (loc-rib instance).
Today the value set is 0 (global instance), which is wrong.
Fix this in the BMP client by setting the peer type to loc-rib on peer
up messages.
Modify the current BMP test to check the peer up messages for the
0.0.0.0 IP address (which is the value used for loc-rib).
> {"peer_type": "loc-rib instance", "is_filtered": false, "policy": "loc-rib",
> "peer_distinguisher": "0:0", "peer_asn": 65501, "peer_bgp_id": "192.168.0.1",
> "timestamp": "2024-10-16 21:59:53.111963", "bmp_log_type": "peer up", "local_ip": "0.0.0.0",
> "local_port": 0, "remote_port": 0, "seq": 1}
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Although the trigger is unknown, based on the backtrace from internal
testing we see that an interface delete can leave the peer's ifp
pointer NULL, and dereferencing it while trying to install the route
leads to a crash.
Skip updating the ifindex in such cases; since the nexthop is then not
properly updated, BGP skips sending it to zebra.
BackTrace:
0 0x00007faef05e7ebc in ?? () from /lib/x86_64-linux-gnu/libc.so.6
1 0x00007faef0598fb2 in raise () from /lib/x86_64-linux-gnu/libc.so.6
2 0x00007faef09900dc in core_handler (signo=11, siginfo=0x7ffdde8cb4b0, context=<optimized out>) at lib/sigevent.c:274
3 <signal handler called>
4 0x00005560aad4b7d8 in update_ipv6nh_for_route_install (api_nh=0x7ffdde8cbe94, is_evpn=false, best_pi=0x5560b21187d0, pi=0x5560b21187d0, ifindex=0, nexthop=0x5560b03cb0dc,
nh_bgp=0x5560ace04df0, nh_othervrf=0) at bgpd/bgp_zebra.c:1273
5 bgp_zebra_announce_actual (dest=dest@entry=0x5560afcfa950, info=0x5560b21187d0, bgp=0x5560ace04df0) at bgpd/bgp_zebra.c:1521
6 0x00005560aad4bc85 in bgp_handle_route_announcements_to_zebra (e=<optimized out>) at bgpd/bgp_zebra.c:1896
7 0x00007faef09a1c0d in thread_call (thread=thread@entry=0x7ffdde8d7580) at lib/thread.c:2008
8 0x00007faef095a598 in frr_run (master=0x5560ac7e5190) at lib/libfrr.c:1223
9 0x00005560aac65db6 in main (argc=<optimized out>, argv=<optimized out>) at bgpd/bgp_main.c:557
(gdb) f 4
4 0x00005560aad4b7d8 in update_ipv6nh_for_route_install (api_nh=0x7ffdde8cbe94, is_evpn=false, best_pi=0x5560b21187d0, pi=0x5560b21187d0, ifindex=0, nexthop=0x5560b03cb0dc,
nh_bgp=0x5560ace04df0, nh_othervrf=0) at bgpd/bgp_zebra.c:1273
1273 in bgpd/bgp_zebra.c
(gdb) p pi->peer->ifp
$26 = (struct interface *) 0x0
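A minimal sketch of the guard (types and names are illustrative, not
the actual bgp_zebra.c code):
```
#include <stdint.h>

struct interface_info {
	uint32_t ifindex;
};

struct peer_info {
	struct interface_info *ifp;	/* can be NULL after an intf delete */
};

/* Sketch: leave the ifindex untouched when the peer has no interface
 * pointer; the nexthop then stays incomplete and BGP skips sending
 * the route to zebra. */
static void update_ifindex_from_peer(const struct peer_info *peer,
				     uint32_t *ifindex)
{
	if (!peer || !peer->ifp)
		return;		/* interface already deleted: do nothing */
	*ifindex = peer->ifp->ifindex;
}
```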
Ticket :#4203904
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Without the fix:
```
show ip prefix-list test_1 10.20.30.96/27 first-match
<no result>
show ip prefix-list test_2 192.168.1.2/32 first-match
<no result>
```
With the fix:
```
ip prefix-list test_1 seq 10 permit 10.20.30.64/26 le 27
!
end
donatas# show ip prefix-list test_1 10.20.30.96/27
seq 10 permit 10.20.30.64/26 le 27 (hit count: 1, refcount: 0)
donatas# show ip prefix-list test_1 10.20.30.64/27
seq 10 permit 10.20.30.64/26 le 27 (hit count: 2, refcount: 0)
donatas# show ip prefix-list test_1 10.20.30.64/28
donatas# show ip prefix-list test_1 10.20.30.126/26
seq 10 permit 10.20.30.64/26 le 27 (hit count: 3, refcount: 0)
donatas# show ip prefix-list test_1 10.20.30.126/30
donatas#
```
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
When a unicast route from the source VRF is imported into EVPN as a
type-5 route, the ASN of the source VRF is prepended to the type-5
route's AS path.
There is a case where path info for the EVPN type-5 prefix already
exists, but the source VRF route has been cleared via the clear
command. In this case the existing path info also needs to be rewritten
so that the source VRF ASN is prepended again.
Ticket: #2943080
Testing:
Before fix:
r4# clear ip bgp vrf overlay prefix 0.0.0.0/0
Route Distinguisher: 128.117.243.209:4
*> [5]:[0]:[0]:[0.0.0.0]
203.0.113.1 0 0 194 ? <--- 64512 is missing
ET:8 RT:64532:104001 Rmac:06:ec:bf:59:e8:93
After fix:
r4# clear ip bgp vrf overlay prefix 0.0.0.0/0
Route's source vrf bgp output containing ASN 64512:
r4# show bgp vrf overlay
BGP table version is 2, local router ID is 128.117.243.209, vrf id 10
Default local pref 100, local AS 64512
...
Notice that after the clear command the source VRF ASN 64512 is retained.
Route Distinguisher: 128.117.243.209:4
*> [5]:[0]:[0]:[0.0.0.0]
203.0.113.1 0 0 64512 194 ?
ET:8 RT:64532:104001 Rmac:06:ec:bf:59:e8:93
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Prior to this we were only filtering EVPN routes from the import logic
if they were not route-type 1/2/3/5, which allowed things like RT-5s to
be imported into an L2VNI/MAC-VRF. This adds additional logic to ensure
routes are only imported into EVIs where they make sense.
No more nonsensical route importing!
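A rough sketch of the kind of check this adds (route-type numbers
follow the EVPN RFCs; the helper itself and the exact per-EVI policy
are illustrative, not the FRR code):
```
#include <stdbool.h>

enum evpn_route_type {
	EVPN_RT_EAD = 1,	/* Ethernet Auto-Discovery */
	EVPN_RT_MACIP = 2,	/* MAC/IP Advertisement */
	EVPN_RT_IMET = 3,	/* Inclusive Multicast */
	EVPN_RT_PREFIX = 5,	/* IP Prefix */
};

/* Sketch: a MAC-VRF (L2VNI) should only import MAC-oriented route
 * types, while an IP-VRF (L3VNI) should only import routes that carry
 * an IP prefix or a routable MAC/IP binding. */
static bool evpn_route_type_ok_for_evi(bool is_mac_vrf, int rtype)
{
	if (is_mac_vrf)
		return rtype == EVPN_RT_EAD || rtype == EVPN_RT_MACIP ||
		       rtype == EVPN_RT_IMET;
	return rtype == EVPN_RT_MACIP || rtype == EVPN_RT_PREFIX;
}
```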
Ticket: 2848204
Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
Effectively, when BGP sent a route update down to zebra and immediately
after that an ASIC update from the kernel was read, zebra would choose
the ASIC update and drop the BGP update, leaving us in a state where
BGP was not used as the true source.
Modify the code so that in rib_multipath_nhe we notice that we have an
unprocessed route update from BGP, and if so drop this kernel update
about an older version of the route, since it is no longer needed.
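A minimal sketch of the decision (illustrative types, not the actual
zebra code):
```
#include <stdbool.h>
#include <stdio.h>

struct route_entry_info {
	bool queued_from_bgp;	/* BGP update sent to zebra, not yet processed */
	bool from_kernel;	/* this notification was read from the kernel */
};

/* Sketch of the rib_multipath_nhe logic described above: when a kernel
 * (ASIC) notification races with an unprocessed BGP update for the
 * same route, keep the BGP update and drop the older kernel view. */
static bool keep_kernel_notification(const struct route_entry_info *re)
{
	if (re->from_kernel && re->queued_from_bgp) {
		printf("dropping stale kernel update, BGP update pending\n");
		return false;
	}
	return true;
}
```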
Ticket: 2722533
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This commit:
"tools: run `vtysh -b` once for all-startup"
changed things so that `vtysh -b` is run after all daemons have started
up instead of doing it for each daemon as they are started up. This
results in one long `vtysh -b`, which for large configs and many daemons
(in the case I saw, 4 daemons and 30,000 line config) can exceed the 20
second timer watchfrr uses to kill "hung" background tasks.
There shouldn't be any harm in increasing this to 90 seconds to give us
some leeway while still making sure we kill anything truly misbehaving.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
Looks like the cap setting was added for testing MLAG via the zebra
test CLI to configure the MLAG role. However, it is interfering with
the valid state updates received from the MLAG daemon, depending on
timing (in some cases the MLAG state changes are received before the
capabilities).
Reference logs -
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
root@TORC11:mgmt:/home/cumulus# grep -ri "my_role\|MlagRole" /var/log/frr/bgpd.log
2021/06/18 13:26:40.380130 PIM: pim_mlag_process_mlagd_state_change: msg dump: my_role: SECONDARY, peer_state: DOWN
2021/06/18 13:26:40.380766 PIM: pim_mlag_process_mlagd_state_change: msg dump: my_role: SECONDARY, peer_state: DOWN
2021/06/18 13:26:41.382258 PIM: pim_mlag_process_mlagd_state_change: msg dump: my_role: SECONDARY, peer_state: RUNNING
2021/06/18 13:26:41.382379 PIM: pim_mlag_process_mlagd_state_change: msg dump: my_role: PRIMARY, peer_state: RUNNING
2021/06/18 13:26:52.386071 ZEBRA: Sending capabilities to client pim: MPLS enabled numMultipath 128 GR disabled MaintMode off MlagRole 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: #2691629
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
For these packets we are not sending 16k of data, but something far
less than 256 bytes. Reduce the stream sizes we allocate to something
much more reasonable.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Check that the L3VNI is "up" before taking action to announce or
withdraw the EVPN type-5 default based on configuration. Otherwise,
there can be timing conditions where an EVPN type-5 default route
gets announced without a VNI and with invalid route targets.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Ticket: #2684144
Reviewed By: Chirag Shah
Testing Done:
1. Rerun failed test multiple times successfully
2. Some manual testing
3. precommit and partial evpn-smoke
When a MAC gets deleted but associated neighbors remain, the MAC is
kept in the zebra MAC database as an internal ("auto") entry. When
this happens, reset the MAC's remote sequence number. This ensures that
when the host with the MAC later comes up behind a remote VTEP, the
local switch accepts the MAC and installs it into the bridge FDB and
we don't end up in a situation where remote MACs are not installed
into the bridge FDB.
This fix is a corollary of CM-22753 and is this time done for local
MACs upon delete.
Note: Commit is marked Cumulus-only because I need to evaluate more
comprehensive changes before upstreaming it.
Ticket: CM-29581
Reviewed By: As above
Testing Done:
1. Multiple rounds of manual testing
2. Two rounds of evpn-smoke, 1 round of precommit
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Acked-by: Chirag Shah <chirag@cumulusnetworks.com>
Acked-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Consider a master bridge interface (br_l3vni) having a slave vxlan99
mapped to vlans used by 3 L3VNIs.
During ifdown of the br_l3vni interface, zebra_vxlan_process_l3vni_oper_down(),
where zebra sends a ZAPI L3VNI delete to bgp, is invoked twice:
1) if_down -> zebra_vxlan_svi_down()
2) VXLAN is unlinked from the bridge i.e. vxlan99
zebra_if_dplane_ifp_handling() --> zebra_vxlan_if_update_vni()
(since ZEBRA_VXLIF_MASTER_CHANGE flag is set)
During ifup of the br_l3vni interface, zebra_vxlan_process_l3vni_oper_down()
is invoked because of an access-vlan change: process oper down,
associate with the new svi_if, and then process oper up again.
The problem is that the redundant L3VNI delete ZAPI message results in
BGP doing an inline global table walk for remote route installation
when the L3VNI is already removed/deleted. The bigger the scale, the
more CPU is wasted.
Given that a bridge flap is not a common trigger, the idea is to simply
return from BGP if the L3VNI is already set to 0, i.e. if the L3VNI is
already deleted, do nothing and return.
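A minimal sketch of the early return (illustrative, not the actual
bgp_evpn code):
```
#include <stdint.h>

struct bgp_instance {
	uint32_t l3vni;		/* 0 when no L3VNI is associated */
};

/* Sketch: a second L3VNI delete from zebra for a BGP instance whose
 * L3VNI is already 0 is redundant, so skip the expensive global-table
 * walk. */
static int bgp_evpn_handle_l3vni_del(struct bgp_instance *bgp)
{
	if (bgp->l3vni == 0)
		return 0;	/* already deleted, nothing to do */

	/* ... withdraw/uninstall remote routes, then clear the VNI ... */
	bgp->l3vni = 0;
	return 0;
}
```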
NOTE/TBD: An ideal fix would be to make zebra not send the second L3VNI
delete ZAPI message. However, that is a much more involved change to
day-1 code, with corner cases to handle.
Ticket :#3864372
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Anytime BGP gets an L3VNI ADD/DEL from zebra:
- Walking the entire global routing table per L3VNI is very expensive.
- The next read (say of another VNI ADD/DEL) from the socket does
not proceed unless this walk is complete.
So for triggers where a bulk of L3VNIs are flapped, this results in
huge output buffer FIFO growth, spiking up the memory in zebra since
bgp is slow/busy processing the first message.
To avoid this, the idea is to hook the BGP-VRF off struct bgp_master
and maintain a struct bgp FIFO list that is processed later, where we
walk a chunk of BGP-VRFs and do the remote route install/uninstall.
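A rough, self-contained sketch of the chunked FIFO processing (names
and the chunk size are illustrative, not the FRR data structures):
```
#include <stdlib.h>

#define VRF_CHUNK_PER_RUN 8	/* illustrative bound per event run */

struct bgp_vrf_node {
	struct bgp_vrf_node *next;
	void *bgp;		/* the BGP-VRF to (un)install routes for */
};

struct bgp_vrf_fifo {
	struct bgp_vrf_node *head, *tail;
};

/* Process a bounded chunk of queued BGP-VRFs; if entries remain, the
 * caller reschedules the event instead of blocking the zebra read. */
static void bgp_vrf_fifo_process(struct bgp_vrf_fifo *fifo,
				 void (*walk)(void *bgp))
{
	for (int i = 0; i < VRF_CHUNK_PER_RUN && fifo->head; i++) {
		struct bgp_vrf_node *n = fifo->head;

		fifo->head = n->next;
		if (!fifo->head)
			fifo->tail = NULL;
		walk(n->bgp);	/* remote route install/uninstall */
		free(n);
	}
}
```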
Ticket :#3864372
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Anytime BGP gets an L2VNI ADD from zebra:
- Walking the entire global routing table per L2VNI is very expensive.
- The next read (say of another VNI ADD) from the socket does
not proceed unless this walk is complete.
So for triggers where a bulk of L2VNIs are flapped, this results in
huge output buffer FIFO growth, spiking up the memory in zebra since
bgp is slow/busy processing the first message.
To avoid this, the idea is to hook the VPN off the bgp_master struct
and maintain a VPN FIFO list that is processed later, where we walk a
chunk of VPNs and do the remote route install.
Note: So far, in the L3 backpressure cases (#15524), we have considered
the case where zebra is slow and the buffer grows in BGP. However, this
is the reverse: BGP is very busy processing the first ZAPI message from
zebra, due to which the buffer grows huge in zebra and memory spikes up.
Ticket :#3864372
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Previously, we couldn't install VPN routes with our own AS in the path
because we never checked whether allowas-in is enabled.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Before this fix, if the rpki_sync_socket_rtr socket returned EAGAIN,
ALL routes in the RIB were revalidated, which takes lots of CPU and
generates some unnecessary traffic, e.g. when using BMP servers. With a
full feed it would waste 50-80 Mbps.
Instead, we should try to drain the existing pipe (the other end) and
revalidate only the affected prefixes.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Rather than storing the prefix-list name and looking it up every time we use it, store a pointer to the prefix-list itself.
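A minimal sketch of the kind of change (illustrative types only, not
the actual structures involved):
```
/* Illustrative only: resolve the prefix-list once at configuration
 * time and keep the pointer, instead of storing the name and calling
 * a lookup on every use. */
struct prefix_list;	/* opaque, defined by the prefix-list library */

struct boundary_filter {
	/* before: char plist_name[64]; looked up on each packet */
	struct prefix_list *plist;	/* after: resolved pointer, may be NULL */
};
```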
Signed-off-by: Corey Siltala <csiltala@atcorp.com>
Add documentation for existing extended access-list functionality and
the new "ip multicast boundary" command leveraging that functionality.
Signed-off-by: Corey Siltala <csiltala@atcorp.com>
Add a simple test to show filtering of IGMP joins using the new "ip
multicast boundary" filtering with access-lists, including a test of
the existing prefix-list based "ip multicast boundary oil" command.
Signed-off-by: Corey Siltala <csiltala@atcorp.com>
Add new interface command ip multicast boundary ACCESSLIST4_NAME. This
allows filtering on both source and group using the extended access-list
syntax vs. group-only as with the existing "ip multicast boundary oil"
command, which uses prefix-lists. If both are configured, the prefix-
list is evaluated first. The default behavior for both prefix-lists and
access-lists remains "deny", so the prefix-list must have a terminating
"permit" statement in order to also evaluate against the access-list.
The following example denies groups in range 229.1.1.0/24 and groups in
range 232.1.1.0/24 with source 10.0.20.2:
!
ip prefix-list pim-oil-plist seq 10 deny 229.1.1.0/24
ip prefix-list pim-oil-plist seq 20 permit any
!
access-list pim-acl seq 10 deny ip host 10.0.20.2 232.1.1.0 0.0.0.255
access-list pim-acl seq 20 permit ip any any
!
interface r1-eth0
ip address 10.0.20.1/24
ip igmp
ip pim
ip multicast boundary oil pim-oil-plist
ip multicast boundary pim-acl
!
Signed-off-by: Corey Siltala <csiltala@atcorp.com>
Move the extended access-list handling from pim_msdp_packet.c to
pim_util.c to allow use elsewhere in the daemon.
Signed-off-by: Corey Siltala <csiltala@atcorp.com>