AS 65000 | AS 65001
         |
    RR   |
     |   |
R1 --+---+--- R2
         |
When the r1 peer is an iBGP route-reflector client of rr and the r2 peer
is an eBGP neighbor of rr, and all three routers share the same network,
r2 receives announcements coming from r1 with an IPv6 link-local nexthop
from rr. This is incorrect, as r2 should send traffic to r1 directly,
without involving rr.
Do not send an IPv6 link-local nexthop if the originating peer is a
route-reflector client.
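A rough illustration of the intended check (hypothetical helper and fields, not bgpd's actual code): skip attaching the link-local nexthop when the route was learned from a route-reflector client, since the client and the receiver may reach each other directly.
```
/* Hypothetical sketch with made-up types/fields; the real logic lives in
 * bgpd's update-group/attribute code. */
#include <stdbool.h>

struct peer_info {
	bool rr_client;      /* peer is configured as a route-reflector client */
	bool has_ll_nexthop; /* peer advertised an IPv6 link-local nexthop */
};

static bool send_ipv6_ll_nexthop(const struct peer_info *from)
{
	/* Do not propagate a link-local nexthop learned from an RR client:
	 * the client and the receiver may share the LAN, so traffic should
	 * go directly between them instead of through the reflector. */
	if (from->rr_client)
		return false;

	return from->has_ll_nexthop;
}
```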
Link: https://github.com/FRRouting/frr/pull/16219#issuecomment-2397425505
Link: https://github.com/FRRouting/frr/pull/17037#discussion_r1792529683
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
In subgroup_announce_check(), the variable reflect is misleading, as it
suggests a relation to route reflection. However, it actually refers to
the scenario where an iBGP peer announces a route to another iBGP peer.
Rename reflect to ibgp_to_ibgp.
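For context, a simplified sketch of what the renamed flag expresses (illustration only; the real computation in subgroup_announce_check() involves more state):
```
/* Simplified: the flag is true when an iBGP-learned route is announced to
 * another iBGP peer, which is not necessarily a route-reflection case. */
enum bgp_peer_sort { BGP_PEER_IBGP, BGP_PEER_EBGP, BGP_PEER_CONFED };

static int is_ibgp_to_ibgp(enum bgp_peer_sort from_sort,
			   enum bgp_peer_sort to_sort)
{
	return from_sort == BGP_PEER_IBGP && to_sort == BGP_PEER_IBGP;
}
```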
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
If we have something like:
```
int eth1
ip router openfabric x
ipv6 router openfabric x
```
and eth1 is removed, the first `ip router ...` command fails and only
`ipv6 router ...` is enabled.
If we leave only:
```
int eth1
ipv6 router openfabric x
```
then no interface is enabled either, which is also unexpected.
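A minimal sketch of the expected behaviour (standalone illustration, not isisd's actual code): the two address families are evaluated independently, so the circuit is enabled as soon as either one is configured, regardless of ordering or whether the other command failed.
```
/* Illustration only: decide whether an openfabric circuit should be enabled
 * on an interface, treating the two address families independently. */
#include <stdbool.h>

struct circuit_cfg {
	bool ip_router_openfabric;   /* "ip router openfabric x" present */
	bool ipv6_router_openfabric; /* "ipv6 router openfabric x" present */
};

static bool circuit_should_enable(const struct circuit_cfg *cfg)
{
	/* Enable as soon as either address family asks for it; a missing or
	 * failed IPv4 command must not block the IPv6 one (and vice versa). */
	return cfg->ip_router_openfabric || cfg->ipv6_router_openfabric;
}
```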
Fixes: https://github.com/FRRouting/frr/issues/17075
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Currently the maximum message size is 4k. With 256-way ECMP, FRR is
seeing message sizes in the 6k range. There is also a desire to allow
ECMP to increase to 512. Since the multipath size directly affects how
big the message may be when sending ECMP routes, let's give this value
some headroom when compiling FRR with larger multipath sizes.
Additionally, since not everyone is using such large ECMP, allow
operators to build FRR as appropriate for their use cases.
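A hedged sketch of the idea (the SKETCH_* names are made up; only MULTIPATH_NUM is an existing FRR build knob): derive the message buffer ceiling from the compiled-in ECMP width instead of hard-coding 4k.
```
/* Illustrative only: scale the maximum message size with the configured
 * ECMP width so 256- or 512-way builds get headroom. */
#ifndef MULTIPATH_NUM
#define MULTIPATH_NUM 256
#endif

/* Rough per-nexthop encoding cost assumed for this sketch. */
#define SKETCH_BYTES_PER_NEXTHOP 32
#define SKETCH_MSG_BASE          4096

#define SKETCH_MAX_MSG_SIZE \
	(SKETCH_MSG_BASE + (MULTIPATH_NUM) * SKETCH_BYTES_PER_NEXTHOP)
```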
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When the fpm_process_queue has run out of space but has written to the
FPM output buffer, schedule it to wake up immediately, as the write will
go out almost immediately, since it was scheduled first.
If the fpm_process_queue has not written to the output buffer, delay the
processing by 10 milliseconds to give possibly backed-up write
processing a chance to complete its work.
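A condensed sketch of the rescheduling policy (placeholder helper, not the dplane_fpm_nl code itself):
```
#include <stdbool.h>
#include <stdint.h>

#define PROCESS_RETRY_DELAY_MS 10

/* Returns how long (in ms) to wait before re-scheduling the processing
 * event: 0 means "wake up immediately". */
static uint32_t fpm_process_reschedule_delay(bool wrote_to_output)
{
	/* If this pass already pushed data into the FPM output buffer, the
	 * write event was scheduled first and will drain it right away, so
	 * re-run processing immediately. Otherwise give the writer 10ms to
	 * catch up before producing more work. */
	return wrote_to_output ? 0 : PROCESS_RETRY_DELAY_MS;
}
```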
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The fpm_nl_process function was counting the total number of ctx's
processed. As a result, after processing even one context, it would
always signal the dataplane that there is work to do. Change the
code to only notify the dplane worker when a context was actually
added to the outgoing context queue.
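A simplified sketch of the change (placeholder types, not the actual fpm_nl_process code):
```
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	bool needs_fpm; /* this context produces an FPM message */
};

static void enqueue_outgoing(struct ctx *c)
{
	(void)c; /* would append to the outgoing context queue */
}

static void notify_dplane_worker(void)
{
	puts("kick dataplane worker");
}

static void process_contexts(struct ctx *ctxs, int n)
{
	int enqueued = 0;

	for (int i = 0; i < n; i++) {
		if (!ctxs[i].needs_fpm)
			continue;
		enqueue_outgoing(&ctxs[i]);
		enqueued++;
	}

	/* Signal only when work was actually queued for output, not merely
	 * because some contexts were examined. */
	if (enqueued)
		notify_dplane_worker();
}
```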
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add test to check BMP in VRF.
Note that the following configuration works with interface r1-eth0
towards 192.0.2.10 (BMP collector) in the default VRF but not in vrf1.
> router bgp 65501 vrf vrf1
> bmp targets bmp1
> bmp connect 192.0.2.10 port 1789 min-retry 100 max-retry 10000
Also, for some reason, the test works even without the "bgpd: bmp loc-rib
peer up/down for vrfs" commit.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
- add a BMP BGP peer for VRFs
- add peer up for VRF in the BMP peer up state
- add VRF state in bmpbgp
- add a safe variant of bmp_peer_sendall: bmp_peer_sendall_safe
- change bgp_open_send to call the new bgp_open_make
- bgp_open_make creates a BGP OPEN packet, now used in BMP for the VRF
  peer up (see the sketch below)
- add a hook and call for the bgp instance state
- the VRF peer state is recomputed when interfaces (including the VRF
  interface) go up/down and when it is created or removed
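For reference, a standalone sketch of what building a minimal BGP OPEN involves (per RFC 4271; illustration only, not FRR's bgp_open_make()):
```
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define BGP_HEADER_SIZE 19
#define BGP_MSG_OPEN 1

/* Build a minimal OPEN with no optional parameters into buf (>= 29 bytes).
 * Returns the total message length. */
static size_t bgp_open_build(uint8_t *buf, uint16_t my_as, uint16_t holdtime,
			     uint32_t bgp_id)
{
	size_t len = BGP_HEADER_SIZE + 10; /* version .. opt parm len */
	uint32_t id = htonl(bgp_id);

	memset(buf, 0xff, 16);              /* marker */
	buf[16] = (uint8_t)(len >> 8);      /* length */
	buf[17] = (uint8_t)(len & 0xff);
	buf[18] = BGP_MSG_OPEN;             /* type */

	buf[19] = 4;                        /* version */
	buf[20] = (uint8_t)(my_as >> 8);    /* my autonomous system */
	buf[21] = (uint8_t)(my_as & 0xff);
	buf[22] = (uint8_t)(holdtime >> 8); /* hold time */
	buf[23] = (uint8_t)(holdtime & 0xff);
	memcpy(&buf[24], &id, 4);           /* BGP identifier */
	buf[28] = 0;                        /* optional parameters length */

	return len;
}
```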
Link: e48ba38070
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Signed-off-by: Maxence Younsi <mx.yns@outlook.fr>
5bb99ccad2 ("bgpd: reset ipv6 invalid link-local nexthop") now resets
the link-local nexthop when the originating and destination peers are
not on the same network segment. However, it does not work in all cases.
The fix compares the 'from' and 'peer' global IPv6 addresses, but 'peer'
refers to only one of the peers of the subgroup, and the subgroup may
contain peers located on different network segments.
Split the "nexthop-local unchanged" peer subgroups by network segment.
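As an illustration of the grouping criterion (made-up helper, not FRR's subgroup code): two peers can only share a usable link-local nexthop if their global IPv6 addresses fall inside the same on-link prefix.
```
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>

/* Check whether two global IPv6 addresses fall inside the same on-link
 * prefix, i.e. the peers share a segment. */
static bool same_ipv6_segment(const struct in6_addr *a,
			      const struct in6_addr *b, unsigned int plen)
{
	unsigned int full = plen / 8, rem = plen % 8;

	if (memcmp(a->s6_addr, b->s6_addr, full) != 0)
		return false;

	if (rem) {
		uint8_t mask = (uint8_t)(0xff << (8 - rem));

		if ((a->s6_addr[full] & mask) != (b->s6_addr[full] & mask))
			return false;
	}

	return true;
}
```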
Fixes: 5bb99ccad2 ("bgpd: reset ipv6 invalid link-local nexthop")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Unset enforce-first-as on r3 of bgp_route_server_client to enable the
reception of routes on this router.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Rework bgp_route_server_client into a more standard form in order to
facilitate the changes in the next commit. Cosmetic change.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
They are managed under `frr-route-map`, not under `frr-bgp-route-map`.
Fixes: https://github.com/FRRouting/frr/issues/17055
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
If, for example, a BGP neighbor references a route-map at boot that is not
yet created, the log is spammed with `The route-map 'X' does not exist`.
Processing route-maps earlier should do the trick.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
This command did not allow the operator to display neighbor information
related to graceful-restart when used inside a VRF.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If the "nexthop-local unchanged" setting is enabled, it preserves the
IPv6 link-local nexthop from the originating peer. However, if the
originating and destination peers are not on the same network segment,
the originating peer's IPv6 link-local address will be unreachable from
the destination peer.
In such cases, reset the IPv6 link-local nexthop, even if "nexthop-local
unchanged" is set on the destination peer.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Do not add an IPv6 link-local nexthop if the originating peer does not
provide one and the nexthop-local unchanged setting is enabled.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
When LSP fragments age out, isis_dynhn_remove() is also called to remove the corresponding dyn_cache entries.
Signed-off-by: baozhen-H3C <bao.zhen@h3c.com>
The following ASAN issue has been observed:
> ERROR: AddressSanitizer: heap-use-after-free on address 0x6160000acba4 at pc 0x55910c5694d0 bp 0x7ffe3a8ac850 sp 0x7ffe3a8ac840
> READ of size 4 at 0x6160000acba4 thread T0
> #0 0x55910c5694cf in ctx_info_from_zns zebra/zebra_dplane.c:3315
> #1 0x55910c569696 in dplane_ctx_ns_init zebra/zebra_dplane.c:3331
> #2 0x55910c56bf61 in dplane_ctx_nexthop_init zebra/zebra_dplane.c:3680
> #3 0x55910c5711ca in dplane_nexthop_update_internal zebra/zebra_dplane.c:4490
> #4 0x55910c571c5c in dplane_nexthop_delete zebra/zebra_dplane.c:4717
> #5 0x55910c61e90e in zebra_nhg_uninstall_kernel zebra/zebra_nhg.c:3413
> #6 0x55910c615d8a in zebra_nhg_decrement_ref zebra/zebra_nhg.c:1919
> #7 0x55910c6404db in route_entry_update_nhe zebra/zebra_rib.c:454
> #8 0x55910c64c904 in rib_re_nhg_free zebra/zebra_rib.c:2822
> #9 0x55910c655be2 in rib_unlink zebra/zebra_rib.c:4212
> #10 0x55910c6430f9 in zebra_rtable_node_cleanup zebra/zebra_rib.c:968
> #11 0x7f26f275b8a9 in route_node_free lib/table.c:75
> #12 0x7f26f275bae4 in route_table_free lib/table.c:111
> #13 0x7f26f275b749 in route_table_finish lib/table.c:46
> #14 0x55910c65db17 in zebra_router_free_table zebra/zebra_router.c:191
> #15 0x55910c65dfb5 in zebra_router_terminate zebra/zebra_router.c:244
> #16 0x55910c4f40db in zebra_finalize zebra/main.c:249
> #17 0x7f26f2777108 in event_call lib/event.c:2011
> #18 0x7f26f264180e in frr_run lib/libfrr.c:1212
> #19 0x55910c4f49cb in main zebra/main.c:531
> #20 0x7f26f2029d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
> #21 0x7f26f2029e3f in __libc_start_main_impl ../csu/libc-start.c:392
> #22 0x55910c4b0114 in _start (/usr/lib/frr/zebra+0x1ae114)
This happens when FRR uses the kernel dataplane. During shutdown, zebra
attempts to obtain the namespace identifier in order to prepare zebra
dataplane nexthop messages.
Fix this by accessing the ns structure.
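A generic illustration of the defensive pattern (placeholder types, not zebra's actual structures): take the namespace id from a structure that is still valid during teardown, with a fallback if it is not.
```
#include <stdint.h>

typedef uint32_t ns_id_t;

struct ns {
	ns_id_t ns_id;
};

struct zebra_ns {
	struct ns *ns;     /* backpointer assumed valid during teardown */
	ns_id_t cached_id; /* per-zns data that may already be gone */
};

static ns_id_t zns_get_ns_id(const struct zebra_ns *zns)
{
	/* Read the id via the ns structure instead of freed per-zns data. */
	if (zns && zns->ns)
		return zns->ns->ns_id;

	return 0; /* default namespace as a safe fallback */
}
```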
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>