If you are in a situation where you have multiple addresses on an
interface, zebra creates one connected route for them.
The issue is that the rib entry is not created if the addresses were
added before the interface was running.
In the typical flow, the address is added to an already running interface,
so we handle the route & rib creation within a single ADD event.
In the opposite case, we create the route entries without activating them,
yet they are still considered active because ZEBRA_IFC_DOWN is not set.
On the following interface UP we then ignore the duplicate ADDR_ADD, as it
overlaps with the existing prefixes -> the rib entry is never created.
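The rough C sketch below (made-up struct and function names, not the actual
zebra code) only illustrates the flag handling described above: an address
added while the interface is down has to be marked, otherwise the later UP
event treats it as a duplicate and never creates the rib entry.
-----------------------------------------
/*
 * Illustrative sketch only -- addr_entry, handle_addr_add, handle_if_up and
 * the IFC_DOWN flag are hypothetical stand-ins for the real zebra
 * connected/rib handling.
 */
#include <stdbool.h>
#include <stdio.h>

#define IFC_DOWN 0x1            /* address added while interface was down */

struct addr_entry {
	const char *prefix;
	unsigned int flags;
	bool rib_installed;
};

/* ADDR_ADD: on a running interface we create the route and the rib entry
 * in one go; on a down interface we must remember that the rib entry is
 * still missing, otherwise the later IF_UP pass skips it. */
static void handle_addr_add(struct addr_entry *e, bool if_running)
{
	if (if_running) {
		e->rib_installed = true;   /* route + rib in a single ADD */
	} else {
		/* the missing piece in the bug: without this flag the
		 * entry already looks "active" at the next IF_UP */
		e->flags |= IFC_DOWN;
		e->rib_installed = false;
	}
}

/* IF_UP: re-announce only the addresses that were added while down. */
static void handle_if_up(struct addr_entry *e)
{
	if (e->flags & IFC_DOWN) {
		e->flags &= ~IFC_DOWN;
		e->rib_installed = true;   /* rib entry finally created */
	}
	/* entries without IFC_DOWN are treated as duplicates and ignored */
}

int main(void)
{
	struct addr_entry a = { "192.168.1.7/24", 0, false };

	handle_addr_add(&a, false);        /* added before "link set up" */
	handle_if_up(&a);
	printf("%s rib installed: %s\n", a.prefix,
	       a.rib_installed ? "yes" : "no");
	return 0;
}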
The minimal reproducible setup:
-----------------------------------------
ip link add name dummy0 type dummy
ip addr flush dev dummy0
ip link set dummy0 down
ip addr add 192.168.1.7/24 dev dummy0
ip addr add 192.168.1.8/24 dev dummy0
ip link set dummy0 up
vtysh -c 'show ip route' | grep dummy0
Signed-off-by: Volodymyr Huti <v.huti@vyos.io>
Recent commit e92508a741 changed
prefix_master->str to a RB tree. This introduced a condition
where, on shutdown, the prefix list was removed from the master list
and then operated on by passing around its name, which was then used
to look the prefix list up again.
With the RB tree, the item is deleted from the tree first, so the later
lookup no longer finds it, thus introducing this crash.
Crash:
(gdb) bt
index=0x556c07d59650, pentry=0x556c07d29380) at lib/routemap.c:2397
arg=0x7ffdbf84bc60) at lib/hash.c:267
event=RMAP_EVENT_PLIST_DELETED) at lib/routemap.c:2489
To fix this, grab the first item on the list, clean it up, and then remove it.
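As a rough illustration of the ordering problem (the names plist,
plist_lookup and plist_notify_delete below are made up, standing in for the
real prefix-list and route-map code), a minimal C sketch:
-----------------------------------------
#include <stddef.h>
#include <string.h>
#include <stdio.h>

struct plist {
	char name[32];
	struct plist *next;
};

static struct plist *head;

static struct plist *plist_lookup(const char *name)
{
	for (struct plist *p = head; p; p = p->next)
		if (strcmp(p->name, name) == 0)
			return p;
	return NULL;
}

/* Consumers (route-maps) are told about the deletion by name and look the
 * list up again.  If the entry was already unlinked, this lookup fails. */
static void plist_notify_delete(const char *name)
{
	struct plist *p = plist_lookup(name);

	printf("notify %s: %s\n", name,
	       p ? "found" : "NOT FOUND -> crash path");
}

int main(void)
{
	static struct plist p1 = { "PL1", NULL };

	head = &p1;

	/* Broken order: unlink from the container first, then notify. */
	head = p1.next;
	plist_notify_delete(p1.name);

	/* Fixed order: grab the first item, clean it up, then remove it. */
	head = &p1;
	plist_notify_delete(p1.name);
	head = p1.next;

	return 0;
}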
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we do "log file /var/log/frr/something", permissions are set to
0640 (frr:frr), but when logrotate kicks in, we end up with 0640 (frr:frrvty).
I believe we should have consistent permissions.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
At the moment we set /var/log/frr permissions to 0750 (frr:frr), but the log
file is 0640 (root:adm) (unless logrotated), and that doesn't allow the adm
group to even open the directory.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
In some stress testing, we are seeing type-5 evpn routes being
left in a rejected state in zebra.
Sequence of events as I am seeing it:
a) An interface comes up that a type-5 route's nexthop depends on
b) zebra processes this, creates the connected route, and lets bgp know via nht
c) bgp installs the route to zebra
d) zebra processes and sends install to kernel
e) before the route is installed, the interface the nexthop points at flaps
f) the route install is rejected, zebra is notified
g) the interface comes up
h) zebra gets the notification about the route install rejection
i) zebra processes the down/up and turns it into a single up event
j) BGP never reinstalls the type 5 route
This up event does not translate into a nexthop tracking event when the
events happen quickly enough and/or zebra is extremely busy, so bgp never
sees that the nexthops changed, even briefly.
This is the same thing that was going on with
https://github.com/FRRouting/frr/pull/7724
in PBR.
To fix this, let's notice the interface up/down events for v4
in bgp now as well.
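A rough C sketch of the idea (the names tracked_nh, bgp_nht_ifp_updated and
reevaluate_paths are hypothetical, not the actual bgpd code): an interface
up/down event in bgpd walks the nexthops resolved over that interface and
forces a re-evaluation, so a quick flap that zebra collapses into a single
up event still causes the dependent routes to be re-installed.
-----------------------------------------
#include <stdbool.h>
#include <stdio.h>

struct tracked_nh {
	const char *addr;
	int resolving_ifindex;
	bool valid;
};

/* Re-announce / re-install the paths that depend on this nexthop. */
static void reevaluate_paths(struct tracked_nh *nh)
{
	printf("re-evaluating paths using nexthop %s (valid=%d)\n",
	       nh->addr, nh->valid);
}

/* Called from bgpd's interface up/down handlers for v4 as well. */
static void bgp_nht_ifp_updated(struct tracked_nh *nhs, int n,
				int ifindex, bool up)
{
	for (int i = 0; i < n; i++) {
		if (nhs[i].resolving_ifindex != ifindex)
			continue;
		nhs[i].valid = up;
		reevaluate_paths(&nhs[i]);
	}
}

int main(void)
{
	struct tracked_nh nhs[] = { { "192.0.2.1", 3, true } };

	/* zebra may collapse a quick down/up into a single up event; even
	 * then the up handler re-walks the nexthops and re-installs paths */
	bgp_nht_ifp_updated(nhs, 1, 3, true);
	return 0;
}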
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When lsp-mtu is configured larger than interface mtu and the interface
is brought up, the ISIS code would crash. When other vendors have this
misconfiguration they just continue ISIS running and allow the LSP
packets to be created but not sent. When the misconfiguration is corrected
the LSP packets start being sent. This change creates that same behavior
in FRR.
The startup issue I am hitting is when the isis lsp-mtu is larger than the
interface's mtu. We run into this case while changing the mtu on a tunnel.
I issue a shutdown/no shutdown on the interface; because the tunnel MTU is
smaller than the lsp-mtu, it is treated as an error and circuit_if_del is
called. This deletes part of the circuit information, including the
circuit->ip_addrs list. Later on we get an address update from zebra, try
to add the interface address to this list, and crash.
2022/04/07 20:19:52.032 ISIS: [GTRPJ-X68CG] CSM_EVENT for tun_gw2: IF_UP_FROM_Z
  calls isis_circuit_if_add
    this initializes circuit->ip_addrs
  isis_circuit_up
    runs the mtu check circuit->area->lsp_mtu > isis_circuit_pdu_size(circuit), fails
    and returns ISIS_ERROR
  on failure isis_circuit_if_del is called
    this deletes the circuit->ip_addrs list <----
2022/04/07 20:19:52.032 ZEBRA: [NXYHN-ZKW2V] zebra_if_addr_update_ctx: INTF_ADDR_ADD: ifindex 3, addr 192.168.0.1/24
  message to isisd to add the address
  isis_zebra_if_address_add
    isis_circuit_add_addr
      we try to add the ip address to circuit->ip_addrs, but the list was deleted above and isisd crashes
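A rough C sketch of the intended behavior (simplified, hypothetical structs;
only circuit->area->lsp_mtu and isis_circuit_pdu_size() come from the real
code): on a mismatch, keep the circuit and its address list alive and just
hold off sending LSPs, instead of taking the error path that tears the
circuit down.
-----------------------------------------
#include <stdbool.h>
#include <stdio.h>

struct circuit {
	unsigned int lsp_mtu;      /* area's configured lsp-mtu */
	unsigned int pdu_size;     /* what the interface mtu allows */
	bool up;
	bool addrs_valid;          /* stands in for circuit->ip_addrs */
};

static bool lsp_fits(const struct circuit *c)
{
	return c->lsp_mtu <= c->pdu_size;
}

static void circuit_up(struct circuit *c)
{
	/* Old behavior: on mismatch, return an error and delete circuit
	 * state, leaving ip_addrs freed for the later zebra address update.
	 * New behavior: warn, keep the circuit and its address list. */
	if (!lsp_fits(c))
		printf("lsp-mtu %u > pdu size %u: LSPs built but not sent\n",
		       c->lsp_mtu, c->pdu_size);

	c->up = true;
	c->addrs_valid = true;     /* stays valid for ADDR_ADD from zebra */
}

static void circuit_send_lsp(const struct circuit *c)
{
	if (!c->up || !lsp_fits(c))
		return;            /* suppress until the mtu is corrected */
	printf("sending LSP\n");
}

int main(void)
{
	struct circuit c = { .lsp_mtu = 1497, .pdu_size = 1200 };

	circuit_up(&c);
	circuit_send_lsp(&c);      /* nothing sent yet */
	c.pdu_size = 1500;         /* misconfiguration corrected */
	circuit_send_lsp(&c);      /* now it goes out */
	return 0;
}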
Signed-off-by: Lynne Morrison <lynne.morrison@ibm.com>
Operators are seeing:
Mar 28 07:19:37 kingpin zebra[418]: [TZANK-DEMSE] netlink_nexthop_msg_encode: nhg_id 68 (zebra): proto-based nexthops only, ignoring
Mar 28 07:19:37 kingpin zebra[418]: [TZANK-DEMSE] netlink_nexthop_msg_encode: nhg_id 68 (zebra): proto-based nexthops only, ignoring
Mar 28 07:19:37 kingpin zebra[418]: [YXPF5-B2CE0] netlink_route_multipath_msg_encode: RTM_DELROUTE 2804:4d48:4000::/42 vrf 0(254)
Mar 28 07:19:37 kingpin zebra[418]: [YXPF5-B2CE0] netlink_route_multipath_msg_encode: RTM_NEWROUTE 2804:4d48:4000::/42 vrf 0(254)
Mar 28 07:19:37 kingpin zebra[418]: [TVM3E-A8ZAG] _netlink_route_build_singlepath: (single-path): 2804:4d48:4000::/42 nexthop via fe80::b6fb:e4ff:fe26:c5d5 if 2 vrf default(0)
Mar 28 07:19:37 kingpin zebra[418]: [HYEHE-CQZ9G] nl_batch_send: netlink-dp (NS 0), batch size=140, msg cnt=2
Mar 28 07:19:37 kingpin zebra[418]: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (68) into the kernel
When `zebra nexthop proto only` is turned on.
Effectively zebra intentionally does not do the nexthop group installation,
but the dplane notification in zebra_nhg.c just assumes it was a failure
and prints an error message. Since this was intentional, let's notice
that and not report the message as a failure.
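A rough C sketch of the change (the names nhg_dplane_result and
proto_nexthops_only are made up; the real logic lives in zebra_nhg.c): when
proto-only mode is configured and the group is zebra-owned, a "not
installed" result from the dplane is expected, so it is noted and ignored
rather than reported as an error.
-----------------------------------------
#include <stdbool.h>
#include <stdio.h>

static bool proto_nexthops_only = true;   /* "zebra nexthop proto only" */

struct nhg {
	unsigned int id;
	bool proto_owned;      /* owned by bgp/sharp/..., not by zebra */
	bool installed;
};

static void nhg_dplane_result(struct nhg *nhg, bool install_ok)
{
	if (install_ok) {
		nhg->installed = true;
		return;
	}

	/* Intentional skip: zebra-owned groups are never pushed to the
	 * kernel in proto-only mode, so don't flag this as a failure. */
	if (proto_nexthops_only && !nhg->proto_owned) {
		printf("nhg %u not installed (proto-only mode), ignoring\n",
		       nhg->id);
		return;
	}

	fprintf(stderr, "Failed to install Nexthop ID (%u) into the kernel\n",
		nhg->id);
}

int main(void)
{
	struct nhg zebra_nhg = { .id = 68, .proto_owned = false };

	nhg_dplane_result(&zebra_nhg, false);   /* no error log expected */
	return 0;
}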
Signed-off-by: Donald Sharp <sharpd@nvidia.com>