The constraints on host routes are too strict in the current code:
host routes whose destination address equals their nexthop address are
forbidden even when they cross VRFs.
Currently host routes with different destination and nexthop addresses can
cross VRFs, which is correct. But host routes with identical addresses are
forbidden from crossing VRFs, which is wrong.
Since different VRFs can use the same addresses, leaking a specific host route
whose destination address equals its nexthop address into other VRFs is a
normal case.
This commit relaxes that constraint: host routes with the same destination
and nexthop address are forbidden only when they do not cross VRFs.
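As an illustration, a minimal sketch of the relaxed check, using hypothetical
names and simplified types rather than the actual zebra code:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>

typedef uint32_t vrf_id_t;

/* Reject a host route whose destination equals its nexthop only when
 * both addresses live in the same VRF. */
static bool host_route_rejected(const struct in_addr *dest,
				const struct in_addr *nexthop,
				vrf_id_t dest_vrf, vrf_id_t nexthop_vrf)
{
	/* Destination differs from nexthop: always allowed. */
	if (memcmp(dest, nexthop, sizeof(*dest)) != 0)
		return false;

	/* Same addresses, but the route is leaked across VRFs: allowed. */
	if (dest_vrf != nexthop_vrf)
		return false;

	/* Same addresses within a single VRF: still forbidden. */
	return true;
}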
Signed-off-by: anlan_cs <vic.lan@pica8.com>
When an interface goes down, it signals any related NHGs to
re-validate themselves. During zebra shutdown, ensure we remove
any NHGs we've installed.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
In the loop, the local variable `ip` is set on every iteration, even when the
check condition is not satisfied.
Avoid the redundant assignment by moving it to just after the check: set `ip`
only once the condition is met.
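A minimal illustration of the pattern (hypothetical loop, not the actual
zebra code):

#include <stddef.h>

struct entry {
	int valid;
	unsigned int addr;
};

static void process_entries(struct entry *entries, size_t count)
{
	unsigned int ip;

	for (size_t i = 0; i < count; i++) {
		if (!entries[i].valid)
			continue;

		ip = entries[i].addr;	/* assigned only once the check passes */
		(void)ip;		/* placeholder for the real work */
	}
}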
Signed-off-by: anlan_cs <vic.lan@pica8.com>
A zebra crash is seen during shutdown (frr restart).
During shutdown, remote neigh and remote MAC cleanup is
triggered first, followed by per-VNI cleanup of all neighs
(including local) and MACs.
The crash occurs when a remote MAC is cleaned up first
while a local neigh still holds a reference to it.
When the local neigh then attempts to remove itself from its
associated MAC's neigh_list, it accesses freed memory and crashes.
The fix: during MAC deletion, if its neigh_list is non-empty,
retain the MAC in AUTO state.
This situation can arise when the MAC and neigh pair are in different
states (remote/local); otherwise, the cleanup order is neighs
followed by MACs.
The AUTO MAC is cleaned up later, when the per-VNI cleanup of all
neighs and MACs runs.
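A sketch of the idea with hypothetical, simplified types (not the actual
zebra code):

#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the real zebra structures. */
struct mac_entry {
	unsigned int flags;
	size_t neigh_list_count;	/* number of neighs still referencing us */
};

#define MAC_FLAG_AUTO 0x1

static void mac_delete(struct mac_entry *mac)
{
	if (mac->neigh_list_count != 0) {
		/* Neighbors still reference this MAC: downgrade it to AUTO
		 * instead of freeing it.  It is removed later, when the
		 * per-VNI cleanup of all neighs and MACs runs. */
		mac->flags |= MAC_FLAG_AUTO;
		return;
	}

	free(mac);
}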
Ticket:CM-29826
Reviewed By:CCR-10369
Testing Done:
Configure an evpn symmetric setup where the
MAC is in remote state and the neigh is in local state.
Perform an frr restart; the crash is no longer seen.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
With recent changes to interface up mechanics in if_netlink.c,
FRR was receiving as many as 4 up events for an interface
on ifdown/ifup events. This was causing timing issues
in FRR based upon some fun timings. Prevent this from
happening.
Ticket: CM-31623
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When vxlan interfaces flap (protodown/up), non-PTM operative
interfaces do not come back up, because the protodown-up
event does not trigger the "if_up()"
event.
Ticket:CM-30477
Reviewed By:CCR-10681
Testing Done:
Validated interface flaps via ip link down, ifdown,
and protodown followed by an UP event. All vxlan interfaces
come up in bgpd after the flap.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
FRR needs to handle the protodown event for vxlan
interfaces.
In an MLAG scenario, one of the paired switches can put the
vxlan port into protodown state, followed by a
tunnel-ip change from the anycast IP to the individual IP.
In the absence of protodown handling, EVPN ends up
advertising locally learned EVPN (MAC-IP) routes
with the individual IP as nexthop.
This leads to locally learned entries being overwritten
as remote on the MLAG pair.
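A minimal sketch of the intended handling, with hypothetical names (the real
zebra code differs):

#include <stdbool.h>

struct iface {
	bool is_vxlan;
	bool oper_up;
};

static void iface_down(struct iface *ifp)
{
	ifp->oper_up = false;	/* withdraw locally learned EVPN MAC-IP routes */
}

static void iface_up(struct iface *ifp)
{
	ifp->oper_up = true;	/* re-advertise locally learned routes */
}

/* Treat a protodown transition on a vxlan interface like an oper down/up. */
static void vxlan_protodown_change(struct iface *ifp, bool protodown)
{
	if (!ifp->is_vxlan)
		return;

	if (protodown)
		iface_down(ifp);
	else
		iface_up(ifp);
}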
Ticket:CM-24545
Reviewed By:CCR-10310
Testing Done:
In an EVPN deployment, restart one of the MLAG
daemons, which puts the vxlan interfaces into protodown state.
FRR treats protodown as oper down for vxlan interfaces, and the
VNI down cleans up/withdraws locally learned routes.
On the subsequent vxlan device UP event, the locally learned
routes are re-advertised.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
When an end operator is doing cross-vrf imports in bgp:
router bgp 3239 vrf FOO
address-family ipv4 uni
import vrf BAR
!
and zebra has this configuration:
vrf FOO
ip protocol bgp route-map EVA
!
The current code in zebra_nhg.c was looking up the vrf of the
nexthop and attempting to apply the ip protocol route-map.
For most people the nexthop vrf and the re vrf are one and the
same so they never see a problem.
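A hypothetical sketch of the corrected lookup, assuming the fix is to select
the `ip protocol ... route-map` using the route entry's own VRF (where the
policy was configured) rather than the nexthop's VRF; the helper name is
invented for illustration:

#include <stddef.h>
#include <stdint.h>

typedef uint32_t vrf_id_t;

/* Hypothetical helper: returns the configured route-map name for a protocol
 * within one VRF, or NULL if none is configured. */
static const char *proto_rmap_lookup(vrf_id_t vrf_id, int proto)
{
	(void)vrf_id;
	(void)proto;
	return NULL;	/* stub for illustration */
}

static const char *rmap_for_route(vrf_id_t re_vrf_id, vrf_id_t nexthop_vrf_id,
				  int proto)
{
	(void)nexthop_vrf_id;	/* previously (and wrongly) used for the lookup */

	return proto_rmap_lookup(re_vrf_id, proto);
}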
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Upon restart zebra reads in the kernel state. Under linux
there is a mechanism to read the route and convert the protocol
to the correct internal FRR protocol to allow the zebra graceful
restart efforts to work properly.
Under *BSD I do not see a mechanism to convey the original FRR
protocol into the kernel and thus back out of it. Thus when
zebra crashes (or restarts) the routes read back in are kernel
routes and are effectively lost to the system, and FRR cannot
remove them properly. Why? Because FRR sees kernel routes
as routes that it should not own, and in general the admin
distance for those routes will be better than the
admin distance from a routing protocol. This is even
worse because when the graceful restart timer pops and rib_sweep
is run, FRR becomes out of sync with the state of kernel forwarding
on *BSD.
On restart, notice that the route is a self route whose originating
protocol there is no way to know. In this case
let's set the protocol to ZEBRA_ROUTE_STATIC and set the admin
distance to 255.
This way, when an upper level protocol reinstalls its route,
the general zebra graceful restart code still works. The
high admin distance allows the code to just work in a way
that is graceful (HA!).
The drawback here is that the route shows up as a static
route for the time the system is doing its work. FRR
could introduce *another* route type but this seems like
a bad idea and the STATIC route type is loosely analogous
to the type of route it has become.
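A minimal sketch of the classification, with assumed names and values (the
real route-type constants live in zebra's headers):

#include <stdbool.h>
#include <stdint.h>

#define ZEBRA_ROUTE_STATIC 5	/* illustrative value only */

/* When reading a route back from the *BSD kernel that zebra itself
 * originally installed, the FRR protocol is unrecoverable: re-type it as
 * STATIC with distance 255 so a re-installing protocol wins after restart. */
static void classify_readback_route(bool self_originated, int *proto,
				    uint8_t *distance)
{
	if (self_originated) {
		*proto = ZEBRA_ROUTE_STATIC;
		*distance = 255;
	}
}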
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
FRR will crash when the re->type is a ZEBRA_ROUTE_ALL and it
is inserted into the meta-queue. Let's just put some basic
code in place to prevent a crash from happening. No routing
protocol should be using ZEBRA_ROUTE_ALL as a value but
bugs do happen. Let's just accept the weird route type
gracefully and move on.
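A defensive sketch of the guard, with hypothetical names and illustrative
constants:

#include <stddef.h>
#include <stdint.h>

#define ZEBRA_ROUTE_ALL   0	/* illustrative value only */
#define META_QUEUE_OTHER  4	/* fallback meta-queue slot */

/* No protocol should hand us ZEBRA_ROUTE_ALL, but if one does, map it to a
 * sane slot instead of indexing past the lookup table and crashing. */
static uint8_t meta_queue_slot(int route_type, const uint8_t *type_to_queue,
			       size_t table_len)
{
	if (route_type == ZEBRA_ROUTE_ALL || (size_t)route_type >= table_len)
		return META_QUEUE_OTHER;

	return type_to_queue[route_type];
}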
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exist some interface types that are slow on startup
to fully register their link speed, especially those that
work with an ASIC backend. The speed_update timer
associated with each interface would keep trying forever if the
system returned MAX_UINT32 as the speed. Under Linux this value
means either that the speed is unknown or that there is none.
Since some interface types are slow on startup, let's modify
FRR to try for at most 4 minutes and give up on those
interfaces where we never get any useful data.
Why 4 minutes? I wanted to balance the time associated with
slow interfaces coming up against those that will never give us
a value. So I chose 4 minutes as a good ballpark of time
to keep trying.
Why not track all those interfaces and just not attempt
the speed lookup? I would prefer not to keep track of these,
as I do not know all the interface types, nor do I wish
to keep updating the code as new ones come in.
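A sketch of the retry policy with hypothetical names; the 5-second retry
interval is an assumption, only the 4-minute cap comes from this commit:

#include <stdbool.h>
#include <stdint.h>

#define SPEED_RETRY_INTERVAL_SECS 5		/* assumed interval */
#define SPEED_RETRY_MAX_SECS      (4 * 60)	/* give up after ~4 minutes */

static bool should_retry_speed_query(uint32_t reported_speed,
				     uint32_t elapsed_secs)
{
	if (reported_speed != UINT32_MAX)
		return false;	/* got a real answer, stop retrying */

	/* Speed still unknown: retry, but only within the first 4 minutes. */
	return elapsed_secs + SPEED_RETRY_INTERVAL_SECS <= SPEED_RETRY_MAX_SECS;
}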
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
An end operator is reporting buffer overruns
when attempting to read from the kernel receive socket. It is
possible to adjust this buffer size to more modern levels, especially
for when the system is under load. Modify the code base
so that *BSD operators can use the zebra `-s XXX` option
to specify the read buffer size.
Additionally, set the default receive buffer size on *BSD
to 128k instead of 8k so that FRR does not run into
this issue again.
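A minimal sketch of bumping the routing-socket receive buffer; the function
name is invented and error handling is reduced to a perror():

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

static void routing_sock_set_rcvbuf(int fd, int requested_size)
{
	/* Use the operator-supplied `-s` size when given, else 128k. */
	int size = requested_size > 0 ? requested_size : 128 * 1024;

	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
		perror("setsockopt(SO_RCVBUF)");
}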
Fixes: #10666
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the dataplane to query and read interface NETCONF data;
add netconf-oriented data to the dplane context object, and
add accessors for it. Add handler for incoming update
processing.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Allow self-produced xxxNETCONF netlink messages through the BPF
filter we use. Just like address-configuration actions, we'll
process NETCONF changes in one path, whether the changes were
generated by zebra or by something else in the host OS.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
a) We'll need to pass the info up via some dataplane control method
(This way bsd and linux can both be zebra agnostic of each other)
b) We'll need to modify `struct interface *` to track this data
and when it changes to notify upper level protocols about it.
c) Work is needed to dump the entire mpls state at the start
so we can gather interface state. This should be done
after interface data gathering from the kernel.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Description:
===========
This change fixes the NHT resolution logic.
While recursively resolving a nexthop, keep looking for a valid/usable route
in the rib instead of stopping at the first/most-specific route.
Consider the following set of events taking place on R1:
R1(config)# ip route 2.2.2.0/24 ens192
R1# sharp watch nexthop 2.2.2.32 connected
R1# show ip nht
2.2.2.32(Connected)
resolved via static
is directly connected, ens192
Client list: sharp(fd 33)
-2.2.2.32 NHT is resolved over the above valid static route.
R1# sharp install routes 2.2.2.32 nexthop 2.2.2.32 1
R1# 2.2.2.32(Connected)
resolved via static
is directly connected, ens192
Client list: sharp(fd 33)
-A 2.2.2.32/32 route comes in which would resolve through itself; since this
is an invalid route, it is marked inactive and does not affect the NHT.
R1# sharp install routes 2.2.2.31 nexthop 2.2.2.32 1
R1# 2.2.2.32(Connected)
unresolved(Connected)
Client list: sharp(fd 50)
-Now a 2.2.2.31/32 route comes in, which resolves over the .32 route; as per
the current logic this triggers the NHT check, in turn making the NHT unresolved.
-With the fix, the NHT stays resolved as long as the valid static or
connected route stays installed.
Fix:
====
-While resolving nexthops, walk up the tree from the most-specific match
without applying any ZEBRA_NHT_CONNECTED check (see the sketch below).
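A sketch of that walk, using hypothetical, simplified structures (not the
actual zebra rib code):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified structures for illustration only. */
struct route {
	int type;
};

struct rib_node {
	struct rib_node *parent;	/* less-specific covering prefix */
	struct route *selected_route;	/* installed/selected route, if any */
	bool is_nexthop_itself;		/* node's prefix equals the nexthop */
};

/* Walk up from the most-specific match toward less-specific parents until a
 * usable route is found, instead of stopping at the first match. */
static struct route *resolve_nexthop(struct rib_node *most_specific)
{
	for (struct rib_node *rn = most_specific; rn; rn = rn->parent) {
		if (!rn->selected_route)
			continue;

		/* A route that would resolve through itself is skipped. */
		if (rn->is_nexthop_itself)
			continue;

		return rn->selected_route;
	}

	return NULL;	/* unresolved */
}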
Co-authored-by: Vishal Dhingra <vdhingra@vmware.com>
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Iqra Siddiqui <imujeebsiddi@vmware.com>
The recently-added hashtable of nlsock objects needs to be
thread-safe: it's accessed from the main and dplane pthreads.
Add a mutex for it, use wrapper apis when accessing it. Add
a per-OS init/terminate api so we can do init that's not
per-vrf or per-namespace.
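A sketch of the locking pattern with simplified stand-ins for the real hash
and `struct nlsock`:

#include <pthread.h>
#include <stddef.h>

struct nlsock {
	int fd;
};

#define NLSOCK_TABLE_SIZE 16
static struct nlsock *nlsock_table[NLSOCK_TABLE_SIZE];
static pthread_mutex_t nlsock_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Both the main and dplane pthreads go through wrappers that hold the
 * mutex, so the table is never read while being modified. */
static struct nlsock *nlsock_lookup(int fd)
{
	struct nlsock *found = NULL;

	pthread_mutex_lock(&nlsock_mutex);
	for (size_t i = 0; i < NLSOCK_TABLE_SIZE; i++) {
		if (nlsock_table[i] && nlsock_table[i]->fd == fd) {
			found = nlsock_table[i];
			break;
		}
	}
	pthread_mutex_unlock(&nlsock_mutex);

	return found;
}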
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
NetDEF CI has been whining about multiline string style.
Make the strings single-line and call it a day.
Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
Making multiple ioctl calls on the same ifreq will result in
ifreq being overwritten with garbage data. In if_get_flags,
keep the flags field safe from a possible subsequent ioctl
call before applying the flags field.
Modified code as per code review done by Donald Sharp.
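A sketch of the precaution (the surrounding code is hypothetical; the ioctls
are the standard ones):

#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>

static int get_if_flags(int sock, const char *name, short *flags_out)
{
	struct ifreq ifr;
	short flags;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, name, sizeof(ifr.ifr_name) - 1);

	if (ioctl(sock, SIOCGIFFLAGS, &ifr) < 0)
		return -1;

	/* Save the flags before any further ioctl reuses `ifr`, since the
	 * kernel may overwrite the union inside it. */
	flags = ifr.ifr_flags;

	/* ... another ioctl on `ifr` could go here ... */

	*flags_out = flags;
	return 0;
}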
Signed-off-by: Bijan <bijanebrahimi@riseup.net>
Currently when the kernel sends netlink messages to FRR,
the buffers used to receive this data are of fixed length.
The kernel, with certain configurations, will send
netlink messages that are larger than this fixed length.
This leads to situations where, on startup, zebra gets
really confused about the state of the kernel. Effectively
the current algorithm is this:
    read up to the buffer size
    while (data to parse)
        get the netlink message header, look at its size
        parse it if you can
The problem is that we read into a 32k buffer.
We get the first message, say 1k in size,
subtract that 1k, leaving 31k left to parse. We then
get the next header and notice that the length
of the message is 33k, which is obviously larger
than what we read in. FRR has no recovery mechanism,
nor is there a way to know, a priori, the maximum
size the kernel will send us.
Modify FRR to look at the kernel message and see if the
buffer is large enough; if not, make it large enough to
read in the message.
This code has to be per netlink socket because of the use
of pthreads, so add the buffer and its current length to
`struct nlsock`, growing it as necessary.
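One way to do this (an illustration, not necessarily the exact zebra
implementation) is to peek at the pending datagram with MSG_PEEK | MSG_TRUNC,
which returns the full message length for netlink sockets, grow the
per-socket buffer if needed, and then do the real read:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

struct nl_buf {
	char *buf;
	size_t buflen;
};

static ssize_t netlink_read_grow(int fd, struct nl_buf *nb)
{
	struct iovec iov = { .iov_base = NULL, .iov_len = 0 };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
	ssize_t need;

	/* With MSG_TRUNC the return value is the full datagram size, even
	 * though nothing is copied out. */
	need = recvmsg(fd, &msg, MSG_PEEK | MSG_TRUNC);
	if (need < 0)
		return need;

	if ((size_t)need > nb->buflen) {
		char *tmp = realloc(nb->buf, (size_t)need);

		if (!tmp)
			return -1;
		nb->buf = tmp;
		nb->buflen = (size_t)need;
	}

	/* Now read for real into a buffer known to be large enough. */
	iov.iov_base = nb->buf;
	iov.iov_len = nb->buflen;
	return recvmsg(fd, &msg, 0);
}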
Fixes: #10404
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Store the fd that corresponds to the appropriate `struct nlsock` and pass
that around in the dplane context instead of the pointer to the nlsock.
Modify the kernel_netlink.c code to store in a hash the `struct nlsock`
with the socket fd as the key.
Why do this? The dataplane context is used to pass around the `struct nlsock`,
but the zebra code has a bug where the buffer used to receive netlink
messages from the kernel is not big enough. So we need to dynamically
grow the receive buffer per socket, instead of reading into a fixed buffer.
By passing around the fd we can look up the `struct nlsock`
that will soon have the associated buffer, without worrying about the `const`
issues that would otherwise arise.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Store and use the sequence number instead of using what is in
the `struct nlsock`. Future commits will move away from storing
the `struct nlsock`, and the copy of the nlsock was what guaranteed
unique sequence numbers per message, so let's store the
sequence number to use instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Using memcmp is wrong because struct ipaddr may contain uninitialized
padding bytes that should not be compared.
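A sketch of a padding-safe comparison, using a simplified stand-in for the
real struct ipaddr:

#include <stdbool.h>
#include <string.h>
#include <netinet/in.h>

/* Simplified stand-in for the real structure. */
enum ipaddr_type { IPADDR_V4, IPADDR_V6 };

struct ipaddr {
	enum ipaddr_type ipa_type;
	union {
		struct in_addr v4;
		struct in6_addr v6;
	} ip;
};

/* Compare the address family first, then only the meaningful bytes of the
 * union, so uninitialized padding never influences the result. */
static bool ipaddr_same(const struct ipaddr *a, const struct ipaddr *b)
{
	if (a->ipa_type != b->ipa_type)
		return false;

	if (a->ipa_type == IPADDR_V4)
		return memcmp(&a->ip.v4, &b->ip.v4, sizeof(a->ip.v4)) == 0;

	return memcmp(&a->ip.v6, &b->ip.v6, sizeof(a->ip.v6)) == 0;
}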
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>