Report the route's metric in IPFORWARDMETRIC1 and return
-1 for the other metrics, as the IP-FORWARD-MIB specifies.
inetCidrRouteMetric2 OBJECT-TYPE
    SYNTAX Integer32
    MAX-ACCESS read-create
    STATUS current
    DESCRIPTION
        "An alternate routing metric for this route. The
         semantics of this metric are determined by the routing-
         protocol specified in the route's inetCidrRouteProto
         value. If this metric is not used, its value should be
         set to -1."
    DEFVAL { -1 }
    ::= { inetCidrRouteEntry 13 }
I've quoted metric2 above, but the wording is the same for all of the
alternate metrics.
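A minimal sketch of the resulting behavior (the handler and constant
names here are illustrative, not the actual zebra_snmp.c symbols):

```
/* Sketch: only METRIC1 carries the route's metric; every other
 * metric column reports -1, which the MIB defines as "not used".
 */
static long route_metric_get(const struct route_entry *re, int column)
{
	if (column == INETCIDRROUTEMETRIC1)
		return re->metric;

	return -1;
}
```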
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The SNMP walk of the zebra RIB was skipping entries
because in_addr_cmp was replaced with prefix_cmp,
which ordered entries slightly differently, causing parts
of the zebra RIB tree to be skipped.
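Roughly, the two comparators order keys differently (simplified
sketch: prefix_cmp() in lib/prefix.c also considers family and prefix
length, while the old helper compared raw address bytes):

```
#include <string.h>
#include <netinet/in.h>

/* Old-style key comparison: raw v4 address bytes only. A tree walk
 * written against this ordering can skip nodes when the ordering
 * changes underneath it.
 */
static int in_addr_cmp(const struct in_addr *a, const struct in_addr *b)
{
	return memcmp(a, b, sizeof(struct in_addr));
}
```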
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
zebra_nhg_install_kernel takes a route type. We don't
know it at that particular spot, but we should not be passing
in `true`. Let's use ZEBRA_ROUTE_MAX to indicate that we do
not know, so that the correct thing is done.
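Before/after at the call site, as sketched from the description above:

```
/* Before: a bool where a route type belongs. */
zebra_nhg_install_kernel(nhe, true);

/* After: an explicit "we do not know the type" sentinel. */
zebra_nhg_install_kernel(nhe, ZEBRA_ROUTE_MAX);
```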
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Comments that explain what a variable is doing in
the middle of a function call make the formatting
extremely hard to read. Remove them.
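An illustrative example of the pattern being removed (foo() is
hypothetical):

```
/* Hard to read: commentary wedged into the argument list. */
foo(vrf_id, /* the table we are looking at */ table_id, p);

/* Readable: the call stands on its own; explain above it if needed. */
foo(vrf_id, table_id, p);
```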
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Trying to debug some cross-vrf stuff in zebra, and frankly
it's hard to grep the log for the routes you are interested
in. Let's clean this up a bit and provide better
information for us developers.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The `show zebra dplane providers` command was omitting
the input and output queues to the dplane itself.
It would be nice to have this insight as well.
New output:
r1# show zebra dplane providers
dataplane Incoming Queue from Zebra: 100
Zebra dataplane providers:
Kernel (1): in: 6, q: 0, q_max: 3, out: 6, q: 14, q_max: 3
dplane_fpm_nl (2): in: 6, q: 10, q_max: 3, out: 6, q: 0, q_max: 3
dataplane Outgoing Queue to Zebra: 43
r1#
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The dplane providers have a concept of input queues
and output queues. These queues are chained together
during normal operation. The code in zebra also has
a feedback mechanism where the MetaQ will not run when
the first input queue is backed up. Having the dplane_fpm_nl
code grab all contexts when it is backed up prevents
this system from behaving appropriately.
Modify the code to not add to the dplane_fpm_nl's internal
queue when it is already full. This will allow the backpressure
to work appropriately in zebra proper.
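In sketch form (the names are assumptions, not the actual
dplane_fpm_nl fields):

```
/* If the internal queue is already at its limit, take nothing
 * more from the provider input queue; contexts then back up into
 * zebra proper, where the MetaQ feedback mechanism can react.
 */
if (internal_queue_len >= internal_queue_limit)
	return;
```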
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently when the dplane_thread_loop is run, it moves contexts
from the dg_update_list and puts the contexts on the input queue
of the first provider. This provider is given a chance to run,
and then the items on its output queue are pulled off and placed
on the input queue of the next provider. Rinse/repeat down through
the entire list of providers. Now imagine that we have a list
of multiple providers and the last provider is getting backed up.
Contexts will end up sticking in the input queue of the `slow`
provider, and that queue can grow without bound. This is a real problem
when you have a situation where an interface is flapping and an
upper-level protocol is sending a continuous stream of route
updates to reflect the change in ECMP. You can end up with
a very large backlog of contexts. This is bad because
zebra can easily grow to a very large memory size, and on
restricted systems you can run out of memory. Fortunately
for us, the MetaQ already participates in this process
by not doing more route processing until the dg_update_list
goes below the working limit of dg_updates_per_cycle. Thus,
if FRR modifies the behavior of this loop to not move more
contexts onto the input queue when either the input queue
or output queue of the next provider has reached this limit,
FRR will naturally start handling backpressure for the dplane
context system and memory will not grow out of control.
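In sketch form, the loop gains a check like this (names illustrative):

```
/* Stop feeding the next provider once either of its queues has
 * reached the work limit; leave remaining contexts where they
 * are and let the backpressure propagate back to the MetaQ.
 */
if (next_in_queue_len >= limit || next_out_queue_len >= limit)
	break;
```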
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The ctx queue data structures already have a counter
associated with them. Let's just use those counters instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When trying to track down an MTYPE_TMP memory leak,
it's harder to search for it when you happen to
have some usage of ttable_dump. Let's just give
ttable_dump its own memory type so that we can avoid
confusion in the future.
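The usual FRR pattern for this, sketched (the exact MTYPE name and
description are illustrative):

```
#include "memory.h"

DEFINE_MTYPE_STATIC(LIB, TTABLE, "ASCII table");

/* Allocations that previously used MTYPE_TMP now show up under
 * their own bucket in `show memory`. */
char *buf = XCALLOC(MTYPE_TTABLE, size);
/* ... use buf ... */
XFREE(MTYPE_TTABLE, buf);
```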
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently the FRR code will receive both kernel and
connected routes that do not actually have an underlying
nexthop group at all. Zebra turns around and creates
a `matching` nexthop hash entry and installs it.
For connected routes, this will create two singleton
nexthops in the dplane per interface (v4 and v6).
For kernel routes it will just create one singleton
nexthop that might or might not be used.
This is bad because the dplane has a limited amount
of space available for nexthop entries, and if you
happen to have a large number of interfaces, then
all of a sudden you have 2x (# of interfaces) singleton
nexthops.
Let's modify the code to delay creation of these singleton
nexthops until they have been used by something else in the
system.
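A sketch of the gating idea (the refcnt field exists on zebra's
nhg_hash_entry; where exactly the check lives is illustrative):

```
/* Only push the singleton to the dplane once something in the
 * system actually references it.
 */
if (nhe->refcnt > 0)
	zebra_nhg_install_kernel(nhe, ZEBRA_ROUTE_MAX);
```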
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There is a code path that could theoretically get you
to a point where ng->nexthop is NULL.
Let's just make sure the SA system believes that
cannot happen anymore.
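Something as small as an explicit assertion encodes the invariant
for the analyzer (sketch):

```
/* ng->nexthop cannot be NULL past this point; make the
 * invariant explicit for the static analyzer.
 */
assert(ng->nexthop);
```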
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
A blackhole nexthop, according to the Linux kernel,
can be v4 or v6. A v4 blackhole nexthop cannot be
used on a v6 route, but a v6 blackhole nexthop can
be used with a v4 route. Convert all blackhole
singleton nexthops to v6 and just use that,
possibly reducing the number of active nexthops by one.
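A sketch of the normalization (the identifiers are from
lib/nexthop.h; exact placement in zebra's NHG code is illustrative):

```
/* Hash and install every blackhole singleton as v6: the kernel
 * accepts a v6 blackhole for v4 routes, but not the reverse.
 */
if (nh->type == NEXTHOP_TYPE_BLACKHOLE)
	afi = AFI_IP6;
```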
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Let's display the afi of the nexthop hash entry. Right
now it is impossible to tell the difference between v4 and
v6 nexthops, even though the distinction matters to the kernel.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Move the prefix lookup/comparison out of the re loop
and into the rn loop, since that is where the code should
actually be.
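Schematically, using the lib iteration macros, the shape becomes:

```
for (rn = route_top(table); rn; rn = route_next(rn)) {
	/* Per-prefix work happens once per route node... */
	if (!prefix_match(p, &rn->p))
		continue;

	RNODE_FOREACH_RE (rn, re) {
		/* ...not repeated here for every route entry. */
	}
}
```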
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exists a path in rib_add_multipath where, if a decision
is made to not use the passed-in re, we just drop the memory
instead of freeing it. Let's free it.
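The shape of the fix (MTYPE_RE is zebra's route-entry allocation
type; the condition name is illustrative):

```
/* Early-return path that previously just dropped the memory. */
if (!using_passed_in_re) {
	XFREE(MTYPE_RE, re);
	return;
}
```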
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Current code intentionally ignores kernel routes. Modify
zebra to allow these routes to be read in on Linux. Also
modify zebra to look to see whether a route should be treated
as connected and mark it as such.
Additionally this should properly handle some of the issues
being seen with NOPREFIXROUTE.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The prefixes p and src_p are not const. Let's make
them so. This is useful to signal that we will not change the
data.
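Schematically, with a hypothetical signature:

```
/* Before */
void rib_handle(struct prefix *p, struct prefix_ipv6 *src_p);

/* After: callers can now see that the prefixes are read-only. */
void rib_handle(const struct prefix *p, const struct prefix_ipv6 *src_p);
```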
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, when FRR has two nexthop groups:
A
nexthop 1 weight 5
nexthop 2 weight 6
nexthop 3 weight 7
B
nexthop 1 weight 3
nexthop 2 weight 4
nexthop 3 weight 5
We end up with 5 singleton nexthops and two groups:
ID: 181818168 (sharp)
RefCnt: 1
Uptime: 00:04:52
VRF: default
Valid, Installed
Depends: (69) (70) (71)
via 192.168.119.1, enp13s0 (vrf default), weight 182
via 192.168.119.2, enp13s0 (vrf default), weight 218
via 192.168.119.3, enp13s0 (vrf default), weight 255
ID: 181818169 (sharp)
RefCnt: 1
Uptime: 00:02:08
VRF: default
Valid, Installed
Depends: (71) (127) (128)
via 192.168.119.1, enp13s0 (vrf default), weight 127
via 192.168.119.2, enp13s0 (vrf default), weight 170
via 192.168.119.3, enp13s0 (vrf default), weight 255
id 69 via 192.168.119.1 dev enp13s0 scope link proto 194
id 70 via 192.168.119.2 dev enp13s0 scope link proto 194
id 71 via 192.168.119.3 dev enp13s0 scope link proto 194
id 127 via 192.168.119.1 dev enp13s0 scope link proto 194
id 128 via 192.168.119.2 dev enp13s0 scope link proto 194
id 181818168 group 69,182/70,218/71,255 proto 194
id 181818169 group 71,255/127,127/128,170 proto 194
This is not a desirable state to be in. If you have a
link flapping in the network and weights are changing
rapidly, you end up with a large number of singleton
nexthops that are being used by the nexthop groups.
This fills up ASIC space and clutters the table.
Additionally, singleton nexthops cannot carry a weight,
and attempting to create singleton
nexthops with different weights means nothing to the
Linux kernel (or any ASIC dplane). Let's modify
the code to always create the singleton nexthops
without a weight and then create the
NHGs that use the singletons with the appropriate
weight.
ID: 181818168 (sharp)
RefCnt: 1
Uptime: 00:00:32
VRF: default
Valid, Installed
Depends: (22) (24) (28)
via 192.168.119.1, enp13s0 (vrf default), weight 182
via 192.168.119.2, enp13s0 (vrf default), weight 218
via 192.168.119.3, enp13s0 (vrf default), weight 255
ID: 181818169 (sharp)
RefCnt: 1
Uptime: 00:00:14
VRF: default
Valid, Installed
Depends: (22) (24) (28)
via 192.168.119.1, enp13s0 (vrf default), weight 153
via 192.168.119.2, enp13s0 (vrf default), weight 204
via 192.168.119.3, enp13s0 (vrf default), weight 255
id 22 via 192.168.119.1 dev enp13s0 scope link proto 194
id 24 via 192.168.119.2 dev enp13s0 scope link proto 194
id 28 via 192.168.119.3 dev enp13s0 scope link proto 194
id 181818168 group 22,182/24,218/28,255 proto 194
id 181818169 group 22,153/24,204/28,255 proto 194
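A sketch of the core idea (nexthop_copy() is the real lib helper;
the find-or-create step is elided):

```
/* Look up / create the singleton with its weight zeroed so that
 * differently weighted uses of the same nexthop share one entry;
 * the weight is applied only in the group that references it.
 */
struct nexthop lookup;

nexthop_copy(&lookup, nh, NULL);
lookup.weight = 0;
/* ... find-or-create the singleton NHE from `lookup` ... */
```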
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
All routes received by zebra from upper-level protocols have a weight
of 1. Let's just make everything extremely consistent in our code.
Lots of tests needed to be fixed up to make this work.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The Linux kernel adds 1 upon receipt of a weight; if you
send a 255 it gets unhappy. Let's limit the range to 254,
since the kernel does not like being sent 255.
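As a sketch, a clamp at the boundary is enough (MIN() comes from
FRR's headers):

```
/* The kernel adds 1 to the weight it receives, so 255 would step
 * outside the accepted range; cap what we send at 254.
 */
uint8_t weight = MIN(nh->weight, 254);
```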
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Commit 605df8d44 ("zebra: Use zebra dplane for RTM link and addr") broke loading of
kernel routes at startup for configurations without netlink.
It rearranged the startup sequence in zebra_ns_enable() to pass via
zebra_ns_startup_continue(), triggered through zebra_dplane_startup_stage()
calls. However, it neglected to make these calls in the non-netlink code path.
As a result, zebra failed to load kernel routes at startup on platforms such
as FreeBSD.
Insert these calls so we run through all of the expected startup stages.
Signed-off-by: Kristof Provost <kprovost@netgate.com>
The function zebra_nhg_hash_equal is only used
as a hash function for storage and retrieval of NHGs.
If you have, say, two NHGs:
31 (25/26)
32 (25/26)
this function would return them as being equal. That
of course leads to a problem when you attempt to
hash_release 32 but release 31 from the hash instead. Later,
when you attempt to do hash comparisons, 32 has actually
been freed, leading to use-after-free situations, and things
go downhill fast.
This hash is only used as part of the hash comparison
function for nexthop group storage. Since that is so,
let's always return that the 31/32 NHGs are not equal at all.
We possibly have a different problem where we are creating
31 and 32 (when 31 should have just been used instead of 32),
but we need to prevent any type of hash release problem at all.
That supersedes the other issue (which should be tracked down
on its own), since a use-after-free situation
that leads to a crash is far worse than some possible nexthop group
duplication, which is very minor in comparison.
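A sketch of the extra check (field names per zebra's nhg_hash_entry;
exact placement per zebra_nhg.c):

```
/* Entries with different IDs must never compare equal, even if
 * their nexthops match; otherwise releasing one can free the
 * other.
 */
if (nhe1->id != nhe2->id)
	return false;
```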
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When the prefix-list is not found, show which AFI is the real one we are
looking for.
E.g., from this output alone it is not clear which AFI was being matched:
```
[RYF1Z-ZKDRS] route_match_address_prefix_list: Prefix List p1 specified does not exist defaulting to NO_MATCH
```
route_match_address_prefix_list() is called by both route_match_ipv6_address_prefix_list()
and route_match_ip_address_prefix_list().
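E.g. the message can carry the AFI (sketch; afi2str() is the lib
helper, and the surrounding variable names are illustrative):

```
zlog_debug(
	"%s: (%s) Prefix List %s specified does not exist defaulting to NO_MATCH",
	__func__, afi2str(afi), prefix_list_name);
```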
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
It is possible that right before an upper-level protocol dies
or is killed, routes are installed into zebra. These routes
could be on the Meta-Q for early route processing, leaving us with
a situation where the client is removed, along with all of its routes
that are in the RIB at that time, and then afterwards the MetaQ is run
and the routes are reprocessed, leaving routes in the RIB from an
upper-level daemon that, from zebra's perspective, has already gone away.
These routes are then abandoned.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>