This is the first in a series of commits whose goal is to rename
the thread system in FRR to an event system. There is a continual
problem where people confuse `struct thread` with a true
pthread. In reality, our entire thread.c is an event system.
In this commit, rename the thread.[ch] files to event.[ch].
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
a) Consolidate v4 and v6 versions of rib_match_multicast
b) Improve the debug output to show what we matched against as well.
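A rough sketch of the consolidation idea; the signature and the rib_match() call below are assumptions for illustration, not the exact zebra API:

/* Hypothetical consolidated form: one function keyed on the address
 * family instead of separate IPv4/IPv6 copies. */
struct route_entry *rib_match_multicast(afi_t afi, vrf_id_t vrf_id,
					union g_addr *gaddr,
					struct route_node **rn_out)
{
	struct route_entry *re;

	/* Same lookup for both families; only the AFI differs. */
	re = rib_match(afi, SAFI_MULTICAST, vrf_id, gaddr, rn_out);

	if (re && rn_out && *rn_out && IS_ZEBRA_DEBUG_RIB)
		zlog_debug("%s: matched against %pFX", __func__,
			   &(*rn_out)->p);

	return re;
}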
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In `rib_link`, if is_zebra_import_table_enabled returns
true, `rib_queue_add` will not be called, resulting in the
route node of the other table never being processed. This
actually should not depend on whether the route is imported.
In `rib_delnode`, if is_zebra_import_table_enabled returns
true, it will use `rib_unlink` instead of enqueuing the
route node for processing. There is no reason that imported
route nodes should not be reprocessed. Long ago, the
behaviour was dependent on whether the route_entry came
from a table other than main.
Signed-off-by: zyxwvu Shi <i@shiyc.cn>
Use the already existing MPLS label code to store VNI
info for VXLAN. VNIs are defined as labels just like MPLS,
so we should be using the same code for both.
This patch is the first part of that. Next we will need to
abstract the label code to not be so MPLS-specific. Currently,
we are just treating VXLAN as a label type and storing
it that way.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Currently `ip import-table 33` imports routes with
a distance of 15, as defined by zebra.h. zebra_rib.c
on the other hand believes the default value for the table
is 150. Let's make them agree with each other.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the defines for distance that are in zebra.h. We could
easily end up in a situation where we don't agree with ourselves, so
let's convert zebra to use the defines in zebra.h.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There existed the idea, from Volta, that a nexthop group might not have
the same nexthops installed -vs- what FRR actually sent down, and that
the dplane would notify you about it.
With the addition of 06525c4f99
the code was put behind a bit of a wall that controlled its
usage.
The ROUTE_ENTRY_USE_FIB_NHG flag was being used
to control which set was being sent up to concerned parties
in nexthop tracking. Put this flag behind the wall as well and
do not necessarily set it when we receive a data plane
notification about a route being installed or not.
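A very rough sketch of the intent; the guard variable and handler below are hypothetical, not the actual zebra code:

/* Hypothetical knob acting as the "wall" around the Volta behavior. */
static bool use_fib_nhg_notifications;

static void route_notify_set_fib_nhg(struct route_entry *re)
{
	if (!use_fib_nhg_notifications) {
		/* Default path: NHT keeps reporting the nexthops FRR
		 * originally sent down, not the FIB-specific view. */
		UNSET_FLAG(re->status, ROUTE_ENTRY_USE_FIB_NHG);
		return;
	}

	/* Only when the special handling is enabled do we switch the
	 * reported set over to what the dplane says is installed. */
	SET_FLAG(re->status, ROUTE_ENTRY_USE_FIB_NHG);
}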
Fixes: #12706
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After calling `rib_unlink` the variable `re` will point to `free()`d
memory, so don't attempt to use it after this point.
Found by Coverity Scan (Coverity ID 1519784)
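The general shape of the fix, as a minimal illustration (not the exact code):

/* Before: 're' was still referenced after rib_unlink() freed it. */
rib_unlink(rn, re);
/* ... later use of re ...  <-- use-after-free flagged by Coverity */

/* After: do anything that needs 're' first, then unlink and forget it. */
if (IS_ZEBRA_DEBUG_RIB)
	zlog_debug("%s: removing re %p from %pFX", __func__, re, &rn->p);
rib_unlink(rn, re);
re = NULL; /* never dereference it past this point */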
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
When FRR receives a notification from the kernel about a route's
offload success/failure, the metric being reported is not
going to be correct, since we may not know it appropriately
at this point in time. Set the metric to something
appropriate instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we are notified by the kernel about a route being offloaded
or not, correctly set the distance.
Ticket: CM-33097
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The early route queue has a series of `struct zebra_early_route *`
entries. Zebra is treating this memory as just a `struct route_entry`.
This is wrong. Correct this to free the memory correctly.
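A hypothetical illustration of the difference (the structure layout and the MTYPE are assumptions):

/* The early route queue holds wrappers, not bare route entries. */
struct zebra_early_route {
	afi_t afi;
	safi_t safi;
	struct prefix p;
	struct route_entry *re;
	/* ... */
};

static void early_route_free(struct zebra_early_route *ere)
{
	/* Any embedded route_entry needs its own cleanup first ... */

	/* ... then free the wrapper with the type it was allocated as,
	 * instead of freeing it as if it were a struct route_entry. */
	XFREE(MTYPE_WQ_WRAPPER /* illustrative MTYPE */, ere);
}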
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The wq->spec.errorfunc is never used in the code.
It's been in the code base since 2005 and I also
do not remember ever seeing it being called. No
workqueue process function ever returns an error.
Since it's not used, let's just remove it from the
code base.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Read from the fpm dplane a route update that will
include status about whether or not the ASIC was
successful in offloading the route.
Have this data passed up to zebra for processing and disseminate
it as appropriate.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Volta submitted notification changes for the dplane that had a
special use case for their system. Volta is no more, the code
is not being actively developed, and from talking with ex-Volta
employees there are no current plans to even maintain this code.
Wrap the special handling of nexthops that their asic-dataplane
did in a bit of code to isolate it and allow for future removal,
as I do not actually believe anyone else is using this code.
Add a CPP_NOTICE several years into the future that will tell us
to remove the code. If someone starts using it, then they will
have to notice this variable in order to set it, and hopefully they
will see my CPP_NOTICE and come talk to us. If it is being used, then
we can just remove this wrapper.
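FRR's usual pattern for that kind of delayed reminder looks roughly like this (the variable name and date below are made up for illustration):

/* Made-up knob name: nothing sets this today, so the Volta-specific
 * handling stays dormant unless someone deliberately turns it on. */
static bool fib_nhg_notifications;

#if CONFDATE > 20261231
CPP_NOTICE("Remove the Volta-specific FIB nexthop-group notification code")
#endif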
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This allows Zebra to manage QDISC, TCLASS, and TFILTER in the kernel and do
cleanup jobs when it starts up.
Signed-off-by: Siger Yang <siger.yang@outlook.com>
When zebra receives routes from upper level protocols it decodes the
zapi message and places the routes on the metaQ for processing. Suppose
we have a route A that is already installed by some routing protocol.
And there is a route B that has a nexthop that will be recursively
resolved through A. Imagine if a route replace operation for A is
going to happen from an upper level protocol at about the same time
the route B is going to be installed into zebra. If these routes
are received, and decoded, at about the same time there exists a
chance that the metaQ will contain both of them at the same time.
If the order of installation is [ B, A ]. B will be resolved
correctly through A and installed, A will be processed and
re-installed into the FIB. If the nexthops have changed for
A then the owner of B should be notified about the change (and B
can do the correct action here and decide to withdraw or re-install).
Now imagine if the order of routes received for processing on the
metaQ is [ A, B ]. A will be received, processed and sent to the
dataplane for reinstall. B will then be pulled off the metaQ and
fail the install since A is in a `not Installed` state.
Let's loosen the restriction in nexthop resolution for B such
that if the route we are dependent on is a route replace operation,
we allow the resolution to succeed. This requires zebra to track a new
route state (ROUTE_ENTRY_ROUTE_REPLACING) that can be looked at
during nexthop resolution (see the sketch after the cases below).
I believe this is ok because A is
a route replace operation, which could result in this:
-route install failed, in which case B should be nht'ing and
will receive the nht failure and the upper level protocol should
remove B.
-route install succeeded, no nexthop changes. In this case
allowing the resolution for B is ok, NHT will not notify the upper
level protocol so no action is needed.
-route install succeeded, nexthop changes. In this case
allowing the resolution for B is ok, NHT will notify the upper
level protocol and it can decide to reinstall B or not based
upon its own algorithm.
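A simplified sketch of the loosened check during nexthop resolution (flag usage is illustrative; only ROUTE_ENTRY_ROUTE_REPLACING is the new state named above):

/* 'match' is the route_entry for A that B is trying to resolve through. */
if (!CHECK_FLAG(match->status, ROUTE_ENTRY_INSTALLED) &&
    !CHECK_FLAG(match->status, ROUTE_ENTRY_ROUTE_REPLACING))
	/* A is genuinely not installed: keep refusing the resolution. */
	return 0;

/* A is installed, or is mid route-replace: allow B to resolve and let
 * NHT notify B's owner if A's nexthops end up changing. */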
This set of events was found by the bgp_distance_change topotest(s).
Effectively the tests were looking for the bug (A, B order in the metaQ)
as the `correct` state. When under very heavy load, the A, B ordering
caused A to just be installed and fully resolved in the dataplane before
B was processed (which is entirely possible).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Fix issue #11996.
When removing a VRF (all routes of this VRF), zebra mistakenly forgot to check
whether its routes were in the update queue of the FPM. So the FPM module will crash
while dealing with these routes, which are already freed.
Add a new HOOK `rib_shutdown()`; `zebra_rtable_node_cleanup()` will use it
to remove these routes from the FPM module's update queue before freeing them.
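Roughly, using FRR's hook macros (the hook's argument list here is an assumption):

/* Declared in a zebra header so the FPM module can subscribe: */
DECLARE_HOOK(rib_shutdown, (struct route_entry *re), (re));

/* Defined and invoked from the table cleanup path in zebra_rib.c: */
DEFINE_HOOK(rib_shutdown, (struct route_entry *re), (re));

static void zebra_rtable_node_cleanup(struct route_table *table,
				      struct route_node *node)
{
	struct route_entry *re, *next;

	RNODE_FOREACH_RE_SAFE (node, re, next) {
		/* Let dplane_fpm_nl pull this route off its update
		 * queue before the memory goes away. */
		hook_call(rib_shutdown, re);
		rib_unlink(node, re);
	}
}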
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Currently if an operator does this operation:
sharpd@eva ~/frr8> sudo ip nexthop add id 5000 via 192.168.119.44 dev enp39s0 ; sudo ip route add 10.0.0.1 nhid 5000
2022/06/30 08:52:40 ZEBRA: [ZHQK5-J9M1R] proto2zebra: Please add this protocol(0) to proper rt_netlink.c handling
2022/06/30 08:52:40 ZEBRA: [PS16P-365FK][EC 4043309076] Zebra failed to find the nexthop hash entry for id=5000 in a route entry
sharpd@eva ~/frr8> vtysh -c "show ip route 10.0.0.1"
Routing entry for 0.0.0.0/0
Known via "kernel", distance 0, metric 100, best
Last update 00:01:58 ago
* 192.168.119.1, via enp39s0
The route is dropped by zebra with no warnings. This is not good,
but unlikely to happen at this point in time. In order to fix
this issue, route processing from inputs needs to happen after nexthop
group processing from inputs. This was not possible because
nexthop groups are placed on the metaQ. As such the above
nexthop group creation is placed on the metaQ for processing
in META_QUEUE_NHG. Then the route is read in and processed
immediately. The nexthop group is not found (not processed yet!)
and the route is dropped in zebra.
Modify the code to do the early route validity processing
on the MetaQ. This preserves the order of operations.
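Conceptually, the ordering looks like this (the enum below is illustrative, not the exact set of sub-queues zebra defines):

/* Illustrative sub-queue ordering: NHG work drains before the early
 * route work, so a route referencing nhid 5000 is only validated after
 * the nexthop group from the same zapi batch has been processed. */
enum meta_queue_indexes {
	META_QUEUE_NHG,		/* nexthop group input */
	META_QUEUE_EARLY_ROUTE,	/* raw route input, validated here */
	META_QUEUE_CONNECTED,
	META_QUEUE_KERNEL,
	/* ... the per-protocol sub-queues ... */
	META_QUEUE_MAX,
};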
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Convert label processing that comes from zapi messages
into being handled by the meta-Q. This is because early
route processing is going to be moved to the meta-Q as
well and we will have a chicken and egg problem without
moving this code to be processed by the meta-Q.
Ordering of messages from ospf as an example:
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:62] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:43] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:61] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:66] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
The ZEBRA_MPLS_LABELS_REPLACE messages immediately turn around and attempt to replace nexthop labels on routes that
were just added. If the route add is placed on the metaQ, the route will not exist yet and as such the label replace
will fail.
Modify the zebra code to take the label operations and place them on the metaQ as well.
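A rough sketch of the shape of the change (the helper below is hypothetical): instead of acting on the label message straight from the zapi read path, park it on the metaQ so it drains in order with the route work it depends on.

/* Illustrative only: queue the decoded label operation rather than
 * applying it immediately while the matching ZEBRA_ROUTE_ADD may still
 * be sitting on the metaQ. */
static void zebra_mpls_labels_replace(struct zebra_vrf *zvrf,
				      const struct zapi_labels *zl)
{
	/* Hypothetical helper that appends to a label sub-queue of the
	 * metaQ; the real processing happens when that sub-queue is
	 * drained, after the route sub-queues. */
	mpls_labels_meta_queue_enqueue(zvrf, zl, true /* replace */);
}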
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This commit implements necessary netlink encoders for traffic control
including QDISC, TCLASS and TFILTER, and adds basic dplane operations.
Co-authored-by: Stephen Worley <sworley@nvidia.com>
Signed-off-by: Siger Yang <siger.yang@outlook.com>
For whatever reason, ZEBRA_ROUTE_SYSTEM routes were being processed
last. Since a system route is just another kernel route type, let's
just switch it to be processed at the same time as kernel routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There were more than a few places where the NHG meta
queue was not being explicitly called out. Let's
be consistent and use the same nomenclature as much
as possible when talking about metaQs.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
convert:
frr_with_mutex(..)
to:
frr_with_mutex (..)
To make all our code agree with what clang-format is going to produce
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
New output example:
2022-07-03 09:40:29.310 [DEBG] zebra: [JF0K0-DVHWH] rib_meta_queue_add: (0:254):4.5.6.8/32: queued rn 0x55937f586ee0 into sub-queue Kernel Routes
2022-07-03 09:40:29.321 [DEBG] zebra: [HH6N2-PDCJS] default(0:254):4.5.6.8/32 rn 0x55937f586ee0 dequeued from sub-queue Kernel Routes
Let's make it a bit more human readable.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Instead of having a global allow_delete, move it to
where it belongs in the zrouter data structure.
Additionally, show this data in `show zebra`.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The rib_process_dplane_results function was having each
sub-function handler process the results and then
free the ctx. That's a lot of functionality that needs to remember
to free the context. Let's just free it in the main loop.
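Roughly, the result loop becomes the single owner of the context (only the route cases are shown):

/* Sketch: the main results loop frees each context exactly once; the
 * per-operation handlers only consume it. */
struct zebra_dplane_ctx *ctx;

while ((ctx = dplane_ctx_dequeue(&ctxlist)) != NULL) {
	switch (dplane_ctx_get_op(ctx)) {
	case DPLANE_OP_ROUTE_INSTALL:
	case DPLANE_OP_ROUTE_UPDATE:
	case DPLANE_OP_ROUTE_DELETE:
		rib_process_result(ctx);	/* no longer frees ctx */
		break;
	default:
		break;
	}

	dplane_ctx_fini(&ctx);			/* freed in one place */
}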
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Since the hook call for the old fpm is already done inside `rib_uninstall_kernel()`,
this call site outside of it is redundant. Just remove it.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
A multipath route may have mixed nexthops of EVPN and IP unicast. Move the
EVPN flag to the nexthop to support such cases.
Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
Add support for setting the protodown reason code.
829eb208e8
These patches handle all our netlink code for setting the reason.
For protodown reason we only set `frr` as the reason externally
but internally we have more descriptive reasoning available via
`show interface IFNAME`. The kernel only provides a bitwidth of 32
that all userspace programs have to share so this makes the most sense.
Since this is new functionality, it needs to be added to the dplane
pthread instead. So these patches also move the protodown setting we
were doing before into the dplane pthread. For this, we abstract it a
bit more to make it a general interface LINK update dplane API. This
API can be expanded to support general link creation/updating when/if
someone ever adds that code.
We also move to a more common entrypoint for evpn-mh and zapi clients
like vrrpd. They both call common code now to set our internal flags
for protodown and protodown reason.
Also add debugging code for dumping netlink packets with
protodown/protodown_reason.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
FRR will crash when the re->type is ZEBRA_ROUTE_ALL and it
is inserted into the meta-queue. Let's just put some basic
code in place to prevent a crash from happening. No routing
protocol should be using ZEBRA_ROUTE_ALL as a value but
bugs do happen. Let's just accept the weird route type
gracefully and move on.
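A minimal sketch of the kind of guard meant here (placement and message are illustrative):

/* Illustrative guard: don't index the sub-queue map with an impossible
 * ZEBRA_ROUTE_ALL; warn and fall back to a sane type instead of crashing. */
uint8_t route_type = re->type;

if (route_type == ZEBRA_ROUTE_ALL) {
	zlog_warn("%s: route %pFX has unexpected type ZEBRA_ROUTE_ALL",
		  __func__, &rn->p);
	route_type = ZEBRA_ROUTE_OTHER; /* accept it gracefully */
}

qindex = route_info[route_type].meta_q_map;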
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the dataplane to query and read interface NETCONF data;
add netconf-oriented data to the dplane context object, and
add accessors for it. Add handler for incoming update
processing.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>