If an operator issues a series of route-map deletions and
re-adds, *and* this triggers the hash table to realloc and
grow to a larger size, then subsequent route-map operations
will run against a corrupted hash table.
Why?
Effectively the route-map code inserts each
route-map <NAME> into a hash for storage. Upon
deletion there is a concept of delayed processing,
so the route-map code sets a `to-be-processed`
bit and marks the route-map for deletion. This is
one entry in the hash table. Then, if the operator
recreates the route-map, FRR adds another hash
entry. If another deletion happens, there are
now two deletion entries that are indistinguishable
from a hash perspective.
FRR stores the name of the deleted route-map so that
any delayed processing can look up the name and only process
the peers that are related to that route-map name.
This is desirable because, in say BGP, we do not want
to reprocess all the peers that do not use the route-map.
Solution:
The whole purpose of delaying the deletion and storing
the route-map name is to allow the using protocol
to process the route-map at a later time
while still retaining the route-map name (for more efficient
reprocessing). The problem exists because FRR keeps
multiple copies of deletion events that are indistinguishable
from each other, wreaking havoc on the hash.
The truth is that we only need to keep one copy of the
route-map in the table. If the series of events is:
a) delete (schedule processing)
b) add (reschedule processing)
then the current code ends up processing the route-map twice,
when in this case we really just need to reprocess everything
with the new route-map.
If the series of events is:
a) delete (schedule processing)
b) add (reschedule)
c) delete (reschedule)
d) add (reschedule)
then all that is really needed is for FRR to keep the last
map in the series and ensure that FRR knows it still needs
to process the route-map. So in the creation processing,
if the hash already has an entry for this map, the route-map code
knows that a deletion event is pending; mark the new route-map for
later processing if the old one was so marked. Also, in the lookup
function, do not return a map if the map found has been deleted.
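
A minimal sketch of that idea (struct rmap_entry, rmap_hash_find() and
rmap_hash_insert() are illustrative stand-ins, not the actual FRR
symbols):

#include <stdbool.h>

/* Illustrative entry; the real FRR structure differs. */
struct rmap_entry {
	char name[64];
	bool deleted;		/* delayed deletion is pending */
	bool to_be_processed;	/* protocols still need to react */
};

/* Hypothetical stand-ins for the hash helpers. */
struct rmap_entry *rmap_hash_find(const char *name);
struct rmap_entry *rmap_hash_insert(const char *name);

/* Creation: if an entry already exists for this name, it records a
 * pending deletion, so reuse it instead of inserting a second,
 * indistinguishable entry. */
struct rmap_entry *rmap_create(const char *name)
{
	struct rmap_entry *e = rmap_hash_find(name);

	if (e) {
		/* The deletion already set to_be_processed; just turn
		 * the entry back into a live map. */
		e->deleted = false;
		return e;
	}
	return rmap_hash_insert(name);
}

/* Lookup: never return a map that only exists to remember a pending
 * deletion. */
struct rmap_entry *rmap_lookup(const char *name)
{
	struct rmap_entry *e = rmap_hash_find(name);

	return (e && e->deleted) ? NULL : e;
}
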
Fixes: #10708
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
State-only and configuration presence-containers need to be treated
differently when iterating over YANG operational data. Currently the
get_elem() callback is used to determine whether a state-only
p-container exists, and configuration p-containers are assumed to
always exist, which is clearly wrong. Fix this by checking the running
configuration to know whether a rw p-container exists or not.
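
A minimal sketch of the distinction (pcontainer_exists() and
nb_oper_get_elem() are illustrative names, not the actual northbound
code; the sketch assumes FRR's lib/northbound.h and lib/yang.h):

/* Decide whether a presence container "exists" while walking
 * operational data. */
static bool pcontainer_exists(const struct nb_node *nb_node,
			      const char *xpath)
{
	if (CHECK_FLAG(nb_node->snode->flags, LYS_CONFIG_R))
		/* State-only (ro) p-container: only the daemon knows,
		 * so ask it through its get_elem() callback
		 * (nb_oper_get_elem() is a hypothetical wrapper). */
		return nb_oper_get_elem(nb_node, xpath) != NULL;

	/* Configuration (rw) p-container: its existence is defined by
	 * the running configuration, not by a callback. */
	return yang_dnode_exists(running_config->dnode, xpath);
}
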
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
We can use PIMADDR_ANY instead of INADDR_NONE to initialize rp->rpf_addr
when there is no RP configured for group_all.
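
For illustration, the initialization then becomes something like the
following (the surrounding pim_rp.c context is assumed):

	/* No RP configured for group_all: use the address-family
	 * neutral PIMADDR_ANY rather than the IPv4-only INADDR_NONE. */
	rp->rpf_addr = PIMADDR_ANY;
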
Signed-off-by: sarita patra <saritap@vmware.com>
On FreeBSD I have noticed that subsequent calls to clock_gettime(..)
can return an after time that is before an earlier call's value.
This in turn generates CPU HOGs because the subtraction
wraps into extremely large numbers:
2022/02/28 20:12:58 SHARP: [PTDQA-70FG5] start: 35.741981000 now: 35.740581000
2022/02/28 20:12:58 SHARP: [XK9YH-ZD8FA][EC 100663313] CPU HOG: task zclient_read (800744240) ran for 0ms (cpu time 18446744073709550ms)
(Please note I added the first line of debug to figure this issue out).
I have been asked to open a FreeBSD bug report and have done so.
In the meantime I think it is important that FRR does
not generate bogus CPU HOGs on FreeBSD (especially since
this may or may not be easily fixed, and FRR has no control
over which version of the operating system operators are
going to be running with FRR).
So, add a bit of specialized code that checks whether
the after time on FreeBSD is before the now time in
thread_consumed_time() and does some quick manipulation
to avoid this issue.
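
A minimal sketch of that guard (timespec-based, as with
CLOCK_THREAD_CPUTIME_ID; the actual fields compared inside
thread_consumed_time() may differ):

#include <time.h>

/* Clamp "now" so it can never be earlier than "start"; otherwise the
 * later subtraction underflows into an enormous unsigned value and a
 * bogus CPU HOG warning is emitted. */
static void clamp_backwards_time(struct timespec *now,
				 const struct timespec *start)
{
#ifdef __FreeBSD__
	if (now->tv_sec < start->tv_sec ||
	    (now->tv_sec == start->tv_sec && now->tv_nsec < start->tv_nsec))
		*now = *start;
#endif
}
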
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When an end operator is doing cross-vrf imports in bgp:
router bgp 3239 vrf FOO
 address-family ipv4 uni
  import vrf BAR
!
and zebra has this configuration:
vrf FOO
 ip protocol bgp route-map EVA
!
The current code in zebra_nhg.c was looking up the vrf of the
nexthop and attempting to apply the ip protocol route-map from
that vrf. For most people the nexthop vrf and the re's vrf are one
and the same, so they never see a problem.
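
The implied fix is to key the route-map lookup off the vrf of the
route entry itself; a minimal sketch (proto_routemap_allows() is an
illustrative stand-in for zebra's ip-protocol route-map filtering):

/* Sketch only; the real zebra_nhg.c code differs. */
static bool nexthop_passes_proto_routemap(const struct route_entry *re,
					  const struct nexthop *nexthop)
{
	/* Look up the "ip protocol ... route-map" configuration in the
	 * vrf the route lives in (FOO above), not in the vrf the
	 * nexthop resolves in (BAR above). */
	return proto_routemap_allows(re->vrf_id, re, nexthop);
}
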
Signed-off-by: Donald Sharp <sharpd@nvidia.com>