The `show zebra dplane providers` command was omitting the input and
output queues to the dplane itself. It would be nice to have this
insight as well.
New output:
```
r1# show zebra dplane providers
dataplane Incoming Queue from Zebra: 100
Zebra dataplane providers:
Kernel (1): in: 6, q: 0, q_max: 3, out: 6, q: 14, q_max: 3
dplane_fpm_nl (2): in: 6, q: 10, q_max: 3, out: 6, q: 0, q_max: 3
dataplane Outgoing Queue to Zebra: 43
r1#
```
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The dplane providers have a concept of input queues
and output queues. These queues are chained together
during normal operation. The code in zebra also has
a feedback mechanism where the MetaQ will not run when
the first input queue is backed up. Having the dplane_fpm_nl
code grab all contexts when it is backed up prevents
this system from behaving appropriately.
Modify the code to not add to the dplane_fpm_nl's internal
queue when it is already full. This will allow the backpressure
to work appropriately in zebra proper.
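For illustration, a minimal sketch of the idea in plain C (hypothetical
`ctx`/`ctx_queue` types and helpers, not the actual zebra or
dplane_fpm_nl code): only move contexts into the internal queue while it
has room, and leave the rest on the provider's input queue so the
backlog stays visible to zebra.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-ins for the dplane context and its queues. */
struct ctx {
        int id;
        struct ctx *next;
};

struct ctx_queue {
        struct ctx *head, *tail;
        size_t count;
        size_t limit;   /* cap enforced by fpm_pull_contexts() */
};

static void queue_push(struct ctx_queue *q, struct ctx *c)
{
        c->next = NULL;
        if (q->tail)
                q->tail->next = c;
        else
                q->head = c;
        q->tail = c;
        q->count++;
}

static struct ctx *queue_pop(struct ctx_queue *q)
{
        struct ctx *c = q->head;

        if (!c)
                return NULL;
        q->head = c->next;
        if (!q->head)
                q->tail = NULL;
        q->count--;
        return c;
}

/*
 * Drain the provider's input queue into the plugin's internal queue,
 * but stop once the internal queue is full.  Contexts left behind keep
 * the input queue "backed up", which is the signal zebra's MetaQ
 * feedback mechanism uses to pause further route processing.
 */
static void fpm_pull_contexts(struct ctx_queue *in, struct ctx_queue *internal)
{
        while (internal->count < internal->limit && in->head)
                queue_push(internal, queue_pop(in));
}

int main(void)
{
        struct ctx ctxs[4];
        struct ctx_queue in = { 0 }, internal = { 0 };

        internal.limit = 2;
        for (int i = 0; i < 4; i++) {
                ctxs[i].id = i;
                queue_push(&in, &ctxs[i]);
        }

        fpm_pull_contexts(&in, &internal);
        /* Two contexts moved, two left behind to exert backpressure. */
        printf("internal: %zu, still on input queue: %zu\n",
               internal.count, in.count);
        return 0;
}
```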
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently when the dplane_thread_loop is run, it moves contexts
from the dg_update_list and puts the contexts on the input queue
of the first provider. This provider is given a chance to run
and then the items on the output queue are pulled off and placed
on the input queue of the next provider. Rinse/Repeat down through
the entire list of providers.

Now imagine that we have a list of multiple providers and the last
provider is getting backed up. Contexts will end up sticking in the
input queue of the `slow` provider, and this backlog can grow without
bound. This is a real problem when an interface is flapping and an
upper-level protocol is sending a continuous stream of route updates
to reflect the change in ECMP: you can end up with a very large
backlog of contexts, zebra can grow to a very large memory size, and
on restricted systems you can run out of memory.

Fortunately for us, the MetaQ already participates in this process
by not doing more route processing until the dg_update_list
goes below the working limit of dg_updates_per_cycle. Thus, if FRR
modifies the behavior of this loop to not move more contexts onto a
provider's input queue when either the input queue or the output queue
of the next provider has reached this limit, FRR will naturally start
handling backpressure for the dplane context system and memory will
not grow out of control.
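As a sketch of the resulting loop logic (hypothetical names and a
made-up limit; the real code lives in zebra's dplane_thread_loop and
uses dg_updates_per_cycle): before handing contexts to the next
provider, check how much work is already sitting on its queues and
refuse to move more once either side hits the limit.

```c
#include <stdio.h>
#include <stddef.h>

#define WORK_LIMIT 200  /* stand-in for dg_updates_per_cycle */

/* Hypothetical per-provider queue counters. */
struct provider {
        const char *name;
        size_t in_count;        /* contexts waiting to be processed */
        size_t out_count;       /* contexts processed, awaiting pickup */
};

/*
 * How many contexts may be moved to this provider right now.
 * Zero means "leave them where they are": the backlog then stays
 * visible upstream, dg_update_list stays above the working limit,
 * and the MetaQ stops queueing more route work.
 */
static size_t provider_room(const struct provider *next)
{
        if (next->in_count >= WORK_LIMIT || next->out_count >= WORK_LIMIT)
                return 0;
        return WORK_LIMIT - next->in_count;
}

int main(void)
{
        struct provider fast = { "kernel", 5, 0 };
        struct provider slow = { "dplane_fpm_nl", 10, WORK_LIMIT };

        printf("%s: room for %zu contexts\n", fast.name, provider_room(&fast));
        printf("%s: room for %zu contexts\n", slow.name, provider_room(&slow));
        return 0;
}
```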
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The ctx queue data structures already have a counter
associated with them. Let's just use that instead.
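A tiny sketch of the cleanup (generic types, not the zebra structures):
when the queue type already maintains its own length, report that
instead of keeping a second counter in step with every
enqueue/dequeue.

```c
#include <stddef.h>

/* Hypothetical context queue whose push/pop keep `count` up to date. */
struct ctx_queue {
        void *head, *tail;
        size_t count;
};

/* Before: a separately maintained counter that has to be incremented
 * and decremented in lockstep with the queue operations.
 * After: just read the counter the queue already carries. */
static inline size_t ctx_queue_count(const struct ctx_queue *q)
{
        return q->count;
}
```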
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
```
ton# sh ip bgp peer-group
BGP peer-group pg-a
Peer-group type is auto
Configured address-families: IPv4 Unicast;
BGP peer-group pg-e, remote AS 0
Peer-group type is external
Configured address-families: IPv4 Unicast;
BGP peer-group pg-i, remote AS 65001
Peer-group type is internal
Configured address-families: IPv4 Unicast;
ton#
```
`auto` should be handled accordingly.
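A minimal sketch of the intended behavior (hypothetical helper, not the
bgpd source): when the peer-group's remote-as is configured as `auto`,
report the type as "auto" instead of deriving internal/external from
the AS numbers.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

/*
 * Hypothetical peer-group summary; bgpd's real peer/peer-group
 * structures and flags are different.
 */
struct pg_info {
        bool remote_as_auto;    /* configured with "remote-as auto" */
        uint32_t remote_as;     /* 0 when unset */
        uint32_t local_as;
};

static const char *peer_group_type_str(const struct pg_info *pg)
{
        if (pg->remote_as_auto)
                return "auto";
        return (pg->remote_as == pg->local_as) ? "internal" : "external";
}

int main(void)
{
        struct pg_info pg_a = { .remote_as_auto = true, .local_as = 65001 };
        struct pg_info pg_i = { .remote_as = 65001, .local_as = 65001 };
        struct pg_info pg_e = { .remote_as = 65010, .local_as = 65001 };

        printf("Peer-group type is %s\n", peer_group_type_str(&pg_a));
        printf("Peer-group type is %s\n", peer_group_type_str(&pg_i));
        printf("Peer-group type is %s\n", peer_group_type_str(&pg_e));
        return 0;
}
```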
Fixes: 0dfe25697f ("bgpd: Implement neighbor X remote-as auto")
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
In the near future, some daemons may only register SIDs. This may be
the case for the pathd daemon when creating SRv6 binding SIDs.
When a locator is deleted at the ZEBRA level, the daemon should have
an easy way to find out which SIDs to unregister.
This commit proposes to add the locator name to the SID_SRV6_NOTIFY
message whenever possible. The only case where the locator is not
present is when an allocation failure happens. In all other cases, the
notify API at the protocol level carries the locator name as an extra
parameter.
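As a rough sketch of what the extended notification could look like
(hypothetical message layout, buffer size, and handler, not the actual
FRR API): the locator name travels with the SID notification and is
left empty only in the allocation-failure case, so a SID-only daemon
can later match SIDs against a deleted locator.

```c
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define LOCATOR_NAME_SIZE 256   /* assumed buffer size */

enum sid_notify_status {
        SID_NOTIFY_ALLOCATED,
        SID_NOTIFY_ALLOC_FAILURE,
};

/* Hypothetical SID_SRV6_NOTIFY payload as seen by a protocol daemon. */
struct sid_notify {
        enum sid_notify_status status;
        struct in6_addr sid;
        /* Empty only when allocation failed and no locator is known. */
        char locator_name[LOCATOR_NAME_SIZE];
};

/*
 * A daemon such as pathd that only registers SIDs can remember the
 * locator each SID came from, then unregister every matching SID when
 * zebra later reports that the locator was deleted.
 */
static void handle_sid_notify(const struct sid_notify *n)
{
        char buf[INET6_ADDRSTRLEN];

        inet_ntop(AF_INET6, &n->sid, buf, sizeof(buf));
        if (n->status == SID_NOTIFY_ALLOC_FAILURE)
                printf("SID allocation failed (no locator name available)\n");
        else
                printf("SID %s belongs to locator %s\n", buf, n->locator_name);
}

int main(void)
{
        struct sid_notify n = { .status = SID_NOTIFY_ALLOCATED };

        inet_pton(AF_INET6, "2001:db8:1:1::100", &n.sid);
        strncpy(n.locator_name, "loc1", sizeof(n.locator_name) - 1);
        handle_sid_notify(&n);
        return 0;
}
```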
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>