Both the label manager and table manager zapi code send data requests via zapi
to zebra and then immediately listen for a response from zebra. The problem
is that the listening code throws away any zapi command that is not the one
it is looking for.
ISIS/OSPF and PIM all have synchronous abilities via zapi, which they
implement through a special zapi connection to zebra. BGP needs to follow
this model as well. Additionally, on the new zclient_sync connection that
should be created, a once-a-second timer should wake up and read any data on
the socket to prevent too much data from accumulating in the socket.
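A minimal sketch of the once-a-second drain described above, assuming the
post-rename event API; the handler name and the discard-style drain are
hypothetical, not bgpd's actual code:
```c
static void bgp_zebra_sync_drain(struct event *event)
{
	struct zclient *zc = EVENT_ARG(event);
	char buf[ZEBRA_MAX_PACKET_SIZ];

	/* non-blocking read: stop as soon as the sync socket is empty */
	while (recv(zc->sock, buf, sizeof(buf), MSG_DONTWAIT) > 0)
		;

	/* re-arm the once-a-second timer */
	event_add_timer(bm->master, bgp_zebra_sync_drain, zc, 1, NULL);
}
```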
```
r3# sh bgp labelpool summary
Labelpool Summary
-----------------
Ledger: 3
InUse: 3
Requests: 0
LabelChunks: 1
Pending: 128
Reconnects: 1
r3# sh bgp labelpool inuse
Prefix Label
---------------------------
10.0.0.1/32 16
192.168.31.0/24 17
192.168.32.0/24 18
r3#
```
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
When advertising an mpls vpn entry with a new label,
the return traffic is redirected to the local machine,
but the MPLS traffic is dropped.
Add an MPLS entry to handle MPLS packets which have
the new label value. Traffic is swapped to the original
label value from the mpls vpn next-hop entry; then it is
sent to the resolved next-hop of the original next-hop
from the mpls vpn next-hop entry.
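For illustration only (hypothetical label values, same iproute2 notation as
used in later commits), the added swap entry is comparable to:
> ip -f mpls route add 105 as 104 via inet 192.0.2.45 dev r1-eth1
where 105 is the new local label and 104 is the original label from the
mpls vpn next-hop entry.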
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The label per nexthop attributes take 24 bytes per bgp path
entry on the AMD64 platform, and are only used for unicast paths.
The current patch-set introduces a similar attribute that
will be used only for l3vpn paths. To save some memory in the
bgp_path_info structure in the next commit, make the following changes.
Create an 'mplsvpn' union structure that includes either the
label per nexthop structs for ipv4 paths, or the l3vpn path
structures. The 'label_nexthop_cache' and the 'label_nh_thread'
attributes of the 'bgp_path_info' structure are moved into a
union under a new structure called 'bgp_mplsvpn_label_nh_blnc'.
The flags attribute of 'bgp_path_info' is increased from 16 bits
to 32 bits, and the BGP_PATH_MPLSVPN_LABEL_NH flag is added to
indicate how the 'mplsvpn' union is being used.
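A hedged sketch of the resulting layout; only the names quoted above come
from the series, while the field types and surrounding members are
illustrative:
```c
struct bgp_path_info {
	/* ... existing members ... */
	uint32_t flags;		/* widened from 16 to 32 bits */

	union {
		/* unicast (ipv4) paths: label per nexthop state */
		struct bgp_mplsvpn_label_nh_blnc {
			struct bgp_label_per_nexthop_cache *label_nexthop_cache;
			struct event *label_nh_thread;
		} blnc;
		/* l3vpn paths: their own state is added in the next commit */
	} mplsvpn;
};
```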
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Redistributing mpls vpn prefixes with a new label
requires picking unused MPLS local labels.
Today, there is an MPLS label pool owned
by the zebra daemon. BGP gets a chunk of labels
that is shared among the various uses within the BGP
daemon. A label type attribute is used whenever
BGP needs a new label value.
The 'LP_TYPE_BGP_L3VPN_BIND' label type will be used
to allocate the MPLS labels that will be bound to
the original next-hop and label value of an L3VPN
update.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The following command is made available to list the labels
allocated per-nexthop, along with the paths registered to it.
> # show bgp vrf vrf1 label-nexthop
> Current BGP label nexthop cache for IP, VRF vrf1
> 192.0.2.11, label 20 #paths 3
> if r1-eth1
> Last update: Mon Jan 16 18:52:11 2023
> 192.0.2.12, label 17 #paths 2
> if r1-eth1
> Last update: Mon Jan 16 18:52:08 2023
> 192.0.2.14, label 18 #paths 1
> if r1-eth1
> Last update: Mon Jan 16 18:52:07 2023
> 192.168.255.13, label 19 #paths 1
> if r1-eth2
> Last update: Mon Jan 16 18:52:10 2023
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
BGP MPLSVPN next hop label allocation was using only the next-hop
IP address. As MPLSVPN contexts rely on bnc contexts, the real
nexthop interface is known, and the LSP entry to install can apply
to the specific interface. To illustrate, the BGP service is able
to handle the following two iproute2 commands:
> ip -f mpls route add 105 via inet 192.0.2.45 dev r1-eth1
> ip -f mpls route add 105 via inet 192.0.2.46 dev r1-eth2
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This commit introduces the necessary structs and APIs to
create the cache entries that store the label information
associated with a given nexthop.
A hash table is created in each BGP instance for both
AFIs, IPv4 and IPv6, and is initialised.
An API is added to look up and/or create an entry for a
given nexthop.
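A minimal sketch of the look-up/create call, assuming FRR's generic
hash_get() helper; the structure, table and allocator names are
illustrative:
```c
struct bgp_label_per_nexthop_cache ref = {0};
struct bgp_label_per_nexthop_cache *blnc;

prefix_copy(&ref.nexthop, nexthop);
blnc = hash_get(bgp->mpls_labels_per_nexthop[afi], &ref,
		bgp_label_per_nexthop_alloc); /* creates the entry if absent */
```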
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
A new label type is introduced: LP_TYPE_NEXTHOP. This new
label type will be used in subsequent commits to allocate labels
for a specific nexthop IP address.
The commit also adds vty and json output to display
the new label type and the associated label values.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Effectively a massive search and replace of
`struct thread` to `struct event`. Using the
term `thread` gives people the impression that
this event system is a pthread, when it is not.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Rather than running selected source files through the preprocessor and a
bunch of perl regex'ing to get the list of all DEFUNs, use the data
collected in frr.xref.
This not only eliminates issues we've been having with preprocessor
failures due to nonexistent header files, but is also much faster.
Where extract.pl would take 5s, this now finishes in 0.2s. And since
this is a non-parallelizable build step towards the end of the build
(dependent on a lot of other things being done already), the speedup is
actually noticeable.
Also files containing CLI no longer need to be listed in `vtysh_scan`
since the .xref data covers everything. `#ifndef VTYSH_EXTRACT_PL`
checks are equally obsolete.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
- double the size of each new chunk request from zebra
- use bitfields to track label allocations in a chunk (see the sketch
  after this list)
- When allocating:
  - skip chunks with no free labels
  - search biggest chunks first
  - start search in chunk where last search ended
- Improve API documentation in comments (bgp_lp_get() and callback)
- Tweak formatting of "show bgp labelpool chunks"
- Add test features (compiled conditionally on BGP_LABELPOOL_ENABLE_TESTS)
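A minimal sketch of the per-chunk bitfield tracking; the structure and
field names are illustrative, not the actual bgp_labelpool definitions:
```c
struct lp_chunk {
	uint32_t first;		/* first label covered by this chunk   */
	uint32_t last;		/* last label covered by this chunk    */
	uint32_t nfree;		/* lets the allocator skip full chunks */
	uint64_t allocated[8];	/* 1 bit per label, e.g. a 512-label chunk */
};

static bool chunk_label_in_use(const struct lp_chunk *c, uint32_t label)
{
	uint32_t idx = label - c->first;

	return c->allocated[idx / 64] & (1ULL << (idx % 64));
}
```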
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
Back when I put this together in 2015, ISO C11 was still reasonably new
and we couldn't require it just yet. Without ISO C11, there is no
"good" way (only bad hacks) to require a semicolon after a macro that
ends with a function definition. And if you added one anyway, you'd get
"spurious semicolon" warnings on some compilers...
With C11, `_Static_assert()` at the end of a macro will make it so that
the semicolon is properly required, consumed, and not warned about.
Consistently requiring semicolons after "file-level" macros matches
Linux kernel coding style and helps some editors against mis-syntax'ing
these macros.
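A small illustration of the pattern; the macro below is made up, not an
actual FRR macro:
```c
/* Ending the macro body with _Static_assert() means the use site must
 * supply the semicolon, and no "spurious semicolon" warning is emitted. */
#define DEFINE_HANDLER(name)                                             \
	static int name##_handler(void) { return 0; }                    \
	_Static_assert(1, "please add a semicolon after DEFINE_HANDLER")

DEFINE_HANDLER(foo);	/* the trailing semicolon is now required and consumed */
```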
Signed-off-by: David Lamparter <equinox@diac24.net>
The check of the return code for zclient_send_get_label_chunk is
reversed, and therefore the pending count does not get incremented
for each successful label chunk request.
This has the effect of requesting a 50-label chunk per label request
from BGP, i.e. we request 50 times the labels we require.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
To prepare for fixing an issue where labels do not get released back
to the labelpool when the route is deleted some refactoring is
necessary. There are 2 parts to this.
1. restructure the code to remove the circular nature of label
allocations via the labelpool and decouple the label type decision
from the notification of the FEC.
The code to notify the FEC association to zebra has been split out
into a separate function so that it can be called from the synchronous
path (for registration of index-based labels and de-registration of all
labels), and from the asynchronous path where we need to wait for a
callback from the labelpool code with a label allocation.
The decision about whether we are using an index-based label or an
allocated label is reflected in the state of the BGP_NODE_LABEL_REQUESTED
flag so the checks on the path_info in the labelpool callback code are
no longer required.
2. change the owner of a labelpool-allocated label from the path info
structure to the bgp_dest structure. This allows labels to be released
(in a subsequent commit) when the owner (bgp_dest) goes away.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
When the path info is queued on the work queue, it is protected by a
lock to avoid the rug being pulled out from under it while it resides
on the queue. Add an unlock in the error case where we do not queue
the reference to the work queue.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
The `enum zclient_send_status` usage needs to be extended
throughout the code base to adopt the new states and
to fix up places where we tested against the return
value being non-zero.
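For illustration, the states and the kind of check this implies; the
enumerator values shown here are an assumption about lib/zclient.h:
```c
enum zclient_send_status {
	ZCLIENT_SEND_FAILURE = -1,	/* sending failed                       */
	ZCLIENT_SEND_SUCCESS = 0,	/* message fully written                */
	ZCLIENT_SEND_BUFFERED = 1,	/* queued; sent when socket is writable */
};

/* A plain "!= 0" test would wrongly treat BUFFERED as an error, so callers
 * should compare against the enumerators instead: */
if (zclient_send_message(zclient) == ZCLIENT_SEND_FAILURE)
	return -1;
```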
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
For SRGB, we need to support chunk requests starting at a
specific point in the label space, rather than just asking
for any sufficiently large chunk. To this end, we extend
the label manager API to request a chunk with a base value;
if the base is set to 0, the label manager will behave as it
currently does, i.e. fetching the first free chunk big enough
to satisfy the request.
Update all the existing calls that get chunks from the label
manager so that they use MPLS_LABEL_BASE_ANY as the base
for the requested chunk.
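A hedged sketch of the two request styles, assuming the base argument is
simply appended to zclient_send_get_label_chunk(); the exact prototype and
the SRGB values are illustrative:
```c
/* previous behaviour: any free chunk of 50 labels */
zclient_send_get_label_chunk(zclient, 0 /* keep */, 50, MPLS_LABEL_BASE_ANY);

/* SRGB-style request: a 1000-label chunk starting exactly at 16000 */
zclient_send_get_label_chunk(zclient, 1 /* keep */, 1000, 16000);
```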
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Under some conditions, the callback to get a label for
an LU bgp path could be called after the path had already
been freed. In this case we would be reading garbage
and could potentially crash. Lock the path info before
queueing the callback, and unlock as the first step
of the callback, exiting gracefully if the path info
is now NULL.
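A simplified sketch of that ordering; prototypes and names may differ
slightly from bgpd's sources:
```c
/* Callback: drop the queue-time reference first; if that frees the path,
 * bail out instead of touching freed memory. */
static int bgp_reg_for_label_callback(mpls_label_t label, void *labelid,
				      bool allocated)
{
	struct bgp_path_info *pi = labelid;

	pi = bgp_path_info_unlock(pi);
	if (!pi)
		return 0;	/* path was freed while the request was queued */

	/* ... safe to use pi and the allocated label from here on ... */
	return 0;
}

/* Request site (illustrative): hold a reference while the request is queued. */
static void request_label_for_path(struct bgp_path_info *pi)
{
	bgp_path_info_lock(pi);
	bgp_lp_get(LP_TYPE_BGP_LU, pi, bgp_reg_for_label_callback);
}
```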
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Again, the FIFO_* stuff in lib/fifo.h is no different from a simple
unsorted list. Just use DECLARE_LIST here so we can get rid of FIFO_*.
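A hedged sketch of the typesafe-list replacement from lib/typesafe.h; the
item struct and its members are illustrative:
```c
PREDECL_LIST(lp_fifo);

struct lp_fifo {
	struct lp_fifo_item fifo;	/* linkage used by the generated list */
	struct lp_lcb lcb;		/* payload carried by each entry       */
};

DECLARE_LIST(lp_fifo, struct lp_fifo, fifo);

/* usage: lp_fifo_add_tail(&requests, item); item = lp_fifo_pop(&requests); */
```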
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Corrections so that the BGP daemon can work with the label manager properly
through a label-manager proxy. Details:
- Correction so the BGP daemon behind a proxy label manager gets the range
correctly (-I added to the BGP daemon, to set the daemon instance id)
- For the BGP case, added an asynchronous label manager connect command so
the labels get recycled in case of a BGP daemon reconnection. With this,
BGPd and LDPd would behave similarly.
Signed-off-by: F. Aragon <paco@voltanet.io>
There is no need to check for failure of an ALLOC call,
as any allocation failure will result in an assert.
So we can safely remove all of this code.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
MPLS label pool backed by allocations from the zebra label manager.
A caller requests a label (e.g., in support of an "auto" label
specification in the CLI) via lp_get(), supplying a unique ID and
a callback function. The callback function is invoked at a later
time with the unique ID and a label value to inform the requestor
of the assigned label.
Requestors may release their labels back to the pool via lp_release().
The label pool is stocked with labels allocated by the zebra label
manager. The interaction with zebra is asynchronous so that bgpd
is not blocked while awaiting a label allocation from zebra.
The label pool implementation allows for bgpd operation before (or
without) zebra, and gracefully handles loss and reconnection of
zebra. Of course, before initial connection with zebra, no labels
are assigned to requestors. If the zebra connection is lost and
regained, callbacks to requestors will invalidate old assignments
and then assign new labels.
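A hedged usage sketch of the request/release pattern described above; the
exact prototypes, the label-type constant and the requestor structure are
illustrative:
```c
/* Invoked later with the assigned label; 'allocated' is false when an
 * earlier assignment is being invalidated, e.g. after the zebra
 * connection was lost and regained. */
static int my_label_callback(mpls_label_t label, void *labelid, bool allocated)
{
	struct my_requestor *req = labelid;

	if (!allocated) {
		req->label = MPLS_INVALID_LABEL;	/* old assignment is void */
		return 0;
	}
	req->label = label;	/* record the newly assigned label */
	return 0;
}

/* request: the answer arrives asynchronously through the callback */
lp_get(LP_TYPE_VRF, req, my_label_callback);

/* when done, hand the label back to the pool */
lp_release(LP_TYPE_VRF, req, req->label);
```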
Signed-off-by: G. Paul Ziemba <paulz@labn.net>