Currently the transition metric style is redundant because isisd will
always read both reachability TLVs regardless of the configured
metric style. Correct this by only considering the TLVs that match our
configuration.
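A minimal standalone sketch of the check, using simplified stand-in types
and TLV constants rather than isisd's actual structures:
```
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for illustration only; isisd uses its own area/TLV types. */
#define TLV_OLDSTYLE_REACH 2   /* narrow metrics */
#define TLV_EXTENDED_REACH 22  /* wide metrics */

struct metric_style {
	bool oldmetric; /* old-style metrics configured */
	bool newmetric; /* wide metrics configured */
};

/* With "metric-style transition" both flags are set and both TLVs are
 * accepted; otherwise only the TLV matching the configuration is used. */
static bool reach_tlv_matches_config(const struct metric_style *cfg,
				     uint8_t tlv_type)
{
	switch (tlv_type) {
	case TLV_OLDSTYLE_REACH:
		return cfg->oldmetric;
	case TLV_EXTENDED_REACH:
		return cfg->newmetric;
	default:
		return false;
	}
}
```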
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Make sure that the order in which the pcep-related commands are
removed by frr-reload.py is the correct one, i.e., pce followed
by pce-config followed by pcc.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
On one hand, the default value for a peer preference was always being
displayed; on the other, frr-reload.py contained code that attempted
to add a default value to match this behavior and that incorrectly
overrode an explicitly configured preference. Fix this by removing that
code and making pathd behave like the other daemons in this respect,
i.e. not displaying the default value.
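A minimal sketch of the usual config-write pattern this adopts; the
function name and default value below are hypothetical placeholders, not
pathd's actual code:
```
#include <stdint.h>
#include <stdio.h>

/* Hypothetical default value, for illustration only. */
#define PCC_PEER_PREFERENCE_DEFAULT 255

static void pcc_peer_config_write(FILE *out, const char *peer_name,
				  uint32_t preference)
{
	/* Emit the knob only when it differs from the default, so that
	 * frr-reload.py never needs to synthesize a default itself. */
	if (preference == PCC_PEER_PREFERENCE_DEFAULT)
		fprintf(out, " peer %s\n", peer_name);
	else
		fprintf(out, " peer %s preference %u\n", peer_name,
			preference);
}
```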
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
When receiving interface and address notifications, the information can
be puzzling: the presence of an interface, for example, is not enough to
use it in a BFD session, simply because the interface may be in the
wrong VRF. Add VRF information to those traces.
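A small sketch of the kind of trace this adds, with stand-in types and a
plain printf instead of bfdd's logging:
```
#include <stdio.h>

/* Stand-in for the data carried by an interface notification. */
struct if_notification {
	const char *ifname;
	const char *vrf_name;
	unsigned int vrf_id;
};

static void trace_interface_notification(const struct if_notification *ifn)
{
	/* Printing the VRF makes it obvious when an interface exists but
	 * lives in a different VRF than the one the BFD session needs. */
	printf("interface %s (vrf %s, id %u)\n",
	       ifn->ifname, ifn->vrf_name, ifn->vrf_id);
}
```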
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
In a vrf-lite environment, all incoming BFD packets are received by the
same socket in the default namespace. The vrf id it reports is not
relevant and needs to be updated based on the interface on which the
traffic was received. If the traffic arrives on an interface belonging
to a separate VRF, update the vrf id accordingly.
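A sketch of how the ingress interface can be recovered from the
receiving socket's control data; the IP_PKTINFO use here illustrates the
approach and is not a copy of bfdd's code:
```
#define _GNU_SOURCE
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Return the ifindex the packet arrived on, or -1 if not present; the
 * caller can then map that interface to its VRF and override the vrf id
 * that the (default-namespace) socket would otherwise imply. */
static int ingress_ifindex(struct msghdr *msg)
{
	struct cmsghdr *cmsg;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL;
	     cmsg = CMSG_NXTHDR(msg, cmsg)) {
		if (cmsg->cmsg_level == IPPROTO_IP
		    && cmsg->cmsg_type == IP_PKTINFO) {
			struct in_pktinfo pktinfo;

			memcpy(&pktinfo, CMSG_DATA(cmsg), sizeof(pktinfo));
			return pktinfo.ipi_ifindex;
		}
	}
	return -1;
}
```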
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
VRF interface notifications and interface notifications are separate on
the ZAPI interface between the system (the zebra daemon) and the other
daemons (bfdd, for instance). In the case of bfdd, the initial code
waited for the VRF notification to create the socket. In a vrf-lite
world, however, we need to wait for the VRF interface to be present in
order to create the socket and bind it to that interface (this is the
usual way of working with vrf-lite).
In bfdd, the changes consist of delaying the socket creation, and then,
when an interface is created, checking the interface name instead of
checking the interface configuration.
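A minimal sketch of the changed check, with hypothetical names; the
point is that the interface-add event, matched by name against the VRF,
is what triggers creating and binding the socket:
```
#include <stdbool.h>
#include <string.h>

/* Per-VRF state, simplified. */
struct vrf_state {
	char vrf_name[16];
	int sock; /* -1 until the vrf interface shows up */
};

/* Called from the interface-add notification: if this is the interface
 * backing the VRF and no socket exists yet, it is time to create the
 * socket and bind it to that interface. */
static bool should_create_vrf_socket(const struct vrf_state *vrf,
				     const char *ifname)
{
	return vrf->sock == -1 && strcmp(ifname, vrf->vrf_name) == 0;
}
```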
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running in vrf-lite mode, a socket used in a VRF should be bound
to an interface belonging to that VRF. If no specific interface is
selected, the socket should be bound to the VRF interface itself, so
that outgoing packets have that VRF's routing rules applied.
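A minimal sketch of such a binding; it shows only the SO_BINDTODEVICE
part, while bfdd's real socket setup also handles IPv6, TTL, ports and
error reporting:
```
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a UDP socket bound to the given interface (the vrf interface
 * if no other interface was selected), so the kernel applies that vrf's
 * routing rules to outgoing packets. Returns -1 on error. */
static int vrf_bound_socket(const char *bind_ifname)
{
	int sock = socket(AF_INET, SOCK_DGRAM, 0);

	if (sock < 0)
		return -1;

	if (setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE, bind_ifname,
		       strlen(bind_ifname) + 1) < 0) {
		close(sock);
		return -1;
	}

	return sock;
}
```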
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The zebra route-map delay-timer value is a global value, not a per-VRF
setting. As such, we should only print it out once.
We are seeing this:
zebra route-map delay-timer 33
exit-vrf
zebra route-map delay-timer 33
This happens when two VRFs are configured.
Fix the code to only write it out for the default VRF.
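A sketch of the fix's shape; the function name is a placeholder, not
zebra's actual code:
```
#include <stdio.h>

#define VRF_DEFAULT 0

/* The delay timer is global, so only the default VRF's config write-out
 * emits it; the per-VRF sections skip it entirely. */
static void route_map_delay_config_write(FILE *out, unsigned int vrf_id,
					 unsigned int delay_timer)
{
	if (vrf_id != VRF_DEFAULT)
		return;

	fprintf(out, "zebra route-map delay-timer %u\n", delay_timer);
}
```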
Ticket: CM-32888
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In test_bgp_mutli_vrf_topo2.py we remove and then re-add the VRF
interfaces, and the test then immediately checked that the routes were
available. BGP needs time to reconverge; ensure that first.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
a) Add some useful commands
b) Remove `show error all`; it just dumps the error codes. If we
know the version we don't need it, and its output is rather
large.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add some missing commands (I am sure that there are more useful ones too).
Clean up to use the modern, non-deprecated syntax in case anyone runs
across this.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
These two debug messages are so verbose that they impact performance
when testing RLFA/TI-LFA on large-scale networks. Remove them since
they aren't really useful.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Always call vid2string() whenever necessary instead of trying to be
too clever and call it only once. The original assumption was that
"buf" only needed to be initialized when LFA debugging was enabled,
but we also need that buffer when logging one error message.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Add new RLFA topotest that tests all RLFA configuration knobs and
how isisd and ldpd react to various configuration changes that can
occur in the network.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Extend the existing SPF unit testing infrastructure so that it can
test RLFA as well.
These new unit tests are useful to test the RLFA PQ node
computation on several different network topologies in a timely
manner. Artificial LDP labels (starting from 50000) are used to
activate the computed RLFAs.
It's worth mentioning that the computed backup routing tables
contain both local LFAs and remote LFAs, as running RLFA separately
isn't possible.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Remote LFA (RFC 7490) is an extension to the base LFA mechanism
that uses dynamically determined tunnels to extend the IP-FRR
protection coverage.
RLFA is similar to TI-LFA in that it computes a post-convergence
SPT (with the protected interface pruned from the network topology)
and the P/Q spaces based on that SPT. There are a few differences
however:
* RLFAs can push at most one label, so the P/Q spaces need to
intersect otherwise the destination can't be protected (the
protection coverage is topology dependent).
* isisd needs to interface with ldpd to obtain the labels it needs to
create a tunnel to the PQ node. That interaction needs to be done
asynchronously to prevent blocking the daemon for too long. With
TI-LFA all required labels are already available in the LSPDB.
RLFA and TI-LFA have more similarities than differences though,
and thanks to that both features share a lot of code.
Limitations:
* Only RLFA link protection is implemented. The algorithm used
to find node-protecting RLFAs (RFC 8102) is too CPU intensive and
doesn't always work. Most vendors implement RLFA link protection
only.
* RFC 7490 says it should be a local matter whether the repair path
selection policy favors LFA repairs over RLFA repairs. It might be
desirable, for instance, to prefer RLFAs that satisfy the downstream
condition over LFAs that don't. In this implementation, however,
RLFAs are only computed for destinations that can't be protected
by local LFAs.
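To illustrate the P/Q intersection constraint mentioned above, here is a
small self-contained sketch operating on plain arrays of node ids;
isisd's actual computation works on its SPF tree structures:
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Pick a PQ node as any node present in both the P-space (reachable from
 * the computing router without crossing the protected link) and the
 * Q-space (able to reach the destination without crossing it). */
static bool find_pq_node(const uint32_t *p_space, size_t p_count,
			 const uint32_t *q_space, size_t q_count,
			 uint32_t *pq_node)
{
	for (size_t i = 0; i < p_count; i++) {
		for (size_t j = 0; j < q_count; j++) {
			if (p_space[i] == q_space[j]) {
				*pq_node = p_space[i];
				return true;
			}
		}
	}

	/* No intersection: with a single label the destination can't be
	 * protected by RLFA (coverage is topology dependent). */
	return false;
}
```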
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
The "load-sharing" node is a boolean leaf that has a default
value. As such, it doesn't make sense to either create or delete
it. That node always exists in the configuration tree. Its value
should only be modified. Change the corresponding CLI wrapper
command to reflect that fact.
This commit doesn't introduce any change of behavior as the NB API
maps create/destroy edit operations to modify operations whenever
that makes sense. However it's better to not rely on that behavior
and always use the correct operations in the CLI commands.
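A sketch of the resulting CLI-wrapper shape. The enqueue function and
xpath below are stand-ins that only mirror the northbound CLI helpers
(nb_cli_enqueue_change() with NB_OP_MODIFY), not the literal code:
```
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins mirroring the shape of the northbound CLI API. */
enum nb_operation { NB_OP_CREATE, NB_OP_MODIFY, NB_OP_DESTROY };

static void enqueue_change(const char *xpath, enum nb_operation op,
			   const char *value)
{
	printf("xpath=%s op=%d value=%s\n", xpath, op, value ? value : "");
}

/* A boolean leaf with a default always exists in the configuration
 * tree, so both the plain and the "no" form of the command modify its
 * value; neither creates nor destroys the node. */
static void cli_load_sharing(bool negate)
{
	enqueue_change("./load-sharing", NB_OP_MODIFY,
		       negate ? "false" : "true");
}
```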
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Add an API that allows IGP client daemons to register/unregister
RLFAs with ldpd.
IGP daemons need to be able to query the LDP labels needed by RLFAs
and monitor label updates that might affect those RLFAs. This is
similar to the NHT mechanism used by bgpd to resolve and monitor
recursive nexthops.
This API is based on the following ZAPI opaque messages:
* LDP_RLFA_REGISTER: used by IGP daemons to register an RLFA with ldpd.
* LDP_RLFA_UNREGISTER_ALL: used by IGP daemons to unregister all of
their RLFAs with ldpd.
* LDP_RLFA_LABELS: used by ldpd to send RLFA labels to the registered
clients.
For each RLFA, ldpd needs to return the following labels:
* Outer label(s): the labels advertised by the adjacent routers to
reach the PQ node;
* Inner label: the label advertised by the PQ node to reach the RLFA
destination.
For the inner label, ldpd automatically establishes a targeted
neighborship with the PQ node if one doesn't already exist. For that
to work, the PQ node needs to be configured to accept targeted hello
messages. If that doesn't happen, ldpd doesn't send a response to
the IGP client daemon which in turn won't be able to activate the
previously computed RLFA.
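An illustrative sketch of what the registration and reply messages need
to carry; the real messages are encoded with the stream API and carried
as the ZAPI opaque types listed above, so the layout here is not the
actual wire format:
```
#include <netinet/in.h>
#include <stdint.h>

/* What an IGP daemon must tell ldpd when registering an RLFA. */
struct rlfa_register_sketch {
	struct in_addr pq_node;     /* targeted neighbor to establish */
	struct in_addr destination; /* protected destination prefix */
	uint8_t prefixlen;
};

/* What ldpd answers with once the labels are known. In the real reply
 * the outer label is returned per nexthop toward the PQ node. */
struct rlfa_labels_sketch {
	struct in_addr pq_node;
	uint32_t outer_label; /* advertised by the adjacent router */
	uint32_t inner_label; /* advertised by the PQ node for the dest */
};
```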
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Add some code to detect when a route received from zebra hasn't
changed and ignore the notification in that case, preventing ldpd
from sending unnecessary label mappings.
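A simplified sketch of the "unchanged" test; ldpd's real code compares
the zebra-provided nexthops against the ones it already has cached:
```
#include <stdbool.h>
#include <string.h>

/* Simplified cached route: just its resolved nexthops. */
struct route_sketch {
	int nexthop_count;
	struct {
		unsigned int ifindex;
		unsigned int gateway; /* IPv4, host order, for the sketch */
	} nexthops[16];
};

/* True when the notification carries no actual change, in which case it
 * can be ignored and no label mappings need to be re-advertised. */
static bool route_unchanged(const struct route_sketch *old_rt,
			    const struct route_sketch *new_rt)
{
	if (old_rt->nexthop_count != new_rt->nexthop_count)
		return false;

	return memcmp(old_rt->nexthops, new_rt->nexthops,
		      sizeof(old_rt->nexthops[0])
			      * (size_t)old_rt->nexthop_count) == 0;
}
```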
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
When adding/removing a peer's flag we need to make sure we FORCE
updates to avoid suppressing critical ones.
For example, entering `no neighbor x.x.x.x send-community large` would
suppress updates by default, and the other side would keep stale large
communities.
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
When the last area address is removed, resign if we were the DR.
This fixes an issue where, when the ISIS area address is changed, ISIS
fails to elect a new DR.
Signed-off-by: Karen Schoener <karen@voltanet.io>
This command has been added in the context of the PIM BSM
functionality. It clears the data structures holding BSR
information.
Co-authored-by: Sarita Patra <saritap@vmware.com>
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
In bgp_zebra_announce() we do work to apply the table map.
This is the same for both v4 and v6, but the code is duplicated
in both the v4 and v6 if statements. Move it outside the if
statements to reduce the duplication.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
BGP has accumulated some redundant checks in bgp_zebra_announce().
Reduce the multiple if statements and consolidate a bit.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The bgp_gr_functionality_topo1 test was shutting down an
interface on r2 and then trying to bring it up on r1.
Hijinx ensued.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
`lcommunity_gettoken` expects a space-delimited list of zero or more large
communities. `lcommunity_list_valid` can perform this check.
`lcommunity_list_valid` now validates large community lists more
accurately based on the following conditions: each quantity in a standard
BGP large community must:
1. Contain at least one digit
2. Fit within 4 octets
3. Contain only digits unless the lcommunity is "expanded"
4. Contain a valid regex if the lcommunity is "expanded"
Moreover, we validate that each large community contains exactly three
such values, separated by single colons.
One quirk of our validation which is worth documenting is:
```
bgp large-community-list standard test2 permit 1:c:3
bgp large-community-list expanded test1 permit 1:c:3
```
The first line will throw an error complaining about a "malformed
community-list value". The second line will be accepted because each
value is treated as a regex when matching large communities; it simply
will never match anything, so it's rather useless.
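A self-contained sketch of the standard (non-expanded) check described
above; bgpd's real lcommunity_list_valid() differs in detail, and
expanded lists are validated as regular expressions instead, which is
why the second line above is accepted:
```
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One value: at least one character, digits only, and it must fit in
 * four octets (32 bits). */
static bool lcommunity_value_valid(const char *token)
{
	if (*token == '\0')
		return false;
	for (const char *p = token; *p != '\0'; p++)
		if (!isdigit((unsigned char)*p))
			return false;
	return strtoull(token, NULL, 10) <= UINT32_MAX;
}

/* A standard large community: exactly three such values separated by
 * single colons, e.g. "4200000000:1:2". */
static bool lcommunity_std_valid(const char *str)
{
	char buf[128];
	int fields = 0;

	if (strlen(str) >= sizeof(buf))
		return false;
	strcpy(buf, str);

	for (char *tok = strtok(buf, ":"); tok != NULL;
	     tok = strtok(NULL, ":")) {
		if (!lcommunity_value_valid(tok))
			return false;
		fields++;
	}

	return fields == 3;
}
```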
Signed-off-by: Wesley Coakley <wcoakley@nvidia.com>