The 'show mpls table json' command displays the outgoing interface
name only when the nexthop type is either NEXTHOP_TYPE_IFINDEX or
NEXTHOP_TYPE_IPV6_IFINDEX. Add the interface name for the
NEXTHOP_TYPE_IPV4_IFINDEX nexthop type as well.
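A minimal sketch of the change, assuming the JSON dump resolves the
interface name with ifindex2ifname() and the lib/json wrappers; the
surrounding switch and the json_nhlfe/buf variables are illustrative
assumptions, not the exact zebra code:

  switch (nexthop->type) {
  case NEXTHOP_TYPE_IPV4:
  case NEXTHOP_TYPE_IPV4_IFINDEX:
          /* nexthop address, as before */
          inet_ntop(AF_INET, &nexthop->gate.ipv4, buf, sizeof(buf));
          json_object_string_add(json_nhlfe, "nexthop", buf);
          /* new: also report the outgoing interface for IPV4_IFINDEX */
          if (nexthop->type == NEXTHOP_TYPE_IPV4_IFINDEX)
                  json_object_string_add(
                          json_nhlfe, "interface",
                          ifindex2ifname(nexthop->ifindex,
                                         nexthop->vrf_id));
          break;
  case NEXTHOP_TYPE_IFINDEX:
          json_object_string_add(json_nhlfe, "interface",
                                 ifindex2ifname(nexthop->ifindex,
                                                nexthop->vrf_id));
          break;
  /* ... IPv6 cases already display the interface name ... */
  }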
Fixes: b78b820d46d6 ("MPLS: Display enhancements and JSON support")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This commit addresses the case where a service wants to install
an LSP entry to a next-hop located in a VRF instance. The incoming
MPLS packet is on the namespace and has to be directed to a nexthop
located behind an interface that sits in a specific VRF instance.
The following iproute2 commands illustrate the setup:
> ip link add vrf1 type vrf table 10
> ip link set dev vrf1 up
> ip link set dev eth0 master vrf1
> ip a a 192.0.2.1/24 dev eth0
> ip -f mpls route add 105 via inet 192.0.2.45 dev eth0
If a service uses the ZEBRA_MPLS_LABELS messages, then the LSP
message is ignored: from the zebra perspective, the MPLS entries are
visible via the 'show mpls table' command, but no LSP entry is
installed in the kernel.
The issue is in the nhlfe_nexthop_active_ipv[4/6] function: the
outgoing interface mentioned in the nexthop is searched in the
main VRF, whereas the interface is in a separate VRF. The interface
is not found, and the nhlfe to install is considered not active.
To address this issue, reuse the incoming vrf_id parameter transmitted
in the nexthop structure of the ZEBRA_MPLS_LABELS message. When
creating an NHLFE entry, this vrf_id is used instead of VRF_DEFAULT,
and the NHLFE entry can then be considered active.
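A minimal sketch of the lookup change, assuming the active check
resolves the interface with if_lookup_by_index(); the surrounding code
is illustrative, not the exact nhlfe_nexthop_active_ipv4() body:

  /* Before: the interface was always looked up in the default VRF,
   * so an interface enslaved to a VRF was never found:
   *   ifp = if_lookup_by_index(nexthop->ifindex, VRF_DEFAULT);
   * After: use the vrf_id carried in the nexthop received from the
   * ZEBRA_MPLS_LABELS message. */
  ifp = if_lookup_by_index(nexthop->ifindex, nexthop->vrf_id);
  if (!ifp || !if_is_operative(ifp))
          return 0;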
An alternative to reusing the vrf_id parameter in the MPLS network
context would have been to modify the search in the
nhlfe_nexthop_active..() functions to look for the ifindex in the zns.
However, this solution may not fit later, when the netns backend is
used.
Note that some changes have not been done yet, as the current behavior
is considered sufficient for now:
- The 'nhlfe_find' API: the assumption is made that only the Linux VRF
backend is used for now.
- The 'mpls_lsp_install()' API: it is currently used by the CLI command,
which does not handle the interface parameter, and by the SRTE service,
which always sends LSPs towards a nexthop located in VRF_DEFAULT.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The ZEBRA_MPLS_LABELS_[ADD/DELETE/REPLACE] messages may change an
LSP entry based on an incoming MPLS entry, followed by a given
next-hop.
Having a next hop with no label information inside is rejected by the
zebra layer. As an illustration, the ZAPI equivalent of the following
command would be rejected, because the next hop does not contain any
label information.
> ip -f mpls route add 105 via inet 192.0.2.45
At the same time, it is desirable to support such a configuration.
An attempt was made to configure the next-hop with an implicit-null
label instead, but that message is rejected by the kernel:
> ip -f mpls route add 104 as 3 via inet 192.0.2.45
> Error: Implicit NULL Label (3) can not be used in encapsulation.
The commit proposes to accept ZEBRA_MPLS_LABELS_[XX] messages with
a nexthop that does not contain any label information.
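As a rough sketch of what a client would send in this case, a
zapi_labels request whose single nexthop carries no label stack; the
field usage follows lib/zclient.h as understood here and should be
treated as an assumption rather than a reference:

  struct zapi_labels zl = {};

  zl.type = ZEBRA_LSP_STATIC;            /* illustrative LSP owner */
  zl.local_label = 105;                  /* incoming MPLS label */
  zl.nexthop_num = 1;
  zl.nexthops[0].type = NEXTHOP_TYPE_IPV4;
  zl.nexthops[0].vrf_id = VRF_DEFAULT;
  inet_pton(AF_INET, "192.0.2.45", &zl.nexthops[0].gate.ipv4);
  zl.nexthops[0].label_num = 0;          /* no label: previously rejected */

  zebra_send_mpls_labels(zclient, ZEBRA_MPLS_LABELS_ADD, &zl);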
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Added topotest cases for the modification of previously added
prefix lists.
Author: Vijay kumar Gupta <vijayg@vmware.com>
Signed-off-by: Samanvitha B Bhargav <bsamanvitha@vmware.com>
Problem statement:
Step-1:
pl1 - 10.10.10.10/24 with deny sequence 1
Step-2:
pl1 - 10.10.10.10/24 with permit sequence 2
Step-3:
pl1 - 20.20.20.20/24 with deny sequence 1
Now we end up deleting the entry for permit sequence 2 from the
route-map prefix table, which might blackhole the traffic.
RCA:
Whenever we have multiple prefix-list entries having the same prefix
but a different subnet range, a delete or replace of one of them
results in the deletion of the entry in the route-map prefix table.
Fix:
Skip deleting the prefix-list entry from the route-map prefix table
when multiple prefix-list entries share the same prefix.
Signed-off-by: Samanvitha B Bhargav <bsamanvitha@vmware.com>
The idea is to catch some PRs and rerun them in CI to avoid merging PRs
that are based on a stale base branch and cause issues afterwards.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Reuse the existing GR topotests since planned and unplanned GR should
behave the same.
The only difference is that for unplanned GR there's no preparation
phase. The OSPF daemons are just killed (SIGTERM) and restarted
normally. The tests then proceed to do the same checks they do for
planned GRs.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
This command makes unplanned GR more reliable by manipulating the
sending of Grace-LSAs and Hello packets for a certain amount of time,
increasing the chance that the neighboring routers are aware of
the ongoing graceful restart before resuming normal OSPF operation.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
In practical terms, unplanned GR refers to the act of recovering
from a software crash without affecting the forwarding plane.
Unplanned GR and Planned GR work virtually the same, except for the
following difference: on planned GR, the router sends the Grace-LSAs
*before* restarting, whereas in unplanned GR the router sends the
Grace-LSAs immediately *after* restarting.
For unplanned GR to work, ospf6d was modified to send a
ZEBRA_CLIENT_GR_CAPABILITIES message to zebra as soon as GR is
enabled. This causes zebra to freeze the OSPF routes in the RIB as
soon as the ospf6d daemon dies, for as long as the configured grace
period (the default is 120 seconds). Similarly, ospf6d now stores in
non-volatile memory that GR is enabled as soon as GR is configured.
Those two things are no longer done during the GR preparation phase,
which only happens for planned GRs.
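A rough sketch of that capability announcement, based on the zclient
GR API; the function name and the gr_info field access are assumptions
about the ospf6d code, not a verbatim excerpt:

  /* Announce GR capability to zebra as soon as GR is enabled, so zebra
   * preserves the OSPF routes if ospf6d later dies unexpectedly. */
  static void ospf6_gr_capability_announce(struct ospf6 *ospf6)
  {
          struct zapi_cap api = {};

          api.cap = ZEBRA_CLIENT_GR_CAPABILITIES;
          api.stale_removal_time = ospf6->gr_info.grace_period; /* 120s default */
          api.vrf_id = ospf6->vrf_id;

          zclient_capabilities_send(ZEBRA_CLIENT_GR_CAPABILITIES,
                                    zclient, &api);
  }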
Unplanned GR will only take effect when the daemon is killed
abruptly (e.g. SIGSEGV, SIGKILL), otherwise all OSPF routes will be
uninstalled while ospf6d is exiting. Once ospf6d starts, it will
check whether GR is enabled and enter GR mode if necessary,
sending Grace-LSAs out all operational interfaces.
One disadvantage of unplanned GR is that the neighboring routers
might time out their corresponding adjacencies if ospf6d takes too
long to come back up. This is especially the case when short dead
intervals are used (or BFD). For this and other reasons, planned
GR should be preferred whenever possible.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
In practical terms, unplanned GR refers to the act of recovering
from a software crash without affecting the forwarding plane.
Unplanned GR and Planned GR work virtually the same, except for the
following difference: on planned GR, the router sends the Grace-LSAs
*before* restarting, whereas in unplanned GR the router sends the
Grace-LSAs immediately *after* restarting.
For unplanned GR to work, ospfd was modified to send a
ZEBRA_CLIENT_GR_CAPABILITIES message to zebra as soon as GR is
enabled. This causes zebra to freeze the OSPF routes in the RIB as
soon as the ospfd daemon dies, for as long as the configured grace
period (the default is 120 seconds). Similarly, ospfd now stores in
non-volatile memory that GR is enabled as soon as GR is configured.
Those two things are no longer done during the GR preparation phase,
which only happens for planned GRs.
Unplanned GR will only take effect when the daemon is killed
abruptly (e.g. SIGSEGV, SIGKILL), otherwise all OSPF routes will
be uninstalled while ospfd is exiting. Once ospfd starts, it will
check whether GR is enabled and enter GR mode if necessary,
sending Grace-LSAs out all operational interfaces.
One disadvantage of unplanned GR is that the neighboring routers
might time out their corresponding adjacencies if ospfd takes too
long to come back up. This is especially the case when short dead
intervals are used (or BFD). For this and other reasons, planned
GR should be preferred whenever possible.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
test_frrscript is run from the `tests` directory and expects the sample
lua script `script1.lua` to be present in the `lib` directory. When the
package is built out of tree (which always happens when a debian
package is built) and scripting is enabled, the test fails because the lua
file is not present in the `tests/lib/` subdir of the _build_ directory.
Fix this by adding `script1.lua` as an extra dependency for
`test_frrscript`, and a recipe that copies the file from the source tree
to the build tree (note: it needs to be marked ".PHONY" because
otherwise `make` thinks that it already exists, in the source tree).
After this commit, the following command starts to work:
dpkg-buildpackage --build-profiles=pkg.frr.lua -b -uc
Signed-off-by: Eugene Crosser <crosser@average.org>
There is no guarantee that the vrfId is going to be the same across
tests, as the vrfId is chosen based upon the ifindex of the
vrf device. As such we should not be looking for the vrfId, but
for the correct vrf name.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The test ospf_metric_propagation is looking for a specific ifindex,
but the ifindex is not guaranteed to be any particular value by the
underlying OS. So let's remove this check. As a side note, I am seeing
tests fail in upstream CI because of this.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Imagine the following scenario:
1. Create a multihop eBGP peer and configure the TTL as 254 on both sides.
2. Call bgp_start and start an active connection.
   BGP will send an NHT register without the connected flag.
3. The function bgp_accept is called for the remote connection.
   BGP will create an accept peer as a passive connection with the default
   TTL (1), and then send an NHT register again, this time with the
   connected flag. This registration overrides the first one.
4. The active connection reaches the Established state first. In the
   function "peer_xfer_conn", the check on the "PEER_FLAG_CONFIG_NODE"
   flag of "from_peer->doppelganger" does not pass, so the incorrect
   NHT registration can never be repaired.
Then the bgp nexthop will be like this:
2000::60 invalid, #paths 0, peer 2000::60
Must be Connected
Last update: Thu May 4 09:35:14 2023
The routes from this peer can never be treated as having a valid
nexthop. This change fixes the error.
Signed-off-by: Jack.zhang <hanyu.zly@alibaba-inc.com>
Having tests for memory allocation success makes no sense
given what happens when frr fails to allocate memory.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Upon an interface up event, the dependent NHGs of the associated
singleton NHGs need to be reinstalled, as the kernel would have
deleted them if no route was referencing them.
Ticket:#3416477
Issue:3416477
Testing Done:
Flap interfaces which are part of a route's NHG; upon the interface up
event, the NHGs are resynced into the dplane.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Intermittently zebra and the kernel get out of sync when an interface
flaps and the adds/dels are in the same processing queue, causing zebra
to assume there is no change in the nexthop. Hence we need to reinstall
the nexthops and routes into the kernel to sync their states.
Upon an interface flap, the kernel would have deleted the NHGs
associated with that interface, while zebra retains them for 3 minutes
even though the upper layer protocol removes the nexthops (and the
associated NHG). As part of the interface address add, re-add the
singleton NHGs associated with the interface.
Ticket: #3173663
Issue: 3173663
Signed-off-by: Ashwini Reddy <ashred@nvidia.com>
Signed-off-by: Chirag Shah <chirag@nvidia.com>