Enable/fix using a munet.yaml config file for topology configuration.
Easier test writing.
This also uses the standard `frrinit.sh` to launch and teardown
FRR, so we actually test what most users use.
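For reference, a minimal munet.yaml topology is roughly of this shape (a hedged sketch; the node and network names are invented, and the authoritative schema is munet's own):
```yaml
# Hedged sketch of a minimal munet topology; names are invented.
topology:
  networks:
    - name: net0
  nodes:
    - name: r1
      connections: ["net0"]
    - name: r2
      connections: ["net0"]
```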
Signed-off-by: Christian Hopps <chopps@labn.net>
Ticket: #4060069
show bgp vrf afi unicast statistics json output is not returned in
JSON format for a non-existent vrf.
Fix:
JSON output is now formatted for the non-existent vrf case.
Commands supported:
```
show bgp vrf <VRFNAME> ipv4/ipv6 unicast statistics json
show bgp vrf <VRFNAME> l2vpn evpn statistics json
```
Before Fix:
```
leaf11#
leaf11# show bgp vrf test ipv4 unicast statistics json
View/Vrf test is unknown
leaf11#
leaf11#
leaf11# show bgp vrf test ipv6 unicast statistics json
View/Vrf test is unknown
leaf11#
leaf11#
leaf11# show bgp vrf default1 l2vpn evpn statistics json
View/Vrf default1 is unknown
leaf11#
```
After Fix:
```
leaf11#
leaf11# show bgp vrf test ipv4 unicast statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
leaf11#
leaf11# show bgp vrf test ipv6 unicast statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
leaf11# show bgp vrf default1 l2vpn evpn statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
```
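The fix likely follows the usual bgpd pattern for JSON-aware error paths; a hedged sketch (the variable names `bgp`, `uj`, and `name` are assumptions, not the literal patch):
```c
/* Hedged sketch: when the VRF lookup fails and JSON output was
 * requested, emit a JSON warning object instead of plain text. */
if (!bgp) {
	if (uj) {
		json_object *json = json_object_new_object();

		json_object_string_add(json, "warning",
				       "View/Vrf is unknown");
		vty_json(vty, json);
	} else
		vty_out(vty, "View/Vrf %s is unknown\n", name);
	return CMD_WARNING;
}
```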
Ticket: #4060069
Signed-off-by: Sindhu Parvathi Gopinathan <sgopinathan@nvidia.com>
When applying a route-map, we always set rmap_type to record who triggered
the action. PEER_RMAP_TYPE_IMPORT/EXPORT was dead code, and
PEER_RMAP_TYPE_NOSET was not used at all.
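For context, the rmap_type pattern around route-map application looks roughly like this (a hedged sketch; the exact flag and call site vary per direction):
```c
/* Hedged sketch: mark who is applying the route-map, apply it,
 * then clear the marker again. */
SET_FLAG(peer->rmap_type, PEER_RMAP_TYPE_IN);
ret = route_map_apply(rmap, p, path);
peer->rmap_type = 0;
```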
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
When trying to track down an MTYPE_TMP memory leak,
it's harder to search for it when ttable_dump is also
allocating from MTYPE_TMP. Let's just give
it its own memory type so that we can avoid
confusion in the future.
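A hedged sketch of the change (the actual memory-type name chosen may differ):
```c
/* Hedged sketch: give ttable its own memory type so any leak shows
 * up under a dedicated counter in `show memory` instead of MTYPE_TMP. */
DEFINE_MTYPE_STATIC(LIB, TTABLE, "ASCII table");

/* ... and inside ttable_dump(), allocate from it: */
buf = XCALLOC(MTYPE_TTABLE, bufsz);
```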
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
evpn has a concept of `local` tables where the evpn routes
are actually converted into underlying routes/neighbor
table entries (or vice versa). This local route
is then propagated to the global evpn l2vpn table and sent
to the peers. Certain evpn show commands
operate on the local table but make the output look
like the data has not been sent to the peer. This
is confusing for the operator. Modify the code
such that local tables show `Local BGP table not advertised`
in the place where the output talks about who has received
the data or not.
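A hedged sketch of the output change (the `local_table` flag and `show_advertised_to()` helper are hypothetical names):
```c
/* Hedged sketch: for routes in a local evpn table, print a note
 * instead of the usual advertised-to-peers line. */
if (local_table)
	vty_out(vty, "  Local BGP table not advertised\n");
else
	show_advertised_to(vty, path); /* hypothetical helper */
```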
Example:
torm11# show bgp l2vpn evpn route vni 1000 mac 8a:a1:cc:73:a3:ac ip 45.0.0.5
BGP routing table entry for [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5]
Paths: (2 available, best #2)
Local BGP table not advertised
Route [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5] VNI 1000
Imported from 192.168.100.18:2:[2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5], VNI 1000
65101 65005
192.168.100.18(leaf2) from leaf2(192.168.5.1) (192.168.100.14)
Origin IGP, valid, external
Extended Community: RT:65005:1000 ET:8
Last update: Thu Mar 21 14:29:04 2024
Route [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5] VNI 1000
Imported from 192.168.100.18:2:[2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5], VNI 1000
65101 65005
192.168.100.18(leaf1) from leaf1(192.168.1.1) (192.168.100.13)
Origin IGP, valid, external, bestpath-from-AS 65101, best (Router ID)
Extended Community: RT:65005:1000 ET:8
Last update: Thu Mar 21 14:29:04 2024
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Fix show nhrp shortcut json
Fixes: 87b9e98203 ("nhrpd: add json support to show nhrp vty commands")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Fully check the NHRP convergence after setting nhs1 down. Otherwise the
ping may pass because the previous shortcut is still present.
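A hedged sketch of what "fully check" could look like in the topotest (the JSON command output shape and the emptiness check are assumptions):
```python
# Hedged sketch: poll until the stale shortcut is gone before pinging.
from functools import partial
from lib import topotest

def _shortcut_gone(router):
    out = router.vtysh_cmd("show ip nhrp shortcut json", isjson=True)
    return None if not out else "shortcut still present"

test_func = partial(_shortcut_gone, nhc1)
_, result = topotest.run_and_expect(test_func, None, count=40, wait=1)
assert result is None, "NHRP shortcut still present on nhc1"
```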
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
After setting nhs1 down, the test checks that the nhc1 routing table
matches the routes in nhc1/nhrp_route.json. This is incorrect because it
checks that the NHRP route to nhs1 is still present, but it should have
disappeared.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Rename router variables in nhrp_redundancy to match the actual router names.
Cosmetic change to help debugging.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Currently the FRR code will receive both kernel and
connected routes that do not actually have an underlying
nexthop group at all. Zebra turns around and creates
a `matching` nexthop hash entry and installs it.
For connected routes, this will create 2 singleton
nexthops in the dplane per interface (v4 and v6).
For kernel routes it would just create 1 singleton
nexthop that might or might not be used.
This is bad because the dplane has a limited amount
of space available for nexthop entries, and if you
happen to have a large number of interfaces, then
all of a sudden you have 2x (# of interfaces) singleton
nexthops.
Let's modify the code to delay creation of these singleton
nexthops until they have been used by something else in the
system.
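Conceptually, the change is a guard like the following (a hedged sketch; the real patch lives in zebra's nhg handling and the helper name here is hypothetical):
```c
/* Hedged sketch: do not push an implicitly-created singleton to the
 * dplane until something actually references it. */
if (nhe_is_unreferenced_singleton(nhe))	/* hypothetical helper */
	return;	/* defer installation until first real use */

dplane_nexthop_add(nhe);
```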
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The expected prefix should be 5.5.5.0/24; otherwise the hosts behind NHRP
client 1 nhc1 (aka r5) are not reachable via NHRP.
The issue was not seen in the official FRR CI because the tests were
skipped: iptables was missing on the CI machines.
This solves issue 16690.
Fixes: https://github.com/FRRouting/frr/issues/16690
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
There is a code path that could theoretically get you
to a point where ng->nexthop is NULL.
Let's just make sure the SA system believes that
cannot happen anymore.
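Typically this kind of fix is just an explicit invariant, e.g. (hedged sketch):
```c
/* Hedged sketch: state the invariant explicitly so static analysis
 * knows ng->nexthop cannot be NULL past this point. */
assert(ng && ng->nexthop);
```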
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
A blackhole nexthop, according to the linux kernel,
can be v4 or v6. A v4 blackhole nexthop cannot be
used on a v6 route, but a v6 blackhole nexthop can
be used with a v4 route. Convert all blackhole
singleton nexthops to v6 and just use that.
Possibly reducing the number of active nexthops by 1.
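The conversion amounts to something like this (a hedged sketch; variable names are approximate):
```c
/* Hedged sketch: normalize every blackhole singleton to IPv6, since
 * the kernel accepts a v6 blackhole nexthop for v4 routes but not
 * the other way around. */
if (nexthop->type == NEXTHOP_TYPE_BLACKHOLE)
	afi = AFI_IP6;
```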
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Let's display the afi of the nexthop hash entry. Right
now it is impossible to tell the difference between v4 and
v6 nexthops, even though the distinction matters to the kernel.
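A hedged sketch of the show-output change (format string and field names are approximate):
```c
/* Hedged sketch: include the AFI when dumping a nexthop hash entry
 * so v4 and v6 singletons are distinguishable. */
vty_out(vty, "ID: %u (%s, %s)\n", nhe->id,
	zebra_route_string(nhe->type), afi2str(nhe->afi));
```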
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Recent commits moved the default retries to 60, but
the higher ecmp counts were overriding that to 40. Let's
make it 80.
Noticed this when I went looking at failures on 386 platforms
in our CI: route scale is timing out when deleting routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Test fails:
```
    test_func = partial(
        topotest.router_json_cmp,
        router,
        "show ip ospf vrf {0}-ospf-cust1 json".format(rname),
        expected,
    )
    _, diff = topotest.run_and_expect(test_func, None, count=10, wait=0.5)
    assertmsg = '"{}" JSON output mismatches'.format(rname)
>   assert diff is None, assertmsg
E   AssertionError: "r1" JSON output mismatches
E   assert Generated JSON diff error report:
E
E   > $->r1-ospf-cust1->areas->0.0.0.0->nbrFullAdjacentCounter: output has element with value '1' but in expected it has value '2'

/home/sharpd/frr2/tests/topotests/ospf_netns_vrf/test_ospf_netns_vrf.py:239: AssertionError
```
The support bundle has this data:
```
r1# show ip ospf vrf all neighbor
% 2024/08/28 14:55:54.763

VRF Name: r1-ospf-cust1

Neighbor ID     Pri State           Up Time         Dead Time Address         Interface                        RXmtL RqstL DBsmL
10.0.255.3        1 Full/DR         10.547s           39.456s 10.0.3.1        r1-eth1:10.0.3.2                     0     0     0
10.0.255.2        1 Full/Backup      0.543s           38.378s 10.0.3.3        r1-eth1:10.0.3.2                     1     0     0
```
So immediately after the test fails, the neighbor comes up.
Let's give the test a bit more time so that the failure does not happen.
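Concretely, that means bumping the polling budget in run_and_expect, e.g. (hedged sketch; the exact numbers chosen may differ):
```python
# Hedged sketch: allow more retries than count=10/wait=0.5 so a
# slightly slow adjacency does not fail the test.
_, diff = topotest.run_and_expect(test_func, None, count=30, wait=1)
```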
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Giving only 5 seconds to pass BGP data to peers on a heavily
loaded system is a recipe for not having fun. Add more time.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>