After implementing the ACCEPT_OWN extended community, bgpd can't import
VPN routes into VRFs whose RD matches that of the VPN routes. This
commit adds a new test to check the effect of the next commit.
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
Testcase: test_pim6_multiple_groups_different_RP_address_p2
was failing because of a bug in the framework. Fixed the
bug in this commit.
Signed-off-by: Kuldeep Kashyap <kashyapk@vmware.com>
Multicast pim6 static RP tests are failing
when run in parallel using micronet. There
are APIs to clean mcast traffic before
starting a new test, but these cleanups
are not needed when socat is used.
Signed-off-by: Kuldeep Kashyap <kashyapk@vmware.com>
Under really heavy load this is insufficient. Looking
at the run output we have this:
"2.1.3.22\/32":[
{
"installed":true,
}
],
"2.1.3.23\/32":[
{
"queued":true,
}
],
So after 10 seconds on the micronet system only 30 of the 100 routes are installed.
Give it more time.
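A minimal sketch of how such a check can be retried for longer, using
FRR's topotest.run_and_expect() helper; the checking function, router
name, and counts here are illustrative, not the test's actual code:

  import functools
  from lib import topotest
  from lib.topogen import get_topogen

  def check_routes_installed(router):
      # Return None once no route is left in the "queued" state.
      out = router.vtysh_cmd("show ip route json", isjson=True)
      queued = [p for p, rs in out.items() if rs and rs[0].get("queued")]
      return None if not queued else "still queued: %s" % queued

  r1 = get_topogen().gears["r1"]
  test_func = functools.partial(check_routes_installed, r1)
  # Allow far more than 10 seconds on a heavily loaded system.
  _, result = topotest.run_and_expect(test_func, None, count=60, wait=2)
  assert result is None, result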
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Looks like under heavy load the test is not giving enough
time to come to steady state. Do this (sketched below):
a) Send more udp packets, and for longer
b) Increase the time spent waiting
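A hedged sketch of the kind of traffic-generator change meant here;
the group, port, and counts are illustrative, not the test's actual
values:

  import socket
  import time

  def send_udp_stream(dst="224.1.1.1", port=20000, count=500,
                      interval=0.05):
      # More packets, sent over a longer period, so multicast state
      # has time to settle before the test checks it.
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
      for _ in range(count):
          s.sendto(b"test-data", (dst, port))
          time.sleep(interval)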
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
MPLS VPN networks can peer with either iBGP or eBGP. When
calculating the distance to send to zebra, the imported prefix
is never sent with distance information, even if the vty
command is used under the ipv4 unicast address family:
router bgp 65505 vrf vrf1
address-family ipv4 unicast
distance bgp 26 27 28
[vpn config]
The observation is that the distance sent to zebra for an
imported prefix is still 20:
[..]
VRF vrf1:
B> 192.168.0.0/24 [20/0] via 2.2.2.2 (vrf default) (recursive), label 20, weight 1, 00:00:12
* via 10.125.0.6, ntfp3 (vrf default), label implicit-null/20, weight 1, 00:00:12
The expectation is that the incoming prefix has to follow the
distance that is configured or, failing that, the distance derived
from the peer relationship established by the parent prefix.
In the case where an iBGP relationship is used and no distance is
configured, the show output below is expected:
[..]
VRF vrf1:
B*> 192.168.0.0/24 [200/0] via 192.168.0.2, r1-gre0 (vrf default), label 20, weight 1, 00:00:12
In the case where an iBGP relationship is used and the distance is
configured as below:
[..]
distance bgp 21 201 41
[..]
Then the show output below is expected:
[..]
VRF vrf1:
B*> 192.168.0.0/24 [201/0] via 192.168.0.2, r1-gre0 (vrf default), label 20, weight 1, 00:00:12
To get this behaviour, look up the peer that the prefix originally
came from.
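For illustration, a topotest could verify the resulting distance via
the JSON route output; a sketch under that assumption, with the VRF,
prefix, and distance taken from the example above:

  from lib.topogen import get_topogen

  def check_vrf_distance(prefix="192.168.0.0/24", expected=201):
      # Return None when zebra reports the expected distance.
      r1 = get_topogen().gears["r1"]
      out = r1.vtysh_cmd("show ip route vrf vrf1 %s json" % prefix,
                         isjson=True)
      routes = out.get(prefix, [])
      if routes and routes[0].get("distance") == expected:
          return None
      return "unexpected distance for %s" % prefix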
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
I'm seeing test failures in micronet runs in CI
after 7 seconds * 30 attempts of checking whether it succeeds.
Let's see if another 60 seconds of attempts allows
this to work properly.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
A single run of this test suite on my machine was 8 minutes.
Breaking it up into 3 test suites halves the run time.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This change alters the behavior of existing test code. The
default mode (before any call to luSetWaitType()) is now
"strict".
The historical behavior of luCommand(op="wait") is to ignore
failures to match the specified regexp in the specified time.
In those cases, no result was logged and no error was signaled.
This change introduces a new "strict" mode for luCommand(op="wait"):
in "strict" wait mode, each invocation of luCommand(op="wait")
generates an explicit, logged failure result when it fails to match
the specified regexp in the specified time. These failures signal
an error for the test.
Calling luSetWaitType("nostrict") restores the historical behavior.
Calling luSetWaitType("strict") (re)enables the new strict behavior.
Individual calls to luCommand() may also specify op="wait-nostrict"
to override any default and use the historical behavior.
Individual calls to luCommand() may also specify op="wait-strict"
to override any default and use the new behavior.
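For illustration, a test script might combine these as follows; the
router, commands, regexps, and timeouts are placeholders:

  from lib.lutil import luCommand, luSetWaitType

  luSetWaitType("strict")  # waits below now log a failure on timeout
  luCommand("r1", 'vtysh -c "show bgp summary"',
            "Established", "wait", "BGP adjacency up", 90)

  # Per-call override: tolerate a timeout, as historically.
  luCommand("r1", 'vtysh -c "show ip route 10.0.0.0/24"',
            "10.0.0.0", "wait-nostrict", "route may not be there yet", 30)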
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
Test that BFD static monitoring works:
When the BFD session is up, the routes are installed in the RIB and
distributed with the routing protocol (in this case BGP). When the
session is down, they are removed from the RIB and the removal is
propagated.
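A minimal sketch of the kind of check involved, assuming FRR's
standard "show bfd peers json" output; the helper name is
illustrative:

  def bfd_sessions_up(router):
      # The command returns a list of peer objects with a "status" field.
      peers = router.vtysh_cmd("show bfd peers json", isjson=True)
      return all(peer.get("status") == "up" for peer in peers)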
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Tests are failing in micronet because the linux kernel needs to be
4.19, not 4.15:
2023-01-11 17:15:06,657.657 INFO: topolog.r1: vtysh command => "show zebra"
2023-01-11 17:15:06,657.657 DEBUG: topolog.r1: LinuxNamespace(r1): cmd_status("['/bin/bash', '-c', 'vtysh -c "show zebra" 2>/dev/null']", kwargs: {'encoding': 'utf-8', 'stdout': -1, 'stderr': -2, 'shell': False, 'stdin': None})
2023-01-11 17:15:06,729.729 INFO: topolog.r1: vtysh result:
OS Linux(4.15.0-193-generic)
Notice the missing pimreg11 device needed in vrf blue:
2023-01-11 17:15:06,731.731 DEBUG: topolog.r1: LinuxNamespace(r1): cmd_status("['/bin/bash', '-c', 'vtysh -c "show int brief" 2>/dev/null']", kwargs: {'encoding': 'utf-8', 'stdout': -1, 'stderr': -2, 'shell': False, 'stdin': None})
2023-01-11 17:15:06,781.781 INFO: topolog.r1: vtysh result:
Interface Status VRF Addresses
--------- ------ --- ---------
blue up blue 192.168.0.1/32
r1-eth0 up blue 192.168.100.1/24
r1-eth1 up blue 192.168.101.1/24
Interface Status VRF Addresses
--------- ------ --- ---------
erspan0 down default
gre0 down default
gretap0 down default
lo up default
pimreg up default
Interface Status VRF Addresses
--------- ------ --- ---------
r1-eth2 up red 192.168.100.1/24
r1-eth3 up red 192.168.101.1/24
red up red 192.168.0.1/32
While on a 5.4 machine we have this:
mininet310# show int brief
Interface Status VRF Addresses
--------- ------ --- ---------
blue up blue
dummy1 up blue
dummy2 up blue
pimreg11 up blue
As such, let's limit the test to a 4.19 kernel or above, which our
documentation states we need for proper pim operation.
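The gate itself follows the usual topotest pattern; a sketch, with an
illustrative skip message:

  import platform
  import pytest
  from lib.topotest import version_cmp

  if version_cmp(platform.release(), "4.19") < 0:
      pytest.skip("kernel 4.19 or newer needed for pim vrf operation")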
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Previously, routes leaked from one VRF to another VRF were associated
with the original nexthop interface.
Commit 14aabc0156 replaced the nexthop
interface with the index of the incoming VRF interface.
Due to this change, the `bgp_srv6l3vpn_route_leak` topotest always fails
because it still expects the original nexthop interface.
This commit fixes the expected interface name in the
`bgp_srv6l3vpn_route_leak` topotest.
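Illustratively, the expected nexthop in the test's JSON moves from the
original interface to the incoming VRF interface (the names here are
examples, not the test's actual data):

  expected = {
      "192.168.1.0/24": [{
          "nexthops": [{
              # previously the original nexthop interface, e.g. "eth0"
              "interfaceName": "vrf10",
          }],
      }],
  }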
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
To verify the previous changes, this PR adds a topotest to verify
whether imported routes, once redistributed, will be active in the
other VRF's RIB.
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
Because of the issue described at the link below, pinging from a vrf with
the command "ip vrf exec <vrf> ping -I <src> <addr>" may fail.
> root@topo:~# ip vrf exec vrf1 ping -c1 -I 192.168.2.1 192.168.1.1
> bind: Cannot assign requested address
Raise an error if pinging its own IP from a VRF fails. This test should
always pass except under the conditions of this issue.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=203483
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Add an "exist" key to check the existence of a prefix in the BGP RIB.
Useful to check that a prefix has not leaked by error.
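A hedged sketch of how expectations might then be written; the exact
schema follows the test's own check helpers, and these entries are
illustrative:

  expected_routes = {
      "10.0.0.0/24": {"exist": True},   # must be present in the BGP RIB
      "10.9.9.0/24": {"exist": False},  # must NOT have leaked
  }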
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Update bgp_vrf_route_leak_basic to set up the VRF interfaces. Otherwise
the routes to the VRF interface are inactive.
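A sketch of the kind of setup meant, done with iproute2 from the test;
the VRF name and table id are illustrative:

  from lib.topogen import get_topogen

  r1 = get_topogen().gears["r1"]
  # Bring the VRF device up so routes over the VRF interface are active.
  r1.run("ip link add VRF1 type vrf table 1001")
  r1.run("ip link set VRF1 up")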
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Leaked connected routes now have the following nexthop interfaces:
- lo for routes imported from the default VRF
- the VRF interface for routes imported from other VRFs.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
The wq->spec.errorfunc is never used in the code.
It's been in the code base since 2005 and I also
do not remember ever seeing it called. No
workqueue process function ever returns an error.
Since it's not used, let's just remove it from the
code base.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Building FRR with --enable-address-sanitizer and then running the
config_timing test makes the test run for over an hour on my machine.
The goal of this test is to ensure that the test runs 10000 routes
in/out in a reasonable amount of time. We cannot test this with
address-sanitizer enabled. So just make the test meaningless from a
timing perspective, but keep it `alive` in that it might still catch
some address-sanitizer issue with 50 routes instead of 10000.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The L3VPN best path computation now takes into account the IGP metric.
Adapt the bgp_l3vpn_to_bgp_vrf tests so that routes with the best IGP
metric are selected when needed.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>