Don't use 'interface WORD area A.B.C.D' to enable OSPFv3 areas on
interfaces; use the standardized 'ipv6 ospf6 area A.B.C.D' instead.
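For reference, the change is of this general shape (interface and
area values here are illustrative, not taken from the test configs):

Old form, configured under the router node:

router ospf6
 interface eth-rt2 area 0.0.0.0

New form, configured on the interface itself:

interface eth-rt2
 ipv6 ospf6 area 0.0.0.0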
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
./bfd_ospf_topo1.test_bfd_ospf_topo1/rt3/ospfd.log:2023/08/04 12:46:58 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 28: passive interface lo
./bfd_ospf_topo1.test_bfd_ospf_topo1/rt5/ospfd.log:2023/08/04 12:46:59 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 27: passive interface lo
./bfd_ospf_topo1.test_bfd_ospf_topo1/rt1/ospfd.log:2023/08/04 12:46:56 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 30: passive interface lo
./bfd_ospf_topo1.test_bfd_ospf_topo1/rt4/ospfd.log:2023/08/04 12:47:00 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 27: passive interface lo
./bfd_ospf_topo1.test_bfd_ospf_topo1/rt2/ospfd.log:2023/08/04 12:46:57 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 28: passive interface lo
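The unhyphenated `passive interface` is not a valid ospfd command,
so the fix is presumably the hyphenated form in each router's
config, along these lines:

router ospf
 passive-interface lo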
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When our CI test system is under high load, expecting BFD to
converge in under 2 seconds is unrealistic. Modify the test
suites to just ensure that things eventually reconverge.
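In practice that means polling for the expected output instead of
asserting after a fixed delay. A minimal sketch using the stock
topotest helpers (`expected`, the command, and the retry values are
placeholders, not the suite's exact data):

from functools import partial
from lib import topotest

# Retry the JSON comparison until it matches or retries run out,
# instead of expecting convergence within a fixed 2 seconds.
test_func = partial(
    topotest.router_json_cmp,
    tgen.gears["rt1"],
    "show ip route ospf json",
    expected,
)
_, result = topotest.run_and_expect(test_func, None, count=160, wait=0.5)
assert result is None, "rt1 did not reconverge"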
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Debugs take up a significant amount of CPU time and consume extra
disk space to store results. Reduce test overhead by removing the
debugs. Hopefully this helps alleviate some of the overloading
that we are seeing in our CI systems.
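The dropped lines are per-daemon debug statements of this general
shape (the exact set varies per router; these are illustrative, not
the committed list):

debug ospf event
debug bfd peer
debug bfd zebra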
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We have this pattern in this test:
# Let's kill the interface on rt2 and see what happens with the RIB and BFD on rt1
tgen.gears["rt2"].link_enable("eth-rt1", enabled=False)
# By default BFD provides a recovery time of 900ms plus jitter, so let's wait
# initial 2 seconds to let the CI not suffer.
topotest.sleep(2, 'Wait for BFD down notification')
router_compare_json_output(
    "rt1", "show ip route ospf json", "step3/show_ip_route_rt2_down.ref", 1, 0
)
Under heavy CI load, the interface down event and the reaction to
it may not actually happen within 2 seconds. Allow some more grace
time in the test so that we still catch the reaction reliably.
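Assuming the helper's two trailing arguments are the retry count
and the per-retry wait (they are `1, 0` above, i.e. a single
immediate check), the relaxed call would look roughly like this
(values illustrative):

router_compare_json_output(
    "rt1", "show ip route ospf json", "step3/show_ip_route_rt2_down.ref", 10, 1
)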
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Document the `sleep` statement so people know that we are sleeping
because we are waiting for the BFD down notification. If we don't
sleep here, it is possible that we get outdated `show` command results.
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Call the `show` commands less often to reduce the CPU pressure.
Also increase the total wait time from 60 to 80 seconds to leave
spare room for failures (about 4 times the measured convergence
time). This is the latest measured wait time:
> INFO: topolog: 'router_json_cmp' succeeded after 20.08 seconds
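Polling less often means trading retry count for per-retry wait in
the helpers; a sketch of the idea, reusing the `test_func` pattern
from above (the concrete numbers are illustrative, not the
committed ones):

# Before: poll twice per second for 60 seconds total.
topotest.run_and_expect(test_func, None, count=120, wait=0.5)
# After: poll every 5 seconds for 80 seconds total, so far fewer
# `show` command invocations under a longer deadline.
topotest.run_and_expect(test_func, None, count=16, wait=5)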
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Reduce timers so we send hello packets more often, and reduce the
dead interval to converge faster.
Previous test wait time:
> INFO: topolog: 'router_json_cmp' succeeded after 47.20 seconds
New test wait time:
> INFO: topolog: 'router_json_cmp' succeeded after 20.08 seconds
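The adjustment is to the OSPF interface timers and would be of this
general form (interface name and values are illustrative, not
necessarily the committed ones):

interface eth-rt2
 ip ospf hello-interval 2
 ip ospf dead-interval 10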
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Change every `-` to `_` in directory names to avoid mixing `_` and
`-`. This is just for consistency and so directories sort properly.
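A rename of this kind can be scripted; a minimal sketch, assuming
it is run from the directory that holds the test directories:

import os

# Replace every '-' with '_' in the names of immediate subdirectories.
for name in sorted(os.listdir(".")):
    if os.path.isdir(name) and "-" in name:
        os.rename(name, name.replace("-", "_"))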
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>