Problem Statement:
=================
In a scaled setup, BGP sessions start flapping.
RCA:
====
In a virtualized environment there are multiple places where
the MTU needs to be set. If the MTU is not set properly in some of
these places, BGP packets may get fragmented; in a scaled setup this
leads to BGP session flaps.
Fix:
====
This implementation adds a new TCP option that can be configured
per neighbor to set the TCP maximum segment size. The user needs to
derive the path MTU between the BGP neighbors and set the corresponding
value via the tcp-mss setting.
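As background only (FRR itself is written in C, and the helper names
below are hypothetical), the option corresponds to applying the
standard TCP_MAXSEG socket option on the peering socket, with the value
typically derived from the path MTU by subtracting the IP and TCP
header overhead. A minimal Python sketch, assuming a Linux host:

    import socket

    IPV4_HDR_LEN = 20
    TCP_HDR_LEN = 20

    def mss_from_path_mtu(path_mtu):
        # Conservative IPv4 MSS for a measured path MTU.
        return path_mtu - IPV4_HDR_LEN - TCP_HDR_LEN

    def connect_with_mss(peer_ip, mss, port=179):
        # Open a TCP session towards a BGP peer with a capped MSS.
        # The kernel may settle on a lower effective value, which is
        # what the "synced tcp-mss" display below reflects.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, mss)
        s.connect((peer_ip, port))
        return s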
1. CLI Configuration:
[no] neighbor <A.B.C.D|X:X::X:X|WORD> tcp-mss (1-65535)
2. Running config
frr# show running-config
router bgp 100
neighbor 198.51.100.2 tcp-mss 150 => new entry
neighbor 2001:DB8::2 tcp-mss 400 => new entry
3. Show command
frr# show bgp neighbors 198.51.100.2
BGP neighbor is 198.51.100.2, remote AS 100, local AS 100, internal link
Hostname: frr
Configured tcp-mss is 150, synced tcp-mss is 138 => new display
4. Show command json output
frr# show bgp neighbors 2001:DB8::2 json
{
"2001:DB8::2":{
"remoteAs":100,
"bgpTimerKeepAliveIntervalMsecs":60000,
"bgpTcpMssConfigured":400, => new entry
"bgpTcpMssSynced":388, => new entry
Risk:
=====
Low - This is a config driven feature and it sets the max segment
size for the TCP session between BGP peers.
Tests Executed:
===============
Manual testing was done with a three-router topology.
1. Executed basic config and unconfig scenarios.
2. Verified that the config is updated in the running config
during config and unconfig operations.
3. Verified the show command output in both CLI format and
JSON format.
4. Verified that TCP SYN packets carry the configured max segment
size option.
5. Verified the behaviour when the BGP session is cleared.
6. Performed a packet capture to confirm that the new segment size
takes effect (see the sketch after this list).
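For steps 4 and 6, one possible way to script the check is with scapy
(a hypothetical helper, not part of the test suite), reading the MSS
option from captured BGP SYN packets:

    from scapy.all import TCP, sniff

    def captured_syn_mss(iface, count=5, timeout=30):
        # Sniff BGP SYN packets and return the advertised MSS values.
        pkts = sniff(iface=iface, count=count, timeout=timeout,
                     filter="tcp port 179 and tcp[tcpflags] & tcp-syn != 0")
        mss_values = []
        for pkt in pkts:
            if TCP in pkt:
                for name, value in pkt[TCP].options:
                    if name == "MSS":
                        mss_values.append(value)
        return mss_values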
Signed-off-by: Abhinay Ramesh <rabhinay@vmware.com>
Not having scapy in the Docker image leads to very obtuse failures in
the PIM BSM tests (obtuse, as in, they just fail without any hint as to
why...)
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
The previous method, using zassert.h and hoping nothing includes
assert.h (which, on glibc at least, just does "#undef assert" and puts
its own definition in...) was fragile - and actually broke undetected.
Just provide our own assert.h and control overriding by putting it in a
separate directory to add to the include path (or not.)
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
The YANG model and CLI commands allow the user to configure LDP-sync
per area. But the actual implementation is incorrect - all commands
change the config for the whole VRF instead of a single area. This
commit fixes this issue by actually implementing per-area configuration.
Fixes #8578.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Currently we don't allow configuring the interface before the area is
configured. This approach has the following issues:
1. The area config can be deleted even when we have an interface config
relying on it. The code is not ready for that - we'll have a whole
bunch of stale pointers if user does that.
2. The code doesn't correctly process the event of changing the VRF for
an interface. There is no mechanism to ensure that the area exists
in the new VRF so currently the circuit still stays in the old VRF.
This commit allows an arbitrary order of area/interface configuration.
There is no more need to configure the area before configuring the
interface.
This change fixes both issues.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Description:
DR information is missing under "show ip ospf interface [json]".
Added DR information to the "show ip ospf interface" output.
Signed-off-by: Rajesh Girada <rgirada@vmware.com>
The current log prints the maximum wait time, which is not actually
correct because it doesn't include the command execution time. We
usually see a "failed after X seconds" log with X far longer than this
maximum. Let's print the maximum number of tries instead.
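A minimal sketch of the intended logging, with illustrative names
rather than the actual topotest API:

    import logging
    import time

    def run_and_expect(func, expected, count=20, wait=3):
        # Retry 'func' up to 'count' times, sleeping 'wait' seconds
        # between tries. Logging the number of tries is more honest
        # than 'count * wait', which excludes command execution time.
        logging.debug("waiting for expected output (up to %d tries)", count)
        for _ in range(count):
            if func() == expected:
                return True
            time.sleep(wait)
        return False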
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This test is incorrect at the test_bfd_loss_intermediate step. It
shuts down the interface and then "waits" for the BGP session to fail.
But instead of actually waiting, it compares the output of "show bfd
peers" with the "up" state. As this comparison is done right after the
interface shutdown, the BFD session has not yet failed, so the
comparison is always successful except in very rare cases when the
command takes a long time to execute (due to heavy load on the CI
system, I suppose).
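A corrected check would poll until the session actually leaves the
"up" state instead of comparing once right after the shutdown. A rough
sketch, assuming the topotest router's vtysh_cmd helper and that the
peers are reported as a JSON list with "peer" and "status" fields:

    import json
    import time

    def wait_bfd_down(router, peer, count=30, wait=1):
        # Poll "show bfd peers json" until the given peer is no longer up.
        for _ in range(count):
            peers = json.loads(router.vtysh_cmd("show bfd peers json"))
            states = {p["peer"]: p["status"] for p in peers}
            if states.get(peer) != "up":
                return True
            time.sleep(wait)
        return False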
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This function kills all processes that happen to have the same name
as FRR processes, and it was only ever used in the setup. Setup should
not be used to kill old runs; that should be a separate process.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
`CFLAGS` is a "user variable", not intended to be controlled by
configure itself. Let's put all the "important" stuff in AC_CFLAGS and
only leave debug/optimization controls in CFLAGS.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
This test establishes a binding between the NBMA IP of a spoke and its
protocol address. This information is pushed to the hub.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Just another round of adding pytest.mark.bgpd markers. Not finished
yet, just what I could stand doing for a few minutes.
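For reference, the marker is typically added at module level in each
test file, roughly like this (generic sketch, not a specific file from
this change):

    import pytest

    # Mark every test in this module as exercising bgpd, so it can be
    # selected or skipped with "-m bgpd".
    pytestmark = [pytest.mark.bgpd]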
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Fixes:
/usr/lib/python3.9/site-packages/_pytest/config/__init__.py:1463: in getoption
val = getattr(self.option, name)
E AttributeError: 'Namespace' object has no attribute 'topology_only'
The above exception was the direct cause of the following exception:
/usr/lib/python3.9/site-packages/pluggy/manager.py:127: in register
hook._maybe_apply_history(hookimpl)
/usr/lib/python3.9/site-packages/pluggy/hooks.py:333: in _maybe_apply_history
res = self._hookexec(self, [method], kwargs)
/usr/lib/python3.9/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/lib/python3.9/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
tests/topotests/conftest.py:62: in pytest_configure
if config.getoption("--topology-only"):
/usr/lib/python3.9/site-packages/_pytest/config/__init__.py:1474: in getoption
raise ValueError(f"no option named {name!r}") from e
E ValueError: no option named 'topology_only'
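The exact fix may differ, but one way to avoid this hard failure is to
make sure the option is always registered (or to read it with a
default). A hedged sketch of the conftest.py hooks, with illustrative
help text:

    # conftest.py
    def pytest_addoption(parser):
        # Register the option so getoption("--topology-only") is valid.
        parser.addoption(
            "--topology-only",
            action="store_true",
            default=False,
            help="Only build the topology, do not run any tests",
        )

    def pytest_configure(config):
        if config.getoption("--topology-only"):
            ...  # build the topology and stop before running tests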
Signed-off-by: Quentin Young <qlyoung@nvidia.com>