These will be covered later in further PRs. They are added now to avoid
compiler errors due to uncovered switch cases.
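For illustration only (the enum name and values below are placeholders,
not the actual FRR definitions): with -Wswitch enabled, a switch over an
enum that omits a newly added value produces a warning, and simply
covering the case keeps the build clean.

    enum capability_code {
        CAP_MP = 1,
        CAP_SOFTWARE_VERSION = 75,  /* newly added value */
    };

    static const char *capability_name(enum capability_code code)
    {
        switch (code) {
        case CAP_MP:
            return "MultiProtocol";
        case CAP_SOFTWARE_VERSION: /* covering the new case avoids -Wswitch */
            return "Software Version";
        }
        return "Unknown";
    }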
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
We have dynamic capability support, but it handles only the MP capability.
With this change, we can enable the software version capability dynamically,
without resetting the session.
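As an illustrative configuration (neighbor address and ASNs are made up),
adding the software version capability to a neighbor that has already
negotiated the dynamic capability should now take effect without a
session reset:

    router bgp 65001
     neighbor 192.0.2.1 remote-as 65002
     neighbor 192.0.2.1 capability dynamic
     neighbor 192.0.2.1 capability software-version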
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Add a `--v6-with-v4-nexthop` cli option to bgp to allow it to peer with
configured neighbors where the interface has no v6 addresses at all, but
does have a v4 address that is usable as a v4 address embedded in a v6
address.
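One way to pass the new flag, assuming the standard daemons file layout
(the path and the other options shown are illustrative):

    # /etc/frr/daemons
    bgpd_options="  -A 127.0.0.1 --v6-with-v4-nexthop"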
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Modify bgp to not allow a v6 peer to come up if the v6 afi is negotiated,
the outgoing interface has no v6 address, and zebra does not support the
v6-with-v4-nexthop capability that some dataplanes allow.
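A minimal sketch of the intended check, using hypothetical names rather
than the actual bgpd functions:

    #include <stdbool.h>

    /* The v6 peer may only come up when either the outgoing interface
     * has a usable v6 address or zebra has announced support for v6
     * routes with v4 nexthops. */
    static bool peer_v6_nexthop_usable(bool v6_afi_negotiated,
                                       bool ifp_has_v6_address,
                                       bool zebra_v6_with_v4_nexthop)
    {
        if (!v6_afi_negotiated)
            return true;

        return ifp_has_v6_address || zebra_v6_with_v4_nexthop;
    }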
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After Zebra knows its capability surrounding v6 with v4 nexthops,
have it send this ability up to interested parties.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
RCA:
On encountering any attribute error for core attributes in an update
message, the error handling is set to 'treat as withdraw' and further
parsing of the remaining attributes is skipped. But the stream pointer
is not adjusted to skip the rest of the attributes and point to the
NLRI field. This leads to incorrect parsing of the NLRI field, which
causes the BGP session to reset.
Fix:
The stream pointer offset is now adjusted to point to the NLRI field
correctly when a malformed attribute is encountered and the remaining
attribute parsing is skipped.
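A minimal, self-contained sketch of the idea (names are hypothetical,
not the actual bgpd stream API): once a malformed core attribute is
detected, move the read offset to the end of the path-attribute block
so NLRI parsing starts at the correct position.

    #include <stddef.h>
    #include <stdint.h>

    struct parse_stream {
        const uint8_t *buf;
        size_t getp;            /* current read offset */
    };

    /* NLRI begins right after the full path-attribute block, so jump
     * the read offset there instead of leaving it mid-attribute. */
    static void skip_to_nlri(struct parse_stream *s, size_t attr_start,
                             size_t total_attr_len)
    {
        s->getp = attr_start + total_attr_len;
    }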
Signed-off-by: Samanvitha B Bhargav <bsamanvitha@vmware.com>
The output of gen_json_diff_report is used all over the place, and it
refers to d1 and d2. Let's change this to output and expected, as that
is how it is used. This should help with debugging.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Let us start using output and expected in lib/topotest.py, because when
we look at the output it is confusing what d1 is versus what d2 is.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When zebra sends a message to the fpm client, it does not handle
duplicated nexthops specially. This means that if zebra has a route
with NUM1 recursive nexthops, each resolved to the same NUM2 connected
nexthops, it will send the fpm client a route with NUM1*NUM2 nexthops.
But only NUM2 of those nexthops are actually useful; the remaining
NUM1*NUM2-NUM2 nexthops are all duplicates. Note that zebra already
has duplicated-nexthop removal logic when sending messages to the
kernel.

Add the same duplicated-nexthop removal logic to zebra when sending
messages to the fpm client.
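A minimal sketch of such dedup (types and field names are hypothetical,
not zebra's actual nexthop structures): keep the first occurrence of
each resolved (ifindex, gateway) pair and drop the rest before encoding
the FPM message.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct nh {
        unsigned int ifindex;
        unsigned char gate[16];
    };

    static bool nh_equal(const struct nh *a, const struct nh *b)
    {
        return a->ifindex == b->ifindex &&
               memcmp(a->gate, b->gate, sizeof(a->gate)) == 0;
    }

    /* Compact the array in place; returns the new nexthop count. */
    static size_t dedup_nexthops(struct nh *nhs, size_t num)
    {
        size_t kept = 0;

        for (size_t i = 0; i < num; i++) {
            bool dup = false;

            for (size_t j = 0; j < kept; j++) {
                if (nh_equal(&nhs[i], &nhs[j])) {
                    dup = true;
                    break;
                }
            }
            if (!dup)
                nhs[kept++] = nhs[i];
        }
        return kept;
    }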
Signed-off-by: 恭简 <gongjian.lhr@alibaba-inc.com>
Setup:
------
R1 (LHR) --------- R2 (RP) ---------- R3 (FHR)
Problem:
-------
- Send IGMP/MLD join and traffic.
  LHR: the (S,G) mroute is created with reference count = 2
  and the SRC_STREAM flag is set.
(Code flow: pim_mroute_msg_wholepkt -> pim_upstream_add,
pim_upstream_sg_running_proc -> pim_upstream_ref)
- Send IGMP/MLD prune.
  LHR: removes the (*,G) entry and tries to remove its children (S,G)
  entries. But the (S,G) entry has reference count = 2, so after the
  prune its reference count becomes 1 and the entry stays present
  until the KAT expires.
Fix:
---
Don't set the SRC_STREAM flag on the LHR.
On the LHR, the (S,G) entry should be maintained as long as the (*,G)
entry is present.
When a prune is received, delete the (*,G) entry and its children
(S,G) entries.
When traffic stops, delete the (S,G) entry after the KAT expires.
Issue: #13893
Signed-off-by: Sarita Patra <saritap@vmware.com>
It was hardcoded to x86_64, but we build Alpine images for more
platforms, so let's be dynamic here.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Current isis tests use a variety of hello timers as well as
hello-multipliers; let's modify all of the isis test cases to use 1
and 10. This cleans up some spurious test failures I was seeing
locally. As an example, without these changes, running
isis_tilfa_topo1 2r6 times I would see 5-10 test failures; now I am
seeing ~2 test failures. In any event, part of the problem was that
some tests were not fully converged when looking at them under heavy
system load. Changing this to 1/10 gives us 10 chances to see the
incoming packet.
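For reference, an illustrative interface stanza with the values the
tests are being converged on (the interface name and area tag are
placeholders):

    interface r1-eth0
     ip router isis 1
     isis hello-interval 1
     isis hello-multiplier 10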
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This test was failing upstream a bunch of times. Upon examining the
log files as well as the test script, it was noticed that the bfd
peers were checked to see that they had come up, but bgp was not.
Given the timers used for bgp and the missing check that bgp had
actually come up, the test would fail in subsequent steps if bgp had
not come up. Test that bgp peering is actually established before
testing link-down events. It's possible this test might need to be
revisited to ensure that the routes are actually installed and ready
to go beforehand as well, but I am not seeing that right now.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This test is failing upstream regularly. When inspecting the log
files, we see that the route being looked for is in a queued state
when the test fails. Give this test more time for when the system is
under severe load.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Upstream (and locally) this test fails. The adj-sid value being looked
for in the test is a dynamic value that is assigned based upon how the
network comes up. The reality is that there is no enforced order for
what the adj-sid can be. As such, having the test look for this value
makes no sense; let's remove that from the test.
Additionally, bring the isis hello-interval down from 3 to 1 to make
things converge faster.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>