Problem commit -
[
b169fd6fd5 zebra: support for MAC-IP sync routes
]
That commit accidentally replaced a mac-ip del to bgp with a mac
del (a consequence of a bad cut-and-paste).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
As part of the refactoring, some of the evpn_vni_es APIs were renamed
to evpn_evpn_es. Changed them to evpn_es_evi to make the naming common
to vxlan and mpls.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a MAC is detected as duplicate on a local learn event (with the
freeze action), do not send an update to bgp to advertise it into the
evpn control plane.
With evpn mh, the inform_client flag was being set and a notification
sent to bgp even though duplicate detection had triggered.
Check whether the MAC has been detected as duplicate before setting
inform_client to true.
Ticket:CM-29817
Reviewed By:CCR-10329
Testing Done:
Enable DAD with freeze action
Upon local learn the MAC is detected as duplicate
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
When installing rules, pass the interface name across zapi.
This is being changed because we have a situation where, if you
quickly create/destroy ephemeral interfaces under linux, the upper
level protocol may be trying to add a rule for an interface that does
not quite exist at the moment. Since ip rules actually want the
interface name (to handle just this sort of situation), convert over
to passing the interface name, and store and use it in zebra.
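A rough sketch of the idea (the structure and function names below are
hypothetical, not the actual zebra ones): keep the name in the rule
context and later emit it as the string-valued FRA_IFNAME netlink
attribute, which is what lets the kernel match rules by name even
while the ifindex churns.

#include <linux/fib_rules.h>	/* FRA_IFNAME: fib rules match on a name */
#include <net/if.h>		/* IFNAMSIZ */
#include <stdint.h>
#include <stdio.h>

/* hypothetical rule context: store the interface name, not an ifindex */
struct dplane_rule_ctx {
	uint32_t priority;
	uint32_t table;
	char ifname[IFNAMSIZ];
};

static void rule_ctx_set_ifname(struct dplane_rule_ctx *ctx,
				const char *name)
{
	/* the name is copied as-is and later written into the
	 * RTM_NEWRULE request as an FRA_IFNAME attribute */
	snprintf(ctx->ifname, sizeof(ctx->ifname), "%s", name);
}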
Ticket: CM-31042
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
this is used when parsing the newly created network namespaces.
To track the link of some interfaces, such as vxlan interfaces, both
the link index and the link nsid are necessary. If a vxlan interface
is moved to a new netns, the link information stays in the default
network namespace, and LINK_NSID is then the id that the default netns
has inside the new netns. That value is not known, because the system
does not automatically assign an nsid to the default network namespace
in the new netns. Now a new nsid for the default netns, as seen from
that new netns, is created. This permits storing, at netns creation,
the default netns relative value for further usage.
Because the default netns value is set from the new netns perspective,
it is no longer needed to use the NETNSA_TARGET_NSID attribute, which
is only available in recent kernels.
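A minimal sketch of the mechanism (raw netlink, error handling elided;
the socket and fd names are assumptions): from inside the freshly
created netns, RTM_NEWNSID with a negative NETNSA_NSID asks the kernel
to allocate a free id for the default netns.

#include <linux/net_namespace.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

static void addattr32(struct nlmsghdr *n, uint16_t type, uint32_t data)
{
	struct rtattr *rta =
		(struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(sizeof(data));
	memcpy(RTA_DATA(rta), &data, sizeof(data));
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + rta->rta_len;
}

/* rtnl_sock is a NETLINK_ROUTE socket opened inside the new netns;
 * default_ns_fd is an fd on the default namespace's ns/net file */
static int assign_default_nsid(int rtnl_sock, int default_ns_fd)
{
	struct {
		struct nlmsghdr n;
		struct rtgenmsg g;
		char buf[64];
	} req;

	memset(&req, 0, sizeof(req));
	req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg));
	req.n.nlmsg_type = RTM_NEWNSID;
	req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;

	addattr32(&req.n, NETNSA_FD, (uint32_t)default_ns_fd);
	addattr32(&req.n, NETNSA_NSID, (uint32_t)-1); /* -1: kernel picks */

	return send(rtnl_sock, &req, req.n.nlmsg_len, 0) < 0 ? -1 : 0;
}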
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
the walk routine is used by the vxlan service to identify some
contexts in each specific network namespace, when the vrf netns
backend is used. That walk mechanism is extended with some additional
parameters to the walk routine.
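Hypothetical shape of the extended walker (the real function and
argument names may differ):

struct ns;	/* per-namespace context */

/* the callback now receives the namespace plus caller-supplied in/out
 * arguments; a nonzero return stops the walk */
typedef int (*ns_walk_cb)(struct ns *ns, void *param_in,
			  void **param_out);

int ns_walk_func(ns_walk_cb cb, void *param_in, void **param_out);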
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
when duplicate address detection is triggered, counters have to be
incremented and timers run. For that, the main evpn configuration is
retrieved. Until now, the VRF storing the dad config parameters was
the same VRF that hosted the VXLAN interface. With the netns backend
this is not true, as the VXLAN interface is in the same VRF as the
bridge interface. The modification follows the same definition as in
BGP, that is to say there is a single evpn instance, and it is that
instance that provides the correct config settings.
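A sketch of the lookup change, assuming a helper along the lines of
zebra_vrf_get_evpn() that returns the single EVPN instance; the struct
layout and handler below are illustrative, not the exact zebra ones.

struct zebra_mac;	/* opaque here */
struct zebra_vrf {	/* illustrative subset */
	int dup_addr_detect;
};

struct zebra_vrf *zebra_vrf_get_evpn(void);	/* assumed helper */
void handle_dup_addr_detection(struct zebra_vrf *zvrf,
			       struct zebra_mac *mac); /* hypothetical */

static void dad_check(struct zebra_mac *mac)
{
	/* read dad settings from the single EVPN instance, not from
	 * the vrf that hosts the vxlan interface */
	struct zebra_vrf *zvrf = zebra_vrf_get_evpn();

	if (zvrf && zvrf->dup_addr_detect)
		handle_dup_addr_detection(zvrf, mac);
}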
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
this change is needed when a MAC/IP entry is learned by zebra and the
entry happens to be in a different namespace. For the entry to be
active, the correct vni match has to be found.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
1. A MAC ref of a zero ESI was accidentally creating a new ES with a
zero ES id.
2. When an ES was deleted and re-added, the ES was not being sent to
BGP because of a stale flag that suppressed the update as a dup.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When we get a rib deletion event and we already have
that particular route node in the queue to be reprocessed,
just note that someone from kernel land has done us dirty
and allow it to be cleaned up by normal processing
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Imagine a situation where an interface is bouncing up/down.
The interface comes up and daemons like pbr will get a nht
tracking callback for a connected interface up and will install
the routes down to zebra. At this same time the interface can
go down. But since zebra is busy handling route changes ( from pbr )
it has not read the netlink message and can get into a situation
where the route resolves properly and then we attempt to install
it into the kernel (which is rejected). If the interface
bounces back up fast at this point, the down then up netlink
message will be read and create two route entries off the connected
route node. Zebra will then enqueue both route entries for future processing.
After this processing happens, the down/up is collapsed into an up,
and nexthop tracking sees no changes and does not inform any upper
level protocol (in this case pbr) that nexthop tracking has changed.
So pbr still believes the nexthops are good, but the routes are not
installed since pbr has taken no action.
Fix this by immediately running rnh when we signal that a connected
route entry is scheduled for removal. This should cause
upper level protocols to get an rnh notification for the small
amount of time that the connected route was bouncing around like
a madman.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
It was wrongly assumed that the kernel is replying in batches when multiple
requests fail. The kernel sends one error message at a time, so we can
simply keep reading data from the socket as long as possible.
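A sketch of the resulting read loop (error handling trimmed): drain
the socket non-blockingly and handle each nlmsgerr individually.

#include <linux/netlink.h>
#include <sys/socket.h>

static void netlink_drain_errors(int sock)
{
	char buf[4096];
	ssize_t len;

	/* keep reading as long as the socket has data */
	while ((len = recv(sock, buf, sizeof(buf), MSG_DONTWAIT)) > 0) {
		struct nlmsghdr *h = (struct nlmsghdr *)buf;
		int remain = (int)len;

		for (; NLMSG_OK(h, remain); h = NLMSG_NEXT(h, remain)) {
			if (h->nlmsg_type != NLMSG_ERROR)
				continue;
			/* the kernel sends one error message per
			 * failed request, not a batch */
			struct nlmsgerr *err = NLMSG_DATA(h);

			if (err->error)
				; /* report/requeue the failed request */
		}
	}
}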
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
During testing it was noticed that routes were considered
installed by zebra, but the kernel did not have the route.
Upon close debugging of the rib it was noticed that FRR
was turning a dplane_ctx_route_init failure into a success,
leaving FRR in a bad state.
2020/08/26 17:55:53.897436 PBR: route_notify_owner: [0.0.0.0/0] Route Removed succeeded for table: 10012
2020/08/26 17:55:53.897572 ZEBRA: 0.0.0.0/0: uptime == 432033, type == 24, instance == 0, table == 10012
2020/08/26 17:55:53.897622 ZEBRA: rib_meta_queue_add: (0:10012):0.0.0.0/0: queued rn 0x5566b0ea7680 into sub-queue 5
2020/08/26 17:55:53.907637 ZEBRA: default(0:10012):0.0.0.0/0: Processing rn 0x5566b0ea7680
2020/08/26 17:55:53.907665 ZEBRA: default(0:10012):0.0.0.0/0: Examine re 0x5566b0d01200 (pbr) status 2 flags 1 dist 200 metric 0
2020/08/26 17:55:53.907702 ZEBRA: default(0:10012):0.0.0.0/0: After processing: old_selected 0x0 new_selected 0x5566b0d01200 old_fib 0x0 new_fib 0x5566b0d01200
2020/08/26 17:55:53.907713 ZEBRA: default(0:10012):0.0.0.0/0: Adding route rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr)
2020/08/26 17:55:53.907879 ZEBRA: default(0:10012):0.0.0.0/0: rn 0x5566b0ea7680 dequeued from sub-queue 5
2020/08/26 17:55:53.907943 ZEBRA: netlink_route_multipath: RTM_NEWROUTE 0.0.0.0/0 vrf 0(10012)
2020/08/26 17:55:53.910756 ZEBRA: default(0:10012):0.0.0.0/0 Processing dplane result ctx 0x5566b0ea82f0, op ROUTE_INSTALL result SUCCESS
2020/08/26 17:55:53.910769 ZEBRA: update_from_ctx: default(0:10012):0.0.0.0/0: SELECTED, re 0x5566b0d01200
2020/08/26 17:55:53.910785 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): no fib nhg
2020/08/26 17:55:53.910793 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): rib nhg matched, changed 'true'
2020/08/26 17:55:53.910802 ZEBRA: (0:10012):0.0.0.0/0: Redist update re 0x5566b0d01200 (pbr), old 0x0 (None)
2020/08/26 17:55:53.910812 ZEBRA: Notifying Owner: 24 about prefix 0.0.0.0/0(10012) 2 vrf: 0
2020/08/26 17:55:53.910912 PBR: route_notify_owner: [0.0.0.0/0] Route installed succeeded for table: 10012
2020/08/26 17:55:55.400516 ZEBRA: RTM_DELROUTE 0.0.0.0/0 vrf default(0) table_id: 10012 metric: 20 Admin Distance: 0
2020/08/26 17:55:55.400527 ZEBRA: rib_delete: (0:10012):0.0.0.0/0: rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr) was deleted from kernel, adding
We were receiving a notification from the kernel that the route was
deleted and deciding that we needed to reinstall it. At that point in
time, when the request got into the dplane handlers to be passed to
the dplane pthread, the dplane decided to drop it, convert it to a
success, and not do anything.
This code change removes the conversion from this failure to success and
notifies the upper level about it. After this change the default route
to table 10012 is now properly marked as rejected:
root@mlx-2700-07:mgmt:/var/log/frr# vtysh -c "show ip route table 10012"
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF default table 10012:
F>r 0.0.0.0/0 [200/0] via 172.168.1.164, isp2-uplink (vrf PUBLIC), weight 1, 00:24:48
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we are not using nexthop groups, there is no need to
test whether or not they are installed correctly.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The fuzzing code that is in the master branch is outdated and unused,
so it is worth removing to improve the readability of the code.
All the code related to the fuzzing is in the `fuzz` branch.
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
in order to create the appropriate policy route, the family attribute
is stored in the ipset and iptable zapi contexts. This commit also
adds the flow label attribute to iptables, for further usage.
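Hypothetical sketch of the extended contexts (the struct and field
names are illustrative, not the exact zapi layout):

#include <stdint.h>

struct zapi_ipset_ctx {		/* illustrative */
	uint8_t family;		/* AF_INET or AF_INET6 */
	/* ... existing ipset fields ... */
};

struct zapi_iptable_ctx {	/* illustrative */
	uint8_t family;
	uint32_t flow_label;	/* ipv6 flow label, for further usage */
	/* ... existing iptable fields ... */
};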
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When turning on `debug zebra packet detail` or `debug zebra packet
recv detail`, only display the detailed packet dump when `detail` is
added.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
There are a bunch of places where the table id is not being output
in debug messages for routing changes. Add in the table id we
are operating on. This is especially useful for the case where
pbr is working.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
all network namespaces are read so as to collect the fdb and
neighbor tables of interest for EVPN.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
this information is necessary for local MAC information, because the
interface associated with the mac address is stored by its ifindex,
and the ifindex may not be enough to get to the right interface when
multiple network namespaces are in use.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
when working with the vrf netns backend, two bridge interfaces may
have the same interface index, but not the same namespace. Because in
vrf netns backend mode a bridge slave always belongs to the same
network namespace as its bridge, a check of the namespace id against
the ns id of the bridge interface permits resolving the interface
pointer correctly. The problem could occur if the same index was
found on two bridge interfaces in two different namespaces.
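A sketch of the disambiguated lookup, assuming per-namespace helpers
along these lines exist (the signatures are assumptions):

struct zebra_ns;	/* per-namespace context */
struct interface;

/* assumed per-namespace helpers */
struct zebra_ns *zebra_ns_lookup(int ns_id);
struct interface *if_lookup_by_index_per_ns(struct zebra_ns *zns,
					    unsigned int ifindex);

/* resolve the bridge ifindex inside the slave's own namespace, so two
 * bridges sharing an index in different netns cannot be confused */
static struct interface *bridge_if_resolve(int ns_id,
					   unsigned int bridge_ifindex)
{
	return if_lookup_by_index_per_ns(zebra_ns_lookup(ns_id),
					 bridge_ifindex);
}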
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
when receiving a netlink API message for an interface in a namespace,
the interface may come with a LINK_NSID value, which means that the
interface has its link in another namespace. Unfortunately, the
link_nsid value is local to that namespace, and there is a need to
know its associated nsid value from the default namespace point of
view. The information collected previously on each namespace can then
be compared with that value to check whether the link belongs to the
default namespace or not.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
to be able to retrieve the network namespace identifier for each
namespace, the ns id is stored in each ns context. For the default
namespace, the netns id is the same as that value.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
as a reminder, the netns identifiers are local to a namespace. That is
to say, for instance, a vrf <vrfx> will have one netns id value in one
netns, and another netns id value in another netns.
There is a need for the zebra daemon to collect some cross
information, like the LINK_NETNSID information from interfaces having
their link layer in another network namespace. For that, it is needed
to have a global overview instead of a relative overview per
namespace.
The first brick of this change is an API that sticks to the netlink
API and uses NETNSA_TARGET_NSID: from a given vrf vrfX and a newly
created vrf vrfY, the API returns the value of the nsid of vrfX as
seen from inside the new vrf vrfY.
The brick also gets the ns id value of the default namespace in each
other namespace. An additional value in ns.h is offered, which permits
retrieving the default namespace context.
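A minimal sketch of that brick (raw netlink, reply parsing and error
handling elided): RTM_GETNSID with NETNSA_TARGET_NSID, available on
recent kernels, asks for the id of the netns behind peer_fd as seen
from the netns whose local id is target_nsid.

#include <linux/net_namespace.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

static void put_attr32(struct nlmsghdr *n, uint16_t type, uint32_t data)
{
	struct rtattr *rta =
		(struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(sizeof(data));
	memcpy(RTA_DATA(rta), &data, sizeof(data));
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + rta->rta_len;
}

static int request_relative_nsid(int rtnl_sock, int peer_fd,
				 int32_t target_nsid)
{
	struct {
		struct nlmsghdr n;
		struct rtgenmsg g;
		char buf[64];
	} req;

	memset(&req, 0, sizeof(req));
	req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg));
	req.n.nlmsg_type = RTM_GETNSID;
	req.n.nlmsg_flags = NLM_F_REQUEST;

	put_attr32(&req.n, NETNSA_FD, (uint32_t)peer_fd);
	put_attr32(&req.n, NETNSA_TARGET_NSID, (uint32_t)target_nsid);

	/* the kernel answers with RTM_NEWNSID carrying NETNSA_NSID */
	return send(rtnl_sock, &req, req.n.nlmsg_len, 0) < 0 ? -1 : 0;
}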
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
an incoming bridge index has been found that is linked with a vxlan
interface, and the search for that bridge interface is done. In
vrf-lite, the search is done across the default namespace, because the
bridge and the vxlan may not be in the same vrf. But this behaviour is
wrong when using the vrf netns backend, as the bridge and the vxlan
have to be in the same vrf (hence in the same network namespace). To
comply with that, use the network namespace of the vxlan interface.
That way, the appropriate nsid is passed as a parameter; consequently,
the search is correct, and the mac address passed to BGP will be ok
too.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
other network namespaces are parsed, because a bridge interface can be
bridged with vxlan interfaces whose link is in the default vrf that
hosts the l2vpn.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
With vrf-lite mechanisms, it is possible to create layer 3 vnis by
creating a bridge interface in the default vrf, creating a vxlan
interface attached to that bridge interface, then moving the vxlan
interface to the wished vrf.
With the vrf-netns mechanism, it is slightly different, since bridged
interfaces can not be separated into different network namespaces. To
make it work, the setup consists in (illustrated below):
- creating a vxlan interface in the default vrf.
- moving the vxlan interface to the wished vrf (another netns).
- creating a bridge interface in the wished vrf.
- attaching the vxlan interface to that bridge interface.
From that point, if BGP is enabled to advertise vnis in the default
vrf, then vxlan interfaces are discovered appropriately in other vrfs,
provided that the link interface still resides in the vrf where the
l2vpn is advertised.
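As a rough illustration of those steps (the interface and netns names
are made up, and vrf-red is assumed to be a netns-backed vrf):

ip link add vxlan101 type vxlan id 101 dstport 4789 local 10.1.1.1
ip link set vxlan101 netns vrf-red    # link stays in the default netns
ip netns exec vrf-red ip link add br101 type bridge
ip netns exec vrf-red ip link set vxlan101 master br101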
to import ipv4 entries from a separate vrf into the l2vpn, configuring
the vni in the dedicated vrf plus advertising the ipv4 entries in the
bgp vrf will import the entries into the bgp l2vpn.
the modification consists in parsing the vxlan interfaces in all
network namespaces, keeping those whose link resides in the same
network namespace as the bgp core instance where bgp l2vpn is enabled.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
We can make the Linux kernel send an ARP/NDP request by adding
a neighbour with the 'NUD_INCOMPLETE' state and the 'NTF_USE' flag.
This commit adds a new dataplane operation as well as a new zapi
message to allow other daemons to send ARP/NDP requests.
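A sketch of the kernel-side trigger (raw netlink, IPv4 case, error
handling elided; the function name is made up):

#include <linux/neighbour.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

static int neigh_probe(int rtnl_sock, int ifindex, struct in_addr *ip)
{
	struct {
		struct nlmsghdr n;
		struct ndmsg ndm;
		char buf[64];
	} req;
	struct rtattr *rta;

	memset(&req, 0, sizeof(req));
	req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ndmsg));
	req.n.nlmsg_type = RTM_NEWNEIGH;
	req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_REPLACE;
	req.ndm.ndm_family = AF_INET;
	req.ndm.ndm_ifindex = ifindex;
	req.ndm.ndm_state = NUD_INCOMPLETE;	/* kick off resolution */
	req.ndm.ndm_flags = NTF_USE;		/* kernel sends the probe */

	/* NDA_DST: the address to resolve */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.n.nlmsg_len));
	rta->rta_type = NDA_DST;
	rta->rta_len = RTA_LENGTH(sizeof(*ip));
	memcpy(RTA_DATA(rta), ip, sizeof(*ip));
	req.n.nlmsg_len = NLMSG_ALIGN(req.n.nlmsg_len) + rta->rta_len;

	return send(rtnl_sock, &req, req.n.nlmsg_len, 0) < 0 ? -1 : 0;
}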
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
Reverting the probing of neigh entries. There is a timing window where
a probe and a remote macip add request come in at the same time,
resulting in the neigh remaining in local state even though it should
be remote.
In the mobility case, the host moves to a remote VTEP; first a
MAC-only type-2 route is received, which triggers a PROBE of the
neighs associated with the MAC. The PROBE request can go via a network
port to the remote VTEP, because the PROBE picks up a local neigh
whose MAC entry's outgoing port is the remote VTEP tunnel port.
The PROBE reply and the MAC-IP (containing the IP) arrive at almost
the same time at the DUT.
The DUT first processes the remote macip and installs the neigh as
remote, then receives the neigh as REACHABLE, which marks the neigh as
LOCAL again.
FRR has a BPF filter that prevents it from receiving its own netlink
requests; otherwise frr's request to program the neigh as remote could
move the neigh from local to remote. Ordering can not be guaranteed,
though: the REACHABLE (the PROBE's response) can come at any time and
move the neigh back to LOCAL.
This revert does not suffice for converging LOCAL inactive neighs and
removing them from the DB. The mobility draft suggests PROBE-ing the
local neigh when the MAC moves to remote, but that is not working with
the current framework.
Ticket:CM-22864
This reverts commit 44bc8ae550
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>