The entirety of the import checking no longer needs to be
in zebra, since no one is calling it. Remove the code.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
These are no longer really needed. The client just needs
to call nexthop resolution instead.
So let's remove the zapi types.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
At startup there exists a window of time where we might not know
a particular vrf's router id. When zebra gets a request for
it, let's not just blindly send whatever we have. Let's be
a bit smarter and only respond once we actually have one.
The upper level protocol can wait until one is available.
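A minimal sketch of the intended check, using purely illustrative
names (zebra's actual handler and per-vrf structures differ):

/* Illustrative sketch only: hypothetical handler showing the intended
 * "only answer when we actually have a router-id" behaviour. */
#include <stdbool.h>
#include <netinet/in.h>

struct vrf_state {              /* stand-in for zebra's per-VRF data */
	struct in_addr router_id;
};

static bool router_id_is_set(const struct vrf_state *vrf)
{
	/* an all-zero address means we have not learned one yet */
	return vrf->router_id.s_addr != INADDR_ANY;
}

static void handle_router_id_request(const struct vrf_state *vrf)
{
	if (!router_id_is_set(vrf))
		return;  /* stay silent; the client waits for the update */

	/* send_router_id_update(vrf); -- hypothetical reply path */
}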
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Description:
This change fixes the following issues related to vrf route leaking:
Routes with special nexthops, i.e. blackhole/sink routes, are not
programmed into the FIB when imported; the corresponding nexthop is
set as 'inactive' and the nexthop interface as 'unknown'.
While importing/leaking routes between VRFs, in the case of a special
nexthop (ipv4/ipv6), once bgp announces the route(s) to zebra, the
nexthop type is incorrectly set as
NEXTHOP_TYPE_IPV6_IFINDEX/NEXTHOP_TYPE_IFINDEX,
i.e. directly connected, even though we are not able to resolve through
an interface. This leads to nexthop_active_check marking the nexthop
!NEXTHOP_FLAG_ACTIVE. Unable to find any active nexthop(s), zebra does
not program the route into the FIB.
Whenever BGP leaks routes, set the correct nexthop type, so that the
route gets resolved and correctly programmed into the FIB in the
importing vrf.
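A minimal sketch of the idea with simplified stand-in types (FRR's
real nexthop enum and the bgp announce path differ in detail): when
the leaked route's nexthop is a blackhole, keep the blackhole type
instead of defaulting to a directly connected IFINDEX type.

#include <stdbool.h>

/* Simplified stand-ins for the nexthop types, for illustration only. */
enum sketch_nh_type {
	SKETCH_NH_IFINDEX,        /* directly connected */
	SKETCH_NH_IPV4_IFINDEX,
	SKETCH_NH_IPV6_IFINDEX,
	SKETCH_NH_BLACKHOLE,      /* sink/blackhole */
};

struct sketch_leaked_nexthop {
	enum sketch_nh_type type;
	bool is_blackhole;        /* taken from the source route's nexthop */
};

/* When announcing the leaked route to zebra, pick the nexthop type from
 * what the source route actually carried rather than assuming IFINDEX. */
static void set_leaked_nexthop_type(struct sketch_leaked_nexthop *nh,
				    bool is_v6)
{
	if (nh->is_blackhole)
		nh->type = SKETCH_NH_BLACKHOLE;
	else
		nh->type = is_v6 ? SKETCH_NH_IPV6_IFINDEX
				 : SKETCH_NH_IPV4_IFINDEX;
}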
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Iqra Siddiqui <imujeebsiddi@vmware.com>
As NHRP expects notifications for neighbor entries on GRE
interfaces, when a new neighbor notification is encountered, the
exact neighbor state flag is looked up. Previously, the flag passed
to the upper layer was forced to the NDM state REACHABLE,
as can be seen in the trace below:
2021/08/25 10:58:39 NHRP: [QQ0NK-1H449] Netlink: new-neigh 102.1.1.1 dev gre1 lladdr 10.125.0.2 nud 0x2 cache used 1 type 5
When the real value is passed, NHRP can receive another value, such as STALE:
2021/08/25 11:28:44 NHRP: [QQ0NK-1H449] Netlink: new-neigh 102.1.1.1 dev gre1 lladdr 10.125.0.2 nud 0x4 cache used 0 type 5
This flag is important for NHRP, as it permits monitoring the link
layer of NHRP entries.
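A minimal sketch of the change, assuming a simplified event structure
(only the NUD_* constants and struct ndmsg come from
<linux/neighbour.h>; everything else is a stand-in):

#include <stdint.h>
#include <linux/neighbour.h>

struct sketch_neigh_event {     /* stand-in for the event handed to NHRP */
	uint16_t ndm_state;     /* e.g. NUD_REACHABLE (0x2), NUD_STALE (0x4) */
};

static void fill_neigh_event(struct sketch_neigh_event *ev,
			     const struct ndmsg *ndm)
{
	/* before: ev->ndm_state = NUD_REACHABLE;  (always "reachable") */
	ev->ndm_state = ndm->ndm_state; /* after: report the real state */
}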
Fixes: d603c0774e ("nhrp, zebra, lib: enforce usage of zapi_neigh_ip structure")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
For some reason commit #ef524230a6baa decided
to remove the enums and switch to uint16_t, which
is not the right thing to do. Put it back.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When calling rib_add_multipath_nhe, ensure that we have
well-aligned return codes that mean something, so that
interested parties can properly handle the situation.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When receiving a route via zapi, if the route is rejected,
there exists a code path where we would not free the
corresponding re that was created.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Move remote VTEP updates from immediate, inline processing
in their ZAPI message handlers to the main workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
In the reachability code we automatically pass back the fully
resolved nexthops. Modify the ZEBRA_IPV4_NEXTHOP_LOOKUP_MRIB code
to do the exact same thing so that the zclient_lookup_nexthop
code does not need to recursively look for the data that
zebra already has.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
https://github.com/FRRouting/frr/pull/5865#discussion_r597670225
As this comment says, ZEBRA_FLAG_XXX should not have been used
to communicate SRv6 route information. A simple nexthop flag would
have been sufficient for SRv6 information, and I fixed the whole
thing that way.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
With this patch, a zclient can install seg6 routes by setting
the "nh_seg6_segs" property on struct nexthop
and setting ZEBRA_FLAG_SEG6_ROUTE in zapi_route's flags.
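A rough sketch of that usage with simplified stand-in structures (the
field and flag names follow this commit message; the flag value and
the surrounding types are made up for illustration):

#include <string.h>
#include <netinet/in.h>

#define SKETCH_ZEBRA_FLAG_SEG6_ROUTE 0x0400   /* value is illustrative */

struct sketch_nexthop {
	struct in6_addr nh_seg6_segs;   /* SRv6 SID to encapsulate with */
};

struct sketch_zapi_route {
	unsigned int flags;
	struct sketch_nexthop nexthop;
};

static void build_seg6_route(struct sketch_zapi_route *api,
			     const struct in6_addr *sid)
{
	memset(api, 0, sizeof(*api));
	api->flags |= SKETCH_ZEBRA_FLAG_SEG6_ROUTE;  /* mark as seg6 route */
	api->nexthop.nh_seg6_segs = *sid;            /* segment to attach */
}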
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit is part of the #5853 work that adds new ZAPI messages to
configure SRv6 locators, which manage the chunk prefixes used as
SRv6 SID IPv6 addresses for each routing protocol daemon.
NEW-ZAPIs:
* ZEBRA_SRV6_LOCATOR_ADD
* ZEBRA_SRV6_LOCATOR_DELETE
* ZEBRA_SRV6_MANAGER_CONNECT
* ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK
* ZEBRA_SRV6_MANAGER_RELEASE_LOCATOR_CHUNK
A zclient can connect to zebra's srv6-manager with the
ZEBRA_SRV6_MANAGER_CONNECT api, like the label-manager.
Then the zclient uses ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK to
allocate a dedicated locator chunk for its routing protocol.
Zebra only handles prefix reservation and distributes
ownership of the locator chunks to zclients.
Then the zclient installs the SRv6 function with the
ZEBRA_ROUTE_ADD api, using the nh_seg6local_* fields.
This feature is already implemented by another PR (#7680).
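A sketch of the client-side flow; the helper functions below are
hypothetical stubs, only the ZAPI message names come from this commit:

#include <stdbool.h>
#include <stdio.h>

static bool srv6_manager_connect(void)
{
	/* would send ZEBRA_SRV6_MANAGER_CONNECT to zebra */
	return true;
}

static bool srv6_manager_get_locator_chunk(const char *locator_name)
{
	/* would send ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK for locator_name;
	 * zebra answers asynchronously with ZEBRA_SRV6_LOCATOR_ADD */
	printf("requesting a chunk from locator %s\n", locator_name);
	return true;
}

static void install_seg6local_function(void)
{
	/* would send ZEBRA_ROUTE_ADD with nh_seg6local_* fields filled in */
}

int main(void)
{
	if (!srv6_manager_connect())     /* 1. register, like label-manager */
		return 1;
	if (!srv6_manager_get_locator_chunk("loc1"))  /* 2. request a chunk */
		return 1;
	install_seg6local_function();    /* 3. install SRv6 functions */
	return 0;
}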
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
With this patch, a zclient can install seg6local routes by setting
the nh_seg6local_{action,ctx} properties on struct nexthop
and setting ZEBRA_FLAG_SEG6LOCAL_ROUTE in zapi_route's flags.
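A rough sketch mirroring the seg6 example above, again with simplified
stand-in types (the action enum and flag value here are illustrative;
only the field/flag names come from this commit):

#include <string.h>

#define SKETCH_ZEBRA_FLAG_SEG6LOCAL_ROUTE 0x0800   /* value is illustrative */

enum sketch_seg6local_action {
	SKETCH_SEG6_LOCAL_ACTION_END,       /* plain SRv6 endpoint */
	SKETCH_SEG6_LOCAL_ACTION_END_DT4,   /* decap + IPv4 table lookup */
};

struct sketch_seg6local_ctx {
	unsigned int table;                 /* e.g. VRF table for END.DT4 */
};

struct sketch_nexthop {
	enum sketch_seg6local_action nh_seg6local_action;
	struct sketch_seg6local_ctx nh_seg6local_ctx;
};

struct sketch_zapi_route {
	unsigned int flags;
	struct sketch_nexthop nexthop;
};

static void build_end_dt4_route(struct sketch_zapi_route *api,
				unsigned int vrf_table)
{
	memset(api, 0, sizeof(*api));
	api->flags |= SKETCH_ZEBRA_FLAG_SEG6LOCAL_ROUTE;
	api->nexthop.nh_seg6local_action = SKETCH_SEG6_LOCAL_ACTION_END_DT4;
	api->nexthop.nh_seg6local_ctx.table = vrf_table;
}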
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
- GRE keys are collected and stored locally.
- when a GRE source set is requested and the configured link interface
is different, the collected GRE information is pushed in the query,
namely the source IP or the GRE keys if present.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Preserve the MTU upon interface flapping and tunnel source change.
Signed-off-by: Reuben Dowle <reuben.dowle@4rf.com>
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This action is initiated by nhrp and had been stubbed when
the code moved to zebra. Now, a netlink request is forged to set
the link interface of a GRE interface if that GRE interface
does not already have a link interface.
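A sketch of what such a request looks like at the netlink level, not
zebra's actual code: an RTM_NEWLINK message carrying
IFLA_LINKINFO / IFLA_INFO_DATA / IFLA_GRE_LINK for the GRE interface
(the "gre" kind is assumed here; ip6gre/gretap would differ):

#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>
#include <linux/if_tunnel.h>

struct sketch_req {
	struct nlmsghdr n;
	struct ifinfomsg ifm;
	char buf[256];
};

/* Append one attribute to the message (classic rtattr helper). */
static void addattr(struct nlmsghdr *n, unsigned short type,
		    const void *data, int len)
{
	struct rtattr *rta =
		(struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(len);
	if (len)
		memcpy(RTA_DATA(rta), data, len);
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

static void build_gre_set_link(struct sketch_req *req, int gre_ifindex,
			       unsigned int link_ifindex)
{
	struct rtattr *linkinfo, *infodata;

	memset(req, 0, sizeof(*req));
	req->n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
	req->n.nlmsg_type = RTM_NEWLINK;
	req->n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req->ifm.ifi_family = AF_UNSPEC;
	req->ifm.ifi_index = gre_ifindex;       /* the GRE interface itself */

	/* IFLA_LINKINFO { IFLA_INFO_KIND "gre",
	 *                 IFLA_INFO_DATA { IFLA_GRE_LINK <ifindex> } } */
	linkinfo = (struct rtattr *)((char *)&req->n +
				     NLMSG_ALIGN(req->n.nlmsg_len));
	addattr(&req->n, IFLA_LINKINFO, NULL, 0);
	addattr(&req->n, IFLA_INFO_KIND, "gre", 4);
	infodata = (struct rtattr *)((char *)&req->n +
				     NLMSG_ALIGN(req->n.nlmsg_len));
	addattr(&req->n, IFLA_INFO_DATA, NULL, 0);
	addattr(&req->n, IFLA_GRE_LINK, &link_ifindex, sizeof(link_ifindex));

	/* close the nested attributes (iproute2-style nest_end) */
	infodata->rta_len = (unsigned short)((char *)&req->n +
			    NLMSG_ALIGN(req->n.nlmsg_len) - (char *)infodata);
	linkinfo->rta_len = (unsigned short)((char *)&req->n +
			    NLMSG_ALIGN(req->n.nlmsg_len) - (char *)linkinfo);
}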
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Use the main zebra workqueue for daemon-owned NHGs, in addition
to processing kernel-owned NHGs. The zapi message processing
creates a temporary object that's enqueued to the workqueue,
then processed/installed as part of the workqueue processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The zapi_nbr structure is renamed to zapi_neigh_ip.
Initially used to set a neighbor ip entry for gre interfaces, this
structure is also used to pass events from the zebra layer to the nhrp
layer. The ndm state has been added, as it is needed on both sides.
The zebra dplane layer is slightly modified.
Also, to clarify what ZEBRA_NEIGH_ADD/DEL means, a rename is done:
it is now called ZEBRA_NEIGH_IP_ADD/DEL, which signifies that this
zapi interface permits link operations by associating ip
addresses with link addresses.
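A rough, illustrative sketch of what the renamed structure carries;
the real definition in FRR's zclient header may differ in field names
and types:

#include <stdint.h>
#include <netinet/in.h>

struct sketch_ipaddr {                 /* stand-in for FRR's struct ipaddr */
	int family;                    /* AF_INET or AF_INET6 */
	union {
		struct in_addr v4;
		struct in6_addr v6;
	} addr;
};

struct sketch_zapi_neigh_ip {
	struct sketch_ipaddr ip_in;    /* the ip address of the entry */
	struct sketch_ipaddr ip_out;   /* the link address (e.g. NBMA ip) */
	int32_t index;                 /* interface index (e.g. the gre if) */
	uint32_t ndm_state;            /* NUD_* state, needed on both sides */
};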
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Instead of directly configuring the neighbor table after reading from
the zapi interface, a zebra dplane context is prepared to host the
interface and the family for which the neighbor table is updated. Some
other fields are hosted as well: app_probes, ucast_probes, and
mcast_probes. More information on those fields can be found in the
ip-ntable configuration.
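An illustrative sketch of the data such a context would carry; only
the *_probes field names come from this commit, the rest is a
simplified stand-in:

#include <stdint.h>

struct sketch_dplane_neigh_table {
	int32_t ifindex;          /* interface whose table is updated */
	uint8_t family;           /* AF_INET or AF_INET6 */
	uint32_t app_probes;      /* cf. ip ntable: app_probes */
	uint32_t ucast_probes;    /* cf. ip ntable: ucast_probes */
	uint32_t mcast_probes;    /* cf. ip ntable: mcast_probes */
};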
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
EVPN neighbor operations were already done in the zebra dataplane
framework. Now that NHRP is able to use zebra to perform neighbor IP
operations (by programming link IP operations), handle this operation
under the dataplane framework:
- assign two new operations NEIGH_IP_INSTALL and NEIGH_IP_DELETE; these
are reserved for GRE-like interfaces:
example: ip neigh add A.B.C.D lladdr E.F.G.H
- use 'struct ipaddr' to store and encode the link ip address
- reuse dplane_neigh_info, and create a union with the mac address (see
the sketch after this list)
- reuse the protocol type and use it for neighbor operations; this
permits storing the daemon originating this neighbor operation.
A new route type is created: ZEBRA_ROUTE_NEIGH.
- the netlink level functions will handle a pointer and a type; the
type indicates the family of the pointer: AF_INET or AF_INET6 if the
link type is an ip address, a mac address otherwise.
- to keep backward compatibility with old queries, as no extension was
done, an option NEIGH_NO_EXTENSION has been put in place
- also, 2 new state flags are used: NUD_PERMANENT and NUD_FAILED.
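An illustrative sketch of the link-layer union described above; the
real dplane_neigh_info in zebra differs in naming and surrounding
fields:

#include <stdint.h>
#include <netinet/in.h>

struct sketch_ethaddr {
	uint8_t octet[6];
};

struct sketch_ipaddr {
	int family;                     /* AF_INET or AF_INET6 */
	union {
		struct in_addr v4;
		struct in6_addr v6;
	} addr;
};

struct sketch_dplane_neigh_info {
	struct sketch_ipaddr ip_addr;      /* the neighbor's ip address */
	union {
		struct sketch_ethaddr mac; /* classic ARP/ND entry */
		struct sketch_ipaddr ip;   /* GRE-like: ip link address */
	} link;
	int link_family;                   /* AF_INET/AF_INET6, 0 for mac */
	uint32_t state;                    /* e.g. NUD_PERMANENT, NUD_FAILED */
};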
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
A neighbor table api is added in zebra, and a netlink api is created
for it. The handler is called from the api defined in the previous
commit.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
A zebra api is extended to offer the ability to add or remove a
neighbor entry from a daemon. This extension also makes it possible to
add a neigh entry not only between IPs and macs, but also between IPs
and NBMA IPs. The API supports configuring ipv6/ipv4 entries with an
ipv4/ipv6 lladdr.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Zebra implements a zebra api for configuring link layer information.
That can be an arp entry (for ipv4) or an ipv6 neighbor discovery
entry. It can also be an ipv4/ipv6 entry associated with an underlay
ipv4 address, as used on gre point-to-multipoint interfaces.
This api will also be used for monitoring. A hash list is instantiated
in zebra (this is the vrf bitmap). Each client interested in those
entries in a specific vrf will listen for the following messages:
entries added, entries removed, or who-has messages.
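An illustrative sketch of the per-vrf registration idea: each client
keeps a bitmap of the vrfs it is interested in, and zebra only notifies
the clients whose bit is set for the event's vrf. All names here are
stand-ins, not zebra's actual structures:

#include <stdbool.h>
#include <stdint.h>

#define SKETCH_MAX_VRF 1024   /* callers keep vrf_id below this bound */

struct sketch_client {
	uint8_t neigh_vrf_bitmap[SKETCH_MAX_VRF / 8];
};

static void client_register_vrf(struct sketch_client *c, uint32_t vrf_id)
{
	c->neigh_vrf_bitmap[vrf_id / 8] |= (uint8_t)(1u << (vrf_id % 8));
}

static bool client_wants_vrf(const struct sketch_client *c, uint32_t vrf_id)
{
	return c->neigh_vrf_bitmap[vrf_id / 8] & (1u << (vrf_id % 8));
}

static void send_neigh_msg(struct sketch_client *c, uint32_t vrf_id)
{
	(void)c;
	(void)vrf_id;
	/* would encode and send the added/removed/who-has notification */
}

/* On a neighbor event in vrf_id, notify only the interested clients. */
static void notify_neigh_event(struct sketch_client *clients, int nclients,
			       uint32_t vrf_id)
{
	for (int i = 0; i < nclients; i++)
		if (client_wants_vrf(&clients[i], vrf_id))
			send_neigh_msg(&clients[i], vrf_id);
}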
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
EVPN nexthops are installed as remote neighs by zebra. This was earlier
done only via VRF IPvX uni routes imported from EVPN routes.
With EVPN-MH these VRF routes now reference a L3NHG which is set up
based on the EAD and doesn't include the RMAC. To work around that,
BGP now consolidates and maintains EVPN nexthops, which are then sent
to zebra.
zebra sets up these nexthops as L3-VNI nh entries using a dummy type-1
route as reference.
Ticket: CM-31398
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
This one also needed a bit of shuffling around, but MTYPE_RE is the only
one left used across file boundaries now.
Signed-off-by: David Lamparter <equinox@diac24.net>
Like what was done for the iptable contexts, a zebra dplane context is
created for each ipset/ipset entry event. The zebra_dplane_ctx job is
then enqueued and processed by a separate thread. As was done for the
zebra_pbr_iptable context, the ipset and ipset entry contexts are
encapsulated into a union of structures in zebra_dplane_ctx.
One specificity is that the stored ipset_entry structure had a
backpointer to the ipset structure, which is necessary to get some
complementary information before calling the hook. The proposal is to
use an ipset_entry_info structure next to the ipset_entry in the
zebra_dplane context. That information is used for ipset_entry
processing; the ipset name and the ipset type are the only fields
necessary.
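An illustrative sketch of that layout; the real zebra_dplane_ctx
internals differ, this only shows the union plus the ipset_entry_info
side structure:

#define SKETCH_IPSET_NAME_LEN 32

struct sketch_pbr_ipset {
	char name[SKETCH_IPSET_NAME_LEN];
	unsigned int type;               /* e.g. hash:ip, hash:net, ... */
};

struct sketch_pbr_ipset_entry {
	unsigned int unique;             /* entry identifier */
	/* match fields (addresses, ports, ...) omitted */
};

/* Replaces the old backpointer to the parent ipset: only the fields the
 * hook actually needs are carried next to the entry. */
struct sketch_ipset_entry_info {
	char ipset_name[SKETCH_IPSET_NAME_LEN];
	unsigned int ipset_type;
};

struct sketch_dplane_ctx_pbr {
	union {
		struct sketch_pbr_ipset ipset;
		struct {
			struct sketch_pbr_ipset_entry entry;
			struct sketch_ipset_entry_info info;
		} ipset_entry;
	} u;
};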
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>