On startup we create a thread timer event to do a rib sweep
of the system. On shutdown we never stopped this timer and
as such we have a situation where a thread event could be run
on shutdown after the data for it has been freed. Here is the
crash I am seeing:
(gdb) bt
(gdb)
Save the thread data in zebra_router and stop the thread so we don't
accidentally do work on shutdown we don't mean to. In this case
it happened in our topotests with some severe system load.
Essentially we happened to kill the zebra daemon just as the
graceful_restart timer popped here.
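As a minimal sketch of the pattern (the member and callback names below are
illustrative, not the exact FRR symbols), the timer thread pointer is kept by
zebra_router so it can be cancelled at shutdown:
```
#include "thread.h"

/* Illustrative storage for the sweep timer; the real member lives in
 * struct zebra_router. */
static struct thread *t_rib_sweep;

static int rib_sweep_timer(struct thread *t)
{
	/* ... sweep the rib ... */
	return 0;
}

static void startup_sketch(struct thread_master *master, long gr_time)
{
	/* Passing &t_rib_sweep lets the thread lib track the pointer. */
	thread_add_timer(master, rib_sweep_timer, NULL, gr_time, &t_rib_sweep);
}

static void shutdown_sketch(void)
{
	/* Cancel the pending sweep so it cannot fire after its data is freed. */
	THREAD_OFF(t_rib_sweep);
}
```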
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In some cases, zebra may install a nexthop-group id that is
different from the id of the nhe struct attached to a
route-entry. This happens for a singleton recursive nexthop,
for example, where a route is installed with the resolving
nexthop's id.
The installed value is the most useful value - that corresponds
to information in the kernel on linux/netlink platforms that
support nhgs. Display both values if they differ in ascii
output, and include both values in the json form.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Since f60a1188 we store a pointer to the VRF in the interface structure.
There's no need anymore to store a separate vrf_id field.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
During zebra shutdown, we clear out the LSP workqueue. The LSPs
will be uninstalled and freed during the shutdown process, so
just ignore any LSPs that happen to be on the workqueue.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
At some scale we eventually run out of room displaying v4/v6 route
totals for `show zebra client summ`:
janelle# show zebra client summ
Name Connect Time Last Read Last Write IPv4 Routes IPv6 Routes
--------------------------------------------------------------------------------
bgp 04w0d18h 00:00:19 00:01:2411729127/4052681 2037786/903094
This total overran the space in just a little over a week of uptime.
Expand to have a bit more room.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The dplane_ctx_get_pbr_ipset_entry function only
failed when the caller did not pass in a valid
usable pointer. Change the code to assert when
the pointer is not passed in, and remove the
bool return.
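A simplified before/after sketch of the change (the function and struct names
below are illustrative stand-ins, not the exact FRR code):
```
#include <assert.h>
#include <stdbool.h>
/* struct zebra_dplane_ctx and struct zebra_pbr_ipset_entry come from
 * zebra's dplane/pbr headers. */

/* Before: bool return, every caller had to check it. */
static bool ipset_entry_get_old(const struct zebra_dplane_ctx *ctx,
				struct zebra_pbr_ipset_entry *entry)
{
	if (!entry)
		return false;

	(void)ctx;	/* ... copy the entry out of ctx ... */
	return true;
}

/* After: a missing output pointer is a programming error, not a runtime
 * condition, so assert and drop the bool return. */
static void ipset_entry_get_new(const struct zebra_dplane_ctx *ctx,
				struct zebra_pbr_ipset_entry *entry)
{
	assert(entry);

	(void)ctx;	/* ... copy the entry out of ctx ... */
}
```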
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The only time this function ever failed was when
the developer did not pass in a usable pointer
to place the data in. Change it to an assert
to signal to the developer that this is required,
and then remove all the failure checks from
callers.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The function call dplane_ctx_get_pbr_ipset only
returns false when the calling function fails to
pass in a valid ipset pointer. This should
be an assert, since it's a programming error
as opposed to an actual run-time issue.
Change the function to assert on the pointer
and drop the bool success/fail return.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We should always treat the VRF interface as a loopback. Currently, this
is not the case, because in some old pre-VRF code we use if_is_loopback
instead of if_is_loopback_or_vrf. To avoid any future problems, the
proposal is to rename if_is_loopback_or_vrf to if_is_loopback and use it
everywhere. if_is_loopback is renamed to if_is_loopback_exact in case
it's ever needed, but currently it's not used anywhere.
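A simplified sketch of the relationship between the renamed helpers (the real
implementations live in lib/if.c; the sketch names and the flag check are
illustrative):
```
#include <stdbool.h>
#include <net/if.h>	/* IFF_LOOPBACK */
#include "lib/if.h"	/* struct interface, if_is_vrf() */

/* if_is_loopback_exact(): only a true loopback device. */
static bool loopback_exact_sketch(const struct interface *ifp)
{
	return !!(ifp->flags & IFF_LOOPBACK);
}

/* if_is_loopback(): a VRF interface is treated as a loopback as well. */
static bool loopback_sketch(const struct interface *ifp)
{
	return loopback_exact_sketch(ifp) || if_is_vrf(ifp);
}
```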
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
During shutdown, when table_manager_disable is called for the default
VRF, its vrf_id is already set to VRF_UNKNOWN, so the expression is true
and the table manager memory is not freed. Change the expression to
compare the VRF name instead of the id. The check in table_manager_enable
is changed for consistency.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Free the LSP workqueue later during shutdown, so that zebra
has enough time to clean up and uninstall any LSPs.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
42d4b30e introduced per-VRF table manager.
Table manager is allocated when the VRF is created, but it is freed when
the VRF is disabled. When this VRF is re-enabled, zebra ends up with
table manager being NULL pointer and it crashes on any dereference.
Table manager should be freed when the VRF is deleted, not when it's
disabled.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
We don't receive interface down/delete notifications from kernel when a
netns is deleted. Therefore we have to manually replicate the necessary
actions, otherwise interfaces are kept in the system with stale pointers
to the deleted netns.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Problem:
L2-VNI SVI down followed by L2-VNI's vxlan device
deletion leads to stale entry into L3VNI's
L2-VNI list.
Solution:
When L2-VNI associated SVI is down, default vrf
is the new tenant vrf.
Remove L2-VNI from L3VNI's l2vni list as
L3VNI/VRF is no longer valid in absence of associated
SVI.
When SVI is up re-add L2-VNI into associated VRF's
L3VNI.
The above remove/add from the L3VNI's L2VNI list is
already done when the vxlan device or L2-VNI is flapped; we just
need to handle the case when the SVI is flapped.
Ticket:#2817127
Reviewed By:
Testing Done:
After deleting the SVI followed by L2-VNI deletion, the
L3VNI's L2-VNI list deletes the L2-VNI (no stale entry).
After adding back the SVI/L2-VNI, the L3VNI list re-adds the
L2-VNI and it is associated with the right tenant VRF.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
In rtadv_timer(), the zvrf's socket is always used to send RA
packets. In vrf-lite mode this is right, since the default vrf
is used to send the RA packets. But in netns mode, a socket is
used in each netns. So the issue only happens in netns mode,
because the zvrf's socket may not be in the same netns as the
interface's netns. In order to be compatible with both vrf-lite
and netns mode, the fix uses if_lookup_by_index() to check
whether interfaces can use the zvrf's socket.
Signed-off-by: LEI BAO <bali.baolei@cn.ibm.com>
Before 42d4b30e, table_manager_enable was called only once and the hook
was also registered once. After the change, the hook is registered per
each VRF that is created in the system. This is wrong.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Currently the NEXTHOP_TYPE_IPV4 and NEXTHOP_TYPE_IPV6 are
not sending up the resolved ifindex for the route. This
is causing upper level protocols that have something like
this:
route-map FOO permit 10
match interface swp13
!
router ospf
redistribute static
!
ip route 4.5.6.7/32 10.10.10.10
where 10.10.10.10 resolves to interface swp13. The route-map
will never match in this case.
Since FRR has the resolved nexthop interface, FRR might as
well send it up to be selected on by the upper level protocol
as needed.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
It appears that without that change, there were no notifications
sent to bgp daemon, after flowspec operations have been sent to
zebra.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The ipset entry needs to know which address family it applies
to. The family is present in the original ipset structure but was
not passed as an attribute in the dataplane ipset_info structure.
Add it.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When injecting an ipset entry into the zebra dataplane context, the
ipset name is stored in a separate structure. This permits the
flowspec plugin to know which ipset the relevant ipset entry has
to be appended to.
The problem was that the zebra dataplane objects related to ipset
entries were made up of a union between the ipset structure and the
ipset info structure. This meant the two structures shared the same
memory, and when extracting the stored data, the data were incomplete.
Fix this by replacing the union with a plain struct.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When the netns is deleted, we should always clear the vrf->ns_ctxt
pointer. Currently, it is not cleared when there are interfaces in the
netns at the time of deletion.
If the netns is re-created, zebra crashes because it tries to use the
stale pointer.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
if_lookup_by_index_all_vrf doesn't work correctly with netns VRF backend
as the same index may be used in multiple netns simultaneously.
In both case where it's used, we know the VRF in which we need to lookup
for the interface.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
The kernel can return to us nested attributes for BRIDGE RTM_NEWNEIGH
attributes. Just ensure that we can parse and read them.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
With the addition of resillient hashing for nexthops, the
parsing of nexthops requires telling the decoder functions
that there may be nested attributes. This was found by
code inspection of iproute2/ipnexthop.c when trying to
understand resillient hashing as well as statistics
gathering for nexthops that are / will be in upstream
kernels in the near future.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add actual recent nexthop.h file from kernel
and fix up resulting fallout because FRR's
original nexthop.h did not match upstream
linux kernel.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When GRE information cannot be retrieved because the GRE interface
has been deleted, a GRE_UPDATE message may still be sent to NHRP. In
that case, the GRE values are reset. The tunnel destination value was
missing from this reset; add it.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
There is a bit of an impedance mismatch in the sequence of events here.
Depending on the dplane behavior, the `ROUTE_ENTRY_SELECTED` bit will be
inconsistent for rib_process_result().
With an asynchronous dataplane:
0. rib_process() is called
1. rib_install_kernel() is called, dplane action is queued
2. rib_install_kernel() returns
3. rib_process() sets the SELECTED bit appropriately, returns
4. dplane is done, triggers rib_process_result()
5. SELECTED bit is seen in "after" state
(5a. NHT code looks at the SELECTED bit, works correctly.)
With a synchronous dataplane:
0. rib_process() is called
1. rib_install_kernel() is called, dplane action is executed
2. dplane (should) trigger rib_process_result()
3. SELECTED bit is seen in "before" state
(3a. NHT code looks at the SELECTED bit, fails.)
4. rib_install_kernel() returns
5. rib_process() sets the SELECTED bit appropriately, too late.
Essentially, poking the dataplane is a sequencing point where control is
handed over to the dplane. Control may or may not return immediately.
Doing /anything/ after triggering the dataplane is a recipe for odd race
conditions.
(FWIW, I'm not sure rib_process_result() is called correctly in the
synchronous case, but that's a separate problem.)
Unfortunately, this change might have some unforeseen side effects. I
haven't dug through the code to see if anything breaks. There
/shouldn't/ be anything looking at the SELECTED bit here, but who knows.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Do not return pointer to the newly created thread from various thread_add
functions. This should prevent developers from storing a thread pointer
into some variable without letting the lib know that the pointer is
stored. When the lib doesn't know that the pointer is stored, it doesn't
prevent rescheduling and it can lead to hard to find bugs. If someone
wants to store the pointer, they should pass a double pointer as the last
argument.
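Illustrative usage under the new API, with a hypothetical callback and owning
struct:
```
#include "thread.h"

struct peer_sketch {
	struct thread *t_keepalive;	/* managed by the thread lib */
};

static int peer_keepalive(struct thread *t)
{
	struct peer_sketch *p = THREAD_ARG(t);

	(void)p;	/* ... periodic work ... */
	return 0;
}

static void peer_schedule(struct thread_master *master, struct peer_sketch *p)
{
	/* No pointer is returned; the last argument tells the lib where the
	 * caller stores it, so accidental rescheduling is prevented. */
	thread_add_timer(master, peer_keepalive, p, 60, &p->t_keepalive);
}
```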
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
rib_update() was mallocing memory then attempting to schedule
and if the schedule failed( it was already going to be run )
FRR would then free the memory. Fix this memory usage pattern
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
It allows FRR to read the interface config even when the necessary VRFs
are not yet created and interfaces are in "wrong" VRFs. Currently, such
config is rejected.
For VRF-lite backend, we don't care at all about the VRF of the inactive
interface. When the interface is created in the OS and becomes active,
we always use its actual VRF instead of the configured one. So there's
no need to reject the config.
For netns backend, we may have multiple interfaces with the same name in
different VRFs. So we care about the VRF of inactive interfaces. And we
must allow the interface to be preconfigured in a VRF even before it
is moved to the corresponding netns. From now on, we allow creating
multiple configs for the same interface name in different VRFs and
the necessary config is applied once the OS interface is moved to the
corresponding netns.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When something is used only from zebra and part of its description is
"should be called from zebra only" then it belongs to zebra, not lib.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When an ES is deleted and re-added bgpd can start sending MAC-IP sync updates
before the dataplane and zebra have set up the VLAN membership for the ES. Such
MAC entries are not installed in the dataplane till the ES-EVI is created.
Ticket: #2668488
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
In the window immediately after an ES deletion bgpd can send MAC-IP updates
using that ES. Zebra needs to ignore these updates to prevent creation
of stale entries.
Ticket: #2668488
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
This addresses deletion of ES interfaces that were not completely
configured.
Ticket: #2668488
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
When PTM sends a "cbl status" message it specifies the interface name
but not the VRF name. It is fine for VRF-lite, but doesn't work for
netns because it's possible to have multiple interfaces with the same
name. Be more restrictive in this case and return an error instead of
randomly using the interface with the specified name.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
With netns VRF backend, we may have multiple interfaces with the same
name. Currently, the function output is not deterministic in this case,
it returns the first interface that it finds in the list. Be more
explicit and tell the user that we need the VRF name.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
```
exit1-debian-9# show ip route 172.16.16.1/32
Routing entry for 172.16.16.1/32
Known via "bgp", distance 20, metric 0, best
Last update 00:00:28 ago
* 192.168.0.2, via eth1, weight 1
AS-Path : 65003
Communities : first 65001:2 65001:3
Large-Communities: 65001:1:1 65001:1:2 65001:1:3
Selection reason : First path received
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Currently, the ll_type is set only in `netlink_interface` which is
executed only during startup. If the interface is created when FRR
is already running, the type is not stored.
Fixes #1164.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When a client sends to zebra that GR mode is being turned
on, the client also passes down the time zebra should hold
onto the routes. Display this time with the output
of the `show zebra client` command as well.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When issuing the `show zebra client` command data about
Graceful Restart state is being printed 2 times.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
On startup, zebra dumps interface information from the kernel in 3
steps without a lock: step 1, get interface information; step 2, get
interface IPv4 addresses; step 3, get interface IPv6 addresses.
If any interface gets added after step 1, but before step 2/3, zebra
would get extra interface addresses in step 2/3 for an interface that
has not been added to zebra in step 1. Returning an error in the
referenced interface lookup would cause the startup interface
retrieval to be incomplete.
Signed-off-by: Yuan Yuan <yyuanam@amazon.com>
FRR should only ever use the appropriate THREAD_ON/THREAD_OFF
semantics. This is especially true for the functions we
end up calling the thread for.
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
There's a helper function to check whether the interface is loopback or
VRF - if_is_loopback_or_vrf. Let's use it whenever we need to check that.
There's no functional change in this commit.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Pass down the safi for when we need address
resolution. At this point in time we are
hard coding the safi to SAFI_UNICAST.
Future commits will take advantage of this.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
PIM is going to need to be able to send down the address it is
trying to resolve in the multicast rib. We need a way to signal
this to the end developer. Start the conversion by adding the
ability to have a safi. But only allow SAFI_UNICAST at the moment.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The entirety of the import checking no longer needs to be
in zebra as no one is calling it. Remove the code.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
These are no longer really needed. The client just needs
to call nexthop resolution instead.
So let's remove the zapi types.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There were two identical blocks of code run at init time that
requested info about AF_BRIDGE - don't see any reason to do that
twice, so remove one block.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Because the vrf backend may be based on namespaces, each vrf can
use table identifiers in the [16-(2^32-1)] range for daemons that
request them. Extend the table manager to be hosted per vrf.
That possibility is disabled when the vrf backend is vrf-lite.
In that case, all vrf contexts use the same table manager instance.
Add a configuration command to configure the desired range of
tables to use. This permits giving chunks to the bgp daemon when it
works with bgp flowspec entries and wants to use specific iptables
that do not override vrf tables.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When using bgp evpn rt5 setup, after BGP configuration has been
loaded, if the user attempts to detach and reattach the bridged
vxlan interface from the bridge, then BGP loses its BGP EVPN
contexts, and a refresh of BGP configuration is necessary to
maintain consistency between linux configuration and BGP EVPN
contexts (RIB). The following command can lead to inconsistency:
ip netns exec cust1 ip link set dev vxlan1000 nomaster
ip netns exec cust1 ip link set dev vxlan1000 master br1000
After these commands, the BGP l2vpn evpn RIB is empty, and the way to
solve this until now is to reconfigure EVPN like this:
vrf cust1
no vni 1000
vni 1000
exit-vrf
Actually, the link information is correctly handled. At the time
of the link event, the lower link status of the bridge interface is
not yet up, which prevents establishing BGP EVPN contexts. When a
bridge interface does not have any slave interface, the link status
of the bridge interface is down. That change of status comes a bit
later, and is not detected by slave interfaces, as this event is not
intercepted.
This commit intercepts the bridge link up event, and triggers
a check on slaved vxlan interfaces.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When running a bgp evpn rt5 setup, the Rmac sent in BGP updates
stands for the MAC address of the bridge interface. After the frr
configuration has been loaded, the Rmac address is not refreshed.
This issue can be easily reproduced by executing some commands:
ip netns exec cust1 ip link set dev br1000 address 2e:ab:45:aa:bb:cc
Actually, the BGP EVPN contexts are kept unchanged.
This commit fixes this by intercepting the mac address
change, and refreshing the vxlan interfaces attached to the bridge
interface that changed its MAC address.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
We should not be using a `default` case with an enumerated type.
This prevents the developer adding new cases from knowing, just by
compiling, where they need to make fixes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Move the handler for incoming interface address events
to a neutral source file - it's not netlink-specific and
shouldn't have been in a netlink file.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
Read incoming interface address change notifications in the
dplane pthread; enqueue the events to the main pthread
for processing. This is netlink-only for now - the bsd
kernel socket path remains unchanged.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
Add new apis for dplane interface address handling, based on
the existing api. The existing api is basically split in two:
the first part processes an incoming netlink message in the
dplane pthread, creating a dplane context with info about
the event. The second part runs in the main pthread and uses
the context data to update an interface or connected object.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
Add a new netlink socket for events coming in from the host OS
to the dataplane system for processing. Rename the existing
outbound dplane socket.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
Description: Currently IPv4 routes with IPv6 link local next hops are
not properly installed in FPM.
The reason is that the netlink decoding truncates the ipv6 LL address
to a 4-byte ipv4 address.
Ex : fe80:: is directly converted to ipv4 and it results in 254.128.0.0
as next hop for below routes
show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
B>* 2.1.0.0/16 [200/0] via fe80::268a:7ff:fed0:d40, Ethernet0, weight 1,
02:22:26
B>* 5.1.0.0/16 [200/0] via fe80::268a:7ff:fed0:d40, Ethernet0, weight 1,
02:22:26
B>* 10.1.0.2/32 [200/0] via fe80::268a:7ff:fed0:d40, Ethernet0, weight
1, 02:22:26
Hence this fix converts the ipv6-LL address to ipv4-LL (169.254.0.1)
address before sending it to FPM. This is in line with how these
types of routes are currently programmed into the kernel.
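A hedged sketch of that conversion (the helper name is made up; only the
169.254.0.1 substitution reflects the change described above):
```
#include <netinet/in.h>
#include <arpa/inet.h>

/* Map an IPv6 link-local nexthop to the IPv4 link-local address used when
 * encoding an IPv4 route, mirroring what is done for kernel programming. */
static struct in_addr fpm_v4_nexthop_from_v6(const struct in6_addr *nh6)
{
	struct in_addr nh4 = { .s_addr = INADDR_ANY };

	if (IN6_IS_ADDR_LINKLOCAL(nh6))
		inet_pton(AF_INET, "169.254.0.1", &nh4);

	return nh4;
}
```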
Signed-off-by: Nikhil Kelapure <nikhil.kelapure@broadcom.com>
The current implementation doesn't copy nexthop_srv6. This causes
unexpected behavior when receiving SID information and the nexthop
isn't onlink.
Signed-off-by: Ryoga Saito <contact@proelbtn.com>
Problem:
When IP1:M1 (local) moved to IP1:M2 (remote-VTEP) bgpd continues to
advertise IP1:M1.
Fix:
Local path del is sent to bgp if the neigh was {local-active||peer-active}.
So path del needs to be called before the sync flags (including peer-active)
are cleared.
Ticket: #2706744
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
When we hand-set the router-id, but we have chosen a router-id
that is already the `winner`, there is no point in updating anyone
with this data.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
At startup there exists a time frame where we might not know
a particular vrf's router id. When zebra gets a request for
it let's not just blindly send whatever we have. Let's be
a bit smart and only respond with one if we have one.
The upper level protocol can wait for it to have one.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
vrf_name_to_id() returned VRF_DEFAULT when the vrf name was
unknown, hiding errors. Per community recommendation, vrf_name_to_id()
is now removed and the few callers now use vrf_lookup_by_name()
directly.
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
When running a bgp evpn rt5 setup with the vrf namespace backend, once
the BGP configuration is loaded, a refresh such as the config change of
a vxlan interface is not taken into account. As a consequence, the BGP
l2vpn evpn entries are empty. This can happen by recreating the vxlan
interface like follows:
ip netns exec cust1 ip li del vxlan1000
ip link add vxlan1000 type vxlan id 1000 dev loopback0 local 10.209.36.1 learning
ip link set dev vxlan1000 mtu 9000
ip link set dev vxlan1000 netns cust1
ip netns exec cust1 bash
ip link set dev vxlan1000 up
ip link set dev vxlan1000 master br1000
Actually, changing the learning attribute requires recreating the
interface, and this change required manually reloading the frr
configuration.
The update mechanism in zebra for vxlan interface updates is
already in place, but it does not work well with the namespace-based
vrf backend. The function zl3vni_from_svi() is therefore
modified to parse all the interfaces of each namespace.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Description:
Change is intended for fixing the following issues related to vrf route leaking:
Routes with special nexthops, i.e. blackhole/sink routes, when
imported, are not programmed into the FIB and the corresponding
nexthop is set as 'inactive', with the nexthop interface as 'unknown'.
While importing/leaking routes between VRFs, in the case of a special
nexthop (ipv4/ipv6), once bgp announces the route(s) to zebra, the
nexthop type is incorrectly set as
NEXTHOP_TYPE_IPV6_IFINDEX/NEXTHOP_TYPE_IFINDEX,
i.e. directly connected, even though we are not able to resolve through
an interface. This leads to nexthop_active_check marking the nexthop
!NEXTHOP_FLAG_ACTIVE. Unable to find any active nexthop, the route is
not programmed into the FIB.
Whenever BGP leaks routes, set the correct nexthop type, so that the
route gets resolved and correctly programmed into the FIB in the
imported vrf.
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Iqra Siddiqui <imujeebsiddi@vmware.com>
Make it explicit that zclient neighbor state flags are
mapped over netlink state flags. List all the defines
currently known in the kernel, and create a netlink API to
convert netlink values to zclient values. The function is
simple as it is a 1-1 match.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
As NHRP expects some notification of neighboring entries on GRE
interface, when a new interface notification is encountered, the
exact neighbor state flag is found. Previously, the flag passed
to the upper layer was forced to NDM_STATE, which is REACHABLE,
as can be seen in the trace below:
2021/08/25 10:58:39 NHRP: [QQ0NK-1H449] Netlink: new-neigh 102.1.1.1 dev gre1 lladdr 10.125.0.2 nud 0x2 cache used 1 type 5
When passing the real value, NHRP receives other values, such as STALE:
2021/08/25 11:28:44 NHRP: [QQ0NK-1H449] Netlink: new-neigh 102.1.1.1 dev gre1 lladdr 10.125.0.2 nud 0x4 cache used 0 type 5
This flag is important for NHRP, as it permits monitoring the link
layer of NHRP entries.
Fixes: d603c0774e ("nhrp, zebra, lib: enforce usage of zapi_neigh_ip structure")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
"[no] netns NAME" commands are part of the lib, but they are actually
zebra-only:
- they are using vrf_netns_handler_create and its description clearly
says that it "should be called from zebra only"
- vtysh sends these commands only to zebra
- only zebra outputs the netns related config
- zebra notifies other daemons about netns attachment
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
There is a possibility that the same line can be matched as a command in
some node and its parent node. In this case, when reading the config,
this line is always executed as a command of the child node.
For example, with the following config:
```
router ospf
network 193.168.0.0/16 area 0
!
mpls ldp
discovery hello interval 111
!
```
Line `mpls ldp` is processed as command `mpls ldp-sync` inside the
`router ospf` node. This leads to a complete loss of `mpls ldp` node
configuration.
To eliminate this issue and all possible similar issues, let's print an
explicit "exit" at the end of every node config.
This commit also changes indentation for a couple of existing exit
commands so that all existing commands are on the same level as their
corresponding node-entering commands.
Fixes #9206.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
For some reason commit #ef524230a6baa decided
to remove enums and switch to uint16_t, which
is not the right thing to do. Put it back.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
zebra_mpls_transit_lsp() may be called with an empty nexthop:
"no mpls lsp (16-1048575)".
So just remove this "gate_str" check. If there is no "gate" in the
command, "gtype" is set to NEXTHOP_TYPE_BLACKHOLE for subsequent
processing.
Signed-off-by: anlan_cs <anlan_cs@tom.com>
When NHRP registers to zebra to receive link layer events related to
gre interfaces, it is also interested in receiving RTM_GETNEIGH
messages.
Fixes: b3b751046495 ("nhrpd: link layer registration to notifications")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
In zebra_evpn_proc_remote_nh if we do not pass in a long
enough stream, the stream reads will fail. Ensure that
we have enough data.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Handle TYPE_IFINDEX nexthops more consistently in a few places;
be more specific about a few integer return values that were
being treated as booleans.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
When calling rib_add_multipath_nhe ensure that we have
well aligned return codes that mean something so that
interested parties can properly handle the situation.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When receiving a route via zapi, if the route is rejected
there exists a code path where we would not free the corresponding
re that was created.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The command `debug zebra kernel msgdump` is netlink specific.
There is no point at all in allowing this to be configured on
non-netlink platforms.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There were a bunch of places where we converted the
route node to a prefix string via srcdest_rnode2str when
we should have been using %pRN in zebra_rib.c. Just
convert those places over to use it.
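For illustration, a hypothetical before/after of such a call site:
```
#include "log.h"
#include "vrf.h"
#include "srcdest_table.h"

static void log_route_node_example(vrf_id_t vrf_id, struct route_node *rn)
{
	/* Before: format into a temporary buffer. */
	char buf[SRCDEST2STR_BUFFER];

	srcdest_rnode2str(rn, buf, sizeof(buf));
	zlog_debug("%u:%s: freeing route node", vrf_id, buf);

	/* After: let printfrr format the route node directly. */
	zlog_debug("%u:%pRN: freeing route node", vrf_id, rn);
}
```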
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we are calling rib_process and the route_node
in question has no dest, there is no work to do here
at all. As such we should just return before
attempting to do any other work. This is just a tiny bit
of simplification being done.
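A minimal sketch of the early return (simplified; not the full rib_process()):
```
#include "zebra/rib.h"

static void rib_process_sketch(struct route_node *rn)
{
	rib_dest_t *dest = rib_dest_from_rnode(rn);

	/* No dest means no routes hang off this node; nothing to do. */
	if (!dest)
		return;

	/* ... normal processing of the node's route entries follows ... */
}
```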
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exists a call path where the nhlfe_alloc can return NULL
for blackhole nexthops. In this case we were still trying
to save the nhlfe pointer causing a crash when we attempted
to add it to a self-contained list.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Do not use the `default` case when switching over an enumerated
type. This allows the code to fail to compile when we add a
new enumeration. Thus allowing us developers to know all
the places in the code we'll need to touch.
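A self-contained illustration of the idiom, using a made-up enum: with no
`default` case, `-Wswitch` flags every switch that misses a newly added
enumerator.
```
enum demo_op { DEMO_OP_INSTALL, DEMO_OP_DELETE };

static const char *demo_op_str(enum demo_op op)
{
	switch (op) {
	case DEMO_OP_INSTALL:
		return "install";
	case DEMO_OP_DELETE:
		return "delete";
		/* intentionally no default: adding a DEMO_OP_UPDATE later
		 * makes the compiler warn here until it is handled */
	}

	return "unknown";
}
```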
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
1. This check is absolutely useless. Nothing keeps the user from
deleting the address right after this check.
2. This check prevents zebra from correctly reading the user config with
"set src" because of a race with interface startup (see #4249).
3. NO OPERATIONAL DATA USAGE ON VALIDATION STAGE.
Fixes #7319.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
v4 and v6 host/reference prefixes need to be set up separately for
[RMAC, VTEP] entries as the VTEP is always normalized to a v4 addr.
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
The only difference in daemons' interface node definition is the config
write function. No need to define the node in every daemon, just pass
the callback as an argument to a library function and define the node
there.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
There exist some rare situations where fpm will attempt
to send a route update with no valid nexthops. In that
case an assert would be hit. This is not good for
trying to keep your routing daemons up and running
when we can safely just recover the situation.
Fixes#7588
Signed-off-by: batmancn <batmanustc@gmail.com>
<fixed commit message, and used zlog_err>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently 'show evpn rmac vni .. mac .. json' includes fields for
localSequence and remoteSequence, which are misleading since they
aren't applicable to MACs in the IP-VRF MAC table (RMAC).
This removes the localSequence + remoteSequence fields from the output.
Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
Like the other automake variables, setting `xyz_LDFLAGS` causes
`AM_LDFLAGS` to be ignored for `xyz`. For some reason I had in my mind
that automake doesn't do this for LDFLAGS, but... it does. (Which is
consistent with `_CFLAGS` and co.)
So, all the libraries and modules have been ignoring `AM_LDFLAGS` (which
includes `SAN_FLAGS` too). Set up new `LIB_LDFLAGS` and
`MODULE_LDFLAGS` to handle all of this correctly (and move these bits to
a central location.)
Fixes: #9034
Fixes: 0c4285d77e ("build: properly split CFLAGS from AC_CFLAGS")
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
When an IP address on a BSD interface is considered
an alias, let's mark the connected prefix we generate as
a SECONDARY.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When a port is removed from its last access vlan, the linux kernel
won't send any vlan info in the netlink message, which might cause
evpn mh to not withdraw EAD-EVI routes.
Signed-off-by: Gord Chen <gord_chen@edge-core.com>
Current code was allowing redistribution of kernel routes from
the non-default non vrf tables once FRR was already up and running.
In the case where we add `redistribute kernel` in an upper level
protocol we never consider the non-default vrf or non-vrf tables
so it is never accepted.
In the case where a kernel route is added after `redistribute kernel`
is already in place we were never looking at the fact that the
route was in a non-default non-vrf table. This code fixes
that issue.
Fixes: #9073
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Move remote VTEP updates from immediate, inline processing
in their ZAPI message handlers to the main workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Enqueue incoming vxlan remote macip updates on the main
workqueue, instead of performing the updates immediately,
in-line.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Add workqueue subqueue for EVPN/VxLAN updates; migrate the
evpn route and remote ES processing from their ZAPI handlers
to the workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
At some point we broke the ifp pointer for nhe->ifp such
that it was pointing to an interface even for group/recursive
instances.
Add checks here so that we only set the ifp
pointer if it is a fully resolved singleton NHE.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
In the reachability code we auto pass back the fully resolved
nexthops. Modify the ZEBRA_IPV4_NEXTHOP_LOOKUP_MRIB code
to do the exact same thing so that the zclient_lookup_nexthop
code does not need to recursively look for the data that
zebra already has.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Basically, this is handled by the JSON-C library. I've compiled with the
latest release of json-c and it works well.
Didn't test with various distribution versions, but this change is
somewhat dependent on the json-c lib version the distro has.
Before:
```
"192.168.100.1\/32":[
{
"prefix":"192.168.100.1\/32",
```
After:
```
"192.168.100.1/32":[
{
"prefix":"192.168.100.1/32",
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
There are a few places in the code where we use PREFIX_COPY(_IPV4/IPV6)
macro to copy a prefix. Let's always use prefix_copy function for this.
This should fix CID 1482142 and 1504610.
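Illustrative usage (the wrapper function is hypothetical):
```
#include "prefix.h"

static void copy_example(struct prefix *dst, const struct prefix *src)
{
	/* prefix_copy() copies only the bytes valid for the address family,
	 * instead of the whole struct as the PREFIX_COPY() macros did. */
	prefix_copy(dst, src);
}
```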
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When sending nexthop information, we do not need to reset the
last_write_cmd since that is taken care of in the send routine.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Include the complete set of primary and backup nexthops from
the resolving route for a pseudowire. Add accessors for that
info. Modify the logic that creates the fib set of pw nexthops
so that only installed, labelled nexthops are included.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Modify the pseudowire reachability logic so that it returns
success if there is at least one installed labelled nexthop for
the route resolving the pw destination. We also check for
valid backup nexthops if necessary, in case there's been a
switchover event.
Only OpenBSD requires that _all_ nexthops be labelled, so we
have a more strict version of the logic also.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
When processing bulk messages we need more space to handle more
mroutes. In this case we are doubling the stream size from
16k -> 32k, which should roughly double the number of mroutes
we can handle in one go.
Additionally, if we cannot parse the passed message into
the stream to pass up to pimd, then gracefully stop processing.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add a show command so we can easily get info on
what interfaces are turned on per vrf and in
which list.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Rework RA handling for vrf-lite scenarios.
Before we were using a single FD descriptor for polling
across multiple zvrf's. This would cause us to hit this
assert() in some bgp unnumbered and vrrp configs:
```
/*
* What happens if we have a thread already
* created for this event?
*/
if (thread_array[fd])
assert(!"Thread already scheduled for file descriptor");
```
We were scheduling a thread_read on the same FD for every zvrf.
With vrf-lite, RAs and ARPs are not vrf-bound, so we can just use one
rtadv instance to manage them for all VRFs. We will choose the default
VRF for this.
This patch removes the rtadv_sock altogether for zrouter and moves the
functionality this represented to the default VRF. All RAs will be
handled in the default VRF under vrf-lite configs with only one poll
thread started for it.
This patch also extends how we track subscribed interfaces (s or msec)
to use an actual sorted list by interface names rather than just a
counter. With multiple daemons turning interfaces on/off, these counters
can get very wrong during ifup/down events. Making them a sorted list
prevents this from happening by preventing duplicates.
With netns-vrf's nothing should change other than the interface list.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
FPM sends VNI to the data plane with the EVPN prefix. For pure type-5 EVPN
route, nexthop interface of EVPN prefix is L3VNI SVI. Thus, we encode L3VNI
corresponding to the nexthop vrf with rtmsg for this prefix.
For an EVPN type-5 route with a gateway IP overlay index, we support
asymmetric IRB. Thus, the nexthop interface is the L2VNI SVI. So, instead
of fetching the vrf VNI, fetch the VNI corresponding to the nexthop SVI
and encode it in the rtmsg for the EVPN prefix.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
SVI ifindex for L2VNI is required in BGP to perform EVPN type-5 to type-2
recursive resolution using gateway IP overlay index.
Program this svi_ifindex in struct zebra_vni_t as well as in struct bgpevpn
Changes include:
1. Add svi_if field to struct zebra_evpn_t
2. Add svi_ifindex field to struct bgpevpn
3. When SVI (bridge or VLAN) is bound to a VxLAN interface, store it in the
zebra_evpn_t structure.
4. Add this SVI ifindex to ZEBRA_VNI_ADD
5. Store svi_ifindex in struct bgpevpn
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
When the VRF node is exited using "exit" or "quit", there's still a VRF
pointer stored in the vty context. If you try to configure some router
related command, it will be applied to the previous VRF instead of the
default VRF. For example:
```
(config)# vrf test
(config-vrf)# ip router-id 1.1.1.1
(config-vrf)# do show run
...
!
vrf test
ip router-id 1.1.1.1
exit-vrf
!
...
(config-vrf)# exit
(config)# ip router-id 2.2.2.2
(config)# do show run
...
!
vrf test
ip router-id 2.2.2.2
exit-vrf
!
...
```
`vrf-exit` works correctly, because it stores a pointer to the default
VRF into the vty context (but weirdly keeping the VRF_NODE instead of
changing it to CONFIG_NODE).
Instead of relying on the behavior of exit function, always use the
default VRF when in CONFIG_NODE.
Another problem is missing `VTY_CHECK_CONTEXT`. If someone deletes the
VRF in which node the user enters the command, then zebra applies the
command to the default VRF instead of throwing an error.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
https://github.com/FRRouting/frr/pull/5865#discussion_r597670225
As this comment says, ZEBRA_FLAG_XXX should not have been used
to communicate SRv6 route information. A simple nexthop flag would
have been sufficient for SRv6 information, and I fixed the whole
thing that way.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
An FRRouting operator can install a seg6 route via ZAPI,
but a linux kernel operator can also install a seg6 route
via Netlink directly (i.e. iproute2).
This commit makes zebra parse non-frr seg6 route
configuration via netlink and audit Zebra's RIB.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
With this patch, zclients can install seg6 routes when
they set the "nh_seg6_segs" property on struct nexthop
and set ZEBRA_FLAG_SEG6_ROUTE in the zapi_route's flags.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit is a part of the #5853 work that adds new CLIs to
configure an SRv6 locator and its show commands.
The following CLIs are added in this commit.
vtysh -c 'conf te' \
-c 'segment-routing' \
-c ' srv6' \
-c ' locators' \
-c ' locator LOC1' \
-c ' prefix A::/64'
- "show segment-routing srv6 sid [json]"
- "show segment-routing srv6 locator [json]"
- "show segment-routing srv6 locator NAME detail [json]"
- "show runnning-config" (make it to print srv6 configuration)
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit is a part of the #5853 work that adds new ZAPIs to
configure the SRv6 locator, which manages chunk prefixes of
SRv6 SID IPv6 addresses for each routing protocol daemon.
NEW-ZAPIs:
* ZEBRA_SRV6_LOCATOR_ADD
* ZEBRA_SRV6_LOCATOR_DELETE
* ZEBRA_SRV6_MANAGER_CONNECT
* ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK
* ZEBRA_SRV6_MANAGER_RELEASE_LOCATOR_CHUNK
A zclient can connect to zebra's srv6-manager with the
ZEBRA_SRV6_MANAGER_CONNECT api, like a label-manager.
Then the zclient uses ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK to
allocate a dedicated locator chunk for its routing protocol.
Zebra only handles the prefix reservation and distributes
the ownership of the locator chunks to zclients.
Then, the zclient installs the SRv6 function with the
ZEBRA_ROUTE_ADD api using the nh_seg6local_* fields.
This feature is already implemented by another PR (#7680).
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit is a part of #5853 that adds a new cmd-node for SRv6 configuration.
This commit only adds the cmd-node and the node-moving CLI; the actual SRv6
config command isn't added (that is added in a later commit of this branch).
new cli nodes:
* SRv6
* SRv6-locators
* SRv6-locator
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
An FRRouting operator can install a seg6local route via ZAPI,
but a linux kernel operator can also install a seg6local route
via Netlink directly (i.e. iproute2).
This commit makes zebra parse non-frr seg6local
route configuration via netlink and audit Zebra's RIB.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
With this patch, zclients can install seg6local routes when
they set the nh_seg6local_{action,ctx} properties on struct nexthop
and set ZEBRA_FLAG_SEG6LOCAL_ROUTE in the zapi_route's flags.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This includes community and large-community data.
```
exit1-debian-9# show ip route 172.16.16.1/32
Routing entry for 172.16.16.1/32
Known via "bgp", distance 20, metric 0, best
Last update 00:00:23 ago
* 192.168.0.2, via eth1, weight 1
AS-Path : 65030
Communities : 65001:1 65001:2 65001:3 65001:4 65001:5 65001:6
Large-Communities: 65001:123:1 65001:123:2
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Track 'down' state of connected addresses with a new flag. We
may have multiple addresses on an interface that share a prefix;
in those cases, we need to determine when the first address
is valid, to install a connected route, and similarly detect
when the last address goes 'down', to remove the connected
route.
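A hedged sketch of the idea (the helper is illustrative, and the exact use of
the new 'down' flag is assumed):
```
#include <zebra.h>
#include "if.h"
#include "prefix.h"
#include "linklist.h"

/* Before withdrawing the connected route for a prefix, check whether another
 * address on the interface sharing that prefix is still usable. */
static bool same_prefix_still_up(struct interface *ifp, const struct prefix *p,
				 const struct connected *going_down)
{
	struct listnode *node;
	struct connected *ifc;

	for (ALL_LIST_ELEMENTS_RO(ifp->connected, node, ifc)) {
		if (ifc == going_down)
			continue;
		/* ZEBRA_IFC_DOWN: assumed name of the new 'down' flag. */
		if (!CHECK_FLAG(ifc->conf, ZEBRA_IFC_DOWN) &&
		    prefix_same(ifc->address, p))
			return true;
	}

	return false;
}
```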
Signed-off-by: Mark Stapp <mjs@voltanet.io>
if_netlink.c created its own nested parsing #define which
is identical to netlink_parse_rtattr_nested. Consolidate
on one instead of having this duality.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In order to parse the netlink message into the
`struct rtattr *tb[size]` it is assumed that the buffer is
memset to 0 before the parsing. As such if you attempt
to read a value that was not returned in the message
you will not crash when you test for it.
The code has places where we memset it and places where we don't.
This *will* lead to crashes when the kernel changes. In
our parsing routines, let's have them do the memset instead of having
to remember to do it before passing the buffer to the parser.
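A simplified version of the pattern, close to but not exactly the FRR helper:
```
#include <string.h>
#include <linux/rtnetlink.h>

static void parse_rtattr_sketch(struct rtattr **tb, int max, struct rtattr *rta,
				int len)
{
	/* Zero the table here so callers can safely test tb[x] for
	 * attributes the kernel did not include in this message. */
	memset(tb, 0, sizeof(struct rtattr *) * (max + 1));

	while (RTA_OK(rta, len)) {
		if (rta->rta_type <= max)
			tb[rta->rta_type] = rta;
		rta = RTA_NEXT(rta, len);
	}
}
```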
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When clagd is stopped on secondary device,
all vxlan interfaces (vnis) are kept in protodown state.
FRR treats protodown vxlan interfaces (vnis) as interface down
and sends vni delete to bgpd.
In the event of clagd down, SVIs are flapping as underlying
bridge is going through churn.
When FRR receives SVI up notification do not trigger event to bgpd
if vnis are operationally down.
Ticket:#2600210 CM-22929
Reviewed By:CCR-11544
Testing Done:
Performed CLAG stop/start on the secondary device; all vxlan devices
remained in protodown. Along with this, validated that the vnis are
cleaned up and added back in bgpd.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Description:
Added a new show command ("show ip zebra route dump") to dump all routes
with detailed information including nexthops, flags, status, etc.
This helps with debugging and is added to support_bundle_command.conf.
Defined this command as a hidden command.
Signed-off-by: Rajesh Girada <rgirada@vmware.com>
When creating a large number of vrf's we are creating a fairly
large number of hash tables per vrf. Reduce memory usage on
startup as well as let us identify the table these things come
from.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We are creating 2 hash tables per vni in zebra. Once we start to
scale the number of vni's we start to see some serious memory
usage in zebra. Let's reduce the memory usage at startup
for scale of vni's.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Current code has an inconsistent behavior with redistribute routes.
Suppose you have a kernel route that is being read w/ a distance
of 255:
eva# show ip route kernel
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
t - trapped, o - offload failure
K>* 0.0.0.0/0 [0/100] via 192.168.161.1, enp39s0, 00:06:39
K>* 4.4.4.4/32 [255/8192] via 192.168.161.1, enp39s0, 00:01:26
eva#
If you have redistribution already turned on for kernel routes
you will be notified of the 4.4.4.4/32 route. If you turn
on kernel route redistribution watching after the 4.4.4.4/32 route
has been read by zebra you will never learn of it.
There is no need to look for infinite distance in the redistribution
code. Either we are selected or not. In other words non kernel routes
with a 255 distance are never installed, so the checks were pointless.
So let's just remove the distance checking and tell interested parties
about the 255 kernel route if it exists.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently FRR reads the kernel for interface state and FRR
creates a connected route per address on an interface. If
you are in a situation where you have multiple addresses
on an interface just create 1 connected route for them:
sharpd@eva:/tmp/topotests$ vtysh -c "show int dummy302"
Interface dummy302 is up, line protocol is up
Link ups: 0 last: (never)
Link downs: 0 last: (never)
vrf: default
index 3279 metric 0 mtu 1500 speed 0
flags: <UP,BROADCAST,RUNNING,NOARP>
Type: Ethernet
HWaddr: aa:4a:ed:95:9f:18
inet 10.4.1.1/24
inet 10.4.1.2/24 secondary
inet 10.4.1.3/24 secondary
inet 10.4.1.4/24 secondary
inet 10.4.1.5/24 secondary
inet6 fe80::a84a:edff:fe95:9f18/64
Interface Type Other
Interface Slave Type None
protodown: off
sharpd@eva:/tmp/topotests$ vtysh -c "show ip route connected"
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
t - trapped, o - offload failure
C>* 10.4.1.0/24 is directly connected, dummy302, 00:10:03
C>* 192.168.161.0/24 is directly connected, enp39s0, 00:10:03
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Since _rnode_zlog was wrapping zlog(), these messages weren't getting a
unique ID assigned through the xref mechanism. Replace the macro with a
small extension that prints (almost) the same thing.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Initially the reading of the speed of an interface happened
upon interface creation and happened until the speed of a link
settled down to a single value. The speed of an interface
can also change in that a new optic can be inserted that
changes the speed, in which case FRR would see an interface
down (optic removal) and then an interface up (optic insertion).
In this case FRR would not treat this as an event that changed
the speed. Let's expand the checking a bit more.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
- gre keys are collected and stored locally.
- when gre source set is requested, and the link interface
configured is different, the gre information collected is
pushed in the query, namely source ip or gre keys if present.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
preserve mtu upon interface flapping and tunnel source change.
Signed-off-by:Reuben Dowle <reuben.dowle@4rf.com>
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This action is initiated by nhrp and has been stubbed when
moving to zebra. Now, a netlink request is forged to set
the link interface of a gre interface if that gre interface
does not already have a link interface.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Zebra is able to get information about gre tunnels.
A zebra_gre file is created to handle hooks, but is not yet used; it
is intended for complementary actions that may be needed.
Also, a `debug zebra gre` command is added for gre traces.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When zebra has the vrf backend mapped to namespaces, the polling
of interfaces fixes up all interface linkages. This
was not done for non-default namespaces; do it for the other namespaces.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
There are cases where either link information is not present at
interface creation or link information changed. Handle this
situation.
Signed-off-by: Philippe.Guibert <philippe.guibert@6wind.com>
a) `debug zebra kernel` turns off `debug zebra kernel msgdump....`
this is odd and bad
b) `debug zebra kernel msgdump send` turns off receive and vice versa
this is counter intuitive as well
c) `no zebra kernel msgdump ...` turns off all kernel level debugging
we should only turn off msgdump specific debugs
d) `no debug zebra kernel` turns off all kernel level debugging
we should leave msgdump on.
e) Fix `show run` and show debug output
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Encoding a signed int as unsigned is bad practice; since we want to do
it here, let's at least be explicit about it.
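Illustrative only (the variable and wrapper are hypothetical; the stream call
is the one used elsewhere for zapi encoding):
```
#include "stream.h"

static void encode_interval_sketch(struct stream *s, int ra_interval)
{
	/* Make the signed-to-unsigned conversion explicit rather than
	 * relying on an implicit one. */
	stream_putl(s, (uint32_t)ra_interval);
}
```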
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
Use unsigned value for all RA requests to Zebra
- encoding signed int as unsigned is bad practice
- RA interval is never, and should never be, negative
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
This is always a 16 bit unsigned value.
- signed int is the wrong type to use
- encoding a signed int as a uint32 is bad practice
- decoding a signed int encoded as a uint32 into a uint16 is bad
practice
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
We're firing an event debug log for zebra_redistribute_add, but not one
for zebra_redistribute_delete. Let's make it symmetric.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
`config.h` has all the defines from autoconf, which may include things
that switch behavior of other included headers (e.g. _GNU_SOURCE
enabling prototypes for additional functions.)
So, the first include in any `.c` file must be either `config.h` (with
the appropriate guard) or `zebra.h` (which includes `config.h` first
thing.)
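A minimal illustration of the required ordering in a `.c` file:
```
/* First include: zebra.h (which itself pulls in config.h under the
 * HAVE_CONFIG_H guard); a .c file may alternatively start with the
 * guarded config.h include directly. */
#include <zebra.h>

/* Only after that do other headers follow, so feature-test macros such as
 * _GNU_SOURCE are already in effect for them. */
#include <string.h>

#include "thread.h"
```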
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Properly handle refcounting of Proto-owned NHGs when
zebra is operating under graceful restart and retain
conditions.
We have an extra refcnt of 1 we keep for proto-owned NHGs to
indicate the upper level proto has created and owns it.
When we are reading these in from the kernel, we need to set them
to 1 as appropriate. Without this, we fail in the assert() during
zebra_nhg_proto_add() after the owning daemon resends the NHG
and the refcnts are off by one.
Also add in the same logic we use for routes when sweeping with
respect to uptimes.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add uptime for use with NHEs to keep track of how
long we have had this NHE in our rib without an update.
This is treated exactly the same as the re->uptime for
routes. When we get an update for a route, we reset the
uptime.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add a PROTO_OWNED macro for code readability when checking
ID bounds for whether a NHG is proto owned.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Handle SR-TE policy changes in the LSP async notification
handler, as we do in the normal LSP dplane results handler.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
When capturing backup nexthops with recursive resolution,
ensure that inner labels from the recursive nexthop are
included in each backup (as they are with the resolving
primary nexthops).
Signed-off-by: Mark Stapp <mjs@voltanet.io>
`CFLAGS` is a "user variable", not intended to be controlled by
configure itself. Let's put all the "important" stuff in AC_CFLAGS and
only leave debug/optimization controls in CFLAGS.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
... by referencing all autogenerated headers relative to the root
directory. (90% of the changes here is `version.h`.)
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Use the main zebra workqueue for daemon-owned NHGs, in addition
to processing kernel-owned NHGs. The zapi message processing
creates a temporary object that's enqueued to the workqueue,
then processed/installed as part of the workqueue processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Do not add a new route type; consider 0 as a value meaning
that zebra should be the owner.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
zapi_nbr structure is renamed to zapi_neigh_ip.
Initially used to set a neighbor ip entry for gre interfaces, this
structure is used to get events from the zebra layer to nhrp layer.
The ndm state has been added, as it is needed on both sides.
The zebra dplane layer is slightly modified.
Also, to clarify what ZEBRA_NEIGH_ADD/DEL means, a rename is done:
it is now called ZEBRA_NEIGH_IP_ADD/DEL, signifying that this
zapi interface permits setting link operations by associating ip
addresses to link addresses.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The first change in this commit is the processing of the VRF termination.
When we terminate the VRF, we should not delete the underlying interfaces,
because there may be pointers to them in the northbound configuration. We
should move them to the default VRF instead.
Because of the first change, the VRF interface itself is also not deleted
when deleting the VRF. It should be handled in netlink_link_change. This
is done by the second change.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Most of these are many, many years out of date. All of them vary
randomly in quality. They show up by default in packages where they
aren't really useful now that we use integrated config. Remove them.
The useful ones have been moved to the docs.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
Instead of directly configuring the neighbor table after reading from the
zapi interface, a zebra dplane context is prepared to host the interface and
the family where the neighbor table is updated. Also, some other fields
are hosted: app_probes, ucast_probes, and mcast_probes. More information
on those fields can be found on ip-ntable configuration.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
EVPN neighbor operations were already done in the zebra dataplane
framework. Now that NHRP is able to use zebra to perform neighbor IP
operations (by programming link IP operations), handle this operation
under dataplane framework:
- assign two new operations NEIGH_IP_INSTALL and NEIGH_IP_DELETE; this
is reserved for GRE like interfaces:
example: ip neigh add A.B.C.D lladdr E.F.G.H
- use 'struct ipaddr' to store and encode the link ip address
- reuse dplane_neigh_info, and create a union with the mac address (see
the sketch after this list)
- reuse the protocol type and use it for neighbor operations; this
permits storing the daemon that originated this neighbor operation.
a new route type is created: ZEBRA_ROUTE_NEIGH.
- the netlink level functions will handle a pointer, and a type; the
type indicates the family of the pointer: AF_INET or AF_INET6 if the
link type is an ip address, mac address otherwise.
- to keep backward compatibility with old queries, as no extension was
done, an option NEIGH_NO_EXTENSION has been put in place
- also, 2 new state flags are used: NUD_PERMANENT and NUD_FAILED.
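A minimal sketch of that union (hypothetical names; struct ipaddr and
struct ethaddr are FRR's lib types):
```
#include <stdint.h>
/* struct ipaddr / struct ethaddr come from FRR's lib headers */

/* Sketch only: the link-layer part becomes a union so it can carry either
 * a MAC address (EVPN case) or an underlay IP address (GRE/NBMA case);
 * link_family tells the netlink code which member is in use. */
struct dplane_neigh_info_sketch {
	struct ipaddr ip_addr;		/* neighbor IP being programmed */
	union {
		struct ethaddr mac;	/* lladdr as a MAC address */
		struct ipaddr link_ip;	/* lladdr as an IP address */
	} link;
	int link_family;		/* AF_INET/AF_INET6 => link.link_ip */
	uint16_t state;			/* e.g. NUD_PERMANENT, NUD_FAILED */
};
```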
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
A neighbor table API is added in zebra, and a netlink API is created for
it. The handler is called from the API defined in the previous commit.
When netlink_neigh_update() was called, the link registration was
failing due to a bad request length.
Also, the query was failing if NDA_DST was an IPv6 address.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
A zebra API is extended to offer the ability to add or remove a neighbor
entry from a daemon. This extension also makes it possible to add a neigh
entry not only between IPs and MACs, but also between IPs and NBMA IPs.
This API supports configuring IPv6/IPv4 entries with IPv4/IPv6 lladdrs.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Zebra implements a zebra API for configuring link layer information. That
can be an ARP entry (for IPv4) or an IPv6 neighbor discovery entry. It can
also be an IPv4/IPv6 entry associated with an underlay IPv4 address, as
used on GRE point-to-multipoint interfaces.
This API will also be used for monitoring: a hash list is instantiated
in zebra (this is the vrf bitmap), and each client interested in those
entries in a specific vrf will listen for the following messages: entries
added, entries removed, or who-has messages.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Optionally hide route changes that only involve backup nexthop
activation/deactivation. The goal is to avoid route churn during
backup nexthop switchover events, before the resolving routes
re-converge. A UI config enables this 'hiding' behavior.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Description:
After an FRR restart, routes are not getting redistributed when the
routes are added first and the 'redistribute static' command is issued
afterwards.
During the FRR restart, the vrf_id will be unknown, so irrespective of
redistribution, we set the redistribute vrf bitmap.
Later, when we add a route and then issue the 'redistribute' command,
we check the redistribute vrf bitmap and return CMD_WARNING;
zebra_redistribute_add also checks the redistribute vrf bitmap and returns.
Instead of checking the redistribute vrf bitmap, always set it anyway.
Co-authored-by: Santosh P K <sapk@vmware.com>
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Abhinay Ramesh <rabhinay@vmware.com>
When certain events occur (connected route changes e.g.)
zebra examines LSPs to see if they might have been affected. For
LSPs with backup nhlfes, skip this immediate processing and
wait for the owning protocol daemon to react.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
This commit introduces the implementation for the north-bound
callbacks for the zebra-specific route-map match and set clauses.
Signed-off-by: NaveenThanikachalam <nthanikachal@vmware.com>
Signed-off-by: Sarita Patra <saritap@vmware.com>
This is to fix the crash reproduced by the following steps:
* ip link add red type vrf table 1
Creates VRF.
* vtysh -c "conf" -c "vrf red"
Creates VRF NB node and marks VRF as configured.
* ip route 1.1.1.0/24 2.2.2.2 vrf red
* no ip route 1.1.1.0/24 2.2.2.2 vrf red
(or similar l3vni set/unset in zebra)
Marks VRF as NOT configured.
* ip link del red
VRF is deleted, because it is marked as not configured, but NB node
stays.
Subsequent attempt to configure something in the VRF leads to a crash
because of the stale pointer in NB layer.
Fixes #8357.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
EVPN nexthops are installed as remote neighs by zebra. This was earlier
done only via VRF IPvX uni routes imported from EVPN routes.
With EVPN-MH these VRF routes now reference an L3NHG which is set up based
on the EAD and doesn't include the RMAC. To work around that, BGP now
consolidates and maintains EVPN nexthops which are then sent to zebra.
zebra sets up these nexthops as L3-VNI nh entries using a dummy type-1
route as reference.
Ticket: CM-31398
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
This one also needed a bit of shuffling around, but MTYPE_RE is the only
one left used across file boundaries now.
Signed-off-by: David Lamparter <equinox@diac24.net>
Back when I put this together in 2015, ISO C11 was still reasonably new
and we couldn't require it just yet. Without ISO C11, there is no
"good" way (only bad hacks) to require a semicolon after a macro that
ends with a function definition. And if you added one anyway, you'd get
"spurious semicolon" warnings on some compilers...
With C11, `_Static_assert()` at the end of a macro will make it so that
the semicolon is properly required, consumed, and not warned about.
Consistently requiring semicolons after "file-level" macros matches
Linux kernel coding style and helps some editors against mis-syntax'ing
these macros.
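As a small self-contained illustration of the pattern (not the actual FRR
macro), a function-defining macro can end with `_Static_assert()` so the
trailing semicolon is both required and consumed:
```
/* Sketch: the trailing _Static_assert() consumes exactly one semicolon,
 * so `DEFINE_HELPER(demo)` without one is a compile error, and with one
 * there is no "spurious semicolon" warning. */
#define DEFINE_HELPER(name)                                                    \
	static int name##_helper(int x)                                        \
	{                                                                      \
		return x + 1;                                                  \
	}                                                                      \
	_Static_assert(1, "DEFINE_HELPER requires a trailing semicolon")

DEFINE_HELPER(demo);	/* the semicolon here is now mandatory */
```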
Signed-off-by: David Lamparter <equinox@diac24.net>
The point of the `-std=gnu99` was to override a `-std=c99` that may be
coming in from net-snmp. However, we want C11, not C99.
Signed-off-by: David Lamparter <equinox@diac24.net>
Add a control and api for the use of backup nexthops in
recursive resolution. With 'no', we won't try to use installed
backup nexthops when resolving a recursive route.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Zebra routing tables are not controlled by the user and can not be
created/deleted manually. Current NB create/destroy callbacks are
incorrectly implemented because instead of creating/deleting the RIB
they are only checking for its existence. The YANG model should reflect
the real situation.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
There are places in the code where function nb_running_get_entry is used
with abort_if_not_found set to true during the config validation stage.
This is incorrect because when used in transactional CLI, the running
entry won't be set until the apply stage, and such usage leads to crash.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
As was done for the iptable contexts, a zebra dplane context is
created for each ipset/ipset entry event. The zebra_dplane_ctx job is
then enqueued and processed by a separate thread. As with the
zebra_pbr_iptable context, the ipset and ipset entry contexts are
encapsulated into a union of structures in zebra_dplane_ctx.
There is one specificity: the stored ipset_entry structure used to carry
a back-pointer to the ipset structure, which is necessary to get some
complementary information before calling the hook. The proposal is to
use an ipset_entry_info structure next to the ipset_entry in the
zebra_dplane context. That information is used for ipset_entry
processing; the ipset name and the ipset type are the only fields
necessary.
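A rough sketch of that layout (hypothetical names): instead of a
back-pointer, the context carries a small side structure holding only
what the processing needs:
```
#include <stdint.h>

/* Sketch only: parent-ipset data kept next to the copied ipset entry. */
struct dplane_ipset_entry_info_sketch {
	char ipset_name[64];	/* name of the owning ipset */
	uint32_t ipset_type;	/* type of the owning ipset */
};
```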
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The iptable processing was not handled in the remote dataplane and was
processed directly by the thread in charge of zapi calls. Now that call
can be handled in the separate zebra_dplane thread: once a
zebra_dplane_ctx is allocated for iptable handling, the hook call is
performed later. Subsequently, a return code may be sent back to the
zclient interface if any problem occurs when calling the hook.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This was caused by uninitialized netlink attrs in the bond-member
netlink parse API.
PS: It was caught by the upstream topotests on ARM8 (passed everywhere
else).
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
This is needed as the kernel currently doesn't allow a MAC replace if the
dst changes from an L2NHG to a single VTEP and vice versa.
Ticket: CM-31561
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a ES-bond is in bypass state MACs learnt on it are linked to the
access port instead of the ES. When LACP converges on the bond it moves
out of bypass and the MACs previously learnt on it are flushed to force
a re-learn on new traffic.
Ticket: CM-31326
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When an ES-bond comes out of bypass FRR needs to flush the local MACs learnt
while the bond was in bypass. To do that efficiently local MACs are linked
to the dest-access port. This only happens if the access-port is in
LACP-bypass or if it is non-ES.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Feature overview:
=================
An 802.3ad bond can be set up to allow lacp-bypass. This is done to enable
servers to PXE boot without a LACP license, i.e. it allows the bond to go
oper up (with a single link) without LACP converging.
If an ES-bond is oper-up in an "LACP-bypass" state MH treats it as a non-ES
bond. This involves the following special handling -
1. If the bond is in a bypass-state the associated ES is placed in a
bypass state.
2. If an ES is in a bypass state -
a. DF election is disabled (i.e. assumed DF)
b. SPH filter is not installed.
3. MACs learnt via the host bond are advertised with a zero ESI.
When the ES moves out of "bypass" the MACs are moved from a zero-ESI to
the correct non-zero id. This is treated as a local station move.
Implementation:
===============
When (a) an ES is detached from a hostbond or (b) an ES-bond goes into
LACP bypass zebra deletes all the local macs (with that ES as destination)
in the kernel and its local db. BGP re-sends any imported MAC-IP routes
that may exist with this ES destination as remote routes i.e. zebra can
end up programming a MAC that was previously local as remote, pointing
to a VTEP-ECMP group.
When an ES is attached to a hostbond or an ES-bond goes
LACP-up (out of bypass) zebra again deletes all the local macs in the
kernel and its local db. At this point BGP resends any imported MAC-IP
routes that may exist with this ES destination as sync routes i.e.
zebra can end up programming a MAC that was previously remote
as local pointing to an access port.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
VNI configuration is done without NB layer in default VRF. It leads to
the following problems:
```
vtysh -c "conf" -c "vni 1"
vtysh -c "conf" -c "vrf default" -c "no vni"
```
Second command does nothing, because the NB node is not created by the
first command.
```
vtysh -c "conf" -c "vrf default" -c "vni 1"
vtysh -c "conf" -c "no vni 1"
```
Second command doesn't delete the NB node created by the first command.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This is causing problems with VM moves, i.e. the transition from remote
neigh to local neigh. This transition involves changing the NUD state
from NUD_NOARP to NUD_STALE, and the weak-override flag prevents changing
the state from connected (REACHABLE, NOARP, PERMANENT) to STALE.
PS: Weak-override was originally used to prevent race conditions where
FRR can end up making a REACHABLE neigh STALE. We may need to revisit
and address that case at a later point.
Ticket: CM-30273
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Start reorg of zebra nexthop-resolution so that we can use the
resolution logic for nexthop-groups as well as routes. Change
the signature of the core nexthop_active() api so that it does
not require a route-entry or route-node. Move some of the logic
around so that nexthop-specific logic is in nexthop_active(),
while route-oriented logic is in nexthop_active_check().
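Hypothetical prototypes illustrating the split (these are not the exact
FRR signatures): nexthop_active() needs only nexthop-level inputs, while
nexthop_active_check() keeps the route-oriented context and calls into it.
```
/* Sketch only -- illustrative prototypes, not the real ones. */
static int nexthop_active(struct nexthop *nexthop, struct nhg_hash_entry *nhe,
			  const struct prefix *top, vrf_id_t vrf_id);

static unsigned int nexthop_active_check(struct route_node *rn,
					 struct route_entry *re,
					 struct nexthop *nexthop);
```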
Signed-off-by: Mark Stapp <mjs@voltanet.io>
For MH the SVI MAC is advertised to prevent flooding of ARP replies.
But because of a bug the SVI MAC was being added to the zebra database
but not sent to bgpd for advertising.
Ticket: CM-33329
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
As a part of FRR shutdown, interfaces are force flushed (in an arbitrary
order). Interfaces are already down at that point, i.e. resources like
the SVI-MAC have already been released. Attempting to clean them up again
as a part of the force-flush was resulting in accessing freed memory -
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
==26457== Thread 1:
==26457== Invalid read of size 8
==26457== at 0x1AE6B0: zebra_evpn_acc_bd_svi_set (zebra_evpn_mh.c:606)
==26457== by 0x1B1460: zebra_evpn_if_cleanup (zebra_evpn_mh.c:1040)
==26457== by 0x13CA69: if_zebra_delete_hook (interface.c:244)
==26457== by 0x48A0E34: hook_call_if_del (if.c:59)
==26457== by 0x48A0E34: if_delete_retain (if.c:290)
==26457== by 0x48A2F94: if_delete (if.c:313)
==26457== by 0x48A3169: if_terminate (if.c:1217)
==26457== by 0x48E0024: vrf_delete (vrf.c:254)
==26457== by 0x48E0024: vrf_delete (vrf.c:225)
==26457== by 0x48E02FE: vrf_terminate (vrf.c:551)
==26457== by 0x1442E1: sigint (main.c:203)
==26457== by 0x1442E1: sigint (main.c:141)
==26457== by 0x48CF862: quagga_sigevent_process (sigevent.c:103)
==26457== by 0x48DD324: thread_fetch (thread.c:1404)
==26457== by 0x48A926A: frr_run (libfrr.c:1122)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
(gdb) bt
(gdb) fr 5
1037 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p zif->ifp->name
$2 = "vlan131", '\000' <repeats 12 times>
(gdb) p zif->link->info
$5 = (void *) 0x1
(gdb) p/x zif->ifp->flags
$7 = 0x1002
(gdb)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-32435
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
A zebra crash is seen while cleaning up an evpn interface
during a shutdown event.
The evpn interface cleanup is called from the vrf_delete callback:
(gdb) frame 4
(is_up=false, br_zif=0x0, vlan_zif=0x557f31fb36f0) at zebra/zebra_evpn_mh.c:614
614 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p tmp_br_zif
$1 = (struct zebra_if *) 0x0
(gdb) p vlan_zif->link
$2 = (struct interface *) 0x557f31fb2d40
(gdb) p vlan_zif->link->info
$3 = (void *) 0x0
(gdb) p zebra_if->ifp->name
No symbol "zebra_if" in current context.
(gdb) p vlan_zif->ifp->name
$4 = "peerlink-3.4094\000\000\000\000"
Ticket:CM-32435
Reviewed By:CCR-10957
Testing Done:
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Added support for advertising SVI MAC if EVPN-MH is enabled.
In the case of EVPN MH arp replies from an attached server can be sent to
the ES-peer. To prevent flooding of the reply the SVI MAC needs to be
advertised by default.
Note:
advertise-svi-ip could have been used as an alternate way to advertise
SVI MAC. However that config cannot be turned on if SVI IPs are
re-used (which is done to avoid wasting IP addresses in a subnet).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
SVI IP is being advertised unconditionally i.e. even if disabled (and
that is the default config). This can be problematic when the SVI address
is re-used across racks.
Added the user config condition in all the relevant places where the
SVI advertisement is triggered.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When looking up the conversion from kernel protocol to the
internal protocol family, make sure we use the correct
AF_INET (what the kernel uses) instead of AFI_IP (which
is what FRR uses).
Otherwise routes from OSPF will show up from the kernel as OSPF6 instead
of OSPF, which will cause mayhem.
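A minimal sketch of the distinction (the numeric values are shown only
for illustration; FRR's afi_t provides the real enum): the conversion
must be keyed on the kernel's socket family, not on FRR's AFI value.
```
#include <sys/socket.h>		/* AF_INET, AF_INET6 -- what the kernel uses */

/* Sketch: map the kernel's address family onto FRR-style AFI values. */
static int kernel_family_to_afi(int family)
{
	switch (family) {
	case AF_INET:
		return 1;	/* AFI_IP in FRR */
	case AF_INET6:
		return 2;	/* AFI_IP6 in FRR */
	default:
		return 0;	/* AFI_UNSPEC */
	}
}
```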
Ticket: CM-33306
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Neither tabs nor newlines are acceptable in syslog messages. They also
break line-based parsing of file logs.
Signed-off-by: David Lamparter <equinox@diac24.net>
The old VXLAN function for local MAC deletion was still in
existence and being called from the VXLAN code whilst the new
generic function was not being called at all. Resolve this so
the generic function matches the old function and is called
exclusively.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Move the pbr hash creation to be after the update release
and dplane install. Now that rules are installed in a separate
dplane pthread, we can have scenarios where we have an interface
flapping and we install/remove rules fast enough that we
could issue what we think is an update for an identical rule and
end up releasing the rule right after we created it and sent it to
the dplane. This solves the problem of receiving duplicate rules
during interface flapping.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Disallow the resolution to nexthops that are marked duplicate.
When we are resolving to an ecmp group, it's possible this
group has duplicates.
I found this when I hit a bug where we can have groups resolving
to each other and cause the resolved->next->next pointer to increase
exponentially. With sufficiently large ecmp, zebra will grind to a halt.
Like so:
```
D> 4.4.4.14/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:02
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 4.4.4.1 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.2 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.3 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.4 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.5 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.6 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.7 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.8 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.9 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.10 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.11 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.12 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.13 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.15 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.16 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
D> 4.4.4.15/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:09
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 4.4.4.1 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.2 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.3 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.4 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.5 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.6 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.7 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.8 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.9 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.10 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.11 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.12 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.13 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.14 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.16 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
D> 4.4.4.16/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:19
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:19
via 4.4.4.1 (recursive), weight 1, 00:00:19
via 1.1.1.1, dummy1, weight 1, 00:00:19
via 4.4.4.2 (recursive), weight 1, 00:00:19
...............
................
and on...
```
You can repro the above via:
```
kernel routes:
1.1.1.1 dev dummy1 scope link
4.4.4.0/24 via 1.1.1.1 dev dummy1
==============================
config:
nexthop-group doof
nexthop 1.1.1.1
nexthop 4.4.4.1
nexthop 4.4.4.10
nexthop 4.4.4.11
nexthop 4.4.4.12
nexthop 4.4.4.13
nexthop 4.4.4.14
nexthop 4.4.4.15
nexthop 4.4.4.16
nexthop 4.4.4.2
nexthop 4.4.4.3
nexthop 4.4.4.4
nexthop 4.4.4.5
nexthop 4.4.4.6
nexthop 4.4.4.7
nexthop 4.4.4.8
nexthop 4.4.4.9
!
===========================
Then use sharpd to install 4.4.4.16 -> 4.4.4.1 pointing to that nexthop
group in descending order.
```
With these changes we prevent the growing ecmp above by disallowing
duplicates from being part of the resolution decision. These nexthops are
not installed anyway, so why should we be resolving to them?
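A minimal sketch of the guard (hypothetical helper; CHECK_FLAG, struct
nexthop and NEXTHOP_FLAG_DUPLICATE are FRR lib names): while walking a
resolving entry's nexthops, never pick one already marked as a duplicate.
```
#include <stdbool.h>
/* struct nexthop, CHECK_FLAG and NEXTHOP_FLAG_DUPLICATE come from FRR's lib */

/* Sketch: duplicates are never installed, so never resolve through them. */
static bool nexthop_usable_for_resolution(const struct nexthop *nexthop)
{
	if (CHECK_FLAG(nexthop->flags, NEXTHOP_FLAG_DUPLICATE))
		return false;

	return true;
}
```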
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Description: When we get a new vrf add and a vrf with the same name, but
a different vrf-id, already exists in the database, we should treat the
vrf add as an update.
This happens mostly when a lot of vrf and other configuration is being
replayed. There may be a stale vrf delete followed by a new vrf add. This
can cause a timing race where the vrf delete is missed and the subsequent
vrf add for the same vrf is rejected instead of being treated as an update.
Treat a vrf add for an existing vrf as an update (sketched below):
1. Implicitly disable this VRF to clean up routes and other state as part
   of vrf disable.
2. Update the vrf_id for the vrf and update the vrf_id tree.
3. Re-enable the VRF so that all routes are freshly installed.
The above 3 steps are mandatory since, with a config reload, stale routes
installed in the vrf-1 table might contain routes from the older vrf-0
table, which might have been deleted due to vrf-0 missing in the new
configuration.
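A rough sketch of that sequence (hypothetical helper; the tree and
enable/disable calls are stand-ins for zebra's real VRF lifecycle and
lookup code):
```
/* Sketch only: treat a vrf add for an already-known name as an update. */
static void vrf_handle_readd_sketch(struct vrf *vrf, vrf_id_t new_id)
{
	vrf_disable_sketch(vrf);	/* 1. clean up routes under the old id */

	vrf_id_tree_remove_sketch(vrf);	/* 2. re-key the vrf_id lookup tree */
	vrf->vrf_id = new_id;
	vrf_id_tree_insert_sketch(vrf);

	vrf_enable_sketch(vrf);		/* 3. reinstall everything fresh */
}
```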
Signed-off-by: sudhanshukumar22 <sudhanshu.kumar@broadcom.com>
valgrind is reporting:
2448137-==2448137== Thread 5 zebra_apic:
2448137-==2448137== Syscall param writev(vector[...]) points to uninitialised byte(s)
2448137:==2448137== at 0x4D6FDDD: __writev (writev.c:26)
2448137-==2448137== by 0x4D6FDDD: writev (writev.c:24)
2448137-==2448137== by 0x48A35F5: buffer_flush_available (buffer.c:431)
2448137-==2448137== by 0x48A3504: buffer_flush_all (buffer.c:237)
2448137-==2448137== by 0x495948: zserv_write (zserv.c:263)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Address 0x720c3ce is 62 bytes inside a block of size 4,120 alloc'd
2448137:==2448137== at 0x483877F: malloc (vg_replace_malloc.c:307)
2448137-==2448137== by 0x48D2977: qmalloc (memory.c:110)
2448137-==2448137== by 0x48A30E3: buffer_add (buffer.c:135)
2448137-==2448137== by 0x48A30E3: buffer_put (buffer.c:161)
2448137-==2448137== by 0x49591B: zserv_write (zserv.c:256)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Uninitialised value was created by a stack allocation
2448137:==2448137== at 0x43E490: zserv_encode_vrf (zapi_msg.c:103)
Effectively we are sending `struct vrf_data` without ensuring
data has been properly initialized.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Send the results of daemons' nhg updates asynchronously,
after the update has actually completed. Capture additional
info about the source daemon in order to locate the correct
zapi session. Simplify the result types considered by the
zebra_nhg module.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The raw zapi apis to encode and decode NHGs don't need to be
public; also add a little more validity-checking.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
When calling fpm_nl_enqueue we should expect a "did it fit or not"
return value on the outgoing stream. This is not necessary
to check here because the while loop where we are checking this
has already ensured that the data being written will fit.
CID -> 1499854
Signed-off-by: Donald Sharp <sharpd@nvidia.com>