The old VXLAN function for local MAC deletion still existed and was
being called from the VXLAN code, while the new generic function was
not being called at all. Resolve this by making the generic function
match the old one and calling it exclusively.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Move the pbr hash creation to be after the update release and
dplane install. Now that rules are installed in a separate dplane
pthread, an interface flap can install/remove rules fast enough that
we issue what we think is an update for an identical rule and end up
releasing the rule right after we created it and sent it to the
dplane. This solves the problem of receiving duplicate rules during
interface flapping.
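As a rough illustration of the reordering (not the actual pbrd code;
the struct and helper names here are hypothetical stand-ins), the idea
is to release stale state and hand the rule to the dplane first, and
only then insert the tracking entry into the hash:
```
/* Hypothetical sketch of the ordering change, not the real pbrd code. */
struct rule { int seq; };

/* Stubs standing in for the real hash/dplane helpers (assumed names). */
static void hash_release_old(struct rule *r) { (void)r; }
static void dplane_install(struct rule *r) { (void)r; }
static void hash_insert(struct rule *r) { (void)r; }

static void rule_update(struct rule *new_rule)
{
	/* Before: hash_insert() ran first, so a quick update for an
	 * identical rule could release the entry we had just created.
	 */
	hash_release_old(new_rule); /* release any stale entry       */
	dplane_install(new_rule);   /* enqueue to the dplane pthread */
	hash_insert(new_rule);      /* only now track the new rule   */
}
```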
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Disallow resolution to nexthops that are marked duplicate. When we
are resolving to an ECMP group, it is possible this group has
duplicates.
I found this when I hit a bug where we can have groups resolving to
each other, causing the resolved->next->next pointer chain to grow
exponentially. With a sufficiently large ECMP, zebra will grind to a
halt.
Like so:
```
D> 4.4.4.14/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:02
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 4.4.4.1 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.2 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.3 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.4 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.5 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.6 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.7 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.8 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.9 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.10 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.11 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.12 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.13 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.15 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 4.4.4.16 (recursive), weight 1, 00:00:02
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
via 1.1.1.1, dummy1, weight 1, 00:00:02
D> 4.4.4.15/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:09
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 4.4.4.1 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.2 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.3 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.4 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.5 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.6 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.7 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.8 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.9 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.10 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.11 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.12 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.13 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.14 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 4.4.4.16 (recursive), weight 1, 00:00:09
via 1.1.1.1, dummy1 onlink, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
via 1.1.1.1, dummy1, weight 1, 00:00:09
D> 4.4.4.16/32 [150/0] via 1.1.1.1 (recursive), weight 1, 00:00:19
* via 1.1.1.1, dummy1 onlink, weight 1, 00:00:19
via 4.4.4.1 (recursive), weight 1, 00:00:19
via 1.1.1.1, dummy1, weight 1, 00:00:19
via 4.4.4.2 (recursive), weight 1, 00:00:19
...............
................
and on...
```
You can repro the above via:
```
kernel routes:
1.1.1.1 dev dummy1 scope link
4.4.4.0/24 via 1.1.1.1 dev dummy1
==============================
config:
nexthop-group doof
nexthop 1.1.1.1
nexthop 4.4.4.1
nexthop 4.4.4.10
nexthop 4.4.4.11
nexthop 4.4.4.12
nexthop 4.4.4.13
nexthop 4.4.4.14
nexthop 4.4.4.15
nexthop 4.4.4.16
nexthop 4.4.4.2
nexthop 4.4.4.3
nexthop 4.4.4.4
nexthop 4.4.4.5
nexthop 4.4.4.6
nexthop 4.4.4.7
nexthop 4.4.4.8
nexthop 4.4.4.9
!
===========================
Then use sharpd to install 4.4.4.16 -> 4.4.4.1 pointing to that nexthop
group in descending order.
```
With these changes, the growing ECMP above is prevented by
disallowing duplicates from taking part in the resolution decision.
These nexthops are not installed anyway, so there is no reason to
resolve to them.
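A minimal sketch of the check being described, assuming an FRR-style
NEXTHOP_FLAG_DUPLICATE bit; the loop and helper below are
illustrative, not the real zebra resolution code:
```
/* Skip duplicate nexthops when walking a resolved ECMP group so they
 * never become resolution targets themselves.  CHECK_FLAG and the
 * flag value are illustrative stand-ins for the FRR definitions.
 */
#define CHECK_FLAG(V, F) ((V) & (F))
#define NEXTHOP_FLAG_DUPLICATE 0x10 /* illustrative value */

struct nexthop {
	struct nexthop *next;
	unsigned int flags;
};

static int usable_nexthops(const struct nexthop *group_head)
{
	int usable = 0;
	const struct nexthop *nh;

	for (nh = group_head; nh; nh = nh->next) {
		/* Duplicates are never installed; don't resolve to them. */
		if (CHECK_FLAG(nh->flags, NEXTHOP_FLAG_DUPLICATE))
			continue;
		usable++;
	}

	return usable;
}
```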
Signed-off-by: Stephen Worley <sworley@nvidia.com>
valgrind is reporting:
2448137-==2448137== Thread 5 zebra_apic:
2448137-==2448137== Syscall param writev(vector[...]) points to uninitialised byte(s)
2448137:==2448137== at 0x4D6FDDD: __writev (writev.c:26)
2448137-==2448137== by 0x4D6FDDD: writev (writev.c:24)
2448137-==2448137== by 0x48A35F5: buffer_flush_available (buffer.c:431)
2448137-==2448137== by 0x48A3504: buffer_flush_all (buffer.c:237)
2448137-==2448137== by 0x495948: zserv_write (zserv.c:263)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Address 0x720c3ce is 62 bytes inside a block of size 4,120 alloc'd
2448137:==2448137== at 0x483877F: malloc (vg_replace_malloc.c:307)
2448137-==2448137== by 0x48D2977: qmalloc (memory.c:110)
2448137-==2448137== by 0x48A30E3: buffer_add (buffer.c:135)
2448137-==2448137== by 0x48A30E3: buffer_put (buffer.c:161)
2448137-==2448137== by 0x49591B: zserv_write (zserv.c:256)
2448137-==2448137== by 0x4904B7E: thread_call (thread.c:1681)
2448137-==2448137== by 0x48BD3E5: fpt_run (frr_pthread.c:308)
2448137-==2448137== by 0x4C61EA6: start_thread (pthread_create.c:477)
2448137-==2448137== by 0x4D78DEE: clone (clone.S:95)
2448137-==2448137== Uninitialised value was created by a stack allocation
2448137:==2448137== at 0x43E490: zserv_encode_vrf (zapi_msg.c:103)
Effectively we are sending a `struct vrf_data` without ensuring the
data has been properly initialized.
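The usual fix for this class of valgrind report is to zero the stack
structure before filling in its members, so padding and unset fields
are defined by the time the buffer reaches writev(). A minimal
sketch, using a stand-in struct rather than the real zapi encoder:
```
#include <string.h>

/* Stand-in for the structure placed on the stack in zserv_encode_vrf();
 * the field layout here is illustrative only.
 */
struct vrf_data_example {
	unsigned int id;
	char name[36];
};

static void encode_vrf_example(struct vrf_data_example *out,
			       unsigned int id, const char *name)
{
	/* Zero everything first so padding and unused bytes are not
	 * left uninitialised when the struct is copied into a stream.
	 */
	memset(out, 0, sizeof(*out));

	out->id = id;
	strncpy(out->name, name, sizeof(out->name) - 1);
}
```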
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Send the results of daemons' nhg updates asynchronously,
after the update has actually completed. Capture additional
info about the source daemon in order to locate the correct
zapi session. Simplify the result types considered by the
zebra_nhg module.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The raw zapi apis to encode and decode NHGs don't need to be
public; also add a little more validity-checking.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Calling fpm_nl_enqueue, we should expect a did-it-fit-or-not return
value for the outgoing stream. It is not necessary to check it here
because the surrounding while loop has already ensured that the data
being written will fit.
CID -> 1499854
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Setting `zebra route-map delay-timer 0` completely turns off any
route-map processing in zebra, which is wrong. A timer
of 0 means `do it now`.
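A minimal sketch of the intended semantics; the helper names are
hypothetical, not the real zebra route-map code. A zero delay should
trigger the work immediately rather than disabling it:
```
/* Illustrative only: how a zero delay-timer should be treated. */
static unsigned int delay_timer_s; /* configured delay, in seconds */

static void routemap_update_run(void)
{
	/* run the route-map update processing */
}

static void schedule_after(unsigned int secs)
{
	(void)secs; /* arm a timer in the real code */
}

static void routemap_changed(void)
{
	if (delay_timer_s == 0) {
		/* 0 means "do it now", not "never process route-maps". */
		routemap_update_run();
		return;
	}

	schedule_after(delay_timer_s);
}
```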
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If we are running with a delayed timer to handle route-map changes
in zebra and another route-map change is made via the CLI, push the
timer out instead of leaving it untouched. This allows a large set
of route-maps to be read in by the system without ending up in a
state where new route-map changes are still being read in when the
timer pops in the middle of them.
Additionally, convert to THREAD_OFF, preventing a possible
use-after-free and aligning the thread API usage with what we
consider correct.
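The pattern described above, roughly: cancel the pending timer (if
any) and re-arm it on every new change, so the delay keeps sliding
while configuration is still being read in. A hedged sketch with
FRR-style names; the prototypes below are simplified placeholders,
not the exact FRR thread API:
```
struct thread;
struct thread_master;

static struct thread_master *master;      /* placeholder */
static struct thread *t_routemap_update;

/* In real FRR, THREAD_OFF() cancels the event *and* NULLs the pointer,
 * which is what closes the use-after-free window mentioned above.
 */
#define THREAD_OFF(t) do { (t) = NULL; } while (0)

static void thread_add_timer(struct thread_master *m,
			     int (*func)(struct thread *), void *arg,
			     long timer_s, struct thread **t_ref)
{
	(void)m; (void)func; (void)arg; (void)timer_s; (void)t_ref;
	/* the real call schedules the timer event */
}

static int routemap_update_timer(struct thread *t)
{
	(void)t; /* run the deferred route-map processing here */
	return 0;
}

static void on_routemap_change(long delay_s)
{
	/* Cancel any pending run, then re-arm: every new change pushes
	 * the timer out instead of letting it pop mid-read.
	 */
	THREAD_OFF(t_routemap_update);
	thread_add_timer(master, routemap_update_timer, NULL, delay_s,
			 &t_routemap_update);
}
```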
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, when a route-map changes, the code schedules a rerun of
all routes in the particular table. So if you modify route-map `FOO`
used by `ip protocol XX route-map FOO`, all routes will be rechecked,
which is extremely expensive.
Modify zebra to only update the routes associated with the
route-map: if we have 800k BGP routes and 50 OSPF routes and the
route-map applies to the OSPF routes, we'll only look at 50 routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we need to cause a reprocessing of data, the code currently
marks all routes as needing to be looked at. Modify the
rib_update_table code to allow us to specify the single route type
we want to reprocess. At this point none of the code behaves
differently; this is just setup for a future code change.
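A sketch of what the extended call might look like, assuming a
route-type argument is added to rib_update_table(); the types and
the final signature here are illustrative, not the real zebra
definitions:
```
/* Illustrative types and a stub; the real zebra structures and the
 * eventual rib_update_table() prototype may differ.
 */
struct route_table;
enum rib_update_event { RIB_UPDATE_RMAP_CHANGE, RIB_UPDATE_OTHER };

static void rib_update_table(struct route_table *table,
			     enum rib_update_event event, int rtype)
{
	(void)table; (void)event; (void)rtype;
	/* walk the table, skipping entries whose route type does not
	 * match rtype (unless rtype is a catch-all sentinel)
	 */
}

static void example(struct route_table *table, int ospf_type, int all_types)
{
	/* Only re-evaluate OSPF routes after an OSPF-related change... */
	rib_update_table(table, RIB_UPDATE_RMAP_CHANGE, ospf_type);

	/* ...or fall back to touching everything, as before. */
	rib_update_table(table, RIB_UPDATE_OTHER, all_types);
}
```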
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use nl_pid from the netlink socket used for programming the kernel
(netlink_dplane) in netlink route messages sent by the 'fpm' module.
This makes 'fpm' consistent with 'dplane_fpm_nl' which already
behaves this way, and allows FPM server implementations to determine
route origin via nlmsg_pid.
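Conceptually, the change is to stamp outgoing netlink headers with
the pid of the dataplane netlink socket rather than leaving it zero,
so an FPM server can attribute the route via nlmsg_pid. A small
self-contained sketch (the function and its arguments are
illustrative; the real zebra code builds a full route message):
```
#include <linux/netlink.h>
#include <string.h>

/* Fill a bare netlink header for an FPM-bound message, stamping
 * nlmsg_pid with the pid of the kernel-programming netlink socket.
 */
static void fpm_fill_nlmsg_hdr(struct nlmsghdr *nlh, unsigned short type,
			       unsigned int dplane_nl_pid)
{
	memset(nlh, 0, sizeof(*nlh));
	nlh->nlmsg_len = NLMSG_LENGTH(0);
	nlh->nlmsg_type = type;
	nlh->nlmsg_flags = NLM_F_REQUEST;
	/* FPM servers can use this field to determine route origin. */
	nlh->nlmsg_pid = dplane_nl_pid;
}
```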
Signed-off-by: Duncan Eastoe <duncan.eastoe@att.com>
Create a function that can dump the mac->flags in human readable
output and convert all debugs to use it.
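A minimal sketch of the kind of helper being described: a
flags-to-string formatter that debugs can call instead of printing
raw hex. The flag names and bit values below are illustrative, not
the real zebra MAC flag definitions:
```
#include <stdio.h>

/* Illustrative flag bits; the real zebra MAC flags differ. */
#define MAC_EX_LOCAL  0x01
#define MAC_EX_REMOTE 0x02
#define MAC_EX_STICKY 0x04

/* Render a flags word as "LOC REM STK"-style text into buf and return
 * buf, so debugs can embed it directly in a format string.
 */
static char *mac_flags2str(unsigned int flags, char *buf, size_t len)
{
	snprintf(buf, len, "%s%s%s",
		 (flags & MAC_EX_LOCAL) ? "LOC " : "",
		 (flags & MAC_EX_REMOTE) ? "REM " : "",
		 (flags & MAC_EX_STICKY) ? "STK" : "");
	return buf;
}
```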
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The re->flags and re->status values in debugs were being dumped as
hex. I can never quickly decode this, so here is an idea: let's have
FRR do it for me.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In the case where a route's nexthops cannot be resolved as part of
route processing, immediately notify the upper-level protocol that
its route failed to install, if it is interested in being informed
about this issue.
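Schematically (the types, helper, and notify value here are
placeholders, not the exact zebra API), the idea is to send the
failure notification as soon as nexthop resolution fails, rather
than waiting for a dataplane result that will never come:
```
/* Placeholder types and helpers standing in for zebra internals. */
struct route_entry {
	int nexthop_num_active;
	unsigned int owner_wants_notify;
};

enum owner_notify { NOTIFY_FAIL_INSTALL };

static void notify_owner(struct route_entry *re, enum owner_notify n)
{
	(void)re; (void)n; /* the zapi notification would be sent here */
}

static void route_process(struct route_entry *re)
{
	if (re->nexthop_num_active == 0) {
		/* Nothing resolved: the route can never be installed,
		 * so tell the owning protocol immediately if it asked.
		 */
		if (re->owner_wants_notify)
			notify_owner(re, NOTIFY_FAIL_INSTALL);
		return;
	}

	/* ... continue toward installation ... */
}
```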
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The zebra route-map delay-timer value is global, not per-vrf. As
such we should only print it out one time.
With 2 vrfs configured we are seeing this:
zebra route-map delay-timer 33
exit-vrf
zebra route-map delay-timer 33
Fix the code to only write it out for the default vrf.
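The fix amounts to emitting the global line only while writing the
default VRF's configuration. A hedged sketch of that guard; the
function shape is simplified and uses printf in place of the real
vty output call:
```
#include <stdio.h>

#define VRF_DEFAULT 0 /* conventional default-VRF id */

struct vrf { unsigned int vrf_id; };

static void routemap_delay_config_write(const struct vrf *vrf,
					unsigned int delay_timer)
{
	/* The delay-timer is global state: write it once, under the
	 * default VRF, instead of repeating it in every VRF block.
	 */
	if (vrf->vrf_id == VRF_DEFAULT)
		printf("zebra route-map delay-timer %u\n", delay_timer);
}
```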
Ticket: CM-32888
Signed-off-by: Donald Sharp <sharpd@nvidia.com>