The code was installing the nexthop group again, using the
NLM_F_REPLACE flag, causing extremely large
route installation times. This reduces the time to install
1 million routes from sharpd with an NHG
from > 200 seconds ( where I gave up ) to ~15
seconds on my machine for 32 x ecmp. As a side note, 1 million
routes using master sharpd takes ~50 seconds to do
the same thing.
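For illustration, a rough sketch of the idea with hypothetical struct and
field names (the real zebra data structures differ): only push the group
when it is new or has actually changed, instead of re-sending it with
NLM_F_REPLACE on every route install.

#include <stdbool.h>

/* Hypothetical per-nexthop-group state. */
struct nhg_state {
	bool installed;		/* already pushed to the kernel */
	bool changed;		/* modified since the last install */
};

/* Skip the redundant NLM_F_REPLACE re-install when the group is
 * already in the kernel and has not changed. */
static bool nhg_needs_install(const struct nhg_state *nhg)
{
	return !nhg->installed || nhg->changed;
}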
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Add some logging for when we choose to ignore an NHG install
for one reason or another. Also, clean up some of the code
to use the same accessor functions for the context object.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Return the proto nhe on del even if there are still possible
route references.
We may get a del before the routes are removed, so we still need
to return this to the caller so they can decrement the ref.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Fix the releasing of proto-owned singletons from the attribute
hash table. Proto-owned singleton nexthops are hashed so they
can still be shared; therefore, they are present in this table
and need to be released when the time comes.
This check was only matching on the zebra proto before. Changed
it to match IDs in the zebra-allocated range.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Increment the NHG proto score iterator we use to count
leftover NHGs after a client disconnect, and log them.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Fix some reference counting issues seen when replacing
an NHG and deleting one.
For a replacement, we should end with the same refcnt on the new
one.
For a delete, it's the caller's job to decrement its ref after
it is done with it.
Further, update routes in the rib with the new pointer after replace.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Add code to handle proto-based NHG uninstalling after
the owning client disconnects.
This is handled the same way as rib_score_proto() but for now
we are ignoring instance.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
When we add a proto NHG, increment the refcount; when
we del a proto NHG, decrement the refcount rather than
deleting it explicitly. If the upper level proto is handling
it properly, it should get decremented to zero when we
receive an NHG del.
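A minimal sketch of that contract, using made-up names (the actual zebra
NHG entry structures and helpers differ):

#include <assert.h>
#include <stdint.h>

struct proto_nhg {
	uint32_t id;
	uint32_t refcnt;
};

/* ZAPI_NHG_ADD takes a reference on the entry... */
static void proto_nhg_add_ref(struct proto_nhg *nhe)
{
	nhe->refcnt++;
}

/* ...and ZAPI_NHG_DEL drops one; the entry is only freed once the
 * count reaches zero, i.e. when the owning proto has released all of
 * its references. */
static void proto_nhg_del_ref(struct proto_nhg *nhe)
{
	assert(nhe->refcnt > 0);
	if (--nhe->refcnt == 0) {
		/* last reference gone: uninstall and free the entry */
	}
}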
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Remove some leftover boilerplate from the old replace
code path. That code ended up in the add API, so it's no
longer needed.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
If we receive a route that is exactly the same as the
already existing route, just note that it happened
and move on.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Fix a check in zread where we determine the validity of a route
based on reading in nexthops/checking whether an ID is present.
We had a bad conditional that was determining a route
is bad if it is not NHG ID based.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
We were hardcoding proto bgp for use with the NHG creation.
Use the proto actually passed via zapi now that it exists.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Make NHG ID allocation smarter so it wraps once it hits
the lower bound for protos and performs a lookup to make
sure we don't already have that ID in use.
It's pretty unlikely we would wrap, since the ID space is somewhere
around 24 million for Zebra at this point in time.
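A sketch of the wrap-around allocation, with assumed helper and bound
names (the real code works against zebra's ID hash):

#include <stdbool.h>
#include <stdint.h>

/* Assumed lookup helper: is this ID already present in the ID table? */
extern bool nhg_id_in_use(uint32_t id);

/* Hand out the next zebra-owned NHG ID.  Once the counter reaches the
 * proto lower bound it wraps back to the start of zebra's range, and
 * IDs that are still in use are skipped. */
static uint32_t next_zebra_nhg_id(uint32_t *counter, uint32_t zebra_start,
				  uint32_t proto_lower_bound)
{
	do {
		(*counter)++;
		if (*counter >= proto_lower_bound)
			*counter = zebra_start;	/* wrap into zebra's range */
	} while (nhg_id_in_use(*counter));

	return *counter;
}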
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Determine the NHG ID spacing and lower bound with ZEBRA_ROUTE_MAX
in macros.
Directly set the upper bound to be the lower 28 bits of the uint32_t ID
space (the top 4 bits are reserved for l2-NHGs). Round that number down
a bit to make it more even.
Convert all former lower_bound calls to just use the macro.
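A hypothetical reconstruction of such macros; the names, the number of
slices reserved for zebra, and the rounding are illustrative, not the
exact definitions in the tree:

/* ZEBRA_ROUTE_MAX is FRR's count of route types; a placeholder value is
 * used here only so the sketch stands alone. */
#ifndef ZEBRA_ROUTE_MAX
#define ZEBRA_ROUTE_MAX 32
#endif

/* Top 4 bits of the 32-bit ID space are reserved for l2-NHGs, so the
 * upper bound is 2^28 (268435456) rounded down to an even 250000000,
 * carved into per-route-type slices. */
#define NHG_ID_UPPER		250000000U
#define NHG_ID_SPACING		(NHG_ID_UPPER / ZEBRA_ROUTE_MAX)
/* IDs below the proto lower bound stay reserved for zebra itself; the
 * number of reserved slices here is an assumption. */
#define NHG_ID_PROTO_LOWER	(NHG_ID_SPACING * 3)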
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
When we receive an NHG from the kernel, we set the ID counter
to that value to avoid using IDs owned by the kernel.
If we get one outside of zebra's range, let's not update it,
since it's probably one we created and never deleted anyway.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
For now let's assume proto-NHG-based routes are good to go
(we assume they are onlink/interface based anyway) and bypass
route resolution altogether.
Once we determine how to handle recursive nexthop-resolution for
proto-NHGs we will revisit this.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Add code to properly handle routes sent with NHG ID rather
than a nexthop_group.
For now, we separate this from backup nexthop handling since that
should probably be added to the nhg_proto_add calls.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Implement the ability to replace an NHG sent down
from an upper level proto. With proto-owned NHGs, we make the
assumption they are ecmp and always treat them as a group
to make the replace from 1 -> 2 and 2 -> 1 quite a bit
easier.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
To prevent duplication of singleton NHGs, let's hash
any zebra-ID spaced NHGs sent from an upper level proto.
These would be singleton NHGs anyway and should prevent duplication
of dataplane installs.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Add a command/functionality to only install proto-based nexthops.
That is, nexthops owned/created by upper level protocols, not ones
implicitly created by zebra.
There are some scenarios where you would not want zebra to be
arbitrarily installing nexthop groups, but you still want
to use ones you have control over via lib/nexthop_group config
and an upper level protocol.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Implement the underlying zebra functionality to Add/Del an
internal zebra and kernel NHG.
These NHGs are managed by the upper level protocols that send them
down via zapi messaging.
They are not put into the overall zebra NHG hash table and are only
put into the ID table. Therefore, different protos cannot
and will not share NHGs.
The proto is also set appropriately when sent to the kernel.
Expand the separation of Zebra hashed/shared/created NHGs and
proto-created and -managed NHGs.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Remove the code for setting an NHG as unhashable. Originally
this was to prevent us from attempting to put duplicates from
the kernel in our hash table.
Now I think it's better to not use them in the hash table at all
and only track them in the ID table. Routes will still be able
to use them if they specify the ID explicitly when sending Zebra
the route, but 'normal' routes we hash the nexthop group on
will not.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Modify the send down of a route to use the nexthop group id
if we have one associated with the route.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Add the ability to send an NHG from an upper level protocol down to
zebra. ZAPI_NHG_ADD encompasses both the add and replace
semantics ( if the ID passed down does not exist yet, it's an add,
else it's a replace ).
Effectively zebra will take this NHG passed down, save it
in the ID hash for NHGs, then create the appropriate NHGs
and finally install them into the Linux kernel. Notification
will be the ZAPI_NHG_NOTIFY_OWNER zapi message for normal
success/failure messaging to the installing protocol.
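A sketch of those semantics with made-up helper names (the real zebra
handler and data structures differ):

#include <stdint.h>

struct nexthop_group;		/* opaque for this sketch */
struct nhg_entry;		/* opaque for this sketch */

extern struct nhg_entry *nhg_lookup_id(uint32_t id);
extern struct nhg_entry *nhg_create(uint32_t id,
				    const struct nexthop_group *grp);
extern void nhg_update(struct nhg_entry *nhe,
		       const struct nexthop_group *grp);
extern void nhg_install(struct nhg_entry *nhe);

/* ZAPI_NHG_ADD: an unknown ID means add, a known ID means replace.
 * The dataplane result is reported back asynchronously via
 * ZAPI_NHG_NOTIFY_OWNER. */
static void zapi_nhg_add(uint32_t id, const struct nexthop_group *grp)
{
	struct nhg_entry *nhe = nhg_lookup_id(id);

	if (!nhe)
		nhe = nhg_create(id, grp);	/* new ID: add */
	else
		nhg_update(nhe, grp);		/* known ID: replace */

	nhg_install(nhe);
}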
This work is being done to allow us to work with EVPN MH,
which needs the ability to modify NHGs that BGP will own
and operate on.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Take the zebra code that reads nexthops and combine it
into one function so that when we add zapi messages
to send/receive nexthops we can take advantage of this function.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When attempting to limit the amount of data sent from the kernel
to FRR, some kernels we run against may not have this ability,
in which case the setsockopt will fail. Log a notice when that happens.
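A sketch of that error handling; NETLINK_GET_STRICT_CHK is only an assumed
example of such an option, and fprintf stands in for the real logging:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_NETLINK
#define SOL_NETLINK 270
#endif
#ifndef NETLINK_GET_STRICT_CHK
#define NETLINK_GET_STRICT_CHK 12
#endif

/* Ask the kernel to filter/limit the data it sends us; older kernels do
 * not know the option and reject it, so just note that and carry on
 * instead of treating it as fatal. */
static void netlink_try_limit_data(int fd)
{
	int one = 1;

	if (setsockopt(fd, SOL_NETLINK, NETLINK_GET_STRICT_CHK, &one,
		       sizeof(one)) < 0)
		fprintf(stderr,
			"netlink: kernel does not support this option: %s\n",
			strerror(errno));
}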
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The mlag_rd_buf_offset variable was only ever being set to 0
in the mlag_read function and was only written in that function.
There is no need for this global variable.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This problem was reported by the sanitizer -
=================================================================
==24764==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0000115c8 at pc 0x55cb9cfad312 bp 0x7fffa0552140 sp 0x7fffa0552138
READ of size 8 at 0x60d0000115c8 thread T0
#0 0x55cb9cfad311 in zebra_evpn_remote_es_flush zebra/zebra_evpn_mh.c:2041
#1 0x55cb9cfad311 in zebra_evpn_es_cleanup zebra/zebra_evpn_mh.c:2234
#2 0x55cb9cf6ae78 in zebra_vrf_disable zebra/zebra_vrf.c:205
#3 0x7fc8d478f114 in vrf_delete lib/vrf.c:229
#4 0x7fc8d478f99a in vrf_terminate lib/vrf.c:541
#5 0x55cb9ceba0af in sigint zebra/main.c:176
#6 0x55cb9ceba0af in sigint zebra/main.c:130
#7 0x7fc8d4765d20 in quagga_sigevent_process lib/sigevent.c:103
#8 0x7fc8d4787e8c in thread_fetch lib/thread.c:1396
#9 0x7fc8d4708782 in frr_run lib/libfrr.c:1092
#10 0x55cb9ce931d8 in main zebra/main.c:488
#11 0x7fc8d43ee09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)
#12 0x55cb9ce94c09 in _start (/usr/lib/frr/zebra+0x8ac09)
=================================================================
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
The read/write mlag buffer sizes of 2k were sufficient
for ~100 S,G notifications at one go. Increase to 32k
to give us 16 times the space.
Ticket: CM-31576
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If we receive a message that is greater than our buffer
size we are in a situation where both the read and write
buffers are fubar'ed beyond the end. Assert when we notice
this fact.
Ticket: CM-31576
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The normal pattern of writing the type/length at the beginning
of the packet was not quite being followed. Modify the mlag
code to respect the proper way of doing things and get rid
of a stream_new and copy.
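A sketch of that pattern using FRR's lib/stream helpers as I understand
them; the function name and the body writing are placeholders:

#include "stream.h"	/* FRR stream helpers */

/* Write the type and a placeholder length up front, fill in the body,
 * then backfill the real length, rather than building a second stream
 * and copying into it. */
static void mlag_msg_encode(struct stream *s, uint16_t msg_type)
{
	stream_putw(s, msg_type);	/* type */
	stream_putw(s, 0);		/* length, backfilled below */

	/* ... write the message body into s ... */

	stream_putw_at(s, 2, stream_get_endp(s));	/* real length */
}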
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The neigh hold timer was firing after the neigh was deleted resulting
in the following crash -
[
at ./zebra/zebra_evpn_neigh.h:155
at zebra/zebra_evpn_neigh.c:447
at lib/thread.c:1578
at zebra/main.c:488
]
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Found that the command "evpn mh neigh-holdtime" can be set but
not deleted. This fix makes the delete work.
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
When an ES peer withdraws a MAC-IP route we hold the entry for N seconds
to allow an external daemon (neighmgr) to establish host reachability
independent of the peer. Add config commands to allow the user to set
this holdtime (N).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Let's not make the entire `depend_finds` function pay
for the data gathering needed for the debug. There
are numerous other places in the code that check
the NEXTHOP_FLAG_RECURSIVE and do the same output.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The Linux kernel is gaining RTM_F_TRAP and RTM_F_OFFLOAD for
kernel routes that have an underlying ASIC offload. Write the
code to receive these notifications from the Linux kernel and
to store that data for display about the routes.
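A sketch of pulling those flags out of the netlink route message; the
zebra-side fields are hypothetical, the RTM_F_* values are the kernel's:

#include <linux/rtnetlink.h>
#include <stdbool.h>

#ifndef RTM_F_OFFLOAD
#define RTM_F_OFFLOAD 0x4000	/* route is offloaded into the ASIC */
#endif
#ifndef RTM_F_TRAP
#define RTM_F_TRAP 0x8000	/* route traps packets to the CPU */
#endif

struct route_offload_state {
	bool offloaded;
	bool trapped;
};

static void parse_offload_flags(const struct rtmsg *rtm,
				struct route_offload_state *state)
{
	state->offloaded = !!(rtm->rtm_flags & RTM_F_OFFLOAD);
	state->trapped = !!(rtm->rtm_flags & RTM_F_TRAP);
}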
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Some Linux kernels are starting to support the idea of knowledge
about the underlying ASIC. Add a boolean that we can set/unset
to track whether or not we think the router has this functionality
available.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The Solaris code has gone through a deprecation cycle. No one
has said anything to us and, worst of all, we don't have any test
systems running Solaris to know if we are making changes that
are breaking on Solaris. Remove it from the system so
we can clean up a bit.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Code was added in the past to support a value of VRF_DEFAULT different
from 0. This option was abandoned; the default vrf id is always 0.
Remove this code; this will simplify the code and improve performance
(use a constant value instead of a function that performs tests).
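A minimal illustration of the gist (the helper name here is hypothetical;
the value 0 is the one stated above):

#include <stdint.h>

/* VRF_DEFAULT is a plain constant again. */
#define VRF_DEFAULT 0

/* Callers can now compare directly against the constant instead of
 * calling a helper that has to test what the default vrf id is. */
static inline int vrf_is_default_id(uint32_t vrf_id)
{
	return vrf_id == VRF_DEFAULT;
}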
Signed-off-by: Christophe Gouault <christophe.gouault@6wind.com>
In all outputs (text and json): simplify and optimize the vrf name
display by using the vrf_id_to_name() handler.
Note: vrf_id_to_name() has a safeguard that prevents crashing
when the vrf cannot be found because it changed in some
(unexpected) manner: it returns "n/a".
Note: "vrf n/a" will now be displayed instead of "vrf UNKNOWN" in this
case, like in most other frr components.
This safeguard was missing for show ip route json, so this
optimization also fixes a potential crash.
Signed-off-by: Christophe Gouault <christophe.gouault@6wind.com>
Variable "show ip route" commands invoke the same helper
(do_show_ip_route), potentially several times.
When asking to dump a non-default vrf, all vrfs or all tables, the
output is messy, the header summarizing abbreviations is repeated
several times, excess line feeds appear, the default table of default
VRF is concatenated to the previous table output...
Normalize the output:
- whatever the case, display the common header at most once, if there
is at least an entry to dump.
- when using a "vrf all" or "table all" command, prepend a line with
the VRF and table (even for the default vrf or table).
- when dumping a specific vrf or table, prepend a line with the VRF
and table.
Example (vrf all)
=================
router# show ip route vrf all
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF main:
C>* 10.0.2.0/24 is directly connected, mgmt0, 00:24:09
K>* 10.0.2.2/32 [0/100] is directly connected, mgmt0, 00:24:09
C>* 10.125.0.0/24 is directly connected, ntfp2, 00:00:26
VRF private:
S>* 1.1.1.0/24 [1/0] via 10.125.0.2, loop0, 00:00:29
C>* 10.125.0.0/24 is directly connected, loop0, 00:00:42
Example (main vrf)
==================
router# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
C>* 10.0.2.0/24 is directly connected, mgmt0, 00:24:41
K>* 10.0.2.2/32 [0/100] is directly connected, mgmt0, 00:24:41
C>* 10.125.0.0/24 is directly connected, ntfp2, 00:00:58
Example (specific vrf)
======================
router# show ip route vrf private
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF private:
S>* 1.1.1.0/24 [1/0] via 10.125.0.2, loop0, 00:01:23
C>* 10.125.0.0/24 is directly connected, loop0, 00:01:36
Example (all tables)
====================
router# show ip route table all
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF main table 200:
S>* 4.4.4.4/32 [1/0] via 10.125.0.3, ntfp2, 00:01:51
VRF main table 254:
C>* 10.0.2.0/24 is directly connected, mgmt0, 00:25:34
K>* 10.0.2.2/32 [0/100] is directly connected, mgmt0, 00:25:34
C>* 10.125.0.0/24 is directly connected, ntfp2, 00:01:51
Example (all vrf, all table)
============================
router# show ip route table all vrf all
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF main table 200:
S>* 4.4.4.4/32 [1/0] via 10.125.0.3, ntfp2, 00:02:15
VRF main table 254:
C>* 10.0.2.0/24 is directly connected, mgmt0, 00:25:58
K>* 10.0.2.2/32 [0/100] is directly connected, mgmt0, 00:25:58
C>* 10.125.0.0/24 is directly connected, ntfp2, 00:02:15
VRF private table 200:
S>* 2.2.2.0/24 [1/0] via 10.125.0.2, loop0, 00:02:18
VRF private table 254:
S>* 1.1.1.0/24 [1/0] via 10.125.0.2, loop0, 00:02:18
C>* 10.125.0.0/24 is directly connected, loop0, 00:02:31
Example (specific table)
========================
router# show ip route table 200
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF main table 200:
S>* 4.4.4.4/32 [1/0] via 10.125.0.3, ntfp2, 00:05:26
Signed-off-by: Christophe Gouault <christophe.gouault@6wind.com>
This series of events:
$ sudo ifconfig lo0 add 4.4.4.4/32
$ sudo ifconfig lo0 inet 4.4.4.4/32 delete
would end up leaving the 4.4.4.4/32 address on the interface under
FreeBSD.
This all boils down to the fact that the interface is not
considered connected, yet we have a destination. If the
destination is the same and we are not connected, ignore
it on FreeBSD.
I am sure there are other fun scenarios that someone
will have to squirrel out.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Problem commit -
[
b169fd6fd5 zebra: support for MAC-IP sync routes
]
That commit had accidentally replaced a mac-ip del to bgp with a mac
del (consequence of a bad cut-paste).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Changes to set up peer-synced entries as static in the dataplane. This prevents
them from being flushed out when the local switch cannot establish
their reachability.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
As a part of the refactoring, some of the evpn_vni_es APIs got renamed
to evpn_evpn_es. Changed them to evpn_es_evi to make it common to
vxlan and mpls.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a MAC is detected as duplicate on a local
learn event (with the freeze action),
do not send an update to bgp to advertise it into
the evpn control plane.
With evpn mh, the inform_client flag is set and
a notification is sent to bgp even though dup detect
has triggered.
Check that the MAC is detected as duplicate before
setting inform_client to true.
Ticket:CM-29817
Reviewed By:CCR-10329
Testing Done:
Enable DAD with freeze action
Upon local learn, MAC detected as duplicate
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
When installing rules, pass the interface name across
zapi.
This is being changed because we have a situation where,
if you quickly create/destroy ephemeral interfaces under
Linux, the upper level protocol may be trying to add
a rule for an interface that does not quite exist
at the moment. Since ip rules actually want the
interface name ( to handle just this sort of situation ),
convert over to passing the interface name and storing
it and using it in zebra.
Ticket: CM-31042
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This is used when parsing newly created network namespaces. Actually, to
track the link of some interfaces like vxlan interfaces, both the link
index and the link nsid are necessary. If a vxlan interface is moved to a
new netns, the link information is in the default network namespace, and
LINK_NSID is then the value of the default netns as seen from the new
netns. That value of the default netns in the new netns is not known,
because the system does not automatically assign an NSID to the default
network namespace in the new netns. Now a new NSID of the default netns,
as seen from that new netns, is created. This permits storing, at netns
creation, the default netns relative value for further usage.
Because the default netns value is set from the new netns perspective,
it is no longer needed to use the NETNSA_TARGET_NSID attribute, which is
only available in recent kernels.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The walk routine is used by the vxlan service to identify some contexts in
each specific network namespace, when the vrf netns backend is used. That
walk mechanism is extended with some additional parameters to the walk
routine.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When duplicate address detection is observed, some counters need to be
incremented and some timing mechanisms handled. For that, the main evpn
configuration is retrieved. Until now, the VRF storing the dad
config parameters was the same VRF that hosted the VXLAN interface. With
the netns backend, this is not true, as the VXLAN interface is in the
same VRF as the bridge interface. The modification takes the same approach
as in BGP, that is to say there is a single bgp evpn instance, and
it is that instance that provides the correct config settings.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This change is needed when a MAC/IP entry is learned by zebra and the
entry happens to be in a different namespace. For the entry to be
active, the correct vni match has to be found.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
1. MAC ref of a zero ESI was accidentally creating a new ES with zero
ES id.
2. When an ES was deleted and re-added, the ES was not being sent to BGP
because of a stale flag that suppressed the update as a dup.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When we get a rib deletion event and we already have
that particular route node in the queue to be reprocessed,
just note that someone from kernel land has done us dirty
and allow it to be cleaned up by normal processing.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Imagine a situation where an interface is bouncing up/down.
The interface comes up and daemons like pbr will get a nht
tracking callback for a connected interface up and will install
the routes down to zebra. At this same time the interface can
go down. But since zebra is busy handling route changes ( from pbr )
it has not read the netlink message and can get into a situation
where the route resolves properly and then we attempt to install
it into the kernel ( which is rejected ). If the interface
bounces back up fast at this point, the down then up netlink
message will be read and create two route entries off the connected
route node. Zebra will then enqueue both route entries for future processing.
After this processing happens the down/up is collapsed into an up
and nexthop tracking sees no changes and does not inform any upper
level protocol ( in this case pbr ) that nexthop tracking has changed.
So pbr still believes the nexthops are good but the routes are not
installed since pbr has taken no action.
Fix this by immediately running rnh when we signal a connected
route entry is scheduled for removal. This should cause
upper level protocols to get a rnh notification for the small
amount of time that the connected route was bouncing around like
a madman.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
It was wrongly assumed that the kernel is replying in batches when multiple
requests fail. The kernel sends one error message at a time, so we can
simply keep reading data from the socket as long as possible.
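A sketch of that read loop against the plain netlink socket API
(fprintf stands in for zebra's logging):

#include <errno.h>
#include <linux/netlink.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* After sending a batch of requests, keep pulling replies off the
 * socket until it is drained, handling each NLMSG_ERROR individually
 * rather than expecting the kernel to batch them. */
static void drain_netlink_errors(int fd)
{
	char buf[8192];
	ssize_t len;

	while ((len = recv(fd, buf, sizeof(buf), MSG_DONTWAIT)) > 0) {
		struct nlmsghdr *h = (struct nlmsghdr *)buf;
		size_t remaining = (size_t)len;

		for (; NLMSG_OK(h, remaining); h = NLMSG_NEXT(h, remaining)) {
			if (h->nlmsg_type != NLMSG_ERROR)
				continue;

			struct nlmsgerr *err = NLMSG_DATA(h);

			if (err->error)
				fprintf(stderr, "netlink error: %s\n",
					strerror(-err->error));
		}
	}

	if (len < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
		fprintf(stderr, "netlink read failed: %s\n", strerror(errno));
}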
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
During testing it was noticed that routes were considered
installed by zebra, but the kernel did not have the route.
Upon close debugging of the rib it was noticed that FRR
was turning a dplane_ctx_route_init failure into a success and
FRR was now in a bad state.
2020/08/26 17:55:53.897436 PBR: route_notify_owner: [0.0.0.0/0] Route Removed succeeded for table: 10012
2020/08/26 17:55:53.897572 ZEBRA: 0.0.0.0/0: uptime == 432033, type == 24, instance == 0, table == 10012
2020/08/26 17:55:53.897622 ZEBRA: rib_meta_queue_add: (0:10012):0.0.0.0/0: queued rn 0x5566b0ea7680 into sub-queue 5
2020/08/26 17:55:53.907637 ZEBRA: default(0:10012):0.0.0.0/0: Processing rn 0x5566b0ea7680
2020/08/26 17:55:53.907665 ZEBRA: default(0:10012):0.0.0.0/0: Examine re 0x5566b0d01200 (pbr) status 2 flags 1 dist 200 metric 0
2020/08/26 17:55:53.907702 ZEBRA: default(0:10012):0.0.0.0/0: After processing: old_selected 0x0 new_selected 0x5566b0d01200 old_fib 0x0 new_fib 0x5566b0d01200
2020/08/26 17:55:53.907713 ZEBRA: default(0:10012):0.0.0.0/0: Adding route rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr)
2020/08/26 17:55:53.907879 ZEBRA: default(0:10012):0.0.0.0/0: rn 0x5566b0ea7680 dequeued from sub-queue 5
2020/08/26 17:55:53.907943 ZEBRA: netlink_route_multipath: RTM_NEWROUTE 0.0.0.0/0 vrf 0(10012)
2020/08/26 17:55:53.910756 ZEBRA: default(0:10012):0.0.0.0/0 Processing dplane result ctx 0x5566b0ea82f0, op ROUTE_INSTALL result SUCCESS
2020/08/26 17:55:53.910769 ZEBRA: update_from_ctx: default(0:10012):0.0.0.0/0: SELECTED, re 0x5566b0d01200
2020/08/26 17:55:53.910785 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): no fib nhg
2020/08/26 17:55:53.910793 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): rib nhg matched, changed 'true'
2020/08/26 17:55:53.910802 ZEBRA: (0:10012):0.0.0.0/0: Redist update re 0x5566b0d01200 (pbr), old 0x0 (None)
2020/08/26 17:55:53.910812 ZEBRA: Notifying Owner: 24 about prefix 0.0.0.0/0(10012) 2 vrf: 0
2020/08/26 17:55:53.910912 PBR: route_notify_owner: [0.0.0.0/0] Route installed succeeded for table: 10012
2020/08/26 17:55:55.400516 ZEBRA: RTM_DELROUTE 0.0.0.0/0 vrf default(0) table_id: 10012 metric: 20 Admin Distance: 0
2020/08/26 17:55:55.400527 ZEBRA: rib_delete: (0:10012):0.0.0.0/0: rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr) was deleted from kernel, adding
We were receiving a notification from the kernel that the route was deleted and deciding
that we needed to reinstall it. At that point in time, when the request got into the dplane
handlers to be converted over to the dplane pthread, the dplane decided to drop the request,
convert it to a success, and not do anything.
This code change removes the conversion from this failure to success and
notifies the upper level about it. After this change the default route
to table 10012 is now properly marked as rejected:
root@mlx-2700-07:mgmt:/var/log/frr# vtysh -c "show ip route table 10012"
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF default table 10012:
F>r 0.0.0.0/0 [200/0] via 172.168.1.164, isp2-uplink (vrf PUBLIC), weight 1, 00:24:48
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we are not using nexthop groups, there is no need to
test whether or not they are installed correctly.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The fuzzing code that is in the master branch is outdated and unused, so it
is worth removing it to improve the readability of the code.
All the code related to fuzzing is in the `fuzz` branch.
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
In order to create the appropriate policy route, the family attribute is stored
in the ipset and iptable zapi contexts. This commit also adds the flow label
attribute in iptables, for further usage.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When turning on `debug zebra packet detail` or `debug zebra packet recv detail`
only display the detailed packet dump when `detail` is added.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
There are a bunch of places where the table id is not being output
in debug messages for routing changes. Add in the table id we
are operating on. This is especially useful for the case where
pbr is working.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
All network namespaces are read so as to collect interesting fdb and
neighbor tables for EVPN.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This information is necessary for local information, because the
interface associated with the mac address is stored with its ifindex, and
the ifindex may not be enough to get to the right interface when working
with multiple network namespaces.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When working with the vrf netns backend, two bridge interfaces may have the
same bridge interface index, but not the same namespace. Because in vrf
netns backend mode a bridge slave always belongs to the same network
namespace, a check against the namespace id and the ns id of the
bridge interface permits resolving the interface pointer correctly.
The problem could occur if the same index of two bridge interfaces can be
found in two different namespaces.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When receiving a netlink API message for an interface in a namespace, this
interface may come with a LINK_NSID value, which means that the interface
has its link in another namespace. Unfortunately, the link_nsid value
is local to that namespace, and there is a need to know its
associated nsid value from the default namespace point of view.
The information previously collected on each namespace can then be
compared with that value to check whether the link belongs to the default
namespace or not.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
To be able to retrieve the network namespace identifier for each
namespace, the ns id is stored in each ns context. For the default
namespace, the netns id is the same as that value.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
As a reminder, the netns identifiers are local to a namespace. That is to
say that, for instance, a vrf <vrfx> will have one netns id value in one
netns, and another netns id value in another netns.
There is a need for the zebra daemon to collect some cross information, like
the LINK_NETNSID information from interfaces having their link layer in
another network namespace. For that, it is needed to have a global
overview instead of a relative overview per namespace.
The first brick of this change is an API that sticks to the netlink API
and uses NETNSA_TARGET_NSID: from a given vrf vrfX, and a newly
created vrf vrfY, the API returns the value of the nsID of vrfX inside the
new vrf vrfY.
The brick also gets the ns id value of the default namespace in each other
namespace. An additional value in ns.h is offered that permits
retrieving the default namespace context.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
An incoming bridge index has been found that is linked with a vxlan
interface, and the search for that bridge interface is done. In
vrf-lite, the search is done across the same default namespace, because
the bridge and the vxlan may not be in the same vrf. But this behaviour is
wrong when using the vrf netns backend, as the bridge and the vxlan have
to be in the same vrf ( hence in the same network namespace ). To comply
with that, use the network namespace of the vxlan interface. That way, the
appropriate nsid is passed as a parameter, and consequently the search is
correct, and the mac address passed to BGP will be ok too.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Other network namespaces are parsed because a bridge interface can be
bridged with vxlan interfaces whose link is in the default vrf that hosts
l2vpn.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>