1. Removed the VRF_DEFAULT dependency from ospf6d.
2. The dependency on the show commands still exists and
will be fixed when the ospf6 master is available.
Co-authored-by: Harios <hari@niralnetworks.com>
Signed-off-by: Kaushik <kaushik@niralnetworks.com>
When deleting a dynamic peer, unsetting the md5 password would cause
it to also be unset on the listener, allowing unauthenticated
connections from any peer in the range.
Check for dynamic peers in peer delete and avoid this.
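A minimal sketch of the idea (invented types and names, not bgpd's
actual structures): only a statically configured peer may clear the
password shared by the listener for its range.

#include <stdbool.h>
#include <stdlib.h>

struct listener {
    char *password;         /* md5 password guarding the listen range */
};

struct peer {
    bool dynamic;           /* created from a listen range, not configured */
    struct listener *listener;
};

static void peer_delete(struct peer *peer)
{
    /* Only a configured peer owns the listener's password. */
    if (!peer->dynamic && peer->listener && peer->listener->password) {
        free(peer->listener->password);
        peer->listener->password = NULL;
    }
    free(peer);
}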
Signed-off-by: Pat Ruddy <pat@voltanet.io>
When setting authentication on a BGP peer in a VRF, the listener is
looked up from a global list. However, there is no check that the
listener is the one associated with the VRF being configured. This
can result in the wrong listener being configured with a password,
leaving the intended listener in an open authentication state.
To simplify this lookup, stash a pointer to the bgp instance in
the listener on creation (in the same way as is done for NS-based
VRFs).
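Roughly the shape of the change, sketched with made-up structures and
a made-up lookup helper; the point is that the lookup now matches on
the owning bgp instance as well as the address.

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>

struct bgp;                             /* one instance per VRF */

struct bgp_listener {
    struct bgp_listener *next;
    struct sockaddr_storage su;         /* listen address */
    struct bgp *bgp;                    /* back-pointer stashed at creation */
};

/*
 * Return the listener owned by this bgp instance for this address, so a
 * password configured in one VRF can never land on another VRF's listener.
 */
static struct bgp_listener *
listener_lookup(struct bgp_listener *head, const struct bgp *bgp,
                const struct sockaddr_storage *su)
{
    for (struct bgp_listener *l = head; l; l = l->next)
        if (l->bgp == bgp && memcmp(&l->su, su, sizeof(*su)) == 0)
            return l;

    return NULL;
}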
Signed-off-by: Pat Ruddy <pat@voltanet.io>
1. Topotest for isis-vrf is added for ipv4 and ipv6.
2. Test case for checking isis topology.
3. Test case for checking zebra isis routes.
4. Test case for checking linux vrf routes.
5. Two new APIs written in topotest/lib for checking vrf routes.
Co-authored-by: Kaushik <kaushik@niralnetworks.com>
Signed-off-by: harios_niral <hari@niralnetworks.com>
1. Added isis support for different VRFs and its dependencies.
2. Added a new vrf leaf in the YANG model.
3. A minor change: the IF_DOWN_FROM_Z passing argument is
replaced with the ifp pointer in the api "isis_if_delete_hook()".
4. Minor fix in the isisd spf unit test.
Co-authored-by: Kaushik <kaushik@niralnetworks.com>
Signed-off-by: harios_niral <hari@niralnetworks.com>
During the yangification of staticd, the code around the
static_hold_route data struct was refactored. In this process the
warning log message emitted when the VRF is not ready was
removed. It should not have been removed, so adding it back.
One MPLS validation was also missed; adding it back.
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
When not using the transactional CLI mode, do not display a
warning when a YANG-modeled command doesn't perform any effective
configuration change.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
1. Added a new API for ACL add/delete with route-map notify.
Co-authored-by: harios <hari@niralnetworks.com>
Signed-off-by: Kaushik <kaushik@niralnetworks.com>
Since the addition of srte_color to the comparison for bgp nexthops,
it is possible to have several nexthops per prefix. But since zebra
only stores a per-prefix registration, we should not unregister for
nh notifications for a prefix until all the nexthops for that prefix
have been deleted. Otherwise we can get into a deadlock situation
where BGP thinks we are registered but we have unregistered from zebra.
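A simplified sketch, with stand-in types, of the bookkeeping this
implies: the zebra unregister is only sent once no cache entry for
any color still refers to the prefix.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for a prefix; the real type differs. */
struct nht_prefix {
    uint8_t family;
    uint8_t prefixlen;
    uint8_t addr[16];
};

struct bgp_nexthop_cache {
    struct bgp_nexthop_cache *next;
    struct nht_prefix prefix;
    uint32_t srte_color;        /* part of the key since SR-TE support */
};

/*
 * After removing one cache entry, only tell zebra to unregister the
 * prefix when no other entry (for any color) still refers to it.
 */
static bool nht_should_unregister(const struct bgp_nexthop_cache *head,
                                  const struct nht_prefix *p)
{
    for (const struct bgp_nexthop_cache *bnc = head; bnc; bnc = bnc->next)
        if (memcmp(&bnc->prefix, p, sizeof(*p)) == 0)
            return false;       /* another color still needs it */

    return true;
}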
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Extend the NHT code so that only the affected BGP routes are
reprocessed whenever an SR policy is updated on zebra.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Example configuration:
route-map SET_SR_POLICY permit 10
 set sr-te color 1
!
router bgp 1
 bgp router-id 1.1.1.1
 neighbor 2.2.2.2 remote-as 1
 neighbor 2.2.2.2 update-source lo
 address-family ipv4 unicast
  neighbor 2.2.2.2 next-hop-self
  neighbor 2.2.2.2 route-map SET_SR_POLICY in
 exit-address-family
!
!
Learned BGP routes from 2.2.2.2 are mapped to the SR-TE Policy
which is uniquely determined by the BGP nexthop (2.2.2.2 in this
case) and the SR-TE color in the route-map.
Co-authored-by: Renato Westphal <renato@opensourcerouting.org>
Co-authored-by: GalaxyGorilla <sascha@netdef.org>
Co-authored-by: Sebastien Merle <sebastien@netdef.org>
Signed-off-by: Sebastien Merle <sebastien@netdef.org>
First, routing tables aren't the most appropriate data structure
to store nexthops and imported routes since we don't need to do
longest prefix matches with that information.
Second, by converting the NHT code to use rb-trees, we can index
the nexthops using additional information, not only the destination
address. This will be useful later to index bgpd's nexthops by
both destination and SR-TE color.
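An illustrative comparison function (invented key layout, not bgpd's
real one) showing how an rb-tree can key the cache on both destination
and color:

#include <stdint.h>
#include <string.h>

struct bnc_key {
    uint8_t family;
    uint8_t prefixlen;
    uint8_t addr[16];
    uint32_t srte_color;
};

/*
 * Order by destination first, then by SR-TE color, so the same
 * destination with two colors yields two distinct cache entries.
 */
static int bnc_key_cmp(const struct bnc_key *a, const struct bnc_key *b)
{
    int ret;

    if (a->family != b->family)
        return a->family < b->family ? -1 : 1;
    if (a->prefixlen != b->prefixlen)
        return a->prefixlen < b->prefixlen ? -1 : 1;

    ret = memcmp(a->addr, b->addr, sizeof(a->addr));
    if (ret != 0)
        return ret;

    if (a->srte_color != b->srte_color)
        return a->srte_color < b->srte_color ? -1 : 1;

    return 0;
}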
Co-authored-by: Sebastien Merle <sebastien@netdef.org>
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
1. MAC ref of a zero ESI was accidentally creating a new ES with zero
ES id.
2. When an ES was deleted and re-added, the ES was not being sent to BGP
because of a stale flag that suppressed the update as a dup.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When we get a rib deletion event and we already have
that particular route node in the queue to be reprocessed,
just note that someone from kernel land has done us dirty
and allow it to be cleaned up by normal processing.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Imagine a situation where an interface is bouncing up/down.
The interface comes up and daemons like pbr will get a nht
tracking callback for a connected interface up and will install
the routes down to zebra. At this same time the interface can
go down. But since zebra is busy handling route changes (from pbr)
it has not read the netlink message and can get into a situation
where the route resolves properly and then we attempt to install
it into the kernel (which is rejected). If the interface
bounces back up fast at this point, the down then up netlink
messages will be read and create two route entries off the connected
route node. Zebra will then enqueue both route entries for future
processing. After this processing happens the down/up is collapsed
into an up and nexthop tracking sees no changes and does not inform
any upper level protocol (in this case pbr) that nexthop tracking
has changed. So pbr still believes the nexthops are good but the
routes are not installed since pbr has taken no action.
Fix this by immediately running rnh when we signal a connected
route entry is scheduled for removal. This should cause
upper level protocols to get a rnh notification for the small
amount of time that the connected route was bouncing around like
a madman.
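A rough sketch of the fix's shape, using placeholder types and a stub
notifier in place of the real rnh machinery:

#include <stdio.h>

#define ZEBRA_ROUTE_CONNECT 1
#define ROUTE_ENTRY_REMOVED 0x1

struct route_entry {
    int type;           /* protocol that owns the route */
    int flags;
};

/* Stub standing in for telling nht clients (pbr, bgpd, ...) to re-resolve. */
static void rnh_evaluate(struct route_entry *re)
{
    printf("re-evaluating tracked nexthops for re %p\n", (void *)re);
}

static void rib_mark_for_removal(struct route_entry *re)
{
    re->flags |= ROUTE_ENTRY_REMOVED;

    /*
     * Connected routes resolve other routes; don't wait for the queued
     * rib walk, push the nexthop-tracking update right away so clients
     * see even a brief down window.
     */
    if (re->type == ZEBRA_ROUTE_CONNECT)
        rnh_evaluate(re);
}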
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Avoid unnecessary use of StringIO in one place, use a version-
dependent method in another. Remove a couple of other py2->py3
problems.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
The pnhc->nexthop was a pointer copy, causing issues
with the ability to move pointers around for the
different pnhc entries, since the pnhc mirrored the nexthop
caches. When we received a vrf change, if we shared
pointers it was impossible to know if we had
already updated it.
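A minimal sketch, with cut-down stand-in types, of giving each pnhc
its own copy of the nexthop instead of aliasing a shared pointer:

#include <stdlib.h>
#include <string.h>

struct nexthop {
    int ifindex;
    unsigned int vrf_id;
};

struct pbr_nexthop_cache {
    struct nexthop nexthop;     /* owned copy, not a borrowed pointer */
};

static struct pbr_nexthop_cache *pbr_nexthop_cache_new(const struct nexthop *nh)
{
    struct pbr_nexthop_cache *pnhc = calloc(1, sizeof(*pnhc));

    if (pnhc)
        memcpy(&pnhc->nexthop, nh, sizeof(pnhc->nexthop));

    return pnhc;
}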
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
We had multiple pnhc cache entries with the same nexthop
pointer. This caused a large amount of confusion.
Fix up the code to handle this situation better.
Ticket: CM-31044
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
If we have an interface configured in a daemon, store the
old ifindex value on shutdown for retrieval when the interface
is possibly recreated.
This is especially important for nexthop groups, as we
had at one point in time the ability to restore the
configuration, but it was lost when we started deleting
all deleted interfaces. We need the nexthop group subsystem
to also mark that it has configured an interface.
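A small sketch of the idea with invented fields; the real interface
structure and hooks differ:

#include <stdbool.h>

#define IFINDEX_INTERNAL 0

struct interface {
    int ifindex;        /* current kernel ifindex, or IFINDEX_INTERNAL */
    int oldifindex;     /* last real ifindex seen before deletion */
    bool configured;    /* some daemon (e.g. a nexthop group) references it */
};

static void if_handle_kernel_delete(struct interface *ifp)
{
    /* Remember where we were, so a recreate can be matched back up. */
    if (ifp->ifindex != IFINDEX_INTERNAL)
        ifp->oldifindex = ifp->ifindex;

    ifp->ifindex = IFINDEX_INTERNAL;
    /* ifp itself is kept (not freed) while ifp->configured is true. */
}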
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
PBR needs the ability to allow ephemeral interfaces (bonds,
vrfs, dummies, bridges, etc.) to be destroyed and then
recreated, and at the same time keep track of them and
rebuild state as appropriate when we get a change.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The nexthop_group_write_nexthop_simple function outputs the
interface name because we've stored the ifindex. The problem
is that there are ephemeral interfaces in Linux that can be
destroyed/recreated. Allow us to keep that data and do something
a bit smarter so that show run and other show commands continue
to work when the interface is deleted.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Currently, when a vrf is deleted and then added back, PBR was
not going through and touching up all the data structures
that needed to be massaged to allow it to start working again.
This includes (a rough sketch follows below):
a) Search through the nexthop groups to find any nexthop
that references the old nexthop id and set it right again.
b) Search through the nexthop cache for nht and reset
those nexthops to the right vrf as well as re-register them.
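The promised sketch, using illustrative types and a stub in place of
the real zebra registration call:

#include <string.h>

struct pbr_nexthop {
    char vrf_name[16];
    unsigned int vrf_id;
};

struct pbr_nht_cache {
    struct pbr_nht_cache *next;
    struct pbr_nexthop nexthop;
};

/* Stub standing in for re-registering the nexthop with zebra for tracking. */
static void pbr_nht_register(struct pbr_nexthop *nh)
{
    (void)nh;
}

static void pbr_vrf_readd(struct pbr_nht_cache *head, const char *vrf_name,
                          unsigned int new_vrf_id)
{
    for (struct pbr_nht_cache *c = head; c; c = c->next) {
        if (strcmp(c->nexthop.vrf_name, vrf_name) != 0)
            continue;

        c->nexthop.vrf_id = new_vrf_id; /* a) repair the stale id */
        pbr_nht_register(&c->nexthop);  /* b) re-register for nht */
    }
}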
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Not everything cares about the vrf and backup info. Break
up the API to add a simple version that just writes the
gateway/interface info.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
It was wrongly assumed that the kernel is replying in batches when multiple
requests fail. The kernel sends one error message at a time, so we can
simply keep reading data from the socket as long as possible.
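A simplified sketch of the resulting read loop (parsing of the
individual NLMSG_ERROR messages is elided):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/*
 * Read error replies one message at a time until the socket would block.
 * Real code would parse each netlink error message from buf; this only
 * shows the loop structure.
 */
static void netlink_read_all_errors(int fd, char *buf, size_t len)
{
    for (;;) {
        ssize_t n = recv(fd, buf, len, MSG_DONTWAIT);

        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return;     /* drained everything available */
        if (n <= 0)
            return;     /* real error or EOF: caller handles it */

        /* ... handle one netlink error message here ... */
    }
}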
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
In the context of TI-LFA it is necessary to have multiple
representations of SPFs for the so-called P and Q spaces. Hence it makes
sense to start with fresh vertex lists, and only delete them when
the SPF calculation is not a 'dry run'.
Signed-off-by: GalaxyGorilla <sascha@netdef.org>
During testing it was noticed that routes were considered
installed by zebra, but the kernel did not have the route.
Upon close debugging of the rib it was noticed that FRR
was turning a dplane_ctx_route_init failure into a success and
FRR was now in a bad state.
2020/08/26 17:55:53.897436 PBR: route_notify_owner: [0.0.0.0/0] Route Removed succeeded for table: 10012
2020/08/26 17:55:53.897572 ZEBRA: 0.0.0.0/0: uptime == 432033, type == 24, instance == 0, table == 10012
2020/08/26 17:55:53.897622 ZEBRA: rib_meta_queue_add: (0:10012):0.0.0.0/0: queued rn 0x5566b0ea7680 into sub-queue 5
2020/08/26 17:55:53.907637 ZEBRA: default(0:10012):0.0.0.0/0: Processing rn 0x5566b0ea7680
2020/08/26 17:55:53.907665 ZEBRA: default(0:10012):0.0.0.0/0: Examine re 0x5566b0d01200 (pbr) status 2 flags 1 dist 200 metric 0
2020/08/26 17:55:53.907702 ZEBRA: default(0:10012):0.0.0.0/0: After processing: old_selected 0x0 new_selected 0x5566b0d01200 old_fib 0x0 new_fib 0x5566b0d01200
2020/08/26 17:55:53.907713 ZEBRA: default(0:10012):0.0.0.0/0: Adding route rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr)
2020/08/26 17:55:53.907879 ZEBRA: default(0:10012):0.0.0.0/0: rn 0x5566b0ea7680 dequeued from sub-queue 5
2020/08/26 17:55:53.907943 ZEBRA: netlink_route_multipath: RTM_NEWROUTE 0.0.0.0/0 vrf 0(10012)
2020/08/26 17:55:53.910756 ZEBRA: default(0:10012):0.0.0.0/0 Processing dplane result ctx 0x5566b0ea82f0, op ROUTE_INSTALL result SUCCESS
2020/08/26 17:55:53.910769 ZEBRA: update_from_ctx: default(0:10012):0.0.0.0/0: SELECTED, re 0x5566b0d01200
2020/08/26 17:55:53.910785 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): no fib nhg
2020/08/26 17:55:53.910793 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): rib nhg matched, changed 'true'
2020/08/26 17:55:53.910802 ZEBRA: (0:10012):0.0.0.0/0: Redist update re 0x5566b0d01200 (pbr), old 0x0 (None)
2020/08/26 17:55:53.910812 ZEBRA: Notifying Owner: 24 about prefix 0.0.0.0/0(10012) 2 vrf: 0
2020/08/26 17:55:53.910912 PBR: route_notify_owner: [0.0.0.0/0] Route installed succeeded for table: 10012
2020/08/26 17:55:55.400516 ZEBRA: RTM_DELROUTE 0.0.0.0/0 vrf default(0) table_id: 10012 metric: 20 Admin Distance: 0
2020/08/26 17:55:55.400527 ZEBRA: rib_delete: (0:10012):0.0.0.0/0: rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr) was deleted from kernel, adding
We were receiving a notification from the kernel that the route was
deleted and deciding that we needed to reinstall it. At that point in
time, when it got into the dplane handlers to be converted over to the
dplane pthread, the dplane decided to drop the request, convert it to
a success, and not do anything.
This code change removes the conversion of this failure into a
success and notifies the upper level about it. After this change the
default route in table 10012 is now properly marked as rejected:
root@mlx-2700-07:mgmt:/var/log/frr# vtysh -c "show ip route table 10012"
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF default table 10012:
F>r 0.0.0.0/0 [200/0] via 172.168.1.164, isp2-uplink (vrf PUBLIC), weight 1, 00:24:48
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>