if_lookup_by_index_all_vrf doesn't work correctly with the netns VRF
backend, as the same index may be used in multiple netns simultaneously.
We always know the BGP instance we work with, so use its VRF id for the
interface lookup.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
These are no longer really needed. The client just needs
to call nexthop resolution instead.
So let's remove the zapi types.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Allow bgp to figure out if it cares about address resolution instead
of having zebra care about it. This will allow the removal of the
zapi type for import checking and just use nexthop resolution.
Effectively, we look up the route being returned, and if it is in
either table we handle it there instead of looking for clues from
the zapi message type.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Some BGP updates received by BGP invite the local router to
install a route through itself. The system will not do this, and
the route should be considered invalid as early as possible.
This case is detected in zebra, and that detection prevents
the route from being installed in the local system. However,
the nexthop tracking mechanism is still called, and acts as if the
route were valid, which is not the case.
By detecting that use case in BGP, we avoid installing the invalid
routes.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When bgp peers with ipv6 link-local addresses, it may receive a
BGP update with a next-hop containing both LL and GA information.
By default, nexthop tracking applies to the GA and ignores the
presence of the LL when both addresses are present. This is a
problem for resolving the GA as next-hop, since the next-hop
information can be resolved using the LL address alone.
The solution consists of defaulting the ipv6 nexthop choice to the
LL when available, and moving back to the GA if a route-map is
configured locally at inbound.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When an EVPN prefix route with a gateway IP overlay index is imported into the IP
vrf at the ingress PE, the BGP nexthop of this route is set to the gateway IP.
For this vrf route to be valid, the following conditions must be met.
- Gateway IP nexthop of this route should be L3 reachable, i.e., this route
should be resolved in RIB.
- A remote MAC/IP route should be present for the gateway IP address in the
EVI (L2VPN table).
To check for the first condition, gateway IP is registered with nht (nexthop
tracking) to receive the reachability notifications for this IP from zebra RIB.
If the gateway IP is reachable, zebra sends the reachability information (i.e.,
nexthop interface) for the gateway IP.
This nexthop interface should be the SVI interface.
Now, to find the type-2 route corresponding to the gateway IP, we need to fetch
the VNI for the above SVI.
To do this VNI lookup efficiently, define a hashtable of struct bgpevpn with
svi_ifindex as key.
struct hash *vni_svi_hash;
An EVI instance is added to vni_svi_hash if its svi_ifindex is nonzero.
Using this hash, we obtain struct bgpevpn corresponding to the gateway IP.
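As a rough self-contained sketch of that keying (the _model names and helpers
here are illustrative, not the actual FRR code):

#include <stdbool.h>
#include <stdint.h>

struct bgpevpn_model {
    uint32_t vni;
    int svi_ifindex;
};

/* bucket key: derived from svi_ifindex alone */
static unsigned int vni_svi_hash_key(const struct bgpevpn_model *vpn)
{
    return (unsigned int)vpn->svi_ifindex;
}

/* equality: two EVIs collide in this hash iff their SVI ifindexes match */
static bool vni_svi_hash_cmp(const struct bgpevpn_model *a,
                             const struct bgpevpn_model *b)
{
    return a->svi_ifindex == b->svi_ifindex;
}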
For gateway IP overlay index recursive lookup, once we find the correct EVI, we
have to lookup its route table for a MAC/IP prefix. As we have to iterate the
entire route table for every lookup, this lookup is expensive. We can optimize
this lookup by adding all the remote IP addresses in a hash table.
The following hash table is defined for this purpose in struct bgpevpn:
struct hash *remote_ip_hash;
When a MAC/IP route is installed in the EVI table, it is also added to
remote_ip_hash.
It is possible to have multiple MAC/IP routes with the same IP address because
of host move scenarios. Thus, for every address addr in remote_ip_hash, we
maintain list of all the MAC/IP routes having addr as their IP address.
The following structure defines an address in remote_ip_hash:
struct evpn_remote_ip {
    struct ipaddr addr;
    struct list *macip_path_list;
};
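A minimal self-contained sketch of that bookkeeping, using a plain
singly-linked list in place of FRR's struct list (all names illustrative):

#include <stdlib.h>

struct macip_path;                 /* opaque stand-in for a MAC/IP path */

struct macip_path_node {
    struct macip_path *path;
    struct macip_path_node *next;
};

/* host moves mean several MAC/IP routes can share one IP, so installing
 * a route prepends to the address's path list instead of replacing it */
static int remote_ip_add_path(struct macip_path_node **list_head,
                              struct macip_path *path)
{
    struct macip_path_node *node = calloc(1, sizeof(*node));

    if (!node)
        return -1;
    node->path = path;
    node->next = *list_head;
    *list_head = node;
    return 0;
}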
A Boolean field is added to struct bgp_nexthop_cache to indicate that the
nexthop is an EVPN gateway IP overlay index.
bool is_evpn_gwip_nexthop;
A flag BGP_NEXTHOP_EVPN_INCOMPLETE is added to struct bgp_nexthop_cache.
This flag is set when the gateway IP is L3 reachable but not yet resolved by a
MAC/IP route.
The following table explains the combination of L3 and L2 reachability w.r.t. the
BGP_NEXTHOP_VALID and BGP_NEXTHOP_EVPN_INCOMPLETE flags:
               | MACIP resolved  | MACIP unresolved
---------------|-----------------|------------------
L3 reachable   | VALID = 1       | VALID = 0
               | INCOMPLETE = 0  | INCOMPLETE = 1
---------------|-----------------|------------------
L3 unreachable | VALID = 0       | VALID = 0
               | INCOMPLETE = 0  | INCOMPLETE = 0
Procedure that we use to check if the gateway IP is resolvable by a MAC/IP
route:
- Find the EVI/L2VRF that belongs to the nexthop SVI using vni_svi_hash.
- Check if the gateway IP is present in remote_ip_hash in this EVI.
When the gateway IP is L3 reachable and it is also resolved by a MAC/IP route,
unset BGP_NEXTHOP_EVPN_INCOMPLETE flag and set BGP_NEXTHOP_VALID flag.
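The table above collapses into a small piece of logic; a self-contained
model (flag bit positions are assumptions):

#include <stdbool.h>

#define BGP_NEXTHOP_VALID           (1 << 0) /* bit values assumed */
#define BGP_NEXTHOP_EVPN_INCOMPLETE (1 << 1)

/* combine L3 reachability with MAC/IP (L2) resolution per the table */
static unsigned int gwip_nexthop_flags(bool l3_reachable, bool macip_resolved)
{
    if (l3_reachable && macip_resolved)
        return BGP_NEXTHOP_VALID;           /* fully resolved */
    if (l3_reachable)
        return BGP_NEXTHOP_EVPN_INCOMPLETE; /* waiting on a MAC/IP route */
    return 0;                               /* L3 unreachable: neither flag */
}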
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
The IP/IPv6 prefix carried with EVPN RT-5 is imported in the BGP vrf according
to the attached route targets.
If the prefix carries a gateway IP overlay index, this gateway IP should be
installed as the nexthop of the route imported in the BGP vrf.
This route in vrf will be marked as VALID only if the nexthop is resolved in the
SVI network.
To receive runtime reachability information for the nexthop, register it with
the nexthop tracking module.
Send this route to zebra after processing.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
The code processing an NHT update was only resetting the BGP_NEXTHOP_VALID
flag, so labeled nexthops were considered valid even if there was no
nexthop. Reset the flag in response to the update, and also make the
isvalid_nexthop functions a little more robust by checking the number
of nexthops.
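A self-contained sketch of the hardened checks (flag names follow the
commit text; bit values and helper names are assumptions):

#include <stdbool.h>

#define BGP_NEXTHOP_VALID          (1 << 0) /* bit values assumed */
#define BGP_NEXTHOP_LABELED_VALID  (1 << 1)

/* a nexthop cache entry is only usable if the flag is set AND it
 * actually carries at least one nexthop */
static bool bnc_is_valid_nexthop(unsigned int flags, int nexthop_num)
{
    return (flags & BGP_NEXTHOP_VALID) && nexthop_num > 0;
}

static bool bnc_is_valid_labeled_nexthop(unsigned int flags, int nexthop_num)
{
    return (flags & BGP_NEXTHOP_LABELED_VALID) && nexthop_num > 0;
}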
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
The new LL code in commit 8761cd6ddb introduced the idea of bgp
unnumbered peers using interface up/down events to track the bgp
peer's nexthop. This code was not working properly when a connection
was received from a peer in some circumstances.
Effectively, the connection from a peer was immediately skipping state
transitions, and FRR was never properly tracking the peer's nexthop.
When we receive the connection attempt, track the nexthop immediately.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
For link-local IPv6 next hops, the next hop tracking is implemented based
on interface status changes. For this purpose, the ifindex is stored in
the NHT. Reset this value if a change in ifindex is noticed, such as for
example after a restart of the networking service.
Also add some additional debug logs.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Updates: "bgpd: Switch LL nexthop tracking to be interface based"
Ticket: RM 2575386
Testing Done:
1. Manual verification
2. Precommit (#156), evpn-smoke (#155), bgp-smoke (#157), vrl (#158)
-- Precommit is clean, reported failures in evpn-smoke & vrl are resolved
-- some other tests fail in evpn-smoke, bgp-smoke & vrl; these appear to be
   existing or unrelated failures
The v6 LL commit 8761cd6ddb was incorrectly setting the metric
value to 1 for the underlying connected interface. Modify the code
to use a metric value of 0 instead of 1, which represents the actual
metric value that was originally passed up.
This was noticed when the `show bgp ipv4 uni` command was
inserting a `(metric 1)` into output where before it was not.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Recent changes to allow bgpd to handle v6 LL slightly
differently in the nexthop tracking code has not
interacted well with the blackhole nexthop change
for peers. Modify the code to do the right thing.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
bgp is currently registering v6 LL addresses as nexthops to be tracked
by zebra. This presents several problems:
a) zebra does not properly track multiple prefixes that match
the same route at this point in time.
b) BGP was receiving nexthops that were just incorrect because
of (a).
c) When a nexthop changed in a way that really didn't affect the v6 LL,
we were responding incorrectly because of this.
Modify the code such that bgp nexthop tracking notices that
we are trying to register a v6 LL. When we do so, shortcut
and watch interface up/down events for this v6 LL, and do
the work when an interface goes up / down for this type
of tracking.
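A self-contained sketch of the shortcut decision (the surrounding
registration machinery is omitted):

#include <netinet/in.h>
#include <stdbool.h>

/* v6 LL nexthops are tracked via interface up/down events instead of
 * being registered with zebra's prefix-based NHT */
static bool track_via_interface_events(const struct in6_addr *nexthop)
{
    return IN6_IS_ADDR_LINKLOCAL(nexthop);
}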
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When bgp registers for a nexthop that is not reachable due
to the nexthop pointing to a blackhole, bgp is never going
to be able to reach it when attempting to open a connection.
Broken behavior:
<show bgp nexthop>
192.168.161.204 valid [IGP metric 0], #paths 0, peer 192.168.161.204
blackhole
Last update: Thu Feb 11 09:46:10 2021
eva# show bgp ipv4 uni summ fail
BGP router identifier 10.10.3.11, local AS number 3235 vrf-id 0
BGP table version 40
RIB entries 78, using 14 KiB of memory
Peers 2, using 54 KiB of memory
Neighbor EstdCnt DropCnt ResetTime Reason
192.168.161.204 0 0 never Waiting for peer OPEN
The log file fills up with this type of message:
2021-02-09T18:53:11.653433+00:00 nq-sjc6c-cor-01 bgpd[6548]: can't connect to 24.51.27.241 fd 26 : Invalid argument
2021-02-09T18:53:21.654005+00:00 nq-sjc6c-cor-01 bgpd[6548]: can't connect to 24.51.27.241 fd 26 : Invalid argument
2021-02-09T18:53:31.654381+00:00 nq-sjc6c-cor-01 bgpd[6548]: can't connect to 24.51.27.241 fd 26 : Invalid argument
2021-02-09T18:53:41.654729+00:00 nq-sjc6c-cor-01 bgpd[6548]: can't connect to 24.51.27.241 fd 26 : Invalid argument
2021-02-09T18:53:51.655147+00:00 nq-sjc6c-cor-01 bgpd[6548]: can't connect to 24.51.27.241 fd 26 : Invalid argument
This happens because the connect to a blackhole is correctly rejected by the kernel.
Fixed behavior:
eva# show bgp ipv4 uni summ
BGP router identifier 10.10.3.11, local AS number 3235 vrf-id 0
BGP table version 40
RIB entries 78, using 14 KiB of memory
Peers 2, using 54 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
annie(192.168.161.2) 4 64539 126264 39 0 0 0 00:01:36 38 40 N/A
192.168.161.178 4 0 0 0 0 0 0 never Active 0 N/A
Total number of neighbors 2
eva# show bgp ipv4 uni summ fail
BGP router identifier 10.10.3.11, local AS number 3235 vrf-id 0
BGP table version 40
RIB entries 78, using 14 KiB of memory
Peers 2, using 54 KiB of memory
Neighbor EstdCnt DropCnt ResetTime Reason
192.168.161.178 0 0 never Waiting for NHT
Total number of neighbors 2
eva# show bgp nexthop
Current BGP nexthop cache:
192.168.161.2 valid [IGP metric 0], #paths 38, peer 192.168.161.2
if enp39s0
Last update: Thu Feb 11 09:52:05 2021
192.168.161.131 valid [IGP metric 0], #paths 0, peer 192.168.161.131
if enp39s0
Last update: Thu Feb 11 09:52:05 2021
192.168.161.178 invalid, #paths 0, peer 192.168.161.178
Must be Connected
Last update: Thu Feb 11 09:53:37 2021
eva#
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If we are using a nexthop for an MPLS VPN route, make sure the
nexthop is over a labeled path. This new check mirrors the one
in validate_paths (where routes are enabled when a nexthop
becomes reachable). The check is introduced to the code path
where routes are added and the nexthop is looked up.
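A self-contained sketch mirroring that check (flag names follow the
commit text; bit values and the helper are assumptions):

#include <stdbool.h>

#define BGP_NEXTHOP_VALID          (1 << 0) /* bit values assumed */
#define BGP_NEXTHOP_LABELED_VALID  (1 << 1)

/* routes carrying labels must resolve over a labeled nexthop path;
 * unlabeled routes only need plain validity */
static bool nexthop_usable_for_route(unsigned int bnc_flags,
                                     bool route_has_label)
{
    if (route_has_label)
        return bnc_flags & BGP_NEXTHOP_LABELED_VALID;
    return bnc_flags & BGP_NEXTHOP_VALID;
}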
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Two L3 nexthop groups are installed per-VRF per-ES for v4 and v6. These
NHGs are used as an indirect destination for symmetric IRB host routes.
Using L3NHGs allows for efficient failover of an ES (similar to the
use of L2NHGs), i.e. when an ES goes down the number of dataplane
updates is limited to 2xN (where N is the number of tenant VRFs
associated with the ES) instead of updating all host-routes behind the
ES.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
ES-VRF entries are maintained for the purpose of L3-NHG creation -
1. Each ES-EVI entry is associated with a tenant VRF. This association
triggers the creation of an ES-VRF entry.
2. Type-2/MAC-IP routes are imported into a tenant VRF and programmed as
a /32 or host route entry in the dataplane. If the destination of
the host route is a remote-ES the route is programmed with the
corresponding (keyed in by {vrf,ES-id}) L3-NHG.
3. The reason for this indirection (route->L3-NHG, L3-NHG->list-of-VTEPs)
is to avoid route updates to the dplane when a remote-ES link flaps i.e.
instead of updating all the dependent routes the NHG's contents are
updated. This reduces the amount of dataplane updates (fewer nhg updates vs.
route updates) allowing for a faster failover.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
`enum zclient_send_status` needs to be extended
throughout the code base to use the new states and
to fix up places where we tested against the return
value being non-zero.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This should never happen; no need to debug guard it, and it's not a
warning. If this isn't working then NHT is not working at all.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
This function is poorly named; it's really used to allow the FSM to
decide the next valid state based on whether a peer has valid /
reachable nexthops as determined by NHT or BFD.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
Since the addition of srte_color to the comparison for bgp nexthops,
it is possible to have several nexthops per prefix, but since zebra
only stores a per-prefix registration we should not unregister for
nh notifications for a prefix until all the nexthops for that prefix
have been deleted. Otherwise we can get into a deadlock situation
where BGP thinks we have registered but we have unregistered from zebra.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Extend the NHT code so that only the affected BGP routes are reevaluated
whenever an SR policy is updated on zebra.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
First, routing tables aren't the most appropriate data structure
to store nexthops and imported routes, since we don't need to do
longest prefix matches with that information.
Second, by converting the NHT code to use rb-trees, we can index
the nexthops using additional information, not only the destination
address. This will be useful later to index bgpd's nexthops by
both destination and SR-TE color.
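As a sketch of what the new indexing enables, a comparison function
ordering nexthops by both destination and SR-TE color could look like
this (a self-contained model, not the bgpd code):

#include <stdint.h>
#include <string.h>

struct nht_key {
    uint8_t family;       /* AF_INET or AF_INET6 */
    uint8_t prefixlen;
    uint8_t addr[16];     /* large enough for an IPv6 address */
    uint32_t srte_color;  /* 0 when no SR-TE policy applies */
};

/* total order over (destination, SR-TE color) suitable for an rb-tree */
static int nht_key_cmp(const struct nht_key *a, const struct nht_key *b)
{
    int ret;

    if (a->family != b->family)
        return a->family < b->family ? -1 : 1;
    if (a->prefixlen != b->prefixlen)
        return a->prefixlen < b->prefixlen ? -1 : 1;
    ret = memcmp(a->addr, b->addr, sizeof(a->addr));
    if (ret != 0)
        return ret;
    if (a->srte_color != b->srte_color)
        return a->srte_color < b->srte_color ? -1 : 1;
    return 0;
}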
Co-authored-by: Sebastien Merle <sebastien@netdef.org>
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Until now, the assumption was made in the bgp flowspec code that the
information contained was an ipv4 flowspec prefix. Now that it is
possible to handle ipv4 or ipv6 flowspec prefixes, that information is
stored in the prefix_flowspec attribute. Also, some unlocking is done in
order to process ipv4 and ipv6 flowspec entries.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Added a macro to validate a v4-mapped v6 address.
Modified bgp receive & send of updates with a v4-mapped v6 address as
the nexthop, installing it as a recursive nexthop in the RIB.
Minor change in fpm while sending the routes whose nexthop is a
v4-mapped v6 address.
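For reference, a self-contained version of such a check
(IN6_IS_ADDR_V4MAPPED is the standard libc test; the extraction helper
is illustrative):

#include <netinet/in.h>
#include <string.h>

/* a v4-mapped v6 address has the form ::ffff:a.b.c.d, with the IPv4
 * address stored in the last four bytes; returns 1 on success */
static int v4_from_mapped_v6(const struct in6_addr *v6, struct in_addr *v4)
{
    if (!IN6_IS_ADDR_V4MAPPED(v6))
        return 0;
    memcpy(&v4->s_addr, &v6->s6_addr[12], sizeof(v4->s_addr));
    return 1;
}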
Signed-off-by: Kaushik <kaushik@niralnetworks.com>
This is the bulk part extracted from "bgpd: Convert from `struct
bgp_node` to `struct bgp_dest`". It should not result in any functional
change.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
When there is a NHT change and the paths dependent on that NHT are being
evaluated, skip those that are marked for removal or as history.
When a route gets withdrawn, its valid flag is cleared and it is flagged
for removal; in the case of an EVPN route, it is also unimported from
VRFs (L2 and/or L3). bgp_process is then scheduled. Under rare timing
conditions, an NHT update for the route's next hop may arrive right after,
and if routes flagged for removal are not skipped, they may not only be
incorrectly marked as valid but also re-imported in the case of EVPN,
which would be a serious error.
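A minimal model of the skip (flag names modeled on the commit text; bit
values are assumptions):

#define BGP_PATH_REMOVED  (1 << 0) /* bit values assumed */
#define BGP_PATH_HISTORY  (1 << 1)

/* paths already withdrawn or kept only as history must not be
 * revalidated (or re-imported) by a late NHT update */
static int path_skippable_for_nht(unsigned int path_flags)
{
    return path_flags & (BGP_PATH_REMOVED | BGP_PATH_HISTORY);
}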
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Ensure that EVPN import or unimport is invoked only if there is a change
to the path's validity based on the NHT update.
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
It is possible that the if_lookup_by_index() call will return
a NULL value before calling zclient_send_interface_radv_req(). Just
test that we have a valid interface pointer.
Found by Coverity
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Problem reported that in many circumstances, RAs created in the
process of bringing up numbered IPv6 peers with extended-nexthop
capability enabled (for ipv4 over ipv6) were not stopped on the
interface when those peers were deleted. Found several circumstances
where this occurred and fixed them in this patch.
Ticket: CM-26875
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
Problem Description:
=====================
+--+                                            +--+
|R1|-(192.201.202.1)----iBGP----(192.201.202.2)-|R2|
+--+                                            +--+
Routes on R2:
=============
S>* 202.202.202.202/32 [1/0] via 192.201.78.1, ens256, 00:40:48
Where, the next-hop network, 192.201.78.0/24, is a directly connected network address.
C>* 192.201.78.0/24 is directly connected, ens256, 00:40:48
Configurations on R1:
=====================
!
router bgp 201
bgp router-id 192.168.0.1
neighbor 192.201.202.2 remote-as 201
!
Configurations on R2:
=====================
!
ip route 202.202.202.202/32 192.201.78.1
!
router bgp 201
bgp router-id 192.168.0.2
neighbor 192.201.202.1 remote-as 201
!
address-family ipv4 unicast
redistribute static
exit-address-family
!
Step-1:
=======
R1 receives the route 202.202.202.202/32 from R2.
R1 installs the route in its BGP RIB.
Step-2:
=======
On R1, a connected interface address is added.
The address is the same as the next-hop of the BGP route received from R2 (192.201.78.1).
Point of Failure:
=================
R1 resolves the BGP route even though the route's next-hop is its own connected address.
Even though this appears to be a misconfiguration, it would still be better to safeguard the code against it.
Fix:
====
When BGP receives a connected route from Zebra, it processes the
routes for the next-hop update.
While doing so, BGP must ignore routes whose next-hop address matches
the address of the connected route for which Zebra sent the next-hop update
message.
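A self-contained model of the guard (the helper name is hypothetical):

#include <netinet/in.h>
#include <stdbool.h>

/* when zebra announces a connected route, skip NHT entries whose
 * nexthop equals the connected address itself: a route cannot be
 * resolved through the local router's own address */
static bool nexthop_is_own_connected_addr(struct in_addr nexthop,
                                          struct in_addr connected_addr)
{
    return nexthop.s_addr == connected_addr.s_addr;
}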
Signed-off-by: NaveenThanikachalam <nthanikachal@vmware.com>
Add new function `bgp_node_get_prefix()` and modify
the bgp code base to use it.
This is prep work for the struct bgp_dest rework.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The current failed reason shown in bgp when you have a peer that
is not online yet is `Waiting for NHT`, even if NHT has
succeeded. Add some code to differentiate this.
eva# show bgp ipv4 uni summ failed
BGP router identifier 192.168.201.135, local AS number 3923 vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 2, using 43 KiB of memory
Neighbor EstdCnt DropCnt ResetTime Reason
192.168.44.1 0 0 never Waiting for NHT
192.168.201.139 0 0 never Waiting for Open to Succeed
Total number of neighbors 2
eva#
eva# show bgp nexthop
Current BGP nexthop cache:
192.168.44.1 invalid, peer 192.168.44.1
Must be Connected
Last update: Mon Feb 10 19:05:19 2020
192.168.201.139 valid [IGP metric 0], #paths 0, peer 192.168.201.139
So 192.168.201.139 is a peer for a connected route that has not been
created on .139, while 44.1 nexthop tracking has not succeeded yet.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
A bgp nexthop cache update triggers an RA for a global ipv6
nexthop update.
In the case of a blackhole route type, the outgoing interface
information is NULL, which leads to a bgpd crash.
Skip sending the RA for the blackhole nexthop type.
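A sketch of the guard (the enum here is a trimmed stand-in for zebra's
nexthop types):

/* trimmed stand-in for zebra's nexthop type enum */
enum nh_type_model {
    NH_TYPE_IFINDEX,
    NH_TYPE_IPV6_IFINDEX,
    NH_TYPE_BLACKHOLE,
};

/* blackhole nexthops carry no outgoing interface, so triggering an RA
 * for them would dereference a NULL interface */
static int should_send_ra_for_nexthop(enum nh_type_model type)
{
    return type != NH_TYPE_BLACKHOLE;
}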
Ticket:CM-27299
Reviewed By:
Testing Done:
Configure bgp neighbor over global ipv6 address.
Configure static blackhole route with prefix includes
connected ipv6 global address.
Upon link flap, zebra sends nexthop update to bgp.
Bgp nexthop cache skips sending RA for blackhole nexthop type.
router bgp 65002
bgp router-id 91.189.93.190
...
neighbor 2001:67c:1360::b peer-group internal
static route:
ipv6 route 2001:67c:1360::/48 Null0 254
iface rowlink.4010
address 91.189.93.190/32
address 2001:67c:1360::a/128
Trigger ifdown rowlink.4010; ifup rowlink.4010
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
Problem statement:
When IPv4/IPv6 prefixes are received in BGP, the bgp_update function registers the
nexthop of the route with the nexthop tracking module. The BGP route is marked as
valid only if the nexthop is resolved.
Even for EVPN RT-5, the route should be marked as valid only if the nexthop is
resolvable.
Code changes:
1. Add nexthop of EVPN RT-5 for nexthop tracking. Route will be marked as valid
only if the nexthop is resolved.
2. Only the valid EVPN routes are imported to the vrf.
3. When nht update is received in BGP, make sure that the EVPN routes are
imported/unimported based on the route becomes valid/invalid.
Testcases:
1. At rtr-1, advertise EVPN RT-5 with a nexthop 10.100.0.2.
10.100.0.2 is resolved at rtr-2 in default vrf.
At rtr-2, remote EVPN RT-5 should be marked as valid and should be imported into
vrfs.
2. Make the nexthop 10.100.0.2 unreachable at rtr-2
Remote EVPN RT-5 should be marked as invalid and should be unimported from the
vrfs. As this code change deals with EVPN type-5 routes only, other EVPN routes
should be valid.
3. At rtr-2, add a static route to make nexthop 10.100.0.2 reachable.
EVPN RT-5 should again become valid and should be imported into the vrfs.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
Recently had a case where I was attempting to debug a nexthop tracking
issue across multiple bgp vrf's, and since the setup had vrf's with
overlapping address ranges, it became real fun real fast to track
the associated vrf data. Add a bit of code to allow us to figure out
what vrf we are in when we print out debug messages.
Look through the rest of the code and find debugs where we are
not using bgp->name_pretty and switch them over.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The function nexthop_same() does not check the resolved
nexthops, so I don't think this function is even needed
anymore.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
If bfd comes back up and a bgp reconnection is in progress, theoretically
it should be necessary to wait for the end of the reconnection process.
However, since that reconnection process may take some time, update the
fsm by cancelling the connect timer. Once this is done, one just has to
call the start timer.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Avoid tracking the 0.0.0.0/32 nexthop with the RIB.
When routes are aggregated,
the originator of the route becomes self.
Do not track the nexthop self (0.0.0.0) with the rib.
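A minimal sketch of the filter (the helper name is hypothetical):

#include <netinet/in.h>
#include <stdbool.h>

/* 0.0.0.0 marks a self-originated (e.g. aggregate) route; there is
 * nothing for the RIB to resolve, so skip NHT registration */
static bool nexthop_trackable_v4(struct in_addr nexthop)
{
    return nexthop.s_addr != INADDR_ANY;
}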
Ticket: CM-24248
Testing Done:
Before fix-
tor-11# show ip nht vrf all
VRF blue:
0.0.0.0
unresolved
Client list: bgp(fd 16)
VRF default:
VRF green:
VRF magenta:
0.0.0.0
unresolved
Client list: bgp(fd 16)
After fix-
tor-11# show ip nht vrf all
VRF blue:
VRF default:
VRF green:
VRF magenta:
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>
The redirect IP nexthop of a flowspec entry is retrieved so that the
nexthop IP information is injected into nexthop tracking and associated
with the bgp_path structure. This permits validating or invalidating the
bgp_path for injection into zebra.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Null check the 'rn' returned by bgp_node_lookup() because it could be
dereferenced afterwards in bgp_nexthop_get_node_info().
Signed-off-by: F. Aragon <paco@voltanet.io>
The bgp_nexthop_set_node_info and bgp_nexthop_get_node_info
function names were slightly backwards; rename them to bgp_node_set and get.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
When we have a late registration of the Extended Nexthop capability
for BGP and the peer already has nexthop information stored, go
through and enable RA on the important interfaces.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
If we attempt to register nexthops before we have the zebra
connection, they will not be installed. After we have noticed
that we are up, re-install them.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Allow some debug notification when we are unable to talk
to zebra due to the connection not being there yet.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Recent changes to the nht code in bgp caused us to actually
keep a true count of v6 nexthop paths when using v4 over v6.
This change introduced a race condition on shutdown over who
got to the bnc cache first (the v4 table or not). Effectively
we were allowing the continued existence of the path->nexthop
pointing to the freed bnc. This was especially true when
we had route leaking. So when we free the bnc, make sure
we clean up the path->nexthop variables pointing at it too.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Do a straight conversion of `struct bgp_info` to `struct bgp_path_info`.
This commit will setup the rename of variables as well.
This is being done because `struct bgp_info` is not descriptive
of what this data actually is. It is path information for routes
that we keep to build the actual routes' nexthops plus some extra
information.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
Testing Done: bgp-smoke completed with no new failures
While testing 5549 support using global addresses, discovered that
ipv6 nexthop tracking thru a route-reflector didn't work. Since
the next-hop used for remote nexthops resolves to the link-local
of the route-reflector, we need to track it in order to react to
interface down events. Also tripped over a crash in certain cases
which is also resolved in this fix.
The bgp_nexthop_cache data is stored as a void pointer in `struct bgp_node`.
Abstract retrieval of this data and setting of this data
into functions so that in the future we can move around
what is stored in bgp_node.
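A sketch of the accessor pattern (a self-contained model; the names are
illustrative):

/* minimal model: hide the untyped pointer behind typed accessors so
 * the storage inside bgp_node can later change without touching callers */
struct bgp_nexthop_cache;          /* opaque */

struct bgp_node_model {
    void *info;
};

static inline void bgp_node_set_bnc_info(struct bgp_node_model *node,
                                         struct bgp_nexthop_cache *bnc)
{
    node->info = bnc;
}

static inline struct bgp_nexthop_cache *
bgp_node_get_bnc_info(const struct bgp_node_model *node)
{
    return node->info;
}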
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Currently we only support RFC 5549 in bgp via
using the `neighbor swp1 interface remote-as ...`
command. This causes the extended capability
data to be traded as part of the open message.
Additionally at that point in time we notify
zebra to turn on the RA code for that interface
so that the zebra trick of turning the v6 nexthop
into a 169.254.0.1 nexthop and adding a neighbor
entry works.
This code change does 2 things:
1) Modify bgp to pass the extended capability
if we are attempting to establish a v4/unicast
session over a v6 peer. In the past we limited
this to just the LL based peer.
2) Modify the nexthop tracking code to notice
when it receives nexthop data about the global v6
peer to turn on RA code on those interfaces we will
be using. This will allow the v4 route with a v6
nexthop received in zebra to auto translate this
correctly.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Routes that have labels must be sent via a nexthop that also has labels.
This change notes whether any path in a nexthop update from zebra contains
labels. If so, then the nexthop is valid for routes that have labels.
If a nexthop update has no labeled paths, then any labeled routes
referencing the nexthop are marked not valid.
Add a route flag BGP_INFO_ANNC_NH_SELF that means "advertise myself
as nexthop when announcing" so that we can track our notion of the
nexthop without revealing it to peers.
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
Create a zapi_nexthop_update_decode function that both
pim and bgp use to decode the message from zebra.
There probably could be further optimizations but I opted
to keep the code as similar as possible to the
originals because they both make some assumptions about
code flow that I do not fully understand yet.
The real goal here is that I want to create a new
user of the nexthop tracking code from a higher level
daemon and I see no need to re-implement this damn
code again for a 3rd time.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Abstract the code that sends the zapi message into zebra
for the turn on/off of nexthop tracking for a prefix.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The flags value is not used for unregister events. Let's purposefully
not send anything and purposefully not accept non-zero values for it.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
This fixes the broken indentation of several foreach loops throughout
the code.
From clang's documentation[1]:
ForEachMacros: A vector of macros that should be interpreted as foreach
loops instead of as function calls.
[1] http://clang.llvm.org/docs/ClangFormatStyleOptions.html
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
When the underlying VRF is deleted, ensure that state for the
next hops that BGP registers with zebra for tracking purposes is
properly updated. Otherwise BGP will not re-register the next hop
when the VRF is re-created, resulting in the next hop staying
unresolved.
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Reviewed-by: Don Slice <dslice@cumulusnetworks.com>
Ticket: CM-17456
Reviewed By: CCR-6587
Testing Done: Manual, bgp-min, vrf
Most of the attributes in 'struct attr_extra' allow for
the more interesting use cases of bgp. The extra
overhead of managing it will induce errors as we add
more attributes, and the extra memory overhead is
negligible on anything but full bgp feeds.
Additionally this greatly simplifies the code for
the handling of data.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
bgpd: Fix missing label set
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The FSF's address changed, and we had a mixture of comment styles for
the GPL file header. (The style with * at the beginning won out with
580 to 141 in existing files.)
Note: I've intentionally left intact other "variations" of the copyright
header, e.g. whether it says "Zebra", "Quagga", "FRR", or nothing.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Implement support for negotiating IPv4 or IPv6 labeled-unicast address
family, exchanging prefixes and installing them in the routing table, as
well as interactions with Zebra for FEC registration. This is the
implementation of RFC 3107.
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
In the future we plan to update nexthop tracking to better
handle ipv6 LLAs. This commit sets that up.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>