When adding or removing a peer's flag we need to make sure we FORCE
updates, to avoid suppressing critical ones.
For example, entering `no neighbor x.x.x.x send-community large` would by
default suppress the update, and the other side would be left with stale
large communities.
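A minimal sketch of the idea, with hypothetical names (this is not the
actual FRR change): the flag handler re-announces unconditionally, so that
attribute removals reach the peer as well.
```
#include <stdbool.h>

/* Hypothetical, simplified peer -- not the real FRR structures. */
struct demo_peer {
	unsigned int flags;
};

/* Stubbed announce hook: 'force' means "send even if the advertised
 * attributes look unchanged to the comparison in the update path". */
static void demo_announce_route(struct demo_peer *peer, bool force)
{
	(void)peer;
	(void)force;
}

static void demo_peer_flag_unset(struct demo_peer *peer, unsigned int flag)
{
	peer->flags &= ~flag;

	/*
	 * Force the re-announce: after `no neighbor x.x.x.x send-community
	 * large` the outgoing attribute set only shrinks, so a non-forced
	 * pass could decide nothing new needs to be sent and the peer would
	 * keep the stale large communities.
	 */
	demo_announce_route(peer, true);
}
```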
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
In bgp_zebra_announce we do work to apply the table map.
This is the same for both v4 and v6, but the code is
duplicated inside both the v4 and v6 if statements. Move it
outside to reduce the duplication.
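A rough sketch of the resulting shape, using hypothetical names rather than
the real function:
```
#include <stdbool.h>

enum demo_afi { DEMO_AFI_IP, DEMO_AFI_IP6 };

struct demo_route { int metric; };

/* stubbed helpers, for illustration only */
static bool demo_table_map_apply(struct demo_route *r) { (void)r; return true; }
static void demo_send_to_zebra(struct demo_route *r) { (void)r; }

static void demo_zebra_announce(enum demo_afi afi, struct demo_route *route)
{
	if (afi == DEMO_AFI_IP) {
		/* v4-specific nexthop handling ... */
	} else {
		/* v6-specific nexthop handling ... */
	}

	/*
	 * The table-map check is identical for v4 and v6, so it is done
	 * once here instead of being duplicated in both branches above.
	 */
	if (!demo_table_map_apply(route))
		return; /* route filtered by the table map */

	demo_send_to_zebra(route);
}
```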
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
BGP has grown some redundant checks in bgp_zebra_announce().
Reduce the multiple if statements and consolidate them a bit.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
`lcommunity_gettoken` expects a space-delimited list of zero or more large
communities. `lcommunity_list_valid` can perform this check.
`lcommunity_list_valid` now validates large community lists more
accurately, based on the following conditions. Each quantity in a standard BGP
large community must:
1. Contain at least one digit
2. Fit within 4 octets
3. Contain only digits unless the lcommunity is "expanded"
4. Contain a valid regex if the lcommunity is "expanded"
Moreover, we validate that each large community contains exactly three
such values, separated by a single colon each.
One quirk of our validation which is worth documenting is:
```
bgp large-community-list standard test2 permit 1:c:3
bgp large-community-list expanded test1 permit 1:c:3
```
The first line will throw an error complaining about a "malformed community-list
value". The second line will be accepted because each value is treated as
a regex when matching large communities; it simply will never match anything, so
it's rather useless.
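For reference, a minimal stand-alone sketch of the standard-list rules above
(illustrative only; hypothetical helper name, not the FRR implementation).
Expanded entries skip the digits-only and range checks and are instead
validated as regular expressions.
```
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Validate a single *standard* large community "A:B:C": exactly three
 * quantities, each non-empty, digits only, and fitting in 4 octets. */
static bool demo_lcommunity_valid(const char *str)
{
	int parts = 0;

	while (*str) {
		const char *start = str;
		unsigned long long val;

		while (isdigit((unsigned char)*str))
			str++;

		if (str == start)		/* rule 1: at least one digit */
			return false;
		if (*str != ':' && *str != '\0')
			return false;		/* rule 3: digits only */

		val = strtoull(start, NULL, 10);
		if (val > UINT32_MAX)		/* rule 2: fits within 4 octets */
			return false;

		parts++;
		if (*str == ':')
			str++;
	}

	return parts == 3;			/* exactly three values */
}
```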
Signed-off-by: Wesley Coakley <wcoakley@nvidia.com>
When bgp is using wait-for-install semantics, it would be nice
to be able to debug it while it is running.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exists a path where we could possibly have a NULL pointer
dereference. Prevent this from happening.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Remove the awful test of a strmatch against a call to get_afi_safi_str.
These are the easy ones, since the real decision point is/was
underneath this test; it is just duplicate, expensive testing.
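Roughly the pattern being removed versus what replaces it, with hypothetical
names for illustration:
```
#include <string.h>

enum demo_afi { DEMO_AFI_IP = 1, DEMO_AFI_IP6 = 2, DEMO_AFI_L2VPN = 3 };
enum demo_safi { DEMO_SAFI_UNICAST = 1, DEMO_SAFI_EVPN = 5 };

/* hypothetical pretty-printer, analogous in spirit to get_afi_safi_str() */
static const char *demo_afi_safi_str(enum demo_afi afi, enum demo_safi safi)
{
	if (afi == DEMO_AFI_L2VPN && safi == DEMO_SAFI_EVPN)
		return "L2VPN EVPN";
	return "IPv4 Unicast";
}

/* before: format a string, then compare strings -- duplicate, costly work */
static int demo_is_evpn_before(enum demo_afi afi, enum demo_safi safi)
{
	return strcmp(demo_afi_safi_str(afi, safi), "L2VPN EVPN") == 0;
}

/* after: the enums that produced the string already answer the question */
static int demo_is_evpn_after(enum demo_afi afi, enum demo_safi safi)
{
	return afi == DEMO_AFI_L2VPN && safi == DEMO_SAFI_EVPN;
}
```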
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
EVPN route-map match (filter) on VNI is not working
at the origin of the routes.
The EVPN 'match vni' check requires the encap type to be VXLAN,
but the source route's attribute is not set with the VXLAN encap,
thus the match filter wouldn't work.
Ticket:CM-32554
Reviewed By:CCR-11056
Testing Done:
At the source, have a 'match vni' plus a 'set' statement in a route-map.
Validate that the origin of the route's outbound advertisement correctly
applies the 'set' statement based on the 'match vni' filter.
At origin:
route-map RM-EVPN-TE-Matches permit 10
match evpn vni 4001
set large-community 10:10:119
Receiving end:
Route [5]:[0]:[24]:[78.41.1.0] VNI 4001
  5550
    27.0.0.15 from TORS1(downlink-5) (27.0.0.15)
      Origin incomplete, metric 0, valid, external, bestpath-from-AS 5550, best (First path received)
      Extended Community: RT:5550:4001 ET:8 Rmac:00:02:00:00:00:4d
      Large Community: 10:10:119 <--- Large community stamped
      Last update: Thu Dec 10 22:19:26 2020
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Reference: https://www.cmand.org/communityexploration
--y2--
/ | \
c1 ---- x1 ---- y1 | z1
\ | /
--y3--
1. z1 announces 192.168.255.254/32 to y2, y3.
2. y2 and y3 tag this prefix at ingress with the appropriate
communities 65004:2 (y2) and 65004:3 (y3).
3. x1 filters all communities at the egress to c1.
4. Shutdown the link between y1 and y2.
5. y1 will generate a BGP UPDATE message regarding the next-hop change.
6. x1 will generate a BGP UPDATE message regarding community change.
To avoid sending duplicate BGP UPDATE messages, we should make sure
we send only actual route updates. In this example, x1 will skip the
BGP UPDATE to c1 because the advertised route is the same
(the communities are filtered, so nothing changes).
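A simplified sketch of the idea with hypothetical types (not the actual FRR
adj-out code): compare the post-policy attributes with what the peer last
received and skip the UPDATE when they match.
```
#include <stdbool.h>
#include <string.h>

/* Hypothetical, trimmed-down attribute set and adj-out entry. */
struct demo_attr {
	unsigned int med;
	unsigned int communities[8];
	unsigned int ncommunities;
};

struct demo_adj_out {
	struct demo_attr advertised;	/* what the peer last received */
	bool valid;
};

/* memcmp is a fair stand-in here: the struct is all fixed-width integers */
static bool demo_attr_same(const struct demo_attr *a, const struct demo_attr *b)
{
	return memcmp(a, b, sizeof(*a)) == 0;
}

/*
 * Only emit an UPDATE when the post-policy attributes actually differ from
 * what was previously advertised.  In the example above, x1 strips the
 * communities on egress, so the pre-policy change does not alter what c1
 * sees and no UPDATE is sent.
 */
static bool demo_should_send_update(struct demo_adj_out *adj,
				    const struct demo_attr *post_policy)
{
	if (adj->valid && demo_attr_same(&adj->advertised, post_policy))
		return false;	/* nothing the peer can see has changed */

	adj->advertised = *post_policy;
	adj->valid = true;
	return true;
}
```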
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
On top of the recent `bgp suppress-fib-pending`, which
was at the BGP_NODE level, add this command at the CONFIG_NODE
level as well and allow it to apply to all running
instances of bgp.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the user-provided Administrative Distance (AD) for local routes (aggregates).
address-family ipv4 unicast
distance bgp 20 200 210
network 47.2.2.8/30
aggregate-address 51.1.0.0/16
Testing Done:
Before: the aggregate route uses the default AD of 200 even when the user has provided a local AD.
B>* 51.1.0.0/16 [200/0] unreachable (blackhole), weight 1, 00:01:14
After:
B>* 51.1.0.0/16 [210/0] unreachable (blackhole), weight 1, 00:00:01
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Add a bit of code to allow bgp to send the AS-Path associated with
the route being installed to zebra so it can be displayed and
used as part of the `show ip route A` command in zebra.
eva# show ip route 20.0.0.0/11
Routing entry for 20.0.0.0/11
Known via "bgp", distance 20, metric 0, best
Last update 00:00:00 ago
* 192.168.161.1, via enp39s0, weight 1
AS-Path: 60000 64539 15096 6939 8075
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
bgp aggregate-address installs the route with the self peer, which
can have a peer->su of unspecified type.
bgp_distance_apply bailed out because sockunion2hostprefix fails to
parse an address of unspecified family.
config:
address-family ipv4 unicast
aggregate-address 50.1.0.0/16 summary-only
Testing Done:
Before:
B>* 50.1.0.0/16 [20/0] unreachable (blackhole), weight 1, 00:00:02
After:
B>* 50.1.0.0/16 [200/0] unreachable (blackhole), weight 1, 00:01:28
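A simplified sketch of the guard, with hypothetical names (not the literal
FRR change): treat a peer whose sockunion family is AF_UNSPEC as a locally
sourced route instead of bailing out of the distance lookup.
```
#include <stdint.h>
#include <sys/socket.h>

/* Hypothetical, trimmed-down types for illustration only. */
union demo_sockunion {
	struct sockaddr sa;
};

struct demo_peer {
	union demo_sockunion su;
};

static uint8_t demo_distance_apply(const struct demo_peer *from,
				   uint8_t distance_ebgp,
				   uint8_t distance_local)
{
	/*
	 * Self-originated routes (e.g. aggregates) carry an unspecified
	 * peer address; don't try to convert it to a prefix, just treat
	 * the route as local and return the configured local distance.
	 */
	if (from->su.sa.sa_family == AF_UNSPEC)
		return distance_local;

	/* ... real code would convert the peer sockunion to a prefix and
	 * consult per-neighbor / access-list distance configuration ... */
	return distance_ebgp;
}
```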
Signed-off-by: Chirag Shah <chirag@nvidia.com>
In bgp_zebra_announce, when iterating over a multipath,
we were checking to ensure that the nexthop was updated
but never initially clearing the nh_updated variable,
leading to a situation where we could crash.
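A minimal sketch of the fix pattern, with hypothetical names: the flag is
reset on every iteration of the multipath loop.
```
#include <stdbool.h>
#include <stddef.h>

struct demo_path {
	struct demo_path *next;
};

/* stubs standing in for the real per-path nexthop handling */
static bool demo_path_is_usable(struct demo_path *p) { (void)p; return true; }
static bool demo_update_nexthop(struct demo_path *p) { (void)p; return true; }

static void demo_announce_multipath(struct demo_path *paths)
{
	struct demo_path *p;
	bool nh_updated = false;

	for (p = paths; p; p = p->next) {
		/*
		 * The fix: clear the flag at the start of every iteration.
		 * Previously a 'true' left over from an earlier path could
		 * make a path whose nexthop was never actually updated look
		 * valid, which is the crash described above.
		 */
		nh_updated = false;

		if (demo_path_is_usable(p))
			nh_updated = demo_update_nexthop(p);

		if (!nh_updated)
			continue;	/* skip paths with unusable nexthops */

		/* ... append this nexthop to the zebra route message ... */
	}
}
```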
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
For each afi/safi of 'show bgp summary', display the peer description
whenever needed. This information is useful, for instance, in the case
of a device connected to multiple peers.
The topotest all_protocol_startup is changed accordingly.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
In the rare situation where we allocate the ES during the path link,
we fail to check/store the allocated ES pointer, leading to a
NULL dereference later in the function.
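A minimal sketch of the fix pattern, with hypothetical names (not the actual
ES code): store and check the pointer returned by the find-or-create helper
before using it.
```
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical, trimmed types: an Ethernet Segment and a path entry. */
struct demo_es { int vtep_count; };
struct demo_path { struct demo_es *es; };

/* find-or-create helper; may still return NULL on allocation failure */
static struct demo_es *demo_es_find_or_new(void)
{
	return calloc(1, sizeof(struct demo_es));
}

static int demo_path_es_link(struct demo_path *pi)
{
	/*
	 * The fix: capture and check the pointer returned by the
	 * find-or-create helper.  Dropping the return value and then
	 * dereferencing pi->es later is exactly the NULL dereference
	 * described above.
	 */
	struct demo_es *es = demo_es_find_or_new();

	if (!es)
		return -1;

	pi->es = es;
	pi->es->vtep_count++;
	return 0;
}
```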
Signed-off-by: Pat Ruddy <pat@voltanet.io>
Two L3 nexthop groups (NHGs) are installed per-VRF per-ES, one for v4 and
one for v6. These NHGs are used as an indirect destination for symmetric
IRB host routes.
Using L3 NHGs allows for efficient failover of an ES (similar to the
use of L2 NHGs), i.e. when an ES goes down the number of dataplane
updates is limited to 2xN (where N is the number of tenant VRFs
associated with the ES) instead of updating all host routes behind the
ES.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>