Two changes for debugging:
1. Add a field to indicate the nexthop's vrf. When the interface changes
vrf, we can't easily tell the vrf of this nexthop from the current log.
2. Add a field to indicate the operation type. We can't tell from the
current log whether the route is being added or removed.
Before:
```
zebra_nhg_increment_ref: nhe 0x555623eb82c0 (76[if 6]) 0 => 1
zebra_interface_nhg_reinstall install nhe 75[77.75.1.75 if 6] nh type 3 flags 0x1
Route 77.75.1.0/24(8) queued for processing into sub-queue Early Route Processing
Route 77.75.1.0/24(8) queued for processing into sub-queue Early Route Processing
```
After:
```
zebra_nhg_increment_ref: nhe 0x555623eb82c0 (76[if 6 vrfid 9]) 0 => 1
zebra_interface_nhg_reinstall install nhe 75[77.75.1.75 if 6 vrfid 8] nh type 3 flags 0x1
Route 77.75.1.0/24(8) (add) queued for processing into sub-queue Early Route Processing
Route 77.75.1.0/24(8) (delete) queued for processing into sub-queue Early Route Processing
```
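A rough sketch of the log-format change, assuming zebra's existing debug
guard and the `nhg_hash_entry` fields (`id`, `vrf_id`, `refcnt`); the real
call sites and format strings differ, so treat this as illustrative only:
```
/* hedged illustration, not the exact patch: extend the existing
 * nexthop-group debug output with the entry's vrf_id */
if (IS_ZEBRA_DEBUG_NHG)
	zlog_debug("%s: nhe %p (%u[if %d vrfid %u]) %u => %u", __func__,
		   nhe, nhe->id, nhe->nhg.nexthop->ifindex, nhe->vrf_id,
		   nhe->refcnt, nhe->refcnt + 1);
```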
Signed-off-by: anlan_cs <anlan_cs@tom.com>
The last_reset_cause is a plain old BGP_MAX_PACKET_SIZE buffer
that is really enlarging the peer data structure. Let's just
copy the stream that failed and only allocate however much
the packet actually was. While it's likely that we have
a reset reason, the packet is typically not going to be 65k
in size. Let's save space.
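As a minimal sketch of the approach, using libfrr's existing stream
helpers (the name `pkt` for the offending packet's stream is assumed
here):
```
/* sketch only: keep a dynamically sized copy of the packet that
 * caused the reset instead of a fixed 65k buffer in struct peer */
if (peer->last_reset_cause)
	stream_free(peer->last_reset_cause);
peer->last_reset_cause = stream_dup(pkt);
```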
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
GCC/clang warns about using strncpy in such a way that it does not copy
the null byte of a string; as implemented it was fine, but to fix the
warning, just use strlcat, which was purpose-made for the task being
accomplished here.
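A tiny illustration of the pattern, with the buffer and message names
invented for the example; strlcat appends and always NUL-terminates, so
the truncation warning goes away:
```
char buf[256] = "reset reason: ";	/* hypothetical buffer */

/* before (warns): strncpy(buf + strlen(buf), msg,
 *                         sizeof(buf) - strlen(buf) - 1); */
strlcat(buf, msg, sizeof(buf));	/* msg is a NUL-terminated string */
```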
Signed-off-by: Quentin Young <qlyoung@qlyoung.net>
Currently, the json output of show BGP commands has no non-pretty form;
pretty-printing is an extremely expensive operation at huge scale (lots
of routes with lots of paths).
BGP json non-pretty command support added:
```
show bgp neighbors <nbr-id> advertised-routes json
show bgp neighbors <nbr-id> received-routes json
show bgp neighbors <nbr-id> advertised-routes detail json
show bgp neighbors <nbr-id> received-routes detail json
```
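To illustrate the pretty versus non-pretty difference, here is a small
standalone sketch using the json-c library that FRR's vty json output is
built on; the exact FRR helper these commands call internally is not
shown here and is an assumption:
```
#include <json-c/json.h>
#include <stdio.h>

int main(void)
{
	struct json_object *json = json_object_new_object();

	json_object_object_add(json, "prefix",
			       json_object_new_string("10.0.0.0/24"));
	json_object_object_add(json, "pathCount", json_object_new_int(2));

	/* pretty: indented, one key per line - expensive at scale */
	printf("%s\n", json_object_to_json_string_ext(
			       json, JSON_C_TO_STRING_PRETTY));
	/* non-pretty: one compact line - what the new 'json' variants emit */
	printf("%s\n", json_object_to_json_string_ext(
			       json, JSON_C_TO_STRING_PLAIN));

	json_object_put(json);
	return 0;
}
```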
Ticket:#3513256
Issue:3513256
Testing: UT done
Signed-off-by: Sindhu Parvathi Gopinathan's <sgopinathan@nvidia.com>
The ringbuf is 650k in size. This is obscenely large, and
in practical experimentation FRR never even approaches
that size at all. Let's reduce this to 1.5 max packet sizes.
If a BGP_MAX_PACKET_SIZE packet is ever received, having a bit
of extra space ensures that we can read at least one packet.
This also will significantly reduce memory usage when the
operator has a lot of peers.
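A sketch of the sizing, assuming the buffer in question is the per-peer
`ibuf_work` ring allocated with libfrr's `ringbuf_new()`:
```
/* sketch: 1.5x the max packet size leaves room to complete one full
 * BGP_MAX_PACKET_SIZE packet even when a partial packet is buffered */
peer->ibuf_work =
	ringbuf_new(BGP_MAX_PACKET_SIZE + BGP_MAX_PACKET_SIZE / 2);
```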
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Commit: a0b937de42
Introduced the idea of an input Q packet limit. Say you read in
635000 bytes of data and the input Q is already at its limit
(currently 1000); then when bgp_process_reads runs it will
assert because there is less than a BGP_MAX_PACKET_SIZE in ibuf_work.
Don't assert, as it's irrelevant. Even if we can't read a full packet
in, let's let the whole system keep working; as the input Q length
comes down we will start draining ibuf_work and it will be ok.
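The intent, as an illustrative fragment rather than the literal diff
(libfrr's `ringbuf_space()` is real; the surrounding control flow is
assumed):
```
/* if the input queue backlog leaves less than one full packet of free
 * space in ibuf_work, stop reading for now instead of asserting; the
 * read resumes once the queue drains */
if (ringbuf_space(peer->ibuf_work) < BGP_MAX_PACKET_SIZE)
	return;		/* previously: assert() */
```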
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
PR#13413 introduced a reinstall mechanism, but there is a problem with
the route leak scenario.
With this route leak configuration (`x1` and `x2` are bound to `vrf1`):
```
vrf vrf2
ip route 75.75.75.75/32 77.75.1.75 nexthop-vrf vrf1
ip route 75.75.75.75/32 77.75.2.75 nexthop-vrf vrf1
exit-vrf
```
At first, everything is ok. But after `x1` is set down and up (the
interval between the down and up operations must be less than 180
seconds), `x1` is lost from the nexthop group:
```
anlan# ip nexthop
id 121 group 122/123 proto zebra
id 122 via 77.75.1.75 dev x1 scope link proto zebra
id 123 via 77.75.2.75 dev x2 scope link proto zebra
anlan# ip route show table 2
75.75.75.75 nhid 121 proto 196 metric 20
nexthop via 77.75.1.75 dev x1 weight 1
nexthop via 77.75.2.75 dev x2 weight 1
anlan# ip link set dev x1 down
anlan# ip link set dev x1 up
anlan# ip route show table 2 <- Wrong, one nexthop lost from group
75.75.75.75 nhid 121 via 77.75.2.75 dev x2 proto 196 metric 20
anlan# ip nexthop
id 121 group 123 proto zebra
id 122 via 77.75.1.75 dev x1 scope link proto zebra
id 123 via 77.75.2.75 dev x2 scope link proto zebra
anlan# show ip route vrf vrf2 <- Still ok
VRF vrf2:
S>* 75.75.75.75/32 [1/0] via 77.75.1.75, x1 (vrf vrf1), weight 1, 00:00:05
* via 77.75.2.75, x2 (vrf vrf1), weight 1, 00:00:05
```
From the impact on the kernel:
The `nh->type` of `id 122` is *always* `NEXTHOP_TYPE_IPV4` in the route leak
case. Then, `nexthop_is_ifindex_type()`, introduced by commit `5bb877`, always
returns `false`, so its dependents can't be reinstalled. After `x1` goes down,
there is only `id 123` in the group of `id 121`. So, finally, `id 121` remains
unchanged after `x1` comes back up, i.e., `id 122` is not added back to the
group even though it is reinstalled itself.
From the impact on zebra:
`show ip route vrf vrf2` is still ok because the `id`s are reused/reinstalled
successfully within 180 seconds after `x1` goes down and up. The group of
`id 121` still carries the old `NEXTHOP_GROUP_INSTALLED` flag and still
consists of `id 122` and `id 123` as before.
In this way, the kernel and zebra have become out of sync.
The `nh->type` of `id 122` should be adjusted to `NEXTHOP_TYPE_IPV4_IFINDEX`
after the nexthop is resolved. This commit does exactly that, so the
reinstall mechanism can work.
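A hedged fragment of that idea during nexthop resolution in zebra (the
exact function and the IPv6 counterpart shown here are assumptions):
```
/* once a gateway nexthop has been resolved onto an interface, reflect
 * that in its type so nexthop_is_ifindex_type() treats it as
 * reinstallable */
if (nexthop->type == NEXTHOP_TYPE_IPV4 && nexthop->ifindex)
	nexthop->type = NEXTHOP_TYPE_IPV4_IFINDEX;
else if (nexthop->type == NEXTHOP_TYPE_IPV6 && nexthop->ifindex)
	nexthop->type = NEXTHOP_TYPE_IPV6_IFINDEX;
```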
Signed-off-by: anlan_cs <anlan_cs@tom.com>
Currently, the json output of the evpn route commands has no non-pretty
form; pretty-printing is an extremely expensive operation at high VNI scale.
EVPN json non-pretty command support added:
```
show evpn mac vni <vni-id> detail json
show evpn vni detail json
```
Ticket:#3513256
Issue:3513256
Testing: UT done
Signed-off-by: Sindhu Parvathi Gopinathan's <sgopinathan@nvidia.com>
Currently, the json output of the show ip route command has no non-pretty
form; pretty-printing is an extremely expensive operation at high scale
(a large number of routes with many paths).
Zebra json non-pretty command support added:
```
show ip route json
```
Ticket:#3513256
Issue:3513256
Testing: UT done
Signed-off-by: Sindhu Parvathi Gopinathan's <sgopinathan@nvidia.com>
The peer->ibuf_scratch was allocating 65535 * 10 bytes
of scratch space to hold data incoming from a read
from a peer. When you have 4k peers this is 2,621,400,000
bytes, or roughly 2.6 GB of data, which is crazy large,
especially since the i/o pthread is reading per peer without
any chance of having the data interfere with other reads.
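One plausible shape of the saving, shown only as a sketch under the
assumption that the i/o pthread serializes reads, so a single buffer in
the read path can replace the per-peer array:
```
/* sketch: one scratch buffer owned by the i/o pthread's read path
 * instead of a 65535 * 10 byte array embedded in every struct peer */
static uint8_t ibuf_scratch[BGP_MAX_PACKET_SIZE * 10];
```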
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
```
configure: error: in `/src':
configure: error: protobuf requested but protoc-c not found. Install protobuf-c.
See `config.log' for more details
```
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
bgpd, pbrd: use common pbr encoder
zebra: use common pbr decoder
tests: pbr_topo1: check more filter fields
Purpose:
1. Reduce likelihood of zapi format mismatches when adding
PBR fields due to multiple parallel encoder implementations
2. Encourage common PBR structure usage among various daemons
3. Reduce coding errors via explicit per-field enable flags (see the
sketch below)
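Purely as an illustration of the per-field enable flag idea (these names
and layouts are invented for the example and are not FRR's actual
pbr/zapi structures):
```
#include <stdbool.h>
#include <stdint.h>

/* example-only flag bits marking which filter fields are populated */
#define EX_PBR_MATCH_SRC_IP  (1u << 0)
#define EX_PBR_MATCH_DST_IP  (1u << 1)
#define EX_PBR_MATCH_VLAN_ID (1u << 2)

struct ex_pbr_filter {
	uint32_t fields;	/* bitmask of EX_PBR_MATCH_* actually set */
	uint32_t src_ip;
	uint32_t dst_ip;
	uint16_t vlan_id;
};

/* a common encoder/decoder consults the flag before touching a field,
 * so sender and receiver agree on which optional fields are on the wire */
static inline bool ex_field_enabled(const struct ex_pbr_filter *f,
				    uint32_t flag)
{
	return (f->fields & flag) != 0;
}
```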
Signed-off-by: G. Paul Ziemba <paulz@labn.net>
Subset: doc and tests
doc
    PBR section updated with new fields and some copy-editing
tests
    pbr_topo1: ensure new vlan fields arrive at zebra
Changes by:
Josh Werner <joshuawerner@mitre.org>
Eli Baum <ebaum@mitre.org>
G. Paul Ziemba <paulz@labn.net>
Signed-off-by: G. Paul Ziemba <paulz@labn.net>