FRR supports exact-match in the match clause for standard
communities, but this option is missing for large communities (lcommunity).
Part 3: show-command-related changes for the match clause
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
FRR supports exact-match in the match clause for standard
communities, but this option is missing for large communities (lcommunity).
Part 2: CLI changes for the match clause
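A rough usage sketch of the intended CLI (the list number, community value
and route-map name are made up for illustration, and exact token spelling
can vary between FRR versions):

bgp large-community-list 1 permit 65001:1:1
!
route-map LC-EXACT permit 10
 match large-community 1 exact-match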
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
FRR supports exact-match in the match clause for standard
communities, but this option is missing for large communities (lcommunity).
Part 1: added support in the clist library
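As a rough illustration of the intended semantics only (a self-contained
sketch, not the clist library's actual API or data structures):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A large community is 12 octets: global admin plus two local parts. */
struct lcomm_val {
        uint8_t val[12];
};

/* Exact match: the route must carry precisely the communities named by
 * the list -- no extras, none missing. */
static bool lcomm_exact_match(const struct lcomm_val *attr, size_t attr_cnt,
                              const struct lcomm_val *list, size_t list_cnt)
{
        if (attr_cnt != list_cnt)
                return false;

        for (size_t i = 0; i < attr_cnt; i++) {
                bool found = false;

                for (size_t j = 0; j < list_cnt; j++) {
                        if (memcmp(attr[i].val, list[j].val, 12) == 0) {
                                found = true;
                                break;
                        }
                }
                if (!found)
                        return false;
        }
        return true;
}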
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
Add an expected count for the route nodes we will be processing as
part of nexthop resolution, and modify the type output to display a
useful string describing the type instead of a raw number.
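For illustration only (the enum values and names here are placeholders,
not zebra's actual identifiers), the display change amounts to something
like:

static const char *resolve_type2str(int type)
{
        switch (type) {
        case 0:                 /* placeholder: nexthop tracking */
                return "Nexthop";
        case 1:                 /* placeholder: import check */
                return "Import check";
        default:
                return "Unknown";
        }
}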
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The `bgp multiple-instance` command has been deprecated and
removed. Finish this off by removing it from the topotests too.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The `bgp multiple-instance` command has been removed but
we did not properly update the documentation. Let's do so.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
This error code is not returned anywhere in the system, since bgp
is now multiple-instance 'only' by default. So remove
the last remaining bits of it from the code base.
Remove BGP_ERR_MULTIPLE_INSTANCE_USED too.
Make bgp_get explicitly return BGP_SUCCESS
instead of 0.
Remove the multi-instance error code as well.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
There exists a state where we may have an rd node but no individual
evpn prefix nodes in the two-level table:
(gdb) bt
at bgpd/bgp_evpn_vty.c:1190
filter=FILTER_RELAXED) at lib/command.c:1060
at lib/command.c:1119
vtysh=vtysh@entry=0) at lib/command.c:1273
(gdb) f 5
at bgpd/bgp_evpn_vty.c:1190
1190 bgpd/bgp_evpn_vty.c: No such file or directory.
(gdb) p buf
$1 = "[2]:[0]:[48]:[00:00:00:00:00:00]", '\000' <repeats 240 times>...
(gdb) p json_nroute
$2 = (json_object *) 0x0
(gdb) p rd_header
$3 = 1
(gdb) p buf
$4 = "[2]:[0]:[48]:[00:00:00:00:00:00]", '\000' <repeats 240 times>...
(gdb)
I'm not entirely sure that this is not a `different` problem in that the
rd node should have been removed. But I think preventing the crash
in a show command is probably the right thing to do here.
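A hedged sketch of the kind of guard this implies (only json_nroute comes
from the gdb session above; the other names and the surrounding logic are
assumptions, not the literal patch):

/* If the RD node had no EVPN prefix nodes underneath it, the JSON
 * container was never allocated; skip it instead of dereferencing a
 * NULL pointer. */
if (use_json && json_nroute)
        json_object_object_add(json, rd_str, json_nroute);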
Fixes: #4501
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
On interface up/down, bgp stores the mac address of the interface
in a bgp_mac_hash table entry and then initiates a rescan
of the evpn l2vpn table. The problem with this scan is that
it is looking at every item in the table when only 1 mac
has changed. So every up/down event causes some major trauma
in the bgp_update processing.
Modify the mac scanning so that we know which mac changed and
reprocess only the entries that reference it.
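A minimal sketch of the filtering idea, with a local stand-in for the
ethernet-address type (not bgpd's real rescan code):

#include <stdbool.h>
#include <string.h>

struct eth_addr_stub {
        unsigned char octet[6];
};

/* Only entries whose stored Router MAC equals the MAC that just changed
 * are worth rescheduling through bgp_update processing. */
static bool rescan_should_process(const struct eth_addr_stub *entry_rmac,
                                  const struct eth_addr_stub *changed_mac)
{
        return memcmp(entry_rmac->octet, changed_mac->octet, 6) == 0;
}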
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Add a bit of extra code to indicate to the operator why
we intentionally rejected a kernel route instead of using it.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
If we get a callback for an interface change but we do not
actually have to move the mac entry in the hash, then
we were accidentally leaking the mac hash string all over
ourselves. Messy, messy!
Ticket: CM-25351
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
- When the connection to the FPM socket is established, iterate through all the
L3VNIs and send all of their RMACs for FPM processing in
"zfpm_conn_up_thread_cb" (see the sketch after this list).
- We have already handled the connection-down event in previous commits: when the
FPM connection goes down, "zfpm_conn_down_thread_cb" empties mac_q and the FPM
mac info hash table.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
- FPM write thread calls "zfpm_build_updates()" to process mac_q and dest_q and
to write update buffer over the FPM socket.
- "zfpm_build_updates()" processes all the update queues one by one in a while
loop. It will break the while loop and return if Queue processing function
returns "FPM_WRITE_STOP" OR FPM write buffer is full OR all the queues are
empty (no more update to process).
- "zfpm_build_route_updates()" dequeues and processes route nodes from "dest_q".
- "zfpm_build_mac_updates()" dequeues and processes MAC nodes from "mac_q"
- These queue processing functions return with "FPM_WRITE_STOP" if the write
buffer is full. Return value is "FPM_GOTO_NEXT_Q" if enough updates are
processed from this queue and we want to move on to the next queue.
- In each call, a queue processing function will process max
"FPM_QUEUE_PROCESS_LIMIT (10000)" updates to avoid starvation of other queues.
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
- Define a hook "zebra_rmac_update" which can be registered by multiple
data plane components (e.g. FPM, dplane):
DEFINE_HOOK(zebra_rmac_update,
            (zebra_mac_t *rmac, zebra_l3vni_t *zl3vni, bool delete,
             const char *reason),
            (rmac, zl3vni, delete, reason))
- While performing an RMAC add/delete for an L3VNI, call the
"zebra_rmac_update" hook (see the wiring sketch after this list).
- This hook call triggers "zfpm_trigger_rmac_update". In this function, we do a
lookup for the RMAC in fpm_mac_info_table. If already present, this node is
updated with the latest RMAC info. Else, a new fpm_mac_info_t node is created
and inserted in the queue and hash data structures.
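Roughly, with FRR's hook macros the wiring looks like this (the call sites
and the variable names at the call site are illustrative):

/* FPM module: subscribe to RMAC add/delete notifications. */
hook_register(zebra_rmac_update, zfpm_trigger_rmac_update);

/* zebra EVPN code: notify subscribers when an RMAC changes. */
hook_call(zebra_rmac_update, zrmac, zl3vni, false /* add */, "add RMAC");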
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
- FPM MAC structure: This data structure will contain all the information
required for FPM message generation for an RMAC.
struct fpm_mac_info_t {
        struct ethaddr macaddr;
        uint32_t zebra_flags;   /* Could be used to build FPM messages */
        vni_t vni;
        ifindex_t vxlan_if;
        ifindex_t svi_if;       /* L2 or L3 Bridge interface */
        struct in_addr r_vtep_ip;       /* Remote VTEP IP */

        /* Linkage to put MAC on the FPM processing queue. */
        TAILQ_ENTRY(fpm_mac_info_t) fpm_mac_q_entries;

        uint8_t fpm_flags;
};
- Queue structure for FPM processing:
For FPM processing, we build a queue of "fpm_mac_info_t". When RMAC is
added or deleted from zebra, fpm_mac_info_t node is enqueued in this queue
for the corresponding operation. FPM thread will dequeue these nodes one by
one to generate a netlink message.
TAILQ_HEAD(zfpm_mac_q, fpm_mac_info_t) mac_q;
- Hash table for "fpm_mac_info_t"
When zebra tries to enqueue fpm_mac_info_t for a new RMAC add/delete
operation, it is possible that this RMAC is already present in the queue. So,
to avoid multiple messages for duplicate RMAC nodes, insert fpm_mac_info_t
into a hash table.
struct hash *fpm_mac_info_table;
- Before enqueueing any MAC, try to fetch the fpm_mac_info_t from the hash
table first.
- Entry is deleted from the hash table when the node is dequeued.
- For hash table key generation, the parameters used are "mac address" and "vni".
This provides a fairly unique key for a MAC (fpm_mac_info_hash_keymake).
- The compare function uses "mac address", "RVTEP address" and "VNI" as the key,
which is sufficient to distinguish any two RMACs. This compare function is
used for fpm_mac_info_t lookup (zfpm_mac_info_cmp). A sketch of these two
functions follows.
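A hedged sketch of that key/compare pair, reusing the fpm_mac_info_t fields
shown above (the hash mixing here is a trivial fold, whereas the real code
would use FRR's jhash helpers; the callback signatures assume FRR's generic
hash table API and the usual includes):

static unsigned int fpm_mac_info_hash_keymake(const void *p)
{
        const struct fpm_mac_info_t *mac = p;
        const unsigned char *o = (const unsigned char *)&mac->macaddr;
        unsigned int key = mac->vni;

        /* Fold the 6 MAC octets into the VNI-seeded key. */
        for (int i = 0; i < 6; i++)
                key = (key << 5) ^ key ^ o[i];

        return key;
}

static bool zfpm_mac_info_cmp(const void *p1, const void *p2)
{
        const struct fpm_mac_info_t *a = p1;
        const struct fpm_mac_info_t *b = p2;

        if (memcmp(&a->macaddr, &b->macaddr, sizeof(a->macaddr)) != 0)
                return false;
        if (a->r_vtep_ip.s_addr != b->r_vtep_ip.s_addr)
                return false;

        return a->vni == b->vni;
}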
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
Found that the "show interface brief" command was missing the
ability to specify all vrfs. Added that capability via this
fix.
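For example, something along these lines (illustrative; the exact keyword
ordering accepted by vtysh may differ):

tor-1# show interface vrf all brief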
Ticket: CM-25139
Signed-off-by: Don Slice <dslice@cumulusnetworks.com>
FRR has no as-set option for aggregate routes under the IPv6
address family. Add the command to configure the as-set
option for IPv6.
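An illustrative configuration (the AS number and prefix are made up, and
the keyword order follows the existing IPv4 form):

router bgp 65001
 address-family ipv6 unicast
  aggregate-address 2001:db8::/32 as-set summary-only
 exit-address-family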
Signed-off-by: vishaldhingra <vdhingra@vmware.com>
If we get a failure notice from the kernel for a 5549 neighbor
entry, that means something has probably gone terribly wrong.
Let's notice and not reinstall.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The vifi being displayed is just confusing. Display the
actual interface name being used in the mroute.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The "show bgp vrfs" command output is reformatted in a couple
of ways:
Extend "show bgp vrfs" to include the bgp vrf instance's
SVI interface.
Move the L3-VNI, RMAC and SVI values to the next line.
Ticket:CM-25317
Reviewed By:CCR-8816
Testing Done:
New Output:
TORS1# show bgp vrfs
Type Id routerId #PeersVfg #PeersEstb Name
L3-VNI RouterMAC Interface
DFLT 0 27.0.0.15 2 2 default
0 00:00:00:00:00:00 unknown
VRF 31 45.0.8.2 0 0 vrf3
4003 00:02:00:00:00:4e vlan4003
VRF 35 45.0.2.2 0 0 vrf1
4001 00:02:00:00:00:4e vlan4001
VRF 25 45.0.6.2 0 0 vrf2
4002 00:02:00:00:00:4e vlan4002
Total number of VRFs (including default): 4
Signed-off-by: Chirag Shah <chirag@cumulusnetworks.com>