Since additional information such as block_bits_length is needed to
generate SIDs properly, the type of the elements in the
srv6_locator_chunks list is extended from "struct prefix_ipv6 *" to
"struct srv6_locator_chunk *". The name "struct srv6_locator_chunk *"
is also more appropriate for the variable.
Signed-off-by: Nobuhiro MIKI <nmiki@yahoo-corp.jp>
When BGP detects that a peering is using a global address, but no v6 LL
address has been created for the interface that the global address is
on, warn the user that something is amiss and needs to be fixed.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This patch adds transposition_offset and transposition_len to
bgp_sid_info, and transposes the SID only in bgp_zebra_announce.
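A minimal sketch of the transposition itself, assuming the usual 20-bit
MPLS label layout (helper shape assumed, not verbatim bgpd code):
```
/* Copy `len` bits of the 20-bit label into the SID, starting at
 * bit `offset` from the top of the address. */
static void transpose_sid(struct in6_addr *sid, uint32_t label,
			  uint8_t offset, uint8_t len)
{
	for (uint8_t i = 0; i < len; i++) {
		uint8_t t = offset + i;

		sid->s6_addr[t / 8] &= ~(0x1 << (7 - t % 8));
		if ((label >> (19 - i)) & 0x1)
			sid->s6_addr[t / 8] |= 0x1 << (7 - t % 8);
	}
}
```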
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
Currently the Wait for Install code (bgp_suppress_fib) does not
properly handle two states from zebra: ROUTE_INSTALL_FAILED and
BETTER_ADMIN_DISTANCE_WON. Before this change the WFI code would
simply never notify our peers about a route install failure, but more
is needed. In both the ROUTE_INSTALL_FAILED and
BETTER_ADMIN_DISTANCE_WON cases we need to notify our peers with a
withdrawal of the route, else we will continue to draw traffic to us
when we cannot legally do so.
Why is this needed? In either case imagine that we've already
received a bgp route, installed it and sent it to our peers.
In the BETTER_ADMIN_DISTANCE_WON case, say a static route is
installed; at this point we must stop advertising the route through
us, since we are not installed. As such a withdrawal must be sent.
In the ROUTE_INSTALL_FAILED case, the code was not properly handling
the situation where we have route A, it was successfully installed,
and then we received an update to route A whose installation was
attempted but failed. In this case we also need to send a withdrawal
(see the sketch below).
Finally, update the bgp_suppress_fib topotest to test both of these
situations.
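A rough sketch of the intended handling (enum value names and helper
call shape assumed, not verbatim bgpd code):
```
switch (note) {
case ZAPI_ROUTE_FAIL_INSTALL:
case ZAPI_ROUTE_BETTER_ADMIN_WON:
	/* We are no longer (or never were) in the FIB for this
	 * prefix, so peers must see a withdrawal. */
	UNSET_FLAG(dest->flags, BGP_NODE_FIB_INSTALLED);
	group_announce_route(bgp, afi, safi, dest, NULL);
	break;
default:
	break;
}
```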
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Duplicate a couple of definitions in order to remove the bgpd
includes from this libfrr header. This is necessary to fix some
name collisions like PREFIX_LIST_IN being defined differently in
multiple daemons (as soon as other daemons start including
route_opaque.h).
Including daemon headers in libfrr headers is a bad practice and
should be avoided whenever possible.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Since f60a1188 we store a pointer to the VRF in the interface structure.
There's no longer any need to store a separate vrf_id field.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Description:
Added a macro which optimises some part of the code.
Co-authored-by: Santosh P K <sapk@vmware.com>
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Iqra Siddiqui <imujeebsiddi@vmware.com>
We should always treat the VRF interface as a loopback. Currently, this
is not the case, because in some old pre-VRF code we use if_is_loopback
instead of if_is_loopback_or_vrf. To avoid any future problems, the
proposal is to rename if_is_loopback_or_vrf to if_is_loopback and use it
everywhere. if_is_loopback is renamed to if_is_loopback_exact in case
it's ever needed, but currently it's not used anywhere.
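A sketch of the resulting helpers (flag check simplified; not verbatim
lib/if.c):
```
bool if_is_loopback_exact(const struct interface *ifp)
{
	return CHECK_FLAG(ifp->flags, IFF_LOOPBACK);
}

bool if_is_loopback(const struct interface *ifp)
{
	/* VRF interfaces are always treated as loopbacks. */
	return if_is_loopback_exact(ifp) || if_is_vrf(ifp);
}
```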
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This removes a giant `switch { }` block from lib/zclient.c and
harmonizes all zclient callback function types to be the same (some had
a subset of the args, some had a void return, now they all have
ZAPI_CALLBACK_ARGS and int return.)
Apart from getting rid of the giant switch, this is a minor security
benefit since the function pointers are now in a `const` array, so they
can't be overwritten by e.g. heap overflows for code execution anymore.
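The resulting shape, roughly (handler names illustrative):
```
typedef int (zclient_handler)(ZAPI_CALLBACK_ARGS);

/* const array: the function pointers can no longer be overwritten
 * at runtime, and dispatch is a simple indexed lookup. */
static zclient_handler *const bgp_handlers[] = {
	[ZEBRA_ROUTER_ID_UPDATE] = bgp_router_id_update,
	[ZEBRA_INTERFACE_ADDRESS_ADD] = bgp_interface_address_add,
};
```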
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
if_lookup_by_name_all_vrf doesn't work correctly with the netns VRF
backend, as the same index may be used in multiple netns simultaneously.
Use the appropriate VRF when looking for the interface.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
```
exit1-debian-9# show ip route 172.16.16.1/32
Routing entry for 172.16.16.1/32
Known via "bgp", distance 20, metric 0, best
Last update 00:00:28 ago
* 192.168.0.2, via eth1, weight 1
AS-Path : 65003
Communities : first 65001:2 65001:3
Large-Communities: 65001:1:1 65001:1:2 65001:1:3
Selection reason : First path received
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
With v6 interface-based peering, we send the global as well as the LL
address as nexthops to the peer. When either of these was removed on
the interface, we were not necessarily resetting the connection,
leaving bgp in a state where the peer had reachability for addresses
that are no longer in use.
Modify the code so that when we receive an interface address deletion
event, we check whether we are using that v6 address as a nexthop for
the peer and, if so, tell it to reset (see the sketch below).
I initially hesitated between a hard reset of the peer and a clear,
but chose to follow other places in the code where noticed address
changes resulted in hard resets.
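Roughly, the new check on address deletion looks like this (flag and
reason names assumed for illustration):
```
/* On an interface address delete: if the removed v6 address is one
 * we advertised as a nexthop to this peer, reset the session. */
if (IPV6_ADDR_SAME(&peer->nexthop.v6_global, &addr->u.prefix6) ||
    IPV6_ADDR_SAME(&peer->nexthop.v6_local, &addr->u.prefix6)) {
	peer->last_reset = PEER_DOWN_NBR_ADDR; /* reason name assumed */
	bgp_notify_send(peer, BGP_NOTIFY_CEASE,
			BGP_NOTIFY_CEASE_CONFIG_CHANGE);
}
```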
Ticket: #2799568
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
These are no longer really needed. The client just needs
to call nexthop resolution instead.
So let's remove the zapi types.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, has_valid_label is only used to decide whether debug output
is printed, but if a route has both normal nexthops and MPLS nexthops,
label information will be printed even for the normal nexthops.
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
In the current implementation, only the last path in mpinfo is treated
as a seg6 nexthop, but all paths should be treated as seg6 nexthops.
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
Description:
This change is intended to fix the following issues related to VRF
route leaking:
Routes with special nexthops, i.e. blackhole/sink routes, when
imported, are not programmed into the FIB; the corresponding nexthop
is set as 'inactive' and the nexthop interface as 'unknown'.
While importing/leaking routes between VRFs, in the case of a special
nexthop (ipv4/ipv6), once bgp announces the route(s) to zebra, the
nexthop type is incorrectly set as
NEXTHOP_TYPE_IPV6_IFINDEX/NEXTHOP_TYPE_IFINDEX, i.e. directly
connected, even though we are not able to resolve through an interface.
This leads to nexthop_active_check marking the nexthop
!NEXTHOP_FLAG_ACTIVE. Unable to find an active nexthop, zebra does not
program the route into the FIB.
Whenever BGP leaks routes, set the correct nexthop type so that the
route gets resolved and correctly programmed into the FIB in the
imported VRF (see the sketch below).
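The essence of the fix, as a sketch (zapi nexthop field names from
lib/zclient.h; exact placement assumed):
```
/* When announcing a leaked route, preserve a blackhole nexthop
 * instead of letting it default to an interface-based type. */
if (nexthop->type == NEXTHOP_TYPE_BLACKHOLE) {
	api_nh->type = NEXTHOP_TYPE_BLACKHOLE;
	api_nh->bh_type = nexthop->bh_type;
}
```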
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Iqra Siddiqui <imujeebsiddi@vmware.com>
There's no IPv6 LL address on loopback/vrf interfaces. So if the user
configures update-source, the session is never going to be established.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
When bgp receives the admin distance from a redistribution statement
let's store that distance for later usage.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Until now, when a bgp flowspec entry's action was to redirect to a
VRF, a default route was installed in a specific table; that route was
a VRF route-leak one. The process can be simplified, as vrf-lite
already has a table identifier. Since policy routing is already used
to redirect traffic to a defined table (with the ip rule command), use
the table identifier of the VRF directly.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
When setting up a bgp configuration with peers referencing link-local
ipv6 addresses, bgp should be able to handle incoming connections and
find the appropriate interface the connection comes from.
ipv6 link-local sessions work when using the bgp unnumbered-interface
config, but not if we have a shared medium with multiple potential
link-local ipv6 addresses on the network.
The fix consists of finding the appropriate interface when the local
configuration references a link-local ipv6 address and the source
address used references an interface. The configuration below
illustrates what can then be done:
neighbor fe80::4113:5bba:2b61:b20c remote-as 55
neighbor fe80::4113:5bba:2b61:b20c update-source eth0
Note: this change does not give such a config the ability to create an
outgoing connection to the remote peer (as the link-local ipv6 address
config does not indicate which interface to use). A sketch of the
lookup follows.
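Sketch of the interface derivation (helper and field names assumed):
```
/* Incoming connection from a link-local peer: derive the scope
 * (interface) from the configured update-source interface. */
if (IN6_IS_ADDR_LINKLOCAL(&su->sin6.sin6_addr) && peer->update_if)
	su->sin6.sin6_scope_id =
		ifname2ifindex(peer->update_if, peer->bgp->vrf_id);
```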
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
There are startup situations where we will attempt to connect to a
remote peer before bgp has received the v6 LL address. In those
situations where a v6 LL address is required, we must not allow the
connection to come up until we have one available to use.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The SVI ifindex for an L2VNI is required in BGP to perform EVPN type-5
to type-2 recursive resolution using the gateway IP overlay index.
Program this svi_ifindex in struct zebra_evpn_t as well as in struct
bgpevpn.
Changes include:
1. Add an svi_if field to struct zebra_evpn_t.
2. Add an svi_ifindex field to struct bgpevpn.
3. When an SVI (bridge or VLAN) is bound to a VxLAN interface, store it
in the zebra_evpn_t structure.
4. Add this SVI ifindex to ZEBRA_VNI_ADD (see the sketch below).
5. Store svi_ifindex in struct bgpevpn.
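Step 4 amounts to one extra field in the VNI message, e.g. (encoding
sketch; field names assumed):
```
/* zebra side: append the SVI ifindex to ZEBRA_VNI_ADD */
stream_putl(s, zevpn->svi_if ? zevpn->svi_if->ifindex : 0);
```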
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
The IP/IPv6 prefix carried with an EVPN RT-5 is imported into the BGP
VRF according to the attached route targets.
If the prefix carries a gateway IP overlay index, this gateway IP
should be installed as the nexthop of the route imported into the BGP
VRF.
The route in the VRF will be marked VALID only if the nexthop is
resolved in the SVI network.
To receive runtime reachability information for the nexthop, register
it with the nexthop tracking module.
Send this route to zebra after processing (see the sketch below).
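In outline (call shapes assumed, not verbatim bgpd code):
```
/* Use the gateway IP as the imported route's nexthop and register
 * it with nexthop tracking; validity follows resolution. */
attr.nexthop = gw_ip; /* from the RT-5 overlay index */
if (bgp_find_or_add_nexthop(bgp_vrf, bgp_vrf, afi, pi, NULL, 0))
	bgp_path_info_set_flag(dest, pi, BGP_PATH_VALID);
else
	bgp_path_info_unset_flag(dest, pi, BGP_PATH_VALID);
```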
Signed-off-by: Ameya Dharkar <adharkar@vmware.com>
We are inconsistently using peer_established(peer), while sometimes
using `peer->status == Established`. Just convert over to using the
function for consistency.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
https://github.com/FRRouting/frr/pull/5865#discussion_r597670225
As this comment says, ZEBRA_FLAG_XXX should not have been used to
communicate SRv6 route information; a simple nexthop flag would have
been sufficient for SRv6 information. I fixed the whole thing that way.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit makes bgpd support installing H.Encaps (seg6 mode segs)
routes when the VPN prefix has a Prefix-SID.
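Sketched, the announce path gains something like this (field names
assumed for this era of the ZAPI):
```
/* Flag the zapi nexthop as seg6 and hand zebra the SID so it
 * installs an H.Encaps route. */
SET_FLAG(api_nh->flags, ZAPI_NEXTHOP_FLAG_SEG6);
memcpy(&api_nh->seg6_segs, &mpinfo->extra->sid[0],
       sizeof(struct in6_addr));
```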
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
This commit adds a command to specify an SRv6 locator for BGP
SRv6-VPN; a CLI example follows below. The CLI block of
"segment-routing" is already implemented by previous commits and is
managed by zebra.
Zebra manages just the ownership of the locator's prefix.
A zclient can request an srv6-locator's prefix chunk using
srv6_manager_get_locator_chunk(), which is a useful function to
execute the ZEBRA_SRV6_MANAGER_GET_LOCATOR_CHUNK API. This request
works asynchronously, and zebra calls the same API back to zclients
when it allocates a locator prefix chunk.
Finally, the zclient (bgpd) catches the information via the
process_srv6_locator_chunk callback function (see the sketch after the
configuration below).
router bgp 1
segment-routing srv6
locator loc1
!
!
segment-routing
srv6
locators
locator loc1
prefix 2001:db8:1:1::/64
!
!
!
!
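The async flow described above, in sketch form (callback wiring
assumed):
```
/* bgpd asks zebra for the chunk of the configured locator ... */
srv6_manager_get_locator_chunk(zclient, "loc1");

/* ... zebra later answers with the same API, and the zclient
 * dispatches the reply to process_srv6_locator_chunk(). */
```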
[POINT_OF_REVIEW]
In the current implementation, the user can configure an srv6 locator
but cannot de-configure it.
Signed-off-by: Hiroki Shirokura <slank.dev@gmail.com>
If there's no default router configured at the moment when bgpd is
connected to zebra, bgpd is not registered as a BFD client.
We should do the registration regardless of the config existence.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
This includes community and large-community data.
```
exit1-debian-9# show ip route 172.16.16.1/32
Routing entry for 172.16.16.1/32
Known via "bgp", distance 20, metric 0, best
Last update 00:00:23 ago
* 192.168.0.2, via eth1, weight 1
AS-Path : 65030
Communities : 65001:1 65001:2 65001:3 65001:4 65001:5 65001:6
Large-Communities: 65001:123:1 65001:123:2
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Use unsigned value for all RA requests to Zebra
- encoding signed int as unsigned is bad practice
- RA interval is never, and should never be, negative
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
Description:
After an FRR restart, routes are not getting redistributed when routes
are added first and the 'redistribute static' command is issued
afterwards.
During the FRR restart, the vrf_id will be unknown, so irrespective of
redistribution we set the redistribute vrf bitmap.
Later, when we add a route and then issue the 'redistribute' command,
we check the redistribute vrf bitmap and return CMD_WARNING;
zebra_redistribute_add also checks the redistribute vrf bitmap and
returns.
Instead of checking the redistribute vrf bitmap, always set it anyway
(see the sketch below).
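The change boils down to this (sketch; exact lookup elided):
```
/* Set the redistribute vrf bitmap unconditionally instead of
 * returning early when it is already set. */
vrf_bitmap_set(zclient->redist[afi][type], zvrf_id(zvrf));
```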
Co-authored-by: Santosh P K <sapk@vmware.com>
Co-authored-by: Kantesh Mundaragi <kmundaragi@vmware.com>
Signed-off-by: Abhinay Ramesh <rabhinay@vmware.com>
When "bgp bestpath peer-type multipath-relax" is enabled, multipaths
with both eBGP and iBGP learned routes may exist. It is not desirable
for the iBGP next hops to be discarded from the FIB because they are not
directly connected. When publishing a nexthop group to zebra, the
ZEBRA_FLAG_ALLOW_RECURSION flag is normally not set when the best path
is eBGP; when "bgp bestpath aspath multipath-relax" is configured, the
flag will now be set if any paths are from iBGP peers. This leaves
all-eBGP multipaths still requiring nexthops over connected routes.
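The flag logic, roughly (the multipath scan helper is hypothetical):
```
/* Allow recursive resolution if the best path is iBGP or, with
 * peer-type multipath-relax, if any multipath peer is iBGP. */
if (info->peer->sort == BGP_PEER_IBGP || mpath_has_ibgp_peer(info))
	SET_FLAG(api.flags, ZEBRA_FLAG_ALLOW_RECURSION);
```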
Signed-off-by: Joanne Mikkelson <jmmikkel@arista.com>
The BFD function `bgp_bfd_is_peer_multihop` will no longer exist, and
now both code paths are equal.
Longer explanation:
Cumulus was previously using the BFD function to help determine
whether a peer is multihop or not, because there is a configuration
knob to set BFD to single or multihop.
Current BFD code can automatically pick between single/multihop using
the protocol information, so it is a better idea to have that
tested/used than to rely on yet another piece of duplicated
information. (BFD extracts the TTL information from the protocol and
selects single/multihop based on that.)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
When a local ES is in LACP bypass state, BGP doesn't advertise
reachability to it, i.e. the Type-1/EAD-per-ES routes and the Type-4
route for the ES are not advertised. This is the equivalent of
oper-down handling.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
bgp is currently registering v6 LL addresses as nexthops to be tracked
by zebra. This presents several problems:
a) zebra does not properly track multiple prefixes that match
the same route at this point in time.
b) BGP was receiving nexthops that were simply incorrect because
of (a).
c) When a nexthop changed in a way that really didn't affect the v6
LL, we were responding incorrectly because of this.
Modify the code such that bgp nexthop tracking notices that
we are trying to register a v6 LL. When we do so, shortcut
and watch interface up/down events for this v6 LL, and do the work
when an interface goes up or down for this type of tracking (see the
sketch below).
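In sketch form (flag and field names assumed):
```
/* Registering a v6 LL nexthop: skip zebra prefix registration and
 * drive validity from the peer interface's up/down state. */
if (p.family == AF_INET6 && IN6_IS_ADDR_LINKLOCAL(&p.u.prefix6)) {
	bnc->ifindex = peer->su.sin6.sin6_scope_id;
	if (peer->ifp && if_is_up(peer->ifp))
		SET_FLAG(bnc->flags, BGP_NEXTHOP_VALID);
	return 1;
}
```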
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
```
(gdb) bt
0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
1 0x00007fe57ca4a42a in __GI_abort () at abort.c:89
2 0x00007fe57ddd1935 in core_handler (signo=6, siginfo=0x7ffc81067570, context=<optimized out>) at lib/sigevent.c:255
3 <signal handler called>
4 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
5 0x00007fe57ca4a42a in __GI_abort () at abort.c:89
6 0x00007fe57ddd1935 in core_handler (signo=11, siginfo=0x7ffc81067e30, context=<optimized out>) at lib/sigevent.c:255
7 <signal handler called>
8 0x000055a7b25b923f in bgp_path_info_to_ipv6_nexthop (ifindex=ifindex@entry=0x7ffc810683c0, path=<optimized out>, path=<optimized out>) at bgpd/bgp_zebra.c:909
9 0x000055a7b25bb2e5 in bgp_zebra_announce (dest=dest@entry=0x55a7b5239c10, p=p@entry=0x55a7b5239c10, info=info@entry=0x55a7b5239cd0, bgp=bgp@entry=0x55a7b518b090, afi=afi@entry=AFI_IP6, safi=safi@entry=SAFI_UNICAST) at bgpd/bgp_zebra.c:1358
10 0x000055a7b256af6a in bgp_process_main_one (bgp=0x55a7b518b090, dest=0x55a7b5239c10, afi=AFI_IP6, safi=SAFI_UNICAST) at bgpd/bgp_route.c:2918
11 0x000055a7b256b0ee in bgp_process_wq (wq=<optimized out>, data=0x55a7b5221800) at bgpd/bgp_route.c:3027
12 0x00007fe57ddea2e0 in work_queue_run (thread=0x7ffc8106cd60) at lib/workqueue.c:291
13 0x00007fe57dde0781 in thread_call (thread=thread@entry=0x7ffc8106cd60) at lib/thread.c:1684
14 0x00007fe57dda84b8 in frr_run (master=0x55a7b48aaf00) at lib/libfrr.c:1126
15 0x000055a7b250a7da in main (argc=<optimized out>, argv=<optimized out>) at bgpd/bgp_main.c:540
(gdb)
```
This crashes with configs like:
```
router bgp 65534
no bgp ebgp-requires-policy
no bgp network import-check
!
address-family ipv6 unicast
import vrf donatas <<<<<< Crashes when entering this command
exit-address-family
!
router bgp 65534 vrf donatas
no bgp ebgp-requires-policy
no bgp network import-check
neighbor fe80::c15a:ddab:1689:db86 remote-as 65025
neighbor fe80::c15a:ddab:1689:db86 interface eth2
neighbor fe80::c15a:ddab:1689:db86 update-source eth2
neighbor fe80::c15a:ddab:1689:db86 capability extended-nexthop
!
address-family ipv6 unicast
network 2a02:face::/32 <<<<<< Crashes due to static networks
neighbor fe80::c15a:ddab:1689:db86 activate
exit-address-family
!
```
Locally configured routes do not have peer->su_remote.
```
exit1-debian-9# show bgp ipv6 unicast
BGP table version is 3, local router ID is 192.168.100.1, vrf id 0
Default local pref 100, local AS 65534
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 2a02:abc::/64 fe80::c15a:ddab:1689:db86@5<
0 65025 i
2a02:face::/32 ::@5< 0 32768 i
Displayed 2 routes and 2 total paths
exit1-debian-9#
```
Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Add SNMP support for the L3VPN VRF table as defined in [RFC4382].
Keep track of VRF status for the table and for future traps.
Signed-off-by: Pat Ruddy <pat@voltanet.io>
In bgp_zebra_announce we do work to apply the table map.
This is the same for both v4 and v6, but we have the code
duplicated in both the v4 and v6 if statements. Move it outside
to reduce the duplication.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
BGP has created some redundant checks in bgp_zebra_announce()
Reduce the multiple if statements and consolidate a bit.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add a bit of code to allow bgp to send the AS-Path associated with
the route being installed to zebra so it can be displayed and
used as part of the `show ip route A` command in zebra.
eva# show ip route 20.0.0.0/11
Routing entry for 20.0.0.0/11
Known via "bgp", distance 20, metric 0, best
Last update 00:00:00 ago
* 192.168.161.1, via enp39s0, weight 1
AS-Path: 60000 64539 15096 6939 8075
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
bgp aggregate-address installs a route with the self peer, which
can have a peer->su of unspecified type.
bgp_distance_apply bailed out, as sockunion2hostprefix fails to
parse an address family of type unspec.
config:
address-family ipv4 unicast
aggregate-address 50.1.0.0/16 summary-only
Testing Done:
Before:
B>* 50.1.0.0/16 [20/0] unreachable (blackhole), weight 1, 00:00:02
After:
B>* 50.1.0.0/16 [200/0] unreachable (blackhole), weight 1, 00:01:28
Signed-off-by: Chirag Shah <chirag@nvidia.com>
In bgp_zebra_announce, when iterating over multipaths we were checking
to ensure that the nexthop was updated, but never initially clearing
the nh_updated variable, thus leading to a situation where we could
crash.
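The fix is a per-iteration initialization, roughly (loop shape
sketched):
```
for (; mpinfo; mpinfo = bgp_path_info_mpath_next(mpinfo)) {
	/* previously left carrying the prior iteration's value */
	bool nh_updated = false;

	/* ... per-nexthop handling sets nh_updated on success ... */
}
```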
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Two L3 nexthop groups are installed per-VRF per-ES, for v4 and v6.
These NHGs are used as an indirect destination for symmetric IRB host
routes.
Using L3NHGs allows for efficient failover of an ES (similar to the
use of L2NHGs), i.e. when an ES goes down the number of dataplane
updates is limited to 2xN (where N is the number of tenant VRFs
associated with the ES) instead of updating all host routes behind the
ES.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Host routes imported into the VRF can have a destination ES (per-VRF)
which is set up as an L3NHG for efficient failover.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
The `enum zclient_send_status` needs to be extended
throughout the code base to use the new states and
to fix up places where we tested the return value
against being non-zero.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add an `enum zclient_send_status` for appropriate handling
of return codes from zclient_send_message. Touch all the places
where we handle this (see the sketch below).
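The states, as introduced (shown for context; values believed
accurate):
```
enum zclient_send_status {
	ZCLIENT_SEND_FAILURE = -1,  /* error writing to the socket */
	ZCLIENT_SEND_SUCCESS = 0,   /* data fully written */
	ZCLIENT_SEND_BUFFERED = 1,  /* socket full; data queued */
};
```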
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When FRR sends data over the ZAPI protocol from the upper levels to zebra, indicate
to the calling functions that we have started buffering data to be sent if the
socket is full underneath it.
Also add a callback function, `zebra_buffer_write_ready`, that we can
call when an upper-level protocol's socket buffer has been drained.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The recent change to use %pFX missed a code path
where we were displaying a buffer that was uninitialized.
Display the prefix as intended.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exists a code path where the esi would be passed
to a debug without being set up with any values,
causing us to display whatever is on the stack.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The route_map_object_t was being used to track what protocol we were
being called against, but each protocol was only ever calling itself.
So we had a variable, only ever passed in from route_map_apply, that
had to be carried around and that everyone tested to see whether it
was for their own stack.
Clean up this route_map_object_t from the entire system. We should
speed some stuff up. Yes, I know, not a bunch, but this will add up.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
* Process the FIB update in bgp_zebra_route_notify_owner() and call
group_announce_route() if the route is installed (see the sketch
after this list)
* When a bgp update is received for a route which was not installed
earlier (flag BGP_NODE_FIB_INSTALLED is not set) and suppress fib is
enabled, set the flag BGP_NODE_FIB_INSTALL_PENDING to indicate a fib
install is pending for the route. The route will be advertised when
zebra sends the ZAPI_ROUTE_INSTALLED status.
* The advertisement delay (BGP_DEFAULT_UPDATE_ADVERTISEMENT_TIME)
is added to allow more routes to be sent in a single update message.
This is required since zebra sends a route notify message for each
route. The delay is applied to the update group timer which advertises
routes to peers.
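The installed-path handling, sketched (flag names from the list above;
surrounding code elided):
```
switch (note) {
case ZAPI_ROUTE_INSTALLED:
	SET_FLAG(dest->flags, BGP_NODE_FIB_INSTALLED);
	UNSET_FLAG(dest->flags, BGP_NODE_FIB_INSTALL_PENDING);
	/* advertise now that the route is actually in the FIB */
	group_announce_route(bgp, afi, safi, dest, new_select);
	break;
default:
	break;
}
```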
Signed-off-by: kssoman <somanks@gmail.com>
DF (Designated Forwarder) election is used for picking a single
BUM-traffic forwarder per-ES. RFC 7432 specifies a mechanism called
service carving for DF election. However, that mechanism has many
disadvantages:
1. It load-balances poorly.
2. It doesn't allow for the controlled failover needed in upgrade
scenarios.
3. It is not easy to accelerate in hardware.
To fix the poor performance of service carving, alternate DF
mechanisms have been proposed via the following drafts:
draft-ietf-bess-evpn-df-election-framework
draft-ietf-bess-evpn-pref-df
This commit adds support for the pref-df election mechanism, which
is used as the default. Other mechanisms, including service carving,
may be added later.
In this mechanism one switch on an ES is elected as DF based on the
preference value; higher preference wins, with the IP address acting
as the tie-breaker (lower IP wins if the pref value is the same); see
the sketch below.
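The election rule reduces to a simple comparison (sketch; helper name
hypothetical):
```
/* Higher preference wins; lower VTEP IP breaks ties. */
static bool df_pref_wins(uint32_t pref_a, struct in_addr ip_a,
			 uint32_t pref_b, struct in_addr ip_b)
{
	if (pref_a != pref_b)
		return pref_a > pref_b;
	return ntohl(ip_a.s_addr) < ntohl(ip_b.s_addr);
}
```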
Sample output
=============
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
torm-11# sh bgp l2vpn evpn es 03:00:00:00:00:01:11:00:00:01
ESI: 03:00:00:00:00:01:11:00:00:01
Type: LR
RD: 27.0.0.15:6
Originator-IP: 27.0.0.15
Local ES DF preference: 100
VNI Count: 10
Remote VNI Count: 10
Inconsistent VNI VTEP Count: 0
Inconsistencies: -
VTEPs:
27.0.0.16 flags: EA df_alg: preference df_pref: 32767
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
torm-11# sh bgp l2vpn evpn route esi 03:00:00:00:00:01:11:00:00:01
*> [4]:[03:00:00:00:00:01:11:00:00:01]:[32]:[27.0.0.15]
27.0.0.15 32768 i
ET:8 ES-Import-Rt:00:00:00:00:01:11 DF: (alg: 2, pref: 100)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>