Update ospfd and ospf6d to send opaque route attributes to
zebra. Those attributes are stored in the RIB and can be viewed
using the "show ip[v6] route" commands (other than that, they are
completely ignored by zebra).
Example:
```
debian# show ip route 192.168.1.0/24
Routing entry for 192.168.1.0/24
  Known via "ospf", distance 110, metric 20, best
  Last update 01:57:08 ago
  * 10.0.1.2, via eth-rt2, weight 1
      OSPF path type : External-2
      OSPF tag       : 0
debian#
debian# show ip route 192.168.1.0/24 json
{
"192.168.1.0\/24":[
{
"prefix":"192.168.1.0\/24",
"prefixLen":24,
"protocol":"ospf",
"vrfId":0,
"vrfName":"default",
"selected":true,
[snip]
"ospfPathType":"External-2",
"ospfTag":"0"
}
]
}
```
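On the ospfd side the mechanism boils down to filling a small opaque
blob and flagging it on the zapi route. A rough sketch (the payload
struct and helper below are illustrative, not the exact code; the real
layout lives in lib/route_opaque.h and the zapi field names may differ
slightly):
```
#include <zebra.h>
#include "zclient.h" /* struct zapi_route, ZAPI_MESSAGE_OPAQUE */

/* Illustrative opaque payload; the real layout is in lib/route_opaque.h. */
struct ospf_opaque_example {
	char path_type[32];
	char tag[16];
};

static void ospf_append_opaque_example(struct zapi_route *api,
				       const char *path_type, uint32_t tag)
{
	struct ospf_opaque_example opaque = {};

	snprintf(opaque.path_type, sizeof(opaque.path_type), "%s", path_type);
	snprintf(opaque.tag, sizeof(opaque.tag), "%u", tag);

	/* Flag the message and copy the blob; zebra stores it verbatim in
	 * the RIB and only formats it for the show commands. */
	SET_FLAG(api->message, ZAPI_MESSAGE_OPAQUE);
	api->opaque.length = sizeof(opaque);
	memcpy(api->opaque.data, &opaque, api->opaque.length);
}
```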
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Duplicate a couple of definitions in order to remove the bgpd
includes from this libfrr header. This is necessary to fix some
name collisions like PREFIX_LIST_IN being defined differently in
multiple daemons (as soon as other daemons start including
route_opaque.h).
Including daemon headers in libfrr headers is a bad practice and
should be avoided whenever possible.
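The pattern looks roughly like this (all macro names below are invented
for illustration; the real header duplicates different definitions):
```
/* lib/route_opaque.h (illustration): carry a local copy of the value
 * instead of pulling in a bgpd header from a libfrr header. */
#define ROUTE_OPAQUE_EXAMPLE_STR_LEN 64

/* bgpd/bgpd.h (illustration): the daemon's own definition. */
#define BGPD_EXAMPLE_STR_LEN 64

/* bgpd/*.c: a compile-time check keeps the duplicate in sync. */
_Static_assert(ROUTE_OPAQUE_EXAMPLE_STR_LEN == BGPD_EXAMPLE_STR_LEN,
	       "libfrr copy must stay in sync with the bgpd definition");
```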
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Since f60a1188 we store a pointer to the VRF in the interface structure.
There's no longer any need to store a separate vrf_id field.
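For callers this mostly means dereferencing the stored pointer instead
of reading a duplicated member. A minimal sketch, assuming the current
struct interface / struct vrf layout:
```
#include <zebra.h>
#include "if.h"  /* struct interface, now carrying a struct vrf pointer */
#include "vrf.h" /* struct vrf */

/* Illustrative helper: the VRF id is reached through the stored pointer
 * rather than a separate ifp->vrf_id member. */
static vrf_id_t ifp_vrf_id_example(const struct interface *ifp)
{
	return ifp->vrf->vrf_id;
}
```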
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
There exist systems that do not explicitly have a python soft-link.
Let's explicitly call out which python we want to be using with
exabgp.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
For IPv4 matching, we have "match ip next-hop address A.B.C.D".
For IPv6 matching, we have "match ipv6 next-hop X:X::X:X".
For consistency, let's add the "address" keyword to the IPv6 commands.
Old commands are preserved as hidden for backward compatibility.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Problem Statement:
==================
On rp_change, PIM processes all the upstreams in a loop, and for the
selected upstreams PIM has to send a join/prune based on the changed
RPF. The join and prune packets are not getting aggregated into a
single packet.
Root Cause Analysis:
====================
On pim_rp_change, pim_upstream_update() gets called for the selected
upstreams. This API calculates to whom it has to send a join and to
whom it has to send a prune via pim_zebra_upstream_rpf_changed(),
which prepares the per-interface upstream_switch_list and inserts the
group and sources.
At this point PIM is still in the pim_upstream_update() context, i.e.
it is still processing the same upstream. At the end there is a call
to pim_zebra_update_all_interfaces(), which processes the
upstream_switch_list, sends the packets out and clears the list.
Fix:
====
Don't process the upstream_switch_list in the per-upstream context.
Process all the upstreams, prepare the upstream_switch_list, and then
process it in one go. This clubs all the (S,G) entries together.
It also avoids allocating and freeing the list multiple times.
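Schematically the new flow looks like this (the selected_upstreams
parameter and the list iteration are illustrative; only the function
names come from the description above, with simplified signatures):
```
#include <zebra.h>
#include "linklist.h" /* struct list, ALL_LIST_ELEMENTS_RO */

static void pim_rp_change_example(struct pim_instance *pim,
				  struct list *selected_upstreams)
{
	struct listnode *node;
	struct pim_upstream *up;

	/* First pass: only prepare the per-interface upstream_switch_list
	 * (groups and sources) for every affected upstream. */
	for (ALL_LIST_ELEMENTS_RO(selected_upstreams, node, up))
		pim_upstream_update(pim, up);

	/* Single flush after the loop: clubs all (S,G) entries into
	 * aggregated join/prune packets and cleans up the lists once. */
	pim_zebra_update_all_interfaces(pim);
}
```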
Signed-off-by: Vishal Dhingra <rac.vishaldhingra@gmail.com>
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
The fix consists in parsing the IPv6 extended community list while
taking into account the size of the ecommunity value.
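A rough sketch of the intended iteration, assuming the ecommunity
structure records a per-entry unit size (field names approximate):
```
#include <zebra.h>
#include "bgpd/bgp_ecommunity.h"

/* Illustrative walk over an IPv6 extended community list: advance by the
 * per-entry size recorded in the structure instead of hardcoding the
 * 8-byte size of regular extended communities. */
static void ecommunity_walk_example(const struct ecommunity *ecom)
{
	uint32_t i;

	for (i = 0; i < ecom->size; i++) {
		const uint8_t *pnt = ecom->val + (i * ecom->unit_size);

		/* pnt[0] is the type, pnt[1] the sub-type; the value width
		 * depends on unit_size. */
		(void)pnt;
	}
}
```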
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
During zebra shutdown, we clear out the LSP workqueue. The LSPs
will be uninstalled and freed during the shutdown process, so
just ignore any LSPs that happen to be on the workqueue.
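The shape of the fix is roughly the following guard in the workqueue
callback (the callback and shutdown predicate names are approximations
of the approach, not the literal patch):
```
#include <zebra.h>
#include "workqueue.h" /* wq_item_status, WQ_SUCCESS */

static wq_item_status lsp_process_example(struct work_queue *wq, void *data)
{
	/* The shutdown path uninstalls and frees the LSPs itself, so
	 * anything still sitting on the queue can simply be acknowledged. */
	if (zebra_router_in_shutdown())
		return WQ_SUCCESS;

	/* ...normal LSP processing... */
	return WQ_SUCCESS;
}
```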
Signed-off-by: Mark Stapp <mstapp@nvidia.com>