During CR for the nexthop upstream work it was noticed that usage
of prefix2str was not consistent. This patch fixes that inconsistency.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
The json keyword was being read incorrectly.
Some commands take a variable number of arguments, and in ospf the
command values were being placed into argc and argv. With a variable
number of arguments there existed a possibility that fewer arguments
would be read from the cli than were being tested for in the command
function handler. This caused core dumps in some situations.
The code that decides whether the json keyword was given has been
centralized into a single function, and all callers have been converted
to use it, regardless of whether they exhibited the bug.
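For illustration only, a minimal sketch of such a centralized check; the helper name and signature here are hypothetical and not necessarily what the patch adds:

#include <string.h>

/* Hypothetical helper: report whether the optional trailing "json"
 * keyword was supplied, without the caller indexing past argc. */
static int
argv_wants_json (int argc, const char *argv[])
{
  if (argc <= 0 || argv[argc - 1] == NULL)
    return 0;
  return strcmp (argv[argc - 1], "json") == 0;
}

Every command handler can then test such a helper instead of open-coding an argv index that may not exist.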
Ticket: CM-8278
Reviewed by: CCR-3830
Testing: OSPF no longer crashes and all other test suites still run
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
This patch addresses three main issues:
a. Passing along the global IPv6 nexthop received from the EBGP peer to
IBGP peers but setting the link-local IPv6 nexthop to ourselves when
advertising EBGP-learnt routes to IBGP peers (in the absence of outbound
route-map or other overrides). The fix is to not send a link-local IPv6
nexthop in this case.
b. Passing along the link-local IPv6 nexthop received from one peer to
another peer which is (or may be) on a different subnet. This violates the
semantics of a link-local IPv6 address. The fix is to set the nexthop to
ourselves in the situation where the nexthop normally has to be passed
on but is a link-local IPv6 address (see the sketch after this list).
c. Different behavior with respect to nexthop advertisement for BGP unnumbered
peering depending on whether it is set up using a link-local IPv6 address
versus an IPv4 /30 or /31. The fix is to make the behavior consistent as long
as the interface config is the same in both cases.
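For issues (a) and (b), the underlying rule is simply that a link-local IPv6 nexthop must never be propagated off the link it belongs to. A minimal, self-contained sketch of that check (illustrative only, not the actual bgpd code):

#include <netinet/in.h>

/* Illustrative only: a link-local IPv6 nexthop is meaningful solely on the
 * shared link, so it may be passed on only to a peer on that link; otherwise
 * the speaker substitutes its own address (nexthop-self). */
static int
ipv6_nexthop_may_be_passed_on (const struct in6_addr *nexthop,
                               int peer_shares_link)
{
  if (IN6_IS_ADDR_LINKLOCAL (nexthop))
    return peer_shares_link;
  return 1;   /* global IPv6 nexthops may be propagated as usual */
}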
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Ticket: CM-7846, CM-8043
Reviewed By: CCR-3749
Testing Done: Manual testing, bgpsmoke (on 2.5-br)
Note: Imported from 2.5-br patch bgpd-fix-link-local-nexthop-setting.patch
will be deprecated in the future
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Ticket: CM-8144
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Ticket: CM-8122
per draft-ietf-idr-ix-bgp-route-server-09:
2.3.2.2.2. BGP ADD-PATH Approach
The [I-D.ietf-idr-add-paths] Internet draft proposes a different
approach to multiple path propagation, by allowing a BGP speaker to
forward multiple paths for the same prefix on a single BGP session.
As [RFC4271] specifies that a BGP listener must implement an implicit
withdraw when it receives an UPDATE message for a prefix which
already exists in its Adj-RIB-In, this approach requires explicit
support for the feature both on the route server and on its clients.
If the ADD-PATH capability is negotiated bidirectionally between the
route server and a route server client, and the route server client
propagates multiple paths for the same prefix to the route server,
then this could potentially cause the propagation of inactive,
invalid or suboptimal paths to the route server, thereby causing loss
of reachability to other route server clients. For this reason, ADD-
PATH implementations on a route server should enforce send-only mode
with the route server clients, which would result in negotiating
receive-only mode from the client to the route server.
This allows us to delete all of the following code:
- All XXXX_rsclient() functions
- peer->rib
- BGP_TABLE_MAIN and BGP_TABLE_RSCLIENT
- RMAP_IMPORT and RMAP_EXPORT
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Vivek Venkataraman <vivek@cumulusnetworks.com>
Ticket: CM-8014
This implements addpath TX with the first feature to use it
being "neighbor x.x.x.x addpath-tx-all-paths".
One change to show output is 'show ip bgp x.x.x.x'. If no addpath-tx
features are configured for any peer then everything looks the same
as it does today, in that "Advertised to" is at the top and refers to
which peers the bestpath was advertised to.
root@superm-redxp-05[quagga-stash5]# vtysh -c 'show ip bgp 1.1.1.1'
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Advertised to non peer-group peers:
r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Last update: Fri Oct 30 18:26:44 2015
[snip]
but once you enable an addpath feature we must display "Advertised to" on a path-by-path basis:
superm-redxp-05# show ip bgp 1.1.1.1/32
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:44 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r3(10.0.0.3) (10.0.0.3)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 7
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r6(10.0.0.6) (10.0.0.6)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 6
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r5(10.0.0.5) (10.0.0.5)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 5
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r4(10.0.0.4) (10.0.0.4)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 4
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r1(10.0.0.1) (10.0.0.1)
Origin IGP, metric 0, localpref 100, valid, internal, best
AddPath ID: RX 0, TX 3
Advertised to: r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Last update: Fri Oct 30 18:26:34 2015
superm-redxp-05#
Ticket: CM-7861
Reviewed by: CCR-3651
Testing: See bug
bgp is using both the bm->master and master pointers interchangeably
for thread manipulation. Since they are the same thing, consolidate
to one pointer.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Ticket: CM-7593
Reviewed By: CCR-3563
Testing Done: Manual verification of failed scenario (2.5-br)
When BGP receives an update to a redistributed route and the type of
the source has changed (e.g., from OSPF to static), the source route
type is not being updated in the RIB entry. This can lead to problems
such as the route being incorrectly deleted if redistribution for the
prior source is unconfigured.
Fix the code to update the source route type.
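Purely as an illustration of the fix (the structure and function here are hypothetical, not the actual bgpd code), the idea is to refresh the stored source type whenever the update carries a different one:

/* Hypothetical sketch: when an update for an already-known redistributed
 * prefix arrives from a different source protocol, refresh the recorded
 * type so that a later "no redistribute <old-protocol>" does not tear
 * down a route that now belongs to another protocol. */
struct redist_info
{
  int source_type;        /* protocol the route was learnt from */
  unsigned int metric;
};

static void
redist_info_update (struct redist_info *ri, int new_type, unsigned int metric)
{
  ri->source_type = new_type;
  ri->metric = metric;
}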
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Vipin Kumar <vipin@cumulusnetworks.com>
As part of the Debian build process for jessie we are seeing
some compile issues. This patch addresses those issues.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
bgp is using both the bm->master and master pointers interchangeably
for thread manipulation. Since they are the same thing, consolidate
to one pointer.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
is empty
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Ticket: CM-7475
Reviewed By: Donald Sharp
Testing Done:
If the BGP table is empty, instead of returning
{'alert': 'No BGP network exists', 'routerId': '10.0.9.2',
'tableVersion': 180}
we should return
{'routerId': '10.0.9.2', 'routes': {}, 'tableVersion': 180}
This provides a more consistent json interface which is easier to script
against.
This patch adds a hostname capability. The node's hostname and
domainname are exchanged in the new capability and used in show command
outputs based on a knob enabled by the user. The hostname and domainname
can be a maximum of 64 chars long, each.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Vivek Venkataraman <vivek@cumulusnetworks.com>
Ticket: CM-5660
Reviewed By: CCR-2563
Testing Done:
Ticket: CM-4109
Reviewed-by: CCR-3414
Testing: See bug
Fixup of these memory issues:
(A) peer->clear_node_queue was accidentally removed. Add it back in.
(B) Clean up bm->process_main_queue and bm->process_rsclient_queue initialization
(C) Some memory leaks
(D) Clean up unused threads
Ticket: CM-7177
Reviewed-by: CCR-3396
Testing: See bug
This code change does several small things:
(A) Fix a couple of detected memory leaks
(B) Fix all malloc operations to use the correct XMALLOC operation in bgpd and parts of lib
(C) Adds a few new memory types to make it easier to detect issues
Ticket: CM-6789
Reviewed By: CCR-3263
Testing Done: Manual Testing and smoke tests
Wherever some sort of output is produced, a json version with the
appropriate logic has been added as well.
Ticket : CM-7047
Reviewed by : CCR-3321
Testing : Trivial
In function bgp_aggregate_add, variables 'aspath' and 'community' are
malloced but not guaranteed to be freed before the function returns.
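Purely for illustration (this is not the actual bgp_aggregate_add code), the general shape of such a fix is to route every early exit through a single cleanup path:

#include <stdlib.h>

/* Illustrative sketch of the leak pattern and its cleanup: all exits,
 * including error paths, fall through to one place that frees what was
 * allocated.  free() on NULL is a no-op, so unconditional frees are safe. */
static int
example_aggregate_add (void)
{
  char *aspath = malloc (32);
  char *community = malloc (32);
  int ret = -1;

  if (aspath == NULL || community == NULL)
    goto done;

  /* ... aggregation work that may bail out early ... */
  ret = 0;

done:
  free (aspath);
  free (community);
  return ret;
}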
BGP: Display Link local addr and ifname as part of 5549 support
As part of BGP unnumbered and RFC 5549 support, the implementation will honor the
link local address as the NH if present and so it'd be useful to display that
info along with the interface name, when displaying the BGP route summary. That
is what this patch aims to do.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
"bgp confederation id X" are the same value.
router bgp 1
bgp router-id 10.1.1.1
bgp confederation identifier 1
bgp confederation peers 24 35
neighbor 10.1.1.2 remote-as 24
neighbor 10.1.1.2 update-source lo
neighbor 10.1.1.3 remote-as 1
neighbor 10.1.1.3 update-source lo
The customer does this because they want to peer to 10.1.1.2 as a
confed-external peer but peer with 10.1.1.3 as a normal iBGP peer.
The bug was that we thought 10.1.1.3 was an EBGP peer so we did not send him
LOCALPREF which caused the Juniper to send us a NOTIFICATION. I confirmed
that quagga also sends a NOTIFICATION in this scenario.
The fix is to add a check to see if "router bgp X" and "bgp confederation
identifier X" are equal, because that is a factor in determining whether a
peer is EBGP or IBGP.
Additional issues fixed in this patch:
We were not properly removing all AS_CONFED_SEQUENCEs/SETs from the aspath
when advertising a route to an ebgp peer. This was due to two issues:
We only called aspath_delete_confed_seq() if confederations were
configured. We can RX an aspath with CONFED segments even if
confederations are not configured.
aspath_delete_confed_seq() was implemented based on the original confed
RFC 3065 which basically said "remove all of the leading
AS_CONFED_SEQUENCEs/SETs" where the new confed RFC 5065 says "remove ALL
of the AS_CONFED_SEQUENCEs/SETs"
peer-groups did not work for confed-external peers. peer_calc_sort() always
returned BGP_PEER_EBGP for confederations where the remote-as was not
specified. The reason was that peer->as_type was AS_UNSPECIFIED but we checked
if (peer->as_type != AS_SPECIFIED)
return (peer->as_type == AS_INTERNAL ? BGP_PEER_IBGP : BGP_PEER_EBGP);
After fixing that I found that when we got to the else branch where we check
peer1, we could only possibly return BGP_PEER_IBGP or BGP_PEER_EBGP; we need
to also be able to return BGP_PEER_CONFED. I changed this to return
peer1->sort.
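A minimal sketch of the intended fallback (illustrative names only, not the real peer_calc_sort() code):

enum bgp_peer_sort { BGP_PEER_IBGP, BGP_PEER_EBGP, BGP_PEER_CONFED };

/* Illustrative only: when the member's remote-as is unspecified, inherit the
 * sort already computed for the real peer (peer1) so that BGP_PEER_CONFED is
 * preserved instead of defaulting to BGP_PEER_EBGP. */
static enum bgp_peer_sort
member_peer_sort (int as_type_specified, enum bgp_peer_sort peer1_sort)
{
  if (!as_type_specified)
    return peer1_sort;
  /* ... otherwise fall through to the normal AS-number comparison ... */
  return BGP_PEER_EBGP;
}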
"show ip bgp x.x.x.x" would always display "Local" for the aspath. This is
because we were calling aspath_count_hops() to determine if the aspath was
empty. This is wrong though because CONFED segments do not count towards
aspath hopcount. The fix is to null check aspath->segments to determine if
the aspath is actually empty.
"show ip bgp x.x.x.x" and "show ip bgp neighbor" always displayed
"internal" or "external" and never "confed-internal" or "confed-external".
This made troubleshooting difficult because I couldn't tell exactly what
kind of peer I was dealing with. I added the confed-internal and
confed-external output, and also added a "peer-type" field in the json output
for 'show ip bgp x.x.x.x'.
"show ip bgp peer-group" did not list the peer-group name if we hadn't
determined the "type" (internal, external, etc.) for the peer-group.
an implicit withdraw as is done for the NEXT_HOP attribute in the
update itself.
Note: Check is implemented only for IPv6 for the global nexthop. The
code will quietly ignore an invalid IPv6 link-local nexthop, if present;
this is the existing behavior and is not changed.
Signed-off-by: Vivek Venkataraman <vivek@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
honored correctly for EBGP peers after the introduction of the
dynamic update groups functionality. Ensure this is handled
correctly. Also, the route-map can separately set different
nexthops - IPv4, IPv6 global or IPv6 link-local; treat these
separately.
This adds support for BGP RFC 5549 (Extended Next Hop Encoding capability)
* send and receive of the capability
* processing of IPv4->IPv6 next-hops
* for resolving these IPv6 next-hops, it works with the current
next-hop-tracking support
* added a new message type between BGP and Zebra for such route
install/uninstall
* zserv side of changes to process IPv4 prefix -> IPv6 next-hops
* required show command changes for IPv4 prefix having IPv6 next-hops
Few points to note about the implementation:
* It does an implicit next-hop-self when an [IPv4 prefix -> IPv6 LL next-hop]
is to be considered for advertisement to an IPv4 peering (or an IPv6 peering
without the Extended next-hop capability negotiated); see the sketch below
* Currently feature is off by default, enable it by configuring
'neighbor <> capability extended-nexthop'
* Current support is for IPv4 Unicast prefixes only.
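A tiny sketch of the first point above (illustrative flags only, not the actual bgpd symbols): an IPv6 next-hop may be kept on an IPv4 prefix only when the peer negotiated the Extended Next Hop Encoding capability; otherwise the speaker falls back to next-hop-self.

/* Illustrative only: decide whether implicit next-hop-self is needed when
 * advertising an IPv4 prefix whose next-hop is an IPv6 address. */
static int
use_implicit_nexthop_self (int nexthop_is_ipv6, int peer_negotiated_enhe)
{
  if (!nexthop_is_ipv6)
    return 0;                    /* plain IPv4 next-hop: nothing special */
  return !peer_negotiated_enhe;  /* IPv6 next-hop needs ENHE on the peer */
}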
IMPORTANT NOTE:
This patch alone isn't enough to have IPv4->IPv6 routes installed into
the kernel. A separate patch is needed for that to work for the netlink
interface.
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Vivek Venkatraman <vivek@cumulusnetworks.com>
Donald Sharp <sharpd@cumulusnetworks.com>
Our code implemented 'force' as the keyword while quagga mainline implemented 'all'.
This fixes up the #define usage that was missed and came in through one of the patch
files. Testing was compile-only.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
bgpd: Exchange hostname capability and display hostnames in outputs
This patch adds a hostname capability. The node's hostname and
domainname are exchanged in the new capability and used in show command
outputs based on a knob enabled by the user. The hostname and domainname
can be a maximum of 64 chars long, each.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Vivek Venkataraman <vivek@cumulusnetworks.com>
BGP: Fix network import check use with NHT instead of scanner
When next hop tracking was implemented and the bgp scanner was eliminated,
the "network import-check" command got broken. This patch fixes that
issue. NHT is used not just to track nexthops, but also the static routes
that are announced via BGP's network command. To optimize performance,
these static routes are registered only when import-check is enabled.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
state to facilitate clearing of the routes received from the peer - remove
from the RIB, reselect best path, update/delete from Zebra and to other
peers etc. At the end of this, a Clearing_Completed event is generated to
the FSM which will allow the peer to move out of Clearing to Idle.
The issue in the code is that there is a possibility of multiple
Clearing_Completed events being generated for a peer, one per AFI/SAFI. Upon
the first such event, the peer would move to Idle. If other events happened
(e.g., a new connection got established) before the last Clearing_Completed
event was received, bad things could happen.
Fix to ensure only one Clearing_Completed event is generated.
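A minimal sketch of the idea, assuming a per-peer count of outstanding AFI/SAFI clears (illustrative, not the actual bgpd data structures):

/* Illustrative only: post Clearing_Completed to the FSM exactly once,
 * when the last outstanding per-AFI/SAFI clear has finished. */
struct clearing_state
{
  int pending;   /* AFI/SAFI tables still being cleared for this peer */
};

static int
clearing_table_done (struct clearing_state *cs)
{
  if (cs->pending > 0 && --cs->pending == 0)
    return 1;    /* last one: caller generates Clearing_Completed */
  return 0;      /* more clears outstanding, or event already sent */
}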
When a route is announced in BGP via the "network" command, we also register its
next hop with the NHT code to allow updates when the nexthop changes. When this
route is deleted via the "no network" command, we incorrectly make a second call
to unregister the NHT tracking associated with this route. This causes a crash.
Fix that.
comparison if the peer is not in an Established state. There can
be a window between a peer being deleted and the run of the background
thread that actually clears the routes (marks them as "removed"),
during which best path selection may run. If this path selection
compared two prefixes all the way down to peer IP addresses and
one of these two peers had just been deleted, that peer would
not have its sockunion structures, especially su_remote, resulting
in a BGPD exception.
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
change processing etc.) that refer to the BGP instance, the correct BGP
instance must be referenced and not the default BGP instance. The default
BGP instance is the first instance on the instance list. In a scenario
where one BGP instance is deleted (through operator action such as a
"no router bgp" command) and another instance exists or is created, there
may still be events in-flight that need to be processed against the
deleted instance. Trying to process these against the default instance
is erroneous. The calls to bgp_get_default() must be limited to the user
interface (vtysh) context.
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
In the data center, in conjunction with next hop propagation for features
such as announcing VIP routes to load balancers and such, it is desired to
disable the connected route check even on ebgp peers with a TTL of 1. This
patch allows the check to be disabled for all peers, instead of the per-peer
knob that is currently supported. Furthermore, the existing
disable-connected-check is different from how Cisco implements this feature.
So, we add this new flag to avoid reliance on the existing flag.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
Reviewed-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
In the data center, where load balancers are announced as VIPs, and eBGP
is used as the routing protocol, this feature is required to ensure that
VIP announcements can be made from anywhere the operator sees fit.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
Signed-off-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
sessions dynamically. The operator configures a range of neighbor addresses
to which peering is allowed. The ranges are configured as subnets and
multiple ranges are allowed. Each range is associated with a peer-group
so that additional parameters can be configured.
BGP neighbor sessions are dynamically created when connections are initiated
by remote neighbors whose addresses fall within a configured range. The
sessions are deleted when the BGP connection terminates.
A limit on the number of neighbors allowed from each range of addresses
can be specified.
IPv4 and IPv6 peering is supported. Over the peering, any of the address
families configured for the peer-group can be negotiated.
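As an illustration of the listen-range matching only (a hedged sketch, not the actual bgpd code), accepting a dynamic neighbor reduces to a prefix match of the incoming connection's source address against each configured range:

#include <arpa/inet.h>
#include <netinet/in.h>

/* Illustrative only: does an IPv4 source address fall inside a configured
 * listen range given as address/prefixlen? */
static int
ipv4_in_listen_range (struct in_addr src, struct in_addr range, int prefixlen)
{
  unsigned int mask =
    prefixlen ? htonl (0xffffffffU << (32 - prefixlen)) : 0;
  return (src.s_addr & mask) == (range.s_addr & mask);
}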
This patch implements the 'update-groups' functionality in BGP. This is a
function that can significantly improve BGP performance for Update generation
and resultant network convergence. BGP Updates are formed for "groups" of
peers and then replicated and sent out to each peer rather than being formed
for each peer. Thus major BGP operations related to outbound policy
application, adj-out maintenance and actual Update packet formation
are optimized.
BGP update-groups dynamically groups peers together based on configuration
as well as run-time criteria. Thus, it is more flexible than update-formation
based on peer-groups, which relies on operator configuration.
[Note that peer-group based update formation has been introduced into BGP by
Cumulus but is currently intended only for specific releases.]
[Imported from patch 11098af65b2b8f9535484703e7f40330a71cbae4: updgrp commits]
perform the neighbor address comparison as that information is invalid for
the stale entry. Attempting to perform the comparison results in a bgpd
exception.
---------------------------------------------
- etc/init.d/quagga is modified to support creating separate ospf daemon
process for each instance. Each individual instance is monitored by
watchquagga just like any other protocol daemon (requires initd-mi.patch).
- Vtysh is modified to be able to connect to multiple daemons of the same
protocol (supported for OSPF only for now).
- ospfd is modified to remember the Instance-ID that it is invoked with. For
the entire life of the process it caters to any command request that
matches that instance-ID (unless it is a non-instance-specific command).
Routes/messages to zebra are tagged with the instance-ID.
- zebra route/redistribute mechanisms are modified to work with
[protocol type + instance-id]
- bgpd now has the ability to have multiple instance-specific redistributions
for a protocol (OSPF only supported/tested for now).
- zlog has the ability to display the instance-id alongside the protocol/daemon
name.
- Changes in other daemons are due to the needed integration with some of the
modified APIs/routines. (We preferred not to replicate too many separate
instance-specific APIs.)
- config/show/debug commands are modified to take instance-id argument
as appropriate.
Guidelines to start using multi-instance ospf
---------------------------------------------
The patch is backward compatible, i.e. the previous way of running a single
ospf daemon (router ospf <cr>) will continue to work as is, including all the
show commands etc.
To enable multiple instances, do the following:
1. service quagga stop
2. Modify /etc/quagga/daemons to add instance-ids of each desired
instance in the following format:
ospfd="yes"
ospfd_instances="1,2,3"
assuming you want to enable 3 instances with those instance ids.
3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf
and ospfd-3.conf.
4. service quagga start/restart
5. Verify that the daemons are started as expected. You should see
ospfd started with the -n <instance-id> option.
ps -ef | grep quagga
With that, /var/run/quagga/ should have ospfd-<instance-id>.pid and
ospfd-<instance-id>/vty for each instance.
6. Use vtysh to work with the instances as you would with any other daemon.
7. Overall, most quagga semantics are the same when working with an instance
daemon as they are for any other daemon.
NOTE:
To safeguard against errors leading to too many processes getting invoked,
a hard limit on the number of instance-ids is in place; currently it is 5.
The allowed instance-id range is <1-65535>.
Once the daemons are up, 'show running' from vtysh should show the instance-id
of each daemon as 'router ospf <instance-id>' (without needing explicit
configuration).
The instance-id cannot be changed via vtysh; other router ospf configuration
is allowed as before.
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
Summary of changes
- added an option to enable keepalive debugs for a specific peer
- added an option to enable inbound and/or outbound updates debugs for a specific peer
- added an option to enable update debugs for a specific prefix
- added an option to enable zebra debugs for a specific prefix
- combined "deb bgp", "deb bgp events" and "deb bgp fsm" into "deb bgp neighbor-events". "deb bgp neighbor-events" can be enabled for a specific peer.
- merged "deb bgp filters" into "deb bgp update"
- moved the per-peer logging to one central log file. We now have the ability to filter all verbose debugs on a per-peer and per-prefix basis so we no longer need to keep log files per peer. This simplifies troubleshooting by keeping all BGP logs in one location. The user
can then grep for the peer IP they are interested in if they wish to see the logs for a specific peer.
- Changed "show debugging" in isis to "show debugging isis" to be consistent with all other protocols. This was very confusing for the user because they would type "show debug" and expect to see a list of debugs enabled across all protocols.
- Removed "undebug" from the parser for BGP. Again this was to be consistent with all other protocols.
- Removed the "all" keyword from the BGP debug parser. The user can now do "no debug bgp" to disable all BGP debugs, before you had to type "no deb all bgp" which was confusing.
The new parse tree for BGP debugging is:
deb bgp as4
deb bgp as4 segment
deb bgp keepalives [A.B.C.D|WORD|X:X::X:X]
deb bgp neighbor-events [A.B.C.D|WORD|X:X::X:X]
deb bgp nht
deb bgp updates [in|out] [A.B.C.D|WORD|X:X::X:X]
deb bgp updates prefix [A.B.C.D/M|X:X::X:X/M]
deb bgp zebra
deb bgp zebra prefix [A.B.C.D/M|X:X::X:X/M]
- Schedule write thread for advertisements and withdraws only if corresponding
FIFOs are growing and/or upon work_queue getting fully processed.
- Set non-default yield time for the main work_queue, as the default value
of 10ms results in yielding after processing very few nodes.
- Remove unnecessary scheduling of write thread when update packet is formed.
- If MRAI is 0, don't start a timer unnecessarily, directly schedule write
thread.
- Some debugs.
Credit
------
A huge amount of credit for this patch goes to Piotr Chytla for
their 'route tags support' patch that was submitted to quagga-dev
in June 2007.
Documentation
-------------
All ipv4 and ipv6 static route commands now have a "tag" option
which allows the user to set a tag between 1 and 65535.
quagga(config)# ip route 1.1.1.1/32 10.1.1.1 tag ?
<1-65535> Tag value
quagga(config)# ip route 1.1.1.1/32 10.1.1.1 tag 40
quagga(config)#
quagga# show ip route 1.1.1.1/32
Routing entry for 1.1.1.1/32
Known via "static", distance 1, metric 0, tag 40, best
* 10.1.1.1, via swp1
quagga#
The route-map parser supports matching on tags and setting tags
!
route-map MATCH_TAG_18 permit 10
match tag 18
!
!
route-map SET_TAG_22 permit 10
set tag 22
!
BGP and OSPF support:
- matching on tags when redistributing routes from the RIB into BGP/OSPF.
- setting tags when redistributing routes from the RIB into BGP/OSPF.
BGP also supports setting a tag via a table-map, when installing BGP
routes into the RIB.
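As a purely illustrative sketch (not the actual route-map code), a "match tag" clause boils down to comparing the configured tag against the tag carried with the route:

/* Illustrative only: route-map "match tag" semantics; a tag of 0 is treated
 * here as "no tag set on the route". */
static int
route_tag_matches (unsigned short configured_tag, unsigned short route_tag)
{
  return route_tag != 0 && route_tag == configured_tag;
}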
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
BGP: Add match interface support to BGP route-map.
Currently, BGP route maps don't support interface match. This is a problem
for commands such as redistribute connected that cannot exclude routes from
specific interfaces (such as mgmt interfaces).
This patch adds the ability to see the effect of applying a route-map on
the routes received or advertised from or to a neighbor. This effect can
be seen without actually affecting the current state. If the result seen
is what is desired, then the user can actually apply the route-map.
Currently, the application acts on route-map in or out and on unsuppress
maps.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
ISSUE:
During startup, BGP update prefix packing wasn't optimal and route installation
was found to be spread out over time.
SOLUTION:
With this patch, update-delay post processing is serialized to achieve:
a. better peer update packing
(which helps in reducing total number of BGP update packets)
b. installation of the resulting routes in zebra as close to each other
as possible
(which can help zebra batch its processing and updates to the kernel better)
BGPd: Allow route-map policy modifications to also affect route reflectors.
By default, attribute modification via route-map policy out is ignored on
reflected routes. This patch provides an option to allow this modification
to occur. Once enabled, it affects all reflected routes.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
BGP: Fix FSM to handle active/passive connections better
The existing code didn't work well when dual connections resulted between
peers during session bringup. This patch fixes that.
Signed-off-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
BGP: Event-driven route announcement taking into account min route advertisement interval
ISSUE
BGP starts the routeadv timer (peer->t_routeadv) to expire in 1 sec
when a peer is established. From then on, the timer expires
periodically based on the configured MRAI value (default: 30sec for
EBGP, 5sec for IBGP). At the expiry, the write thread is triggered
that takes the routes from peer's sync FIFO (adj-rib-out) and sends
UPDATEs. This has a few drawbacks:
(1) Delay in new route announcement: Even when the last UPDATE message
was sent a while back, the next route change will necessarily have
to wait for routeadv expiry
(2) CPU usage: The timer is always armed. If the operator chooses to
configure a lower value of MRAI (zero second is a preferred choice
in many deployments) for better convergence, it leads to high CPU
usage for BGP process, even at the times of no network churn.
PATCH
Make the route advertisement event-driven: when routes are added to a
peer's sync FIFO, check if the routeadv timer needs to be adjusted (or
started). Conversely, do not arm the routeadv timer unconditionally.
The patch also addresses route announcements during read-only mode
(update-delay). During read-only mode operation, the routeadv timer
is not started. When BGP comes out of read-only mode and all the
routes are processed, the timer is started for all peers with zero
expiry, so that the UPDATEs can be sent all at once. This leads to
(near-)optimal UPDATE packing.
Finally, the patch makes the "max # packets to write to peer socket at
a time" configurable. Currently it is hard-coded to 10. The command is
at the top level of router-bgp mode and is called "write-quanta <number>". It
is a useful convergence parameter to tweak.
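As an illustration of the timing rule only (not the actual bgpd thread code), the delay before the next advertisement run can be derived from the time of the previous one and the configured MRAI:

/* Illustrative only: seconds to wait before advertising to a peer again.
 * A result of 0 means routes may be sent immediately; the timer is armed
 * only when there is actually something queued to advertise. */
static long
routeadv_delay_secs (long now, long last_advertised, long mrai)
{
  long earliest = last_advertised + mrai;
  return (earliest > now) ? (earliest - now) : 0;
}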
Signed-off-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
COMMAND:
table-map <route-map-name>
DESCRIPTION:
This feature is used to apply a route-map on route updates from BGP to Zebra.
All the applicable match operations are allowed, such as match on prefix,
next-hop, communities, etc. Set operations for this attach-point are limited
to metric and next-hop only. Any operation of this feature does not affect
BGP's internal RIB.
Supported for ipv4 and ipv6 address families. It works on multi-paths as well,
however, metric setting is based on the best-path only.
IMPLEMENTATION NOTES:
The route-map application at this point is not supposed to modify any of the
BGP route's attributes (anything in bgp_info for that matter). To achieve that,
creating a copy of the bgp_attr was inevitable. The implementation tries to
keep the memory footprint low; code comments point out the rationale behind a
few of the choices made.
bgp_zebra_announce() was already a big routine, and adding this feature would
extend it further. The patch creates a few smaller routines/macros wherever
possible to keep the size of the routine in check without compromising the
readability of the code/flow inside it.
For updating a partially filtered route (with its nexthops), the BGP-to-Zebra
replacement semantic of the next-hops serves the purpose well. However, with
this patch there could be some redundant withdraws each time BGP announces a
route whose nexthops all get denied by the route-map application.
Handling of this case could be optimized by keeping state with the prefix and
the nexthops in BGP. The patch doesn't optimize that case, as even with the
redundant withdraws the total number of updates to zebra is still capped
by the total number of routes in the table.
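A minimal sketch of the central design point above, i.e. working on a copy of the attributes so BGP's own RIB is never touched (illustrative types and names, not the actual bgpd code):

struct zattrs { unsigned int metric; };   /* illustrative attribute copy */

/* Illustrative only: the table-map is applied to a copy of the attributes;
 * the copy is what gets announced to zebra, the original stays untouched.
 * A zero return means the route-map denied the route. */
static int
apply_table_map (const struct zattrs *bgp_attrs, struct zattrs *zebra_attrs,
                 int (*table_map) (struct zattrs *))
{
  *zebra_attrs = *bgp_attrs;
  return table_map (zebra_attrs);
}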
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
quagga: nexthop-tracking.patch
Add next hop tracking support to Quagga. Complete documentation in doc/next-hop-tracking.txt.
Signed-off-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Signed-off-by: Dinesh Dutt <ddutt@cumulusnetworks.com>
COMMAND:
'update-delay <max-delay in seconds> [<establish-wait in seconds>]'
DESCRIPTION:
This feature is used to enable read-only mode on BGP process restart or when
the BGP process is cleared using 'clear ip bgp *'. When applicable, read-only mode
would begin as soon as the first peer reaches Established state and a timer
for <max-delay> seconds is started.
During this mode BGP doesn't run any best-path or generate any updates to its
peers. This mode continues until:
1. All the configured peers, except the shutdown peers, have sent explicit EOR
(End-Of-RIB) or an implicit-EOR. The first keep-alive after BGP has reached
Established is considered an implicit-EOR.
If the <establish-wait> optional value is given, then BGP will wait for
peers to reach Established from the beginning of the update-delay until the
establish-wait period is over, i.e. the minimum set of established peers for
which EOR is expected would be the peers established during the establish-wait
window, not necessarily all the configured neighbors.
2. max-delay period is over.
On hitting any of the above two conditions, BGP resumes the decision process
and generates updates to its peers.
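The exit condition above reduces to a simple predicate; a hedged sketch (illustrative only, not the actual bgpd state handling):

/* Illustrative only: read-only (update-delay) mode ends when every applicable
 * peer has sent an explicit or implicit EOR, or when the configured max-delay
 * has elapsed, whichever comes first. */
static int
update_delay_over (int all_eor_received, long elapsed_secs, long max_delay_secs)
{
  return all_eor_received || elapsed_secs >= max_delay_secs;
}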
Default <max-delay> is 0, i.e. the feature is off by default.
This feature can be useful in reducing the CPU/network load when BGP restarts
or is cleared. It is particularly useful in topologies where BGP learns a
prefix from many peers.
Intermediate bestpaths are possible for the same prefix as peers get established
and start receiving updates at different times. This feature should offer a
value-add if the network has a high number of such prefixes.
IMPLEMENTATION OBJECTIVES:
Given that this is an optional feature, the code churn was minimized. Existing
constructs were used wherever possible (the existing queue plug/unplug was used
to achieve delay and resume of best-path/update generation). As a result, no new
data structure(s) had to be defined and allocated. When the feature is disabled,
the new node is not exercised for the most part.
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Dinesh Dutt <ddutt@cumulusnetworks.com>
A fat tree topology running IBGP gets into two issues with anycast address
routing. Consider the following topology:
R9 R10
x x
R3 R4 R7 R8
x x
R1 R2 R5 R6
| | | |
10/8 10/8 10/8 S
Let's remind ourselves of BGP decision process steps:
1. Highest Local Preference
2. Shortest AS Path Length
3. Lowest Origin Type
4. Lowest MED (Multi-Exit Discriminator)
5. Prefer External to Internal
6. Closest Egress (Lowest IGP Distance)
7. Tie Breaking (Lowest-Router-ID)
8. Tie Breaking (Lowest-cluster-list length)
9. Tie Breaking (Lowest-neighbor-address)
Without any policies, steps 1-6 will almost always evaluate identically for
all paths received on any router in the above topology. Let's assume that
the router-ids follow the following inequality: R1 < R2 < R5 < R6. Owing to
the 7th step above, all routers will now choose R1's path as the best. This
is undesirable. As an example, traffic from S to 10/8 will follow the path
S -> R6 -> R7 -> R9 -> R4 -> R2 -> 10/8 instead of S -> R6 -> R7 -> R5 -> 10/8.
Furthermore, once R7 (& R8) chooses R1's path as the best, it would withdraw
its path learned through (R5, R6) from (R9, R10). This leads to inefficient
load balancing - e.g. R9 can't do ECMP across all available egresses -
(R1, R2, R5).
The patch addresses these issues by noting that the cluster list is always
carried along with the routes and its length is a good indicator of IBGP
hops. It thus makes sense to compare it as an extension of the metric after
step 6. That automatically ensures correct multipath computation.
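A hedged sketch of that comparison step (illustrative only, not the actual bgp_info_cmp() code): a shorter cluster list means fewer IBGP hops and wins, while equal lengths fall through to the remaining tie-breakers.

/* Illustrative only: compare two paths by cluster-list length, used as an
 * IGP-metric-like tie-breaker after step 6.  Returns >0 to prefer the new
 * path, <0 to keep the existing one, 0 to fall through to later steps. */
static int
compare_cluster_list_length (unsigned int new_len, unsigned int exist_len)
{
  if (new_len < exist_len)
    return 1;
  if (new_len > exist_len)
    return -1;
  return 0;
}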
Unfortunately a partial deployment of this in a generic topology (note:
fat-tree/clos topologies work fine) may lead to potential loops. It needs
to be looked into.
Signed-off-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>