"set community accept-own-nexthop" returns "malformed communities"
error. This is because the token matching hits an earlier "accept-own"
and leaves "-nexthop" as a separate token to be processed.
Reorder the switch cases so that both are processed correctly.
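As a minimal, self-contained sketch of the ordering rule (the enum and
helper below are illustrative, not the actual bgpd parser): the longer
keyword has to be tested before its prefix, otherwise
"accept-own-nexthop" is consumed as "accept-own" and "-nexthop" is left
over.

    #include <string.h>

    enum wk_comm { WK_NONE, WK_ACCEPT_OWN, WK_ACCEPT_OWN_NEXTHOP };

    /* Return the position after the matched keyword; the longer
     * "accept-own-nexthop" must be tested before "accept-own". */
    static const char *match_wellknown(const char *p, enum wk_comm *val)
    {
            if (strncmp(p, "accept-own-nexthop",
                        strlen("accept-own-nexthop")) == 0) {
                    *val = WK_ACCEPT_OWN_NEXTHOP;
                    return p + strlen("accept-own-nexthop");
            }
            if (strncmp(p, "accept-own", strlen("accept-own")) == 0) {
                    *val = WK_ACCEPT_OWN;
                    return p + strlen("accept-own");
            }
            *val = WK_NONE;
            return p;
    }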
Signed-off-by: Appu Joseph <apjo@kaloom.com>
We were not correctly checking the MPLS-TE status of the area when
adding an IP address to a circuit, which prevented the local
address TLV from being populated after an interface flap.
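Roughly, the intent looks like the sketch below; the structures and
helper names are hypothetical stand-ins, not the isisd API.

    /* Hypothetical stand-ins, not isisd code: the point is that the
     * MPLS-TE status is read from the area at the moment the address
     * is added, so the local address TLV is refreshed after a flap. */
    struct te_area    { int mpls_te_enabled; };
    struct te_circuit { struct te_area *area; };

    static void circuit_ip_addr_added(struct te_circuit *circuit)
    {
            if (circuit->area && circuit->area->mpls_te_enabled) {
                    /* regenerate the local interface address TLV here */
            }
    }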
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Updated documentation for routers with a large route table, which breaks
SNMP/AgentX and under some conditions even crashes FRR. The documentation
proposal amends the SNMP configuration to exclude certain OIDs that
are not needed in normal cases.
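For illustration only (the view name, community, and exact OIDs below
are placeholders, not the wording proposed for the docs), a Net-SNMP
view can exclude the large forwarding-table subtrees while leaving the
rest of the tree visible:

    # illustrative snmpd.conf fragment
    # .1.3.6.1.2.1.4.21 = ipRouteTable, .1.3.6.1.2.1.4.24 = ipForward subtree
    view frrview included .1.3.6.1
    view frrview excluded .1.3.6.1.2.1.4.21
    view frrview excluded .1.3.6.1.2.1.4.24
    rocommunity public default -V frrview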
Format-fixed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Signed-off-by: Bart Vrancken <bart@abuse.io>
Description:
OSPF uses an algorithm to generate unique LSIDs when routes exist
with the same address but different mask lengths. It generates the
unique LSIDs by setting the host bits of the address.
Ex :
Rt1 : 10.0.0.0/32 - LSID : 10.0.0.0
Rt2 : 10.0.0.0/16 - LSID : 10.0.255.255
Rt3 : 10.0.0.0/24 - LSID : 10.0.0.255
An issue was observed with external LSAs when such routes are
originated. If the first route (the one whose LSID is the actual
address) gets deleted, the routes with the same address (but
different masks) fail to get their LSA pointers from the LSDB due
to the design of the current fetching API.
api : ospf_external_info_find_lsa
Here, it allows looking up the LSA with the address-specific unique
LSID (address with host bits set, e.g. 10.0.255.255) only if an LSA
exists whose LSID is the actual address of the route (e.g. 10.0.0.0),
which is not expected and causes further issues.
Fix:
Corrected this logic by looking up the LSA with the unique LSID first;
if it does not exist, fall back to looking up the LSDB with the LSID
set to the actual address of the route.
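A self-contained sketch of the corrected lookup order (the tiny array
stands in for the LSDB; none of this is the real
ospf_external_info_find_lsa() code): try the host-bits-set LSID first
and only then fall back to the plain route address.

    #include <stdint.h>
    #include <stddef.h>

    struct lsa { uint32_t lsid; };

    static struct lsa *lsdb_lookup(struct lsa *db, size_t n, uint32_t lsid)
    {
            for (size_t i = 0; i < n; i++)
                    if (db[i].lsid == lsid)
                            return &db[i];
            return NULL;
    }

    static struct lsa *external_find_lsa(struct lsa *db, size_t n,
                                         uint32_t addr, uint32_t mask)
    {
            /* First try the "unique" LSID: address with host bits set. */
            struct lsa *lsa = lsdb_lookup(db, n, addr | ~mask);

            if (lsa)
                    return lsa;
            /* Fall back to the LSID that equals the route's address. */
            return lsdb_lookup(db, n, addr);
    }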
Signed-off-by: Rajesh Girada <rgirada@vmware.com>
Each northbound callback has a set of valid return values, some of
which might depend on the transaction phase. The valid return values
for each callback are documented in the northbound main header.
Add some code to detect when a callback returns an unexpected value
and log the occurrence. This should help us to identify and fix
such problems.
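A self-contained sketch of the kind of check this adds (the enums
mirror the northbound ones but are redeclared here, and the per-phase
sets are abbreviated; the authoritative list is the one documented in
the northbound header):

    #include <stdbool.h>
    #include <stdio.h>

    enum nb_error { NB_OK, NB_ERR, NB_ERR_VALIDATION, NB_ERR_RESOURCE };
    enum nb_event { NB_EV_VALIDATE, NB_EV_PREPARE, NB_EV_ABORT, NB_EV_APPLY };

    /* Abbreviated validity rules; the real ones are per-callback. */
    static bool nb_ret_is_valid(enum nb_event event, enum nb_error ret)
    {
            switch (event) {
            case NB_EV_VALIDATE:
                    return ret == NB_OK || ret == NB_ERR_VALIDATION;
            case NB_EV_PREPARE:
                    return ret == NB_OK || ret == NB_ERR_RESOURCE;
            case NB_EV_ABORT:
            case NB_EV_APPLY:
                    return ret == NB_OK;
            }
            return false;
    }

    static void nb_check_callback_ret(const char *xpath, enum nb_event event,
                                      enum nb_error ret)
    {
            if (!nb_ret_is_valid(event, ret))
                    fprintf(stderr,
                            "unexpected return value %d in event %d [%s]\n",
                            ret, event, xpath);
    }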
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
The northbound configuration callbacks should now print error
messages to the provided buffer (args->errmsg) instead of logging
them directly. This will allow the northbound layer to forward the
error messages to the northbound clients in addition to logging them.
NOTE: many callbacks are returning errors without providing any
error message. This needs to be fixed long term.
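A sketch of the resulting callback pattern, with stand-in types for
what lib/northbound.h actually provides:

    #include <stdio.h>

    enum nb_error { NB_OK, NB_ERR_VALIDATION };  /* stand-in */

    struct nb_cb_modify_args {                   /* stand-in */
            char *errmsg;      /* buffer provided by the northbound layer */
            size_t errmsg_len;
    };

    static int example_leaf_modify(struct nb_cb_modify_args *args)
    {
            int value_ok = 0;  /* pretend the new value failed validation */

            if (!value_ok) {
                    /* Report the error to the client via the buffer
                     * instead of logging it directly. */
                    snprintf(args->errmsg, args->errmsg_len,
                             "value is out of the supported range");
                    return NB_ERR_VALIDATION;
            }
            return NB_OK;
    }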
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
Instead of returning only error codes (e.g. NB_ERR_VALIDATION)
to the northbound clients, do better than that and also return
a human-readable error message. This should make FRR more
automation-friendly since operators won't need to dig into system
logs to find out what went wrong in the case of an error.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
The new northbound context structure contains information about
the client performing a configuration transaction. This information
will be made available to all configuration callbacks through the
args->context parameter.
The usefulness of this structure comes from the fact that it can be
used as a communication channel (both input and output) between the
northbound callbacks and the northbound clients. This can be done
through its "client_data" field which contains client-specific data.
This should cover some very specific scenarios where a northbound
callback should perform an action only if the configuration change
is coming from a given client. An example would be sending a PCEP
response to a PCE when an SR-TE policy is created or modified
through the PCEP northbound client (for that to happen, the
northbound callbacks need access to the PCEP request ID, which is
the kind of client-specific data this context carries).
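A rough sketch of how a callback could use this (the types below are
simplified stand-ins for the structure described above, and the PCEP
handling is only indicated by a comment):

    enum nb_client { NB_CLIENT_CLI, NB_CLIENT_GRPC, NB_CLIENT_PCEP };

    struct nb_context {                     /* simplified stand-in */
            enum nb_client client;
            void *client_data;              /* client-specific data */
    };

    struct nb_cb_args {                     /* simplified stand-in */
            const struct nb_context *context;
    };

    static void srte_policy_changed(struct nb_cb_args *args)
    {
            /* Only react when the change came in through the PCEP client. */
            if (args->context && args->context->client == NB_CLIENT_PCEP) {
                    void *pcep_req = args->context->client_data;

                    (void)pcep_req; /* e.g. send the PCEP response here */
            }
    }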
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
We are crashing in thread_cancel on shutdown because
the thread pointer is NULL. Use the more appropriate
THREAD_CANCEL macro instead.
Ticket: CM-29873
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
`debug zebra packet detail` now dumps the full message, whereas
previously it dropped exactly 10 bytes, the size of the zebra header.
Signed-off-by: Wesley Coakley <wcoakley@cumulusnetworks.com>
We were using thread_cancel() directly instead of
THREAD_OFF, which correctly handles a NULL thread pointer.
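Roughly, THREAD_OFF has the following shape (see lib/thread.h for the
authoritative definition): it skips the cancel when the pointer is
NULL and clears the pointer afterwards.

    /* rough shape only; see lib/thread.h */
    #define THREAD_OFF(thread)                          \
            do {                                        \
                    if (thread) {                       \
                            thread_cancel(thread);      \
                            thread = NULL;              \
                    }                                   \
            } while (0)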
Ticket: CM-29866
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Revise the new `show pbr` json keys to be consistent with existing
json in other daemons:
target->nexthop
id->tableId (where relevant)
isValid->valid
isInstalled->installed
Signed-off-by: Wesley Coakley <wcoakley@cumulusnetworks.com>
The new json output for the `show pbr` directives returns arrays instead
of associative arrays, which are more meaningful in this context.
Signed-off-by: Wesley Coakley <wcoakley@cumulusnetworks.com>
Increased the verbosity of the json keys and flattened the returned
structure by removing superfluous keys.
Signed-off-by: Wesley Coakley <wcoakley@cumulusnetworks.com>
After the cleanup, adding this doesn't require updating a zillion
locations in the code anymore, just one :)
Partially derived from 6a00e91d99f7f98d857c2056d0dcfeba48966581
Originally-by: Emanuele Di Pascale <emanuele@voltanet.io>
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
- throw vtysh into a wrapper class
- ignore "username" commands
- use mark output on stdout
- some other random cleanups
Signed-off-by: David Lamparter <equinox@diac24.net>
... to skip the "Building configuration..." header that gets in the way
of automated processing.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
The route-map optimization is not equipped to match IPv6 next-hop
criteria while evaluating IPv4 routes with IPv6 next-hops.
Similarly, it is also not equipped to match IPv4 next-hop criteria
while evaluating IPv6 routes with IPv4 next-hops.
This change addresses these issues.
Signed-off-by: NaveenThanikachalam <nthanikachal@vmware.com>