Merge pull request #13599 from LabNConsulting/chopps/analyze-search
tests: allow selecting test results by regexp match
This commit is contained in: commit 145acbb3bb
@@ -196,13 +196,15 @@ Analyze Test Results (``analyze.py``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default router and execution logs are saved in ``/tmp/topotests`` and an XML
results file is saved in ``/tmp/topotests.xml``. An analysis tool ``analyze.py``
is provided to archive and analyze these results after the run completes.
results file is saved in ``/tmp/topotests/topotests.xml``. An analysis tool
``analyze.py`` is provided to archive and analyze these results after the run
completes.

After the test run completes one should pick an archive directory to store the
results in and pass this value to ``analyze.py``. On first execution the results
are copied to that directory from ``/tmp``, and subsequent runs use that
directory for analyzing the results. Below is an example of this which also
are moved to that directory from ``/tmp/topotests``. Subsequent runs of
``analyze.py`` with the same args will use that directory's contents instead
of copying any new results from ``/tmp``. Below is an example of this which also
shows the default behavior which is to display all failed and errored tests in
the run.

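For example, a first invocation can archive the results while analyzing them, and
later invocations can point at the same archive (the directory name below is just
a placeholder; the ``-A``/``--save`` and ``-r``/``--results`` options are described
later in this section):

.. code:: shell

   # First run: move /tmp/topotests into the chosen archive directory and analyze it
   analyze.py -A -r /tmp/topotests-run1

   # Later: re-analyze the already archived results
   analyze.py -r /tmp/topotests-run1
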
@@ -214,7 +216,7 @@ the run.
   bgp_gr_functionality_topo2/test_bgp_gr_functionality_topo2.py::test_BGP_GR_10_p2
   bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_routingTable

Here we see that 4 tests have failed. We an dig deeper by displaying the
Here we see that 4 tests have failed. We can dig deeper by displaying the
captured logs and errors. First let's redisplay the results enumerated by adding
the ``-E`` flag

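A sketch of what that enumerated listing might look like, reusing the failures
shown above (the ordinals and output lines are illustrative):

.. code:: shell

   analyze.py -r /tmp/topotests-run1 -E
   0 bgp_gr_functionality_topo2/test_bgp_gr_functionality_topo2.py::test_BGP_GR_10_p2
   1 bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_routingTable
   ...
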
@@ -249,9 +251,11 @@ the number of the test we are interested in along with ``--errmsg`` option.

   assert False

Now to look at the full text of the error for a failed test we use ``-T N``
where N is the number of the test we are interested in along with ``--errtext``
option.
Now to look at the error text for a failed test we can use ``-T RANGES`` where
``RANGES`` can be a number (e.g., ``5``), a range (e.g., ``0-10``), or a
comma-separated list of numbers and ranges (e.g., ``5,10-20,30``) of the test
cases we are interested in, along with the ``--errtext`` option. In the example
below we'll select the first failed test case.

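The range syntax also makes it possible to pull the error text for several of the
enumerated cases at once, for example (ordinals are illustrative):

.. code:: shell

   analyze.py -r /tmp/topotests-run1 -T 0,2-3 --errtext
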
.. code:: shell

@@ -277,8 +281,8 @@ option.
   [...]

To look at the full capture for a test including the stdout and stderr which
includes full debug logs, just use the ``-T N`` option without the ``--errmsg``
or ``--errtext`` options.
includes full debug logs, use the ``--full`` option, or specify a ``-T RANGES``
without specifying ``--errmsg`` or ``--errtext``.

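For instance, either of the following forms dumps the complete capture (the
ordinal is illustrative):

.. code:: shell

   # everything for all selected (failed and errored) tests
   analyze.py -r /tmp/topotests-run1 --full

   # everything for just the first enumerated test
   analyze.py -r /tmp/topotests-run1 -T 0
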
.. code:: shell

@@ -298,6 +302,46 @@ or ``--errtext`` options.
   --------------------------------- Captured Out ---------------------------------
   system-err: --------------------------------- Captured Err ---------------------------------

Filtered results
""""""""""""""""

There are 4 types of test results: [e]rrored, [f]ailed, [p]assed, and
[s]kipped. One can select the set of results to show with the ``-S`` or
``--select`` flags along with the letters for each type (i.e., ``-S efps``
would select all results). By default ``analyze.py`` will use ``-S ef`` (i.e.,
[e]rrors and [f]ailures) unless the ``--search`` filter is given, in which case
the default is to search all results (i.e., ``-S efps``).

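For example, to look only at tests which passed or were skipped in the archived
run:

.. code:: shell

   analyze.py -r /tmp/topotests-run1 -S ps
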
One can find all results which contain a ``REGEXP``. To filter results using a
regular expression use the ``--search REGEXP`` option. In this case, by default,
all result types will be searched for a match against the given ``REGEXP``. If a
test result's output contains a match it is selected into the set of results to show.

An example of using ``--search`` would be to search all test results for some
log message, perhaps a warning or error.

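A sketch of that kind of query (the regular expression is only an example):

.. code:: shell

   # select any result whose captured text matches the regexp
   analyze.py -r /tmp/topotests-run1 --search "WARNING"
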
Using XML Results File from CI
""""""""""""""""""""""""""""""

``analyze.py`` actually only needs the ``topotests.xml`` file to run. This is
very useful for analyzing a CI run failure where one only needs to download the
``topotests.xml`` artifact from the run and then pass that to ``analyze.py``
with the ``-r`` or ``--results`` option.

For local runs, if you wish to simply copy the ``topotests.xml`` file (leaving
the log files where they are), you can pass the ``-a`` (or ``--save-xml``)
option instead of the ``-A`` (or ``--save``) option.

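Assuming the artifact has been downloaded into the current directory, that might
look like:

.. code:: shell

   analyze.py -r ./topotests.xml
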
Analyze Results from a Container Run
""""""""""""""""""""""""""""""""""""

``analyze.py`` can also be used with ``docker`` or ``podman`` containers.
Everything works exactly as with a host run except that you specify the name of
the container, or the container-id, using the ``-C`` or ``--container`` option.
``analyze.py`` will then use the results inside that container's
``/tmp/topotests`` directory. It will extract and save those results when you
pass the ``-A`` or ``-a`` options just as with the host results.


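For example, to archive and analyze the results sitting inside a container (the
container name and archive directory are placeholders):

.. code:: shell

   # add --use-podman if the container was started with podman rather than docker
   analyze.py -C my-topotest-container -A -r /tmp/topotests-container-run
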
Execute single test
^^^^^^^^^^^^^^^^^^^

@@ -7,17 +7,61 @@
# Copyright (c) 2021, LabN Consulting, L.L.C.
#
import argparse
import glob
import atexit
import logging
import os
import re
import subprocess
import sys
import tempfile
from collections import OrderedDict

import xmltodict


def get_range_list(rangestr):
    result = []
    for e in rangestr.split(","):
        e = e.strip()
        if not e:
            continue
        if e.find("-") == -1:
            result.append(int(e))
        else:
            start, end = e.split("-")
            result.extend(list(range(int(start), int(end) + 1)))
    return result


def dict_range_(dct, rangestr, dokeys):
    keys = list(dct.keys())
    if not rangestr or rangestr == "all":
        for key in keys:
            if dokeys:
                yield key
            else:
                yield dct[key]
        return

    dlen = len(keys)
    for index in get_range_list(rangestr):
        if index >= dlen:
            break
        key = keys[index]
        if dokeys:
            yield key
        else:
            yield dct[key]


def dict_range_keys(dct, rangestr):
    return dict_range_(dct, rangestr, True)


def dict_range_values(dct, rangestr):
    return dict_range_(dct, rangestr, False)


def get_summary(results):
    ntest = int(results["@tests"])
    nfail = int(results["@failures"])
@@ -87,7 +131,7 @@ def get_filtered(tfilters, results, args):
            else:
                if not fname:
                    fname = cname.replace(".", "/") + ".py"
            if args.files_only or "@name" not in testcase:
            if "@name" not in testcase:
                tcname = fname
            else:
                tcname = fname + "::" + testcase["@name"]
@@ -95,9 +139,14 @@ def get_filtered(tfilters, results, args):
    return found_files


def dump_testcase(testcase):
    expand_keys = ("failure", "error", "skipped")
def search_testcase(testcase, regexp):
    for key, val in testcase.items():
        if regexp.search(str(val)):
            return True
    return False


def dump_testcase(testcase):
    s = ""
    for key, val in testcase.items():
        if isinstance(val, str) or isinstance(val, float) or isinstance(val, int):
@@ -113,23 +162,50 @@ def dump_testcase(testcase):

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-a",
        "--save-xml",
        action="store_true",
        help=(
            "Move [container:]/tmp/topotests/topotests.xml "
            "to --results value if --results does not exist yet"
        ),
    )
    parser.add_argument(
        "-A",
        "--save",
        action="store_true",
        help="Save /tmp/topotests{,.xml} in --rundir if --rundir does not yet exist",
        help=(
            "Move [container:]/tmp/topotests{,.xml} "
            "to --results value if --results does not exist yet"
        ),
    )
    parser.add_argument(
        "-F",
        "--files-only",
        "-C",
        "--container",
        help="specify docker/podman container of the run",
    )
    parser.add_argument(
        "--use-podman",
        action="store_true",
        help="print test file names rather than individual full testcase names",
        help="Use `podman` instead of `docker` for saving container data",
    )
    parser.add_argument(
        "-S",
        "--select",
        default="fe",
        help="select results combination of letters: 'e'rrored 'f'ailed 'p'assed 's'kipped.",
        help=(
            "select results combination of letters: "
            "'e'rrored 'f'ailed 'p'assed 's'kipped. "
            "Default is 'fe', unless --search or --time which default to 'efps'"
        ),
    )
    parser.add_argument(
        "-R",
        "--search",
        help=(
            "filter results to those which match a regex. "
            "All test text is search unless restricted by --errmsg or --errtext"
        ),
    )
    parser.add_argument(
        "-r",
@@ -143,59 +219,147 @@ def main():
        action="store_true",
        help="enumerate each item (results scoped)",
    )
    parser.add_argument("-T", "--test", help="print testcase at enumeration")
    parser.add_argument(
        "-T", "--test", help="select testcase at given ordinal from the enumerated list"
    )
    parser.add_argument(
        "--errmsg", action="store_true", help="print testcase error message"
    )
    parser.add_argument(
        "--errtext", action="store_true", help="print testcase error text"
    )
    parser.add_argument(
        "--full", action="store_true", help="print all logging for selected testcases"
    )
    parser.add_argument("--time", action="store_true", help="print testcase run times")

    parser.add_argument("-s", "--summary", action="store_true", help="print summary")
    parser.add_argument("-v", "--verbose", action="store_true", help="be verbose")
    args = parser.parse_args()

    if args.save and args.results and not os.path.exists(args.results):
        if not os.path.exists("/tmp/topotests"):
            logging.critical('No "/tmp/topotests" directory to save')
    if args.save and args.save_xml:
        logging.critical("Only one of --save or --save-xml allowed")
        sys.exit(1)
        subprocess.run(["mv", "/tmp/topotests", args.results])

    scount = bool(args.save) + bool(args.save_xml)

    #
    # Saving/Archiving results
    #

    docker_bin = "podman" if args.use_podman else "docker"
    contid = ""
    if args.container:
        # check for container existence
        contid = args.container
        try:
            # p =
            subprocess.run(
                f"{docker_bin} inspect {contid}",
                check=True,
                shell=True,
                errors="ignore",
                capture_output=True,
            )
        except subprocess.CalledProcessError:
            print(f"{docker_bin} container '{contid}' does not exist")
            sys.exit(1)
        # If you need container info someday...
        # cont_info = json.loads(p.stdout)

    cppath = "/tmp/topotests"
    if args.save_xml or scount == 0:
        cppath += "/topotests.xml"
    if contid:
        cppath = contid + ":" + cppath

    tresfile = None

    if scount and args.results and not os.path.exists(args.results):
        if not contid:
            if not os.path.exists(cppath):
                print(f"'{cppath}' doesn't exist to save")
                sys.exit(1)
            if args.save_xml:
                subprocess.run(["cp", cppath, args.results])
            else:
                subprocess.run(["mv", cppath, args.results])
        else:
            try:
                subprocess.run(
                    f"{docker_bin} cp {cppath} {args.results}",
                    check=True,
                    shell=True,
                    errors="ignore",
                    capture_output=True,
                )
            except subprocess.CalledProcessError as error:
                print(f"Can't {docker_bin} cp '{cppath}': %s", str(error))
                sys.exit(1)

if "SUDO_USER" in os.environ:
|
||||
subprocess.run(["chown", "-R", os.environ["SUDO_USER"], args.results])
|
||||
# # Old location for results
|
||||
# if os.path.exists("/tmp/topotests.xml", args.results):
|
||||
# subprocess.run(["mv", "/tmp/topotests.xml", args.results])
|
||||
elif not args.results:
|
||||
# User doesn't want to save results just use them inplace
|
||||
if not contid:
|
||||
if not os.path.exists(cppath):
|
||||
print(f"'{cppath}' doesn't exist")
|
||||
sys.exit(1)
|
||||
args.results = cppath
|
||||
else:
|
||||
tresfile, tresname = tempfile.mkstemp(
|
||||
suffix=".xml", prefix="topotests-", text=True
|
||||
)
|
||||
atexit.register(lambda: os.unlink(tresname))
|
||||
os.close(tresfile)
|
||||
try:
|
||||
subprocess.run(
|
||||
f"{docker_bin} cp {cppath} {tresname}",
|
||||
check=True,
|
||||
shell=True,
|
||||
errors="ignore",
|
||||
capture_output=True,
|
||||
)
|
||||
except subprocess.CalledProcessError as error:
|
||||
print(f"Can't {docker_bin} cp '{cppath}': %s", str(error))
|
||||
sys.exit(1)
|
||||
args.results = tresname
|
||||
|
||||
assert (
|
||||
args.test is None or not args.files_only
|
||||
), "Can't have both --files and --test"
|
||||
#
|
||||
# Result option validation
|
||||
#
|
||||
|
||||
count = 0
|
||||
if args.errmsg:
|
||||
count += 1
|
||||
if args.errtext:
|
||||
count += 1
|
||||
if args.full:
|
||||
count += 1
|
||||
if count > 1:
|
||||
logging.critical("Only one of --full, --errmsg or --errtext allowed")
|
||||
sys.exit(1)
|
||||
|
||||
if args.time and count:
|
||||
logging.critical("Can't use --full, --errmsg or --errtext with --time")
|
||||
sys.exit(1)
|
||||
|
||||
if args.enumerate and (count or args.time or args.test):
|
||||
logging.critical(
|
||||
"Can't use --enumerate with --errmsg, --errtext, --full, --test or --time"
|
||||
)
|
||||
sys.exit(1)
|
||||
|
||||
results = {}
|
||||
ttfiles = []
|
||||
if args.rundir:
|
||||
basedir = os.path.realpath(args.rundir)
|
||||
os.chdir(basedir)
|
||||
|
||||
newfiles = glob.glob("tt-group-*/topotests.xml")
|
||||
if newfiles:
|
||||
ttfiles.extend(newfiles)
|
||||
if os.path.exists("topotests.xml"):
|
||||
ttfiles.append("topotests.xml")
|
||||
else:
|
||||
if args.results:
|
||||
if os.path.exists(os.path.join(args.results, "topotests.xml")):
|
||||
args.results = os.path.join(args.results, "topotests.xml")
|
||||
if not os.path.exists(args.results):
|
||||
logging.critical("%s doesn't exist", args.results)
|
||||
sys.exit(1)
|
||||
ttfiles = [args.results]
|
||||
elif os.path.exists("/tmp/topotests/topotests.xml"):
|
||||
ttfiles.append("/tmp/topotests/topotests.xml")
|
||||
|
||||
if not ttfiles:
|
||||
if os.path.exists("/tmp/topotests.xml"):
|
||||
ttfiles.append("/tmp/topotests.xml")
|
||||
ttfiles = [args.results]
|
||||
|
||||
for f in ttfiles:
|
||||
m = re.match(r"tt-group-(\d+)/topotests.xml", f)
|
||||
@@ -203,6 +367,14 @@ def main():
        with open(f) as xml_file:
            results[group] = xmltodict.parse(xml_file.read())["testsuites"]["testsuite"]

    search_re = re.compile(args.search) if args.search else None

    if args.select is None:
        if search_re or args.time:
            args.select = "efsp"
        else:
            args.select = "fe"

    filters = []
    if "e" in args.select:
        filters.append("error")
@@ -214,15 +386,26 @@
        filters.append(None)

    found_files = get_filtered(filters, results, args)
    if found_files:
        if args.test is not None:
            if args.test == "all":
                keys = found_files.keys()

    if search_re:
        found_files = {
            k: v for k, v in found_files.items() if search_testcase(v, search_re)
        }

    if args.enumerate:
        # print the selected test names with ordinal
        print("\n".join(["{} {}".format(i, x) for i, x in enumerate(found_files)]))
    elif args.test is None and count == 0 and not args.time:
        # print the selected test names
        print("\n".join([str(x) for x in found_files]))
    else:
        keys = [list(found_files.keys())[int(args.test)]]
        for key in keys:
        rangestr = args.test if args.test else "all"
        for key in dict_range_keys(found_files, rangestr):
            testcase = found_files[key]
            if args.errtext:
            if args.time:
                text = testcase["@time"]
                s = "{}: {}".format(text, key)
            elif args.errtext:
                if "error" in testcase:
                    errmsg = testcase["error"]["#text"]
                elif "failure" in testcase:
@@ -230,9 +413,6 @@
                else:
                    errmsg = "none found"
                s = "{}: {}".format(key, errmsg)
            elif args.time:
                text = testcase["@time"]
                s = "{}: {}".format(text, key)
            elif args.errmsg:
                if "error" in testcase:
                    errmsg = testcase["error"]["@message"]
@@ -244,13 +424,6 @@ def main():
            else:
                s = dump_testcase(testcase)
            print(s)
    elif filters:
        if args.enumerate:
            print(
                "\n".join(["{} {}".format(i, x) for i, x in enumerate(found_files)])
            )
        else:
            print("\n".join(found_files))

    if args.summary:
        print_summary(results, args)