The routines that push and pop ignore files while traversing a
directory tree had some issues. In particular, when the starting
path was a directory that itself contained an ignore file, setting
up the initial list would sometimes push that ignore file before it
ought to be applied. Also, the pop function was not always matching
the right portion of the path, so in some cases it failed to pop
ignores from the list.
This adds some tests that exercise one particularly problematic case
and then fixes the problems that I could find related to it.
At some point, I'd like to isolate this ignore rule management
code and rewrite it, but that's a larger project; right now,
I'll opt to just try to fix the broken behaviors.
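For illustration, here is a minimal sketch of the kind of
component-aware check the pop side needs, so that leaving "src/a"
cannot match an ignore file pushed for "src/ab"; the struct and
function names here are hypothetical, not libgit2's actual internals:

```c
#include <string.h>

struct ignore_file {
	const char *path; /* directory this ignore file was pushed for */
};

/* Match only whole leading path components, so "src/a" does not
 * accidentally match an ignore file pushed for "src/ab". */
static int ignore_matches_dir(const struct ignore_file *file, const char *dir)
{
	size_t len = strlen(dir);

	if (strncmp(file->path, dir, len) != 0)
		return 0;

	/* the match must end exactly at a component boundary */
	return file->path[len] == '\0' || file->path[len] == '/';
}
```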
This rolls back the changes to fnmatch parsing from commit
2e40a60e84, except for the tests
that were added. Instead, this adds a couple of new flags that can
be passed in when attempting to parse an fnmatch pattern. Also,
this changes the pathspec match logic to special-case matching a
filename with a '!' prefix against a negative pattern.
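To make the special case concrete, here is a hedged sketch of the
idea using POSIX fnmatch(3); the struct and field names are
illustrative, not libgit2's actual pathspec types:

```c
#include <fnmatch.h>

struct match_pattern {
	const char *pattern; /* pattern body, with any '!' prefix stripped */
	int negative;        /* set when the original pattern was negated */
};

static int pathspec_matches(const struct match_pattern *p, const char *filename)
{
	/* a filename that literally begins with '!' can be matched
	 * against the body of a negative pattern directly */
	if (p->negative && filename[0] == '!')
		return fnmatch(p->pattern, filename + 1, 0) == 0;

	return fnmatch(p->pattern, filename, 0) == 0;
}
```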
This fixes the build.
`git_config_set_string(config, "config.section", "")` fails when
escaping the value.
The buffer in `escape_value` is allocated without NUL termination, and
in the case of an empty string, 0 is passed as the buffer size to
`git_buf_grow`. `git_buf_detach` returns NULL when the allocated size
is 0, and that leads to an error return from the `GITERR_CHECK_ALLOC`
called after `escape_value`.
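A minimal sketch of the failure mode and the guard, assuming a
simplified stand-in for `escape_value` (the real code in
`config_file.c` also performs the actual escaping):

```c
#include <stdlib.h>
#include <string.h>

static char *escape_value_sketch(const char *value)
{
	size_t len = strlen(value);
	char *out;

	/* an empty value must still yield a valid empty string; growing
	 * a buffer to size 0 and detaching it returns NULL, which trips
	 * the caller's allocation check */
	out = malloc(len + 1); /* +1 leaves room for the terminator */
	if (!out)
		return NULL;

	memcpy(out, value, len);
	out[len] = '\0';
	return out;
}
```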
The change in `config_file.c` was suggested by Russell Belfer <rb@github.com>.
The test coverage for ambiguous OIDs was pretty thin. This adds
a bunch of new objects (within one pack, across packs, and loose)
whose IDs match in their first 8 characters, so that we can test
various cases of ambiguous lookups.
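For reference, this is the kind of lookup those tests exercise,
using the public odb API; the 8-character prefix in the usage note
below is made up, not a real object id:

```c
#include <git2.h>

/* returns 1 if the prefix is ambiguous, 0 if not, -1 on error */
static int is_prefix_ambiguous(git_odb *odb, const char *prefix)
{
	git_oid oid;
	git_odb_object *obj = NULL;
	int error;

	if (git_oid_fromstrn(&oid, prefix, 8) < 0)
		return -1;

	error = git_odb_read_prefix(&obj, odb, &oid, 8);
	if (error == 0)
		git_odb_object_free(obj);

	return error == GIT_EAMBIGUOUS ? 1 : (error == 0 ? 0 : -1);
}
```

For example, `is_prefix_ambiguous(odb, "aabbccdd")` would return 1
once two of the new objects share that prefix.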
In git_diff_paired_foreach, temporarily re-sort the
index->workdir diff list by index path so that we can
track a rename in the workdir from head->index->workdir.
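A hedged sketch of the re-sort, with illustrative types rather than
git_diff_paired_foreach's actual internals:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
	const char *old_path; /* index-side path */
	const char *new_path; /* workdir-side path */
} delta_sketch;

static int cmp_by_old_path(const void *a, const void *b)
{
	return strcmp(((const delta_sketch *)a)->old_path,
	              ((const delta_sketch *)b)->old_path);
}

static void paired_walk(delta_sketch *head2idx, size_t n1,
                        delta_sketch *idx2wd, size_t n2)
{
	/* order the index->workdir list by its index-side path so the
	 * two lists can be walked in parallel and a rename followed
	 * from head to index to workdir */
	qsort(idx2wd, n2, sizeof(*idx2wd), cmp_by_old_path);

	/* ... walk both lists here, then restore the original order ... */
	(void)head2idx; (void)n1;
}
```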
Create a new section of clar tests, "stress", that will default to
being off, where we can put slow tests that push the library for
performance-testing purposes.
When using a rename source that is actually a to-be-split record,
we have to update the best-fit mapping data in both the case where
the target is also a split record and the case where the target
is a simple added record. Before this commit, we were only doing
the update when the target was itself a split record (and even in
that case, the test was slightly wrong).
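As an illustration of the invariant, a hedged sketch (the struct and
function names are hypothetical, not the actual rename-detection code):

```c
struct best_fit {
	int source_idx; /* best rename source seen so far */
	int score;      /* its similarity score */
};

/* This must run for every candidate target, whether that target is
 * itself a to-be-split record or a simple added record; previously
 * only the split-record case was updated. */
static void update_best_fit(struct best_fit *bf, int source_idx, int score)
{
	if (score > bf->score) {
		bf->source_idx = source_idx;
		bf->score = score;
	}
}
```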
After doing further profiling, I found that a lot of time was
being spent inserting hashes into the file hash signature when
using the rolling hash, because the rolling-hash approach
generates one hash per byte of the file instead of one per
run/line of data.
To optimize this, I decided to convert back to a run-based file
signature algorithm, which is more like core Git's.
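A minimal sketch of the run-based idea, assuming a simple
line-delimited notion of a "run" and an FNV-1a stand-in for the
hash; this is not the actual libgit2 signature code:

```c
#include <stddef.h>
#include <stdint.h>

static uint32_t hash_run(const char *p, size_t len)
{
	uint32_t h = 2166136261u; /* FNV-1a offset basis */

	while (len--)
		h = (h ^ (uint8_t)*p++) * 16777619u;
	return h;
}

/* emit one hash per line instead of one hash per byte */
static void signature_add_runs(const char *data, size_t size,
                               void (*add_hash)(uint32_t))
{
	const char *start = data, *end = data + size, *p;

	for (p = data; p < end; p++) {
		if (*p == '\n') {
			add_hash(hash_run(start, (size_t)(p - start + 1)));
			start = p + 1;
		}
	}
	if (start < end) /* trailing run with no newline */
		add_hash(hash_run(start, (size_t)(end - start)));
}
```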
After changing this, a number of the existing tests started to
fail. In some cases, this appears to be because the test was coded
too specifically to the particular results of the file similarity
metric; in other cases, there appear to have been bugs in the core
rename detection code, where the expected results were only being
generated by coincidence of the file similarity scoring.
This renames all the variables in the core rename detection code
to be more consistent and hopefully easier to follow. That made it
a bit easier to reason about the behavior of that code and to fix
the problems that I was seeing. I think it's in better shape now.
There are a couple of tests now that attempt to stress-test the
rename detection code, and they are quite slow. Most of the time
is spent setting up the test data on disk and in the index. When
we roll out performance improvements for index insertion, I hope
they will also speed up these tests.
This option is already present in the CMake config, but it was missing from `Makefile.embed`, and its absence would cause all kinds of weird failures when compiling rugged on Windows with the Ruby DevKit.
The size data in the index may not reflect the actual size of the
blob data from the ODB when content filtering comes into play.
This commit fixes rename detection to use the actual blob size when
calculating data signatures instead of the value from the index.
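The principle, sketched against the public API of that era
(`git_off_t` and friends) with error handling trimmed: take the size
from the blob itself rather than trusting the index entry when
filtering may have changed it.

```c
#include <git2.h>

static git_off_t actual_blob_size(git_repository *repo, const git_oid *id)
{
	git_blob *blob = NULL;
	git_off_t size = -1;

	if (git_blob_lookup(&blob, repo, id) == 0) {
		/* the real ODB size, not the index entry's file_size */
		size = git_blob_rawsize(blob);
		git_blob_free(blob);
	}
	return size;
}
```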
Because of a misunderstanding on my part, I first converted the
git_index_add_bypath API to use the post-filtered blob data size
in creating the index entry. I backed that change out, but I
kept the overall refactoring of that routine and the new internal
git_blob__create_from_paths API because it eliminates an extra
stat() call from the code that adds a file to the index.
The existing tests actually cover this code path, at least when
running on Windows, so at this point I'm not adding new tests to
cover the changes.
The previous fix for checking file sizes during rename detection
always loaded the blob. In this version, if the odb backend can
get the object header without loading the whole object into memory,
then we'll just use that, so that possible rename sources and
targets can be eliminated without loading them.
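A sketch of the cheaper check using the public odb API of the same
era (`git_odb_read_header` and `git_otype`); backends that implement
header reads satisfy this without touching the object body:

```c
#include <git2.h>

/* fetch only the size (and type) of an object, without its body */
static int object_size_only(git_off_t *out, git_odb *odb, const git_oid *id)
{
	size_t len;
	git_otype type; /* the header read reports the type as well */
	int error = git_odb_read_header(&len, &type, odb, id);

	if (error == 0)
		*out = (git_off_t)len;
	return error;
}
```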