Introduce a new test suite "odb::backend::simple", which utilizes the
fake backend to exercise the ODB abstraction layer. While such tests
already exist for the case where multiple backends are put together, no
direct testing for functionality with a single backend exists yet.
The fake backend currently implements all reading functions except for
the `exists_prefix` one. Implement it to enable further testing of the
ODB layer.
The `search_object` function takes the OID length as one of its
parameters, where the length is counted in hex characters with a maximum
of `GIT_OID_HEXSZ`. The `exists` function of the fake backend used
`GIT_OID_RAWSZ` though, which is only half that size, leading to only the
first half of the OID being used when looking up the matching object.
In order to test the ODB prefix functions, we need to be able
to detect ambiguous prefixes in case multiple objects with the same
prefix exist in the fake ODB. Extend `search_object` to detect ambiguous
queries and have callers return its error code instead of always
returning `GIT_ENOTFOUND`.
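As a rough sketch of what this could look like, assuming the fake backend keeps its objects in a NULL-terminated array and exposes an internal `search_object` helper (the `fake_object`/`fake_backend` types and helper names below are illustrative, not the actual test code; `GIT_EAMBIGUOUS`, `GIT_ENOTFOUND` and the `git_oid` helpers are libgit2's own):

```c
#include <string.h>
#include <git2.h>
#include <git2/sys/odb_backend.h>

/* illustrative helper types, not part of libgit2's public API */
typedef struct {
	const char *oid;     /* hex OID of the fake object */
	const char *content; /* blob contents returned for that OID */
} fake_object;

typedef struct {
	git_odb_backend parent;
	const fake_object *objects; /* NULL-terminated */
} fake_backend;

static int search_object(const fake_object **out, fake_backend *fake,
	const git_oid *oid, size_t len)
{
	const fake_object *obj, *found = NULL;
	char hex[GIT_OID_HEXSZ + 1] = { 0 };

	if (len > GIT_OID_HEXSZ)
		return -1;
	git_oid_nfmt(hex, len, oid);

	for (obj = fake->objects; obj && obj->oid; obj++) {
		if (strncmp(obj->oid, hex, len) != 0)
			continue;
		if (found)
			return GIT_EAMBIGUOUS; /* second match: the prefix is ambiguous */
		found = obj;
	}

	if (!found)
		return GIT_ENOTFOUND;

	*out = found;
	return GIT_OK;
}

/* callers such as `exists_prefix` then simply forward the error code */
static int fake_backend__exists_prefix(git_oid *out, git_odb_backend *backend,
	const git_oid *short_id, size_t len)
{
	const fake_object *obj;
	int error;

	if ((error = search_object(&obj, (fake_backend *)backend, short_id, len)) < 0)
		return error;

	if (out)
		git_oid_fromstr(out, obj->oid);

	return GIT_OK;
}
```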
Initialization of the `git_proxy_options` structure is never tested
anywhere. Include it in our usual initialization test in
"core::structinit::compare".
A newly added test uses the `git_repository_new` function without the
corresponding header file being included. While this happens to work
because the compiler falls back to an implicit function declaration, we
should obviously just include the file declaring the function.
Prior to pulling out and extending the fake backend, it was quite
cumbersome to write tests for very specific backend scenarios. Now that
we have made the backend more generic, doing so has become much easier.
As such, this commit adds multiple tests for scenarios involving
multiple ODB backends.
The changes also include a test for a very targeted scenario: when one
backend finds a matching object via `read_prefix`, but the last backend
returns `GIT_ENOTFOUND` and object hash verification is turned off, we
fail to reset the error code to `GIT_OK`. This causes us to segfault
later on by double-freeing the returned object.
Right now, the fake backend is quite limited in how it works: we pass it
an OID which it is to return later, as well as an error code we want it
to return. While this is sufficient for existing tests, we can make the
fake backend a bit more generic in order to allow testing additional
scenarios.
To do so, we change the backend to not accept an error code and OID
which it is to return for queries, but instead a simple array of OIDs
with their respective blob contents. On each query, the fake backend
simply iterates through this array and returns the first matching
object.
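Sketched out, the data-driven setup could look as follows (the `fake_object` layout and the `build_fake_backend` helper name are assumptions about the test helper, not public libgit2 API):

```c
/* A NULL-terminated array of (OID, content) pairs replaces the single
 * OID plus canned error code the backend used to be fed with. */
typedef struct {
	const char *oid;     /* hex OID of the fake object */
	const char *content; /* blob contents returned for that OID */
} fake_object;

static const fake_object objects[] = {
	{ "1234567890111111111111111111111111111111", "first" },
	{ "1234567890222222222222222222222222222222", "second" },
	{ NULL, NULL }
};

/* A helper along the lines of build_fake_backend(&backend, objects) then
 * iterates this array on every read/exists query and returns the first
 * matching entry. */
```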
In order to make the fake backend more useful, we want to enable it
to hold multiple object references. To do so, we need to decouple it
from the single fake OID it currently holds, which we simply move up
into the calling tests.
The fake backend used by the test suite `odb::backend::nonrefreshing` is
useful for writing low-level tests of the ODB layer. As such, we move
its implementation into its own `backend_helpers` module.
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout`
invocations, we remove files which were previously staged. But while
doing so, we unfortunately also remove unstaged files in a directory
which contains at least one staged file, resulting in potential data
loss.
This commit adds two tests to verify this behavior.
Initially, the setting was used solely to enable the use of `fsync()`
when creating objects. Since then, its use has been extended to also
cover references and index files. As the option is not yet part of any
release, we can still correct this by renaming it to something more
sensible, indicating that it does not only relate to objects.
This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also
move the variable from the object to the repository source code.
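For reference, callers enable the renamed option through the global option setter, e.g. (minimal usage sketch):

```c
#include <git2.h>

int main(void)
{
	git_libgit2_init();

	/* fsync() data written to the gitdir: loose objects, references
	 * and the index alike */
	git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);

	/* ... perform object, reference and index writes ... */

	git_libgit2_shutdown();
	return 0;
}
```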
The current write test does not trigger some edge-cases in the index
version 4 path compression code. Rewrite the test to start off with an
empty standard repository and create index entries with interesting
paths itself. This allows for more fine-grained control over the checked
paths.
Furthermore, we now also verify that entry paths are actually
reconstructed correctly.
While we do have a test which checks whether a written index of version
4 has the correct version set, we do not check whether this actually
enables path compression for index entries. This commit adds a new test
that adds a number of index entries sharing a common path prefix to the
index and subsequently flushes it to disk. With the path compression
enabled by index version 4, only the last few bytes of each of these
paths actually have to be written to the index, saving a lot of disk
space. For the test, the resulting size difference is about an order of
magnitude, allowing us to easily verify the behavior without taking a
deeper look at the actual on-disk contents.
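To illustrate the mechanism the test exercises, here is a conceptual sketch of how two consecutive v4 entries share their path prefix (hypothetical helper, not libgit2's actual writer code):

```c
#include <stdio.h>
#include <string.h>

/* length of the prefix shared by two consecutive entry paths */
static size_t common_prefix(const char *a, const char *b)
{
	size_t n = 0;
	while (a[n] && a[n] == b[n])
		n++;
	return n;
}

int main(void)
{
	const char *prev = "sub/dir/with/long/prefix/file-a.txt";
	const char *cur  = "sub/dir/with/long/prefix/file-b.txt";
	size_t shared = common_prefix(prev, cur);

	/* Index v4 stores a varint telling how many trailing bytes to drop
	 * from the previous path, followed only by the new suffix, instead
	 * of the full path as in versions 2 and 3. */
	printf("drop %zu trailing bytes of the previous path, append \"%s\"\n",
	       strlen(prev) - shared, cur + shared);
	return 0;
}
```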
While we have a simple test to determine whether we can write an index
of version 4, we never verified that we are able to read this kind of
index (and in fact, we were not able to do so). Add a new repository
which has an index of version 4. This repository is then read from a new
test.
The init and cleanup functions for test suites are usually prepended to
our actual tests. The index::version test suite does not adhere to this
style. Fix this.
When encoding varints into a buffer, we want to make sure that the
required buffer space does not exceed what is actually available. Our
current check does not do the right thing, though, in that it fails to
account for the fact that our `pos` variable counts down instead of up.
As such, we require too much memory for small varints and not enough
memory for big varints.
Fix the issue by correctly calculating the required size as
`(sizeof(varint) - pos)`. Add a test which failed before.
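A sketch of such an encoder, assuming git's offset-style varint format where a scratch buffer is filled from its end and `pos` therefore counts downwards (illustrative, not necessarily the exact libgit2 code):

```c
#include <stdint.h>
#include <string.h>

static int encode_varint(unsigned char *buf, size_t bufsize, uintmax_t value)
{
	unsigned char varint[16];
	unsigned pos = sizeof(varint) - 1;

	/* fill the scratch buffer back to front */
	varint[pos] = value & 0x7f;
	while (value >>= 7)
		varint[--pos] = 0x80 | (--value & 0x7f);

	if (buf) {
		/* since `pos` counts down, the encoded length is the distance
		 * from `pos` to the end of the scratch buffer */
		if (bufsize < (sizeof(varint) - pos))
			return -1;
		memcpy(buf, varint + pos, sizeof(varint) - pos);
	}

	return (int)(sizeof(varint) - pos);
}
```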
Upstream git.git has changed the way packfiles are named. Previously,
packfiles were named after a hash of the contained objects' OIDs; this
has since been changed to use the hash of the complete packfile instead.
See 1190a1acf (pack-objects: name pack files after trailer hash,
2013-12-05) in the git.git repository for more information on this
change.
This commit changes our logic to match the behavior of core git.
To determine if a repository is a worktree or not, we currently check
for the existence of a "gitdir" file inside of the repository's gitdir.
While this is sufficient for non-broken repositories, we have at least
one case of a subtly broken repository where a gitdir file exists inside
of a submodule's git directory. This causes us to misidentify the
submodule as a worktree.
While this is not really a fault of ours, we can do better here by
observing that a repository can only ever be a worktree if its common
directory and dotgit directory differ. This allows us to make our
check whether a repo is a worktree or not more strict by doing a simple
string comparison of these two directories. This will also allow us to
do the right thing in the above case of a broken repository, as for
submodules these directories will be the same. At the same time, this
allows us to skip the `stat` check for the "gitdir" file for most
repositories.
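In other words, the stricter check boils down to a plain path comparison along these lines (an illustrative sketch with hypothetical names, not the exact libgit2 code):

```c
#include <stdbool.h>
#include <string.h>

/* Submodules and regular repositories have commondir == gitdir, so for
 * them no "gitdir" file lookup is needed at all; only when the two paths
 * differ can the repository be a worktree. */
static bool looks_like_worktree(const char *gitdir, const char *commondir)
{
	return strcmp(gitdir, commondir) != 0;
}
```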
The current signature of `git_worktree_prune` accepts a flags field to
alter its behavior. This is not as flexible as we'd like it to be when
we want to enable passing additional options in the future. As the
function has not been part of any release yet, we are still free to
alter its current signature. This commit does so by using our usual
pattern of an options structure, which is easily extendable without
breaking the API.
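A usage sketch of the resulting pattern, assuming the `GIT_WORKTREE_PRUNE_OPTIONS_INIT` initializer and the prune flags as found in current libgit2:

```c
#include <git2.h>

static int force_prune_worktree(git_worktree *wt)
{
	git_worktree_prune_options opts = GIT_WORKTREE_PRUNE_OPTIONS_INIT;

	/* Behavior is toggled via flags; new fields can later be appended to
	 * the structure without breaking existing callers, since every caller
	 * passes the version it was compiled against. */
	opts.flags = GIT_WORKTREE_PRUNE_VALID | GIT_WORKTREE_PRUNE_LOCKED;

	return git_worktree_prune(wt, &opts);
}
```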
When creating a new worktree, there is a potential race between us
creating the worktree and another process trying to delete it while it
is still being created. As such, the upstream git project has
introduced a flag `git worktree add --locked`, which will cause the
newly created worktree to be locked immediately after its creation. This
mitigates the race condition.
We want to be able to mirror the same behavior. As such, a new flag
`locked` is added to the options structure of `git_worktree_add` which
allows the user to enable this behavior.
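A usage sketch, assuming the options structure and a lock field as exposed by current libgit2 (where the field is spelled `lock`):

```c
#include <git2.h>

static int add_locked_worktree(git_worktree **out, git_repository *repo,
	const char *name, const char *path)
{
	git_worktree_add_options opts = GIT_WORKTREE_ADD_OPTIONS_INIT;

	/* Lock the worktree right as it is created, closing the race window
	 * against a concurrent prune or removal. */
	opts.lock = 1;

	return git_worktree_add(out, repo, name, path, &opts);
}
```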