When adding a new entry to an existing index via `git_index_read_index`,
be sure to remove the tree cache entry for that new path. This will
mark all parent trees as dirty.
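A minimal sketch of that invalidation, assuming the internal tree-cache helper and `git_index` field keep their usual names (`git_tree_cache_invalidate_path`, `index->tree`); the surrounding insertion code is omitted:

```c
/* Sketch: after adding `entry` to the destination index, drop the cached
 * tree covering its path so that every parent tree is considered dirty
 * and gets rewritten on the next tree write. */
if (index->tree != NULL)
	git_tree_cache_invalidate_path(index->tree, entry->path);
```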
Clear any error state on each iteration. When one of the iterators is
exhausted it reports `GIT_ITEROVER`, and we need to reset that error to 0,
lest we stop the whole process prematurely.
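The pattern looks roughly like this (a sketch against libgit2's internal iterator API; the wrapper name is made up for illustration):

```c
#include "iterator.h"   /* internal header providing git_iterator */

/* Sketch: advance one iterator, treating exhaustion (GIT_ITEROVER) as
 * "no more entries" rather than as a failure that aborts the caller. */
static int advance_or_finish(const git_index_entry **entry, git_iterator *iter)
{
	int error = git_iterator_advance(entry, iter);

	if (error == GIT_ITEROVER) {
		*entry = NULL;   /* this iterator has no entries left */
		error = 0;       /* reset so the surrounding loop continues */
	}

	return error;
}
```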
Read a tree into an index using `git_index_read_index` (by reading
a tree into a new index, then reading that index into the current
index), then write the index back out, ensuring that our new index
is treesame to the tree that we read.
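Roughly, the round trip looks like this (a sketch using the public API and the clar test helpers; `repo` and `tree` are assumed to come from the test fixture):

```c
git_index *repo_index = NULL, *tree_index = NULL;
git_oid written_tree_id;

/* read the tree into a fresh, in-memory index */
cl_git_pass(git_index_new(&tree_index));
cl_git_pass(git_index_read_tree(tree_index, tree));

/* read that index into the repository's index and write it back out */
cl_git_pass(git_repository_index(&repo_index, repo));
cl_git_pass(git_index_read_index(repo_index, tree_index));
cl_git_pass(git_index_write_tree(&written_tree_id, repo_index));

/* the round-tripped tree must be treesame to the one we started from */
cl_assert(git_oid_equal(git_tree_id(tree), &written_tree_id));

git_index_free(tree_index);
git_index_free(repo_index);
```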
It looks like we're fetching the operation and not doing anything
with it, when in fact we are only asserting that it's not null. Simply
assert that we are within the bounds of the operations array instead of
using the `git_array_get` macro to do this for us.
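In other words, something along these lines (a sketch with assumed struct and field names):

```c
/* before: fetch the operation only to assert that the lookup worked */
operation = git_array_get(rebase->operations, rebase->current);
assert(operation != NULL);

/* after: state the actual invariant directly */
assert(rebase->current < git_array_size(rebase->operations));
```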
When we want to remove the file, use the basename as the name of the
entry to remove, instead of the full path, which includes the directories
we've pushed onto the stack.
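A sketch of that removal, assuming the entries are held in a `git_treebuilder` for the containing directory and that `update->path` carries the full path:

```c
/* The builder on top of the stack represents the containing tree, so the
 * entry is keyed by its basename, not by the full path. */
const char *basename = strrchr(update->path, '/');
basename = basename ? basename + 1 : update->path;

if ((error = git_treebuilder_remove(builder, basename)) < 0)
	goto done;
```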
Instead of going through the usual steps of reading a tree recursively
into an index, modifying it and writing it back out as a tree, introduce
a function to perform simple updates more efficiently.
`git_tree_create_updated` avoids reading trees which are not modified
and supports upsert and delete operations. It is not as versatile as
modifying the index, but it makes some common operations much more
efficient.
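A usage sketch (the `repo`, `baseline` tree and `blob` are assumed to exist already; error handling is abbreviated):

```c
git_tree_update updates[2];
git_oid new_tree_id;
int error;

/* add or replace a blob at a (possibly nested) path */
updates[0].action   = GIT_TREE_UPDATE_UPSERT;
updates[0].filemode = GIT_FILEMODE_BLOB;
updates[0].path     = "docs/README.md";
git_oid_cpy(&updates[0].id, git_blob_id(blob));

/* delete an existing entry (id and filemode are filled in defensively) */
updates[1].action   = GIT_TREE_UPDATE_REMOVE;
updates[1].filemode = GIT_FILEMODE_BLOB;
updates[1].path     = "old/file.txt";
memset(&updates[1].id, 0, sizeof(updates[1].id));

error = git_tree_create_updated(&new_tree_id, repo, baseline, 2, updates);
```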
`test_commit_commit__create_initial_commit_parent_not_current` was not correctly
testing that `HEAD` was left unchanged. Now we grab the oid that `HEAD` points to
before the call to `git_commit_create` and the oid it points to afterwards,
and compare the two.
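Sketched against the public API (the signatures `author`/`committer` and the `tree` are assumed to come from the test fixture):

```c
git_oid head_before, head_after, commit_id;

/* record what HEAD resolves to before the call */
cl_git_pass(git_reference_name_to_id(&head_before, repo, "HEAD"));

/* attempt to create a parentless commit that would update HEAD; the
 * call's return value is checked separately in the test */
git_commit_create(&commit_id, repo, "HEAD", author, committer,
	NULL, "initial commit", tree, 0, NULL);

/* HEAD must still resolve to the same commit afterwards */
cl_git_pass(git_reference_name_to_id(&head_after, repo, "HEAD"));
cl_assert(git_oid_equal(&head_before, &head_after));
```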
When calling `git_commit_create` with an empty `parents` array and `parent_count == 0`,
the call will segfault at https://github.com/libgit2/libgit2/blob/master/src/commit.c#L107
when it tries to compare `current_id` to a null parent oid.
This just puts in a check to stop that segfault.
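The guard amounts to something of this shape (a hypothetical sketch of the check, not the exact libgit2 code):

```c
/* only dereference parents[0] when there actually is a first parent to
 * compare the ref's current tip against */
if (current_id != NULL &&
    (parent_count == 0 ||
     git_oid_cmp(current_id, git_commit_id(parents[0])) != 0)) {
	/* the new commit would not fast-forward the ref being updated */
	error = GIT_EMODIFIED;
	goto cleanup;
}
```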
When determining diffs between two iterators, we may need to
recurse into an unmatched directory of the "new" iterator when
it is a prefix of the current item of the "old" iterator, or
when the user requested untracked/ignored changes and the
directory itself is untracked/ignored.
When advancing into the directory and no files are found, we will
get back `GIT_ENOTFOUND`. If so, we simply skip the directory,
handling any resulting unmatched old items in the next iteration.
Every other outcome of `iterator_advance_into` is handled by the
caller: on success it compares the first directory entry of the
"new" iterator, and on any error other than `GIT_ENOTFOUND` it
aborts.
Improve the readability of the code to make the above logic clearer.
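The resulting control flow, sketched with the helper names used above (surrounding diff state elided):

```c
/* advance the "new" iterator into the unmatched directory */
error = iterator_advance_into(&entry, new_iter);

if (error == GIT_ENOTFOUND) {
	/* the directory contains no files: skip it and let the next
	 * iteration handle any now-unmatched "old" items */
	giterr_clear();
	error = iterator_advance(&entry, new_iter);
}

/* on success the caller compares the directory's first entry from the
 * "new" iterator; any other error aborts the diff */
return error;
```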
We compute offsets by executing `off |= (*delta++ << 24)` for
several shift constants, where `off` is of type `size_t` and `delta`
is of type `unsigned char *`. The integer promotions and usual
arithmetic conversions (see ISO C89 §3.2.1.5 "Usual arithmetic
conversions") kick in here: `*delta` is promoted to the signed type
`int` before the shift, and the result is only converted to the wider
unsigned type when it is OR'd into `off`.
This promotion to `int` can produce wrong offset calculations for
byte values that have their most significant bit set.
Fix the issue by making the constants `unsigned long`, so that the
computation is carried out in `unsigned long` instead of `int`.
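A self-contained illustration of the hazard; the explicit cast below is one way to get the `unsigned long` arithmetic described above and is not necessarily the literal form of the change:

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
	const unsigned char bytes[] = { 0x78, 0x56, 0x34, 0xff };
	const unsigned char *delta = bytes;
	size_t off;

	off  = *delta++;
	off |= (*delta++ << 8);
	off |= (*delta++ << 16);
	/* off |= (*delta++ << 24);  0xff << 24 overflows the promoted int */

	/* widen the byte before shifting so the whole expression is
	 * evaluated as unsigned long */
	off |= ((unsigned long)*delta++ << 24);

	printf("off = %#zx\n", off);   /* prints 0xff345678 */
	return 0;
}
```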
An object's size is computed by reading the object header's size
field until the most significant bit is not set anymore. To get
the total size, we increase the shift on each iteration and add
the shifted value to the total size.
We read the current byte into a variable of type `unsigned char`,
take all of its bits except the most significant one and shift the
result. The shift grows up to a maximum of 60, which exceeds the
width of the value's (promoted) type, resulting in undefined
behavior.
Fix the issue by instead reading the values into a variable of
type `unsigned long`, which matches the required width. This is
equivalent to git.git, which uses an `unsigned long` as well.
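A sketch of the parse with the widened accumulator (illustrative names, not libgit2's actual ones):

```c
#include <stddef.h>

/* Decode the size field of a pack object header: four size bits in the
 * first byte, then seven more bits per continuation byte. */
static int parse_object_size(size_t *out, const unsigned char *buf, size_t len)
{
	unsigned long c;     /* matches git.git's choice for the accumulator */
	size_t size, shift = 4, used = 0;

	if (len == 0)
		return -1;

	c = buf[used++];     /* bits 4-6 carry the object type (ignored here) */
	size = c & 0x0f;

	while (c & 0x80) {   /* most significant bit set: more bytes follow */
		if (used >= len)
			return -1;
		c = buf[used++];
		size += (c & 0x7f) << shift;
		shift += 7;
	}

	*out = size;
	return 0;
}
```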