The `normalize_find_opts` function in theory allows for the
incoming diff to have no repository. When the caller does not
pass in diff find options or if the GIT_DIFF_FIND_BY_CONFIG value
is set, though, we try to derive the configuration from the
diff's repository configuration without first verifying that the
repository is actually set to a non-NULL value.
Fix this issue by explicitly checking whether the repository is set
and, if it is not, falling back to a default value of
GIT_DIFF_FIND_RENAMES.
Convert `rebase_alloc` to use our usual error propagation
patterns, that is, accept an out-parameter and return an error
code to be checked by the caller. This allows us to use
the GITERR_CHECK_ALLOC macro, which helps static analysis.
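A minimal sketch of that pattern follows; the names are illustrative stand-ins rather than the actual libgit2 code, and the check macro only mimics the spirit of `GITERR_CHECK_ALLOC` (the real macro also records an out-of-memory error):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for GITERR_CHECK_ALLOC: bail out with an error code when an
 * allocation failed. */
#define CHECK_ALLOC(ptr) do { if ((ptr) == NULL) return -1; } while (0)

typedef struct { char *orig_head_name; } rebase_sketch;

/* Out-parameter style: the result goes into `*out`, the return value is an
 * error code that the caller must check. */
static int rebase_alloc_sketch(rebase_sketch **out, const char *branch_name)
{
	rebase_sketch *rebase = calloc(1, sizeof(*rebase));
	CHECK_ALLOC(rebase);

	rebase->orig_head_name = strdup(branch_name);
	if (rebase->orig_head_name == NULL) {
		free(rebase);
		return -1;
	}

	*out = rebase;
	return 0;
}
```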
Set the error code when an error occurs in any of the called
functions. This ensures we pass the error up to callers and
actually free the remote when an error occurs.
The overflow check in `read_reuc` tries to verify if the
`git__strtol32` parses an integer bigger than UINT_MAX. The `tmp`
variable is cast to an unsigned int for this and then checked
for being greater than UINT_MAX, which obviously can never be
true.
Fix this by instead fixing the `mode` field's size in `struct
git_index_reuc_entry` to `uint32_t`. We can now parse the int
with `git__strtol64`, which can never return a value bigger than
`UINT32_MAX`, and additionally check whether the returned value is
smaller than zero.
We do not need to handle overflows explicitly here, as
`git__strtol64` returns an error when the returned value would
overflow.
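A standalone sketch of that parse-and-range-check, with `strtoll` standing in for the internal `git__strtol64` (which reports overflow itself) and assuming the mode is stored as octal text:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse an octal mode field into a uint32_t, rejecting negative and
 * out-of-range values. */
static int parse_mode_sketch(uint32_t *out, const char *buf)
{
	char *end;
	long long tmp;

	errno = 0;
	tmp = strtoll(buf, &end, 8);

	if (errno != 0 || end == buf || tmp < 0 || tmp > UINT32_MAX)
		return -1;

	*out = (uint32_t)tmp;
	return 0;
}
```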
The fail-label of `reflog_parse` explicitly checks the entry
pointer for NULL before freeing it. When we jump to the label,
though, the variable is guaranteed to be set to a valid, non-NULL
pointer: if the allocation fails we immediately return with an
error code, and if the loop was not entered we return with a
success code, without executing the label's code.
Remove the useless NULL-check to silence Coverity.
When invoking `diff_print_info_init_frompatch` it is obvious that
the patch should be non-NULL. We explicitly check if the variable
is set and continue afterwards, happily dereferencing the
potential NULL-pointer.
Fix this by instead asserting that patch is set. This also
silences Coverity.
The function `compute_write_order` may return a `NULL`-pointer
when an error occurs. In such cases we jump to the `done`-label
where we try to clean up allocated memory. Unfortunately, we try
to deallocate the `write_order` array, which may be NULL here.
Fix this error by returning early instead of jumping to the
`done` label. There is no data to be cleaned up anyway.
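A condensed illustration of the shape of that fix (simplified, not the actual pack-writing code): bail out immediately when the helper fails, before there is anything to clean up.

```c
#include <stdlib.h>

/* Hypothetical helper: may return NULL on failure. */
static unsigned int *compute_write_order_sketch(size_t n)
{
	return calloc(n, sizeof(unsigned int));
}

static int write_pack_sketch(size_t n)
{
	int error = 0;
	unsigned int *write_order = compute_write_order_sketch(n);

	if (write_order == NULL)
		return -1; /* nothing allocated yet: return early, no cleanup label */

	/* ... write objects in the computed order, setting `error` on failure ... */

	free(write_order);
	return error;
}
```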
When no payload is set for `crlf_apply` we try to compute the
crlf attributes ourselves with `crlf_check`. When the function
determines that the current file does not require any treatment,
we return the GIT_PASSTHROUGH error code, which indicates that the
file should not be passed through the filter, without actually
allocating the out-pointer.
The `crlf_apply` function explicitly checks for the
GIT_PASSTHROUGH return code and ignores it. This means we will
try to apply the crlf-filter to the current file, leading us to
dereference the unallocated payload-pointer.
Fix this obviously incorrect behavior by not treating
GIT_PASSTHROUGH in any special way. This is the correct thing to
do anyway, as the code indicates that the file should not be
passed through the filter.
We commonly have to check if a git_buf has been allocated
correctly or if we ran out of memory. Introduce a new macro
similar to `GITERR_CHECK_ALLOC` which checks if we ran OOM and if
so returns an error. Provide a `#nodef` for Coverity to mark the
error case as an abort path.
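The macro might look roughly like this; the exact name and expansion in libgit2 may differ, and `git_buf_oom` and `giterr_set_oom` are the existing internal helpers for detecting a failed buffer allocation and recording the OOM error:

```c
/* In the spirit of GITERR_CHECK_ALLOC: if the buffer ran out of memory,
 * record the OOM error and return an error code. A Coverity #nodef models
 * this path as an abort. */
#define GITERR_CHECK_ALLOC_BUF(buf) \
	do { \
		if ((buf) == NULL || git_buf_oom(buf)) { \
			giterr_set_oom(); \
			return -1; \
		} \
	} while (0)
```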
When checking for out of memory situations we usually use the
GITERR_CHECK_ALLOC macro. Besides conforming to our current code
base, it has the benefit of silencing errors in Coverity, as
Coverity treats the macro's error path as an abort.
Allow `git_index_read` to handle reading existing indexes with
illegal entries. Allow the low-level `git_index_add` to add
properly formed `git_index_entry`s even if they contain paths
that would be illegal for the current filesystem (eg, `AUX`).
Continue to disallow `git_index_add_bypath` from adding entries
that are universally illegal (eg, `.git`, `foo/../bar`).
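A hedged usage sketch of that distinction through the public API; `idx` is assumed to be an open index and `blob_id` an object already written to the object database:

```c
#include <string.h>
#include <git2.h>

static int add_entries_sketch(git_index *idx, const git_oid *blob_id)
{
	git_index_entry entry;

	memset(&entry, 0, sizeof(entry));
	entry.path = "AUX";              /* illegal on Windows filesystems */
	entry.mode = GIT_FILEMODE_BLOB;
	git_oid_cpy(&entry.id, blob_id);

	/* The low-level call accepts the well-formed entry anyway. */
	if (git_index_add(idx, &entry) < 0)
		return -1;

	/* The high-level call reads from the working directory and still
	 * rejects universally illegal paths such as ".git". */
	return git_index_add_bypath(idx, "src/file.c");
}
```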
Although a `tree_iterator` that failed to be properly created
does not have a frame, all other `tree_iterator`s should. Do not
call `pop` in the failure case, but assert that in all other
cases there is a frame.
When a Git repository is at a network location, git_iterator_for_tree
sometimes fails at iterator__update_ignore_case and then goes to
git_iterator_free. Without a check, the NULL pointer will crash the
process.
Signed-off-by: Colin Xu <colin.xu@gmail.com>
We should be checking whether the object we're looking up is a commit,
and we should let the caller know whether the not-found return code
comes from a bad object type or just a missing signature.
When performing an in-memory rebase, keep a single index for the
duration, so that callers have the expected index lifecycle and
do not hold on to an index that is free'd out from under them.
When we moved the logic to handle the first header field, the wrong
loop logic was kept in place, which meant we still finished early.
We now notice it because we no longer read past the last LF we find.
This was not noticed before, as the last field in the tested commit
was multi-line, which does not trigger the early break.
Introduce the ability to rebase in-memory or in a bare repository.
When `rebase_options.inmemory` is specified, the resultant `git_rebase`
session will not be persisted to disk. Callers may still analyze
the rebase operations, resolve any conflicts against the in-memory
index and create the commits. Neither `HEAD` nor the working
directory will be updated during this process.
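A hedged sketch of how a caller might drive such a rebase through the public API; `repo`, `branch`, `upstream`, `onto`, and `committer` are assumed to be prepared by the caller, and conflict handling is elided:

```c
#include <git2.h>

static int inmemory_rebase_sketch(git_repository *repo,
                                  const git_annotated_commit *branch,
                                  const git_annotated_commit *upstream,
                                  const git_annotated_commit *onto,
                                  const git_signature *committer)
{
	git_rebase_options opts = GIT_REBASE_OPTIONS_INIT;
	git_rebase *rebase = NULL;
	git_rebase_operation *operation;
	git_oid commit_id;
	int error;

	opts.inmemory = 1; /* nothing touches disk; HEAD and the workdir stay put */

	if ((error = git_rebase_init(&rebase, repo, branch, upstream, onto, &opts)) < 0)
		return error;

	while ((error = git_rebase_next(&operation, rebase)) == 0) {
		/* Conflicts, if any, would be resolved against the index returned
		 * by git_rebase_inmemory_index() before committing. */
		if ((error = git_rebase_commit(&commit_id, rebase, NULL, committer,
		                               NULL, NULL)) < 0)
			goto done;
	}

	if (error == GIT_ITEROVER)
		error = git_rebase_finish(rebase, committer);

done:
	git_rebase_free(rebase);
	return error;
}
```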
The function `git_packfile_stream_open` tries to free the passed
in stream when an error occurs. The only call site is
`git_indexer_append`, though, which passes in the address of a
stream struct which has not been allocated on the heap.
Fix the issue by simply removing the call to free. In case of an
error we did not allocate any memory yet, and otherwise it should
be the caller's responsibility to manage its object's lifetime.
We were searching only past the first header field, which meant we were
unable to find e.g. `tree` which is the first field.
While here, make sure to set an error message in case we cannot find the
field.
Previously we would set the global filter registry structure before
adding filters to the structure, without a lock, which is quite racy.
Now, register default filters during global registration and use an
rwlock to read and write the filter registry (as appropriate).
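An illustrative (non-libgit2) sketch of the locking discipline with a POSIX rwlock: registration takes the write lock, lookups take the read lock, so concurrent readers never see a partially built registry.

```c
#include <pthread.h>
#include <stddef.h>

static pthread_rwlock_t registry_lock = PTHREAD_RWLOCK_INITIALIZER;
static const char *filters[8];
static size_t nfilters;

static int register_filter_sketch(const char *name)
{
	int error = -1;

	pthread_rwlock_wrlock(&registry_lock);
	if (nfilters < sizeof(filters) / sizeof(filters[0])) {
		filters[nfilters++] = name;
		error = 0;
	}
	pthread_rwlock_unlock(&registry_lock);
	return error;
}

static const char *lookup_filter_sketch(size_t i)
{
	const char *name = NULL;

	pthread_rwlock_rdlock(&registry_lock);
	if (i < nfilters)
		name = filters[i];
	pthread_rwlock_unlock(&registry_lock);
	return name;
}
```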
Standard Windows type systems define CLSID_InternetSecurityManager
and IID_IInternetSecurityManager, but MinGW lacks these definitions.
As a result, we must hardcode these definitions ourselves. However,
we should not use a public struct with those names, lest another
library do the same thing and consumers be unable to link to both.
We don't support using an index object from multiple threads at the same
time, so the locking doesn't have any effect when following the
rules. If not following the rules, things are going to break down
anyway.
Include dotfiles when copying the template directory; this handles
both a template directory whose own name begins with a dot and any
dotfiles inside the directory.
Fix the possibility of returning successfully from ssh_stream_read()
with *bytes_read < 0. This would occur if stdout channel read resulted
in 0, and stderr channel read failed afterwards.
Note that we're not checking whether the resize succeeds; in OOM cases,
we let it run with a "small" vector and hash table and see if by chance
we can grow it dynamically as we insert the new entries. Nothing to
lose really.
Instead of calling `git_index_add` in a loop, use the new
`git_index_fill` internal API to fill the index with the initial staged
entries.
The new `fill` helper assumes that all the entries will be unique and
valid, so it can append them at the end of the entries vector and only
sort it once at the end. It performs no validation checks.
This prevents the quadratic behavior caused by having to sort the
entries list once after every insertion.
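A tiny standalone sketch of the append-then-sort-once idea (not the real `git_index_fill` implementation); `dst` is assumed to be large enough for `n` entries:

```c
#include <stdlib.h>
#include <string.h>

struct entry_sketch { const char *path; };

static int entry_cmp(const void *a, const void *b)
{
	const struct entry_sketch *ea = a, *eb = b;
	return strcmp(ea->path, eb->path);
}

static void fill_sketch(struct entry_sketch *dst,
                        const struct entry_sketch *src, size_t n)
{
	memcpy(dst, src, n * sizeof(*src));      /* append everything unsorted */
	qsort(dst, n, sizeof(*dst), entry_cmp);  /* sort once at the end */
}
```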
When replacing an index with a new one, we need to iterate
through all index entries in order to determine which entries are
equal. When it is not possible to re-use old entries for the new
index, we move them into a list of entries that are to be removed
and thus free'd.
When we encounter a non-zero error code, though, we skip adding
the current index entry to the remove-queue. `INSERT_MAP_EX`,
which is the function last run before adding to the remove-queue,
may return a positive non-zero code that indicates what exactly
happened while inserting the element. In this case we skip adding
the entry to the remove-queue but still continue the current
operation, leading to a leak of the current entry.
Fix this by checking for a negative return value instead of a
non-zero one when we want to add the current index entry to the
remove-queue.
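A condensed, hypothetical illustration of the difference between the two checks; the helper below is made up, but like `INSERT_MAP_EX` it returns a negative value on failure and a positive value as extra information:

```c
#include <stdlib.h>

/* Returns -1 on allocation failure, 1 when an existing element was
 * replaced, 0 otherwise. */
static int insert_entry_sketch(int **slot, int value)
{
	int existed = (*slot != NULL);

	if (!existed && (*slot = malloc(sizeof(int))) == NULL)
		return -1;

	**slot = value;
	return existed ? 1 : 0;
}

static int update_sketch(int **slot, int value)
{
	int error = insert_entry_sketch(slot, value);

	/* `if (error != 0)` would also bail out on the informational return of
	 * 1 and skip queueing the replaced entry for removal, leaking it; only
	 * negative values are real failures. */
	if (error < 0)
		return error;

	/* ... add the replaced entry to the remove-queue here ... */
	return 0;
}
```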
When adding to the index, we look to see if a portion of the given
path matches a portion of a path in the index. If so, we will use
the existing path information. For example, when adding `foo/bar.c`,
if there is an index entry to `FOO/other` and the filesystem is case
insensitive, then we will put `bar.c` into the existing tree instead
of creating a new one with a different case.
Use `strncmp` to do that instead of `memcmp`. When we `bsearch`
into the index, we locate the position where the new entry would
go. The index entry at that position does not necessarily have
a relation to the entry we're adding, so we cannot make assumptions
and use `memcmp`. Instead, compare them as strings.
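For illustration, the safe comparison is simply a bounded string compare (sketch only): `strncmp` stops at the terminating NUL of the shorter entry, whereas `memcmp` would keep reading past it.

```c
#include <string.h>

/* The entry found by the bsearch may be shorter than the prefix we want to
 * compare against; strncmp stops at its terminating NUL instead of reading
 * beyond the end of the string as memcmp would. */
static int same_path_prefix(const char *existing, const char *added, size_t len)
{
	return strncmp(existing, added, len) == 0;
}
```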
When canonicalizing paths, we look for the first index entry that
matches a given substring.
When duplicating a `struct git_tree_entry` with
`git_tree_entry_dup` the resulting structure is not allocated
inside a memory pool. As we do a 1:1 copy of the original struct,
though, we also copy the `pooled` field, which is set to `true`
for pooled entries. This results in a huge memory leak as we
never free tree entries that were duplicated from a pooled
tree entry.
Fix this by marking the newly duplicated entry as un-pooled.
When formatting a patch as email we do not include the commit's
message in the formatted patch output. Implement this and add a
test that verifies behavior.
It is already possible to get a commit's summary with the
`git_commit_summary` function. It is not possible to get the
remaining part of the commit message, that is the commit
message's body.
Fix this by introducing a new function `git_commit_body`.
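A short usage sketch of the pair of accessors; `commit` is assumed to be a commit already looked up by the caller:

```c
#include <stdio.h>
#include <git2.h>

/* Print the summary line and, when present, the remaining message body. */
static void print_message_parts(git_commit *commit)
{
	const char *summary = git_commit_summary(commit);
	const char *body = git_commit_body(commit);

	printf("summary: %s\n", summary ? summary : "");
	if (body != NULL)
		printf("body:\n%s\n", body);
}
```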
The `git_blame__entry` struct keeps track of line counts with
`int` fields. Since `int` is only guaranteed to be at least 16
bits we may overflow on certain platforms when line counts exceed
2^15.
Fix this by instead storing line counts in `size_t`.
It is not unreasonable to have versioned files with a line count
exceeding 2^16. Upon blaming such files we fail to correctly keep
track of the lines as `git_blame_hunk` stores them in `uint16_t`
fields.
Fix this by converting the line fields of `git_blame_hunk` to
`size_t`. Add test to verify behavior.
This reduces the size of the struct from 32 to 26 bytes, and leaves a
single padding byte at the end of the struct (which comes from the
zero-length array).
These are rather small allocations, so we end up spending a non-trivial
amount of time asking the OS for memory. Since these entries are tied to
the lifetime of their tree, we can give the tree a pool so we speed up
the allocations.
We've already looked at the filename with `memchr()` and then used
`strlen()` to allocate the entry. We already know how much we have to
advance to get to the object id, so add the filename length instead of
looking at each byte again.
When building a recursive merge base, allow conflicts to occur.
Use the file (with conflict markers) as the common ancestor.
The user has already seen and dealt with this conflict by virtue
of having a criss-cross merge. If they resolved this conflict
identically in both branches, then there will be no conflict in the
result. This is the best case scenario.
If they did not resolve the conflict identically in the two branches,
then we will generate a new conflict. If the user is simply using
standard conflict output then the results will be fairly sensible.
But if the user is using a mergetool or using diff3 output, then the
common ancestor will be a conflict file (itself with diff3 output,
haha!). This is quite terrible, but it matches git's behavior.
Use annotated commits to act as our virtual bases, instead of regular
commits, to avoid polluting the odb with virtual base commits and
trees. Instead, build an annotated commit with an index and pointers
to the commits that it was merged from.
When there are more than two common ancestors, continue merging the
virtual base with the additional common ancestors, effectively
octopus merging a new virtual base.
When examining the working directory and determining whether it's
up-to-date, only consider the nanoseconds in the index entry when
built with `GIT_USE_NSEC`. This prevents us from believing that
the working directory is always dirty when the index was originally
written with a git client that understands nsecs (like git 2.x).
Allow users to set the `git_libgit2_opts` search path for the
`GIT_CONFIG_LEVEL_PROGRAMDATA`. Convert `GIT_CONFIG_LEVEL_PROGRAMDATA`
to `GIT_SYSDIR_PROGRAMDATA` for setting the configuration.
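A usage sketch; the path shown is only an example, not a required location:

```c
#include <git2.h>

/* Point the ProgramData configuration level at a custom directory. */
static int set_programdata_path_sketch(void)
{
	return git_libgit2_opts(GIT_OPT_SET_SEARCH_PATH,
	                        GIT_CONFIG_LEVEL_PROGRAMDATA,
	                        "C:/ProgramData/Git");
}
```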
Ensure that `git_index_read_index` clears the uptodate bit on
files that it modifies.
Further, do not propagate the cache from an on-disk index into
another on-disk index. Although this should not be done, since
`git_index_read_index` is meant to bring an in-memory index into
another index (which may or may not be on-disk), ensure that we do
not accidentally bring in these bits when it is misused.
The uptodate bit should have a lifecycle of a single read->write
on the index. Once the index is written, the files within it should
be scanned for racy timestamps against the new index timestamp.
Keep track of entries that we believe are up-to-date, because we
added the index entries since the index was loaded. This prevents
us from unnecessarily examining files that we wrote during the
cleanup of racy entries (when we smudge racily clean files that have
a timestamp newer than or equal to the index's timestamp when we
read it). Without keeping track of this, we would examine every
file that we just checked out for raciness, since all their timestamps
would be newer than the index's timestamp.
When we insert a conflict in a case-insensitive index, accept the
new entry's path as the correct case instead of leaving the path we
already had.
This puts `git_index_conflict_add()` on the same level as
`git_index_add()` in this respect.
Reload the HEAD and index data for a submodule after reading the
configuration. The configuration may specify a `path`, so we must
update HEAD and index data with that path in mind.
When creating a filebuf, detect a directory that exists in our
target file location. This prevents a failure later, when we try
to move the lock file to the destination.
On platforms that lack `core.symlinks`, we should not go looking for
symbolic links and `p_readlink` their target. Instead, we should
examine the file's contents.
This reduces the chances of a crash in the thread tests. This shouldn't
affect general usage too much, since the main usage of these functions
is to read into an empty buffer.
Instead of relying on the size and timestamp, which can hide changes
performed in the same second, hash the file's contents when we care
about detecting changes.
Use calloc instead of malloc: a parse error leads to an immediate free of the committer and its properties, which can segfault on free if they are left undefined (test_refs_reflog_reflog__reading_a_reflog_with_invalid_format_returns_error segfaulted before the fix).
#3458
Inserting new REUC entries can quickly become pathological given that
each insert unsorts the REUC vector, and both subsequent lookups *and*
insertions will require sorting it again before being successful.
To avoid this, we're switching to `git_vector_insert_sorted`: this keeps
the REUC vector constantly sorted and lets us use the `on_dup` callback
to skip an extra binary search on each insertion.
Provide a new merge option, GIT_MERGE_TREE_FAIL_ON_CONFLICT, which
will stop on the first conflict and fail the merge operation with
GIT_EMERGECONFLICT.
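A hedged usage sketch; note that the field holding the flag was named `tree_flags` in libgit2 releases of that era and was later renamed `flags`, so adjust to the version in use:

```c
#include <git2.h>

/* Merge two trees but stop at the first conflict. */
static int merge_fail_fast_sketch(git_index **out, git_repository *repo,
                                  const git_tree *ancestor, const git_tree *ours,
                                  const git_tree *theirs)
{
	git_merge_options opts = GIT_MERGE_OPTIONS_INIT;
	int error;

	opts.tree_flags |= GIT_MERGE_TREE_FAIL_ON_CONFLICT;

	error = git_merge_trees(out, repo, ancestor, ours, theirs, &opts);
	if (error == GIT_EMERGECONFLICT) {
		/* the merge stopped at the first conflict */
	}
	return error;
}
```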
Although CMake will correctly configure include directories for us,
some people may use their own build system, and we should reference
`util.h` based on where it actually lives.
Although our index contains the literal time present in the index,
we do not read nanoseconds from disk, and thus we should not use
them in any comparisons, lest we always think our working directory
is dirty.
Guard this behind a `GIT_USE_NSEC` define for future improvement.
For most real use cases, repositories with alternates use them as main
object storage. Checking the alternate for objects before the main
repository should result in measurable speedups.
Because of this, we're changing the sorting algorithm to prioritize
alternates *in cases where two backends have the same priority*. This
means that the pack backend for the alternate will be checked before the
pack backend for the main repository *but* both of them will be checked
before any loose backends.
In the current implementation of ODB backends, each backend is tasked
with refreshing itself after a failed lookup. This is standard Git
behavior: we want to e.g. reload the packfiles on disk in case they have
changed and that's the reason we can't find the object we're looking
for.
This behavior, however, becomes pathological in repositories where
multiple alternates have been loaded. Given that each alternate counts
as a separate backend, a miss in the main repository (which can
potentially be very frequent in cases where object storage comes from
the alternate) will result in refreshing all its packfiles before we
move on to the alternate backend where the object will most likely be
found.
To fix this, the code in `odb.c` has been refactored as to perform the
refresh of all the backends externally, once we've verified that the
object is nowhere to be found.
If the refresh is successful, we then perform the lookup sequentially
through all the backends, skipping the ones that we know for sure
weren't refreshed (because they have no refresh API).
The on-disk pack backend has been adjusted accordingly: it no longer
performs refreshes internally.
We moved the "main" parsing to use 64 bits for the timestamp, but the
quick parsing for the revwalk did not. This means that for large
timestamps we fail to parse the time and thus fail the walk.
Move this parser to use 64 bits as well.
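A standalone sketch of the widened parse, with `strtoll` standing in for the internal `git__strtol64`:

```c
#include <stdint.h>
#include <stdlib.h>

/* Parse a commit timestamp into a 64-bit value so dates that do not fit in
 * 32 bits survive the quick revwalk parse. */
static int parse_commit_time_sketch(int64_t *out, const char *buf)
{
	char *end;
	long long time = strtoll(buf, &end, 10);

	if (end == buf)
		return -1;

	*out = (int64_t)time;
	return 0;
}
```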