There was a latent bug where files that use macro definitions
could be parsed before the macro definitions were loaded. Because
of attribute file caching, preloading files that are going to be
used doesn't add a significant amount of overhead, so let's always
preload any files that could contain macros before we assemble the
actual vector of files to scan for attributes.
When traversing the directory structure, the iterator pushes and
pops ignore files using a vector. Some directories don't have
ignore files, so it uses a path comparison to decide when it is
right to actually pop the last ignore file. This was only
comparing directory suffixes, though, so a subdirectory with the
same name as a parent could result in the parent's .gitignore
being popped off the ignores list too early. This changes the
logic to compare the entire relative path of the ignore file.
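Roughly, the corrected check looks like this (a standalone sketch,
not libgit2's actual internals):

    #include <string.h>

    /* Decide whether to pop the last pushed ignore file when leaving a
     * directory. A suffix comparison lets "a/sub" wrongly pop the file
     * pushed for "sub"; comparing the full relative paths does not. */
    static int should_pop_ignore(const char *ignore_relpath,
                                 const char *dir_relpath)
    {
        return strcmp(ignore_relpath, dir_relpath) == 0;
    }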
The ssh-specific credentials allow the username to be missing. The idea
is that the ssh transport will then use the username provided in the
url, if it's available. There are two main issues with this.
The credential callback already knows what username was provided by the
url and needs to figure out whether it wants to ask the user for it or
it can reuse it, so passing NULL as the username only leaves the
credential callback suspicious of its input.
The username provided in the url is not in fact used by the
transport. The only time it even considers it is for the user/pass
credential, which asserts the existence of a username in its
constructor. For the ssh-specific ones, it passes in the username stored
in the credential, which is NULL. The libssh2 macro we use runs strlen()
against this value (which is no different from what we would be doing
ourselves), so we then crash.
As the documentation doesn't suggest leaving out the username, assert
the need for a username in the code, which removes this buggy behavior
and removes implicit state.
git_cred_has_username() becomes a blacklist of credential types that do
not have a username. The only one at the moment is the 'default' one,
which is meant to call up some Microsoft magic.
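The shape of that blacklist, sketched (the struct field name
approximates libgit2's credential type):

    /* Every credential type carries a username except the 'default'
     * (Negotiate/Kerberos) one, so test for the exception rather than
     * whitelisting each username-bearing type. */
    int git_cred_has_username(git_cred *cred)
    {
        if (cred->credtype == GIT_CREDTYPE_DEFAULT)
            return 0;

        return 1;
    }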
Now that our strmap is no longer modified but replaced, we can use the
same strmap for the snapshot's values and it will be freed when we don't
need it anymore.
When we delete an entry, we also want to refresh the configuration to
catch any changes that happened externally.
This allows us to simplify the logic, as we no longer need to delete
these variables internally. The whole state will be refreshed and the
deleted entries won't be there.
With the isolation of complex reads, we can now try to refresh the
on-disk file before reading a value from it.
This changes the semantics a bit, as before we could be sure that a
string we got from the configuration was valid until we wrote or
refreshed. This is no longer the case, as a read can also invalidate the
pointer.
The current code sets the active map to a new one and builds it up
while it's active, which is a race condition with someone else trying
to access the same config.
Instead, let's build up our new map and swap the active and new one.
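A sketch of the build-then-swap pattern (types and helpers here are
hypothetical):

    static int config_refresh(config_backend *b)
    {
        git_strmap *new_map, *old_map;

        /* build the replacement map off to the side ... */
        if (build_values(&new_map, b) < 0)
            return -1;

        /* ... then publish it with a single pointer swap, so concurrent
         * readers never observe a half-built map */
        old_map = b->values;
        b->values = new_map;
        free_values(old_map);

        return 0;
    }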
In order to have consistent views of the config files for remotes,
submodules et al. and a configuration that represents what is currently
stored on-disk, we need a way to provide a view of the configuration
that does not change.
The goal here is to provide the snapshotting part by creating a
read-only copy of the state of the configuration at a particular point
in time, which does not change when a repository's main config changes.
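In terms of usage, the snapshot works like this (git_config_snapshot
being the entry point this work builds toward):

    #include <git2.h>
    #include <stdio.h>

    static void read_stable_value(git_config *cfg)
    {
        git_config *snap = NULL;
        const char *name;

        /* reads from the live config may refresh it and invalidate
         * pointers; a snapshot gives a consistent, read-only view */
        if (git_config_snapshot(&snap, cfg) == 0) {
            if (git_config_get_string(&name, snap, "user.name") == 0)
                printf("%s\n", name); /* valid until the snapshot is freed */
            git_config_free(snap);
        }
    }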
The check to see if files were out of date in the attribute cache
was wrong because the cache-breaker data wasn't getting stored
correctly. Additionally, when the cache-breaker triggered, the
old file data was being leaked.
I don't love this approach, but achieving thread-safety for
attribute and ignore data while reloading files would require a
larger rewrite in order to avoid this. If an attribute or ignore
file is out of date, this holds a lock on the file while we are
reloading the data so that another thread won't try to reload the
data at the same time.
In the threading tests, I was still seeing a race condition where
the same item could end up being inserted multiple times into the
index. Preserving the sorted-ness of the index outside of the
`index_insert` call fixes the issue.
This is a big refactoring of the attribute file cache to be a bit
simpler which in turn makes it easier to enforce a lock around any
updates to the cache so that it can be used in a threaded environment.
Tons of changes to the attributes and ignores code.
While I was looking at the conflict cleanup code, I looked over at
the tree cache code, since we clear the tree cache for each entry
that gets removed and there is some redundancy there. I made some
small tweaks to avoid extra calls to strchr and strlen in a few
circumstances.
I introduced a leak into conflict cleanup by removing items from
inside the git_vector_remove_matching call. This simplifies the
code to just use one common way for the two conflict cleanup APIs.
When an index has an active snapshot, removing an item can cause
an error (inserting into the deferred deletion vector), so I made
the git_index_conflict_cleanup API return an error code. I felt
like this wasn't so bad since it is just like the other APIs.
I fixed up a couple of comments while I was changing the header.
The iterator pushes and pops ignores incrementally onto a list as
it traverses the directory structure so that it doesn't have to
constantly recheck which ignore files apply. With the new ref
counting, it wasn't decrementing the refcount on the ignores that
it removed from the vector.
This makes the lock management on the index a little bit broader,
having a number of routines hold the lock across looking up the
item to be modified and actually making the modification. Still
not true thread safety, but more pure index modifications are now
safe, which makes the simple cases (such as starting up a diff
while index modifications are underway) safe enough to get the
snapshot without hitting allocation problems.
As part of this, I simplified the allocation of index entries to
use a flex array and just put the path at the end of the index
entry. This makes every entry self-contained and makes it a
little easier to feel sure that pointers to strings aren't
being accidentally copied and freed while other references are
still being held.
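The layout is roughly this (illustrative, not the exact libgit2
struct):

    #include <git2.h>
    #include <stddef.h>

    /* The path lives in the same allocation as the entry, so a single
     * free releases both and no separately-owned string can dangle. */
    typedef struct {
        git_index_entry entry;
        size_t pathlen;
        char path[];          /* C99 flexible array member */
    } index_entry_internal;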
This adds a basic test of doing simultaneous diffs on multiple
threads and adds basic locking for the attr file cache because
that was the immediate problem that arose from these tests.
This makes the index iterator honor the GIT_ITERATOR_IGNORE_CASE
and GIT_ITERATOR_DONT_IGNORE_CASE flags without modifying the
index data itself. To take advantage of this, I had to export a
number of the internal index entry comparison functions. I also
wrote some new tests to exercise the capability.
The usefulness of these helpers came up for me while debugging
some of the iterator changes that I was making, so since they
have also been requested (albeit indirectly) I thought I'd include
them.
Again, laying groundwork for some index iterator changes, this
contains a bunch of code refactorings for index internals that
should make it easier down the line to add locking around index
modifications. Also this removes the redundant prefix_position
function and fixes some potential memory leaks.
Ignore rules with slashes in them are matched using FNM_PATHNAME
and use the path to the .gitignore file from the root of the
repository along with the path fragment (including slashes) in
the ignore file itself. Unfortunately, the relative path to the
.gitignore file was being applied to the global core.excludesfile
if that was also named ".gitignore".
This fixes that with more precise matching and includes tests for
ignore rules with leading slashes (which were the primary example
of this being broken in the real world).
This also backports an improvement to the file context logic from
the threadsafe-iterators branch where we don't rely on mutating
the key of the attribute file name to generate the context path.
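For reference, the FNM_PATHNAME semantics at play here:

    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* with FNM_PATHNAME, '*' will not match across '/', so slashed
         * rules must be matched against the path relative to the
         * directory that holds the .gitignore, not the basename */
        printf("%d\n", fnmatch("src/*.c", "src/main.c", FNM_PATHNAME));     /* 0: match */
        printf("%d\n", fnmatch("src/*.c", "src/sub/main.c", FNM_PATHNAME)); /* FNM_NOMATCH */
        return 0;
    }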
There were a couple bugs in popping ignore files during iteration
that could result in incorrect decisions being made and thus ignore
files below the root either not being loaded correctly or not
being popped at the right time.
One bug was an off-by-one in comparing the path of the gitignore
file with the path being exited during iteration.
The second bug was not correctly truncating the path being tracked
during traversal if there were no ignores on the list (i.e. when
you have no .gitignore at the root, but do have some in contained
directories).
This updates how libgit2 treats submodule-like directories that
actually have tracked content inside of them. This is a strange
corner case, but it seems that many people have abortive submodule
setups and then just go ahead and add the files into the parent
repository. In this case, we should just treat the
submodule as if it was a normal directory.
Libgit2 will still try to skip over real submodules and contained
repositories that do not have tracked files inside them, but this
adds some new handling for cases where the apparent submodule
data is in conflict with the actual list of tracked files.
git_merge_base() returns GIT_ENOTFOUND when it cannot find a merge
base. graph_descendant_of() returns a boolean value (barring any
errors), so it needs to catch the GIT_ENOTFOUND return value and
convert it into false, as no merge base means it cannot be a
descendant.
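The conversion amounts to this (a sketch; error handling
abbreviated):

    static int descendant_of(git_repository *repo,
                             const git_oid *commit, const git_oid *ancestor)
    {
        git_oid base;
        int error = git_merge_base(&base, repo, commit, ancestor);

        if (error == GIT_ENOTFOUND)
            return 0;          /* no merge base: cannot be a descendant */
        if (error < 0)
            return error;      /* propagate real errors */

        return git_oid_equal(&base, ancestor);
    }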
The internal buffer in the `git_path_iconv_t` structure was not
being reset before the calls to `iconv` were made to convert data,
so if there were multiple decomposed Unicode paths in a single
directory, paths after the first one were being appended to the
first instead of treated as independent data.
The base for the relative urls is determined as follows, with descending
priority:
- remote url of HEAD's remote tracking branch
- remote "origin"
- workdir
This follows git.git behaviour.
When the current branch is unborn, git will still mark the current
branch's upstream for-merge if there is an upstream configuration. The
only non-contrived case is cloning from an empty repository which then
gains history. origin's master should be marked for-merge.
In order to do this, we cannot use the high-level wrappers that expect a
reference, as we may not have one. Move over to the internal ones that
expect a reference name, which we do have.
When doing a diff for use in status, we should never show the
content of a git repository contained inside another one. The
logic to do this was looking for a .git directory and so when a
gitlink plain .git file was used, it was failing to exclude the
directory content.
There was a little bug where the submodule cache thought that the
index data was out of date even when it wasn't, which resulted in
some extra scans of index data even when not needed.
Mostly this commit adds a bunch of new tests including adding and
removing submodules in the index and in the HEAD and seeing if we
can automatically pick them up when refreshing.
Wrote tests that try adding, removing, and updating the name of
submodules which showed a number of problems with how we account
for changes when incrementally updating the submodule info. Most
of these issues didn't exist before because reloading would always
blow away the old submodule data.
This improves the management of the lock around submodule cache
updates slightly, using the lock to make sure that foreach can
safely make a snapshot of all existing submodules and making sure
that git_submodule_add_setup also grabs a lock before inserting
the new submodule. Cache initialization / refresh should already
have been holding the lock correctly as it adds submodules.
When forcing cache flushes or reloads, etc., it is easier to keep
track of intent using enums instead of plain bools. Also, this
fixes a bug where the cache was not being properly refreshed by
a git_submodule_reload_all.
This makes submodule cache refresh actually look at the timestamps
from the data sources for submodules and reload as needed if they
have changed since the last refresh.
This takes the old submodule cache which was just a git_strmap
and makes a real git_submodule_cache object that can contain other
things like a lock and timestamp-ish data to control refreshing of
submodule info.
This fixes `git_submodule_sync` to correctly update the remote URL
of the default branch of the submodule along with the URL in the
parent repository config (i.e. match core Git's behavior).
Also move some useful helper logic from the submodule code into
a shared config API `git_config__update_entry` that can either set
or delete an entry with constraints like not overwriting or not
creating a new entry. I used that helper to update a couple other
places in the code.
There are a few places where we need to join three strings to
assemble a path. This adds a simple join3 function to avoid the
comparatively expensive join_n (which calls strlen on each string
twice).
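A standalone sketch of the idea (libgit2's actual helper builds into
a git_buf):

    #include <string.h>

    /* Join three path segments with '/' separators, measuring each input
     * exactly once; a generic join_n measures every string twice. */
    static int join3(char *out, size_t outlen,
                     const char *a, const char *b, const char *c)
    {
        size_t la = strlen(a), lb = strlen(b), lc = strlen(c);

        if (la + lb + lc + 3 > outlen)   /* two separators plus NUL */
            return -1;

        memcpy(out, a, la);
        out[la] = '/';
        memcpy(out + la + 1, b, lb);
        out[la + 1 + lb] = '/';
        memcpy(out + la + 2 + lb, c, lc);
        out[la + 2 + lb + lc] = '\0';
        return 0;
    }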
The order in this function is the opposite of what
create_with_fetchspec() has, so change this one, as url-then-refspec is
what git does.
As we need to break compilation and the swap doesn't do that, let's take
this opportunity to rename in-memory remotes to anonymous as that's
really what sets them apart.
With the changes to how git_path_dirload_with_stat handles things
that look like submodules, submodules could end up sorted in the
wrong order with the workdir iterator. This moves the submodule
check earlier in the iterator processing of a new directory so
that the submodule name updates will happen immediately and the
sort order will be correct.
While looping over multiple heads, an up-to-date head will clobber
the `remote->need_pack` setting, preventing the rest of the machinery
from building and downloading a pack-file, breaking fetches.
If the first call to release a no-longer-existent submodule freed
the object, the check for whether a second release was needed would
dereference the data that was just freed.
When a file is open for reading (without shared-delete permission), and
then a different thread/process called p_rename, that would fail, even
if the file was only open for reading for a few milliseconds. This
change lets p_rename wait up to 50ms for the file to be closed by the
reader. Applies only to win32.
This is especially important for git_filebuf_commit, because writes
should not fail if the file is read simultaneously.
Fixes #2207
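The retry loop is roughly this (a sketch; the real code uses
wide-character paths and tolerates a couple more error codes):

    #include <windows.h>
    #include <errno.h>

    /* Retry the rename for up to ~50ms while another reader still has
     * the file open without FILE_SHARE_DELETE. */
    static int rename_with_retry(const char *from, const char *to)
    {
        int tries;

        for (tries = 0; tries < 10; tries++) {   /* 10 * 5ms = 50ms */
            if (MoveFileExA(from, to, MOVEFILE_REPLACE_EXISTING))
                return 0;
            if (GetLastError() != ERROR_SHARING_VIOLATION)
                break;
            Sleep(5);
        }

        errno = EACCES;
        return -1;
    }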
When a submodule was inserted with a different path and name, a
greater-than-zero return value from khash was allowed to propagate
back out to the caller when it should really have been zeroed. This led
to a possible crash when reloading submodules if that was the
first time that submodule data was loaded.
The reload_all call could end up dereferencing a NULL pointer if
there was an error while attempting to load the submodules config
data (i.e. invalid content in the gitmodules file). This fixes it.
This cleans up some places I missed that could hold onto submodule
references and cleans up the way in which the repository cache is
both reloaded and released so that existing submodule references
aren't destroyed inappropriately.
When a directory containing a .git directory (or even just a plain
gitlink) was found, libgit2 was going out of its way to treat it
specially. This seemed like it was necessary because the diff
code was not originally emulating Git's behavior for untracked
directories correctly (i.e. scanning for ignored vs untracked items
inside). Now that libgit2 diff mimics Git's untracked directory
behavior, the special handling for contained Git repos is actually
incorrect and this commit rips it out.
`git_submodule` objects were already refcounted internally in case
the submodule name was different from the path at which it was
stored. This makes that refcounting externally used as well, so
`git_submodule_lookup` and `git_submodule_add_setup` return an
object that requires a `git_submodule_free` when done.
As a way to speed up the cases where we need to hide some commits, we
find out what the merge bases are so we know to stop marking commits as
uninteresting and avoid walking down a potentially very large number of
commits which we will never see. There are however two oversights in
current code.
The merge-base finding algorithm fails to recognize that if it is only
given one commit, there can be no merge base. It instead walks down the
whole ancestor chain needlessly. Make it return an empty list
immediately in this situation.
The revwalk does not know whether the user has asked to hide any commits
at all. In situations where the user pushes multiple commits but doesn't
hide any, the above fix wouldn't do the trick. Keep track of whether the
user wants to hide any commits and only run the merge-base finding
algorithm when it's needed.
The reflog append function was overzealous in its checking. When passed
both an old and a new id, it should not do any checking, but just serialize
the data to a reflog entry.
The existing tests lack checks for zeroed ids when switching back from
an unborn branch, as well as for what happens when detaching.
The reflog appending function mistakenly wrote zeros when dealing with a
detached HEAD. This explicitly checks for those situations and fixes
them.
When we update the current branch, we must also append to HEAD's reflog
to keep them in sync.
This is a bit of a hack, but as git.git says, it covers 100% of
default cases.
If the pqueue comparison fn returned just 0 or 1 (think "a<b")
then the sort order of returned items could be wrong because there
was a "< 0" that really needed to be "<= 0". Yikes!!!
The git_odb_exists_prefix API was not dealing correctly with the
case where a later backend returned GIT_ENOTFOUND even though an
earlier backend had found the object.
Additionally, the unit tests were not properly exercising the API
and had a couple mistakes in checking the results.
Lastly, since the backends are not expected to behave correctly
unless all bytes of the short id are zero except for the prefix,
this makes the ODB prefix APIs explicitly clear out the extra
bytes so the user doesn't have to be as careful.
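The corrected lookup loop, sketched (backend_count/backend_lookup
stand in for the ODB internals):

    static int exists_prefix(git_oid *out, git_odb *db,
                             const git_oid *short_id, size_t len)
    {
        size_t i;
        int error, found = 0;

        for (i = 0; i < backend_count(db); i++) {
            error = backend_lookup(out, db, i, short_id, len);
            if (error == GIT_ENOTFOUND)
                continue;      /* keep going; don't clobber 'found' */
            if (error < 0)
                return error;
            found = 1;
        }

        return found ? 0 : GIT_ENOTFOUND;
    }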
We look up a reference in order to figure out if it's the current
branch, which we need to free once we're done with the check.
As a bonus, only perform the check when we're passed the force flag, as
it's a useless check otherwise.
We can make use of git_object_dup to use refcounting instead of pointer
comparison to make sure we don't free the caller's object.
This also lets us simplify the case for '~0' which is now just an
assignment instead of looking up the object we have at hand.
Previously the hunk_byfinalline_search_cmp function was called with different
data types (size_t and uint32_t) for the key argument but expected only the
former, resulting in an invalid memory access when passed the latter on a
64-bit machine.
The following patch makes sure that the function is called and works with the
same type (size_t).
If a directory disappears between the time we look up the entries of its
parent and the time when we go to look at it, we should ignore the error
and move forward.
This fixes #2046.
This finds a short id string that will unambiguously select the
given object, starting with the core.abbrev length (usually 7)
and growing until it is no longer ambiguous.
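Usage looks like this (git_object_short_id is the new API):

    #include <git2.h>
    #include <stdio.h>

    static void print_short_id(const git_object *obj)
    {
        git_buf buf = GIT_BUF_INIT;

        /* starts at core.abbrev length (usually 7), grows until unique */
        if (git_object_short_id(&buf, obj) == 0) {
            printf("%s\n", buf.ptr);
            git_buf_free(&buf);
        }
    }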
A few fixes have accumulated in this area which have made the freeing of
data a bit muddy. Make sure to free the data only when needed and once.
When we are going to write a delta to the packfile, we need to free the
data, otherwise leave it. The current version of the code mixes up the
checks for po->data and po->delta_data.
This adds `git_diff_buffers` and `git_patch_from_buffers`. This
also includes a bunch of internal refactoring to increase the
shared code between these functions and the blob-to-blob and
blob-to-buffer APIs, as well as some higher level assert helpers
in the tests to also remove redundancy.
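A usage sketch for the patch variant (check diff.h for the exact
parameter list in your libgit2 version):

    #include <git2.h>
    #include <string.h>

    static void diff_two_buffers(void)
    {
        git_patch *patch = NULL;
        const char *a = "hello\n", *b = "hello world\n";

        if (git_patch_from_buffers(&patch, a, strlen(a), "file.txt",
                                   b, strlen(b), "file.txt", NULL) == 0) {
            /* inspect hunks and lines through the git_patch accessors */
            git_patch_free(patch);
        }
    }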
- added MSVC cmake definitions to disable warnings
- general.c is rewritten so it is ANSI C compatible and compiles cleanly on Microsoft Windows
- fixed some MSVC-reported warnings
* Make GIT_INLINE an internal definition so it cannot be used in
public headers
* Fix language in CONTRIBUTING
* Make index caps API use signed instead of unsigned values
On some systems, notably HP PA-RISC systems running Linux or HP-UX,
EWOULDBLOCK and EAGAIN are not the same value. POSIX (and these OSes) allow
EWOULDBLOCK to occur on write(2) (and send(2), etc.), so check explicitly
for this case as well as EAGAIN by defining and using a macro GIT_ISBLOCKED
that considers both.
The macro is necessary because MSYS does not provide EWOULDBLOCK and
compilation fails if an attempt is made to use it unconditionally. On most
systems, where the two values are the same, the compiler will simply
optimize this check out and it will have no effect.
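Sketched from the description above, the macro looks like:

    /* On most systems EWOULDBLOCK == EAGAIN and the extra comparison
     * optimizes away; MSYS lacks EWOULDBLOCK entirely, hence the guard. */
    #if defined(EWOULDBLOCK)
    # define GIT_ISBLOCKED(e) ((e) == EAGAIN || (e) == EWOULDBLOCK)
    #else
    # define GIT_ISBLOCKED(e) ((e) == EAGAIN)
    #endif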
This adds an API to amend an existing commit, basically a shorthand
for creating a new commit filling in missing parameters from the
values of an existing commit. As part of this, I also added a new
"sys" API to create a commit using a callback to get the parents.
This allowed me to rewrite all the other commit creation APIs so
that temporary allocations are no longer needed.
This fixes a number of warnings with the Windows 64-bit build
including a test failure in test_repo_message__message where an
invalid pointer to a git_buf was being used.
We need this from util.h and posix.h, but the latter includes common.h
which includes util.h, which means p_strlen is not defined by the time
we get to git__strndup().
Split the definition of p_strlen() off into its own header so we can
use it in util.h.
The current code issues a lot of strncmp() calls in order to check for
the end of the header, simply in order to copy it and start going
through it again. These are a lot of calls for something we can check as
we go along. Knowing the number of parents beforehand to reduce
allocations in extreme cases does not make up for them.
Instead start parsing immediately and check for the double-newline after
each header field, leaving the raw_header allocation for the end, which
lets us go through the header once and reduces the number of strncmp()
calls significantly.
In unscientific testing, this has reduced a shortlog-like usage (walking
through the whole history of a branch and extracting data from the
commits) of git.git from ~830ms to ~700ms and makes the time we spend in
strncmp() negligible.
Let the user push committish objects and peel them to figure out which
commit to push to our queue.
This is for convenience and for allowing uses of
git_revwalk_push_glob(w, "tags")
with annotated tags.
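The peeling step amounts to this (git_object_peel being the public
helper):

    /* An annotated tag pushed via the glob peels down to its commit. */
    static int push_committish(git_revwalk *walk, const git_object *obj)
    {
        git_object *commit = NULL;
        int error = git_object_peel(&commit, obj, GIT_OBJ_COMMIT);

        if (error == 0) {
            error = git_revwalk_push(walk, git_object_id(commit));
            git_object_free(commit);
        }
        return error;
    }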
Change the name to _matching() instead of _if(), and force _set_target()
to be a conditional update. If the user doesn't care about the old
value, they should use git_reference_create().
This tweaks the pqueue_up and pqueue_down routines so that they
will not do full element swaps but instead carry over the state
of the previous loop iteration and only assign elements for which
we know the final position. This will avoid a little bit of data
assignment which should improve performance in theory.
Also got rid of some vector helpers that I'm no longer using.
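The sift-up now looks roughly like this (an array-of-pointers heap;
names illustrative):

    /* Hold the moving element aside and shift parents down: one final
     * assignment per insertion instead of a full swap per level. */
    static void sift_up(void **heap, size_t el,
                        int (*cmp)(const void *, const void *))
    {
        void *moving = heap[el];

        while (el > 0) {
            size_t parent = (el - 1) / 2;
            if (cmp(heap[parent], moving) <= 0)
                break;
            heap[el] = heap[parent];   /* parent shifts down */
            el = parent;
        }
        heap[el] = moving;
    }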
This fixes a typo I made for setting the sorted flag on the index
after a reload. That typo didn't actually cause any test failures
so I'm also adding a test that explicitly checks that the index is
correctly sorted after a reload when ignoring case and when not.
This updates the git_pqueue to simply be a set of specialized
init/insert/pop functions on a git_vector.
To preserve the pqueue feature of having a fixed size heap, I
converted the "sorted" field in git_vectors to a more general
"flags" field so that pqueue could mix in it's own flag. This
had a bunch of ramifications because a number of places were
directly looking at the vector "sorted" field - I added a couple
new git_vector helpers (is_sorted, set_sorted) so the specific
representation of this information could be abstracted.
I accidentally wrote a separate priority queue implementation when
I was working on file rename detection as part of the file hash
signature calculation code. To simplify licensing terms, I just
adapted that to a general purpose priority queue and replace the
old priority queue implementation that was borrowed from elsewhere.
This also removes parts of the COPYING document that no longer
apply to libgit2.
Validating the workdir should not compare HEAD to working
directory - this is both inefficient (as it ignores the cache)
and incorrect. If we had legitimately allowed changes in the
index (identical to the merge result) then comparing HEAD to
workdir would reject these changes as different. Further, this
will identify files that were filtered strangely as modified,
while testing with the cache would prevent this.
Also, it's stupid slow.
The checkout code used to defer removal of "blocking" files in
checkouts until the blocked item was actually being written (since
we have already checked that removing the block is acceptable
according to the update rules). Unfortunately, this resulted in
an intermediate index state where both the blocking and new items
were in the index which is no longer allowed. Now we just remove
the blocking item in the first pass so it never needs to coexist.
In cases where there are typechanges, this could result in a bit
more churn of removing and recreating intermediate directories,
but I'm going to assume that is an unusual case and the churn will
not be too costly.
There were some confusing issues mixing up the number of bytes
written to the zstream output buffer with the number of bytes
consumed from the zstream input. This reorganizes the zstream
API and makes it easier to deflate an arbitrarily large input
while still using a fixed size output.
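The enabled pattern, sketched with raw zlib (consume() is a
hypothetical sink; deflateInit() is assumed already done):

    #include <zlib.h>

    /* Deflate a large input through a fixed 4KB output buffer, keeping
     * bytes-written (output) distinct from bytes-consumed (input). */
    static int deflate_all(z_stream *z, const void *data, size_t len)
    {
        unsigned char out[4096];

        z->next_in = (Bytef *)data;
        z->avail_in = (uInt)len;

        do {
            z->next_out = out;
            z->avail_out = sizeof(out);
            if (deflate(z, Z_FINISH) == Z_STREAM_ERROR)
                return -1;
            consume(out, sizeof(out) - z->avail_out); /* hypothetical sink */
        } while (z->avail_out == 0);

        return 0;
    }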
This removes the fetchRecurse compiler warnings and makes the
behavior match the other submodule options (i.e. the in-memory
setting can be reset to the on-disk value).
When three-way merging indexes, we previously changed each path
as we read them, which would lead to us adding an index entry for
'foo', then removing an index entry for 'foo/file'. With the new
index requirements, this is not allowed. Removing entries in the
merged index, then adding them, resolves this. In the previous
example, we now remove 'foo/file' before adding 'foo'.
In case-insensitive index mode, we would stop at a prefixed entry,
treating the provided search key length as a substring length rather
than the length of the string to match.
Writing a sample JavaScript driver pointed out some extra
whitespace handling that needed to be done in the diff driver.
This adds some tests with some sample JavaScript code that I
pulled off of GitHub just to see what would happen. Also, to
clean up the userdiff test data, I did a "git gc" and packed
up the test objects.