There may be multiple deltas referencing the same base as well as OFS
deltas which rely on a thin delta. Deal with both at the same time by
injecting a single object and going back up to the main
delta-resolving loop.
When a tool needs to recreate the tree object (for example an
interface to another VCS), it needs to use the raw attributes,
forgoing any normalization.
When a repository is transferred from one file system to another,
many of the config settings that represent the properties of the
file system may be wrong. This adds a new public API that will
refresh the config settings of the repository to account for the
change of file system. This doesn't do a full "reinitialize" and
operates on an existing git_repository object, refreshing the
config settings in place.
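A rough sketch of how a caller might use such an API; the function
name and the second argument here are assumptions about the new
API's shape, not a confirmed signature:

    #include <git2.h>
    #include <git2/sys/repository.h>

    /* Sketch only: assumes the refresh call looks roughly like this. */
    static int refresh_fs_config(git_repository *repo)
    {
        /* Re-detect core.filemode, core.ignorecase, symlink support,
         * etc. for the filesystem the repo now lives on; the second
         * argument (recurse into submodules) is an assumption. */
        return git_repository_reinit_filesystem(repo, 0);
    }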
This commit then makes use of the new API in clar as each test
repository is set up.
This commit also has a number of other clar test fixes where we
were making assumptions about the type of filesystem, either based
on outdated config data or based on the OS instead of the FS.
When given an ODB from which to read objects, the indexer will attempt
to inject the missing bases at the end of the pack and update the
header and trailer to reflect the new contents.
Though unusual, a packfile may contain a delta whose base is a delta
that comes later. In order to index such a packfile, we must not give up
on the first failure to resolve a delta, but keep it around.
If there is a pass which makes no progress, this indicates that the
packfile is broken, so fail accordingly.
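The resolution strategy amounts to a fixed-point loop. Here it is as
a self-contained toy model (not the actual indexer code): an entry
resolves once its base has resolved, and a full pass that makes no
progress means the pack is broken.

    #include <stddef.h>

    struct item { int base; int resolved; }; /* base < 0: no base needed */

    /* Toy model of the multi-pass resolution; base is an index into items. */
    static int resolve_all(struct item *items, size_t n)
    {
        size_t remaining = n;

        while (remaining > 0) {
            size_t i, progressed = 0;

            for (i = 0; i < n; i++) {
                if (items[i].resolved)
                    continue;
                if (items[i].base < 0 || items[items[i].base].resolved) {
                    items[i].resolved = 1;
                    progressed++;
                    remaining--;
                }
            }

            if (progressed == 0)
                return -1; /* a full pass with no progress: broken pack */
        }

        return 0;
    }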
The repo init code was assuming Windows == no filemode, and
Mac or Windows == no case sensitivity. Those assumptions are not
consistently true depending on the mounted file system. This is a
first step to removing those assumptions. It focuses on the repo
init code and the tests of that code. There are still many other
tests that are broken when those assumptions don't hold true, but
this clears up one area of the code.
Also, this moves the core.precomposeunicode logic to be closer to
the current logic in core Git where it will be set to true on any
filesystem where composed unicode is decomposed when read back.
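For example, instead of keying off the OS, a capability like case
sensitivity can be probed on the actual filesystem. This is only an
illustrative POSIX sketch with a made-up probe file name, not the
code in this change:

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative probe: create a file, then look it up in a different
     * case; if the lookup succeeds, the filesystem folds case. */
    static int fs_is_case_sensitive(const char *dir)
    {
        char upper[1024], lower[1024];
        FILE *f;
        int sensitive;

        snprintf(upper, sizeof(upper), "%s/CaSeProbe.tmp", dir);
        snprintf(lower, sizeof(lower), "%s/caseprobe.tmp", dir);

        if ((f = fopen(upper, "w")) == NULL)
            return -1;
        fclose(f);

        sensitive = (access(lower, F_OK) != 0);
        unlink(upper);
        return sensitive;
    }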
The indexer code was generating warnings on Windows 64-bit. I
looked closely at the logic and was able to simplify it a bit.
Also this fixes some other Windows and Linux warnings.
This adds a simple wrapper around the iconv APIs and uses it
instead of the old code that was inlining the iconv stuff. This
makes it possible for me to test the iconv logic in isolation.
A "no iconv" version of the API was defined with macros so that
I could have fewer ifdefs in the code itself.
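The shape of that wrapper, sketched with illustrative names (the real
internal API differs):

    #ifdef USE_ICONV
    # include <iconv.h>

    typedef struct { iconv_t cd; } fs_iconv_t;

    static int fs_iconv_init(fs_iconv_t *ic)
    {
        /* decomposed (UTF-8-MAC) -> precomposed UTF-8 */
        ic->cd = iconv_open("UTF-8", "UTF-8-MAC");
        return (ic->cd == (iconv_t)-1) ? -1 : 0;
    }

    static void fs_iconv_clear(fs_iconv_t *ic)
    {
        if (ic->cd != (iconv_t)-1)
            iconv_close(ic->cd);
    }
    #else
    /* "No iconv" build: the wrapper collapses to no-ops so that callers
     * don't need #ifdefs of their own. */
    typedef struct { int unused; } fs_iconv_t;
    # define fs_iconv_init(ic)  0
    # define fs_iconv_clear(ic) ((void)0)
    #endif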
This simplifies git_path_is_empty_dir on both Windows (getting rid
of git_buf allocation inside the function) and other platforms (by
just using git_path_direach), adds tests for the function, and
uses it to simplify some existing tests.
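The core of the check, as a self-contained POSIX sketch rather than
the git_path_direach-based code: stop at the first real entry, with
no scratch buffer needed.

    #include <dirent.h>
    #include <string.h>

    static int path_is_empty_dir(const char *path)
    {
        DIR *dir = opendir(path);
        struct dirent *de;
        int empty = 1;

        if (dir == NULL)
            return 0; /* not a readable directory, so not an empty dir */

        while ((de = readdir(dir)) != NULL) {
            if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
                continue;
            empty = 0; /* found a real entry; no need to keep reading */
            break;
        }

        closedir(dir);
        return empty;
    }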
This hooks up git_path_direach and git_path_dirload so that they
will take a flag indicating if directory entry names should be
tested and converted from decomposed unicode to precomposed form.
This code will only come into play on the Apple platform and even
then, only when certain types of filesystems are used.
This involved adding a flag to these functions, which meant
changing a lot of places in the code.
This was an opportunity to do a bit of code cleanup here and there,
for example, getting rid of the git_futils_cleanupdir_r function in
favor of a simple flag to git_futils_rmdir_r to not remove the top
level entry. That ended up adding depth tracking during rmdir_r
which led to a safety check for infinite directory recursion. Yay.
This hasn't actually been tested on the Mac filesystems where the
issue occurs. I still need to get a test environment for that.
This doesn't actually do string precomposition yet, but it puts the hooks in
place into the iterators and the git_path_dirload function so that
the actual precompose work is ready to go.
This adds initialization of core.precomposeunicode to repo init
on Mac. This is necessary because when a Mac accesses a repo on
a VFAT or SAMBA file system, it will return directory entries in
decomposed unicode even if the filesystem entry is precomposed.
This also removes caching of a number of repo properties from the
repo init pipeline because these are properties of the specific
filesystem on which the repo is created, not of the system as a
whole.
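Recording that detection result goes through the regular config API;
roughly like this (the detection step itself is elided here):

    #include <git2.h>

    static int record_precompose(git_repository *repo, int decomposes)
    {
        git_config *cfg = NULL;
        int error = git_repository_config(&cfg, repo);

        if (error == 0)
            error = git_config_set_bool(
                cfg, "core.precomposeunicode", decomposes);

        git_config_free(cfg);
        return error;
    }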
This commit adds cancellation for the push operation. This work consists of:
1) Support cancellation during push operation
   - During object counting phase
   - During network transfer phase
   - Propagate GIT_EUSER error code out to caller
2) Improve cancellation support during fetch
   - Handle cancellation request during network transfer phase
   - Clear error string when cancelled during indexing
3) Fix error handling in git_smart__download_pack
Cancellation during push is still only handled in the object counting
and network transfer stages of push (and not during packbuilding).
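From the caller's point of view, cancellation is requested by
returning a non-zero value from a progress callback, which then
surfaces as GIT_EUSER. A minimal sketch, using the transfer progress
callback as the example:

    #include <git2.h>

    static volatile int cancel_requested = 0;

    /* Returning non-zero from the callback aborts the transfer and the
     * operation comes back to the caller as GIT_EUSER. */
    static int transfer_progress_cb(
        const git_transfer_progress *stats, void *payload)
    {
        (void)stats; (void)payload;
        return cancel_requested ? -1 : 0;
    }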
References and their logs are logically coupled, let's make it so in
the code by moving the fs-based reflog implementation to live next to
the fs-based refs one.
As part of the change, make the function take names rather than
references, as only the names are relevant when looking up and
handling reflogs.
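For example, with the post-change signature taking a repository and a
reference name:

    #include <git2.h>
    #include <stdio.h>

    /* Looking up a reflog by reference name, not by a resolved
     * git_reference object. */
    static int show_reflog_size(git_repository *repo)
    {
        git_reflog *reflog = NULL;
        int error = git_reflog_read(&reflog, repo, "refs/heads/master");

        if (error == 0) {
            printf("%u entries\n",
                (unsigned int)git_reflog_entrycount(reflog));
            git_reflog_free(reflog);
        }

        return error;
    }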
The basic clone function is there to make it easy to create a "normal"
clone. Remove a bunch of options that are about changing the remote's
configuration.
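The intended usage stays simple; a minimal sketch:

    #include <git2.h>

    /* The "basic" clone: just a URL and a path, with defaults for
     * everything else. */
    static int basic_clone(git_repository **out,
        const char *url, const char *local_path)
    {
        git_clone_options opts = GIT_CLONE_OPTIONS_INIT;
        return git_clone(out, url, local_path, &opts);
    }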
The text progress and update_tips callbacks are already part of the
struct, which was meant to unify the callback setup, but the download
one was left out.
This adds the basics of progress reporting during push. While progress
for all aspects of a push operation is not reported with this change,
it lays the foundation to add these later. Push progress reporting
can be improved in the future - and consumers of the API should
just get more accurate information at that point.
The main areas where this is lacking are:
1) packbuilding progress: does not report progress during deltafication,
as this involves coordinating progress from multiple threads.
2) network progress: reports progress as objects and bytes are going
to be written to the subtransport (instead of as the client gets
confirmation that they have been received by the server) and leaves
out some of the bytes that are transferred as part of the push protocol.
Basically, this reports the pack bytes that are written to the
subtransport. It does not report the bytes sent on the wire that
are received by the server. This should be a good estimate of
progress (and an improvement over no progress).
The subtransport path was relying on pointing to data owned by
the remote which meant that after a redirect, the updated path
was getting lost for future requests. This updates the http
transport to strdup the path and maintain its own lifetime.
This also pulls responsibility for parsing the URL back into the
http transport and isolates the functions that parse and free that
connection data so that they can be reused between the initial
parsing and the redirect parsing.
On occasion, files can disappear while we're iterating the
filesystem, between calls to readdir and stat. Let's pretend
those didn't exist in the first place.
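The pattern, as a self-contained POSIX sketch (not the iterator code
itself):

    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Sketch: an entry that vanishes between readdir() and stat() is
     * skipped rather than treated as an error. */
    static int visit_dir(const char *dirpath)
    {
        DIR *dir = opendir(dirpath);
        struct dirent *de;

        if (dir == NULL)
            return -1;

        while ((de = readdir(dir)) != NULL) {
            char path[1024];
            struct stat st;

            snprintf(path, sizeof(path), "%s/%s", dirpath, de->d_name);

            if (stat(path, &st) < 0) {
                if (errno == ENOENT)
                    continue; /* raced with a delete: pretend it wasn't there */
                closedir(dir);
                return -1;
            }
            /* ... use st ... */
        }

        closedir(dir);
        return 0;
    }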
The git_buf_text_gather_stats call returns a boolean indicating if
the file looks like binary data. That shouldn't be an error; it
should be used to skip CRLF processing though.
This replaces some git_buf_printf calls with simple calls to
git_buf_put instead. Also, it fixes a missing va_end inside
the git_buf_vprintf implementation.
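The va_end issue is the usual one: every va_copy needs a matching
va_end, including on retry and error paths. A small sketch of the
correct shape (not the actual git_buf_vprintf code):

    #include <stdarg.h>
    #include <stdio.h>

    static int buf_vprintf(char *buf, size_t len, const char *fmt, va_list args)
    {
        va_list args2;
        int written;

        va_copy(args2, args);
        written = vsnprintf(buf, len, fmt, args2);
        va_end(args2); /* the kind of call that had gone missing */

        return written;
    }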
The attempt to "clean up warnings" seems to have introduced some
new warnings on compliant compilers. This fixes those in a way
that I suspect will also be okay for the non-compliant compilers.
Also this fixes what appears to be an extra semicolon in the
repo initialization template dir handling (and as part of that
fix, correctly handles the case where an error occurs).
In revwalk, we are doing a very simple check to see if a string
contains wildcard characters, so a full regular expression match
is not needed.
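Something along these lines is all that's needed; the exact wildcard
set shown here is an assumption:

    #include <string.h>

    /* A pattern needs glob handling only if it contains wildcard
     * characters; a single strpbrk() is enough, no regex required. */
    static int has_glob_specials(const char *pattern)
    {
        return strpbrk(pattern, "*?[") != NULL;
    }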
In remote listing, now that we have git_config_foreach_match with
full regular expression matching, we can take advantage of that
and eliminate the regex here, replacing it with much simpler string
manipulation.
This contains a few bug fixes and some header and API cleanups.
The main API change is that filters should now use GIT_PASSTHROUGH
to indicate that they wish to skip processing a file instead of
GIT_ENOTFOUND.
The bug fixes include a possible out-of-range buffer access in
the ident filter, a filter ordering problem I introduced into the
custom filter tests on Windows, and a filter buf NUL termination
issue that was coming up on Linux.
This adds more tests of filters, including the ident filter when
mixed with custom filters. I was able to combine with the reverse
filter and demonstrate that the order of filter application with
the default priority constants matches the order of core Git.
Also, this fixes two issues in the ident filter: preventing ident
expansion on binary files and avoiding a NULL dereference when
dollar sign characters are found without Id.
Fixed the filter order to match core Git, too.
This test demonstrates an interesting behavior of core Git (which
is totally reasonable and which libgit2 matches, although mostly
by coincidence). If you use the ident filter and commit a file
with a garbage ident in it, like '$Id: this is just garbage$' and
then immediately do a 'git checkout-index' with the new file, Git
will not consider the file out of date and will not overwrite the
file with an updated $Id$. Libgit2 has the same behavior. If you
remove the file and then do a checkout-index, it will be replaced
with a filtered version that has injected the OID correctly.
These are a couple of new clar helpers, extracted from the checkout
code, for testing that a file has the expected contents.
Actually wrote this as part of an abandoned earlier attempt at a
new filters API, but it will be useful now for some of the tests
I'm going to write.
I wish MSVC understood that "const char **" is not a const ptr,
but is a non-const pointer to an array of const pointers. Is that
too much to ask?
Checkout should not reject binary files from filters, as a filter
may actually wish to operate on binary files. The CRLF filter should
reject binary files itself if it wishes to. Moreover, the CRLF
filter requires this logic so that users can emulate the checkout
data in their odb -> workdir filtering.
Conflicts:
src/checkout.c
src/crlf.c
This makes the git_buf struct that was used internally into an
externally available structure and eliminates the git_buffer.
As part of that, some of the special cases that arose with the
externally used git_buffer were blended into the git_buf, such as
being careful about git_buf objects that may have a NULL ptr and
allowing for bufs with a valid ptr and size but zero asize as a
way of referring to externally owned data.
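For illustration, wrapping externally owned data looks roughly like
this, assuming the ptr/size/asize layout described above:

    #include <git2.h>
    #include <string.h>

    /* Sketch: a git_buf can point at externally owned data by carrying a
     * valid ptr and size but a zero asize, meaning "not owned here, do
     * not grow or free this". */
    static void wrap_external_data(git_buf *buf, char *data)
    {
        buf->ptr   = data;
        buf->size  = strlen(data);
        buf->asize = 0;
    }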
This adds the ident filter (that knows how to replace $Id$) and
tweaks the filter APIs and code so that git_filter_source objects
actually have the updated OID of the object being filtered when
it is a known value.
Extend the git2/sys/filter API with functions to look up a filter
and add it manually to a filter list. This requires some trickery
because the regular attribute lookups and checks are bypassed when
this happens, but in the right hands, it will allow a user to have
granular control over applying filters.
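A minimal sketch of that usage, assuming the lookup and push calls
exposed in git2/sys/filter.h and a filter list that already exists:

    #include <git2.h>
    #include <git2/sys/filter.h>

    /* Sketch: bypass the attribute checks and push a known filter onto
     * an existing filter list by hand. */
    static int force_crlf(git_filter_list *fl)
    {
        git_filter *crlf = git_filter_lookup("crlf");

        if (crlf == NULL)
            return -1;

        return git_filter_list_push(fl, crlf, NULL);
    }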
Increasingly there are a number of components that want to do some
cleanup at global shutdown time (at least if there are not going
to be memory leaks). This creates a very simple system of shutdown
hooks that will be invoked by git_threads_shutdown. Right now, the
maximum number of hooks is hardcoded, but since adding a hook is
not a public API, it should be fine and I thought it was better to
start off with really simple code.
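The shape of such a registry, sketched with made-up names (the real
one is internal to the library):

    typedef void (*shutdown_fn)(void);

    #define MAX_SHUTDOWN_HOOKS 8

    static shutdown_fn hooks[MAX_SHUTDOWN_HOOKS];
    static int hook_count = 0;

    static int register_shutdown_hook(shutdown_fn fn)
    {
        if (hook_count >= MAX_SHUTDOWN_HOOKS)
            return -1; /* hardcoded maximum, as noted above */
        hooks[hook_count++] = fn;
        return 0;
    }

    static void run_shutdown_hooks(void)
    {
        while (hook_count > 0)
            hooks[--hook_count](); /* most recently registered first */
    }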
This moves the git_filter_list into the public API so that users
can create, apply, and dispose of filter lists. This allows more
granular application of filters to user data outside of libgit2
internals.
This also converts all the internal usage of filters to the public
APIs along with a few small tweaks to make it easier to use the
public git_buffer stuff alongside the internal git_buf.
The filter registry as implemented was too primitive to actually
work once multiple filters were coming into play. This expands
the implementation of the registry to handle multiple prioritized
filters correctly.
Additionally, this adds an "attributes" field to a filter that
makes it really really easy to implement filters that are based
on one or more attribute values. The lookup and even simple value
checking can all happen automatically without custom filter code.
Lastly, with the registry improvements, this fills out the filter
lifecycle callbacks, with initialize and shutdown callbacks that
will be called before the filter is first used and after it is
last invoked. This allows for system-wide initialization and
cleanup by the filter.
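A minimal sketch of what an attribute-driven filter with lifecycle
callbacks looks like; the filter name, attribute, and priority value
here are illustrative:

    #include <git2.h>
    #include <git2/sys/filter.h>

    static int my_init(git_filter *self) { (void)self; return 0; }
    static void my_shutdown(git_filter *self) { (void)self; }

    /* The "attributes" field lets the library do the attribute lookup,
     * so no custom check code is needed for that part. */
    static git_filter my_filter = {
        .version    = GIT_FILTER_VERSION,
        .attributes = "myfilter",   /* applies where this attr is set */
        .initialize = my_init,      /* before the filter is first used */
        .shutdown   = my_shutdown,  /* after it is last invoked */
    };

    static int register_my_filter(void)
    {
        return git_filter_register("myfilter", &my_filter, 200);
    }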
This creates include/sys/filter.h with a basic definition of a
git_filter and then converts the internal code to use it. There
are related internal objects (git_filter_list) that we will want
to publish at some point, but this is a first step.
This begins the process of exposing git_filter objects to the
public API. This includes:
* new public type and API for `git_buffer` through which an
allocated buffer can be passed to the user
* new API `git_blob_filtered_content`
* make the git_filter type and GIT_FILTER_TO_... constants public
Unfortunately git-core uses the term "unborn branch" and "orphan
branch" interchangeably. However, "orphan" is only really there for
the checkout command, which has the `--orphan` option, so it doesn't
actually create the branch.
Branches never have parents, so the distinction of a branch with no
parents is odd to begin with. Crucially, the error messages deal with
unborn branches, so let's use that.
As the include depth increases, so does the chance that adding a new
reader reallocates the array of readers. This means that whenever we
call git_array_alloc() or config_parse(), we need to remember our
reader's index so we can look it up again afterwards.
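The pitfall, reduced to a self-contained sketch (struct names are
made up, not the parser's real types):

    #include <stdlib.h>

    /* When the array can be realloc'ed mid-parse, hold your place as an
     * index and re-derive the pointer, instead of keeping a pointer that
     * the realloc may invalidate. */
    struct reader  { const char *path; };
    struct readers { struct reader *items; size_t count, alloc; };

    static void parse_one(struct readers *rs, size_t reader_idx)
    {
        struct reader *r = &rs->items[reader_idx];

        (void)r;
        /* ... on hitting an include: append a new reader (which may
         * realloc rs->items), recurse using the new reader's index,
         * then refresh r = &rs->items[reader_idx] before continuing. */
    }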
We need to refresh the variables from the included files if they
change, so loop over all included files and re-parse them if any of
them has changed.