When we discover that we want to keep a negative rule, make sure to
clear the error variable, as we would otherwise return whatever was
left over from the previous loop iteration.
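A minimal sketch of the pattern, assuming a loop that classifies rules and an `error` variable that doubles as the return value; the struct, constant and helper names here are illustrative, not the actual libgit2 code:

```c
#include <stddef.h>

#define NOT_FOUND 1  /* hypothetical "no match" code, not a libgit2 constant */

struct rule {
	int negative;
	/* ... pattern data ... */
};

/* hypothetical matcher: 0 = match, NOT_FOUND = no match, <0 = hard error */
static int try_match_rule(const struct rule *rule, const char *path);

static int check_rules(const struct rule *rules, size_t rules_len, const char *path)
{
	int error = 0;
	size_t i;

	for (i = 0; i < rules_len; i++) {
		error = try_match_rule(&rules[i], path);

		if (error == NOT_FOUND && rules[i].negative) {
			/*
			 * Keeping the negative rule: clear the error, otherwise
			 * the leftover value from this iteration is what the
			 * function would return.
			 */
			error = 0;
			continue;
		}

		if (error != 0)
			break;
	}

	return error;
}
```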
This can be used by tools to show messages about failing to communicate
with the server. In that case the error message will often contain the
server's own error text, provided it managed to send anything.
When we fail to read from stdout, it's typically because the URL was
wrong and the server process has written something to stderr instead.
Read that output and set the error message to whatever we read from it.
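A rough sketch of that fallback, assuming we hold file descriptors for the child's stdout and stderr and use libgit2's `git_error_set()` to record the message; the surrounding function and buffer sizes are illustrative:

```c
#include <git2.h>
#include <unistd.h>

/*
 * Illustrative fallback: if reading the server's stdout fails, pull
 * whatever the child wrote to stderr and surface that as the error
 * message instead of a generic "read failed".
 */
static int read_stdout_or_report_stderr(int stdout_fd, int stderr_fd,
	char *buf, size_t buflen)
{
	char errmsg[1024];
	ssize_t n, e;

	n = read(stdout_fd, buf, buflen - 1);
	if (n > 0) {
		buf[n] = '\0';
		return 0;
	}

	/* stdout is dead; the server probably complained on stderr */
	e = read(stderr_fd, errmsg, sizeof(errmsg) - 1);
	if (e > 0) {
		errmsg[e] = '\0';
		git_error_set(GIT_ERROR_NET, "%s", errmsg);
	} else {
		git_error_set(GIT_ERROR_NET, "failed to read from the server");
	}

	return -1;
}
```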
We set an error if reading fails, but we don't bother setting an error
message when a write fails. This causes a cryptic error to be shown to
the user when the target filesystem is full.
This function calls out to functions doing IO, which means the number of
errors that can happen is quite large. It does not help if we always
overwrite the underlying error message with a less understandable
version of "something went wrong".
Instead, only use this generic message if there was no error set by the
callback.
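One way to express that rule, as a sketch assuming `git_error_last()` reports no prior error as NULL (the classic behaviour of that API) and an illustrative callback wrapper:

```c
#include <git2.h>

/* Hypothetical wrapper around a user callback that does IO. */
static int run_callback(int (*cb)(void *payload), void *payload)
{
	int error = cb(payload);

	if (error < 0 && git_error_last() == NULL) {
		/*
		 * Only fall back to the generic message when the callback
		 * did not set a more specific one itself.
		 */
		git_error_set(GIT_ERROR_CALLBACK, "callback returned an error");
	}

	return error;
}
```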
Instead of going through each entry we have and re-adding it, which may
not even be correct for certain CRLF options and performs badly, use the
function which performs a diff against the worktree and add or remove
files based on that list.
We currently iterate over all the entries and re-add them to the
index. While this provides correctness, it is wasteful as we try to
re-insert files which have not changed.
Instead, take a diff between the index and the worktree and only re-add
those which we already know have changed.
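A sketch of that approach with libgit2's public diff and index API: diff the index against the working directory and re-add or remove only the paths that show up as deltas (error handling trimmed, and the wrapper function itself is illustrative):

```c
#include <git2.h>

/* Re-add only the entries that the worktree diff says have changed. */
static int update_changed_entries(git_repository *repo, git_index *index)
{
	git_diff *diff = NULL;
	size_t i, ndeltas;
	int error;

	if ((error = git_diff_index_to_workdir(&diff, repo, index, NULL)) < 0)
		return error;

	ndeltas = git_diff_num_deltas(diff);
	for (i = 0; i < ndeltas && !error; i++) {
		const git_diff_delta *delta = git_diff_get_delta(diff, i);

		if (delta->status == GIT_DELTA_DELETED)
			error = git_index_remove_bypath(index, delta->old_file.path);
		else
			error = git_index_add_bypath(index, delta->new_file.path);
	}

	git_diff_free(diff);
	return error;
}
```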
This is useful to send to the client while we're performing the work.
The reporting function has a force parameter which makes sure that we
do send out the 100% completed message, even if it comes before the
next update window.
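A minimal sketch of such a throttled reporter with a force flag; the struct, field names and output format are illustrative, not the actual implementation:

```c
#include <stdio.h>
#include <time.h>

struct progress_state {
	time_t last_report;   /* when we last wrote a message */
	int interval;         /* minimum seconds between messages */
};

/*
 * Report progress at most once per interval, unless `force` is set,
 * which we use for the final 100% message so it is never dropped.
 */
static void report_progress(struct progress_state *state,
	unsigned int done, unsigned int total, int force)
{
	time_t now = time(NULL);

	if (!force && now - state->last_report < state->interval)
		return;

	state->last_report = now;
	printf("progress: %u/%u\n", done, total);
}
```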
A couple of tests use the wrong remote to push to. We did not notice
until now because a local push would copy individual objects, and those
already existed, so it became a no-op.
Once we made the local push create a packfile, it became noticeable that
there was a new packfile where it didn't belong.
Instead of copying each object individually, as we'd been doing, use the
packbuilder which should be faster and give us some feedback.
While performing this change, we can hook up the packbuilder's writing
to the push progress so the caller knows how far along we are.
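As a rough illustration with the public packbuilder API; the revwalk setup, the output directory and the way progress is forwarded are assumptions for the sketch:

```c
#include <git2.h>

static int pack_progress(int stage, uint32_t current, uint32_t total, void *payload)
{
	/* forward packbuilder progress to whatever the push reports */
	(void)stage; (void)current; (void)total; (void)payload;
	return 0;
}

static int pack_objects(git_repository *repo, git_revwalk *walk, const char *dir)
{
	git_packbuilder *pb = NULL;
	int error;

	if ((error = git_packbuilder_new(&pb, repo)) < 0)
		return error;

	/* feed every object reachable from the walk into the pack */
	if ((error = git_packbuilder_insert_walk(pb, walk)) < 0)
		goto done;

	if ((error = git_packbuilder_set_callbacks(pb, pack_progress, NULL)) < 0)
		goto done;

	/* write the packfile (and its index) into `dir` */
	error = git_packbuilder_write(pb, dir, 0, NULL, NULL);

done:
	git_packbuilder_free(pb);
	return error;
}
```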
We currently look first in the loose object dir and then in the packs
for objects. When performing operations on recent history this has a
higher likelihood of hitting, but when we deal with operations which
look further back into the past, we start spending a large amount of
time getting ENOENT from `access`.
Reversing the priorities means that long-running operations can get to
their objects faster, as we can look at the index data we have in memory
(or rather mapped) to figure out whether we have an object, which is
faster than going out to the filesystem.
The packed backend already implements an optimistic read algorithm: it
first looks at the packs we know about and only goes out to disk to
refresh if the object is not found. This means that when we do have the
object (which will be the majority of cases for anything that traverses
the graph) we can avoid going to disk entirely to determine whether an
object exists.
Operations which look at recent history may take a slight hit, but
these look at far fewer objects and thus take less time regardless.
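The lookup order can be pictured with a small sketch; the backend struct, priority field and loop here are illustrative, not libgit2's internal ODB code or constants:

```c
#include <stddef.h>

struct odb_backend {
	const char *name;
	int priority;  /* higher priority is consulted first */
	int (*exists)(struct odb_backend *b, const unsigned char *oid);
};

/*
 * Before: loose has the higher priority, so every miss costs an
 * access(2) call returning ENOENT before we even look at the packs.
 * After: the packed backend is consulted first, answering from the
 * mapped pack indexes without touching the filesystem.
 */
static int odb_exists(struct odb_backend **backends, size_t n,
	const unsigned char *oid)
{
	size_t i;

	for (i = 0; i < n; i++)  /* sorted by priority, descending */
		if (backends[i]->exists(backends[i], oid))
			return 1;

	return 0;
}
```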
Having the base refspecs change can cause confusion as to what the
current base refspec set is, and it complicates saving the remote's
configuration.
Change `git_remote_add_{fetch,push}()` to update the configuration
instead of an instance.
This finally makes `git_remote_save()` a no-op; it will be removed in a
later commit.
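For illustration, the repository-level calls look roughly like the sketch below; the exact prototypes should be checked against the libgit2 headers for the release in question, and the remote name and refspecs are just examples:

```c
#include <git2.h>

/*
 * Add refspecs by updating the repository configuration for the named
 * remote, rather than mutating a loaded git_remote instance.
 */
static int add_refspecs(git_repository *repo)
{
	int error;

	if ((error = git_remote_add_fetch(repo, "origin",
			"+refs/heads/*:refs/remotes/origin/*")) < 0)
		return error;

	return git_remote_add_push(repo, "origin",
			"refs/heads/master:refs/heads/master");
}
```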
While this will rarely differ from the default, keeping it in the remote
adds yet another setting the remote has to carry around, and one which
can affect its behaviour. Move it to the options instead.