When the ssh testing was implemented, the move to a script caused the
first test suite's exit code to be ignored. Check whether the main
tests fail and exit with an error in that case.
Report the index being locked with its own error code so callers can
differentiate it from other failures: a locked index is typically the
result of a crashed process or concurrent access, both of which often
require user intervention to fix.
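A minimal caller-side sketch of what this enables, assuming the lock is
surfaced as GIT_ELOCKED when writing the index; the exact entry point
that reports it and the helper name are assumptions:

```c
#include <git2.h>
#include <stdio.h>

static int write_index_or_explain(git_index *index)
{
	int error = git_index_write(index);

	/* GIT_ELOCKED lets us tell a held lock apart from other failures */
	if (error == GIT_ELOCKED)
		fprintf(stderr, "the index is locked: another process may be "
		        "using it, or a crashed process left the lock behind\n");

	return error;
}
```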
If none of the backends support direct writes and we must stream the
whole file, we already know what the object's id should be, so use the
stream's functions directly, bypassing the frontend's hashing and its
overwriting of the id we already have.
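A hedged sketch of that path, assuming the stream's `write` and
`finalize_write` members are visible as in libgit2's odb_backend.h;
the wrapper function itself is illustrative:

```c
#include <git2.h>

static int stream_known_object(git_odb_stream *stream,
                               const char *data, size_t len,
                               const git_oid *known_id)
{
	int error;

	/* call the backend stream directly: no frontend hashing, and the
	 * id we already computed is the one handed to the backend */
	if ((error = stream->write(stream, data, len)) < 0)
		return error;

	return stream->finalize_write(stream, known_id);
}
```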
Clarify the role of each function and in particular mention that there
is no need for the backend or stream to worry about the object's id,
as it will be given when `finalize_write` is called.
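Roughly what the clarified contract means on the backend side, as a
sketch; `my_stream_finalize_write` is a hypothetical stream callback
and the file handling is omitted:

```c
#include <git2.h>

static int my_stream_finalize_write(git_odb_stream *stream, const git_oid *oid)
{
	/* no hashing here: the frontend already computed `oid` for us */
	const char *name = git_oid_tostr_s(oid);

	/* a real backend would now move its temporary data into place
	 * under `name`; that part is omitted in this sketch */
	(void)stream;
	(void)name;
	return 0;
}
```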
The frontend is in charge of calculating the id of the objects. Thus
the backends should treat it as a read-only value. The positioning in
the function signature made it seem as though it was an output
parameter.
Make the id const and move it from the front to behind the subject
(backend or stream).
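A sketch of the resulting shape; the struct name is illustrative and
the "before" forms in the comments are paraphrased rather than quoted
from the old headers (older versions also spell the type git_otype):

```c
#include <git2.h>

struct example_odb_vtable {
	/* before: int (*write)(git_oid *id, git_odb_backend *backend, ...); */
	int (*write)(git_odb_backend *backend, const git_oid *id,
	             const void *data, size_t len, git_object_t type);

	/* before: int (*finalize_write)(git_oid *id, git_odb_stream *stream); */
	int (*finalize_write)(git_odb_stream *stream, const git_oid *id);
};
```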
When dealing with a chain of tags, we need to enqueue each of them
individually, which means we can't use `git_tag_peel` as that jumps
over the intermediate tags.
Do the peeling manually so we can look at each object and take the
appropriate action.
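A hedged sketch of the manual peel; the `enqueue` callback is a
stand-in for whatever action the walk needs to take per object:

```c
#include <git2.h>

static int enqueue_tag_chain(git_object *obj, int (*enqueue)(const git_oid *))
{
	int error = 0;

	while (git_object_type(obj) == GIT_OBJECT_TAG) {
		git_object *target = NULL;

		/* enqueue this tag itself, which git_tag_peel would skip */
		if ((error = enqueue(git_object_id(obj))) < 0)
			break;

		/* step to the immediate target, not the final one */
		error = git_tag_target(&target, (git_tag *)obj);
		git_object_free(obj);
		if (error < 0)
			return error;

		obj = target;
	}

	if (error == 0)
		error = enqueue(git_object_id(obj)); /* the final, non-tag object */

	git_object_free(obj);
	return error;
}
```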
Hash the data as it's coming into the stream and tell the backend what
its name is when finalizing the write. This makes it consistent with
the way a plain git_odb_write() performs the write.
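From the caller's side, a usage sketch against the public stream API
(not the internal change itself): the id returned by
`git_odb_stream_finalize_write` is computed by the frontend from the
streamed bytes, just as `git_odb_write()` would have produced it.

```c
#include <git2.h>
#include <string.h>

static int stream_blob(git_oid *out, git_odb *odb, const char *data)
{
	git_odb_stream *stream = NULL;
	size_t len = strlen(data);
	int error;

	if ((error = git_odb_open_wstream(&stream, odb, len, GIT_OBJECT_BLOB)) < 0)
		return error;

	/* the frontend hashes these bytes as they pass through */
	if ((error = git_odb_stream_write(stream, data, len)) == 0)
		error = git_odb_stream_finalize_write(out, stream);

	git_odb_stream_free(stream);
	return error;
}
```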
This is in preparation for moving the hashing to the frontend, which
requires us to handle the incoming data before passing it to the
backend's stream.
This fixes a small memory leak in git_revparse where early returns on
errors from git_revparse_single cause a free() on the (reallocated) left
side of the revspec to be skipped.
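An illustrative shape of the fix rather than the actual git_revparse
body, with strndup()/free() standing in for the internal allocators:
route every error through one cleanup label so the left-hand side is
freed on all paths.

```c
#include <git2.h>
#include <stdlib.h>
#include <string.h>

static int parse_range(git_object **left, git_object **right,
                       git_repository *repo,
                       const char *spec, const char *dots)
{
	char *lstr = strndup(spec, (size_t)(dots - spec));
	int error;

	if (lstr == NULL)
		return -1;

	if ((error = git_revparse_single(left, repo, lstr)) < 0)
		goto cleanup;

	error = git_revparse_single(right, repo, dots + 2);

cleanup:
	free(lstr); /* previously skipped by the early returns */
	return error;
}
```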
Accept any value for the remote's url, including an empty string,
which we used to reject as invalid configuration.
Rejecting it is not quite what git does (although git has its own
problems with such configurations), and it makes a broken configuration
harder to fix, since the user is not allowed to load and modify it.
Since we already need to check for a valid URL when we try to connect
to the network, let that code perform the check; there is no need to do
it anywhere else.
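A hedged sketch of where the check ends up; the function name, error
class and message are illustrative:

```c
#include <git2.h>

static int check_url_before_connect(const char *url)
{
	/* reject at connect time what the configuration code now accepts */
	if (url == NULL || *url == '\0') {
		git_error_set_str(GIT_ERROR_NET, "malformed remote: empty URL");
		return -1;
	}

	return 0;
}
```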
Set up the ssh credentials so we are able to talk to localhost and
issue git commands. Move to use a script, as the command list is
getting somewhat long.
While here, delay installing valgrind until we need it. It and its
dependencies are by far the largest downloads, so this lets us start
compiling (and failing) faster, and we only pay that cost when the test
suite runs successfully.
This reverts refactoring done in 13224ea4aa
that introduces a performance regression for NFS when reading files that
don't exist. open() forces a cache invalidation on NFS, while stat()ing a
file just uses the cache and is very quick.
To give a specific example, say you have a repo with a thousand packed
refs. Before this change, looking up every single one would incur a thousand
slow open() calls. With this change, it's a thousand fast stat() calls.
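A sketch of the pattern being restored (not the actual refs code):
answer the existence question with stat() and only open() files that
are actually there.

```c
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>

static int open_if_exists(const char *path)
{
	struct stat st;

	/* NFS can answer this from its attribute cache */
	if (stat(path, &st) < 0)
		return -1; /* missing file: no open(), no cache invalidation */

	return open(path, O_RDONLY);
}
```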
This is just a bunch of small fixes that I noticed while looking
at the UTF8 and UTF16 path stuff. It fixes a slowdown when looking
for an empty directory (the loop was not exited as soon as possible),
makes the dir name in the git__DIR structure a GIT_FLEX_ARRAY to save
an allocation, and fixes some slightly odd assumptions in the
cl_getenv helper.
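A hedged illustration of the flexible-array change; the struct here is
a stand-in for git__DIR and its other fields are placeholders:

```c
#include <stdlib.h>
#include <string.h>

struct dir_entry {
	int  in_use;   /* placeholder for the structure's other fields */
	char dir[];    /* GIT_FLEX_ARRAY boils down to a member like this */
};

static struct dir_entry *dir_entry_new(const char *path)
{
	size_t len = strlen(path);
	struct dir_entry *entry = malloc(sizeof(*entry) + len + 1);

	if (entry != NULL) {
		entry->in_use = 1;
		memcpy(entry->dir, path, len + 1); /* name shares the struct's allocation */
	}

	return entry;
}
```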