While often similar, these are not the same on Windows. We want to use the page
size on Windows for the pools, but for mmap we need to use the allocation
granularity as the alignment.
On the other platforms these values remain the same.
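As a rough illustration of the difference (only the Win32 `SYSTEM_INFO` fields are real here; the helper itself is hypothetical):

```c
#ifdef _WIN32
#include <stddef.h>
#include <windows.h>

/* The page size is fine for sizing pool allocations, but mapping offsets
 * (MapViewOfFile) must be aligned to the allocation granularity, which is
 * typically 64KiB while a page is 4KiB. */
static void query_alignment(size_t *pool_page_size, size_t *mmap_alignment)
{
	SYSTEM_INFO info;
	GetSystemInfo(&info);

	*pool_page_size = (size_t)info.dwPageSize;
	*mmap_alignment = (size_t)info.dwAllocationGranularity;
}
#endif
```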
Coverity complains about the git_rawobj ones because we use a loop in
which we keep remembering the old version, and we end up copying our
object as the base, so we want the data pointer to be NULL.
We've been using `p_ftruncate()` to extend the packfile in order to mmap
it and write the new data into it. This works well in the general case,
but as truncation does not allocate space in the filesystem, it must do
so when we write data to it.
The only way the OS has to indicate a failure to allocate the space is
SIGBUS, raised when we touch a mapped page the file cannot back. This
will cause every caller to crash, as nobody expects to handle this signal.
Switch to using `p_lseek()` and `p_write()` to extend the file in a way
which tells the filesystem to allocate the space for the missing
data. We can then be sure that we have space to write into.
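The extension step then looks roughly like the following (plain POSIX calls rather than the `p_` wrappers; `fd` and `new_size` are assumed to come from the caller):

```c
#include <unistd.h>

/* Seek to one byte short of the desired size and write a single byte, so a
 * failure to provide the space surfaces here as an ordinary write error
 * (e.g. ENOSPC) instead of a SIGBUS later when touching the mapping. */
static int extend_packfile(int fd, off_t new_size)
{
	const char byte = '\0';

	if (lseek(fd, new_size - 1, SEEK_SET) < 0)
		return -1;

	if (write(fd, &byte, 1) != 1)
		return -1;

	return 0;
}
```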
It turns out that erroring out on duplicate commits is the right thing
to do, but git was not hitting the bug on the server-side.
Bring back a descriptive error message in case of duplicate entries and
error out.
If a packfile includes duplicate objects, we can choose to use the
second copy instead of the first by using the same logic as if it were
the first.
Change the error condition from 0 to -1, which indicates a bad resize,
and set the OOM message in that case.
This does mean we will leak the first copy of the object. We can deal
with that later, but making fetches work is more important.
While this is not even close to a fix, we can at least set an error
message so we know which error we are facing. Up to now we just
returned an error without a message.
Keep the declarations in the headers, while putting the definitions in
the C files. Putting the function definitions in headers causes
them to be duplicated if you include two headers with them.
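As a trivial illustration with made-up names:

```c
/* foo.h -- declaration only; including this from many C files is fine */
int foo_twice(int x);

/* foo.c -- the single definition */
int foo_twice(int x)
{
	return x * 2;
}
```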
Opening the same repository multiple times will currently open the same
file multiple times, as well as map the same region of the file multiple
times. This is not necessary, as the packfile data is immutable.
Instead of opening and closing packfiles directly, introduce an
indirection and allocate packfiles globally. This does mean locking on
each packfile open, but we already use this lock for the global mwindow
list so it doesn't introduce a new contention point.
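A sketch of that indirection, with a made-up cache and lock (the real code reuses the mwindow lock mentioned above; every name below is illustrative):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct pack_entry {
	char *path;
	void *pack;               /* shared, immutable packfile data */
	int refcount;
	struct pack_entry *next;
};

static pthread_mutex_t pack_cache_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pack_entry *pack_cache;

/* Return an already-open packfile for `path`, or open and register it. */
static void *packfile_get(const char *path, void *(*open_fn)(const char *))
{
	struct pack_entry *e;
	void *pack = NULL;

	pthread_mutex_lock(&pack_cache_lock);

	for (e = pack_cache; e; e = e->next) {
		if (!strcmp(e->path, path)) {
			e->refcount++;
			pack = e->pack;
			goto unlock;
		}
	}

	if ((e = calloc(1, sizeof(*e))) == NULL)
		goto unlock;

	if ((e->pack = open_fn(path)) == NULL) {
		free(e);
		goto unlock;
	}

	e->path = strdup(path);
	e->refcount = 1;
	e->next = pack_cache;
	pack_cache = e;
	pack = e->pack;

unlock:
	pthread_mutex_unlock(&pack_cache_lock);
	return pack;
}
```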
Use size_t for the page size, instead of long, and check the result of sysconf.
Use size_t for the page offset so no cast to size_t is needed for the second argument to p_mmap.
Use mod instead of a div/mult pair, so no cast to size_t is necessary.
Windows has its own ftruncate() called _chsize_s().
p_mkstemp() is changed to use p_open() so we can make sure we open for
writing; the addition of exclusive create is a good thing to do
regardless, as we want a temporary path for ourselves.
Lastly, MSVC doesn't quite know how to add two numbers if one of them is a
void pointer, so let's alias it to unsigned char.
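On the POSIX side, the sysconf/mod/unsigned-char points above combine into something like this (plain POSIX calls rather than the `p_` wrappers; the helper itself is illustrative):

```c
#include <unistd.h>
#include <sys/mman.h>

static void *map_range(int fd, size_t offset, size_t len)
{
	long sc = sysconf(_SC_PAGESIZE);
	size_t page_size, page_offset;
	unsigned char *base;

	if (sc < 0)                        /* check the result of sysconf */
		return NULL;

	page_size = (size_t)sc;
	page_offset = offset % page_size;  /* mod, no div/mult pair, no cast */

	base = mmap(NULL, len + page_offset, PROT_READ, MAP_SHARED,
	            fd, (off_t)(offset - page_offset));
	if (base == MAP_FAILED)
		return NULL;

	/* arithmetic on unsigned char * rather than void *, which MSVC
	 * rejects (void * arithmetic is a GNU extension) */
	return base + page_offset;
}
```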
Some OSs cannot keep their ideas about file content straight when mixing
standard IO with file mapping. As we use mmap for reading from the
packfile, let's make writing to the packfile use mmap as well.
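A sketch of that write path (assuming `map_start` is already page-aligned as in the sketch above; the helper and its parameters are hypothetical):

```c
#include <string.h>
#include <sys/mman.h>

/* Map the freshly extended region writable and shared, copy into it and
 * unmap, instead of using write(2)/stdio on the same descriptor. */
static int write_via_mmap(int fd, off_t map_start, size_t map_len,
                          size_t offset_in_map, const void *data, size_t size)
{
	unsigned char *map = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
	                          MAP_SHARED, fd, map_start);
	if (map == MAP_FAILED)
		return -1;

	memcpy(map + offset_in_map, data, size);
	return munmap(map, map_len);
}
```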
Our vector does a move of the rest of the array when we remove an
item. Doing this repeatedly can be expensive, and we do this a lot in
the indexer. Instead, set the value to NULL and skip those entries.
perf reported around 30% of `index-pack` time was going into
memmove. With this change, that goes away and we spend most of the time
hashing and inflating data.
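The pattern boils down to something like this (the struct and process() are stand-ins, not the real git_vector API):

```c
#include <stddef.h>

struct vec {
	void **contents;
	size_t length;
};

static void process(void *item) { (void)item; }

/* O(1) "removal": blank the slot instead of memmoving the tail down. */
static void vec_blank(struct vec *v, size_t idx)
{
	v->contents[idx] = NULL;
}

/* Consumers simply skip blanked entries. */
static void vec_process_all(struct vec *v)
{
	size_t i;

	for (i = 0; i < v->length; i++) {
		if (v->contents[i] != NULL)
			process(v->contents[i]);
	}
}
```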
This renames git_vector_free_all to the better git_vector_free_deep
and also contains a couple of memory leak fixes based on valgrind
checks. The fixes are specifically: failure to free global dir
path variables when not compiled with threading on, and failure to
free filters from the filter registry that had not been fully
initialized.
This adds tests that try canceling an indexer operation from
within the progress callback.
After writing the tests, I wanted to run this under valgrind and
had a number of errors in that situation because mmap wasn't
working. I added a CMake option to force emulation of mmap and
consolidated the Amiga-specific code into that new place (so we
don't actually need separate Amiga code now, just have to turn on
-DNO_MMAP).
Additionally, I made the indexer code propagate error codes more
reliably than it used to.
This changes the behavior of callbacks so that the callback error
code is not converted into GIT_EUSER and instead we propagate the
return value through to the caller. Instead of using the
giterr_capture and giterr_restore functions, we now rely on all
functions to pass back the return value from a callback.
To avoid having a return value with no error message, the user
can call the public giterr_set_str or some such function to set
an error message. There is a new helper 'giterr_set_callback'
that functions can invoke after making a callback which ensures
that some error message was set in case the callback did not set
one.
In places where the sign of the callback return value is
meaningful (e.g. positive to skip, negative to abort), only the
negative values are returned to the caller, obviously, since
the other values allow the loop to continue.
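The convention amounts to loops along these lines (the callback type and the helper below are stand-ins for illustration, not the library's actual signatures):

```c
#include <stddef.h>

typedef int (*item_cb)(void *item, void *payload);

/* Stand-in for the giterr_set_callback idea: make sure *some* error
 * message is set when a callback fails without setting one itself. */
static void ensure_callback_error_message(const char *where)
{
	(void)where;
}

static int foreach_item(void **items, size_t n, item_cb cb, void *payload)
{
	size_t i;
	int error;

	for (i = 0; i < n; i++) {
		error = cb(items[i], payload);

		if (error > 0)      /* positive: skip this item, keep looping */
			continue;

		if (error < 0) {    /* negative: hand it straight back, no GIT_EUSER */
			ensure_callback_error_message("foreach_item");
			return error;
		}
	}

	return 0;
}
```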
The hardest parts of this were in the checkout code where positive
return values were overloaded as meaningful values for checkout.
I fixed this by adding an output parameter to many of the internal
checkout functions and removing the overload. This added some
code, but it is probably a better implementation.
There is some funkiness in the network code where user-provided
callbacks could be returning a positive or a negative value and
we want to rely on that to cancel the loop. There are still a
couple of places where a user error might get turned into GIT_EUSER
there, I think, though none are exercised by the tests.
There are a lot of places where we call git__free on each item in
a vector and then call git_vector_free on the vector itself. This
just wraps that up into one convenient helper function.
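In essence the helper does the following (shown on a stand-in vector type rather than git_vector itself):

```c
#include <stddef.h>
#include <stdlib.h>

struct vec {
	void **contents;
	size_t length;
};

/* Free every element, then the backing storage itself. */
static void vec_free_deep(struct vec *v)
{
	size_t i;

	for (i = 0; i < v->length; i++)
		free(v->contents[i]);

	free(v->contents);
	v->contents = NULL;
	v->length = 0;
}
```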
This continues auditing all the places where GIT_EUSER is being
returned and making sure to clear any existing error using the
new giterr_user_cancel helper. As a result, places that relied
on intercepting GIT_EUSER but having the old error preserved also
needed to be cleaned up to correctly stash and then retrieve the
actual error.
Additionally, as I encountered places where error codes were not
being propagated correctly, I tried to fix them up. A number of
those fixes are included in this commit as well.
The old name was there to keep it apart from the indexer which read in
from a file on disk. That other indexer does not exist anymore, so there
is no need for anything other than git_indexer to refer to it.
While here, rename _add() function to _append() and _finalize() to
_commit(). The former change is cosmetic, while the latter avoids
talking about "finalizing", which OO languages use to mean something
completely different.
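A caller after the rename might look like this (the constructor arguments and the stats type differ between libgit2 versions, so treat the exact signatures as an assumption):

```c
#include <git2.h>

static int index_pack_data(const void *buf, size_t len, const char *dir)
{
	git_indexer *idx = NULL;
	git_transfer_progress stats = {0};
	int error;

	if ((error = git_indexer_new(&idx, dir, 0, NULL, NULL, NULL)) < 0)
		return error;

	/* formerly _add(): feed pack data as it arrives */
	if ((error = git_indexer_append(idx, buf, len, &stats)) == 0)
		/* formerly _finalize(): resolve deltas and write out the index */
		error = git_indexer_commit(idx, &stats);

	git_indexer_free(idx);
	return error;
}
```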
The user is unable to derive the number of deltas in the pack, as that
would require them to capture the stats at exactly the moment between
download and final processing, which is abstracted away in the fetch.
Capture these numbers for the user and expose them in the progress
struct. The clone and fetch examples now also present this information
to the user.
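For example, a transfer-progress callback can now report delta resolution; a sketch, where only the delta counters are the newly exposed part:

```c
#include <stdio.h>
#include <git2.h>

static int print_progress(const git_transfer_progress *stats, void *payload)
{
	(void)payload;

	if (stats->indexed_objects == stats->total_objects)
		printf("Resolving deltas %u/%u\r",
		       stats->indexed_deltas, stats->total_deltas);
	else
		printf("Indexed %u/%u objects\r",
		       stats->indexed_objects, stats->total_objects);

	return 0;
}
```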