pixman_composite_trapezoids() is supposed to composite across the
entire destination, but it actually only composites across the extent
of the trapezoids. For operators such as ADD or OVER this doesn't
matter since a zero source has no effect on the destination. But for
operators such as SRC or IN, it does matter.
So for operators where a zero source does have an effect, don't clip
to the trap extents.
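For illustration, the distinction could be captured by a small helper like the following sketch (the helper name is hypothetical; the actual clipping logic in pixman_composite_trapezoids() is structured differently):

    #include <pixman.h>

    /* Hypothetical helper: returns 1 if compositing an all-zero source is a
     * no-op, in which case clipping to the trapezoid extents is safe. */
    static int
    zero_source_is_noop (pixman_op_t op)
    {
        switch (op)
        {
        case PIXMAN_OP_ADD:
        case PIXMAN_OP_OVER:
            return 1;   /* zeros outside the traps leave the destination alone */
        default:
            return 0;   /* e.g. SRC or IN: zeros outside the traps matter */
        }
    }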
When the pixman_image_create_bits() function is given NULL for bits, it
will allocate a new buffer and initialize it to zero. However, in some
cases, only a small region of the image is actually used; in that case
it is wasteful to touch all of the memory.
The new pixman_image_create_bits_no_clear() works exactly like
_create_bits() except that it doesn't initialize any newly allocated
memory.
(fixes bug #52101)
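A hedged usage sketch, assuming the same argument order as pixman_image_create_bits() (which is how the new function is described above):

    #include <pixman.h>

    /* Create a large scratch image without paying for a memset of the whole
     * buffer; only the region that is actually rendered into gets touched. */
    static pixman_image_t *
    create_scratch (int width, int height)
    {
        /* bits == NULL makes pixman allocate the buffer itself; with the
         * _no_clear() variant that buffer is left uninitialized.  A stride
         * of 0 lets pixman compute one, as with _create_bits(). */
        return pixman_image_create_bits_no_clear (PIXMAN_a8r8g8b8,
                                                  width, height,
                                                  NULL, 0);
    }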
On MirBSD, the compiler produces a (harmless) warning when the compiler
is called without the standard CFLAGS:
foo.c:0: note: someone does not honour COPTS correctly, passed 0 times
However, PIXMAN_LINK_WITH_ENV considers _any_ output on stderr as an
error, even if the exit status of the compiler is 0. Furthermore, it
resets CFLAGS and LDFLAGS at the start. On MirBSD, this will lead to a
warning in each test, making all such tests fail. In particular, the
pthread_setspecific test fails, so pixman is compiled without thread
support. This leads to compile errors later on, or at least it did when
I tried this on pkgsrc. Re-adding the saved CFLAGS, LDFLAGS and LIBS
before the test makes it work.
The second hunk inverts the order of the pthread flag checks. On BSD
systems (this is true at least on OpenBSD and MirBSD), both -lpthread
and -pthread work but the latter is "preferred", whatever this means.
This provides a way to enable MIPS DSP ASE optimizations if running
under qemu-user (where /proc/cpuinfo contains information about the
host processor instead of the emulated one). This can be used to run
the pixman test suite in qemu-user when no real MIPS hardware is
available.
The while part of a do/while loop was formatted as if it were a while
loop with an empty body. Probably some indent tool misinterpreted the
code at some point.
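For illustration, the kind of misformatting being described (a hypothetical loop, not the actual code):

    static int
    advance (int n, int step, int limit)
    {
        /* Misformatted: the terminating ';' has drifted onto its own line,
         * so the "while" reads like a separate while loop with an empty
         * body.  The fix is simply to write "while (n < limit);". */
        do
        {
            n += step;
        }
        while (n < limit)
            ;

        return n;
    }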
In order for a src/mask pair to be considered a pixbuf, they have to
have identical transformations, but we don't check for that. Since the
only fast paths we have for pixbufs require identity transformations,
it suffices to check that both source and mask are
untransformed.
This is also the reason that this bug can't be triggered by any test
code - if the source and mask had different transformations, we would
consider them a pixbuf, but then wouldn't take the fast path because
at least one of the transformations would be different from the
identity.
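A minimal sketch of the check, under the assumption that the transform pointers come from pixman's internal image_common_t (the helper itself is hypothetical):

    #include <stddef.h>
    #include <pixman.h>

    /* A src/mask pair only qualifies as a pixbuf when both images are
     * untransformed; "both untransformed" trivially implies "identical
     * transformations", which is all the identity-only fast paths need. */
    static int
    pixbuf_transforms_ok (const pixman_transform_t *src_transform,
                          const pixman_transform_t *mask_transform)
    {
        return src_transform == NULL && mask_transform == NULL;
    }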
GCC doesn't move the divisions out of the loop, so do it manually by
looking up the four (1.0f / mask) values in a table. Table lookups are
used under the theory that one L2 hit plus three L1 hits is preferable
to four floating point divisions.
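A sketch of the technique, assuming 8 bit mask values (the table name and the handling of a zero mask are illustrative):

    #include <stdint.h>

    /* One reciprocal per possible 8 bit mask value, built once up front;
     * entry 0 is a placeholder since a zero mask is handled separately. */
    static float inv_mask[256];

    static void
    init_inv_mask (void)
    {
        int i;

        inv_mask[0] = 0.0f;
        for (i = 1; i < 256; i++)
            inv_mask[i] = 1.0f / (float)i;
    }

    /* In the inner loop, each 1.0f / m division becomes a table load. */
    static float
    scale_by_mask (float value, uint8_t m)
    {
        return value * inv_mask[m];
    }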
Since pixman-combine64.[ch] are not used anymore, there is no point
generating these files from pixman-combine.[ch].template.
Also get rid of the dependency on perl in configure.ac.
The 64 bit pipeline is not used anymore, so it can now be removed.
Don't generate pixman-combine64.[ch] anymore. Don't generate
pixman-srgb.c anymore. Delete all the 64 bit fetchers in
pixman-access.c, all the 64 bit iterator functions in
pixman-bits-image.c and all the functions that expand from 8 to 16
bits.
In pixman-bits-image.c, remove bits_image_fetch_untransformed_64() and
add bits_image_fetch_untransformed_float(); change
dest_get_scanline_wide() to produce a floating point buffer.
In the gradients, change *_get_scanline_wide() to call
pixman_expand_to_float() instead of pixman_expand().
In pixman-general.c change the wide Bpp to 16 instead of 8, and
initialize the buffers to 0 to prevent NaNs from causing trouble.
In pixman-noop.c make the wide solid iterator generate floating point
pixels.
In pixman-solid-fill.c, cache a floating point pixel, and make the
wide iterator generate floating point pixels.
Also fix a bug in bits_image_fetch_untransformed_repeat_normal().
Three new function pointer fields are added to bits_image_t:
    fetch_scanline_float
    fetch_pixel_float
    store_scanline_float
similar to the existing 32 and 64 bit accessors. The fetcher_info_t
struct in pixman-access.c similarly gets a new get_scanline_float field.
For most formats, the new get_scanline_float field is set to a new
function fetch_scanline_generic_float() that first calls the 32 bit
scanline fetcher and then expands the fetched pixels to floating
point.
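A hedged sketch of that generic float fetcher; the types and the pixman_expand_to_float() call are pixman-private, and the temporary buffer handling is simplified here for illustration:

    /* Requires pixman-private.h; the real code does not assume a fixed
     * maximum scanline width. */
    static void
    fetch_scanline_generic_float_sketch (bits_image_t *image,
                                         int x, int y, int width,
                                         argb_t *buffer)
    {
        uint32_t tmp[512];      /* assume width <= 512 for this sketch */

        /* 1. Reuse the existing 32 bit scanline fetcher. */
        image->fetch_scanline_32 ((pixman_image_t *)image, x, y, width,
                                  tmp, NULL);

        /* 2. Expand the 8 bpc pixels to single precision floating point. */
        pixman_expand_to_float (buffer, tmp, image->format, width);
    }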
For the 10 bpc formats, new floating point accessors are added that
use pixman_unorm_to_float() and pixman_float_to_unorm() to convert
back and forth.
The PIXMAN_a8r8g8b8_sRGB format is handled with a 256-entry table that
maps 8 bit sRGB channels to linear single precision floating point
numbers. The sRGB->linear direction can then be done with a simple
table lookup.
The other direction is currently done with a 4096-entry table, which
works fine for 16 bit integers but not so well for floating
point. So instead this patch uses a binary search in the sRGB->linear
table. The existing 32 bit accessors for the sRGB format are also
converted to use this method.
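A sketch of the linear-to-sRGB direction via binary search over the 256-entry sRGB-to-linear table (the table and function names here are assumed; the actual code differs in detail):

    #include <stdint.h>

    /* to_linear[i] = linear value of the 8 bit sRGB code i; monotonically
     * increasing, filled in elsewhere from the sRGB transfer function. */
    extern const float to_linear[256];

    /* Find the 8 bit sRGB code whose linear value is closest to 'value' by
     * binary searching the monotone table. */
    static uint8_t
    linear_to_srgb_sketch (float value)
    {
        int lo = 0, hi = 255;

        while (lo < hi)
        {
            int mid = (lo + hi) / 2;

            if (to_linear[mid] < value)
                lo = mid + 1;
            else
                hi = mid;
        }

        /* 'lo' is the first entry >= value; the entry below may be closer. */
        if (lo > 0 && value - to_linear[lo - 1] < to_linear[lo] - value)
            lo--;

        return (uint8_t)lo;
    }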
A new struct argb_t containing a floating point pixel is added to
pixman-private.h and conversion routines are added to pixman-utils.c
to convert normalized integers to and from that struct.
New functions:
- pixman_expand_to_float()
    Expands a buffer of integer pixels to a buffer of argb_t pixels
- pixman_contract_from_float()
    Converts a buffer of argb_t pixels to a buffer of integer pixels
- pixman_float_to_unorm()
    Converts a floating point number to an unsigned normalized integer
- pixman_unorm_to_float()
    Converts an unsigned normalized integer to a floating point number
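A hedged sketch of what these look like; the real declarations live in pixman-private.h and pixman-utils.c, and the rounding below is simply the usual unorm convention:

    #include <stdint.h>

    /* One floating point pixel, one float per channel, in (a, r, g, b) order. */
    typedef struct
    {
        float a, r, g, b;
    } argb_t;

    /* Unsigned normalized integer -> float:  x / (2^n_bits - 1). */
    static float
    unorm_to_float_sketch (uint16_t x, int n_bits)
    {
        uint32_t max = (1u << n_bits) - 1;

        return (float)x / (float)max;
    }

    /* Float -> unsigned normalized integer, clamped to [0, 1] and rounded. */
    static uint16_t
    float_to_unorm_sketch (float f, int n_bits)
    {
        uint32_t max = (1u << n_bits) - 1;

        if (f < 0.0f)
            f = 0.0f;
        if (f > 1.0f)
            f = 1.0f;

        return (uint16_t)(f * (float)max + 0.5f);
    }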
This test runs the new floating point combiners on random input with
divide-by-zero exceptions turned on.
With the floating point combiners the only thing we guarantee is that
divide-by-zero exceptions are not generated, so change
enable_fp_exceptions() to only enable those, and rename accordingly.
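For reference, a sketch of enabling only the divide-by-zero trap with glibc's feenableexcept() (a GNU extension; the test's actual helper may be structured differently):

    #define _GNU_SOURCE
    #include <fenv.h>

    /* Enable only the divide-by-zero trap, since that is the only exception
     * the floating point combiners promise not to raise. */
    static void
    enable_divbyzero_exceptions (void)
    {
    #ifdef FE_DIVBYZERO
        feenableexcept (FE_DIVBYZERO);
    #endif
    }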
This file contains floating point implementations of combiners for all
pixman operators. These combiners operate on buffers containing single
precision floating point pixels stored in (a, r, g, b) order.
The combiners are added to the pixman_implementation_t struct, but
nothing uses them yet.
This commit incorporates a number of bug fixes contributed by Andrea
Canciani.
Some notes:
- The combiners make sure never to divide by zero regardless of
input, so an application could enable divide-by-zero exceptions and
pixman wouldn't generate any.
- The operators are implemented according to the Render spec. I.e.:
  - If the input pixels are between 0 and 1, then so is the output.
  - The source and destination coefficients for the conjoint and
    disjoint operators are clamped to [0, 1].
- The PDF operators are not described in the Render spec, and the
implementation here doesn't do any clamping except in the final
conversion from floating point to destination format.
All of the above will need to be rethought if we add support for pixel
formats that can support negative and greater-than-one pixels. It is
in fact already the case in principle that convolution filters can
produce pixels with negative values, but since these go through the
broken "wide" path that narrows everything to 32 bits, these negative
values don't currently survive to the combiners.
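As an illustration of the shape of these combiners, a minimal sketch of OVER on premultiplied pixels in (a, r, g, b) order; the real combiners are macro-generated and also handle a mask argument:

    typedef struct { float a, r, g, b; } argb_t;    /* (a, r, g, b) order, as above */

    /* OVER for premultiplied single precision pixels:
     * dest = src + (1 - src.a) * dest.  No division, so no FP exceptions. */
    static void
    combine_over_float_sketch (argb_t *dest, const argb_t *src, int n_pixels)
    {
        int i;

        for (i = 0; i < n_pixels; i++)
        {
            float ia = 1.0f - src[i].a;

            dest[i].a = src[i].a + ia * dest[i].a;
            dest[i].r = src[i].r + ia * dest[i].r;
            dest[i].g = src[i].g + ia * dest[i].g;
            dest[i].b = src[i].b + ia * dest[i].b;
        }
    }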
In preparation for an upcoming change of the wide pipe to use floating
point, comment out some formats in glyph-test that are going to be
using floating point and update the CRC32 value to match.
Add const to pointer arguments when the function doesn't change the
pointed-to data.
Also, in add_glyphs() in pixman-glyph.c, make 'white' static and const.
Before this patch it was often faster to scale and repeat in two
passes, because each pass could use a fast path, whereas the single
pass approach fell through to the slow path. This patch makes the
single pass approach competitive.
The infinite loop detected by "affine-test 212944861" is caused by an
overflow in this expression:
    max_x = pixman_fixed_to_int (vx + (width - 1) * unit_x) + 1;
where (width - 1) * unit_x doesn't fit in a signed int. This causes
max_x to be too small so that this:
    src_width = 0;
    while (src_width < REPEAT_NORMAL_MIN_WIDTH && src_width <= max_x)
        src_width += src_image->bits.width;
results in src_width being 0. Later on when src_width is used for
repeat calculations, we get the infinite loop.
By casting unit_x to int64_t, the expression no longer overflows and
affine-test 212944861 and infinite-loop no longer loop forever.
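Per the description above, the fix amounts to widening the multiplication, roughly as in this sketch (the wrapper function and variable types are illustrative, following the surrounding fast path code):

    #include <stdint.h>
    #include <pixman.h>

    static int
    compute_max_x (pixman_fixed_t vx, pixman_fixed_t unit_x, int width)
    {
        /* Before: (width - 1) * unit_x was evaluated in signed int and could
         * overflow, making max_x far too small:
         *
         *     return pixman_fixed_to_int (vx + (width - 1) * unit_x) + 1;
         *
         * After: casting unit_x to int64_t forces a 64 bit multiplication. */
        return pixman_fixed_to_int (vx + (width - 1) * (int64_t)unit_x) + 1;
    }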
This test demonstrates a bug where a certain transformation matrix can
result in an infinite loop. It was extracted as a standalone version
of "affine-test 212944861".
If given the option -nf, the test program will not call fail_after()
and therefore potentially run forever.
Printing out the translation and scale is a bit misleading because the
actual transformation matrix can be modified in various other ways.
Instead simply print the whole transformation matrix that is actually
used.
In the checks for whether the transforms are rotation matrices, "-1"
and "1" were used instead of the correct -pixman_fixed_1 and
pixman_fixed_1.
Fixes test suite failure for rotate-test.
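For context, pixman_fixed_t is 16.16 fixed point, so the literal 1 means 1/65536 rather than 1.0; a small sketch of the corrected comparison (the surrounding rotation check is not reproduced here):

    #include <pixman.h>

    /* pixman_fixed_1 == pixman_int_to_fixed (1) == 0x10000, i.e. 1.0 in
     * 16.16 fixed point; the literal 1 would mean 1/65536. */
    static int
    is_unit_entry (pixman_fixed_t v)
    {
        return v == pixman_fixed_1;     /* correct */
        /* return v == 1; */            /* the bug */
    }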
This program exercises a bug in pixman-image.c where "-1" and "1" were
used instead of the correct "- pixman_fixed_1" and "pixman_fixed_1".
With the fast implementation enabled:
% ./rotate-test
rotate test failed! (checksum=35A01AAB, expected 03A24D51)
Without it:
% env PIXMAN_DISABLE=fast ./rotate-test
pixman: Disabled fast implementation
rotate test passed (checksum=03A24D51)
V2: The first version didn't have lcg_srand (testnum) in test_transform().
In general, the component alpha version of an operator is supposed to
do this:
- multiply source with mask in all channels
- multiply mask with source alpha in all channels
- compute the regular operator in all channels using the
mask value whenever source alpha is called for
The first two steps are usually accomplished with the function
combine_mask_ca(), but for operators where source alpha is not used,
such as SRC, ADD and OUT, the simpler function
combine_mask_value_ca(), which doesn't compute the new mask values,
can be used.
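A sketch of the difference between the two helpers, written per channel on floats for clarity (pixman's real versions operate on packed 8 bit channels via macros):

    typedef struct { float a, r, g, b; } fpixel_t;  /* illustration only */

    /* Step 1 only: multiply source with mask in all channels. */
    static void
    combine_mask_value_sketch (fpixel_t *src, const fpixel_t *mask)
    {
        src->a *= mask->a;  src->r *= mask->r;
        src->g *= mask->g;  src->b *= mask->b;
    }

    /* Steps 1 and 2: also multiply the mask with source alpha in all
     * channels, which operators that use source alpha need. */
    static void
    combine_mask_sketch (fpixel_t *src, fpixel_t *mask)
    {
        float srca = src->a;    /* source alpha, before it is overwritten */

        src->a *= mask->a;  src->r *= mask->r;
        src->g *= mask->g;  src->b *= mask->b;

        mask->a *= srca;  mask->r *= srca;
        mask->g *= srca;  mask->b *= srca;
    }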
However, the PDF blend modes generally *do* make use of source alpha,
so they can't use combine_mask_value_ca() as they do now. They have to
use combine_mask_ca().
This patch fixes this in combine_multiply_ca() and the CA combiners
generated by PDF_SEPARABLE_BLEND_MODE.
The fast_composite_scaled_nearest() function can be called when the
format is x8b8g8r8. In that case pixels fetched in fetch_nearest()
need to have their alpha channel set to 0xff.
Fixes test suite failure in scaling-test.
Reviewed-by: Matt Turner <mattst88@gmail.com>
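A sketch of the kind of fix described; the real fetch_nearest() also handles repeat modes and checks the format differently:

    #include <stdint.h>
    #include <pixman.h>

    /* When the source format has no alpha channel (e.g. x8b8g8r8), force the
     * unused byte to 0xff so later stages see an opaque pixel. */
    static uint32_t
    fetch_pixel_sketch (pixman_format_code_t format, const uint32_t *row, int x)
    {
        uint32_t p = row[x];

        if (PIXMAN_FORMAT_A (format) == 0)
            p |= 0xff000000;

        return p;
    }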
Update the CRC values based on what the general implementation
reports. This reveals a bug in the fast implementation:
% env PIXMAN_DISABLE="mmx sse2" ./test/scaling-test
pixman: Disabled mmx implementation
pixman: Disabled sse2 implementation
scaling test failed! (checksum=AA722B06, expected 03A23E0C)
vs.
% env PIXMAN_DISABLE="mmx sse2 fast" ./test/scaling-test
pixman: Disabled fast implementation
pixman: Disabled mmx implementation
pixman: Disabled sse2 implementation
scaling test passed (checksum=03A23E0C)
Reviewed-by: Matt Turner <mattst88@gmail.com>
Instead of relying on each implementation to delegate when an iterator
can't be initialized, change the type of iterator initializers to
boolean and make pixman-implementation.c do the delegation whenever an
iterator initializer returns FALSE.
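A sketch of the resulting delegation pattern in pixman-implementation.c; the types, field names and function names here are illustrative, not the actual ones:

    /* Illustrative types only; pixman's real pixman_implementation_t and
     * pixman_iter_t are richer than this. */
    typedef struct iter_sketch iter_sketch_t;
    typedef struct impl_sketch impl_sketch_t;

    struct iter_sketch { int dummy; };

    struct impl_sketch
    {
        /* Returns non-zero if this implementation can handle the iterator. */
        int (*iter_init) (impl_sketch_t *imp, iter_sketch_t *iter);
        impl_sketch_t *delegate;    /* next implementation in the chain */
    };

    /* The common code does the delegation: try each implementation in turn
     * until one of the initializers returns TRUE. */
    static void
    iter_init_sketch (impl_sketch_t *imp, iter_sketch_t *iter)
    {
        while (imp)
        {
            if (imp->iter_init && imp->iter_init (imp, iter))
                return;             /* this implementation handled it */

            imp = imp->delegate;    /* otherwise fall through to the next one */
        }
    }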
As in the blt commit, do the delegation in pixman-implementation.c
whenever the implementation fill returns FALSE instead of relying on
each implementation to do it by itself.
With this change there is no longer any reason for the implementations
to have one fill function that delegates and one that actually blits,
so consolidate those in the NEON, DSPr2, SSE2, and MMX
implementations.
Rather than require each individual implementation to do the
delegation for blt, just do it in pixman-implementation.c whenever the
implementation blt returns FALSE.
With this change, there is no longer any reason for the
implementations to have one blt function that delegates and one that
actually blits, so consolidate those in the NEON, DSPr2, SSE2, and MMX
implementations.