In the common case no images need the workaround, so we check for that
first, and only if some image does need the workaround do we check
which of the images actually needs it.
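A minimal sketch of that ordering; the need_workaround flag and the
apply_workaround() helper are assumptions, not the actual pixman names:

    /* common case: three cheap flag tests, no per-image work */
    if (src->common.need_workaround ||
        (mask && mask->common.need_workaround) ||
        dest->common.need_workaround)
    {
        /* rare case: now find out which image(s) need fixing up */
        if (src->common.need_workaround)
            apply_workaround (src, &src_x, &src_y);
        if (mask && mask->common.need_workaround)
            apply_workaround (mask, &mask_x, &mask_y);
        if (dest->common.need_workaround)
            apply_workaround (dest, &dest_x, &dest_y);
    }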
They are no longer necessary because we will just walk the fast path
tables, and the general composite path is treated as another fast
path.
This unfortunately means that sse2_composite() can no longer be
responsible for realigning the stack to 16 bytes, so we have to move
that to pixman_image_composite().
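With GCC this can be done with the force_align_arg_pointer attribute on
the public entry point; a minimal sketch, assuming a 32-bit x86 ABI
where callers may only guarantee 4-byte stack alignment:

    /* realign the stack on entry so SSE2 code called below can rely
     * on 16-byte alignment for spilled __m128i values */
    __attribute__((__force_align_arg_pointer__))
    void
    pixman_image_composite (pixman_op_t op, pixman_image_t *src,
                            pixman_image_t *mask, pixman_image_t *dest,
                            int16_t src_x, int16_t src_y,
                            int16_t mask_x, int16_t mask_y,
                            int16_t dest_x, int16_t dest_y,
                            uint16_t width, uint16_t height)
    {
        /* ... validate images and walk the fast path tables ... */
    }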
We introduce a new PIXMAN_OP_any fake operator and a PIXMAN_any fake
format that match anything. Then general_composite_rect() can be used
as another fast path.
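A sketch of the catch-all entry and its match test; the
pixman_fast_path_t layout shown here is simplified:

    static const pixman_fast_path_t general_fast_path[] =
    {
        { PIXMAN_OP_any,
          PIXMAN_any, 0,        /* source format, source flags */
          PIXMAN_any, 0,        /* mask format, mask flags */
          PIXMAN_any, 0,        /* dest format, dest flags */
          general_composite_rect
        },
        { PIXMAN_OP_NONE }      /* sentinel */
    };

    /* the fake values act as wildcards during lookup (mask and dest
     * are tested the same way as the source) */
    static pixman_bool_t
    entry_matches (const pixman_fast_path_t *e, pixman_op_t op,
                   pixman_format_code_t src_format)
    {
        return (e->op == op || e->op == PIXMAN_OP_any) &&
               (e->src_format == src_format ||
                e->src_format == PIXMAN_any);
    }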
Because general_composite_rect() does not require the sources to cover
the clip region, we add a new flag FAST_PATH_COVERS_CLIP which is part
of the set of standard flags for fast paths.
Because this flag cannot be computed until after the clip region is
available, we have to call pixman_compute_composite_region32() before
checking for fast paths. This will resolve itself when we get to the
point where _pixman_run_fast_path() is only called once per composite
operation.
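A sketch of how the flag could be computed for an untransformed bits
image; source_covers_clip() is an illustrative name:

    static pixman_bool_t
    source_covers_clip (bits_image_t *src, const pixman_box32_t *extents,
                        int dest_x, int dest_y, int src_x, int src_y)
    {
        /* translate the composite region extents into source space
         * and test them against the source geometry */
        int x_off = src_x - dest_x;
        int y_off = src_y - dest_y;

        return extents->x1 + x_off >= 0 &&
               extents->y1 + y_off >= 0 &&
               extents->x2 + x_off <= src->width &&
               extents->y2 + y_off <= src->height;
    }

    if (source_covers_clip (&src->bits, &extents, dest_x, dest_y,
                            src_x, src_y))
        src_flags |= FAST_PATH_COVERS_CLIP;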
- Make it work for PIXMAN_OP_OVER
- Split the repeat computation for x and y, and do only the x part in
the inner loop.
- Move the stride multiplication outside of the inner loop (see the
sketch below).
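A sketch of the resulting loop structure, with illustrative names and
a plain scalar OVER combiner:

    #include <stdint.h>

    /* rounding byte multiply: (x * a) / 255 */
    static uint32_t
    mul_un8 (uint32_t x, uint32_t a)
    {
        uint32_t t = x * a + 0x80;
        return (t + (t >> 8)) >> 8;
    }

    /* OVER on premultiplied a8r8g8b8: dst = src + (1 - srcA) * dst */
    static uint32_t
    over (uint32_t src, uint32_t dst)
    {
        uint32_t ia = 255 - (src >> 24);
        uint32_t r = src;

        r += mul_un8 ((dst >> 24) & 0xff, ia) << 24;
        r += mul_un8 ((dst >> 16) & 0xff, ia) << 16;
        r += mul_un8 ((dst >>  8) & 0xff, ia) <<  8;
        r += mul_un8 ( dst        & 0xff, ia);
        return r;
    }

    static int
    repeat_coord (int v, int size)
    {
        v %= size;
        return v < 0 ? v + size : v;
    }

    static void
    composite_over_repeat (uint32_t *src_bits, int src_stride,
                           int src_width, int src_height,
                           uint32_t *dst_bits, int dst_stride,
                           int src_x, int src_y,
                           int dest_x, int dest_y,
                           int width, int height)
    {
        int x, y;

        for (y = 0; y < height; y++)
        {
            /* y repeat and the stride multiply: once per scanline */
            int sy = repeat_coord (src_y + y, src_height);
            uint32_t *s = src_bits + sy * src_stride;
            uint32_t *d = dst_bits + (dest_y + y) * dst_stride + dest_x;

            /* only the x repeat remains in the inner loop */
            for (x = 0; x < width; x++)
                d[x] = over (s[repeat_coord (src_x + x, src_width)], d[x]);
        }
    }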
There is not much real benefit in having asserts turned on in
snapshots because it doesn't lead to any new bug reports, just to
people not installing development snapshots since they cause X server
crashes. So just turn them off.
While we are at it, limit the number of messages to stderr to 5
instead of 50.
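A sketch of the limiting; _pixman_log_error() follows the naming of
pixman's internal error reporting, but the body here is illustrative:

    #include <stdio.h>

    void
    _pixman_log_error (const char *msg)
    {
        static int n_messages = 0;

        /* after 5 messages, stay quiet instead of spamming stderr */
        if (n_messages++ < 5)
            fprintf (stderr, "*** BUG ***\n%s\n", msg);
    }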
Old code assumed that all ARMv7 processors support NEON instructions
unless overridden by the ARM_TRUST_HWCAP environment variable. This
causes the X server to die with SIGILL if NEON support is disabled in
the kernel configuration. Additionally, ARMv7 processors lacking a
NEON unit are going to become available eventually.
The problem was reported by user bearsh at irc.freenode.net
#gentoo-embedded.
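A sketch of detection that trusts the kernel instead, reading AT_HWCAP
from /proc/self/auxv (Linux-only; the HWCAP_NEON fallback value is the
ARM one):

    #include <fcntl.h>
    #include <unistd.h>
    #include <elf.h>

    #ifndef HWCAP_NEON
    #define HWCAP_NEON (1 << 12)
    #endif

    static int
    have_neon (void)
    {
        int result = 0;
        Elf32_auxv_t aux;
        int fd = open ("/proc/self/auxv", O_RDONLY);

        if (fd < 0)
            return 0;

        /* the aux vector holds the HWCAP bits the kernel actually
         * enabled, so a kernel without NEON support reports none */
        while (read (fd, &aux, sizeof aux) == sizeof aux)
        {
            if (aux.a_type == AT_HWCAP)
                result = (aux.a_un.a_val & HWCAP_NEON) != 0;
        }

        close (fd);
        return result;
    }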
This sets the stage for caching the information per image instead
of computing it on each composite invocation.
This patch also computes format codes such as PIXMAN_solid for images,
so that we can no longer end up in a situation where a fast path is
selected for a 1x1 solid image when that fast path doesn't actually
understand repeating.
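A sketch of the classification; the struct fields shown are pixman
internals and may differ:

    static pixman_format_code_t
    image_format_code (pixman_image_t *image)
    {
        if (image->type == SOLID)
            return PIXMAN_solid;

        if (image->type == BITS)
        {
            /* a 1x1 repeating bits image behaves like a solid fill,
             * so don't report its storage format: a fast path that
             * only matches real formats then can't pick it up */
            if (image->bits.width == 1 && image->bits.height == 1 &&
                image->common.repeat != PIXMAN_REPEAT_NONE)
                return PIXMAN_solid;

            return image->bits.format;
        }

        return PIXMAN_unknown;
    }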
Previously it would be multiplied onto the image pixel, but the Render
specification is pretty clear that the alpha map should be used
*instead* of any alpha channel within the image.
This makes the assumption that the pixels in the image are already
premultiplied with the alpha channel from the alpha map. If we did not
make this assumption and the image had an alpha channel of its own, we
would have to first unpremultiply the pixel, then premultiply the
alpha value onto the color channels, and then replace the alpha
channel.
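In a8r8g8b8 intermediates the fixed behavior amounts to a plain
channel replacement; a minimal sketch:

    #include <stdint.h>

    /* RGB comes from the image (assumed premultiplied with the map's
     * alpha), A comes from the alpha map */
    static uint32_t
    apply_alpha_map (uint32_t image_pixel, uint8_t map_alpha)
    {
        return ((uint32_t) map_alpha << 24) | (image_pixel & 0x00ffffff);
    }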
This program demonstrates three bugs relating to alpha maps:
- When fetching from an alpha map into 32-bit intermediates, we use
the fetcher from the image, and not the one from the alpha map.
- For 64-bit intermediates we call fetch_pixel_generic_lossy_32(),
which then calls fetch_pixel_raw_64(), which is NULL because alpha
images are never validated.
- The alpha map should be used *in place* of any existing alpha
channel, but we are actually multiplying it onto the image. (A test
along these lines is sketched below.)
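A minimal sketch of such a test, using only public pixman API (not the
actual program; the pixel values are illustrative):

    #include <assert.h>
    #include <stdint.h>
    #include <pixman.h>

    int
    main (void)
    {
        /* premultiplied half-transparent green; image alpha is 0x80 */
        uint32_t src_pixel  = 0x80008000;
        /* a8 alpha map, every byte 0x2a to sidestep endianness */
        uint32_t alpha_bits = 0x2a2a2a2a;
        uint32_t dst_pixel  = 0x00000000;

        pixman_image_t *src = pixman_image_create_bits (
            PIXMAN_a8r8g8b8, 1, 1, &src_pixel, 4);
        pixman_image_t *alpha = pixman_image_create_bits (
            PIXMAN_a8, 1, 1, &alpha_bits, 4);
        pixman_image_t *dst = pixman_image_create_bits (
            PIXMAN_a8r8g8b8, 1, 1, &dst_pixel, 4);

        pixman_image_set_alpha_map (src, alpha, 0, 0);

        pixman_image_composite (PIXMAN_OP_SRC, src, NULL, dst,
                                0, 0, 0, 0, 0, 0, 1, 1);

        /* correct: alpha replaced by the map value (0x2a); the buggy
         * multiply would give 0x80 * 0x2a / 255 ~= 0x15 instead */
        assert ((dst_pixel >> 24) == 0x2a);

        pixman_image_unref (src);
        pixman_image_unref (alpha);
        pixman_image_unref (dst);
        return 0;
    }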