tb hash: hash phys_pc, pc, and flags with xxhash

For some workloads such as arm bootup, tb_phys_hash is performance-critical.
This is due to the high frequency of accesses to the hash table, caused
by (frequent) TLB flushes that wipe out the CPU-private tb_jmp_caches.
More info:
  https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg05098.html

To dig further into this, I modified an arm image booting debian jessie to
shut down immediately after boot. Profiling revealed that quite a bit of
time is unnecessarily spent in tb_phys_hash: the cause is poor hashing,
which loads the chains in the hash table's buckets very unevenly; the
longest observed chain had ~550 elements.

The appended patch addresses this with two changes:

1) Use xxhash as the hash table's hash function. xxhash is a fast,
   high-quality hashing function.

2) Feed the hash function not just phys_pc, but also pc and flags.
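
To give an idea of the shape of the new function: the series adds
tb_hash_func5() in exec/tb-hash-xx.h (used in the diff below). The
stand-alone sketch that follows is NOT that code; it is only a minimal
illustration, assuming that folding the three inputs through an
xxh32-style avalanche mixer is representative:

    #include <stdint.h>

    /* xxh32's finalization ("avalanche") steps: spread entropy across
     * all bits of h.  Constants are xxh32's PRIME32_2 and PRIME32_3. */
    static inline uint32_t avalanche32(uint32_t h)
    {
        h ^= h >> 15;
        h *= 0x85ebca77u;
        h ^= h >> 13;
        h *= 0xc2b2ae3du;
        h ^= h >> 16;
        return h;
    }

    /* Illustrative stand-in for tb_hash_func5(): fold phys_pc, pc and
     * flags into a single well-mixed 32-bit value. */
    static inline uint32_t tb_hash_sketch(uint64_t phys_pc, uint64_t pc,
                                          uint32_t flags)
    {
        uint32_t h = (uint32_t)phys_pc ^ (uint32_t)(phys_pc >> 32);
        h = avalanche32(h ^ (uint32_t)pc ^ (uint32_t)(pc >> 32));
        return avalanche32(h ^ flags);
    }

The bucket index is then taken as h & (CODE_GEN_PHYS_HASH_SIZE - 1),
as in the diff below.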

This improves performance over hashing with just phys_pc, since that
resulted in some hash buckets holding many TBs while others got very few;
with these changes, the longest observed chain on a single hash bucket
is brought down from ~550 to ~40.
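
For reference, the chain lengths quoted above can be measured with a
simple walk over the buckets. A hypothetical instrumentation helper,
not part of this patch (it only relies on the pre-existing
tb_phys_hash[] array, phys_hash_next links and CODE_GEN_PHYS_HASH_SIZE):

    /* Hypothetical debug helper, not part of this patch: walk every
     * bucket of the physical hash table and return the length of the
     * longest chain. */
    static unsigned int tb_longest_chain(void)
    {
        unsigned int i, longest = 0;

        for (i = 0; i < CODE_GEN_PHYS_HASH_SIZE; i++) {
            const TranslationBlock *tb = tcg_ctx.tb_ctx.tb_phys_hash[i];
            unsigned int len = 0;

            while (tb) {
                len++;
                tb = tb->phys_hash_next;
            }
            if (len > longest) {
                longest = len;
            }
        }
        return longest;
    }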

Tests show that the other field checked in tb_find_physical, cs_base,
always matches when phys_pc+pc+flags match, so also hashing cs_base
would be wasteful. It could be that this is an ARM-only thing, though.
UPDATE:
On Tue, Apr 05, 2016 at 08:41:43 -0700, Richard Henderson wrote:
> The cs_base field is only used by i386 (in 16-bit modes), and sparc (for a TB
> consisting of only a delay slot).
> It may well still turn out to be reasonable to ignore cs_base for hashing.

BTW, after this change the hash table should not be called "tb_phys_hash"
anymore; this is addressed later in this series.

This change gives consistent bootup time improvements. I tested two
host machines:
- Intel Xeon E5-2690: 11.6% less time
- Intel i7-4790K: 19.2% less time

Increasing the number of hash buckets yields further improvements. However,
using a larger, fixed number of buckets can degrade performance for other
workloads that do not translate as many blocks (600K+ for debian-jessie arm
bootup). This is dealt with later in this series.

Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-8-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>

--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -232,13 +232,13 @@ static TranslationBlock *tb_find_physical(CPUState *cpu,
 {
     CPUArchState *env = (CPUArchState *)cpu->env_ptr;
     TranslationBlock *tb, **tb_hash_head, **ptb1;
-    unsigned int h;
+    uint32_t h;
     tb_page_addr_t phys_pc, phys_page1;
 
     /* find translated block using physical mappings */
     phys_pc = get_page_addr_code(env, pc);
     phys_page1 = phys_pc & TARGET_PAGE_MASK;
-    h = tb_phys_hash_func(phys_pc);
+    h = tb_hash_func(phys_pc, pc, flags);
 
     /* Start at head of the hash entry */
     ptb1 = tb_hash_head = &tcg_ctx.tb_ctx.tb_phys_hash[h];

--- a/include/exec/tb-hash.h
+++ b/include/exec/tb-hash.h
@@ -20,6 +20,9 @@
 #ifndef EXEC_TB_HASH
 #define EXEC_TB_HASH
 
+#include "exec/exec-all.h"
+#include "exec/tb-hash-xx.h"
+
 /* Only the bottom TB_JMP_PAGE_BITS of the jump cache hash bits vary for
    addresses on the same page. The top bits are the same. This allows
    TLB invalidation to quickly clear a subset of the hash table. */
@@ -43,9 +46,10 @@ static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
            | (tmp & TB_JMP_ADDR_MASK));
 }
 
-static inline unsigned int tb_phys_hash_func(tb_page_addr_t pc)
+static inline
+uint32_t tb_hash_func(tb_page_addr_t phys_pc, target_ulong pc, uint32_t flags)
 {
-    return (pc >> 2) & (CODE_GEN_PHYS_HASH_SIZE - 1);
+    return tb_hash_func5(phys_pc, pc, flags) & (CODE_GEN_PHYS_HASH_SIZE - 1);
 }
 
 #endif

--- a/translate-all.c
+++ b/translate-all.c
@@ -992,12 +992,12 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
     CPUState *cpu;
     PageDesc *p;
-    unsigned int h;
+    uint32_t h;
     tb_page_addr_t phys_pc;
 
     /* remove the TB from the hash list */
     phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
-    h = tb_phys_hash_func(phys_pc);
+    h = tb_hash_func(phys_pc, tb->pc, tb->flags);
     tb_hash_remove(&tcg_ctx.tb_ctx.tb_phys_hash[h], tb);
 
     /* remove the TB from the page list */
@@ -1127,11 +1127,11 @@ static inline void tb_alloc_page(TranslationBlock *tb,
 static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
                          tb_page_addr_t phys_page2)
 {
-    unsigned int h;
+    uint32_t h;
     TranslationBlock **ptb;
 
-    /* add in the physical hash table */
-    h = tb_phys_hash_func(phys_pc);
+    /* add in the hash table */
+    h = tb_hash_func(phys_pc, tb->pc, tb->flags);
     ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
     tb->phys_hash_next = *ptb;
     *ptb = tb;