fs/fshelp: Catch impermissibly large block sizes in read helper

A fuzzed HFS+ filesystem had log2blocksize = 22. This gave
log2blocksize + GRUB_DISK_SECTOR_BITS = 31. 1 << 31 = 0x80000000,
which is INT_MIN (-2147483648) as a signed 32-bit int. This caused some
wacky behavior later on in the function, leading to out-of-bounds writes
on the destination buffer.

Catch log2blocksize + GRUB_DISK_SECTOR_BITS >= 31. We could be stricter,
but this is the minimum that will prevent integer size weirdness.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
commit b5bc456f66 (parent 829329bddb)
Author: Daniel Axtens, 2021-01-18 11:46:39 +11:00
Committed by: Daniel Kiper

@@ -362,6 +362,18 @@ grub_fshelp_read_file (grub_disk_t disk, grub_fshelp_node_t node,
   grub_disk_addr_t i, blockcnt;
   int blocksize = 1 << (log2blocksize + GRUB_DISK_SECTOR_BITS);
 
+  /*
+   * Catch blatantly invalid log2blocksize. We could be a lot stricter, but
+   * this is the most permissive we can be before we start to see integer
+   * overflow/underflow issues.
+   */
+  if (log2blocksize + GRUB_DISK_SECTOR_BITS >= 31)
+    {
+      grub_error (GRUB_ERR_OUT_OF_RANGE,
+		  N_("blocksize too large"));
+      return -1;
+    }
+
   if (pos > filesize)
     {
       grub_error (GRUB_ERR_OUT_OF_RANGE,