doc: consolidate use of multiple-byte units

Refs: https://en.wikipedia.org/wiki/Byte#Multiple-byte_units

PR-URL: https://github.com/nodejs/node/pull/42587
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Darshan Sen <raisinten@gmail.com>
Reviewed-By: Paolo Insogna <paolo@cowtech.it>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Mestery <mestery@protonmail.com>
commit 1e761654d3 (parent 51fd5db4c1)
Author: Antoine du Hamel
Date: 2022-04-20 00:46:37 +02:00 (committed by GitHub)
26 changed files with 46 additions and 46 deletions


@@ -833,7 +833,7 @@ _may contain sensitive data_. Use [`buf.fill(0)`][`buf.fill()`] to initialize
 such `Buffer` instances with zeroes.
 When using [`Buffer.allocUnsafe()`][] to allocate new `Buffer` instances,
-allocations under 4 KB are sliced from a single pre-allocated `Buffer`. This
+allocations under 4 KiB are sliced from a single pre-allocated `Buffer`. This
 allows applications to avoid the garbage collection overhead of creating many
 individually allocated `Buffer` instances. This approach improves both
 performance and memory usage by eliminating the need to track and clean up as
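
As an illustration of the pooling behaviour this hunk documents, a minimal sketch; the 4 KiB cutoff is half of `Buffer.poolSize` (8 KiB by default) and the sizes below are arbitrary:

```js
const { Buffer } = require('node:buffer');

console.log(Buffer.poolSize); // 8192 (8 KiB) by default

// Allocations smaller than Buffer.poolSize / 2 (4 KiB) may be sliced from the
// shared pre-allocated pool; larger ones get their own allocation.
const pooled = Buffer.allocUnsafe(1024);
const standalone = Buffer.allocUnsafe(16 * 1024);
console.log(pooled.length, standalone.length); // 1024 16384
```
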
@@ -5204,9 +5204,9 @@ changes:
 * {integer} The largest size allowed for a single `Buffer` instance.
 On 32-bit architectures, this value currently is 2<sup>30</sup> - 1 (about 1
-GB).
+GiB).
-On 64-bit architectures, this value currently is 2<sup>32</sup> (about 4 GB).
+On 64-bit architectures, this value currently is 2<sup>32</sup> (about 4 GiB).
 It reflects [`v8::TypedArray::kMaxLength`][] under the hood.
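
For reference, a quick sketch of reading this limit at runtime via `buffer.constants`:

```js
const { constants } = require('node:buffer');

// About 4 GiB (2 ** 32) on 64-bit platforms, about 1 GiB (2 ** 30 - 1) on 32-bit.
console.log(constants.MAX_LENGTH);
console.log(constants.MAX_LENGTH / 2 ** 30, 'GiB'); // the same limit expressed in GiB
```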


@@ -666,10 +666,10 @@ added:
 changes:
 - version: v13.13.0
 pr-url: https://github.com/nodejs/node/pull/32520
-description: Change maximum default size of HTTP headers from 8 KB to 16 KB.
+description: Change maximum default size of HTTP headers from 8 KiB to 16 KiB.
 -->
-Specify the maximum size, in bytes, of HTTP headers. Defaults to 16 KB.
+Specify the maximum size, in bytes, of HTTP headers. Defaults to 16 KiB.
 ### `--napi-modules`
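
A small sketch of how the `--max-http-header-size` flag above surfaces in JavaScript; the flag value and file name are illustrative:

```js
// Run as: node --max-http-header-size=32768 show-limit.js  (file name hypothetical)
const http = require('node:http');

// Reflects --max-http-header-size; defaults to 16384 (16 KiB).
console.log(http.maxHeaderSize);
```
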
@@ -1993,8 +1993,8 @@ Sets the max memory size of V8's old memory section. As memory
 consumption approaches the limit, V8 will spend more time on
 garbage collection in an effort to free unused memory.
-On a machine with 2 GB of memory, consider setting this to
+On a machine with 2 GiB of memory, consider setting this to
-1536 (1.5 GB) to leave some memory for other uses and avoid swapping.
+1536 (1.5 GiB) to leave some memory for other uses and avoid swapping.
 ```console
 $ node --max-old-space-size=1536 index.js


@@ -2951,11 +2951,11 @@ changes:
 - v10.15.0
 commit: 186035243fad247e3955f
 pr-url: https://github.com/nodejs-private/node-private/pull/143
-description: Max header size in `http_parser` was set to 8 KB.
+description: Max header size in `http_parser` was set to 8 KiB.
 -->
 Too much HTTP header data was received. In order to protect against malicious or
-malconfigured clients, if more than 8 KB of HTTP header data is received then
+malconfigured clients, if more than 8 KiB of HTTP header data is received then
 HTTP parsing will abort without a request or response object being created, and
 an `Error` with this code will be emitted.


@@ -262,8 +262,8 @@ added: v16.11.0
 * `highWaterMark` {integer} **Default:** `64 * 1024`
 * Returns: {fs.ReadStream}
-Unlike the 16 kb default `highWaterMark` for a {stream.Readable}, the stream
+Unlike the 16 KiB default `highWaterMark` for a {stream.Readable}, the stream
-returned by this method has a default `highWaterMark` of 64 kb.
+returned by this method has a default `highWaterMark` of 64 KiB.
 `options` can include `start` and `end` values to read a range of bytes from
 the file instead of the entire file. Both `start` and `end` are inclusive and
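
A minimal sketch of the 64 KiB default described in this hunk, compared with a plain `stream.Readable`; the file path is hypothetical:

```js
const fs = require('node:fs');
const { Readable } = require('node:stream');

const fileStream = fs.createReadStream('./example.txt'); // path hypothetical
fileStream.on('error', console.error); // ignore if the example path does not exist
console.log(fileStream.readableHighWaterMark);                  // 65536 (64 KiB)
console.log(new Readable({ read() {} }).readableHighWaterMark); // 16384 (16 KiB)
```
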
@@ -2186,8 +2186,8 @@ changes:
 * `fs` {Object|null} **Default:** `null`
 * Returns: {fs.ReadStream}
-Unlike the 16 kb default `highWaterMark` for a {stream.Readable}, the stream
+Unlike the 16 KiB default `highWaterMark` for a {stream.Readable}, the stream
-returned by this method has a default `highWaterMark` of 64 kb.
+returned by this method has a default `highWaterMark` of 64 KiB.
 `options` can include `start` and `end` values to read a range of bytes from
 the file instead of the entire file. Both `start` and `end` are inclusive and
@@ -3430,8 +3430,8 @@ to read a complete file into memory.
 The additional read overhead can vary broadly on different systems and depends
 on the type of file being read. If the file type is not a regular file (a pipe
 for instance) and Node.js is unable to determine an actual file size, each read
-operation will load on 64 KB of data. For regular files, each read will process
+operation will load on 64 KiB of data. For regular files, each read will process
-512 KB of data.
+512 KiB of data.
 For applications that require as-fast-as-possible reading of file contents, it
 is better to use `fs.read()` directly and for application code to manage
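
As a hedged sketch of the alternative suggested above, reading a file in application-managed chunks with the synchronous variant of `fs.read()`; the chunk size and path are illustrative:

```js
const fs = require('node:fs');

const chunkSize = 64 * 1024; // 64 KiB per read, chosen by the application
const buffer = Buffer.allocUnsafe(chunkSize);
const fd = fs.openSync('./example.bin', 'r'); // path hypothetical

let total = 0;
let bytesRead;
do {
  bytesRead = fs.readSync(fd, buffer, 0, chunkSize, null);
  total += bytesRead;
  // process buffer.subarray(0, bytesRead) here
} while (bytesRead > 0);

fs.closeSync(fd);
console.log(`read ${total} bytes`);
```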


@@ -2998,7 +2998,7 @@ changes:
 * `maxHeaderSize` {number} Optionally overrides the value of
 [`--max-http-header-size`][] for requests received by this server, i.e.
 the maximum length of request headers in bytes.
-**Default:** 16384 (16 KB).
+**Default:** 16384 (16 KiB).
 * `noDelay` {boolean} If set to `true`, it disables the use of Nagle's
 algorithm immediately after a new incoming connection is received.
 **Default:** `true`.
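
A sketch of overriding the per-server request-header limit described in this hunk; the 32 KiB value is illustrative:

```js
const http = require('node:http');

// Accept request headers up to 32 KiB for this server only,
// instead of the 16384-byte (16 KiB) default.
const server = http.createServer({ maxHeaderSize: 32 * 1024 }, (req, res) => {
  res.end('ok');
});
server.listen(0);
```
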
@@ -3154,7 +3154,7 @@ added:
 * {number}
 Read-only property specifying the maximum allowed size of HTTP headers in bytes.
-Defaults to 16 KB. Configurable using the [`--max-http-header-size`][] CLI
+Defaults to 16 KiB. Configurable using the [`--max-http-header-size`][] CLI
 option.
 This can be overridden for servers and client requests by passing the
@@ -3231,7 +3231,7 @@ changes:
 * `maxHeaderSize` {number} Optionally overrides the value of
 [`--max-http-header-size`][] (the maximum length of response headers in
 bytes) for responses received from the server.
-**Default:** 16384 (16 KB).
+**Default:** 16384 (16 KiB).
 * `method` {string} A string specifying the HTTP request method. **Default:**
 `'GET'`.
 * `path` {string} Request path. Should include query string if any.
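
And the client-side counterpart, a sketch of raising the response-header limit for a single request; the URL and value are illustrative:

```js
const http = require('node:http');

const req = http.request('http://localhost:3000/', { maxHeaderSize: 32 * 1024 }, (res) => {
  res.resume(); // drain the response; headers up to 32 KiB are accepted for this request
});
req.on('error', console.error);
req.end();
```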


@@ -1674,7 +1674,7 @@ If the loop terminates with a `break`, `return`, or a `throw`, the stream will
 be destroyed. In other terms, iterating over a stream will consume the stream
 fully. The stream will be read in chunks of size equal to the `highWaterMark`
 option. In the code example above, data will be in a single chunk if the file
-has less then 64 KB of data because no `highWaterMark` option is provided to
+has less then 64 KiB of data because no `highWaterMark` option is provided to
 [`fs.createReadStream()`][].
 ##### `readable.iterator([options])`
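
A minimal sketch of the chunking behaviour described above; the file path is hypothetical:

```js
const fs = require('node:fs');

(async () => {
  // With no highWaterMark option, chunks are at most 64 KiB, so a file
  // smaller than 64 KiB is delivered as a single chunk.
  for await (const chunk of fs.createReadStream('./example.txt')) { // path hypothetical
    console.log(chunk.length);
  }
})().catch(console.error);
```
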
@@ -3047,7 +3047,7 @@ changes:
 * `options` {Object}
 * `highWaterMark` {number} Buffer level when
 [`stream.write()`][stream-write] starts returning `false`. **Default:**
-`16384` (16 KB), or `16` for `objectMode` streams.
+`16384` (16 KiB), or `16` for `objectMode` streams.
 * `decodeStrings` {boolean} Whether to encode `string`s passed to
 [`stream.write()`][stream-write] to `Buffer`s (with the encoding
 specified in the [`stream.write()`][stream-write] call) before passing
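
For reference, a tiny check of the default buffer levels for writable streams (a sketch only):

```js
const { Writable } = require('node:stream');

const sink = new Writable({ write(chunk, encoding, callback) { callback(); } });
console.log(sink.writableHighWaterMark); // 16384 (16 KiB)

const objectSink = new Writable({ objectMode: true, write(c, e, cb) { cb(); } });
console.log(objectSink.writableHighWaterMark); // 16
```
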
@@ -3420,7 +3420,7 @@ changes:
 * `options` {Object}
 * `highWaterMark` {number} The maximum [number of bytes][hwm-gotcha] to store
 in the internal buffer before ceasing to read from the underlying resource.
-**Default:** `16384` (16 KB), or `16` for `objectMode` streams.
+**Default:** `16384` (16 KiB), or `16` for `objectMode` streams.
 * `encoding` {string} If specified, then buffers will be decoded to
 strings using the specified encoding. **Default:** `null`.
 * `objectMode` {boolean} Whether this stream should behave
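
Likewise for readable streams (a sketch only):

```js
const { Readable } = require('node:stream');

console.log(new Readable({ read() {} }).readableHighWaterMark);                   // 16384 (16 KiB)
console.log(new Readable({ objectMode: true, read() {} }).readableHighWaterMark); // 16
```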


@@ -536,7 +536,7 @@ changes:
 description: The `depth` default changed to `20`.
 - version: v11.0.0
 pr-url: https://github.com/nodejs/node/pull/22756
-description: The inspection output is now limited to about 128 MB. Data
+description: The inspection output is now limited to about 128 MiB. Data
 above that size will not be fully inspected.
 - version: v10.12.0
 pr-url: https://github.com/nodejs/node/pull/22788
@@ -778,7 +778,7 @@ console.log(thousand, million, bigNumber, bigDecimal);
 ```
 `util.inspect()` is a synchronous method intended for debugging. Its maximum
-output length is approximately 128 MB. Inputs that result in longer output will
+output length is approximately 128 MiB. Inputs that result in longer output will
 be truncated.
 ### Customizing `util.inspect` colors
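
A rough sketch of the roughly 128 MiB output cap mentioned above; it produces a few hundred MiB of output and the exact truncated length may vary:

```js
const util = require('node:util');

// Disable the per-element limits so the only remaining limit is the overall
// output cap of roughly 128 MiB. This produces a very large string.
const input = new Array(300).fill('a'.repeat(1024 * 1024)); // ~300 MiB of output if untruncated
const output = util.inspect(input, { maxArrayLength: null, maxStringLength: null });
console.log(output.length); // capped at roughly 128 MiB; the remainder is truncated
```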


@@ -101,7 +101,7 @@ Leaks can be introduced in native addons and the following is a simple
 example leak based on the "Hello world" addon from
 [node-addon-examples](https://github.com/nodejs/node-addon-examples).
-In this example, a loop which allocates approximately 1 MB of memory and never
+In this example, a loop which allocates approximately 1 MiB of memory and never
 frees it has been added:
 ```cpp


@@ -270,7 +270,7 @@ This flag is inherited from V8 and is subject to change upstream. It may
 disappear in a non-semver-major release.
 .
 .It Fl -max-http-header-size Ns = Ns Ar size
-Specify the maximum size of HTTP headers in bytes. Defaults to 16 KB.
+Specify the maximum size of HTTP headers in bytes. Defaults to 16 KiB.
 .
 .It Fl -napi-modules
 This option is a no-op.


@@ -213,7 +213,7 @@ function getCode(fd, line, column) {
 let lines = 0;
 // Prevent blocking the event loop by limiting the maximum amount of
 // data that may be read.
-let maxReads = 32; // bytesPerRead * maxReads = 512 kb
+let maxReads = 32; // bytesPerRead * maxReads = 512 KiB
 const bytesPerRead = 16384;
 // Use a single buffer up front that is reused until the call site is found.
 let buffer = Buffer.allocUnsafe(bytesPerRead);


@@ -2879,7 +2879,7 @@ function lazyLoadStreams() {
 /**
 * Creates a readable stream with a default `highWaterMark`
-* of 64 kb.
+* of 64 KiB.
 * @param {string | Buffer | URL} path
 * @param {string | {
 * flags?: string;


@@ -41,7 +41,7 @@ const defaults = {
 N: 16384,
 r: 8,
 p: 1,
-maxmem: 32 << 20, // 32 MB, matches SCRYPT_MAX_MEM.
+maxmem: 32 << 20, // 32 MiB, matches SCRYPT_MAX_MEM.
 };
 function scrypt(password, salt, keylen, options, callback = defaults) {
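
A sketch of how `maxmem` interacts with the cost parameters: scrypt needs roughly `128 * N * r` bytes, so the defaults (`N=16384`, `r=8`) use 16 MiB and fit inside the 32 MiB default, while a larger `N` needs `maxmem` raised. The values below are illustrative:

```js
const crypto = require('node:crypto');

// Defaults: 128 * 16384 * 8 = 16 MiB of memory, within the 32 MiB maxmem.
crypto.scrypt('password', 'salt', 64, (err, key) => {
  if (err) throw err;
  console.log('default params ok:', key.length);
});

// N = 2 ** 17 needs 128 * 131072 * 8 = 128 MiB, so maxmem must be raised.
crypto.scrypt('password', 'salt', 64, { N: 2 ** 17, maxmem: 256 * 1024 * 1024 }, (err, key) => {
  if (err) throw err;
  console.log('larger N ok:', key.length);
});
```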


@@ -1004,7 +1004,7 @@ E('ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY',
 'Cannot overwrite symlink in subdirectory of self', SystemError);
 E('ERR_FS_CP_UNKNOWN', 'Cannot copy an unknown file type', SystemError);
 E('ERR_FS_EISDIR', 'Path is a directory', SystemError);
-E('ERR_FS_FILE_TOO_LARGE', 'File size (%s) is greater than 2 GB', RangeError);
+E('ERR_FS_FILE_TOO_LARGE', 'File size (%s) is greater than 2 GiB', RangeError);
 E('ERR_FS_INVALID_SYMLINK_TYPE',
 'Symlink type must be one of "dir", "file", or "junction". Received "%s"',
 Error); // Switch to TypeError. The current implementation does not seem right


@@ -122,7 +122,7 @@ const kMaximumCopyMode = COPYFILE_EXCL |
 COPYFILE_FICLONE |
 COPYFILE_FICLONE_FORCE;
-// Most platforms don't allow reads or writes >= 2 GB.
+// Most platforms don't allow reads or writes >= 2 GiB.
 // See https://github.com/libuv/libuv/pull/1501.
 const kIoMaxLength = 2 ** 31 - 1;
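
The constant above sits one byte under 2 GiB; a quick arithmetic check, for illustration only:

```js
const kIoMaxLength = 2 ** 31 - 1;          // largest single read/write most platforms allow
console.log(kIoMaxLength);                 // 2147483647
console.log((kIoMaxLength + 1) / 2 ** 30); // 2, i.e. the limit is one byte short of 2 GiB
```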


@@ -97,11 +97,11 @@ static int StartDebugSignalHandler() {
 pthread_attr_t attr;
 CHECK_EQ(0, pthread_attr_init(&attr));
 #if defined(PTHREAD_STACK_MIN) && !defined(__FreeBSD__)
-// PTHREAD_STACK_MIN is 2 KB with musl libc, which is too small to safely
+// PTHREAD_STACK_MIN is 2 KiB with musl libc, which is too small to safely
-// receive signals. PTHREAD_STACK_MIN + MINSIGSTKSZ is 8 KB on arm64, which
+// receive signals. PTHREAD_STACK_MIN + MINSIGSTKSZ is 8 KiB on arm64, which
 // is the musl architecture with the biggest MINSIGSTKSZ so let's use that
 // as a lower bound and let's quadruple it just in case. The goal is to avoid
-// creating a big 2 or 4 MB address space gap (problematic on 32 bits
+// creating a big 2 or 4 MiB address space gap (problematic on 32 bits
 // because of fragmentation), not squeeze out every last byte.
 // Omitted on FreeBSD because it doesn't seem to like small stacks.
 const size_t stack_size = std::max(static_cast<size_t>(4 * 8192),


@@ -59,7 +59,7 @@ On non-Windows platforms, this always returns `true`.
 ### `createZeroFilledFile(filename)`
-Creates a 10 MB file of all null characters.
+Creates a 10 MiB file of all null characters.
 ### `enoughTestMem`


@@ -24,7 +24,7 @@ const good = [
 },
 // Test vectors from https://tools.ietf.org/html/rfc7914#page-13 that
 // should pass. Note that the test vector with N=1048576 is omitted
-// because it takes too long to complete and uses over 1 GB of memory.
+// because it takes too long to complete and uses over 1 GiB of memory.
 {
 pass: '',
 salt: '',


@@ -50,7 +50,7 @@ const {
 );
 }
-// Most platforms don't allow reads or writes >= 2 GB.
+// Most platforms don't allow reads or writes >= 2 GiB.
 // See https://github.com/libuv/libuv/pull/1501.
 const kIoMaxLength = 2 ** 31 - 1;


@@ -9,7 +9,7 @@ const http2 = require('http2');
 // mechanism.
 const bodyLength = 8192;
-const maxSessionMemory = 1; // 1 MB
+const maxSessionMemory = 1; // 1 MiB
 const requestCount = 1000;
 const server = http2.createServer({ maxSessionMemory });
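
For context, a sketch of a server using the per-session memory cap that this test exercises; the handler body is illustrative:

```js
const http2 = require('node:http2');

// Limit each session to roughly 1 MiB of tracked memory; streams that would
// push the session past the cap are rejected.
const server = http2.createServer({ maxSessionMemory: 1 });
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('ok');
});
server.listen(0);
```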


@@ -20,9 +20,9 @@ setImmediate(() => {
 global.gc();
 const after = process.memoryUsage().external;
-// It's not an exact science but a SecurePair grows .external by about 45 kB.
+// It's not an exact science but a SecurePair grows .external by about 45 KiB.
 // Unless AdjustAmountOfExternalAllocatedMemory() is called on destruction,
-// 10,000 instances make it grow by well over 400 MB. Allow for some slop
+// 10,000 instances make it grow by well over 400 MiB. Allow for some slop
 // because objects like buffers also affect the external limit.
 assert(after - before < 25 << 20);
 });


@@ -21,7 +21,7 @@ if __name__ == '__main__':
 # To make decompression a little easier, we prepend the compressed data
 # with the size of the uncompressed data as a 24 bits BE unsigned integer.
-assert len(text) < 1 << 24, 'Uncompressed JSON must be < 16 MB.'
+assert len(text) < 1 << 24, 'Uncompressed JSON must be < 16 MiB.'
 data = struct.pack('>I', len(text))[1:4] + data
 step = 20


@@ -2112,8 +2112,8 @@ def GetDefaultConcurrentLinks():
 ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
 # VS 2015 uses 20% more working set than VS 2013 and can consume all RAM
-# on a 64 GB machine.
+# on a 64 GiB machine.
-mem_limit = max(1, stat.ullTotalPhys // (5 * (2 ** 30))) # total / 5GB
+mem_limit = max(1, stat.ullTotalPhys // (5 * (2 ** 30))) # total / 5GiB
 hard_cap = max(1, int(os.environ.get("GYP_LINK_CONCURRENCY_MAX", 2 ** 32)))
 return min(mem_limit, hard_cap)
 elif sys.platform.startswith("linux"):


@@ -167,7 +167,7 @@ namespace cbor {
 // must use a 32 bit wide length.
 // - At the top level, a message must be an indefinite length map
 // wrapped by an envelope.
-// - Maximal size for messages is 2^32 (4 GB).
+// - Maximal size for messages is 2^32 (4 GiB).
 // - For scalars, we support only the int32_t range, encoded as
 // UNSIGNED/NEGATIVE (major types 0 / 1).
 // - UTF16 strings, including with unbalanced surrogate pairs, are encoded


@@ -176,7 +176,7 @@ namespace cbor {
 // must use a 32 bit wide length.
 // - At the top level, a message must be an indefinite length map
 // wrapped by an envelope.
-// - Maximal size for messages is 2^32 (4 GB).
+// - Maximal size for messages is 2^32 (4 GiB).
 // - For scalars, we support only the int32_t range, encoded as
 // UNSIGNED/NEGATIVE (major types 0 / 1).
 // - UTF16 strings, including with unbalanced surrogate pairs, are encoded


@@ -13,7 +13,7 @@ echo This script will install Python and the Visual Studio Build Tools, necessar
 echo to compile Node.js native modules. Note that Chocolatey and required Windows
 echo updates will also be installed.
 echo.
-echo This will require about 3 Gb of free disk space, plus any space necessary to
+echo This will require about 3 GiB of free disk space, plus any space necessary to
 echo install Windows updates. This will take a while to run.
 echo.
 echo Please close all open programs for the duration of the installation. If the


@@ -275,7 +275,7 @@
 # will fail.
 'v8_enable_webassembly%': 1,
-# Enable advanced BigInt algorithms, costing about 10-30 KB binary size
+# Enable advanced BigInt algorithms, costing about 10-30 KiB binary size
 # depending on platform.
 'v8_advanced_bigint_algorithms%': 1
 },