Imported Upstream version 1.7.0+dfsg1

This commit is contained in:
Sylvestre Ledru 2016-03-03 22:34:40 +01:00 committed by Ximin Luo
parent 92a42be040
commit 9cc50fc6f5
1085 changed files with 49364 additions and 28825 deletions

View File

@ -174,7 +174,7 @@ labels to triage issues:
* Yellow, **A**-prefixed labels state which **area** of the project an issue
relates to.
* Magenta, **B**-prefixed labels identify bugs which **belong** elsewhere.
* Magenta, **B**-prefixed labels identify bugs which are **blockers**.
* Green, **E**-prefixed labels explain the level of **experience** necessary
to fix the issue.
@ -238,7 +238,7 @@ are:
* Don't be afraid to ask! The Rust community is friendly and helpful.
[gdfrustc]: http://manishearth.github.io/rust-internals-docs/rustc/
[gsearchdocs]: https://www.google.de/search?q=site:doc.rust-lang.org+your+query+here
[gsearchdocs]: https://www.google.com/search?q=site:doc.rust-lang.org+your+query+here
[rif]: http://internals.rust-lang.org
[rr]: https://doc.rust-lang.org/book/README.html
[tlgba]: http://tomlee.co/2014/04/03/a-more-detailed-tour-of-the-rust-compiler/

View File

@ -6,7 +6,7 @@ terms.
Longer version:
The Rust Project is copyright 2015, The Rust Project
The Rust Project is copyright 2016, The Rust Project
Developers (given in the file AUTHORS.txt).
Licensed under the Apache License, Version 2.0

View File

@ -1,4 +1,4 @@
Copyright (c) 2015 The Rust Project Developers
Copyright (c) 2016 The Rust Project Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated

View File

@ -17,7 +17,7 @@ Read ["Installing Rust"] from [The Book].
1. Make sure you have installed the dependencies:
* `g++` 4.7 or `clang++` 3.x
* `python` 2.6 or later (but not 3.x)
* `python` 2.7 or later (but not 3.x)
* GNU `make` 3.81 or later
* `curl`
* `git`
@ -53,6 +53,16 @@ Read ["Installing Rust"] from [The Book].
### Building on Windows
There are two prominent ABIs in use on Windows: the native (MSVC) ABI used by
Visual Studio, and the GNU ABI used by the GCC toolchain. Which version of Rust
you need depends largely on what C/C++ libraries you want to interoperate with:
for interop with software produced by Visual Studio use the MSVC build of Rust;
for interop with GNU software built using the MinGW/MSYS2 toolchain use the GNU
build.
#### MinGW
[MSYS2](http://msys2.github.io/) can be used to easily build Rust on Windows:
1. Grab the latest MSYS2 installer and go through the installer.
@ -63,12 +73,15 @@ Read ["Installing Rust"] from [The Book].
```sh
# Update package mirrors (may be needed if you have a fresh install of MSYS2)
$ pacman -Sy pacman-mirrors
```
# Choose one based on platform:
# *** see the note below ***
$ pacman -S mingw-w64-i686-toolchain
$ pacman -S mingw-w64-x86_64-toolchain
Download [MinGW from
here](http://mingw-w64.org/doku.php/download/mingw-builds), and choose the
`threads=win32,exceptions=dwarf/seh` flavor when installing. After installing,
add its `bin` directory to your `PATH`. This is due to [#28260](https://github.com/rust-lang/rust/issues/28260); in the future,
installing from pacman should be just fine.
```
# Make git available in MSYS2 (if not already available on path)
$ pacman -S git
@ -84,16 +97,19 @@ Read ["Installing Rust"] from [The Book].
$ ./configure
$ make && make install
```
> ***Note:*** gcc versions >= 5 currently have issues building LLVM on Windows
> resulting in a segmentation fault when building Rust. In order to avoid this
> it may be necessary to obtain an earlier version of gcc such as 4.9.x.
> MSYS2's `pacman` will install the latest version, so for the time being it is
> recommended to skip the gcc toolchain installation step above and use the
> [Mingw-Builds] project's installer instead. Be sure to add the gcc `bin`
> directory to your `PATH` before running `configure`.
> For more information on this see issue #28260.
[Mingw-Builds]: http://sourceforge.net/projects/mingw-w64/
#### MSVC
MSVC builds of Rust additionally require an installation of Visual Studio 2013
(or later) so `rustc` can use its linker. Make sure to check the “C++ tools”
option. In addition, `cmake` needs to be installed to build LLVM.
With these dependencies installed, the build takes two steps:
```sh
$ ./configure
$ make && make install
```
## Building Documentation
@ -135,7 +151,7 @@ Snapshot binaries are currently built and tested on several platforms:
You may find that other platforms work, but these are our officially
supported build environments that are most likely to work.
Rust currently needs about 1.5 GiB of RAM to build without swapping; if it hits
Rust currently needs between 600 MiB and 1.5 GiB of RAM to build, depending on platform. If it hits
swap, it will take a very long time to build.
There is more advice about hacking on Rust in [CONTRIBUTING.md].

View File

@ -1,3 +1,205 @@
Version 1.7.0 (2016-03-03)
==========================
Libraries
---------
* Stabilized APIs
* `Path`
* [`Path::strip_prefix`][] (renamed from relative_from)
* [`path::StripPrefixError`][] (new error type returned from strip_prefix)
* `Ipv4Addr`
* [`Ipv4Addr::is_loopback`]
* [`Ipv4Addr::is_private`]
* [`Ipv4Addr::is_link_local`]
* [`Ipv4Addr::is_multicast`]
* [`Ipv4Addr::is_broadcast`]
* [`Ipv4Addr::is_documentation`]
* `Ipv6Addr`
* [`Ipv6Addr::is_unspecified`]
* [`Ipv6Addr::is_loopback`]
* [`Ipv6Addr::is_multicast`]
* `Vec`
* [`Vec::as_slice`]
* [`Vec::as_mut_slice`]
* `String`
* [`String::as_str`]
* [`String::as_mut_str`]
* Slices
* `<[T]>::`[`clone_from_slice`], which now requires the two slices to
be the same length (see the sketch after this list)
* `<[T]>::`[`sort_by_key`]
* checked, saturated, and overflowing operations
* [`i32::checked_rem`], [`i32::checked_neg`], [`i32::checked_shl`], [`i32::checked_shr`]
* [`i32::saturating_mul`]
* [`i32::overflowing_add`], [`i32::overflowing_sub`], [`i32::overflowing_mul`], [`i32::overflowing_div`]
* [`i32::overflowing_rem`], [`i32::overflowing_neg`], [`i32::overflowing_shl`], [`i32::overflowing_shr`]
* [`u32::checked_rem`], [`u32::checked_neg`], [`u32::checked_shl`], [`u32::checked_shr`]
* [`u32::saturating_mul`]
* [`u32::overflowing_add`], [`u32::overflowing_sub`], [`u32::overflowing_mul`], [`u32::overflowing_div`]
* [`u32::overflowing_rem`], [`u32::overflowing_neg`], [`u32::overflowing_shl`], [`u32::overflowing_shr`]
* and checked, saturated, and overflowing operations for other primitive types
* FFI
* [`ffi::IntoStringError`]
* [`CString::into_string`]
* [`CString::into_bytes`]
* [`CString::into_bytes_with_nul`]
* `From<CString> for Vec<u8>`
* `IntoStringError`
* [`IntoStringError::into_cstring`]
* [`IntoStringError::utf8_error`]
* `Error for IntoStringError`
* Hashing
* [`std::hash::BuildHasher`]
* [`BuildHasher::Hasher`]
* [`BuildHasher::build_hasher`]
* [`std::hash::BuildHasherDefault`]
* [`HashMap::with_hasher`]
* [`HashMap::with_capacity_and_hasher`]
* [`HashSet::with_hasher`]
* [`HashSet::with_capacity_and_hasher`]
* [`std::collections::hash_map::RandomState`]
* [`RandomState::new`]
* [Validating UTF-8 is faster by a factor of between 7 and 14x for
ASCII input][1.7utf8]. This means that creating `String`s and `str`s
from bytes is faster.
* [The performance of `LineWriter` (and thus `io::stdout`) was
improved by using `memchr` to search for newlines][1.7m].
* [`f32::to_degrees` and `f32::to_radians` are stable][1.7f]. The
`f64` variants were stabilized previously.
* [`BTreeMap` was rewritten to use less memory and improve the performance
of insertion and iteration, the latter by as much as 5x][1.7bm].
* [`BTreeSet` and its iterators, `Iter`, `IntoIter`, and `Range` are
covariant over their contained type][1.7bt].
* [`LinkedList` and its iterators, `Iter` and `IntoIter` are covariant
over their contained type][1.7ll].
* [`str::replace` now accepts a `Pattern`][1.7rp], like other string
searching methods.
* [`Any` is implemented for unsized types][1.7a].
* [`Hash` is implemented for `Duration`][1.7h].
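As a quick illustration (a sketch of our own, not text from the upstream notes), a few of the newly stabilized APIs in use:
```rust
use std::path::Path;

fn main() {
    // `Path::strip_prefix` (renamed from `relative_from`) returns a Result.
    let bin = Path::new("/usr/local/bin/rustc");
    assert_eq!(bin.strip_prefix(Path::new("/usr/local")).unwrap(),
               Path::new("bin/rustc"));

    // `clone_from_slice` now requires both slices to have the same length.
    let src = [1, 2, 3];
    let mut dst = [0; 3];
    dst.clone_from_slice(&src);
    assert_eq!(dst, src);

    // The overflowing operations report wrap-around explicitly.
    assert_eq!(u32::max_value().overflowing_add(1), (0, true));
}
```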
Misc
----
* [When running tests with `--test`, rustdoc will pass `--cfg`
arguments to the compiler][1.7dt].
* [The compiler is built with RPATH information by default][1.7rpa].
This means that it will be possible to run `rustc` when installed in
unusual configurations without configuring the dynamic linker search
path explicitly.
* [`rustc` passes `--enable-new-dtags` to GNU ld][1.7dta]. This makes
any RPATH entries (emitted with `-C rpath`) *not* take precedence
over `LD_LIBRARY_PATH`.
Cargo
-----
* [`cargo rustc` accepts a `--profile` flag that runs `rustc` under
any of the compilation profiles, 'dev', 'bench', or 'test'][1.7cp].
* [The `rerun-if-changed` build script directive no longer causes the
build script to incorrectly run twice in certain scenarios][1.7rr].
Compatibility Notes
-------------------
* Soundness fixes to the interactions between associated types and
lifetimes, specified in [RFC 1214], [now generate errors][1.7sf] for
code that violates the new rules. This is a significant change that
is known to break existing code, so it has emitted warnings for the
new error cases since 1.4 to give crate authors time to adapt. The
details of what is changing are subtle; read the RFC for more.
* [Several bugs in the compiler's visibility calculations were
fixed][1.7v]. Since this was found to break significant amounts of
code, the new errors will be emitted as warnings for several release
cycles, under the `private_in_public` lint.
* Defaulted type parameters were accidentally accepted in positions
that were not intended. In this release, [defaulted type parameters
appearing outside of type definitions will generate a
warning][1.7d], which will become an error in future releases.
* [Parsing "." as a float results in an error instead of
0][1.7p]. That is, `".".parse::<f32>()` returns `Err`, not `Ok(0)`; see the
sketch after this list.
* [Borrows of closure parameters may not outlive the closure][1.7bc].
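A small sketch of the float-parsing change (our illustration, not part of the upstream notes):
```rust
fn main() {
    // A bare "." is now a parse error rather than Ok(0.0).
    assert!(".".parse::<f32>().is_err());

    // Ordinary numeric input is unaffected.
    assert_eq!("0.5".parse::<f32>().unwrap(), 0.5);
}
```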
[1.7a]: https://github.com/rust-lang/rust/pull/30928
[1.7bc]: https://github.com/rust-lang/rust/pull/30341
[1.7bm]: https://github.com/rust-lang/rust/pull/30426
[1.7bt]: https://github.com/rust-lang/rust/pull/30998
[1.7cp]: https://github.com/rust-lang/cargo/pull/2224
[1.7d]: https://github.com/rust-lang/rust/pull/30724
[1.7dt]: https://github.com/rust-lang/rust/pull/30372
[1.7dta]: https://github.com/rust-lang/rust/pull/30394
[1.7f]: https://github.com/rust-lang/rust/pull/30672
[1.7h]: https://github.com/rust-lang/rust/pull/30818
[1.7ll]: https://github.com/rust-lang/rust/pull/30663
[1.7m]: https://github.com/rust-lang/rust/pull/30381
[1.7p]: https://github.com/rust-lang/rust/pull/30681
[1.7rp]: https://github.com/rust-lang/rust/pull/29498
[1.7rpa]: https://github.com/rust-lang/rust/pull/30353
[1.7rr]: https://github.com/rust-lang/cargo/pull/2279
[1.7sf]: https://github.com/rust-lang/rust/pull/30389
[1.7utf8]: https://github.com/rust-lang/rust/pull/30740
[1.7v]: https://github.com/rust-lang/rust/pull/29973
[RFC 1214]: https://github.com/rust-lang/rfcs/blob/master/text/1214-projections-lifetimes-and-wf.md
[`BuildHasher::Hasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.Hasher.html
[`BuildHasher::build_hasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.BuildHasher.html#tymethod.build_hasher
[`CString::into_bytes_with_nul`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_bytes_with_nul
[`CString::into_bytes`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_bytes
[`CString::into_string`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_string
[`HashMap::with_capacity_and_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashMap.html#method.with_capacity_and_hasher
[`HashMap::with_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashMap.html#method.with_hasher
[`HashSet::with_capacity_and_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.with_capacity_and_hasher
[`HashSet::with_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.with_hasher
[`IntoStringError::into_cstring`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html#method.into_cstring
[`IntoStringError::utf8_error`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html#method.utf8_error
[`Ipv4Addr::is_broadcast`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_broadcast
[`Ipv4Addr::is_documentation`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_documentation
[`Ipv4Addr::is_link_local`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_link_local
[`Ipv4Addr::is_loopback`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_loopback
[`Ipv4Addr::is_multicast`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_multicast
[`Ipv4Addr::is_private`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_private
[`Ipv6Addr::is_loopback`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_loopback
[`Ipv6Addr::is_multicast`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_multicast
[`Ipv6Addr::is_unspecified`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_unspecified
[`Path::strip_prefix`]: http://doc.rust-lang.org/nightly/std/path/struct.Path.html#method.strip_prefix
[`RandomState::new`]: http://doc.rust-lang.org/nightly/std/collections/hash_map/struct.RandomState.html#method.new
[`String::as_mut_str`]: http://doc.rust-lang.org/nightly/std/string/struct.String.html#method.as_mut_str
[`String::as_str`]: http://doc.rust-lang.org/nightly/std/string/struct.String.html#method.as_str
[`Vec::as_mut_slice`]: http://doc.rust-lang.org/nightly/std/vec/struct.Vec.html#method.as_mut_slice
[`Vec::as_slice`]: http://doc.rust-lang.org/nightly/std/vec/struct.Vec.html#method.as_slice
[`clone_from_slice`]: http://doc.rust-lang.org/nightly/std/primitive.slice.html#method.clone_from_slice
[`ffi::IntoStringError`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html
[`i32::checked_neg`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_neg
[`i32::checked_rem`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_rem
[`i32::checked_shl`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_shl
[`i32::checked_shr`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_shr
[`i32::overflowing_add`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_add
[`i32::overflowing_div`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_div
[`i32::overflowing_mul`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_mul
[`i32::overflowing_neg`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_neg
[`i32::overflowing_rem`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_rem
[`i32::overflowing_shl`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_shl
[`i32::overflowing_shr`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_shr
[`i32::overflowing_sub`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_sub
[`i32::saturating_mul`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.saturating_mul
[`path::StripPrefixError`]: http://doc.rust-lang.org/nightly/std/path/struct.StripPrefixError.html
[`sort_by_key`]: http://doc.rust-lang.org/nightly/std/primitive.slice.html#method.sort_by_key
[`std::collections::hash_map::RandomState`]: http://doc.rust-lang.org/nightly/std/collections/hash_map/struct.RandomState.html
[`std::hash::BuildHasherDefault`]: http://doc.rust-lang.org/nightly/std/hash/struct.BuildHasherDefault.html
[`std::hash::BuildHasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.BuildHasher.html
[`u32::checked_neg`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_neg
[`u32::checked_rem`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_rem
[`u32::checked_shl`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_shl
[`u32::checked_shr`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_shr
[`u32::overflowing_add`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_add
[`u32::overflowing_div`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_div
[`u32::overflowing_mul`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_mul
[`u32::overflowing_neg`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_neg
[`u32::overflowing_rem`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_rem
[`u32::overflowing_shl`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_shl
[`u32::overflowing_shr`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_shr
[`u32::overflowing_sub`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_sub
[`u32::saturating_mul`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.saturating_mul
Version 1.6.0 (2016-01-21)
==========================
@ -16,8 +218,9 @@ Libraries
---------
* Stabilized APIs:
[`Read::read_exact`], [`ErrorKind::UnexpectedEof`] (renamed from
`UnexpectedEOF`), [`fs::DirBuilder`], [`fs::DirBuilder::new`],
[`Read::read_exact`],
[`ErrorKind::UnexpectedEof`][] (renamed from `UnexpectedEOF`),
[`fs::DirBuilder`], [`fs::DirBuilder::new`],
[`fs::DirBuilder::recursive`], [`fs::DirBuilder::create`],
[`os::unix::fs::DirBuilderExt`],
[`os::unix::fs::DirBuilderExt::mode`], [`vec::Drain`],
@ -29,10 +232,11 @@ Libraries
[`collections::hash_set::HashSet::drain`],
[`collections::binary_heap::Drain`],
[`collections::binary_heap::BinaryHeap::drain`],
[`Vec::extend_from_slice`] (renamed from `push_all`),
[`Vec::extend_from_slice`][] (renamed from `push_all`),
[`Mutex::get_mut`], [`Mutex::into_inner`], [`RwLock::get_mut`],
[`RwLock::into_inner`], [`Iterator::min_by_key`] (renamed from
`min_by`), [`Iterator::max_by_key`] (renamed from `max_by`).
[`RwLock::into_inner`],
[`Iterator::min_by_key`][] (renamed from `min_by`),
[`Iterator::max_by_key`][] (renamed from `max_by`).
* The [core library][1.6co] is stable, as are most of its APIs.
* [The `assert_eq!` macro supports arguments that don't implement
`Sized`][1.6ae], such as arrays. In this way it behaves more like
@ -40,7 +244,7 @@ Libraries
* Several timer functions that take duration in milliseconds [are
deprecated in favor of those that take `Duration`][1.6ms]. These
include `Condvar::wait_timeout_ms`, `thread::sleep_ms`, and
`thread_park_timeout_ms`.
`thread::park_timeout_ms`.
* The algorithm by which `Vec` reserves additional elements was
[tweaked to not allocate excessive space][1.6a] while still growing
exponentially.
@ -67,9 +271,8 @@ Cargo
* crates.io will reject publication of crates with dependencies that
have a wildcard version constraint. Crates with wildcard
dependencies were seen to cause a variety of problems, as described
in [RFC 1241]. Disallowing them will create more predictable
development experience and a more stable ecosystem. Since 1.5
publication of such crates has emitted a warning.
in [RFC 1241]. Since 1.5 publication of such crates has emitted a
warning.
* `cargo clean` [accepts a `--release` flag][1.6cc] to clean the
release folder. A variety of artifacts that Cargo failed to clean
are now correctly deleted.
@ -94,10 +297,11 @@ Compatibility Notes
* [A number of bugs were fixed in the privacy checker][1.6p] that
could cause previously-accepted code to break.
* [Modules and unit/tuple structs may not share the same name][1.6ts].
* [Bugs in pattern matching unit structs were fixed][1.6us]: the tuple
struct pattern syntax (`Foo(..)`) no longer can be used with unit
structs; patterns that share the same name as a const are now an
error.
* [Bugs in pattern matching unit structs were fixed][1.6us]. The tuple
struct pattern syntax (`Foo(..)`) can no longer be used to match
unit structs. This is a warning now, but will become an error in
future releases. Patterns that share the same name as a const are
now an error.
* A bug was fixed that causes [rustc not to apply default type
parameters][1.6xc] when resolving certain method implementations of
traits defined in other crates.

configure (vendored)
View File

@ -499,13 +499,18 @@ case $CFG_CPUTYPE in
CFG_CPUTYPE=aarch64
;;
# At some point, when ppc64[le] support happens, this will need to do
# something clever. For now it's safe to assume that we're only ever
# interested in building 32 bit.
powerpc | ppc | ppc64)
powerpc | ppc)
CFG_CPUTYPE=powerpc
;;
powerpc64 | ppc64)
CFG_CPUTYPE=powerpc64
;;
powerpc64le | ppc64le)
CFG_CPUTYPE=powerpc64le
;;
x86_64 | x86-64 | x64 | amd64)
CFG_CPUTYPE=x86_64
;;
@ -521,15 +526,18 @@ then
# if configure is running in an interactive bash shell. /usr/bin/env
# exists *everywhere*.
BIN_TO_PROBE="$SHELL"
if [ -z "$BIN_TO_PROBE" -a -e "/usr/bin/env" ]; then
BIN_TO_PROBE="/usr/bin/env"
if [ ! -r "$BIN_TO_PROBE" ]; then
if [ -r "/usr/bin/env" ]; then
BIN_TO_PROBE="/usr/bin/env"
else
warn "Cannot check if the userland is i686 or x86_64"
fi
fi
file -L "$BIN_TO_PROBE" | grep -q "x86[_-]64"
if [ $? != 0 ]; then
msg "i686 userland on x86_64 Linux kernel"
CFG_CPUTYPE=i686
fi
if [ -n "$BIN_TO_PROBE" ]; then
file -L "$BIN_TO_PROBE" | grep -q "x86[_-]64"
if [ $? != 0 ]; then
CFG_CPUTYPE=i686
fi
fi
fi
@ -587,7 +595,7 @@ opt fast-make 0 "use .gitmodules as timestamp for submodule deps"
opt ccache 0 "invoke gcc/clang via ccache to reuse object files between builds"
opt local-rust 0 "use an installed rustc rather than downloading a snapshot"
opt llvm-static-stdcpp 0 "statically link to libstdc++ for LLVM"
opt rpath 0 "build rpaths into rustc itself"
opt rpath 1 "build rpaths into rustc itself"
opt stage0-landing-pads 1 "enable landing pads during bootstrap with stage0"
# This is used by the automation to produce single-target nightlies
opt dist-host-only 0 "only install bins for the host architecture"
@ -616,8 +624,10 @@ valopt android-cross-path "/opt/ndk_standalone" "Android NDK standalone path (de
valopt i686-linux-android-ndk "" "i686-linux-android NDK standalone path"
valopt arm-linux-androideabi-ndk "" "arm-linux-androideabi NDK standalone path"
valopt aarch64-linux-android-ndk "" "aarch64-linux-android NDK standalone path"
valopt nacl-cross-path "" "NaCl SDK path (Pepper Canary is recommended). Must be absolute!"
valopt release-channel "dev" "the name of the release channel to build"
valopt musl-root "/usr/local" "MUSL root installation directory"
valopt extra-filename "" "Additional data that is hashed and passed to the -C extra-filename flag"
# Used on systems where "cc" and "ar" are unavailable
valopt default-linker "cc" "the default linker"
@ -939,6 +949,13 @@ then
putvar CFG_ENABLE_CLANG
fi
if [ -z "$CFG_DISABLE_LIBCPP" -a -n "$CFG_ENABLE_CLANG" ]
then
CFG_USING_LIBCPP="1"
else
CFG_USING_LIBCPP="0"
fi
# Same with jemalloc. save the setting here.
if [ -n "$CFG_DISABLE_JEMALLOC" ]
then
@ -1018,7 +1035,7 @@ then
if [ -n "$CFG_OSX_CLANG_VERSION" ]
then
case $CFG_OSX_CLANG_VERSION in
(7.0*)
(7.0* | 7.1* | 7.2*)
step_msg "found ok version of APPLE CLANG: $CFG_OSX_CLANG_VERSION"
;;
(*)
@ -1140,7 +1157,12 @@ do
fi
done
;;
*-unknown-nacl)
if [ -z "$CFG_NACL_CROSS_PATH" ]
then
err "I need the NaCl SDK path! (use --nacl-cross-path)"
fi
;;
arm-apple-darwin)
if [ $CFG_OSTYPE != apple-darwin ]
then
@ -1682,7 +1704,7 @@ do
CXXFLAGS="$CXXFLAGS $LLVM_CXXFLAGS"
LDFLAGS="$LDFLAGS $LLVM_LDFLAGS"
if [ -z "$CFG_DISABLE_LIBCPP" ] && [ -n "$CFG_USING_CLANG" ]; then
if [ "$CFG_USING_LIBCPP" != "0" ]; then
LLVM_OPTS="$LLVM_OPTS --enable-libcpp"
fi
@ -1742,7 +1764,9 @@ putvar CFG_DISABLE_MANAGE_SUBMODULES
putvar CFG_AARCH64_LINUX_ANDROID_NDK
putvar CFG_ARM_LINUX_ANDROIDEABI_NDK
putvar CFG_I686_LINUX_ANDROID_NDK
putvar CFG_NACL_CROSS_PATH
putvar CFG_MANDIR
putvar CFG_USING_LIBCPP
# Avoid spurious warnings from clang by feeding it original source on
# ccache-miss rather than preprocessed input.

View File

@ -8,8 +8,8 @@ CFG_LIB_NAME_arm-unknown-linux-gnueabi=lib$(1).so
CFG_STATIC_LIB_NAME_arm-unknown-linux-gnueabi=lib$(1).a
CFG_LIB_GLOB_arm-unknown-linux-gnueabi=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_arm-unknown-linux-gnueabi=lib$(1)-*.dylib.dSYM
CFG_JEMALLOC_CFLAGS_arm-unknown-linux-gnueabi := -D__arm__ -mfpu=vfp $(CFLAGS)
CFG_GCCISH_CFLAGS_arm-unknown-linux-gnueabi := -Wall -g -fPIC -D__arm__ -mfpu=vfp $(CFLAGS)
CFG_JEMALLOC_CFLAGS_arm-unknown-linux-gnueabi := -D__arm__ -mfloat-abi=soft $(CFLAGS)
CFG_GCCISH_CFLAGS_arm-unknown-linux-gnueabi := -Wall -g -fPIC -D__arm__ -mfloat-abi=soft $(CFLAGS)
CFG_GCCISH_CXXFLAGS_arm-unknown-linux-gnueabi := -fno-rtti $(CXXFLAGS)
CFG_GCCISH_LINK_FLAGS_arm-unknown-linux-gnueabi := -shared -fPIC -g
CFG_GCCISH_DEF_FLAG_arm-unknown-linux-gnueabi := -Wl,--export-dynamic,--dynamic-list=

View File

@ -7,7 +7,7 @@ CFG_LIB_NAME_i686-unknown-freebsd=lib$(1).so
CFG_STATIC_LIB_NAME_i686-unknown-freebsd=lib$(1).a
CFG_LIB_GLOB_i686-unknown-freebsd=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_i686-unknown-freebsd=$(1)-*.dylib.dSYM
CFG_JEMALLOC_CFLAGS_i686-unknown-freebsd := -m32 -arch i386 -I/usr/local/include $(CFLAGS)
CFG_JEMALLOC_CFLAGS_i686-unknown-freebsd := -m32 -I/usr/local/include $(CFLAGS)
CFG_GCCISH_CFLAGS_i686-unknown-freebsd := -Wall -Werror -g -fPIC -m32 -arch i386 -I/usr/local/include $(CFLAGS)
CFG_GCCISH_LINK_FLAGS_i686-unknown-freebsd := -m32 -shared -fPIC -g -pthread -lrt
CFG_GCCISH_DEF_FLAG_i686-unknown-freebsd := -Wl,--export-dynamic,--dynamic-list=

View File

@ -0,0 +1,40 @@
# le32-unknown-nacl (portable, PNaCl)
ifneq ($(CFG_NACL_CROSS_PATH),)
CC_le32-unknown-nacl=$(shell $(CFG_PYTHON) $(CFG_NACL_CROSS_PATH)/tools/nacl_config.py -t pnacl --tool cc)
CXX_le32-unknown-nacl=$(shell $(CFG_PYTHON) $(CFG_NACL_CROSS_PATH)/tools/nacl_config.py -t pnacl --tool c++)
CPP_le32-unknown-nacl=$(CXX_le32-unknown-nacl) -E
AR_le32-unknown-nacl=$(shell $(CFG_PYTHON) $(CFG_NACL_CROSS_PATH)/tools/nacl_config.py -t pnacl --tool ar)
CFG_PNACL_TOOLCHAIN := $(abspath $(dir $(AR_le32-unknown-nacl)/../))
# Note: pso's aren't supported by PNaCl.
CFG_LIB_NAME_le32-unknown-nacl=lib$(1).pso
CFG_STATIC_LIB_NAME_le32-unknown-nacl=lib$(1).a
CFG_LIB_GLOB_le32-unknown-nacl=lib$(1)-*.pso
CFG_LIB_DSYM_GLOB_le32-unknown-nacl=lib$(1)-*.dylib.dSYM
CFG_GCCISH_CFLAGS_le32-unknown-nacl := -Wall -Wno-unused-variable -Wno-unused-value $(shell $(CFG_PYTHON) $(CFG_NACL_CROSS_PATH)/tools/nacl_config.py -t pnacl --cflags) -D_YUGA_LITTLE_ENDIAN=1 -D_YUGA_BIG_ENDIAN=0
CFG_GCCISH_CXXFLAGS_le32-unknown-nacl := -stdlib=libc++ $(CFG_GCCISH_CFLAGS_le32-unknown-nacl)
CFG_GCCISH_LINK_FLAGS_le32-unknown-nacl := -static -pthread -lm
CFG_GCCISH_DEF_FLAG_le32-unknown-nacl := -Wl,--export-dynamic,--dynamic-list=
CFG_GCCISH_PRE_LIB_FLAGS_le32-unknown-nacl := -Wl,-no-whole-archive
CFG_GCCISH_POST_LIB_FLAGS_le32-unknown-nacl :=
CFG_DEF_SUFFIX_le32-unknown-nacl := .le32.nacl.def
CFG_INSTALL_NAME_le32-unknown-nacl =
CFG_EXE_SUFFIX_le32-unknown-nacl = .pexe
CFG_WINDOWSY_le32-unknown-nacl :=
CFG_UNIXY_le32-unknown-nacl := 1
CFG_NACLY_le32-unknown-nacl := 1
CFG_PATH_MUNGE_le32-unknown-nacl := true
CFG_LDPATH_le32-unknown-nacl :=
CFG_RUN_le32-unknown-nacl=$(2)
CFG_RUN_TARG_le32-unknown-nacl=$(call CFG_RUN_le32-unknown-nacl,,$(2))
RUSTC_FLAGS_le32-unknown-nacl:=
RUSTC_CROSS_FLAGS_le32-unknown-nacl=-L $(CFG_NACL_CROSS_PATH)/lib/pnacl/Release -L $(CFG_PNACL_TOOLCHAIN)/lib/clang/3.7.0/lib/le32-nacl -L $(CFG_PNACL_TOOLCHAIN)/le32-nacl/usr/lib -L $(CFG_PNACL_TOOLCHAIN)/le32-nacl/lib
CFG_GNU_TRIPLE_le32-unknown-nacl := le32-unknown-nacl
# strdup isn't defined unless -std=gnu++11 is used :/
LLVM_FILTER_CXXFLAGS_le32-unknown-nacl := -std=c++11
LLVM_EXTRA_CXXFLAGS_le32-unknown-nacl := -std=gnu++11
endif

View File

@ -0,0 +1,24 @@
# powerpc64-unknown-linux-gnu configuration
CROSS_PREFIX_powerpc64-unknown-linux-gnu=powerpc64-linux-gnu-
CC_powerpc64-unknown-linux-gnu=$(CC)
CXX_powerpc64-unknown-linux-gnu=$(CXX)
CPP_powerpc64-unknown-linux-gnu=$(CPP)
AR_powerpc64-unknown-linux-gnu=$(AR)
CFG_LIB_NAME_powerpc64-unknown-linux-gnu=lib$(1).so
CFG_STATIC_LIB_NAME_powerpc64-unknown-linux-gnu=lib$(1).a
CFG_LIB_GLOB_powerpc64-unknown-linux-gnu=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_powerpc64-unknown-linux-gnu=lib$(1)-*.dylib.dSYM
CFG_CFLAGS_powerpc64-unknown-linux-gnu := -m64 $(CFLAGS)
CFG_GCCISH_CFLAGS_powerpc64-unknown-linux-gnu := -Wall -Werror -g -fPIC -m64 $(CFLAGS)
CFG_GCCISH_CXXFLAGS_powerpc64-unknown-linux-gnu := -fno-rtti $(CXXFLAGS)
CFG_GCCISH_LINK_FLAGS_powerpc64-unknown-linux-gnu := -shared -fPIC -ldl -pthread -lrt -g -m64
CFG_GCCISH_DEF_FLAG_powerpc64-unknown-linux-gnu := -Wl,--export-dynamic,--dynamic-list=
CFG_LLC_FLAGS_powerpc64-unknown-linux-gnu :=
CFG_INSTALL_NAME_powerpc64-unknown-linux-gnu =
CFG_EXE_SUFFIX_powerpc64-unknown-linux-gnu =
CFG_WINDOWSY_powerpc64-unknown-linux-gnu :=
CFG_UNIXY_powerpc64-unknown-linux-gnu := 1
CFG_LDPATH_powerpc64-unknown-linux-gnu :=
CFG_RUN_powerpc64-unknown-linux-gnu=$(2)
CFG_RUN_TARG_powerpc64-unknown-linux-gnu=$(call CFG_RUN_powerpc64-unknown-linux-gnu,,$(2))
CFG_GNU_TRIPLE_powerpc64-unknown-linux-gnu := powerpc64-unknown-linux-gnu

View File

@ -0,0 +1,24 @@
# powerpc64le-unknown-linux-gnu configuration
CROSS_PREFIX_powerpc64le-unknown-linux-gnu=powerpc64le-linux-gnu-
CC_powerpc64le-unknown-linux-gnu=$(CC)
CXX_powerpc64le-unknown-linux-gnu=$(CXX)
CPP_powerpc64le-unknown-linux-gnu=$(CPP)
AR_powerpc64le-unknown-linux-gnu=$(AR)
CFG_LIB_NAME_powerpc64le-unknown-linux-gnu=lib$(1).so
CFG_STATIC_LIB_NAME_powerpc64le-unknown-linux-gnu=lib$(1).a
CFG_LIB_GLOB_powerpc64le-unknown-linux-gnu=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_powerpc64le-unknown-linux-gnu=lib$(1)-*.dylib.dSYM
CFG_CFLAGS_powerpc64le-unknown-linux-gnu := -m64 $(CFLAGS)
CFG_GCCISH_CFLAGS_powerpc64le-unknown-linux-gnu := -Wall -Werror -g -fPIC -m64 $(CFLAGS)
CFG_GCCISH_CXXFLAGS_powerpc64le-unknown-linux-gnu := -fno-rtti $(CXXFLAGS)
CFG_GCCISH_LINK_FLAGS_powerpc64le-unknown-linux-gnu := -shared -fPIC -ldl -pthread -lrt -g -m64
CFG_GCCISH_DEF_FLAG_powerpc64le-unknown-linux-gnu := -Wl,--export-dynamic,--dynamic-list=
CFG_LLC_FLAGS_powerpc64le-unknown-linux-gnu :=
CFG_INSTALL_NAME_powerpc64le-unknown-linux-gnu =
CFG_EXE_SUFFIX_powerpc64le-unknown-linux-gnu =
CFG_WINDOWSY_powerpc64le-unknown-linux-gnu :=
CFG_UNIXY_powerpc64le-unknown-linux-gnu := 1
CFG_LDPATH_powerpc64le-unknown-linux-gnu :=
CFG_RUN_powerpc64le-unknown-linux-gnu=$(2)
CFG_RUN_TARG_powerpc64le-unknown-linux-gnu=$(call CFG_RUN_powerpc64le-unknown-linux-gnu,,$(2))
CFG_GNU_TRIPLE_powerpc64le-unknown-linux-gnu := powerpc64le-unknown-linux-gnu

View File

@ -8,7 +8,7 @@ CFG_STATIC_LIB_NAME_x86_64-unknown-bitrig=lib$(1).a
CFG_LIB_GLOB_x86_64-unknown-bitrig=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_x86_64-unknown-bitrig=$(1)-*.dylib.dSYM
CFG_JEMALLOC_CFLAGS_x86_64-unknown-bitrig := -m64 -I/usr/include $(CFLAGS)
CFG_GCCISH_CFLAGS_x86_64-unknown-bitrig := -Wall -Werror -fPIC -m64 -I/usr/include $(CFLAGS)
CFG_GCCISH_CFLAGS_x86_64-unknown-bitrig := -Wall -Werror -fPIE -fPIC -m64 -I/usr/include $(CFLAGS)
CFG_GCCISH_LINK_FLAGS_x86_64-unknown-bitrig := -shared -pic -pthread -m64 $(LDFLAGS)
CFG_GCCISH_DEF_FLAG_x86_64-unknown-bitrig := -Wl,--export-dynamic,--dynamic-list=
CFG_LLC_FLAGS_x86_64-unknown-bitrig :=

View File

@ -20,3 +20,4 @@ CFG_LDPATH_x86_64-unknown-openbsd :=
CFG_RUN_x86_64-unknown-openbsd=$(2)
CFG_RUN_TARG_x86_64-unknown-openbsd=$(call CFG_RUN_x86_64-unknown-openbsd,,$(2))
CFG_GNU_TRIPLE_x86_64-unknown-openbsd := x86_64-unknown-openbsd
RUSTC_FLAGS_x86_64-unknown-openbsd=-C linker=$(call FIND_COMPILER,$(CC))

View File

@ -57,8 +57,8 @@ TARGET_CRATES := libc std flate arena term \
RUSTC_CRATES := rustc rustc_typeck rustc_mir rustc_borrowck rustc_resolve rustc_driver \
rustc_trans rustc_back rustc_llvm rustc_privacy rustc_lint \
rustc_data_structures rustc_front rustc_platform_intrinsics \
rustc_plugin rustc_metadata
HOST_CRATES := syntax $(RUSTC_CRATES) rustdoc fmt_macros
rustc_plugin rustc_metadata rustc_passes
HOST_CRATES := syntax syntax_ext $(RUSTC_CRATES) rustdoc fmt_macros
TOOLS := compiletest rustdoc rustc rustbook error-index-generator
DEPS_core :=
@ -71,7 +71,7 @@ DEPS_rustc_bitflags := core
DEPS_rustc_unicode := core
DEPS_std := core libc rand alloc collections rustc_unicode \
native:rust_builtin native:backtrace \
native:backtrace \
alloc_system
DEPS_arena := std
DEPS_glob := std
@ -86,9 +86,10 @@ DEPS_serialize := std log
DEPS_term := std log
DEPS_test := std getopts serialize rbml term native:rust_test_helpers
DEPS_syntax := std term serialize log fmt_macros arena libc rustc_bitflags
DEPS_syntax := std term serialize log arena libc rustc_bitflags
DEPS_syntax_ext := syntax fmt_macros
DEPS_rustc := syntax flate arena serialize getopts rustc_front\
DEPS_rustc := syntax fmt_macros flate arena serialize getopts rbml rustc_front\
log graphviz rustc_llvm rustc_back rustc_data_structures
DEPS_rustc_back := std syntax rustc_llvm rustc_front flate log libc
DEPS_rustc_borrowck := rustc rustc_front log graphviz syntax
@ -96,13 +97,14 @@ DEPS_rustc_data_structures := std log serialize
DEPS_rustc_driver := arena flate getopts graphviz libc rustc rustc_back rustc_borrowck \
rustc_typeck rustc_mir rustc_resolve log syntax serialize rustc_llvm \
rustc_trans rustc_privacy rustc_lint rustc_front rustc_plugin \
rustc_metadata
rustc_metadata syntax_ext rustc_passes
DEPS_rustc_front := std syntax log serialize
DEPS_rustc_lint := rustc log syntax
DEPS_rustc_llvm := native:rustllvm libc std rustc_bitflags
DEPS_rustc_metadata := rustc rustc_front syntax rbml
DEPS_rustc_passes := syntax rustc core
DEPS_rustc_mir := rustc rustc_front syntax
DEPS_rustc_resolve := rustc rustc_front log syntax
DEPS_rustc_resolve := arena rustc rustc_front log syntax
DEPS_rustc_platform_intrinsics := rustc rustc_llvm
DEPS_rustc_plugin := rustc rustc_metadata syntax
DEPS_rustc_privacy := rustc rustc_front log syntax
@ -174,9 +176,5 @@ endef
$(foreach crate,$(TOOLS),$(eval $(call RUST_TOOL,$(crate))))
ifdef CFG_DISABLE_ELF_TLS
RUSTFLAGS_std := --cfg no_elf_tls
endif
CRATEFILE_libc := $(SREL)src/liblibc/src/lib.rs
RUSTFLAGS_libc := --cfg stdbuild

View File

@ -80,10 +80,16 @@ endif
# LLVM linkage:
# Note: Filter with llvm-config so that optional targets which aren't present
# don't cause errors (ie PNaCl's target is only present within PNaCl's LLVM
# fork).
LLVM_LINKAGE_PATH_$(1):=$$(abspath $$(RT_OUTPUT_DIR_$(1))/llvmdeps.rs)
$$(LLVM_LINKAGE_PATH_$(1)): $(S)src/etc/mklldeps.py $$(LLVM_CONFIG_$(1))
$(Q)$(CFG_PYTHON) "$$<" "$$@" "$$(LLVM_COMPONENTS)" "$$(CFG_ENABLE_LLVM_STATIC_STDCPP)" \
$$(LLVM_CONFIG_$(1)) "$(CFG_STDCPP_NAME)"
$(Q)$(CFG_PYTHON) "$$<" "$$@" "$$(filter $$(shell \
$$(LLVM_CONFIG_$(1)) --components), \
$(LLVM_OPTIONAL_COMPONENTS)) $(LLVM_REQUIRED_COMPONENTS)" \
"$$(CFG_ENABLE_LLVM_STATIC_STDCPP)" $$(LLVM_CONFIG_$(1)) \
"$(CFG_STDCPP_NAME)" "$$(CFG_USING_LIBCPP)"
endef
$(foreach host,$(CFG_HOST), \
@ -95,6 +101,8 @@ $(foreach host,$(CFG_HOST), \
# This can't be done in target.mk because it's included before this file.
define LLVM_LINKAGE_DEPS
$$(TLIB$(1)_T_$(2)_H_$(3))/stamp.rustc_llvm: $$(LLVM_LINKAGE_PATH_$(2))
RUSTFLAGS$(1)_rustc_llvm_T_$(2) += $$(shell echo $$(LLVM_ALL_COMPONENTS_$(2)) | tr '-' '_' |\
sed -e 's/^ //;s/\([^ ]*\)/\-\-cfg have_component_\1/g')
endef
$(foreach source,$(CFG_HOST), \

View File

@ -13,7 +13,7 @@
######################################################################
# The version number
CFG_RELEASE_NUM=1.6.0
CFG_RELEASE_NUM=1.7.0
# An optional number to put after the label, e.g. '.2' -> '-beta.2'
# NB Make sure it starts with a dot to conform to semver pre-release
@ -22,7 +22,7 @@ CFG_PRERELEASE_VERSION=.4
# Append a version-dependent hash to each library, so we can install different
# versions in the same place
CFG_FILENAME_EXTRA=$(shell printf '%s' $(CFG_RELEASE) | $(CFG_HASH_COMMAND))
CFG_FILENAME_EXTRA=$(shell printf '%s' $(CFG_RELEASE)$(CFG_EXTRA_FILENAME) | $(CFG_HASH_COMMAND))
ifeq ($(CFG_RELEASE_CHANNEL),stable)
# This is the normal semver version string, e.g. "0.12.0", "0.12.0-nightly"
@ -131,11 +131,7 @@ endif
ifdef CFG_ENABLE_DEBUGINFO
$(info cfg: enabling debuginfo (CFG_ENABLE_DEBUGINFO))
# FIXME: Re-enable -g in stage0 after new snapshot
#CFG_RUSTC_FLAGS += -g
RUSTFLAGS_STAGE1 += -g
RUSTFLAGS_STAGE2 += -g
RUSTFLAGS_STAGE3 += -g
CFG_RUSTC_FLAGS += -g
endif
ifdef SAVE_TEMPS
@ -153,7 +149,7 @@ endif
ifdef TRACE
CFG_RUSTC_FLAGS += -Z trace
endif
ifdef CFG_ENABLE_RPATH
ifndef CFG_DISABLE_RPATH
CFG_RUSTC_FLAGS += -C rpath
endif
@ -276,9 +272,18 @@ endif
# LLVM macros
######################################################################
LLVM_COMPONENTS=x86 arm aarch64 mips powerpc ipo bitreader bitwriter linker asmparser mcjit \
LLVM_OPTIONAL_COMPONENTS=x86 arm aarch64 mips powerpc pnacl
LLVM_REQUIRED_COMPONENTS=ipo bitreader bitwriter linker asmparser mcjit \
interpreter instrumentation
ifneq ($(CFG_LLVM_ROOT),)
# Ensure we only try to link targets that the installed LLVM actually has:
LLVM_COMPONENTS := $(filter $(shell $(CFG_LLVM_ROOT)/bin/llvm-config$(X_$(CFG_BUILD)) --components),\
$(LLVM_OPTIONAL_COMPONENTS)) $(LLVM_REQUIRED_COMPONENTS)
else
LLVM_COMPONENTS := $(LLVM_OPTIONAL_COMPONENTS) $(LLVM_REQUIRED_COMPONENTS)
endif
# Only build these LLVM tools
LLVM_TOOLS=bugpoint llc llvm-ar llvm-as llvm-dis llvm-mc opt llvm-extract
@ -314,6 +319,8 @@ LLVM_HOST_TRIPLE_$(1)=$$(shell "$$(LLVM_CONFIG_$(1))" --host-target)
LLVM_AS_$(1)=$$(CFG_LLVM_INST_DIR_$(1))/bin/llvm-as$$(X_$(1))
LLC_$(1)=$$(CFG_LLVM_INST_DIR_$(1))/bin/llc$$(X_$(1))
LLVM_ALL_COMPONENTS_$(1)=$$(shell "$$(LLVM_CONFIG_$(1))" --components)
endef
$(foreach host,$(CFG_HOST), \
@ -476,7 +483,7 @@ endif
endif
LD_LIBRARY_PATH_ENV_HOSTDIR$(1)_T_$(2)_H_$(3) := \
$$(CURDIR)/$$(HLIB$(1)_H_$(3))
$$(CURDIR)/$$(HLIB$(1)_H_$(3)):$$(CFG_LLVM_INST_DIR_$(3))/lib
LD_LIBRARY_PATH_ENV_TARGETDIR$(1)_T_$(2)_H_$(3) := \
$$(CURDIR)/$$(TLIB1_T_$(2)_H_$(CFG_BUILD))

View File

@ -64,14 +64,18 @@ define DEF_GOOD_VALGRIND
ifeq ($(OSTYPE_$(1)),unknown-linux-gnu)
GOOD_VALGRIND_$(1) = 1
endif
ifneq (,$(filter $(OSTYPE_$(1)),darwin freebsd))
ifeq (HOST_$(1),x86_64)
ifneq (,$(filter $(OSTYPE_$(1)),apple-darwin freebsd))
ifeq ($(HOST_$(1)),x86_64)
GOOD_VALGRIND_$(1) = 1
endif
endif
ifdef GOOD_VALGRIND_$(t)
$$(info cfg: have good valgrind for $(t))
else
$$(info cfg: no good valgrind for $(t))
endif
endef
$(foreach t,$(CFG_TARGET),$(eval $(call DEF_GOOD_VALGRIND,$(t))))
$(foreach t,$(CFG_TARGET),$(info cfg: good valgrind for $(t) is $(GOOD_VALGRIND_$(t))))
ifneq ($(findstring linux,$(CFG_OSTYPE)),)
ifdef CFG_PERF
@ -215,16 +219,6 @@ define CFG_MAKE_TOOLCHAIN
ifeq ($$(findstring $(HOST_$(1)),arm aarch64 mips mipsel powerpc),)
# On OpenBSD, we need to pass the path of libstdc++.so to the linker
# (use path of libstdc++.a which is a known name for the same path)
ifeq ($(OSTYPE_$(1)),unknown-openbsd)
STDCPP_LIBDIR_RUSTFLAGS_$(1)= \
-L "$$(dir $$(shell $$(CC_$(1)) $$(CFG_GCCISH_CFLAGS_$(1)) \
-print-file-name=lib$(CFG_STDCPP_NAME).a))"
else
STDCPP_LIBDIR_RUSTFLAGS_$(1)=
endif
# On Bitrig, we need the relocation model to be PIC for everything
ifeq (,$(filter $(OSTYPE_$(1)),bitrig))
LLVM_MC_RELOCATION_MODEL="pic"

View File

@ -35,7 +35,7 @@
# that's per-target so you're allowed to conditionally add files based on the
# target.
################################################################################
NATIVE_LIBS := rust_builtin hoedown miniz rust_test_helpers
NATIVE_LIBS := hoedown miniz rust_test_helpers
# $(1) is the target triple
define NATIVE_LIBRARIES
@ -50,8 +50,6 @@ NATIVE_DEPS_hoedown_$(1) := hoedown/src/autolink.c \
hoedown/src/stack.c \
hoedown/src/version.c
NATIVE_DEPS_miniz_$(1) = miniz.c
NATIVE_DEPS_rust_builtin_$(1) := rust_builtin.c \
rust_android_dummy.c
NATIVE_DEPS_rust_test_helpers_$(1) := rust_test_helpers.c
################################################################################
@ -128,12 +126,25 @@ define DEF_THIRD_PARTY_TARGETS
# $(1) is the target triple
ifeq ($$(CFG_WINDOWSY_$(1)), 1)
# This isn't necessarily a desired option, but it's harmless and works around
# what appears to be a mingw-w64 bug.
ifeq ($$(CFG_WINDOWSY_$(1)),1)
# A bit of history here, this used to be --enable-lazy-lock added in #14006
# which was filed with jemalloc in jemalloc/jemalloc#83 which was also
# reported to MinGW: http://sourceforge.net/p/mingw-w64/bugs/395/
#
# https://sourceforge.net/p/mingw-w64/bugs/395/
JEMALLOC_ARGS_$(1) := --enable-lazy-lock
# When updating jemalloc to 4.0, however, it was found that binaries would
# exit with the status code STATUS_RESOURCE_NOT_OWNED indicating that a thread
# was unlocking a mutex it never locked. Disabling this "lazy lock" option
# seems to fix the issue, but it was enabled by default for MinGW targets in
# 13473c7 for jemalloc.
#
# As a result of all that, force disabling lazy lock on Windows, and after
# reading some code it at least *appears* that the initialization of mutexes
# is otherwise ok in jemalloc, so shouldn't cause problems hopefully...
#
# tl;dr: make windows behave like other platforms by disabling lazy locking,
# but requires passing an option due to a historical default with
# jemalloc.
JEMALLOC_ARGS_$(1) := --disable-lazy-lock
else ifeq ($(OSTYPE_$(1)), apple-ios)
JEMALLOC_ARGS_$(1) := --disable-tls
else ifeq ($(findstring android, $(OSTYPE_$(1))), android)

View File

@ -95,7 +95,6 @@ $$(TLIB$(1)_T_$(2)_H_$(3))/stamp.$(4): \
$$(RUSTFLAGS_$(4)) \
$$(RUSTFLAGS$(1)_$(4)) \
$$(RUSTFLAGS$(1)_$(4)_T_$(2)) \
$$(STDCPP_LIBDIR_RUSTFLAGS_$(2)) \
--out-dir $$(@D) \
-C extra-filename=-$$(CFG_FILENAME_EXTRA) \
$$<
@ -130,7 +129,7 @@ $$(TBIN$(1)_T_$(2)_H_$(3))/$(4)$$(X_$(2)): \
| $$(TBIN$(1)_T_$(2)_H_$(3))/
@$$(call E, rustc: $$@)
$$(STAGE$(1)_T_$(2)_H_$(3)) \
$$(STDCPP_LIBDIR_RUSTFLAGS_$(2)) \
$$(LLVM_LIBDIR_RUSTFLAGS_$(2)) \
-o $$@ $$< --cfg $(4)
endef

View File

@ -393,8 +393,7 @@ $(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): \
$$(subst @,,$$(STAGE$(1)_T_$(2)_H_$(3))) -o $$@ $$< --test \
-L "$$(RT_OUTPUT_DIR_$(2))" \
$$(LLVM_LIBDIR_RUSTFLAGS_$(2)) \
$$(RUSTFLAGS_$(4)) \
$$(STDCPP_LIBDIR_RUSTFLAGS_$(2))
$$(RUSTFLAGS_$(4))
endef
@ -664,9 +663,9 @@ CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3) := \
--android-cross-path=$(CFG_ANDROID_CROSS_PATH) \
--adb-path=$(CFG_ADB) \
--adb-test-dir=$(CFG_ADB_TEST_DIR) \
--host-rustcflags "$(RUSTC_FLAGS_$(3)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(3)) $$(STDCPP_LIBDIR_RUSTFLAGS_$(3))" \
--host-rustcflags "$(RUSTC_FLAGS_$(3)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(3))" \
--lldb-python-dir=$(CFG_LLDB_PYTHON_DIR) \
--target-rustcflags "$(RUSTC_FLAGS_$(2)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(2)) $$(STDCPP_LIBDIR_RUSTFLAGS_$(2))" \
--target-rustcflags "$(RUSTC_FLAGS_$(2)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(2))" \
$$(CTEST_TESTARGS)
ifdef CFG_VALGRIND_RPASS
@ -1072,7 +1071,9 @@ $(3)/test/run-make/%-$(1)-T-$(2)-H-$(3).ok: \
"$$(LD_LIBRARY_PATH_ENV_TARGETDIR$(1)_T_$(2)_H_$(3))" \
$(1) \
$$(S) \
$(3)
$(3) \
"$$(LLVM_LIBDIR_RUSTFLAGS_$(3))" \
"$$(LLVM_ALL_COMPONENTS_$(3))"
@touch -r $$@.start_time $$@ && rm $$@.start_time
else
# FIXME #11094 - The above rule doesn't work right for multiple targets

View File

@ -8,8 +8,6 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use self::TargetLocation::*;
use common::Config;
use common::{CompileFail, ParseFail, Pretty, RunFail, RunPass, RunPassValgrind};
use common::{Codegen, DebugInfoLldb, DebugInfoGdb, Rustdoc};

View File

@ -38,6 +38,8 @@ const ARCH_TABLE: &'static [(&'static str, &'static str)] = &[
("mips", "mips"),
("msp430", "msp430"),
("powerpc", "powerpc"),
("powerpc64", "powerpc64"),
("powerpc64le", "powerpc64le"),
("s390x", "systemz"),
("sparc", "sparc"),
("x86_64", "x86_64"),

View File

@ -14,31 +14,25 @@ Even then, Rust still allows precise control like a low-level language would.
[rust]: https://www.rust-lang.org
“The Rust Programming Language” is split into eight sections. This introduction
“The Rust Programming Language” is split into chapters. This introduction
is the first. After this:
* [Getting started][gs] - Set up your computer for Rust development.
* [Learn Rust][lr] - Learn Rust programming through small projects.
* [Effective Rust][er] - Higher-level concepts for writing excellent Rust code.
* [Tutorial: Guessing Game][gg] - Learn some Rust with a small project.
* [Syntax and Semantics][ss] - Each bit of Rust, broken down into small chunks.
* [Effective Rust][er] - Higher-level concepts for writing excellent Rust code.
* [Nightly Rust][nr] - Cutting-edge features that aren’t in stable builds yet.
* [Glossary][gl] - A reference of terms used in the book.
* [Bibliography][bi] - Background on Rust's influences, papers about Rust.
[gs]: getting-started.html
[lr]: learn-rust.html
[gg]: guessing-game.html
[er]: effective-rust.html
[ss]: syntax-and-semantics.html
[nr]: nightly-rust.html
[gl]: glossary.html
[bi]: bibliography.html
After reading this introduction, you’ll want to dive into either ‘Learn Rust’ or
‘Syntax and Semantics’, depending on your preference: ‘Learn Rust’ if you want
to dive in with a project, or ‘Syntax and Semantics’ if you prefer to start
small, and learn a single concept thoroughly before moving onto the next.
Copious cross-linking connects these parts together.
### Contributing
The source files from which this book is generated can be found on

View File

@ -1,10 +1,7 @@
# Summary
* [Getting Started](getting-started.md)
* [Learn Rust](learn-rust.md)
* [Guessing Game](guessing-game.md)
* [Dining Philosophers](dining-philosophers.md)
* [Rust Inside Other Languages](rust-inside-other-languages.md)
* [Tutorial: Guessing Game](guessing-game.md)
* [Syntax and Semantics](syntax-and-semantics.md)
* [Variable Bindings](variable-bindings.md)
* [Functions](functions.md)

View File

@ -24,7 +24,7 @@ fn distance<N, E, G: Graph<N, E>>(graph: &G, start: &N, end: &N) -> u32 { ... }
```
Our distance calculation works regardless of our `Edge` type, so the `E` stuff in
this signature is just a distraction.
this signature is a distraction.
What we really want to say is that a certain `E`dge and `N`ode type come together
to form each kind of `Graph`. We can do that with associated types:
@ -118,10 +118,10 @@ impl Graph for MyGraph {
This silly implementation always returns `true` and an empty `Vec<Edge>`, but it
gives you an idea of how to implement this kind of thing. We first need three
`struct`s, one for the graph, one for the node, and one for the edge. If it made
more sense to use a different type, that would work as well, we’re just going to
more sense to use a different type, that would work as well, we’re going to
use `struct`s for all three here.
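For orientation, here is a condensed sketch of that shape (our abridgement, assuming the `Graph` trait exposes `has_edge` and `edges` as discussed above):
```rust
trait Graph {
    type N;
    type E;

    fn has_edge(&self, start: &Self::N, end: &Self::N) -> bool;
    fn edges(&self, node: &Self::N) -> Vec<Self::E>;
}

struct Node;
struct Edge;
struct MyGraph;

impl Graph for MyGraph {
    type N = Node;
    type E = Edge;

    // A silly implementation: every pair is connected, and no edges are listed.
    fn has_edge(&self, _start: &Node, _end: &Node) -> bool {
        true
    }

    fn edges(&self, _node: &Node) -> Vec<Edge> {
        Vec::new()
    }
}

fn main() {
    let g = MyGraph;
    assert!(g.has_edge(&Node, &Node));
    assert!(g.edges(&Node).is_empty());
}
```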
Next is the `impl` line, which is just like implementing any other trait.
Next is the `impl` line, which is an implementation like any other trait.
From here, we use `=` to define our associated types. The name the trait uses
goes on the left of the `=`, and the concrete type we’re `impl`ementing this

View File

@ -33,7 +33,7 @@ Rust, as well as publications about Rust.
* [Non-blocking steal-half work queues](http://www.cs.bgu.ac.il/%7Ehendlerd/papers/p280-hendler.pdf)
* [Reagents: expressing and composing fine-grained concurrency](http://www.mpi-sws.org/~turon/reagents.pdf)
* [Algorithms for scalable synchronization of shared-memory multiprocessors](https://www.cs.rochester.edu/u/scott/papers/1991_TOCS_synch.pdf)
* [Epoc-based reclamation](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-579.pdf).
* [Epoch-based reclamation](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-579.pdf).
### Others

View File

@ -154,7 +154,7 @@ implemented. For this, we need something more dangerous.
The `transmute` function is provided by a [compiler intrinsic][intrinsics], and
what it does is very simple, but very scary. It tells Rust to treat a value of
one type as though it were another type. It does this regardless of the
typechecking system, and just completely trusts you.
typechecking system, and completely trusts you.
[intrinsics]: intrinsics.html
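A minimal sketch of what this looks like in practice (an illustration of ours, not the chapter's own example; the `u32`-to-`i32` conversion is just a stand-in):
```rust
use std::mem;

fn main() {
    let a: u32 = 42;
    // Same size in and out, so this particular transmute is well-defined,
    // but the compiler performs no checking beyond the size match.
    let b: i32 = unsafe { mem::transmute(a) };
    assert_eq!(b, 42);
}
```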

View File

@ -52,7 +52,7 @@ These pointers cannot be copied in such a way that they outlive the lifetime ass
## `*const T` and `*mut T`
These are C-like raw pointers with no lifetime or ownership attached to them. They just point to
These are C-like raw pointers with no lifetime or ownership attached to them. They point to
some location in memory with no other restrictions. The only guarantee that these provide is that
they cannot be dereferenced except in code marked `unsafe`.
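A short sketch of that guarantee (our illustration, not taken from the chapter):
```rust
fn main() {
    let x = 5;
    // Creating a raw pointer is safe...
    let raw: *const i32 = &x;
    // ...but dereferencing it requires an `unsafe` block.
    let value = unsafe { *raw };
    assert_eq!(value, 5);
}
```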
@ -255,7 +255,7 @@ major ones will be covered below.
## `Arc<T>`
[`Arc<T>`][arc] is just a version of `Rc<T>` that uses an atomic reference count (hence, "Arc").
[`Arc<T>`][arc] is a version of `Rc<T>` that uses an atomic reference count (hence, "Arc").
This can be sent freely between threads.
C++'s `shared_ptr` is similar to `Arc`, however in the case of C++ the inner data is always mutable.
@ -340,11 +340,11 @@ With the former, the `RefCell<T>` is wrapping the `Vec<T>`, so the `Vec<T>` in i
mutable. At the same time, there can only be one mutable borrow of the whole `Vec` at a given time.
This means that your code cannot simultaneously work on different elements of the vector from
different `Rc` handles. However, we are able to push and pop from the `Vec<T>` at will. This is
similar to an `&mut Vec<T>` with the borrow checking done at runtime.
similar to a `&mut Vec<T>` with the borrow checking done at runtime.
With the latter, the borrowing is of individual elements, but the overall vector is immutable. Thus,
we can independently borrow separate elements, but we cannot push or pop from the vector. This is
similar to an `&mut [T]`[^3], but, again, the borrow checking is at runtime.
similar to a `&mut [T]`[^3], but, again, the borrow checking is at runtime.
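A sketch contrasting the two compositions (our illustration, not the chapter's own listing):
```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Rc<RefCell<Vec<T>>>: the whole Vec is mutable behind one runtime-checked
    // borrow, so we can push and pop, but only one `&mut` exists at a time.
    let whole = Rc::new(RefCell::new(vec![1, 2, 3]));
    whole.borrow_mut().push(4);
    assert_eq!(whole.borrow().len(), 4);

    // Rc<Vec<RefCell<T>>>: elements borrow independently, but the Vec itself
    // cannot be pushed to or popped from.
    let elems = Rc::new(vec![RefCell::new(1), RefCell::new(2)]);
    *elems[0].borrow_mut() += 10;
    *elems[1].borrow_mut() += 20;
    assert_eq!(*elems[0].borrow(), 11);
}
```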
In concurrent programs, we have a similar situation with `Arc<Mutex<T>>`, which provides shared
mutability and ownership.

View File

@ -208,7 +208,7 @@ different.
Rust’s implementation of closures is a bit different than other languages. They
are effectively syntax sugar for traits. You’ll want to make sure to have read
the [traits chapter][traits] before this one, as well as the chapter on [trait
the [traits][traits] section before this one, as well as the section on [trait
objects][trait-objects].
[traits]: traits.html
@ -253,7 +253,7 @@ use it.
# Taking closures as arguments
Now that we know that closures are traits, we already know how to accept and
return closures: just like any other trait!
return closures: the same as any other trait!
This also means that we can choose static vs dynamic dispatch as well. First,
let’s write a function which takes something callable, calls it, and returns
@ -271,7 +271,7 @@ let answer = call_with_one(|x| x + 2);
assert_eq!(3, answer);
```
We pass our closure, `|x| x + 2`, to `call_with_one`. It just does what it
We pass our closure, `|x| x + 2`, to `call_with_one`. It does what it
suggests: it calls the closure, giving it `1` as an argument.
Let’s examine the signature of `call_with_one` in more depth:
This error is letting us know that we don’t have a `&'static Fn(i32) -> i32`,
we have a `[closure@<anon>:7:9: 7:20]`. Wait, what?
Because each closure generates its own environment `struct` and implementation
of `Fn` and friends, these types are anonymous. They exist just solely for
of `Fn` and friends, these types are anonymous. They exist solely for
this closure. So Rust shows them as `closure@<anon>`, rather than some
autogenerated name.

View File

@ -305,10 +305,10 @@ fn main() {
}
```
We use the `mpsc::channel()` method to construct a new channel. We just `send`
We use the `mpsc::channel()` method to construct a new channel. We `send`
a simple `()` down the channel, and then wait for ten of them to come back.
While this channel is just sending a generic signal, we can send any data that
While this channel is sending a generic signal, we can send any data that
is `Send` over the channel!
```rust

View File

@ -2,7 +2,7 @@
When a project starts getting large, it’s considered good software
engineering practice to split it up into a bunch of smaller pieces, and then
fit them together. It’s also important to have a well-defined interface, so
fit them together. It is also important to have a well-defined interface, so
that some of your functionality is private, and some is public. To facilitate
these kinds of things, Rust has a module system.
@ -222,7 +222,7 @@ fn hello() -> String {
}
```
Of course, you can copy and paste this from this web page, or just type
Of course, you can copy and paste this from this web page, or type
something else. It’s not important that you actually put ‘konnichiwa’ to learn
about the module system.
@ -299,7 +299,7 @@ depth.
Rust allows you to precisely control which aspects of your interface are
public, and so private is the default. To make things public, you use the `pub`
keyword. Let’s focus on the `english` module first, so let’s reduce our `src/main.rs`
to just this:
to only this:
```rust,ignore
extern crate phrases;
@ -447,7 +447,7 @@ use phrases::english::{greetings, farewells};
## Re-exporting with `pub use`
You don’t just use `use` to shorten identifiers. You can also use it inside of your crate
You don’t only use `use` to shorten identifiers. You can also use it inside of your crate
to re-export a function inside another module. This allows you to present an external
interface that may not directly map to your internal code organization.
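A minimal sketch of the idea (simplified module names, not the chapter's full `phrases` crate):
```rust
mod phrases {
    pub mod english {
        pub fn hello() -> String {
            "Hello!".to_string()
        }
    }

    // Re-export `hello` one level up, so callers don't need to know about
    // the internal `english` module.
    pub use self::english::hello;
}

fn main() {
    assert_eq!(phrases::hello(), phrases::english::hello());
}
```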
@ -584,5 +584,5 @@ use sayings::english::farewells as en_farewells;
```
As you can see, the curly brackets compress `use` statements for several items
under the same path, and in this context `self` just refers back to that path.
under the same path, and in this context `self` refers back to that path.
Note: The curly brackets cannot be nested or mixed with star globbing.

View File

@ -13,7 +13,7 @@ own allocator up and running.
The compiler currently ships two default allocators: `alloc_system` and
`alloc_jemalloc` (some targets don't have jemalloc, however). These allocators
are just normal Rust crates and contain an implementation of the routines to
are normal Rust crates and contain an implementation of the routines to
allocate and deallocate memory. The standard library is not compiled assuming
either one, and the compiler will decide which allocator is in use at
compile-time depending on the type of output artifact being produced.
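At the time, opting into a particular allocator looked roughly like this (a sketch assuming the nightly `alloc_system` feature gate this chapter describes):
```rust
#![feature(alloc_system)]

extern crate alloc_system;

fn main() {
    // With `alloc_system` linked in, this Box comes from the system allocator
    // (e.g. malloc) rather than jemalloc.
    let x = Box::new(4);
    println!("{}", x);
}
```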
@ -134,7 +134,7 @@ pub extern fn __rust_usable_size(size: usize, _align: usize) -> usize {
size
}
# // just needed to get rustdoc to test this
# // only needed to get rustdoc to test this
# fn main() {}
# #[lang = "panic_fmt"] fn panic_fmt() {}
# #[lang = "eh_personality"] fn eh_personality() {}

View File

@ -1,723 +0,0 @@
% Dining Philosophers
For our second project, let’s look at a classic concurrency problem. It’s
called ‘the dining philosophers’. It was originally conceived by Dijkstra in
1965, but we’ll use a lightly adapted version from [this paper][paper] by Tony
Hoare in 1985.
[paper]: http://www.usingcsp.com/cspbook.pdf
> In ancient times, a wealthy philanthropist endowed a College to accommodate
> five eminent philosophers. Each philosopher had a room in which they could
> engage in their professional activity of thinking; there was also a common
> dining room, furnished with a circular table, surrounded by five chairs, each
> labelled by the name of the philosopher who was to sit in it. They sat
> anticlockwise around the table. To the left of each philosopher there was
> laid a golden fork, and in the center stood a large bowl of spaghetti, which
> was constantly replenished. A philosopher was expected to spend most of
> their time thinking; but when they felt hungry, they went to the dining
> room, sat down in their own chair, picked up their own fork on their left,
> and plunged it into the spaghetti. But such is the tangled nature of
> spaghetti that a second fork is required to carry it to the mouth. The
> philosopher therefore had also to pick up the fork on their right. When
> they were finished they would put down both their forks, get up from their
> chair, and continue thinking. Of course, a fork can be used by only one
> philosopher at a time. If the other philosopher wants it, they just have
> to wait until the fork is available again.
This classic problem shows off a few different elements of concurrency. The
reason is that it's actually slightly tricky to implement: a simple
implementation can deadlock. For example, let's consider a simple algorithm
that would solve this problem:
1. A philosopher picks up the fork on their left.
2. They then pick up the fork on their right.
3. They eat.
4. They return the forks.
Now, let’s imagine this sequence of events:
1. Philosopher 1 begins the algorithm, picking up the fork on their left.
2. Philosopher 2 begins the algorithm, picking up the fork on their left.
3. Philosopher 3 begins the algorithm, picking up the fork on their left.
4. Philosopher 4 begins the algorithm, picking up the fork on their left.
5. Philosopher 5 begins the algorithm, picking up the fork on their left.
6. ... ? All the forks are taken, but nobody can eat!
There are different ways to solve this problem. Well get to our solution in
the tutorial itself. For now, lets get started and create a new project with
`cargo`:
```bash
$ cd ~/projects
$ cargo new dining_philosophers --bin
$ cd dining_philosophers
```
Now we can start modeling the problem itself. Well start with the philosophers
in `src/main.rs`:
```rust
struct Philosopher {
name: String,
}
impl Philosopher {
fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
}
}
fn main() {
let p1 = Philosopher::new("Judith Butler");
let p2 = Philosopher::new("Gilles Deleuze");
let p3 = Philosopher::new("Karl Marx");
let p4 = Philosopher::new("Emma Goldman");
let p5 = Philosopher::new("Michel Foucault");
}
```
Here, we make a [`struct`][struct] to represent a philosopher. For now,
a name is all we need. We choose the [`String`][string] type for the name,
rather than `&str`. Generally speaking, working with a type which owns its
data is easier than working with one that uses references.
[struct]: structs.html
[string]: strings.html
Lets continue:
```rust
# struct Philosopher {
# name: String,
# }
impl Philosopher {
fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
}
}
```
This `impl` block lets us define things on `Philosopher` structs. In this case,
we define an associated function called `new`. The first line looks like this:
```rust
# struct Philosopher {
# name: String,
# }
# impl Philosopher {
fn new(name: &str) -> Philosopher {
# Philosopher {
# name: name.to_string(),
# }
# }
# }
```
We take one argument, a `name`, of type `&str`. This is a reference to another
string. It returns an instance of our `Philosopher` struct.
```rust
# struct Philosopher {
# name: String,
# }
# impl Philosopher {
# fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
# }
# }
```
This creates a new `Philosopher`, and sets its `name` to our `name` argument.
Not just the argument itself, though, as we call `.to_string()` on it. This
will create a copy of the string that our `&str` points to, and give us a new
`String`, which is the type of the `name` field of `Philosopher`.
Why not accept a `String` directly? Its nicer to call. If we took a `String`,
but our caller had a `&str`, they’d have to call `.to_string()` themselves. The
downside of this flexibility is that we _always_ make a copy. For this small
program, thats not particularly important, as we know well just be using
short strings anyway.
One last thing youll notice: we just define a `Philosopher`, and seemingly
dont do anything with it. Rust is an expression based language, which means
that almost everything in Rust is an expression which returns a value. This is
true of functions as well — the last expression is automatically returned. Since
we create a new `Philosopher` as the last expression of this function, we end
up returning it.
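For example, here is a tiny, self-contained sketch of that implicit-return behavior (the `add_one` function is only an illustration, not part of our program):

```rust
fn add_one(x: i32) -> i32 {
    x + 1 // no `return` keyword: the last expression becomes the function's value
}

fn main() {
    assert_eq!(add_one(4), 5);
}
```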
This name, `new()`, isnt anything special to Rust, but it is a convention for
functions that create new instances of structs. Before we talk about why, lets
look at `main()` again:
```rust
# struct Philosopher {
# name: String,
# }
#
# impl Philosopher {
# fn new(name: &str) -> Philosopher {
# Philosopher {
# name: name.to_string(),
# }
# }
# }
#
fn main() {
let p1 = Philosopher::new("Judith Butler");
let p2 = Philosopher::new("Gilles Deleuze");
let p3 = Philosopher::new("Karl Marx");
let p4 = Philosopher::new("Emma Goldman");
let p5 = Philosopher::new("Michel Foucault");
}
```
Here, we create five variable bindings with five new philosophers.
If we _didnt_ define
that `new()` function, it would look like this:
```rust
# struct Philosopher {
# name: String,
# }
fn main() {
let p1 = Philosopher { name: "Judith Butler".to_string() };
let p2 = Philosopher { name: "Gilles Deleuze".to_string() };
let p3 = Philosopher { name: "Karl Marx".to_string() };
let p4 = Philosopher { name: "Emma Goldman".to_string() };
let p5 = Philosopher { name: "Michel Foucault".to_string() };
}
```
Thats much noisier. Using `new` has other advantages too, but even in
this simple case, it ends up being nicer to use.
Now that weve got the basics in place, theres a number of ways that we can
tackle the broader problem here. I like to start from the end first: lets
set up a way for each philosopher to finish eating. As a tiny step, lets make
a method, and then loop through all the philosophers, calling it:
```rust
struct Philosopher {
name: String,
}
impl Philosopher {
fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
}
fn eat(&self) {
println!("{} is done eating.", self.name);
}
}
fn main() {
let philosophers = vec![
Philosopher::new("Judith Butler"),
Philosopher::new("Gilles Deleuze"),
Philosopher::new("Karl Marx"),
Philosopher::new("Emma Goldman"),
Philosopher::new("Michel Foucault"),
];
for p in &philosophers {
p.eat();
}
}
```
Lets look at `main()` first. Rather than have five individual variable
bindings for our philosophers, we make a `Vec<T>` of them instead. `Vec<T>` is
also called a vector, and its a growable array type. We then use a
[`for`][for] loop to iterate through the vector, getting a reference to each
philosopher in turn.
[for]: loops.html#for
In the body of the loop, we call `p.eat()`, which is defined above:
```rust,ignore
fn eat(&self) {
println!("{} is done eating.", self.name);
}
```
In Rust, methods take an explicit `self` parameter. Thats why `eat()` is a
method, but `new` is an associated function: `new()` has no `self`. For our
first version of `eat()`, we just print out the name of the philosopher, and
mention theyre done eating. Running this program should give you the following
output:
```text
Judith Butler is done eating.
Gilles Deleuze is done eating.
Karl Marx is done eating.
Emma Goldman is done eating.
Michel Foucault is done eating.
```
Easy enough, theyre all done! We havent actually implemented the real problem
yet, though, so were not done yet!
Next, we want to make our philosophers not just finish eating, but actually
eat. Heres the next version:
```rust
use std::thread;
use std::time::Duration;
struct Philosopher {
name: String,
}
impl Philosopher {
fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
}
fn eat(&self) {
println!("{} is eating.", self.name);
thread::sleep(Duration::from_millis(1000));
println!("{} is done eating.", self.name);
}
}
fn main() {
let philosophers = vec![
Philosopher::new("Judith Butler"),
Philosopher::new("Gilles Deleuze"),
Philosopher::new("Karl Marx"),
Philosopher::new("Emma Goldman"),
Philosopher::new("Michel Foucault"),
];
for p in &philosophers {
p.eat();
}
}
```
Just a few changes. Lets break it down.
```rust,ignore
use std::thread;
```
`use` brings names into scope. Were going to start using the `thread` module
from the standard library, and so we need to `use` it.
```rust,ignore
fn eat(&self) {
println!("{} is eating.", self.name);
thread::sleep(Duration::from_millis(1000));
println!("{} is done eating.", self.name);
}
```
We now print out two messages, with a `sleep` in the middle. This will
simulate the time it takes a philosopher to eat.
If you run this program, you should see each philosopher eat in turn:
```text
Judith Butler is eating.
Judith Butler is done eating.
Gilles Deleuze is eating.
Gilles Deleuze is done eating.
Karl Marx is eating.
Karl Marx is done eating.
Emma Goldman is eating.
Emma Goldman is done eating.
Michel Foucault is eating.
Michel Foucault is done eating.
```
Excellent! Were getting there. Theres just one problem: we arent actually
operating in a concurrent fashion, which is a core part of the problem!
To make our philosophers eat concurrently, we need to make a small change.
Heres the next iteration:
```rust
use std::thread;
use std::time::Duration;
struct Philosopher {
name: String,
}
impl Philosopher {
fn new(name: &str) -> Philosopher {
Philosopher {
name: name.to_string(),
}
}
fn eat(&self) {
println!("{} is eating.", self.name);
thread::sleep(Duration::from_millis(1000));
println!("{} is done eating.", self.name);
}
}
fn main() {
let philosophers = vec![
Philosopher::new("Judith Butler"),
Philosopher::new("Gilles Deleuze"),
Philosopher::new("Karl Marx"),
Philosopher::new("Emma Goldman"),
Philosopher::new("Michel Foucault"),
];
let handles: Vec<_> = philosophers.into_iter().map(|p| {
thread::spawn(move || {
p.eat();
})
}).collect();
for h in handles {
h.join().unwrap();
}
}
```
All weve done is change the loop in `main()`, and added a second one! Heres the
first change:
```rust,ignore
let handles: Vec<_> = philosophers.into_iter().map(|p| {
thread::spawn(move || {
p.eat();
})
}).collect();
```
While this is only five lines, theyre a dense five. Lets break it down.
```rust,ignore
let handles: Vec<_> =
```
We introduce a new binding, called `handles`. Weve given it this name because
we are going to make some new threads, and that will return some handles to those
threads that let us control their operation. We need to explicitly annotate
the type here, though, due to an issue well talk about later. The `_` is
a type placeholder. Were saying “`handles` is a vector of something, but you
can figure out what that something is, Rust.”
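As a small aside, here is a self-contained sketch of that placeholder in action, separate from our philosophers:

```rust
fn main() {
    // We say "a vector of something" and let the compiler fill in the element type.
    let squares: Vec<_> = (1..4).map(|x| x * x).collect();
    assert_eq!(squares, vec![1, 4, 9]);
}
```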
```rust,ignore
philosophers.into_iter().map(|p| {
```
We take our list of philosophers and call `into_iter()` on it. This creates an
iterator that takes ownership of each philosopher. We need to do this to pass
them to our threads. We take that iterator and call `map` on it, which takes a
closure as an argument and calls that closure on each element in turn.
```rust,ignore
thread::spawn(move || {
p.eat();
})
```
Heres where the concurrency happens. The `thread::spawn` function takes a closure
as an argument and executes that closure in a new thread. This closure needs
an extra annotation, `move`, to indicate that the closure is going to take
ownership of the values it’s capturing. In this case, it’s the `p` parameter of
the closure we pass to `map`.
Inside the thread, all we do is call `eat()` on `p`. Also note that
the call to `thread::spawn` lacks a trailing semicolon, making this an
expression. This distinction is important, yielding the correct return
value. For more details, read [Expressions vs. Statements][es].
[es]: functions.html#expressions-vs-statements
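If you’d like to see `move` in isolation, here is a minimal sketch (the `msg` binding is made up for illustration):

```rust
use std::thread;

fn main() {
    let msg = "hello from another thread".to_string();

    let handle = thread::spawn(move || {
        // `move` transfers ownership of `msg` into this closure.
        println!("{}", msg);
    });

    handle.join().unwrap();
}
```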
```rust,ignore
}).collect();
```
Finally, we take the result of all those `map` calls and collect them up.
`collect()` will make them into a collection of some kind, which is why we
needed to annotate the type of `handles`: we want a `Vec<T>`. The elements are the
return values of the `thread::spawn` calls, which are handles to those threads.
Whew!
```rust,ignore
for h in handles {
h.join().unwrap();
}
```
At the end of `main()`, we loop through the handles and call `join()` on them,
which blocks until the thread has finished executing. This ensures
that the threads complete their work before the program exits.
If you run this program, youll see that the philosophers eat out of order!
We have multi-threading!
```text
Judith Butler is eating.
Gilles Deleuze is eating.
Karl Marx is eating.
Emma Goldman is eating.
Michel Foucault is eating.
Judith Butler is done eating.
Gilles Deleuze is done eating.
Karl Marx is done eating.
Emma Goldman is done eating.
Michel Foucault is done eating.
```
But what about the forks? We havent modeled them at all yet.
To do that, lets make a new `struct`:
```rust
use std::sync::Mutex;
struct Table {
forks: Vec<Mutex<()>>,
}
```
This `Table` has a vector of `Mutex`es. A mutex is a way to control
concurrency: only one thread can access the contents at once. This is exactly
the property we need with our forks. We use an empty tuple, `()`, inside the
mutex, since were not actually going to use the value, just hold onto it.
Lets modify the program to use the `Table`:
```rust
use std::thread;
use std::time::Duration;
use std::sync::{Mutex, Arc};
struct Philosopher {
name: String,
left: usize,
right: usize,
}
impl Philosopher {
fn new(name: &str, left: usize, right: usize) -> Philosopher {
Philosopher {
name: name.to_string(),
left: left,
right: right,
}
}
fn eat(&self, table: &Table) {
let _left = table.forks[self.left].lock().unwrap();
thread::sleep(Duration::from_millis(150));
let _right = table.forks[self.right].lock().unwrap();
println!("{} is eating.", self.name);
thread::sleep(Duration::from_millis(1000));
println!("{} is done eating.", self.name);
}
}
struct Table {
forks: Vec<Mutex<()>>,
}
fn main() {
let table = Arc::new(Table { forks: vec![
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
]});
let philosophers = vec![
Philosopher::new("Judith Butler", 0, 1),
Philosopher::new("Gilles Deleuze", 1, 2),
Philosopher::new("Karl Marx", 2, 3),
Philosopher::new("Emma Goldman", 3, 4),
Philosopher::new("Michel Foucault", 0, 4),
];
let handles: Vec<_> = philosophers.into_iter().map(|p| {
let table = table.clone();
thread::spawn(move || {
p.eat(&table);
})
}).collect();
for h in handles {
h.join().unwrap();
}
}
```
Lots of changes! However, with this iteration, weve got a working program.
Lets go over the details:
```rust,ignore
use std::sync::{Mutex, Arc};
```
Were going to use another structure from the `std::sync` package: `Arc<T>`.
Well talk more about it when we use it.
```rust,ignore
struct Philosopher {
name: String,
left: usize,
right: usize,
}
```
We need to add two more fields to our `Philosopher`. Each philosopher is going
to have two forks: the one on their left, and the one on their right.
Well use the `usize` type to indicate them, as its the type that you index
vectors with. These two values will be the indexes into the `forks` vector that
our `Table` holds.
```rust,ignore
fn new(name: &str, left: usize, right: usize) -> Philosopher {
Philosopher {
name: name.to_string(),
left: left,
right: right,
}
}
```
We now need to construct those `left` and `right` values, so we add them to
`new()`.
```rust,ignore
fn eat(&self, table: &Table) {
let _left = table.forks[self.left].lock().unwrap();
thread::sleep(Duration::from_millis(150));
let _right = table.forks[self.right].lock().unwrap();
println!("{} is eating.", self.name);
thread::sleep(Duration::from_millis(1000));
println!("{} is done eating.", self.name);
}
```
We have three new lines. Weve added an argument, `table`. We access the
`Table`s list of forks, and then use `self.left` and `self.right` to access
the fork at that particular index. That gives us access to the `Mutex` at that
index, and we call `lock()` on it. If the mutex is currently being accessed by
someone else, we’ll block until it becomes available. We also have a call to
`thread::sleep` between the moment the first fork is picked up and the moment
the second fork is picked up, as the process of picking up a fork is not
immediate.
The call to `lock()` might fail, and if it does, we want to crash. In this
case, the error that could happen is that the mutex is [poisoned][poison],
which is what happens when the thread panics while the lock is held. Since this
shouldnt happen, we just use `unwrap()`.
[poison]: ../std/sync/struct.Mutex.html#poisoning
One other odd thing about these lines: weve named the results `_left` and
`_right`. Whats up with that underscore? Well, we arent planning on
_using_ the value inside the lock. We just want to acquire it. As such,
Rust will warn us that we never use the value. By using the underscore,
we tell Rust that this is what we intended, and it wont throw a warning.
What about releasing the lock? Well, that will happen when `_left` and
`_right` go out of scope, automatically.
```rust,ignore
let table = Arc::new(Table { forks: vec![
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
Mutex::new(()),
]});
```
Next, in `main()`, we make a new `Table` and wrap it in an `Arc<T>`.
arc stands for atomic reference count, and we need that to share
our `Table` across multiple threads. As we share it, the reference
count will go up, and when each thread ends, it will go back down.
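If `Arc<T>` is new to you, here is a small, self-contained sketch of sharing a value across threads with it (the `data` vector is only an example):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..3).map(|_| {
        let data = data.clone(); // bump the reference count for this thread
        thread::spawn(move || println!("{:?}", data))
    }).collect();

    for h in handles {
        h.join().unwrap();
    }
}
```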
```rust,ignore
let philosophers = vec![
Philosopher::new("Judith Butler", 0, 1),
Philosopher::new("Gilles Deleuze", 1, 2),
Philosopher::new("Karl Marx", 2, 3),
Philosopher::new("Emma Goldman", 3, 4),
Philosopher::new("Michel Foucault", 0, 4),
];
```
We need to pass in our `left` and `right` values to the constructors for our
`Philosopher`s. But theres one more detail here, and its _very_ important. If
you look at the pattern, its all consistent until the very end. Monsieur
Foucault should have `4, 0` as arguments, but instead, has `0, 4`. This is what
prevents deadlock, actually: one of our philosophers is left handed! This is
one way to solve the problem, and in my opinion, its the simplest. If you
change the order of the parameters, you will be able to observe the deadlock
taking place.
```rust,ignore
let handles: Vec<_> = philosophers.into_iter().map(|p| {
let table = table.clone();
thread::spawn(move || {
p.eat(&table);
})
}).collect();
```
Finally, inside of our `map()`/`collect()` loop, we call `table.clone()`. The
`clone()` method on `Arc<T>` is what bumps up the reference count, and when it
goes out of scope, it decrements the count. This is needed so that we know how
many references to `table` exist across our threads. If we didnt have a count,
we wouldn’t know when to deallocate it.
Youll notice we can introduce a new binding to `table` here, and it will
shadow the old one. This is often used so that you dont need to come up with
two unique names.
With this, our program works! Only two philosophers can eat at any one time,
and so youll get some output like this:
```text
Gilles Deleuze is eating.
Emma Goldman is eating.
Emma Goldman is done eating.
Gilles Deleuze is done eating.
Judith Butler is eating.
Karl Marx is eating.
Judith Butler is done eating.
Michel Foucault is eating.
Karl Marx is done eating.
Michel Foucault is done eating.
```
Congrats! Youve implemented a classic concurrency problem in Rust.

View File

@ -73,7 +73,7 @@ hello.rs:4 }
```
This [unfortunate error](https://github.com/rust-lang/rust/issues/22547) is
correct: documentation comments apply to the thing after them, and there's
correct; documentation comments apply to the thing after them, and there's
nothing after that last comment.
[rc-new]: https://doc.rust-lang.org/nightly/std/rc/struct.Rc.html#method.new
@ -193,7 +193,7 @@ If you want something that's not Rust code, you can add an annotation:
```
This will highlight according to whatever language you're showing off.
If you're just showing plain text, choose `text`.
If you're only showing plain text, choose `text`.
It's important to choose the correct annotation here, because `rustdoc` uses it
in an interesting way: It can be used to actually test your examples in a
@ -273,7 +273,7 @@ be hidden from the output, but will be used when compiling your code. You
can use this to your advantage. In this case, documentation comments need
to apply to some kind of function, so if I want to show you just a
documentation comment, I need to add a little function definition below
it. At the same time, it's just there to satisfy the compiler, so hiding
it. At the same time, it's only there to satisfy the compiler, so hiding
it makes the example more clear. You can use this technique to explain
longer examples in detail, while still preserving the testability of your
documentation.
@ -512,7 +512,7 @@ the documentation with comments. For example:
# fn foo() {}
```
is just
is:
~~~markdown
# Examples

View File

@ -3,6 +3,6 @@
So youve learned how to write some Rust code. But theres a difference between
writing *any* Rust code and writing *good* Rust code.
This section consists of relatively independent tutorials which show you how to
This chapter consists of relatively independent tutorials which show you how to
take your Rust to the next level. Common patterns and standard library features
will be introduced. Read these sections in any order of your choosing.

View File

@ -1,7 +1,8 @@
% Enums
An `enum` in Rust is a type that represents data that could be one of
several possible variants:
An `enum` in Rust is a type that represents data that is one of
several possible variants. Each variant in the `enum` can optionally
have data associated with it:
```rust
enum Message {
@ -12,9 +13,8 @@ enum Message {
}
```
Each variant can optionally have data associated with it. The syntax for
defining variants resembles the syntaxes used to define structs: you can
have variants with no data (like unit-like structs), variants with named
The syntax for defining variants resembles the syntaxes used to define structs:
you can have variants with no data (like unit-like structs), variants with named
data, and variants with unnamed data (like tuple structs). Unlike
separate struct definitions, however, an `enum` is a single type. A
value of the enum can match any of the variants. For this reason, an
@ -41,7 +41,7 @@ let y: BoardGameTurn = BoardGameTurn::Move { squares: 1 };
Both variants are named `Move`, but since theyre scoped to the name of
the enum, they can both be used without conflict.
A value of an enum type contains information about which variant it is,
A value of an `enum` type contains information about which variant it is,
in addition to any data associated with that variant. This is sometimes
referred to as a tagged union, since the data includes a tag
indicating what type it is. The compiler uses this information to
@ -62,12 +62,11 @@ learn in the next section. We dont know enough about Rust to implement
equality yet, but well find out in the [`traits`][traits] section.
[match]: match.html
[if-let]: if-let.html
[traits]: traits.html
# Constructors as functions
An enums constructors can also be used like functions. For example:
An `enum` constructor can also be used like a function. For example:
```rust
# enum Message {
@ -76,7 +75,7 @@ An enums constructors can also be used like functions. For example:
let m = Message::Write("Hello, world".to_string());
```
Is the same as
is the same as
```rust
# enum Message {

View File

@ -5,18 +5,18 @@ errors in a particular way. Generally speaking, error handling is divided into
two broad categories: exceptions and return values. Rust opts for return
values.
In this chapter, we intend to provide a comprehensive treatment of how to deal
In this section, we intend to provide a comprehensive treatment of how to deal
with errors in Rust. More than that, we will attempt to introduce error handling
one piece at a time so that you'll come away with a solid working knowledge of
how everything fits together.
When done naïvely, error handling in Rust can be verbose and annoying. This
chapter will explore those stumbling blocks and demonstrate how to use the
section will explore those stumbling blocks and demonstrate how to use the
standard library to make error handling concise and ergonomic.
# Table of Contents
This chapter is very long, mostly because we start at the very beginning with
This section is very long, mostly because we start at the very beginning with
sum types and combinators, and try to motivate the way Rust does error handling
incrementally. As such, programmers with experience in other expressive type
systems may want to jump around.
@ -117,8 +117,8 @@ the first example. This is because the
panic is embedded in the calls to `unwrap`.
To “unwrap” something in Rust is to say, “Give me the result of the
computation, and if there was an error, just panic and stop the program.”
It would be better if we just showed the code for unwrapping because it is so
computation, and if there was an error, panic and stop the program.”
It would be better if we showed the code for unwrapping because it is so
simple, but to do that, we will first need to explore the `Option` and `Result`
types. Both of these types have a method called `unwrap` defined on them.
@ -154,7 +154,7 @@ fn find(haystack: &str, needle: char) -> Option<usize> {
}
```
Notice that when this function finds a matching character, it doesn't just
Notice that when this function finds a matching character, it doesn't only
return the `offset`. Instead, it returns `Some(offset)`. `Some` is a variant or
a *value constructor* for the `Option` type. You can think of it as a function
with the type `fn<T>(value: T) -> Option<T>`. Correspondingly, `None` is also a
@ -182,7 +182,7 @@ analysis is the only way to get at the value stored inside an `Option<T>`. This
means that you, as the programmer, must handle the case when an `Option<T>` is
`None` instead of `Some(t)`.
But wait, what about `unwrap`,which we used [`previously`](#code-unwrap-double)?
But wait, what about `unwrap`, which we used [previously](#code-unwrap-double)?
There was no case analysis there! Instead, the case analysis was put inside the
`unwrap` method for you. You could define it yourself if you want:
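Here is a rough sketch of what that definition might look like, using a made-up `MyOption` type since `Option` itself lives in the standard library:

```rust
enum MyOption<T> {
    MySome(T),
    MyNone,
}

impl<T> MyOption<T> {
    fn unwrap(self) -> T {
        match self {
            MyOption::MySome(value) => value,
            MyOption::MyNone => panic!("called `unwrap()` on a `MyNone` value"),
        }
    }
}

fn main() {
    let x = MyOption::MySome(5);
    assert_eq!(x.unwrap(), 5);
    let _empty: MyOption<i32> = MyOption::MyNone; // never unwrapped, or we'd panic
}
```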
@ -216,7 +216,7 @@ we saw how to use `find` to discover the extension in a file name. Of course,
not all file names have a `.` in them, so it's possible that the file name has
no extension. This *possibility of absence* is encoded into the types using
`Option<T>`. In other words, the compiler will force us to address the
possibility that an extension does not exist. In our case, we just print out a
possibility that an extension does not exist. In our case, we only print out a
message saying as such.
Getting the extension of a file name is a pretty common operation, so it makes
@ -248,7 +248,7 @@ tiresome.
In fact, the case analysis in `extension_explicit` follows a very common
pattern: *map* a function on to the value inside of an `Option<T>`, unless the
option is `None`, in which case, just return `None`.
option is `None`, in which case, return `None`.
Rust has parametric polymorphism, so it is very easy to define a combinator
that abstracts this pattern:
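A freestanding version of that combinator might look something like this (sketched as a plain function rather than a method, to keep it self-contained):

```rust
fn map<F, T, A>(option: Option<T>, f: F) -> Option<A> where F: FnOnce(T) -> A {
    match option {
        None => None,
        Some(value) => Some(f(value)),
    }
}

fn main() {
    assert_eq!(map(Some(2), |n| n * 10), Some(20));
    assert_eq!(map(None::<i32>, |n| n * 10), None);
}
```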
@ -350,7 +350,7 @@ fn file_name(file_path: &str) -> Option<&str> {
}
```
You might think that we could just use the `map` combinator to reduce the case
You might think that we could use the `map` combinator to reduce the case
analysis, but its type doesn't quite fit. Namely, `map` takes a function that
does something only with the inner value. The result of that function is then
*always* [rewrapped with `Some`](#code-option-map). Instead, we need something
@ -636,7 +636,7 @@ Thus far, we've looked at error handling where everything was either an
`Option` and a `Result`? Or what if you have a `Result<T, Error1>` and a
`Result<T, Error2>`? Handling *composition of distinct error types* is the next
challenge in front of us, and it will be the major theme throughout the rest of
this chapter.
this section.
## Composing `Option` and `Result`
@ -648,7 +648,7 @@ Of course, in real code, things aren't always as clean. Sometimes you have a
mix of `Option` and `Result` types. Must we resort to explicit case analysis,
or can we continue using combinators?
For now, let's revisit one of the first examples in this chapter:
For now, let's revisit one of the first examples in this section:
```rust,should_panic
use std::env;
@ -670,7 +670,7 @@ The tricky aspect here is that `argv.nth(1)` produces an `Option` while
with both an `Option` and a `Result`, the solution is *usually* to convert the
`Option` to a `Result`. In our case, the absence of a command line parameter
(from `env::args()`) means the user didn't invoke the program correctly. We
could just use a `String` to describe the error. Let's try:
could use a `String` to describe the error. Let's try:
<span id="code-error-double-string"></span>
@ -709,7 +709,7 @@ fn ok_or<T, E>(option: Option<T>, err: E) -> Result<T, E> {
The other new combinator used here is
[`Result::map_err`](../std/result/enum.Result.html#method.map_err).
This is just like `Result::map`, except it maps a function on to the *error*
This is like `Result::map`, except it maps a function on to the *error*
portion of a `Result` value. If the `Result` is an `Ok(...)` value, then it is
returned unmodified.
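Here is a small sketch of those two combinators side by side, separate from the program above:

```rust
fn main() {
    // `ok_or` turns an `Option` into a `Result`, supplying the error value.
    let n: Result<i32, String> = Some(5).ok_or("no number given".to_string());

    // `map_err` only touches the error side; an `Ok` value passes through unchanged.
    let doubled = n.map(|v| v * 2).map_err(|e| format!("argument error: {}", e));

    assert_eq!(doubled, Ok(10));
}
```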
@ -841,7 +841,7 @@ example, the very last call to `map` multiplies the `Ok(...)` value (which is
an `i32`) by `2`. If an error had occurred before that point, this operation
would have been skipped because of how `map` is defined.
`map_err` is the trick that makes all of this work. `map_err` is just like
`map_err` is the trick that makes all of this work. `map_err` is like
`map`, except it applies a function to the `Err(...)` value of a `Result`. In
this case, we want to convert all of our errors to one type: `String`. Since
both `io::Error` and `num::ParseIntError` implement `ToString`, we can call the
@ -887,7 +887,7 @@ fn main() {
}
```
Reasonable people can disagree over whether this code is better that the code
Reasonable people can disagree over whether this code is better than the code
that uses combinators, but if you aren't familiar with the combinator approach,
this code looks simpler to read to me. It uses explicit case analysis with
`match` and `if let`. If an error occurs, it simply stops executing the
@ -901,7 +901,7 @@ reduce explicit case analysis. Combinators aren't the only way.
## The `try!` macro
A cornerstone of error handling in Rust is the `try!` macro. The `try!` macro
abstracts case analysis just like combinators, but unlike combinators, it also
abstracts case analysis like combinators, but unlike combinators, it also
abstracts *control flow*. Namely, it can abstract the *early return* pattern
seen above.
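As a quick sketch of that early-return behavior (the file path and function name here are made up for illustration):

```rust
use std::fs::File;
use std::io::{self, Read};

fn read_contents(path: &str) -> Result<String, io::Error> {
    let mut file = try!(File::open(path)); // on `Err`, returns from `read_contents` immediately
    let mut contents = String::new();
    try!(file.read_to_string(&mut contents));
    Ok(contents)
}

fn main() {
    match read_contents("/etc/hostname") {
        Ok(text) => println!("{}", text),
        Err(err) => println!("could not read the file: {}", err),
    }
}
```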
@ -1319,7 +1319,7 @@ and [`cause`](../std/error/trait.Error.html#method.cause), but the
limitation remains: `Box<Error>` is opaque. (N.B. This isn't entirely
true because Rust does have runtime reflection, which is useful in
some scenarios that are [beyond the scope of this
chapter](https://crates.io/crates/error).)
section](https://crates.io/crates/error).)
It's time to revisit our custom `CliError` type and tie everything together.
@ -1461,7 +1461,7 @@ expose its representation (like
[`ErrorKind`](../std/io/enum.ErrorKind.html)) or keep it hidden (like
[`ParseIntError`](../std/num/struct.ParseIntError.html)). Regardless
of how you do it, it's usually good practice to at least provide some
information about the error beyond just its `String`
information about the error beyond its `String`
representation. But certainly, this will vary depending on use cases.
At a minimum, you should probably implement the
@ -1486,7 +1486,7 @@ and [`fmt::Result`](../std/fmt/type.Result.html).
# Case study: A program to read population data
This chapter was long, and depending on your background, it might be
This section was long, and depending on your background, it might be
rather dense. While there is plenty of example code to go along with
the prose, most of it was specifically designed to be pedagogical. So,
we're going to do something new: a case study.
@ -1499,7 +1499,7 @@ that can go wrong!
The data we'll be using comes from the [Data Science
Toolkit][11]. I've prepared some data from it for this exercise. You
can either grab the [world population data][12] (41MB gzip compressed,
145MB uncompressed) or just the [US population data][13] (2.2MB gzip
145MB uncompressed) or only the [US population data][13] (2.2MB gzip
compressed, 7.2MB uncompressed).
Up until now, we've kept the code limited to Rust's standard library. For a real
@ -1512,7 +1512,7 @@ and [`rustc-serialize`](https://crates.io/crates/rustc-serialize) crates.
We're not going to spend a lot of time on setting up a project with
Cargo because it is already covered well in [the Cargo
chapter](../book/hello-cargo.html) and [Cargo's documentation][14].
section](../book/hello-cargo.html) and [Cargo's documentation][14].
To get started from scratch, run `cargo new --bin city-pop` and make sure your
`Cargo.toml` looks something like this:
@ -1573,11 +1573,11 @@ fn main() {
let matches = match opts.parse(&args[1..]) {
Ok(m) => { m }
Err(e) => { panic!(e.to_string()) }
Err(e) => { panic!(e.to_string()) }
};
if matches.opt_present("h") {
print_usage(&program, opts);
return;
return;
}
let data_path = args[1].clone();
let city = args[2].clone();
@ -1613,6 +1613,9 @@ CSV data given to us and print out a field in matching rows. Let's do it. (Make
sure to add `extern crate csv;` to the top of your file.)
```rust,ignore
use std::fs::File;
use std::path::Path;
// This struct represents the data in each row of the CSV file.
// Type based decoding absolves us of a lot of the nitty gritty error
// handling, like parsing strings as integers or floats.
@ -1656,7 +1659,7 @@ fn main() {
let data_path = Path::new(&data_file);
let city = args[2].clone();
let file = fs::File::open(data_path).unwrap();
let file = File::open(data_path).unwrap();
let mut rdr = csv::Reader::from_reader(file);
for row in rdr.decode::<Row>() {
@ -1674,7 +1677,7 @@ fn main() {
Let's outline the errors. We can start with the obvious: the three places that
`unwrap` is called:
1. [`fs::File::open`](../std/fs/struct.File.html#method.open)
1. [`File::open`](../std/fs/struct.File.html#method.open)
can return an
[`io::Error`](../std/io/struct.Error.html).
2. [`csv::Reader::decode`](http://burntsushi.net/rustdoc/csv/struct.Reader.html#method.decode)
@ -1703,7 +1706,7 @@ compiler can no longer reason about its underlying type.
[Previously](#the-limits-of-combinators) we started refactoring our code by
changing the type of our function from `T` to `Result<T, OurErrorType>`. In
this case, `OurErrorType` is just `Box<Error>`. But what's `T`? And can we add
this case, `OurErrorType` is only `Box<Error>`. But what's `T`? And can we add
a return type to `main`?
The answer to the second question is no, we can't. That means we'll need to
@ -1734,7 +1737,7 @@ fn print_usage(program: &str, opts: Options) {
fn search<P: AsRef<Path>>(file_path: P, city: &str) -> Vec<PopulationCount> {
let mut found = vec![];
let file = fs::File::open(file_path).unwrap();
let file = File::open(file_path).unwrap();
let mut rdr = csv::Reader::from_reader(file);
for row in rdr.decode::<Row>() {
let row = row.unwrap();
@ -1792,11 +1795,15 @@ To convert this to proper error handling, we need to do the following:
Let's try it:
```rust,ignore
use std::error::Error;
// The rest of the code before this is unchanged
fn search<P: AsRef<Path>>
(file_path: P, city: &str)
-> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
let mut found = vec![];
let file = try!(fs::File::open(file_path));
let file = try!(File::open(file_path));
let mut rdr = csv::Reader::from_reader(file);
for row in rdr.decode::<Row>() {
let row = try!(row);
@ -1900,8 +1907,13 @@ let city = if !matches.free.is_empty() {
return;
};
for pop in search(&data_file, &city) {
println!("{}, {}: {:?}", pop.city, pop.country, pop.count);
match search(&data_file, &city) {
Ok(pops) => {
for pop in pops {
println!("{}, {}: {:?}", pop.city, pop.country, pop.count);
}
}
Err(err) => println!("{}", err)
}
...
```
@ -1921,16 +1933,20 @@ parser out of
But how can we use the same code over both types? There's actually a
couple ways we could go about this. One way is to write `search` such
that it is generic on some type parameter `R` that satisfies
`io::Read`. Another way is to just use trait objects:
`io::Read`. Another way is to use trait objects:
```rust,ignore
use std::io;
// The rest of the code before this is unchanged
fn search<P: AsRef<Path>>
(file_path: &Option<P>, city: &str)
-> Result<Vec<PopulationCount>, Box<Error+Send+Sync>> {
let mut found = vec![];
let input: Box<io::Read> = match *file_path {
None => Box::new(io::stdin()),
Some(ref file_path) => Box::new(try!(fs::File::open(file_path))),
Some(ref file_path) => Box::new(try!(File::open(file_path))),
};
let mut rdr = csv::Reader::from_reader(input);
// The rest remains unchanged!
@ -2017,7 +2033,7 @@ fn search<P: AsRef<Path>>
let mut found = vec![];
let input: Box<io::Read> = match *file_path {
None => Box::new(io::stdin()),
Some(ref file_path) => Box::new(try!(fs::File::open(file_path))),
Some(ref file_path) => Box::new(try!(File::open(file_path))),
};
let mut rdr = csv::Reader::from_reader(input);
for row in rdr.decode::<Row>() {
@ -2078,7 +2094,7 @@ opts.optflag("q", "quiet", "Silences errors and warnings.");
...
```
Now we just need to implement our “quiet” functionality. This requires us to
Now we only need to implement our “quiet” functionality. This requires us to
tweak the case analysis in `main`:
```rust,ignore
@ -2105,13 +2121,13 @@ handling.
# The Short Story
Since this chapter is long, it is useful to have a quick summary for error
Since this section is long, it is useful to have a quick summary for error
handling in Rust. These are some good “rules of thumb." They are emphatically
*not* commandments. There are probably good reasons to break every one of these
heuristics!
* If you're writing short example code that would be overburdened by error
handling, it's probably just fine to use `unwrap` (whether that's
handling, it's probably fine to use `unwrap` (whether that's
[`Result::unwrap`](../std/result/enum.Result.html#method.unwrap),
[`Option::unwrap`](../std/option/enum.Option.html#method.unwrap)
or preferably

View File

@ -367,7 +367,7 @@ artifact.
A few examples of how this model can be used are:
* A native build dependency. Sometimes some C/C++ glue is needed when writing
some Rust code, but distribution of the C/C++ code in a library format is just
some Rust code, but distribution of the C/C++ code in a library format is
a burden. In this case, the code will be archived into `libfoo.a` and then the
Rust crate would declare a dependency via `#[link(name = "foo", kind =
"static")]`.
@ -478,6 +478,8 @@ are:
* `aapcs`
* `cdecl`
* `fastcall`
* `vectorcall`
This is currently hidden behind the `abi_vectorcall` gate and is subject to change.
* `Rust`
* `rust-intrinsic`
* `system`
@ -490,7 +492,7 @@ interoperating with the target's libraries. For example, on win32 with a x86
architecture, this means that the ABI used would be `stdcall`. On x86_64,
however, Windows uses the `C` calling convention, so `C` would be used. This
means that in our previous example, we could have used `extern "system" { ... }`
to define a block for all Windows systems, not just x86 ones.
to define a block for all Windows systems, not only x86 ones.
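For instance, a sketch of such a block might look like this (the Win32 function shown is only an illustration, and the example is marked `ignore` since it only links on Windows):

```rust,ignore
extern "system" {
    fn SetEnvironmentVariableA(n: *const u8, v: *const u8) -> i32;
}

fn main() {}
```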
# Interoperability with foreign code

View File

@ -124,7 +124,7 @@ statement `x + 1;` doesnt return a value. There are two kinds of statements i
Rust: declaration statements and expression statements. Everything else is
an expression. Lets talk about declaration statements first.
In some languages, variable bindings can be written as expressions, not just
In some languages, variable bindings can be written as expressions, not
statements. Like Ruby:
```ruby
@ -145,7 +145,7 @@ Note that assigning to an already-bound variable (e.g. `y = 5`) is still an
expression, although its value is not particularly useful. Unlike other
languages where an assignment evaluates to the assigned value (e.g. `5` in the
previous example), in Rust the value of an assignment is an empty tuple `()`
because the assigned value can have [just one owner](ownership.html), and any
because the assigned value can have [only one owner](ownership.html), and any
other returned value would be too surprising:
```rust

View File

@ -37,7 +37,7 @@ let x: Option<f64> = Some(5);
// found `core::option::Option<_>` (expected f64 but found integral variable)
```
That doesnt mean we cant make `Option<T>`s that hold an `f64`! They just have
That doesnt mean we cant make `Option<T>`s that hold an `f64`! They have
to match up:
```rust
@ -118,7 +118,7 @@ let float_origin = Point { x: 0.0, y: 0.0 };
Similar to functions, the `<T>` is where we declare the generic parameters,
and we then use `x: T` in the type declaration, too.
When you want to add an implementation for the generic `struct`, you just
When you want to add an implementation for the generic `struct`, you
declare the type parameter after the `impl`:
```rust

View File

@ -1,13 +1,13 @@
% Getting Started
This first section of the book will get us going with Rust and its tooling.
This first chapter of the book will get us going with Rust and its tooling.
First, well install Rust. Then, the classic Hello World program. Finally,
well talk about Cargo, Rusts build system and package manager.
# Installing Rust
The first step to using Rust is to install it. Generally speaking, youll need
an Internet connection to run the commands in this chapter, as well be
an Internet connection to run the commands in this section, as well be
downloading Rust from the internet.
Well be showing off a number of commands using a terminal, and those lines all
@ -63,6 +63,13 @@ these platforms are required to have each of the following:
| Target | std |rustc|cargo| notes |
|-------------------------------|-----|-----|-----|----------------------------|
| `i686-pc-windows-msvc` | ✓ | ✓ | ✓ | 32-bit MSVC (Windows 7+) |
| `x86_64-unknown-linux-musl` | ✓ | | | 64-bit Linux with MUSL |
| `arm-linux-androideabi` | ✓ | | | ARM Android |
| `arm-unknown-linux-gnueabi` | ✓ | ✓ | | ARM Linux (2.6.18+) |
| `arm-unknown-linux-gnueabihf` | ✓ | ✓ | | ARM Linux (2.6.18+) |
| `aarch64-unknown-linux-gnu` | ✓ | | | ARM64 Linux (2.6.18+) |
| `mips-unknown-linux-gnu` | ✓ | | | MIPS Linux (2.6.18+) |
| `mipsel-unknown-linux-gnu` | ✓ | | | MIPS (LE) Linux (2.6.18+) |
### Tier 3
@ -75,15 +82,8 @@ unofficial locations.
| Target | std |rustc|cargo| notes |
|-------------------------------|-----|-----|-----|----------------------------|
| `x86_64-unknown-linux-musl` | ✓ | | | 64-bit Linux with MUSL |
| `arm-linux-androideabi` | ✓ | | | ARM Android |
| `i686-linux-android` | ✓ | | | 32-bit x86 Android |
| `aarch64-linux-android` | ✓ | | | ARM64 Android |
| `arm-unknown-linux-gnueabi` | ✓ | ✓ | | ARM Linux (2.6.18+) |
| `arm-unknown-linux-gnueabihf` | ✓ | ✓ | | ARM Linux (2.6.18+) |
| `aarch64-unknown-linux-gnu` | ✓ | | | ARM64 Linux (2.6.18+) |
| `mips-unknown-linux-gnu` | ✓ | | | MIPS Linux (2.6.18+) |
| `mipsel-unknown-linux-gnu` | ✓ | | | MIPS (LE) Linux (2.6.18+) |
| `powerpc-unknown-linux-gnu` | ✓ | | | PowerPC Linux (2.6.18+) |
| `i386-apple-ios` | ✓ | | | 32-bit x86 iOS |
| `x86_64-apple-ios` | ✓ | | | 64-bit x86 iOS |
@ -140,7 +140,7 @@ If you're on Windows, please download the appropriate [installer][install-page].
## Uninstalling
Uninstalling Rust is as easy as installing it. On Linux or Mac, just run
Uninstalling Rust is as easy as installing it. On Linux or Mac, run
the uninstall script:
```bash
@ -192,7 +192,7 @@ that tradition.
The nice thing about starting with such a simple program is that you can
quickly verify that your compiler is installed, and that it's working properly.
Printing information to the screen is also just a pretty common thing to do, so
Printing information to the screen is also a pretty common thing to do, so
practicing it early on is good.
> Note: This book assumes basic familiarity with the command line. Rust itself
@ -248,7 +248,7 @@ $ ./main
Hello, world!
```
In Windows, just replace `main` with `main.exe`. Regardless of your operating
In Windows, replace `main` with `main.exe`. Regardless of your operating
system, you should see the string `Hello, world!` print to the terminal. If you
did, then congratulations! You've officially written a Rust program. That makes
you a Rust programmer! Welcome.
@ -289,7 +289,7 @@ that its indented with four spaces, not tabs.
The second important part is the `println!()` line. This is calling a Rust
*[macro]*, which is how metaprogramming is done in Rust. If it were calling a
function instead, it would look like this: `println()` (without the !). We'll
discuss Rust macros in more detail later, but for now you just need to
discuss Rust macros in more detail later, but for now you only need to
know that when you see a `!` that means that youre calling a macro instead of
a normal function.
@ -303,10 +303,10 @@ prints the string to the screen. Easy enough!
[statically allocated]: the-stack-and-the-heap.html
The line ends with a semicolon (`;`). Rust is an *[expression oriented]*
language, which means that most things are expressions, rather than statements.
The `;` indicates that this expression is over, and the next one is ready to
begin. Most lines of Rust code end with a `;`.
The line ends with a semicolon (`;`). Rust is an *[expression-oriented
language]*, which means that most things are expressions, rather than
statements. The `;` indicates that this expression is over, and the next one is
ready to begin. Most lines of Rust code end with a `;`.
[expression-oriented language]: glossary.html#expression-oriented-language
@ -456,7 +456,7 @@ authors = [ "Your name <you@example.com>" ]
The first line, `[package]`, indicates that the following statements are
configuring a package. As we add more information to this file, well add other
sections, but for now, we just have the package configuration.
sections, but for now, we only have the package configuration.
The other three lines set the three bits of configuration that Cargo needs to
know to compile your program: its name, what version it is, and who wrote it.
@ -505,9 +505,11 @@ Cargo checks to see if any of your projects files have been modified, and onl
rebuilds your project if theyve changed since the last time you built it.
With simple projects, Cargo doesn't bring a whole lot over just using `rustc`,
but it will become useful in future. With complex projects composed of multiple
crates, its much easier to let Cargo coordinate the build. With Cargo, you can
just run `cargo build`, and it should work the right way.
but it will become useful in future. This is especially true when you start
using crates; these are synonymous with a library or package in other
programming languages. For complex projects composed of multiple crates, its
much easier to let Cargo coordinate the build. Using Cargo, you can run `cargo
build`, and it should work the right way.
## Building for Release

View File

@ -1,10 +1,14 @@
% Guessing Game
For our first project, well implement a classic beginner programming problem:
the guessing game. Heres how it works: Our program will generate a random
integer between one and a hundred. It will then prompt us to enter a guess.
Upon entering our guess, it will tell us if were too low or too high. Once we
guess correctly, it will congratulate us. Sounds good?
Lets learn some Rust! For our first project, well implement a classic
beginner programming problem: the guessing game. Heres how it works: Our
program will generate a random integer between one and a hundred. It will then
prompt us to enter a guess. Upon entering our guess, it will tell us if were
too low or too high. Once we guess correctly, it will congratulate us. Sounds
good?
Along the way, well learn a little bit about Rust. The next chapter, Syntax
and Semantics, will dive deeper into each part.
# Set up
@ -64,7 +68,7 @@ Hello, world!
```
Great! The `run` command comes in handy when you need to rapidly iterate on a
project. Our game is just such a project, we need to quickly test each
project. Our game is such a project: we need to quickly test each
iteration before moving on to the next one.
# Processing a Guess
@ -290,12 +294,12 @@ src/main.rs:10 io::stdin().read_line(&mut guess);
Rust warns us that we havent used the `Result` value. This warning comes from
a special annotation that `io::Result` has. Rust is trying to tell you that
you havent handled a possible error. The right way to suppress the error is
to actually write error handling. Luckily, if we just want to crash if theres
to actually write error handling. Luckily, if we want to crash if theres
a problem, we can use these two little methods. If we can recover from the
error somehow, wed do something else, but well save that for a future
project.
Theres just one line of this first example left:
Theres only one line of this first example left:
```rust,ignore
println!("You guessed: {}", guess);
@ -404,7 +408,7 @@ $ cargo build
Thats right, no output! Cargo knows that our project has been built, and that
all of its dependencies are built, and so theres no reason to do all that
stuff. With nothing to do, it simply exits. If we open up `src/main.rs` again,
make a trivial change, and then save it again, well just see one line:
make a trivial change, and then save it again, well only see one line:
```bash
$ cargo build
@ -500,7 +504,7 @@ so we need `1` and `101` to get a number ranging from one to a hundred.
[concurrency]: concurrency.html
The second line just prints out the secret number. This is useful while
The second line prints out the secret number. This is useful while
were developing our program, so we can easily test it out. But well be
deleting it for the final version. Its not much of a game if it prints out
the answer when you start it up!
@ -701,7 +705,7 @@ input in it. The `trim()` method on `String`s will eliminate any white space at
the beginning and end of our string. This is important, as we had to press the
return key to satisfy `read_line()`. This means that if we type `5` and hit
return, `guess` looks like this: `5\n`. The `\n` represents newline, the
enter key. `trim()` gets rid of this, leaving our string with just the `5`. The
enter key. `trim()` gets rid of this, leaving our string with only the `5`. The
[`parse()` method on strings][parse] parses a string into some kind of number.
Since it can parse a variety of numbers, we need to give Rust a hint as to the
exact type of number we want. Hence, `let guess: u32`. The colon (`:`) after
@ -849,8 +853,8 @@ fn main() {
By adding the `break` line after the `You win!`, well exit the loop when we
win. Exiting the loop also means exiting the program, since its the last
thing in `main()`. We have just one more tweak to make: when someone inputs a
non-number, we dont want to quit, we just want to ignore it. We can do that
thing in `main()`. We have only one more tweak to make: when someone inputs a
non-number, we dont want to quit, we want to ignore it. We can do that
like this:
```rust,ignore
@ -904,12 +908,12 @@ let guess: u32 = match guess.trim().parse() {
```
This is how you generally move from crash on error to actually handle the
returned by `parse()` is an `enum` just like `Ordering`, but in this case, each
returned by `parse()` is an `enum` like `Ordering`, but in this case, each
variant has some data associated with it: `Ok` is a success, and `Err` is a
failure. Each contains more information: the successfully parsed integer, or an
error type. In this case, we `match` on `Ok(num)`, which sets the inner value
of the `Ok` to the name `num`, and then we just return it on the right-hand
side. In the `Err` case, we dont care what kind of error it is, so we just
of the `Ok` to the name `num`, and then we return it on the right-hand
side. In the `Err` case, we dont care what kind of error it is, so we
use `_` instead of a name. This ignores the error, and `continue` causes us
to go to the next iteration of the `loop`.
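Pulling those pieces together outside of the game, a sketch of the same pattern might look like this:

```rust
fn main() {
    let inputs = ["not a number", "42"];

    for input in &inputs {
        let n: u32 = match input.trim().parse() {
            Ok(num) => num,
            Err(_) => continue, // ignore anything that isn't a number
        };
        println!("parsed: {}", n);
    }
}
```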

View File

@ -37,7 +37,7 @@ which gives us a reference to the next value of the iterator. `next` returns an
`None`, we `break` out of the loop.
This code sample is basically the same as our `for` loop version. The `for`
loop is just a handy way to write this `loop`/`match`/`break` construct.
loop is a handy way to write this `loop`/`match`/`break` construct.
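Roughly, the construct the `for` loop is standing in for looks like this (a sketch, not the compiler's exact expansion):

```rust
fn main() {
    let nums = vec![1, 2, 3];
    let mut iter = nums.iter();

    loop {
        match iter.next() {
            Some(n) => println!("{}", n),
            None => break,
        }
    }
}
```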
`for` loops aren't the only thing that uses iterators, however. Writing your
own iterator involves implementing the `Iterator` trait. While doing that is
@ -94,8 +94,8 @@ Now we're explicitly dereferencing `num`. Why does `&nums` give us
references? Firstly, because we explicitly asked it to with
`&`. Secondly, if it gave us the data itself, we would have to be its
owner, which would involve making a copy of the data and giving us the
copy. With references, we're just borrowing a reference to the data,
and so it's just passing a reference, without needing to do the move.
copy. With references, we're only borrowing a reference to the data,
and so it's only passing a reference, without needing to do the move.
So, now that we've established that ranges are often not what you want, let's
talk about what you do want instead.
@ -278,7 +278,7 @@ doesn't print any numbers:
```
If you are trying to execute a closure on an iterator for its side effects,
just use `for` instead.
use `for` instead.
There are tons of interesting iterator adaptors. `take(n)` will return an
iterator over the next `n` elements of the original iterator. Let's try it out

View File

@ -1,6 +1,6 @@
% Learn Rust
Welcome! This section has a few tutorials that teach you Rust through building
Welcome! This chapter has a few tutorials that teach you Rust through building
projects. Youll get a high-level overview, but well skim over the details.
If youd prefer a more from the ground up-style experience, check

View File

@ -84,7 +84,7 @@ We previously talked a little about [function syntax][functions], but we didn
discuss the `<>`s after a functions name. A function can have generic
parameters between the `<>`s, of which lifetimes are one kind. Well discuss
other kinds of generics [later in the book][generics], but for now, lets
just focus on the lifetimes aspect.
focus on the lifetimes aspect.
[functions]: functions.html
[generics]: generics.html
@ -103,13 +103,13 @@ Then in our parameter list, we use the lifetimes weve named:
...(x: &'a i32)
```
If we wanted an `&mut` reference, wed do this:
If we wanted a `&mut` reference, wed do this:
```rust,ignore
...(x: &'a mut i32)
```
If you compare `&mut i32` to `&'a mut i32`, theyre the same, its just that
If you compare `&mut i32` to `&'a mut i32`, theyre the same, its that
the lifetime `'a` has snuck in between the `&` and the `mut i32`. We read `&mut
i32` as a mutable reference to an `i32` and `&'a mut i32` as a mutable
reference to an `i32` with the lifetime `'a`.
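As a tiny sketch of this syntax in a complete function (the function itself is made up):

```rust
fn first<'a>(x: &'a i32, _y: &i32) -> &'a i32 {
    x // the returned reference lives as long as `x` does
}

fn main() {
    let value = 7;
    let other = 8;
    println!("{}", first(&value, &other));
}
```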
@ -175,7 +175,7 @@ fn main() {
```
As you can see, we need to declare a lifetime for `Foo` in the `impl` line. We repeat
`'a` twice, just like on functions: `impl<'a>` defines a lifetime `'a`, and `Foo<'a>`
`'a` twice, like on functions: `impl<'a>` defines a lifetime `'a`, and `Foo<'a>`
uses it.
## Multiple lifetimes
@ -353,8 +353,8 @@ fn frob<'a, 'b>(s: &'a str, t: &'b str) -> &str; // Expanded: Output lifetime is
fn get_mut(&mut self) -> &mut T; // elided
fn get_mut<'a>(&'a mut self) -> &'a mut T; // expanded
fn args<T:ToCStr>(&mut self, args: &[T]) -> &mut Command; // elided
fn args<'a, 'b, T:ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command; // expanded
fn args<T: ToCStr>(&mut self, args: &[T]) -> &mut Command; // elided
fn args<'a, 'b, T: ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command; // expanded
fn new(buf: &mut [u8]) -> BufWriter; // elided
fn new<'a>(buf: &'a mut [u8]) -> BufWriter<'a>; // expanded

View File

@ -285,9 +285,11 @@ This expands to
```text
const char *state = "reticulating splines";
int state = get_log_state();
if (state > 0) {
printf("log(%d): %s\n", state, state);
{
int state = get_log_state();
if (state > 0) {
printf("log(%d): %s\n", state, state);
}
}
```
@ -476,19 +478,19 @@ which syntactic form it matches.
There are additional rules regarding the next token after a metavariable:
* `expr` variables may only be followed by one of: `=> , ;`
* `ty` and `path` variables may only be followed by one of: `=> , : = > as`
* `pat` variables may only be followed by one of: `=> , = if in`
* `expr` and `stmt` variables may only be followed by one of: `=> , ;`
* `ty` and `path` variables may only be followed by one of: `=> , = | ; : > [ { as where`
* `pat` variables may only be followed by one of: `=> , = | if in`
* Other variables may be followed by any token.
These rules provide some flexibility for Rusts syntax to evolve without
breaking existing macros.
The macro system does not deal with parse ambiguity at all. For example, the
grammar `$($t:ty)* $e:expr` will always fail to parse, because the parser would
be forced to choose between parsing `$t` and parsing `$e`. Changing the
grammar `$($i:ident)* $e:expr` will always fail to parse, because the parser would
be forced to choose between parsing `$i` and parsing `$e`. Changing the
invocation syntax to put a distinctive token in front can solve the problem. In
this case, you can write `$(T $t:ty)* E $e:exp`.
this case, you can write `$(I $i:ident)* E $e:expr`.
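For example, a sketch of a macro that uses this distinctive-token trick (the macro name is made up):

```rust
macro_rules! idents_then_expr {
    ($(I $i:ident)* E $e:expr) => { $e };
}

fn main() {
    let x = idents_then_expr!(I a I b E 1 + 2);
    println!("{}", x); // prints 3
}
```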
[item]: ../reference.html#items
@ -611,8 +613,7 @@ to define a single macro that works both inside and outside our library. The
function name will expand to either `::increment` or `::mylib::increment`.
To keep this system simple and correct, `#[macro_use] extern crate ...` may
only appear at the root of your crate, not inside `mod`. This ensures that
`$crate` is a single identifier.
only appear at the root of your crate, not inside `mod`.
# The deep end

View File

@ -23,26 +23,24 @@ match x {
`match` takes an expression and then branches based on its value. Each arm of
the branch is of the form `val => expression`. When the value matches, that arm's
expression will be evaluated. It's called `match` because of the term pattern
matching, which `match` is an implementation of. Theres an [entire section on
matching, which `match` is an implementation of. Theres a [separate section on
patterns][patterns] that covers all the patterns that are possible here.
[patterns]: patterns.html
So whats the big advantage? Well, there are a few. First of all, `match`
enforces exhaustiveness checking. Do you see that last arm, the one with the
underscore (`_`)? If we remove that arm, Rust will give us an error:
One of the many advantages of `match` is that it enforces exhaustiveness checking.
For example, if we remove the last arm with the underscore `_`, the compiler will
give us an error:
```text
error: non-exhaustive patterns: `_` not covered
```
In other words, Rust is trying to tell us we forgot a value. Because `x` is an
integer, Rust knows that it can have a number of different values for
example, `6`. Without the `_`, however, there is no arm that could match, and
so Rust refuses to compile the code. `_` acts like a catch-all arm. If none
of the other arms match, the arm with `_` will, and since we have this
catch-all arm, we now have an arm for every possible value of `x`, and so our
program will compile successfully.
Rust is telling us that we forgot a value. The compiler infers that `x` is a
32-bit integer, which can hold any value from -2,147,483,648 to 2,147,483,647.
The `_` acts as a 'catch-all', and will catch all possible values that *aren't*
specified in an arm of `match`. As you can see in the previous example, we
provide `match` arms for the integers 1 through 5; if `x` is 6 or any other
value, it is caught by `_`.
`match` is also an expression, which means we can use it on the right-hand
side of a `let` binding or directly where an expression is used:
@ -60,7 +58,8 @@ let number = match x {
};
```
Sometimes its a nice way of converting something from one type to another.
Sometimes it's a nice way of converting something from one type to another; in
this example the integers are converted to `String`.
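For instance, a short sketch where each arm produces a `String`:
```rust
let x = 5;
let number = match x {
    1 => "one".to_string(),
    2 => "two".to_string(),
    3 => "three".to_string(),
    4 => "four".to_string(),
    5 => "five".to_string(),
    _ => "something else".to_string(),
};
println!("{}", number); // prints "five"
```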
# Matching on enums
@ -91,7 +90,8 @@ fn process_message(msg: Message) {
Again, the Rust compiler checks exhaustiveness, so it demands that you
have a match arm for every variant of the enum. If you leave one off, it
will give you a compile-time error unless you use `_`.
will give you a compile-time error unless you use `_` or provide all possible
arms.
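For example, here is a sketch assuming the chapter's `Message` enum has the four variants shown below; leave any arm out without a `_`, and the compiler rejects the `match`:
```rust
enum Message {
    Quit,
    ChangeColor(i32, i32, i32),
    Move { x: i32, y: i32 },
    Write(String),
}
fn process_message(msg: Message) {
    // Every variant must be handled, or a `_` arm must cover the rest.
    match msg {
        Message::Quit => println!("quit"),
        Message::ChangeColor(r, g, b) => println!("color: {} {} {}", r, g, b),
        Message::Move { x, y } => println!("move to ({}, {})", x, y),
        Message::Write(text) => println!("write: {}", text),
    }
}
fn main() {
    process_message(Message::Quit);
    process_message(Message::ChangeColor(255, 0, 0));
    process_message(Message::Move { x: 10, y: 20 });
    process_message(Message::Write("hello".to_string()));
}
```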
Unlike the previous uses of `match`, you can't use the normal `if`
statement to do this. You can use the [`if let`][if-let] statement,

View File

@ -49,11 +49,11 @@ and inside it, define a method, `area`.
Methods take a special first parameter, of which there are three variants:
`self`, `&self`, and `&mut self`. You can think of this first parameter as
being the `foo` in `foo.bar()`. The three variants correspond to the three
kinds of things `foo` could be: `self` if its just a value on the stack,
kinds of things `foo` could be: `self` if it's a value on the stack,
`&self` if it's a reference, and `&mut self` if it's a mutable reference.
Because we took the `&self` parameter to `area`, we can use it just like any
Because we took the `&self` parameter to `area`, we can use it like any
other parameter. Because we know it's a `Circle`, we can access the `radius`
just like we would with any other `struct`.
like we would with any other `struct`.
We should default to using `&self`, preferring borrowing over taking
ownership, and immutable references over mutable ones. Here's an
@ -151,7 +151,7 @@ fn grow(&self, increment: f64) -> Circle {
# Circle } }
```
We just say were returning a `Circle`. With this method, we can grow a new
We say we're returning a `Circle`. With this method, we can grow a new
`Circle` to an arbitrary size.
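A self-contained sketch of that, assuming the chapter's `Circle` has `x`, `y`, and `radius` fields and `grow` adds the increment to the radius:
```rust
struct Circle {
    x: f64,
    y: f64,
    radius: f64,
}
impl Circle {
    // Returns a brand new Circle rather than mutating the existing one.
    fn grow(&self, increment: f64) -> Circle {
        Circle { x: self.x, y: self.y, radius: self.radius + increment }
    }
}
fn main() {
    let c = Circle { x: 0.0, y: 0.0, radius: 2.0 };
    let bigger = c.grow(10.0);
    println!("new radius: {}", bigger.radius); // prints 12
}
```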
# Associated functions

View File

@ -39,7 +39,7 @@ script:
$ sudo /usr/local/lib/rustlib/uninstall.sh
```
If you used the Windows installer, just re-run the `.msi` and it will give you
If you used the Windows installer, re-run the `.msi` and it will give you
an uninstall option.
Some people, and somewhat rightfully so, get very upset when we tell you to
@ -66,7 +66,7 @@ Finally, a comment about Windows. Rust considers Windows to be a first-class
platform upon release, but if we're honest, the Windows experience isn't as
integrated as the Linux/OS X experience is. We're working on it! If anything
does not work, it is a bug. Please let us know if that happens. Each and every
commit is tested against Windows just like any other platform.
commit is tested against Windows like any other platform.
If you've got Rust installed, you can open up a shell, and type this:

View File

@ -120,7 +120,7 @@ fn main() {
}
```
For `HasArea` and `Square`, we just declare a type parameter `T` and replace
For `HasArea` and `Square`, we declare a type parameter `T` and replace
`f64` with it. The `impl` needs more involved modifications:
```ignore

View File

@ -51,15 +51,24 @@ fn foo() {
}
```
When `v` comes into scope, a new [`Vec<T>`][vect] is created. In this case, the
vector also allocates space on [the heap][heap], for the three elements. When
`v` goes out of scope at the end of `foo()`, Rust will clean up everything
related to the vector, even the heap-allocated memory. This happens
deterministically, at the end of the scope.
When `v` comes into scope, a new [vector] is created, and it allocates space on
[the heap][heap] for each of its elements. When `v` goes out of scope at the
end of `foo()`, Rust will clean up everything related to the vector, even the
heap-allocated memory. This happens deterministically, at the end of the scope.
[vect]: ../std/vec/struct.Vec.html
We'll cover [vectors] in detail later in this chapter; we only use them
here as an example of a type that allocates space on the heap at runtime. They
behave like [arrays], except their size may change by `push()`ing more
elements onto them.
Vectors have a [generic type][generics] `Vec<T>`, so in this example `v` will have type
`Vec<i32>`. We'll cover generics in detail later in this chapter.
[arrays]: primitive-types.html#arrays
[vectors]: vectors.html
[heap]: the-stack-and-the-heap.html
[bindings]: variable-bindings.html
[generics]: generics.html
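A tiny sketch of the behavior described above: the vector's elements live on the heap, and `push` can grow it at runtime:
```rust
fn main() {
    let mut v = vec![1, 2, 3]; // v: Vec<i32>, elements allocated on the heap
    v.push(4);                 // the heap allocation may grow to hold the new element
    println!("{:?}", v);       // prints [1, 2, 3, 4]
}
```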
# Move semantics

View File

@ -27,7 +27,7 @@ Theres one pitfall with patterns: like anything that introduces a new binding
they introduce shadowing. For example:
```rust
let x = 'x';
let x = 1;
let c = 'c';
match c {
@ -41,12 +41,14 @@ This prints:
```text
x: c c: c
x: x
x: 1
```
In other words, `x =>` matches the pattern and introduces a new binding named
`x` thats in scope for the match arm. Because we already have a binding named
`x`, this new `x` shadows it.
`x`. This new binding is in scope for the match arm and takes on the value of
`c`. Notice that the value of `x` outside the scope of the match has no bearing
on the value of `x` within it. Because we already have a binding named `x`, this
new `x` shadows it.
# Multiple patterns
@ -116,7 +118,7 @@ match origin {
This prints `x is 0`.
You can do this kind of match on any member, not just the first:
You can do this kind of match on any member, not only the first:
```rust
struct Point {
@ -153,7 +155,7 @@ match some_value {
```
In the first arm, we bind the value inside the `Ok` variant to `value`. But
in the `Err` arm, we use `_` to disregard the specific error, and just print
in the `Err` arm, we use `_` to disregard the specific error, and print
a general error message.
`_` is valid in any pattern that creates a binding. This can be useful to
@ -324,7 +326,7 @@ match x {
```
This prints `no`, because the `if` applies to the whole of `4 | 5`, and not to
just the `5`. In other words, the precedence of `if` behaves like this:
only the `5`. In other words, the precedence of `if` behaves like this:
```text
(4 | 5) if y => ...

View File

@ -160,20 +160,23 @@ documentation][array].
A slice is a reference to (or “view” into) another data structure. They are
useful for allowing safe, efficient access to a portion of an array without
copying. For example, you might want to reference just one line of a file read
copying. For example, you might want to reference only one line of a file read
into memory. By nature, a slice is not created directly, but from an existing
variable binding. Slices have a defined length, and can be mutable or immutable.
## Slicing syntax
You can use a combo of `&` and `[]` to create a slice from various things. The
`&` indicates that slices are similar to references, and the `[]`s, with a
range, let you define the length of the slice:
`&` indicates that slices are similar to [references], which we will cover in
detail later in this section. The `[]`s, with a range, let you define the
length of the slice:
[references]: references-and-borrowing.html
```rust
let a = [0, 1, 2, 3, 4];
let complete = &a[..]; // A slice containing all of the elements in a
let middle = &a[1..4]; // A slice of a: just the elements 1, 2, and 3
let middle = &a[1..4]; // A slice of a: only the elements 1, 2, and 3
```
Slices have type `&[T]`. Well talk about that `T` when we cover
@ -189,11 +192,13 @@ documentation][slice].
# `str`
Rusts `str` type is the most primitive string type. As an [unsized type][dst],
its not very useful by itself, but becomes useful when placed behind a reference,
like [`&str`][strings]. As such, well just leave it at that.
its not very useful by itself, but becomes useful when placed behind a
reference, like `&str`. We'll elaborate further when we cover
[Strings][strings] and [references].
[dst]: unsized-types.html
[strings]: strings.html
[references]: references-and-borrowing.html
You can find more documentation for `str` [in the standard library
documentation][str].
@ -215,11 +220,11 @@ with the type annotated:
let x: (i32, &str) = (1, "hello");
```
As you can see, the type of a tuple looks just like the tuple, but with each
As you can see, the type of a tuple looks like the tuple, but with each
position having a type name rather than the value. Careful readers will also
note that tuples are heterogeneous: we have an `i32` and a `&str` in this tuple.
In systems programming languages, strings are a bit more complex than in other
languages. For now, just read `&str` as a *string slice*, and well learn more
languages. For now, read `&str` as a *string slice*, and well learn more
soon.
You can assign one tuple into another, if they have the same contained types
@ -244,7 +249,7 @@ println!("x is {}", x);
```
Remember [before][let] when I said the left-hand side of a `let` statement was more
powerful than just assigning a binding? Here we are. We can put a pattern on
powerful than assigning a binding? Here we are. We can put a pattern on
the left-hand side of the `let`, and if it matches up to the right-hand side,
we can assign multiple bindings at once. In this case, `let` “destructures”
or “breaks up” the tuple, and assigns the bits to three bindings.
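For example, here is a small sketch of destructuring a heterogeneous tuple:
```rust
let (x, y, z) = (1, 2.5, "three"); // three bindings created at once
println!("{} {} {}", x, y, z);
```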

View File

@ -84,7 +84,7 @@ it borrows ownership. A binding that borrows something does not deallocate the
resource when it goes out of scope. This means that after the call to `foo()`,
we can use our original bindings again.
References are immutable, just like bindings. This means that inside of `foo()`,
References are immutable, like bindings. This means that inside of `foo()`,
the vectors cant be changed at all:
```rust,ignore
@ -126,10 +126,10 @@ the thing `y` points at. Youll notice that `x` had to be marked `mut` as well
If it wasn't, we couldn't take a mutable borrow to an immutable value.
You'll also notice we added an asterisk (`*`) in front of `y`, making it `*y`,
this is because `y` is an `&mut` reference. You'll also need to use them for
because `y` is a `&mut` reference. You'll also need to use the asterisk for
accessing the contents of a reference.
Otherwise, `&mut` references are just like references. There _is_ a large
Otherwise, `&mut` references are like references. There _is_ a large
difference between the two, and how they interact, though. You can tell
something is fishy in the above example, because we need that extra scope, with
the `{` and `}`. If we remove them, we get an error:
@ -263,7 +263,7 @@ for i in &v {
}
```
This prints out one through three. As we iterate through the vectors, were
This prints out one through three. As we iterate through the vector, were
only given references to the elements. And `v` is itself borrowed as immutable,
which means we cant change it while were iterating:

View File

@ -1,344 +0,0 @@
% Rust Inside Other Languages
For our third project, were going to choose something that shows off one of
Rusts greatest strengths: a lack of a substantial runtime.
As organizations grow, they increasingly rely on a multitude of programming
languages. Different programming languages have different strengths and
weaknesses, and a polyglot stack lets you use a particular language where
its strengths make sense and a different one where its weak.
A very common area where many programming languages are weak is in runtime
performance of programs. Often, using a language that is slower, but offers
greater programmer productivity, is a worthwhile trade-off. To help mitigate
this, they provide a way to write some of your system in C and then call
that C code as though it were written in the higher-level language. This is
called a foreign function interface, often shortened to FFI.
Rust has support for FFI in both directions: it can call into C code easily,
but crucially, it can also be called _into_ as easily as C. Combined with
Rusts lack of a garbage collector and low runtime requirements, this makes
Rust a great candidate to embed inside of other languages when you need
that extra oomph.
There is a whole [chapter devoted to FFI][ffi] and its specifics elsewhere in
the book, but in this chapter, well examine this particular use-case of FFI,
with examples in Ruby, Python, and JavaScript.
[ffi]: ffi.html
# The problem
There are many different projects we could choose here, but were going to
pick an example where Rust has a clear advantage over many other languages:
numeric computing and threading.
Many languages, for the sake of consistency, place numbers on the heap, rather
than on the stack. Especially in languages that focus on object-oriented
programming and use garbage collection, heap allocation is the default. Sometimes
optimizations can stack allocate particular numbers, but rather than relying
on an optimizer to do its job, we may want to ensure that were always using
primitive number types rather than some sort of object type.
Second, many languages have a global interpreter lock (GIL), which limits
concurrency in many situations. This is done in the name of safety, which is
a positive effect, but it limits the amount of work that can be done at the
same time, which is a big negative.
To emphasize these two aspects, were going to create a little project that
uses these two aspects heavily. Since the focus of the example is to embed
Rust into other languages, rather than the problem itself, well just use a
toy example:
> Start ten threads. Inside each thread, count from one to five million. After
> all ten threads are finished, print out done!.
I chose five million based on my particular computer. Heres an example of this
code in Ruby:
```ruby
threads = []
10.times do
threads << Thread.new do
count = 0
5_000_000.times do
count += 1
end
count
end
end
threads.each do |t|
puts "Thread finished with count=#{t.value}"
end
puts "done!"
```
Try running this example, and choose a number that runs for a few seconds.
Depending on your computers hardware, you may have to increase or decrease the
number.
On my system, running this program takes `2.156` seconds. And, if I use some
sort of process monitoring tool, like `top`, I can see that it only uses one
core on my machine. Thats the GIL kicking in.
While its true that this is a synthetic program, one can imagine many problems
that are similar to this in the real world. For our purposes, spinning up a few
busy threads represents some sort of parallel, expensive computation.
# A Rust library
Lets rewrite this problem in Rust. First, lets make a new project with
Cargo:
```bash
$ cargo new embed
$ cd embed
```
This program is fairly easy to write in Rust:
```rust
use std::thread;
fn process() {
let handles: Vec<_> = (0..10).map(|_| {
thread::spawn(|| {
let mut x = 0;
for _ in 0..5_000_000 {
x += 1
}
x
})
}).collect();
for h in handles {
println!("Thread finished with count={}",
h.join().map_err(|_| "Could not join a thread!").unwrap());
}
}
```
Some of this should look familiar from previous examples. We spin up ten
threads, collecting them into a `handles` vector. Inside of each thread, we
loop five million times, and add one to `x` each time. Finally, we join on
each thread.
Right now, however, this is a Rust library, and it doesnt expose anything
thats callable from C. If we tried to hook this up to another language right
now, it wouldnt work. We only need to make two small changes to fix this,
though. The first is to modify the beginning of our code:
```rust,ignore
#[no_mangle]
pub extern fn process() {
```
We have to add a new attribute, `no_mangle`. When you create a Rust library, it
changes the name of the function in the compiled output. The reasons for this
are outside the scope of this tutorial, but in order for other languages to
know how to call the function, we cant do that. This attribute turns
that behavior off.
The other change is the `pub extern`. The `pub` means that this function should
be callable from outside of this module, and the `extern` says that it should
be able to be called from C. Thats it! Not a whole lot of change.
The second thing we need to do is to change a setting in our `Cargo.toml`. Add
this at the bottom:
```toml
[lib]
name = "embed"
crate-type = ["dylib"]
```
This tells Rust that we want to compile our library into a standard dynamic
library. By default, Rust compiles an rlib, a Rust-specific format.
Lets build the project now:
```bash
$ cargo build --release
Compiling embed v0.1.0 (file:///home/steve/src/embed)
```
Weve chosen `cargo build --release`, which builds with optimizations on. We
want this to be as fast as possible! You can find the output of the library in
`target/release`:
```bash
$ ls target/release/
build deps examples libembed.so native
```
That `libembed.so` is our shared object library. We can use this file
just like any shared object library written in C! As an aside, this may be
`embed.dll` (Microsoft Windows) or `libembed.dylib` (Mac OS X), depending on
your operating system.
Now that weve got our Rust library built, lets use it from our Ruby.
# Ruby
Open up an `embed.rb` file inside of our project, and do this:
```ruby
require 'ffi'
module Hello
extend FFI::Library
ffi_lib 'target/release/libembed.so'
attach_function :process, [], :void
end
Hello.process
puts 'done!'
```
Before we can run this, we need to install the `ffi` gem:
```bash
$ gem install ffi # this may need sudo
Fetching: ffi-1.9.8.gem (100%)
Building native extensions. This could take a while...
Successfully installed ffi-1.9.8
Parsing documentation for ffi-1.9.8
Installing ri documentation for ffi-1.9.8
Done installing documentation for ffi after 0 seconds
1 gem installed
```
And finally, we can try running it:
```bash
$ ruby embed.rb
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
done!
done!
$
```
Whoa, that was fast! On my system, this took `0.086` seconds, rather than
the two seconds the pure Ruby version took. Lets break down this Ruby
code:
```ruby
require 'ffi'
```
We first need to require the `ffi` gem. This lets us interface with our
Rust library like a C library.
```ruby
module Hello
extend FFI::Library
ffi_lib 'target/release/libembed.so'
```
The `Hello` module is used to attach the native functions from the shared
library. Inside, we `extend` the necessary `FFI::Library` module and then call
`ffi_lib` to load up our shared object library. We just pass it the path that
our library is stored, which, as we saw before, is
`target/release/libembed.so`.
```ruby
attach_function :process, [], :void
```
The `attach_function` method is provided by the FFI gem. Its what
connects our `process()` function in Rust to a Ruby function of the
same name. Since `process()` takes no arguments, the second parameter
is an empty array, and since it returns nothing, we pass `:void` as
the final argument.
```ruby
Hello.process
```
This is the actual call into Rust. The combination of our `module`
and the call to `attach_function` sets this all up. It looks like
a Ruby function but is actually Rust!
```ruby
puts 'done!'
```
Finally, as per our projects requirements, we print out `done!`.
Thats it! As weve seen, bridging between the two languages is really easy,
and buys us a lot of performance.
Next, lets try Python!
# Python
Create an `embed.py` file in this directory, and put this in it:
```python
from ctypes import cdll
lib = cdll.LoadLibrary("target/release/libembed.so")
lib.process()
print("done!")
```
Even easier! We use `cdll` from the `ctypes` module. A quick call
to `LoadLibrary` later, and we can call `process()`.
On my system, this takes `0.017` seconds. Speedy!
# Node.js
Node isnt a language, but its currently the dominant implementation of
server-side JavaScript.
In order to do FFI with Node, we first need to install the library:
```bash
$ npm install ffi
```
After that installs, we can use it:
```javascript
var ffi = require('ffi');
var lib = ffi.Library('target/release/libembed', {
'process': ['void', []]
});
lib.process();
console.log("done!");
```
It looks more like the Ruby example than the Python example. We use
the `ffi` module to get access to `ffi.Library()`, which loads up
our shared object. We need to annotate the return type and argument
types of the function, which are `void` for return and an empty
array to signify no arguments. From there, we just call it and
print the result.
On my system, this takes a quick `0.092` seconds.
# Conclusion
As you can see, the basics of doing this are _very_ easy. Of course,
there's a lot more that we could do here. Check out the [FFI][ffi]
chapter for more details.

View File

@ -44,7 +44,7 @@ let s = "foo\
assert_eq!("foobar", s);
```
Rust has more than just `&str`s though. A `String`, is a heap-allocated string.
Rust has more than only `&str`s, though. A `String` is a heap-allocated string.
This string is growable, and is also guaranteed to be UTF-8. `String`s are
commonly created by converting from a string slice using the `to_string`
method.
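A short sketch of that conversion, and of `String` being growable:
```rust
let slice = "Hello";           // &str: a string slice
let mut s = slice.to_string(); // String: heap-allocated and growable
s.push_str(", world!");
println!("{}", s);             // prints "Hello, world!"
```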

View File

@ -9,7 +9,8 @@ let origin_x = 0;
let origin_y = 0;
```
A `struct` lets us combine these two into a single, unified datatype:
A `struct` lets us combine these two into a single, unified datatype with `x`
and `y` as field labels:
```rust
struct Point {
@ -32,7 +33,7 @@ We can create an instance of our `struct` via `let`, as usual, but we use a `key
value` style syntax to set each field. The order doesnt need to be the same as
in the original declaration.
Finally, because fields have names, we can access the field through dot
Finally, because fields have names, we can access them through dot
notation: `origin.x`.
The values in `struct`s are immutable by default, like other bindings in Rust.
@ -67,9 +68,8 @@ struct Point {
Mutability is a property of the binding, not of the structure itself. If youre
used to field-level mutability, this may seem strange at first, but it
significantly simplifies things. It even lets you make things mutable for a short
time only:
significantly simplifies things. It even lets you make things mutable on a temporary
basis:
```rust,ignore
struct Point {
@ -82,12 +82,41 @@ fn main() {
point.x = 5;
let point = point; // this new binding cant change now
let point = point; // now immutable
point.y = 6; // this causes an error
}
```
Your structure can still contain `&mut` pointers, which will let
you do some kinds of mutation:
```rust
struct Point {
x: i32,
y: i32,
}
struct PointRef<'a> {
x: &'a mut i32,
y: &'a mut i32,
}
fn main() {
let mut point = Point { x: 0, y: 0 };
{
let r = PointRef { x: &mut point.x, y: &mut point.y };
*r.x = 5;
*r.y = 6;
}
assert_eq!(5, point.x);
assert_eq!(6, point.y);
}
```
# Update syntax
A `struct` can include `..` to indicate that you want to use a copy of some
@ -121,27 +150,24 @@ let point = Point3d { z: 1, x: 2, .. origin };
# Tuple structs
Rust has another data type thats like a hybrid between a [tuple][tuple] and a
`struct`, called a tuple struct. Tuple structs have a name, but
their fields dont:
`struct`, called a tuple struct. Tuple structs have a name, but their fields
don't. They are declared with the `struct` keyword, and then with a name
followed by a tuple:
[tuple]: primitive-types.html#tuples
```rust
struct Color(i32, i32, i32);
struct Point(i32, i32, i32);
```
[tuple]: primitive-types.html#tuples
These two will not be equal, even if they have the same values:
```rust
# struct Color(i32, i32, i32);
# struct Point(i32, i32, i32);
let black = Color(0, 0, 0);
let origin = Point(0, 0, 0);
```
Here, `black` and `origin` are not equal, even though they contain the same
values.
It is almost always better to use a `struct` than a tuple struct. We would write
`Color` and `Point` like this instead:
It is almost always better to use a `struct` than a tuple struct. We
would write `Color` and `Point` like this instead:
```rust
struct Color {
@ -157,13 +183,14 @@ struct Point {
}
```
Now, we have actual names, rather than positions. Good names are important,
and with a `struct`, we have actual names.
Good names are important, and while values in a tuple struct can be
referenced with dot notation as well, a `struct` gives us actual names,
rather than positions.
There _is_ one case when a tuple struct is very useful, though, and thats a
tuple struct with only one element. We call this the newtype pattern, because
it allows you to create a new type, distinct from that of its contained value
and expressing its own semantic meaning:
There _is_ one case when a tuple struct is very useful, though, and that is when
it has only one element. We call this the newtype pattern, because
it allows you to create a new type that is distinct from its contained value
and also expresses its own semantic meaning:
```rust
struct Inches(i32);
@ -175,7 +202,7 @@ println!("length is {} inches", integer_length);
```
As you can see here, you can extract the inner integer type through a
destructuring `let`, just as with regular tuples. In this case, the
destructuring `let`, as with regular tuples. In this case, the
`let Inches(integer_length)` assigns `10` to `integer_length`.
# Unit-like structs
@ -196,7 +223,7 @@ This is rarely useful on its own (although sometimes it can serve as a
marker type), but in combination with other features, it can become
useful. For instance, a library may ask you to create a structure that
implements a certain [trait][trait] to handle events. If you dont have
any data you need to store in the structure, you can just create a
any data you need to store in the structure, you can create a
unit-like `struct`.
[trait]: traits.html
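A sketch of that pattern; the `EventHandler` trait here is hypothetical, standing in for whatever trait such a library would ask you to implement:
```rust
trait EventHandler {
    fn handle(&self);
}
struct NullHandler; // a unit-like struct: no fields, no data
impl EventHandler for NullHandler {
    fn handle(&self) {
        println!("event received");
    }
}
fn main() {
    let h = NullHandler;
    h.handle();
}
```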

View File

@ -1,6 +1,6 @@
% Syntax and Semantics
This section breaks Rust down into small chunks, one for each concept.
This chapter breaks Rust down into small chunks, one for each concept.
If youd like to learn Rust from the bottom up, reading this in order is a
great way to do that.

View File

@ -41,6 +41,7 @@
* `!` (`ident!(…)`, `ident!{…}`, `ident![…]`): denotes macro expansion. See [Macros].
* `!` (`!expr`): bitwise or logical complement. Overloadable (`Not`).
* `!=` (`var != expr`): nonequality comparison. Overloadable (`PartialEq`).
* `%` (`expr % expr`): arithmetic remainder. Overloadable (`Rem`).
* `%=` (`var %= expr`): arithmetic remainder & assignment.
* `&` (`expr & expr`): bitwise and. Overloadable (`BitAnd`).
@ -75,13 +76,13 @@
* `;` (`[…; len]`): part of fixed-size array syntax. See [Primitive Types (Arrays)].
* `<<` (`expr << expr`): left-shift. Overloadable (`Shl`).
* `<<=` (`var <<= expr`): left-shift & assignment.
* `<` (`expr < expr`): less-than comparison. Overloadable (`Cmp`, `PartialCmp`).
* `<=` (`var <= expr`): less-than or equal-to comparison. Overloadable (`Cmp`, `PartialCmp`).
* `<` (`expr < expr`): less-than comparison. Overloadable (`PartialOrd`).
* `<=` (`var <= expr`): less-than or equal-to comparison. Overloadable (`PartialOrd`).
* `=` (`var = expr`, `ident = type`): assignment/equivalence. See [Variable Bindings], [`type` Aliases], generic parameter defaults.
* `==` (`var == expr`): comparison. Overloadable (`Eq`, `PartialEq`).
* `==` (`var == expr`): equality comparison. Overloadable (`PartialEq`).
* `=>` (`pat => expr`): part of match arm syntax. See [Match].
* `>` (`expr > expr`): greater-than comparison. Overloadable (`Cmp`, `PartialCmp`).
* `>=` (`var >= expr`): greater-than or equal-to comparison. Overloadable (`Cmp`, `PartialCmp`).
* `>` (`expr > expr`): greater-than comparison. Overloadable (`PartialOrd`).
* `>=` (`var >= expr`): greater-than or equal-to comparison. Overloadable (`PartialOrd`).
* `>>` (`expr >> expr`): right-shift. Overloadable (`Shr`).
* `>>=` (`var >>= expr`): right-shift & assignment.
* `@` (`ident @ pat`): pattern binding. See [Patterns (Bindings)].
@ -234,5 +235,5 @@
[Traits (Multiple Trait Bounds)]: traits.html#multiple-trait-bounds
[Traits]: traits.html
[Unsafe]: unsafe.html
[Unsized Types (`?Sized`)]: unsized-types.html#?sized
[Unsized Types (`?Sized`)]: unsized-types.html#sized
[Variable Bindings]: variable-bindings.html

View File

@ -365,7 +365,7 @@ test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
It works!
The current convention is to use the `tests` module to hold your "unit-style"
tests. Anything that just tests one small bit of functionality makes sense to
tests. Anything that tests one small bit of functionality makes sense to
go here. But what about "integration-style" tests instead? For that, we have
the `tests` directory.
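A minimal sketch of that convention, with an `add_two` function like the one used in this chapter:
```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}
#[cfg(test)]
mod tests {
    use super::add_two;
    // A "unit-style" test: it exercises one small piece of functionality.
    #[test]
    fn it_adds_two() {
        assert_eq!(4, add_two(2));
    }
}
```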
@ -503,7 +503,7 @@ for the function test. These will auto increment with names like `add_two_1` as
you add more examples.
We havent covered all of the details with writing documentation tests. For more,
please see the [Documentation chapter](documentation.html)
please see the [Documentation chapter](documentation.html).
One final note: documentation tests *cannot* be run on binary crates.
To see more on file arrangement see the [Crates and

View File

@ -44,7 +44,7 @@ values go on the stack. What does that mean?
Well, when a function gets called, some memory gets allocated for all of its
local variables and some other information. This is called a stack frame, and
for the purpose of this tutorial, were going to ignore the extra information
and just consider the local variables were allocating. So in this case, when
and only consider the local variables we're allocating. So in this case, when
`main()` is run, well allocate a single 32-bit integer for our stack frame.
This is automatically handled for you, as you can see; we didnt have to write
any special Rust code or anything.
@ -130,63 +130,64 @@ on the stack is the first one you retrieve from it.
Lets try a three-deep example:
```rust
fn bar() {
fn italic() {
let i = 6;
}
fn foo() {
fn bold() {
let a = 5;
let b = 100;
let c = 1;
bar();
italic();
}
fn main() {
let x = 42;
foo();
bold();
}
```
We have some kooky function names to make the diagrams clearer.
Okay, first, we call `main()`:
| Address | Name | Value |
|---------|------|-------|
| 0 | x | 42 |
Next up, `main()` calls `foo()`:
Next up, `main()` calls `bold()`:
| Address | Name | Value |
|---------|------|-------|
| 3 | c | 1 |
| 2 | b | 100 |
| 1 | a | 5 |
| **3** | **c**|**1** |
| **2** | **b**|**100**|
| **1** | **a**| **5** |
| 0 | x | 42 |
And then `foo()` calls `bar()`:
And then `bold()` calls `italic()`:
| Address | Name | Value |
|---------|------|-------|
| 4 | i | 6 |
| 3 | c | 1 |
| 2 | b | 100 |
| 1 | a | 5 |
| *4* | *i* | *6* |
| **3** | **c**|**1** |
| **2** | **b**|**100**|
| **1** | **a**| **5** |
| 0 | x | 42 |
Whew! Our stack is growing tall.
After `bar()` is over, its frame is deallocated, leaving just `foo()` and
After `italic()` is over, its frame is deallocated, leaving only `bold()` and
`main()`:
| Address | Name | Value |
|---------|------|-------|
| 3 | c | 1 |
| 2 | b | 100 |
| 1 | a | 5 |
| **3** | **c**|**1** |
| **2** | **b**|**100**|
| **1** | **a**| **5** |
| 0 | x | 42 |
And then `foo()` ends, leaving just `main()`:
And then `bold()` ends, leaving only `main()`:
| Address | Name | Value |
|---------|------|-------|
@ -246,7 +247,7 @@ location weve asked for.
We havent really talked too much about what it actually means to allocate and
deallocate memory in these contexts. Getting into very deep detail is out of
the scope of this tutorial, but whats important to point out here is that
the heap isnt just a stack that grows from the opposite end. Well have an
the heap isnt a stack that grows from the opposite end. Well have an
example of this later in the book, but because the heap can be allocated and
freed in any order, it can end up with holes. Heres a diagram of the memory
layout of a program which has been running for a while now:
@ -331,13 +332,13 @@ What about when we call `foo()`, passing `y` as an argument?
| 1 | y | → 0 |
| 0 | x | 5 |
Stack frames arent just for local bindings, theyre for arguments too. So in
Stack frames arent only for local bindings, theyre for arguments too. So in
this case, we need to have both `i`, our argument, and `z`, our local variable
binding. `i` is a copy of the argument, `y`. Since `y`s value is `0`, so is
`i`s.
This is one reason why borrowing a variable doesnt deallocate any memory: the
value of a reference is just a pointer to a memory location. If we got rid of
value of a reference is a pointer to a memory location. If we got rid of
the underlying memory, things wouldnt work very well.
# A complex example
@ -453,7 +454,7 @@ Next, `foo()` calls `bar()` with `x` and `z`:
| 0 | h | 3 |
We end up allocating another value on the heap, and so we have to subtract one
from (2<sup>30</sup>) - 1. Its easier to just write that than `1,073,741,822`. In any
from (2<sup>30</sup>) - 1. Its easier to write that than `1,073,741,822`. In any
case, we set up the variables as usual.
At the end of `bar()`, it calls `baz()`:
@ -538,7 +539,7 @@ instead.
# Which to use?
So if the stack is faster and easier to manage, why do we need the heap? A big
reason is that Stack-allocation alone means you only have LIFO semantics for
reason is that stack allocation alone means you only have 'Last In, First Out' (LIFO) semantics for
reclaiming storage. Heap-allocation is strictly more general, allowing storage
to be taken from and returned to the pool in arbitrary order, but at a
complexity cost.
@ -549,12 +550,12 @@ has two big impacts: runtime efficiency and semantic impact.
## Runtime Efficiency
Managing the memory for the stack is trivial: The machine just
Managing the memory for the stack is trivial: The machine
increments or decrements a single value, the so-called “stack pointer”.
Managing memory for the heap is non-trivial: heap-allocated memory is freed at
arbitrary points, and each block of heap-allocated memory can be of arbitrary
size, the memory manager must generally work much harder to identify memory for
reuse.
size, so the memory manager must generally work much harder to
identify memory for reuse.
If youd like to dive into this topic in greater detail, [this paper][wilson]
is a great introduction.

View File

@ -272,7 +272,7 @@ made more flexible.
Suppose weve got some values that implement `Foo`. The explicit form of
construction and use of `Foo` trait objects might look a bit like (ignoring the
type mismatches: theyre all just pointers anyway):
type mismatches: theyre all pointers anyway):
```rust,ignore
let a: String = "foo".to_string();

View File

@ -44,8 +44,8 @@ impl HasArea for Circle {
```
As you can see, the `trait` block looks very similar to the `impl` block,
but we dont define a body, just a type signature. When we `impl` a trait,
we use `impl Trait for Item`, rather than just `impl Item`.
but we dont define a body, only a type signature. When we `impl` a trait,
we use `impl Trait for Item`, rather than only `impl Item`.
## Trait bounds on generic functions

View File

@ -41,8 +41,8 @@ unsafe impl Scary for i32 {}
```
It's important to be able to explicitly delineate code that may have bugs that
cause big problems. If a Rust program segfaults, you can be sure its somewhere
in the sections marked `unsafe`.
cause big problems. If a Rust program segfaults, you can be sure the cause is
related to something marked `unsafe`.
# What does safe mean?
@ -100,7 +100,7 @@ that you normally can not do. Just three. Here they are:
That's it. It's important that `unsafe` does not, for example, turn off the
borrow checker. Adding `unsafe` to some random Rust code doesn't change its
semantics, it wont just start accepting anything. But it will let you write
semantics, it won't start accepting anything. But it will let you write
things that _do_ break some of the rules.
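For example, a sketch of one of those abilities, dereferencing a raw pointer:
```rust
fn main() {
    let x = 5;
    let raw = &x as *const i32;  // taking a raw pointer is safe
    let value = unsafe { *raw }; // dereferencing it requires `unsafe`
    println!("{}", value);       // prints 5
}
```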
You will also encounter the `unsafe` keyword when writing bindings to foreign

View File

@ -11,7 +11,7 @@ Rust understands a few of these types, but they have some restrictions. There
are three:
1. We can only manipulate an instance of an unsized type via a pointer. An
`&[T]` works just fine, but a `[T]` does not.
`&[T]` works fine, but a `[T]` does not.
2. Variables and arguments cannot have dynamically sized types.
3. Only the last field in a `struct` may have a dynamically sized type; the
other fields must not. Enum variants must not have dynamically sized types as

View File

@ -2,7 +2,7 @@
Virtually every non-'Hello World Rust program uses *variable bindings*. They
bind some value to a name, so it can be used later. `let` is
used to introduce a binding, just like this:
used to introduce a binding, like this:
```rust
fn main() {
@ -18,7 +18,7 @@ function, rather than leaving it off. Otherwise, youll get an error.
In many languages, a variable binding would be called a *variable*, but Rusts
variable bindings have a few tricks up their sleeves. For example, the
left-hand side of a `let` expression is a [pattern][pattern], not just a
left-hand side of a `let` expression is a [pattern][pattern], not a
variable name. This means we can do things like:
```rust
@ -27,7 +27,7 @@ let (x, y) = (1, 2);
After this expression is evaluated, `x` will be one, and `y` will be two.
Patterns are really powerful, and have [their own section][pattern] in the
book. We dont need those features for now, so well just keep this in the back
book. We dont need those features for now, so well keep this in the back
of our minds as we go forward.
[pattern]: patterns.html
@ -169,10 +169,10 @@ in the middle of a string." We add a comma, and then `x`, to indicate that we
want `x` to be the value we're interpolating. The comma is used to separate
arguments we pass to functions and macros, if youre passing more than one.
When you just use the curly braces, Rust will attempt to display the value in a
When you use the curly braces, Rust will attempt to display the value in a
meaningful way by checking out its type. If you want to specify the format in a
more detailed manner, there are a [wide number of options available][format].
For now, we'll just stick to the default: integers aren't very complicated to
For now, we'll stick to the default: integers aren't very complicated to
print.
[format]: ../std/fmt/index.html
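For example, a small sketch of the default next to one of those formatting options:
```rust
let x = 42;
println!("x is {}", x);    // default formatting, chosen from the value's type
println!("x is {:08}", x); // one of the extra options: zero-pad to width 8
```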

View File

@ -61,6 +61,33 @@ error: aborting due to previous error
Theres a lot of punctuation in that message, but the core of it makes sense:
you cannot index with an `i32`.
## Out-of-bounds Access
If you try to access an index that doesn't exist:
```ignore
let v = vec![1, 2, 3];
println!("Item 7 is {}", v[7]);
```
then the current thread will [panic] with a message like this:
```text
thread '<main>' panicked at 'index out of bounds: the len is 3 but the index is 7'
```
If you want to handle out-of-bounds errors without panicking, you can use
methods like [`get`][get] or [`get_mut`][get_mut] that return `None` when
given an invalid index:
```rust
let v = vec![1, 2, 3];
match v.get(7) {
Some(x) => println!("Item 7 is {}", x),
None => println!("Sorry, this vector is too short.")
}
```
## Iterating
Once you have a vector, you can iterate through its elements with `for`. There
@ -87,3 +114,6 @@ API documentation][vec].
[vec]: ../std/vec/index.html
[generic]: generics.html
[panic]: concurrency.html#panics
[get]: http://doc.rust-lang.org/std/vec/struct.Vec.html#method.get
[get_mut]: http://doc.rust-lang.org/std/vec/struct.Vec.html#method.get_mut

View File

@ -1,186 +1,3 @@
% The Rust Design FAQ
This document describes decisions that were arrived at after lengthy discussion and
experimenting with alternatives. Please do not propose reversing them unless
you have a new, extremely compelling argument. Note that this document
specifically talks about the *language* and not any library or implementation.
A few general guidelines define the philosophy:
- [Memory safety][mem] must never be compromised
- [Abstraction][abs] should be zero-cost, while still maintaining safety
- Practicality is key
[mem]: http://en.wikipedia.org/wiki/Memory_safety
[abs]: http://en.wikipedia.org/wiki/Abstraction_%28computer_science%29
# Semantics
## Data layout is unspecified
In the general case, `enum` and `struct` layout is undefined. This allows the
compiler to potentially do optimizations like re-using padding for the
discriminant, compacting variants of nested enums, reordering fields to remove
padding, etc. `enum`s which carry no data ("C-like") are eligible to have a
defined representation. Such `enum`s are easily distinguished in that they are
simply a list of names that carry no data:
```
enum CLike {
A,
B = 32,
C = 34,
D
}
```
The [repr attribute][repr] can be applied to such `enum`s to give them the same
representation as a primitive. This allows using Rust `enum`s in FFI where C
`enum`s are also used, for most use cases. The attribute can also be applied
to `struct`s to get the same layout as a C struct would.
[repr]: reference.html#ffi-attributes
## There is no GC
A language that requires a GC is a language that opts into a larger, more
complex runtime than Rust cares for. Rust is usable on bare metal with no
extra runtime. Additionally, garbage collection is frequently a source of
non-deterministic behavior. Rust provides the tools to make using a GC
possible and even pleasant, but it should not be a requirement for
implementing the language.
## Non-`Sync` `static mut` is unsafe
Types which are [`Sync`][sync] are thread-safe when multiple shared
references to them are used concurrently. Types which are not `Sync` are not
thread-safe, and thus when used in a global require unsafe code to use.
[sync]: core/marker/trait.Sync.html
### If mutable static items that implement `Sync` are safe, why is taking &mut SHARABLE unsafe?
Having multiple aliasing `&mut T`s is never allowed. Due to the nature of
globals, the borrow checker cannot possibly ensure that a static obeys the
borrowing rules, so taking a mutable reference to a static is always unsafe.
## There is no life before or after main (no static ctors/dtors)
Globals can not have a non-constant-expression constructor and cannot have a
destructor at all. This is an opinion of the language. Static constructors are
undesirable because they can slow down program startup. Life before main is
often considered a misfeature, never to be used. Rust helps this along by just
not having the feature.
See [the C++ FQA][fqa] about the "static initialization order fiasco", and
[Eric Lippert's blog][elp] for the challenges in C#, which also has this
feature.
A nice replacement is [lazy_static][lazy_static].
[fqa]: http://yosefk.com/c++fqa/ctors.html#fqa-10.12
[elp]: http://ericlippert.com/2013/02/06/static-constructors-part-one/
[lazy_static]: https://crates.io/crates/lazy_static
## The language does not require a runtime
See the above entry on GC. Requiring a runtime limits the utility of the
language, and makes it undeserving of the title "systems language". All Rust
code should need to run is a stack.
## `match` must be exhaustive
`match` being exhaustive has some useful properties. First, if every
possibility is covered by the `match`, adding further variants to the `enum`
in the future will prompt a compilation failure, rather than runtime panic.
Second, it makes cost explicit. In general, the only safe way to have a
non-exhaustive match would be to panic the thread if nothing is matched, though
it could fall through if the type of the `match` expression is `()`. This sort
of hidden cost and special casing is against the language's philosophy. It's
easy to ignore all unspecified cases by using the `_` wildcard:
```rust,ignore
match val.do_something() {
Cat(a) => { /* ... */ }
_ => { /* ... */ }
}
```
[#3101][iss] is the issue that proposed making this the only behavior, with
rationale and discussion.
[iss]: https://github.com/rust-lang/rust/issues/3101
## No guaranteed tail-call optimization
In general, tail-call optimization is not guaranteed: see [here][tml] for a
detailed explanation with references. There is a [proposed extension][tce] that
would allow tail-call elimination in certain contexts. The compiler is still
free to optimize tail-calls [when it pleases][sco], however.
[tml]: https://mail.mozilla.org/pipermail/rust-dev/2013-April/003557.html
[sco]: http://llvm.org/docs/CodeGenerator.html#sibling-call-optimization
[tce]: https://github.com/rust-lang/rfcs/pull/81
## No constructors
Functions can serve the same purpose as constructors without adding any
language complexity.
## No copy constructors
Types which implement [`Copy`][copy], will do a standard C-like "shallow copy"
with no extra work (similar to "plain old data" in C++). It is impossible to
implement `Copy` types that require custom copy behavior. Instead, in Rust
"copy constructors" are created by implementing the [`Clone`][clone] trait,
and explicitly calling the `clone` method. Making user-defined copy operators
explicit surfaces the underlying complexity, forcing the developer to opt-in
to potentially expensive operations.
[copy]: core/marker/trait.Copy.html
[clone]: core/clone/trait.Clone.html
## No move constructors
Values of all types are moved via `memcpy`. This makes writing generic unsafe
code much simpler since assignment, passing and returning are known to never
have a side effect like unwinding.
# Syntax
## Macros require balanced delimiters
This is to make the language easier to parse for machines. Since the body of a
macro can contain arbitrary tokens, some restriction is needed to allow simple
non-macro-expanding lexers and parsers. This comes in the form of requiring
that all delimiters be balanced.
## `->` for function return type
This is to make the language easier to parse for humans, especially in the face
of higher-order functions. `fn foo<T>(f: fn(i32): i32, fn(T): U): U` is not
particularly easy to read.
## Why is `let` used to introduce variables?
Instead of the term "variable", we use "variable bindings". The
simplest way for creating a binding is by using the `let` syntax.
Other ways include `if let`, `while let`, and `match`. Bindings also
exist in function argument positions.
Bindings always happen in pattern matching positions, and it's also Rust's way
to declare mutability. One can also re-declare mutability of a binding in
pattern matching. This is useful to avoid unnecessary `mut` annotations. An
interesting historical note is that Rust comes, syntactically, most closely
from ML, which also uses `let` to introduce bindings.
See also [a long thread][alt] on renaming `let mut` to `var`.
[alt]: https://mail.mozilla.org/pipermail/rust-dev/2014-January/008319.html
## Why no `--x` or `x++`?
Preincrement and postincrement, while convenient, are also fairly complex. They
require knowledge of evaluation order, and often lead to subtle bugs and
undefined behavior in C and C++. `x = x + 1` or `x += 1` is only slightly
longer, but unambiguous.
This content has moved to [the website](https://www.rust-lang.org/).

View File

@ -1,177 +1,3 @@
% The Rust Language FAQ
## Are there any big programs written in it yet? I want to read big samples.
There aren't many large programs yet. The Rust [compiler][rustc], 60,000+ lines at the time of writing, is written in Rust. As the oldest body of Rust code it has gone through many iterations of the language, and some parts are nicer to look at than others. It may not be the best code to learn from, but [borrowck] and [resolve] were written recently.
[rustc]: https://github.com/rust-lang/rust/tree/master/src/librustc
[resolve]: https://github.com/rust-lang/rust/tree/master/src/librustc_resolve
[borrowck]: https://github.com/rust-lang/rust/tree/master/src/librustc_borrowck/borrowck
A research browser engine called [Servo][servo], currently 30,000+ lines across more than a dozen crates, will be exercising a lot of Rust's distinctive type-system and concurrency features, and integrating many native libraries.
[servo]: https://github.com/servo/servo
Some examples that demonstrate different aspects of the language:
* [sprocketnes], an NES emulator with no GC, using modern Rust conventions
* The language's general-purpose [hash] function, SipHash-2-4. Bit twiddling, OO, macros
* The standard library's [HashMap], a sendable hash map in an OO style
* The standard library's [json] module. Enums and pattern matching
[sprocketnes]: https://github.com/pcwalton/sprocketnes
[hash]: https://github.com/rust-lang/rust/tree/master/src/libcore/hash
[HashMap]: https://github.com/rust-lang/rust/tree/master/src/libstd/collections/hash
[json]: https://github.com/rust-lang/rust/blob/master/src/libserialize/json.rs
You may also be interested in browsing [trending Rust repositories][github-rust] on GitHub.
[github-rust]: https://github.com/trending?l=rust
## Is anyone using Rust in production?
Yes. For example (incomplete):
* [OpenDNS](http://labs.opendns.com/2013/10/04/zeromq-helping-us-block-malicious-domains/)
* [Skylight](http://skylight.io)
* [wit.ai](https://github.com/wit-ai/witd)
* [Codius](https://codius.org/blog/codius-rust/)
* [MaidSafe](http://maidsafe.net/)
* [Terminal.com](https://terminal.com)
## Does it run on Windows?
Yes. All development happens in lockstep on all 3 target platforms (using MinGW, not Cygwin).
## Is it OO? How do I do this thing I normally do in an OO language?
It is multi-paradigm. Not everything is shoe-horned into a single abstraction. Many things you can do in OO languages you can do in Rust, but not everything, and not always using the same abstraction you're accustomed to.
## How do you get away with "no null pointers"?
Data values in the language can only be constructed through a fixed set of initializer forms. Each of those forms requires that its inputs already be initialized. A liveness analysis ensures that local variables are initialized before use.
## What is the relationship between a module and a crate?
* A crate is a top-level compilation unit that corresponds to a single loadable object.
* A module is a (possibly nested) unit of name-management inside a crate.
* A crate contains an implicit, un-named top-level module.
* Recursive definitions can span modules, but not crates.
* Crates do not have global names, only a set of non-unique metadata tags.
* There is no global inter-crate namespace; all name management occurs within a crate.
* Using another crate binds the root of _its_ namespace into the user's namespace.
## Why is panic unwinding non-recoverable within a thread? Why not try to "catch exceptions"?
In short, because too few guarantees could be made about the dynamic environment of the catch block, as well as invariants holding in the unwound heap, to be able to safely resume; we believe that other methods of signalling and logging errors are more appropriate, with threads playing the role of a "hard" isolation boundary between separate heaps.
Rust provides, instead, three predictable and well-defined options for handling any combination of the three main categories of "catch" logic:
* Failure _logging_ is done by the integrated logging subsystem.
* _Recovery_ after a panic is done by trapping a thread panic from _outside_
the thread, where other threads are known to be unaffected.
* _Cleanup_ of resources is done by RAII-style objects with destructors.
Cleanup through RAII-style destructors is more likely to work than in catch blocks anyways, since it will be better tested (part of the non-error control paths, so executed all the time).
## Why aren't modules type-parametric?
We want to maintain the option to parameterize at runtime. We may eventually change this limitation, but initially this is how type parameters were implemented.
## Why aren't values type-parametric? Why only items?
Doing so would make type inference much more complex, and require the implementation strategy of runtime parameterization.
## Why are enumerations nominal and closed?
We don't know if there's an obvious, easy, efficient, stock-textbook way of supporting open or structural disjoint unions. We prefer to stick to language features that have an obvious and well-explored semantics.
## Why aren't channels synchronous?
There's a lot of debate on this topic; it's easy to find a proponent of default-sync or default-async communication, and there are good reasons for either. Our choice rests on the following arguments:
* Part of the point of isolating threads is to decouple threads from one another, such that assumptions in one thread do not cause undue constraints (or bugs, if violated!) in another. Temporal coupling is as real as any other kind; async-by-default relaxes the default case to only _causal_ coupling.
* Default-async supports buffering and batching communication, reducing the frequency and severity of thread-switching and inter-thread / inter-domain synchronization.
* Default-async with transmittable channels is the lowest-level building block on which more-complex synchronization topologies and strategies can be built; it is not clear to us that the majority of cases fit the 2-party full-synchronization pattern rather than some more complex multi-party or multi-stage scenario. We did not want to force all programs to pay for wiring the former assumption into all communications.
## Why are channels half-duplex (one-way)?
Similar to the reasoning about default-sync: it wires fewer assumptions into the implementation, that would have to be paid by all use-cases even if they actually require a more complex communication topology.
## Why are strings UTF-8 by default? Why not UCS2 or UCS4?
The `str` type is UTF-8 because we observe more text in the wild in this encoding particularly in network transmissions, which are endian-agnostic and we think it's best that the default treatment of I/O not involve having to recode codepoints in each direction.
This does mean that indexed access to a Unicode codepoint inside a `str` value is an O(n) operation. On the one hand, this is clearly undesirable; on the other hand, this problem is full of trade-offs and we'd like to point a few important qualifications:
* Scanning a `str` for ASCII-range codepoints can still be done safely octet-at-a-time. If you use `.as_bytes()`, pulling out a `u8` costs only O(1) and produces a value that can be cast and compared to an ASCII-range `char`. So if you're (say) line-breaking on `'\n'`, octet-based treatment still works. UTF8 was well-designed this way.
* Most "character oriented" operations on text only work under very restricted language assumptions sets such as "ASCII-range codepoints only". Outside ASCII-range, you tend to have to use a complex (non-constant-time) algorithm for determining linguistic-unit (glyph, word, paragraph) boundaries anyways. We recommend using an "honest" linguistically-aware, Unicode-approved algorithm.
* The `char` type is UCS4. If you honestly need to do a codepoint-at-a-time algorithm, it's trivial to write a `type wstr = [char]`, and unpack a `str` into it in a single pass, then work with the `wstr`. In other words: the fact that the language is not "decoding to UCS4 by default" shouldn't stop you from decoding (or re-encoding any other way) if you need to work with that encoding.
## Why are `str`s, slices, arrays etc. built-in types rather than (say) special kinds of trait/impl?
In each case there are one or more operators, literal constructors, overloaded uses or integrations with built-in control structures that make us think it would be awkward to phrase the type in terms of more-general type constructors. The same is true of, say, numbers! But this is partly an aesthetic call, and we'd be willing to look at a worked-out proposal for eliminating or rephrasing these special cases.
## Can Rust code call C code?
Yes. Calling C code from Rust is simple and exactly as efficient as calling C code from C.
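For example, here is a minimal sketch that calls the C standard library's `abs` through an `extern` block (assuming, as on most platforms, that libc is already linked):

```rust
use std::os::raw::c_int;

// Declare the foreign function; no wrapper or marshalling layer is involved.
extern "C" {
    fn abs(input: c_int) -> c_int;
}

fn main() {
    // The call itself is `unsafe` because the compiler cannot verify the
    // foreign function's contract.
    let x = unsafe { abs(-3) };
    println!("abs(-3) = {}", x);
}
```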
## Can C code call Rust code?
Yes. The Rust code has to be exposed via an `extern` declaration, which makes it C-ABI compatible. Such a function can be passed to C code as a function pointer or, if given the `#[no_mangle]` attribute to disable symbol mangling, can be called directly from C code.
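A minimal sketch (the function name is illustrative): when the crate below is built as a static or dynamic library, C code can declare `int32_t double_input(int32_t);` and call it directly:

```rust
// `extern "C"` gives the function the C ABI; `#[no_mangle]` keeps the symbol
// name `double_input` so a C declaration can refer to it.
#[no_mangle]
pub extern "C" fn double_input(input: i32) -> i32 {
    input * 2
}
```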
## Why aren't function signatures inferred? Why only local variables?
* Mechanically, it simplifies the inference algorithm; inference only requires looking at one function at a time.
* The same simplification goes double for human readers. A reader does not need an IDE running an inference algorithm across an entire crate to be able to guess at a function's argument types; they are always explicit and nearby. (A short sketch follows this list.)
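A small illustration, assuming nothing beyond the standard library: the argument and return types are written out in the signature, while the local `total` is inferred from its uses:

```rust
// The signature is fully explicit; only the body relies on inference.
fn sum_of_squares(xs: &[i32]) -> i32 {
    let mut total = 0; // inferred as i32 from the arithmetic below
    for &x in xs {
        total += x * x;
    }
    total
}

fn main() {
    println!("{}", sum_of_squares(&[1, 2, 3]));
}
```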
## Why does a type parameter need explicit trait bounds to invoke methods on it, when C++ templates do not?
* Requiring explicit bounds means that the compiler can type-check the code at the point where the type-parametric item is *defined*, rather than delaying to when its type parameters are instantiated. You know that *any* set of type parameters fulfilling the bounds listed in the API will compile. It's an enforced minimal level of documentation, and results in very clean error messages. (A short sketch follows this list.)
* Scoping of methods is also a problem. C++ needs [Koenig (argument dependent) lookup](http://en.wikipedia.org/wiki/Argument-dependent_name_lookup), which comes with its own host of problems. Explicit bounds avoid this issue: traits are explicitly imported and then used as bounds on type parameters, so there is a clear mapping from the method to its implementation (via the trait and the instantiated type).
* Related to the above point: since a parameter explicitly names its trait bounds, a single type is able to implement traits whose sets of method names overlap, cleanly and unambiguously.
* There is further discussion on [this thread on the Rust mailing list](https://mail.mozilla.org/pipermail/rust-dev/2013-September/005603.html).
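As a rough sketch of the first point (the names are illustrative), the bound `T: Display` is what lets the body type-check once, at the definition, for every type that satisfies it:

```rust
use std::fmt::Display;

// Any `T` implementing `Display` is accepted; the compiler checks this body
// here, not at each instantiation.
fn announce<T: Display>(item: T) {
    println!("announcing: {}", item);
}

fn main() {
    announce(42);
    announce("hello");
}
```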
## Will Rust implement automatic semicolon insertion, like in Go?
For simplicity, we do not plan to do so. Implementing automatic semicolon insertion for Rust would be tricky because the absence of a trailing semicolon means "return a value".
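A minimal illustration of why this matters: whether the last expression of a block carries a semicolon changes the block's value, not just its layout:

```rust
fn plus_one(x: i32) -> i32 {
    x + 1 // no trailing semicolon: this expression is the return value
}

fn main() {
    let y = {
        let x = 3;
        x + 1 // again no semicolon, so the block evaluates to 4
    };
    println!("{} {}", plus_one(4), y);
}
```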
## How do I get my program to display the output of logging macros?
**Short Answer**: Set the `RUST_LOG` environment variable to the name of your source file, sans extension.
```sh
rustc hello.rs
export RUST_LOG=hello
./hello
```
**Long Answer**: `RUST_LOG` takes a 'logging spec' that consists of a
comma-separated list of paths, where a path consists of the crate name and a
sequence of module names, each separated by double-colons. For standalone `.rs`
files, the crate is implicitly named after the source file, so in the above
example we were setting `RUST_LOG` to the name of the hello crate. Multiple paths
can be combined to control the exact logging you want to see. For example, when
debugging linking in the compiler, you might set the following:
```sh
RUST_LOG=rustc_metadata::creader,rustc::util::filesearch,rustc::back::rpath
```
For a full description, see [the logging crate][1].
## How fast is Rust?
As always, this question is difficult to answer. There's still a lot of work to
do on speed, and depending on what you're benchmarking, Rust has variable
performance.
That said, it is an explicit goal of Rust to be as fast as C++ for most things.
Language decisions are made with performance in mind, and we want Rust to be as
fast as possible. Given that Rust is built on top of LLVM, any performance
improvements in LLVM also help make Rust faster.
[1]: log/index.html
This content has moved to [the website](https://www.rust-lang.org/).

View File

@ -1,42 +1,3 @@
% The Rust Project FAQ
# What is this project's goal, in one sentence?
To design and implement a safe, concurrent, practical, static systems language.
# Why are you doing this?
Existing languages at this level of abstraction and efficiency are unsatisfactory. In particular:
* Too little attention paid to safety.
* Poor concurrency support.
* Lack of practical affordances, too dogmatic about paradigm.
# What are some non-goals?
* To employ any particularly cutting-edge technologies. Old, established techniques are better.
* To prize expressiveness, minimalism or elegance above other goals. These are desirable but subordinate goals.
* To cover the complete feature-set of C++, or any other language. It should provide majority-case features.
* To be 100% static, 100% safe, 100% reflective, or too dogmatic in any other sense. Trade-offs exist.
* To run on "every possible platform". It must eventually work without unnecessary compromises on widely-used hardware and software platforms.
# Is any part of this thing production-ready?
Yes!
# Is this a completely Mozilla-planned and orchestrated thing?
No. It started as Graydon Hoare's part-time side project in 2006 and remained so for over three years. Mozilla got involved in 2009, once the language was mature enough to run some basic tests and demonstrate the idea. Though it is sponsored by Mozilla, Rust is developed by a diverse community of enthusiasts.
# What will Mozilla use Rust for?
Mozilla intends to use Rust as a platform for prototyping experimental browser architectures. Specifically, the hope is to develop a browser that is more amenable to parallelization than existing ones, while also being less prone to common C++ coding errors that result in security exploits. The name of that project is _[Servo](http://github.com/servo/servo)_.
# Why a BSD-style permissive license rather than MPL or tri-license?
* Partly due to preference of the original developer (Graydon).
* Partly due to the fact that languages tend to have a wider audience and more diverse set of possible embeddings and end-uses than focused, coherent products such as web browsers. We'd like to appeal to as many of those potential contributors as possible.
# Why dual MIT/ASL2 license?
The Apache license includes important protection against patent aggression, but it is not compatible with the GPL, version 2. To avoid problems using Rust with GPL2, it is alternately MIT licensed.
This content has moved to [the website](https://www.rust-lang.org/).

View File

@ -1,72 +1,37 @@
% Rust Documentation
Welcome to the Rust documentation! You can use the section headings above
to jump to any particular section.
<style>
nav {
display: none;
}
</style>
# Getting Started
This is an index of the documentation included with the Rust
compiler. For more comprehensive documentation see [the
website](https://www.rust-lang.org).
If you haven't seen Rust at all yet, the first thing you should read is the
introduction to [The Rust Programming Language](book/index.html). It'll give
you a good idea of what Rust is like.
[**The Rust Programming Language**][book]. Also known as "The Book",
The Rust Programming Language is the most comprehensive resource for
all topics related to Rust, and is the primary official document of
the language.
The book provides a lengthy explanation of Rust, its syntax, and its
concepts. Upon completing the book, you'll be an intermediate Rust
developer, and will have a good grasp of the fundamental ideas behind
Rust.
[**The Rust Reference**][ref]. While Rust does not have a
specification, the reference tries to describe how it works in
detail. It tends to be out of date.
[Rust By Example][rbe] teaches you Rust through a series of small
examples.
[**Standard Library API Reference**][api]. Documentation for the
standard library.
[rbe]: http://rustbyexample.com/
[**The Rustonomicon**][nomicon]. An entire book dedicated to
explaining how to write unsafe Rust code. It is for advanced Rust
programmers.
# Language Reference
[**Compiler Error Index**][err]. Extended explanations of
the errors produced by the Rust compiler.
Rust does not have an exact specification yet, but an effort to describe as much of
the language in as much detail as possible is in [the reference](reference.html).
[book]: book/index.html
[ref]: reference.html
[api]: std/index.html
[nomicon]: nomicon/index.html
[err]: error-index.html
# Standard Library Reference
We have [API documentation for the entire standard
library](std/index.html). There's a list of crates on the left with more
specific sections, or you can use the search bar at the top to search for
something if you know its name.
# The Rustonomicon
[The Rustonomicon] is an entire book dedicated to explaining
how to write `unsafe` Rust code. It is for advanced Rust programmers.
[The Rustonomicon]: nomicon/index.html
# Tools
[Cargo](http://doc.crates.io/index.html) is the Rust package manager providing access to libraries
beyond the standard one, and its website contains lots of good documentation.
[`rustdoc`](book/documentation.html) is Rust's documentation generator, a tool that converts
annotated source code into HTML docs.
# FAQs
There are questions that are asked quite often, so we've made FAQs for them:
* [Language Design FAQ](complement-design-faq.html)
* [Language FAQ](complement-lang-faq.html)
* [Project FAQ](complement-project-faq.html)
* [How to submit a bug report](https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports)
# The Error Index
If you encounter an error while compiling your code you may be able to look it
up in the [Rust Compiler Error Index](error-index.html).
# Community Translations
Several projects have been started to translate the documentation into other
languages:
- [Russian](https://github.com/kgv/rust_book_ru)
- [Korean](https://github.com/rust-kr/doc.rust-kr.org)
- [Chinese](https://github.com/KaiserY/rust-book-chinese)
- [Spanish](https://goyox86.github.io/elpr)
- [German](https://panicbit.github.io/rustbook-de)

View File

@ -55,8 +55,8 @@ fn frob(s: &str, t: &str) -> &str; // ILLEGAL
fn get_mut(&mut self) -> &mut T; // elided
fn get_mut<'a>(&'a mut self) -> &'a mut T; // expanded
fn args<T:ToCStr>(&mut self, args: &[T]) -> &mut Command // elided
fn args<'a, 'b, T:ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command // expanded
fn args<T: ToCStr>(&mut self, args: &[T]) -> &mut Command // elided
fn args<'a, 'b, T: ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command // expanded
fn new(buf: &mut [u8]) -> BufWriter; // elided
fn new<'a>(buf: &'a mut [u8]) -> BufWriter<'a> // expanded

View File

@ -107,8 +107,8 @@ This signature of `as_str` takes a reference to a u32 with *some* lifetime, and
promises that it can produce a reference to a str that can live *just as long*.
Already we can see why this signature might be trouble. That basically implies
that we're going to find a str somewhere in the scope the reference
to the u32 originated in, or somewhere *even earlier*. That's a bit of a big
ask.
to the u32 originated in, or somewhere *even earlier*. That's a bit of a tall
order.
We then proceed to compute the string `s`, and return a reference to it. Since
the contract of our function says the reference must outlive `'a`, that's the

View File

@ -44,10 +44,11 @@ subtyping of its outputs. There are two kinds of variance in Rust:
* F is *invariant* over `T` otherwise (no subtyping relation can be derived)
(For those of you who are familiar with variance from other languages, what we
refer to as "just" variance is in fact *covariance*. Rust does not have
contravariance. Historically Rust did have some contravariance but it was
scrapped due to poor interactions with other features. If you experience
contravariance in Rust call your local compiler developer for medical advice.)
refer to as "just" variance is in fact *covariance*. Rust has *contravariance*
for functions. The future of contravariance is uncertain and it may be
scrapped. For now, `fn(T)` is contravariant in `T`, which is used in matching
methods in trait implementations to the trait definition. Traits don't have
inferred variance, so `Fn(T)` is invariant in `T`).
Some important variances:
@ -200,7 +201,7 @@ use std::cell::Cell;
struct Foo<'a, 'b, A: 'a, B: 'b, C, D, E, F, G, H> {
a: &'a A, // variant over 'a and A
b: &'b mut B, // invariant over 'b and B
b: &'b mut B, // variant over 'b and invariant over B
c: *const C, // variant over C
d: *mut D, // invariant over D
e: Vec<E>, // variant over E

View File

@ -21,7 +21,7 @@ impl<T> Drop for Vec<T> {
let elem_size = mem::size_of::<T>();
let num_bytes = elem_size * self.cap;
unsafe {
heap::deallocate(*self.ptr, num_bytes, align);
heap::deallocate(*self.ptr as *mut _, num_bytes, align);
}
}
}

View File

@ -226,7 +226,11 @@ impl<T> Iterator for RawValIter<T> {
} else {
unsafe {
let result = ptr::read(self.start);
self.start = self.start.offset(1);
self.start = if mem::size_of::<T>() == 0 {
(self.start as usize + 1) as *const _
} else {
self.start.offset(1)
};
Some(result)
}
}
@ -246,7 +250,11 @@ impl<T> DoubleEndedIterator for RawValIter<T> {
None
} else {
unsafe {
self.end = self.end.offset(-1);
self.end = if mem::size_of::<T>() == 0 {
(self.end as usize - 1) as *const _
} else {
self.end.offset(-1)
};
Some(ptr::read(self.end))
}
}

View File

@ -24,7 +24,7 @@ pub fn insert(&mut self, index: usize, elem: T) {
// ptr::copy(src, dest, len): "copy from source to dest len elems"
ptr::copy(self.ptr.offset(index as isize),
self.ptr.offset(index as isize + 1),
len - index);
self.len - index);
}
ptr::write(self.ptr.offset(index as isize), elem);
self.len += 1;
@ -44,7 +44,7 @@ pub fn remove(&mut self, index: usize) -> T {
let result = ptr::read(self.ptr.offset(index as isize));
ptr::copy(self.ptr.offset(index as isize + 1),
self.ptr.offset(index as isize),
len - index);
self.len - index);
result
}
}

View File

@ -140,8 +140,8 @@ impl<T> Iterator for RawValIter<T> {
self.start = if mem::size_of::<T>() == 0 {
(self.start as usize + 1) as *const _
} else {
self.start.offset(1);
}
self.start.offset(1)
};
Some(result)
}
}
@ -164,8 +164,8 @@ impl<T> DoubleEndedIterator for RawValIter<T> {
self.end = if mem::size_of::<T>() == 0 {
(self.end as usize - 1) as *const _
} else {
self.end.offset(-1);
}
self.end.offset(-1)
};
Some(ptr::read(self.end))
}
}

View File

@ -208,10 +208,10 @@ A _string literal_ is a sequence of any Unicode characters enclosed within two
which must be _escaped_ by a preceding `U+005C` character (`\`).
Line-break characters are allowed in string literals. Normally they represent
themselves (i.e. no translation), but as a special exception, when a `U+005C`
character (`\`) occurs immediately before the newline, the `U+005C` character,
the newline, and all whitespace at the beginning of the next line are ignored.
Thus `a` and `b` are equal:
themselves (i.e. no translation), but as a special exception, when an unescaped
`U+005C` character (`\`) occurs immediately before the newline (`U+000A`), the
`U+005C` character, the newline, and all whitespace at the beginning of the
next line are ignored. Thus `a` and `b` are equal:
```rust
let a = "foobar";
@ -2044,7 +2044,7 @@ The following configurations must be defined by the implementation:
production. For example, it controls the behavior of the standard library's
`debug_assert!` macro.
* `target_arch = "..."` - Target CPU architecture, such as `"x86"`, `"x86_64"`
`"mips"`, `"powerpc"`, `"arm"`, or `"aarch64"`.
`"mips"`, `"powerpc"`, `"powerpc64"`, `"powerpc64le"`, `"arm"`, or `"aarch64"`.
* `target_endian = "..."` - Endianness of the target CPU, either `"little"` or
`"big"`.
* `target_env = ".."` - An option provided by the compiler by default
@ -2372,10 +2372,6 @@ The currently implemented features of the reference compiler are:
Such items should not be allowed by the compiler to exist,
so if you need this there probably is a compiler bug.
* `visible_private_types` - Allows public APIs to expose otherwise private
types, e.g. as the return type of a public function.
This capability may be removed in the future.
* `allow_internal_unstable` - Allows `macro_rules!` macros to be tagged with the
`#[allow_internal_unstable]` attribute, designed
to allow `std` macros to call
@ -2390,6 +2386,13 @@ The currently implemented features of the reference compiler are:
* - `stmt_expr_attributes` - Allows attributes on expressions and
non-item statements.
* - `deprecated` - Allows using the `#[deprecated]` attribute.
* - `type_ascription` - Allows type ascription expressions `expr: Type`.
* - `abi_vectorcall` - Allows the usage of the vectorcall calling convention
(e.g. `extern "vectorcall" func fn_();`)
If a feature is promoted to a language feature, then all existing programs will
start to receive compilation warnings about `#![feature]` directives which enabled
the new feature (because the directive is no longer necessary). However, if a
@ -3677,10 +3680,10 @@ sites are:
* `let` statements where an explicit type is given.
For example, `128` is coerced to have type `i8` in the following:
For example, `42` is coerced to have type `i8` in the following:
```rust
let _: i8 = 128;
let _: i8 = 42;
```
* `static` and `const` statements (similar to `let` statements).
@ -3690,36 +3693,36 @@ sites are:
The value being coerced is the actual parameter, and it is coerced to
the type of the formal parameter.
For example, `128` is coerced to have type `i8` in the following:
For example, `42` is coerced to have type `i8` in the following:
```rust
fn bar(_: i8) { }
fn main() {
bar(128);
bar(42);
}
```
* Instantiations of struct or variant fields
For example, `128` is coerced to have type `i8` in the following:
For example, `42` is coerced to have type `i8` in the following:
```rust
struct Foo { x: i8 }
fn main() {
Foo { x: 128 };
Foo { x: 42 };
}
```
* Function results, either the final line of a block if it is not
semicolon-terminated or any expression in a `return` statement
For example, `128` is coerced to have type `i8` in the following:
For example, `42` is coerced to have type `i8` in the following:
```rust
fn foo() -> i8 {
128
42
}
```

View File

@ -1,17 +1,11 @@
{
osx-frameworks.rs-fails-otherwise-1
Memcheck:Leak
match-leak-kinds: possible
match-leak-kinds: definite,possible
fun:malloc
...
fun:__CFInitialize
fun:_ZN16ImageLoaderMachO11doImageInitERKN11ImageLoader11LinkContextE
fun:_ZN16ImageLoaderMachO16doInitializationERKN11ImageLoader11LinkContextE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader19processInitializersERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader15runInitializersERKNS_11LinkContextERNS_21InitializerTimingListE
fun:_ZN4dyld24initializeMainExecutableEv
...
}
{
@ -22,10 +16,6 @@
...
fun:__CFInitialize
fun:_ZN16ImageLoaderMachO11doImageInitERKN11ImageLoader11LinkContextE
fun:_ZN16ImageLoaderMachO16doInitializationERKN11ImageLoader11LinkContextE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader19processInitializersERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
}
{
@ -33,12 +23,10 @@
Memcheck:Leak
match-leak-kinds: possible
fun:realloc
fun:_ZL12realizeClassP10objc_class
fun:_ZL12realizeClassP10objc_class
fun:_ZN13list_array_ttIm15protocol_list_tE11attachListsEPKPS0_j
...
fun:_read_images
fun:map_images_nolock
fun:map_2_images
...
fun:_ZN4dyldL18notifyBatchPartialE17dyld_image_statesbPFPKcS0_jPK15dyld_image_infoE
fun:_ZN4dyld36registerImageStateBatchChangeHandlerE17dyld_image_statesPFPKcS0_jPK15dyld_image_infoE
fun:dyld_register_image_state_change_handler
@ -49,7 +37,7 @@
{
osx-frameworks.rs-fails-otherwise-4
Memcheck:Leak
match-leak-kinds: possible
match-leak-kinds: definite,possible
fun:calloc
...
fun:__CFInitialize
@ -61,45 +49,27 @@
{
osx-frameworks.rs-fails-otherwise-5
Memcheck:Leak
match-leak-kinds: definite
fun:calloc
...
fun:__CFInitialize
fun:_ZN16ImageLoaderMachO11doImageInitERKN11ImageLoader11LinkContextE
fun:_ZN16ImageLoaderMachO16doInitializationERKN11ImageLoader11LinkContextE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
}
{
osx-frameworks.rs-fails-otherwise-6
Memcheck:Leak
match-leak-kinds: definite
fun:malloc
fun:strdup
fun:_CFProcessPath
fun:__CFInitialize
fun:_ZN16ImageLoaderMachO11doImageInitERKN11ImageLoader11LinkContextE
fun:_ZN16ImageLoaderMachO16doInitializationERKN11ImageLoader11LinkContextE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader19processInitializersERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader15runInitializersERKNS_11LinkContextERNS_21InitializerTimingListE
fun:_ZN4dyld24initializeMainExecutableEv
fun:_ZN4dyld5_mainEPK12macho_headermiPPKcS5_S5_Pm
}
{
osx-frameworks.rs-fails-otherwise-7
Memcheck:Leak
match-leak-kinds: definite
match-leak-kinds: definite,possible
fun:malloc_zone_malloc
...
fun:__CFInitialize
fun:_ZN16ImageLoaderMachO11doImageInitERKN11ImageLoader11LinkContextE
fun:_ZN16ImageLoaderMachO16doInitializationERKN11ImageLoader11LinkContextE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader23recursiveInitializationERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader19processInitializersERKNS_11LinkContextEjRNS_21InitializerTimingListERNS_15UninitedUpwardsE
fun:_ZN11ImageLoader15runInitializersERKNS_11LinkContextERNS_21InitializerTimingListE
fun:_ZN4dyld24initializeMainExecutableEv
...
}
{
fails-since-xcode-7.2
Memcheck:Leak
match-leak-kinds: possible
fun:malloc_zone_malloc
fun:_objc_copyClassNamesForImage
fun:_ZL9protocolsv
fun:_Z9readClassP10objc_classbb
fun:gc_init
fun:_ZL33objc_initializeClassPair_internalP10objc_classPKcS0_S0_
fun:layout_string_create
fun:_ZL12realizeClassP10objc_class
fun:_ZL22copySwiftV1MangledNamePKcb
fun:_ZL22copySwiftV1MangledNamePKcb
fun:_ZL22copySwiftV1MangledNamePKcb
fun:_ZL22copySwiftV1MangledNamePKcb
}

View File

@ -25,6 +25,7 @@ even larger, and it's already uncomfortably large (6 KiB).
"""
from __future__ import print_function
import sys
from math import ceil, log
from fractions import Fraction
from collections import namedtuple
@ -33,7 +34,6 @@ N = 64 # Size of the significand field in bits
MIN_SIG = 2 ** (N - 1)
MAX_SIG = (2 ** N) - 1
# Hand-rolled fp representation without arithmetic or any other operations.
# The significand is normalized and always N bit, but the exponent is
# unrestricted in range.
@ -92,7 +92,7 @@ def error(f, e, z):
ulp_err = abs_err / Fraction(2) ** z.exp
return float(ulp_err)
LICENSE = """
HEADER = """
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
@ -102,9 +102,23 @@ LICENSE = """
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Tables of approximations of powers of ten.
//! DO NOT MODIFY: Generated by `src/etc/dec2flt_table.py`
"""
def main():
print(HEADER.strip())
print()
print_proper_powers()
print()
print_short_powers(32, 24)
print()
print_short_powers(64, 53)
def print_proper_powers():
MIN_E = -305
MAX_E = 305
e_range = range(MIN_E, MAX_E+1)
@ -114,13 +128,10 @@ def main():
err = error(1, e, z)
assert err < 0.5
powers.append(z)
typ = "([u64; {0}], [i16; {0}])".format(len(e_range))
print(LICENSE.strip())
print("// Table of approximations of powers of ten.")
print("// DO NOT MODIFY: Generated by a src/etc/dec2flt_table.py")
print("pub const MIN_E: i16 = {};".format(MIN_E))
print("pub const MAX_E: i16 = {};".format(MAX_E))
print()
typ = "([u64; {0}], [i16; {0}])".format(len(powers))
print("pub const POWERS: ", typ, " = ([", sep='')
for z in powers:
print(" 0x{:x},".format(z.sig))
@ -130,5 +141,17 @@ def main():
print("]);")
def print_short_powers(num_bits, significand_size):
max_sig = 2**significand_size - 1
# The fast path bails out for exponents >= ceil(log5(max_sig))
max_e = int(ceil(log(max_sig, 5)))
e_range = range(max_e)
typ = "[f{}; {}]".format(num_bits, len(e_range))
print("pub const F", num_bits, "_SHORT_POWERS: ", typ, " = [", sep='')
for e in e_range:
print(" 1e{},".format(e))
print("];")
if __name__ == '__main__':
main()

View File

@ -104,6 +104,7 @@ checks if the given file does not exist, for example.
"""
from __future__ import print_function
import sys
import os.path
import re
@ -160,8 +161,13 @@ class CustomHTMLParser(HTMLParser):
HTMLParser.close(self)
return self.__builder.close()
Command = namedtuple('Command', 'negated cmd args lineno')
Command = namedtuple('Command', 'negated cmd args lineno context')
class FailedCheck(Exception):
pass
class InvalidCheck(Exception):
pass
def concat_multi_lines(f):
"""returns a generator out of the file object, which
@ -196,7 +202,7 @@ def concat_multi_lines(f):
catenated = ''
if lastline is not None:
raise RuntimeError('Trailing backslash in the end of file')
print_err(lineno, line, 'Trailing backslash at the end of the file')
LINE_PATTERN = re.compile(r'''
(?<=(?<!\S)@)(?P<negated>!?)
@ -216,9 +222,10 @@ def get_commands(template):
cmd = m.group('cmd')
args = m.group('args')
if args and not args[:1].isspace():
raise RuntimeError('Invalid template syntax at line {}'.format(lineno+1))
print_err(lineno, line, 'Invalid template syntax')
continue
args = shlex.split(args)
yield Command(negated=negated, cmd=cmd, args=args, lineno=lineno+1)
yield Command(negated=negated, cmd=cmd, args=args, lineno=lineno+1, context=line)
def _flatten(node, acc):
@ -242,8 +249,7 @@ def normalize_xpath(path):
elif path.startswith('.//'):
return path
else:
raise RuntimeError('Non-absolute XPath is not supported due to \
the implementation issue.')
raise InvalidCheck('Non-absolute XPath is not supported due to implementation issues')
class CachedFiles(object):
@ -259,41 +265,40 @@ class CachedFiles(object):
self.last_path = path
return path
elif self.last_path is None:
raise RuntimeError('Tried to use the previous path in the first command')
raise InvalidCheck('Tried to use the previous path in the first command')
else:
return self.last_path
def get_file(self, path):
path = self.resolve_path(path)
try:
if path in self.files:
return self.files[path]
except KeyError:
try:
with open(os.path.join(self.root, path)) as f:
data = f.read()
except Exception as e:
raise RuntimeError('Cannot open file {!r}: {}'.format(path, e))
else:
self.files[path] = data
return data
abspath = os.path.join(self.root, path)
if not(os.path.exists(abspath) and os.path.isfile(abspath)):
raise FailedCheck('File does not exist {!r}'.format(path))
with open(abspath) as f:
data = f.read()
self.files[path] = data
return data
def get_tree(self, path):
path = self.resolve_path(path)
try:
if path in self.trees:
return self.trees[path]
except KeyError:
abspath = os.path.join(self.root, path)
if not(os.path.exists(abspath) and os.path.isfile(abspath)):
raise FailedCheck('File does not exist {!r}'.format(path))
with open(abspath) as f:
try:
f = open(os.path.join(self.root, path))
except Exception as e:
raise RuntimeError('Cannot open file {!r}: {}'.format(path, e))
try:
with f:
tree = ET.parse(f, CustomHTMLParser())
tree = ET.parse(f, CustomHTMLParser())
except Exception as e:
raise RuntimeError('Cannot parse an HTML file {!r}: {}'.format(path, e))
else:
self.trees[path] = tree
return self.trees[path]
self.trees[path] = tree
return self.trees[path]
def check_string(data, pat, regexp):
@ -311,14 +316,14 @@ def check_tree_attr(tree, path, attr, pat, regexp):
path = normalize_xpath(path)
ret = False
for e in tree.findall(path):
try:
if attr in e.attrib:
value = e.attrib[attr]
except KeyError:
continue
else:
ret = check_string(value, pat, regexp)
if ret:
break
continue
ret = check_string(value, pat, regexp)
if ret:
break
return ret
@ -341,57 +346,84 @@ def check_tree_count(tree, path, count):
path = normalize_xpath(path)
return len(tree.findall(path)) == count
def stderr(*args):
print(*args, file=sys.stderr)
def check(target, commands):
cache = CachedFiles(target)
for c in commands:
def print_err(lineno, context, err, message=None):
global ERR_COUNT
ERR_COUNT += 1
stderr("{}: {}".format(lineno, message or err))
if message and err:
stderr("\t{}".format(err))
if context:
stderr("\t{}".format(context))
ERR_COUNT = 0
def check_command(c, cache):
try:
cerr = ""
if c.cmd == 'has' or c.cmd == 'matches': # string test
regexp = (c.cmd == 'matches')
if len(c.args) == 1 and not regexp: # @has <path> = file existence
try:
cache.get_file(c.args[0])
ret = True
except RuntimeError:
except FailedCheck as err:
cerr = err.message
ret = False
elif len(c.args) == 2: # @has/matches <path> <pat> = string test
cerr = "`PATTERN` did not match"
ret = check_string(cache.get_file(c.args[0]), c.args[1], regexp)
elif len(c.args) == 3: # @has/matches <path> <pat> <match> = XML tree test
cerr = "`XPATH PATTERN` did not match"
tree = cache.get_tree(c.args[0])
pat, sep, attr = c.args[1].partition('/@')
if sep: # attribute
ret = check_tree_attr(cache.get_tree(c.args[0]), pat, attr, c.args[2], regexp)
tree = cache.get_tree(c.args[0])
ret = check_tree_attr(tree, pat, attr, c.args[2], regexp)
else: # normalized text
pat = c.args[1]
if pat.endswith('/text()'):
pat = pat[:-7]
ret = check_tree_text(cache.get_tree(c.args[0]), pat, c.args[2], regexp)
else:
raise RuntimeError('Invalid number of @{} arguments \
at line {}'.format(c.cmd, c.lineno))
raise InvalidCheck('Invalid number of @{} arguments'.format(c.cmd))
elif c.cmd == 'count': # count test
if len(c.args) == 3: # @count <path> <pat> <count> = count test
ret = check_tree_count(cache.get_tree(c.args[0]), c.args[1], int(c.args[2]))
else:
raise RuntimeError('Invalid number of @{} arguments \
at line {}'.format(c.cmd, c.lineno))
raise InvalidCheck('Invalid number of @{} arguments'.format(c.cmd))
elif c.cmd == 'valid-html':
raise RuntimeError('Unimplemented @valid-html at line {}'.format(c.lineno))
raise InvalidCheck('Unimplemented @valid-html')
elif c.cmd == 'valid-links':
raise RuntimeError('Unimplemented @valid-links at line {}'.format(c.lineno))
raise InvalidCheck('Unimplemented @valid-links')
else:
raise RuntimeError('Unrecognized @{} at line {}'.format(c.cmd, c.lineno))
raise InvalidCheck('Unrecognized @{}'.format(c.cmd))
if ret == c.negated:
raise RuntimeError('@{}{} check failed at line {}'.format('!' if c.negated else '',
c.cmd, c.lineno))
raise FailedCheck(cerr)
except FailedCheck as err:
message = '@{}{} check failed'.format('!' if c.negated else '', c.cmd)
print_err(c.lineno, c.context, err.message, message)
except InvalidCheck as err:
print_err(c.lineno, c.context, err.message)
def check(target, commands):
cache = CachedFiles(target)
for c in commands:
check_command(c, cache)
if __name__ == '__main__':
if len(sys.argv) < 3:
print >>sys.stderr, 'Usage: {} <doc dir> <template>'.format(sys.argv[0])
if len(sys.argv) != 3:
stderr('Usage: {} <doc dir> <template>'.format(sys.argv[0]))
raise SystemExit(1)
check(sys.argv[1], get_commands(sys.argv[2]))
if ERR_COUNT:
stderr("\nEncountered {} errors".format(ERR_COUNT))
raise SystemExit(1)
else:
check(sys.argv[1], get_commands(sys.argv[2]))

View File

@ -53,6 +53,8 @@ putenv('HOST_RPATH_DIR', os.path.abspath(sys.argv[10]))
putenv('TARGET_RPATH_DIR', os.path.abspath(sys.argv[11]))
putenv('RUST_BUILD_STAGE', sys.argv[12])
putenv('S', os.path.abspath(sys.argv[13]))
putenv('RUSTFLAGS', sys.argv[15])
putenv('LLVM_COMPONENTS', sys.argv[16])
putenv('PYTHON', sys.executable)
os.putenv('TARGET', target_triple)

View File

@ -18,6 +18,7 @@ components = sys.argv[2].split() # splits on whitespace
enable_static = sys.argv[3]
llvm_config = sys.argv[4]
stdcpp_name = sys.argv[5]
use_libcpp = sys.argv[6]
f.write("""// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
@ -44,11 +45,25 @@ def run(args):
sys.exit(1)
return out
def runErr(args):
proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if err:
return False, out
else:
return True, out
f.write("\n")
args = [llvm_config, '--shared-mode']
args.extend(components)
llvm_shared, out = runErr(args)
if llvm_shared:
llvm_shared = 'shared' in out
# LLVM libs
args = [llvm_config, '--libs', '--system-libs']
args.extend(components)
out = run(args)
for lib in out.strip().replace("\n", ' ').split(' '):
@ -63,8 +78,7 @@ for lib in out.strip().replace("\n", ' ').split(' '):
elif lib[0] == '-':
lib = lib.strip()[1:]
f.write("#[link(name = \"" + lib + "\"")
# LLVM libraries are all static libraries
if 'LLVM' in lib:
if not llvm_shared and 'LLVM' in lib:
f.write(", kind = \"static\"")
f.write(")]\n")
@ -83,7 +97,7 @@ else:
# Note that we use `cfg_attr` here because on MSVC the C++ standard library
# is not c++ or stdc++, but rather the linker takes care of linking the
# right standard library.
if 'stdlib=libc++' in out:
if use_libcpp != "0" or 'stdlib=libc++' in out:
f.write("#[cfg_attr(not(target_env = \"msvc\"), link(name = \"c++\"))]\n")
else:
f.write("#[cfg_attr(not(target_env = \"msvc\"), link(name = \"" + stdcpp_name + "\"))]\n")

View File

@ -271,43 +271,6 @@ def load_properties(f, interestingprops):
return props
# load all widths of want_widths, except those in except_cats
def load_east_asian_width(want_widths, except_cats):
f = "EastAsianWidth.txt"
fetch(f)
widths = {}
re1 = re.compile("^([0-9A-F]+);(\w+) +# (\w+)")
re2 = re.compile("^([0-9A-F]+)\.\.([0-9A-F]+);(\w+) +# (\w+)")
for line in fileinput.input(f):
width = None
d_lo = 0
d_hi = 0
cat = None
m = re1.match(line)
if m:
d_lo = m.group(1)
d_hi = m.group(1)
width = m.group(2)
cat = m.group(3)
else:
m = re2.match(line)
if m:
d_lo = m.group(1)
d_hi = m.group(2)
width = m.group(3)
cat = m.group(4)
else:
continue
if cat in except_cats or width not in want_widths:
continue
d_lo = int(d_lo, 16)
d_hi = int(d_hi, 16)
if width not in widths:
widths[width] = []
widths[width].append((d_lo, d_hi))
return widths
def escape_char(c):
return "'\\u{%x}'" % c if c != 0 else "'\\0'"
@ -316,12 +279,12 @@ def emit_bsearch_range_table(f):
fn bsearch_range_table(c: char, r: &'static [(char, char)]) -> bool {
use core::cmp::Ordering::{Equal, Less, Greater};
r.binary_search_by(|&(lo, hi)| {
if lo <= c && c <= hi {
Equal
if c < lo {
Greater
} else if hi < c {
Less
} else {
Greater
Equal
}
})
.is_ok()
@ -356,34 +319,25 @@ def emit_property_module(f, mod, tbl, emit):
def emit_conversions_module(f, to_upper, to_lower, to_title):
f.write("pub mod conversions {")
f.write("""
use core::cmp::Ordering::{Equal, Less, Greater};
use core::option::Option;
use core::option::Option::{Some, None};
use core::result::Result::{Ok, Err};
pub fn to_lower(c: char) -> [char; 3] {
match bsearch_case_table(c, to_lowercase_table) {
None => [c, '\\0', '\\0'],
Some(index) => to_lowercase_table[index].1
None => [c, '\\0', '\\0'],
Some(index) => to_lowercase_table[index].1,
}
}
pub fn to_upper(c: char) -> [char; 3] {
match bsearch_case_table(c, to_uppercase_table) {
None => [c, '\\0', '\\0'],
Some(index) => to_uppercase_table[index].1
Some(index) => to_uppercase_table[index].1,
}
}
fn bsearch_case_table(c: char, table: &'static [(char, [char; 3])]) -> Option<usize> {
match table.binary_search_by(|&(key, _)| {
if c == key { Equal }
else if key < c { Less }
else { Greater }
}) {
Ok(i) => Some(i),
Err(_) => None,
}
table.binary_search_by(|&(key, _)| key.cmp(&c)).ok()
}
""")
@ -398,47 +352,6 @@ def emit_conversions_module(f, to_upper, to_lower, to_title):
is_pub=False, t_type = t_type, pfun=pfun)
f.write("}\n\n")
def emit_charwidth_module(f, width_table):
f.write("pub mod charwidth {\n")
f.write(" use core::option::Option;\n")
f.write(" use core::option::Option::{Some, None};\n")
f.write(" use core::result::Result::{Ok, Err};\n")
f.write("""
fn bsearch_range_value_table(c: char, is_cjk: bool, r: &'static [(char, char, u8, u8)]) -> u8 {
use core::cmp::Ordering::{Equal, Less, Greater};
match r.binary_search_by(|&(lo, hi, _, _)| {
if lo <= c && c <= hi { Equal }
else if hi < c { Less }
else { Greater }
}) {
Ok(idx) => {
let (_, _, r_ncjk, r_cjk) = r[idx];
if is_cjk { r_cjk } else { r_ncjk }
}
Err(_) => 1
}
}
""")
f.write("""
pub fn width(c: char, is_cjk: bool) -> Option<usize> {
match c as usize {
_c @ 0 => Some(0), // null is zero width
cu if cu < 0x20 => None, // control sequences have no width
cu if cu < 0x7F => Some(1), // ASCII
cu if cu < 0xA0 => None, // more control sequences
_ => Some(bsearch_range_value_table(c, is_cjk, charwidth_table) as usize)
}
}
""")
f.write(" // character width table. Based on Markus Kuhn's free wcwidth() implementation,\n")
f.write(" // http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c\n")
emit_table(f, "charwidth_table", width_table, "&'static [(char, char, u8, u8)]", is_pub=False,
pfun=lambda x: "(%s,%s,%s,%s)" % (escape_char(x[0]), escape_char(x[1]), x[2], x[3]))
f.write("}\n\n")
def emit_norm_module(f, canon, compat, combine, norm_props):
canon_keys = canon.keys()
canon_keys.sort()
@ -459,43 +372,6 @@ def emit_norm_module(f, canon, compat, combine, norm_props):
canon_comp_keys = canon_comp.keys()
canon_comp_keys.sort()
def remove_from_wtable(wtable, val):
wtable_out = []
while wtable:
if wtable[0][1] < val:
wtable_out.append(wtable.pop(0))
elif wtable[0][0] > val:
break
else:
(wt_lo, wt_hi, width, width_cjk) = wtable.pop(0)
if wt_lo == wt_hi == val:
continue
elif wt_lo == val:
wtable_out.append((wt_lo+1, wt_hi, width, width_cjk))
elif wt_hi == val:
wtable_out.append((wt_lo, wt_hi-1, width, width_cjk))
else:
wtable_out.append((wt_lo, val-1, width, width_cjk))
wtable_out.append((val+1, wt_hi, width, width_cjk))
if wtable:
wtable_out.extend(wtable)
return wtable_out
def optimize_width_table(wtable):
wtable_out = []
w_this = wtable.pop(0)
while wtable:
if w_this[1] == wtable[0][0] - 1 and w_this[2:3] == wtable[0][2:3]:
w_tmp = wtable.pop(0)
w_this = (w_this[0], w_tmp[1], w_tmp[2], w_tmp[3])
else:
wtable_out.append(w_this)
w_this = wtable.pop(0)
wtable_out.append(w_this)
return wtable_out
if __name__ == "__main__":
r = "tables.rs"
if os.path.exists(r):

View File

@ -12,7 +12,7 @@
fun:tlv_finalize
fun:_pthread_tsd_cleanup
fun:_pthread_exit
fun:_pthread_body
...
fun:_pthread_start
fun:thread_start
}
@ -24,7 +24,7 @@
fun:tlv_finalize
fun:_pthread_tsd_cleanup
fun:_pthread_exit
fun:_pthread_body
...
fun:_pthread_start
fun:thread_start
}
@ -36,7 +36,7 @@
fun:tlv_finalize
fun:_pthread_tsd_cleanup
fun:_pthread_exit
fun:_pthread_body
...
fun:_pthread_start
fun:thread_start
}
@ -48,7 +48,7 @@
fun:tlv_finalize
fun:_pthread_tsd_cleanup
fun:_pthread_exit
fun:_pthread_body
...
fun:_pthread_start
fun:thread_start
}

View File

@ -1,10 +1,10 @@
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
Copyright (C) 2002-2014 Jason Evans <jasone@canonware.com>.
Copyright (C) 2002-2015 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved.
Copyright (C) 2009-2014 Facebook, Inc. All rights reserved.
Copyright (C) 2009-2015 Facebook, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

View File

@ -1,10 +1,262 @@
Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but internal enhancements are omitted here for
brevity (even though they are more fun to write about). Much more detail can be
found in the git revision history:
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity. Much more detail can be found in the git revision history:
https://github.com/jemalloc/jemalloc
* 4.0.4 (October 24, 2015)
This bugfix release fixes another xallocx() regression. No other regressions
have come to light in over a month, so this is likely a good starting point
for people who prefer to wait for "dot one" releases with all the major issues
shaken out.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
allocations that have been randomly assigned an offset of 0 when
--enable-cache-oblivious configure option is enabled.
* 4.0.3 (September 24, 2015)
This bugfix release continues the trend of xallocx() and heap profiling fixes.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
allocations when --enable-cache-oblivious configure option is enabled.
- Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
when resizing from/to a size class that is not a multiple of the chunk size.
- Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
profile dumping started.
- Work around a potentially bad thread-specific data initialization
interaction with NPTL (glibc's pthreads implementation).
* 4.0.2 (September 21, 2015)
This bugfix release addresses a few bugs specific to heap profiling.
Bug fixes:
- Fix ixallocx_prof_sample() to never modify nor create sampled small
allocations. xallocx() is in general incapable of moving small allocations,
so this fix removes buggy code without loss of generality.
- Fix irallocx_prof_sample() to always allocate large regions, even when
alignment is non-zero.
- Fix prof_alloc_rollback() to read tdata from thread-specific data rather
than dereferencing a potentially invalid tctx.
* 4.0.1 (September 15, 2015)
This is a bugfix release that is somewhat high risk due to the amount of
refactoring required to address deep xallocx() problems. As a side effect of
these fixes, xallocx() now tries harder to partially fulfill requests for
optional extra space. Note that a couple of minor heap profiling
optimizations are included, but these are better thought of as performance
fixes that were integral to discovering most of the other bugs.
Optimizations:
- Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
fast path when heap profiling is enabled. Additionally, split a special
case out into arena_prof_tctx_reset(), which also avoids chunk metadata
reads.
- Optimize irallocx_prof() to optimistically update the sampler state. The
prior implementation appears to have been a holdover from when
rallocx()/xallocx() functionality was combined as rallocm().
Bug fixes:
- Fix TLS configuration such that it is enabled by default for platforms on
which it works correctly.
- Fix arenas_cache_cleanup() and arena_get_hard() to handle
allocation/deallocation within the application's thread-specific data
cleanup functions even after arenas_cache is torn down.
- Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
- Fix chunk purge hook calls for in-place huge shrinking reallocation to
specify the old chunk size rather than the new chunk size. This bug caused
no correctness issues for the default chunk purge function, but was
visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
- Fix heap profiling bugs:
+ Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This bug
could cause data structure corruption that would most likely result in a
segfault.
+ Fix irealloc_prof() to prof_alloc_rollback() on OOM.
+ Make one call to prof_active_get_unlocked() per allocation event, and use
the result throughout the relevant functions that handle an allocation
event. Also add a missing check in prof_realloc(). These fixes protect
allocation events against concurrent prof_active changes.
+ Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
in the correct order.
+ Fix prof_realloc() to call prof_free_sampled_object() after calling
prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
the same, the tctx could have been prematurely destroyed.
- Fix portability bugs:
+ Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps. This affected systems with page sizes greater than 8
KiB.
+ Rename index_t to szind_t to avoid an existing type on Solaris.
+ Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
match glibc and avoid compilation errors when including both
jemalloc/jemalloc.h and malloc.h in C++ code.
+ Don't assume that /bin/sh is appropriate when running size_classes.sh
during configuration.
+ Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
+ Link tests to librt if it contains clock_gettime(2).
* 4.0.0 (August 17, 2015)
This version contains many speed and space optimizations, both minor and
major. The major themes are generalization, unification, and simplification.
Although many of these optimizations cause no visible behavior change, their
cumulative effect is substantial.
New features:
- Normalize size class spacing to be consistent across the complete size
range. By default there are four size classes per size doubling, but this
is now configurable via the --with-lg-size-class-group option. Also add the
--with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
--with-lg-tiny-min options, which can be used to tweak page and size class
settings. Impacts:
+ Worst case performance for incrementally growing/shrinking reallocation
is improved because there are far fewer size classes, and therefore
copying happens less often.
+ Internal fragmentation is limited to 20% for all but the smallest size
classes (those less than four times the quantum). (1B + 4 KiB)
and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.
+ Chunk fragmentation tends to be lower because there are fewer distinct run
sizes to pack.
- Add support for explicit tcaches. The "tcache.create", "tcache.flush", and
"tcache.destroy" mallctls control tcache lifetime and flushing, and the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
control which tcache is used for each operation.
- Implement per thread heap profiling, as well as the ability to
enable/disable heap profiling on a per thread basis. Add the "prof.reset",
"prof.lg_sample", "thread.prof.name", "thread.prof.active",
"opt.prof_thread_active_init", "prof.thread_active_init", and
"thread.prof.active" mallctls.
- Add support for per arena application-specified chunk allocators, configured
via the "arena.<i>.chunk_hooks" mallctl.
- Refactor huge allocation to be managed by arenas, so that arenas now
function as general purpose independent allocators. This is important in
the context of user-specified chunk allocators, aside from the scalability
benefits. Related new statistics:
+ The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
"stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
mallctls provide high level per arena huge allocation statistics.
+ The "arenas.nhchunks", "arenas.hchunk.<i>.size",
"stats.arenas.<i>.hchunks.<j>.nmalloc",
"stats.arenas.<i>.hchunks.<j>.ndalloc",
"stats.arenas.<i>.hchunks.<j>.nrequests", and
"stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
statistics.
- Add the 'util' column to malloc_stats_print() output, which reports the
proportion of available regions that are currently in use for each small
size class.
- Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
mallctl), so that it is possible to separately enable junk filling for
allocation versus deallocation.
- Add the jemalloc-config script, which provides information about how
jemalloc was configured, and how to integrate it into application builds.
- Add metadata statistics, which are accessible via the "stats.metadata",
"stats.arenas.<i>.metadata.mapped", and
"stats.arenas.<i>.metadata.allocated" mallctls.
- Add the "stats.resident" mallctl, which reports the upper limit of
physically resident memory mapped by the allocator.
- Add per arena control over unused dirty page purging, via the
"arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
"stats.arenas.<i>.lg_dirty_mult" mallctls.
- Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
feature on/off during program execution.
- Add sdallocx(), which implements sized deallocation. The primary
optimization over dallocx() is the removal of a metadata read, which often
suffers an L1 cache miss.
- Add missing header includes in jemalloc/jemalloc.h, so that applications
only have to #include <jemalloc/jemalloc.h>.
- Add support for additional platforms:
+ Bitrig
+ Cygwin
+ DragonFlyBSD
+ iOS
+ OpenBSD
+ OpenRISC/or1k
Optimizations:
- Maintain dirty runs in per arena LRUs rather than in per arena trees of
dirty-run-containing chunks. In practice this change significantly reduces
dirty page purging volume.
- Integrate whole chunks into the unused dirty page purging machinery. This
reduces the cost of repeated huge allocation/deallocation, because it
effectively introduces a cache of chunks.
- Split the arena chunk map into two separate arrays, in order to increase
cache locality for the frequently accessed bits.
- Move small run metadata out of runs, into arena chunk headers. This reduces
run fragmentation, smaller runs reduce external fragmentation for small size
classes, and packed (less uniformly aligned) metadata layout improves CPU
cache set distribution.
- Randomly distribute large allocation base pointer alignment relative to page
boundaries in order to more uniformly utilize CPU cache sets. This can be
disabled via the --disable-cache-oblivious configure option, and queried via
the "config.cache_oblivious" mallctl.
- Micro-optimize the fast paths for the public API functions.
- Refactor thread-specific data to reside in a single structure. This assures
that only a single TLS read is necessary per call into the public API.
- Implement in-place huge allocation growing and shrinking.
- Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
additional optimizations that reduce maximum lookup depth to one or two
levels. This resolves what was a concurrency bottleneck for per arena huge
allocation, because a global data structure is critical for determining
which arenas own which huge allocations.
Incompatible changes:
- Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
warnings by default.
- Assure that the constness of malloc_usable_size()'s return type matches that
of the system implementation.
- Change the heap profile dump format to support per thread heap profiling,
rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
result, the bundled jeprof must now be used rather than the upstream
(gperftools) pprof.
- Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
internally deadlock on some platforms.
- Change the "arenas.nlruns" mallctl type from size_t to unsigned.
- Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
"stats.arenas.<i>.bins.<j>.curregs".
- Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
- Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.
Removed features:
- Remove the *allocm() API, which is superseded by the *allocx() API.
- Remove the --enable-dss options, and make dss non-optional on all platforms
which support sbrk(2).
- Remove the "arenas.purge" mallctl, which was obsoleted by the
"arena.<i>.purge" mallctl in 3.1.0.
- Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
detects whether it is running inside Valgrind.
- Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
"stats.huge.ndalloc" mallctls.
- Remove the --enable-mremap option.
- Remove the "stats.chunks.current", "stats.chunks.total", and
"stats.chunks.high" mallctls.
Bug fixes:
- Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was first released in 3.5.0.
- Fix OOM handling in memalign() and valloc(). A variant of this bug existed
in all releases since 2.0.0, which introduced these functions.
- Fix an OOM-related regression in arena_tcache_fill_small(), which could
cause cache corruption on OOM. This regression was present in all releases
from 2.2.0 through 3.6.0.
- Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
calloc(), and realloc() when profiling is enabled.
- Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
- Fix fallback lg_floor() implementations to handle extremely large inputs.
- Ensure the default purgeable zone is after the default zone on OS X.
- Fix latent bugs in atomic_*().
- Fix the "arena.<i>.dss" mallctl to handle read-only calls.
- Fix tls_model configuration to enable the initial-exec model when possible.
- Mark malloc_conf as a weak symbol so that the application can override it.
- Correctly detect glibc's adaptive pthread mutexes.
- Fix the --without-export configure option.
* 3.6.0 (March 31, 2014)
This version contains a critical bug fix for a regression present in 3.5.0 and
@ -21,7 +273,7 @@ found in the git revision history:
backtracing to be reliable.
- Use dss allocation precedence for huge allocations as well as small/large
allocations.
- Fix test assertion failure message formatting. This bug did not manifect on
- Fix test assertion failure message formatting. This bug did not manifest on
x86_64 systems because of implementation subtleties in va_list.
- Fix inconsequential test failures for hash and SFMT code.
@ -516,7 +768,7 @@ found in the git revision history:
- Make it possible for the application to manually flush a thread's cache, via
the "tcache.flush" mallctl.
- Base maximum dirty page count on proportion of active memory.
- Compute various addtional run-time statistics, including per size class
- Compute various additional run-time statistics, including per size class
statistics for large objects.
- Expose malloc_stats_print(), which can be called repeatedly by the
application.

View File

@ -107,15 +107,15 @@ any of the following arguments (not a definitive list) to 'configure':
there are interactions between the various coverage targets, so it is
usually advisable to run 'make clean' between repeated code coverage runs.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a substantial
performance hit.
--disable-stats
Disable statistics gathering functionality. See the "opt.stats_print"
option documentation for usage details.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a minor
performance hit.
--enable-prof
Enable heap profiling and leak detection functionality. See the "opt.prof"
option documentation for usage details. When enabled, there are several
@ -185,10 +185,106 @@ any of the following arguments (not a definitive list) to 'configure':
thread-local variables via the __thread keyword. If TLS is available,
jemalloc uses it for several purposes.
--disable-cache-oblivious
Disable cache-oblivious large allocation alignment for large allocation
requests with no alignment constraints. If this feature is disabled, all
large allocations are page-aligned as an implementation artifact, which can
severely harm CPU cache utilization. However, the cache-oblivious layout
comes at the cost of one extra page per large allocation, which in the
most extreme case increases physical memory usage for the 16 KiB size class
to 20 KiB.
--with-xslroot=<path>
Specify where to find DocBook XSL stylesheets when building the
documentation.
--with-lg-page=<lg-page>
Specify the base 2 log of the system page size. This option is only useful
when cross compiling, since the configure script automatically determines
the host's page size by default.
--with-lg-page-sizes=<lg-page-sizes>
Specify the comma-separated base 2 logs of the page sizes to support. This
option may be useful when cross-compiling in combination with
--with-lg-page, but its primary use case is for integration with FreeBSD's
libc, wherein jemalloc is embedded.
--with-lg-size-class-group=<lg-size-class-group>
Specify the base 2 log of how many size classes to use for each doubling in
size. By default jemalloc uses <lg-size-class-group>=2, which results in
e.g. the following size classes:
[...], 64,
80, 96, 112, 128,
160, [...]
<lg-size-class-group>=3 results in e.g. the following size classes:
[...], 64,
72, 80, 88, 96, 104, 112, 120, 128,
144, [...]
The minimal <lg-size-class-group>=0 causes jemalloc to only provide size
classes that are powers of 2:
[...],
64,
128,
256,
[...]
An implementation detail currently limits the total number of small size
classes to 255, and a compilation error will result if the
<lg-size-class-group> you specify cannot be supported. The limit is
roughly <lg-size-class-group>=4, depending on page size.
--with-lg-quantum=<lg-quantum>
Specify the base 2 log of the minimum allocation alignment. jemalloc needs
to know the minimum alignment that meets the following C standard
requirement (quoted from the April 12, 2011 draft of the C11 standard):
The pointer returned if the allocation succeeds is suitably aligned so
that it may be assigned to a pointer to any type of object with a
fundamental alignment requirement and then used to access such an object
or an array of such objects in the space allocated [...]
This setting is architecture-specific, and although jemalloc includes known
safe values for the most commonly used modern architectures, there is a
wrinkle related to GNU libc (glibc) that may impact your choice of
<lg-quantum>. On most modern architectures, this mandates 16-byte alignment
(<lg-quantum>=4), but the glibc developers chose not to meet this
requirement for performance reasons. An old discussion can be found at
https://sourceware.org/bugzilla/show_bug.cgi?id=206 . Unlike glibc,
jemalloc does follow the C standard by default (caveat: jemalloc
technically cheats if --with-lg-tiny-min is smaller than
--with-lg-quantum), but the fact that Linux systems already work around
this allocator noncompliance means that it is generally safe in practice to
let jemalloc's minimum alignment follow glibc's lead. If you specify
--with-lg-quantum=3 during configuration, jemalloc will provide additional
size classes that are not 16-byte-aligned (24, 40, and 56, assuming
--with-lg-size-class-group=2).
--with-lg-tiny-min=<lg-tiny-min>
Specify the base 2 log of the minimum tiny size class to support. Tiny
size classes are powers of 2 less than the quantum, and are only
incorporated if <lg-tiny-min> is less than <lg-quantum> (see
--with-lg-quantum). Tiny size classes technically violate the C standard
requirement for minimum alignment, and crashes could conceivably result if
the compiler were to generate instructions that made alignment assumptions,
both because illegal instruction traps could result, and because accesses
could straddle page boundaries and cause segmentation faults due to
accessing unmapped addresses.
The default of <lg-tiny-min>=3 works well in practice even on architectures
that technically require 16-byte alignment, probably for the same reason
--with-lg-quantum=3 works. Smaller tiny size classes can, and will, cause
crashes (see https://bugzilla.mozilla.org/show_bug.cgi?id=691003 for an
example).
This option is rarely useful, and is mainly provided as documentation of a
subtle implementation detail. If you do use this option, specify a
value in [3, ..., <lg-quantum>].
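As a rough, non-authoritative sketch of how the options above combine, the following configure invocation targets a hypothetical system with 4 KiB pages while keeping the default 16-byte quantum and two size classes per doubling; every value is illustrative, and --with-lg-page is normally only needed when cross compiling:
```sh
# Illustrative settings only (run from the jemalloc source directory):
#   --with-lg-page=12             2^12 = 4096-byte pages; omit for native builds
#   --with-lg-quantum=4           16-byte minimum alignment (C11-compliant default)
#   --with-lg-size-class-group=2  four size classes per doubling, e.g. 80/96/112/128
./configure --with-lg-page=12 --with-lg-quantum=4 --with-lg-size-class-group=2
make
make install
```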
The following environment variables (not a definitive list) impact configure's
behavior:


@ -28,6 +28,7 @@ CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
@ -48,8 +49,10 @@ cfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@)
cfgoutputs_out := @cfgoutputs_out@
enable_autogen := @enable_autogen@
enable_code_coverage := @enable_code_coverage@
enable_prof := @enable_prof@
enable_valgrind := @enable_valgrind@
enable_zone_allocator := @enable_zone_allocator@
MALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF
DSO_LDFLAGS = @DSO_LDFLAGS@
SOREV = @SOREV@
PIC_CFLAGS = @PIC_CFLAGS@
@ -73,16 +76,17 @@ endif
LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)
# Lists of files.
BINS := $(srcroot)bin/pprof $(objroot)bin/jemalloc.sh
BINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof
C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h
C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c \
$(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \
$(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c \
$(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \
$(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \
$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/prof.c \
$(srcroot)src/quarantine.c $(srcroot)src/rtree.c $(srcroot)src/stats.c \
$(srcroot)src/tcache.c $(srcroot)src/util.c $(srcroot)src/tsd.c
$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/pages.c \
$(srcroot)src/prof.c $(srcroot)src/quarantine.c $(srcroot)src/rtree.c \
$(srcroot)src/stats.c $(srcroot)src/tcache.c $(srcroot)src/util.c \
$(srcroot)src/tsd.c
ifeq ($(enable_valgrind), 1)
C_SRCS += $(srcroot)src/valgrind.c
endif
@ -104,20 +108,23 @@ endif
PC := $(objroot)jemalloc.pc
MAN3 := $(objroot)doc/jemalloc$(install_suffix).3
DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.3)
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.3)
DOCS := $(DOCS_HTML) $(DOCS_MAN3)
C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
$(srcroot)test/src/btalloc_1.c $(srcroot)test/src/math.c \
$(srcroot)test/src/mtx.c $(srcroot)test/src/SFMT.c \
$(srcroot)test/src/test.c $(srcroot)test/src/thd.c \
$(srcroot)test/src/timer.c
$(srcroot)test/src/mtx.c $(srcroot)test/src/mq.c \
$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c
TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/bitmap.c \
$(srcroot)test/unit/ckh.c \
$(srcroot)test/unit/hash.c \
$(srcroot)test/unit/junk.c \
$(srcroot)test/unit/junk_alloc.c \
$(srcroot)test/unit/junk_free.c \
$(srcroot)test/unit/lg_chunk.c \
$(srcroot)test/unit/mallctl.c \
$(srcroot)test/unit/math.c \
$(srcroot)test/unit/mq.c \
@ -134,6 +141,7 @@ TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/rb.c \
$(srcroot)test/unit/rtree.c \
$(srcroot)test/unit/SFMT.c \
$(srcroot)test/unit/size_classes.c \
$(srcroot)test/unit/stats.c \
$(srcroot)test/unit/tsd.c \
$(srcroot)test/unit/util.c \
@ -143,6 +151,7 @@ TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
$(srcroot)test/integration/sdallocx.c \
$(srcroot)test/integration/mallocx.c \
$(srcroot)test/integration/MALLOCX_ARENA.c \
$(srcroot)test/integration/overflow.c \
$(srcroot)test/integration/posix_memalign.c \
$(srcroot)test/integration/rallocx.c \
$(srcroot)test/integration/thread_arena.c \
@ -178,10 +187,10 @@ all: build_lib
dist: build_doc
$(srcroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(objroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(XSLTPROC) -o $@ $(objroot)doc/html.xsl $<
$(srcroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(objroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $<
build_doc_html: $(DOCS_HTML)
@ -257,15 +266,15 @@ $(STATIC_LIBS):
$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
@ -335,18 +344,23 @@ check_unit_dir:
@mkdir -p $(objroot)test/unit
check_integration_dir:
@mkdir -p $(objroot)test/integration
check_stress_dir:
stress_dir:
@mkdir -p $(objroot)test/stress
check_dir: check_unit_dir check_integration_dir check_stress_dir
check_dir: check_unit_dir check_integration_dir
check_unit: tests_unit check_unit_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
check_integration: tests_integration check_integration_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
check_stress: tests_stress check_stress_dir
stress: tests_stress stress_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir
$(SHELL) $(objroot)test/test.sh $(TESTS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir check_integration_prof
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit
@ -360,7 +374,7 @@ coverage_integration: check_integration
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)
coverage_stress: check_stress
coverage_stress: stress
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
@ -405,7 +419,9 @@ clean:
rm -f $(objroot)*.gcov.*
distclean: clean
rm -f $(objroot)bin/jemalloc-config
rm -f $(objroot)bin/jemalloc.sh
rm -f $(objroot)bin/jeprof
rm -f $(objroot)config.log
rm -f $(objroot)config.status
rm -f $(objroot)config.stamp
@ -414,7 +430,7 @@ distclean: clean
relclean: distclean
rm -f $(objroot)configure
rm -f $(srcroot)VERSION
rm -f $(objroot)VERSION
rm -f $(DOCS_HTML)
rm -f $(DOCS_MAN3)


@ -1 +0,0 @@
0.12.0-15684-gc30b771ad9d44ab84f8c88b80c25fcfde2433126


@ -0,0 +1,79 @@
#!/bin/sh
usage() {
cat <<EOF
Usage:
@BINDIR@/jemalloc-config <option>
Options:
--help | -h : Print usage.
--version : Print jemalloc version.
--revision : Print shared library revision number.
--config : Print configure options used to build jemalloc.
--prefix : Print installation directory prefix.
--bindir : Print binary installation directory.
--datadir : Print data installation directory.
--includedir : Print include installation directory.
--libdir : Print library installation directory.
--mandir : Print manual page installation directory.
--cc : Print compiler used to build jemalloc.
--cflags : Print compiler flags used to build jemalloc.
--cppflags : Print preprocessor flags used to build jemalloc.
--ldflags : Print library flags used to build jemalloc.
--libs : Print libraries jemalloc was linked against.
EOF
}
prefix="@prefix@"
exec_prefix="@exec_prefix@"
case "$1" in
--help | -h)
usage
exit 0
;;
--version)
echo "@jemalloc_version@"
;;
--revision)
echo "@rev@"
;;
--config)
echo "@CONFIG@"
;;
--prefix)
echo "@PREFIX@"
;;
--bindir)
echo "@BINDIR@"
;;
--datadir)
echo "@DATADIR@"
;;
--includedir)
echo "@INCLUDEDIR@"
;;
--libdir)
echo "@LIBDIR@"
;;
--mandir)
echo "@MANDIR@"
;;
--cc)
echo "@CC@"
;;
--cflags)
echo "@CFLAGS@"
;;
--cppflags)
echo "@CPPFLAGS@"
;;
--ldflags)
echo "@LDFLAGS@ @EXTRA_LDFLAGS@"
;;
--libs)
echo "@LIBS@"
;;
*)
usage
exit 1
esac
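As a usage sketch (assuming jemalloc has already been installed and its bin directory is on PATH), jemalloc-config can supply the pieces of an application's compile and link line; app.c and the installation prefix are placeholders:
```sh
# Hypothetical build of app.c against an installed jemalloc; directories and
# extra libraries are queried from jemalloc-config rather than hard-coded.
cc app.c -o app \
  -I"$(jemalloc-config --includedir)" \
  -L"$(jemalloc-config --libdir)" \
  -Wl,-rpath,"$(jemalloc-config --libdir)" \
  -ljemalloc $(jemalloc-config --libs)
```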

src/jemalloc/bin/pprof → src/jemalloc/bin/jeprof.in Executable file → Normal file

@ -40,28 +40,28 @@
#
# Examples:
#
# % tools/pprof "program" "profile"
# % tools/jeprof "program" "profile"
# Enters "interactive" mode
#
# % tools/pprof --text "program" "profile"
# % tools/jeprof --text "program" "profile"
# Generates one line per procedure
#
# % tools/pprof --gv "program" "profile"
# % tools/jeprof --gv "program" "profile"
# Generates annotated call-graph and displays via "gv"
#
# % tools/pprof --gv --focus=Mutex "program" "profile"
# % tools/jeprof --gv --focus=Mutex "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex"
#
# % tools/pprof --gv --focus=Mutex --ignore=string "program" "profile"
# % tools/jeprof --gv --focus=Mutex --ignore=string "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex"
# and does not match "string"
#
# % tools/pprof --list=IBF_CheckDocid "program" "profile"
# % tools/jeprof --list=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one
# sample that match the --list=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each line.
#
# % tools/pprof --disasm=IBF_CheckDocid "program" "profile"
# % tools/jeprof --disasm=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one
# sample that match the --disasm=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each PC value.
@ -72,10 +72,11 @@ use strict;
use warnings;
use Getopt::Long;
my $JEPROF_VERSION = "@jemalloc_version@";
my $PPROF_VERSION = "2.0";
# These are the object tools we use which can come from a
# user-specified location using --tools, from the PPROF_TOOLS
# user-specified location using --tools, from the JEPROF_TOOLS
# environment variable, or from the environment.
my %obj_tool_map = (
"objdump" => "objdump",
@ -144,13 +145,13 @@ my $sep_address = undef;
sub usage_string {
return <<EOF;
Usage:
pprof [options] <program> <profiles>
jeprof [options] <program> <profiles>
<profiles> is a space separated list of profile names.
pprof [options] <symbolized-profiles>
jeprof [options] <symbolized-profiles>
<symbolized-profiles> is a list of profile files where each file contains
the necessary symbol mappings as well as profile data (likely generated
with --raw).
pprof [options] <profile>
jeprof [options] <profile>
<profile> is a remote form. Symbols are obtained from host:port$SYMBOL_PAGE
Each name can be:
@ -161,9 +162,9 @@ pprof [options] <profile>
$GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall,
$CENSUSPROFILE_PAGE, or /pprof/filteredprofile.
For instance:
pprof http://myserver.com:80$HEAP_PAGE
jeprof http://myserver.com:80$HEAP_PAGE
If /<service> is omitted, the service defaults to $PROFILE_PAGE (cpu profiling).
pprof --symbols <program>
jeprof --symbols <program>
Maps addresses to symbol names. In this mode, stdin should be a
list of library mappings, in the same format as is found in the heap-
and cpu-profile files (this loosely matches that of /proc/self/maps
@ -202,7 +203,7 @@ Output type:
--pdf Generate PDF to stdout
--svg Generate SVG to stdout
--gif Generate GIF to stdout
--raw Generate symbolized pprof data (useful with remote fetch)
--raw Generate symbolized jeprof data (useful with remote fetch)
Heap-Profile Options:
--inuse_space Display in-use (mega)bytes [default]
@ -236,34 +237,34 @@ Miscellaneous:
--version Version information
Environment Variables:
PPROF_TMPDIR Profiles directory. Defaults to \$HOME/pprof
PPROF_TOOLS Prefix for object tools pathnames
JEPROF_TMPDIR Profiles directory. Defaults to \$HOME/jeprof
JEPROF_TOOLS Prefix for object tools pathnames
Examples:
pprof /bin/ls ls.prof
jeprof /bin/ls ls.prof
Enters "interactive" mode
pprof --text /bin/ls ls.prof
jeprof --text /bin/ls ls.prof
Outputs one line per procedure
pprof --web /bin/ls ls.prof
jeprof --web /bin/ls ls.prof
Displays annotated call-graph in web browser
pprof --gv /bin/ls ls.prof
jeprof --gv /bin/ls ls.prof
Displays annotated call-graph via 'gv'
pprof --gv --focus=Mutex /bin/ls ls.prof
jeprof --gv --focus=Mutex /bin/ls ls.prof
Restricts to code paths including a .*Mutex.* entry
pprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof
jeprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof
Code paths including Mutex but not string
pprof --list=getdir /bin/ls ls.prof
jeprof --list=getdir /bin/ls ls.prof
(Per-line) annotated source listing for getdir()
pprof --disasm=getdir /bin/ls ls.prof
jeprof --disasm=getdir /bin/ls ls.prof
(Per-PC) annotated disassembly for getdir()
pprof http://localhost:1234/
jeprof http://localhost:1234/
Enters "interactive" mode
pprof --text localhost:1234
jeprof --text localhost:1234
Outputs one line per procedure for localhost:1234
pprof --raw localhost:1234 > ./local.raw
pprof --text ./local.raw
jeprof --raw localhost:1234 > ./local.raw
jeprof --text ./local.raw
Fetches a remote profile for later analysis and then
analyzes it in text mode.
EOF
@ -271,7 +272,8 @@ EOF
sub version_string {
return <<EOF
pprof (part of gperftools $PPROF_VERSION)
jeprof (part of jemalloc $JEPROF_VERSION)
based on pprof (part of gperftools $PPROF_VERSION)
Copyright 1998-2007 Google Inc.
@ -294,8 +296,8 @@ sub Init() {
# Setup tmp-file name and handler to clean it up.
# We do this in the very beginning so that we can use
# error() and cleanup() function anytime here after.
$main::tmpfile_sym = "/tmp/pprof$$.sym";
$main::tmpfile_ps = "/tmp/pprof$$";
$main::tmpfile_sym = "/tmp/jeprof$$.sym";
$main::tmpfile_ps = "/tmp/jeprof$$";
$main::next_tmpfile = 0;
$SIG{'INT'} = \&sighandler;
@ -404,7 +406,7 @@ sub Init() {
"edgefraction=f" => \$main::opt_edgefraction,
"maxdegree=i" => \$main::opt_maxdegree,
"focus=s" => \$main::opt_focus,
"thread=i" => \$main::opt_thread,
"thread=s" => \$main::opt_thread,
"ignore=s" => \$main::opt_ignore,
"scale=i" => \$main::opt_scale,
"heapcheck" => \$main::opt_heapcheck,
@ -707,7 +709,8 @@ sub Main() {
}
if (defined($data->{threads})) {
foreach my $thread (sort { $a <=> $b } keys(%{$data->{threads}})) {
if (!defined($main::opt_thread) || $main::opt_thread == $thread) {
if (defined($main::opt_thread) &&
($main::opt_thread eq '*' || $main::opt_thread == $thread)) {
my $thread_profile = $data->{threads}{$thread};
FilterAndPrint($thread_profile, $symbols, $libs, $thread);
}
@ -801,14 +804,14 @@ sub InteractiveMode {
$| = 1; # Make output unbuffered for interactive mode
my ($orig_profile, $symbols, $libs, $total) = @_;
print STDERR "Welcome to pprof! For help, type 'help'.\n";
print STDERR "Welcome to jeprof! For help, type 'help'.\n";
# Use ReadLine if it's installed and input comes from a console.
if ( -t STDIN &&
!ReadlineMightFail() &&
defined(eval {require Term::ReadLine}) ) {
my $term = new Term::ReadLine 'pprof';
while ( defined ($_ = $term->readline('(pprof) '))) {
my $term = new Term::ReadLine 'jeprof';
while ( defined ($_ = $term->readline('(jeprof) '))) {
$term->addhistory($_) if /\S/;
if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) {
last; # exit when we get an interactive command to quit
@ -816,7 +819,7 @@ sub InteractiveMode {
}
} else { # don't have readline
while (1) {
print STDERR "(pprof) ";
print STDERR "(jeprof) ";
$_ = <STDIN>;
last if ! defined $_ ;
s/\r//g; # turn windows-looking lines into unix-looking lines
@ -1009,7 +1012,7 @@ sub ProcessProfile {
sub InteractiveHelpMessage {
print STDERR <<ENDOFHELP;
Interactive pprof mode
Interactive jeprof mode
Commands:
gv
@ -1052,7 +1055,7 @@ Commands:
Generates callgrind file. If no filename is given, kcachegrind is called.
help - This listing
quit or ^D - End pprof
quit or ^D - End jeprof
For commands that accept optional -ignore tags, samples where any routine in
the stack trace matches the regular expression in any of the -ignore
@ -1497,7 +1500,7 @@ h1 {
}
</style>
<script type="text/javascript">
function pprof_toggle_asm(e) {
function jeprof_toggle_asm(e) {
var target;
if (!e) e = window.event;
if (e.target) target = e.target;
@ -1766,7 +1769,7 @@ sub PrintSource {
if ($html) {
printf $output (
"<h1>%s</h1>%s\n<pre onClick=\"pprof_toggle_asm()\">\n" .
"<h1>%s</h1>%s\n<pre onClick=\"jeprof_toggle_asm()\">\n" .
"Total:%6s %6s (flat / cumulative %s)\n",
HtmlEscape(ShortFunctionName($routine)),
HtmlEscape(CleanFileName($filename)),
@ -3432,7 +3435,7 @@ sub FetchDynamicProfile {
$profile_file .= $suffix;
}
my $profile_dir = $ENV{"PPROF_TMPDIR"} || ($ENV{HOME} . "/pprof");
my $profile_dir = $ENV{"JEPROF_TMPDIR"} || ($ENV{HOME} . "/jeprof");
if (! -d $profile_dir) {
mkdir($profile_dir)
|| die("Unable to create profile directory $profile_dir: $!\n");
@ -3648,7 +3651,7 @@ BEGIN {
# Reads the top, 'header' section of a profile, and returns the last
# line of the header, commonly called a 'header line'. The header
# section of a profile consists of zero or more 'command' lines that
# are instructions to pprof, which pprof executes when reading the
# are instructions to jeprof, which jeprof executes when reading the
# header. All 'command' lines start with a %. After the command
# lines is the 'header line', which is a profile-specific line that
# indicates what type of profile it is, and perhaps other global
@ -4255,10 +4258,10 @@ sub ReadSynchProfile {
} elsif ($variable eq "sampling period") {
$sampling_period = $value;
} elsif ($variable eq "ms since reset") {
# Currently nothing is done with this value in pprof
# Currently nothing is done with this value in jeprof
# So we just silently ignore it for now
} elsif ($variable eq "discarded samples") {
# Currently nothing is done with this value in pprof
# Currently nothing is done with this value in jeprof
# So we just silently ignore it for now
} else {
printf STDERR ("Ignoring unnknown variable in /contention output: " .
@ -4564,7 +4567,7 @@ sub ParseLibraries {
}
# Add two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressAdd {
my $addr1 = shift;
my $addr2 = shift;
@ -4618,7 +4621,7 @@ sub AddressAdd {
# Subtract two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressSub {
my $addr1 = shift;
my $addr2 = shift;
@ -4670,7 +4673,7 @@ sub AddressSub {
}
# Increment a hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressInc {
my $addr = shift;
my $sum;
@ -4988,7 +4991,7 @@ sub UnparseAddress {
# 32-bit or ELF 64-bit executable file. The location of the tools
# is determined by considering the following options in this order:
# 1) --tools option, if set
# 2) PPROF_TOOLS environment variable, if set
# 2) JEPROF_TOOLS environment variable, if set
# 3) the environment
sub ConfigureObjTools {
my $prog_file = shift;
@ -5021,7 +5024,7 @@ sub ConfigureObjTools {
# For windows, we provide a version of nm and addr2line as part of
# the opensource release, which is capable of parsing
# Windows-style PDB executables. It should live in the path, or
# in the same directory as pprof.
# in the same directory as jeprof.
$obj_tool_map{"nm_pdb"} = "nm-pdb";
$obj_tool_map{"addr2line_pdb"} = "addr2line-pdb";
}
@ -5040,20 +5043,20 @@ sub ConfigureObjTools {
}
# Returns the path of a caller-specified object tool. If --tools or
# PPROF_TOOLS are specified, then returns the full path to the tool
# JEPROF_TOOLS are specified, then returns the full path to the tool
# with that prefix. Otherwise, returns the path unmodified (which
# means we will look for it on PATH).
sub ConfigureTool {
my $tool = shift;
my $path;
# --tools (or $PPROF_TOOLS) is a comma separated list, where each
# --tools (or $JEPROF_TOOLS) is a comma separated list, where each
# item is either a) a pathname prefix, or b) a map of the form
# <tool>:<path>. First we look for an entry of type (b) for our
# tool. If one is found, we use it. Otherwise, we consider all the
# pathname prefixes in turn, until one yields an existing file. If
# none does, we use a default path.
my $tools = $main::opt_tools || $ENV{"PPROF_TOOLS"} || "";
my $tools = $main::opt_tools || $ENV{"JEPROF_TOOLS"} || "";
if ($tools =~ m/(,|^)\Q$tool\E:([^,]*)/) {
$path = $2;
# TODO(csilvers): sanity-check that $path exists? Hard if it's relative.
@ -5067,11 +5070,11 @@ sub ConfigureTool {
}
if (!$path) {
error("No '$tool' found with prefix specified by " .
"--tools (or \$PPROF_TOOLS) '$tools'\n");
"--tools (or \$JEPROF_TOOLS) '$tools'\n");
}
} else {
# ... otherwise use the version that exists in the same directory as
# pprof. If there's nothing there, use $PATH.
# jeprof. If there's nothing there, use $PATH.
$0 =~ m,[^/]*$,; # this is everything after the last slash
my $dirname = $`; # this is everything up to and including the last slash
if (-x "$dirname$tool") {
@ -5101,7 +5104,7 @@ sub cleanup {
unlink($main::tmpfile_sym);
unlink(keys %main::tempnames);
# We leave any collected profiles in $HOME/pprof in case the user wants
# We leave any collected profiles in $HOME/jeprof in case the user wants
# to look at them later. We print a message informing them of this.
if ((scalar(@main::profile_files) > 0) &&
defined($main::collected_profile)) {
@ -5110,7 +5113,7 @@ sub cleanup {
}
print STDERR "If you want to investigate this profile further, you can do:\n";
print STDERR "\n";
print STDERR " pprof \\\n";
print STDERR " jeprof \\\n";
print STDERR " $main::prog \\\n";
print STDERR " $main::collected_profile\n";
print STDERR "\n";
@ -5295,7 +5298,7 @@ sub GetProcedureBoundaries {
# The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings.
# To make them more readable, we add underscores at interesting places.
# This routine removes the underscores, producing the canonical representation
# used by pprof to represent addresses, particularly in the tested routines.
# used by jeprof to represent addresses, particularly in the tested routines.
sub CanonicalHex {
my $arg = shift;
return join '', (split '_',$arg);
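As a hedged end-to-end sketch of the renamed tool (option and file names follow the jemalloc manual; ./your_app and the dump-file glob are placeholders and assume a build configured with --enable-prof):
```sh
# Illustrative only: dump a final heap profile when the program exits, then
# inspect it with jeprof in text mode.
MALLOC_CONF="prof:true,prof_final:true" ./your_app
jeprof --text ./your_app jeprof.*.heap
```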

src/jemalloc/configure vendored

@ -628,12 +628,14 @@ cfghdrs_in
enable_zone_allocator
enable_tls
enable_lazy_lock
TESTLIBS
jemalloc_version_gid
jemalloc_version_nrev
jemalloc_version_bugfix
jemalloc_version_minor
jemalloc_version_major
jemalloc_version
enable_cache_oblivious
enable_xmalloc
enable_valgrind
enable_utrace
@ -646,6 +648,7 @@ enable_debug
je_
install_suffix
private_namespace
JEMALLOC_CPREFIX
enable_code_coverage
AUTOCONF
LD
@ -706,6 +709,7 @@ objroot
abs_srcroot
srcroot
rev
CONFIG
target_alias
host_alias
build_alias
@ -771,6 +775,12 @@ enable_fill
enable_utrace
enable_valgrind
enable_xmalloc
enable_cache_oblivious
with_lg_tiny_min
with_lg_quantum
with_lg_page
with_lg_page_sizes
with_lg_size_class_group
enable_lazy_lock
enable_tls
enable_zone_allocator
@ -1412,6 +1422,9 @@ Optional Features:
--enable-utrace Enable utrace(2)-based tracing
--disable-valgrind Disable support for Valgrind
--enable-xmalloc Support xmalloc option
--disable-cache-oblivious
Disable support for cache-oblivious allocation
alignment
--enable-lazy-lock Enable lazy locking (only lock when multi-threaded)
--disable-tls Disable thread-local storage (__thread keyword)
--disable-zone-allocator
@ -1433,6 +1446,16 @@ Optional Packages:
--with-static-libunwind=<libunwind.a>
Path to static libunwind library; use rather than
dynamically linking
--with-lg-tiny-min=<lg-tiny-min>
Base 2 log of minimum tiny size class to support
--with-lg-quantum=<lg-quantum>
Base 2 log of minimum allocation alignment
--with-lg-page=<lg-page>
Base 2 log of system page size
--with-lg-page-sizes=<lg-page-sizes>
Base 2 logs of system page sizes to support
--with-lg-size-class-group=<lg-size-class-group>
Base 2 log of size classes per doubling
Some influential environment variables:
CC C compiler command
@ -2467,6 +2490,9 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
CONFIG=`echo ${ac_configure_args} | sed -e 's#'"'"'\([^ ]*\)'"'"'#\1#g'`
rev=2
@ -3479,6 +3505,42 @@ fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror=declaration-after-statement" >&5
$as_echo_n "checking whether compiler supports -Werror=declaration-after-statement... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror=declaration-after-statement"
else
CFLAGS="${CFLAGS} -Werror=declaration-after-statement"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror=declaration-after-statement
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -pipe" >&5
$as_echo_n "checking whether compiler supports -pipe... " >&6; }
TCFLAGS="${CFLAGS}"
@ -4653,9 +4715,10 @@ case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac
CPU_SPINWAIT=""
case "${host_cpu}" in
i[345]86)
;;
i686|x86_64)
if ${je_cv_pause+:} false; then :
$as_echo_n "(cached) " >&6
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pause instruction is compilable" >&5
$as_echo_n "checking whether pause instruction is compilable... " >&6; }
@ -4684,45 +4747,11 @@ fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_pause" >&5
$as_echo "$je_cv_pause" >&6; }
fi
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether SSE2 intrinsics is compilable" >&5
$as_echo_n "checking whether SSE2 intrinsics is compilable... " >&6; }
if ${je_cv_sse2+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <emmintrin.h>
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_sse2=yes
else
je_cv_sse2=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_sse2" >&5
$as_echo "$je_cv_sse2" >&6; }
if test "x${je_cv_sse2}" = "xyes" ; then
cat >>confdefs.h <<_ACEOF
#define HAVE_SSE2
_ACEOF
fi
;;
powerpc)
cat >>confdefs.h <<_ACEOF
@ -4853,6 +4882,7 @@ fi
default_munmap="1"
maps_coalesce="1"
case "${host}" in
*-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS"
@ -4864,7 +4894,7 @@ case "${host}" in
so="dylib"
importlib="${so}"
force_tls="0"
DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)'
DSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'
SOREV="${rev}.${so}"
sbrk_deprecated="1"
;;
@ -4881,7 +4911,14 @@ case "${host}" in
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
;;
*-*-openbsd*|*-*-bitrig*)
*-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
force_tls="0"
;;
*-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf"
$as_echo "#define JEMALLOC_PURGE_MADVISE_FREE " >>confdefs.h
@ -4897,6 +4934,8 @@ case "${host}" in
$as_echo "#define JEMALLOC_THREADED_INIT " >>confdefs.h
$as_echo "#define JEMALLOC_USE_CXX_THROW " >>confdefs.h
default_munmap="0"
;;
*-*-netbsd*)
@ -4949,6 +4988,8 @@ $as_echo "$abi" >&6; }
*-*-mingw* | *-*-cygwin*)
abi="pecoff"
force_tls="0"
force_lazy_lock="1"
maps_coalesce="0"
RPATH=""
so="dll"
if test "x$je_cv_msvc" = "xyes" ; then
@ -5189,6 +5230,216 @@ else
$as_echo "#define JEMALLOC_TLS_MODEL " >>confdefs.h
fi
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether alloc_size attribute is compilable" >&5
$as_echo_n "checking whether alloc_size attribute is compilable... " >&6; }
if ${je_cv_alloc_size+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(size_t size) __attribute__((alloc_size(1)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_alloc_size=yes
else
je_cv_alloc_size=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_alloc_size" >&5
$as_echo "$je_cv_alloc_size" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_alloc_size}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_ALLOC_SIZE " >>confdefs.h
fi
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether format(gnu_printf, ...) attribute is compilable" >&5
$as_echo_n "checking whether format(gnu_printf, ...) attribute is compilable... " >&6; }
if ${je_cv_format_gnu_printf+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_format_gnu_printf=yes
else
je_cv_format_gnu_printf=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_format_gnu_printf" >&5
$as_echo "$je_cv_format_gnu_printf" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_gnu_printf}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF " >>confdefs.h
fi
SAVED_CFLAGS="${CFLAGS}"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror" >&5
$as_echo_n "checking whether compiler supports -Werror... " >&6; }
TCFLAGS="${CFLAGS}"
if test "x${CFLAGS}" = "x" ; then
CFLAGS="-Werror"
else
CFLAGS="${CFLAGS} -Werror"
fi
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
return 0;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
je_cv_cflags_appended=-Werror
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
else
je_cv_cflags_appended=
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
CFLAGS="${TCFLAGS}"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether format(printf, ...) attribute is compilable" >&5
$as_echo_n "checking whether format(printf, ...) attribute is compilable... " >&6; }
if ${je_cv_format_printf+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdlib.h>
int
main ()
{
void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_format_printf=yes
else
je_cv_format_printf=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_format_printf" >&5
$as_echo "$je_cv_format_printf" >&6; }
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_printf}" = "xyes" ; then
$as_echo "#define JEMALLOC_HAVE_ATTR_FORMAT_PRINTF " >>confdefs.h
fi
# Check whether --with-rpath was given.
@ -5637,6 +5888,7 @@ _ACEOF
fi
# Check whether --with-export was given.
if test "${with_export+set}" = set; then :
withval=$with_export; if test "x$with_export" = "xno"; then
@ -5777,6 +6029,10 @@ else
fi
if test "x$enable_debug" = "x1" ; then
$as_echo "#define JEMALLOC_DEBUG " >>confdefs.h
fi
if test "x$enable_debug" = "x1" ; then
$as_echo "#define JEMALLOC_DEBUG " >>confdefs.h
@ -6267,6 +6523,11 @@ if test "x$enable_tcache" = "x1" ; then
fi
if test "x${maps_coalesce}" = "x1" ; then
$as_echo "#define JEMALLOC_MAPS_COALESCE " >>confdefs.h
fi
# Check whether --enable-munmap was given.
if test "${enable_munmap+set}" = set; then :
enableval=$enable_munmap; if test "x$enable_munmap" = "xno" ; then
@ -6464,6 +6725,25 @@ if test "x$enable_xmalloc" = "x1" ; then
fi
# Check whether --enable-cache-oblivious was given.
if test "${enable_cache_oblivious+set}" = set; then :
enableval=$enable_cache_oblivious; if test "x$enable_cache_oblivious" = "xno" ; then
enable_cache_oblivious="0"
else
enable_cache_oblivious="1"
fi
else
enable_cache_oblivious="1"
fi
if test "x$enable_cache_oblivious" = "x1" ; then
$as_echo "#define JEMALLOC_CACHE_OBLIVIOUS " >>confdefs.h
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program using __builtin_ffsl is compilable" >&5
$as_echo_n "checking whether a program using __builtin_ffsl is compilable... " >&6; }
@ -6554,13 +6834,50 @@ $as_echo "$je_cv_function_ffsl" >&6; }
fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking STATIC_PAGE_SHIFT" >&5
$as_echo_n "checking STATIC_PAGE_SHIFT... " >&6; }
if ${je_cv_static_page_shift+:} false; then :
# Check whether --with-lg_tiny_min was given.
if test "${with_lg_tiny_min+set}" = set; then :
withval=$with_lg_tiny_min; LG_TINY_MIN="$with_lg_tiny_min"
else
LG_TINY_MIN="3"
fi
cat >>confdefs.h <<_ACEOF
#define LG_TINY_MIN $LG_TINY_MIN
_ACEOF
# Check whether --with-lg_quantum was given.
if test "${with_lg_quantum+set}" = set; then :
withval=$with_lg_quantum; LG_QUANTA="$with_lg_quantum"
else
LG_QUANTA="3 4"
fi
if test "x$with_lg_quantum" != "x" ; then
cat >>confdefs.h <<_ACEOF
#define LG_QUANTUM $with_lg_quantum
_ACEOF
fi
# Check whether --with-lg_page was given.
if test "${with_lg_page+set}" = set; then :
withval=$with_lg_page; LG_PAGE="$with_lg_page"
else
LG_PAGE="detect"
fi
if test "x$LG_PAGE" = "xdetect"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking LG_PAGE" >&5
$as_echo_n "checking LG_PAGE... " >&6; }
if ${je_cv_lg_page+:} false; then :
$as_echo_n "(cached) " >&6
else
if test "$cross_compiling" = yes; then :
je_cv_static_page_shift=12
je_cv_lg_page=12
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
@ -6606,51 +6923,76 @@ main ()
}
_ACEOF
if ac_fn_c_try_run "$LINENO"; then :
je_cv_static_page_shift=`cat conftest.out`
je_cv_lg_page=`cat conftest.out`
else
je_cv_static_page_shift=undefined
je_cv_lg_page=undefined
fi
rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
conftest.$ac_objext conftest.beam conftest.$ac_ext
fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_static_page_shift" >&5
$as_echo "$je_cv_static_page_shift" >&6; }
if test "x$je_cv_static_page_shift" != "xundefined"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_lg_page" >&5
$as_echo "$je_cv_lg_page" >&6; }
fi
if test "x${je_cv_lg_page}" != "x" ; then
LG_PAGE="${je_cv_lg_page}"
fi
if test "x${LG_PAGE}" != "xundefined" ; then
cat >>confdefs.h <<_ACEOF
#define STATIC_PAGE_SHIFT $je_cv_static_page_shift
#define LG_PAGE $LG_PAGE
_ACEOF
else
as_fn_error $? "cannot determine value for STATIC_PAGE_SHIFT" "$LINENO" 5
as_fn_error $? "cannot determine value for LG_PAGE" "$LINENO" 5
fi
if test "x`git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
rm -f "${srcroot}VERSION"
# Check whether --with-lg_page_sizes was given.
if test "${with_lg_page_sizes+set}" = set; then :
withval=$with_lg_page_sizes; LG_PAGE_SIZES="$with_lg_page_sizes"
else
LG_PAGE_SIZES="$LG_PAGE"
fi
# Check whether --with-lg_size_class_group was given.
if test "${with_lg_size_class_group+set}" = set; then :
withval=$with_lg_size_class_group; LG_SIZE_CLASS_GROUP="$with_lg_size_class_group"
else
LG_SIZE_CLASS_GROUP="2"
fi
if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
rm -f "${objroot}VERSION"
for pattern in '[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]'; do
if test ! -e "${srcroot}VERSION" ; then
git describe --long --abbrev=40 --match="${pattern}" > "${srcroot}VERSION.tmp" 2>/dev/null
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${srcroot}VERSION.tmp" "${srcroot}VERSION"
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi
done
fi
rm -f "${srcroot}VERSION.tmp"
if test ! -e "${srcroot}VERSION" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Missing VERSION file, and unable to generate it; creating bogus VERSION" >&5
rm -f "${objroot}VERSION.tmp"
if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Missing VERSION file, and unable to generate it; creating bogus VERSION" >&5
$as_echo "Missing VERSION file, and unable to generate it; creating bogus VERSION" >&6; }
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${srcroot}VERSION"
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${objroot}VERSION"
else
cp ${srcroot}VERSION ${objroot}VERSION
fi
fi
jemalloc_version=`cat "${srcroot}VERSION"`
jemalloc_version=`cat "${objroot}VERSION"`
jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $1}'`
jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $2}'`
jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print $3}'`
@ -6782,6 +7124,93 @@ fi
CPPFLAGS="$CPPFLAGS -D_REENTRANT"
SAVED_LIBS="${LIBS}"
LIBS=
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime" >&5
$as_echo_n "checking for library containing clock_gettime... " >&6; }
if ${ac_cv_search_clock_gettime+:} false; then :
$as_echo_n "(cached) " >&6
else
ac_func_search_save_LIBS=$LIBS
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
/* Override any GCC internal prototype to avoid an error.
Use char because int might match the return type of a GCC
builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char clock_gettime ();
int
main ()
{
return clock_gettime ();
;
return 0;
}
_ACEOF
for ac_lib in '' rt; do
if test -z "$ac_lib"; then
ac_res="none required"
else
ac_res=-l$ac_lib
LIBS="-l$ac_lib $ac_func_search_save_LIBS"
fi
if ac_fn_c_try_link "$LINENO"; then :
ac_cv_search_clock_gettime=$ac_res
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext
if ${ac_cv_search_clock_gettime+:} false; then :
break
fi
done
if ${ac_cv_search_clock_gettime+:} false; then :
else
ac_cv_search_clock_gettime=no
fi
rm conftest.$ac_ext
LIBS=$ac_func_search_save_LIBS
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime" >&5
$as_echo "$ac_cv_search_clock_gettime" >&6; }
ac_res=$ac_cv_search_clock_gettime
if test "$ac_res" != no; then :
test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
TESTLIBS="${LIBS}"
fi
LIBS="${SAVED_LIBS}"
ac_fn_c_check_func "$LINENO" "secure_getenv" "ac_cv_func_secure_getenv"
if test "x$ac_cv_func_secure_getenv" = xyes; then :
have_secure_getenv="1"
else
have_secure_getenv="0"
fi
if test "x$have_secure_getenv" = "x1" ; then
$as_echo "#define JEMALLOC_HAVE_SECURE_GETENV " >>confdefs.h
fi
ac_fn_c_check_func "$LINENO" "issetugid" "ac_cv_func_issetugid"
if test "x$ac_cv_func_issetugid" = xyes; then :
have_issetugid="1"
else
have_issetugid="0"
fi
if test "x$have_issetugid" = "x1" ; then
$as_echo "#define JEMALLOC_HAVE_ISSETUGID " >>confdefs.h
fi
ac_fn_c_check_func "$LINENO" "_malloc_thread_cleanup" "ac_cv_func__malloc_thread_cleanup"
if test "x$ac_cv_func__malloc_thread_cleanup" = xyes; then :
have__malloc_thread_cleanup="1"
@ -6818,11 +7247,11 @@ else
fi
else
enable_lazy_lock="0"
enable_lazy_lock=""
fi
if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then
if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing lazy-lock to avoid allocator/threading bootstrap issues" >&6; }
enable_lazy_lock="1"
@ -6895,6 +7324,8 @@ fi
fi
$as_echo "#define JEMALLOC_LAZY_LOCK " >>confdefs.h
else
enable_lazy_lock="0"
fi
@ -6907,19 +7338,22 @@ else
fi
else
enable_tls="1"
enable_tls=""
fi
if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing TLS to avoid allocator/threading bootstrap issues" >&5
if test "x${enable_tls}" = "x" ; then
if test "x${force_tls}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing TLS to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing TLS to avoid allocator/threading bootstrap issues" >&6; }
enable_tls="1"
fi
if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing no TLS to avoid allocator/threading bootstrap issues" >&5
enable_tls="1"
elif test "x${force_tls}" = "x0" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: Forcing no TLS to avoid allocator/threading bootstrap issues" >&5
$as_echo "Forcing no TLS to avoid allocator/threading bootstrap issues" >&6; }
enable_tls="0"
enable_tls="0"
else
enable_tls="1"
fi
fi
if test "x${enable_tls}" = "x1" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for TLS" >&5
@ -6950,15 +7384,69 @@ $as_echo "no" >&6; }
enable_tls="0"
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
else
enable_tls="0"
fi
if test "x${enable_tls}" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: TLS enabled despite being marked unusable on this platform" >&5
$as_echo "$as_me: WARNING: TLS enabled despite being marked unusable on this platform" >&2;}
fi
cat >>confdefs.h <<_ACEOF
#define JEMALLOC_TLS
_ACEOF
elif test "x${force_tls}" = "x1" ; then
as_fn_error $? "Failed to configure TLS, which is mandatory for correct function" "$LINENO" 5
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: TLS disabled despite being marked critical on this platform" >&5
$as_echo "$as_me: WARNING: TLS disabled despite being marked critical on this platform" >&2;}
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C11 atomics is compilable" >&5
$as_echo_n "checking whether C11 atomics is compilable... " >&6; }
if ${je_cv_c11atomics+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <stdint.h>
#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#else
#error Atomics not available
#endif
int
main ()
{
uint64_t *p = (uint64_t *)0;
uint64_t x = 1;
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
uint64_t r = atomic_fetch_add(a, x) + x;
return (r == 0);
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
je_cv_c11atomics=yes
else
je_cv_c11atomics=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $je_cv_c11atomics" >&5
$as_echo "$je_cv_c11atomics" >&6; }
if test "x${je_cv_c11atomics}" = "xyes" ; then
$as_echo "#define JEMALLOC_C11ATOMICS 1" >>confdefs.h
fi
@ -7300,8 +7788,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then
if test "x${abi}" != "xmacho"; then
as_fn_error $? "--enable-zone-allocator is only supported on Darwin" "$LINENO" 5
fi
$as_echo "#define JEMALLOC_IVSALLOC " >>confdefs.h
$as_echo "#define JEMALLOC_ZONE " >>confdefs.h
@ -7315,7 +7801,7 @@ $as_echo_n "checking malloc zone version... " >&6; }
int
main ()
{
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 14 ? 1 : -1]
static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 14 ? 1 : -1]
;
return 0;
@ -7331,7 +7817,7 @@ else
int
main ()
{
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 15 ? 1 : -1]
static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 15 ? 1 : -1]
;
return 0;
@ -7347,7 +7833,7 @@ else
int
main ()
{
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 16 ? 1 : -1]
static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 16 ? 1 : -1]
;
return 0;
@ -7361,7 +7847,7 @@ if ac_fn_c_try_compile "$LINENO"; then :
int
main ()
{
static foo[sizeof(malloc_introspection_t) == sizeof(void *) * 9 ? 1 : -1]
static int foo[sizeof(malloc_introspection_t) == sizeof(void *) * 9 ? 1 : -1]
;
return 0;
@ -7377,7 +7863,7 @@ else
int
main ()
{
static foo[sizeof(malloc_introspection_t) == sizeof(void *) * 13 ? 1 : -1]
static int foo[sizeof(malloc_introspection_t) == sizeof(void *) * 13 ? 1 : -1]
;
return 0;
@ -7400,7 +7886,7 @@ else
int
main ()
{
static foo[sizeof(malloc_zone_t) == sizeof(void *) * 17 ? 1 : -1]
static int foo[sizeof(malloc_zone_t) == sizeof(void *) * 17 ? 1 : -1]
;
return 0;
@ -7416,7 +7902,7 @@ else
int
main ()
{
static foo[sizeof(malloc_zone_t) > sizeof(void *) * 17 ? 1 : -1]
static int foo[sizeof(malloc_zone_t) > sizeof(void *) * 17 ? 1 : -1]
;
return 0;
@ -7705,7 +8191,7 @@ ac_config_headers="$ac_config_headers $cfghdrs_tup"
ac_config_files="$ac_config_files $cfgoutputs_tup config.stamp bin/jemalloc.sh"
ac_config_files="$ac_config_files $cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof"
@ -8423,8 +8909,13 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
objroot="${objroot}"
SHELL="${SHELL}"
srcdir="${srcdir}"
objroot="${objroot}"
LG_QUANTA="${LG_QUANTA}"
LG_TINY_MIN=${LG_TINY_MIN}
LG_PAGE_SIZES="${LG_PAGE_SIZES}"
LG_SIZE_CLASS_GROUP=${LG_SIZE_CLASS_GROUP}
srcdir="${srcdir}"
@ -8470,7 +8961,9 @@ do
"$cfghdrs_tup") CONFIG_HEADERS="$CONFIG_HEADERS $cfghdrs_tup" ;;
"$cfgoutputs_tup") CONFIG_FILES="$CONFIG_FILES $cfgoutputs_tup" ;;
"config.stamp") CONFIG_FILES="$CONFIG_FILES config.stamp" ;;
"bin/jemalloc-config") CONFIG_FILES="$CONFIG_FILES bin/jemalloc-config" ;;
"bin/jemalloc.sh") CONFIG_FILES="$CONFIG_FILES bin/jemalloc.sh" ;;
"bin/jeprof") CONFIG_FILES="$CONFIG_FILES bin/jeprof" ;;
*) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;;
esac
@ -9060,7 +9553,7 @@ $as_echo "$as_me: executing $ac_file commands" >&6;}
;;
"include/jemalloc/internal/size_classes.h":C)
mkdir -p "${objroot}include/jemalloc/internal"
"${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h"
"${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
;;
"include/jemalloc/jemalloc_protos_jet.h":C)
mkdir -p "${objroot}include/jemalloc"
@ -9129,18 +9622,22 @@ $as_echo "jemalloc version : ${jemalloc_version}" >&6; }
$as_echo "library revision : ${rev}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
$as_echo "" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CONFIG : ${CONFIG}" >&5
$as_echo "CONFIG : ${CONFIG}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CC : ${CC}" >&5
$as_echo "CC : ${CC}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CPPFLAGS : ${CPPFLAGS}" >&5
$as_echo "CPPFLAGS : ${CPPFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CFLAGS : ${CFLAGS}" >&5
$as_echo "CFLAGS : ${CFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: CPPFLAGS : ${CPPFLAGS}" >&5
$as_echo "CPPFLAGS : ${CPPFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LDFLAGS : ${LDFLAGS}" >&5
$as_echo "LDFLAGS : ${LDFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&5
$as_echo "EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBS : ${LIBS}" >&5
$as_echo "LIBS : ${LIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: TESTLIBS : ${TESTLIBS}" >&5
$as_echo "TESTLIBS : ${TESTLIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: RPATH_EXTRA : ${RPATH_EXTRA}" >&5
$as_echo "RPATH_EXTRA : ${RPATH_EXTRA}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
@ -9155,12 +9652,12 @@ $as_echo "" >&6; }
$as_echo "PREFIX : ${PREFIX}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: BINDIR : ${BINDIR}" >&5
$as_echo "BINDIR : ${BINDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: DATADIR : ${DATADIR}" >&5
$as_echo "DATADIR : ${DATADIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: INCLUDEDIR : ${INCLUDEDIR}" >&5
$as_echo "INCLUDEDIR : ${INCLUDEDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBDIR : ${LIBDIR}" >&5
$as_echo "LIBDIR : ${LIBDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: DATADIR : ${DATADIR}" >&5
$as_echo "DATADIR : ${DATADIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: MANDIR : ${MANDIR}" >&5
$as_echo "MANDIR : ${MANDIR}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
@ -9217,5 +9714,7 @@ $as_echo "munmap : ${enable_munmap}" >&6; }
$as_echo "lazy_lock : ${enable_lazy_lock}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: tls : ${enable_tls}" >&5
$as_echo "tls : ${enable_tls}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: cache-oblivious : ${enable_cache_oblivious}" >&5
$as_echo "cache-oblivious : ${enable_cache_oblivious}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ===============================================================================" >&5
$as_echo "===============================================================================" >&6; }


@ -43,6 +43,9 @@ AC_CACHE_CHECK([whether $1 is compilable],
dnl ============================================================================
CONFIG=`echo ${ac_configure_args} | sed -e 's#'"'"'\([^ ]*\)'"'"'#\1#g'`
AC_SUBST([CONFIG])
dnl Library revision.
rev=2
AC_SUBST([rev])
@ -134,6 +137,7 @@ if test "x$CFLAGS" = "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT])
fi
JE_CFLAGS_APPEND([-Wall])
JE_CFLAGS_APPEND([-Werror=declaration-after-statement])
JE_CFLAGS_APPEND([-pipe])
JE_CFLAGS_APPEND([-g3])
elif test "x$je_cv_msvc" = "xyes" ; then
@ -206,23 +210,14 @@ AC_CANONICAL_HOST
dnl CPU-specific settings.
CPU_SPINWAIT=""
case "${host_cpu}" in
i[[345]]86)
;;
i686|x86_64)
JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])
AC_CACHE_VAL([je_cv_pause],
[JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])])
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
dnl emmintrin.h fails to compile unless MMX, SSE, and SSE2 are
dnl supported.
JE_COMPILABLE([SSE2 intrinsics], [
#include <emmintrin.h>
], [], [je_cv_sse2])
if test "x${je_cv_sse2}" = "xyes" ; then
AC_DEFINE_UNQUOTED([HAVE_SSE2], [ ])
fi
;;
powerpc)
AC_DEFINE_UNQUOTED([HAVE_ALTIVEC], [ ])
@ -263,6 +258,7 @@ dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the
dnl definitions need to be seen before any headers are included, which is a pain
dnl to make happen otherwise.
default_munmap="1"
maps_coalesce="1"
case "${host}" in
*-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS"
@ -273,7 +269,7 @@ case "${host}" in
so="dylib"
importlib="${so}"
force_tls="0"
DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)'
DSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'
SOREV="${rev}.${so}"
sbrk_deprecated="1"
;;
@ -288,7 +284,13 @@ case "${host}" in
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;;
*-*-openbsd*|*-*-bitrig*)
*-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_tls="0"
;;
*-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
@ -300,6 +302,7 @@ case "${host}" in
AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ])
default_munmap="0"
;;
*-*-netbsd*)
@ -338,6 +341,8 @@ case "${host}" in
*-*-mingw* | *-*-cygwin*)
abi="pecoff"
force_tls="0"
force_lazy_lock="1"
maps_coalesce="0"
RPATH=""
so="dll"
if test "x$je_cv_msvc" = "xyes" ; then
@ -426,6 +431,36 @@ if test "x${je_cv_tls_model}" = "xyes" ; then
else
AC_DEFINE([JEMALLOC_TLS_MODEL], [ ])
fi
dnl Check for alloc_size attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>],
[void *foo(size_t size) __attribute__((alloc_size(1)));],
[je_cv_alloc_size])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_alloc_size}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_ALLOC_SIZE], [ ])
fi
dnl Check for format(gnu_printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));],
[je_cv_format_gnu_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_gnu_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF], [ ])
fi
dnl Check for format(printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));],
[je_cv_format_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_PRINTF], [ ])
fi
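The three probes above temporarily append -Werror so that an unrecognized __attribute__ becomes a hard compile error rather than a warning, which is what makes the JE_COMPILABLE result trustworthy. As a sketch of what the resulting defines enable, assuming hypothetical wrapper macro names (only the JEMALLOC_HAVE_ATTR_* macros come from this configure.ac):

/* Sketch only: JE_ALLOC_SIZE/JE_FORMAT_PRINTF are hypothetical wrappers around
 * the attributes detected above; they are not the names used by jemalloc's
 * generated headers. */
#include <stddef.h>

#ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
#  define JE_ALLOC_SIZE(i)	__attribute__((alloc_size(i)))
#else
#  define JE_ALLOC_SIZE(i)
#endif
#ifdef JEMALLOC_HAVE_ATTR_FORMAT_PRINTF
#  define JE_FORMAT_PRINTF(f, a)	__attribute__((format(printf, f, a)))
#else
#  define JE_FORMAT_PRINTF(f, a)
#endif

/* The optimizer may assume my_malloc(n) returns n usable bytes, and the
 * compiler can type-check my_log()'s format arguments. */
void	*my_malloc(size_t size) JE_ALLOC_SIZE(1);
void	my_log(const char *format, ...) JE_FORMAT_PRINTF(1, 2);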
dnl Support optional additions to rpath.
AC_ARG_WITH([rpath],
@ -512,6 +547,7 @@ if test "x$JEMALLOC_PREFIX" != "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_PREFIX], ["$JEMALLOC_PREFIX"])
AC_DEFINE_UNQUOTED([JEMALLOC_CPREFIX], ["$JEMALLOC_CPREFIX"])
fi
AC_SUBST([JEMALLOC_CPREFIX])
AC_ARG_WITH([export],
[AS_HELP_STRING([--without-export], [disable exporting jemalloc public APIs])],
@ -630,7 +666,8 @@ fi
dnl Do not compile with debugging by default.
AC_ARG_ENABLE([debug],
[AS_HELP_STRING([--enable-debug], [Build debugging code (implies --enable-ivsalloc)])],
[AS_HELP_STRING([--enable-debug],
[Build debugging code (implies --enable-ivsalloc)])],
[if test "x$enable_debug" = "xno" ; then
enable_debug="0"
else
@ -639,6 +676,9 @@ fi
],
[enable_debug="0"]
)
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
fi
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
enable_ivsalloc="1"
@ -647,7 +687,8 @@ AC_SUBST([enable_debug])
dnl Do not validate pointers by default.
AC_ARG_ENABLE([ivsalloc],
[AS_HELP_STRING([--enable-ivsalloc], [Validate pointers passed through the public API])],
[AS_HELP_STRING([--enable-ivsalloc],
[Validate pointers passed through the public API])],
[if test "x$enable_ivsalloc" = "xno" ; then
enable_ivsalloc="0"
else
@ -823,6 +864,12 @@ if test "x$enable_tcache" = "x1" ; then
fi
AC_SUBST([enable_tcache])
dnl Indicate whether adjacent virtual memory mappings automatically coalesce
dnl (and fragment on demand).
if test "x${maps_coalesce}" = "x1" ; then
AC_DEFINE([JEMALLOC_MAPS_COALESCE], [ ])
fi
dnl Enable VM deallocation via munmap() by default.
AC_ARG_ENABLE([munmap],
[AS_HELP_STRING([--disable-munmap], [Disable VM deallocation via munmap(2)])],
@ -946,6 +993,23 @@ if test "x$enable_xmalloc" = "x1" ; then
fi
AC_SUBST([enable_xmalloc])
dnl Support cache-oblivious allocation alignment by default.
AC_ARG_ENABLE([cache-oblivious],
[AS_HELP_STRING([--disable-cache-oblivious],
[Disable support for cache-oblivious allocation alignment])],
[if test "x$enable_cache_oblivious" = "xno" ; then
enable_cache_oblivious="0"
else
enable_cache_oblivious="1"
fi
],
[enable_cache_oblivious="1"]
)
if test "x$enable_cache_oblivious" = "x1" ; then
AC_DEFINE([JEMALLOC_CACHE_OBLIVIOUS], [ ])
fi
AC_SUBST([enable_cache_oblivious])
dnl ============================================================================
dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found.
dnl One of those two functions should (theoretically) exist on all platforms
@ -984,8 +1048,28 @@ else
fi
fi
AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
[je_cv_static_page_shift],
AC_ARG_WITH([lg_tiny_min],
[AS_HELP_STRING([--with-lg-tiny-min=<lg-tiny-min>],
[Base 2 log of minimum tiny size class to support])],
[LG_TINY_MIN="$with_lg_tiny_min"],
[LG_TINY_MIN="3"])
AC_DEFINE_UNQUOTED([LG_TINY_MIN], [$LG_TINY_MIN])
AC_ARG_WITH([lg_quantum],
[AS_HELP_STRING([--with-lg-quantum=<lg-quantum>],
[Base 2 log of minimum allocation alignment])],
[LG_QUANTA="$with_lg_quantum"],
[LG_QUANTA="3 4"])
if test "x$with_lg_quantum" != "x" ; then
AC_DEFINE_UNQUOTED([LG_QUANTUM], [$with_lg_quantum])
fi
AC_ARG_WITH([lg_page],
[AS_HELP_STRING([--with-lg-page=<lg-page>], [Base 2 log of system page size])],
[LG_PAGE="$with_lg_page"], [LG_PAGE="detect"])
if test "x$LG_PAGE" = "xdetect"; then
AC_CACHE_CHECK([LG_PAGE],
[je_cv_lg_page],
AC_RUN_IFELSE([AC_LANG_PROGRAM(
[[
#include <strings.h>
@ -1021,47 +1105,65 @@ AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
return 0;
]])],
[je_cv_static_page_shift=`cat conftest.out`],
[je_cv_static_page_shift=undefined],
[je_cv_static_page_shift=12]))
if test "x$je_cv_static_page_shift" != "xundefined"; then
AC_DEFINE_UNQUOTED([STATIC_PAGE_SHIFT], [$je_cv_static_page_shift])
else
AC_MSG_ERROR([cannot determine value for STATIC_PAGE_SHIFT])
[je_cv_lg_page=`cat conftest.out`],
[je_cv_lg_page=undefined],
[je_cv_lg_page=12]))
fi
if test "x${je_cv_lg_page}" != "x" ; then
LG_PAGE="${je_cv_lg_page}"
fi
if test "x${LG_PAGE}" != "xundefined" ; then
AC_DEFINE_UNQUOTED([LG_PAGE], [$LG_PAGE])
else
AC_MSG_ERROR([cannot determine value for LG_PAGE])
fi
AC_ARG_WITH([lg_page_sizes],
[AS_HELP_STRING([--with-lg-page-sizes=<lg-page-sizes>],
[Base 2 logs of system page sizes to support])],
[LG_PAGE_SIZES="$with_lg_page_sizes"], [LG_PAGE_SIZES="$LG_PAGE"])
AC_ARG_WITH([lg_size_class_group],
[AS_HELP_STRING([--with-lg-size-class-group=<lg-size-class-group>],
[Base 2 log of size classes per doubling])],
[LG_SIZE_CLASS_GROUP="$with_lg_size_class_group"],
[LG_SIZE_CLASS_GROUP="2"])
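All of these --with-lg-* options take base-2 logarithms, so the defaults above map to concrete byte sizes. A minimal sketch of that arithmetic, using only default values visible in this configure.ac (the page-size value is the fallback used when the LG_PAGE run test cannot execute):

#include <stdio.h>

int
main(void)
{
	int lg_tiny_min = 3;		/* --with-lg-tiny-min default */
	int lg_quantum = 4;		/* one of the "3 4" defaults in LG_QUANTA */
	int lg_page = 12;		/* cross-compile fallback for LG_PAGE */
	int lg_size_class_group = 2;	/* --with-lg-size-class-group default */

	printf("smallest tiny class  : %d bytes\n", 1 << lg_tiny_min);		/* 8 */
	printf("minimum alignment    : %d bytes\n", 1 << lg_quantum);		/* 16 */
	printf("assumed page size    : %d bytes\n", 1 << lg_page);		/* 4096 */
	printf("size classes/doubling: %d\n", 1 << lg_size_class_group);	/* 4 */
	return (0);
}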
dnl ============================================================================
dnl jemalloc configuration.
dnl
dnl Set VERSION if source directory is inside a git repository.
if test "x`git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
dnl Pattern globs aren't powerful enough to match both single- and
dnl double-digit version numbers, so iterate over patterns to support up to
dnl version 99.99.99 without any accidental matches.
rm -f "${srcroot}VERSION"
rm -f "${objroot}VERSION"
for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
if test ! -e "${srcroot}VERSION" ; then
git describe --long --abbrev=40 --match="${pattern}" > "${srcroot}VERSION.tmp" 2>/dev/null
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${srcroot}VERSION.tmp" "${srcroot}VERSION"
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi
done
fi
rm -f "${srcroot}VERSION.tmp"
if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT(
[Missing VERSION file, and unable to generate it; creating bogus VERSION])
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${srcroot}VERSION"
rm -f "${objroot}VERSION.tmp"
if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT(
[Missing VERSION file, and unable to generate it; creating bogus VERSION])
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${objroot}VERSION"
else
cp ${srcroot}VERSION ${objroot}VERSION
fi
fi
jemalloc_version=`cat "${srcroot}VERSION"`
jemalloc_version=`cat "${objroot}VERSION"`
jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]1}'`
jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]2}'`
jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]3}'`
@ -1088,6 +1190,32 @@ fi
CPPFLAGS="$CPPFLAGS -D_REENTRANT"
dnl Check whether clock_gettime(2) is in libc or librt. This function is only
dnl used in test code, so save the result to TESTLIBS to avoid polluting LIBS.
SAVED_LIBS="${LIBS}"
LIBS=
AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
AC_SUBST([TESTLIBS])
LIBS="${SAVED_LIBS}"
dnl Check if the GNU-specific secure_getenv function exists.
AC_CHECK_FUNC([secure_getenv],
[have_secure_getenv="1"],
[have_secure_getenv="0"]
)
if test "x$have_secure_getenv" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_SECURE_GETENV], [ ])
fi
dnl Check if the Solaris/BSD issetugid function exists.
AC_CHECK_FUNC([issetugid],
[have_issetugid="1"],
[have_issetugid="0"]
)
if test "x$have_issetugid" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_ISSETUGID], [ ])
fi
dnl Check whether the BSD-specific _malloc_thread_cleanup() exists. If so, use
dnl it rather than pthreads TSD cleanup functions to support cleanup during
dnl thread exit, in order to avoid pthreads library recursion during
@ -1122,9 +1250,9 @@ else
enable_lazy_lock="1"
fi
],
[enable_lazy_lock="0"]
[enable_lazy_lock=""]
)
if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then
if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues])
enable_lazy_lock="1"
fi
@ -1137,6 +1265,8 @@ if test "x$enable_lazy_lock" = "x1" ; then
])
fi
AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ])
else
enable_lazy_lock="0"
fi
AC_SUBST([enable_lazy_lock])
@ -1148,15 +1278,18 @@ else
enable_tls="1"
fi
,
enable_tls="1"
enable_tls=""
)
if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then
AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues])
enable_tls="1"
fi
if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then
AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues])
enable_tls="0"
if test "x${enable_tls}" = "x" ; then
if test "x${force_tls}" = "x1" ; then
AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues])
enable_tls="1"
elif test "x${force_tls}" = "x0" ; then
AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues])
enable_tls="0"
else
enable_tls="1"
fi
fi
if test "x${enable_tls}" = "x1" ; then
AC_MSG_CHECKING([for TLS])
@ -1171,12 +1304,38 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
AC_MSG_RESULT([yes]),
AC_MSG_RESULT([no])
enable_tls="0")
else
enable_tls="0"
fi
AC_SUBST([enable_tls])
if test "x${enable_tls}" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
AC_MSG_WARN([TLS enabled despite being marked unusable on this platform])
fi
AC_DEFINE_UNQUOTED([JEMALLOC_TLS], [ ])
elif test "x${force_tls}" = "x1" ; then
AC_MSG_ERROR([Failed to configure TLS, which is mandatory for correct function])
AC_MSG_WARN([TLS disabled despite being marked critical on this platform])
fi
dnl ============================================================================
dnl Check for C11 atomics.
JE_COMPILABLE([C11 atomics], [
#include <stdint.h>
#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#else
#error Atomics not available
#endif
], [
uint64_t *p = (uint64_t *)0;
uint64_t x = 1;
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
uint64_t r = atomic_fetch_add(a, x) + x;
return (r == 0);
], [je_cv_c11atomics])
if test "x${je_cv_c11atomics}" = "xyes" ; then
AC_DEFINE([JEMALLOC_C11ATOMICS])
fi
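The probe above never runs; it only has to compile. Roughly the same feature test as a standalone translation unit (a sketch mirroring the JE_COMPILABLE body, not jemalloc code):

/* Compiles only where C11 <stdatomic.h> is usable, like the je_cv_c11atomics probe. */
#include <stdint.h>
#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#else
#error Atomics not available
#endif

int
main(void)
{
	volatile atomic_uint_least64_t a;
	uint64_t x = 1;

	atomic_init(&a, 0);
	return (atomic_fetch_add(&a, x) + x == 0);
}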
dnl ============================================================================
@ -1333,7 +1492,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then
if test "x${abi}" != "xmacho"; then
AC_MSG_ERROR([--enable-zone-allocator is only supported on Darwin])
fi
AC_DEFINE([JEMALLOC_IVSALLOC], [ ])
AC_DEFINE([JEMALLOC_ZONE], [ ])
dnl The szone version jumped from 3 to 6 between the OS X 10.5.x and 10.6
@ -1343,7 +1501,7 @@ if test "x${enable_zone_allocator}" = "x1" ; then
AC_DEFUN([JE_ZONE_PROGRAM],
[AC_LANG_PROGRAM(
[#include <malloc/malloc.h>],
[static foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
[static int foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
)])
AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,14)],[JEMALLOC_ZONE_VERSION=3],[
@ -1471,10 +1629,15 @@ AC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [
])
AC_CONFIG_COMMANDS([include/jemalloc/internal/size_classes.h], [
mkdir -p "${objroot}include/jemalloc/internal"
"${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h"
"${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
], [
SHELL="${SHELL}"
srcdir="${srcdir}"
objroot="${objroot}"
LG_QUANTA="${LG_QUANTA}"
LG_TINY_MIN=${LG_TINY_MIN}
LG_PAGE_SIZES="${LG_PAGE_SIZES}"
LG_SIZE_CLASS_GROUP=${LG_SIZE_CLASS_GROUP}
])
AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_protos_jet.h], [
mkdir -p "${objroot}include/jemalloc"
@ -1521,7 +1684,7 @@ AC_CONFIG_HEADERS([$cfghdrs_tup])
dnl ============================================================================
dnl Generate outputs.
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc.sh])
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof])
AC_SUBST([cfgoutputs_in])
AC_SUBST([cfgoutputs_out])
AC_OUTPUT
@ -1532,12 +1695,14 @@ AC_MSG_RESULT([=================================================================
AC_MSG_RESULT([jemalloc version : ${jemalloc_version}])
AC_MSG_RESULT([library revision : ${rev}])
AC_MSG_RESULT([])
AC_MSG_RESULT([CONFIG : ${CONFIG}])
AC_MSG_RESULT([CC : ${CC}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([CFLAGS : ${CFLAGS}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
AC_MSG_RESULT([LIBS : ${LIBS}])
AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
AC_MSG_RESULT([])
AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
@ -1545,9 +1710,9 @@ AC_MSG_RESULT([XSLROOT : ${XSLROOT}])
AC_MSG_RESULT([])
AC_MSG_RESULT([PREFIX : ${PREFIX}])
AC_MSG_RESULT([BINDIR : ${BINDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([INCLUDEDIR : ${INCLUDEDIR}])
AC_MSG_RESULT([LIBDIR : ${LIBDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([MANDIR : ${MANDIR}])
AC_MSG_RESULT([])
AC_MSG_RESULT([srcroot : ${srcroot}])
@ -1576,4 +1741,5 @@ AC_MSG_RESULT([xmalloc : ${enable_xmalloc}])
AC_MSG_RESULT([munmap : ${enable_munmap}])
AC_MSG_RESULT([lazy_lock : ${enable_lazy_lock}])
AC_MSG_RESULT([tls : ${enable_tls}])
AC_MSG_RESULT([cache-oblivious : ${enable_cache_oblivious}])
AC_MSG_RESULT([===============================================================================])

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -11,6 +11,7 @@
#define atomic_read_uint64(p) atomic_add_uint64(p, 0)
#define atomic_read_uint32(p) atomic_add_uint32(p, 0)
#define atomic_read_p(p) atomic_add_p(p, NULL)
#define atomic_read_z(p) atomic_add_z(p, 0)
#define atomic_read_u(p) atomic_add_u(p, 0)
@ -19,74 +20,54 @@
#ifdef JEMALLOC_H_INLINES
/*
* All functions return the arithmetic result of the atomic operation. Some
* atomic operation APIs return the value prior to mutation, in which case the
* following functions must redundantly compute the result so that it can be
* returned. These functions are normally inlined, so the extra operations can
* be optimized away if the return values aren't used by the callers.
* All arithmetic functions return the arithmetic result of the atomic
* operation. Some atomic operation APIs return the value prior to mutation, in
* which case the following functions must redundantly compute the result so
* that it can be returned. These functions are normally inlined, so the extra
* operations can be optimized away if the return values aren't used by the
* callers.
*
* <t> atomic_read_<t>(<t> *p) { return (*p); }
* <t> atomic_add_<t>(<t> *p, <t> x) { return (*p + x); }
* <t> atomic_sub_<t>(<t> *p, <t> x) { return (*p - x); }
* bool atomic_cas_<t>(<t> *p, <t> c, <t> s)
* {
* if (*p != c)
* return (true);
* *p = s;
* return (false);
* }
* void atomic_write_<t>(<t> *p, <t> x) { *p = x; }
*/
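As a usage sketch (not part of this header), atomic_cas_<t>() stores s and returns false only when *p still equals c, so callers needing a read-modify-write not covered by add/sub typically retry in a loop. A hypothetical capped counter built on the declared API:

/* Sketch only: atomic_add_capped_u() is a hypothetical caller, not jemalloc
 * code. It relies on the semantics documented above: atomic_cas_u() returns
 * false on success, true on failure. Assumes *p <= cap on entry. */
JEMALLOC_INLINE unsigned
atomic_add_capped_u(unsigned *p, unsigned x, unsigned cap)
{
	unsigned old, cur;

	do {
		old = atomic_read_u(p);
		cur = (cap - old < x) ? cap : old + x;
	} while (atomic_cas_u(p, old, cur));
	return (cur);
}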
#ifndef JEMALLOC_ENABLE_INLINE
uint64_t atomic_add_uint64(uint64_t *p, uint64_t x);
uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x);
bool atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s);
void atomic_write_uint64(uint64_t *p, uint64_t x);
uint32_t atomic_add_uint32(uint32_t *p, uint32_t x);
uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x);
bool atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s);
void atomic_write_uint32(uint32_t *p, uint32_t x);
void *atomic_add_p(void **p, void *x);
void *atomic_sub_p(void **p, void *x);
bool atomic_cas_p(void **p, void *c, void *s);
void atomic_write_p(void **p, const void *x);
size_t atomic_add_z(size_t *p, size_t x);
size_t atomic_sub_z(size_t *p, size_t x);
bool atomic_cas_z(size_t *p, size_t c, size_t s);
void atomic_write_z(size_t *p, size_t x);
unsigned atomic_add_u(unsigned *p, unsigned x);
unsigned atomic_sub_u(unsigned *p, unsigned x);
bool atomic_cas_u(unsigned *p, unsigned c, unsigned s);
void atomic_write_u(unsigned *p, unsigned x);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_))
/******************************************************************************/
/* 64-bit operations. */
#if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3)
# ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (__sync_add_and_fetch(p, x));
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (__sync_sub_and_fetch(p, x));
}
#elif (defined(_MSC_VER))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, -((int64_t)x)) - x);
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64((int64_t)x, (int64_t *)p));
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p));
}
# elif (defined(__amd64__) || defined(__x86_64__))
# if (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
@ -116,6 +97,62 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
return (t + x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgq %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory" /* Clobbers. */
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
asm volatile (
"xchgq %1, %0;" /* Lock is implied by xchgq. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (atomic_fetch_add(a, x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
atomic_store(a, x);
}
# elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
@ -138,7 +175,88 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
return (atomic_fetchadd_long(p, (unsigned long)(-(long)x)) - x);
}
# elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
return (!atomic_cmpset_long(p, (unsigned long)c, (unsigned long)s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
assert(sizeof(uint64_t) == sizeof(unsigned long));
atomic_store_rel_long(p, x);
}
# elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64((int64_t)x, (int64_t *)p));
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p));
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
return (!OSAtomicCompareAndSwap64(c, s, (int64_t *)p));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
uint64_t o;
/* The documented OSAtomic*() API does not expose an atomic exchange. */
do {
o = atomic_read_uint64(p);
} while (atomic_cas_uint64(p, o, x));
}
# elif (defined(_MSC_VER))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, x) + x);
}
JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return (InterlockedExchangeAdd64(p, -((int64_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
uint64_t o;
o = InterlockedCompareExchange64(p, s, c);
return (o != c);
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
InterlockedExchange64(p, x);
}
# elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8) || \
defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{
@ -152,6 +270,20 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
return (__sync_sub_and_fetch(p, x));
}
JEMALLOC_INLINE bool
atomic_cas_uint64(uint64_t *p, uint64_t c, uint64_t s)
{
return (!__sync_bool_compare_and_swap(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint64(uint64_t *p, uint64_t x)
{
__sync_lock_test_and_set(p, x);
}
# else
# error "Missing implementation for 64-bit atomic operations"
# endif
@ -159,49 +291,7 @@ atomic_sub_uint64(uint64_t *p, uint64_t x)
/******************************************************************************/
/* 32-bit operations. */
#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (__sync_add_and_fetch(p, x));
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (__sync_sub_and_fetch(p, x));
}
#elif (defined(_MSC_VER))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, -((int32_t)x)) - x);
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32((int32_t)x, (int32_t *)p));
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p));
}
#elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
#if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
@ -231,6 +321,62 @@ atomic_sub_uint32(uint32_t *p, uint32_t x)
return (t + x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
uint8_t success;
asm volatile (
"lock; cmpxchgl %4, %0;"
"sete %1;"
: "=m" (*p), "=a" (success) /* Outputs. */
: "m" (*p), "a" (c), "r" (s) /* Inputs. */
: "memory"
);
return (!(bool)success);
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
asm volatile (
"xchgl %1, %0;" /* Lock is implied by xchgl. */
: "=m" (*p), "+r" (x) /* Outputs. */
: "m" (*p) /* Inputs. */
: "memory" /* Clobbers. */
);
}
# elif (defined(JEMALLOC_C11ATOMICS))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (atomic_fetch_add(a, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (atomic_fetch_sub(a, x) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
return (!atomic_compare_exchange_strong(a, &c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
volatile atomic_uint_least32_t *a = (volatile atomic_uint_least32_t *)p;
atomic_store(a, x);
}
#elif (defined(JEMALLOC_ATOMIC9))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
@ -245,7 +391,84 @@ atomic_sub_uint32(uint32_t *p, uint32_t x)
return (atomic_fetchadd_32(p, (uint32_t)(-(int32_t)x)) - x);
}
#elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!atomic_cmpset_32(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
atomic_store_rel_32(p, x);
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32((int32_t)x, (int32_t *)p));
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p));
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!OSAtomicCompareAndSwap32(c, s, (int32_t *)p));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
uint32_t o;
/* The documented OSAtomic*() API does not expose an atomic exchange. */
do {
o = atomic_read_uint32(p);
} while (atomic_cas_uint32(p, o, x));
}
#elif (defined(_MSC_VER))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, x) + x);
}
JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return (InterlockedExchangeAdd(p, -((int32_t)x)) - x);
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
uint32_t o;
o = InterlockedCompareExchange(p, s, c);
return (o != c);
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
InterlockedExchange(p, x);
}
#elif (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4) || \
defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{
@ -259,10 +482,72 @@ atomic_sub_uint32(uint32_t *p, uint32_t x)
return (__sync_sub_and_fetch(p, x));
}
JEMALLOC_INLINE bool
atomic_cas_uint32(uint32_t *p, uint32_t c, uint32_t s)
{
return (!__sync_bool_compare_and_swap(p, c, s));
}
JEMALLOC_INLINE void
atomic_write_uint32(uint32_t *p, uint32_t x)
{
__sync_lock_test_and_set(p, x);
}
#else
# error "Missing implementation for 32-bit atomic operations"
#endif
/******************************************************************************/
/* Pointer operations. */
JEMALLOC_INLINE void *
atomic_add_p(void **p, void *x)
{
#if (LG_SIZEOF_PTR == 3)
return ((void *)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_PTR == 2)
return ((void *)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}
JEMALLOC_INLINE void *
atomic_sub_p(void **p, void *x)
{
#if (LG_SIZEOF_PTR == 3)
return ((void *)atomic_add_uint64((uint64_t *)p,
(uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_PTR == 2)
return ((void *)atomic_add_uint32((uint32_t *)p,
(uint32_t)-((int32_t)x)));
#endif
}
JEMALLOC_INLINE bool
atomic_cas_p(void **p, void *c, void *s)
{
#if (LG_SIZEOF_PTR == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_PTR == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_p(void **p, const void *x)
{
#if (LG_SIZEOF_PTR == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
/* size_t operations. */
JEMALLOC_INLINE size_t
@ -289,6 +574,28 @@ atomic_sub_z(size_t *p, size_t x)
#endif
}
JEMALLOC_INLINE bool
atomic_cas_z(size_t *p, size_t c, size_t s)
{
#if (LG_SIZEOF_PTR == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_PTR == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_z(size_t *p, size_t x)
{
#if (LG_SIZEOF_PTR == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
/* unsigned operations. */
JEMALLOC_INLINE unsigned
@ -314,6 +621,29 @@ atomic_sub_u(unsigned *p, unsigned x)
(uint32_t)-((int32_t)x)));
#endif
}
JEMALLOC_INLINE bool
atomic_cas_u(unsigned *p, unsigned c, unsigned s)
{
#if (LG_SIZEOF_INT == 3)
return (atomic_cas_uint64((uint64_t *)p, (uint64_t)c, (uint64_t)s));
#elif (LG_SIZEOF_INT == 2)
return (atomic_cas_uint32((uint32_t *)p, (uint32_t)c, (uint32_t)s));
#endif
}
JEMALLOC_INLINE void
atomic_write_u(unsigned *p, unsigned x)
{
#if (LG_SIZEOF_INT == 3)
atomic_write_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 2)
atomic_write_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
/******************************************************************************/
#endif

View File

@ -10,9 +10,7 @@
#ifdef JEMALLOC_H_EXTERNS
void *base_alloc(size_t size);
void *base_calloc(size_t number, size_t size);
extent_node_t *base_node_alloc(void);
void base_node_dalloc(extent_node_t *node);
void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);
bool base_boot(void);
void base_prefork(void);
void base_postfork_parent(void);

View File

@ -5,7 +5,7 @@
* Size and alignment of memory chunks that are allocated by the OS's virtual
* memory system.
*/
#define LG_CHUNK_DEFAULT 22
#define LG_CHUNK_DEFAULT 21
/* Return the chunk address for allocation address a. */
#define CHUNK_ADDR2BASE(a) \
@ -19,6 +19,16 @@
#define CHUNK_CEILING(s) \
(((s) + chunksize_mask) & ~chunksize_mask)
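With the new default of LG_CHUNK_DEFAULT == 21, chunksize is 1 << 21 (2 MiB, halved from the previous 4 MiB) and chunksize_mask is chunksize - 1, as declared later in this header. A small sketch of how CHUNK_CEILING rounds with those values (local stand-ins for the globals, not jemalloc code):

#include <assert.h>
#include <stddef.h>

int
main(void)
{
	size_t chunksize = (size_t)1 << 21;	/* 2 MiB */
	size_t chunksize_mask = chunksize - 1;
#define	CHUNK_CEILING(s)	(((s) + chunksize_mask) & ~chunksize_mask)

	assert(CHUNK_CEILING(1) == chunksize);
	assert(CHUNK_CEILING(chunksize) == chunksize);
	assert(CHUNK_CEILING(chunksize + 1) == 2 * chunksize);
	return (0);
}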
#define CHUNK_HOOKS_INITIALIZER { \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL \
}
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
@ -30,28 +40,36 @@
extern size_t opt_lg_chunk;
extern const char *opt_dss;
/* Protects stats_chunks; currently not used for any other purpose. */
extern malloc_mutex_t chunks_mtx;
/* Chunk statistics. */
extern chunk_stats_t stats_chunks;
extern rtree_t *chunks_rtree;
extern rtree_t chunks_rtree;
extern size_t chunksize;
extern size_t chunksize_mask; /* (chunksize - 1). */
extern size_t chunk_npages;
extern size_t map_bias; /* Number of arena chunk header pages. */
extern size_t map_misc_offset;
extern size_t arena_maxclass; /* Max size class for arenas. */
extern const chunk_hooks_t chunk_hooks_default;
chunk_hooks_t chunk_hooks_get(arena_t *arena);
chunk_hooks_t chunk_hooks_set(arena_t *arena,
const chunk_hooks_t *chunk_hooks);
bool chunk_register(const void *chunk, const extent_node_t *node);
void chunk_deregister(const void *chunk, const extent_node_t *node);
void *chunk_alloc_base(size_t size);
void *chunk_alloc_arena(chunk_alloc_t *chunk_alloc,
chunk_dalloc_t *chunk_dalloc, unsigned arena_ind, void *new_addr,
size_t size, size_t alignment, bool *zero);
void *chunk_alloc_default(void *new_addr, size_t size, size_t alignment,
bool *zero, unsigned arena_ind);
void chunk_unmap(void *chunk, size_t size);
bool chunk_dalloc_default(void *chunk, size_t size, unsigned arena_ind);
void *chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *new_addr, size_t size, size_t alignment, bool *zero,
bool dalloc_node);
void *chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit);
void chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool committed);
void chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool zeroed, bool committed);
void chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, bool committed);
bool chunk_purge_arena(arena_t *arena, void *chunk, size_t offset,
size_t length);
bool chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, size_t offset, size_t length);
bool chunk_boot(void);
void chunk_prefork(void);
void chunk_postfork_parent(void);
@ -61,6 +79,19 @@ void chunk_postfork_child(void);
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
extent_node_t *chunk_lookup(const void *chunk, bool dependent);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_CHUNK_C_))
JEMALLOC_INLINE extent_node_t *
chunk_lookup(const void *ptr, bool dependent)
{
return (rtree_get(&chunks_rtree, (uintptr_t)ptr, dependent));
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

Some files were not shown because too many files have changed in this diff