Imported Upstream version 1.9.0+dfsg1

Sylvestre Ledru 2016-05-29 17:53:46 +02:00
parent 7453a54e52
commit 54a0048b59
1798 changed files with 78176 additions and 47199 deletions


@ -71,7 +71,8 @@ which includes important information about what platform you're on, what
version of Rust you're using, etc.
Sometimes, a backtrace is helpful, and so including that is nice. To get
a backtrace, set the `RUST_BACKTRACE` environment variable. The easiest way
a backtrace, set the `RUST_BACKTRACE` environment variable to a value
other than `0`. The easiest way
to do this is to invoke `rustc` like this:
```bash
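# (restored sketch; the exact command was truncated by this diff hunk --
# any non-zero value of RUST_BACKTRACE works)
$ RUST_BACKTRACE=1 rustc ...
```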
@ -132,8 +133,8 @@ Some common make targets are:
- `make check-stage1-std NO_REBUILD=1` - test the standard library without
rebuilding the entire compiler
- `make check TESTNAME=<substring-of-test-name>` - Run a matching set of tests.
- `TESTNAME` should be a substring of the tests to match against e.g. it could
be the fully qualified test name, or just a part of it.
`TESTNAME=collections::hash::map::test_map::test_capacity_not_less_than_len`
or `TESTNAME=test_capacity_not_less_than_len`.
- `make check-stage1-rpass TESTNAME=<substring-of-test-name>` - Run a single


@ -77,7 +77,7 @@ build.
Download [MinGW from
here](http://mingw-w64.org/doku.php/download/mingw-builds), and choose the
`threads=win32,exceptions=dwarf/seh` flavor when installing. After installing,
`version=4.9.x,threads=win32,exceptions=dwarf/seh` flavor when installing. Also, make sure to install to a path without spaces in it. After installing,
add its `bin` directory to your `PATH`. This is due to [#28260](https://github.com/rust-lang/rust/issues/28260); in the future,
installing from pacman should be just fine.
@ -177,10 +177,11 @@ To contribute to Rust, please see [CONTRIBUTING](CONTRIBUTING.md).
Rust has an [IRC] culture and most real-time collaboration happens in a
variety of channels on Mozilla's IRC network, irc.mozilla.org. The
most popular channel is [#rust], a venue for general discussion about
Rust, and a good place to ask for help.
Rust. A good place to ask for help is [#rust-beginners].
[IRC]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
[#rust]: irc://irc.mozilla.org/rust
[#rust-beginners]: irc://irc.mozilla.org/rust-beginners
## License


@ -1,3 +1,291 @@
Version 1.9.0 (2016-05-26)
==========================
Language
--------
* The `#[deprecated]` attribute when applied to an API will generate
warnings when used. The warnings may be suppressed with
`#[allow(deprecated)]`. [RFC 1270]. (See the sketch after this list.)
* [`fn` item types are zero sized, and each `fn` names a unique
type][1.9fn]. This will break code that transmutes `fn`s, so calling
`transmute` on a `fn` type will generate a warning for a few cycles,
then will be converted to an error.
* [Field and method resolution understand visibility, so private
fields and methods cannot prevent the proper use of public fields
and methods][1.9fv].
* [The parser considers unicode codepoints in the
`PATTERN_WHITE_SPACE` category to be whitespace][1.9ws].
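
A minimal sketch of the `#[deprecated]` attribute described above (the
function names here are purely illustrative):

```rust
// Mark an item as deprecated (RFC 1270); `since` and `note` are optional.
#[deprecated(since = "1.9.0", note = "use `new_api` instead")]
fn old_api() {}

fn new_api() {}

// Without this `allow`, calling `old_api` produces a deprecation warning.
#[allow(deprecated)]
fn main() {
    old_api();
    new_api();
}
```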
Stabilized APIs
---------------
* [`std::panic`]
* [`std::panic::catch_unwind`][] (renamed from `recover`; see the example after this list)
* [`std::panic::resume_unwind`][] (renamed from `propagate`)
* [`std::panic::AssertUnwindSafe`][] (renamed from `AssertRecoverSafe`)
* [`std::panic::UnwindSafe`][] (renamed from `RecoverSafe`)
* [`str::is_char_boundary`]
* [`<*const T>::as_ref`]
* [`<*mut T>::as_ref`]
* [`<*mut T>::as_mut`]
* [`AsciiExt::make_ascii_uppercase`]
* [`AsciiExt::make_ascii_lowercase`]
* [`char::decode_utf16`]
* [`char::DecodeUtf16`]
* [`char::DecodeUtf16Error`]
* [`char::DecodeUtf16Error::unpaired_surrogate`]
* [`BTreeSet::take`]
* [`BTreeSet::replace`]
* [`BTreeSet::get`]
* [`HashSet::take`]
* [`HashSet::replace`]
* [`HashSet::get`]
* [`OsString::with_capacity`]
* [`OsString::clear`]
* [`OsString::capacity`]
* [`OsString::reserve`]
* [`OsString::reserve_exact`]
* [`OsStr::is_empty`]
* [`OsStr::len`]
* [`std::os::unix::thread`]
* [`RawPthread`]
* [`JoinHandleExt`]
* [`JoinHandleExt::as_pthread_t`]
* [`JoinHandleExt::into_pthread_t`]
* [`HashSet::hasher`]
* [`HashMap::hasher`]
* [`CommandExt::exec`]
* [`File::try_clone`]
* [`SocketAddr::set_ip`]
* [`SocketAddr::set_port`]
* [`SocketAddrV4::set_ip`]
* [`SocketAddrV4::set_port`]
* [`SocketAddrV6::set_ip`]
* [`SocketAddrV6::set_port`]
* [`SocketAddrV6::set_flowinfo`]
* [`SocketAddrV6::set_scope_id`]
* [`slice::copy_from_slice`]
* [`ptr::read_volatile`]
* [`ptr::write_volatile`]
* [`OpenOptions::create_new`]
* [`TcpStream::set_nodelay`]
* [`TcpStream::nodelay`]
* [`TcpStream::set_ttl`]
* [`TcpStream::ttl`]
* [`TcpStream::set_only_v6`]
* [`TcpStream::only_v6`]
* [`TcpStream::take_error`]
* [`TcpStream::set_nonblocking`]
* [`TcpListener::set_ttl`]
* [`TcpListener::ttl`]
* [`TcpListener::set_only_v6`]
* [`TcpListener::only_v6`]
* [`TcpListener::take_error`]
* [`TcpListener::set_nonblocking`]
* [`UdpSocket::set_broadcast`]
* [`UdpSocket::broadcast`]
* [`UdpSocket::set_multicast_loop_v4`]
* [`UdpSocket::multicast_loop_v4`]
* [`UdpSocket::set_multicast_ttl_v4`]
* [`UdpSocket::multicast_ttl_v4`]
* [`UdpSocket::set_multicast_loop_v6`]
* [`UdpSocket::multicast_loop_v6`]
* [`UdpSocket::set_multicast_ttl_v6`]
* [`UdpSocket::multicast_ttl_v6`]
* [`UdpSocket::set_ttl`]
* [`UdpSocket::ttl`]
* [`UdpSocket::set_only_v6`]
* [`UdpSocket::only_v6`]
* [`UdpSocket::join_multicast_v4`]
* [`UdpSocket::join_multicast_v6`]
* [`UdpSocket::leave_multicast_v4`]
* [`UdpSocket::leave_multicast_v6`]
* [`UdpSocket::take_error`]
* [`UdpSocket::connect`]
* [`UdpSocket::send`]
* [`UdpSocket::recv`]
* [`UdpSocket::set_nonblocking`]
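
A short, illustrative sketch of the newly stabilized panic-handling APIs
(not taken from the release notes themselves):

```rust
use std::panic;

fn main() {
    // catch_unwind runs the closure and converts a panic into an Err.
    // The default panic hook still prints the panic message to stderr.
    let result = panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(result.is_err());

    // AssertUnwindSafe asserts that the captured `&mut counter` may be
    // used across the unwind boundary.
    let mut counter = 0;
    let _ = panic::catch_unwind(panic::AssertUnwindSafe(|| {
        counter += 1;
    }));
    assert_eq!(counter, 1);
}
```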
Libraries
---------
* [`std::sync::Once` is poisoned if its initialization function
fails][1.9o].
* [`cell::Ref` and `cell::RefMut` can contain unsized types][1.9cu].
* [Most types implement `fmt::Debug`][1.9db].
* [The default buffer size used by `BufReader` and `BufWriter` was
reduced to 8K, from 64K][1.9bf]. This is in line with the buffer size
used by other languages.
* [`Instant`, `SystemTime` and `Duration` implement `+=` and `-=`.
`Duration` additionally implements `*=` and `/=`][1.9ta]. (See the sketch after this list.)
* [`Skip` is a `DoubleEndedIterator`][1.9sk].
* [`From<[u8; 4]>` is implemented for `Ipv4Addr`][1.9fi].
* [`Chain` implements `BufRead`][1.9ch].
* [`HashMap`, `HashSet` and iterators are covariant][1.9hc].
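
An illustrative sketch of the new trait implementations listed above:

```rust
use std::net::Ipv4Addr;
use std::time::{Duration, Instant};

fn main() {
    // Instant and Duration implement `+=` and `-=`.
    let start = Instant::now();
    let mut deadline = start;
    deadline += Duration::from_secs(5);
    assert_eq!(deadline - start, Duration::from_secs(5));

    // Duration additionally implements `*=` and `/=`.
    let mut d = Duration::from_millis(100);
    d *= 3;
    d /= 2;
    assert_eq!(d, Duration::from_millis(150));

    // From<[u8; 4]> is implemented for Ipv4Addr.
    let addr = Ipv4Addr::from([127, 0, 0, 1]);
    assert!(addr.is_loopback());
}
```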
Cargo
-----
* [Cargo can now run concurrently][1.9cc].
* [Top-level overrides allow specific revisions of crates to be
overridden through the entire crate graph][1.9ct]. This is intended
to make upgrades easier for large projects, by allowing crates to be
forked temporarily until they've been upgraded and republished.
* [Cargo exports a `CARGO_PKG_AUTHORS` environment variable][1.9cp].
* [Cargo will pass the contents of the `RUSTFLAGS` variable to `rustc`
on the commandline][1.9cf]. `rustc` arguments can also be specified
in the `build.rustflags` configuration key.
Performance
-----------
* [During type unification, the complexity of comparing variables for
equivalence was reduced from `O(n!)` to `O(n)`][1.9tu]. This leads
to major compile-time improvements in some scenarios.
* [`ToString` is specialized for `str`, giving it the same performance
as `to_owned`][1.9ts].
* [Spawning processes with `Command::output` no longer creates extra
threads][1.9sp].
* [`#[derive(PartialEq)]` and `#[derive(PartialOrd)]` emit less code
for C-like enums][1.9cl].
Misc
----
* [Passing the `--quiet` flag to a test runner will produce
much-abbreviated output][1.9q].
* The Rust Project now publishes std binaries for the
`mips-unknown-linux-musl`, `mipsel-unknown-linux-musl`, and
`i586-pc-windows-msvc` targets.
Compatibility Notes
-------------------
* [`std::sync::Once` is poisoned if its initialization function
fails][1.9o].
* [It is illegal to define methods with the same name in overlapping
inherent `impl` blocks][1.9sn].
* [`fn` item types are zero sized, and each `fn` names a unique
type][1.9fn]. This will break code that transmutes `fn`s, so calling
`transmute` on a `fn` type will generate a warning for a few cycles,
then will be converted to an error. (See the sketch after this list.)
* [Improvements to const evaluation may trigger new errors when integer
literals are out of range][1.9ce].
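
A small sketch illustrating the `fn` item change in the note above: each
`fn` item has its own zero-sized type, and code should coerce to a function
pointer rather than transmute the item (`foo` is a made-up example):

```rust
use std::mem;

fn foo() {}

fn main() {
    // The fn item `foo` has a unique, zero-sized type.
    assert_eq!(mem::size_of_val(&foo), 0);

    // Coerce to a function pointer instead of calling transmute on the item.
    let f: fn() = foo;
    f();
}
```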
[1.9bf]: https://github.com/rust-lang/rust/pull/32695
[1.9cc]: https://github.com/rust-lang/cargo/pull/2486
[1.9ce]: https://github.com/rust-lang/rust/pull/30587
[1.9cf]: https://github.com/rust-lang/cargo/pull/2241
[1.9ch]: https://github.com/rust-lang/rust/pull/32541
[1.9cl]: https://github.com/rust-lang/rust/pull/31977
[1.9cp]: https://github.com/rust-lang/cargo/pull/2465
[1.9ct]: https://github.com/rust-lang/cargo/pull/2385
[1.9cu]: https://github.com/rust-lang/rust/pull/32652
[1.9db]: https://github.com/rust-lang/rust/pull/32054
[1.9fi]: https://github.com/rust-lang/rust/pull/32050
[1.9fn]: https://github.com/rust-lang/rust/pull/31710
[1.9fv]: https://github.com/rust-lang/rust/pull/31938
[1.9hc]: https://github.com/rust-lang/rust/pull/32635
[1.9o]: https://github.com/rust-lang/rust/pull/32325
[1.9q]: https://github.com/rust-lang/rust/pull/31887
[1.9sk]: https://github.com/rust-lang/rust/pull/31700
[1.9sn]: https://github.com/rust-lang/rust/pull/31925
[1.9sp]: https://github.com/rust-lang/rust/pull/31618
[1.9ta]: https://github.com/rust-lang/rust/pull/32448
[1.9ts]: https://github.com/rust-lang/rust/pull/32586
[1.9tu]: https://github.com/rust-lang/rust/pull/32062
[1.9ws]: https://github.com/rust-lang/rust/pull/29734
[RFC 1270]: https://github.com/rust-lang/rfcs/blob/master/text/1270-deprecation.md
[`<*const T>::as_ref`]: http://doc.rust-lang.org/nightly/std/primitive.pointer.html#method.as_ref
[`<*mut T>::as_mut`]: http://doc.rust-lang.org/nightly/std/primitive.pointer.html#method.as_mut
[`<*mut T>::as_ref`]: http://doc.rust-lang.org/nightly/std/primitive.pointer.html#method.as_ref
[`slice::copy_from_slice`]: http://doc.rust-lang.org/nightly/std/primitive.slice.html#method.copy_from_slice
[`AsciiExt::make_ascii_lowercase`]: http://doc.rust-lang.org/nightly/std/ascii/trait.AsciiExt.html#tymethod.make_ascii_lowercase
[`AsciiExt::make_ascii_uppercase`]: http://doc.rust-lang.org/nightly/std/ascii/trait.AsciiExt.html#tymethod.make_ascii_uppercase
[`BTreeSet::get`]: http://doc.rust-lang.org/nightly/collections/btree/set/struct.BTreeSet.html#method.get
[`BTreeSet::replace`]: http://doc.rust-lang.org/nightly/collections/btree/set/struct.BTreeSet.html#method.replace
[`BTreeSet::take`]: http://doc.rust-lang.org/nightly/collections/btree/set/struct.BTreeSet.html#method.take
[`CommandExt::exec`]: http://doc.rust-lang.org/nightly/std/os/unix/process/trait.CommandExt.html#tymethod.exec
[`File::try_clone`]: http://doc.rust-lang.org/nightly/std/fs/struct.File.html#method.try_clone
[`HashMap::hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashMap.html#method.hasher
[`HashSet::get`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.get
[`HashSet::hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.hasher
[`HashSet::replace`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.replace
[`HashSet::take`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.take
[`JoinHandleExt::as_pthread_t`]: http://doc.rust-lang.org/nightly/std/os/unix/thread/trait.JoinHandleExt.html#tymethod.as_pthread_t
[`JoinHandleExt::into_pthread_t`]: http://doc.rust-lang.org/nightly/std/os/unix/thread/trait.JoinHandleExt.html#tymethod.into_pthread_t
[`JoinHandleExt`]: http://doc.rust-lang.org/nightly/std/os/unix/thread/trait.JoinHandleExt.html
[`OpenOptions::create_new`]: http://doc.rust-lang.org/nightly/std/fs/struct.OpenOptions.html#method.create_new
[`OsStr::is_empty`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsStr.html#method.is_empty
[`OsStr::len`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsStr.html#method.len
[`OsString::capacity`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsString.html#method.capacity
[`OsString::clear`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsString.html#method.clear
[`OsString::reserve_exact`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsString.html#method.reserve_exact
[`OsString::reserve`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsString.html#method.reserve
[`OsString::with_capacity`]: http://doc.rust-lang.org/nightly/std/ffi/struct.OsString.html#method.with_capacity
[`RawPthread`]: http://doc.rust-lang.org/nightly/std/os/unix/thread/type.RawPthread.html
[`SocketAddr::set_ip`]: http://doc.rust-lang.org/nightly/std/net/enum.SocketAddr.html#method.set_ip
[`SocketAddr::set_port`]: http://doc.rust-lang.org/nightly/std/net/enum.SocketAddr.html#method.set_port
[`SocketAddrV4::set_ip`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV4.html#method.set_ip
[`SocketAddrV4::set_port`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV4.html#method.set_port
[`SocketAddrV6::set_flowinfo`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV6.html#method.set_flowinfo
[`SocketAddrV6::set_ip`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV6.html#method.set_ip
[`SocketAddrV6::set_port`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV6.html#method.set_port
[`SocketAddrV6::set_scope_id`]: http://doc.rust-lang.org/nightly/std/net/struct.SocketAddrV6.html#method.set_scope_id
[`TcpListener::only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.only_v6
[`TcpListener::set_nonblocking`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.set_nonblocking
[`TcpListener::set_only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.set_only_v6
[`TcpListener::set_ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.set_ttl
[`TcpListener::take_error`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.take_error
[`TcpListener::ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpListener.html#method.ttl
[`TcpStream::nodelay`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.nodelay
[`TcpStream::only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.only_v6
[`TcpStream::set_nodelay`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.set_nodelay
[`TcpStream::set_nonblocking`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.set_nonblocking
[`TcpStream::set_only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.set_only_v6
[`TcpStream::set_ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.set_ttl
[`TcpStream::take_error`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.take_error
[`TcpStream::ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.TcpStream.html#method.ttl
[`UdpSocket::broadcast`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.broadcast
[`UdpSocket::connect`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.connect
[`UdpSocket::join_multicast_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.join_multicast_v4
[`UdpSocket::join_multicast_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.join_multicast_v6
[`UdpSocket::leave_multicast_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.leave_multicast_v4
[`UdpSocket::leave_multicast_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.leave_multicast_v6
[`UdpSocket::multicast_loop_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.multicast_loop_v4
[`UdpSocket::multicast_loop_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.multicast_loop_v6
[`UdpSocket::multicast_ttl_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.multicast_ttl_v4
[`UdpSocket::multicast_ttl_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.multicast_ttl_v6
[`UdpSocket::only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.only_v6
[`UdpSocket::recv`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.recv
[`UdpSocket::send`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.send
[`UdpSocket::set_broadcast`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_broadcast
[`UdpSocket::set_multicast_loop_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_multicast_loop_v4
[`UdpSocket::set_multicast_loop_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_multicast_loop_v6
[`UdpSocket::set_multicast_ttl_v4`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_multicast_ttl_v4
[`UdpSocket::set_multicast_ttl_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_multicast_ttl_v6
[`UdpSocket::set_nonblocking`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_nonblocking
[`UdpSocket::set_only_v6`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_only_v6
[`UdpSocket::set_ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.set_ttl
[`UdpSocket::take_error`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.take_error
[`UdpSocket::ttl`]: http://doc.rust-lang.org/nightly/std/net/struct.UdpSocket.html#method.ttl
[`char::DecodeUtf16Error::unpaired_surrogate`]: http://doc.rust-lang.org/nightly/std/char/struct.DecodeUtf16Error.html#method.unpaired_surrogate
[`char::DecodeUtf16Error`]: http://doc.rust-lang.org/nightly/std/char/struct.DecodeUtf16Error.html
[`char::DecodeUtf16`]: http://doc.rust-lang.org/nightly/std/char/struct.DecodeUtf16.html
[`char::decode_utf16`]: http://doc.rust-lang.org/nightly/std/char/fn.decode_utf16.html
[`ptr::read_volatile`]: http://doc.rust-lang.org/nightly/std/ptr/fn.read_volatile.html
[`ptr::write_volatile`]: http://doc.rust-lang.org/nightly/std/ptr/fn.write_volatile.html
[`std::os::unix::thread`]: http://doc.rust-lang.org/nightly/std/os/unix/thread/index.html
[`std::panic::AssertUnwindSafe`]: http://doc.rust-lang.org/nightly/std/panic/struct.AssertUnwindSafe.html
[`std::panic::UnwindSafe`]: http://doc.rust-lang.org/nightly/std/panic/trait.UnwindSafe.html
[`std::panic::catch_unwind`]: http://doc.rust-lang.org/nightly/std/panic/fn.catch_unwind.html
[`std::panic::resume_unwind`]: http://doc.rust-lang.org/nightly/std/panic/fn.resume_unwind.html
[`std::panic`]: http://doc.rust-lang.org/nightly/std/panic/index.html
[`str::is_char_boundary`]: http://doc.rust-lang.org/nightly/std/primitive.str.html#method.is_char_boundary
Version 1.8.0 (2016-04-14)
==========================
@ -209,16 +497,6 @@ Compatibility Notes
Version 1.7.0 (2016-03-03)
==========================
Language
--------
* Soundness fixes to the interactions between associated types and
lifetimes, specified in [RFC 1214], [now generate errors][1.7sf] for
code that violates the new rules. This is a significant change that
is known to break existing code, so it has emitted warnings for the
new error cases since 1.4 to give crate authors time to adapt. The
details of what is changing are subtle; read the RFC for more.
Libraries
---------
@ -267,6 +545,17 @@ Libraries
* [`IntoStringError::into_cstring`]
* [`IntoStringError::utf8_error`]
* `Error for IntoStringError`
* Hashing (see the sketch after this list)
* [`std::hash::BuildHasher`]
* [`BuildHasher::Hasher`]
* [`BuildHasher::build_hasher`]
* [`std::hash::BuildHasherDefault`]
* [`HashMap::with_hasher`]
* [`HashMap::with_capacity_and_hasher`]
* [`HashSet::with_hasher`]
* [`HashSet::with_capacity_and_hasher`]
* [`std::collections::hash_map::RandomState`]
* [`RandomState::new`]
* [Validating UTF-8 is faster by a factor of between 7 and 14x for
ASCII input][1.7utf8]. This means that creating `String`s and `str`s
from bytes is faster.
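
An illustrative sketch of the hashing constructors listed above (the key and
value types are arbitrary):

```rust
use std::collections::{HashMap, HashSet};
use std::collections::hash_map::RandomState;

fn main() {
    // Pass the hasher state explicitly instead of relying on the default.
    let state = RandomState::new();
    let mut map: HashMap<&str, i32> = HashMap::with_hasher(state);
    map.insert("answer", 42);

    // The capacity-and-hasher constructors work the same way.
    let set: HashSet<i32> = HashSet::with_capacity_and_hasher(16, RandomState::new());
    assert!(set.is_empty());
}
```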
@ -288,9 +577,6 @@ Libraries
Misc
----
* [The `--error-format=json` flag to `rustc` causes it to emit errors
in JSON format][1.7j]. This is an unstable flag and so also requires
the `-Z unstable-options` flag.
* [When running tests with `--test`, rustdoc will pass `--cfg`
arguments to the compiler][1.7dt].
* [The compiler is built with RPATH information by default][1.7rpa].
@ -312,6 +598,12 @@ Cargo
Compatibility Notes
-------------------
* Soundness fixes to the interactions between associated types and
lifetimes, specified in [RFC 1214], [now generate errors][1.7sf] for
code that violates the new rules. This is a significant change that
is known to break existing code, so it has emitted warnings for the
new error cases since 1.4 to give crate authors time to adapt. The
details of what is changing are subtle; read the RFC for more.
* [Several bugs in the compiler's visibility calculations were
fixed][1.7v]. Since this was found to break significant amounts of
code, the new errors will be emitted as warnings for several release
@ -320,8 +612,8 @@ Compatibility Notes
that were not intended. In this release, [defaulted type parameters
appearing outside of type definitions will generate a
warning][1.7d], which will become an error in future releases.
* [Parsing "." as a float results in an error instead of
0][1.7p]. That is, `".".parse::<f32>()` returns `Err`, not `Ok(0)`.
* [Parsing "." as a float results in an error instead of 0][1.7p].
That is, `".".parse::<f32>()` returns `Err`, not `Ok(0.0)`.
* [Borrows of closure parameters may not outlive the closure][1.7bc].
[1.7a]: https://github.com/rust-lang/rust/pull/30928
@ -334,7 +626,6 @@ Compatibility Notes
[1.7dta]: https://github.com/rust-lang/rust/pull/30394
[1.7f]: https://github.com/rust-lang/rust/pull/30672
[1.7h]: https://github.com/rust-lang/rust/pull/30818
[1.7j]: https://github.com/rust-lang/rust/pull/30711
[1.7ll]: https://github.com/rust-lang/rust/pull/30663
[1.7m]: https://github.com/rust-lang/rust/pull/30381
[1.7p]: https://github.com/rust-lang/rust/pull/30681
@ -345,11 +636,15 @@ Compatibility Notes
[1.7utf8]: https://github.com/rust-lang/rust/pull/30740
[1.7v]: https://github.com/rust-lang/rust/pull/29973
[RFC 1214]: https://github.com/rust-lang/rfcs/blob/master/text/1214-projections-lifetimes-and-wf.md
[`BuildHasher::Hasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.Hasher.html
[`BuildHasher::build_hasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.BuildHasher.html#tymethod.build_hasher
[`CString::into_bytes_with_nul`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_bytes_with_nul
[`CString::into_bytes`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_bytes
[`CString::into_string`]: http://doc.rust-lang.org/nightly/std/ffi/struct.CString.html#method.into_string
[`HashMap::with_capacity_and_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashMap.html#method.with_capacity_and_hasher
[`HashMap::with_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashMap.html#method.with_hasher
[`HashSet::with_capacity_and_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.with_capacity_and_hasher
[`HashSet::with_hasher`]: http://doc.rust-lang.org/nightly/std/collections/struct.HashSet.html#method.with_hasher
[`IntoStringError::into_cstring`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html#method.into_cstring
[`IntoStringError::utf8_error`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html#method.utf8_error
[`Ipv4Addr::is_broadcast`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_broadcast
@ -362,10 +657,12 @@ Compatibility Notes
[`Ipv6Addr::is_multicast`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_multicast
[`Ipv6Addr::is_unspecified`]: http://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_unspecified
[`Path::strip_prefix`]: http://doc.rust-lang.org/nightly/std/path/struct.Path.html#method.strip_prefix
[`RandomState::new`]: http://doc.rust-lang.org/nightly/std/collections/hash_map/struct.RandomState.html#method.new
[`String::as_mut_str`]: http://doc.rust-lang.org/nightly/std/string/struct.String.html#method.as_mut_str
[`String::as_str`]: http://doc.rust-lang.org/nightly/std/string/struct.String.html#method.as_str
[`Vec::as_mut_slice`]: http://doc.rust-lang.org/nightly/std/vec/struct.Vec.html#method.as_mut_slice
[`Vec::as_slice`]: http://doc.rust-lang.org/nightly/std/vec/struct.Vec.html#method.as_slice
[`clone_from_slice`]: http://doc.rust-lang.org/nightly/std/primitive.slice.html#method.clone_from_slice
[`ffi::IntoStringError`]: http://doc.rust-lang.org/nightly/std/ffi/struct.IntoStringError.html
[`i32::checked_neg`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_neg
[`i32::checked_rem`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.checked_rem
@ -381,8 +678,13 @@ Compatibility Notes
[`i32::overflowing_sub`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.overflowing_sub
[`i32::saturating_mul`]: http://doc.rust-lang.org/nightly/std/primitive.i32.html#method.saturating_mul
[`path::StripPrefixError`]: http://doc.rust-lang.org/nightly/std/path/struct.StripPrefixError.html
[`sort_by_key`]: http://doc.rust-lang.org/nightly/std/primitive.slice.html#method.sort_by_key
[`std::collections::hash_map::RandomState`]: http://doc.rust-lang.org/nightly/std/collections/hash_map/struct.RandomState.html
[`std::hash::BuildHasherDefault`]: http://doc.rust-lang.org/nightly/std/hash/struct.BuildHasherDefault.html
[`std::hash::BuildHasher`]: http://doc.rust-lang.org/nightly/std/hash/trait.BuildHasher.html
[`u32::checked_neg`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_neg
[`u32::checked_rem`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_rem
[`u32::checked_shl`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.checked_shl
[`u32::overflowing_add`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_add
[`u32::overflowing_div`]: http://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_div

configure (vendored)

@ -607,6 +607,8 @@ opt dist-host-only 0 "only install bins for the host architecture"
opt inject-std-version 1 "inject the current compiler version of libstd into programs"
opt llvm-version-check 1 "check if the LLVM version is supported, build anyway"
opt rustbuild 0 "use the rust and cargo based build system"
opt orbit 0 "get MIR where it belongs - everywhere; most importantly, in orbit"
opt codegen-tests 1 "run the src/test/codegen tests"
# Optimization and debugging options. These may be overridden by the release channel, etc.
opt_nosave optimize 1 "build optimized rust code"
@ -713,17 +715,7 @@ if [ -n "$CFG_ENABLE_DEBUG_ASSERTIONS" ]; then putvar CFG_ENABLE_DEBUG_ASSERTION
if [ -n "$CFG_ENABLE_DEBUGINFO" ]; then putvar CFG_ENABLE_DEBUGINFO; fi
if [ -n "$CFG_ENABLE_DEBUG_JEMALLOC" ]; then putvar CFG_ENABLE_DEBUG_JEMALLOC; fi
# A magic value that allows the compiler to use unstable features
# during the bootstrap even when doing so would normally be an error
# because of feature staging or because the build turns on
# warnings-as-errors and unstable features default to warnings. The
# build has to match this key in an env var. Meant to be a mild
# deterrent from users just turning on unstable features on the stable
# channel.
# Basing CFG_BOOTSTRAP_KEY on CFG_BOOTSTRAP_KEY lets it get picked up
# during a Makefile reconfig.
CFG_BOOTSTRAP_KEY="${CFG_BOOTSTRAP_KEY-`date +%H:%M:%S`}"
putvar CFG_BOOTSTRAP_KEY
if [ -n "$CFG_ENABLE_ORBIT" ]; then putvar CFG_ENABLE_ORBIT; fi
step_msg "looking for build programs"
@ -966,11 +958,11 @@ then
LLVM_VERSION=$($LLVM_CONFIG --version)
case $LLVM_VERSION in
(3.[5-8]*)
(3.[6-8]*)
msg "found ok version of LLVM: $LLVM_VERSION"
;;
(*)
err "bad LLVM version: $LLVM_VERSION, need >=3.5"
err "bad LLVM version: $LLVM_VERSION, need >=3.6"
;;
esac
fi
@ -1031,7 +1023,7 @@ then
if [ -n "$CFG_OSX_CLANG_VERSION" ]
then
case $CFG_OSX_CLANG_VERSION in
(7.0* | 7.1* | 7.2*)
(7.0* | 7.1* | 7.2* | 7.3*)
step_msg "found ok version of APPLE CLANG: $CFG_OSX_CLANG_VERSION"
;;
(*)
@ -1249,7 +1241,7 @@ $ pacman -R cmake && pacman -S mingw-w64-x86_64-cmake
bits=x86_64
msvc_part=amd64
;;
i686-*)
i*86-*)
bits=i386
msvc_part=
;;
@ -1494,7 +1486,9 @@ do
LLVM_INST_DIR=$CFG_LLVM_ROOT
do_reconfigure=0
# Check that LLVM FileCheck is available. Needed for the tests
need_cmd $LLVM_INST_DIR/bin/FileCheck
if [ -z "$CFG_DISABLE_CODEGEN_TESTS" ]; then
need_cmd $LLVM_INST_DIR/bin/FileCheck
fi
fi
if [ ${do_reconfigure} -ne 0 ]


@ -268,7 +268,7 @@ the maximum number of threads used for this purpose.
.TP
\fBRUST_TEST_NOCAPTURE\fR
A synonym for the --nocapture flag.
If set to a value other than "0", a synonym for the --nocapture flag.
.TP
\fBRUST_MIN_STACK\fR
@ -276,7 +276,7 @@ Sets the minimum stack size for new threads.
.TP
\fBRUST_BACKTRACE\fR
If set, produces a backtrace in the output of a program which panics.
If set to a value other than "0", produces a backtrace in the output of a program which panics.
.SH "EXAMPLES"
To build an executable from a source file with a main function:


@ -0,0 +1,28 @@
# i586-pc-windows-msvc configuration
CC_i586-pc-windows-msvc="$(CFG_MSVC_CL_i386)" -nologo
LINK_i586-pc-windows-msvc="$(CFG_MSVC_LINK_i386)" -nologo
CXX_i586-pc-windows-msvc="$(CFG_MSVC_CL_i386)" -nologo
CPP_i586-pc-windows-msvc="$(CFG_MSVC_CL_i386)" -nologo
AR_i586-pc-windows-msvc="$(CFG_MSVC_LIB_i386)" -nologo
CFG_LIB_NAME_i586-pc-windows-msvc=$(1).dll
CFG_STATIC_LIB_NAME_i586-pc-windows-msvc=$(1).lib
CFG_LIB_GLOB_i586-pc-windows-msvc=$(1)-*.{dll,lib}
CFG_LIB_DSYM_GLOB_i586-pc-windows-msvc=$(1)-*.dylib.dSYM
CFG_JEMALLOC_CFLAGS_i586-pc-windows-msvc :=
CFG_GCCISH_CFLAGS_i586-pc-windows-msvc := -MD -arch:IA32
CFG_GCCISH_CXXFLAGS_i586-pc-windows-msvc := -MD -arch:IA32
CFG_GCCISH_LINK_FLAGS_i586-pc-windows-msvc :=
CFG_GCCISH_DEF_FLAG_i586-pc-windows-msvc :=
CFG_LLC_FLAGS_i586-pc-windows-msvc :=
CFG_INSTALL_NAME_i586-pc-windows-msvc =
CFG_EXE_SUFFIX_i586-pc-windows-msvc := .exe
CFG_WINDOWSY_i586-pc-windows-msvc := 1
CFG_UNIXY_i586-pc-windows-msvc :=
CFG_LDPATH_i586-pc-windows-msvc :=
CFG_RUN_i586-pc-windows-msvc=$(2)
CFG_RUN_TARG_i586-pc-windows-msvc=$(call CFG_RUN_i586-pc-windows-msvc,,$(2))
CFG_GNU_TRIPLE_i586-pc-windows-msvc := i586-pc-win32
# Currently the build system is not configured to build jemalloc
# with MSVC, so we omit this optional dependency.
CFG_DISABLE_JEMALLOC_i586-pc-windows-msvc := 1


@ -7,9 +7,9 @@ CFG_LIB_NAME_i586-unknown-linux-gnu=lib$(1).so
CFG_STATIC_LIB_NAME_i586-unknown-linux-gnu=lib$(1).a
CFG_LIB_GLOB_i586-unknown-linux-gnu=lib$(1)-*.so
CFG_LIB_DSYM_GLOB_i586-unknown-linux-gnu=lib$(1)-*.dylib.dSYM
CFG_JEMALLOC_CFLAGS_i586-unknown-linux-gnu := -m32 $(CFLAGS)
CFG_GCCISH_CFLAGS_i586-unknown-linux-gnu := -Wall -Werror -g -fPIC -m32 $(CFLAGS)
CFG_GCCISH_CXXFLAGS_i586-unknown-linux-gnu := -fno-rtti $(CXXFLAGS)
CFG_JEMALLOC_CFLAGS_i586-unknown-linux-gnu := -m32 $(CFLAGS) -march=pentium
CFG_GCCISH_CFLAGS_i586-unknown-linux-gnu := -Wall -Werror -g -fPIC -m32 $(CFLAGS) -march=pentium
CFG_GCCISH_CXXFLAGS_i586-unknown-linux-gnu := -fno-rtti $(CXXFLAGS) -march=pentium
CFG_GCCISH_LINK_FLAGS_i586-unknown-linux-gnu := -shared -fPIC -ldl -pthread -lrt -g -m32
CFG_GCCISH_DEF_FLAG_i586-unknown-linux-gnu := -Wl,--export-dynamic,--dynamic-list=
CFG_LLC_FLAGS_i586-unknown-linux-gnu :=


@ -25,5 +25,3 @@ CFG_GNU_TRIPLE_i686-pc-windows-gnu := i686-w64-mingw32
CFG_THIRD_PARTY_OBJECTS_i686-pc-windows-gnu := crt2.o dllcrt2.o
CFG_INSTALLED_OBJECTS_i686-pc-windows-gnu := crt2.o dllcrt2.o rsbegin.o rsend.o
CFG_RUSTRT_HAS_STARTUP_OBJS_i686-pc-windows-gnu := 1
# FIXME(#31030) - there's not a great reason to disable jemalloc here
CFG_DISABLE_JEMALLOC_i686-pc-windows-gnu := 1


@ -25,5 +25,3 @@ CFG_GNU_TRIPLE_x86_64-pc-windows-gnu := x86_64-w64-mingw32
CFG_THIRD_PARTY_OBJECTS_x86_64-pc-windows-gnu := crt2.o dllcrt2.o
CFG_INSTALLED_OBJECTS_x86_64-pc-windows-gnu := crt2.o dllcrt2.o rsbegin.o rsend.o
CFG_RUSTRT_HAS_STARTUP_OBJS_x86_64-pc-windows-gnu := 1
# FIXME(#31030) - there's not a great reason to disable jemalloc here
CFG_DISABLE_JEMALLOC_x86_64-pc-windows-gnu := 1


@ -49,16 +49,18 @@
# automatically generated for all stage/host/target combinations.
################################################################################
TARGET_CRATES := libc std flate arena term \
serialize getopts collections test rand \
log graphviz core rbml alloc \
TARGET_CRATES := libc std term \
getopts collections test rand \
core alloc \
rustc_unicode rustc_bitflags \
alloc_system alloc_jemalloc
RUSTC_CRATES := rustc rustc_typeck rustc_mir rustc_borrowck rustc_resolve rustc_driver \
rustc_trans rustc_back rustc_llvm rustc_privacy rustc_lint \
rustc_data_structures rustc_front rustc_platform_intrinsics \
rustc_plugin rustc_metadata rustc_passes
HOST_CRATES := syntax syntax_ext $(RUSTC_CRATES) rustdoc fmt_macros
rustc_data_structures rustc_platform_intrinsics \
rustc_plugin rustc_metadata rustc_passes rustc_save_analysis \
rustc_const_eval rustc_const_math rustc_incremental
HOST_CRATES := syntax syntax_ext $(RUSTC_CRATES) rustdoc fmt_macros \
flate arena graphviz rbml log serialize
TOOLS := compiletest rustdoc rustc rustbook error_index_generator
DEPS_core :=
@ -84,40 +86,49 @@ DEPS_log := std
DEPS_num := std
DEPS_rbml := std log serialize
DEPS_serialize := std log
DEPS_term := std log
DEPS_test := std getopts serialize rbml term native:rust_test_helpers
DEPS_term := std
DEPS_test := std getopts term native:rust_test_helpers
DEPS_syntax := std term serialize log arena libc rustc_bitflags
DEPS_syntax := std term serialize log arena libc rustc_bitflags rustc_unicode
DEPS_syntax_ext := syntax fmt_macros
DEPS_rustc := syntax fmt_macros flate arena serialize getopts rbml rustc_front\
log graphviz rustc_llvm rustc_back rustc_data_structures
DEPS_rustc_back := std syntax rustc_llvm rustc_front flate log libc
DEPS_rustc_borrowck := rustc rustc_front log graphviz syntax
DEPS_rustc_const_math := std syntax log serialize
DEPS_rustc_const_eval := rustc_const_math rustc syntax log serialize \
rustc_back graphviz
DEPS_rustc := syntax fmt_macros flate arena serialize getopts rbml \
log graphviz rustc_back rustc_data_structures\
rustc_const_math
DEPS_rustc_back := std syntax flate log libc
DEPS_rustc_borrowck := rustc rustc_mir log graphviz syntax
DEPS_rustc_data_structures := std log serialize
DEPS_rustc_driver := arena flate getopts graphviz libc rustc rustc_back rustc_borrowck \
rustc_typeck rustc_mir rustc_resolve log syntax serialize rustc_llvm \
rustc_trans rustc_privacy rustc_lint rustc_front rustc_plugin \
rustc_metadata syntax_ext rustc_passes
DEPS_rustc_front := std syntax log serialize
DEPS_rustc_lint := rustc log syntax
rustc_trans rustc_privacy rustc_lint rustc_plugin \
rustc_metadata syntax_ext rustc_passes rustc_save_analysis rustc_const_eval \
rustc_incremental
DEPS_rustc_lint := rustc log syntax rustc_const_eval
DEPS_rustc_llvm := native:rustllvm libc std rustc_bitflags
DEPS_rustc_metadata := rustc rustc_front syntax rbml
DEPS_rustc_passes := syntax rustc core rustc_front
DEPS_rustc_mir := rustc rustc_front syntax
DEPS_rustc_resolve := arena rustc rustc_front log syntax
DEPS_rustc_platform_intrinsics := rustc rustc_llvm
DEPS_rustc_metadata := rustc syntax rbml rustc_const_math
DEPS_rustc_passes := syntax rustc core rustc_const_eval
DEPS_rustc_mir := rustc syntax rustc_const_math rustc_const_eval
DEPS_rustc_resolve := arena rustc log syntax
DEPS_rustc_platform_intrinsics := std
DEPS_rustc_plugin := rustc rustc_metadata syntax rustc_mir
DEPS_rustc_privacy := rustc rustc_front log syntax
DEPS_rustc_privacy := rustc log syntax
DEPS_rustc_trans := arena flate getopts graphviz libc rustc rustc_back rustc_mir \
log syntax serialize rustc_llvm rustc_front rustc_platform_intrinsics
DEPS_rustc_typeck := rustc syntax rustc_front rustc_platform_intrinsics
log syntax serialize rustc_llvm rustc_platform_intrinsics \
rustc_const_math rustc_const_eval rustc_incremental
DEPS_rustc_incremental := rbml rustc serialize rustc_data_structures
DEPS_rustc_save_analysis := rustc log syntax
DEPS_rustc_typeck := rustc syntax rustc_platform_intrinsics rustc_const_math \
rustc_const_eval
DEPS_rustdoc := rustc rustc_driver native:hoedown serialize getopts \
test rustc_lint rustc_front
test rustc_lint rustc_const_eval
TOOL_DEPS_compiletest := test getopts
TOOL_DEPS_compiletest := test getopts log
TOOL_DEPS_rustdoc := rustdoc
TOOL_DEPS_rustc := rustc_driver
TOOL_DEPS_rustbook := std rustdoc
@ -125,8 +136,8 @@ TOOL_DEPS_error_index_generator := rustdoc syntax serialize
TOOL_SOURCE_compiletest := $(S)src/compiletest/compiletest.rs
TOOL_SOURCE_rustdoc := $(S)src/driver/driver.rs
TOOL_SOURCE_rustc := $(S)src/driver/driver.rs
TOOL_SOURCE_rustbook := $(S)src/rustbook/main.rs
TOOL_SOURCE_error_index_generator := $(S)src/error_index_generator/main.rs
TOOL_SOURCE_rustbook := $(S)src/tools/rustbook/main.rs
TOOL_SOURCE_error_index_generator := $(S)src/tools/error_index_generator/main.rs
ONLY_RLIB_core := 1
ONLY_RLIB_libc := 1


@ -54,7 +54,6 @@ PKG_FILES := \
doc \
driver \
etc \
error_index_generator \
$(foreach crate,$(CRATES),lib$(crate)) \
libcollectionstest \
libcoretest \
@ -65,7 +64,7 @@ PKG_FILES := \
rustc \
snapshots.txt \
rust-installer \
rustbook \
tools \
test) \
$(PKG_GITMODULES) \
$(filter-out config.stamp, \


@ -13,17 +13,28 @@
######################################################################
# The version number
CFG_RELEASE_NUM=1.8.0
CFG_RELEASE_NUM=1.9.0
# An optional number to put after the label, e.g. '.2' -> '-beta.2'
# NB Make sure it starts with a dot to conform to semver pre-release
# versions (section 9)
CFG_PRERELEASE_VERSION=.2
CFG_PRERELEASE_VERSION=.3
# Append a version-dependent hash to each library, so we can install different
# versions in the same place
CFG_FILENAME_EXTRA=$(shell printf '%s' $(CFG_RELEASE)$(CFG_EXTRA_FILENAME) | $(CFG_HASH_COMMAND))
# A magic value that allows the compiler to use unstable features during the
# bootstrap even when doing so would normally be an error because of feature
# staging or because the build turns on warnings-as-errors and unstable features
# default to warnings. The build has to match this key in an env var.
#
# This value is keyed off the release to ensure that all compilers for one
# particular release have the same bootstrap key. Note that this is
# intentionally not "secure" by any definition, this is largely just a deterrent
# from users enabling unstable features on the stable compiler.
CFG_BOOTSTRAP_KEY=$(CFG_FILENAME_EXTRA)
ifeq ($(CFG_RELEASE_CHANNEL),stable)
# This is the normal semver version string, e.g. "0.12.0", "0.12.0-nightly"
CFG_RELEASE=$(CFG_RELEASE_NUM)
@ -134,6 +145,11 @@ ifdef CFG_ENABLE_DEBUGINFO
CFG_RUSTC_FLAGS += -g
endif
ifdef CFG_ENABLE_ORBIT
$(info cfg: launching MIR (CFG_ENABLE_ORBIT))
CFG_RUSTC_FLAGS += -Z orbit
endif
ifdef SAVE_TEMPS
CFG_RUSTC_FLAGS += --save-temps
endif
@ -488,7 +504,7 @@ endif
LD_LIBRARY_PATH_ENV_HOSTDIR$(1)_T_$(2)_H_$(3) := \
$$(CURDIR)/$$(HLIB$(1)_H_$(3)):$$(CFG_LLVM_INST_DIR_$(3))/lib
LD_LIBRARY_PATH_ENV_TARGETDIR$(1)_T_$(2)_H_$(3) := \
$$(CURDIR)/$$(TLIB1_T_$(2)_H_$(CFG_BUILD))
$$(CURDIR)/$$(TLIB$(1)_T_$(2)_H_$(3))
HOST_RPATH_VAR$(1)_T_$(2)_H_$(3) := \
$$(LD_LIBRARY_PATH_ENV_NAME$(1)_T_$(2)_H_$(3))=$$(LD_LIBRARY_PATH_ENV_HOSTDIR$(1)_T_$(2)_H_$(3)):$$$$$$(LD_LIBRARY_PATH_ENV_NAME$(1)_T_$(2)_H_$(3))
@ -501,18 +517,14 @@ RPATH_VAR$(1)_T_$(2)_H_$(3) := $$(HOST_RPATH_VAR$(1)_T_$(2)_H_$(3))
# if you're building a cross config, the host->* parts are
# effectively stage1, since it uses the just-built stage0.
#
# This logic is similar to how the LD_LIBRARY_PATH variable must
# change be slightly different when doing cross compilations.
# The build doesn't copy over all target libraries into
# a new directory, so we need to point the library path at
# the build directory where all the target libraries came
# from (the stage0 build host). Otherwise the relative rpaths
# inside of the rustc binary won't get resolved correctly.
# Also be sure to use the right rpath because we're loading libraries from the
# CFG_BUILD's stage1 directory for our target, so switch this one instance of
# `RPATH_VAR` to get the bootstrap working.
ifeq ($(1),0)
ifneq ($(strip $(CFG_BUILD)),$(strip $(3)))
CFGFLAG$(1)_T_$(2)_H_$(3) = stage1
RPATH_VAR$(1)_T_$(2)_H_$(3) := $$(TARGET_RPATH_VAR$(1)_T_$(2)_H_$(3))
RPATH_VAR$(1)_T_$(2)_H_$(3) := $$(TARGET_RPATH_VAR1_T_$(2)_H_$$(CFG_BUILD))
endif
endif


@ -157,6 +157,8 @@ else ifeq ($(findstring android, $(OSTYPE_$(1))), android)
# If the test suite passes, however, without symbol prefixes then we should be
# good to go!
JEMALLOC_ARGS_$(1) := --disable-tls --with-jemalloc-prefix=je_
else ifeq ($(findstring dragonfly, $(OSTYPE_$(1))), dragonfly)
JEMALLOC_ARGS_$(1) := --with-jemalloc-prefix=je_
endif
ifdef CFG_ENABLE_DEBUG_JEMALLOC
@ -236,11 +238,11 @@ COMPRT_LIB_$(1) := $$(RT_OUTPUT_DIR_$(1))/$$(COMPRT_NAME_$(1))
COMPRT_BUILD_DIR_$(1) := $$(RT_OUTPUT_DIR_$(1))/compiler-rt
ifeq ($$(findstring msvc,$(1)),msvc)
$$(COMPRT_LIB_$(1)): $$(COMPRT_DEPS) $$(MKFILE_DEPS) $$(LLVM_CONFIG_$(1))
$$(COMPRT_LIB_$(1)): $$(COMPRT_DEPS) $$(MKFILE_DEPS) $$(LLVM_CONFIG_$$(CFG_BUILD))
@$$(call E, cmake: compiler-rt)
$$(Q)cd "$$(COMPRT_BUILD_DIR_$(1))"; $$(CFG_CMAKE) "$(S)src/compiler-rt" \
-DCMAKE_BUILD_TYPE=$$(LLVM_BUILD_CONFIG_MODE) \
-DLLVM_CONFIG_PATH=$$(LLVM_CONFIG_$(1)) \
-DLLVM_CONFIG_PATH=$$(LLVM_CONFIG_$$(CFG_BUILD)) \
-G"$$(CFG_CMAKE_GENERATOR)"
$$(Q)$$(CFG_CMAKE) --build "$$(COMPRT_BUILD_DIR_$(1))" \
--target lib/builtins/builtins \
@ -253,7 +255,7 @@ COMPRT_AR_$(1) := $$(AR_$(1))
# We chomp -Werror here because GCC warns about the type signature of
# builtins not matching its own and the build fails. It's a bit hacky,
# but what can we do, we're building libclang-rt using GCC ......
COMPRT_CFLAGS_$(1) := $$(filter-out -Werror -Werror=*,$$(CFG_GCCISH_CFLAGS_$(1))) -std=c99
COMPRT_CFLAGS_$(1) := $$(CFG_GCCISH_CFLAGS_$(1)) -Wno-error -std=c99
# FreeBSD Clang's packaging is problematic; it doesn't copy unwind.h to
# the standard include directory. This should really be in our changes to
@ -361,7 +363,7 @@ $$(BACKTRACE_BUILD_DIR_$(1))/Makefile: $$(BACKTRACE_DEPS) $$(MKFILE_DEPS)
CC="$$(CC_$(1))" \
AR="$$(AR_$(1))" \
RANLIB="$$(AR_$(1)) s" \
CFLAGS="$$(CFG_GCCISH_CFLAGS_$(1):-Werror=) -fno-stack-protector" \
CFLAGS="$$(CFG_GCCISH_CFLAGS_$(1)) -Wno-error -fno-stack-protector" \
$(S)src/libbacktrace/configure --build=$(CFG_GNU_TRIPLE_$(CFG_BUILD)) --host=$(CFG_GNU_TRIPLE_$(1)))
$$(Q)echo '#undef HAVE_ATOMIC_FUNCTIONS' >> \
$$(BACKTRACE_BUILD_DIR_$(1))/config.h


@ -89,6 +89,7 @@ $$(TLIB$(1)_T_$(2)_H_$(3))/stamp.$(4): \
$$(RUSTFLAGS$(1)_$(4)_T_$(2)) \
--out-dir $$(@D) \
-C extra-filename=-$$(CFG_FILENAME_EXTRA) \
-C metadata=$$(CFG_FILENAME_EXTRA) \
$$<
@touch -r $$@.start_time $$@ && rm $$@.start_time
$$(call LIST_ALL_OLD_GLOB_MATCHES, \


@ -299,24 +299,35 @@ check-stage$(1)-T-$(2)-H-$(3)-exec: \
check-stage$(1)-T-$(2)-H-$(3)-cfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-pfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-rpass-valgrind-exec \
check-stage$(1)-T-$(2)-H-$(3)-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-rfail-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-cfail-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec \
check-stage$(1)-T-$(2)-H-$(3)-rustdocck-exec \
check-stage$(1)-T-$(2)-H-$(3)-crates-exec \
check-stage$(1)-T-$(2)-H-$(3)-doc-crates-exec \
check-stage$(1)-T-$(2)-H-$(3)-debuginfo-gdb-exec \
check-stage$(1)-T-$(2)-H-$(3)-debuginfo-lldb-exec \
check-stage$(1)-T-$(2)-H-$(3)-codegen-exec \
check-stage$(1)-T-$(2)-H-$(3)-codegen-units-exec \
check-stage$(1)-T-$(2)-H-$(3)-incremental-exec \
check-stage$(1)-T-$(2)-H-$(3)-doc-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec
ifndef CFG_DISABLE_CODEGEN_TESTS
check-stage$(1)-T-$(2)-H-$(3)-exec: \
check-stage$(1)-T-$(2)-H-$(3)-codegen-exec \
check-stage$(1)-T-$(2)-H-$(3)-codegen-units-exec
endif
# Only test the compiler-dependent crates when the target is
# able to build a compiler (when the target triple is in the set of host triples)
ifneq ($$(findstring $(2),$$(CFG_HOST)),)
check-stage$(1)-T-$(2)-H-$(3)-exec: \
check-stage$(1)-T-$(2)-H-$(3)-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-rfail-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-cfail-full-exec
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec: \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rfail-full-exec
check-stage$(1)-T-$(2)-H-$(3)-crates-exec: \
$$(foreach crate,$$(TEST_CRATES), \
check-stage$(1)-T-$(2)-H-$(3)-$$(crate)-exec)
@ -340,9 +351,7 @@ check-stage$(1)-T-$(2)-H-$(3)-doc-exec: \
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec: \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-valgrind-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rfail-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-pretty-exec
endef
@ -379,7 +388,7 @@ $(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): \
@$$(call E, rustc: $$@)
$(Q)CFG_LLVM_LINKAGE_FILE=$$(LLVM_LINKAGE_PATH_$(2)) \
$$(subst @,,$$(STAGE$(1)_T_$(2)_H_$(3))) -o $$@ $$< --test \
-L "$$(RT_OUTPUT_DIR_$(2))" \
-Cmetadata="test-crate" -L "$$(RT_OUTPUT_DIR_$(2))" \
$$(LLVM_LIBDIR_RUSTFLAGS_$(2)) \
$$(RUSTFLAGS_$(4))
@ -473,6 +482,7 @@ DEBUGINFO_LLDB_RS := $(call rwildcard,$(S)src/test/debuginfo/,*.rs)
CODEGEN_RS := $(call rwildcard,$(S)src/test/codegen/,*.rs)
CODEGEN_CC := $(call rwildcard,$(S)src/test/codegen/,*.cc)
CODEGEN_UNITS_RS := $(call rwildcard,$(S)src/test/codegen-units/,*.rs)
INCREMENTAL_RS := $(call rwildcard,$(S)src/test/incremental/,*.rs)
RUSTDOCCK_RS := $(call rwildcard,$(S)src/test/rustdoc/,*.rs)
RPASS_TESTS := $(RPASS_RS)
@ -488,6 +498,7 @@ DEBUGINFO_GDB_TESTS := $(DEBUGINFO_GDB_RS)
DEBUGINFO_LLDB_TESTS := $(DEBUGINFO_LLDB_RS)
CODEGEN_TESTS := $(CODEGEN_RS) $(CODEGEN_CC)
CODEGEN_UNITS_TESTS := $(CODEGEN_UNITS_RS)
INCREMENTAL_TESTS := $(INCREMENTAL_RS)
RUSTDOCCK_TESTS := $(RUSTDOCCK_RS)
CTEST_SRC_BASE_rpass = run-pass
@ -550,6 +561,11 @@ CTEST_BUILD_BASE_codegen-units = codegen-units
CTEST_MODE_codegen-units = codegen-units
CTEST_RUNTOOL_codegen-units = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_incremental = incremental
CTEST_BUILD_BASE_incremental = incremental
CTEST_MODE_incremental = incremental
CTEST_RUNTOOL_incremental = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_rustdocck = rustdoc
CTEST_BUILD_BASE_rustdocck = rustdoc
CTEST_MODE_rustdocck = rustdoc
@ -673,6 +689,7 @@ CTEST_DEPS_debuginfo-lldb_$(1)-T-$(2)-H-$(3) = $$(DEBUGINFO_LLDB_TESTS) \
$(S)src/etc/lldb_rust_formatters.py
CTEST_DEPS_codegen_$(1)-T-$(2)-H-$(3) = $$(CODEGEN_TESTS)
CTEST_DEPS_codegen-units_$(1)-T-$(2)-H-$(3) = $$(CODEGEN_UNITS_TESTS)
CTEST_DEPS_incremental_$(1)-T-$(2)-H-$(3) = $$(INCREMENTAL_TESTS)
CTEST_DEPS_rustdocck_$(1)-T-$(2)-H-$(3) = $$(RUSTDOCCK_TESTS) \
$$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3)) \
$(S)src/etc/htmldocck.py
@ -739,7 +756,7 @@ endif
endef
CTEST_NAMES = rpass rpass-valgrind rpass-full rfail-full cfail-full rfail cfail pfail \
debuginfo-gdb debuginfo-lldb codegen codegen-units rustdocck
debuginfo-gdb debuginfo-lldb codegen codegen-units rustdocck incremental
$(foreach host,$(CFG_HOST), \
$(eval $(foreach target,$(CFG_TARGET), \
@ -937,6 +954,7 @@ TEST_GROUPS = \
debuginfo-lldb \
codegen \
codegen-units \
incremental \
doc \
$(foreach docname,$(DOC_NAMES),doc-$(docname)) \
pretty \


@ -3,16 +3,17 @@ name = "bootstrap"
version = "0.0.0"
dependencies = [
"build_helper 0.1.0",
"cmake 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)",
"cmake 0.1.16 (registry+https://github.com/rust-lang/crates.io-index)",
"filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"gcc 0.3.25 (registry+https://github.com/rust-lang/crates.io-index)",
"gcc 0.3.26 (registry+https://github.com/rust-lang/crates.io-index)",
"getopts 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
"kernel32-sys 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.7 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.9 (registry+https://github.com/rust-lang/crates.io-index)",
"md5 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-serialize 0.3.18 (registry+https://github.com/rust-lang/crates.io-index)",
"toml 0.1.27 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)",
"toml 0.1.28 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -21,10 +22,10 @@ version = "0.1.0"
[[package]]
name = "cmake"
version = "0.1.13"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"gcc 0.3.25 (registry+https://github.com/rust-lang/crates.io-index)",
"gcc 0.3.26 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -32,12 +33,12 @@ name = "filetime"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.7 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "gcc"
version = "0.3.25"
version = "0.3.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@ -50,13 +51,18 @@ name = "kernel32-sys"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "libc"
version = "0.2.7"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "md5"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@ -64,25 +70,25 @@ name = "num_cpus"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.7 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-serialize"
version = "0.3.18"
version = "0.3.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "toml"
version = "0.1.27"
version = "0.1.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rustc-serialize 0.3.18 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi"
version = "0.2.5"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]


@ -15,6 +15,10 @@ path = "main.rs"
name = "rustc"
path = "rustc.rs"
[[bin]]
name = "rustdoc"
path = "rustdoc.rs"
[dependencies]
build_helper = { path = "../build_helper" }
cmake = "0.1.10"
@ -27,3 +31,4 @@ winapi = "0.2"
kernel32-sys = "0.2"
gcc = "0.3.17"
libc = "0.2"
md5 = "0.1"


@ -73,7 +73,8 @@ class RustBuild:
if self.rustc().startswith(self.bin_root()) and \
(not os.path.exists(self.rustc()) or self.rustc_out_of_date()):
shutil.rmtree(self.bin_root())
if os.path.exists(self.bin_root()):
shutil.rmtree(self.bin_root())
filename = "rust-std-nightly-" + self.build + ".tar.gz"
url = "https://static.rust-lang.org/dist/" + self.snap_rustc_date()
tarball = os.path.join(rustc_cache, filename)


@ -8,15 +8,15 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::env;
use std::fs::{self, File};
use std::io::prelude::*;
use std::path::Path;
use std::process::Command;
use build_helper::output;
use md5;
use build::Build;
use build::util::mtime;
pub fn collect(build: &mut Build) {
let mut main_mk = String::new();
@ -36,19 +36,23 @@ pub fn collect(build: &mut Build) {
match &build.config.channel[..] {
"stable" => {
build.release = release_num.to_string();
build.package_vers = build.release.clone();
build.unstable_features = false;
}
"beta" => {
build.release = format!("{}-beta{}", release_num,
prerelease_version);
build.package_vers = "beta".to_string();
build.unstable_features = false;
}
"nightly" => {
build.release = format!("{}-nightly", release_num);
build.package_vers = "nightly".to_string();
build.unstable_features = true;
}
_ => {
build.release = format!("{}-dev", release_num);
build.package_vers = build.release.clone();
build.unstable_features = true;
}
}
@ -76,7 +80,8 @@ pub fn collect(build: &mut Build) {
build.short_ver_hash = Some(short_ver_hash);
}
build.bootstrap_key = mtime(Path::new("config.toml")).seconds()
.to_string();
let key = md5::compute(build.release.as_bytes());
build.bootstrap_key = format!("{:02x}{:02x}{:02x}{:02x}",
key[0], key[1], key[2], key[3]);
env::set_var("RUSTC_BOOTSTRAP_KEY", &build.bootstrap_key);
}


@ -0,0 +1,35 @@
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use build::{Build, Compiler};
pub fn linkcheck(build: &Build, stage: u32, host: &str) {
println!("Linkcheck stage{} ({})", stage, host);
let compiler = Compiler::new(stage, host);
build.run(build.tool_cmd(&compiler, "linkchecker")
.arg(build.out.join(host).join("doc")));
}
pub fn cargotest(build: &Build, stage: u32, host: &str) {
let ref compiler = Compiler::new(stage, host);
// Configure PATH to find the right rustc. NB. we have to use PATH
// and not RUSTC because the Cargo test suite has tests that will
// fail if rustc is not spelled `rustc`.
let path = build.sysroot(compiler).join("bin");
let old_path = ::std::env::var("PATH").expect("");
let sep = if cfg!(windows) { ";" } else {":" };
let ref newpath = format!("{}{}{}", path.display(), sep, old_path);
build.run(build.tool_cmd(compiler, "cargotest")
.env("PATH", newpath)
.arg(&build.cargo));
}


@ -15,33 +15,31 @@ use std::process::Command;
use build_helper::output;
use build::util::{exe, staticlib, libdir, mtime, is_dylib};
use build::{Build, Compiler};
use build::util::{exe, staticlib, libdir, mtime, is_dylib, copy};
use build::{Build, Compiler, Mode};
/// Build the standard library.
///
/// This will build the standard library for a particular stage of the build
/// using the `compiler` targeting the `target` architecture. The artifacts
/// created will also be linked into the sysroot directory.
pub fn std<'a>(build: &'a Build, stage: u32, target: &str,
compiler: &Compiler<'a>) {
let host = compiler.host;
println!("Building stage{} std artifacts ({} -> {})", stage,
host, target);
pub fn std<'a>(build: &'a Build, target: &str, compiler: &Compiler<'a>) {
println!("Building stage{} std artifacts ({} -> {})", compiler.stage,
compiler.host, target);
// Move compiler-rt into place as it'll be required by the compiler when
// building the standard library to link the dylib of libstd
let libdir = build.sysroot_libdir(stage, &host, target);
let libdir = build.sysroot_libdir(compiler, target);
let _ = fs::remove_dir_all(&libdir);
t!(fs::create_dir_all(&libdir));
t!(fs::hard_link(&build.compiler_rt_built.borrow()[target],
libdir.join(staticlib("compiler-rt", target))));
copy(&build.compiler_rt_built.borrow()[target],
&libdir.join(staticlib("compiler-rt", target)));
build_startup_objects(build, target, &libdir);
let out_dir = build.cargo_out(stage, &host, true, target);
let out_dir = build.cargo_out(compiler, Mode::Libstd, target);
build.clear_if_dirty(&out_dir, &build.compiler_path(compiler));
let mut cargo = build.cargo(stage, compiler, true, target, "build");
let mut cargo = build.cargo(compiler, Mode::Libstd, target, "build");
cargo.arg("--features").arg(build.std_features())
.arg("--manifest-path")
.arg(build.src.join("src/rustc/std_shim/Cargo.toml"));
@ -58,7 +56,7 @@ pub fn std<'a>(build: &'a Build, stage: u32, target: &str,
}
build.run(&mut cargo);
std_link(build, stage, target, compiler, host);
std_link(build, target, compiler, compiler.host);
}
/// Link all libstd rlibs/dylibs into the sysroot location.
@ -66,12 +64,12 @@ pub fn std<'a>(build: &'a Build, stage: u32, target: &str,
/// Links those artifacts generated in the given `stage` for `target` produced
/// by `compiler` into `host`'s sysroot.
pub fn std_link(build: &Build,
stage: u32,
target: &str,
compiler: &Compiler,
host: &str) {
let libdir = build.sysroot_libdir(stage, host, target);
let out_dir = build.cargo_out(stage, compiler.host, true, target);
let target_compiler = Compiler::new(compiler.stage, host);
let libdir = build.sysroot_libdir(&target_compiler, target);
let out_dir = build.cargo_out(compiler, Mode::Libstd, target);
// If we're linking one compiler host's output into another, then we weren't
// called from the `std` method above. In that case we clean out what's
@ -79,10 +77,24 @@ pub fn std_link(build: &Build,
if host != compiler.host {
let _ = fs::remove_dir_all(&libdir);
t!(fs::create_dir_all(&libdir));
t!(fs::hard_link(&build.compiler_rt_built.borrow()[target],
libdir.join(staticlib("compiler-rt", target))));
copy(&build.compiler_rt_built.borrow()[target],
&libdir.join(staticlib("compiler-rt", target)));
}
add_to_sysroot(&out_dir, &libdir);
if target.contains("musl") &&
(target.contains("x86_64") || target.contains("i686")) {
copy_third_party_objects(build, target, &libdir);
}
}
/// Copies the crt(1,i,n).o startup objects
///
/// Only required for musl targets that statically link to libc
fn copy_third_party_objects(build: &Build, target: &str, into: &Path) {
for &obj in &["crt1.o", "crti.o", "crtn.o"] {
copy(&compiler_file(build.cc(target), obj), &into.join(obj));
}
}
/// Build and prepare startup objects like rsbegin.o and rsend.o
@ -107,34 +119,59 @@ fn build_startup_objects(build: &Build, target: &str, into: &Path) {
}
for obj in ["crt2.o", "dllcrt2.o"].iter() {
t!(fs::copy(compiler_file(build.cc(target), obj), into.join(obj)));
copy(&compiler_file(build.cc(target), obj), &into.join(obj));
}
}
/// Build libtest.
///
/// This will build libtest and supporting libraries for a particular stage of
/// the build using the `compiler` targeting the `target` architecture. The
/// artifacts created will also be linked into the sysroot directory.
pub fn test<'a>(build: &'a Build, target: &str, compiler: &Compiler<'a>) {
println!("Building stage{} test artifacts ({} -> {})", compiler.stage,
compiler.host, target);
let out_dir = build.cargo_out(compiler, Mode::Libtest, target);
build.clear_if_dirty(&out_dir, &libstd_shim(build, compiler, target));
let mut cargo = build.cargo(compiler, Mode::Libtest, target, "build");
cargo.arg("--manifest-path")
.arg(build.src.join("src/rustc/test_shim/Cargo.toml"));
build.run(&mut cargo);
test_link(build, target, compiler, compiler.host);
}
/// Link all libtest rlibs/dylibs into the sysroot location.
///
/// Links those artifacts generated in the given `stage` for `target` produced
/// by `compiler` into `host`'s sysroot.
pub fn test_link(build: &Build,
target: &str,
compiler: &Compiler,
host: &str) {
let target_compiler = Compiler::new(compiler.stage, host);
let libdir = build.sysroot_libdir(&target_compiler, target);
let out_dir = build.cargo_out(compiler, Mode::Libtest, target);
add_to_sysroot(&out_dir, &libdir);
}
/// Build the compiler.
///
/// This will build the compiler for a particular stage of the build using
/// the `compiler` targeting the `target` architecture. The artifacts
/// created will also be linked into the sysroot directory.
pub fn rustc<'a>(build: &'a Build, stage: u32, target: &str,
compiler: &Compiler<'a>) {
let host = compiler.host;
println!("Building stage{} compiler artifacts ({} -> {})", stage,
host, target);
pub fn rustc<'a>(build: &'a Build, target: &str, compiler: &Compiler<'a>) {
println!("Building stage{} compiler artifacts ({} -> {})",
compiler.stage, compiler.host, target);
let out_dir = build.cargo_out(stage, &host, false, target);
build.clear_if_dirty(&out_dir, &libstd_shim(build, stage, &host, target));
let out_dir = build.cargo_out(compiler, Mode::Librustc, target);
build.clear_if_dirty(&out_dir, &libtest_shim(build, compiler, target));
let mut cargo = build.cargo(stage, compiler, false, target, "build");
cargo.arg("--features").arg(build.rustc_features(stage))
let mut cargo = build.cargo(compiler, Mode::Librustc, target, "build");
cargo.arg("--features").arg(build.rustc_features())
.arg("--manifest-path")
.arg(build.src.join("src/rustc/Cargo.toml"));
// In stage0 we may not need to build as many executables
if stage == 0 {
cargo.arg("--bin").arg("rustc");
}
// Set some configuration variables picked up by build scripts and
// the compiler alike
cargo.env("CFG_RELEASE", &build.release)
@ -174,7 +211,7 @@ pub fn rustc<'a>(build: &'a Build, stage: u32, target: &str,
}
build.run(&mut cargo);
rustc_link(build, stage, target, compiler, compiler.host);
rustc_link(build, target, compiler, compiler.host);
}
/// Link all librustc rlibs/dylibs into the sysroot location.
@ -182,24 +219,31 @@ pub fn rustc<'a>(build: &'a Build, stage: u32, target: &str,
/// Links those artifacts generated in the given `stage` for `target` produced
/// by `compiler` into `host`'s sysroot.
pub fn rustc_link(build: &Build,
stage: u32,
target: &str,
compiler: &Compiler,
host: &str) {
let libdir = build.sysroot_libdir(stage, host, target);
let out_dir = build.cargo_out(stage, compiler.host, false, target);
let target_compiler = Compiler::new(compiler.stage, host);
let libdir = build.sysroot_libdir(&target_compiler, target);
let out_dir = build.cargo_out(compiler, Mode::Librustc, target);
add_to_sysroot(&out_dir, &libdir);
}
/// Cargo's output path for the standard library in a given stage, compiled
/// by a particular compiler for the specified target.
fn libstd_shim(build: &Build, stage: u32, host: &str, target: &str) -> PathBuf {
build.cargo_out(stage, host, true, target).join("libstd_shim.rlib")
fn libstd_shim(build: &Build, compiler: &Compiler, target: &str) -> PathBuf {
build.cargo_out(compiler, Mode::Libstd, target).join("libstd_shim.rlib")
}
fn compiler_file(compiler: &Path, file: &str) -> String {
output(Command::new(compiler)
.arg(format!("-print-file-name={}", file))).trim().to_string()
/// Cargo's output path for libtest in a given stage, compiled by a particular
/// compiler for the specified target.
fn libtest_shim(build: &Build, compiler: &Compiler, target: &str) -> PathBuf {
build.cargo_out(compiler, Mode::Libtest, target).join("libtest_shim.rlib")
}
fn compiler_file(compiler: &Path, file: &str) -> PathBuf {
let out = output(Command::new(compiler)
.arg(format!("-print-file-name={}", file)));
PathBuf::from(out.trim())
}
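// Illustrative note (assumption, not from the commit): `-print-file-name`
// asks the C compiler where it would find the named object file, so on a
// typical Linux host something like
//
//     compiler_file(Path::new("cc"), "crt1.o")
//
// returns a path along the lines of `/usr/lib/x86_64-linux-gnu/crt1.o`;
// the `trim()` only strips the trailing newline from the compiler output.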
/// Prepare a new compiler from the artifacts in `stage`
@ -209,24 +253,29 @@ fn compiler_file(compiler: &Path, file: &str) -> String {
/// compiler.
pub fn assemble_rustc(build: &Build, stage: u32, host: &str) {
assert!(stage > 0, "the stage0 compiler isn't assembled, it's downloaded");
// The compiler that we're assembling
let target_compiler = Compiler::new(stage, host);
// The compiler that compiled the compiler we're assembling
let build_compiler = Compiler::new(stage - 1, &build.config.build);
// Clear out old files
let sysroot = build.sysroot(stage, host);
let sysroot = build.sysroot(&target_compiler);
let _ = fs::remove_dir_all(&sysroot);
t!(fs::create_dir_all(&sysroot));
// Link in all dylibs to the libdir
let sysroot_libdir = sysroot.join(libdir(host));
t!(fs::create_dir_all(&sysroot_libdir));
let src_libdir = build.sysroot_libdir(stage - 1, &build.config.build, host);
let src_libdir = build.sysroot_libdir(&build_compiler, host);
for f in t!(fs::read_dir(&src_libdir)).map(|f| t!(f)) {
let filename = f.file_name().into_string().unwrap();
if is_dylib(&filename) {
t!(fs::hard_link(&f.path(), sysroot_libdir.join(&filename)));
copy(&f.path(), &sysroot_libdir.join(&filename));
}
}
let out_dir = build.cargo_out(stage - 1, &build.config.build, false, host);
let out_dir = build.cargo_out(&build_compiler, Mode::Librustc, host);
// Link the compiler binary itself into place
let rustc = out_dir.join(exe("rustc", host));
@ -234,7 +283,7 @@ pub fn assemble_rustc(build: &Build, stage: u32, host: &str) {
t!(fs::create_dir_all(&bindir));
let compiler = build.compiler_path(&Compiler::new(stage, host));
let _ = fs::remove_file(&compiler);
t!(fs::hard_link(rustc, compiler));
copy(&rustc, &compiler);
// See if rustdoc exists to link it into place
let rustdoc = exe("rustdoc", host);
@ -242,7 +291,7 @@ pub fn assemble_rustc(build: &Build, stage: u32, host: &str) {
let rustdoc_dst = bindir.join(&rustdoc);
if fs::metadata(&rustdoc_src).is_ok() {
let _ = fs::remove_file(&rustdoc_dst);
t!(fs::hard_link(&rustdoc_src, &rustdoc_dst));
copy(&rustdoc_src, &rustdoc_dst);
}
}
@ -281,7 +330,30 @@ fn add_to_sysroot(out_dir: &Path, sysroot_dst: &Path) {
let (_, path) = paths.iter().map(|path| {
(mtime(&path).seconds(), path)
}).max().unwrap();
t!(fs::hard_link(&path,
sysroot_dst.join(path.file_name().unwrap())));
copy(&path, &sysroot_dst.join(path.file_name().unwrap()));
}
}
/// Build a tool in `src/tools`
///
/// This will build the specified tool with the specified `host` compiler in
/// `stage` into the normal cargo output directory.
pub fn tool(build: &Build, stage: u32, host: &str, tool: &str) {
println!("Building stage{} tool {} ({})", stage, tool, host);
let compiler = Compiler::new(stage, host);
// FIXME: need to clear out previous tool and ideally deps, may require
// isolating output directories or require a pseudo shim step to
// clear out all the info.
//
// Maybe when libstd is compiled it should clear out the rustc of the
// corresponding stage?
// let out_dir = build.cargo_out(stage, &host, Mode::Librustc, target);
// build.clear_if_dirty(&out_dir, &libstd_shim(build, stage, &host, target));
let mut cargo = build.cargo(&compiler, Mode::Tool, host, "build");
cargo.arg("--manifest-path")
.arg(build.src.join(format!("src/tools/{}/Cargo.toml", tool)));
build.run(&mut cargo);
}

src/bootstrap/build/dist.rs (new file, 292 lines)
View File

@ -0,0 +1,292 @@
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::fs::{self, File};
use std::io::Write;
use std::path::{PathBuf, Path};
use std::process::Command;
use build::{Build, Compiler};
use build::util::{cp_r, libdir, is_dylib};
fn package_vers(build: &Build) -> &str {
match &build.config.channel[..] {
"stable" => &build.release,
"beta" => "beta",
"nightly" => "nightly",
_ => &build.release,
}
}
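// Illustrative expectations (not from the patch), assuming
// `build.release == "1.9.0"`:
//
//     channel == "nightly"  =>  package_vers(build) == "nightly"
//     channel == "beta"     =>  package_vers(build) == "beta"
//     channel == "stable"   =>  package_vers(build) == "1.9.0"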
fn distdir(build: &Build) -> PathBuf {
build.out.join("dist")
}
fn tmpdir(build: &Build) -> PathBuf {
build.out.join("tmp/dist")
}
pub fn docs(build: &Build, stage: u32, host: &str) {
println!("Dist docs stage{} ({})", stage, host);
let name = format!("rust-docs-{}", package_vers(build));
let image = tmpdir(build).join(format!("{}-{}-image", name, name));
let _ = fs::remove_dir_all(&image);
let dst = image.join("share/doc/rust/html");
t!(fs::create_dir_all(&dst));
let src = build.out.join(host).join("doc");
cp_r(&src, &dst);
let mut cmd = Command::new("sh");
cmd.arg(sanitize_sh(&build.src.join("src/rust-installer/gen-installer.sh")))
.arg("--product-name=Rust-Documentation")
.arg("--rel-manifest-dir=rustlib")
.arg("--success-message=Rust-documentation-is-installed.")
.arg(format!("--image-dir={}", sanitize_sh(&image)))
.arg(format!("--work-dir={}", sanitize_sh(&tmpdir(build))))
.arg(format!("--output-dir={}", sanitize_sh(&distdir(build))))
.arg(format!("--package-name={}-{}", name, host))
.arg("--component-name=rust-docs")
.arg("--legacy-manifest-dirs=rustlib,cargo")
.arg("--bulk-dirs=share/doc/rust/html");
build.run(&mut cmd);
t!(fs::remove_dir_all(&image));
// As part of this step, *also* copy the docs directory to a directory which
// buildbot typically uploads.
if host == build.config.build {
let dst = distdir(build).join("doc").join(&build.package_vers);
t!(fs::create_dir_all(&dst));
cp_r(&src, &dst);
}
}
pub fn mingw(build: &Build, host: &str) {
println!("Dist mingw ({})", host);
let name = format!("rust-mingw-{}", package_vers(build));
let image = tmpdir(build).join(format!("{}-{}-image", name, host));
let _ = fs::remove_dir_all(&image);
// The first argument to the script is a "temporary directory" which is just
// thrown away (this contains the runtime DLLs included in the rustc package
// above) and the second argument is where to place all the MinGW components
// (which is what we want).
//
// FIXME: this script should be rewritten into Rust
let mut cmd = Command::new("python");
cmd.arg(build.src.join("src/etc/make-win-dist.py"))
.arg(tmpdir(build))
.arg(&image)
.arg(host);
build.run(&mut cmd);
let mut cmd = Command::new("sh");
cmd.arg(sanitize_sh(&build.src.join("src/rust-installer/gen-installer.sh")))
.arg("--product-name=Rust-MinGW")
.arg("--rel-manifest-dir=rustlib")
.arg("--success-message=Rust-MinGW-is-installed.")
.arg(format!("--image-dir={}", sanitize_sh(&image)))
.arg(format!("--work-dir={}", sanitize_sh(&tmpdir(build))))
.arg(format!("--output-dir={}", sanitize_sh(&distdir(build))))
.arg(format!("--package-name={}-{}", name, host))
.arg("--component-name=rust-mingw")
.arg("--legacy-manifest-dirs=rustlib,cargo");
build.run(&mut cmd);
t!(fs::remove_dir_all(&image));
}
pub fn rustc(build: &Build, stage: u32, host: &str) {
println!("Dist rustc stage{} ({})", stage, host);
let name = format!("rustc-{}", package_vers(build));
let image = tmpdir(build).join(format!("{}-{}-image", name, host));
let _ = fs::remove_dir_all(&image);
let overlay = tmpdir(build).join(format!("{}-{}-overlay", name, host));
let _ = fs::remove_dir_all(&overlay);
// Prepare the rustc "image", what will actually end up getting installed
prepare_image(build, stage, host, &image);
// Prepare the overlay which is part of the tarball but won't actually be
// installed
t!(fs::create_dir_all(&overlay));
let cp = |file: &str| {
install(&build.src.join(file), &overlay, 0o644);
};
cp("COPYRIGHT");
cp("LICENSE-APACHE");
cp("LICENSE-MIT");
cp("README.md");
// tiny morsel of metadata is used by rust-packaging
let version = &build.version;
t!(t!(File::create(overlay.join("version"))).write_all(version.as_bytes()));
// On MinGW we've got a few runtime DLL dependencies that we need to
// include. The first argument to this script is where to put these DLLs
// (the image we're creating), and the second argument is a junk directory
// to ignore all other MinGW stuff the script creates.
//
// On 32-bit MinGW we're always including a DLL which needs some extra
// licenses to distribute. On 64-bit MinGW we don't actually distribute
// anything requiring us to distribute a license, but it's likely the
// install will *also* include the rust-mingw package, which also needs
// licenses, so to be safe we just include it here in all MinGW packages.
//
// FIXME: this script should be rewritten into Rust
if host.contains("pc-windows-gnu") {
let mut cmd = Command::new("python");
cmd.arg(build.src.join("src/etc/make-win-dist.py"))
.arg(&image)
.arg(tmpdir(build))
.arg(host);
build.run(&mut cmd);
let dst = image.join("share/doc");
t!(fs::create_dir_all(&dst));
cp_r(&build.src.join("src/etc/third-party"), &dst);
}
// Finally, wrap everything up in a nice tarball!
let mut cmd = Command::new("sh");
cmd.arg(sanitize_sh(&build.src.join("src/rust-installer/gen-installer.sh")))
.arg("--product-name=Rust")
.arg("--rel-manifest-dir=rustlib")
.arg("--success-message=Rust-is-ready-to-roll.")
.arg(format!("--image-dir={}", sanitize_sh(&image)))
.arg(format!("--work-dir={}", sanitize_sh(&tmpdir(build))))
.arg(format!("--output-dir={}", sanitize_sh(&distdir(build))))
.arg(format!("--non-installed-overlay={}", sanitize_sh(&overlay)))
.arg(format!("--package-name={}-{}", name, host))
.arg("--component-name=rustc")
.arg("--legacy-manifest-dirs=rustlib,cargo");
build.run(&mut cmd);
t!(fs::remove_dir_all(&image));
t!(fs::remove_dir_all(&overlay));
fn prepare_image(build: &Build, stage: u32, host: &str, image: &Path) {
let src = build.sysroot(&Compiler::new(stage, host));
let libdir = libdir(host);
// Copy rustc/rustdoc binaries
t!(fs::create_dir_all(image.join("bin")));
cp_r(&src.join("bin"), &image.join("bin"));
// Copy runtime DLLs needed by the compiler
if libdir != "bin" {
t!(fs::create_dir_all(image.join(libdir)));
for entry in t!(src.join(libdir).read_dir()).map(|e| t!(e)) {
let name = entry.file_name();
if let Some(s) = name.to_str() {
if is_dylib(s) {
install(&entry.path(), &image.join(libdir), 0o644);
}
}
}
}
// Man pages
t!(fs::create_dir_all(image.join("share/man/man1")));
cp_r(&build.src.join("man"), &image.join("share/man/man1"));
// Debugger scripts
let cp_debugger_script = |file: &str| {
let dst = image.join("lib/rustlib/etc");
t!(fs::create_dir_all(&dst));
install(&build.src.join("src/etc/").join(file), &dst, 0o644);
};
if host.contains("windows") {
// no debugger scripts
} else if host.contains("darwin") {
// lldb debugger scripts
install(&build.src.join("src/etc/rust-lldb"), &image.join("bin"),
0o755);
cp_debugger_script("lldb_rust_formatters.py");
cp_debugger_script("debugger_pretty_printers_common.py");
} else {
// gdb debugger scripts
install(&build.src.join("src/etc/rust-gdb"), &image.join("bin"),
0o755);
cp_debugger_script("gdb_load_rust_pretty_printers.py");
cp_debugger_script("gdb_rust_pretty_printing.py");
cp_debugger_script("debugger_pretty_printers_common.py");
}
// Misc license info
let cp = |file: &str| {
install(&build.src.join(file), &image.join("share/doc/rust"), 0o644);
};
t!(fs::create_dir_all(&image.join("share/doc/rust")));
cp("COPYRIGHT");
cp("LICENSE-APACHE");
cp("LICENSE-MIT");
cp("README.md");
}
}
pub fn std(build: &Build, compiler: &Compiler, target: &str) {
println!("Dist std stage{} ({} -> {})", compiler.stage, compiler.host,
target);
let name = format!("rust-std-{}", package_vers(build));
let image = tmpdir(build).join(format!("{}-{}-image", name, target));
let _ = fs::remove_dir_all(&image);
let dst = image.join("lib/rustlib").join(target);
t!(fs::create_dir_all(&dst));
let src = build.sysroot(compiler).join("lib/rustlib");
cp_r(&src.join(target), &dst);
let mut cmd = Command::new("sh");
cmd.arg(sanitize_sh(&build.src.join("src/rust-installer/gen-installer.sh")))
.arg("--product-name=Rust")
.arg("--rel-manifest-dir=rustlib")
.arg("--success-message=std-is-standing-at-the-ready.")
.arg(format!("--image-dir={}", sanitize_sh(&image)))
.arg(format!("--work-dir={}", sanitize_sh(&tmpdir(build))))
.arg(format!("--output-dir={}", sanitize_sh(&distdir(build))))
.arg(format!("--package-name={}-{}", name, target))
.arg(format!("--component-name=rust-std-{}", target))
.arg("--legacy-manifest-dirs=rustlib,cargo");
build.run(&mut cmd);
t!(fs::remove_dir_all(&image));
}
fn install(src: &Path, dstdir: &Path, perms: u32) {
let dst = dstdir.join(src.file_name().unwrap());
t!(fs::copy(src, &dst));
chmod(&dst, perms);
}
#[cfg(unix)]
fn chmod(path: &Path, perms: u32) {
use std::os::unix::fs::*;
t!(fs::set_permissions(path, fs::Permissions::from_mode(perms)));
}
#[cfg(windows)]
fn chmod(_path: &Path, _perms: u32) {}
// We have to run a few shell scripts, which choke quite a bit on both `\`
// characters and on `C:\` paths, so normalize both of them away.
fn sanitize_sh(path: &Path) -> String {
let path = path.to_str().unwrap().replace("\\", "/");
return change_drive(&path).unwrap_or(path);
fn change_drive(s: &str) -> Option<String> {
let mut ch = s.chars();
let drive = ch.next().unwrap_or('C');
if ch.next() != Some(':') {
return None
}
if ch.next() != Some('/') {
return None
}
Some(format!("/{}/{}", drive, &s[drive.len_utf8() + 2..]))
}
}
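// Worked example (illustrative, not part of the patch): backslashes are
// rewritten first, then the drive prefix, so
//
//     sanitize_sh(Path::new(r"C:\rust\build"))  ==  "/C/rust/build"
//     sanitize_sh(Path::new("src/doc"))         ==  "src/doc"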

View File

@ -8,25 +8,26 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::path::Path;
use std::fs::{self, File};
use std::io::prelude::*;
use std::path::Path;
use std::process::Command;
use build::{Build, Compiler};
use build::util::up_to_date;
use build::{Build, Compiler, Mode};
use build::util::{up_to_date, cp_r};
pub fn rustbook(build: &Build, stage: u32, host: &str, name: &str, out: &Path) {
pub fn rustbook(build: &Build, stage: u32, target: &str, name: &str, out: &Path) {
t!(fs::create_dir_all(out));
let out = out.join(name);
let compiler = Compiler::new(stage, host);
let compiler = Compiler::new(stage, &build.config.build);
let src = build.src.join("src/doc").join(name);
let index = out.join("index.html");
let rustbook = build.tool(&compiler, "rustbook");
if up_to_date(&src, &index) && up_to_date(&rustbook, &index) {
return
}
println!("Rustbook stage{} ({}) - {}", stage, host, name);
println!("Rustbook stage{} ({}) - {}", stage, target, name);
let _ = fs::remove_dir_all(&out);
build.run(build.tool_cmd(&compiler, "rustbook")
.arg("build")
@ -34,11 +35,11 @@ pub fn rustbook(build: &Build, stage: u32, host: &str, name: &str, out: &Path) {
.arg(out));
}
pub fn standalone(build: &Build, stage: u32, host: &str, out: &Path) {
println!("Documenting stage{} standalone ({})", stage, host);
pub fn standalone(build: &Build, stage: u32, target: &str, out: &Path) {
println!("Documenting stage{} standalone ({})", stage, target);
t!(fs::create_dir_all(out));
let compiler = Compiler::new(stage, host);
let compiler = Compiler::new(stage, &build.config.build);
let favicon = build.src.join("src/doc/favicon.inc");
let footer = build.src.join("src/doc/footer.inc");
@ -69,7 +70,7 @@ pub fn standalone(build: &Build, stage: u32, host: &str, out: &Path) {
}
let html = out.join(filename).with_extension("html");
let rustdoc = build.tool(&compiler, "rustdoc");
let rustdoc = build.rustdoc(&compiler);
if up_to_date(&path, &html) &&
up_to_date(&footer, &html) &&
up_to_date(&favicon, &html) &&
@ -79,7 +80,8 @@ pub fn standalone(build: &Build, stage: u32, host: &str, out: &Path) {
continue
}
let mut cmd = build.tool_cmd(&compiler, "rustdoc");
let mut cmd = Command::new(&rustdoc);
build.add_rustc_lib_path(&compiler, &mut cmd);
cmd.arg("--html-after-content").arg(&footer)
.arg("--html-before-content").arg(&version_info)
.arg("--html-in-header").arg(&favicon)
@ -102,3 +104,68 @@ pub fn standalone(build: &Build, stage: u32, host: &str, out: &Path) {
build.run(&mut cmd);
}
}
pub fn std(build: &Build, stage: u32, target: &str, out: &Path) {
println!("Documenting stage{} std ({})", stage, target);
t!(fs::create_dir_all(out));
let compiler = Compiler::new(stage, &build.config.build);
let out_dir = build.stage_out(&compiler, Mode::Libstd)
.join(target).join("doc");
let rustdoc = build.rustdoc(&compiler);
build.clear_if_dirty(&out_dir, &rustdoc);
let mut cargo = build.cargo(&compiler, Mode::Libstd, target, "doc");
cargo.arg("--manifest-path")
.arg(build.src.join("src/rustc/std_shim/Cargo.toml"))
.arg("--features").arg(build.std_features());
build.run(&mut cargo);
cp_r(&out_dir, out)
}
pub fn test(build: &Build, stage: u32, target: &str, out: &Path) {
println!("Documenting stage{} test ({})", stage, target);
let compiler = Compiler::new(stage, &build.config.build);
let out_dir = build.stage_out(&compiler, Mode::Libtest)
.join(target).join("doc");
let rustdoc = build.rustdoc(&compiler);
build.clear_if_dirty(&out_dir, &rustdoc);
let mut cargo = build.cargo(&compiler, Mode::Libtest, target, "doc");
cargo.arg("--manifest-path")
.arg(build.src.join("src/rustc/test_shim/Cargo.toml"));
build.run(&mut cargo);
cp_r(&out_dir, out)
}
pub fn rustc(build: &Build, stage: u32, target: &str, out: &Path) {
println!("Documenting stage{} compiler ({})", stage, target);
let compiler = Compiler::new(stage, &build.config.build);
let out_dir = build.stage_out(&compiler, Mode::Librustc)
.join(target).join("doc");
let rustdoc = build.rustdoc(&compiler);
if !up_to_date(&rustdoc, &out_dir.join("rustc/index.html")) {
t!(fs::remove_dir_all(&out_dir));
}
let mut cargo = build.cargo(&compiler, Mode::Librustc, target, "doc");
cargo.arg("--manifest-path")
.arg(build.src.join("src/rustc/Cargo.toml"))
.arg("--features").arg(build.rustc_features());
build.run(&mut cargo);
cp_r(&out_dir, out)
}
pub fn error_index(build: &Build, stage: u32, target: &str, out: &Path) {
println!("Documenting stage{} error index ({})", stage, target);
t!(fs::create_dir_all(out));
let compiler = Compiler::new(stage, &build.config.build);
let mut index = build.tool_cmd(&compiler, "error_index_generator");
index.arg("html");
index.arg(out.join("error-index.html"));
// FIXME: shouldn't have to pass this env var
index.env("CFG_BUILD", &build.config.build);
build.run(&mut index);
}

View File

@ -30,9 +30,11 @@ macro_rules! t {
mod cc;
mod channel;
mod check;
mod clean;
mod compile;
mod config;
mod dist;
mod doc;
mod flags;
mod native;
@ -75,6 +77,7 @@ pub struct Build {
short_ver_hash: Option<String>,
ver_date: Option<String>,
version: String,
package_vers: String,
bootstrap_key: String,
// Runtime state filled in later on
@ -83,6 +86,13 @@ pub struct Build {
compiler_rt_built: RefCell<HashMap<String, PathBuf>>,
}
pub enum Mode {
Libstd,
Libtest,
Librustc,
Tool,
}
impl Build {
pub fn new(flags: Flags, config: Config) -> Build {
let cwd = t!(env::current_dir());
@ -114,6 +124,7 @@ impl Build {
ver_date: None,
version: String::new(),
bootstrap_key: String::new(),
package_vers: String::new(),
cc: HashMap::new(),
cxx: HashMap::new(),
compiler_rt_built: RefCell::new(HashMap::new()),
@ -131,9 +142,13 @@ impl Build {
return clean::clean(self);
}
self.verbose("finding compilers");
cc::find(self);
self.verbose("running sanity check");
sanity::check(self);
self.verbose("collecting channel variables");
channel::collect(self);
self.verbose("updating submodules");
self.update_submodules();
for target in step::all(self) {
@ -145,19 +160,23 @@ impl Build {
CompilerRt { _dummy } => {
native::compiler_rt(self, target.target);
}
Libstd { stage, compiler } => {
compile::std(self, stage, target.target, &compiler);
Libstd { compiler } => {
compile::std(self, target.target, &compiler);
}
Librustc { stage, compiler } => {
compile::rustc(self, stage, target.target, &compiler);
Libtest { compiler } => {
compile::test(self, target.target, &compiler);
}
LibstdLink { stage, compiler, host } => {
compile::std_link(self, stage, target.target,
&compiler, host);
Librustc { compiler } => {
compile::rustc(self, target.target, &compiler);
}
LibrustcLink { stage, compiler, host } => {
compile::rustc_link(self, stage, target.target,
&compiler, host);
LibstdLink { compiler, host } => {
compile::std_link(self, target.target, &compiler, host);
}
LibtestLink { compiler, host } => {
compile::test_link(self, target.target, &compiler, host);
}
LibrustcLink { compiler, host } => {
compile::rustc_link(self, target.target, &compiler, host);
}
Rustc { stage: 0 } => {
// nothing to do...
@ -165,6 +184,19 @@ impl Build {
Rustc { stage } => {
compile::assemble_rustc(self, stage, target.target);
}
ToolLinkchecker { stage } => {
compile::tool(self, stage, target.target, "linkchecker");
}
ToolRustbook { stage } => {
compile::tool(self, stage, target.target, "rustbook");
}
ToolErrorIndex { stage } => {
compile::tool(self, stage, target.target,
"error_index_generator");
}
ToolCargoTest { stage } => {
compile::tool(self, stage, target.target, "cargotest");
}
DocBook { stage } => {
doc::rustbook(self, stage, target.target, "book", &doc_out);
}
@ -179,7 +211,34 @@ impl Build {
DocStandalone { stage } => {
doc::standalone(self, stage, target.target, &doc_out);
}
Doc { .. } => {} // pseudo-step
DocStd { stage } => {
doc::std(self, stage, target.target, &doc_out);
}
DocTest { stage } => {
doc::test(self, stage, target.target, &doc_out);
}
DocRustc { stage } => {
doc::rustc(self, stage, target.target, &doc_out);
}
DocErrorIndex { stage } => {
doc::error_index(self, stage, target.target, &doc_out);
}
CheckLinkcheck { stage } => {
check::linkcheck(self, stage, target.target);
}
CheckCargoTest { stage } => {
check::cargotest(self, stage, target.target);
}
DistDocs { stage } => dist::docs(self, stage, target.target),
DistMingw { _dummy } => dist::mingw(self, target.target),
DistRustc { stage } => dist::rustc(self, stage, target.target),
DistStd { compiler } => dist::std(self, &compiler, target.target),
Dist { .. } |
Doc { .. } | // pseudo-steps
Check { .. } => {}
}
}
}
@ -229,43 +288,51 @@ impl Build {
/// This will create a `Command` that represents a pending execution of
/// Cargo for the specified stage, whether or not the standard library is
/// being built, and using the specified compiler targeting `target`.
// FIXME: aren't stage/compiler duplicated?
fn cargo(&self, stage: u32, compiler: &Compiler, is_std: bool,
target: &str, cmd: &str) -> Command {
fn cargo(&self,
compiler: &Compiler,
mode: Mode,
target: &str,
cmd: &str) -> Command {
let mut cargo = Command::new(&self.cargo);
let host = compiler.host;
let out_dir = self.stage_out(stage, host, is_std);
let out_dir = self.stage_out(compiler, mode);
cargo.env("CARGO_TARGET_DIR", out_dir)
.arg(cmd)
.arg("--target").arg(target)
.arg("-j").arg(self.jobs().to_string());
.arg("-j").arg(self.jobs().to_string())
.arg("--target").arg(target);
// Customize the compiler we're running. Specify the compiler to cargo
// as our shim and then pass it some various options used to configure
// how the actual compiler itself is called.
cargo.env("RUSTC", self.out.join("bootstrap/debug/rustc"))
.env("RUSTC_REAL", self.compiler_path(compiler))
.env("RUSTC_STAGE", self.stage_arg(stage, compiler).to_string())
.env("RUSTC_STAGE", compiler.stage.to_string())
.env("RUSTC_DEBUGINFO", self.config.rust_debuginfo.to_string())
.env("RUSTC_CODEGEN_UNITS",
self.config.rust_codegen_units.to_string())
.env("RUSTC_DEBUG_ASSERTIONS",
self.config.rust_debug_assertions.to_string())
.env("RUSTC_SNAPSHOT", &self.rustc)
.env("RUSTC_SYSROOT", self.sysroot(stage, host))
.env("RUSTC_SYSROOT", self.sysroot(compiler))
.env("RUSTC_SNAPSHOT_LIBDIR", self.rustc_snapshot_libdir())
.env("RUSTC_FLAGS", self.rustc_flags(target).join(" "))
.env("RUSTC_RPATH", self.config.rust_rpath.to_string())
.env("RUSTDOC", self.tool(compiler, "rustdoc"));
.env("RUSTDOC", self.out.join("bootstrap/debug/rustdoc"))
.env("RUSTDOC_REAL", self.rustdoc(compiler))
.env("RUSTC_FLAGS", self.rustc_flags(target).join(" "));
// Specify some variuos options for build scripts used throughout the
// build.
// Specify some various options for build scripts used throughout
// the build.
//
// FIXME: the guard against msvc shouldn't need to be here
if !target.contains("msvc") {
cargo.env(format!("CC_{}", target), self.cc(target))
.env(format!("AR_{}", target), self.ar(target))
.env(format!("CFLAGS_{}", target), self.cflags(target));
.env(format!("CFLAGS_{}", target), self.cflags(target).join(" "));
}
// If we're building for OSX, inform the compiler and the linker that
// we want to build a compiler runnable on 10.7
if target.contains("apple-darwin") {
cargo.env("MACOSX_DEPLOYMENT_TARGET", "10.7");
}
// Environment variables *required* throughout the build
@ -288,43 +355,38 @@ impl Build {
if compiler.is_snapshot(self) {
self.rustc.clone()
} else {
self.sysroot(compiler.stage, compiler.host).join("bin")
.join(exe("rustc", compiler.host))
self.sysroot(compiler).join("bin").join(exe("rustc", compiler.host))
}
}
/// Get the specified tool next to the specified compiler
/// Get the specified tool built by the specified compiler
fn tool(&self, compiler: &Compiler, tool: &str) -> PathBuf {
if compiler.is_snapshot(self) {
assert!(tool == "rustdoc", "no tools other than rustdoc in stage0");
let mut rustdoc = self.rustc.clone();
rustdoc.pop();
rustdoc.push(exe("rustdoc", &self.config.build));
return rustdoc
}
let (stage, host) = (compiler.stage, compiler.host);
self.cargo_out(stage - 1, host, false, host).join(exe(tool, host))
self.cargo_out(compiler, Mode::Tool, compiler.host)
.join(exe(tool, compiler.host))
}
/// Get the `rustdoc` executable next to the specified compiler
fn rustdoc(&self, compiler: &Compiler) -> PathBuf {
let mut rustdoc = self.compiler_path(compiler);
rustdoc.pop();
rustdoc.push(exe("rustdoc", compiler.host));
return rustdoc
}
/// Get a `Command` which is ready to run `tool` in `stage` built for
/// `host`.
#[allow(dead_code)] // this will be used soon
fn tool_cmd(&self, compiler: &Compiler, tool: &str) -> Command {
let mut cmd = Command::new(self.tool(&compiler, tool));
let host = compiler.host;
let stage = compiler.stage;
let paths = vec![
self.cargo_out(stage - 1, host, true, host).join("deps"),
self.cargo_out(stage - 1, host, false, host).join("deps"),
self.cargo_out(compiler, Mode::Libstd, host).join("deps"),
self.cargo_out(compiler, Mode::Libtest, host).join("deps"),
self.cargo_out(compiler, Mode::Librustc, host).join("deps"),
];
add_lib_path(paths, &mut cmd);
return cmd
}
fn stage_arg(&self, stage: u32, compiler: &Compiler) -> u32 {
if stage == 0 && compiler.host != self.config.build {1} else {stage}
}
/// Get the space-separated set of activated features for the standard
/// library.
fn std_features(&self) -> String {
@ -339,15 +401,11 @@ impl Build {
}
/// Get the space-separated set of activated features for the compiler.
fn rustc_features(&self, stage: u32) -> String {
fn rustc_features(&self) -> String {
let mut features = String::new();
if self.config.use_jemalloc {
features.push_str(" jemalloc");
}
if stage > 0 {
features.push_str(" rustdoc");
features.push_str(" rustbook");
}
return features
}
@ -357,35 +415,41 @@ impl Build {
if self.config.rust_optimize {"release"} else {"debug"}
}
fn sysroot(&self, stage: u32, host: &str) -> PathBuf {
if stage == 0 {
self.stage_out(stage, host, false)
fn sysroot(&self, compiler: &Compiler) -> PathBuf {
if compiler.stage == 0 {
self.out.join(compiler.host).join("stage0-sysroot")
} else {
self.out.join(host).join(format!("stage{}", stage))
self.out.join(compiler.host).join(format!("stage{}", compiler.stage))
}
}
fn sysroot_libdir(&self, stage: u32, host: &str, target: &str) -> PathBuf {
self.sysroot(stage, host).join("lib").join("rustlib")
fn sysroot_libdir(&self, compiler: &Compiler, target: &str) -> PathBuf {
self.sysroot(compiler).join("lib").join("rustlib")
.join(target).join("lib")
}
/// Returns the root directory for all output generated in a particular
/// stage when running with a particular host compiler.
///
/// The `is_std` flag indicates whether the root directory is for the
/// bootstrap of the standard library or for the compiler.
fn stage_out(&self, stage: u32, host: &str, is_std: bool) -> PathBuf {
self.out.join(host)
.join(format!("stage{}{}", stage, if is_std {"-std"} else {"-rustc"}))
/// The mode indicates what the root directory is for.
fn stage_out(&self, compiler: &Compiler, mode: Mode) -> PathBuf {
let suffix = match mode {
Mode::Libstd => "-std",
Mode::Libtest => "-test",
Mode::Tool | Mode::Librustc => "-rustc",
};
self.out.join(compiler.host)
.join(format!("stage{}{}", compiler.stage, suffix))
}
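// Illustrative layout (assumption, not from the commit): with build output
// rooted at `build/`, a stage1 x86_64 Linux compiler would use roughly
//
//     sysroot(stage1)                   => build/x86_64-unknown-linux-gnu/stage1
//     stage_out(stage1, Mode::Libstd)   => build/x86_64-unknown-linux-gnu/stage1-std
//     stage_out(stage1, Mode::Libtest)  => build/x86_64-unknown-linux-gnu/stage1-test
//     stage_out(stage1, Mode::Librustc) => build/x86_64-unknown-linux-gnu/stage1-rustc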
/// Returns the root output directory for all Cargo output in a given stage,
/// running a particular compiler, whether or not we're building the
/// standard library, and targeting the specified architecture.
fn cargo_out(&self, stage: u32, host: &str, is_std: bool,
fn cargo_out(&self,
compiler: &Compiler,
mode: Mode,
target: &str) -> PathBuf {
self.stage_out(stage, host, is_std).join(target).join(self.cargo_dir())
self.stage_out(compiler, mode).join(target).join(self.cargo_dir())
}
/// Root output directory for LLVM compiled for `target`
@ -411,8 +475,7 @@ impl Build {
if compiler.is_snapshot(self) {
self.rustc_snapshot_libdir()
} else {
self.sysroot(compiler.stage, compiler.host)
.join(libdir(compiler.host))
self.sysroot(compiler).join(libdir(compiler.host))
}
}
@ -440,11 +503,20 @@ impl Build {
self.cc[target].0.path()
}
fn cflags(&self, target: &str) -> String {
self.cc[target].0.args().iter()
.map(|s| s.to_string_lossy())
.collect::<Vec<_>>()
.join(" ")
fn cflags(&self, target: &str) -> Vec<String> {
let mut base = self.cc[target].0.args().iter()
.map(|s| s.to_string_lossy().into_owned())
.collect::<Vec<_>>();
// If we're compiling on OSX then we add a few unconditional flags
// indicating that we want libc++ (more filled out than libstdc++) and
// we want to compile for 10.7. This way we can ensure that
// LLVM/jemalloc/etc are all properly compiled.
if target.contains("apple-darwin") {
base.push("-stdlib=libc++".into());
base.push("-mmacosx-version-min=10.7".into());
}
return base
}
fn ar(&self, target: &str) -> &Path {

View File

@ -86,6 +86,9 @@ pub fn llvm(build: &Build, target: &str) {
.define("CMAKE_CXX_COMPILER", build.cxx(target));
}
cfg.build_arg("-j").build_arg(build.jobs().to_string());
cfg.define("CMAKE_C_FLAGS", build.cflags(target).join(" "));
cfg.define("CMAKE_CXX_FLAGS", build.cflags(target).join(" "));
}
// FIXME: we don't actually need to build all LLVM tools and all LLVM
@ -113,7 +116,9 @@ pub fn compiler_rt(build: &Build, target: &str) {
let dst = build.compiler_rt_out(target);
let arch = target.split('-').next().unwrap();
let mode = if build.config.rust_optimize {"Release"} else {"Debug"};
let (dir, build_target, libname) = if target.contains("linux") {
let (dir, build_target, libname) = if target.contains("linux") ||
target.contains("freebsd") ||
target.contains("netbsd") {
let os = if target.contains("android") {"-android"} else {""};
let arch = if arch.starts_with("arm") && target.contains("eabihf") {
"armhf"

View File

@ -79,7 +79,7 @@ pub fn check(build: &mut Build) {
}
// Make sure musl-root is valid if specified
if target.contains("musl") {
if target.contains("musl") && (target.contains("x86_64") || target.contains("i686")) {
match build.config.musl_root {
Some(ref root) => {
if fs::metadata(root.join("lib/libc.a")).is_err() {
@ -119,4 +119,16 @@ $ pacman -R cmake && pacman -S mingw-w64-x86_64-cmake
}
}
}
for host in build.flags.host.iter() {
if !build.config.host.contains(host) {
panic!("specified host `{}` is not in the ./configure list", host);
}
}
for target in build.flags.target.iter() {
if !build.config.target.contains(target) {
panic!("specified target `{}` is not in the ./configure list",
target);
}
}
}

View File

@ -25,26 +25,33 @@ macro_rules! targets {
// compiler executable itself, not any of the support libraries
(rustc, Rustc { stage: u32 }),
// Steps for the two main cargo builds, one for the standard library
// and one for the compiler itself. These are parameterized over the
// stage output they're going to be placed in along with the
// compiler which is producing the copy of libstd or librustc
(libstd, Libstd { stage: u32, compiler: Compiler<'a> }),
(librustc, Librustc { stage: u32, compiler: Compiler<'a> }),
// Steps for the two main cargo builds. These are parameterized over
// the compiler which is producing the artifact.
(libstd, Libstd { compiler: Compiler<'a> }),
(libtest, Libtest { compiler: Compiler<'a> }),
(librustc, Librustc { compiler: Compiler<'a> }),
// Links the standard library/librustc produced by the compiler
// provided into the host's directory also provided.
// Links the target produced by the compiler provided into the
// host's directory also provided.
(libstd_link, LibstdLink {
stage: u32,
compiler: Compiler<'a>,
host: &'a str
}),
(libtest_link, LibtestLink {
compiler: Compiler<'a>,
host: &'a str
}),
(librustc_link, LibrustcLink {
stage: u32,
compiler: Compiler<'a>,
host: &'a str
}),
// Various tools that we can build as part of the build.
(tool_linkchecker, ToolLinkchecker { stage: u32 }),
(tool_rustbook, ToolRustbook { stage: u32 }),
(tool_error_index, ToolErrorIndex { stage: u32 }),
(tool_cargotest, ToolCargoTest { stage: u32 }),
// Steps for long-running native builds. Ideally these wouldn't
// actually exist and would be part of build scripts, but for now
// these are here.
@ -53,11 +60,32 @@ macro_rules! targets {
// with braces are unstable so we just pick something that works.
(llvm, Llvm { _dummy: () }),
(compiler_rt, CompilerRt { _dummy: () }),
// Steps for various pieces of documentation that we can generate;
// the 'doc' step is just a pseudo target to depend on a bunch of
// others.
(doc, Doc { stage: u32 }),
(doc_book, DocBook { stage: u32 }),
(doc_nomicon, DocNomicon { stage: u32 }),
(doc_style, DocStyle { stage: u32 }),
(doc_standalone, DocStandalone { stage: u32 }),
(doc_std, DocStd { stage: u32 }),
(doc_test, DocTest { stage: u32 }),
(doc_rustc, DocRustc { stage: u32 }),
(doc_error_index, DocErrorIndex { stage: u32 }),
// Steps for running tests. The 'check' target is just a pseudo
// target to depend on a bunch of others.
(check, Check { stage: u32, compiler: Compiler<'a> }),
(check_linkcheck, CheckLinkcheck { stage: u32 }),
(check_cargotest, CheckCargoTest { stage: u32 }),
// Distribution targets, creating tarballs
(dist, Dist { stage: u32 }),
(dist_docs, DistDocs { stage: u32 }),
(dist_mingw, DistMingw { _dummy: () }),
(dist_rustc, DistRustc { stage: u32 }),
(dist_std, DistStd { compiler: Compiler<'a> }),
}
}
}
@ -127,10 +155,9 @@ fn top_level(build: &Build) -> Vec<Step> {
}
let host = t.target(host);
if host.target == build.config.build {
targets.push(host.librustc(stage, host.compiler(stage)));
targets.push(host.librustc(host.compiler(stage)));
} else {
targets.push(host.librustc_link(stage, t.compiler(stage),
host.target));
targets.push(host.librustc_link(t.compiler(stage), host.target));
}
for target in build.config.target.iter() {
if !build.flags.target.contains(target) {
@ -139,11 +166,10 @@ fn top_level(build: &Build) -> Vec<Step> {
if host.target == build.config.build {
targets.push(host.target(target)
.libstd(stage, host.compiler(stage)));
.libtest(host.compiler(stage)));
} else {
targets.push(host.target(target)
.libstd_link(stage, t.compiler(stage),
host.target));
.libtest_link(t.compiler(stage), host.target));
}
}
}
@ -158,25 +184,37 @@ fn add_steps<'a>(build: &'a Build,
host: &Step<'a>,
target: &Step<'a>,
targets: &mut Vec<Step<'a>>) {
struct Context<'a> {
stage: u32,
compiler: Compiler<'a>,
_dummy: (),
host: &'a str,
}
for step in build.flags.step.iter() {
let compiler = host.target(&build.config.build).compiler(stage);
match &step[..] {
"libstd" => targets.push(target.libstd(stage, compiler)),
"librustc" => targets.push(target.librustc(stage, compiler)),
"libstd-link" => targets.push(target.libstd_link(stage, compiler,
host.target)),
"librustc-link" => targets.push(target.librustc_link(stage, compiler,
host.target)),
"rustc" => targets.push(host.rustc(stage)),
"llvm" => targets.push(target.llvm(())),
"compiler-rt" => targets.push(target.compiler_rt(())),
"doc-style" => targets.push(host.doc_style(stage)),
"doc-standalone" => targets.push(host.doc_standalone(stage)),
"doc-nomicon" => targets.push(host.doc_nomicon(stage)),
"doc-book" => targets.push(host.doc_book(stage)),
"doc" => targets.push(host.doc(stage)),
_ => panic!("unknown build target: `{}`", step),
// The macro below insists on hygienic access to all local variables, so
// we shove them all in a struct and subvert hygiene by accessing struct
// fields instead.
let cx = Context {
stage: stage,
compiler: host.target(&build.config.build).compiler(stage),
_dummy: (),
host: host.target,
};
macro_rules! add_step {
($(($short:ident, $name:ident { $($arg:ident: $t:ty),* }),)*) => ({$(
let name = stringify!($short).replace("_", "-");
if &step[..] == &name[..] {
targets.push(target.$short($(cx.$arg),*));
continue
}
drop(name);
)*})
}
targets!(add_step);
panic!("unknown step: {}", step);
}
}
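// Rough expansion sketch (illustrative only): for the `libstd` entry in
// `targets!` the macro arm above generates approximately
//
//     let name = stringify!(libstd).replace("_", "-");   // "libstd"
//     if &step[..] == &name[..] {
//         targets.push(target.libstd(cx.compiler));
//         continue
//     }
//     drop(name);
//
// and repeats that once per step listed in the macro.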
@ -209,36 +247,114 @@ impl<'a> Step<'a> {
}
Source::Rustc { stage } => {
let compiler = Compiler::new(stage - 1, &build.config.build);
vec![self.librustc(stage - 1, compiler)]
vec![self.librustc(compiler)]
}
Source::Librustc { stage, compiler } => {
vec![self.libstd(stage, compiler), self.llvm(())]
Source::Librustc { compiler } => {
vec![self.libtest(compiler), self.llvm(())]
}
Source::Libstd { stage: _, compiler } => {
Source::Libtest { compiler } => {
vec![self.libstd(compiler)]
}
Source::Libstd { compiler } => {
vec![self.compiler_rt(()),
self.rustc(compiler.stage).target(compiler.host)]
}
Source::LibrustcLink { stage, compiler, host } => {
vec![self.librustc(stage, compiler),
self.libstd_link(stage, compiler, host)]
Source::LibrustcLink { compiler, host } => {
vec![self.librustc(compiler),
self.libtest_link(compiler, host)]
}
Source::LibstdLink { stage, compiler, host } => {
vec![self.libstd(stage, compiler),
self.target(host).rustc(stage)]
Source::LibtestLink { compiler, host } => {
vec![self.libtest(compiler), self.libstd_link(compiler, host)]
}
Source::LibstdLink { compiler, host } => {
vec![self.libstd(compiler),
self.target(host).rustc(compiler.stage)]
}
Source::CompilerRt { _dummy } => {
vec![self.llvm(()).target(&build.config.build)]
}
Source::Llvm { _dummy } => Vec::new(),
// Note that all doc targets depend on artifacts from the build
// architecture, not the target (which is where we're generating
// docs into).
Source::DocStd { stage } => {
let compiler = self.target(&build.config.build).compiler(stage);
vec![self.libstd(compiler)]
}
Source::DocTest { stage } => {
let compiler = self.target(&build.config.build).compiler(stage);
vec![self.libtest(compiler)]
}
Source::DocBook { stage } |
Source::DocNomicon { stage } |
Source::DocStyle { stage } |
Source::DocStyle { stage } => {
vec![self.target(&build.config.build).tool_rustbook(stage)]
}
Source::DocErrorIndex { stage } => {
vec![self.target(&build.config.build).tool_error_index(stage)]
}
Source::DocStandalone { stage } => {
vec![self.rustc(stage)]
vec![self.target(&build.config.build).rustc(stage)]
}
Source::DocRustc { stage } => {
vec![self.doc_test(stage)]
}
Source::Doc { stage } => {
vec![self.doc_book(stage), self.doc_nomicon(stage),
self.doc_style(stage), self.doc_standalone(stage)]
self.doc_style(stage), self.doc_standalone(stage),
self.doc_std(stage),
self.doc_error_index(stage)]
}
Source::Check { stage, compiler: _ } => {
vec![self.check_linkcheck(stage),
self.dist(stage)]
}
Source::CheckLinkcheck { stage } => {
vec![self.tool_linkchecker(stage), self.doc(stage)]
}
Source::CheckCargoTest { stage } => {
vec![self.tool_cargotest(stage)]
}
Source::ToolLinkchecker { stage } => {
vec![self.libstd(self.compiler(stage))]
}
Source::ToolErrorIndex { stage } |
Source::ToolRustbook { stage } => {
vec![self.librustc(self.compiler(stage))]
}
Source::ToolCargoTest { stage } => {
vec![self.librustc(self.compiler(stage))]
}
Source::DistDocs { stage } => vec![self.doc(stage)],
Source::DistMingw { _dummy: _ } => Vec::new(),
Source::DistRustc { stage } => {
vec![self.rustc(stage)]
}
Source::DistStd { compiler } => {
vec![self.libtest(compiler)]
}
Source::Dist { stage } => {
let mut base = Vec::new();
for host in build.config.host.iter() {
let host = self.target(host);
base.push(host.dist_rustc(stage));
if host.target.contains("windows-gnu") {
base.push(host.dist_mingw(()));
}
let compiler = self.compiler(stage);
for target in build.config.target.iter() {
let target = self.target(target);
base.push(target.dist_docs(stage));
base.push(target.dist_std(compiler));
}
}
return base
}
}
}

View File

@ -30,7 +30,15 @@ pub fn mtime(path: &Path) -> FileTime {
}).unwrap_or(FileTime::zero())
}
#[allow(dead_code)] // this will be used soon
pub fn copy(src: &Path, dst: &Path) {
let res = fs::hard_link(src, dst);
let res = res.or_else(|_| fs::copy(src, dst).map(|_| ()));
if let Err(e) = res {
panic!("failed to copy `{}` to `{}`: {}", src.display(),
dst.display(), e)
}
}
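// Usage note (illustrative, hypothetical paths): callers treat this as a
// plain file copy; the hard link is only an optimization, and the fallback
// keeps the build working across filesystems, e.g.
//
//     copy(Path::new("build/stage0-sysroot/bin/rustc"),
//          Path::new("build/stage1/bin/rustc"));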
pub fn cp_r(src: &Path, dst: &Path) {
for f in t!(fs::read_dir(src)) {
let f = t!(f);
@ -43,7 +51,7 @@ pub fn cp_r(src: &Path, dst: &Path) {
cp_r(&path, &dst);
} else {
let _ = fs::remove_file(&dst);
t!(fs::hard_link(&path, dst));
copy(&path, &dst);
}
}
}

View File

@ -20,6 +20,7 @@ extern crate libc;
extern crate num_cpus;
extern crate rustc_serialize;
extern crate toml;
extern crate md5;
use std::env;

View File

@ -36,3 +36,11 @@ book:
$(Q)$(BOOTSTRAP) --step doc-book
standalone-docs:
$(Q)$(BOOTSTRAP) --step doc-standalone
check:
$(Q)$(BOOTSTRAP) --step check
check-cargotest:
$(Q)$(BOOTSTRAP) --step check-cargotest
dist:
$(Q)$(BOOTSTRAP) --step dist
.PHONY: dist

View File

@ -8,6 +8,23 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Shim which is passed to Cargo as "rustc" when running the bootstrap.
//!
//! This shim will take care of some various tasks that our build process
//! requires that Cargo can't quite do through normal configuration:
//!
//! 1. When compiling build scripts and build dependencies, we need a guaranteed
//! full standard library available. The only compiler which actually has
//! this is the snapshot, so we detect this situation and always compile with
//! the snapshot compiler.
//! 2. We pass a bunch of `--cfg` and other flags based on what we're compiling
//! (and this slightly differs based on whether we're using a snapshot or
//! not), so we do that all here.
//!
//! This may one day be replaced by RUSTFLAGS, but the dynamic nature of
//! switching compilers for the bootstrap and for build scripts will probably
//! never get replaced.
extern crate bootstrap;
use std::env;

src/bootstrap/rustdoc.rs (new file, 31 lines)
View File

@ -0,0 +1,31 @@
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Shim which is passed to Cargo as "rustdoc" when running the bootstrap.
//!
//! See comments in `src/bootstrap/rustc.rs` for more information.
use std::env;
use std::process::Command;
fn main() {
let args = env::args_os().skip(1).collect::<Vec<_>>();
let rustdoc = env::var_os("RUSTDOC_REAL").unwrap();
let mut cmd = Command::new(rustdoc);
cmd.args(&args)
.arg("--cfg").arg(format!("stage{}", env::var("RUSTC_STAGE").unwrap()))
.arg("--cfg").arg("dox");
std::process::exit(match cmd.status() {
Ok(s) => s.code().unwrap_or(1),
Err(e) => panic!("\n\nfailed to run {:?}: {}\n\n", cmd, e),
})
}
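// Illustrative only: with RUSTDOC_REAL=/path/to/rustdoc and RUSTC_STAGE=1,
// an invocation `rustdoc foo.rs` coming from Cargo is forwarded as roughly
//
//     /path/to/rustdoc foo.rs --cfg stage1 --cfg dox
//
// and the shim exits with the real rustdoc's status code.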

View File

@ -43,10 +43,16 @@ pub fn cc2ar(cc: &Path, target: &str) -> PathBuf {
if target.contains("musl") || target.contains("msvc") {
PathBuf::from("ar")
} else {
let parent = cc.parent().unwrap();
let file = cc.file_name().unwrap().to_str().unwrap();
cc.parent().unwrap().join(file.replace("gcc", "ar")
.replace("cc", "ar")
.replace("clang", "ar"))
for suffix in &["gcc", "cc", "clang"] {
if let Some(idx) = file.rfind(suffix) {
let mut file = file[..idx].to_owned();
file.push_str("ar");
return parent.join(&file);
}
}
parent.join(file)
}
}
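// Worked example (illustrative, not part of the patch): for a cross
// toolchain the suffix search above turns the C compiler name into the
// matching archiver, e.g.
//
//     cc2ar(Path::new("/usr/bin/arm-linux-gnueabihf-gcc"),
//           "arm-unknown-linux-gnueabihf")
//         => PathBuf::from("/usr/bin/arm-linux-gnueabihf-ar")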

View File

@ -207,8 +207,11 @@ elseif(NOT APPLE) # Supported archs for Apple platforms are generated later
test_target_arch(mips "" "-mips32r2" "--target=mips-linux-gnu")
test_target_arch(mips64 "" "-mips64r2" "--target=mips64-linux-gnu" "-mabi=n64")
elseif("${COMPILER_RT_DEFAULT_TARGET_ARCH}" MATCHES "arm")
test_target_arch(arm "" "-march=armv7-a" "-mfloat-abi=soft")
test_target_arch(armhf "" "-march=armv7-a" "-mfloat-abi=hard")
if("${COMPILER_RT_DEFAULT_TARGET_TRIPLE}" MATCHES "eabihf")
test_target_arch(armhf "" "-march=armv7-a" "-mfloat-abi=hard")
else()
test_target_arch(arm "" "-march=armv7-a" "-mfloat-abi=soft")
endif()
elseif("${COMPILER_RT_DEFAULT_TARGET_ARCH}" MATCHES "aarch32")
test_target_arch(aarch32 "" "-march=armv8-a")
elseif("${COMPILER_RT_DEFAULT_TARGET_ARCH}" MATCHES "aarch64")

View File

@ -220,8 +220,6 @@ else () # MSVC
endif () # if (NOT MSVC)
set(arm_SOURCES
arm/adddf3vfp.S
arm/addsf3vfp.S
arm/aeabi_cdcmp.S
arm/aeabi_cdcmpeq_check_nan.c
arm/aeabi_cfcmp.S
@ -242,40 +240,11 @@ set(arm_SOURCES
arm/bswapdi2.S
arm/bswapsi2.S
arm/comparesf2.S
arm/divdf3vfp.S
arm/divmodsi4.S
arm/divsf3vfp.S
arm/divsi3.S
arm/eqdf2vfp.S
arm/eqsf2vfp.S
arm/extendsfdf2vfp.S
arm/fixdfsivfp.S
arm/fixsfsivfp.S
arm/fixunsdfsivfp.S
arm/fixunssfsivfp.S
arm/floatsidfvfp.S
arm/floatsisfvfp.S
arm/floatunssidfvfp.S
arm/floatunssisfvfp.S
arm/gedf2vfp.S
arm/gesf2vfp.S
arm/gtdf2vfp.S
arm/gtsf2vfp.S
arm/ledf2vfp.S
arm/lesf2vfp.S
arm/ltdf2vfp.S
arm/ltsf2vfp.S
arm/modsi3.S
arm/muldf3vfp.S
arm/mulsf3vfp.S
arm/nedf2vfp.S
arm/negdf2vfp.S
arm/negsf2vfp.S
arm/nesf2vfp.S
arm/restore_vfp_d8_d15_regs.S
arm/save_vfp_d8_d15_regs.S
arm/subdf3vfp.S
arm/subsf3vfp.S
arm/switch16.S
arm/switch32.S
arm/switch8.S
@ -301,12 +270,9 @@ set(arm_SOURCES
arm/sync_fetch_and_xor_4.S
arm/sync_fetch_and_xor_8.S
arm/sync_synchronize.S
arm/truncdfsf2vfp.S
arm/udivmodsi4.S
arm/udivsi3.S
arm/umodsi3.S
arm/unorddf2vfp.S
arm/unordsf2vfp.S
${GENERIC_SOURCES})
set(aarch64_SOURCES
@ -328,7 +294,42 @@ set(aarch64_SOURCES
trunctfsf2.c
${GENERIC_SOURCES})
set(armhf_SOURCES ${arm_SOURCES})
set(armhf_SOURCES
arm/adddf3vfp.S
arm/addsf3vfp.S
arm/divdf3vfp.S
arm/divsf3vfp.S
arm/eqdf2vfp.S
arm/eqsf2vfp.S
arm/extendsfdf2vfp.S
arm/fixdfsivfp.S
arm/fixsfsivfp.S
arm/fixunsdfsivfp.S
arm/fixunssfsivfp.S
arm/floatsidfvfp.S
arm/floatsisfvfp.S
arm/floatunssidfvfp.S
arm/floatunssisfvfp.S
arm/gedf2vfp.S
arm/gesf2vfp.S
arm/gtdf2vfp.S
arm/gtsf2vfp.S
arm/ledf2vfp.S
arm/lesf2vfp.S
arm/ltdf2vfp.S
arm/ltsf2vfp.S
arm/muldf3vfp.S
arm/mulsf3vfp.S
arm/nedf2vfp.S
arm/nesf2vfp.S
arm/restore_vfp_d8_d15_regs.S
arm/save_vfp_d8_d15_regs.S
arm/subdf3vfp.S
arm/subsf3vfp.S
arm/truncdfsf2vfp.S
arm/unorddf2vfp.S
arm/unordsf2vfp.S
${arm_SOURCES})
set(armv7_SOURCES ${arm_SOURCES})
set(armv7s_SOURCES ${arm_SOURCES})
set(arm64_SOURCES ${aarch64_SOURCES})

View File

@ -25,7 +25,8 @@ pub enum Mode {
DebugInfoLldb,
Codegen,
Rustdoc,
CodegenUnits
CodegenUnits,
Incremental,
}
impl FromStr for Mode {
@ -43,6 +44,7 @@ impl FromStr for Mode {
"codegen" => Ok(Codegen),
"rustdoc" => Ok(Rustdoc),
"codegen-units" => Ok(CodegenUnits),
"incremental" => Ok(Incremental),
_ => Err(()),
}
}
@ -62,6 +64,7 @@ impl fmt::Display for Mode {
Codegen => "codegen",
Rustdoc => "rustdoc",
CodegenUnits => "codegen-units",
Incremental => "incremental",
}, f)
}
}
@ -69,10 +72,10 @@ impl fmt::Display for Mode {
#[derive(Clone)]
pub struct Config {
// The library paths required for running the compiler
pub compile_lib_path: String,
pub compile_lib_path: PathBuf,
// The library paths required for running compiled programs
pub run_lib_path: String,
pub run_lib_path: PathBuf,
// The rustc executable
pub rustc_path: PathBuf,
@ -155,5 +158,8 @@ pub struct Config {
pub lldb_python_dir: Option<String>,
// Explain what's going on
pub verbose: bool
pub verbose: bool,
// Print one character per test instead of one line
pub quiet: bool,
}

View File

@ -11,11 +11,10 @@
#![crate_type = "bin"]
#![feature(box_syntax)]
#![feature(dynamic_lib)]
#![feature(libc)]
#![feature(rustc_private)]
#![feature(str_char)]
#![feature(test)]
#![feature(question_mark)]
#![deny(warnings)]
@ -71,13 +70,15 @@ pub fn parse_config(args: Vec<String> ) -> Config {
reqopt("", "aux-base", "directory to find auxiliary test files", "PATH"),
reqopt("", "stage-id", "the target-stage identifier", "stageN-TARGET"),
reqopt("", "mode", "which sort of compile tests to run",
"(compile-fail|parse-fail|run-fail|run-pass|run-pass-valgrind|pretty|debug-info)"),
"(compile-fail|parse-fail|run-fail|run-pass|\
run-pass-valgrind|pretty|debug-info|incremental)"),
optflag("", "ignored", "run tests marked as ignored"),
optopt("", "runtool", "supervisor program to run tests under \
(eg. emulator, valgrind)", "PROGRAM"),
optopt("", "host-rustcflags", "flags to pass to rustc for host", "FLAGS"),
optopt("", "target-rustcflags", "flags to pass to rustc for target", "FLAGS"),
optflag("", "verbose", "run tests verbosely, showing all output"),
optflag("", "quiet", "print one character per test instead of one line"),
optopt("", "logfile", "file to log test execution to", "FILE"),
optopt("", "target", "the target to build for", "TARGET"),
optopt("", "host", "the host to build for", "HOST"),
@ -117,15 +118,17 @@ pub fn parse_config(args: Vec<String> ) -> Config {
}
}
let filter = if !matches.free.is_empty() {
Some(matches.free[0].clone())
} else {
None
};
fn make_absolute(path: PathBuf) -> PathBuf {
if path.is_relative() {
env::current_dir().unwrap().join(path)
} else {
path
}
}
Config {
compile_lib_path: matches.opt_str("compile-lib-path").unwrap(),
run_lib_path: matches.opt_str("run-lib-path").unwrap(),
compile_lib_path: make_absolute(opt_path(matches, "compile-lib-path")),
run_lib_path: make_absolute(opt_path(matches, "run-lib-path")),
rustc_path: opt_path(matches, "rustc-path"),
rustdoc_path: opt_path(matches, "rustdoc-path"),
python: matches.opt_str("python").unwrap(),
@ -138,7 +141,7 @@ pub fn parse_config(args: Vec<String> ) -> Config {
stage_id: matches.opt_str("stage-id").unwrap(),
mode: matches.opt_str("mode").unwrap().parse().ok().expect("invalid mode"),
run_ignored: matches.opt_present("ignored"),
filter: filter,
filter: matches.free.first().cloned(),
logfile: matches.opt_str("logfile").map(|s| PathBuf::from(&s)),
runtool: matches.opt_str("runtool"),
host_rustcflags: matches.opt_str("host-rustcflags"),
@ -158,6 +161,7 @@ pub fn parse_config(args: Vec<String> ) -> Config {
!opt_str2(matches.opt_str("adb-test-dir")).is_empty(),
lldb_python_dir: matches.opt_str("lldb-python-dir"),
verbose: matches.opt_present("verbose"),
quiet: matches.opt_present("quiet"),
}
}
@ -191,6 +195,7 @@ pub fn log_config(config: &Config) {
logv(c, format!("adb_device_status: {}",
config.adb_device_status));
logv(c, format!("verbose: {}", config.verbose));
logv(c, format!("quiet: {}", config.quiet));
logv(c, format!("\n"));
}
@ -252,15 +257,16 @@ pub fn run_tests(config: &Config) {
pub fn test_opts(config: &Config) -> test::TestOpts {
test::TestOpts {
filter: match config.filter {
None => None,
Some(ref filter) => Some(filter.clone()),
},
filter: config.filter.clone(),
run_ignored: config.run_ignored,
quiet: config.quiet,
logfile: config.logfile.clone(),
run_tests: true,
bench_benchmarks: true,
nocapture: env::var("RUST_TEST_NOCAPTURE").is_ok(),
nocapture: match env::var("RUST_TEST_NOCAPTURE") {
Ok(val) => &val != "0",
Err(_) => false
},
color: test::AutoColor,
}
}
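The `RUST_TEST_NOCAPTURE` handling above now treats any value other than `0` as enabled. A minimal standalone sketch of that convention (not part of the patch; the helper name is invented):

```rust
use std::env;

// True when RUST_TEST_NOCAPTURE is set to anything other than "0".
fn nocapture_enabled() -> bool {
    match env::var("RUST_TEST_NOCAPTURE") {
        Ok(val) => val != "0",
        Err(_) => false,
    }
}

fn main() {
    println!("nocapture: {}", nocapture_enabled());
}
```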
@ -286,16 +292,16 @@ fn collect_tests_from_dir(config: &Config,
-> io::Result<()> {
// Ignore directories that contain a file
// `compiletest-ignore-dir`.
for file in try!(fs::read_dir(dir)) {
let file = try!(file);
for file in fs::read_dir(dir)? {
let file = file?;
if file.file_name() == *"compiletest-ignore-dir" {
return Ok(());
}
}
let dirs = try!(fs::read_dir(dir));
let dirs = fs::read_dir(dir)?;
for file in dirs {
let file = try!(file);
let file = file?;
let file_path = file.path();
debug!("inspecting file {:?}", file_path.display());
if is_test(config, &file_path) {
@ -316,11 +322,11 @@ fn collect_tests_from_dir(config: &Config,
tests.push(make_test(config, &paths))
} else if file_path.is_dir() {
let relative_file_path = relative_dir_path.join(file.file_name());
try!(collect_tests_from_dir(config,
base,
&file_path,
&relative_file_path,
tests));
collect_tests_from_dir(config,
base,
&file_path,
&relative_file_path,
tests)?;
}
}
Ok(())
@ -354,11 +360,25 @@ pub fn is_test(config: &Config, testfile: &Path) -> bool {
}
pub fn make_test(config: &Config, testpaths: &TestPaths) -> test::TestDescAndFn {
let early_props = header::early_props(config, &testpaths.file);
// The `should-fail` annotation doesn't apply to pretty tests,
// since we run the pretty printer across all tests by default.
// If desired, we could add a `should-fail-pretty` annotation.
let should_panic = match config.mode {
Pretty => test::ShouldPanic::No,
_ => if early_props.should_fail {
test::ShouldPanic::Yes
} else {
test::ShouldPanic::No
}
};
test::TestDescAndFn {
desc: test::TestDesc {
name: make_test_name(config, testpaths),
ignore: header::is_test_ignored(config, &testpaths.file),
should_panic: test::ShouldPanic::No,
ignore: early_props.ignore,
should_panic: should_panic,
},
testfn: make_test_closure(config, testpaths),
}
@ -391,16 +411,26 @@ fn extract_gdb_version(full_version_line: Option<String>) -> Option<String> {
// used to be a regex "(^|[^0-9])([0-9]\.[0-9]+)"
for (pos, c) in full_version_line.char_indices() {
if !c.is_digit(10) { continue }
if pos + 2 >= full_version_line.len() { continue }
if full_version_line.char_at(pos + 1) != '.' { continue }
if !full_version_line.char_at(pos + 2).is_digit(10) { continue }
if pos > 0 && full_version_line.char_at_reverse(pos).is_digit(10) {
if !c.is_digit(10) {
continue
}
if pos + 2 >= full_version_line.len() {
continue
}
if full_version_line[pos + 1..].chars().next().unwrap() != '.' {
continue
}
if !full_version_line[pos + 2..].chars().next().unwrap().is_digit(10) {
continue
}
if pos > 0 && full_version_line[..pos].chars().next_back()
.unwrap().is_digit(10) {
continue
}
let mut end = pos + 3;
while end < full_version_line.len() &&
full_version_line.char_at(end).is_digit(10) {
full_version_line[end..].chars().next()
.unwrap().is_digit(10) {
end += 1;
}
return Some(full_version_line[pos..end].to_owned());
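The version sniffing above replaces the unstable `str::char_at` with byte-offset slicing plus `chars().next()`. A standalone sketch of that idiom (the helper name and sample string are ours):

```rust
// First char of the substring starting at `byte_pos`; like the slicing above,
// this panics if `byte_pos` is not a char boundary or is out of range.
fn char_at(s: &str, byte_pos: usize) -> Option<char> {
    s[byte_pos..].chars().next()
}

fn main() {
    let line = "GNU gdb (GDB) 7.11";
    assert_eq!(char_at(line, 14), Some('7'));
}
```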
@ -432,13 +462,13 @@ fn extract_lldb_version(full_version_line: Option<String>) -> Option<String> {
for (pos, l) in full_version_line.char_indices() {
if l != 'l' && l != 'L' { continue }
if pos + 5 >= full_version_line.len() { continue }
let l = full_version_line.char_at(pos + 1);
let l = full_version_line[pos + 1..].chars().next().unwrap();
if l != 'l' && l != 'L' { continue }
let d = full_version_line.char_at(pos + 2);
let d = full_version_line[pos + 2..].chars().next().unwrap();
if d != 'd' && d != 'D' { continue }
let b = full_version_line.char_at(pos + 3);
let b = full_version_line[pos + 3..].chars().next().unwrap();
if b != 'b' && b != 'B' { continue }
let dash = full_version_line.char_at(pos + 4);
let dash = full_version_line[pos + 4..].chars().next().unwrap();
if dash != '-' { continue }
let vers = full_version_line[pos + 5..].chars().take_while(|c| {


@ -9,14 +9,54 @@
// except according to those terms.
use self::WhichLine::*;
use std::fmt;
use std::fs::File;
use std::io::BufReader;
use std::io::prelude::*;
use std::path::Path;
use std::str::FromStr;
#[derive(Clone, Debug, PartialEq)]
pub enum ErrorKind {
Help,
Error,
Note,
Suggestion,
Warning,
}
impl FromStr for ErrorKind {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match &s.trim_right_matches(':') as &str {
"HELP" => Ok(ErrorKind::Help),
"ERROR" => Ok(ErrorKind::Error),
"NOTE" => Ok(ErrorKind::Note),
"SUGGESTION" => Ok(ErrorKind::Suggestion),
"WARN" => Ok(ErrorKind::Warning),
"WARNING" => Ok(ErrorKind::Warning),
_ => Err(()),
}
}
}
impl fmt::Display for ErrorKind {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
ErrorKind::Help => write!(f, "help"),
ErrorKind::Error => write!(f, "error"),
ErrorKind::Note => write!(f, "note"),
ErrorKind::Suggestion => write!(f, "suggestion"),
ErrorKind::Warning => write!(f, "warning"),
}
}
}
pub struct ExpectedError {
pub line: usize,
pub kind: String,
pub line_num: usize,
/// What kind of message we expect (e.g. warning, error, suggestion).
/// `None` if not specified or unknown message kind.
pub kind: Option<ErrorKind>,
pub msg: String,
}
@ -30,8 +70,10 @@ enum WhichLine { ThisLine, FollowPrevious(usize), AdjustBackward(usize) }
/// Goal is to enable tests both like: //~^^^ ERROR go up three
/// and also //~^ ERROR message one for the preceding line, and
/// //~| ERROR message two for that same line.
// Load any test directives embedded in the file
pub fn load_errors(testfile: &Path) -> Vec<ExpectedError> {
///
/// If cfg is not None (i.e., in an incremental test), then we look
/// for `//[X]~` instead, where `X` is the current `cfg`.
pub fn load_errors(testfile: &Path, cfg: Option<&str>) -> Vec<ExpectedError> {
let rdr = BufReader::new(File::open(testfile).unwrap());
// `last_nonfollow_error` tracks the most recently seen
@ -44,55 +86,66 @@ pub fn load_errors(testfile: &Path) -> Vec<ExpectedError> {
// updating it in the map callback below.)
let mut last_nonfollow_error = None;
rdr.lines().enumerate().filter_map(|(line_no, ln)| {
parse_expected(last_nonfollow_error,
line_no + 1,
&ln.unwrap())
.map(|(which, error)| {
match which {
FollowPrevious(_) => {}
_ => last_nonfollow_error = Some(error.line),
}
error
})
}).collect()
let tag = match cfg {
Some(rev) => format!("//[{}]~", rev),
None => format!("//~")
};
rdr.lines()
.enumerate()
.filter_map(|(line_num, line)| {
parse_expected(last_nonfollow_error,
line_num + 1,
&line.unwrap(),
&tag)
.map(|(which, error)| {
match which {
FollowPrevious(_) => {}
_ => last_nonfollow_error = Some(error.line_num),
}
error
})
})
.collect()
}
fn parse_expected(last_nonfollow_error: Option<usize>,
line_num: usize,
line: &str) -> Option<(WhichLine, ExpectedError)> {
let start = match line.find("//~") { Some(i) => i, None => return None };
let (follow, adjusts) = if line.char_at(start + 3) == '|' {
line: &str,
tag: &str)
-> Option<(WhichLine, ExpectedError)> {
let start = match line.find(tag) { Some(i) => i, None => return None };
let (follow, adjusts) = if line[start + tag.len()..].chars().next().unwrap() == '|' {
(true, 0)
} else {
(false, line[start + 3..].chars().take_while(|c| *c == '^').count())
(false, line[start + tag.len()..].chars().take_while(|c| *c == '^').count())
};
let kind_start = start + 3 + adjusts + (follow as usize);
let letters = line[kind_start..].chars();
let kind = letters.skip_while(|c| c.is_whitespace())
.take_while(|c| !c.is_whitespace())
.flat_map(|c| c.to_lowercase())
.collect::<String>();
let kind_start = start + tag.len() + adjusts + (follow as usize);
let kind = line[kind_start..].split_whitespace()
.next()
.expect("Encountered unexpected empty comment")
.parse::<ErrorKind>()
.ok();
let letters = line[kind_start..].chars();
let msg = letters.skip_while(|c| c.is_whitespace())
.skip_while(|c| !c.is_whitespace())
.collect::<String>().trim().to_owned();
let (which, line) = if follow {
let (which, line_num) = if follow {
assert!(adjusts == 0, "use either //~| or //~^, not both.");
let line = last_nonfollow_error.unwrap_or_else(|| {
panic!("encountered //~| without preceding //~^ line.")
});
(FollowPrevious(line), line)
let line_num = last_nonfollow_error.expect("encountered //~| without \
preceding //~^ line.");
(FollowPrevious(line_num), line_num)
} else {
let which =
if adjusts > 0 { AdjustBackward(adjusts) } else { ThisLine };
let line = line_num - adjusts;
(which, line)
let line_num = line_num - adjusts;
(which, line_num)
};
debug!("line={} which={:?} kind={:?} msg={:?}", line_num, which, kind, msg);
Some((which, ExpectedError { line: line,
debug!("line={} tag={:?} which={:?} kind={:?} msg={:?}",
line_num, tag, which, kind, msg);
Some((which, ExpectedError { line_num: line_num,
kind: kind,
msg: msg, }))
}
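For reference, a hypothetical test source using the directive forms handled above (`//~` for this line, `//~^` to point one line up, `//~|` to reuse the previous directive's line, and `//[rev]~` when the test has revisions). The error messages here are invented:

```rust
// Sketch only: this is a compile-fail style test, so it is expected not to
// compile; the directives tell compiletest which diagnostics to look for.
fn main() {
    let x: u32 = "hello"; //~ ERROR mismatched types
    //~^ NOTE expected u32
    //~| NOTE found &str
}
```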


@ -18,11 +18,12 @@ use common::Config;
use common;
use util;
#[derive(Clone, Debug)]
pub struct TestProps {
// Lines that should be expected, in order, on standard out
pub error_patterns: Vec<String> ,
// Extra flags to pass to the compiler
pub compile_flags: Option<String>,
pub compile_flags: Vec<String>,
// Extra flags to pass when the compiled code is run (such as --bench)
pub run_flags: Option<String>,
// If present, the name of a file that this test should match when
@ -30,6 +31,8 @@ pub struct TestProps {
pub pp_exact: Option<PathBuf>,
// Modules from aux directory that should be compiled
pub aux_builds: Vec<String> ,
// Environment settings to use for compiling
pub rustc_env: Vec<(String,String)> ,
// Environment settings to use during execution
pub exec_env: Vec<(String,String)> ,
// Lines to check if they appear in the expected debugger output
@ -50,105 +53,33 @@ pub struct TestProps {
pub pretty_compare_only: bool,
// Patterns which must not appear in the output of a cfail test.
pub forbid_output: Vec<String>,
// Revisions to test for incremental compilation.
pub revisions: Vec<String>,
}
// Load any test directives embedded in the file
pub fn load_props(testfile: &Path) -> TestProps {
let mut error_patterns = Vec::new();
let mut aux_builds = Vec::new();
let mut exec_env = Vec::new();
let mut compile_flags = None;
let mut run_flags = None;
let mut pp_exact = None;
let mut check_lines = Vec::new();
let mut build_aux_docs = false;
let mut force_host = false;
let mut check_stdout = false;
let mut no_prefer_dynamic = false;
let mut pretty_expanded = false;
let mut pretty_mode = None;
let mut pretty_compare_only = false;
let mut forbid_output = Vec::new();
iter_header(testfile, &mut |ln| {
if let Some(ep) = parse_error_pattern(ln) {
error_patterns.push(ep);
}
if compile_flags.is_none() {
compile_flags = parse_compile_flags(ln);
}
if run_flags.is_none() {
run_flags = parse_run_flags(ln);
}
if pp_exact.is_none() {
pp_exact = parse_pp_exact(ln, testfile);
}
if !build_aux_docs {
build_aux_docs = parse_build_aux_docs(ln);
}
if !force_host {
force_host = parse_force_host(ln);
}
if !check_stdout {
check_stdout = parse_check_stdout(ln);
}
if !no_prefer_dynamic {
no_prefer_dynamic = parse_no_prefer_dynamic(ln);
}
if !pretty_expanded {
pretty_expanded = parse_pretty_expanded(ln);
}
if pretty_mode.is_none() {
pretty_mode = parse_pretty_mode(ln);
}
if !pretty_compare_only {
pretty_compare_only = parse_pretty_compare_only(ln);
}
if let Some(ab) = parse_aux_build(ln) {
aux_builds.push(ab);
}
if let Some(ee) = parse_exec_env(ln) {
exec_env.push(ee);
}
if let Some(cl) = parse_check_line(ln) {
check_lines.push(cl);
}
if let Some(of) = parse_forbid_output(ln) {
forbid_output.push(of);
}
true
});
for key in vec!["RUST_TEST_NOCAPTURE", "RUST_TEST_THREADS"] {
match env::var(key) {
Ok(val) =>
if exec_env.iter().find(|&&(ref x, _)| *x == key).is_none() {
exec_env.push((key.to_owned(), val))
},
Err(..) => {}
}
}
TestProps {
let error_patterns = Vec::new();
let aux_builds = Vec::new();
let exec_env = Vec::new();
let run_flags = None;
let pp_exact = None;
let check_lines = Vec::new();
let build_aux_docs = false;
let force_host = false;
let check_stdout = false;
let no_prefer_dynamic = false;
let pretty_expanded = false;
let pretty_compare_only = false;
let forbid_output = Vec::new();
let mut props = TestProps {
error_patterns: error_patterns,
compile_flags: compile_flags,
compile_flags: vec![],
run_flags: run_flags,
pp_exact: pp_exact,
aux_builds: aux_builds,
revisions: vec![],
rustc_env: vec![],
exec_env: exec_env,
check_lines: check_lines,
build_aux_docs: build_aux_docs,
@ -156,13 +87,139 @@ pub fn load_props(testfile: &Path) -> TestProps {
check_stdout: check_stdout,
no_prefer_dynamic: no_prefer_dynamic,
pretty_expanded: pretty_expanded,
pretty_mode: pretty_mode.unwrap_or("normal".to_owned()),
pretty_mode: format!("normal"),
pretty_compare_only: pretty_compare_only,
forbid_output: forbid_output,
};
load_props_into(&mut props, testfile, None);
props
}
/// Load properties from `testfile` into `props`. If a property is
/// tied to a particular revision `foo` (indicated by writing
/// `//[foo]`), then the property is ignored unless `cfg` is
/// `Some("foo")`.
pub fn load_props_into(props: &mut TestProps, testfile: &Path, cfg: Option<&str>) {
iter_header(testfile, cfg, &mut |ln| {
if let Some(ep) = parse_error_pattern(ln) {
props.error_patterns.push(ep);
}
if let Some(flags) = parse_compile_flags(ln) {
props.compile_flags.extend(
flags
.split_whitespace()
.map(|s| s.to_owned()));
}
if let Some(r) = parse_revisions(ln) {
props.revisions.extend(r);
}
if props.run_flags.is_none() {
props.run_flags = parse_run_flags(ln);
}
if props.pp_exact.is_none() {
props.pp_exact = parse_pp_exact(ln, testfile);
}
if !props.build_aux_docs {
props.build_aux_docs = parse_build_aux_docs(ln);
}
if !props.force_host {
props.force_host = parse_force_host(ln);
}
if !props.check_stdout {
props.check_stdout = parse_check_stdout(ln);
}
if !props.no_prefer_dynamic {
props.no_prefer_dynamic = parse_no_prefer_dynamic(ln);
}
if !props.pretty_expanded {
props.pretty_expanded = parse_pretty_expanded(ln);
}
if let Some(m) = parse_pretty_mode(ln) {
props.pretty_mode = m;
}
if !props.pretty_compare_only {
props.pretty_compare_only = parse_pretty_compare_only(ln);
}
if let Some(ab) = parse_aux_build(ln) {
props.aux_builds.push(ab);
}
if let Some(ee) = parse_env(ln, "exec-env") {
props.exec_env.push(ee);
}
if let Some(ee) = parse_env(ln, "rustc-env") {
props.rustc_env.push(ee);
}
if let Some(cl) = parse_check_line(ln) {
props.check_lines.push(cl);
}
if let Some(of) = parse_forbid_output(ln) {
props.forbid_output.push(of);
}
});
for key in vec!["RUST_TEST_NOCAPTURE", "RUST_TEST_THREADS"] {
match env::var(key) {
Ok(val) =>
if props.exec_env.iter().find(|&&(ref x, _)| *x == key).is_none() {
props.exec_env.push((key.to_owned(), val))
},
Err(..) => {}
}
}
}
pub fn is_test_ignored(config: &Config, testfile: &Path) -> bool {
pub struct EarlyProps {
pub ignore: bool,
pub should_fail: bool,
}
// scan the file to detect whether the test should be ignored and
// whether it should panic; these are two things the test runner needs
// to know early, before actually running the test
pub fn early_props(config: &Config, testfile: &Path) -> EarlyProps {
let mut props = EarlyProps {
ignore: false,
should_fail: false,
};
iter_header(testfile, None, &mut |ln| {
props.ignore =
props.ignore ||
parse_name_directive(ln, "ignore-test") ||
parse_name_directive(ln, &ignore_target(config)) ||
parse_name_directive(ln, &ignore_architecture(config)) ||
parse_name_directive(ln, &ignore_stage(config)) ||
parse_name_directive(ln, &ignore_env(config)) ||
(config.mode == common::Pretty &&
parse_name_directive(ln, "ignore-pretty")) ||
(config.target != config.host &&
parse_name_directive(ln, "ignore-cross-compile")) ||
ignore_gdb(config, ln) ||
ignore_lldb(config, ln);
props.should_fail =
props.should_fail ||
parse_name_directive(ln, "should-fail");
});
return props;
fn ignore_target(config: &Config) -> String {
format!("ignore-{}", util::get_os(&config.target))
}
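As a concrete illustration of what `early_props` scans for, a hypothetical test header might carry ignore directives like the following; `ignore-test` appears in the code above, while the platform-specific name is assumed to follow the `ignore-<os>` pattern built by `ignore_target`:

```rust
// ignore-windows
// ignore-test (directive-only sketch, not a real test)
fn main() {}
```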
@ -229,39 +286,40 @@ pub fn is_test_ignored(config: &Config, testfile: &Path) -> bool {
false
}
}
let val = iter_header(testfile, &mut |ln| {
!parse_name_directive(ln, "ignore-test") &&
!parse_name_directive(ln, &ignore_target(config)) &&
!parse_name_directive(ln, &ignore_architecture(config)) &&
!parse_name_directive(ln, &ignore_stage(config)) &&
!parse_name_directive(ln, &ignore_env(config)) &&
!(config.mode == common::Pretty && parse_name_directive(ln, "ignore-pretty")) &&
!(config.target != config.host && parse_name_directive(ln, "ignore-cross-compile")) &&
!ignore_gdb(config, ln) &&
!ignore_lldb(config, ln)
});
!val
}
fn iter_header(testfile: &Path, it: &mut FnMut(&str) -> bool) -> bool {
fn iter_header(testfile: &Path,
cfg: Option<&str>,
it: &mut FnMut(&str)) {
let rdr = BufReader::new(File::open(testfile).unwrap());
for ln in rdr.lines() {
// Assume that any directives will be found before the first
// module or function. This doesn't seem to be an optimization
// with a warm page cache. Maybe with a cold one.
let ln = ln.unwrap();
if ln.starts_with("fn") ||
ln.starts_with("mod") {
return true;
} else {
if !(it(ln.trim())) {
return false;
let ln = ln.trim();
if ln.starts_with("fn") || ln.starts_with("mod") {
return;
} else if ln.starts_with("//[") {
// A comment like `//[foo]` is specific to revision `foo`
if let Some(close_brace) = ln.find("]") {
let lncfg = &ln[3..close_brace];
let matches = match cfg {
Some(s) => s == &lncfg[..],
None => false,
};
if matches {
it(&ln[close_brace+1..]);
}
} else {
panic!("malformed condition directive: expected `//[foo]`, found `{}`",
ln)
}
} else if ln.starts_with("//") {
it(&ln[2..]);
}
}
return true;
return;
}
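Putting the header pieces together, a hypothetical test could scope directives to a single revision like this; `revisions`, `compile-flags`, and `rustc-env` are directive names from this file, while the values are invented:

```rust
// revisions: rpass1 rpass2
//[rpass2] compile-flags: --cfg extra
//[rpass2] rustc-env: TEST_MODE=second

// Plain `//` headers apply to every revision; `//[name]` headers are only
// honored when that revision is compiled (the revision is also passed to
// rustc as `--cfg name`).
fn main() {}
```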
fn parse_error_pattern(line: &str) -> Option<String> {
@ -280,6 +338,11 @@ fn parse_compile_flags(line: &str) -> Option<String> {
parse_name_value_directive(line, "compile-flags")
}
fn parse_revisions(line: &str) -> Option<Vec<String>> {
parse_name_value_directive(line, "revisions")
.map(|r| r.split_whitespace().map(|t| t.to_string()).collect())
}
fn parse_run_flags(line: &str) -> Option<String> {
parse_name_value_directive(line, "run-flags")
}
@ -316,8 +379,8 @@ fn parse_pretty_compare_only(line: &str) -> bool {
parse_name_directive(line, "pretty-compare-only")
}
fn parse_exec_env(line: &str) -> Option<(String, String)> {
parse_name_value_directive(line, "exec-env").map(|nv| {
fn parse_env(line: &str, name: &str) -> Option<(String, String)> {
parse_name_value_directive(line, name).map(|nv| {
// nv is either FOO or FOO=BAR
let mut strs: Vec<String> = nv
.splitn(2, '=')


@ -8,9 +8,8 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![allow(deprecated)]
use std::dynamic_lib::DynamicLibrary;
use std::env;
use std::ffi::OsString;
use std::io::prelude::*;
use std::path::PathBuf;
use std::process::{ExitStatus, Command, Child, Output, Stdio};
@ -18,15 +17,22 @@ use std::process::{ExitStatus, Command, Child, Output, Stdio};
fn add_target_env(cmd: &mut Command, lib_path: &str, aux_path: Option<&str>) {
// Need to be sure to put both the lib_path and the aux path in the dylib
// search path for the child.
let mut path = DynamicLibrary::search_path();
let var = if cfg!(windows) {
"PATH"
} else if cfg!(target_os = "macos") {
"DYLD_LIBRARY_PATH"
} else {
"LD_LIBRARY_PATH"
};
let mut path = env::split_paths(&env::var_os(var).unwrap_or(OsString::new()))
.collect::<Vec<_>>();
if let Some(p) = aux_path {
path.insert(0, PathBuf::from(p))
}
path.insert(0, PathBuf::from(lib_path));
// Add the new dylib search path var
let var = DynamicLibrary::envvar();
let newpath = DynamicLibrary::create_path(&path);
let newpath = env::join_paths(&path).unwrap();
cmd.env(var, newpath);
}
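The rewrite above replaces `std::dynamic_lib` with plain `std::env` path manipulation. A standalone sketch of the same pattern, with an invented helper and directory:

```rust
use std::env;
use std::ffi::OsString;
use std::path::PathBuf;

// Prepend `dir` to a PATH-style environment variable and return the new value.
fn prepend_to_path_var(var: &str, dir: &str) -> OsString {
    let mut paths: Vec<PathBuf> =
        env::split_paths(&env::var_os(var).unwrap_or(OsString::new())).collect();
    paths.insert(0, PathBuf::from(dir));
    env::join_paths(&paths).expect("path entry contained a separator character")
}

fn main() {
    println!("{:?}", prepend_to_path_var("LD_LIBRARY_PATH", "/tmp/test-libs"));
}
```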


@ -11,7 +11,8 @@
use common::Config;
use common::{CompileFail, ParseFail, Pretty, RunFail, RunPass, RunPassValgrind};
use common::{Codegen, DebugInfoLldb, DebugInfoGdb, Rustdoc, CodegenUnits};
use errors;
use common::{Incremental};
use errors::{self, ErrorKind};
use header::TestProps;
use header;
use procsrv;
@ -59,6 +60,7 @@ pub fn run(config: Config, testpaths: &TestPaths) {
Codegen => run_codegen_test(&config, &props, &testpaths),
Rustdoc => run_rustdoc_test(&config, &props, &testpaths),
CodegenUnits => run_codegen_units_test(&config, &props, &testpaths),
Incremental => run_incremental_test(&config, &props, &testpaths),
}
}
@ -70,39 +72,77 @@ fn get_output(props: &TestProps, proc_res: &ProcRes) -> String {
}
}
fn for_each_revision<OP>(config: &Config, props: &TestProps, testpaths: &TestPaths,
mut op: OP)
where OP: FnMut(&Config, &TestProps, &TestPaths, Option<&str>)
{
if props.revisions.is_empty() {
op(config, props, testpaths, None)
} else {
for revision in &props.revisions {
let mut revision_props = props.clone();
header::load_props_into(&mut revision_props,
&testpaths.file,
Some(&revision));
revision_props.compile_flags.extend(vec![
format!("--cfg"),
format!("{}", revision),
]);
op(config, &revision_props, testpaths, Some(revision));
}
}
}
fn run_cfail_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
for_each_revision(config, props, testpaths, run_cfail_test_revision);
}
fn run_cfail_test_revision(config: &Config,
props: &TestProps,
testpaths: &TestPaths,
revision: Option<&str>) {
let proc_res = compile_test(config, props, testpaths);
if proc_res.status.success() {
fatal_proc_rec(&format!("{} test compiled successfully!", config.mode)[..],
&proc_res);
fatal_proc_rec(
revision,
&format!("{} test compiled successfully!", config.mode)[..],
&proc_res);
}
check_correct_failure_status(&proc_res);
check_correct_failure_status(revision, &proc_res);
if proc_res.status.success() {
fatal("process did not return an error status");
fatal(revision, "process did not return an error status");
}
let output_to_check = get_output(props, &proc_res);
let expected_errors = errors::load_errors(&testpaths.file);
let expected_errors = errors::load_errors(&testpaths.file, revision);
if !expected_errors.is_empty() {
if !props.error_patterns.is_empty() {
fatal("both error pattern and expected errors specified");
fatal(revision, "both error pattern and expected errors specified");
}
check_expected_errors(expected_errors, testpaths, &proc_res);
check_expected_errors(revision, expected_errors, testpaths, &proc_res);
} else {
check_error_patterns(props, testpaths, &output_to_check, &proc_res);
check_error_patterns(revision, props, testpaths, &output_to_check, &proc_res);
}
check_no_compiler_crash(&proc_res);
check_forbid_output(props, &output_to_check, &proc_res);
check_no_compiler_crash(revision, &proc_res);
check_forbid_output(revision, props, &output_to_check, &proc_res);
}
fn run_rfail_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
for_each_revision(config, props, testpaths, run_rfail_test_revision);
}
fn run_rfail_test_revision(config: &Config,
props: &TestProps,
testpaths: &TestPaths,
revision: Option<&str>) {
let proc_res = compile_test(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("compilation failed!", &proc_res);
fatal_proc_rec(revision, "compilation failed!", &proc_res);
}
let proc_res = exec_compiled_test(config, props, testpaths);
@ -110,19 +150,20 @@ fn run_rfail_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
// The value our Makefile configures valgrind to return on failure
const VALGRIND_ERR: i32 = 100;
if proc_res.status.code() == Some(VALGRIND_ERR) {
fatal_proc_rec("run-fail test isn't valgrind-clean!", &proc_res);
fatal_proc_rec(revision, "run-fail test isn't valgrind-clean!", &proc_res);
}
let output_to_check = get_output(props, &proc_res);
check_correct_failure_status(&proc_res);
check_error_patterns(props, testpaths, &output_to_check, &proc_res);
check_correct_failure_status(revision, &proc_res);
check_error_patterns(revision, props, testpaths, &output_to_check, &proc_res);
}
fn check_correct_failure_status(proc_res: &ProcRes) {
fn check_correct_failure_status(revision: Option<&str>, proc_res: &ProcRes) {
// The value the rust runtime returns on failure
const RUST_ERR: i32 = 101;
if proc_res.status.code() != Some(RUST_ERR) {
fatal_proc_rec(
revision,
&format!("failure produced the wrong error: {}",
proc_res.status),
proc_res);
@ -130,20 +171,29 @@ fn check_correct_failure_status(proc_res: &ProcRes) {
}
fn run_rpass_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
for_each_revision(config, props, testpaths, run_rpass_test_revision);
}
fn run_rpass_test_revision(config: &Config,
props: &TestProps,
testpaths: &TestPaths,
revision: Option<&str>) {
let proc_res = compile_test(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("compilation failed!", &proc_res);
fatal_proc_rec(revision, "compilation failed!", &proc_res);
}
let proc_res = exec_compiled_test(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("test run failed!", &proc_res);
fatal_proc_rec(revision, "test run failed!", &proc_res);
}
}
fn run_valgrind_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
if config.valgrind_path.is_none() {
assert!(!config.force_valgrind);
return run_rpass_test(config, props, testpaths);
@ -152,7 +202,7 @@ fn run_valgrind_test(config: &Config, props: &TestProps, testpaths: &TestPaths)
let mut proc_res = compile_test(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("compilation failed!", &proc_res);
fatal_proc_rec(None, "compilation failed!", &proc_res);
}
let mut new_config = config.clone();
@ -160,11 +210,18 @@ fn run_valgrind_test(config: &Config, props: &TestProps, testpaths: &TestPaths)
proc_res = exec_compiled_test(&new_config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("test run failed!", &proc_res);
fatal_proc_rec(None, "test run failed!", &proc_res);
}
}
fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
for_each_revision(config, props, testpaths, run_pretty_test_revision);
}
fn run_pretty_test_revision(config: &Config,
props: &TestProps,
testpaths: &TestPaths,
revision: Option<&str>) {
if props.pp_exact.is_some() {
logv(config, "testing for exact pretty-printing".to_owned());
} else {
@ -180,7 +237,8 @@ fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
let mut round = 0;
while round < rounds {
logv(config, format!("pretty-printing round {}", round));
logv(config, format!("pretty-printing round {} revision {:?}",
round, revision));
let proc_res = print_source(config,
props,
testpaths,
@ -188,8 +246,10 @@ fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
&props.pretty_mode);
if !proc_res.status.success() {
fatal_proc_rec(&format!("pretty-printing failed in round {}", round),
&proc_res);
fatal_proc_rec(revision,
&format!("pretty-printing failed in round {} revision {:?}",
round, revision),
&proc_res);
}
let ProcRes{ stdout, .. } = proc_res;
@ -215,30 +275,32 @@ fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
expected = expected.replace(&cr, "").to_owned();
}
compare_source(&expected, &actual);
compare_source(revision, &expected, &actual);
// If we're only making sure that the output matches then just stop here
if props.pretty_compare_only { return; }
// Finally, let's make sure it actually appears to remain valid code
let proc_res = typecheck_source(config, props, testpaths, actual);
if !proc_res.status.success() {
fatal_proc_rec("pretty-printed source does not typecheck", &proc_res);
fatal_proc_rec(revision, "pretty-printed source does not typecheck", &proc_res);
}
if !props.pretty_expanded { return }
// additionally, run `--pretty expanded` and try to build it.
let proc_res = print_source(config, props, testpaths, srcs[round].clone(), "expanded");
if !proc_res.status.success() {
fatal_proc_rec("pretty-printing (expanded) failed", &proc_res);
fatal_proc_rec(revision, "pretty-printing (expanded) failed", &proc_res);
}
let ProcRes{ stdout: expanded_src, .. } = proc_res;
let proc_res = typecheck_source(config, props, testpaths, expanded_src);
if !proc_res.status.success() {
fatal_proc_rec("pretty-printed source (expanded) does not typecheck",
&proc_res);
fatal_proc_rec(
revision,
"pretty-printed source (expanded) does not typecheck",
&proc_res);
}
return;
@ -256,7 +318,7 @@ fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
testpaths,
pretty_type.to_owned()),
props.exec_env.clone(),
&config.compile_lib_path,
config.compile_lib_path.to_str().unwrap(),
Some(aux_dir.to_str().unwrap()),
Some(src))
}
@ -275,16 +337,16 @@ fn run_pretty_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
"-L".to_owned(),
aux_dir.to_str().unwrap().to_owned());
args.extend(split_maybe_args(&config.target_rustcflags));
args.extend(split_maybe_args(&props.compile_flags));
args.extend(props.compile_flags.iter().cloned());
return ProcArgs {
prog: config.rustc_path.to_str().unwrap().to_owned(),
args: args,
};
}
fn compare_source(expected: &str, actual: &str) {
fn compare_source(revision: Option<&str>, expected: &str, actual: &str) {
if expected != actual {
error("pretty-printed source does not match expected source");
error(revision, "pretty-printed source does not match expected source");
println!("\n\
expected:\n\
------------------------------------------\n\
@ -322,7 +384,7 @@ actual:\n\
"-L".to_owned(),
aux_dir.to_str().unwrap().to_owned());
args.extend(split_maybe_args(&config.target_rustcflags));
args.extend(split_maybe_args(&props.compile_flags));
args.extend(props.compile_flags.iter().cloned());
// FIXME (#9639): This needs to handle non-utf8 paths
return ProcArgs {
prog: config.rustc_path.to_str().unwrap().to_owned(),
@ -332,6 +394,8 @@ actual:\n\
}
fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
let mut config = Config {
target_rustcflags: cleanup_debug_info_options(&config.target_rustcflags),
host_rustcflags: cleanup_debug_info_options(&config.host_rustcflags),
@ -349,7 +413,7 @@ fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testpaths: &TestPa
// compile test file (it should have 'compile-flags:-g' in the header)
let compiler_run_result = compile_test(config, props, testpaths);
if !compiler_run_result.status.success() {
fatal_proc_rec("compilation failed!", &compiler_run_result);
fatal_proc_rec(None, "compilation failed!", &compiler_run_result);
}
let exe_file = make_exe_name(config, testpaths);
@ -441,7 +505,7 @@ fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testpaths: &TestPa
let tool_path = match config.android_cross_path.to_str() {
Some(x) => x.to_owned(),
None => fatal("cannot find android cross path")
None => fatal(None, "cannot find android cross path")
};
let debugger_script = make_out_name(config, testpaths, "debugger.script");
@ -573,14 +637,14 @@ fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testpaths: &TestPa
testpaths,
proc_args,
environment,
&config.run_lib_path,
config.run_lib_path.to_str().unwrap(),
None,
None);
}
}
if !debugger_run_result.status.success() {
fatal("gdb failed to execute");
fatal(None, "gdb failed to execute");
}
check_debugger_output(&debugger_run_result, &check_lines);
@ -600,8 +664,10 @@ fn find_rust_src_root(config: &Config) -> Option<PathBuf> {
}
fn run_debuginfo_lldb_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
if config.lldb_python_dir.is_none() {
fatal("Can't run LLDB test because LLDB's python path is not set.");
fatal(None, "Can't run LLDB test because LLDB's python path is not set.");
}
let mut config = Config {
@ -615,7 +681,7 @@ fn run_debuginfo_lldb_test(config: &Config, props: &TestProps, testpaths: &TestP
// compile test file (it should have 'compile-flags:-g' in the header)
let compile_result = compile_test(config, props, testpaths);
if !compile_result.status.success() {
fatal_proc_rec("compilation failed!", &compile_result);
fatal_proc_rec(None, "compilation failed!", &compile_result);
}
let exe_file = make_exe_name(config, testpaths);
@ -663,8 +729,11 @@ fn run_debuginfo_lldb_test(config: &Config, props: &TestProps, testpaths: &TestP
script_str.push_str("type category enable Rust\n");
// Set breakpoints on every line that contains the string "#break"
let source_file_name = testpaths.file.file_name().unwrap().to_string_lossy();
for line in &breakpoint_lines {
script_str.push_str(&format!("breakpoint set --line {}\n", line));
script_str.push_str(&format!("breakpoint set --file '{}' --line {}\n",
source_file_name,
line));
}
// Append the other commands
@ -692,7 +761,7 @@ fn run_debuginfo_lldb_test(config: &Config, props: &TestProps, testpaths: &TestP
&rust_src_root);
if !debugger_run_result.status.success() {
fatal_proc_rec("Error while running LLDB", &debugger_run_result);
fatal_proc_rec(None, "Error while running LLDB", &debugger_run_result);
}
check_debugger_output(&debugger_run_result, &check_lines);
@ -725,7 +794,7 @@ fn cmd2procres(config: &Config, testpaths: &TestPaths, cmd: &mut Command)
String::from_utf8(stderr).unwrap())
},
Err(e) => {
fatal(&format!("Failed to setup Python process for \
fatal(None, &format!("Failed to setup Python process for \
LLDB script: {}", e))
}
};
@ -775,7 +844,7 @@ fn parse_debugger_commands(testpaths: &TestPaths, debugger_prefix: &str)
});
}
Err(e) => {
fatal(&format!("Error while parsing debugger commands: {}", e))
fatal(None, &format!("Error while parsing debugger commands: {}", e))
}
}
counter += 1;
@ -799,12 +868,28 @@ fn cleanup_debug_info_options(options: &Option<String>) -> Option<String> {
"-g".to_owned(),
"--debuginfo".to_owned()
];
let new_options =
let mut new_options =
split_maybe_args(options).into_iter()
.filter(|x| !options_to_remove.contains(x))
.collect::<Vec<String>>()
.join(" ");
Some(new_options)
.collect::<Vec<String>>();
let mut i = 0;
while i + 1 < new_options.len() {
if new_options[i] == "-Z" {
// FIXME #31005 MIR missing debuginfo currently.
if new_options[i + 1] == "orbit" {
// Remove "-Z" and "orbit".
new_options.remove(i);
new_options.remove(i);
continue;
}
// Always skip over -Z's argument.
i += 1;
}
i += 1;
}
Some(new_options.join(" "))
}
fn check_debugger_output(debugger_run_result: &ProcRes, check_lines: &[String]) {
@ -857,19 +942,21 @@ fn check_debugger_output(debugger_run_result: &ProcRes, check_lines: &[String])
}
}
if i != num_check_lines {
fatal_proc_rec(&format!("line not found in debugger output: {}",
fatal_proc_rec(None, &format!("line not found in debugger output: {}",
check_lines.get(i).unwrap()),
debugger_run_result);
}
}
}
fn check_error_patterns(props: &TestProps,
fn check_error_patterns(revision: Option<&str>,
props: &TestProps,
testpaths: &TestPaths,
output_to_check: &str,
proc_res: &ProcRes) {
if props.error_patterns.is_empty() {
fatal(&format!("no error pattern specified in {:?}",
fatal(revision,
&format!("no error pattern specified in {:?}",
testpaths.file.display()));
}
let mut next_err_idx = 0;
@ -891,58 +978,144 @@ fn check_error_patterns(props: &TestProps,
let missing_patterns = &props.error_patterns[next_err_idx..];
if missing_patterns.len() == 1 {
fatal_proc_rec(&format!("error pattern '{}' not found!", missing_patterns[0]),
proc_res);
fatal_proc_rec(
revision,
&format!("error pattern '{}' not found!", missing_patterns[0]),
proc_res);
} else {
for pattern in missing_patterns {
error(&format!("error pattern '{}' not found!", *pattern));
error(revision, &format!("error pattern '{}' not found!", *pattern));
}
fatal_proc_rec("multiple error patterns not found", proc_res);
fatal_proc_rec(revision, "multiple error patterns not found", proc_res);
}
}
fn check_no_compiler_crash(proc_res: &ProcRes) {
fn check_no_compiler_crash(revision: Option<&str>, proc_res: &ProcRes) {
for line in proc_res.stderr.lines() {
if line.starts_with("error: internal compiler error:") {
fatal_proc_rec("compiler encountered internal error",
proc_res);
fatal_proc_rec(revision,
"compiler encountered internal error",
proc_res);
}
}
}
fn check_forbid_output(props: &TestProps,
fn check_forbid_output(revision: Option<&str>,
props: &TestProps,
output_to_check: &str,
proc_res: &ProcRes) {
for pat in &props.forbid_output {
if output_to_check.contains(pat) {
fatal_proc_rec("forbidden pattern found in compiler output", proc_res);
fatal_proc_rec(revision,
"forbidden pattern found in compiler output",
proc_res);
}
}
}
fn check_expected_errors(expected_errors: Vec<errors::ExpectedError>,
fn check_expected_errors(revision: Option<&str>,
expected_errors: Vec<errors::ExpectedError>,
testpaths: &TestPaths,
proc_res: &ProcRes) {
// true if we found the error in question
let mut found_flags = vec![false; expected_errors.len()];
if proc_res.status.success() {
fatal("process did not return an error status");
fatal_proc_rec(revision, "process did not return an error status", proc_res);
}
let prefixes = expected_errors.iter().map(|ee| {
let expected = format!("{}:{}:", testpaths.file.display(), ee.line);
let expected = format!("{}:{}:", testpaths.file.display(), ee.line_num);
// On windows just translate all '\' path separators to '/'
expected.replace(r"\", "/")
}).collect::<Vec<String>>();
// If the testcase being checked contains at least one expected "help"
// message, then we'll ensure that all "help" messages are expected.
// Otherwise, all "help" messages reported by the compiler will be ignored.
// This logic also applies to "note" messages.
let (expect_help, expect_note) =
expected_errors.iter()
.fold((false, false),
|(acc_help, acc_note), ee|
(acc_help || ee.kind == "help:", acc_note ||
ee.kind == "note:"));
(acc_help || ee.kind == Some(ErrorKind::Help),
acc_note || ee.kind == Some(ErrorKind::Note)));
// Scan and extract our error/warning messages,
// which look like:
// filename:line1:col1: line2:col2: *error:* msg
// filename:line1:col1: line2:col2: *warning:* msg
// where line1:col1: is the starting point, line2:col2:
// is the ending point, and * represents ANSI color codes.
//
// This pattern is ambiguous on windows, because filename may contain
// a colon, so any path prefix must be detected and removed first.
let mut unexpected = 0;
let mut not_found = 0;
for line in proc_res.stderr.lines() {
let mut was_expected = false;
let mut prev = 0;
for (i, ee) in expected_errors.iter().enumerate() {
if !found_flags[i] {
debug!("prefix={} ee.kind={:?} ee.msg={} line={}",
prefixes[i],
ee.kind,
ee.msg,
line);
// Suggestions have no line number in their output, so take on the line number of
// the previous expected error
if ee.kind == Some(ErrorKind::Suggestion) {
assert!(expected_errors[prev].kind == Some(ErrorKind::Help),
"SUGGESTIONs must be preceded by a HELP");
if line.contains(&ee.msg) {
found_flags[i] = true;
was_expected = true;
break;
}
}
if
(prefix_matches(line, &prefixes[i]) || continuation(line)) &&
(ee.kind.is_none() || line.contains(&ee.kind.as_ref().unwrap().to_string())) &&
line.contains(&ee.msg)
{
found_flags[i] = true;
was_expected = true;
break;
}
}
prev = i;
}
// ignore this msg which gets printed at the end
if line.contains("aborting due to") {
was_expected = true;
}
if !was_expected && is_unexpected_compiler_message(line, expect_help, expect_note) {
error(revision, &format!("unexpected compiler message: '{}'", line));
unexpected += 1;
}
}
for (i, &flag) in found_flags.iter().enumerate() {
if !flag {
let ee = &expected_errors[i];
error(revision, &format!("expected {} on line {} not found: {}",
ee.kind.as_ref()
.map_or("message".into(),
|k| k.to_string()),
ee.line_num, ee.msg));
not_found += 1;
}
}
if unexpected > 0 || not_found > 0 {
fatal_proc_rec(
revision,
&format!("{} unexpected errors found, {} expected errors not found",
unexpected, not_found),
proc_res);
}
fn prefix_matches(line: &str, prefix: &str) -> bool {
use std::ascii::AsciiExt;
@ -960,68 +1133,6 @@ fn check_expected_errors(expected_errors: Vec<errors::ExpectedError>,
fn continuation( line: &str) -> bool {
line.starts_with(" ") || line.starts_with("(")
}
// Scan and extract our error/warning messages,
// which look like:
// filename:line1:col1: line2:col2: *error:* msg
// filename:line1:col1: line2:col2: *warning:* msg
// where line1:col1: is the starting point, line2:col2:
// is the ending point, and * represents ANSI color codes.
//
// This pattern is ambiguous on windows, because filename may contain
// a colon, so any path prefix must be detected and removed first.
for line in proc_res.stderr.lines() {
let mut was_expected = false;
let mut prev = 0;
for (i, ee) in expected_errors.iter().enumerate() {
if !found_flags[i] {
debug!("prefix={} ee.kind={} ee.msg={} line={}",
prefixes[i],
ee.kind,
ee.msg,
line);
// Suggestions have no line number in their output, so take on the line number of
// the previous expected error
if ee.kind == "suggestion" {
assert!(expected_errors[prev].kind == "help",
"SUGGESTIONs must be preceded by a HELP");
if line.contains(&ee.msg) {
found_flags[i] = true;
was_expected = true;
break;
}
}
if (prefix_matches(line, &prefixes[i]) || continuation(line)) &&
line.contains(&ee.kind) &&
line.contains(&ee.msg) {
found_flags[i] = true;
was_expected = true;
break;
}
}
prev = i;
}
// ignore this msg which gets printed at the end
if line.contains("aborting due to") {
was_expected = true;
}
if !was_expected && is_unexpected_compiler_message(line, expect_help, expect_note) {
fatal_proc_rec(&format!("unexpected compiler message: '{}'",
line),
proc_res);
}
}
for (i, &flag) in found_flags.iter().enumerate() {
if !flag {
let ee = &expected_errors[i];
fatal_proc_rec(&format!("expected {} on line {} not found: {}",
ee.kind, ee.line, ee.msg),
proc_res);
}
}
}
fn is_unexpected_compiler_message(line: &str, expect_help: bool, expect_note: bool) -> bool {
@ -1066,7 +1177,7 @@ fn scan_char(haystack: &str, needle: char, idx: &mut usize) -> bool {
if *idx >= haystack.len() {
return false;
}
let ch = haystack.char_at(*idx);
let ch = haystack[*idx..].chars().next().unwrap();
if ch != needle {
return false;
}
@ -1077,7 +1188,7 @@ fn scan_char(haystack: &str, needle: char, idx: &mut usize) -> bool {
fn scan_integer(haystack: &str, idx: &mut usize) -> bool {
let mut i = *idx;
while i < haystack.len() {
let ch = haystack.char_at(i);
let ch = haystack[i..].chars().next().unwrap();
if ch < '0' || '9' < ch {
break;
}
@ -1097,7 +1208,7 @@ fn scan_string(haystack: &str, needle: &str, idx: &mut usize) -> bool {
if haystack_i >= haystack.len() {
return false;
}
let ch = haystack.char_at(haystack_i);
let ch = haystack[haystack_i..].chars().next().unwrap();
haystack_i += ch.len_utf8();
if !scan_char(needle, ch, &mut needle_i) {
return false;
@ -1184,7 +1295,7 @@ fn document(config: &Config,
"-o".to_owned(),
out_dir.to_str().unwrap().to_owned(),
testpaths.file.to_str().unwrap().to_owned()];
args.extend(split_maybe_args(&props.compile_flags));
args.extend(props.compile_flags.iter().cloned());
let args = ProcArgs {
prog: config.rustdoc_path.to_str().unwrap().to_owned(),
args: args,
@ -1209,7 +1320,7 @@ fn exec_compiled_test(config: &Config, props: &TestProps,
testpaths,
make_run_args(config, props, testpaths),
env,
&config.run_lib_path,
config.run_lib_path.to_str().unwrap(),
Some(aux_dir.to_str().unwrap()),
None)
}
@ -1281,11 +1392,12 @@ fn compose_and_run_compiler(config: &Config, props: &TestProps,
&aux_testpaths,
aux_args,
Vec::new(),
&config.compile_lib_path,
config.compile_lib_path.to_str().unwrap(),
Some(aux_dir.to_str().unwrap()),
None);
if !auxres.status.success() {
fatal_proc_rec(
None,
&format!("auxiliary build of {:?} failed to compile: ",
aux_testpaths.file.display()),
&auxres);
@ -1302,8 +1414,8 @@ fn compose_and_run_compiler(config: &Config, props: &TestProps,
compose_and_run(config,
testpaths,
args,
Vec::new(),
&config.compile_lib_path,
props.rustc_env.clone(),
config.compile_lib_path.to_str().unwrap(),
Some(aux_dir.to_str().unwrap()),
input)
}
@ -1369,7 +1481,7 @@ fn make_compile_args<F>(config: &Config,
} else {
args.extend(split_maybe_args(&config.target_rustcflags));
}
args.extend(split_maybe_args(&props.compile_flags));
args.extend(props.compile_flags.iter().cloned());
return ProcArgs {
prog: config.rustc_path.to_str().unwrap().to_owned(),
args: args,
@ -1537,13 +1649,20 @@ fn maybe_dump_to_stdout(config: &Config, out: &str, err: &str) {
}
}
fn error(err: &str) { println!("\nerror: {}", err); }
fn error(revision: Option<&str>, err: &str) {
match revision {
Some(rev) => println!("\nerror in revision `{}`: {}", rev, err),
None => println!("\nerror: {}", err)
}
}
fn fatal(err: &str) -> ! { error(err); panic!(); }
fn fatal(revision: Option<&str>, err: &str) -> ! {
error(revision, err); panic!();
}
fn fatal_proc_rec(err: &str, proc_res: &ProcRes) -> ! {
print!("\n\
error: {}\n\
fn fatal_proc_rec(revision: Option<&str>, err: &str, proc_res: &ProcRes) -> ! {
error(revision, err);
print!("\
status: {}\n\
command: {}\n\
stdout:\n\
@ -1555,7 +1674,7 @@ stderr:\n\
{}\n\
------------------------------------------\n\
\n",
err, proc_res.status, proc_res.cmdline, proc_res.stdout,
proc_res.status, proc_res.cmdline, proc_res.stdout,
proc_res.stderr);
panic!();
}
@ -1753,20 +1872,22 @@ fn check_ir_with_filecheck(config: &Config, testpaths: &TestPaths) -> ProcRes {
}
fn run_codegen_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
if config.llvm_bin_path.is_none() {
fatal("missing --llvm-bin-path");
fatal(None, "missing --llvm-bin-path");
}
let mut proc_res = compile_test_and_save_ir(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("compilation failed!", &proc_res);
fatal_proc_rec(None, "compilation failed!", &proc_res);
}
proc_res = check_ir_with_filecheck(config, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("verification with 'FileCheck' failed",
&proc_res);
fatal_proc_rec(None,
"verification with 'FileCheck' failed",
&proc_res);
}
}
@ -1782,13 +1903,15 @@ fn charset() -> &'static str {
}
fn run_rustdoc_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
let out_dir = output_base_name(config, testpaths);
let _ = fs::remove_dir_all(&out_dir);
ensure_dir(&out_dir);
let proc_res = document(config, props, testpaths, &out_dir);
if !proc_res.status.success() {
fatal_proc_rec("rustdoc failed!", &proc_res);
fatal_proc_rec(None, "rustdoc failed!", &proc_res);
}
let root = find_rust_src_root(config).unwrap();
@ -1799,18 +1922,20 @@ fn run_rustdoc_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
.arg(out_dir)
.arg(&testpaths.file));
if !res.status.success() {
fatal_proc_rec("htmldocck failed!", &res);
fatal_proc_rec(None, "htmldocck failed!", &res);
}
}
fn run_codegen_units_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
assert!(props.revisions.is_empty(), "revisions not relevant here");
let proc_res = compile_test(config, props, testpaths);
if !proc_res.status.success() {
fatal_proc_rec("compilation failed!", &proc_res);
fatal_proc_rec(None, "compilation failed!", &proc_res);
}
check_no_compiler_crash(&proc_res);
check_no_compiler_crash(None, &proc_res);
let prefix = "TRANS_ITEM ";
@ -1821,7 +1946,7 @@ fn run_codegen_units_test(config: &Config, props: &TestProps, testpaths: &TestPa
.map(|s| (&s[prefix.len()..]).to_string())
.collect();
let expected: HashSet<String> = errors::load_errors(&testpaths.file)
let expected: HashSet<String> = errors::load_errors(&testpaths.file, None)
.iter()
.map(|e| e.msg.trim().to_string())
.collect();
@ -1843,3 +1968,67 @@ fn run_codegen_units_test(config: &Config, props: &TestProps, testpaths: &TestPa
panic!();
}
}
fn run_incremental_test(config: &Config, props: &TestProps, testpaths: &TestPaths) {
// Basic plan for a test incremental/foo/bar.rs:
// - load list of revisions pass1, fail2, pass3
// - each should begin with `rpass`, `rfail`, or `cfail`
// - if `rpass`, expect compile and execution to succeed
// - if `cfail`, expect compilation to fail
// - if `rfail`, expect execution to fail
// - create a directory build/foo/bar.incremental
// - compile foo/bar.rs with -Z incremental=.../foo/bar.incremental and -C pass1
// - because name of revision starts with "pass", expect success
// - compile foo/bar.rs with -Z incremental=.../foo/bar.incremental and -C fail2
// - because name of revision starts with "fail", expect an error
// - load expected errors as usual, but filter for those that end in `[fail2]`
// - compile foo/bar.rs with -Z incremental=.../foo/bar.incremental and -C pass3
// - because name of revision starts with "pass", expect success
// - execute build/foo/bar.exe and save output
//
// FIXME -- use non-incremental mode as an oracle? That doesn't apply
// to #[rustc_dirty] and clean tests I guess
assert!(!props.revisions.is_empty(), "incremental tests require a list of revisions");
let output_base_name = output_base_name(config, testpaths);
// Create the incremental workproduct directory.
let incremental_dir = output_base_name.with_extension("incremental");
if incremental_dir.exists() {
fs::remove_dir_all(&incremental_dir).unwrap();
}
fs::create_dir_all(&incremental_dir).unwrap();
if config.verbose {
print!("incremental_dir={}", incremental_dir.display());
}
for revision in &props.revisions {
let mut revision_props = props.clone();
header::load_props_into(&mut revision_props, &testpaths.file, Some(&revision));
revision_props.compile_flags.extend(vec![
format!("-Z"),
format!("incremental={}", incremental_dir.display()),
format!("--cfg"),
format!("{}", revision),
]);
if config.verbose {
print!("revision={:?} revision_props={:#?}", revision, revision_props);
}
if revision.starts_with("rpass") {
run_rpass_test_revision(config, &revision_props, testpaths, Some(&revision));
} else if revision.starts_with("rfail") {
run_rfail_test_revision(config, &revision_props, testpaths, Some(&revision));
} else if revision.starts_with("cfail") {
run_cfail_test_revision(config, &revision_props, testpaths, Some(&revision));
} else {
fatal(
Some(revision),
"revision name must begin with rpass, rfail, or cfail");
}
}
}
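To make the plan in `run_incremental_test` concrete, a hypothetical incremental test could look like the sketch below. Each revision is compiled against the same incremental directory with `--cfg <revision>`, and because both names start with `rpass`, both compilations and runs are expected to succeed. Note it only builds when one of the two cfgs is set, i.e. when driven by the harness:

```rust
// revisions: rpass1 rpass2
//
// Hypothetical test source (not from the tree): the body changes between
// revisions, exercising reuse of the incremental work products.

#[cfg(rpass1)]
fn answer() -> u32 { 1 }

#[cfg(rpass2)]
fn answer() -> u32 { 2 }

fn main() {
    println!("answer = {}", answer());
}
```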


@ -9,6 +9,7 @@
* [Comments](comments.md)
* [if](if.md)
* [Loops](loops.md)
* [Vectors](vectors.md)
* [Ownership](ownership.md)
* [References and Borrowing](references-and-borrowing.md)
* [Lifetimes](lifetimes.md)
@ -18,7 +19,6 @@
* [Match](match.md)
* [Patterns](patterns.md)
* [Method Syntax](method-syntax.md)
* [Vectors](vectors.md)
* [Strings](strings.md)
* [Generics](generics.md)
* [Traits](traits.md)


@ -131,7 +131,7 @@ declarations.
## Trait objects with associated types
There’s one more bit of syntax we should talk about: trait objects. If you
try to create a trait object from an associated type, like this:
try to create a trait object from a trait with an associated type, like this:
```rust,ignore
# trait Graph {


@ -17,12 +17,12 @@ function result.
The most common case of coercion is removing mutability from a reference:
* `&mut T` to `&T`
An analogous conversion is to remove mutability from a
[raw pointer](raw-pointers.md):
* `*mut T` to `*const T`
References can also be coerced to raw pointers:
* `&T` to `*const T`
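All three coercions can be seen together in a short example (a sketch, not from the original chapter):

```rust
fn takes_shared(_: &i32) {}

fn main() {
    let mut x = 5;

    // `&mut T` coerces to `&T` at the call site.
    takes_shared(&mut x);

    // `*mut T` coerces to `*const T`.
    let m: *mut i32 = &mut x;
    let c: *const i32 = m;

    // `&T` coerces to `*const T`.
    let y = 7;
    let p: *const i32 = &y;

    let _ = (c, p);
}
```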
@ -32,7 +32,7 @@ References can also be coerced to raw pointers:
Custom coercions may be defined using [`Deref`](deref-coercions.md).
Coercion is transitive.
# `as`
The `as` keyword does safe casting:
@ -64,7 +64,7 @@ A cast `e as U` is also valid in any of the following cases:
and `U` is an integer type; *enum-cast*
* `e` has type `bool` or `char` and `U` is an integer type; *prim-int-cast*
* `e` has type `u8` and `U` is `char`; *u8-char-cast*
For example
```rust
@ -98,9 +98,9 @@ The semantics of numeric casts are:
[float-int]: https://github.com/rust-lang/rust/issues/10184
[float-float]: https://github.com/rust-lang/rust/issues/15536
## Pointer casts
Perhaps surprisingly, it is safe to cast [raw pointers](raw-pointers.md) to and
from integers, and to cast between pointers to different types subject to
some constraints. It is only unsafe to dereference the pointer:
@ -114,7 +114,7 @@ let b = a as u32;
* `e` has type `*T`, `U` has type `*U_0`, and either `U_0: Sized` or
`unsize_kind(T) == unsize_kind(U_0)`; a *ptr-ptr-cast*
* `e` has type `*T` and `U` is a numeric type, while `T: Sized`; *ptr-addr-cast*
* `e` is an integer and `U` is `*U_0`, while `U_0: Sized`; *addr-ptr-cast*


@ -204,7 +204,7 @@ borrow checker. Generally we know that such mutations won't happen in a nested f
to check.
For large, complicated programs, it becomes useful to put some things in `RefCell`s to make things
simpler. For example, a lot of the maps in [the `ctxt` struct][ctxt] in the Rust compiler internals
simpler. For example, a lot of the maps in the `ctxt` struct in the Rust compiler internals
are inside this wrapper. These are only modified once (during creation, which is not right after
initialization) or a couple of times in well-separated places. However, since this struct is
pervasively used everywhere, juggling mutable and immutable pointers would be hard (perhaps
@ -235,7 +235,6 @@ At runtime each borrow causes a modification/check of the refcount.
[cell-mod]: ../std/cell/
[cell]: ../std/cell/struct.Cell.html
[refcell]: ../std/cell/struct.RefCell.html
[ctxt]: ../rustc/middle/ty/struct.ctxt.html
# Synchronous types


@ -371,14 +371,13 @@ assert_eq!(6, answer);
This gives us these long, related errors:
```text
error: the trait `core::marker::Sized` is not implemented for the type
`core::ops::Fn(i32) -> i32` [E0277]
error: the trait bound `core::ops::Fn(i32) -> i32 : core::marker::Sized` is not satisfied [E0277]
fn factory() -> (Fn(i32) -> i32) {
^~~~~~~~~~~~~~~~
note: `core::ops::Fn(i32) -> i32` does not have a constant size known at compile-time
fn factory() -> (Fn(i32) -> i32) {
^~~~~~~~~~~~~~~~
error: the trait `core::marker::Sized` is not implemented for the type `core::ops::Fn(i32) -> i32` [E0277]
error: the trait bound `core::ops::Fn(i32) -> i32 : core::marker::Sized` is not satisfied [E0277]
let f = factory();
^
note: `core::ops::Fn(i32) -> i32` does not have a constant size known at compile-time
@ -502,5 +501,5 @@ assert_eq!(6, answer);
```
By making the inner closure a `move Fn`, we create a new stack frame for our
closure. By `Box`ing it up, we’ve given it a known size, and allowing it to
closure. By `Box`ing it up, we’ve given it a known size, allowing it to
escape our stack frame.
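The final shape being described — a `move` closure boxed so it has a known size — is roughly the following (a sketch along the lines of the chapter’s `factory` example; newer Rust spells the return type `Box<dyn Fn(i32) -> i32>`):

```rust
fn factory() -> Box<Fn(i32) -> i32> {
    let num = 5;
    Box::new(move |x| x + num)
}

fn main() {
    let f = factory();
    let answer = f(1);
    assert_eq!(6, answer);
}
```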


@ -8,12 +8,12 @@ extend the compiler's behavior with new syntax extensions, lint checks, etc.
A plugin is a dynamic library crate with a designated *registrar* function that
registers extensions with `rustc`. Other crates can load these extensions using
the crate attribute `#![plugin(...)]`. See the
[`rustc_plugin`](../rustc_plugin/index.html) documentation for more about the
`rustc_plugin` documentation for more about the
mechanics of defining and loading a plugin.
If present, arguments passed as `#![plugin(foo(... args ...))]` are not
interpreted by rustc itself. They are provided to the plugin through the
`Registry`'s [`args` method](../rustc_plugin/registry/struct.Registry.html#method.args).
`Registry`'s `args` method.
In the vast majority of cases, a plugin should *only* be used through
`#![plugin]` and not through an `extern crate` item. Linking a plugin would
@ -30,7 +30,7 @@ of a library.
Plugins can extend Rust's syntax in various ways. One kind of syntax extension
is the procedural macro. These are invoked the same way as [ordinary
macros](macros.html), but the expansion is performed by arbitrary Rust
code that manipulates [syntax trees](../syntax/ast/index.html) at
code that manipulates syntax trees at
compile time.
Let's write a plugin
@ -120,11 +120,8 @@ The advantages over a simple `fn(&str) -> u32` are:
In addition to procedural macros, you can define new
[`derive`](../reference.html#derive)-like attributes and other kinds of
extensions. See
[`Registry::register_syntax_extension`](../rustc_plugin/registry/struct.Registry.html#method.register_syntax_extension)
and the [`SyntaxExtension`
enum](https://doc.rust-lang.org/syntax/ext/base/enum.SyntaxExtension.html). For
a more involved macro example, see
extensions. See `Registry::register_syntax_extension` and the `SyntaxExtension`
enum. For a more involved macro example, see
[`regex_macros`](https://github.com/rust-lang/regex/blob/master/regex_macros/src/lib.rs).
@ -132,7 +129,7 @@ a more involved macro example, see
Some of the [macro debugging tips](macros.html#debugging-macro-code) are applicable.
You can use [`syntax::parse`](../syntax/parse/index.html) to turn token trees into
You can use `syntax::parse` to turn token trees into
higher-level syntax elements like expressions:
```ignore
@ -148,30 +145,21 @@ Looking through [`libsyntax` parser
code](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/parser.rs)
will give you a feel for how the parsing infrastructure works.
Keep the [`Span`s](../syntax/codemap/struct.Span.html) of
everything you parse, for better error reporting. You can wrap
[`Spanned`](../syntax/codemap/struct.Spanned.html) around
your custom data structures.
Keep the `Span`s of everything you parse, for better error reporting. You can
wrap `Spanned` around your custom data structures.
Calling
[`ExtCtxt::span_fatal`](../syntax/ext/base/struct.ExtCtxt.html#method.span_fatal)
will immediately abort compilation. It's better to instead call
[`ExtCtxt::span_err`](../syntax/ext/base/struct.ExtCtxt.html#method.span_err)
and return
[`DummyResult`](../syntax/ext/base/struct.DummyResult.html),
so that the compiler can continue and find further errors.
Calling `ExtCtxt::span_fatal` will immediately abort compilation. It's better to
instead call `ExtCtxt::span_err` and return `DummyResult` so that the compiler
can continue and find further errors.
To print syntax fragments for debugging, you can use
[`span_note`](../syntax/ext/base/struct.ExtCtxt.html#method.span_note) together
with
[`syntax::print::pprust::*_to_string`](https://doc.rust-lang.org/syntax/print/pprust/index.html#functions).
To print syntax fragments for debugging, you can use `span_note` together with
`syntax::print::pprust::*_to_string`.
The example above produced an integer literal using
[`AstBuilder::expr_usize`](../syntax/ext/build/trait.AstBuilder.html#tymethod.expr_usize).
The example above produced an integer literal using `AstBuilder::expr_usize`.
As an alternative to the `AstBuilder` trait, `libsyntax` provides a set of
[quasiquote macros](../syntax/ext/quote/index.html). They are undocumented and
very rough around the edges. However, the implementation may be a good
starting point for an improved quasiquote as an ordinary plugin library.
quasiquote macros. They are undocumented and very rough around the edges.
However, the implementation may be a good starting point for an improved
quasiquote as an ordinary plugin library.
# Lint plugins
@ -239,12 +227,11 @@ foo.rs:4 fn lintme() { }
The components of a lint plugin are:
* one or more `declare_lint!` invocations, which define static
[`Lint`](../rustc/lint/struct.Lint.html) structs;
* one or more `declare_lint!` invocations, which define static `Lint` structs;
* a struct holding any state needed by the lint pass (here, none);
* a [`LintPass`](../rustc/lint/trait.LintPass.html)
* a `LintPass`
implementation defining how to check each syntax element. A single
`LintPass` may call `span_lint` for several different `Lint`s, but should
register them all through the `get_lints` method.

View File

@ -94,6 +94,52 @@ fn main() {
}
```
As closures can capture variables from their environment, we can also try to
bring some data into the other thread:
```rust,ignore
use std::thread;
fn main() {
let x = 1;
thread::spawn(|| {
println!("x is {}", x);
});
}
```
However, this gives us an error:
```text
5:19: 7:6 error: closure may outlive the current function, but it
borrows `x`, which is owned by the current function
...
5:19: 7:6 help: to force the closure to take ownership of `x` (and any other referenced variables),
use the `move` keyword, as shown:
thread::spawn(move || {
println!("x is {}", x);
});
```
This is because by default closures capture variables by reference, and thus the
closure only captures a _reference to `x`_. This is a problem, because the
thread may outlive the scope of `x`, leading to a dangling pointer.
To fix this, we use a `move` closure as mentioned in the error message. `move`
closures are explained in depth [here](closures.html#move-closures); basically
they move variables from their environment into themselves.
```rust
use std::thread;
fn main() {
let x = 1;
thread::spawn(move || {
println!("x is {}", x);
});
}
```
Many languages have the ability to execute threads, but it's wildly unsafe.
There are entire books about how to prevent errors that occur from shared
mutable state. Rust helps out with its type system here as well, by preventing
@ -116,7 +162,7 @@ The same [ownership system](ownership.html) that helps prevent using pointers
incorrectly also helps rule out data races, one of the worst kinds of
concurrency bugs.
As an example, here is a Rust program that would have a data race in many
As an example, here is a Rust program that could have a data race in many
languages. It will not compile:
```ignore
@ -145,23 +191,69 @@ This gives us an error:
```
Rust knows this wouldn't be safe! If we had a reference to `data` in each
thread, and the thread takes ownership of the reference, we'd have three
owners!
thread, and the thread takes ownership of the reference, we'd have three owners!
`data` gets moved out of `main` in the first call to `spawn()`, so subsequent
calls in the loop cannot use this variable.
So, we need some type that lets us have more than one reference to a value and
that we can share between threads, that is it must implement `Sync`.
Note that this specific example will not cause a data race since different array
indices are being accessed. But this can't be determined at compile time, and in
a similar situation where `i` is a constant or is random, you would have a data
race.
We'll use `Arc<T>`, Rust's standard atomic reference count type, which
wraps a value up with some extra runtime bookkeeping which allows us to
share the ownership of the value between multiple references at the same time.
So, we need some type that lets us have more than one owning reference to a
value. Usually, we'd use `Rc<T>` for this, which is a reference counted type
that provides shared ownership. It has some runtime bookkeeping that keeps track
of the number of references to it, hence the "reference count" part of its name.
The bookkeeping consists of a count of how many of these references exist to
the value, hence the reference count part of the name.
Calling `clone()` on an `Rc<T>` will return a new owned reference and bump the
internal reference count. We create one of these for each thread:
```ignore
use std::thread;
use std::time::Duration;
use std::rc::Rc;
fn main() {
let mut data = Rc::new(vec![1, 2, 3]);
for i in 0..3 {
// create a new owned reference
let data_ref = data.clone();
// use it in a thread
thread::spawn(move || {
data_ref[i] += 1;
});
}
thread::sleep(Duration::from_millis(50));
}
```
This won't work, however, and will give us the error:
```text
13:9: 13:22 error: the trait bound `alloc::rc::Rc<collections::vec::Vec<i32>> : core::marker::Send`
is not satisfied
...
13:9: 13:22 note: `alloc::rc::Rc<collections::vec::Vec<i32>>`
cannot be sent between threads safely
```
As the error message mentions, `Rc` cannot be sent between threads safely. This
is because the internal reference count is not maintained in a thread-safe
manner and so can have a data race.
To solve this, we'll use `Arc<T>`, Rust's standard atomic reference count type.
The atomic part means `Arc<T>` can safely be accessed from multiple threads.
To do this, the compiler guarantees that mutations of the internal count use
indivisible operations, which cannot have data races.
In essence, `Arc<T>` is a type that lets us share ownership of data _across
threads_.
```ignore
use std::thread;
@ -182,7 +274,7 @@ fn main() {
}
```
We now call `clone()` on our `Arc<T>`, which increases the internal count.
Similarly to last time, we use `clone()` to create a new owned handle.
This handle is then moved into the new thread.
And... still gives us an error.
@ -193,14 +285,21 @@ And... still gives us an error.
^~~~
```
`Arc<T>` assumes one more property about its contents to ensure that it is safe
to share across threads: it assumes its contents are `Sync`. This is true for
our value if it's immutable, but we want to be able to mutate it, so we need
something else to persuade the borrow checker we know what we're doing.
`Arc<T>` by default has immutable contents. It allows the _sharing_ of data
between threads, but shared mutable data is unsafe and, when threads are
involved, can cause data races!
It looks like we need some type that allows us to safely mutate a shared value,
for example a type that can ensure only one thread at a time is able to
mutate the value inside it at any one time.
Usually when we wish to make something in an immutable position mutable, we use
`Cell<T>` or `RefCell<T>` which allow safe mutation via runtime checks or
otherwise (see also: [Choosing Your Guarantees](choosing-your-guarantees.html)).
However, similar to `Rc`, these are not thread safe. If we try using these, we
will get an error about these types not being `Sync`, and the code will fail to
compile.
It looks like we need some type that allows us to safely mutate a shared value
across threads, for example a type that can ensure only one thread at a time is
able to mutate the value inside it at any one time.
For that, we can use the `Mutex<T>` type!
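Here is a minimal sketch of the `Arc<Mutex<T>>` combination just described (the vector contents and sleep duration mirror the earlier examples):

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            // Locking gives exclusive access until the guard goes out of scope.
            let mut data = data.lock().unwrap();
            data[i] += 1;
        });
    }

    thread::sleep(Duration::from_millis(50));
}
```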
@ -229,7 +328,17 @@ fn main() {
Note that the value of `i` is bound (copied) to the closure and not shared
among the threads.
Also note that [`lock`](../std/sync/struct.Mutex.html#method.lock) method of
We're "locking" the mutex here. A mutex (short for "mutual exclusion"), as
mentioned, only allows one thread at a time to access a value. When we wish to
access the value, we use `lock()` on it. This will "lock" the mutex, and no
other thread will be able to lock it (and hence, do anything with the value)
until we're done with it. If a thread attempts to lock a mutex which is already
locked, it will wait until the other thread releases the lock.
The lock "release" here is implicit; when the result of the lock (in this case,
`data`) goes out of scope, the lock is automatically released.
Note that the [`lock`](../std/sync/struct.Mutex.html#method.lock) method of
[`Mutex`](../std/sync/struct.Mutex.html) has this signature:
```ignore

View File

@ -72,7 +72,7 @@ a [`Drop`][drop] implementation.
# Initializing
Both `const` and `static` have requirements for giving them a value. They must
be given a value thats a constant expression. In other words, you cannot use
be given a value thats a constant expression. In other words, you cannot use
the result of a function call or anything similarly complex or at runtime.
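A minimal sketch of the distinction (the names are only examples):

```rust
const N: i32 = 5;                    // a literal is a constant expression
const M: i32 = N * 2;                // so is arithmetic on other constants
static NAME: &'static str = "Steve"; // string literals work too
// const AT_RUNTIME: i32 = some_function(); // error: not a constant expression
```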
# Which construct should I use?

View File

@ -118,7 +118,7 @@ build deps examples libphrases-a7448e02a0468eaa.rlib native
`libphrases-hash.rlib` is the compiled crate. Before we see how to use this
crate from another crate, lets break it up into multiple files.
# Multiple file crates
# Multiple File Crates
If each crate were just one file, these files would get very large. Its often
easier to split up crates into multiple files, and Rust supports this in two
@ -190,13 +190,19 @@ mod farewells;
```
Again, these declarations tell Rust to look for either
`src/english/greetings.rs` and `src/japanese/greetings.rs` or
`src/english/farewells/mod.rs` and `src/japanese/farewells/mod.rs`. Because
these sub-modules dont have their own sub-modules, weve chosen to make them
`src/english/greetings.rs` and `src/japanese/farewells.rs`. Whew!
`src/english/greetings.rs`, `src/english/farewells.rs`,
`src/japanese/greetings.rs` and `src/japanese/farewells.rs` or
`src/english/greetings/mod.rs`, `src/english/farewells/mod.rs`,
`src/japanese/greetings/mod.rs` and
`src/japanese/farewells/mod.rs`. Because these sub-modules dont have
their own sub-modules, weve chosen to make them
`src/english/greetings.rs`, `src/english/farewells.rs`,
`src/japanese/greetings.rs` and `src/japanese/farewells.rs`. Whew!
The contents of `src/english/greetings.rs` and `src/japanese/farewells.rs` are
both empty at the moment. Lets add some functions.
The contents of `src/english/greetings.rs`,
`src/english/farewells.rs`, `src/japanese/greetings.rs` and
`src/japanese/farewells.rs` are all empty at the moment. Lets add
some functions.
Put this in `src/english/greetings.rs`:

View File

@ -55,7 +55,7 @@ BOOM times 100!!!
BOOM times 1!!!
```
The TNT goes off before the firecracker does, because it was declared
The `tnt` goes off before the `firecracker` does, because it was declared
afterwards. Last in, first out.
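A minimal sketch of the example being discussed (the struct and field names are assumed from the surrounding chapter):

```rust
struct Firework {
    strength: i32,
}

impl Drop for Firework {
    fn drop(&mut self) {
        println!("BOOM times {}!!!", self.strength);
    }
}

fn main() {
    let firecracker = Firework { strength: 1 };
    let tnt = Firework { strength: 100 };
    // `tnt` was declared last, so it is dropped first: BOOM times 100, then 1.
}
```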
So what is `Drop` good for? Generally, `Drop` is used to clean up any resources

View File

@ -2019,6 +2019,16 @@ impl Error for CliError {
CliError::NotFound => "not found",
}
}
fn cause(&self) -> Option<&error::Error> {
match *self {
CliError::Io(ref err) => Some(err),
CliError::Parse(ref err) => Some(err),
// Our custom error doesn't have an underlying cause, but we could
// modify it so that it does.
CliError::NotFound => None,
}
}
}
```

View File

@ -246,6 +246,19 @@ stack backtrace:
13: 0x0 - <unknown>
```
If you need to override an already set `RUST_BACKTRACE` and cannot simply
unset the variable, set it to `0` to avoid getting a backtrace.
Any other value (even no value at all) turns the backtrace on.
```text
$ export RUST_BACKTRACE=1
...
$ RUST_BACKTRACE=0 ./diverges
thread '<main>' panicked at 'This function never returns!', hello.rs:2
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```
`RUST_BACKTRACE` also works with Cargos `run` command:
```text

View File

@ -93,8 +93,8 @@ unofficial locations.
| `armv7-apple-ios` | ✓ | | | ARM iOS |
| `armv7s-apple-ios` | ✓ | | | ARM iOS |
| `aarch64-apple-ios` | ✓ | | | ARM64 iOS |
| `i686-unknown-freebsd` | ✓ | ✓ | | 32-bit FreeBSD |
| `x86_64-unknown-freebsd` | ✓ | ✓ | | 64-bit FreeBSD |
| `i686-unknown-freebsd` | ✓ | ✓ | | 32-bit FreeBSD |
| `x86_64-unknown-freebsd` | ✓ | ✓ | | 64-bit FreeBSD |
| `x86_64-unknown-openbsd` | ✓ | ✓ | | 64-bit OpenBSD |
| `x86_64-unknown-netbsd` | ✓ | ✓ | | 64-bit NetBSD |
| `x86_64-unknown-bitrig` | ✓ | ✓ | | 64-bit Bitrig |
@ -119,19 +119,7 @@ This will download a script, and start the installation. If it all goes well,
youll see this appear:
```text
Welcome to Rust.
This script will download the Rust compiler and its package manager, Cargo, and
install them to /usr/local. You may install elsewhere by running this script
with the --prefix=<path> option.
The installer will run under sudo and may ask you for your password. If you do
not want the script to run sudo then pass it the --disable-sudo flag.
You may uninstall later by running /usr/local/lib/rustlib/uninstall.sh,
or by running this script again with the --uninstall flag.
Continue? (y/N)
Rust is ready to roll.
```
From here, press `y` for yes, and then follow the rest of the prompts.
@ -176,13 +164,15 @@ installed. Doing so will depend on your specific system, consult its
documentation for more details.
If not, there are a number of places where we can get help. The easiest is
[the #rust IRC channel on irc.mozilla.org][irc], which we can access through
[Mibbit][mibbit]. Click that link, and we'll be chatting with other Rustaceans
(a silly nickname we call ourselves) who can help us out. Other great resources
include [the users forum][users], and [Stack Overflow][stackoverflow].
[the #rust-beginners IRC channel on irc.mozilla.org][irc-beginners] and for
general discussion [the #rust IRC channel on irc.mozilla.org][irc], which we
can access through [Mibbit][mibbit]. Then we'll be chatting with other
Rustaceans (a silly nickname we call ourselves) who can help us out. Other great
resources include [the users forum][users] and [Stack Overflow][stackoverflow].
[irc-beginners]: irc://irc.mozilla.org/#rust-beginners
[irc]: irc://irc.mozilla.org/#rust
[mibbit]: http://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust
[mibbit]: http://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-beginners,%23rust
[users]: https://users.rust-lang.org/
[stackoverflow]: http://stackoverflow.com/questions/tagged/rust
@ -429,7 +419,7 @@ first. This leaves the top-level project directory (in this case,
to your code. In this way, using Cargo helps you keep your projects nice and
tidy. There's a place for everything, and everything is in its place.
Now, copy *main.rs* to the *src* directory, and delete the compiled file you
Now, move *main.rs* into the *src* directory, and delete the compiled file you
created with `rustc`. As usual, replace `main` with `main.exe` if you're on
Windows.
@ -513,7 +503,7 @@ Cargo checks to see if any of your projects files have been modified, and onl
rebuilds your project if theyve changed since the last time you built it.
With simple projects, Cargo doesn't bring a whole lot over just using `rustc`,
but it will become useful in future. This is especially true when you start
but it will become useful in the future. This is especially true when you start
using crates; these are synonymous with a library or package in other
programming languages. For complex projects composed of multiple crates, its
much easier to let Cargo coordinate the build. Using Cargo, you can run `cargo

View File

@ -295,7 +295,7 @@ Rust warns us that we havent used the `Result` value. This warning comes from
a special annotation that `io::Result` has. Rust is trying to tell you that
you havent handled a possible error. The right way to suppress the error is
to actually write error handling. Luckily, if we want to crash if theres
a problem, we can use these two little methods. If we can recover from the
a problem, we can use `expect()`. If we can recover from the
error somehow, wed do something else, but well save that for a future
project.
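A minimal sketch of the `expect()` approach, assuming the guessing game's `read_line` call:

```rust
use std::io;

fn main() {
    let mut guess = String::new();
    io::stdin().read_line(&mut guess)
        .expect("Failed to read line"); // panic with this message if reading fails
    println!("You guessed: {}", guess);
}
```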
@ -912,7 +912,7 @@ returned by `parse()`, this is an `enum` like `Ordering`, but in this case,
each variant has some data associated with it: `Ok` is a success, and `Err` is a
failure. Each contains more information: the successfully parsed integer, or an
error type. In this case, we `match` on `Ok(num)`, which sets the name `num` to
the unwrapped `Ok` value (ythe integer), and then we return it on the
the unwrapped `Ok` value (the integer), and then we return it on the
right-hand side. In the `Err` case, we dont care what kind of error it is, so
we just use the catch all `_` instead of a name. This catches everything that
isn't `Ok`, and `continue` lets us move to the next iteration of the loop; in

View File

@ -4,7 +4,7 @@ Rusts take on `if` is not particularly complex, but its much more like the
`if` youll find in a dynamically typed language than in a more traditional
systems language. So lets talk about it, to make sure you grasp the nuances.
`if` is a specific form of a more general concept, the branch. The name comes
`if` is a specific form of a more general concept, the branch, whose name comes
from a branch in a tree: a decision point, where depending on a choice,
multiple paths can be taken.

View File

@ -2,8 +2,7 @@
For extremely low-level manipulations and performance reasons, one
might wish to control the CPU directly. Rust supports using inline
assembly to do this via the `asm!` macro. The syntax roughly matches
that of GCC & Clang:
assembly to do this via the `asm!` macro.
```ignore
asm!(assembly template

View File

@ -14,6 +14,11 @@ Now that you know more Rust, we can talk in detail about how this works.
Ranges (the `0..10`) are 'iterators'. An iterator is something that we can
call the `.next()` method on repeatedly, and it gives us a sequence of things.
(By the way, a range with two dots like `0..10` is inclusive on the left (so it
starts at 0) and exclusive on the right (so it ends at 9). A mathematician
would write "[0, 10)". To get a range that goes all the way up to 10 you can
write `0...10`.)
Like this:
```rust

View File

@ -56,8 +56,8 @@ To fix this, we have to make sure that step four never happens after step
three. The ownership system in Rust does this through a concept called
lifetimes, which describe the scope that a reference is valid for.
When we have a function that takes a reference by argument, we can be implicit
or explicit about the lifetime of the reference:
When we have a function that takes an argument by reference, we can be
implicit or explicit about the lifetime of the reference:
```rust
// implicit
@ -282,14 +282,12 @@ to it.
## Lifetime Elision
Rust supports powerful local type inference in function bodies, but its
forbidden in item signatures to allow reasoning about the types based on
the item signature alone. However, for ergonomic reasons a very restricted
secondary inference algorithm called “lifetime elision” applies in function
signatures. It infers only based on the signature components themselves and not
based on the body of the function, only infers lifetime parameters, and does
this with only three easily memorizable and unambiguous rules. This makes
lifetime elision a shorthand for writing an item signature, while not hiding
Rust supports powerful local type inference in the bodies of functions, but not in their item signatures.
Inference is forbidden there so that types can be determined from the item signature alone.
However, for ergonomic reasons, a very restricted secondary inference algorithm called
“lifetime elision” does apply when judging lifetimes. Lifetime elision is concerned solely with inferring
lifetime parameters, and does so using three easily memorizable and unambiguous rules. This means lifetime
elision acts as a shorthand for writing an item signature, while not hiding
away the actual types involved as full local inference would if applied to it.
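A minimal sketch of what elision buys (the function names are only examples); these two signatures mean exactly the same thing:

```rust
// With elision: no lifetimes written.
fn first_line(s: &str) -> &str {
    s.lines().next().unwrap_or("")
}

// Fully expanded: what the elision rules fill in.
fn first_line_expanded<'a>(s: &'a str) -> &'a str {
    s.lines().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_line("hello\nworld"), first_line_expanded("hello\nworld"));
}
```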
When talking about lifetime elision, we use the term *input lifetime* and

View File

@ -337,8 +337,8 @@ fn main() {
}
```
Instead you need to pass the variable name into the invocation, so its tagged
with the right syntax context.
Instead you need to pass the variable name into the invocation, so that its
tagged with the right syntax context.
```rust
macro_rules! foo {
@ -470,7 +470,7 @@ which syntactic form it matches.
* `ty`: a type. Examples: `i32`; `Vec<(char, String)>`; `&T`.
* `pat`: a pattern. Examples: `Some(t)`; `(17, 'a')`; `_`.
* `stmt`: a single statement. Example: `let x = 3`.
* `block`: a brace-delimited sequence of statements. Example:
* `block`: a brace-delimited sequence of statements and optionally an expression. Example:
`{ log(error, "hi"); return 12; }`.
* `item`: an [item][item]. Examples: `fn foo() { }`; `struct Bar;`.
* `meta`: a "meta item", as found in attributes. Example: `cfg(target_os = "windows")`.

View File

@ -28,18 +28,18 @@ patterns][patterns] that covers all the patterns that are possible here.
[patterns]: patterns.html
One of the many advantages of `match` is it enforces exhaustiveness checking.
For example if we remove the last arm with the underscore `_`, the compiler will
One of the many advantages of `match` is it enforces exhaustiveness checking.
For example if we remove the last arm with the underscore `_`, the compiler will
give us an error:
```text
error: non-exhaustive patterns: `_` not covered
```
Rust is telling us that we forgot a value. The compiler infers from `x` that it
can have any positive 32bit value; for example 1 to 2,147,483,647. The `_` acts
Rust is telling us that we forgot some value. The compiler infers from `x` that it
can have any 32-bit integer value; for example -2,147,483,648 to 2,147,483,647. The `_` acts
as a 'catch-all', and will catch all possible values that *aren't* specified in
an arm of `match`. As you can see with the previous example, we provide `match`
an arm of `match`. As you can see in the previous example, we provide `match`
arms for integers 1-5, if `x` is 6 or any other value, then it is caught by `_`.
`match` is also an expression, which means we can use it on the right-hand
@ -58,7 +58,7 @@ let number = match x {
};
```
Sometimes its a nice way of converting something from one type to another; in
Sometimes its a nice way of converting something from one type to another; in
this example the integers are converted to `String`.
# Matching on enums
@ -90,7 +90,7 @@ fn process_message(msg: Message) {
Again, the Rust compiler checks exhaustiveness, so it demands that you
have a match arm for every variant of the enum. If you leave one off, it
will give you a compile-time error unless you use `_` or provide all possible
will give you a compile-time error unless you use `_` or provide all possible
arms.
Unlike the previous uses of `match`, you cant use the normal `if`

View File

@ -38,7 +38,7 @@ fn start(_argc: isize, _argv: *const *const u8) -> isize {
// for a bare-bones hello world. These are normally
// provided by libstd.
#[lang = "eh_personality"] extern fn eh_personality() {}
#[lang = "panic_fmt"] fn panic_fmt() -> ! { loop {} }
#[lang = "panic_fmt"] extern fn panic_fmt() -> ! { loop {} }
# #[lang = "eh_unwind_resume"] extern fn rust_eh_unwind_resume() {}
# #[no_mangle] pub extern fn rust_eh_register_frames () {}
# #[no_mangle] pub extern fn rust_eh_unregister_frames () {}
@ -65,7 +65,7 @@ pub extern fn main(argc: i32, argv: *const *const u8) -> i32 {
}
#[lang = "eh_personality"] extern fn eh_personality() {}
#[lang = "panic_fmt"] fn panic_fmt() -> ! { loop {} }
#[lang = "panic_fmt"] extern fn panic_fmt() -> ! { loop {} }
# #[lang = "eh_unwind_resume"] extern fn rust_eh_unwind_resume() {}
# #[no_mangle] pub extern fn rust_eh_register_frames () {}
# #[no_mangle] pub extern fn rust_eh_unregister_frames () {}

View File

@ -51,7 +51,7 @@ fn foo() {
}
```
When `v` comes into scope, a new [vector] is created on [the stack][stack],
When `v` comes into scope, a new [vector][vectors] is created on [the stack][stack],
and it allocates space on [the heap][heap] for its elements. When `v` goes out
of scope at the end of `foo()`, Rust will clean up everything related to the
vector, even the heap-allocated memory. This happens deterministically, at the
@ -124,7 +124,7 @@ special annotation here, its the default thing that Rust does.
## The details
The reason that we cannot use a binding after weve moved it is subtle, but
important.
important.
When we write code like this:
@ -148,7 +148,7 @@ The first line allocates memory for the vector object `v` on the stack like
it does for `x` above. But in addition to that it also allocates some memory
on the [heap][sh] for the actual data (`[1, 2, 3]`). Rust copies the address
of this heap allocation to an internal pointer, which is part of the vector
object placed on the stack (let's call it the data pointer).
object placed on the stack (let's call it the data pointer).
It is worth pointing out (even at the risk of stating the obvious) that the
vector object and its data live in separate memory regions instead of being a
@ -163,7 +163,7 @@ does not create a copy of the heap allocation containing the actual data.
Which means that there would be two pointers to the contents of the vector
both pointing to the same memory allocation on the heap. It would violate
Rusts safety guarantees by introducing a data race if one could access both
`v` and `v2` at the same time.
`v` and `v2` at the same time.
For example if we truncated the vector to just two elements through `v2`:

View File

@ -1,7 +1,7 @@
% Patterns
Patterns are quite common in Rust. We use them in [variable
bindings][bindings], [match statements][match], and other places, too. Lets go
bindings][bindings], [match expressions][match], and other places, too. Lets go
on a whirlwind tour of all of the things patterns can do!
[bindings]: variable-bindings.html

View File

@ -7,7 +7,7 @@ of these ones, as well, but these are the most primitive.
# Booleans
Rust has a built in boolean type, named `bool`. It has two values, `true` and `false`:
Rust has a built-in boolean type, named `bool`. It has two values, `true` and `false`:
```rust
let x = true;
@ -89,13 +89,13 @@ Unsigned types use a `u` for their category, and signed types use `i`. The `i`
is for integer. So `u8` is an eight-bit unsigned number, and `i8` is an
eight-bit signed number.
## Fixed size types
## Fixed-size types
Fixed size types have a specific number of bits in their representation. Valid
Fixed-size types have a specific number of bits in their representation. Valid
bit sizes are `8`, `16`, `32`, and `64`. So, `u32` is an unsigned, 32-bit integer,
and `i64` is a signed, 64-bit integer.
## Variable sized types
## Variable-size types
Rust also provides types whose size depends on the size of a pointer of the
underlying machine. These types have size as the category, and come in signed
@ -164,7 +164,7 @@ copying. For example, you might want to reference only one line of a file read
into memory. By nature, a slice is not created directly, but from an existing
variable binding. Slices have a defined length, can be mutable or immutable.
Internally, slices are represented as a pointer to the beginning of the data
Internally, slices are represented as a pointer to the beginning of the data
and a length.
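A minimal sketch of creating a slice from an existing binding (the values are only examples):

```rust
fn main() {
    let a = [0, 1, 2, 3, 4];
    let middle: &[i32] = &a[1..4]; // a slice of `a`: elements 1, 2, and 3
    assert_eq!(middle.len(), 3);
    assert_eq!(middle[0], 1);
}
```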
## Slicing syntax

View File

@ -23,7 +23,7 @@ Before we get to the details, two important notes about the ownership system.
Rust has a focus on safety and speed. It accomplishes these goals through many
zero-cost abstractions, which means that in Rust, abstractions cost as little
as possible in order to make them work. The ownership system is a prime example
of a zero cost abstraction. All of the analysis well talk about in this guide
of a zero-cost abstraction. All of the analysis well talk about in this guide
is _done at compile time_. You do not pay any run-time cost for any of these
features.
@ -163,8 +163,8 @@ both at the same time:
* exactly one mutable reference (`&mut T`).
You may notice that this is very similar, though not exactly the same as,
to the definition of a data race:
You may notice that this is very similar to, though not exactly the same as,
the definition of a data race:
> There is a data race when two or more pointers access the same memory
> location at the same time, where at least one of them is writing, and the
@ -211,9 +211,10 @@ fn main() {
```
In other words, the mutable borrow is held through the rest of our example. What
we want is for the mutable borrow to end _before_ we try to call `println!` and
make an immutable borrow. In Rust, borrowing is tied to the scope that the
borrow is valid for. And our scopes look like this:
we want is for the mutable borrow by `y` to end so that the resource can be
returned to the owner, `x`. `x` can then provide an immutable borrow to `println!`.
In Rust, borrowing is tied to the scope that the borrow is valid for. And our
scopes look like this:
```rust,ignore
let mut x = 5;
@ -378,4 +379,3 @@ statement 1 at 3:14
In the above example, `y` is declared before `x`, meaning that `y` lives longer
than `x`, which is not allowed.

View File

@ -44,7 +44,12 @@ let s = "foo\
assert_eq!("foobar", s);
```
Rust has more than only `&str`s though. A `String`, is a heap-allocated string.
Note that you normally cannot access a `str` directly, but only through a `&str`
reference. This is because `str` is an unsized type which requires additional
runtime information to be usable. For more information see the chapter on
[unsized types][ut].
Rust has more than only `&str`s though. A `String` is a heap-allocated string.
This string is growable, and is also guaranteed to be UTF-8. `String`s are
commonly created by converting from a string slice using the `to_string`
method.
@ -89,7 +94,7 @@ Viewing a `String` as a `&str` is cheap, but converting the `&str` to a
## Indexing
Because strings are valid UTF-8, strings do not support indexing:
Because strings are valid UTF-8, they do not support indexing:
```rust,ignore
let s = "hello";
@ -185,5 +190,6 @@ let hello_world = hello + &world;
This is because `&String` can automatically coerce to a `&str`. This is a
feature called [`Deref` coercions][dc].
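A minimal sketch of that coercion (the helper function is only an example):

```rust
fn takes_str(s: &str) -> usize { s.len() }

fn main() {
    let hello = "Hello ".to_string();
    let world = "world!".to_string();
    let hello_world = hello + &world; // `&world` is a `&String`, coerced to `&str`
    assert_eq!(takes_str(&hello_world), 12);
}
```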
[ut]: unsized-types.html
[dc]: deref-coercions.html
[connect]: ../std/net/struct.TcpStream.html#method.connect

View File

@ -43,39 +43,40 @@
* `!` (`!expr`): bitwise or logical complement. Overloadable (`Not`).
* `!=` (`var != expr`): nonequality comparison. Overloadable (`PartialEq`).
* `%` (`expr % expr`): arithmetic remainder. Overloadable (`Rem`).
* `%=` (`var %= expr`): arithmetic remainder & assignment.
* `%=` (`var %= expr`): arithmetic remainder & assignment. Overloadable (`RemAssign`).
* `&` (`expr & expr`): bitwise and. Overloadable (`BitAnd`).
* `&` (`&expr`): borrow. See [References and Borrowing].
* `&` (`&type`, `&mut type`, `&'a type`, `&'a mut type`): borrowed pointer type. See [References and Borrowing].
* `&=` (`var &= expr`): bitwise and & assignment.
* `&=` (`var &= expr`): bitwise and & assignment. Overloadable (`BitAndAssign`).
* `&&` (`expr && expr`): logical and.
* `*` (`expr * expr`): arithmetic multiplication. Overloadable (`Mul`).
* `*` (`*expr`): dereference.
* `*` (`*const type`, `*mut type`): raw pointer. See [Raw Pointers].
* `*=` (`var *= expr`): arithmetic multiplication & assignment.
* `*=` (`var *= expr`): arithmetic multiplication & assignment. Overloadable (`MulAssign`).
* `+` (`expr + expr`): arithmetic addition. Overloadable (`Add`).
* `+` (`trait + trait`, `'a + trait`): compound type constraint. See [Traits (Multiple Trait Bounds)].
* `+=` (`var += expr`): arithmetic addition & assignment.
* `+=` (`var += expr`): arithmetic addition & assignment. Overloadable (`AddAssign`).
* `,`: argument and element separator. See [Attributes], [Functions], [Structs], [Generics], [Match], [Closures], [Crates and Modules (Importing Modules with `use`)].
* `-` (`expr - expr`): arithmetic subtraction. Overloadable (`Sub`).
* `-` (`- expr`): arithmetic negation. Overloadable (`Neg`).
* `-=` (`var -= expr`): arithmetic subtraction & assignment.
* `-=` (`var -= expr`): arithmetic subtraction & assignment. Overloadable (`SubAssign`).
* `->` (`fn(…) -> type`, `|…| -> type`): function and closure return type. See [Functions], [Closures].
* `-> !` (`fn(…) -> !`, `|…| -> !`): diverging function or closure. See [Diverging Functions].
* `.` (`expr.ident`): member access. See [Structs], [Method Syntax].
* `..` (`..`, `expr..`, `..expr`, `expr..expr`): right-exclusive range literal.
* `..` (`..expr`): struct literal update syntax. See [Structs (Update syntax)].
* `..` (`variant(x, ..)`, `struct_type { x, .. }`): "and the rest" pattern binding. See [Patterns (Ignoring bindings)].
* `...` (`expr ... expr`): inclusive range pattern. See [Patterns (Ranges)].
* `...` (`...expr`, `expr...expr`) *in an expression*: inclusive range expression. See [Iterators].
* `...` (`expr...expr`) *in a pattern*: inclusive range pattern. See [Patterns (Ranges)].
* `/` (`expr / expr`): arithmetic division. Overloadable (`Div`).
* `/=` (`var /= expr`): arithmetic division & assignment.
* `/=` (`var /= expr`): arithmetic division & assignment. Overloadable (`DivAssign`).
* `:` (`pat: type`, `ident: type`): constraints. See [Variable Bindings], [Functions], [Structs], [Traits].
* `:` (`ident: expr`): struct field initializer. See [Structs].
* `:` (`'a: loop {…}`): loop label. See [Loops (Loops Labels)].
* `;`: statement and item terminator.
* `;` (`[…; len]`): part of fixed-size array syntax. See [Primitive Types (Arrays)].
* `<<` (`expr << expr`): left-shift. Overloadable (`Shl`).
* `<<=` (`var <<= expr`): left-shift & assignment.
* `<<=` (`var <<= expr`): left-shift & assignment. Overloadable (`ShlAssign`).
* `<` (`expr < expr`): less-than comparison. Overloadable (`PartialOrd`).
* `<=` (`var <= expr`): less-than or equal-to comparison. Overloadable (`PartialOrd`).
* `=` (`var = expr`, `ident = type`): assignment/equivalence. See [Variable Bindings], [`type` Aliases], generic parameter defaults.
@ -84,14 +85,14 @@
* `>` (`expr > expr`): greater-than comparison. Overloadable (`PartialOrd`).
* `>=` (`var >= expr`): greater-than or equal-to comparison. Overloadable (`PartialOrd`).
* `>>` (`expr >> expr`): right-shift. Overloadable (`Shr`).
* `>>=` (`var >>= expr`): right-shift & assignment.
* `>>=` (`var >>= expr`): right-shift & assignment. Overloadable (`ShrAssign`).
* `@` (`ident @ pat`): pattern binding. See [Patterns (Bindings)].
* `^` (`expr ^ expr`): bitwise exclusive or. Overloadable (`BitXor`).
* `^=` (`var ^= expr`): bitwise exclusive or & assignment.
* `^=` (`var ^= expr`): bitwise exclusive or & assignment. Overloadable (`BitXorAssign`).
* `|` (`expr | expr`): bitwise or. Overloadable (`BitOr`).
* `|` (`pat | pat`): pattern alternatives. See [Patterns (Multiple patterns)].
* `|` (`|…| expr`): closures. See [Closures].
* `|=` (`var |= expr`): bitwise or & assignment.
* `|=` (`var |= expr`): bitwise or & assignment. Overloadable (`BitOrAssign`).
* `||` (`expr || expr`): logical or.
* `_`: "ignored" pattern binding. See [Patterns (Ignoring bindings)].
@ -205,6 +206,7 @@
[Functions (Early Returns)]: functions.html#early-returns
[Functions]: functions.html
[Generics]: generics.html
[Iterators]: iterators.html
[Lifetimes]: lifetimes.html
[Loops (`for`)]: loops.html#for
[Loops (`loop`)]: loops.html#loop

View File

@ -515,7 +515,3 @@ you add more examples.
We havent covered all of the details with writing documentation tests. For more,
please see the [Documentation chapter](documentation.html).
One final note: documentation tests *cannot* be run on binary crates.
To see more on file arrangement see the [Crates and
Modules](crates-and-modules.html) section.

View File

@ -154,7 +154,7 @@ print_area(5);
We get a compile-time error:
```text
error: the trait `HasArea` is not implemented for the type `_` [E0277]
error: the trait bound `_ : HasArea` is not satisfied [E0277]
```
## Trait bounds on generic structs
@ -496,7 +496,7 @@ impl FooBar for Baz {
If we forget to implement `Foo`, Rust will tell us:
```text
error: the trait `main::Foo` is not implemented for the type `main::Baz` [E0277]
error: the trait bound `main::Baz : main::Foo` is not satisfied [E0277]
```
# Deriving

View File

@ -4,7 +4,7 @@ Rusts main draw is its powerful static guarantees about behavior. But safety
checks are conservative by nature: there are some programs that are actually
safe, but the compiler is not able to verify this is true. To write these kinds
of programs, we need to tell the compiler to relax its restrictions a bit. For
this, Rust has a keyword, `unsafe`. Code using `unsafe` has less restrictions
this, Rust has a keyword, `unsafe`. Code using `unsafe` has fewer restrictions
than normal code does.
Lets go over the syntax, and then well talk semantics. `unsafe` is used in

View File

@ -18,14 +18,14 @@ function, rather than leaving it off. Otherwise, youll get an error.
In many languages, a variable binding would be called a *variable*, but Rusts
variable bindings have a few tricks up their sleeves. For example the
left-hand side of a `let` expression is a [pattern][pattern], not a
left-hand side of a `let` statement is a [pattern][pattern], not a
variable name. This means we can do things like:
```rust
let (x, y) = (1, 2);
```
After this expression is evaluated, `x` will be one, and `y` will be two.
After this statement is evaluated, `x` will be one, and `y` will be two.
Patterns are really powerful, and have [their own section][pattern] in the
book. We dont need those features for now, so well keep this in the back
of our minds as we go forward.

View File

@ -56,8 +56,8 @@ v[j];
Indexing with a non-`usize` type gives an error that looks like this:
```text
error: the trait `core::ops::Index<i32>` is not implemented for the type
`collections::vec::Vec<_>` [E0277]
error: the trait bound `collections::vec::Vec<_> : core::ops::Index<i32>`
is not satisfied [E0277]
v[j];
^~~~
note: the type `collections::vec::Vec<_>` cannot be indexed by `i32`
@ -115,6 +115,36 @@ for i in v {
}
```
Note: Once you iterate over the vector by taking ownership of it, you cannot use the vector again.
To iterate over the vector multiple times, take a reference to it instead.
For example, the following code does not compile.
```rust,ignore
let v = vec![1, 2, 3, 4, 5];
for i in v {
println!("Take ownership of the vector and its element {}", i);
}
for i in v {
println!("Take ownership of the vector and its element {}", i);
}
```
Whereas the following works perfectly,
```rust
let v = vec![1, 2, 3, 4, 5];
for i in &v {
println!("This is a reference to {}", i);
}
for i in &v {
println!("This is a reference to {}", i);
}
```
Vectors have many more useful methods, which you can read about in [their
API documentation][vec].

View File

@ -1,4 +1,4 @@
% The (old) Rust Compiler Plugins Guide
This content has moved into
[the Rust Programming Language book](book/plugins.html).
[the Rust Programming Language book](book/compiler-plugins.html).

View File

@ -64,7 +64,7 @@ fn main() {
```
```text
<anon>:10:5: 10:8 error: the trait `Trait` is not implemented for the type `&mut i32` [E0277]
<anon>:10:5: 10:8 error: the trait bound `&mut i32 : Trait` is not satisfied [E0277]
<anon>:10 foo(t);
^~~
```

View File

@ -2,7 +2,7 @@
To bring everything together, we're going to write `std::Vec` from scratch.
Because all the best tools for writing unsafe code are unstable, this
project will only work on nightly (as of Rust 1.2.0). With the exception of the
project will only work on nightly (as of Rust 1.9.0). With the exception of the
allocator API, much of the unstable code we'll use is expected to be stabilized
in a similar form as it is today.

View File

@ -379,6 +379,10 @@ Examples of integer literals of various forms:
0usize; // type usize
```
Note that the Rust syntax considers `-1i8` as an application of the [unary minus
operator](#unary-operator-expressions) to an integer literal `1i8`, rather than
a single integer literal.
##### Floating-point literals
A _floating-point literal_ has one of two forms:
@ -1114,6 +1118,16 @@ type Point = (u8, u8);
let p: Point = (41, 68);
```
Currently a type alias to an enum type cannot be used to qualify the
constructors:
```
enum E { A }
type F = E;
let _: F = E::A; // OK
// let _: F = F::A; // Doesn't work
```
### Structs
A _struct_ is a nominal [struct type](#struct-types) defined with the
@ -1191,7 +1205,8 @@ a = Animal::Cat { name: "Spotty".to_string(), weight: 2.7 };
In this example, `Cat` is a _struct-like enum variant_,
whereas `Dog` is simply called an enum variant.
Enums have a discriminant. You can assign them explicitly:
Each enum value has a _discriminant_, which is an integer associated with it. You
can specify it explicitly:
```
enum Foo {
@ -1199,10 +1214,15 @@ enum Foo {
}
```
If a discriminant isn't assigned, they start at zero, and add one for each
The right-hand side of the specification is interpreted as an `isize` value,
but the compiler is allowed to use a smaller type in the actual memory layout.
The [`repr` attribute](#ffi-attributes) can be added in order to change
the type of the right-hand side and specify the memory layout.
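A minimal sketch of using `repr` as just described (the enum is only an example):

```rust
#[repr(u8)]
enum Small {
    A = 0,
    B = 1,
}

fn main() {
    // The discriminant is now laid out as a `u8`.
    assert_eq!(Small::B as u8, 1);
}
```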
If a discriminant isn't specified, they start at zero, and add one for each
variant, in order.
You can cast an enum to get this value:
You can cast an enum to get its discriminant:
```
# enum Foo { Bar = 123 }
@ -1885,6 +1905,8 @@ type int8_t = i8;
- `should_panic` - indicates that this test function should panic, inverting the success condition.
- `cold` - The function is unlikely to be executed, so optimize it (and calls
to it) differently.
- `naked` - The function utilizes a custom ABI or custom inline ASM that requires
epilogue and prologue to be skipped.
### Static-only attributes
@ -2277,6 +2299,10 @@ The currently implemented features of the reference compiler are:
`#[derive_Foo] #[derive_Bar]`, which can be user-defined syntax
extensions.
* `inclusive_range_syntax` - Allows use of the `a...b` and `...b` syntax for inclusive ranges.
* `inclusive_range` - Allows use of the types that represent desugared inclusive ranges.
* `intrinsics` - Allows use of the "rust-intrinsics" ABI. Compiler intrinsics
are inherently unstable and no promise about them is made.
@ -2747,13 +2773,34 @@ let y = 0..10;
assert_eq!(x, y);
```
Similarly, the `...` operator will construct an object of one of the
`std::ops::RangeInclusive` variants.
```
# #![feature(inclusive_range_syntax)]
1...2; // std::ops::RangeInclusive
...4; // std::ops::RangeToInclusive
```
The following expressions are equivalent.
```
# #![feature(inclusive_range_syntax, inclusive_range)]
let x = std::ops::RangeInclusive::NonEmpty {start: 0, end: 10};
let y = 0...10;
assert_eq!(x, y);
```
### Unary operator expressions
Rust defines the following unary operators. They are all written as prefix operators,
before the expression they apply to.
* `-`
: Negation. May only be applied to numeric types.
: Negation. Signed integer types and floating-point types support negation. It
is an error to apply negation to unsigned types; for example, the compiler
rejects `-1u32`.
* `*`
: Dereference. When applied to a [pointer](#pointer-types) it denotes the
pointed-to location. For pointers to mutable locations, the resulting
@ -3283,6 +3330,10 @@ The primitive types are the following:
* The boolean type `bool` with values `true` and `false`.
* The machine types (integer and floating-point).
* The machine-dependent integer types.
* Arrays
* Tuples
* Slices
* Function pointers
#### Machine types
@ -3860,6 +3911,9 @@ The _heap_ is a general term that describes boxes. The lifetime of an
allocation in the heap depends on the lifetime of the box values pointing to
it. Since box values may themselves be passed in and out of frames, or stored
in the heap, heap allocations may outlive the frame they are allocated within.
An allocation in the heap is guaranteed to reside at a single location in the
heap for the whole lifetime of the allocation - it will never be relocated as
a result of moving a box value.
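A minimal sketch illustrating that guarantee (observing the address through a raw pointer):

```rust
fn main() {
    let a = Box::new(5);
    let addr_before = &*a as *const i32 as usize;
    let b = a; // moving the box value moves the pointer, not the allocation
    let addr_after = &*b as *const i32 as usize;
    assert_eq!(addr_before, addr_after);
}
```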
### Memory ownership

View File

@ -53,7 +53,7 @@ This document is broken into four parts:
cross-cutting topic, starting with
[Ownership and resources](ownership/README.md).
* **[APIs for a changing Rust](changing/README.md)**
* **APIs for a changing Rust**
discusses the forward-compatibility hazards, especially those that interact
with the pre-1.0 library stabilization process.

View File

@ -76,7 +76,7 @@ needs to make about its arguments.
On the other hand, generics can make it more difficult to read and understand a
function's signature. Aim for "natural" parameter types that a neither overly
concrete nor overly abstract. See the discussion on
[traits](../../traits/README.md) for more guidance.
[traits](../traits/README.md) for more guidance.
#### Minimizing ownership assumptions:

View File

@ -27,8 +27,7 @@ explicitly implement to be used by this generic function.
* _Inference_. Since the type parameters to generic functions can usually be
inferred, generic functions can help cut down on verbosity in code where
explicit conversions or other method calls would usually be necessary. See the
[overloading/implicits use case](#use-case-limited-overloading-andor-implicit-conversions)
below.
overloading/implicits use case below.
* _Precise types_. Because generics give a _name_ to the specific type
implementing a trait, it is possible to be precise about places where that
exact type is required or produced. For example, a function
@ -51,7 +50,7 @@ explicitly implement to be used by this generic function.
a `Vec<T>` contains elements of a single concrete type (and, indeed, the
vector representation is specialized to lay these out in line). Sometimes
heterogeneous collections are useful; see
[trait objects](#use-case-trait-objects) below.
trait objects below.
* _Signature verbosity_. Heavy use of generics can bloat function signatures.
**[Ed. note]** This problem may be mitigated by some language improvements; stay tuned.

View File

@ -101,7 +101,7 @@ The convention for a field `foo: T` is:
here may take `&T` or some other type, depending on the context.)
Note that this convention is about getters/setters on ordinary data types, *not*
on [builder objects](../ownership/builders.html).
on [builder objects](../../ownership/builders.html).
### Escape hatches [FIXME]

View File

@ -10,7 +10,3 @@ These are some links to repos with configs which ease the use of rust.
* [kate-config](https://github.com/rust-lang/kate-config)
* [nano-config](https://github.com/rust-lang/nano-config)
* [zsh-config](https://github.com/rust-lang/zsh-config)
## Community-maintained Configs
* [.editorconfig](https://gist.github.com/derhuerst/c9d1b9309e308d9851fa) ([what is this?](http://editorconfig.org/))

View File

@ -117,7 +117,10 @@ class Void(Type):
Type.__init__(self, 0)
def compiler_ctor(self):
return 'void()'
return '::VOID'
def compiler_ctor_ref(self):
return '&' + self.compiler_ctor()
def rust_name(self):
return '()'
@ -163,10 +166,12 @@ class Signed(Number):
def compiler_ctor(self):
if self._llvm_bitwidth is None:
return 'i({})'.format(self.bitwidth())
return '::I{}'.format(self.bitwidth())
else:
return 'i_({}, {})'.format(self.bitwidth(),
self._llvm_bitwidth)
return '::I{}_{}'.format(self.bitwidth(), self._llvm_bitwidth)
def compiler_ctor_ref(self):
return '&' + self.compiler_ctor()
def llvm_name(self):
bw = self._llvm_bitwidth or self.bitwidth()
@ -182,10 +187,12 @@ class Unsigned(Number):
def compiler_ctor(self):
if self._llvm_bitwidth is None:
return 'u({})'.format(self.bitwidth())
return '::U{}'.format(self.bitwidth())
else:
return 'u_({}, {})'.format(self.bitwidth(),
self._llvm_bitwidth)
return '::U{}_{}'.format(self.bitwidth(), self._llvm_bitwidth)
def compiler_ctor_ref(self):
return '&' + self.compiler_ctor()
def llvm_name(self):
bw = self._llvm_bitwidth or self.bitwidth()
@ -200,7 +207,10 @@ class Float(Number):
Number.__init__(self, bitwidth)
def compiler_ctor(self):
return 'f({})'.format(self.bitwidth())
return '::F{}'.format(self.bitwidth())
def compiler_ctor_ref(self):
return '&' + self.compiler_ctor()
def llvm_name(self):
return 'f{}'.format(self.bitwidth())
@ -244,12 +254,16 @@ class Vector(Type):
def compiler_ctor(self):
if self._bitcast is None:
return 'v({}, {})'.format(self._elem.compiler_ctor(),
self._length)
return '{}x{}'.format(self._elem.compiler_ctor(),
self._length)
else:
return 'v_({}, {}, {})'.format(self._elem.compiler_ctor(),
self._bitcast.compiler_ctor(),
self._length)
return '{}x{}_{}'.format(self._elem.compiler_ctor(),
self._length,
self._bitcast.compiler_ctor()
.replace('::', ''))
def compiler_ctor_ref(self):
return '&' + self.compiler_ctor()
def rust_name(self):
return '{}x{}'.format(self._elem.rust_name(), self._length)
@ -284,10 +298,14 @@ class Pointer(Type):
if self._llvm_elem is None:
llvm_elem = 'None'
else:
llvm_elem = 'Some({})'.format(self._llvm_elem.compiler_ctor())
return 'p({}, {}, {})'.format('true' if self._const else 'false',
self._elem.compiler_ctor(),
llvm_elem)
llvm_elem = 'Some({})'.format(self._llvm_elem.compiler_ctor_ref())
return 'Type::Pointer({}, {}, {})'.format(self._elem.compiler_ctor_ref(),
llvm_elem,
'true' if self._const else 'false')
def compiler_ctor_ref(self):
return "{{ static PTR: Type = {}; &PTR }}".format(self.compiler_ctor())
def rust_name(self):
return '*{} {}'.format('const' if self._const else 'mut',
@ -322,8 +340,14 @@ class Aggregate(Type):
raise NotImplementedError()
def compiler_ctor(self):
return 'agg({}, vec![{}])'.format('true' if self._flatten else 'false',
', '.join(elem.compiler_ctor() for elem in self._elems))
parts = "{{ static PARTS: [&'static Type; {}] = [{}]; &PARTS }}"
elems = ', '.join(elem.compiler_ctor_ref() for elem in self._elems)
parts = parts.format(len(self._elems), elems)
return 'Type::Aggregate({}, {})'.format('true' if self._flatten else 'false',
parts)
def compiler_ctor_ref(self):
return "{{ static AGG: Type = {}; &AGG }}".format(self.compiler_ctor())
def rust_name(self):
return '({})'.format(', '.join(elem.rust_name() for elem in self._elems))
@ -518,10 +542,10 @@ class MonomorphicIntrinsic(object):
return self._platform.platform().intrinsic_prefix() + self.intrinsic_suffix()
def compiler_args(self):
return ', '.join(arg.compiler_ctor() for arg in self._args_raw)
return ', '.join(arg.compiler_ctor_ref() for arg in self._args_raw)
def compiler_ret(self):
return self._ret_raw.compiler_ctor()
return self._ret_raw.compiler_ctor_ref()
def compiler_signature(self):
return '({}) -> {}'.format(self.compiler_args(), self.compiler_ret())
@ -691,7 +715,7 @@ def parse_args():
parser.add_argument('-o', '--out', type=argparse.FileType('w'), default=sys.stdout,
help = 'File to output to (default stdout).')
parser.add_argument('-i', '--info', type=argparse.FileType('r'),
help = 'File containing platform specific information to merge into'
help = 'File containing platform specific information to merge into '
'the input files\' header.')
parser.add_argument('in_', metavar="FILE", type=argparse.FileType('r'), nargs='+',
help = 'JSON files to load')
@ -733,24 +757,24 @@ class CompilerDefs(object):
#![allow(unused_imports)]
use {{Intrinsic, i, i_, u, u_, f, v, v_, agg, p, void}};
use {{Intrinsic, Type}};
use IntrinsicDef::Named;
use rustc::middle::ty;
// The default inlining settings trigger a pathological behaviour in
// LLVM, which causes makes compilation very slow. See #28273.
#[inline(never)]
pub fn find<'tcx>(_tcx: &ty::ctxt<'tcx>, name: &str) -> Option<Intrinsic> {{
pub fn find(name: &str) -> Option<Intrinsic> {{
if !name.starts_with("{0}") {{ return None }}
Some(match &name["{0}".len()..] {{'''.format(platform.intrinsic_prefix())
def render(self, mono):
return '''\
"{}" => Intrinsic {{
inputs: vec![{}],
inputs: {{ static INPUTS: [&'static Type; {}] = [{}]; &INPUTS }},
output: {},
definition: Named("{}")
}},'''.format(mono.intrinsic_suffix(),
len(mono._args_raw),
mono.compiler_args(),
mono.compiler_ret(),
mono.llvm_name())

View File

@ -8,6 +8,83 @@
"ret": "f(32-64)",
"args": ["0", "0"]
},
{
"intrinsic": "256_blendv_{0.data_type}",
"width": [256],
"llvm": "blendv.{0.data_type}.256",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "256_broadcast_{0.data_type}",
"width": [256],
"llvm": "vbroadcastf128.{0.data_type}.256",
"ret": "f(32-64)",
"args": ["s8SPc"]
},
{
"intrinsic": "256_cmp_{0.data_type}",
"width": [256],
"llvm": "cmp.{1.data_type}.256",
"ret": "f(32-64)",
"args": ["0", "0", "s8S"]
},
{
"intrinsic": "256_cvtepi32_pd",
"width": [256],
"llvm": "cvtdq2.pd.256",
"ret": "f64",
"args": ["s32h"]
},
{
"intrinsic": "256_cvtepi32_ps",
"width": [256],
"llvm": "cvtdq2.ps.256",
"ret": "f32",
"args": ["s32"]
},
{
"intrinsic": "256_cvtpd_epi32",
"width": [256],
"llvm": "cvt.pd2dq.256",
"ret": "s32h",
"args": ["f64"]
},
{
"intrinsic": "256_cvtpd_ps",
"width": [256],
"llvm": "cvt.pd2.ps.256",
"ret": "f32h",
"args": ["f64"]
},
{
"intrinsic": "256_cvtps_epi32",
"width": [256],
"llvm": "cvt.ps2dq.256",
"ret": "s32",
"args": ["f32"]
},
{
"intrinsic": "256_cvtps_pd",
"width": [256],
"llvm": "cvt.ps2.pd.256",
"ret": "f64",
"args": ["f32h"]
},
{
"intrinsic": "256_cvttpd_epi32",
"width": [256],
"llvm": "cvtt.pd2dq.256",
"ret": "s32h",
"args": ["f64"]
},
{
"intrinsic": "256_cvttps_epi32",
"width": [256],
"llvm": "cvtt.ps2dq.256",
"ret": "s32",
"args": ["f32"]
},
{
"intrinsic": "256_dp_ps",
"width": [256],

View File

@ -0,0 +1,47 @@
{
"llvm_prefix": "llvm.x86.fma.",
"intrinsics": [
{
"intrinsic": "{0.width_mm}_fmadd_{0.data_type}",
"width": [128, 256],
"llvm": "vfmadd.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "{0.width_mm}_fmaddsub_{0.data_type}",
"width": [128, 256],
"llvm": "vfmaddsub.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "{0.width_mm}_fmsub_{0.data_type}",
"width": [128, 256],
"llvm": "vfmsub.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "{0.width_mm}_fmsubadd_{0.data_type}",
"width": [128, 256],
"llvm": "vfmsubadd.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "{0.width_mm}_fnmadd_{0.data_type}",
"width": [128, 256],
"llvm": "vfnmadd.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
},
{
"intrinsic": "{0.width_mm}_fnmsub_{0.data_type}",
"width": [128, 256],
"llvm": "vfnmsub.{0.data_type_short}{0.width_suffix}",
"ret": "f(32-64)",
"args": ["0", "0", "0"]
}
]
}

View File

@ -31,6 +31,7 @@ stable_whitelist = {
'src/libcore',
'src/libstd',
'src/rustc/std_shim',
'src/rustc/test_shim',
'src/test'
}

View File

@ -398,7 +398,7 @@ pub const UNICODE_VERSION: (u64, u64, u64) = (%s, %s, %s);
derived = load_properties("DerivedCoreProperties.txt", want_derived)
scripts = load_properties("Scripts.txt", [])
props = load_properties("PropList.txt",
["White_Space", "Join_Control", "Noncharacter_Code_Point"])
["White_Space", "Join_Control", "Noncharacter_Code_Point", "Pattern_White_Space"])
norm_props = load_properties("DerivedNormalizationProps.txt",
["Full_Composition_Exclusion"])
@ -408,7 +408,7 @@ pub const UNICODE_VERSION: (u64, u64, u64) = (%s, %s, %s);
# category tables
for (name, cat, pfuns) in ("general_category", gencats, ["N", "Cc"]), \
("derived_property", derived, want_derived), \
("property", props, ["White_Space"]):
("property", props, ["White_Space", "Pattern_White_Space"]):
emit_property_module(rf, name, cat, pfuns)
# normalizations and conversions module

View File

@ -1,10 +1,10 @@
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
Copyright (C) 2002-2014 Jason Evans <jasone@canonware.com>.
Copyright (C) 2002-2016 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved.
Copyright (C) 2009-2014 Facebook, Inc. All rights reserved.
Copyright (C) 2009-2016 Facebook, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

View File

@ -1,10 +1,335 @@
Following are change highlights associated with official releases. Important
bug fixes are all mentioned, but internal enhancements are omitted here for
brevity (even though they are more fun to write about). Much more detail can be
found in the git revision history:
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity. Much more detail can be found in the git revision history:
https://github.com/jemalloc/jemalloc
* 4.1.0 (February 28, 2016)
This release is primarily about optimizations, but it also incorporates a lot
of portability-motivated refactoring and enhancements. Many people worked on
this release, to an extent that even with the omission here of minor changes
(see git revision history), and of the people who reported and diagnosed
issues, so much of the work was contributed that starting with this release,
changes are annotated with author credits to help reflect the collaborative
effort involved.
New features:
- Implement decay-based unused dirty page purging, a major optimization with
mallctl API impact. This is an alternative to the existing ratio-based
unused dirty page purging, and is intended to eventually become the sole
purging mechanism. New mallctls:
+ opt.purge
+ opt.decay_time
+ arena.<i>.decay
+ arena.<i>.decay_time
+ arenas.decay_time
+ stats.arenas.<i>.decay_time
(@jasone, @cevans87)
- Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration. This was motivated by the desire to
specify --with-malloc-conf=purge:decay, since the default must remain
purge:ratio until the 5.0.0 release. (@jasone)
- Add MS Visual Studio 2015 support. (@rustyx, @yuslepukhin)
- Make *allocx() size class overflow behavior defined. The maximum
size class is now less than PTRDIFF_MAX to protect applications against
numerical overflow, and all allocation functions are guaranteed to indicate
errors rather than potentially crashing if the request size exceeds the
maximum size class. (@jasone)
- jeprof:
+ Add raw heap profile support. (@jasone)
+ Add --retain and --exclude for backtrace symbol filtering. (@jasone)
Optimizations:
- Optimize the fast path to combine various bootstrapping and configuration
checks and execute more streamlined code in the common case. (@interwq)
- Use linear scan for small bitmaps (used for small object tracking). In
addition to speeding up bitmap operations on 64-bit systems, this reduces
allocator metadata overhead by approximately 0.2%. (@djwatson)
- Separate arena_avail trees, which substantially speeds up run tree
operations. (@djwatson)
- Use memoization (boot-time-computed table) for run quantization. Separate
arena_avail trees reduced the importance of this optimization. (@jasone)
- Attempt mmap-based in-place huge reallocation. This can dramatically speed
up incremental huge reallocation. (@jasone)
Incompatible changes:
- Make opt.narenas unsigned rather than size_t. (@jasone)
Bug fixes:
- Fix stats.cactive accounting regression. (@rustyx, @jasone)
- Handle unaligned keys in hash(). This caused problems for some ARM systems.
(@jasone, Christopher Ferris)
- Refactor arenas array. In addition to fixing a fork-related deadlock, this
makes arena lookups faster and simpler. (@jasone)
- Move retained memory allocation out of the default chunk allocation
function, to a location that gets executed even if the application installs
a custom chunk allocation function. This resolves a virtual memory leak.
(@buchgr)
- Fix a potential tsd cleanup leak. (Christopher Ferris, @jasone)
- Fix run quantization. In practice this bug had no impact unless
applications requested memory with alignment exceeding one page.
(@jasone, @djwatson)
- Fix LinuxThreads-specific bootstrapping deadlock. (Cosmin Paraschiv)
- jeprof:
+ Don't discard curl options if timeout is not defined. (@djwatson)
+ Detect failed profile fetches. (@djwatson)
- Fix stats.arenas.<i>.{dss,lg_dirty_mult,decay_time,pactive,pdirty} for
--disable-stats case. (@jasone)
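The decay-related mallctl names listed above are plain strings passed to jemalloc's control interface. As a rough illustration only (not part of the upstream ChangeLog), the sketch below reads the default decay time, shortens it for arena 0, and triggers a decay pass; it assumes an unprefixed standalone jemalloc build (the copy bundled with rustc prefixes these symbols, e.g. `je_mallctl`), and decay only takes effect when purging is set to `decay`.

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Default decay time (in seconds) inherited by newly created arenas. */
    ssize_t decay_time;
    size_t sz = sizeof(decay_time);
    if (mallctl("arenas.decay_time", &decay_time, &sz, NULL, 0) == 0)
        printf("arenas.decay_time: %zd\n", decay_time);

    /* Shorten the decay time for arena 0 only; -1 disables purging there. */
    ssize_t new_time = 5;
    mallctl("arena.0.decay_time", NULL, NULL, &new_time, sizeof(new_time));

    /* Trigger a decay pass that purges arena 0's sufficiently aged dirty pages. */
    mallctl("arena.0.decay", NULL, NULL, NULL, 0);
    return 0;
}
```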
* 4.0.4 (October 24, 2015)
This bugfix release fixes another xallocx() regression. No other regressions
have come to light in over a month, so this is likely a good starting point
for people who prefer to wait for "dot one" releases with all the major issues
shaken out.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
allocations that have been randomly assigned an offset of 0 when
--enable-cache-oblivious configure option is enabled.
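Several of the 4.0.x fixes above and below concern xallocx() with MALLOCX_ZERO. For orientation, here is a minimal hedged sketch of that API (not taken from the ChangeLog): xallocx() resizes an allocation strictly in place and reports the resulting real size, leaving the allocation untouched when the request cannot be satisfied.

```c
#include <jemalloc/jemalloc.h>

int main(void) {
    void *p = mallocx(4096, 0);
    if (p == NULL)
        return 1;
    /* Try to grow the 4 KiB allocation in place to 8 KiB, zeroing any newly
     * usable trailing bytes. The return value is the real size afterwards;
     * if in-place growth is impossible, the allocation is left unchanged. */
    size_t real = xallocx(p, 4096, 4096, MALLOCX_ZERO);
    dallocx(p, 0);
    return real >= 4096 ? 0 : 1;
}
```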
* 4.0.3 (September 24, 2015)
This bugfix release continues the trend of xallocx() and heap profiling fixes.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
allocations when --enable-cache-oblivious configure option is enabled.
- Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
when resizing from/to a size class that is not a multiple of the chunk size.
- Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
profile dumping started.
- Work around a potentially bad thread-specific data initialization
interaction with NPTL (glibc's pthreads implementation).
* 4.0.2 (September 21, 2015)
This bugfix release addresses a few bugs specific to heap profiling.
Bug fixes:
- Fix ixallocx_prof_sample() to never modify nor create sampled small
allocations. xallocx() is in general incapable of moving small allocations,
so this fix removes buggy code without loss of generality.
- Fix irallocx_prof_sample() to always allocate large regions, even when
alignment is non-zero.
- Fix prof_alloc_rollback() to read tdata from thread-specific data rather
than dereferencing a potentially invalid tctx.
* 4.0.1 (September 15, 2015)
This is a bugfix release that is somewhat high risk due to the amount of
refactoring required to address deep xallocx() problems. As a side effect of
these fixes, xallocx() now tries harder to partially fulfill requests for
optional extra space. Note that a couple of minor heap profiling
optimizations are included, but these are better thought of as performance
fixes that were integral to discovering most of the other bugs.
Optimizations:
- Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
fast path when heap profiling is enabled. Additionally, split a special
case out into arena_prof_tctx_reset(), which also avoids chunk metadata
reads.
- Optimize irallocx_prof() to optimistically update the sampler state. The
prior implementation appears to have been a holdover from when
rallocx()/xallocx() functionality was combined as rallocm().
Bug fixes:
- Fix TLS configuration such that it is enabled by default for platforms on
which it works correctly.
- Fix arenas_cache_cleanup() and arena_get_hard() to handle
allocation/deallocation within the application's thread-specific data
cleanup functions even after arenas_cache is torn down.
- Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
- Fix chunk purge hook calls for in-place huge shrinking reallocation to
specify the old chunk size rather than the new chunk size. This bug caused
no correctness issues for the default chunk purge function, but was
visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
- Fix heap profiling bugs:
+ Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This bug
could cause data structure corruption that would most likely result in a
segfault.
+ Fix irealloc_prof() to prof_alloc_rollback() on OOM.
+ Make one call to prof_active_get_unlocked() per allocation event, and use
the result throughout the relevant functions that handle an allocation
event. Also add a missing check in prof_realloc(). These fixes protect
allocation events against concurrent prof_active changes.
+ Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
in the correct order.
+ Fix prof_realloc() to call prof_free_sampled_object() after calling
prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
the same, the tctx could have been prematurely destroyed.
- Fix portability bugs:
+ Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps. This affected systems with page sizes greater than 8
KiB.
+ Rename index_t to szind_t to avoid an existing type on Solaris.
+ Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
match glibc and avoid compilation errors when including both
jemalloc/jemalloc.h and malloc.h in C++ code.
+ Don't assume that /bin/sh is appropriate when running size_classes.sh
during configuration.
+ Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
+ Link tests to librt if it contains clock_gettime(2).
* 4.0.0 (August 17, 2015)
This version contains many speed and space optimizations, both minor and
major. The major themes are generalization, unification, and simplification.
Although many of these optimizations cause no visible behavior change, their
cumulative effect is substantial.
New features:
- Normalize size class spacing to be consistent across the complete size
range. By default there are four size classes per size doubling, but this
is now configurable via the --with-lg-size-class-group option. Also add the
--with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
--with-lg-tiny-min options, which can be used to tweak page and size class
settings. Impacts:
+ Worst case performance for incrementally growing/shrinking reallocation
is improved because there are far fewer size classes, and therefore
copying happens less often.
+ Internal fragmentation is limited to 20% for all but the smallest size
classes (those less than four times the quantum). (1B + 4 KiB)
and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.
+ Chunk fragmentation tends to be lower because there are fewer distinct run
sizes to pack.
- Add support for explicit tcaches. The "tcache.create", "tcache.flush", and
"tcache.destroy" mallctls control tcache lifetime and flushing, and the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
control which tcache is used for each operation.
- Implement per thread heap profiling, as well as the ability to
enable/disable heap profiling on a per thread basis. Add the "prof.reset",
"prof.lg_sample", "thread.prof.name", "thread.prof.active",
"opt.prof_thread_active_init", "prof.thread_active_init", and
"thread.prof.active" mallctls.
- Add support for per arena application-specified chunk allocators, configured
via the "arena.<i>.chunk_hooks" mallctl.
- Refactor huge allocation to be managed by arenas, so that arenas now
function as general purpose independent allocators. This is important in
the context of user-specified chunk allocators, aside from the scalability
benefits. Related new statistics:
+ The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
"stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
mallctls provide high level per arena huge allocation statistics.
+ The "arenas.nhchunks", "arenas.hchunk.<i>.size",
"stats.arenas.<i>.hchunks.<j>.nmalloc",
"stats.arenas.<i>.hchunks.<j>.ndalloc",
"stats.arenas.<i>.hchunks.<j>.nrequests", and
"stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
statistics.
- Add the 'util' column to malloc_stats_print() output, which reports the
proportion of available regions that are currently in use for each small
size class.
- Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
mallctl), so that it is possible to separately enable junk filling for
allocation versus deallocation.
- Add the jemalloc-config script, which provides information about how
jemalloc was configured, and how to integrate it into application builds.
- Add metadata statistics, which are accessible via the "stats.metadata",
"stats.arenas.<i>.metadata.mapped", and
"stats.arenas.<i>.metadata.allocated" mallctls.
- Add the "stats.resident" mallctl, which reports the upper limit of
physically resident memory mapped by the allocator.
- Add per arena control over unused dirty page purging, via the
"arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
"stats.arenas.<i>.lg_dirty_mult" mallctls.
- Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
feature on/off during program execution.
- Add sdallocx(), which implements sized deallocation. The primary
optimization over dallocx() is the removal of a metadata read, which often
suffers an L1 cache miss.
- Add missing header includes in jemalloc/jemalloc.h, so that applications
only have to #include <jemalloc/jemalloc.h>.
- Add support for additional platforms:
+ Bitrig
+ Cygwin
+ DragonFlyBSD
+ iOS
+ OpenBSD
+ OpenRISC/or1k
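To make the new *allocx()/mallctl surface above concrete, here is a small hedged sketch (not upstream code) combining explicit tcaches with sized deallocation; it assumes an unprefixed jemalloc build exposing mallctl, mallocx, sdallocx, and the MALLOCX_TCACHE flag described above.

```c
#include <string.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Create an explicit thread cache via the "tcache.create" mallctl. */
    unsigned tc;
    size_t sz = sizeof(tc);
    if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
        return 1;

    /* Route an allocation through that tcache. */
    void *p = mallocx(4096, MALLOCX_TCACHE(tc));
    if (p != NULL) {
        memset(p, 0xa5, 4096);
        /* Sized deallocation: the caller supplies the size, saving jemalloc a
         * metadata read on the free path. */
        sdallocx(p, 4096, MALLOCX_TCACHE(tc));
    }

    /* Flush and destroy the explicit tcache. */
    mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
    return 0;
}
```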
Optimizations:
- Maintain dirty runs in per arena LRUs rather than in per arena trees of
dirty-run-containing chunks. In practice this change significantly reduces
dirty page purging volume.
- Integrate whole chunks into the unused dirty page purging machinery. This
reduces the cost of repeated huge allocation/deallocation, because it
effectively introduces a cache of chunks.
- Split the arena chunk map into two separate arrays, in order to increase
cache locality for the frequently accessed bits.
- Move small run metadata out of runs, into arena chunk headers. This reduces
run fragmentation, smaller runs reduce external fragmentation for small size
classes, and packed (less uniformly aligned) metadata layout improves CPU
cache set distribution.
- Randomly distribute large allocation base pointer alignment relative to page
boundaries in order to more uniformly utilize CPU cache sets. This can be
disabled via the --disable-cache-oblivious configure option, and queried via
the "config.cache_oblivious" mallctl.
- Micro-optimize the fast paths for the public API functions.
- Refactor thread-specific data to reside in a single structure. This assures
that only a single TLS read is necessary per call into the public API.
- Implement in-place huge allocation growing and shrinking.
- Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
additional optimizations that reduce maximum lookup depth to one or two
levels. This resolves what was a concurrency bottleneck for per arena huge
allocation, because a global data structure is critical for determining
which arenas own which huge allocations.
Incompatible changes:
- Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
warnings by default.
- Assure that the constness of malloc_usable_size()'s return type matches that
of the system implementation.
- Change the heap profile dump format to support per thread heap profiling,
rename pprof to jeprof, and enhance it with the --thread=<n> option. As a
result, the bundled jeprof must now be used rather than the upstream
(gperftools) pprof.
- Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
internally deadlock on some platforms.
- Change the "arenas.nlruns" mallctl type from size_t to unsigned.
- Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
"stats.arenas.<i>.bins.<j>.curregs".
- Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
- Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.
Removed features:
- Remove the *allocm() API, which is superseded by the *allocx() API.
- Remove the --enable-dss options, and make dss non-optional on all platforms
which support sbrk(2).
- Remove the "arenas.purge" mallctl, which was obsoleted by the
"arena.<i>.purge" mallctl in 3.1.0.
- Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
detects whether it is running inside Valgrind.
- Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
"stats.huge.ndalloc" mallctls.
- Remove the --enable-mremap option.
- Remove the "stats.chunks.current", "stats.chunks.total", and
"stats.chunks.high" mallctls.
Bug fixes:
- Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was first released in 3.5.0.
- Fix OOM handling in memalign() and valloc(). A variant of this bug existed
in all releases since 2.0.0, which introduced these functions.
- Fix an OOM-related regression in arena_tcache_fill_small(), which could
cause cache corruption on OOM. This regression was present in all releases
from 2.2.0 through 3.6.0.
- Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
calloc(), and realloc() when profiling is enabled.
- Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
- Fix fallback lg_floor() implementations to handle extremely large inputs.
- Ensure the default purgeable zone is after the default zone on OS X.
- Fix latent bugs in atomic_*().
- Fix the "arena.<i>.dss" mallctl to handle read-only calls.
- Fix tls_model configuration to enable the initial-exec model when possible.
- Mark malloc_conf as a weak symbol so that the application can override it.
- Correctly detect glibc's adaptive pthread mutexes.
- Fix the --without-export configure option.
* 3.6.0 (March 31, 2014)
This version contains a critical bug fix for a regression present in 3.5.0 and
@ -21,7 +346,7 @@ found in the git revision history:
backtracing to be reliable.
- Use dss allocation precedence for huge allocations as well as small/large
allocations.
- Fix test assertion failure message formatting. This bug did not manifect on
- Fix test assertion failure message formatting. This bug did not manifest on
x86_64 systems because of implementation subtleties in va_list.
- Fix inconsequential test failures for hash and SFMT code.
@ -516,7 +841,7 @@ found in the git revision history:
- Make it possible for the application to manually flush a thread's cache, via
the "tcache.flush" mallctl.
- Base maximum dirty page count on proportion of active memory.
- Compute various addtional run-time statistics, including per size class
- Compute various additional run-time statistics, including per size class
statistics for large objects.
- Expose malloc_stats_print(), which can be called repeatedly by the
application.
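As a rough usage sketch for the statistics interface mentioned above (malloc_stats_print() plus the "stats.*" mallctls introduced in 4.0.0), and not part of the ChangeLog itself: statistics are only gathered when jemalloc is built with --enable-stats, and the cached counters must be refreshed through the "epoch" mallctl before reading.

```c
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Refresh the cached statistics before reading them. */
    uint64_t epoch = 1;
    size_t esz = sizeof(epoch);
    mallctl("epoch", &epoch, &esz, &epoch, esz);

    size_t allocated, resident, sz = sizeof(size_t);
    if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0 &&
        mallctl("stats.resident", &resident, &sz, NULL, 0) == 0)
        printf("allocated: %zu bytes, resident: %zu bytes\n", allocated, resident);

    /* Full human-readable report; NULL selects the default writer, and the
     * final argument is an option string for filtering report sections. */
    malloc_stats_print(NULL, NULL, NULL);
    return 0;
}
```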

View File

@ -84,6 +84,14 @@ any of the following arguments (not a definitive list) to 'configure':
versions of jemalloc can coexist in the same installation directory. For
example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.
--with-malloc-conf=<malloc_conf>
Embed <malloc_conf> as a run-time options string that is processed prior to
the malloc_conf global variable, the /etc/malloc.conf symlink, and the
MALLOC_CONF environment variable. For example, to change the default chunk
size to 256 KiB:
--with-malloc-conf=lg_chunk:18
--disable-cc-silence
Disable code that silences non-useful compiler warnings. This is mainly
useful during development when auditing the set of warnings that are being
@ -107,15 +115,15 @@ any of the following arguments (not a definitive list) to 'configure':
there are interactions between the various coverage targets, so it is
usually advisable to run 'make clean' between repeated code coverage runs.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a substantial
performance hit.
--disable-stats
Disable statistics gathering functionality. See the "opt.stats_print"
option documentation for usage details.
--enable-ivsalloc
Enable validation code, which verifies that pointers reside within
jemalloc-owned chunks before dereferencing them. This incurs a minor
performance hit.
--enable-prof
Enable heap profiling and leak detection functionality. See the "opt.prof"
option documentation for usage details. When enabled, there are several
@ -185,10 +193,106 @@ any of the following arguments (not a definitive list) to 'configure':
thread-local variables via the __thread keyword. If TLS is available,
jemalloc uses it for several purposes.
--disable-cache-oblivious
Disable cache-oblivious large allocation alignment for large allocation
requests with no alignment constraints. If this feature is disabled, all
large allocations are page-aligned as an implementation artifact, which can
severely harm CPU cache utilization. However, the cache-oblivious layout
comes at the cost of one extra page per large allocation, which in the
most extreme case increases physical memory usage for the 16 KiB size class
to 20 KiB.
--with-xslroot=<path>
Specify where to find DocBook XSL stylesheets when building the
documentation.
--with-lg-page=<lg-page>
Specify the base 2 log of the system page size. This option is only useful
when cross compiling, since the configure script automatically determines
the host's page size by default.
--with-lg-page-sizes=<lg-page-sizes>
Specify the comma-separated base 2 logs of the page sizes to support. This
option may be useful when cross-compiling in combination with
--with-lg-page, but its primary use case is for integration with FreeBSD's
libc, wherein jemalloc is embedded.
--with-lg-size-class-group=<lg-size-class-group>
Specify the base 2 log of how many size classes to use for each doubling in
size. By default jemalloc uses <lg-size-class-group>=2, which results in
e.g. the following size classes:
[...], 64,
80, 96, 112, 128,
160, [...]
<lg-size-class-group>=3 results in e.g. the following size classes:
[...], 64,
72, 80, 88, 96, 104, 112, 120, 128,
144, [...]
The minimal <lg-size-class-group>=0 causes jemalloc to only provide size
classes that are powers of 2:
[...],
64,
128,
256,
[...]
An implementation detail currently limits the total number of small size
classes to 255, and a compilation error will result if the
<lg-size-class-group> you specify cannot be supported. The limit is
roughly <lg-size-class-group>=4, depending on page size.
--with-lg-quantum=<lg-quantum>
Specify the base 2 log of the minimum allocation alignment. jemalloc needs
to know the minimum alignment that meets the following C standard
requirement (quoted from the April 12, 2011 draft of the C11 standard):
The pointer returned if the allocation succeeds is suitably aligned so
that it may be assigned to a pointer to any type of object with a
fundamental alignment requirement and then used to access such an object
or an array of such objects in the space allocated [...]
This setting is architecture-specific, and although jemalloc includes known
safe values for the most commonly used modern architectures, there is a
wrinkle related to GNU libc (glibc) that may impact your choice of
<lg-quantum>. On most modern architectures, this mandates 16-byte alignment
(<lg-quantum>=4), but the glibc developers chose not to meet this
requirement for performance reasons. An old discussion can be found at
https://sourceware.org/bugzilla/show_bug.cgi?id=206 . Unlike glibc,
jemalloc does follow the C standard by default (caveat: jemalloc
technically cheats if --with-lg-tiny-min is smaller than
--with-lg-quantum), but the fact that Linux systems already work around
this allocator noncompliance means that it is generally safe in practice to
let jemalloc's minimum alignment follow glibc's lead. If you specify
--with-lg-quantum=3 during configuration, jemalloc will provide additional
size classes that are not 16-byte-aligned (24, 40, and 56, assuming
--with-lg-size-class-group=2).
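If you experiment with the page and quantum options above, a quick hedged check (not part of this INSTALL file) is to ask the built library what it was configured with, using the read-only "arenas.quantum" and "arenas.page" mallctls documented in the jemalloc manual.

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    size_t quantum, page, sz = sizeof(size_t);
    /* Both controls are read-only size_t values. */
    mallctl("arenas.quantum", &quantum, &sz, NULL, 0);
    mallctl("arenas.page", &page, &sz, NULL, 0);
    printf("quantum: %zu bytes, page: %zu bytes\n", quantum, page);
    return 0;
}
```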
--with-lg-tiny-min=<lg-tiny-min>
Specify the base 2 log of the minimum tiny size class to support. Tiny
size classes are powers of 2 less than the quantum, and are only
incorporated if <lg-tiny-min> is less than <lg-quantum> (see
--with-lg-quantum). Tiny size classes technically violate the C standard
requirement for minimum alignment, and crashes could conceivably result if
the compiler were to generate instructions that made alignment assumptions,
both because illegal instruction traps could result, and because accesses
could straddle page boundaries and cause segmentation faults due to
accessing unmapped addresses.
The default of <lg-tiny-min>=3 works well in practice even on architectures
that technically require 16-byte alignment, probably for the same reason
--with-lg-quantum=3 works. Smaller tiny size classes can, and will, cause
crashes (see https://bugzilla.mozilla.org/show_bug.cgi?id=691003 for an
example).
This option is rarely useful, and is mainly provided as documentation of a
subtle implementation detail. If you do use this option, specify a
value in [3, ..., <lg-quantum>].
The following environment variables (not a definitive list) impact configure's
behavior:

View File

@ -28,6 +28,7 @@ CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
@ -48,8 +49,10 @@ cfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@)
cfgoutputs_out := @cfgoutputs_out@
enable_autogen := @enable_autogen@
enable_code_coverage := @enable_code_coverage@
enable_prof := @enable_prof@
enable_valgrind := @enable_valgrind@
enable_zone_allocator := @enable_zone_allocator@
MALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF
DSO_LDFLAGS = @DSO_LDFLAGS@
SOREV = @SOREV@
PIC_CFLAGS = @PIC_CFLAGS@
@ -73,16 +76,34 @@ endif
LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)
# Lists of files.
BINS := $(srcroot)bin/pprof $(objroot)bin/jemalloc.sh
BINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof
C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h
C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c \
$(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \
$(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c \
$(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \
$(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \
$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/prof.c \
$(srcroot)src/quarantine.c $(srcroot)src/rtree.c $(srcroot)src/stats.c \
$(srcroot)src/tcache.c $(srcroot)src/util.c $(srcroot)src/tsd.c
C_SRCS := $(srcroot)src/jemalloc.c \
$(srcroot)src/arena.c \
$(srcroot)src/atomic.c \
$(srcroot)src/base.c \
$(srcroot)src/bitmap.c \
$(srcroot)src/chunk.c \
$(srcroot)src/chunk_dss.c \
$(srcroot)src/chunk_mmap.c \
$(srcroot)src/ckh.c \
$(srcroot)src/ctl.c \
$(srcroot)src/extent.c \
$(srcroot)src/hash.c \
$(srcroot)src/huge.c \
$(srcroot)src/mb.c \
$(srcroot)src/mutex.c \
$(srcroot)src/nstime.c \
$(srcroot)src/pages.c \
$(srcroot)src/prng.c \
$(srcroot)src/prof.c \
$(srcroot)src/quarantine.c \
$(srcroot)src/rtree.c \
$(srcroot)src/stats.c \
$(srcroot)src/tcache.c \
$(srcroot)src/ticker.c \
$(srcroot)src/tsd.c \
$(srcroot)src/util.c
ifeq ($(enable_valgrind), 1)
C_SRCS += $(srcroot)src/valgrind.c
endif
@ -104,24 +125,29 @@ endif
PC := $(objroot)jemalloc.pc
MAN3 := $(objroot)doc/jemalloc$(install_suffix).3
DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.3)
DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.html)
DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.3)
DOCS := $(DOCS_HTML) $(DOCS_MAN3)
C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
$(srcroot)test/src/btalloc_1.c $(srcroot)test/src/math.c \
$(srcroot)test/src/mtx.c $(srcroot)test/src/SFMT.c \
$(srcroot)test/src/test.c $(srcroot)test/src/thd.c \
$(srcroot)test/src/timer.c
C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c
$(srcroot)test/src/mtx.c $(srcroot)test/src/mq.c \
$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
C_UTIL_INTEGRATION_SRCS := $(srcroot)src/nstime.c $(srcroot)src/util.c
TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/bitmap.c \
$(srcroot)test/unit/ckh.c \
$(srcroot)test/unit/decay.c \
$(srcroot)test/unit/hash.c \
$(srcroot)test/unit/junk.c \
$(srcroot)test/unit/junk_alloc.c \
$(srcroot)test/unit/junk_free.c \
$(srcroot)test/unit/lg_chunk.c \
$(srcroot)test/unit/mallctl.c \
$(srcroot)test/unit/math.c \
$(srcroot)test/unit/mq.c \
$(srcroot)test/unit/mtx.c \
$(srcroot)test/unit/prng.c \
$(srcroot)test/unit/prof_accum.c \
$(srcroot)test/unit/prof_active.c \
$(srcroot)test/unit/prof_gdump.c \
@ -133,8 +159,13 @@ TESTS_UNIT := $(srcroot)test/unit/atomic.c \
$(srcroot)test/unit/quarantine.c \
$(srcroot)test/unit/rb.c \
$(srcroot)test/unit/rtree.c \
$(srcroot)test/unit/run_quantize.c \
$(srcroot)test/unit/SFMT.c \
$(srcroot)test/unit/size_classes.c \
$(srcroot)test/unit/smoothstep.c \
$(srcroot)test/unit/stats.c \
$(srcroot)test/unit/ticker.c \
$(srcroot)test/unit/nstime.c \
$(srcroot)test/unit/tsd.c \
$(srcroot)test/unit/util.c \
$(srcroot)test/unit/zero.c
@ -143,6 +174,7 @@ TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
$(srcroot)test/integration/sdallocx.c \
$(srcroot)test/integration/mallocx.c \
$(srcroot)test/integration/MALLOCX_ARENA.c \
$(srcroot)test/integration/overflow.c \
$(srcroot)test/integration/posix_memalign.c \
$(srcroot)test/integration/rallocx.c \
$(srcroot)test/integration/thread_arena.c \
@ -178,10 +210,10 @@ all: build_lib
dist: build_doc
$(srcroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(objroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl
$(XSLTPROC) -o $@ $(objroot)doc/html.xsl $<
$(srcroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(objroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl
$(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $<
build_doc_html: $(DOCS_HTML)
@ -257,15 +289,15 @@ $(STATIC_LIBS):
$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
@mkdir -p $(@D)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
@ -335,18 +367,27 @@ check_unit_dir:
@mkdir -p $(objroot)test/unit
check_integration_dir:
@mkdir -p $(objroot)test/integration
check_stress_dir:
stress_dir:
@mkdir -p $(objroot)test/stress
check_dir: check_unit_dir check_integration_dir check_stress_dir
check_dir: check_unit_dir check_integration_dir
check_unit: tests_unit check_unit_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="purge:ratio" $(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="purge:decay" $(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
check_integration_decay: tests_integration check_integration_dir
$(MALLOC_CONF)="purge:decay,decay_time:-1" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="purge:decay,decay_time:0" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
$(MALLOC_CONF)="purge:decay" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
check_integration: tests_integration check_integration_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
check_stress: tests_stress check_stress_dir
stress: tests_stress stress_dir
$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir
$(SHELL) $(objroot)test/test.sh $(TESTS:$(srcroot)%.c=$(objroot)%)
check: check_unit check_integration check_integration_decay check_integration_prof
ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit
@ -360,7 +401,7 @@ coverage_integration: check_integration
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)
coverage_stress: check_stress
coverage_stress: stress
$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
@ -405,7 +446,9 @@ clean:
rm -f $(objroot)*.gcov.*
distclean: clean
rm -f $(objroot)bin/jemalloc-config
rm -f $(objroot)bin/jemalloc.sh
rm -f $(objroot)bin/jeprof
rm -f $(objroot)config.log
rm -f $(objroot)config.status
rm -f $(objroot)config.stamp
@ -414,7 +457,7 @@ distclean: clean
relclean: distclean
rm -f $(objroot)configure
rm -f $(srcroot)VERSION
rm -f $(objroot)VERSION
rm -f $(DOCS_HTML)
rm -f $(DOCS_MAN3)

View File

@ -1 +0,0 @@
0.12.0-17867-gdb2939409db26ab4904372c82492cd3488e4c44e

View File

@ -0,0 +1,79 @@
#!/bin/sh
usage() {
cat <<EOF
Usage:
@BINDIR@/jemalloc-config <option>
Options:
--help | -h : Print usage.
--version : Print jemalloc version.
--revision : Print shared library revision number.
--config : Print configure options used to build jemalloc.
--prefix : Print installation directory prefix.
--bindir : Print binary installation directory.
--datadir : Print data installation directory.
--includedir : Print include installation directory.
--libdir : Print library installation directory.
--mandir : Print manual page installation directory.
--cc : Print compiler used to build jemalloc.
--cflags : Print compiler flags used to build jemalloc.
--cppflags : Print preprocessor flags used to build jemalloc.
--ldflags : Print library flags used to build jemalloc.
--libs : Print libraries jemalloc was linked against.
EOF
}
prefix="@prefix@"
exec_prefix="@exec_prefix@"
case "$1" in
--help | -h)
usage
exit 0
;;
--version)
echo "@jemalloc_version@"
;;
--revision)
echo "@rev@"
;;
--config)
echo "@CONFIG@"
;;
--prefix)
echo "@PREFIX@"
;;
--bindir)
echo "@BINDIR@"
;;
--datadir)
echo "@DATADIR@"
;;
--includedir)
echo "@INCLUDEDIR@"
;;
--libdir)
echo "@LIBDIR@"
;;
--mandir)
echo "@MANDIR@"
;;
--cc)
echo "@CC@"
;;
--cflags)
echo "@CFLAGS@"
;;
--cppflags)
echo "@CPPFLAGS@"
;;
--ldflags)
echo "@LDFLAGS@ @EXTRA_LDFLAGS@"
;;
--libs)
echo "@LIBS@"
;;
*)
usage
exit 1
esac

src/jemalloc/bin/pprof → src/jemalloc/bin/jeprof.in Executable file → Normal file
View File

@ -40,28 +40,28 @@
#
# Examples:
#
# % tools/pprof "program" "profile"
# % tools/jeprof "program" "profile"
# Enters "interactive" mode
#
# % tools/pprof --text "program" "profile"
# % tools/jeprof --text "program" "profile"
# Generates one line per procedure
#
# % tools/pprof --gv "program" "profile"
# % tools/jeprof --gv "program" "profile"
# Generates annotated call-graph and displays via "gv"
#
# % tools/pprof --gv --focus=Mutex "program" "profile"
# % tools/jeprof --gv --focus=Mutex "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex"
#
# % tools/pprof --gv --focus=Mutex --ignore=string "program" "profile"
# % tools/jeprof --gv --focus=Mutex --ignore=string "program" "profile"
# Restrict to code paths that involve an entry that matches "Mutex"
# and does not match "string"
#
# % tools/pprof --list=IBF_CheckDocid "program" "profile"
# % tools/jeprof --list=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one
# sample that match the --list=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each line.
#
# % tools/pprof --disasm=IBF_CheckDocid "program" "profile"
# % tools/jeprof --disasm=IBF_CheckDocid "program" "profile"
# Generates disassembly listing of all routines with at least one
# sample that match the --disasm=<regexp> pattern. The listing is
# annotated with the flat and cumulative sample counts at each PC value.
@ -72,10 +72,11 @@ use strict;
use warnings;
use Getopt::Long;
my $JEPROF_VERSION = "@jemalloc_version@";
my $PPROF_VERSION = "2.0";
# These are the object tools we use which can come from a
# user-specified location using --tools, from the PPROF_TOOLS
# user-specified location using --tools, from the JEPROF_TOOLS
# environment variable, or from the environment.
my %obj_tool_map = (
"objdump" => "objdump",
@ -94,7 +95,7 @@ my @EVINCE = ("evince"); # could also be xpdf or perhaps acroread
my @KCACHEGRIND = ("kcachegrind");
my @PS2PDF = ("ps2pdf");
# These are used for dynamic profiles
my @URL_FETCHER = ("curl", "-s");
my @URL_FETCHER = ("curl", "-s", "--fail");
# These are the web pages that servers need to support for dynamic profiles
my $HEAP_PAGE = "/pprof/heap";
@ -144,13 +145,13 @@ my $sep_address = undef;
sub usage_string {
return <<EOF;
Usage:
pprof [options] <program> <profiles>
jeprof [options] <program> <profiles>
<profiles> is a space separated list of profile names.
pprof [options] <symbolized-profiles>
jeprof [options] <symbolized-profiles>
<symbolized-profiles> is a list of profile files where each file contains
the necessary symbol mappings as well as profile data (likely generated
with --raw).
pprof [options] <profile>
jeprof [options] <profile>
<profile> is a remote form. Symbols are obtained from host:port$SYMBOL_PAGE
Each name can be:
@ -161,9 +162,9 @@ pprof [options] <profile>
$GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall,
$CENSUSPROFILE_PAGE, or /pprof/filteredprofile.
For instance:
pprof http://myserver.com:80$HEAP_PAGE
jeprof http://myserver.com:80$HEAP_PAGE
If /<service> is omitted, the service defaults to $PROFILE_PAGE (cpu profiling).
pprof --symbols <program>
jeprof --symbols <program>
Maps addresses to symbol names. In this mode, stdin should be a
list of library mappings, in the same format as is found in the heap-
and cpu-profile files (this loosely matches that of /proc/self/maps
@ -202,7 +203,7 @@ Output type:
--pdf Generate PDF to stdout
--svg Generate SVG to stdout
--gif Generate GIF to stdout
--raw Generate symbolized pprof data (useful with remote fetch)
--raw Generate symbolized jeprof data (useful with remote fetch)
Heap-Profile Options:
--inuse_space Display in-use (mega)bytes [default]
@ -222,12 +223,14 @@ Call-graph Options:
--nodefraction=<f> Hide nodes below <f>*total [default=.005]
--edgefraction=<f> Hide edges below <f>*total [default=.001]
--maxdegree=<n> Max incoming/outgoing edges per node [default=8]
--focus=<regexp> Focus on nodes matching <regexp>
--focus=<regexp> Focus on backtraces with nodes matching <regexp>
--thread=<n> Show profile for thread <n>
--ignore=<regexp> Ignore nodes matching <regexp>
--ignore=<regexp> Ignore backtraces with nodes matching <regexp>
--scale=<n> Set GV scaling [default=0]
--heapcheck Make nodes with non-0 object counts
(i.e. direct leak generators) more visible
--retain=<regexp> Retain only nodes that match <regexp>
--exclude=<regexp> Exclude all nodes that match <regexp>
Miscellaneous:
--tools=<prefix or binary:fullpath>[,...] \$PATH for object tool pathnames
@ -236,34 +239,34 @@ Miscellaneous:
--version Version information
Environment Variables:
PPROF_TMPDIR Profiles directory. Defaults to \$HOME/pprof
PPROF_TOOLS Prefix for object tools pathnames
JEPROF_TMPDIR Profiles directory. Defaults to \$HOME/jeprof
JEPROF_TOOLS Prefix for object tools pathnames
Examples:
pprof /bin/ls ls.prof
jeprof /bin/ls ls.prof
Enters "interactive" mode
pprof --text /bin/ls ls.prof
jeprof --text /bin/ls ls.prof
Outputs one line per procedure
pprof --web /bin/ls ls.prof
jeprof --web /bin/ls ls.prof
Displays annotated call-graph in web browser
pprof --gv /bin/ls ls.prof
jeprof --gv /bin/ls ls.prof
Displays annotated call-graph via 'gv'
pprof --gv --focus=Mutex /bin/ls ls.prof
jeprof --gv --focus=Mutex /bin/ls ls.prof
Restricts to code paths including a .*Mutex.* entry
pprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof
jeprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof
Code paths including Mutex but not string
pprof --list=getdir /bin/ls ls.prof
jeprof --list=getdir /bin/ls ls.prof
(Per-line) annotated source listing for getdir()
pprof --disasm=getdir /bin/ls ls.prof
jeprof --disasm=getdir /bin/ls ls.prof
(Per-PC) annotated disassembly for getdir()
pprof http://localhost:1234/
jeprof http://localhost:1234/
Enters "interactive" mode
pprof --text localhost:1234
jeprof --text localhost:1234
Outputs one line per procedure for localhost:1234
pprof --raw localhost:1234 > ./local.raw
pprof --text ./local.raw
jeprof --raw localhost:1234 > ./local.raw
jeprof --text ./local.raw
Fetches a remote profile for later analysis and then
analyzes it in text mode.
EOF
@ -271,7 +274,8 @@ EOF
sub version_string {
return <<EOF
pprof (part of gperftools $PPROF_VERSION)
jeprof (part of jemalloc $JEPROF_VERSION)
based on pprof (part of gperftools $PPROF_VERSION)
Copyright 1998-2007 Google Inc.
@ -294,8 +298,8 @@ sub Init() {
# Setup tmp-file name and handler to clean it up.
# We do this in the very beginning so that we can use
# error() and cleanup() function anytime here after.
$main::tmpfile_sym = "/tmp/pprof$$.sym";
$main::tmpfile_ps = "/tmp/pprof$$";
$main::tmpfile_sym = "/tmp/jeprof$$.sym";
$main::tmpfile_ps = "/tmp/jeprof$$";
$main::next_tmpfile = 0;
$SIG{'INT'} = \&sighandler;
@ -337,6 +341,8 @@ sub Init() {
$main::opt_ignore = '';
$main::opt_scale = 0;
$main::opt_heapcheck = 0;
$main::opt_retain = '';
$main::opt_exclude = '';
$main::opt_seconds = 30;
$main::opt_lib = "";
@ -404,10 +410,12 @@ sub Init() {
"edgefraction=f" => \$main::opt_edgefraction,
"maxdegree=i" => \$main::opt_maxdegree,
"focus=s" => \$main::opt_focus,
"thread=i" => \$main::opt_thread,
"thread=s" => \$main::opt_thread,
"ignore=s" => \$main::opt_ignore,
"scale=i" => \$main::opt_scale,
"heapcheck" => \$main::opt_heapcheck,
"retain=s" => \$main::opt_retain,
"exclude=s" => \$main::opt_exclude,
"inuse_space!" => \$main::opt_inuse_space,
"inuse_objects!" => \$main::opt_inuse_objects,
"alloc_space!" => \$main::opt_alloc_space,
@ -707,7 +715,8 @@ sub Main() {
}
if (defined($data->{threads})) {
foreach my $thread (sort { $a <=> $b } keys(%{$data->{threads}})) {
if (!defined($main::opt_thread) || $main::opt_thread == $thread) {
if (defined($main::opt_thread) &&
($main::opt_thread eq '*' || $main::opt_thread == $thread)) {
my $thread_profile = $data->{threads}{$thread};
FilterAndPrint($thread_profile, $symbols, $libs, $thread);
}
@ -801,14 +810,14 @@ sub InteractiveMode {
$| = 1; # Make output unbuffered for interactive mode
my ($orig_profile, $symbols, $libs, $total) = @_;
print STDERR "Welcome to pprof! For help, type 'help'.\n";
print STDERR "Welcome to jeprof! For help, type 'help'.\n";
# Use ReadLine if it's installed and input comes from a console.
if ( -t STDIN &&
!ReadlineMightFail() &&
defined(eval {require Term::ReadLine}) ) {
my $term = new Term::ReadLine 'pprof';
while ( defined ($_ = $term->readline('(pprof) '))) {
my $term = new Term::ReadLine 'jeprof';
while ( defined ($_ = $term->readline('(jeprof) '))) {
$term->addhistory($_) if /\S/;
if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) {
last; # exit when we get an interactive command to quit
@ -816,7 +825,7 @@ sub InteractiveMode {
}
} else { # don't have readline
while (1) {
print STDERR "(pprof) ";
print STDERR "(jeprof) ";
$_ = <STDIN>;
last if ! defined $_ ;
s/\r//g; # turn windows-looking lines into unix-looking lines
@ -1009,7 +1018,7 @@ sub ProcessProfile {
sub InteractiveHelpMessage {
print STDERR <<ENDOFHELP;
Interactive pprof mode
Interactive jeprof mode
Commands:
gv
@ -1052,7 +1061,7 @@ Commands:
Generates callgrind file. If no filename is given, kcachegrind is called.
help - This listing
quit or ^D - End pprof
quit or ^D - End jeprof
For commands that accept optional -ignore tags, samples where any routine in
the stack trace matches the regular expression in any of the -ignore
@ -1157,8 +1166,21 @@ sub PrintSymbolizedProfile {
}
print '---', "\n";
$PROFILE_PAGE =~ m,[^/]+$,; # matches everything after the last slash
my $profile_marker = $&;
my $profile_marker;
if ($main::profile_type eq 'heap') {
$HEAP_PAGE =~ m,[^/]+$,; # matches everything after the last slash
$profile_marker = $&;
} elsif ($main::profile_type eq 'growth') {
$GROWTH_PAGE =~ m,[^/]+$,; # matches everything after the last slash
$profile_marker = $&;
} elsif ($main::profile_type eq 'contention') {
$CONTENTION_PAGE =~ m,[^/]+$,; # matches everything after the last slash
$profile_marker = $&;
} else { # elsif ($main::profile_type eq 'cpu')
$PROFILE_PAGE =~ m,[^/]+$,; # matches everything after the last slash
$profile_marker = $&;
}
print '--- ', $profile_marker, "\n";
if (defined($main::collected_profile)) {
# if used with remote fetch, simply dump the collected profile to output.
@ -1168,6 +1190,12 @@ sub PrintSymbolizedProfile {
}
close(SRC);
} else {
# --raw/http: For everything to work correctly for non-remote profiles, we
# would need to extend PrintProfileData() to handle all possible profile
# types, re-enable the code that is currently disabled in ReadCPUProfile()
# and FixCallerAddresses(), and remove the remote profile dumping code in
# the block above.
die "--raw/http: jeprof can only dump remote profiles for --raw\n";
# dump a cpu-format profile to standard out
PrintProfileData($profile);
}
@ -1497,7 +1525,7 @@ h1 {
}
</style>
<script type="text/javascript">
function pprof_toggle_asm(e) {
function jeprof_toggle_asm(e) {
var target;
if (!e) e = window.event;
if (e.target) target = e.target;
@ -1766,7 +1794,7 @@ sub PrintSource {
if ($html) {
printf $output (
"<h1>%s</h1>%s\n<pre onClick=\"pprof_toggle_asm()\">\n" .
"<h1>%s</h1>%s\n<pre onClick=\"jeprof_toggle_asm()\">\n" .
"Total:%6s %6s (flat / cumulative %s)\n",
HtmlEscape(ShortFunctionName($routine)),
HtmlEscape(CleanFileName($filename)),
@ -2818,6 +2846,43 @@ sub ExtractCalls {
return $calls;
}
sub FilterFrames {
my $symbols = shift;
my $profile = shift;
if ($main::opt_retain eq '' && $main::opt_exclude eq '') {
return $profile;
}
my $result = {};
foreach my $k (keys(%{$profile})) {
my $count = $profile->{$k};
my @addrs = split(/\n/, $k);
my @path = ();
foreach my $a (@addrs) {
my $sym;
if (exists($symbols->{$a})) {
$sym = $symbols->{$a}->[0];
} else {
$sym = $a;
}
if ($main::opt_retain ne '' && $sym !~ m/$main::opt_retain/) {
next;
}
if ($main::opt_exclude ne '' && $sym =~ m/$main::opt_exclude/) {
next;
}
push(@path, $a);
}
if (scalar(@path) > 0) {
my $reduced_path = join("\n", @path);
AddEntry($result, $reduced_path, $count);
}
}
return $result;
}
sub RemoveUninterestingFrames {
my $symbols = shift;
my $profile = shift;
@ -2962,6 +3027,9 @@ sub RemoveUninterestingFrames {
my $reduced_path = join("\n", @path);
AddEntry($result, $reduced_path, $count);
}
$result = FilterFrames($symbols, $result);
return $result;
}
@ -3271,7 +3339,7 @@ sub ResolveRedirectionForCurl {
# Add a timeout flag to URL_FETCHER. Returns a new list.
sub AddFetchTimeout {
my $timeout = shift;
my @fetcher = shift;
my @fetcher = @_;
if (defined($timeout)) {
if (join(" ", @fetcher) =~ m/\bcurl -s/) {
push(@fetcher, "--max-time", sprintf("%d", $timeout));
@ -3317,6 +3385,27 @@ sub ReadSymbols {
return $map;
}
sub URLEncode {
my $str = shift;
$str =~ s/([^A-Za-z0-9\-_.!~*'()])/ sprintf "%%%02x", ord $1 /eg;
return $str;
}
sub AppendSymbolFilterParams {
my $url = shift;
my @params = ();
if ($main::opt_retain ne '') {
push(@params, sprintf("retain=%s", URLEncode($main::opt_retain)));
}
if ($main::opt_exclude ne '') {
push(@params, sprintf("exclude=%s", URLEncode($main::opt_exclude)));
}
if (scalar @params > 0) {
$url = sprintf("%s?%s", $url, join("&", @params));
}
return $url;
}
# Fetches and processes symbols to prepare them for use in the profile output
# code. If the optional 'symbol_map' arg is not given, fetches symbols from
# $SYMBOL_PAGE for all PC values found in profile. Otherwise, the raw symbols
@ -3341,9 +3430,11 @@ sub FetchSymbols {
my $command_line;
if (join(" ", @URL_FETCHER) =~ m/\bcurl -s/) {
$url = ResolveRedirectionForCurl($url);
$url = AppendSymbolFilterParams($url);
$command_line = ShellEscape(@URL_FETCHER, "-d", "\@$main::tmpfile_sym",
$url);
} else {
$url = AppendSymbolFilterParams($url);
$command_line = (ShellEscape(@URL_FETCHER, "--post", $url)
. " < " . ShellEscape($main::tmpfile_sym));
}
@ -3424,15 +3515,25 @@ sub FetchDynamicProfile {
}
$url .= sprintf("seconds=%d", $main::opt_seconds);
$fetch_timeout = $main::opt_seconds * 1.01 + 60;
# Set $profile_type for consumption by PrintSymbolizedProfile.
$main::profile_type = 'cpu';
} else {
# For non-CPU profiles, we add a type-extension to
# the target profile file name.
my $suffix = $path;
$suffix =~ s,/,.,g;
$profile_file .= $suffix;
# Set $profile_type for consumption by PrintSymbolizedProfile.
if ($path =~ m/$HEAP_PAGE/) {
$main::profile_type = 'heap';
} elsif ($path =~ m/$GROWTH_PAGE/) {
$main::profile_type = 'growth';
} elsif ($path =~ m/$CONTENTION_PAGE/) {
$main::profile_type = 'contention';
}
}
my $profile_dir = $ENV{"PPROF_TMPDIR"} || ($ENV{HOME} . "/pprof");
my $profile_dir = $ENV{"JEPROF_TMPDIR"} || ($ENV{HOME} . "/jeprof");
if (! -d $profile_dir) {
mkdir($profile_dir)
|| die("Unable to create profile directory $profile_dir: $!\n");
@ -3648,7 +3749,7 @@ BEGIN {
# Reads the top, 'header' section of a profile, and returns the last
# line of the header, commonly called a 'header line'. The header
# section of a profile consists of zero or more 'command' lines that
# are instructions to pprof, which pprof executes when reading the
# are instructions to jeprof, which jeprof executes when reading the
# header. All 'command' lines start with a %. After the command
# lines is the 'header line', which is a profile-specific line that
# indicates what type of profile it is, and perhaps other global
@ -3727,6 +3828,8 @@ sub ReadProfile {
my $symbol_marker = $&;
$PROFILE_PAGE =~ m,[^/]+$,; # matches everything after the last slash
my $profile_marker = $&;
$HEAP_PAGE =~ m,[^/]+$,; # matches everything after the last slash
my $heap_marker = $&;
# Look at first line to see if it is a heap or a CPU profile.
# CPU profile may start with no header at all, and just binary data
@ -3753,7 +3856,13 @@ sub ReadProfile {
$header = ReadProfileHeader(*PROFILE) || "";
}
if ($header =~ m/^--- *($heap_marker|$growth_marker)/o) {
# Skip "--- ..." line for profile types that have their own headers.
$header = ReadProfileHeader(*PROFILE) || "";
}
$main::profile_type = '';
if ($header =~ m/^heap profile:.*$growth_marker/o) {
$main::profile_type = 'growth';
$result = ReadHeapProfile($prog, *PROFILE, $header);
@ -3805,9 +3914,9 @@ sub ReadProfile {
# independent implementation.
sub FixCallerAddresses {
my $stack = shift;
if ($main::use_symbolized_profile) {
return $stack;
} else {
# --raw/http: Always subtract one from pc's, because PrintSymbolizedProfile()
# dumps unadjusted profiles.
{
$stack =~ /(\s)/;
my $delimiter = $1;
my @addrs = split(' ', $stack);
@ -3875,12 +3984,7 @@ sub ReadCPUProfile {
for (my $j = 0; $j < $d; $j++) {
my $pc = $slots->get($i+$j);
# Subtract one from caller pc so we map back to call instr.
# However, don't do this if we're reading a symbolized profile
# file, in which case the subtract-one was done when the file
# was written.
if ($j > 0 && !$main::use_symbolized_profile) {
$pc--;
}
$pc--;
$pc = sprintf("%0*x", $address_length, $pc);
$pcs->{$pc} = 1;
push @k, $pc;
@ -4255,10 +4359,10 @@ sub ReadSynchProfile {
} elsif ($variable eq "sampling period") {
$sampling_period = $value;
} elsif ($variable eq "ms since reset") {
# Currently nothing is done with this value in pprof
# Currently nothing is done with this value in jeprof
# So we just silently ignore it for now
} elsif ($variable eq "discarded samples") {
# Currently nothing is done with this value in pprof
# Currently nothing is done with this value in jeprof
# So we just silently ignore it for now
} else {
printf STDERR ("Ignoring unknown variable in /contention output: " .
@ -4564,7 +4668,7 @@ sub ParseLibraries {
}
# Add two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressAdd {
my $addr1 = shift;
my $addr2 = shift;
@ -4618,7 +4722,7 @@ sub AddressAdd {
# Subtract two hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressSub {
my $addr1 = shift;
my $addr2 = shift;
@ -4670,7 +4774,7 @@ sub AddressSub {
}
# Increment a hex addresses of length $address_length.
# Run pprof --test for unit test if this is changed.
# Run jeprof --test for unit test if this is changed.
sub AddressInc {
my $addr = shift;
my $sum;
@ -4988,7 +5092,7 @@ sub UnparseAddress {
# 32-bit or ELF 64-bit executable file. The location of the tools
# is determined by considering the following options in this order:
# 1) --tools option, if set
# 2) PPROF_TOOLS environment variable, if set
# 2) JEPROF_TOOLS environment variable, if set
# 3) the environment
sub ConfigureObjTools {
my $prog_file = shift;
@ -5021,7 +5125,7 @@ sub ConfigureObjTools {
# For windows, we provide a version of nm and addr2line as part of
# the opensource release, which is capable of parsing
# Windows-style PDB executables. It should live in the path, or
# in the same directory as pprof.
# in the same directory as jeprof.
$obj_tool_map{"nm_pdb"} = "nm-pdb";
$obj_tool_map{"addr2line_pdb"} = "addr2line-pdb";
}
@ -5040,20 +5144,20 @@ sub ConfigureObjTools {
}
# Returns the path of a caller-specified object tool. If --tools or
# PPROF_TOOLS are specified, then returns the full path to the tool
# JEPROF_TOOLS are specified, then returns the full path to the tool
# with that prefix. Otherwise, returns the path unmodified (which
# means we will look for it on PATH).
sub ConfigureTool {
my $tool = shift;
my $path;
# --tools (or $PPROF_TOOLS) is a comma separated list, where each
# --tools (or $JEPROF_TOOLS) is a comma separated list, where each
# item is either a) a pathname prefix, or b) a map of the form
# <tool>:<path>. First we look for an entry of type (b) for our
# tool. If one is found, we use it. Otherwise, we consider all the
# pathname prefixes in turn, until one yields an existing file. If
# none does, we use a default path.
my $tools = $main::opt_tools || $ENV{"PPROF_TOOLS"} || "";
my $tools = $main::opt_tools || $ENV{"JEPROF_TOOLS"} || "";
if ($tools =~ m/(,|^)\Q$tool\E:([^,]*)/) {
$path = $2;
# TODO(csilvers): sanity-check that $path exists? Hard if it's relative.
@ -5067,11 +5171,11 @@ sub ConfigureTool {
}
if (!$path) {
error("No '$tool' found with prefix specified by " .
"--tools (or \$PPROF_TOOLS) '$tools'\n");
"--tools (or \$JEPROF_TOOLS) '$tools'\n");
}
} else {
# ... otherwise use the version that exists in the same directory as
# pprof. If there's nothing there, use $PATH.
# jeprof. If there's nothing there, use $PATH.
$0 =~ m,[^/]*$,; # this is everything after the last slash
my $dirname = $`; # this is everything up to and including the last slash
if (-x "$dirname$tool") {
@ -5101,7 +5205,7 @@ sub cleanup {
unlink($main::tmpfile_sym);
unlink(keys %main::tempnames);
# We leave any collected profiles in $HOME/pprof in case the user wants
# We leave any collected profiles in $HOME/jeprof in case the user wants
# to look at them later. We print a message informing them of this.
if ((scalar(@main::profile_files) > 0) &&
defined($main::collected_profile)) {
@ -5110,7 +5214,7 @@ sub cleanup {
}
print STDERR "If you want to investigate this profile further, you can do:\n";
print STDERR "\n";
print STDERR " pprof \\\n";
print STDERR " jeprof \\\n";
print STDERR " $main::prog \\\n";
print STDERR " $main::collected_profile\n";
print STDERR "\n";
@ -5295,7 +5399,7 @@ sub GetProcedureBoundaries {
# The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings.
# To make them more readable, we add underscores at interesting places.
# This routine removes the underscores, producing the canonical representation
# used by pprof to represent addresses, particularly in the tested routines.
# used by jeprof to represent addresses, particularly in the tested routines.
sub CanonicalHex {
my $arg = shift;
return join '', (split '_',$arg);
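
The same canonicalization in a short C sketch (illustrative only; the test-vector value is invented):

```c
#include <stdio.h>

/* Drop the readability underscores from a test-vector address. */
static void canonical_hex(const char *in, char *out) {
    for (; *in != '\0'; in++)
        if (*in != '_')
            *out++ = *in;
    *out = '\0';
}

int main(void) {
    char buf[32];
    canonical_hex("0000_abcd_1234_ffff", buf);
    printf("%s\n", buf);   /* -> 0000abcd1234ffff */
    return 0;
}
```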

src/jemalloc/configure (vendored, 900 changed lines)

File diff suppressed because it is too large

View File

@ -1,6 +1,8 @@
dnl Process this file with autoconf to produce a configure script.
AC_INIT([Makefile.in])
AC_CONFIG_AUX_DIR([build-aux])
dnl ============================================================================
dnl Custom macro definitions.
@ -43,6 +45,9 @@ AC_CACHE_CHECK([whether $1 is compilable],
dnl ============================================================================
CONFIG=`echo ${ac_configure_args} | sed -e 's#'"'"'\([^ ]*\)'"'"'#\1#g'`
AC_SUBST([CONFIG])
dnl Library revision.
rev=2
AC_SUBST([rev])
@ -134,6 +139,8 @@ if test "x$CFLAGS" = "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT])
fi
JE_CFLAGS_APPEND([-Wall])
JE_CFLAGS_APPEND([-Werror=declaration-after-statement])
JE_CFLAGS_APPEND([-Wshorten-64-to-32])
JE_CFLAGS_APPEND([-pipe])
JE_CFLAGS_APPEND([-g3])
elif test "x$je_cv_msvc" = "xyes" ; then
@ -160,13 +167,18 @@ if test "x${je_cv_msvc}" = "xyes" -a "x${ac_cv_header_inttypes_h}" = "xno"; then
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99"
fi
AC_CHECK_SIZEOF([void *])
if test "x${ac_cv_sizeof_void_p}" = "x8" ; then
LG_SIZEOF_PTR=3
elif test "x${ac_cv_sizeof_void_p}" = "x4" ; then
LG_SIZEOF_PTR=2
if test "x${je_cv_msvc}" = "xyes" ; then
LG_SIZEOF_PTR=LG_SIZEOF_PTR_WIN
AC_MSG_RESULT([Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit])
else
AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}])
AC_CHECK_SIZEOF([void *])
if test "x${ac_cv_sizeof_void_p}" = "x8" ; then
LG_SIZEOF_PTR=3
elif test "x${ac_cv_sizeof_void_p}" = "x4" ; then
LG_SIZEOF_PTR=2
else
AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}])
fi
fi
AC_DEFINE_UNQUOTED([LG_SIZEOF_PTR], [$LG_SIZEOF_PTR])
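
What the LG_SIZEOF_PTR value encodes, as a minimal C sketch (the configure logic derives it from `AC_CHECK_SIZEOF`, or from a predefined Windows constant under MSVC):

```c
#include <stdio.h>

int main(void) {
    size_t sz = sizeof(void *);
    unsigned lg = 0;
    while (((size_t)1 << lg) < sz)
        lg++;
    /* 4-byte pointers -> 2, 8-byte pointers -> 3 */
    printf("sizeof(void *) = %zu  ->  LG_SIZEOF_PTR = %u\n", sz, lg);
    return 0;
}
```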
@ -190,6 +202,16 @@ else
fi
AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG], [$LG_SIZEOF_LONG])
AC_CHECK_SIZEOF([long long])
if test "x${ac_cv_sizeof_long_long}" = "x8" ; then
LG_SIZEOF_LONG_LONG=3
elif test "x${ac_cv_sizeof_long_long}" = "x4" ; then
LG_SIZEOF_LONG_LONG=2
else
AC_MSG_ERROR([Unsupported long long size: ${ac_cv_sizeof_long_long}])
fi
AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG_LONG], [$LG_SIZEOF_LONG_LONG])
AC_CHECK_SIZEOF([intmax_t])
if test "x${ac_cv_sizeof_intmax_t}" = "x16" ; then
LG_SIZEOF_INTMAX_T=4
@ -206,22 +228,23 @@ AC_CANONICAL_HOST
dnl CPU-specific settings.
CPU_SPINWAIT=""
case "${host_cpu}" in
i[[345]]86)
;;
i686|x86_64)
JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
dnl emmintrin.h fails to compile unless MMX, SSE, and SSE2 are
dnl supported.
JE_COMPILABLE([SSE2 intrinsics], [
#include <emmintrin.h>
], [], [je_cv_sse2])
if test "x${je_cv_sse2}" = "xyes" ; then
AC_DEFINE_UNQUOTED([HAVE_SSE2], [ ])
if test "x${je_cv_msvc}" = "xyes" ; then
AC_CACHE_VAL([je_cv_pause_msvc],
[JE_COMPILABLE([pause instruction MSVC], [],
[[_mm_pause(); return 0;]],
[je_cv_pause_msvc])])
if test "x${je_cv_pause_msvc}" = "xyes" ; then
CPU_SPINWAIT='_mm_pause()'
fi
else
AC_CACHE_VAL([je_cv_pause],
[JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])])
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
fi
;;
powerpc)
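
The two spin-wait forms probed above, as a small self-contained C sketch (assumes an x86/x86_64 target; MSVC uses the `_mm_pause()` intrinsic because it does not accept GCC-style inline assembly on x64):

```c
#include <stdio.h>
#if defined(_MSC_VER)
#  include <intrin.h>
#  define SPINWAIT() _mm_pause()
#else
#  define SPINWAIT() __asm__ volatile("pause")
#endif

int main(void) {
    for (int i = 0; i < 8; i++)
        SPINWAIT();   /* hint that we are busy-waiting; eases contention and power use */
    puts("spun");
    return 0;
}
```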
@ -263,6 +286,7 @@ dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the
dnl definitions need to be seen before any headers are included, which is a pain
dnl to make happen otherwise.
default_munmap="1"
maps_coalesce="1"
case "${host}" in
*-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS"
@ -273,7 +297,7 @@ case "${host}" in
so="dylib"
importlib="${so}"
force_tls="0"
DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)'
DSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'
SOREV="${rev}.${so}"
sbrk_deprecated="1"
;;
@ -288,7 +312,13 @@ case "${host}" in
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;;
*-*-openbsd*|*-*-bitrig*)
*-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_tls="0"
;;
*-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
@ -300,6 +330,7 @@ case "${host}" in
AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ])
default_munmap="0"
;;
*-*-netbsd*)
@ -338,6 +369,8 @@ case "${host}" in
*-*-mingw* | *-*-cygwin*)
abi="pecoff"
force_tls="0"
force_lazy_lock="1"
maps_coalesce="0"
RPATH=""
so="dll"
if test "x$je_cv_msvc" = "xyes" ; then
@ -426,6 +459,36 @@ if test "x${je_cv_tls_model}" = "xyes" ; then
else
AC_DEFINE([JEMALLOC_TLS_MODEL], [ ])
fi
dnl Check for alloc_size attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>],
[void *foo(size_t size) __attribute__((alloc_size(1)));],
[je_cv_alloc_size])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_alloc_size}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_ALLOC_SIZE], [ ])
fi
dnl Check for format(gnu_printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));],
[je_cv_format_gnu_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_gnu_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF], [ ])
fi
dnl Check for format(printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));],
[je_cv_format_printf])
CFLAGS="${SAVED_CFLAGS}"
if test "x${je_cv_format_printf}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_PRINTF], [ ])
fi
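
A compilable C sketch of the attributes these three checks probe (GCC/Clang extensions; `alloc_size` lets the compiler track the allocation size, the `format` variants type-check printf-style arguments, and the `gnu_printf` spelling selects GNU printf semantics where the default differs, e.g. MinGW). The helper names below are invented for illustration:

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t size) __attribute__((alloc_size(1)));
void log_msg(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL)
        abort();
    return p;
}

void log_msg(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

int main(void) {
    char *buf = xmalloc(16);        /* the compiler knows the result is 16 bytes */
    log_msg("%s %d\n", "ok", 1);    /* mismatched arguments would warn here */
    free(buf);
    return 0;
}
```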
dnl Support optional additions to rpath.
AC_ARG_WITH([rpath],
@ -512,6 +575,7 @@ if test "x$JEMALLOC_PREFIX" != "x" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_PREFIX], ["$JEMALLOC_PREFIX"])
AC_DEFINE_UNQUOTED([JEMALLOC_CPREFIX], ["$JEMALLOC_CPREFIX"])
fi
AC_SUBST([JEMALLOC_CPREFIX])
AC_ARG_WITH([export],
[AS_HELP_STRING([--without-export], [disable exporting jemalloc public APIs])],
@ -539,6 +603,15 @@ AC_ARG_WITH([install_suffix],
install_suffix="$INSTALL_SUFFIX"
AC_SUBST([install_suffix])
dnl Specify default malloc_conf.
AC_ARG_WITH([malloc_conf],
[AS_HELP_STRING([--with-malloc-conf=<malloc_conf>], [config.malloc_conf options string])],
[JEMALLOC_CONFIG_MALLOC_CONF="$with_malloc_conf"],
[JEMALLOC_CONFIG_MALLOC_CONF=""]
)
config_malloc_conf="$JEMALLOC_CONFIG_MALLOC_CONF"
AC_DEFINE_UNQUOTED([JEMALLOC_CONFIG_MALLOC_CONF], ["$config_malloc_conf"])
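
The string baked in here is readable at run time through the `config.malloc_conf` mallctl; a hedged C sketch, assuming an unprefixed build linked against the resulting jemalloc and the usual mallctl convention for string-valued controls:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    const char *conf;
    size_t sz = sizeof(conf);
    if (mallctl("config.malloc_conf", &conf, &sz, NULL, 0) == 0)
        printf("compiled-in malloc_conf: \"%s\"\n", conf);
    return 0;
}
```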
dnl Substitute @je_@ in jemalloc_protos.h.in, primarily to make generation of
dnl jemalloc_protos_jet.h easy.
je_="je_"
@ -630,7 +703,8 @@ fi
dnl Do not compile with debugging by default.
AC_ARG_ENABLE([debug],
[AS_HELP_STRING([--enable-debug], [Build debugging code (implies --enable-ivsalloc)])],
[AS_HELP_STRING([--enable-debug],
[Build debugging code (implies --enable-ivsalloc)])],
[if test "x$enable_debug" = "xno" ; then
enable_debug="0"
else
@ -639,6 +713,9 @@ fi
],
[enable_debug="0"]
)
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
fi
if test "x$enable_debug" = "x1" ; then
AC_DEFINE([JEMALLOC_DEBUG], [ ])
enable_ivsalloc="1"
@ -647,7 +724,8 @@ AC_SUBST([enable_debug])
dnl Do not validate pointers by default.
AC_ARG_ENABLE([ivsalloc],
[AS_HELP_STRING([--enable-ivsalloc], [Validate pointers passed through the public API])],
[AS_HELP_STRING([--enable-ivsalloc],
[Validate pointers passed through the public API])],
[if test "x$enable_ivsalloc" = "xno" ; then
enable_ivsalloc="0"
else
@ -823,6 +901,12 @@ if test "x$enable_tcache" = "x1" ; then
fi
AC_SUBST([enable_tcache])
dnl Indicate whether adjacent virtual memory mappings automatically coalesce
dnl (and fragment on demand).
if test "x${maps_coalesce}" = "x1" ; then
AC_DEFINE([JEMALLOC_MAPS_COALESCE], [ ])
fi
dnl Enable VM deallocation via munmap() by default.
AC_ARG_ENABLE([munmap],
[AS_HELP_STRING([--disable-munmap], [Disable VM deallocation via munmap(2)])],
@ -946,11 +1030,28 @@ if test "x$enable_xmalloc" = "x1" ; then
fi
AC_SUBST([enable_xmalloc])
dnl Support cache-oblivious allocation alignment by default.
AC_ARG_ENABLE([cache-oblivious],
[AS_HELP_STRING([--disable-cache-oblivious],
[Disable support for cache-oblivious allocation alignment])],
[if test "x$enable_cache_oblivious" = "xno" ; then
enable_cache_oblivious="0"
else
enable_cache_oblivious="1"
fi
],
[enable_cache_oblivious="1"]
)
if test "x$enable_cache_oblivious" = "x1" ; then
AC_DEFINE([JEMALLOC_CACHE_OBLIVIOUS], [ ])
fi
AC_SUBST([enable_cache_oblivious])
dnl ============================================================================
dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found.
dnl One of those two functions should (theoretically) exist on all platforms
dnl that jemalloc currently has a chance of functioning on without modification.
dnl We additionally assume ffs() or __builtin_ffs() are defined if
dnl We additionally assume ffs[ll]() or __builtin_ffs[ll]() are defined if
dnl ffsl() or __builtin_ffsl() are defined, respectively.
JE_COMPILABLE([a program using __builtin_ffsl], [
#include <stdio.h>
@ -963,6 +1064,7 @@ JE_COMPILABLE([a program using __builtin_ffsl], [
}
], [je_cv_gcc_builtin_ffsl])
if test "x${je_cv_gcc_builtin_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [__builtin_ffsll])
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs])
else
@ -977,6 +1079,7 @@ else
}
], [je_cv_function_ffsl])
if test "x${je_cv_function_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [ffsll])
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs])
else
@ -984,8 +1087,28 @@ else
fi
fi
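
A short C sketch of the find-first-set primitives the check above looks for (1-based bit index; `__builtin_ffsl` is a GCC/Clang builtin, `ffs()` is POSIX):

```c
#include <stdio.h>
#include <strings.h>   /* ffs() */

int main(void) {
    long x = 0x90;     /* binary 1001 0000: lowest set bit is bit 5 */
    printf("ffs             -> %d\n", ffs((int)x));
    printf("__builtin_ffsl  -> %d\n", __builtin_ffsl(x));
    return 0;
}
```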
AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
[je_cv_static_page_shift],
AC_ARG_WITH([lg_tiny_min],
[AS_HELP_STRING([--with-lg-tiny-min=<lg-tiny-min>],
[Base 2 log of minimum tiny size class to support])],
[LG_TINY_MIN="$with_lg_tiny_min"],
[LG_TINY_MIN="3"])
AC_DEFINE_UNQUOTED([LG_TINY_MIN], [$LG_TINY_MIN])
AC_ARG_WITH([lg_quantum],
[AS_HELP_STRING([--with-lg-quantum=<lg-quantum>],
[Base 2 log of minimum allocation alignment])],
[LG_QUANTA="$with_lg_quantum"],
[LG_QUANTA="3 4"])
if test "x$with_lg_quantum" != "x" ; then
AC_DEFINE_UNQUOTED([LG_QUANTUM], [$with_lg_quantum])
fi
AC_ARG_WITH([lg_page],
[AS_HELP_STRING([--with-lg-page=<lg-page>], [Base 2 log of system page size])],
[LG_PAGE="$with_lg_page"], [LG_PAGE="detect"])
if test "x$LG_PAGE" = "xdetect"; then
AC_CACHE_CHECK([LG_PAGE],
[je_cv_lg_page],
AC_RUN_IFELSE([AC_LANG_PROGRAM(
[[
#include <strings.h>
@ -1016,52 +1139,70 @@ AC_CACHE_CHECK([STATIC_PAGE_SHIFT],
if (f == NULL) {
return 1;
}
fprintf(f, "%d\n", result);
fprintf(f, "%d", result);
fclose(f);
return 0;
]])],
[je_cv_static_page_shift=`cat conftest.out`],
[je_cv_static_page_shift=undefined],
[je_cv_static_page_shift=12]))
if test "x$je_cv_static_page_shift" != "xundefined"; then
AC_DEFINE_UNQUOTED([STATIC_PAGE_SHIFT], [$je_cv_static_page_shift])
else
AC_MSG_ERROR([cannot determine value for STATIC_PAGE_SHIFT])
[je_cv_lg_page=`cat conftest.out`],
[je_cv_lg_page=undefined],
[je_cv_lg_page=12]))
fi
if test "x${je_cv_lg_page}" != "x" ; then
LG_PAGE="${je_cv_lg_page}"
fi
if test "x${LG_PAGE}" != "xundefined" ; then
AC_DEFINE_UNQUOTED([LG_PAGE], [$LG_PAGE])
else
AC_MSG_ERROR([cannot determine value for LG_PAGE])
fi
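
A standalone sketch of what the LG_PAGE probe computes, namely the base-2 log of the system page size (illustrative; the probe above compiles and runs a test program that writes the result to conftest.out, while this version assumes a POSIX `sysconf`):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    int lg = 0;
    while ((1L << lg) < page)
        lg++;
    printf("page size = %ld bytes  ->  LG_PAGE = %d\n", page, lg);
    return 0;
}
```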
AC_ARG_WITH([lg_page_sizes],
[AS_HELP_STRING([--with-lg-page-sizes=<lg-page-sizes>],
[Base 2 logs of system page sizes to support])],
[LG_PAGE_SIZES="$with_lg_page_sizes"], [LG_PAGE_SIZES="$LG_PAGE"])
AC_ARG_WITH([lg_size_class_group],
[AS_HELP_STRING([--with-lg-size-class-group=<lg-size-class-group>],
[Base 2 log of size classes per doubling])],
[LG_SIZE_CLASS_GROUP="$with_lg_size_class_group"],
[LG_SIZE_CLASS_GROUP="2"])
dnl ============================================================================
dnl jemalloc configuration.
dnl
dnl Set VERSION if source directory is inside a git repository.
if test "x`git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
dnl Pattern globs aren't powerful enough to match both single- and
dnl double-digit version numbers, so iterate over patterns to support up to
dnl version 99.99.99 without any accidental matches.
rm -f "${srcroot}VERSION"
rm -f "${objroot}VERSION"
for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
if test ! -e "${srcroot}VERSION" ; then
git describe --long --abbrev=40 --match="${pattern}" > "${srcroot}VERSION.tmp" 2>/dev/null
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${srcroot}VERSION.tmp" "${srcroot}VERSION"
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi
done
fi
rm -f "${srcroot}VERSION.tmp"
if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT(
[Missing VERSION file, and unable to generate it; creating bogus VERSION])
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${srcroot}VERSION"
rm -f "${objroot}VERSION.tmp"
if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT(
[Missing VERSION file, and unable to generate it; creating bogus VERSION])
echo "0.0.0-0-g0000000000000000000000000000000000000000" > "${objroot}VERSION"
else
cp ${srcroot}VERSION ${objroot}VERSION
fi
fi
jemalloc_version=`cat "${srcroot}VERSION"`
jemalloc_version=`cat "${objroot}VERSION"`
jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]1}'`
jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]2}'`
jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]3}'`
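
The VERSION string produced above has the shape `<major>.<minor>.<bugfix>-<nrev>-g<gid>` (see the bogus fallback value). A small C sketch of the same split, a hypothetical parser for illustration only:

```c
#include <stdio.h>

int main(void) {
    const char *version = "0.0.0-0-g0000000000000000000000000000000000000000";
    int major, minor, bugfix, nrev;
    char gid[41];
    if (sscanf(version, "%d.%d.%d-%d-g%40s", &major, &minor, &bugfix, &nrev, gid) == 5)
        printf("major=%d minor=%d bugfix=%d nrev=%d gid=%.8s...\n",
               major, minor, bugfix, nrev, gid);
    return 0;
}
```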
@ -1088,6 +1229,32 @@ fi
CPPFLAGS="$CPPFLAGS -D_REENTRANT"
dnl Check whether clock_gettime(2) is in libc or librt. This function is only
dnl used in test code, so save the result to TESTLIBS to avoid polluting LIBS.
SAVED_LIBS="${LIBS}"
LIBS=
AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
AC_SUBST([TESTLIBS])
LIBS="${SAVED_LIBS}"
dnl Check if the GNU-specific secure_getenv function exists.
AC_CHECK_FUNC([secure_getenv],
[have_secure_getenv="1"],
[have_secure_getenv="0"]
)
if test "x$have_secure_getenv" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_SECURE_GETENV], [ ])
fi
dnl Check if the Solaris/BSD issetugid function exists.
AC_CHECK_FUNC([issetugid],
[have_issetugid="1"],
[have_issetugid="0"]
)
if test "x$have_issetugid" = "x1" ; then
AC_DEFINE([JEMALLOC_HAVE_ISSETUGID], [ ])
fi
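
A hedged C sketch of the pattern these two checks enable, honoring environment-based configuration only when it is safe to do so (assumes glibc 2.17+ for `secure_getenv`, or a BSD/Solaris `issetugid`):

```c
#define _GNU_SOURCE      /* for secure_getenv on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const char *conf;
#if defined(__GLIBC__)
    conf = secure_getenv("MALLOC_CONF");             /* NULL when running setuid/setgid */
#else
    conf = issetugid() ? NULL : getenv("MALLOC_CONF");
#endif
    printf("MALLOC_CONF = %s\n", conf ? conf : "(unset or ignored)");
    return 0;
}
```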
dnl Check whether the BSD-specific _malloc_thread_cleanup() exists. If so, use
dnl it rather than pthreads TSD cleanup functions to support cleanup during
dnl thread exit, in order to avoid pthreads library recursion during
@ -1122,9 +1289,9 @@ else
enable_lazy_lock="1"
fi
],
[enable_lazy_lock="0"]
[enable_lazy_lock=""]
)
if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then
if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues])
enable_lazy_lock="1"
fi
@ -1137,6 +1304,8 @@ if test "x$enable_lazy_lock" = "x1" ; then
])
fi
AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ])
else
enable_lazy_lock="0"
fi
AC_SUBST([enable_lazy_lock])
@ -1148,15 +1317,18 @@ else
enable_tls="1"
fi
,
enable_tls="1"
enable_tls=""
)
if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then
AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues])
enable_tls="1"
fi
if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then
AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues])
enable_tls="0"
if test "x${enable_tls}" = "x" ; then
if test "x${force_tls}" = "x1" ; then
AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues])
enable_tls="1"
elif test "x${force_tls}" = "x0" ; then
AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues])
enable_tls="0"
else
enable_tls="1"
fi
fi
if test "x${enable_tls}" = "x1" ; then
AC_MSG_CHECKING([for TLS])
@ -1171,12 +1343,38 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
AC_MSG_RESULT([yes]),
AC_MSG_RESULT([no])
enable_tls="0")
else
enable_tls="0"
fi
AC_SUBST([enable_tls])
if test "x${enable_tls}" = "x1" ; then
if test "x${force_tls}" = "x0" ; then
AC_MSG_WARN([TLS enabled despite being marked unusable on this platform])
fi
AC_DEFINE_UNQUOTED([JEMALLOC_TLS], [ ])
elif test "x${force_tls}" = "x1" ; then
AC_MSG_ERROR([Failed to configure TLS, which is mandatory for correct function])
AC_MSG_WARN([TLS disabled despite being marked critical on this platform])
fi
dnl ============================================================================
dnl Check for C11 atomics.
JE_COMPILABLE([C11 atomics], [
#include <stdint.h>
#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
#include <stdatomic.h>
#else
#error Atomics not available
#endif
], [
uint64_t *p = (uint64_t *)0;
uint64_t x = 1;
volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;
uint64_t r = atomic_fetch_add(a, x) + x;
return (r == 0);
], [je_cv_c11atomics])
if test "x${je_cv_c11atomics}" = "xyes" ; then
AC_DEFINE([JEMALLOC_C11ATOMICS])
fi
dnl ============================================================================
@ -1333,7 +1531,6 @@ if test "x${enable_zone_allocator}" = "x1" ; then
if test "x${abi}" != "xmacho"; then
AC_MSG_ERROR([--enable-zone-allocator is only supported on Darwin])
fi
AC_DEFINE([JEMALLOC_IVSALLOC], [ ])
AC_DEFINE([JEMALLOC_ZONE], [ ])
dnl The szone version jumped from 3 to 6 between the OS X 10.5.x and 10.6
@ -1343,7 +1540,7 @@ if test "x${enable_zone_allocator}" = "x1" ; then
AC_DEFUN([JE_ZONE_PROGRAM],
[AC_LANG_PROGRAM(
[#include <malloc/malloc.h>],
[static foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
[static int foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]]
)])
AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,14)],[JEMALLOC_ZONE_VERSION=3],[
@ -1471,10 +1668,15 @@ AC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [
])
AC_CONFIG_COMMANDS([include/jemalloc/internal/size_classes.h], [
mkdir -p "${objroot}include/jemalloc/internal"
"${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h"
"${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
], [
SHELL="${SHELL}"
srcdir="${srcdir}"
objroot="${objroot}"
LG_QUANTA="${LG_QUANTA}"
LG_TINY_MIN=${LG_TINY_MIN}
LG_PAGE_SIZES="${LG_PAGE_SIZES}"
LG_SIZE_CLASS_GROUP=${LG_SIZE_CLASS_GROUP}
])
AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_protos_jet.h], [
mkdir -p "${objroot}include/jemalloc"
@ -1521,7 +1723,7 @@ AC_CONFIG_HEADERS([$cfghdrs_tup])
dnl ============================================================================
dnl Generate outputs.
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc.sh])
AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof])
AC_SUBST([cfgoutputs_in])
AC_SUBST([cfgoutputs_out])
AC_OUTPUT
@ -1532,12 +1734,14 @@ AC_MSG_RESULT([=================================================================
AC_MSG_RESULT([jemalloc version : ${jemalloc_version}])
AC_MSG_RESULT([library revision : ${rev}])
AC_MSG_RESULT([])
AC_MSG_RESULT([CONFIG : ${CONFIG}])
AC_MSG_RESULT([CC : ${CC}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([CFLAGS : ${CFLAGS}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
AC_MSG_RESULT([LIBS : ${LIBS}])
AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
AC_MSG_RESULT([])
AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
@ -1545,9 +1749,9 @@ AC_MSG_RESULT([XSLROOT : ${XSLROOT}])
AC_MSG_RESULT([])
AC_MSG_RESULT([PREFIX : ${PREFIX}])
AC_MSG_RESULT([BINDIR : ${BINDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([INCLUDEDIR : ${INCLUDEDIR}])
AC_MSG_RESULT([LIBDIR : ${LIBDIR}])
AC_MSG_RESULT([DATADIR : ${DATADIR}])
AC_MSG_RESULT([MANDIR : ${MANDIR}])
AC_MSG_RESULT([])
AC_MSG_RESULT([srcroot : ${srcroot}])
@ -1559,6 +1763,7 @@ AC_MSG_RESULT([JEMALLOC_PREFIX : ${JEMALLOC_PREFIX}])
AC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE])
AC_MSG_RESULT([ : ${JEMALLOC_PRIVATE_NAMESPACE}])
AC_MSG_RESULT([install_suffix : ${install_suffix}])
AC_MSG_RESULT([malloc_conf : ${config_malloc_conf}])
AC_MSG_RESULT([autogen : ${enable_autogen}])
AC_MSG_RESULT([cc-silence : ${enable_cc_silence}])
AC_MSG_RESULT([debug : ${enable_debug}])
@ -1576,4 +1781,5 @@ AC_MSG_RESULT([xmalloc : ${enable_xmalloc}])
AC_MSG_RESULT([munmap : ${enable_munmap}])
AC_MSG_RESULT([lazy_lock : ${enable_lazy_lock}])
AC_MSG_RESULT([tls : ${enable_tls}])
AC_MSG_RESULT([cache-oblivious : ${enable_cache_oblivious}])
AC_MSG_RESULT([===============================================================================])

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff