diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 4c0f93c370..cdbabeaddf 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -86,13 +86,17 @@ benchmarks, generate documentation, install a fresh build of Rust, and more.
It's your best friend when working on Rust, allowing you to compile & test
your contributions before submission.
-All the configuration for the build system lives in [the `mk` directory][mkdir]
-in the project root. It can be hard to follow in places, as it uses some
-advanced Make features which make for some challenging reading. If you have
-questions on the build system internals, try asking in
-[`#rust-internals`][pound-rust-internals].
+The build system lives in [the `src/bootstrap` directory][bootstrap] in the
+project root. Our build system is itself written in Rust and is based on Cargo
+to actually build all the compiler's crates. If you have questions on the build
+system internals, try asking in [`#rust-internals`][pound-rust-internals].
-[mkdir]: https://github.com/rust-lang/rust/tree/master/mk/
+[bootstrap]: https://github.com/rust-lang/rust/tree/master/src/bootstrap/
+
+> **Note**: the build system was recently rewritten from a jungle of makefiles
+> to the current incarnation you'll see in `src/bootstrap`. If you experience
+> bugs you can temporarily revert back to the makefiles with
+> `--disable-rustbuild` passed to `./configure`.
### Configuration
@@ -119,42 +123,112 @@ configuration used later in the build process. Some options to note:
To see a full list of options, run `./configure --help`.
-### Useful Targets
+### Building
-Some common make targets are:
+Although the `./configure` script will generate a `Makefile`, this is actually
+just a thin veneer over the actual build system driver, `x.py`. This file, at
+the root of the repository, is used to build, test, and document various parts
+of the compiler. You can execute it as:
-- `make tips` - show useful targets, variables and other tips for working with
-  the build system.
-- `make rustc-stage1` - build up to (and including) the first stage. For most
-  cases we don't need to build the stage2 compiler, so we can save time by not
-  building it. The stage1 compiler is a fully functioning compiler and
-  (probably) will be enough to determine if your change works as expected.
-- `make $host/stage1/bin/rustc` - Where $host is a target triple like x86_64-unknown-linux-gnu.
-  This will build just rustc, without libstd. This is the fastest way to recompile after
-  you changed only rustc source code. Note however that the resulting rustc binary
-  won't have a stdlib to link against by default. You can build libstd once with
-  `make rustc-stage1`, rustc will pick it up afterwards. libstd is only guaranteed to
-  work if recompiled, so if there are any issues recompile it.
-- `make check` - build the full compiler & run all tests (takes a while). This
+```sh
+python x.py build
+```
+
+On some systems you can also use the shorter version:
+
+```sh
+./x.py build
+```
+
+To learn more about the driver and top-level targets, you can execute:
+
+```sh
+python x.py --help
+```
+
+The general format for the driver script is:
+
+```sh
+python x.py <command> [<directory>]
+```
+
+Some example commands are `build`, `test`, and `doc`. These will build, test,
+and document the specified directory. The second argument, `<directory>`, is
+optional and defaults to working over the entire compiler. If specified,
+however, only that specific directory will be built.
For example:
+
+```sh
+# build the entire compiler
+python x.py build
+
+# build all documentation
+python x.py doc
+
+# run all test suites
+python x.py test
+
+# build only the standard library
+python x.py build src/libstd
+
+# test only one particular test suite
+python x.py test src/test/rustdoc
+
+# build only the stage0 libcore library
+python x.py build src/libcore --stage 0
+```
+
+You can explore the build system through the various `--help` pages for each
+subcommand. For example, to learn more about a command you can run:
+
+```
+python x.py build --help
+```
+
+To learn about all possible rules you can execute, run:
+
+```
+python x.py build --help --verbose
+```
+
+### Useful commands
+
+Some common invocations of `x.py` are:
+
+- `x.py build --help` - show the help message and explain the subcommand
+- `x.py build src/libtest --stage 1` - build up to (and including) the first
+  stage. For most cases we don't need to build the stage2 compiler, so we can
+  save time by not building it. The stage1 compiler is a fully functioning
+  compiler and (probably) will be enough to determine if your change works as
+  expected.
+- `x.py build src/rustc --stage 1` - This will build just rustc, without libstd.
+  This is the fastest way to recompile after you changed only rustc source code.
+  Note however that the resulting rustc binary won't have a stdlib to link
+  against by default. You can build libstd once with `x.py build src/libstd`,
+  but it is only guaranteed to work if recompiled, so if there are any issues
+  recompile it.
+- `x.py test` - build the full compiler & run all tests (takes a while). This
is what gets run by the continuous integration system against your pull
request. You should run this before submitting to make sure your tests pass
& everything builds in the correct manner.
-- `make check-stage1-std NO_REBUILD=1` - test the standard library without
-  rebuilding the entire compiler
-- `make check TESTNAME=<substring-of-test-name>` - Run a matching set of tests.
+- `x.py test src/libstd --stage 1` - test the standard library without
+  recompiling stage 2.
+- `x.py test src/test/run-pass --test-args TESTNAME` - Run a matching set of
+  tests.
  - `TESTNAME` should be a substring of the tests to match against e.g. it could
    be the fully qualified test name, or just a part of it.
    `TESTNAME=collections::hash::map::test_map::test_capacity_not_less_than_len`
    or `TESTNAME=test_capacity_not_less_than_len`.
-- `make check-stage1-rpass TESTNAME=<substring-of-test-name>` - Run a single
-  rpass test with the stage1 compiler (this will be quicker than running the
-  command above as we only build the stage1 compiler, not the entire thing).
-  You can also leave off the `-rpass` to run all stage1 test types.
-- `make check-stage1-coretest` - Run stage1 tests in `libcore`.
-- `make tidy` - Check that the source code is in compliance with Rust's style
-  guidelines. There is no official document describing Rust's full guidelines
-  as of yet, but basic rules like 4 spaces for indentation and no more than 99
-  characters in a single line should be kept in mind when writing code.
+- `x.py test src/test/run-pass --stage 1 --test-args <substring-of-test-name>` -
+  Run a single rpass test with the stage1 compiler (this will be quicker than
+  running the command above as we only build the stage1 compiler, not the entire
+  thing). You can also leave off the directory argument to run all stage1 test
+  types.
+- `x.py test src/libcore --stage 1` - Run stage1 tests in `libcore`.
+- `x.py test src/tools/tidy` - Check that the source code is in compliance with + Rust's style guidelines. There is no official document describing Rust's full + guidelines as of yet, but basic rules like 4 spaces for indentation and no + more than 99 characters in a single line should be kept in mind when writing + code. ## Pull Requests @@ -172,19 +246,17 @@ amount of time you have to wait. You need to have built the compiler at least once before running these will work, but that’s only one full build rather than one each time. - $ make -j8 rustc-stage1 && make check-stage1 + $ python x.py test --stage 1 is one such example, which builds just `rustc`, and then runs the tests. If you’re adding something to the standard library, try - $ make -j8 check-stage1-std NO_REBUILD=1 - -This will not rebuild the compiler, but will run the tests. + $ python x.py test src/libstd --stage 1 Please make sure your pull request is in compliance with Rust's style guidelines by running - $ make tidy + $ python x.py test src/tools/tidy Make this check before every pull request (and every new commit in a pull request) ; you can add [git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) @@ -213,7 +285,7 @@ been approved. The PR then enters the [merge queue][merge-queue], where @bors will run all the tests on every platform we support. If it all works out, @bors will merge your code into `master` and close the pull request. -[merge-queue]: http://buildbot.rust-lang.org/homu/queue/rust +[merge-queue]: https://buildbot.rust-lang.org/homu/queue/rust Speaking of tests, Rust has a comprehensive test suite. More information about it can be found @@ -332,4 +404,4 @@ are: [tlgba]: http://tomlee.co/2014/04/a-more-detailed-tour-of-the-rust-compiler/ [ro]: http://www.rustaceans.org/ [rctd]: ./COMPILER_TESTS.md -[cheatsheet]: http://buildbot.rust-lang.org/homu/ +[cheatsheet]: https://buildbot.rust-lang.org/homu/ diff --git a/README.md b/README.md index 7360651095..2133b17de0 100644 --- a/README.md +++ b/README.md @@ -36,16 +36,14 @@ Read ["Installing Rust"] from [The Book]. ```sh $ ./configure - $ make && make install + $ make && sudo make install ``` - > ***Note:*** You may need to use `sudo make install` if you do not - > normally have permission to modify the destination directory. The - > install locations can be adjusted by passing a `--prefix` argument - > to `configure`. Various other options are also supported – pass + > ***Note:*** Install locations can be adjusted by passing a `--prefix` + > argument to `configure`. Various other options are also supported – pass > `--help` for more information on them. - When complete, `make install` will place several programs into + When complete, `sudo make install` will place several programs into `/usr/local/bin`: `rustc`, the Rust compiler, and `rustdoc`, the API-documentation tool. This install does not include [Cargo], Rust's package manager, which you may also want to build. @@ -108,30 +106,22 @@ MSVC builds of Rust additionally require an installation of Visual Studio 2013 (or later) so `rustc` can use its linker. Make sure to check the “C++ tools” option. 
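If the build system does not pick up a supported Visual Studio on its own, the environment can be prepared manually before invoking the build, as the vcvars workaround further below describes. This is an illustrative sketch only; the path is an assumption and varies by Visual Studio edition and install location:

```sh
> REM Select the MSVC toolchain first (path shown is for Visual Studio 2013, "12.0";
> REM the architecture argument can be adjusted or omitted)
> CALL "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x86_amd64
> python x.py build
```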
-With these dependencies installed, the build takes two steps: +With these dependencies installed, you can build the compiler in a `cmd.exe` +shell with: ```sh -$ ./configure +> python x.py build +``` + +If you're running inside of an msys shell, however, you can run: + +```sh +$ ./configure --build=x86_64-pc-windows-msvc $ make && make install ``` -#### MSVC with rustbuild - -The old build system, based on makefiles, is currently being rewritten into a -Rust-based build system called rustbuild. This can be used to bootstrap the -compiler on MSVC without needing to install MSYS or MinGW. All you need are -[Python 2](https://www.python.org/downloads/), -[CMake](https://cmake.org/download/), and -[Git](https://git-scm.com/downloads) in your PATH (make sure you do not use the -ones from MSYS if you have it installed). You'll also need Visual Studio 2013 or -newer with the C++ tools. Then all you need to do is to kick off rustbuild. - -``` -python x.py build -``` - -Currently rustbuild only works with some known versions of Visual Studio. If you -have a more recent version installed that a part of rustbuild doesn't understand +Currently building Rust only works with some known versions of Visual Studio. If +you have a more recent version installed the build system doesn't understand then you may need to force rustbuild to use an older version. This can be done by manually calling the appropriate vcvars file before running the bootstrap. @@ -149,16 +139,6 @@ $ ./configure $ make docs ``` -Building the documentation requires building the compiler, so the above -details will apply. Once you have the compiler built, you can - -```sh -$ make docs NO_REBUILD=1 -``` - -To make sure you don’t re-build the compiler because you made a change -to some documentation. - The generated documentation will appear in a top-level `doc` directory, created by the `make` rule. diff --git a/RELEASES.md b/RELEASES.md index 222ad3aa11..b8ddeaf32b 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -1,3 +1,720 @@ +Version 1.15.0 (2017-02-02) +=========================== + +Language +-------- + +* Basic procedural macros allowing custom `#[derive]`, aka "macros 1.1", are + stable. This allows popular code-generating crates like Serde and Diesel to + work ergonomically. [RFC 1681]. +* [Tuple structs may be empty. Unary and empty tuple structs may be instantiated + with curly braces][36868]. Part of [RFC 1506]. +* [A number of minor changes to name resolution have been activated][37127]. + They add up to more consistent semantics, allowing for future evolution of + Rust macros. Specified in [RFC 1560], see its section on ["changes"] for + details of what is different. The breaking changes here have been transitioned + through the [`legacy_imports`] lint since 1.14, with no known regressions. +* [In `macro_rules`, `path` fragments can now be parsed as type parameter + bounds][38279] +* [`?Sized` can be used in `where` clauses][37791] +* [There is now a limit on the size of monomorphized types and it can be + modified with the `#![type_size_limit]` crate attribute, similarly to + the `#![recursion_limit]` attribute][37789] + +Compiler +-------- + +* [On Windows, the compiler will apply dllimport attributes when linking to + extern functions][37973]. Additional attributes and flags can control which + library kind is linked and its name. [RFC 1717]. 
+* [Rust-ABI symbols are no longer exported from cdylibs][38117] +* [The `--test` flag works with procedural macro crates][38107] +* [Fix `extern "aapcs" fn` ABI][37814] +* [The `-C no-stack-check` flag is deprecated][37636]. It does nothing. +* [The `format!` expander recognizes incorrect `printf` and shell-style + formatting directives and suggests the correct format][37613]. +* [Only report one error for all unused imports in an import list][37456] + +Compiler Performance +-------------------- + +* [Avoid unnecessary `mk_ty` calls in `Ty::super_fold_with`][37705] +* [Avoid more unnecessary `mk_ty` calls in `Ty::super_fold_with`][37979] +* [Don't clone in `UnificationTable::probe`][37848] +* [Remove `scope_auxiliary` to cut RSS by 10%][37764] +* [Use small vectors in type walker][37760] +* [Macro expansion performance was improved][37701] +* [Change `HirVec>` to `HirVec` in `hir::Expr`][37642] +* [Replace FNV with a faster hash function][37229] + +Stabilized APIs +--------------- + +* [`std::iter::Iterator::min_by`] +* [`std::iter::Iterator::max_by`] +* [`std::os::*::fs::FileExt`] +* [`std::sync::atomic::Atomic*::get_mut`] +* [`std::sync::atomic::Atomic*::into_inner`] +* [`std::vec::IntoIter::as_slice`] +* [`std::vec::IntoIter::as_mut_slice`] +* [`std::sync::mpsc::Receiver::try_iter`] +* [`std::os::unix::process::CommandExt::before_exec`] +* [`std::rc::Rc::strong_count`] +* [`std::rc::Rc::weak_count`] +* [`std::sync::Arc::strong_count`] +* [`std::sync::Arc::weak_count`] +* [`std::char::encode_utf8`] +* [`std::char::encode_utf16`] +* [`std::cell::Ref::clone`] +* [`std::io::Take::into_inner`] + +Libraries +--------- + +* [The standard sorting algorithm has been rewritten for dramatic performance + improvements][38192]. It is a hybrid merge sort, drawing influences from + Timsort. Previously it was a naive merge sort. +* [`Iterator::nth` no longer has a `Sized` bound][38134] +* [`Extend<&T>` is specialized for `Vec` where `T: Copy`][38182] to improve + performance. +* [`chars().count()` is much faster][37888] and so are [`chars().last()` + and `char_indices().last()`][37882] +* [Fix ARM Objective-C ABI in `std::env::args`][38146] +* [Chinese characters display correctly in `fmt::Debug`][37855] +* [Derive `Default` for `Duration`][37699] +* [Support creation of anonymous pipes on WinXP/2k][37677] +* [`mpsc::RecvTimeoutError` implements `Error`][37527] +* [Don't pass overlapped handles to processes][38835] + +Cargo +----- + +* [In this release, Cargo build scripts no longer have access to the `OUT_DIR` + environment variable at build time via `env!("OUT_DIR")`][cargo/3368]. They + should instead check the variable at runtime with `std::env`. That the value + was set at build time was a bug, and incorrect when cross-compiling. This + change is known to cause breakage. 
+* [Add `--all` flag to `cargo test`][cargo/3221] +* [Compile statically against the MSVC CRT][cargo/3363] +* [Mix feature flags into fingerprint/metadata shorthash][cargo/3102] +* [Link OpenSSL statically on OSX][cargo/3311] +* [Apply new fingerprinting to build dir outputs][cargo/3310] +* [Test for bad path overrides with summaries][cargo/3336] +* [Require `cargo install --vers` to take a semver version][cargo/3338] +* [Fix retrying crate downloads for network errors][cargo/3348] +* [Implement string lookup for `build.rustflags` config key][cargo/3356] +* [Emit more info on --message-format=json][cargo/3319] +* [Assume `build.rs` in the same directory as `Cargo.toml` is a build script][cargo/3361] +* [Don't ignore errors in workspace manifest][cargo/3409] +* [Fix `--message-format JSON` when rustc emits non-JSON warnings][cargo/3410] + +Tooling +------- + +* [Test runners (binaries built with `--test`) now support a `--list` argument + that lists the tests it contains][38185] +* [Test runners now support a `--exact` argument that makes the test filter + match exactly, instead of matching only a substring of the test name][38181] +* [rustdoc supports a `--playground-url` flag][37763] +* [rustdoc provides more details about `#[should_panic]` errors][37749] + +Misc +---- + +* [The Rust build system is now written in Rust][37817]. The Makefiles may + continue to be used in this release by passing `--disable-rustbuild` to the + configure script, but they will be deleted soon. Note that the new build + system uses a different on-disk layout that will likely affect any scripts + building Rust. +* [Rust supports i686-unknown-openbsd][38086]. Tier 3 support. No testing or + releases. +* [Rust supports the MSP430][37627]. Tier 3 support. No testing or releases. +* [Rust supports the ARMv5TE architecture][37615]. Tier 3 support. No testing or + releases. + +Compatibility Notes +------------------- + +* [A number of minor changes to name resolution have been activated][37127]. + They add up to more consistent semantics, allowing for future evolution of + Rust macros. Specified in [RFC 1560], see its section on ["changes"] for + details of what is different. The breaking changes here have been transitioned + through the [`legacy_imports`] lint since 1.14, with no known regressions. +* [In this release, Cargo build scripts no longer have access to the `OUT_DIR` + environment variable at build time via `env!("OUT_DIR")`][cargo/3368]. They + should instead check the variable at runtime with `std::env`. That the value + was set at build time was a bug, and incorrect when cross-compiling. This + change is known to cause breakage. +* [Higher-ranked lifetimes are no longer allowed to appear _only_ in associated + types][33685]. The [`hr_lifetime_in_assoc_type` lint] has been a warning since + 1.10 and is now an error by default. It will become a hard error in the near + future. +* [The semantics relating modules to file system directories are changing in + minor ways][37602]. This is captured in the new `legacy_directory_ownership` + lint, which is a warning in this release, and will become a hard error in the + future. 
+* [Rust-ABI symbols are no longer exported from cdylibs][38117] +* [Once `Peekable` peeks a `None` it will return that `None` without re-querying + the underlying iterator][37834] + +["changes"]: https://github.com/rust-lang/rfcs/blob/master/text/1560-name-resolution.md#changes-to-name-resolution-rules +[33685]: https://github.com/rust-lang/rust/issues/33685 +[36868]: https://github.com/rust-lang/rust/pull/36868 +[37127]: https://github.com/rust-lang/rust/pull/37127 +[37229]: https://github.com/rust-lang/rust/pull/37229 +[37456]: https://github.com/rust-lang/rust/pull/37456 +[37527]: https://github.com/rust-lang/rust/pull/37527 +[37602]: https://github.com/rust-lang/rust/pull/37602 +[37613]: https://github.com/rust-lang/rust/pull/37613 +[37615]: https://github.com/rust-lang/rust/pull/37615 +[37636]: https://github.com/rust-lang/rust/pull/37636 +[37642]: https://github.com/rust-lang/rust/pull/37642 +[37677]: https://github.com/rust-lang/rust/pull/37677 +[37699]: https://github.com/rust-lang/rust/pull/37699 +[37701]: https://github.com/rust-lang/rust/pull/37701 +[37705]: https://github.com/rust-lang/rust/pull/37705 +[37749]: https://github.com/rust-lang/rust/pull/37749 +[37760]: https://github.com/rust-lang/rust/pull/37760 +[37763]: https://github.com/rust-lang/rust/pull/37763 +[37764]: https://github.com/rust-lang/rust/pull/37764 +[37789]: https://github.com/rust-lang/rust/pull/37789 +[37791]: https://github.com/rust-lang/rust/pull/37791 +[37814]: https://github.com/rust-lang/rust/pull/37814 +[37817]: https://github.com/rust-lang/rust/pull/37817 +[37834]: https://github.com/rust-lang/rust/pull/37834 +[37848]: https://github.com/rust-lang/rust/pull/37848 +[37855]: https://github.com/rust-lang/rust/pull/37855 +[37882]: https://github.com/rust-lang/rust/pull/37882 +[37888]: https://github.com/rust-lang/rust/pull/37888 +[37973]: https://github.com/rust-lang/rust/pull/37973 +[37979]: https://github.com/rust-lang/rust/pull/37979 +[38086]: https://github.com/rust-lang/rust/pull/38086 +[38107]: https://github.com/rust-lang/rust/pull/38107 +[38117]: https://github.com/rust-lang/rust/pull/38117 +[38134]: https://github.com/rust-lang/rust/pull/38134 +[38146]: https://github.com/rust-lang/rust/pull/38146 +[38181]: https://github.com/rust-lang/rust/pull/38181 +[38182]: https://github.com/rust-lang/rust/pull/38182 +[38185]: https://github.com/rust-lang/rust/pull/38185 +[38192]: https://github.com/rust-lang/rust/pull/38192 +[38279]: https://github.com/rust-lang/rust/pull/38279 +[38835]: https://github.com/rust-lang/rust/pull/38835 +[RFC 1492]: https://github.com/rust-lang/rfcs/blob/master/text/1492-dotdot-in-patterns.md +[RFC 1506]: https://github.com/rust-lang/rfcs/blob/master/text/1506-adt-kinds.md +[RFC 1560]: https://github.com/rust-lang/rfcs/blob/master/text/1560-name-resolution.md +[RFC 1681]: https://github.com/rust-lang/rfcs/blob/master/text/1681-macros-1.1.md +[RFC 1717]: https://github.com/rust-lang/rfcs/blob/master/text/1717-dllimport.md +[`hr_lifetime_in_assoc_type` lint]: https://github.com/rust-lang/rust/issues/33685 +[`legacy_imports`]: https://github.com/rust-lang/rust/pull/38271 +[cargo/3102]: https://github.com/rust-lang/cargo/pull/3102 +[cargo/3221]: https://github.com/rust-lang/cargo/pull/3221 +[cargo/3310]: https://github.com/rust-lang/cargo/pull/3310 +[cargo/3311]: https://github.com/rust-lang/cargo/pull/3311 +[cargo/3319]: https://github.com/rust-lang/cargo/pull/3319 +[cargo/3336]: https://github.com/rust-lang/cargo/pull/3336 +[cargo/3338]: 
https://github.com/rust-lang/cargo/pull/3338 +[cargo/3348]: https://github.com/rust-lang/cargo/pull/3348 +[cargo/3356]: https://github.com/rust-lang/cargo/pull/3356 +[cargo/3361]: https://github.com/rust-lang/cargo/pull/3361 +[cargo/3363]: https://github.com/rust-lang/cargo/pull/3363 +[cargo/3368]: https://github.com/rust-lang/cargo/issues/3368 +[cargo/3409]: https://github.com/rust-lang/cargo/pull/3409 +[cargo/3410]: https://github.com/rust-lang/cargo/pull/3410 +[`std::iter::Iterator::min_by`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.min_by +[`std::iter::Iterator::max_by`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.max_by +[`std::os::*::fs::FileExt`]: https://doc.rust-lang.org/std/os/unix/fs/trait.FileExt.html +[`std::sync::atomic::Atomic*::get_mut`]: https://doc.rust-lang.org/std/sync/atomic/struct.AtomicU8.html#method.get_mut +[`std::sync::atomic::Atomic*::into_inner`]: https://doc.rust-lang.org/std/sync/atomic/struct.AtomicU8.html#method.into_inner +[`std::vec::IntoIter::as_slice`]: https://doc.rust-lang.org/std/vec/struct.IntoIter.html#method.as_slice +[`std::vec::IntoIter::as_mut_slice`]: https://doc.rust-lang.org/std/vec/struct.IntoIter.html#method.as_mut_slice +[`std::sync::mpsc::Receiver::try_iter`]: https://doc.rust-lang.org/std/sync/mpsc/struct.Receiver.html#method.try_iter +[`std::os::unix::process::CommandExt::before_exec`]: https://doc.rust-lang.org/std/os/unix/process/trait.CommandExt.html#tymethod.before_exec +[`std::rc::Rc::strong_count`]: https://doc.rust-lang.org/std/rc/struct.Rc.html#method.strong_count +[`std::rc::Rc::weak_count`]: https://doc.rust-lang.org/std/rc/struct.Rc.html#method.weak_count +[`std::sync::Arc::strong_count`]: https://doc.rust-lang.org/std/sync/struct.Arc.html#method.strong_count +[`std::sync::Arc::weak_count`]: https://doc.rust-lang.org/std/sync/struct.Arc.html#method.weak_count +[`std::char::encode_utf8`]: https://doc.rust-lang.org/std/primitive.char.html#method.encode_utf8 +[`std::char::encode_utf16`]: https://doc.rust-lang.org/std/primitive.char.html#method.encode_utf16 +[`std::cell::Ref::clone`]: https://doc.rust-lang.org/std/cell/struct.Ref.html#method.clone +[`std::io::Take::into_inner`]: https://doc.rust-lang.org/std/io/struct.Take.html#method.into_inner + + +Version 1.14.0 (2016-12-22) +=========================== + +Language +-------- + +* [`..` matches multiple tuple fields in enum variants, structs + and tuples][36843]. [RFC 1492]. +* [Safe `fn` items can be coerced to `unsafe fn` pointers][37389] +* [`use *` and `use ::*` both glob-import from the crate root][37367] +* [It's now possible to call a `Vec>` without explicit + dereferencing][36822] + +Compiler +-------- + +* [Mark enums with non-zero discriminant as non-zero][37224] +* [Lower-case `static mut` names are linted like other + statics and consts][37162] +* [Fix ICE on some macros in const integer positions + (e.g. 
`[u8; m!()]`)][36819] +* [Improve error message and snippet for "did you mean `x`"][36798] +* [Add a panic-strategy field to the target specification][36794] +* [Include LLVM version in `--version --verbose`][37200] + +Compile-time Optimizations +-------------------------- + +* [Improve macro expansion performance][37569] +* [Shrink `Expr_::ExprInlineAsm`][37445] +* [Replace all uses of SHA-256 with BLAKE2b][37439] +* [Reduce the number of bytes hashed by `IchHasher`][37427] +* [Avoid more allocations when compiling html5ever][37373] +* [Use `SmallVector` in `CombineFields::instantiate`][37322] +* [Avoid some allocations in the macro parser][37318] +* [Use a faster deflate setting][37298] +* [Add `ArrayVec` and `AccumulateVec` to reduce heap allocations + during interning of slices][37270] +* [Optimize `write_metadata`][37267] +* [Don't process obligation forest cycles when stalled][37231] +* [Avoid many `CrateConfig` clones][37161] +* [Optimize `Substs::super_fold_with`][37108] +* [Optimize `ObligationForest`'s `NodeState` handling][36993] +* [Speed up `plug_leaks`][36917] + +Libraries +--------- + +* [`println!()`, with no arguments, prints newline][36825]. + Previously, an empty string was required to achieve the same. +* [`Wrapping` impls standard binary and unary operators, as well as + the `Sum` and `Product` iterators][37356] +* [Implement `From> for String` and `From> for + Vec`][37326] +* [Improve `fold` performance for `chain`, `cloned`, `map`, and + `VecDeque` iterators][37315] +* [Improve `SipHasher` performance on small values][37312] +* [Add Iterator trait TrustedLen to enable better FromIterator / + Extend][37306] +* [Expand `.zip()` specialization to `.map()` and `.cloned()`][37230] +* [`ReadDir` implements `Debug`][37221] +* [Implement `RefUnwindSafe` for atomic types][37178] +* [Specialize `Vec::extend` to `Vec::extend_from_slice`][37094] +* [Avoid allocations in `Decoder::read_str`][37064] +* [`io::Error` implements `From`][37037] +* [Impl `Debug` for raw pointers to unsized data][36880] +* [Don't reuse `HashMap` random seeds][37470] +* [The internal memory layout of `HashMap` is more cache-friendly, for + significant improvements in some operations][36692] +* [`HashMap` uses less memory on 32-bit architectures][36595] +* [Impl `Add<{str, Cow}>` for `Cow`][36430] + +Cargo +----- + +* [Expose rustc cfg values to build scripts][cargo/3243] +* [Allow cargo to work with read-only `CARGO_HOME`][cargo/3259] +* [Fix passing --features when testing multiple packages][cargo/3280] +* [Use a single profile set per workspace][cargo/3249] +* [Load `replace` sections from lock files][cargo/3220] +* [Ignore `panic` configuration for test/bench profiles][cargo/3175] + +Tooling +------- + +* [rustup is the recommended Rust installation method][1.14rustup] +* This release includes host (rustc) builds for Linux on MIPS, PowerPC, and + S390x. These are [tier 2] platforms and may have major defects. Follow the + instructions on the website to install, or add the targets to an existing + installation with `rustup target add`. The new target triples are: + - `mips-unknown-linux-gnu` + - `mipsel-unknown-linux-gnu` + - `mips64-unknown-linux-gnuabi64` + - `mips64el-unknown-linux-gnuabi64 ` + - `powerpc-unknown-linux-gnu` + - `powerpc64-unknown-linux-gnu` + - `powerpc64le-unknown-linux-gnu` + - `s390x-unknown-linux-gnu ` +* This release includes target (std) builds for ARM Linux running MUSL + libc. These are [tier 2] platforms and may have major defects. 
Add the + following triples to an existing rustup installation with `rustup target add`: + - `arm-unknown-linux-musleabi` + - `arm-unknown-linux-musleabihf` + - `armv7-unknown-linux-musleabihf` +* This release includes [experimental support for WebAssembly][1.14wasm], via + the `wasm32-unknown-emscripten` target. This target is known to have major + defects. Please test, report, and fix. +* rustup no longer installs documentation by default. Run `rustup + component add rust-docs` to install. +* [Fix line stepping in debugger][37310] +* [Enable line number debuginfo in releases][37280] + +Misc +---- + +* [Disable jemalloc on aarch64/powerpc/mips][37392] +* [Add support for Fuchsia OS][37313] +* [Detect local-rebuild by only MAJOR.MINOR version][37273] + +Compatibility Notes +------------------- + +* [A number of forward-compatibility lints used by the compiler + to gradually introduce language changes have been converted + to deny by default][36894]: + - ["use of inaccessible extern crate erroneously allowed"][36886] + - ["type parameter default erroneously allowed in invalid location"][36887] + - ["detects super or self keywords at the beginning of global path"][36888] + - ["two overlapping inherent impls define an item with the same name + were erroneously allowed"][36889] + - ["floating-point constants cannot be used in patterns"][36890] + - ["constants of struct or enum type can only be used in a pattern if + the struct or enum has `#[derive(PartialEq, Eq)]`"][36891] + - ["lifetimes or labels named `'_` were erroneously allowed"][36892] +* [Prohibit patterns in trait methods without bodies][37378] +* [The atomic `Ordering` enum may not be matched exhaustively][37351] +* [Future-proofing `#[no_link]` breaks some obscure cases][37247] +* [The `$crate` macro variable is accepted in fewer locations][37213] +* [Impls specifying extra region requirements beyond the trait + they implement are rejected][37167] +* [Enums may not be unsized][37111]. Unsized enums are intended to + work but never have. For now they are forbidden. 
+* [Enforce the shadowing restrictions from RFC 1560 for today's macros][36767] + +[tier 2]: https://forge.rust-lang.org/platform-support.html +[1.14rustup]: https://internals.rust-lang.org/t/beta-testing-rustup-rs/3316/204 +[1.14wasm]: https://users.rust-lang.org/t/compiling-to-the-web-with-rust-and-emscripten/7627 +[36430]: https://github.com/rust-lang/rust/pull/36430 +[36595]: https://github.com/rust-lang/rust/pull/36595 +[36595]: https://github.com/rust-lang/rust/pull/36595 +[36692]: https://github.com/rust-lang/rust/pull/36692 +[36767]: https://github.com/rust-lang/rust/pull/36767 +[36794]: https://github.com/rust-lang/rust/pull/36794 +[36798]: https://github.com/rust-lang/rust/pull/36798 +[36819]: https://github.com/rust-lang/rust/pull/36819 +[36822]: https://github.com/rust-lang/rust/pull/36822 +[36825]: https://github.com/rust-lang/rust/pull/36825 +[36843]: https://github.com/rust-lang/rust/pull/36843 +[36880]: https://github.com/rust-lang/rust/pull/36880 +[36886]: https://github.com/rust-lang/rust/issues/36886 +[36887]: https://github.com/rust-lang/rust/issues/36887 +[36888]: https://github.com/rust-lang/rust/issues/36888 +[36889]: https://github.com/rust-lang/rust/issues/36889 +[36890]: https://github.com/rust-lang/rust/issues/36890 +[36891]: https://github.com/rust-lang/rust/issues/36891 +[36892]: https://github.com/rust-lang/rust/issues/36892 +[36894]: https://github.com/rust-lang/rust/pull/36894 +[36917]: https://github.com/rust-lang/rust/pull/36917 +[36993]: https://github.com/rust-lang/rust/pull/36993 +[37037]: https://github.com/rust-lang/rust/pull/37037 +[37064]: https://github.com/rust-lang/rust/pull/37064 +[37094]: https://github.com/rust-lang/rust/pull/37094 +[37108]: https://github.com/rust-lang/rust/pull/37108 +[37111]: https://github.com/rust-lang/rust/pull/37111 +[37161]: https://github.com/rust-lang/rust/pull/37161 +[37162]: https://github.com/rust-lang/rust/pull/37162 +[37167]: https://github.com/rust-lang/rust/pull/37167 +[37178]: https://github.com/rust-lang/rust/pull/37178 +[37200]: https://github.com/rust-lang/rust/pull/37200 +[37213]: https://github.com/rust-lang/rust/pull/37213 +[37221]: https://github.com/rust-lang/rust/pull/37221 +[37224]: https://github.com/rust-lang/rust/pull/37224 +[37230]: https://github.com/rust-lang/rust/pull/37230 +[37231]: https://github.com/rust-lang/rust/pull/37231 +[37247]: https://github.com/rust-lang/rust/pull/37247 +[37267]: https://github.com/rust-lang/rust/pull/37267 +[37270]: https://github.com/rust-lang/rust/pull/37270 +[37273]: https://github.com/rust-lang/rust/pull/37273 +[37280]: https://github.com/rust-lang/rust/pull/37280 +[37298]: https://github.com/rust-lang/rust/pull/37298 +[37306]: https://github.com/rust-lang/rust/pull/37306 +[37310]: https://github.com/rust-lang/rust/pull/37310 +[37312]: https://github.com/rust-lang/rust/pull/37312 +[37313]: https://github.com/rust-lang/rust/pull/37313 +[37315]: https://github.com/rust-lang/rust/pull/37315 +[37318]: https://github.com/rust-lang/rust/pull/37318 +[37322]: https://github.com/rust-lang/rust/pull/37322 +[37326]: https://github.com/rust-lang/rust/pull/37326 +[37351]: https://github.com/rust-lang/rust/pull/37351 +[37356]: https://github.com/rust-lang/rust/pull/37356 +[37367]: https://github.com/rust-lang/rust/pull/37367 +[37373]: https://github.com/rust-lang/rust/pull/37373 +[37378]: https://github.com/rust-lang/rust/pull/37378 +[37389]: https://github.com/rust-lang/rust/pull/37389 +[37392]: https://github.com/rust-lang/rust/pull/37392 +[37427]: 
https://github.com/rust-lang/rust/pull/37427 +[37439]: https://github.com/rust-lang/rust/pull/37439 +[37445]: https://github.com/rust-lang/rust/pull/37445 +[37470]: https://github.com/rust-lang/rust/pull/37470 +[37569]: https://github.com/rust-lang/rust/pull/37569 +[RFC 1492]: https://github.com/rust-lang/rfcs/blob/master/text/1492-dotdot-in-patterns.md +[cargo/3175]: https://github.com/rust-lang/cargo/pull/3175 +[cargo/3220]: https://github.com/rust-lang/cargo/pull/3220 +[cargo/3243]: https://github.com/rust-lang/cargo/pull/3243 +[cargo/3249]: https://github.com/rust-lang/cargo/pull/3249 +[cargo/3259]: https://github.com/rust-lang/cargo/pull/3259 +[cargo/3280]: https://github.com/rust-lang/cargo/pull/3280 + + +Version 1.13.0 (2016-11-10) +=========================== + +Language +-------- + +* [Stabilize the `?` operator][36995]. `?` is a simple way to propagate + errors, like the `try!` macro, described in [RFC 0243]. +* [Stabilize macros in type position][36014]. Described in [RFC 873]. +* [Stabilize attributes on statements][36995]. Described in [RFC 0016]. +* [Fix `#[derive]` for empty tuple structs/variants][35728] +* [Fix lifetime rules for 'if' conditions][36029] +* [Avoid loading and parsing unconfigured non-inline modules][36482] + +Compiler +-------- + +* [Add the `-C link-arg` argument][36574] +* [Remove the old AST-based backend from rustc_trans][35764] +* [Don't enable NEON by default on armv7 Linux][35814] +* [Fix debug line number info for macro expansions][35238] +* [Do not emit "class method" debuginfo for types that are not + DICompositeType][36008] +* [Warn about multiple conflicting #[repr] hints][34623] +* [When sizing DST, don't double-count nested struct prefixes][36351] +* [Default RUST_MIN_STACK to 16MiB for now][36505] +* [Improve rlib metadata format][36551]. Reduces rlib size significantly. +* [Reject macros with empty repetitions to avoid infinite loop][36721] +* [Expand macros without recursing to avoid stack overflows][36214] + +Diagnostics +----------- + +* [Replace macro backtraces with labeled local uses][35702] +* [Improve error message for missplaced doc comments][33922] +* [Buffer unix and lock windows to prevent message interleaving][35975] +* [Update lifetime errors to specifically note temporaries][36171] +* [Special case a few colors for Windows][36178] +* [Suggest `use self` when such an import resolves][36289] +* [Be more specific when type parameter shadows primitive type][36338] +* Many minor improvements + +Compile-time Optimizations +-------------------------- + +* [Compute and cache HIR hashes at beginning][35854] +* [Don't hash types in loan paths][36004] +* [Cache projections in trans][35761] +* [Optimize the parser's last token handling][36527] +* [Only instantiate #[inline] functions in codegen units referencing + them][36524]. This leads to big improvements in cases where crates export + define many inline functions without using them directly. 
+* [Lazily allocate TypedArena's first chunk][36592] +* [Don't allocate during default HashSet creation][36734] + +Stabilized APIs +--------------- + +* [`checked_abs`] +* [`wrapping_abs`] +* [`overflowing_abs`] +* [`RefCell::try_borrow`] +* [`RefCell::try_borrow_mut`] + +Libraries +--------- + +* [Add `assert_ne!` and `debug_assert_ne!`][35074] +* [Make `vec_deque::Drain`, `hash_map::Drain`, and `hash_set::Drain` + covariant][35354] +* [Implement `AsRef<[T]>` for `std::slice::Iter`][35559] +* [Implement `Debug` for `std::vec::IntoIter`][35707] +* [`CString`: avoid excessive growth just to 0-terminate][35871] +* [Implement `CoerceUnsized` for `{Cell, RefCell, UnsafeCell}`][35627] +* [Use arc4rand on FreeBSD][35884] +* [memrchr: Correct aligned offset computation][35969] +* [Improve Demangling of Rust Symbols][36059] +* [Use monotonic time in condition variables][35048] +* [Implement `Debug` for `std::path::{Components,Iter}`][36101] +* [Implement conversion traits for `char`][35755] +* [Fix illegal instruction caused by overflow in channel cloning][36104] +* [Zero first byte of CString on drop][36264] +* [Inherit overflow checks for sum and product][36372] +* [Add missing Eq implementations][36423] +* [Implement `Debug` for `DirEntry`][36631] +* [When `getaddrinfo` returns `EAI_SYSTEM` retrieve actual error from + `errno`][36754] +* [`SipHasher`] is deprecated. Use [`DefaultHasher`]. +* [Implement more traits for `std::io::ErrorKind`][35911] +* [Optimize BinaryHeap bounds checking][36072] +* [Work around pointer aliasing issue in `Vec::extend_from_slice`, + `extend_with_element`][36355] +* [Fix overflow checking in unsigned pow()][34942] + +Cargo +----- + +* This release includes security fixes to both curl and OpenSSL. +* [Fix transitive doctests when panic=abort][cargo/3021] +* [Add --all-features flag to cargo][cargo/3038] +* [Reject path-based dependencies in `cargo package`][cargo/3060] +* [Don't parse the home directory more than once][cargo/3078] +* [Don't try to generate Cargo.lock on empty workspaces][cargo/3092] +* [Update OpenSSL to 1.0.2j][cargo/3121] +* [Add license and license_file to cargo metadata output][cargo/3110] +* [Make crates-io registry URL optional in config; ignore all changes to + source.crates-io][cargo/3089] +* [Don't download dependencies from other platforms][cargo/3123] +* [Build transitive dev-dependencies when needed][cargo/3125] +* [Add support for per-target rustflags in .cargo/config][cargo/3157] +* [Avoid updating registry when adding existing deps][cargo/3144] +* [Warn about path overrides that won't work][cargo/3136] +* [Use workspaces during `cargo install`][cargo/3146] +* [Leak mspdbsrv.exe processes on Windows][cargo/3162] +* [Add --message-format flag][cargo/3000] +* [Pass target environment for rustdoc][cargo/3205] +* [Use `CommandExt::exec` for `cargo run` on Unix][cargo/2818] +* [Update curl and curl-sys][cargo/3241] +* [Call rustdoc test with the correct cfg flags of a package][cargo/3242] + +Tooling +------- + +* [rustdoc: Add the `--sysroot` argument][36586] +* [rustdoc: Fix a couple of issues with the search results][35655] +* [rustdoc: remove the `!` from macro URLs and titles][35234] +* [gdb: Fix pretty-printing special-cased Rust types][35585] +* [rustdoc: Filter more incorrect methods inherited through Deref][36266] + +Misc +---- + +* [Remove unmaintained style guide][35124] +* [Add s390x support][36369] +* [Initial work at Haiku OS support][36727] +* [Add mips-uclibc targets][35734] +* [Crate-ify compiler-rt into 
compiler-builtins][35021] +* [Add rustc version info (git hash + date) to dist tarball][36213] +* Many documentation improvements + +Compatibility Notes +------------------- + +* [`SipHasher`] is deprecated. Use [`DefaultHasher`]. +* [Deny (by default) transmuting from fn item types to pointer-sized + types][34923]. Continuing the long transition to zero-sized fn items, + per [RFC 401]. +* [Fix `#[derive]` for empty tuple structs/variants][35728]. + Part of [RFC 1506]. +* [Issue deprecation warnings for safe accesses to extern statics][36173] +* [Fix lifetime rules for 'if' conditions][36029]. +* [Inherit overflow checks for sum and product][36372]. +* [Forbid user-defined macros named "macro_rules"][36730]. + +[33922]: https://github.com/rust-lang/rust/pull/33922 +[34623]: https://github.com/rust-lang/rust/pull/34623 +[34923]: https://github.com/rust-lang/rust/pull/34923 +[34942]: https://github.com/rust-lang/rust/pull/34942 +[34982]: https://github.com/rust-lang/rust/pull/34982 +[35021]: https://github.com/rust-lang/rust/pull/35021 +[35048]: https://github.com/rust-lang/rust/pull/35048 +[35074]: https://github.com/rust-lang/rust/pull/35074 +[35124]: https://github.com/rust-lang/rust/pull/35124 +[35234]: https://github.com/rust-lang/rust/pull/35234 +[35238]: https://github.com/rust-lang/rust/pull/35238 +[35354]: https://github.com/rust-lang/rust/pull/35354 +[35559]: https://github.com/rust-lang/rust/pull/35559 +[35585]: https://github.com/rust-lang/rust/pull/35585 +[35627]: https://github.com/rust-lang/rust/pull/35627 +[35655]: https://github.com/rust-lang/rust/pull/35655 +[35702]: https://github.com/rust-lang/rust/pull/35702 +[35707]: https://github.com/rust-lang/rust/pull/35707 +[35728]: https://github.com/rust-lang/rust/pull/35728 +[35734]: https://github.com/rust-lang/rust/pull/35734 +[35755]: https://github.com/rust-lang/rust/pull/35755 +[35761]: https://github.com/rust-lang/rust/pull/35761 +[35764]: https://github.com/rust-lang/rust/pull/35764 +[35814]: https://github.com/rust-lang/rust/pull/35814 +[35854]: https://github.com/rust-lang/rust/pull/35854 +[35871]: https://github.com/rust-lang/rust/pull/35871 +[35884]: https://github.com/rust-lang/rust/pull/35884 +[35911]: https://github.com/rust-lang/rust/pull/35911 +[35969]: https://github.com/rust-lang/rust/pull/35969 +[35975]: https://github.com/rust-lang/rust/pull/35975 +[36004]: https://github.com/rust-lang/rust/pull/36004 +[36008]: https://github.com/rust-lang/rust/pull/36008 +[36014]: https://github.com/rust-lang/rust/pull/36014 +[36029]: https://github.com/rust-lang/rust/pull/36029 +[36059]: https://github.com/rust-lang/rust/pull/36059 +[36072]: https://github.com/rust-lang/rust/pull/36072 +[36101]: https://github.com/rust-lang/rust/pull/36101 +[36104]: https://github.com/rust-lang/rust/pull/36104 +[36171]: https://github.com/rust-lang/rust/pull/36171 +[36173]: https://github.com/rust-lang/rust/pull/36173 +[36178]: https://github.com/rust-lang/rust/pull/36178 +[36213]: https://github.com/rust-lang/rust/pull/36213 +[36214]: https://github.com/rust-lang/rust/pull/36214 +[36264]: https://github.com/rust-lang/rust/pull/36264 +[36266]: https://github.com/rust-lang/rust/pull/36266 +[36289]: https://github.com/rust-lang/rust/pull/36289 +[36338]: https://github.com/rust-lang/rust/pull/36338 +[36351]: https://github.com/rust-lang/rust/pull/36351 +[36355]: https://github.com/rust-lang/rust/pull/36355 +[36369]: https://github.com/rust-lang/rust/pull/36369 +[36372]: https://github.com/rust-lang/rust/pull/36372 +[36423]: 
https://github.com/rust-lang/rust/pull/36423 +[36482]: https://github.com/rust-lang/rust/pull/36482 +[36505]: https://github.com/rust-lang/rust/pull/36505 +[36524]: https://github.com/rust-lang/rust/pull/36524 +[36527]: https://github.com/rust-lang/rust/pull/36527 +[36551]: https://github.com/rust-lang/rust/pull/36551 +[36574]: https://github.com/rust-lang/rust/pull/36574 +[36586]: https://github.com/rust-lang/rust/pull/36586 +[36592]: https://github.com/rust-lang/rust/pull/36592 +[36631]: https://github.com/rust-lang/rust/pull/36631 +[36639]: https://github.com/rust-lang/rust/pull/36639 +[36721]: https://github.com/rust-lang/rust/pull/36721 +[36727]: https://github.com/rust-lang/rust/pull/36727 +[36730]: https://github.com/rust-lang/rust/pull/36730 +[36734]: https://github.com/rust-lang/rust/pull/36734 +[36754]: https://github.com/rust-lang/rust/pull/36754 +[36995]: https://github.com/rust-lang/rust/pull/36995 +[RFC 0016]: https://github.com/rust-lang/rfcs/blob/master/text/0016-more-attributes.md +[RFC 0243]: https://github.com/rust-lang/rfcs/blob/master/text/0243-trait-based-exception-handling.md +[RFC 1506]: https://github.com/rust-lang/rfcs/blob/master/text/1506-adt-kinds.md +[RFC 401]: https://github.com/rust-lang/rfcs/blob/master/text/0401-coercions.md +[RFC 873]: https://github.com/rust-lang/rfcs/blob/master/text/0873-type-macros.md +[cargo/2818]: https://github.com/rust-lang/cargo/pull/2818 +[cargo/3000]: https://github.com/rust-lang/cargo/pull/3000 +[cargo/3021]: https://github.com/rust-lang/cargo/pull/3021 +[cargo/3038]: https://github.com/rust-lang/cargo/pull/3038 +[cargo/3060]: https://github.com/rust-lang/cargo/pull/3060 +[cargo/3078]: https://github.com/rust-lang/cargo/pull/3078 +[cargo/3089]: https://github.com/rust-lang/cargo/pull/3089 +[cargo/3092]: https://github.com/rust-lang/cargo/pull/3092 +[cargo/3110]: https://github.com/rust-lang/cargo/pull/3110 +[cargo/3121]: https://github.com/rust-lang/cargo/pull/3121 +[cargo/3123]: https://github.com/rust-lang/cargo/pull/3123 +[cargo/3125]: https://github.com/rust-lang/cargo/pull/3125 +[cargo/3136]: https://github.com/rust-lang/cargo/pull/3136 +[cargo/3144]: https://github.com/rust-lang/cargo/pull/3144 +[cargo/3146]: https://github.com/rust-lang/cargo/pull/3146 +[cargo/3157]: https://github.com/rust-lang/cargo/pull/3157 +[cargo/3162]: https://github.com/rust-lang/cargo/pull/3162 +[cargo/3205]: https://github.com/rust-lang/cargo/pull/3205 +[cargo/3241]: https://github.com/rust-lang/cargo/pull/3241 +[cargo/3242]: https://github.com/rust-lang/cargo/pull/3242 +[rustup]: https://www.rustup.rs +[`checked_abs`]: https://doc.rust-lang.org/std/primitive.i32.html#method.checked_abs +[`wrapping_abs`]: https://doc.rust-lang.org/std/primitive.i32.html#method.wrapping_abs +[`overflowing_abs`]: https://doc.rust-lang.org/std/primitive.i32.html#method.overflowing_abs +[`RefCell::try_borrow`]: https://doc.rust-lang.org/std/cell/struct.RefCell.html#method.try_borrow +[`RefCell::try_borrow_mut`]: https://doc.rust-lang.org/std/cell/struct.RefCell.html#method.try_borrow_mut +[`SipHasher`]: https://doc.rust-lang.org/std/hash/struct.SipHasher.html +[`DefaultHasher`]: https://doc.rust-lang.org/std/collections/hash_map/struct.DefaultHasher.html + + Version 1.12.1 (2016-10-20) =========================== diff --git a/configure b/configure index 85a3dd4b93..ab5d2f34fe 100755 --- a/configure +++ b/configure @@ -621,19 +621,22 @@ opt llvm-assertions 0 "build LLVM with assertions" opt debug-assertions 0 "build with debugging assertions" opt fast-make 0 "use 
.gitmodules as timestamp for submodule deps" opt ccache 0 "invoke gcc/clang via ccache to reuse object files between builds" +opt sccache 0 "invoke gcc/clang via sccache to reuse object files between builds" opt local-rust 0 "use an installed rustc rather than downloading a snapshot" opt local-rebuild 0 "assume local-rust matches the current version, for rebuilds; implies local-rust, and is implied if local-rust already matches the current version" opt llvm-static-stdcpp 0 "statically link to libstdc++ for LLVM" +opt llvm-link-shared 0 "prefer shared linking to LLVM (llvm-config --link-shared)" opt rpath 1 "build rpaths into rustc itself" opt stage0-landing-pads 1 "enable landing pads during bootstrap with stage0" # This is used by the automation to produce single-target nightlies opt dist-host-only 0 "only install bins for the host architecture" opt inject-std-version 1 "inject the current compiler version of libstd into programs" opt llvm-version-check 1 "check if the LLVM version is supported, build anyway" -opt rustbuild 0 "use the rust and cargo based build system" +opt rustbuild 1 "use the rust and cargo based build system" opt codegen-tests 1 "run the src/test/codegen tests" opt option-checking 1 "complain about unrecognized options in this configure script" opt ninja 0 "build LLVM using the Ninja generator (for MSVC, requires building in the correct environment)" +opt vendor 0 "enable usage of vendored Rust crates" # Optimization and debugging options. These may be overridden by the release channel, etc. opt_nosave optimize 1 "build optimized rust code" @@ -641,8 +644,10 @@ opt_nosave optimize-cxx 1 "build optimized C++ code" opt_nosave optimize-llvm 1 "build optimized LLVM" opt_nosave llvm-assertions 0 "build LLVM with assertions" opt_nosave debug-assertions 0 "build with debugging assertions" +opt_nosave llvm-release-debuginfo 0 "build LLVM with debugger metadata" opt_nosave debuginfo 0 "build with debugger metadata" opt_nosave debuginfo-lines 0 "build with line number debugger metadata" +opt_nosave debuginfo-only-std 0 "build only libstd with debugging information" opt_nosave debug-jemalloc 0 "build jemalloc with --enable-debug --enable-fill" valopt localstatedir "/var/lib" "local state directory" @@ -661,11 +666,11 @@ valopt armv7-linux-androideabi-ndk "" "armv7-linux-androideabi NDK standalone pa valopt aarch64-linux-android-ndk "" "aarch64-linux-android NDK standalone path" valopt nacl-cross-path "" "NaCl SDK path (Pepper Canary is recommended). Must be absolute!" 
valopt musl-root "/usr/local" "MUSL root installation directory (deprecated)" -valopt musl-root-x86_64 "/usr/local" "x86_64-unknown-linux-musl install directory" -valopt musl-root-i686 "/usr/local" "i686-unknown-linux-musl install directory" -valopt musl-root-arm "/usr/local" "arm-unknown-linux-musleabi install directory" -valopt musl-root-armhf "/usr/local" "arm-unknown-linux-musleabihf install directory" -valopt musl-root-armv7 "/usr/local" "armv7-unknown-linux-musleabihf install directory" +valopt musl-root-x86_64 "" "x86_64-unknown-linux-musl install directory" +valopt musl-root-i686 "" "i686-unknown-linux-musl install directory" +valopt musl-root-arm "" "arm-unknown-linux-musleabi install directory" +valopt musl-root-armhf "" "arm-unknown-linux-musleabihf install directory" +valopt musl-root-armv7 "" "armv7-unknown-linux-musleabihf install directory" valopt extra-filename "" "Additional data that is hashed and passed to the -C extra-filename flag" if [ -e ${CFG_SRC_DIR}.git ] @@ -728,15 +733,17 @@ case "$CFG_RELEASE_CHANNEL" in nightly ) msg "overriding settings for $CFG_RELEASE_CHANNEL" CFG_ENABLE_LLVM_ASSERTIONS=1 - - # FIXME(#37364) shouldn't have to disable this on windows-gnu + # FIXME(stage0) re-enable this on the next stage0 now that #35566 is + # fixed case "$CFG_BUILD" in *-pc-windows-gnu) ;; *) - CFG_ENABLE_DEBUGINFO_LINES=1 + CFG_ENABLE_DEBUGINFO_LINES=1 + CFG_ENABLE_DEBUGINFO_ONLY_STD=1 ;; esac + ;; beta | stable) msg "overriding settings for $CFG_RELEASE_CHANNEL" @@ -744,7 +751,8 @@ case "$CFG_RELEASE_CHANNEL" in *-pc-windows-gnu) ;; *) - CFG_ENABLE_DEBUGINFO_LINES=1 + CFG_ENABLE_DEBUGINFO_LINES=1 + CFG_ENABLE_DEBUGINFO_ONLY_STD=1 ;; esac ;; @@ -777,8 +785,10 @@ if [ -n "$CFG_DISABLE_OPTIMIZE_CXX" ]; then putvar CFG_DISABLE_OPTIMIZE_CXX; fi if [ -n "$CFG_DISABLE_OPTIMIZE_LLVM" ]; then putvar CFG_DISABLE_OPTIMIZE_LLVM; fi if [ -n "$CFG_ENABLE_LLVM_ASSERTIONS" ]; then putvar CFG_ENABLE_LLVM_ASSERTIONS; fi if [ -n "$CFG_ENABLE_DEBUG_ASSERTIONS" ]; then putvar CFG_ENABLE_DEBUG_ASSERTIONS; fi +if [ -n "$CFG_ENABLE_LLVM_RELEASE_DEBUGINFO" ]; then putvar CFG_ENABLE_LLVM_RELEASE_DEBUGINFO; fi if [ -n "$CFG_ENABLE_DEBUGINFO" ]; then putvar CFG_ENABLE_DEBUGINFO; fi if [ -n "$CFG_ENABLE_DEBUGINFO_LINES" ]; then putvar CFG_ENABLE_DEBUGINFO_LINES; fi +if [ -n "$CFG_ENABLE_DEBUGINFO_ONLY_STD" ]; then putvar CFG_ENABLE_DEBUGINFO_ONLY_STD; fi if [ -n "$CFG_ENABLE_DEBUG_JEMALLOC" ]; then putvar CFG_ENABLE_DEBUG_JEMALLOC; fi step_msg "looking for build programs" @@ -844,13 +854,22 @@ then fi # For building LLVM -probe_need CFG_CMAKE cmake +if [ -z "$CFG_LLVM_ROOT" ] +then + probe_need CFG_CMAKE cmake +fi # On MacOS X, invoking `javac` pops up a dialog if the JDK is not # installed. Since `javac` is only used if `antlr4` is available, # probe for it only in this case. 
if [ -n "$CFG_ANTLR4" ] then + CFG_ANTLR4_JAR="\"$(find /usr/ -name antlr-complete.jar 2>/dev/null | head -n 1)\"" + if [ "x" = "x$CFG_ANTLR4_JAR" ] + then + CFG_ANTLR4_JAR="\"$(find ~ -name antlr-complete.jar 2>/dev/null | head -n 1)\"" + fi + putvar CFG_ANTLR4_JAR $CFG_ANTLR4_JAR probe CFG_JAVAC javac fi @@ -1361,7 +1380,7 @@ then fi fi -if [ -z "$CFG_ENABLE_RUSTBUILD" ]; then +if [ -n "$CFG_DISABLE_RUSTBUILD" ]; then step_msg "making directories" @@ -1461,7 +1480,7 @@ fi step_msg "configuring submodules" # Have to be in the top of src directory for this -if [ -z $CFG_DISABLE_MANAGE_SUBMODULES ] && [ -z $CFG_ENABLE_RUSTBUILD ] +if [ -z "$CFG_DISABLE_MANAGE_SUBMODULES" ] && [ -n "$CFG_DISABLE_RUSTBUILD" ] then cd ${CFG_SRC_DIR} @@ -1533,11 +1552,11 @@ do ;; esac - if [ -n "$CFG_ENABLE_RUSTBUILD" ] + if [ -z "$CFG_DISABLE_RUSTBUILD" ] then msg "not configuring LLVM, rustbuild in use" do_reconfigure=0 - elif [ -z $CFG_LLVM_ROOT ] + elif [ -z "$CFG_LLVM_ROOT" ] then LLVM_BUILD_DIR=${CFG_BUILD_DIR}$t/llvm LLVM_INST_DIR=$LLVM_BUILD_DIR @@ -1664,11 +1683,23 @@ do LLVM_CC_64_ARG1="gcc" ;; ("gcc") - LLVM_CXX_32="g++" - LLVM_CC_32="gcc" + if [ -z "$CFG_ENABLE_SCCACHE" ]; then + LLVM_CXX_32="g++" + LLVM_CC_32="gcc" - LLVM_CXX_64="g++" - LLVM_CC_64="gcc" + LLVM_CXX_64="g++" + LLVM_CC_64="gcc" + else + LLVM_CXX_32="sccache" + LLVM_CC_32="sccache" + LLVM_CXX_32_ARG1="g++" + LLVM_CC_32_ARG1="gcc" + + LLVM_CXX_64="sccache" + LLVM_CC_64="sccache" + LLVM_CXX_64_ARG1="g++" + LLVM_CC_64_ARG1="gcc" + fi ;; (*) @@ -1771,6 +1802,8 @@ do if [ -n "$CFG_DISABLE_OPTIMIZE_LLVM" ]; then CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_BUILD_TYPE=Debug" + elif [ -n "$CFG_ENABLE_LLVM_RELEASE_DEBUGINFO" ]; then + CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_BUILD_TYPE=RelWithDebInfo" else CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_BUILD_TYPE=Release" fi @@ -1781,7 +1814,7 @@ do CMAKE_ARGS="$CMAKE_ARGS -DLLVM_ENABLE_ASSERTIONS=ON" fi - CMAKE_ARGS="$CMAKE_ARGS -DLLVM_TARGETS_TO_BUILD='X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend'" + CMAKE_ARGS="$CMAKE_ARGS -DLLVM_TARGETS_TO_BUILD='X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend;MSP430'" CMAKE_ARGS="$CMAKE_ARGS -G '$CFG_CMAKE_GENERATOR'" CMAKE_ARGS="$CMAKE_ARGS $CFG_LLVM_SRC_DIR" @@ -1856,7 +1889,7 @@ do putvar $CFG_LLVM_INST_DIR done -if [ -n "$CFG_ENABLE_RUSTBUILD" ] +if [ -z "$CFG_DISABLE_RUSTBUILD" ] then INPUT_MAKEFILE=src/bootstrap/mk/Makefile.in else @@ -1875,5 +1908,28 @@ else step_msg "complete" fi -msg "run \`make help\`" +if [ "$CFG_SRC_DIR" = `pwd` ]; then + X_PY=x.py +else + X_PY=${CFG_SRC_DIR_RELATIVE}x.py +fi + +if [ -z "$CFG_DISABLE_RUSTBUILD" ]; then + msg "NOTE you have now configured rust to use a rewritten build system" + msg " called rustbuild, and as a result this may have bugs that " + msg " you did not see before. If you experience any issues you can" + msg " go back to the old build system with --disable-rustbuild and" + msg " please feel free to report any bugs!" + msg "" + msg "run \`python ${X_PY} --help\`" +else + warn "the makefile-based build system is deprecated in favor of rustbuild" + msg "" + msg "It is recommended you avoid passing --disable-rustbuild to get your" + msg "build working as the makefiles will be deleted on 2017-02-02. 
If you" + msg "encounter bugs with rustbuild please file issues against rust-lang/rust" + msg "" + msg "run \`make help\`" +fi + msg diff --git a/mk/cfg/armv5te-unknown-linux-gnueabi.mk b/mk/cfg/armv5te-unknown-linux-gnueabi.mk new file mode 100644 index 0000000000..98567a03c2 --- /dev/null +++ b/mk/cfg/armv5te-unknown-linux-gnueabi.mk @@ -0,0 +1,26 @@ +# armv5-unknown-linux-gnueabi configuration +CROSS_PREFIX_armv5te-unknown-linux-gnueabi=arm-linux-gnueabi- +CC_armv5te-unknown-linux-gnueabi=gcc +CXX_armv5te-unknown-linux-gnueabi=g++ +CPP_armv5te-unknown-linux-gnueabi=gcc -E +AR_armv5te-unknown-linux-gnueabi=ar +CFG_LIB_NAME_armv5te-unknown-linux-gnueabi=lib$(1).so +CFG_STATIC_LIB_NAME_armv5te-unknown-linux-gnueabi=lib$(1).a +CFG_LIB_GLOB_armv5te-unknown-linux-gnueabi=lib$(1)-*.so +CFG_LIB_DSYM_GLOB_armv5te-unknown-linux-gnueabi=lib$(1)-*.dylib.dSYM +CFG_JEMALLOC_CFLAGS_armv5te-unknown-linux-gnueabi := -D__arm__ -mfloat-abi=soft $(CFLAGS) -march=armv5te -marm +CFG_GCCISH_CFLAGS_armv5te-unknown-linux-gnueabi := -Wall -g -fPIC -D__arm__ -mfloat-abi=soft $(CFLAGS) -march=armv5te -marm +CFG_GCCISH_CXXFLAGS_armv5te-unknown-linux-gnueabi := -fno-rtti $(CXXFLAGS) +CFG_GCCISH_LINK_FLAGS_armv5te-unknown-linux-gnueabi := -shared -fPIC -g +CFG_GCCISH_DEF_FLAG_armv5te-unknown-linux-gnueabi := -Wl,--export-dynamic,--dynamic-list= +CFG_LLC_FLAGS_armv5te-unknown-linux-gnueabi := +CFG_INSTALL_NAME_ar,-unknown-linux-gnueabi = +CFG_EXE_SUFFIX_armv5te-unknown-linux-gnueabi := +CFG_WINDOWSY_armv5te-unknown-linux-gnueabi := +CFG_UNIXY_armv5te-unknown-linux-gnueabi := 1 +CFG_LDPATH_armv5te-unknown-linux-gnueabi := +CFG_RUN_armv5te-unknown-linux-gnueabi=$(2) +CFG_RUN_TARG_armv5te-unknown-linux-gnueabi=$(call CFG_RUN_armv5te-unknown-linux-gnueabi,,$(2)) +RUSTC_FLAGS_armv5te-unknown-linux-gnueabi := +RUSTC_CROSS_FLAGS_armv5te-unknown-linux-gnueabi := +CFG_GNU_TRIPLE_armv5te-unknown-linux-gnueabi := armv5te-unknown-linux-gnueabi diff --git a/mk/cfg/i686-unknown-openbsd.mk b/mk/cfg/i686-unknown-openbsd.mk new file mode 100644 index 0000000000..b839937c97 --- /dev/null +++ b/mk/cfg/i686-unknown-openbsd.mk @@ -0,0 +1,24 @@ +# i686-unknown-openbsd configuration +CC_i686-unknown-openbsd=$(CC) +CXX_i686-unknown-openbsd=$(CXX) +CPP_i686-unknown-openbsd=$(CPP) +AR_i686-unknown-openbsd=$(AR) +CFG_LIB_NAME_i686-unknown-openbsd=lib$(1).so +CFG_STATIC_LIB_NAME_i686-unknown-openbsd=lib$(1).a +CFG_LIB_GLOB_i686-unknown-openbsd=lib$(1)-*.so +CFG_LIB_DSYM_GLOB_i686-unknown-openbsd=$(1)-*.dylib.dSYM +CFG_JEMALLOC_CFLAGS_i686-unknown-openbsd := -m32 -I/usr/include $(CFLAGS) +CFG_GCCISH_CFLAGS_i686-unknown-openbsd := -g -fPIC -m32 -I/usr/include $(CFLAGS) +CFG_GCCISH_LINK_FLAGS_i686-unknown-openbsd := -shared -fPIC -g -pthread -m32 +CFG_GCCISH_DEF_FLAG_i686-unknown-openbsd := -Wl,--export-dynamic,--dynamic-list= +CFG_LLC_FLAGS_i686-unknown-openbsd := +CFG_INSTALL_NAME_i686-unknown-openbsd = +CFG_EXE_SUFFIX_i686-unknown-openbsd := +CFG_WINDOWSY_i686-unknown-openbsd := +CFG_UNIXY_i686-unknown-openbsd := 1 +CFG_LDPATH_i686-unknown-openbsd := +CFG_RUN_i686-unknown-openbsd=$(2) +CFG_RUN_TARG_i686-unknown-openbsd=$(call CFG_RUN_i686-unknown-openbsd,,$(2)) +CFG_GNU_TRIPLE_i686-unknown-openbsd := i686-unknown-openbsd +RUSTC_FLAGS_i686-unknown-openbsd=-C linker=$(call FIND_COMPILER,$(CC)) +CFG_DISABLE_JEMALLOC_i686-unknown-openbsd := 1 diff --git a/mk/clean.mk b/mk/clean.mk index 3574f25d9b..7013d9f03f 100644 --- a/mk/clean.mk +++ b/mk/clean.mk @@ -35,7 +35,7 @@ clean-all: clean clean-llvm clean-llvm: $(CLEAN_LLVM_RULES) -clean: clean-misc 
$(CLEAN_STAGE_RULES) +clean: clean-misc clean-grammar $(CLEAN_STAGE_RULES) clean-misc: @$(call E, cleaning) @@ -47,6 +47,9 @@ clean-misc: $(Q)rm -Rf dist/* $(Q)rm -Rf doc +clean-grammar: + @$(call E, cleaning grammar verification) + $(Q)rm -Rf grammar define CLEAN_GENERIC clean-generic-$(2)-$(1): diff --git a/mk/crates.mk b/mk/crates.mk index 25192bfd27..79df941aeb 100644 --- a/mk/crates.mk +++ b/mk/crates.mk @@ -52,7 +52,7 @@ TARGET_CRATES := libc std term \ getopts collections test rand \ compiler_builtins core alloc \ - rustc_unicode rustc_bitflags \ + std_unicode rustc_bitflags \ alloc_system alloc_jemalloc \ panic_abort panic_unwind unwind RUSTC_CRATES := rustc rustc_typeck rustc_mir rustc_borrowck rustc_resolve rustc_driver \ @@ -65,27 +65,23 @@ HOST_CRATES := syntax syntax_ext proc_macro_tokens proc_macro_plugin syntax_pos TOOLS := compiletest rustdoc rustc rustbook error_index_generator DEPS_core := -DEPS_compiler_builtins := core +DEPS_compiler_builtins := core native:compiler-rt DEPS_alloc := core libc alloc_system DEPS_alloc_system := core libc DEPS_alloc_jemalloc := core libc native:jemalloc -DEPS_collections := core alloc rustc_unicode +DEPS_collections := core alloc std_unicode DEPS_libc := core DEPS_rand := core DEPS_rustc_bitflags := core -DEPS_rustc_unicode := core +DEPS_std_unicode := core DEPS_panic_abort := libc alloc DEPS_panic_unwind := libc alloc unwind DEPS_unwind := libc RUSTFLAGS_compiler_builtins := -lstatic=compiler-rt +RUSTFLAGS_panic_abort := -C panic=abort -# FIXME(stage0): change this to just `RUSTFLAGS_panic_abort := ...` -RUSTFLAGS1_panic_abort := -C panic=abort -RUSTFLAGS2_panic_abort := -C panic=abort -RUSTFLAGS3_panic_abort := -C panic=abort - -DEPS_std := core libc rand alloc collections compiler_builtins rustc_unicode \ +DEPS_std := core libc rand alloc collections compiler_builtins std_unicode \ native:backtrace \ alloc_system panic_abort panic_unwind unwind DEPS_arena := std @@ -100,7 +96,7 @@ DEPS_serialize := std log DEPS_term := std DEPS_test := std getopts term native:rust_test_helpers -DEPS_syntax := std term serialize log arena libc rustc_bitflags rustc_unicode rustc_errors syntax_pos +DEPS_syntax := std term serialize log arena libc rustc_bitflags std_unicode rustc_errors syntax_pos rustc_data_structures DEPS_syntax_ext := syntax syntax_pos rustc_errors fmt_macros proc_macro DEPS_syntax_pos := serialize DEPS_proc_macro_tokens := syntax syntax_pos log @@ -140,7 +136,7 @@ DEPS_rustc_trans := arena flate getopts graphviz libc rustc rustc_back \ DEPS_rustc_incremental := rustc syntax_pos serialize rustc_data_structures DEPS_rustc_save_analysis := rustc log syntax syntax_pos serialize DEPS_rustc_typeck := rustc syntax syntax_pos rustc_platform_intrinsics rustc_const_math \ - rustc_const_eval rustc_errors + rustc_const_eval rustc_errors rustc_data_structures DEPS_rustdoc := rustc rustc_driver native:hoedown serialize getopts test \ rustc_lint rustc_const_eval syntax_pos rustc_data_structures @@ -162,7 +158,7 @@ ONLY_RLIB_libc := 1 ONLY_RLIB_alloc := 1 ONLY_RLIB_rand := 1 ONLY_RLIB_collections := 1 -ONLY_RLIB_rustc_unicode := 1 +ONLY_RLIB_std_unicode := 1 ONLY_RLIB_rustc_bitflags := 1 ONLY_RLIB_alloc_system := 1 ONLY_RLIB_alloc_jemalloc := 1 @@ -173,7 +169,7 @@ ONLY_RLIB_unwind := 1 TARGET_SPECIFIC_alloc_jemalloc := 1 # Documented-by-default crates -DOC_CRATES := std alloc collections core libc rustc_unicode +DOC_CRATES := std alloc collections core libc std_unicode ifeq ($(CFG_DISABLE_JEMALLOC),) RUSTFLAGS_rustc_back := --cfg 'feature="jemalloc"' 
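With the stage0 workaround removed, `panic_abort` is built with the abort panic strategy in every stage through a single `RUSTFLAGS_panic_abort` setting instead of per-stage `RUSTFLAGS{1,2,3}_*` variables. Conceptually, each stage's build of that crate reduces to an invocation along these lines (an illustrative sketch only, not the exact command the makefiles emit):

```sh
# every stage now passes the same flag; crate type and path are illustrative
rustc --crate-name panic_abort --crate-type rlib \
      -C panic=abort \
      src/libpanic_abort/lib.rs
```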
diff --git a/mk/dist.mk b/mk/dist.mk index cb0bca01e6..238ba8acee 100644 --- a/mk/dist.mk +++ b/mk/dist.mk @@ -65,7 +65,8 @@ PKG_FILES := \ stage0.txt \ rust-installer \ tools \ - test) \ + test \ + vendor) \ $(PKG_GITMODULES) \ $(filter-out config.stamp, \ $(MKFILES_FOR_TARBALL)) diff --git a/mk/grammar.mk b/mk/grammar.mk index 0d527bd068..1bd042adb2 100644 --- a/mk/grammar.mk +++ b/mk/grammar.mk @@ -37,7 +37,7 @@ $(BG): $(BG)RustLexer.class: $(BG) $(SG)RustLexer.g4 $(Q)$(CFG_ANTLR4) -o $(BG) $(SG)RustLexer.g4 - $(Q)$(CFG_JAVAC) -d $(BG) $(BG)RustLexer.java + $(Q)$(CFG_JAVAC) -d $(BG) -classpath $(CFG_ANTLR4_JAR) $(BG)RustLexer.java check-build-lexer-verifier: $(BG)verify diff --git a/mk/llvm.mk b/mk/llvm.mk index 5a91f5fcaa..76367e6f3a 100644 --- a/mk/llvm.mk +++ b/mk/llvm.mk @@ -21,6 +21,8 @@ endif ifdef CFG_DISABLE_OPTIMIZE_LLVM LLVM_BUILD_CONFIG_MODE := Debug +else ifdef CFG_ENABLE_LLVM_RELEASE_DEBUGINFO +LLVM_BUILD_CONFIG_MODE := RelWithDebInfo else LLVM_BUILD_CONFIG_MODE := Release endif diff --git a/mk/main.mk b/mk/main.mk index fd0464aab8..e77182ae39 100644 --- a/mk/main.mk +++ b/mk/main.mk @@ -13,7 +13,7 @@ ###################################################################### # The version number -CFG_RELEASE_NUM=1.14.0 +CFG_RELEASE_NUM=1.15.0 # An optional number to put after the label, e.g. '.2' -> '-beta.2' # NB Make sure it starts with a dot to conform to semver pre-release @@ -285,7 +285,7 @@ endif # LLVM macros ###################################################################### -LLVM_OPTIONAL_COMPONENTS=x86 arm aarch64 mips powerpc pnacl systemz jsbackend +LLVM_OPTIONAL_COMPONENTS=x86 arm aarch64 mips powerpc pnacl systemz jsbackend msp430 LLVM_REQUIRED_COMPONENTS=ipo bitreader bitwriter linker asmparser mcjit \ interpreter instrumentation @@ -372,15 +372,12 @@ CFG_INFO := $(info cfg: disabling unstable features (CFG_DISABLE_UNSTABLE_FEATUR # Turn on feature-staging export CFG_DISABLE_UNSTABLE_FEATURES # Subvert unstable feature lints to do the self-build -export RUSTC_BOOTSTRAP=1 endif ifdef CFG_MUSL_ROOT export CFG_MUSL_ROOT endif -# FIXME: Transitionary measure to bootstrap using the old bootstrap logic. -# Remove this once the bootstrap compiler uses the new login in Issue #36548. 
-export RUSTC_BOOTSTRAP_KEY=62b3e239 +export RUSTC_BOOTSTRAP := 1 ###################################################################### # Per-stage targets and runner @@ -443,10 +440,7 @@ endif TSREQ$(1)_T_$(2)_H_$(3) = \ $$(HSREQ$(1)_H_$(3)) \ $$(foreach obj,$$(REQUIRED_OBJECTS_$(2)),\ - $$(TLIB$(1)_T_$(2)_H_$(3))/$$(obj)) \ - $$(TLIB0_T_$(2)_H_$(3))/$$(call CFG_STATIC_LIB_NAME_$(2),compiler-rt) -# ^ This copies `libcompiler-rt.a` to the stage0 sysroot -# ^ TODO(stage0) update this to not copy `libcompiler-rt.a` to stage0 + $$(TLIB$(1)_T_$(2)_H_$(3))/$$(obj)) # Prerequisites for a working stageN compiler and libraries, for a specific # target diff --git a/mk/tests.mk b/mk/tests.mk index f3d8f0387b..3317688f04 100644 --- a/mk/tests.mk +++ b/mk/tests.mk @@ -15,14 +15,14 @@ # The names of crates that must be tested -# libcore/librustc_unicode tests are in a separate crate +# libcore/libstd_unicode tests are in a separate crate DEPS_coretest := $(eval $(call RUST_CRATE,coretest)) DEPS_collectionstest := $(eval $(call RUST_CRATE,collectionstest)) -TEST_TARGET_CRATES = $(filter-out core rustc_unicode alloc_system libc \ +TEST_TARGET_CRATES = $(filter-out core std_unicode alloc_system libc \ alloc_jemalloc panic_unwind \ panic_abort,$(TARGET_CRATES)) \ collectionstest coretest @@ -697,6 +697,8 @@ CTEST_DEPS_ui_$(1)-T-$(2)-H-$(3) = $$(UI_TESTS) CTEST_DEPS_mir-opt_$(1)-T-$(2)-H-$(3) = $$(MIR_OPT_TESTS) CTEST_DEPS_rustdocck_$(1)-T-$(2)-H-$(3) = $$(RUSTDOCCK_TESTS) \ $$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3)) \ + $$(CSREQ$(1)_T_$(3)_H_$(3)) \ + $$(SREQ$(1)_T_$(3)_H_$(3)) \ $(S)src/etc/htmldocck.py endef diff --git a/src/Cargo.lock b/src/Cargo.lock new file mode 100644 index 0000000000..9cd77e71b8 --- /dev/null +++ b/src/Cargo.lock @@ -0,0 +1,686 @@ +[root] +name = "unwind" +version = "0.0.0" +dependencies = [ + "core 0.0.0", + "libc 0.0.0", +] + +[[package]] +name = "alloc" +version = "0.0.0" +dependencies = [ + "core 0.0.0", +] + +[[package]] +name = "alloc_jemalloc" +version = "0.0.0" +dependencies = [ + "build_helper 0.1.0", + "core 0.0.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", + "libc 0.0.0", +] + +[[package]] +name = "alloc_system" +version = "0.0.0" +dependencies = [ + "core 0.0.0", + "libc 0.0.0", +] + +[[package]] +name = "arena" +version = "0.0.0" + +[[package]] +name = "bootstrap" +version = "0.0.0" +dependencies = [ + "build_helper 0.1.0", + "cmake 0.1.18 (registry+https://github.com/rust-lang/crates.io-index)", + "filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", + "getopts 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)", + "libc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)", + "num_cpus 0.2.13 (registry+https://github.com/rust-lang/crates.io-index)", + "rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)", + "toml 0.1.30 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "build_helper" +version = "0.1.0" + +[[package]] +name = "cargotest" +version = "0.1.0" + +[[package]] +name = "cmake" +version = "0.1.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +dependencies = [ + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "collections" +version = "0.0.0" +dependencies = [ + "alloc 0.0.0", + "core 0.0.0", + "std_unicode 0.0.0", +] + +[[package]] +name = "compiler_builtins" +version = "0.0.0" +dependencies = 
[ + "core 0.0.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "compiletest" +version = "0.0.0" +dependencies = [ + "env_logger 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)", + "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)", + "serialize 0.0.0", +] + +[[package]] +name = "core" +version = "0.0.0" + +[[package]] +name = "env_logger" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +dependencies = [ + "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "error_index_generator" +version = "0.0.0" + +[[package]] +name = "filetime" +version = "0.1.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +dependencies = [ + "libc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "flate" +version = "0.0.0" +dependencies = [ + "build_helper 0.1.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "fmt_macros" +version = "0.0.0" + +[[package]] +name = "gcc" +version = "0.3.40" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] +name = "getopts" +version = "0.0.0" + +[[package]] +name = "getopts" +version = "0.2.14" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] +name = "graphviz" +version = "0.0.0" + +[[package]] +name = "libc" +version = "0.0.0" +dependencies = [ + "core 0.0.0", +] + +[[package]] +name = "libc" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] +name = "linkchecker" +version = "0.1.0" + +[[package]] +name = "log" +version = "0.0.0" + +[[package]] +name = "log" +version = "0.3.6" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] +name = "num_cpus" +version = "0.2.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +dependencies = [ + "libc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[[package]] +name = "panic_abort" +version = "0.0.0" +dependencies = [ + "core 0.0.0", + "libc 0.0.0", +] + +[[package]] +name = "panic_unwind" +version = "0.0.0" +dependencies = [ + "alloc 0.0.0", + "core 0.0.0", + "libc 0.0.0", + "unwind 0.0.0", +] + +[[package]] +name = "proc_macro" +version = "0.0.0" +dependencies = [ + "syntax 0.0.0", +] + +[[package]] +name = "proc_macro_plugin" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "proc_macro_tokens 0.0.0", + "rustc_plugin 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "proc_macro_tokens" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rand" +version = "0.0.0" +dependencies = [ + "core 0.0.0", +] + +[[package]] +name = "rustbook" +version = "0.0.0" + +[[package]] +name = "rustc" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "flate 0.0.0", + "fmt_macros 0.0.0", + "graphviz 0.0.0", + "log 0.0.0", + "rustc_back 0.0.0", + "rustc_bitflags 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_llvm 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc-main" +version = "0.0.0" +dependencies = [ + "rustc_back 0.0.0", + "rustc_driver 0.0.0", + "rustdoc 0.0.0", +] + +[[package]] +name = "rustc-serialize" +version = "0.3.19" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] 
+name = "rustc_back" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", +] + +[[package]] +name = "rustc_bitflags" +version = "0.0.0" + +[[package]] +name = "rustc_borrowck" +version = "0.0.0" +dependencies = [ + "graphviz 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_mir 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_const_eval" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "graphviz 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_const_math" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", +] + +[[package]] +name = "rustc_data_structures" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "serialize 0.0.0", +] + +[[package]] +name = "rustc_driver" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "flate 0.0.0", + "graphviz 0.0.0", + "log 0.0.0", + "proc_macro_plugin 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_borrowck 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_incremental 0.0.0", + "rustc_lint 0.0.0", + "rustc_llvm 0.0.0", + "rustc_metadata 0.0.0", + "rustc_mir 0.0.0", + "rustc_passes 0.0.0", + "rustc_plugin 0.0.0", + "rustc_privacy 0.0.0", + "rustc_resolve 0.0.0", + "rustc_save_analysis 0.0.0", + "rustc_trans 0.0.0", + "rustc_typeck 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_ext 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_errors" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "serialize 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_incremental" +version = "0.0.0" +dependencies = [ + "graphviz 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_data_structures 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_lint" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_const_eval 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_llvm" +version = "0.0.0" +dependencies = [ + "build_helper 0.1.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", + "rustc_bitflags 0.0.0", +] + +[[package]] +name = "rustc_metadata" +version = "0.0.0" +dependencies = [ + "flate 0.0.0", + "log 0.0.0", + "proc_macro 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_llvm 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_ext 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_mir" +version = "0.0.0" +dependencies = [ + "graphviz 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_bitflags 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_passes" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "rustc 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_const_math 0.0.0", + "rustc_errors 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_platform_intrinsics" +version = "0.0.0" + +[[package]] +name = "rustc_plugin" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "rustc 0.0.0", + 
"rustc_back 0.0.0", + "rustc_bitflags 0.0.0", + "rustc_errors 0.0.0", + "rustc_metadata 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_privacy" +version = "0.0.0" +dependencies = [ + "rustc 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_resolve" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_errors 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_save_analysis" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "rustc 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_trans" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "flate 0.0.0", + "graphviz 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_bitflags 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_incremental 0.0.0", + "rustc_llvm 0.0.0", + "rustc_platform_intrinsics 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustc_typeck" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "fmt_macros 0.0.0", + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "rustc_platform_intrinsics 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "rustdoc" +version = "0.0.0" +dependencies = [ + "arena 0.0.0", + "build_helper 0.1.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", + "log 0.0.0", + "rustc 0.0.0", + "rustc_back 0.0.0", + "rustc_const_eval 0.0.0", + "rustc_const_math 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_driver 0.0.0", + "rustc_errors 0.0.0", + "rustc_lint 0.0.0", + "rustc_metadata 0.0.0", + "rustc_resolve 0.0.0", + "rustc_trans 0.0.0", + "serialize 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "serialize" +version = "0.0.0" +dependencies = [ + "log 0.0.0", +] + +[[package]] +name = "std" +version = "0.0.0" +dependencies = [ + "alloc 0.0.0", + "alloc_jemalloc 0.0.0", + "alloc_system 0.0.0", + "build_helper 0.1.0", + "collections 0.0.0", + "compiler_builtins 0.0.0", + "core 0.0.0", + "gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)", + "libc 0.0.0", + "panic_abort 0.0.0", + "panic_unwind 0.0.0", + "rand 0.0.0", + "std_unicode 0.0.0", + "unwind 0.0.0", +] + +[[package]] +name = "std_shim" +version = "0.1.0" +dependencies = [ + "core 0.0.0", + "std 0.0.0", +] + +[[package]] +name = "std_unicode" +version = "0.0.0" +dependencies = [ + "core 0.0.0", +] + +[[package]] +name = "syntax" +version = "0.0.0" +dependencies = [ + "log 0.0.0", + "rustc_bitflags 0.0.0", + "rustc_data_structures 0.0.0", + "rustc_errors 0.0.0", + "serialize 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "syntax_ext" +version = "0.0.0" +dependencies = [ + "fmt_macros 0.0.0", + "log 0.0.0", + "proc_macro 0.0.0", + "rustc_errors 0.0.0", + "syntax 0.0.0", + "syntax_pos 0.0.0", +] + +[[package]] +name = "syntax_pos" +version = "0.0.0" +dependencies = [ + "serialize 0.0.0", +] + +[[package]] +name = "term" +version = "0.0.0" + +[[package]] +name = "test" +version = "0.0.0" +dependencies = [ + "getopts 0.0.0", + "term 0.0.0", +] + +[[package]] +name = "test_shim" +version = "0.1.0" +dependencies = [ + "test 0.0.0", +] + +[[package]] +name = "tidy" +version = "0.1.0" + +[[package]] +name = "toml" 
+version = "0.1.30" +source = "registry+https://github.com/rust-lang/crates.io-index" +dependencies = [ + "rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)", +] + +[metadata] +"checksum cmake 0.1.18 (registry+https://github.com/rust-lang/crates.io-index)" = "0e5bcf27e097a184c1df4437654ed98df3d7a516e8508a6ba45d8b092bbdf283" +"checksum env_logger 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "15abd780e45b3ea4f76b4e9a26ff4843258dd8a3eed2775a0e7368c2e7936c2f" +"checksum filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "5363ab8e4139b8568a6237db5248646e5a8a2f89bd5ccb02092182b11fd3e922" +"checksum gcc 0.3.40 (registry+https://github.com/rust-lang/crates.io-index)" = "872db9e59486ef2b14f8e8c10e9ef02de2bccef6363d7f34835dedb386b3d950" +"checksum getopts 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)" = "d9047cfbd08a437050b363d35ef160452c5fe8ea5187ae0a624708c91581d685" +"checksum libc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)" = "044d1360593a78f5c8e5e710beccdc24ab71d1f01bc19a29bcacdba22e8475d8" +"checksum log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "ab83497bf8bf4ed2a74259c1c802351fcd67a65baa86394b6ba73c36f4838054" +"checksum num_cpus 0.2.13 (registry+https://github.com/rust-lang/crates.io-index)" = "cee7e88156f3f9e19bdd598f8d6c9db7bf4078f99f8381f43a55b09648d1a6e3" +"checksum rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)" = "6159e4e6e559c81bd706afe9c8fd68f547d3e851ce12e76b1de7914bab61691b" +"checksum toml 0.1.30 (registry+https://github.com/rust-lang/crates.io-index)" = "0590d72182e50e879c4da3b11c6488dae18fccb1ae0c7a3eda18e16795844796" diff --git a/src/Cargo.toml b/src/Cargo.toml new file mode 100644 index 0000000000..8fb5c70c41 --- /dev/null +++ b/src/Cargo.toml @@ -0,0 +1,30 @@ +[workspace] +members = [ + "bootstrap", + "rustc", + "rustc/std_shim", + "rustc/test_shim", + "tools/cargotest", + "tools/compiletest", + "tools/error_index_generator", + "tools/linkchecker", + "tools/rustbook", + "tools/tidy", +] + +# Curiously, compiletest will segfault if compiled with opt-level=3 on 64-bit +# MSVC when running the compile-fail test suite when a should-fail test panics. +# But hey if this is removed and it gets past the bots, sounds good to me. +[profile.release] +opt-level = 2 +[profile.bench] +opt-level = 2 + +# These options are controlled from our rustc wrapper script, so turn them off +# here and have them controlled elsewhere. +[profile.dev] +debug = false +debug-assertions = false +[profile.test] +debug = false +debug-assertions = false diff --git a/src/bootstrap/Cargo.toml b/src/bootstrap/Cargo.toml index 9d44ca033e..35f8fb43f7 100644 --- a/src/bootstrap/Cargo.toml +++ b/src/bootstrap/Cargo.toml @@ -27,10 +27,5 @@ num_cpus = "0.2" toml = "0.1" getopts = "0.2" rustc-serialize = "0.3" -gcc = "0.3.36" +gcc = "0.3.38" libc = "0.2" -md5 = "0.1" - -[target.'cfg(windows)'.dependencies] -winapi = "0.2" -kernel32-sys = "0.2" diff --git a/src/bootstrap/README.md b/src/bootstrap/README.md index f73f41ffae..02c64548eb 100644 --- a/src/bootstrap/README.md +++ b/src/bootstrap/README.md @@ -22,7 +22,7 @@ Note that if you're on Unix you should be able to execute the script directly: ./x.py build ``` -The script accepts commands, flags, and filters to determine what to do: +The script accepts commands, flags, and arguments to determine what to do: * `build` - a general purpose command for compiling code. 
Alone `build` will bootstrap the entire compiler, and otherwise arguments passed indicate what to @@ -32,7 +32,7 @@ The script accepts commands, flags, and filters to determine what to do: # build the whole compiler ./x.py build - # build the stage1 compier + # build the stage1 compiler ./x.py build --stage 1 # build stage0 libstd @@ -42,6 +42,15 @@ The script accepts commands, flags, and filters to determine what to do: ./x.py build --stage 0 src/libtest ``` + If files are dirty that would normally be rebuilt from stage 0, that can be + overidden using `--keep-stage 0`. Using `--keep-stage n` will skip all steps + that belong to stage n or earlier: + + ``` + # keep old build products for stage 0 and build stage 1 + ./x.py build --keep-stage 0 --stage 1 + ``` + * `test` - a command for executing unit tests. Like the `build` command this will execute the entire test suite by default, and otherwise it can be used to select which test suite is run: @@ -54,7 +63,7 @@ The script accepts commands, flags, and filters to determine what to do: ./x.py test src/test/run-pass # execute only some tests in the run-pass test suite - ./x.py test src/test/run-pass --filter my-filter + ./x.py test src/test/run-pass --test-args substring-of-test-name # execute tests in the standard library in stage0 ./x.py test --stage 0 src/libstd @@ -66,17 +75,6 @@ The script accepts commands, flags, and filters to determine what to do: * `doc` - a command for building documentation. Like above can take arguments for what to document. -If you're more used to `./configure` and `make`, however, then you can also -configure the build system to use rustbuild instead of the old makefiles: - -``` -./configure --enable-rustbuild -make -``` - -Afterwards the `Makefile` which is generated will have a few commands like -`make check`, `make tidy`, etc. - ## Configuring rustbuild There are currently two primary methods for configuring the rustbuild build @@ -90,6 +88,13 @@ be found at `src/bootstrap/config.toml.example`, and the configuration file can also be passed as `--config path/to/config.toml` if the build system is being invoked manually (via the python script). +Finally, rustbuild makes use of the [gcc-rs crate] which has [its own +method][env-vars] of configuring C compilers and C flags via environment +variables. + +[gcc-rs crate]: https://github.com/alexcrichton/gcc-rs +[env-vars]: https://github.com/alexcrichton/gcc-rs#external-configuration-via-environment-variables + ## Build stages The rustbuild build system goes through a few phases to actually build the @@ -273,16 +278,17 @@ After that, each module in rustbuild should have enough documentation to keep you up and running. Some general areas that you may be interested in modifying are: -* Adding a new build tool? Take a look at `build/step.rs` for examples of other - tools, as well as `build/mod.rs`. +* Adding a new build tool? Take a look at `bootstrap/step.rs` for examples of + other tools. * Adding a new compiler crate? Look no further! Adding crates can be done by adding a new directory with `Cargo.toml` followed by configuring all `Cargo.toml` files accordingly. * Adding a new dependency from crates.io? We're still working on that, so hold off on that for now. -* Adding a new configuration option? Take a look at `build/config.rs` or perhaps - `build/flags.rs` and then modify the build elsewhere to read that option. -* Adding a sanity check? Take a look at `build/sanity.rs`. +* Adding a new configuration option? 
Take a look at `bootstrap/config.rs` or + perhaps `bootstrap/flags.rs` and then modify the build elsewhere to read that + option. +* Adding a sanity check? Take a look at `bootstrap/sanity.rs`. If you have any questions feel free to reach out on `#rust-internals` on IRC or open an issue in the bug tracker! diff --git a/src/bootstrap/bin/rustc.rs b/src/bootstrap/bin/rustc.rs index 879eca60cc..c5684e6999 100644 --- a/src/bootstrap/bin/rustc.rs +++ b/src/bootstrap/bin/rustc.rs @@ -30,7 +30,7 @@ extern crate bootstrap; use std::env; use std::ffi::OsString; use std::path::PathBuf; -use std::process::Command; +use std::process::{Command, ExitStatus}; fn main() { let args = env::args_os().skip(1).collect::>(); @@ -125,6 +125,11 @@ fn main() { cmd.arg("-C").arg(format!("codegen-units={}", s)); } + // Emit save-analysis info. + if env::var("RUSTC_SAVE_ANALYSIS") == Ok("api".to_string()) { + cmd.arg("-Zsave-analysis-api"); + } + // Dealing with rpath here is a little special, so let's go into some // detail. First off, `-rpath` is a linker option on Unix platforms // which adds to the runtime dynamic loader path when looking for @@ -153,6 +158,15 @@ fn main() { // to change a flag in a binary? if env::var("RUSTC_RPATH") == Ok("true".to_string()) { let rpath = if target.contains("apple") { + + // Note that we need to take one extra step on OSX to also pass + // `-Wl,-instal_name,@rpath/...` to get things to work right. To + // do that we pass a weird flag to the compiler to get it to do + // so. Note that this is definitely a hack, and we should likely + // flesh out rpath support more fully in the future. + if stage != "0" { + cmd.arg("-Z").arg("osx-rpath-install-name"); + } Some("-Wl,-rpath,@loader_path/../lib") } else if !target.contains("windows") { Some("-Wl,-rpath,$ORIGIN/../lib") @@ -166,8 +180,19 @@ fn main() { } // Actually run the compiler! 
- std::process::exit(match cmd.status() { - Ok(s) => s.code().unwrap_or(1), + std::process::exit(match exec_cmd(&mut cmd) { + Ok(s) => s.code().unwrap_or(0xfe), Err(e) => panic!("\n\nfailed to run {:?}: {}\n\n", cmd, e), }) } + +#[cfg(unix)] +fn exec_cmd(cmd: &mut Command) -> ::std::io::Result { + use std::os::unix::process::CommandExt; + Err(cmd.exec()) +} + +#[cfg(not(unix))] +fn exec_cmd(cmd: &mut Command) -> ::std::io::Result { + cmd.status() +} diff --git a/src/bootstrap/bootstrap.py b/src/bootstrap/bootstrap.py index 63feea1057..82a546f195 100644 --- a/src/bootstrap/bootstrap.py +++ b/src/bootstrap/bootstrap.py @@ -30,32 +30,37 @@ def get(url, path, verbose=False): sha_path = sha_file.name try: - download(sha_path, sha_url, verbose) + download(sha_path, sha_url, False, verbose) if os.path.exists(path): if verify(path, sha_path, False): - print("using already-download file " + path) + if verbose: + print("using already-download file " + path) return else: - print("ignoring already-download file " + path + " due to failed verification") + if verbose: + print("ignoring already-download file " + path + " due to failed verification") os.unlink(path) - download(temp_path, url, verbose) - if not verify(temp_path, sha_path, True): + download(temp_path, url, True, verbose) + if not verify(temp_path, sha_path, verbose): raise RuntimeError("failed verification") - print("moving {} to {}".format(temp_path, path)) + if verbose: + print("moving {} to {}".format(temp_path, path)) shutil.move(temp_path, path) finally: - delete_if_present(sha_path) - delete_if_present(temp_path) + delete_if_present(sha_path, verbose) + delete_if_present(temp_path, verbose) -def delete_if_present(path): +def delete_if_present(path, verbose): if os.path.isfile(path): - print("removing " + path) + if verbose: + print("removing " + path) os.unlink(path) -def download(path, url, verbose): - print("downloading {} to {}".format(url, path)) +def download(path, url, probably_big, verbose): + if probably_big or verbose: + print("downloading {}".format(url)) # see http://serverfault.com/questions/301128/how-to-download if sys.platform == 'win32': run(["PowerShell.exe", "/nologo", "-Command", @@ -63,17 +68,22 @@ def download(path, url, verbose): ".DownloadFile('{}', '{}')".format(url, path)], verbose=verbose) else: - run(["curl", "-o", path, url], verbose=verbose) + if probably_big or verbose: + option = "-#" + else: + option = "-s" + run(["curl", option, "-Sf", "-o", path, url], verbose=verbose) def verify(path, sha_path, verbose): - print("verifying " + path) + if verbose: + print("verifying " + path) with open(path, "rb") as f: found = hashlib.sha256(f.read()).hexdigest() with open(sha_path, "r") as f: - expected, _ = f.readline().split() + expected = f.readline().split()[0] verified = found == expected - if not verified and verbose: + if not verified: print("invalid checksum:\n" " found: {}\n" " expected: {}".format(found, expected)) @@ -136,7 +146,7 @@ class RustBuild(object): def download_stage0(self): cache_dst = os.path.join(self.build_dir, "cache") rustc_cache = os.path.join(cache_dst, self.stage0_rustc_date()) - cargo_cache = os.path.join(cache_dst, self.stage0_cargo_date()) + cargo_cache = os.path.join(cache_dst, self.stage0_cargo_rev()) if not os.path.exists(rustc_cache): os.makedirs(rustc_cache) if not os.path.exists(cargo_cache): @@ -144,6 +154,7 @@ class RustBuild(object): if self.rustc().startswith(self.bin_root()) and \ (not os.path.exists(self.rustc()) or self.rustc_out_of_date()): + 
self.print_what_it_means_to_bootstrap() if os.path.exists(self.bin_root()): shutil.rmtree(self.bin_root()) channel = self.stage0_rustc_channel() @@ -167,21 +178,18 @@ class RustBuild(object): if self.cargo().startswith(self.bin_root()) and \ (not os.path.exists(self.cargo()) or self.cargo_out_of_date()): - channel = self.stage0_cargo_channel() - filename = "cargo-{}-{}.tar.gz".format(channel, self.build) - url = "https://static.rust-lang.org/cargo-dist/" + self.stage0_cargo_date() + self.print_what_it_means_to_bootstrap() + filename = "cargo-nightly-{}.tar.gz".format(self.build) + url = "https://s3.amazonaws.com/rust-lang-ci/cargo-builds/" + self.stage0_cargo_rev() tarball = os.path.join(cargo_cache, filename) if not os.path.exists(tarball): get("{}/{}".format(url, filename), tarball, verbose=self.verbose) unpack(tarball, self.bin_root(), match="cargo", verbose=self.verbose) with open(self.cargo_stamp(), 'w') as f: - f.write(self.stage0_cargo_date()) + f.write(self.stage0_cargo_rev()) - def stage0_cargo_date(self): - return self._cargo_date - - def stage0_cargo_channel(self): - return self._cargo_channel + def stage0_cargo_rev(self): + return self._cargo_rev def stage0_rustc_date(self): return self._rustc_date @@ -205,7 +213,7 @@ class RustBuild(object): if not os.path.exists(self.cargo_stamp()) or self.clean: return True with open(self.cargo_stamp(), 'r') as f: - return self.stage0_cargo_date() != f.read() + return self.stage0_cargo_rev() != f.read() def bin_root(self): return os.path.join(self.build_dir, self.build, "stage0") @@ -226,13 +234,16 @@ class RustBuild(object): config = self.get_toml('cargo') if config: return config + config = self.get_mk('CFG_LOCAL_RUST_ROOT') + if config: + return config + '/bin/cargo' + self.exe_suffix() return os.path.join(self.bin_root(), "bin/cargo" + self.exe_suffix()) def rustc(self): config = self.get_toml('rustc') if config: return config - config = self.get_mk('CFG_LOCAL_RUST') + config = self.get_mk('CFG_LOCAL_RUST_ROOT') if config: return config + '/bin/rustc' + self.exe_suffix() return os.path.join(self.bin_root(), "bin/rustc" + self.exe_suffix()) @@ -248,7 +259,27 @@ class RustBuild(object): else: return '' + def print_what_it_means_to_bootstrap(self): + if hasattr(self, 'printed'): + return + self.printed = True + if os.path.exists(self.bootstrap_binary()): + return + if not '--help' in sys.argv or len(sys.argv) == 1: + return + + print('info: the build system for Rust is written in Rust, so this') + print(' script is now going to download a stage0 rust compiler') + print(' and then compile the build system itself') + print('') + print('info: in the meantime you can read more about rustbuild at') + print(' src/bootstrap/README.md before the download finishes') + + def bootstrap_binary(self): + return os.path.join(self.build_dir, "bootstrap/debug/bootstrap") + def build_bootstrap(self): + self.print_what_it_means_to_bootstrap() build_dir = os.path.join(self.build_dir, "bootstrap") if self.clean and os.path.exists(build_dir): shutil.rmtree(build_dir) @@ -259,9 +290,11 @@ class RustBuild(object): env["DYLD_LIBRARY_PATH"] = os.path.join(self.bin_root(), "lib") env["PATH"] = os.path.join(self.bin_root(), "bin") + \ os.pathsep + env["PATH"] - self.run([self.cargo(), "build", "--manifest-path", - os.path.join(self.rust_root, "src/bootstrap/Cargo.toml")], - env) + args = [self.cargo(), "build", "--manifest-path", + os.path.join(self.rust_root, "src/bootstrap/Cargo.toml")] + if self.use_vendored_sources: + args.append("--frozen") + self.run(args, env) 
def run(self, args, env): proc = subprocess.Popen(args, env=env) @@ -400,9 +433,37 @@ def main(): except: pass + rb.use_vendored_sources = '\nvendor = true' in rb.config_toml or \ + 'CFG_ENABLE_VENDOR' in rb.config_mk + + if 'SUDO_USER' in os.environ: + if os.environ['USER'] != os.environ['SUDO_USER']: + rb.use_vendored_sources = True + print('info: looks like you are running this command under `sudo`') + print(' and so in order to preserve your $HOME this will now') + print(' use vendored sources by default. Note that if this') + print(' does not work you should run a normal build first') + print(' before running a command like `sudo make intall`') + + if rb.use_vendored_sources: + if not os.path.exists('.cargo'): + os.makedirs('.cargo') + with open('.cargo/config','w') as f: + f.write(""" + [source.crates-io] + replace-with = 'vendored-sources' + registry = 'https://example.com' + + [source.vendored-sources] + directory = '{}/src/vendor' + """.format(rb.rust_root)) + else: + if os.path.exists('.cargo'): + shutil.rmtree('.cargo') + data = stage0_data(rb.rust_root) rb._rustc_channel, rb._rustc_date = data['rustc'].split('-', 1) - rb._cargo_channel, rb._cargo_date = data['cargo'].split('-', 1) + rb._cargo_rev = data['cargo'] start_time = time() @@ -414,7 +475,7 @@ def main(): sys.stdout.flush() # Run the bootstrap - args = [os.path.join(rb.build_dir, "bootstrap/debug/bootstrap")] + args = [rb.bootstrap_binary()] args.extend(sys.argv[1:]) env = os.environ.copy() env["BUILD"] = rb.build diff --git a/src/bootstrap/cc.rs b/src/bootstrap/cc.rs index e2bde4a658..aa70e24d95 100644 --- a/src/bootstrap/cc.rs +++ b/src/bootstrap/cc.rs @@ -51,7 +51,7 @@ pub fn find(build: &mut Build) { if let Some(cc) = config.and_then(|c| c.cc.as_ref()) { cfg.compiler(cc); } else { - set_compiler(&mut cfg, "gcc", target, config); + set_compiler(&mut cfg, "gcc", target, config, build); } let compiler = cfg.get_compiler(); @@ -72,7 +72,7 @@ pub fn find(build: &mut Build) { if let Some(cxx) = config.and_then(|c| c.cxx.as_ref()) { cfg.compiler(cxx); } else { - set_compiler(&mut cfg, "g++", host, config); + set_compiler(&mut cfg, "g++", host, config, build); } let compiler = cfg.get_compiler(); build.verbose(&format!("CXX_{} = {:?}", host, compiler.path())); @@ -83,7 +83,8 @@ pub fn find(build: &mut Build) { fn set_compiler(cfg: &mut gcc::Config, gnu_compiler: &str, target: &str, - config: Option<&Target>) { + config: Option<&Target>, + build: &Build) { match target { // When compiling for android we may have the NDK configured in the // config.toml in which case we look there. Otherwise the default @@ -119,6 +120,22 @@ fn set_compiler(cfg: &mut gcc::Config, } } + "mips-unknown-linux-musl" => { + cfg.compiler("mips-linux-musl-gcc"); + } + "mipsel-unknown-linux-musl" => { + cfg.compiler("mipsel-linux-musl-gcc"); + } + + t if t.contains("musl") => { + if let Some(root) = build.musl_root(target) { + let guess = root.join("bin/musl-gcc"); + if guess.exists() { + cfg.compiler(guess); + } + } + } + _ => {} } } diff --git a/src/bootstrap/channel.rs b/src/bootstrap/channel.rs index 879c383404..c38bb33aa0 100644 --- a/src/bootstrap/channel.rs +++ b/src/bootstrap/channel.rs @@ -15,12 +15,11 @@ //! `package_vers`, and otherwise indicating to the compiler what it should //! print out as part of its version information. 
-use std::fs::{self, File}; +use std::fs::File; use std::io::prelude::*; use std::process::Command; use build_helper::output; -use md5; use Build; @@ -70,7 +69,7 @@ pub fn collect(build: &mut Build) { // If we have a git directory, add in some various SHA information of what // commit this compiler was compiled from. - if fs::metadata(build.src.join(".git")).is_ok() { + if build.src.join(".git").is_dir() { let ver_date = output(Command::new("git").current_dir(&build.src) .arg("log").arg("-1") .arg("--date=short") @@ -91,20 +90,4 @@ pub fn collect(build: &mut Build) { build.ver_hash = Some(ver_hash); build.short_ver_hash = Some(short_ver_hash); } - - // Calculate this compiler's bootstrap key, which is currently defined as - // the first 8 characters of the md5 of the release string. - let key = md5::compute(build.release.as_bytes()); - build.bootstrap_key = format!("{:02x}{:02x}{:02x}{:02x}", - key[0], key[1], key[2], key[3]); - - // Slurp up the stage0 bootstrap key as we're bootstrapping from an - // otherwise stable compiler. - let mut s = String::new(); - t!(t!(File::open(build.src.join("src/stage0.txt"))).read_to_string(&mut s)); - if let Some(line) = s.lines().find(|l| l.starts_with("rustc_key")) { - if let Some(key) = line.split(": ").nth(1) { - build.bootstrap_key_stage0 = key.to_string(); - } - } } diff --git a/src/bootstrap/check.rs b/src/bootstrap/check.rs index 611630c573..aa15825d82 100644 --- a/src/bootstrap/check.rs +++ b/src/bootstrap/check.rs @@ -8,13 +8,16 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -//! Implementation of the various `check-*` targets of the build system. +//! Implementation of the test-related targets of the build system. //! //! This file implements the various regression test suites that we execute on //! our CI. +extern crate build_helper; + use std::collections::HashSet; use std::env; +use std::fmt; use std::fs; use std::path::{PathBuf, Path}; use std::process::Command; @@ -22,10 +25,39 @@ use std::process::Command; use build_helper::output; use {Build, Compiler, Mode}; +use dist; use util::{self, dylib_path, dylib_path_var}; const ADB_TEST_DIR: &'static str = "/data/tmp"; +/// The two modes of the test runner; tests or benchmarks. +#[derive(Copy, Clone)] +pub enum TestKind { + /// Run `cargo test` + Test, + /// Run `cargo bench` + Bench, +} + +impl TestKind { + // Return the cargo subcommand for this test kind + fn subcommand(self) -> &'static str { + match self { + TestKind::Test => "test", + TestKind::Bench => "bench", + } + } +} + +impl fmt::Display for TestKind { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.write_str(match *self { + TestKind::Test => "Testing", + TestKind::Bench => "Benchmarking", + }) + } +} + /// Runs the `linkchecker` tool as compiled in `stage` by the `host` compiler. 
/// /// This tool in `src/tools` will verify the validity of all our links in the @@ -33,6 +65,8 @@ const ADB_TEST_DIR: &'static str = "/data/tmp"; pub fn linkcheck(build: &Build, stage: u32, host: &str) { println!("Linkcheck stage{} ({})", stage, host); let compiler = Compiler::new(stage, host); + + let _time = util::timeit(); build.run(build.tool_cmd(&compiler, "linkchecker") .arg(build.out.join(host).join("doc"))); } @@ -58,6 +92,7 @@ pub fn cargotest(build: &Build, stage: u32, host: &str) { let out_dir = build.out.join("ct"); t!(fs::create_dir_all(&out_dir)); + let _time = util::timeit(); build.run(build.tool_cmd(compiler, "cargotest") .env("PATH", newpath) .arg(&build.cargo) @@ -90,7 +125,8 @@ pub fn compiletest(build: &Build, target: &str, mode: &str, suite: &str) { - println!("Check compiletest {} ({} -> {})", suite, compiler.host, target); + println!("Check compiletest suite={} mode={} ({} -> {})", + suite, mode, compiler.host, target); let mut cmd = build.tool_cmd(compiler, "compiletest"); // compiletest currently has... a lot of arguments, so let's just pass all @@ -130,9 +166,7 @@ pub fn compiletest(build: &Build, build.test_helpers_out(target).display())); cmd.arg("--target-rustcflags").arg(targetflags.join(" ")); - // FIXME: CFG_PYTHON should probably be detected more robustly elsewhere - let python_default = "python"; - cmd.arg("--docck-python").arg(python_default); + cmd.arg("--docck-python").arg(build.python()); if build.config.build.ends_with("apple-darwin") { // Force /usr/bin/python on OSX for LLDB tests because we're loading the @@ -140,7 +174,7 @@ pub fn compiletest(build: &Build, // (namely not Homebrew-installed python) cmd.arg("--lldb-python").arg("/usr/bin/python"); } else { - cmd.arg("--lldb-python").arg(python_default); + cmd.arg("--lldb-python").arg(build.python()); } if let Some(ref gdb) = build.config.gdb { @@ -186,6 +220,9 @@ pub fn compiletest(build: &Build, // Running a C compiler on MSVC requires a few env vars to be set, to be // sure to set them here. + // + // Note that if we encounter `PATH` we make sure to append to our own `PATH` + // rather than stomp over it. 
if target.contains("msvc") { for &(ref k, ref v) in build.cc[target].0.env() { if k != "PATH" { @@ -193,7 +230,8 @@ pub fn compiletest(build: &Build, } } } - build.add_bootstrap_key(&mut cmd); + cmd.env("RUSTC_BOOTSTRAP", "1"); + build.add_rust_test_threads(&mut cmd); cmd.arg("--adb-path").arg("adb"); cmd.arg("--adb-test-dir").arg(ADB_TEST_DIR); @@ -205,6 +243,7 @@ pub fn compiletest(build: &Build, cmd.arg("--android-cross-path").arg(""); } + let _time = util::timeit(); build.run(&mut cmd); } @@ -217,6 +256,7 @@ pub fn docs(build: &Build, compiler: &Compiler) { // Do a breadth-first traversal of the `src/doc` directory and just run // tests for all files that end in `*.md` let mut stack = vec![build.src.join("src/doc")]; + let _time = util::timeit(); while let Some(p) = stack.pop() { if p.is_dir() { @@ -242,7 +282,11 @@ pub fn docs(build: &Build, compiler: &Compiler) { pub fn error_index(build: &Build, compiler: &Compiler) { println!("Testing error-index stage{}", compiler.stage); - let output = testdir(build, compiler.host).join("error-index.md"); + let dir = testdir(build, compiler.host); + t!(fs::create_dir_all(&dir)); + let output = dir.join("error-index.md"); + + let _time = util::timeit(); build.run(build.tool_cmd(compiler, "error_index_generator") .arg("markdown") .arg(&output) @@ -254,8 +298,10 @@ pub fn error_index(build: &Build, compiler: &Compiler) { fn markdown_test(build: &Build, compiler: &Compiler, markdown: &Path) { let mut cmd = Command::new(build.rustdoc(compiler)); build.add_rustc_lib_path(compiler, &mut cmd); + build.add_rust_test_threads(&mut cmd); cmd.arg("--test"); cmd.arg(markdown); + cmd.env("RUSTC_BOOTSTRAP", "1"); let mut test_args = build.flags.cmd.test_args().join(" "); if build.config.quiet_tests { @@ -278,6 +324,7 @@ pub fn krate(build: &Build, compiler: &Compiler, target: &str, mode: Mode, + test_kind: TestKind, krate: Option<&str>) { let (name, path, features, root) = match mode { Mode::Libstd => { @@ -291,7 +338,7 @@ pub fn krate(build: &Build, } _ => panic!("can only test libraries"), }; - println!("Testing {} stage{} ({} -> {})", name, compiler.stage, + println!("{} {} stage{} ({} -> {})", test_kind, name, compiler.stage, compiler.host, target); // Build up the base `cargo test` command. @@ -299,7 +346,7 @@ pub fn krate(build: &Build, // Pass in some standard flags then iterate over the graph we've discovered // in `cargo metadata` with the maps above and figure out what `-p` // arguments need to get passed. 
- let mut cargo = build.cargo(compiler, mode, target, "test"); + let mut cargo = build.cargo(compiler, mode, target, test_kind.subcommand()); cargo.arg("--manifest-path") .arg(build.src.join(path).join("Cargo.toml")) .arg("--features").arg(features); @@ -336,16 +383,25 @@ pub fn krate(build: &Build, dylib_path.insert(0, build.sysroot_libdir(compiler, target)); cargo.env(dylib_path_var(), env::join_paths(&dylib_path).unwrap()); + if target.contains("android") { + cargo.arg("--no-run"); + } else if target.contains("emscripten") { + cargo.arg("--no-run"); + } + + cargo.arg("--"); + if build.config.quiet_tests { - cargo.arg("--"); cargo.arg("--quiet"); } + let _time = util::timeit(); + if target.contains("android") { - build.run(cargo.arg("--no-run")); + build.run(&mut cargo); krate_android(build, compiler, target, mode); } else if target.contains("emscripten") { - build.run(cargo.arg("--no-run")); + build.run(&mut cargo); krate_emscripten(build, compiler, target, mode); } else { cargo.args(&build.flags.cmd.test_args()); @@ -372,14 +428,17 @@ fn krate_android(build: &Build, target, compiler.host, test_file_name); + let quiet = if build.config.quiet_tests { "--quiet" } else { "" }; let program = format!("(cd {dir}; \ LD_LIBRARY_PATH=./{target} ./{test} \ --logfile {log} \ + {quiet} \ {args})", dir = ADB_TEST_DIR, target = target, test = test_file_name, log = log, + quiet = quiet, args = build.flags.cmd.test_args().join(" ")); let output = output(Command::new("adb").arg("shell").arg(&program)); @@ -408,18 +467,12 @@ fn krate_emscripten(build: &Build, let test_file_name = test.to_string_lossy().into_owned(); println!("running {}", test_file_name); let nodejs = build.config.nodejs.as_ref().expect("nodejs not configured"); - let status = Command::new(nodejs) - .arg(&test_file_name) - .stderr(::std::process::Stdio::inherit()) - .status(); - match status { - Ok(status) => { - if !status.success() { - panic!("some tests failed"); - } - } - Err(e) => panic!(format!("failed to execute command: {}", e)), - }; + let mut cmd = Command::new(nodejs); + cmd.arg(&test_file_name); + if build.config.quiet_tests { + cmd.arg("--quiet"); + } + build.run(&mut cmd); } } @@ -467,3 +520,32 @@ pub fn android_copy_libs(build: &Build, } } } + +/// Run "distcheck", a 'make check' from a tarball +pub fn distcheck(build: &Build) { + if build.config.build != "x86_64-unknown-linux-gnu" { + return + } + if !build.config.host.iter().any(|s| s == "x86_64-unknown-linux-gnu") { + return + } + if !build.config.target.iter().any(|s| s == "x86_64-unknown-linux-gnu") { + return + } + + let dir = build.out.join("tmp").join("distcheck"); + let _ = fs::remove_dir_all(&dir); + t!(fs::create_dir_all(&dir)); + + let mut cmd = Command::new("tar"); + cmd.arg("-xzf") + .arg(dist::rust_src_location(build)) + .arg("--strip-components=1") + .current_dir(&dir); + build.run(&mut cmd); + build.run(Command::new("./configure") + .current_dir(&dir)); + build.run(Command::new(build_helper::make(&build.config.build)) + .arg("check") + .current_dir(&dir)); +} diff --git a/src/bootstrap/clean.rs b/src/bootstrap/clean.rs index 75bcbfee6e..e7655458ae 100644 --- a/src/bootstrap/clean.rs +++ b/src/bootstrap/clean.rs @@ -46,6 +46,9 @@ fn rm_rf(build: &Build, path: &Path) { if !path.exists() { return } + if path.is_file() { + return do_op(path, "remove file", |p| fs::remove_file(p)); + } for file in t!(fs::read_dir(path)) { let file = t!(file).path(); diff --git a/src/bootstrap/compile.rs b/src/bootstrap/compile.rs index 5fc4f00672..04d01759ab 100644 --- 
a/src/bootstrap/compile.rs +++ b/src/bootstrap/compile.rs @@ -120,8 +120,8 @@ fn build_startup_objects(build: &Build, target: &str, into: &Path) { for file in t!(fs::read_dir(build.src.join("src/rtstartup"))) { let file = t!(file); let mut cmd = Command::new(&compiler_path); - build.add_bootstrap_key(&mut cmd); - build.run(cmd.arg("--target").arg(target) + build.run(cmd.env("RUSTC_BOOTSTRAP", "1") + .arg("--target").arg(target) .arg("--emit=obj") .arg("--out-dir").arg(into) .arg(file.path())); @@ -190,6 +190,13 @@ pub fn rustc<'a>(build: &'a Build, target: &str, compiler: &Compiler<'a>) { .env("CFG_PREFIX", build.config.prefix.clone().unwrap_or(String::new())) .env("CFG_LIBDIR_RELATIVE", "lib"); + // If we're not building a compiler with debugging information then remove + // these two env vars which would be set otherwise. + if build.config.rust_debuginfo_only_std { + cargo.env_remove("RUSTC_DEBUGINFO"); + cargo.env_remove("RUSTC_DEBUGINFO_LINES"); + } + if let Some(ref ver_date) = build.ver_date { cargo.env("CFG_VER_DATE", ver_date); } @@ -212,6 +219,9 @@ pub fn rustc<'a>(build: &'a Build, target: &str, compiler: &Compiler<'a>) { cargo.env("LLVM_STATIC_STDCPP", compiler_file(build.cxx(target), "libstdc++.a")); } + if build.config.llvm_link_shared { + cargo.env("LLVM_LINK_SHARED", "1"); + } if let Some(ref s) = build.config.rustc_default_linker { cargo.env("CFG_DEFAULT_LINKER", s); } diff --git a/src/bootstrap/config.rs b/src/bootstrap/config.rs index bb05b75a3f..1f67b52db8 100644 --- a/src/bootstrap/config.rs +++ b/src/bootstrap/config.rs @@ -38,19 +38,22 @@ use util::push_exe_path; /// `src/bootstrap/config.toml.example`. #[derive(Default)] pub struct Config { - pub ccache: bool, + pub ccache: Option, pub ninja: bool, pub verbose: bool, pub submodules: bool, pub compiler_docs: bool, pub docs: bool, + pub vendor: bool, pub target_config: HashMap, // llvm codegen options pub llvm_assertions: bool, pub llvm_optimize: bool, + pub llvm_release_debuginfo: bool, pub llvm_version_check: bool, pub llvm_static_stdcpp: bool, + pub llvm_link_shared: bool, // rust codegen options pub rust_optimize: bool, @@ -58,6 +61,7 @@ pub struct Config { pub rust_debug_assertions: bool, pub rust_debuginfo: bool, pub rust_debuginfo_lines: bool, + pub rust_debuginfo_only_std: bool, pub rust_rpath: bool, pub rustc_default_linker: Option, pub rustc_default_ar: Option, @@ -88,6 +92,7 @@ pub struct Config { pub codegen_tests: bool, pub nodejs: Option, pub gdb: Option, + pub python: Option, } /// Per-target configuration stored in the global configuration structure. @@ -126,19 +131,35 @@ struct Build { docs: Option, submodules: Option, gdb: Option, + vendor: Option, + nodejs: Option, + python: Option, } /// TOML representation of how the LLVM build is configured. #[derive(RustcDecodable, Default)] struct Llvm { - ccache: Option, + ccache: Option, ninja: Option, assertions: Option, optimize: Option, + release_debuginfo: Option, version_check: Option, static_libstdcpp: Option, } +#[derive(RustcDecodable)] +enum StringOrBool { + String(String), + Bool(bool), +} + +impl Default for StringOrBool { + fn default() -> StringOrBool { + StringOrBool::Bool(false) + } +} + /// TOML representation of how the Rust build is configured. 
#[derive(RustcDecodable, Default)] struct Rust { @@ -147,6 +168,7 @@ struct Rust { debug_assertions: Option, debuginfo: Option, debuginfo_lines: Option, + debuginfo_only_std: Option, debug_jemalloc: Option, use_jemalloc: Option, backtrace: Option, @@ -230,16 +252,28 @@ impl Config { } config.rustc = build.rustc.map(PathBuf::from); config.cargo = build.cargo.map(PathBuf::from); + config.nodejs = build.nodejs.map(PathBuf::from); config.gdb = build.gdb.map(PathBuf::from); + config.python = build.python.map(PathBuf::from); set(&mut config.compiler_docs, build.compiler_docs); set(&mut config.docs, build.docs); set(&mut config.submodules, build.submodules); + set(&mut config.vendor, build.vendor); if let Some(ref llvm) = toml.llvm { - set(&mut config.ccache, llvm.ccache); + match llvm.ccache { + Some(StringOrBool::String(ref s)) => { + config.ccache = Some(s.to_string()) + } + Some(StringOrBool::Bool(true)) => { + config.ccache = Some("ccache".to_string()); + } + Some(StringOrBool::Bool(false)) | None => {} + } set(&mut config.ninja, llvm.ninja); set(&mut config.llvm_assertions, llvm.assertions); set(&mut config.llvm_optimize, llvm.optimize); + set(&mut config.llvm_release_debuginfo, llvm.release_debuginfo); set(&mut config.llvm_version_check, llvm.version_check); set(&mut config.llvm_static_stdcpp, llvm.static_libstdcpp); } @@ -247,6 +281,7 @@ impl Config { set(&mut config.rust_debug_assertions, rust.debug_assertions); set(&mut config.rust_debuginfo, rust.debuginfo); set(&mut config.rust_debuginfo_lines, rust.debuginfo_lines); + set(&mut config.rust_debuginfo_only_std, rust.debuginfo_only_std); set(&mut config.rust_optimize, rust.optimize); set(&mut config.rust_optimize_tests, rust.optimize_tests); set(&mut config.rust_debuginfo_tests, rust.debuginfo_tests); @@ -326,18 +361,20 @@ impl Config { } check! 
{ - ("CCACHE", self.ccache), ("MANAGE_SUBMODULES", self.submodules), ("COMPILER_DOCS", self.compiler_docs), ("DOCS", self.docs), ("LLVM_ASSERTIONS", self.llvm_assertions), + ("LLVM_RELEASE_DEBUGINFO", self.llvm_release_debuginfo), ("OPTIMIZE_LLVM", self.llvm_optimize), ("LLVM_VERSION_CHECK", self.llvm_version_check), ("LLVM_STATIC_STDCPP", self.llvm_static_stdcpp), + ("LLVM_LINK_SHARED", self.llvm_link_shared), ("OPTIMIZE", self.rust_optimize), ("DEBUG_ASSERTIONS", self.rust_debug_assertions), ("DEBUGINFO", self.rust_debuginfo), ("DEBUGINFO_LINES", self.rust_debuginfo_lines), + ("DEBUGINFO_ONLY_STD", self.rust_debuginfo_only_std), ("JEMALLOC", self.use_jemalloc), ("DEBUG_JEMALLOC", self.debug_jemalloc), ("RPATH", self.rust_rpath), @@ -347,6 +384,7 @@ impl Config { ("LOCAL_REBUILD", self.local_rebuild), ("NINJA", self.ninja), ("CODEGEN_TESTS", self.codegen_tests), + ("VENDOR", self.vendor), } match key { @@ -456,6 +494,16 @@ impl Config { self.rustc = Some(push_exe_path(path.clone(), &["bin", "rustc"])); self.cargo = Some(push_exe_path(path, &["bin", "cargo"])); } + "CFG_PYTHON" if value.len() > 0 => { + let path = parse_configure_path(value); + self.python = Some(path); + } + "CFG_ENABLE_CCACHE" if value == "1" => { + self.ccache = Some("ccache".to_string()); + } + "CFG_ENABLE_SCCACHE" if value == "1" => { + self.ccache = Some("sccache".to_string()); + } _ => {} } } diff --git a/src/bootstrap/config.toml.example b/src/bootstrap/config.toml.example index 1289cdba59..22542f8737 100644 --- a/src/bootstrap/config.toml.example +++ b/src/bootstrap/config.toml.example @@ -17,11 +17,16 @@ # Indicates whether the LLVM build is a Release or Debug build #optimize = true +# Indicates whether an LLVM Release build should include debug info +#release-debuginfo = false + # Indicates whether the LLVM assertions are enabled or not #assertions = false # Indicates whether ccache is used when building LLVM #ccache = false +# or alternatively ... +#ccache = "/path/to/ccache" # If an external LLVM root is specified, we automatically check the version by # default to make sure it's within the range that we're expecting, but setting @@ -79,9 +84,22 @@ # Indicate whether submodules are managed and updated automatically. #submodules = true -# The path to (or name of) the GDB executable to use +# The path to (or name of) the GDB executable to use. This is only used for +# executing the debuginfo test suite. #gdb = "gdb" +# The node.js executable to use. Note that this is only used for the emscripten +# target when running tests, otherwise this can be omitted. +#nodejs = "node" + +# Python interpreter to use for various tasks throughout the build, notably +# rustdoc tests, the lldb python interpreter, and some dist bits and pieces. +# Note that Python 2 is currently required. +#python = "python2.7" + +# Indicate whether the vendored sources are used for Rust dependencies or not +#vendor = false + # ============================================================================= # Options for compiling Rust code itself # ============================================================================= @@ -105,6 +123,11 @@ # Whether or not line number debug information is emitted #debuginfo-lines = false +# Whether or not to only build debuginfo for the standard library if enabled. +# If enabled, this will not compile the compiler with debuginfo, just the +# standard library. 
+#debuginfo-only-std = false + # Whether or not jemalloc is built and enabled #use-jemalloc = true diff --git a/src/bootstrap/dist.rs b/src/bootstrap/dist.rs index 8676f5cc4a..d9663a507d 100644 --- a/src/bootstrap/dist.rs +++ b/src/bootstrap/dist.rs @@ -23,7 +23,7 @@ use std::io::Write; use std::path::{PathBuf, Path}; use std::process::Command; -use {Build, Compiler}; +use {Build, Compiler, Mode}; use util::{cp_r, libdir, is_dylib, cp_filtered, copy}; pub fn package_vers(build: &Build) -> &str { @@ -48,6 +48,11 @@ pub fn tmpdir(build: &Build) -> PathBuf { /// Slurps up documentation from the `stage`'s `host`. pub fn docs(build: &Build, stage: u32, host: &str) { println!("Dist docs stage{} ({})", stage, host); + if !build.config.docs { + println!("\tskipping - docs disabled"); + return + } + let name = format!("rust-docs-{}", package_vers(build)); let image = tmpdir(build).join(format!("{}-{}-image", name, name)); let _ = fs::remove_dir_all(&image); @@ -92,6 +97,7 @@ pub fn mingw(build: &Build, host: &str) { let name = format!("rust-mingw-{}", package_vers(build)); let image = tmpdir(build).join(format!("{}-{}-image", name, host)); let _ = fs::remove_dir_all(&image); + t!(fs::create_dir_all(&image)); // The first argument to the script is a "temporary directory" which is just // thrown away (this contains the runtime DLLs included in the rustc package @@ -99,7 +105,7 @@ pub fn mingw(build: &Build, host: &str) { // (which is what we want). // // FIXME: this script should be rewritten into Rust - let mut cmd = Command::new("python"); + let mut cmd = Command::new(build.python()); cmd.arg(build.src.join("src/etc/make-win-dist.py")) .arg(tmpdir(build)) .arg(&image) @@ -159,7 +165,7 @@ pub fn rustc(build: &Build, stage: u32, host: &str) { // // FIXME: this script should be rewritten into Rust if host.contains("pc-windows-gnu") { - let mut cmd = Command::new("python"); + let mut cmd = Command::new(build.python()); cmd.arg(build.src.join("src/etc/make-win-dist.py")) .arg(&image) .arg(tmpdir(build)) @@ -260,6 +266,14 @@ pub fn debugger_scripts(build: &Build, pub fn std(build: &Build, compiler: &Compiler, target: &str) { println!("Dist std stage{} ({} -> {})", compiler.stage, compiler.host, target); + + // The only true set of target libraries came from the build triple, so + // let's reduce redundant work by only producing archives from that host. + if compiler.host != build.config.build { + println!("\tskipping, not a build host"); + return + } + let name = format!("rust-std-{}", package_vers(build)); let image = tmpdir(build).join(format!("{}-{}-image", name, target)); let _ = fs::remove_dir_all(&image); @@ -284,6 +298,53 @@ pub fn std(build: &Build, compiler: &Compiler, target: &str) { t!(fs::remove_dir_all(&image)); } +pub fn rust_src_location(build: &Build) -> PathBuf { + let plain_name = format!("rustc-{}-src", package_vers(build)); + distdir(build).join(&format!("{}.tar.gz", plain_name)) +} + +/// Creates a tarball of save-analysis metadata, if available. 
+pub fn analysis(build: &Build, compiler: &Compiler, target: &str) { + println!("Dist analysis"); + + if build.config.channel != "nightly" { + println!("\tskipping - not on nightly channel"); + return; + } + if compiler.host != build.config.build { + println!("\tskipping - not a build host"); + return + } + if compiler.stage != 2 { + println!("\tskipping - not stage2"); + return + } + + let name = format!("rust-analysis-{}", package_vers(build)); + let image = tmpdir(build).join(format!("{}-{}-image", name, target)); + + let src = build.stage_out(compiler, Mode::Libstd).join(target).join("release").join("deps"); + + let image_src = src.join("save-analysis"); + let dst = image.join("lib/rustlib").join(target).join("analysis"); + t!(fs::create_dir_all(&dst)); + cp_r(&image_src, &dst); + + let mut cmd = Command::new("sh"); + cmd.arg(sanitize_sh(&build.src.join("src/rust-installer/gen-installer.sh"))) + .arg("--product-name=Rust") + .arg("--rel-manifest-dir=rustlib") + .arg("--success-message=save-analysis-saved.") + .arg(format!("--image-dir={}", sanitize_sh(&image))) + .arg(format!("--work-dir={}", sanitize_sh(&tmpdir(build)))) + .arg(format!("--output-dir={}", sanitize_sh(&distdir(build)))) + .arg(format!("--package-name={}-{}", name, target)) + .arg(format!("--component-name=rust-analysis-{}", target)) + .arg("--legacy-manifest-dirs=rustlib,cargo"); + build.run(&mut cmd); + t!(fs::remove_dir_all(&image)); +} + /// Creates the `rust-src` installer component and the plain source tarball pub fn rust_src(build: &Build) { println!("Dist src"); @@ -330,6 +391,13 @@ pub fn rust_src(build: &Build) { } } + // If we're inside the vendor directory then we need to preserve + // everything as Cargo's vendoring support tracks all checksums and we + // want to be sure we don't accidentally leave out a file. + if spath.contains("vendor") { + return true + } + let excludes = [ "CVS", "RCS", "SCCS", ".git", ".gitignore", ".gitmodules", ".gitattributes", ".cvsignore", ".svn", ".arch-ids", "{arch}", @@ -374,7 +442,7 @@ pub fn rust_src(build: &Build) { // Create plain source tarball let mut cmd = Command::new("tar"); - cmd.arg("-czf").arg(sanitize_sh(&distdir(build).join(&format!("{}.tar.gz", plain_name)))) + cmd.arg("-czf").arg(sanitize_sh(&rust_src_location(build))) .arg(&plain_name) .current_dir(&dst); build.run(&mut cmd); diff --git a/src/bootstrap/doc.rs b/src/bootstrap/doc.rs index 30c7fefad8..15cb16fad3 100644 --- a/src/bootstrap/doc.rs +++ b/src/bootstrap/doc.rs @@ -146,7 +146,17 @@ pub fn std(build: &Build, stage: u32, target: &str) { let mut cargo = build.cargo(&compiler, Mode::Libstd, target, "doc"); cargo.arg("--manifest-path") .arg(build.src.join("src/rustc/std_shim/Cargo.toml")) - .arg("--features").arg(build.std_features()); + .arg("--features").arg(build.std_features()) + .arg("--no-deps"); + + for krate in &["alloc", "collections", "core", "std", "std_unicode"] { + cargo.arg("-p").arg(krate); + // Create all crate output directories first to make sure rustdoc uses + // relative links. + // FIXME: Cargo should probably do this itself. 
+ t!(fs::create_dir_all(out_dir.join(krate))); + } + build.run(&mut cargo); cp_r(&out_dir, &out) } diff --git a/src/bootstrap/flags.rs b/src/bootstrap/flags.rs index d7516954f1..5c8d7cab96 100644 --- a/src/bootstrap/flags.rs +++ b/src/bootstrap/flags.rs @@ -29,6 +29,7 @@ use step; pub struct Flags { pub verbose: bool, pub stage: Option, + pub keep_stage: Option, pub build: String, pub host: Vec, pub target: Vec, @@ -49,6 +50,10 @@ pub enum Subcommand { paths: Vec, test_args: Vec, }, + Bench { + paths: Vec, + test_args: Vec, + }, Clean, Dist { install: bool, @@ -64,6 +69,7 @@ impl Flags { opts.optmulti("", "host", "host targets to build", "HOST"); opts.optmulti("", "target", "target targets to build", "TARGET"); opts.optopt("", "stage", "stage to build", "N"); + opts.optopt("", "keep-stage", "stage to keep without recompiling", "N"); opts.optopt("", "src", "path to the root of the rust checkout", "DIR"); opts.optopt("j", "jobs", "number of jobs to run in parallel", "JOBS"); opts.optflag("h", "help", "print this help message"); @@ -104,7 +110,6 @@ Arguments: tests that should be compiled and run. For example: ./x.py test src/test/run-pass - ./x.py test src/test/run-pass/assert-* ./x.py test src/libstd --test-args hash_map ./x.py test src/libstd --stage 0 @@ -141,6 +146,7 @@ Arguments: command == "dist" || command == "doc" || command == "test" || + command == "bench" || command == "clean" { println!("Available invocations:"); if args.iter().any(|a| a == "-v") { @@ -163,6 +169,7 @@ println!("\ Subcommands: build Compile either the compiler or libraries test Build and run some test suites + bench Build and run some benchmarks doc Build documentation clean Clean out build directories dist Build and/or install distribution artifacts @@ -210,6 +217,14 @@ To learn more about a subcommand, run `./x.py -h` test_args: m.opt_strs("test-args"), } } + "bench" => { + opts.optmulti("", "test-args", "extra arguments", "ARGS"); + m = parse(&opts); + Subcommand::Bench { + paths: remaining_as_path(&m), + test_args: m.opt_strs("test-args"), + } + } "clean" => { m = parse(&opts); if m.free.len() > 0 { @@ -225,6 +240,7 @@ To learn more about a subcommand, run `./x.py -h` install: m.opt_present("install"), } } + "--help" => usage(0, &opts), cmd => { println!("unknown command: {}", cmd); usage(1, &opts); @@ -243,6 +259,7 @@ To learn more about a subcommand, run `./x.py -h` Flags { verbose: m.opt_present("v"), stage: m.opt_str("stage").map(|j| j.parse().unwrap()), + keep_stage: m.opt_str("keep-stage").map(|j| j.parse().unwrap()), build: m.opt_str("build").unwrap_or_else(|| { env::var("BUILD").unwrap() }), @@ -259,7 +276,8 @@ To learn more about a subcommand, run `./x.py -h` impl Subcommand { pub fn test_args(&self) -> Vec<&str> { match *self { - Subcommand::Test { ref test_args, .. } => { + Subcommand::Test { ref test_args, .. } | + Subcommand::Bench { ref test_args, .. } => { test_args.iter().flat_map(|s| s.split_whitespace()).collect() } _ => Vec::new(), diff --git a/src/bootstrap/job.rs b/src/bootstrap/job.rs index 4558e6f049..c3859275e6 100644 --- a/src/bootstrap/job.rs +++ b/src/bootstrap/job.rs @@ -37,17 +37,95 @@ //! Note that this module has a #[cfg(windows)] above it as none of this logic //! is required on Unix. 
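The hand-rolled Win32 declarations below replace the external `winapi` and `kernel32` crates. The hunk only shows `setup` up to the point where the job object is created, so the exact continuation may differ, but as a rough sketch the constants and structs declared here are combined along these lines to make child processes die when rustbuild exits:

```rust
// Sketch only: assumes the HANDLE/DWORD/LPVOID aliases, the
// JOBOBJECT_EXTENDED_LIMIT_INFORMATION struct, and the extern "system"
// declarations from this hunk. Error handling mirrors the asserts above.
unsafe fn kill_children_on_exit(job: HANDLE) {
    // Ask Windows to terminate every process in the job when the last
    // handle to the job object is closed (i.e. when rustbuild exits).
    let mut info: JOBOBJECT_EXTENDED_LIMIT_INFORMATION = mem::zeroed();
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    let r = SetInformationJobObject(job,
                                    JobObjectExtendedLimitInformation,
                                    &mut info as *mut _ as LPVOID,
                                    mem::size_of_val(&info) as DWORD);
    assert!(r != 0, "{}", io::Error::last_os_error());

    // Put the current process (and hence all of its children) into the job.
    let r = AssignProcessToJobObject(job, GetCurrentProcess());
    assert!(r != 0, "{}", io::Error::last_os_error());
}
```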
-extern crate kernel32; -extern crate winapi; +#![allow(bad_style, dead_code)] use std::env; use std::io; use std::mem; -use self::winapi::*; -use self::kernel32::*; +type HANDLE = *mut u8; +type BOOL = i32; +type DWORD = u32; +type LPHANDLE = *mut HANDLE; +type LPVOID = *mut u8; +type JOBOBJECTINFOCLASS = i32; +type SIZE_T = usize; +type LARGE_INTEGER = i64; +type UINT = u32; +type ULONG_PTR = usize; +type ULONGLONG = u64; + +const FALSE: BOOL = 0; +const DUPLICATE_SAME_ACCESS: DWORD = 0x2; +const PROCESS_DUP_HANDLE: DWORD = 0x40; +const JobObjectExtendedLimitInformation: JOBOBJECTINFOCLASS = 9; +const JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE: DWORD = 0x2000; +const SEM_FAILCRITICALERRORS: UINT = 0x0001; +const SEM_NOGPFAULTERRORBOX: UINT = 0x0002; + +extern "system" { + fn CreateJobObjectW(lpJobAttributes: *mut u8, lpName: *const u8) -> HANDLE; + fn CloseHandle(hObject: HANDLE) -> BOOL; + fn GetCurrentProcess() -> HANDLE; + fn OpenProcess(dwDesiredAccess: DWORD, + bInheritHandle: BOOL, + dwProcessId: DWORD) -> HANDLE; + fn DuplicateHandle(hSourceProcessHandle: HANDLE, + hSourceHandle: HANDLE, + hTargetProcessHandle: HANDLE, + lpTargetHandle: LPHANDLE, + dwDesiredAccess: DWORD, + bInheritHandle: BOOL, + dwOptions: DWORD) -> BOOL; + fn AssignProcessToJobObject(hJob: HANDLE, hProcess: HANDLE) -> BOOL; + fn SetInformationJobObject(hJob: HANDLE, + JobObjectInformationClass: JOBOBJECTINFOCLASS, + lpJobObjectInformation: LPVOID, + cbJobObjectInformationLength: DWORD) -> BOOL; + fn SetErrorMode(mode: UINT) -> UINT; +} + +#[repr(C)] +struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION { + BasicLimitInformation: JOBOBJECT_BASIC_LIMIT_INFORMATION, + IoInfo: IO_COUNTERS, + ProcessMemoryLimit: SIZE_T, + JobMemoryLimit: SIZE_T, + PeakProcessMemoryUsed: SIZE_T, + PeakJobMemoryUsed: SIZE_T, +} + +#[repr(C)] +struct IO_COUNTERS { + ReadOperationCount: ULONGLONG, + WriteOperationCount: ULONGLONG, + OtherOperationCount: ULONGLONG, + ReadTransferCount: ULONGLONG, + WriteTransferCount: ULONGLONG, + OtherTransferCount: ULONGLONG, +} + +#[repr(C)] +struct JOBOBJECT_BASIC_LIMIT_INFORMATION { + PerProcessUserTimeLimit: LARGE_INTEGER, + PerJobUserTimeLimit: LARGE_INTEGER, + LimitFlags: DWORD, + MinimumWorkingsetSize: SIZE_T, + MaximumWorkingsetSize: SIZE_T, + ActiveProcessLimit: DWORD, + Affinity: ULONG_PTR, + PriorityClass: DWORD, + SchedulingClass: DWORD, +} pub unsafe fn setup() { + // Tell Windows to not show any UI on errors (such as not finding a required dll + // during startup or terminating abnormally). This is important for running tests, + // since some of them use abnormal termination by design. + // This mode is inherited by all child processes. + let mode = SetErrorMode(SEM_NOGPFAULTERRORBOX); // read inherited flags + SetErrorMode(mode | SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX); + // Create a new job object for us to use let job = CreateJobObjectW(0 as *mut _, 0 as *const _); assert!(job != 0 as *mut _, "{}", io::Error::last_os_error()); diff --git a/src/bootstrap/lib.rs b/src/bootstrap/lib.rs index 3f8e3fe531..cd80c4298d 100644 --- a/src/bootstrap/lib.rs +++ b/src/bootstrap/lib.rs @@ -13,22 +13,69 @@ //! This module, and its descendants, are the implementation of the Rust build //! system. Most of this build system is backed by Cargo but the outer layer //! here serves as the ability to orchestrate calling Cargo, sequencing Cargo -//! builds, building artifacts like LLVM, etc. +//! builds, building artifacts like LLVM, etc. The goals of rustbuild are: //! -//! 
More documentation can be found in each respective module below. +//! * To be an easily understandable, easily extensible, and maintainable build +//! system. +//! * Leverage standard tools in the Rust ecosystem to build the compiler, aka +//! crates.io and Cargo. +//! * A standard interface to build across all platforms, including MSVC +//! +//! ## Architecture +//! +//! Although this build system defers most of the complicated logic to Cargo +//! itself, it still needs to maintain a list of targets and dependencies which +//! it can itself perform. Rustbuild is made up of a list of rules with +//! dependencies amongst them (created in the `step` module) and then knows how +//! to execute each in sequence. Each time rustbuild is invoked, it will simply +//! iterate through this list of steps and execute each serially in turn. For +//! each step rustbuild relies on the step internally being incremental and +//! parallel. Note, though, that the `-j` parameter to rustbuild gets forwarded +//! to appropriate test harnesses and such. +//! +//! Most of the "meaty" steps that matter are backed by Cargo, which does indeed +//! have its own parallelism and incremental management. Later steps, like +//! tests, aren't incremental and simply run the entire suite currently. +//! +//! When you execute `x.py build`, the steps which are executed are: +//! +//! * First, the python script is run. This will automatically download the +//! stage0 rustc and cargo according to `src/stage0.txt`, or using the cached +//! versions if they're available. These are then used to compile rustbuild +//! itself (using Cargo). Finally, control is then transferred to rustbuild. +//! +//! * Rustbuild takes over, performs sanity checks, probes the environment, +//! reads configuration, builds up a list of steps, and then starts executing +//! them. +//! +//! * The stage0 libstd is compiled +//! * The stage0 libtest is compiled +//! * The stage0 librustc is compiled +//! * The stage1 compiler is assembled +//! * The stage1 libstd, libtest, librustc are compiled +//! * The stage2 compiler is assembled +//! * The stage2 libstd, libtest, librustc are compiled +//! +//! Each step is driven by a separate Cargo project and rustbuild orchestrates +//! copying files between steps and otherwise preparing for Cargo to run. +//! +//! ## Further information +//! +//! More documentation can be found in each respective module below, and you can +//! also check out the `src/bootstrap/README.md` file for more information. extern crate build_helper; extern crate cmake; extern crate filetime; extern crate gcc; extern crate getopts; -extern crate md5; extern crate num_cpus; extern crate rustc_serialize; extern crate toml; use std::collections::HashMap; use std::env; +use std::ffi::OsString; use std::fs::{self, File}; use std::path::{Component, PathBuf, Path}; use std::process::Command; @@ -120,8 +167,6 @@ pub struct Build { version: String, package_vers: String, local_rebuild: bool, - bootstrap_key: String, - bootstrap_key_stage0: String, // Probed tools at runtime lldb_version: Option, @@ -131,6 +176,7 @@ pub struct Build { cc: HashMap)>, cxx: HashMap, crates: HashMap, + is_sudo: bool, } #[derive(Debug)] @@ -141,6 +187,7 @@ struct Crate { doc_step: String, build_step: String, test_step: String, + bench_step: String, } /// The various "modes" of invoking Cargo. 
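The `Mode` mentioned in that comment is defined elsewhere in `src/bootstrap/lib.rs` and is not part of this diff. Judging from the call sites in these hunks (`Mode::Libstd`, `Mode::Libtest`, `Mode::Librustc`, `Mode::Tool`), its shape is roughly:

```rust
// Approximate shape inferred from the call sites in this patch; the real
// definition lives in src/bootstrap/lib.rs and may carry extra derives.
pub enum Mode {
    /// Build the standard library and the crates it re-exports.
    Libstd,
    /// Build libtest and its dependencies.
    Libtest,
    /// Build the compiler itself (librustc and friends).
    Librustc,
    /// Build an auxiliary tool such as tidy, linkchecker, or
    /// error_index_generator.
    Tool,
}
```

The `TestKind` value threaded through `check::krate` in the earlier hunks presumably works the same way: its `subcommand()` method returns `"test"` or `"bench"`, which is what lets one code path drive both `cargo test` and `cargo bench`.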
@@ -189,6 +236,16 @@ impl Build { }; let local_rebuild = config.local_rebuild; + let is_sudo = match env::var_os("SUDO_USER") { + Some(sudo_user) => { + match env::var_os("USER") { + Some(user) => user != sudo_user, + None => false, + } + } + None => false, + }; + Build { flags: flags, config: config, @@ -204,14 +261,13 @@ impl Build { ver_date: None, version: String::new(), local_rebuild: local_rebuild, - bootstrap_key: String::new(), - bootstrap_key_stage0: String::new(), package_vers: String::new(), cc: HashMap::new(), cxx: HashMap::new(), crates: HashMap::new(), lldb_version: None, lldb_python_dir: None, + is_sudo: is_sudo, } } @@ -418,7 +474,7 @@ impl Build { // how the actual compiler itself is called. // // These variables are primarily all read by - // src/bootstrap/{rustc,rustdoc.rs} + // src/bootstrap/bin/{rustc.rs,rustdoc.rs} cargo.env("RUSTC", self.out.join("bootstrap/debug/rustc")) .env("RUSTC_REAL", self.compiler_path(compiler)) .env("RUSTC_STAGE", stage.to_string()) @@ -437,7 +493,9 @@ impl Build { .env("RUSTDOC_REAL", self.rustdoc(compiler)) .env("RUSTC_FLAGS", self.rustc_flags(target).join(" ")); - self.add_bootstrap_key(&mut cargo); + // Enable usage of unstable features + cargo.env("RUSTC_BOOTSTRAP", "1"); + self.add_rust_test_threads(&mut cargo); // Specify some various options for build scripts used throughout // the build. @@ -449,6 +507,10 @@ impl Build { .env(format!("CFLAGS_{}", target), self.cflags(target).join(" ")); } + if self.config.channel == "nightly" && compiler.stage == 2 { + cargo.env("RUSTC_SAVE_ANALYSIS", "api".to_string()); + } + // Environment variables *required* needed throughout the build // // FIXME: should update code to not require this env var @@ -457,9 +519,13 @@ impl Build { if self.config.verbose || self.flags.verbose { cargo.arg("-v"); } - if self.config.rust_optimize { + // FIXME: cargo bench does not accept `--release` + if self.config.rust_optimize && cmd != "bench" { cargo.arg("--release"); } + if self.config.vendor || self.is_sudo { + cargo.arg("--frozen"); + } return cargo } @@ -491,12 +557,30 @@ impl Build { fn tool_cmd(&self, compiler: &Compiler, tool: &str) -> Command { let mut cmd = Command::new(self.tool(&compiler, tool)); let host = compiler.host; - let paths = vec![ + let mut paths = vec![ self.cargo_out(compiler, Mode::Libstd, host).join("deps"), self.cargo_out(compiler, Mode::Libtest, host).join("deps"), self.cargo_out(compiler, Mode::Librustc, host).join("deps"), self.cargo_out(compiler, Mode::Tool, host).join("deps"), ]; + + // On MSVC a tool may invoke a C compiler (e.g. compiletest in run-make + // mode) and that C compiler may need some extra PATH modification. Do + // so here. + if compiler.host.contains("msvc") { + let curpaths = env::var_os("PATH").unwrap_or(OsString::new()); + let curpaths = env::split_paths(&curpaths).collect::>(); + for &(ref k, ref v) in self.cc[compiler.host].0.env() { + if k != "PATH" { + continue + } + for path in env::split_paths(v) { + if !curpaths.contains(&path) { + paths.push(path); + } + } + } + } add_lib_path(paths, &mut cmd); return cmd } @@ -504,7 +588,7 @@ impl Build { /// Get the space-separated set of activated features for the standard /// library. 
fn std_features(&self) -> String { - let mut features = String::new(); + let mut features = "panic-unwind".to_string(); if self.config.debug_jemalloc { features.push_str(" debug-jemalloc"); } @@ -650,12 +734,11 @@ impl Build { add_lib_path(vec![self.rustc_libdir(compiler)], cmd); } - /// Adds the compiler's bootstrap key to the environment of `cmd`. - fn add_bootstrap_key(&self, cmd: &mut Command) { - cmd.env("RUSTC_BOOTSTRAP", "1"); - // FIXME: Transitionary measure to bootstrap using the old bootstrap logic. - // Remove this once the bootstrap compiler uses the new login in Issue #36548. - cmd.env("RUSTC_BOOTSTRAP_KEY", "62b3e239"); + /// Adds the `RUST_TEST_THREADS` env var if necessary + fn add_rust_test_threads(&self, cmd: &mut Command) { + if env::var_os("RUST_TEST_THREADS").is_none() { + cmd.env("RUST_TEST_THREADS", self.jobs().to_string()); + } } /// Returns the compiler's libdir where it stores the dynamic libraries that @@ -771,6 +854,11 @@ impl Build { .or(self.config.musl_root.as_ref()) .map(|p| &**p) } + + /// Path to the python interpreter to use + fn python(&self) -> &Path { + self.config.python.as_ref().unwrap() + } } impl<'a> Compiler<'a> { diff --git a/src/bootstrap/metadata.rs b/src/bootstrap/metadata.rs index bf5cc6a4ad..8befb105ff 100644 --- a/src/bootstrap/metadata.rs +++ b/src/bootstrap/metadata.rs @@ -70,6 +70,7 @@ fn build_krate(build: &mut Build, krate: &str) { build_step: format!("build-crate-{}", package.name), doc_step: format!("doc-crate-{}", package.name), test_step: format!("test-crate-{}", package.name), + bench_step: format!("bench-crate-{}", package.name), name: package.name, deps: Vec::new(), path: path, diff --git a/src/bootstrap/mk/Makefile.in b/src/bootstrap/mk/Makefile.in index d403107763..cbcd85fb6b 100644 --- a/src/bootstrap/mk/Makefile.in +++ b/src/bootstrap/mk/Makefile.in @@ -1,4 +1,4 @@ -# Copyright 20126 The Rust Project Developers. See the COPYRIGHT +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT # file at the top-level directory of this distribution and at # http://rust-lang.org/COPYRIGHT. # @@ -23,9 +23,14 @@ all: $(Q)$(BOOTSTRAP) build $(BOOTSTRAP_ARGS) $(Q)$(BOOTSTRAP) doc $(BOOTSTRAP_ARGS) -# Don’t use $(Q) here, always show how to invoke the bootstrap script directly help: - $(BOOTSTRAP) --help + $(Q)echo 'Welcome to the rustbuild build system!' + $(Q)echo + $(Q)echo This makefile is a thin veneer over the ./x.py script located + $(Q)echo in this directory. To get the full power of the build system + $(Q)echo you can run x.py directly. + $(Q)echo + $(Q)echo To learn more run \`./x.py --help\` clean: $(Q)$(BOOTSTRAP) clean $(BOOTSTRAP_ARGS) @@ -50,13 +55,18 @@ check-cargotest: $(Q)$(BOOTSTRAP) test src/tools/cargotest $(BOOTSTRAP_ARGS) dist: $(Q)$(BOOTSTRAP) dist $(BOOTSTRAP_ARGS) +distcheck: + $(Q)$(BOOTSTRAP) dist $(BOOTSTRAP_ARGS) + $(Q)$(BOOTSTRAP) test distcheck $(BOOTSTRAP_ARGS) install: -ifeq (root user, $(USER) $(patsubst %,user,$(SUDO_USER))) - $(Q)echo "'sudo make install' is not supported currently." 
-else $(Q)$(BOOTSTRAP) dist --install $(BOOTSTRAP_ARGS) -endif tidy: - $(Q)$(BOOTSTRAP) test src/tools/tidy $(BOOTSTRAP_ARGS) + $(Q)$(BOOTSTRAP) test src/tools/tidy $(BOOTSTRAP_ARGS) --stage 0 + +check-stage2-T-arm-linux-androideabi-H-x86_64-unknown-linux-gnu: + $(Q)$(BOOTSTRAP) test --target arm-linux-androideabi +check-stage2-T-x86_64-unknown-linux-musl-H-x86_64-unknown-linux-gnu: + $(Q)$(BOOTSTRAP) test --target x86_64-unknown-linux-gnu + .PHONY: dist diff --git a/src/bootstrap/native.rs b/src/bootstrap/native.rs index 1b4e86fb30..6ba6b8e6c8 100644 --- a/src/bootstrap/native.rs +++ b/src/bootstrap/native.rs @@ -28,7 +28,7 @@ use cmake; use gcc; use Build; -use util::up_to_date; +use util::{self, up_to_date}; /// Compile LLVM for `target`. pub fn llvm(build: &Build, target: &str) { @@ -58,6 +58,7 @@ pub fn llvm(build: &Build, target: &str) { println!("Building LLVM for {}", target); + let _time = util::timeit(); let _ = fs::remove_dir_all(&dst.join("build")); t!(fs::create_dir_all(&dst.join("build"))); let assertions = if build.config.llvm_assertions {"ON"} else {"OFF"}; @@ -67,12 +68,20 @@ pub fn llvm(build: &Build, target: &str) { if build.config.ninja { cfg.generator("Ninja"); } + + let profile = match (build.config.llvm_optimize, build.config.llvm_release_debuginfo) { + (false, _) => "Debug", + (true, false) => "Release", + (true, true) => "RelWithDebInfo", + }; + cfg.target(target) .host(&build.config.build) .out_dir(&dst) - .profile(if build.config.llvm_optimize {"Release"} else {"Debug"}) + .profile(profile) .define("LLVM_ENABLE_ASSERTIONS", assertions) - .define("LLVM_TARGETS_TO_BUILD", "X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend") + .define("LLVM_TARGETS_TO_BUILD", + "X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend;MSP430") .define("LLVM_INCLUDE_EXAMPLES", "OFF") .define("LLVM_INCLUDE_TESTS", "OFF") .define("LLVM_INCLUDE_DOCS", "OFF") @@ -100,10 +109,10 @@ pub fn llvm(build: &Build, target: &str) { // MSVC handles compiler business itself if !target.contains("msvc") { - if build.config.ccache { - cfg.define("CMAKE_C_COMPILER", "ccache") + if let Some(ref ccache) = build.config.ccache { + cfg.define("CMAKE_C_COMPILER", ccache) .define("CMAKE_C_COMPILER_ARG1", build.cc(target)) - .define("CMAKE_CXX_COMPILER", "ccache") + .define("CMAKE_CXX_COMPILER", ccache) .define("CMAKE_CXX_COMPILER_ARG1", build.cxx(target)); } else { cfg.define("CMAKE_C_COMPILER", build.cc(target)) @@ -150,6 +159,17 @@ pub fn test_helpers(build: &Build, target: &str) { println!("Building test helpers"); t!(fs::create_dir_all(&dst)); let mut cfg = gcc::Config::new(); + + // We may have found various cross-compilers a little differently due to our + // extra configuration, so inform gcc of these compilers. Note, though, that + // on MSVC we still need gcc's detection of env vars (ugh). 
+ if !target.contains("msvc") { + if let Some(ar) = build.ar(target) { + cfg.archiver(ar); + } + cfg.compiler(build.cc(target)); + } + cfg.cargo_metadata(false) .out_dir(&dst) .target(target) diff --git a/src/bootstrap/sanity.rs b/src/bootstrap/sanity.rs index cc1b7136d4..2992099687 100644 --- a/src/bootstrap/sanity.rs +++ b/src/bootstrap/sanity.rs @@ -41,10 +41,14 @@ pub fn check(build: &mut Build) { } } let have_cmd = |cmd: &OsStr| { - for path in env::split_paths(&path).map(|p| p.join(cmd)) { - if fs::metadata(&path).is_ok() || - fs::metadata(path.with_extension("exe")).is_ok() { - return Some(path); + for path in env::split_paths(&path) { + let target = path.join(cmd); + let mut cmd_alt = cmd.to_os_string(); + cmd_alt.push(".exe"); + if target.exists() || + target.with_extension("exe").exists() || + target.join(cmd_alt).exists() { + return Some(target); } } return None; @@ -79,17 +83,28 @@ pub fn check(build: &mut Build) { break } - need_cmd("python".as_ref()); - - // Look for the nodejs command, needed for emscripten testing - if let Some(node) = have_cmd("node".as_ref()) { - build.config.nodejs = Some(node); - } else if let Some(node) = have_cmd("nodejs".as_ref()) { - build.config.nodejs = Some(node); + if build.config.python.is_none() { + build.config.python = have_cmd("python2.7".as_ref()); } + if build.config.python.is_none() { + build.config.python = have_cmd("python2".as_ref()); + } + if build.config.python.is_none() { + need_cmd("python".as_ref()); + build.config.python = Some("python".into()); + } + need_cmd(build.config.python.as_ref().unwrap().as_ref()); + if let Some(ref s) = build.config.nodejs { need_cmd(s.as_ref()); + } else { + // Look for the nodejs command, needed for emscripten testing + if let Some(node) = have_cmd("node".as_ref()) { + build.config.nodejs = Some(node); + } else if let Some(node) = have_cmd("nodejs".as_ref()) { + build.config.nodejs = Some(node); + } } if let Some(ref gdb) = build.config.gdb { @@ -208,4 +223,8 @@ $ pacman -R cmake && pacman -S mingw-w64-x86_64-cmake if build.lldb_version.is_some() { build.lldb_python_dir = run(Command::new("lldb").arg("-P")).ok(); } + + if let Some(ref s) = build.config.ccache { + need_cmd(s.as_ref()); + } } diff --git a/src/bootstrap/step.rs b/src/bootstrap/step.rs index 56be2ccb23..66220f9dde 100644 --- a/src/bootstrap/step.rs +++ b/src/bootstrap/step.rs @@ -8,10 +8,28 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +//! Definition of steps of the build system. +//! +//! This is where some of the real meat of rustbuild is located, in how we +//! define targets and the dependencies amongst them. This file can sort of be +//! viewed as just defining targets in a makefile which shell out to predefined +//! functions elsewhere about how to execute the target. +//! +//! The primary function here you're likely interested in is the `build_rules` +//! function. This will create a `Rules` structure which basically just lists +//! everything that rustbuild can do. Each rule has a human-readable name, a +//! path associated with it, some dependencies, and then a closure of how to +//! actually perform the rule. +//! +//! All steps below are defined in self-contained units, so adding a new target +//! to the build system should just involve adding the meta information here +//! along with the actual implementation elsewhere. You can find more comments +//! about how to define rules themselves below. 
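To make the rule-definition API described in those comments concrete, a hypothetical rule for testing a new tool might look like the sketch below. The rule name, path, and `check::mytool` function are invented for illustration; the builder methods (`dep`, `default`, `host`, `run`) are the real ones defined later in this file:

```rust
// Hypothetical rule: "check-mytool", src/tools/mytool, and check::mytool do
// not exist in this patch; they only illustrate the builder pattern.
rules.test("check-mytool", "src/tools/mytool")
     // Depend on the rule that builds the tool itself (and, transitively,
     // on the compiler the tool needs).
     .dep(|s| s.name("tool-mytool"))
     // Run as part of a bare `./x.py test` invocation.
     .default(true)
     // Host-only: skip it for targets we only build a standard library for.
     .host(true)
     // Delegate the actual work to a function in the `check` module.
     .run(move |s| check::mytool(build, s.stage, s.target));
```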
+ use std::collections::{HashMap, HashSet}; use std::mem; -use check; +use check::{self, TestKind}; use compile; use dist; use doc; @@ -20,36 +38,6 @@ use install; use native; use {Compiler, Build, Mode}; -#[derive(PartialEq, Eq, Hash, Clone, Debug)] -struct Step<'a> { - name: &'a str, - stage: u32, - host: &'a str, - target: &'a str, -} - -impl<'a> Step<'a> { - fn name(&self, name: &'a str) -> Step<'a> { - Step { name: name, ..*self } - } - - fn stage(&self, stage: u32) -> Step<'a> { - Step { stage: stage, ..*self } - } - - fn host(&self, host: &'a str) -> Step<'a> { - Step { host: host, ..*self } - } - - fn target(&self, target: &'a str) -> Step<'a> { - Step { target: target, ..*self } - } - - fn compiler(&self) -> Compiler<'a> { - Compiler::new(self.stage, self.host) - } -} - pub fn run(build: &Build) { let rules = build_rules(build); let steps = rules.plan(); @@ -57,14 +45,91 @@ pub fn run(build: &Build) { } pub fn build_rules(build: &Build) -> Rules { - let mut rules: Rules = Rules::new(build); + let mut rules = Rules::new(build); + + // This is the first rule that we're going to define for rustbuild, which is + // used to compile LLVM itself. All rules are added through the `rules` + // structure created above and are configured through a builder-style + // interface. + // + // First up we see the `build` method. This represents a rule that's part of + // the top-level `build` subcommand. For example `./x.py build` is what this + // is associating with. Note that this is normally only relevant if you flag + // a rule as `default`, which we'll talk about later. + // + // Next up we'll see two arguments to this method: + // + // * `llvm` - this is the "human readable" name of this target. This name is + // not accessed anywhere outside this file itself (e.g. not in + // the CLI nor elsewhere in rustbuild). The purpose of this is to + // easily define dependencies between rules. That is, other rules + // will depend on this with the name "llvm". + // * `src/llvm` - this is the relevant path to the rule that we're working + // with. This path is the engine behind how commands like + // `./x.py build src/llvm` work. This should typically point + // to the relevant component, but if there's not really a + // path to be assigned here you can pass something like + // `path/to/nowhere` to ignore it. + // + // After we create the rule with the `build` method we can then configure + // various aspects of it. For example this LLVM rule uses `.host(true)` to + // flag that it's a rule only for host targets. In other words, LLVM isn't + // compiled for targets configured through `--target` (e.g. those we're just + // building a standard library for). + // + // Next up the `dep` method will add a dependency to this rule. The closure + // is yielded the step that represents executing the `llvm` rule itself + // (containing information like stage, host, target, ...) and then it must + // return a target that the step depends on. Here LLVM is actually + // interesting where a cross-compiled LLVM depends on the host LLVM, but + // otherwise it has no dependencies. + // + // To handle this we do a bit of dynamic dispatch to see what the dependency + // is. If we're building a LLVM for the build triple, then we don't actually + // have any dependencies! To do that we return a dependency on the "dummy" + // target which does nothing. + // + // If we're build a cross-compiled LLVM, however, we need to assemble the + // libraries from the previous compiler. 
This step has the same name as + // ours (llvm) but we want it for a different target, so we use the + // builder-style methods on `Step` to configure this target to the build + // triple. + // + // Finally, to finish off this rule, we define how to actually execute it. + // That logic is all defined in the `native` module so we just delegate to + // the relevant function there. The argument to the closure passed to `run` + // is a `Step` (defined below) which encapsulates information like the + // stage, target, host, etc. + rules.build("llvm", "src/llvm") + .host(true) + .dep(move |s| { + if s.target == build.config.build { + dummy(s, build) + } else { + s.target(&build.config.build) + } + }) + .run(move |s| native::llvm(build, s.target)); + + // Ok! After that example rule that's hopefully enough to explain what's + // going on here. You can check out the API docs below and also see a bunch + // more examples of rules directly below as well. + // dummy rule to do nothing, useful when a dep maps to no deps rules.build("dummy", "path/to/nowhere"); - fn dummy<'a>(s: &Step<'a>, build: &'a Build) -> Step<'a> { - s.name("dummy").stage(0) - .target(&build.config.build) - .host(&build.config.build) - } + + // the compiler with no target libraries ready to go + rules.build("rustc", "src/rustc") + .dep(move |s| { + if s.stage == 0 { + dummy(s, build) + } else { + s.name("librustc") + .host(&build.config.build) + .stage(s.stage - 1) + } + }) + .run(move |s| compile::assemble_rustc(build, s.stage, s.target)); // Helper for loading an entire DAG of crates, rooted at `name` let krates = |name: &str| { @@ -85,21 +150,6 @@ pub fn build_rules(build: &Build) -> Rules { return ret }; - rules.build("rustc", "path/to/nowhere") - .dep(move |s| { - if s.stage == 0 { - dummy(s, build) - } else { - s.name("librustc") - .host(&build.config.build) - .stage(s.stage - 1) - } - }) - .run(move |s| compile::assemble_rustc(build, s.stage, s.target)); - rules.build("llvm", "src/llvm") - .host(true) - .run(move |s| native::llvm(build, s.target)); - // ======================================================================== // Crate compilations // @@ -268,37 +318,55 @@ pub fn build_rules(build: &Build) -> Rules { rules.test(&krate.test_step, path) .dep(|s| s.name("libtest")) .run(move |s| check::krate(build, &s.compiler(), s.target, - Mode::Libstd, Some(&krate.name))); + Mode::Libstd, TestKind::Test, + Some(&krate.name))); } rules.test("check-std-all", "path/to/nowhere") .dep(|s| s.name("libtest")) .default(true) - .run(move |s| check::krate(build, &s.compiler(), s.target, Mode::Libstd, - None)); + .run(move |s| check::krate(build, &s.compiler(), s.target, + Mode::Libstd, TestKind::Test, None)); + + // std benchmarks + for (krate, path, _default) in krates("std_shim") { + rules.bench(&krate.bench_step, path) + .dep(|s| s.name("libtest")) + .run(move |s| check::krate(build, &s.compiler(), s.target, + Mode::Libstd, TestKind::Bench, + Some(&krate.name))); + } + rules.bench("bench-std-all", "path/to/nowhere") + .dep(|s| s.name("libtest")) + .default(true) + .run(move |s| check::krate(build, &s.compiler(), s.target, + Mode::Libstd, TestKind::Bench, None)); + for (krate, path, _default) in krates("test_shim") { rules.test(&krate.test_step, path) .dep(|s| s.name("libtest")) .run(move |s| check::krate(build, &s.compiler(), s.target, - Mode::Libtest, Some(&krate.name))); + Mode::Libtest, TestKind::Test, + Some(&krate.name))); } rules.test("check-test-all", "path/to/nowhere") .dep(|s| s.name("libtest")) .default(true) - .run(move 
|s| check::krate(build, &s.compiler(), s.target, Mode::Libtest, - None)); + .run(move |s| check::krate(build, &s.compiler(), s.target, + Mode::Libtest, TestKind::Test, None)); for (krate, path, _default) in krates("rustc-main") { rules.test(&krate.test_step, path) .dep(|s| s.name("librustc")) .host(true) .run(move |s| check::krate(build, &s.compiler(), s.target, - Mode::Librustc, Some(&krate.name))); + Mode::Librustc, TestKind::Test, + Some(&krate.name))); } rules.test("check-rustc-all", "path/to/nowhere") .dep(|s| s.name("librustc")) .default(true) .host(true) - .run(move |s| check::krate(build, &s.compiler(), s.target, Mode::Librustc, - None)); + .run(move |s| check::krate(build, &s.compiler(), s.target, + Mode::Librustc, TestKind::Test, None)); rules.test("check-linkchecker", "src/tools/linkchecker") .dep(|s| s.name("tool-linkchecker")) @@ -312,10 +380,10 @@ pub fn build_rules(build: &Build) -> Rules { .host(true) .run(move |s| check::cargotest(build, s.stage, s.target)); rules.test("check-tidy", "src/tools/tidy") - .dep(|s| s.name("tool-tidy")) + .dep(|s| s.name("tool-tidy").stage(0)) .default(true) .host(true) - .run(move |s| check::tidy(build, s.stage, s.target)); + .run(move |s| check::tidy(build, 0, s.target)); rules.test("check-error-index", "src/tools/error_index_generator") .dep(|s| s.name("libstd")) .dep(|s| s.name("tool-error-index").host(s.host)) @@ -327,6 +395,10 @@ pub fn build_rules(build: &Build) -> Rules { .default(true) .host(true) .run(move |s| check::docs(build, &s.compiler())); + rules.test("check-distcheck", "distcheck") + .dep(|s| s.name("dist-src")) + .run(move |_| check::distcheck(build)); + rules.build("test-helpers", "src/rt/rust_test_helpers.c") .run(move |s| native::test_helpers(build, s.target)); @@ -386,7 +458,7 @@ pub fn build_rules(build: &Build) -> Rules { for (krate, path, default) in krates("test_shim") { rules.doc(&krate.doc_step, path) .dep(|s| s.name("libtest")) - .default(default && build.config.docs) + .default(default && build.config.compiler_docs) .run(move |s| doc::test(build, s.stage, s.target)); } for (krate, path, default) in krates("rustc-main") { @@ -418,7 +490,12 @@ pub fn build_rules(build: &Build) -> Rules { .default(true) .run(move |s| dist::std(build, &s.compiler(), s.target)); rules.dist("dist-mingw", "path/to/nowhere") - .run(move |s| dist::mingw(build, s.target)); + .default(true) + .run(move |s| { + if s.target.contains("pc-windows-gnu") { + dist::mingw(build, s.target) + } + }); rules.dist("dist-src", "src") .default(true) .host(true) @@ -427,21 +504,98 @@ pub fn build_rules(build: &Build) -> Rules { .default(true) .dep(|s| s.name("default:doc")) .run(move |s| dist::docs(build, s.stage, s.target)); + rules.dist("dist-analysis", "analysis") + .dep(|s| s.name("dist-std")) + .default(true) + .run(move |s| dist::analysis(build, &s.compiler(), s.target)); rules.dist("install", "src") .dep(|s| s.name("default:dist")) .run(move |s| install::install(build, s.stage, s.target)); rules.verify(); - return rules + return rules; + + fn dummy<'a>(s: &Step<'a>, build: &'a Build) -> Step<'a> { + s.name("dummy").stage(0) + .target(&build.config.build) + .host(&build.config.build) + } +} + +#[derive(PartialEq, Eq, Hash, Clone, Debug)] +struct Step<'a> { + /// Human readable name of the rule this step is executing. Possible names + /// are all defined above in `build_rules`. + name: &'a str, + + /// The stage this step is executing in. This is typically 0, 1, or 2. 
+ stage: u32, + + /// This step will likely involve a compiler, and the target that compiler + /// itself is built for is called the host, this variable. Typically this is + /// the target of the build machine itself. + host: &'a str, + + /// The target that this step represents generating. If you're building a + /// standard library for a new suite of targets, for example, this'll be set + /// to those targets. + target: &'a str, +} + +impl<'a> Step<'a> { + /// Creates a new step which is the same as this, except has a new name. + fn name(&self, name: &'a str) -> Step<'a> { + Step { name: name, ..*self } + } + + /// Creates a new step which is the same as this, except has a new stage. + fn stage(&self, stage: u32) -> Step<'a> { + Step { stage: stage, ..*self } + } + + /// Creates a new step which is the same as this, except has a new host. + fn host(&self, host: &'a str) -> Step<'a> { + Step { host: host, ..*self } + } + + /// Creates a new step which is the same as this, except has a new target. + fn target(&self, target: &'a str) -> Step<'a> { + Step { target: target, ..*self } + } + + /// Returns the `Compiler` structure that this step corresponds to. + fn compiler(&self) -> Compiler<'a> { + Compiler::new(self.stage, self.host) + } } struct Rule<'a> { + /// The human readable name of this target, defined in `build_rules`. name: &'a str, + + /// The path associated with this target, used in the `./x.py` driver for + /// easy and ergonomic specification of what to do. path: &'a str, + + /// The "kind" of top-level command that this rule is associated with, only + /// relevant if this is a default rule. kind: Kind, + + /// List of dependencies this rule has. Each dependency is a function from a + /// step that's being executed to another step that should be executed. deps: Vec) -> Step<'a> + 'a>>, + + /// How to actually execute this rule. Takes a step with contextual + /// information and then executes it. run: Box) + 'a>, + + /// Whether or not this is a "default" rule. That basically means that if + /// you run, for example, `./x.py test` whether it's included or not. default: bool, + + /// Whether or not this is a "host" rule, or in other words whether this is + /// only intended for compiler hosts and not for targets that are being + /// generated. host: bool, } @@ -449,6 +603,7 @@ struct Rule<'a> { enum Kind { Build, Test, + Bench, Dist, Doc, } @@ -467,6 +622,8 @@ impl<'a> Rule<'a> { } } +/// Builder pattern returned from the various methods on `Rules` which will add +/// the rule to the internal list on `Drop`. struct RuleBuilder<'a: 'b, 'b> { rules: &'b mut Rules<'a>, rule: Rule<'a>, @@ -528,21 +685,35 @@ impl<'a> Rules<'a> { } } + /// Creates a new rule of `Kind::Build` with the specified human readable + /// name and path associated with it. + /// + /// The builder returned should be configured further with information such + /// as how to actually run this rule. fn build<'b>(&'b mut self, name: &'a str, path: &'a str) -> RuleBuilder<'a, 'b> { self.rule(name, path, Kind::Build) } + /// Same as `build`, but for `Kind::Test`. fn test<'b>(&'b mut self, name: &'a str, path: &'a str) -> RuleBuilder<'a, 'b> { self.rule(name, path, Kind::Test) } + /// Same as `build`, but for `Kind::Bench`. + fn bench<'b>(&'b mut self, name: &'a str, path: &'a str) + -> RuleBuilder<'a, 'b> { + self.rule(name, path, Kind::Bench) + } + + /// Same as `build`, but for `Kind::Doc`. 
fn doc<'b>(&'b mut self, name: &'a str, path: &'a str) -> RuleBuilder<'a, 'b> { self.rule(name, path, Kind::Doc) } + /// Same as `build`, but for `Kind::Dist`. fn dist<'b>(&'b mut self, name: &'a str, path: &'a str) -> RuleBuilder<'a, 'b> { self.rule(name, path, Kind::Dist) @@ -583,6 +754,7 @@ invalid rule dependency graph detected, was a rule added and maybe typo'd? "build" => Kind::Build, "doc" => Kind::Doc, "test" => Kind::Test, + "bench" => Kind::Bench, "dist" => Kind::Dist, _ => return, }; @@ -602,10 +774,36 @@ invalid rule dependency graph detected, was a rule added and maybe typo'd? /// Construct the top-level build steps that we're going to be executing, /// given the subcommand that our build is performing. fn plan(&self) -> Vec> { + // Ok, the logic here is pretty subtle, and involves quite a few + // conditionals. The basic idea here is to: + // + // 1. First, filter all our rules to the relevant ones. This means that + // the command specified corresponds to one of our `Kind` variants, + // and we filter all rules based on that. + // + // 2. Next, we determine which rules we're actually executing. If a + // number of path filters were specified on the command line we look + // for those, otherwise we look for anything tagged `default`. + // + // 3. Finally, we generate some steps with host and target information. + // + // The last step is by far the most complicated and subtle. The basic + // thinking here is that we want to take the cartesian product of + // specified hosts and targets and build rules with that. The list of + // hosts and targets, if not specified, come from the how this build was + // configured. If the rule we're looking at is a host-only rule the we + // ignore the list of targets and instead consider the list of hosts + // also the list of targets. + // + // Once the host and target lists are generated we take the cartesian + // product of the two and then create a step based off them. Note that + // the stage each step is associated was specified with the `--step` + // flag on the command line. let (kind, paths) = match self.build.flags.cmd { Subcommand::Build { ref paths } => (Kind::Build, &paths[..]), Subcommand::Doc { ref paths } => (Kind::Doc, &paths[..]), Subcommand::Test { ref paths, test_args: _ } => (Kind::Test, &paths[..]), + Subcommand::Bench { ref paths, test_args: _ } => (Kind::Bench, &paths[..]), Subcommand::Dist { install } => { if install { return vec![self.sbuild.name("install")] @@ -631,7 +829,18 @@ invalid rule dependency graph detected, was a rule added and maybe typo'd? } else { &self.build.config.target }; - let arr = if rule.host {hosts} else {targets}; + // If --target was specified but --host wasn't specified, don't run + // any host-only tests + let arr = if rule.host { + if self.build.flags.target.len() > 0 && + self.build.flags.host.len() == 0 { + &hosts[..0] + } else { + hosts + } + } else { + targets + }; hosts.iter().flat_map(move |host| { arr.iter().map(move |target| { @@ -667,11 +876,24 @@ invalid rule dependency graph detected, was a rule added and maybe typo'd? // And finally, iterate over everything and execute it. for step in order.iter() { + if self.build.flags.keep_stage.map_or(false, |s| step.stage <= s) { + self.build.verbose(&format!("keeping step {:?}", step)); + continue; + } self.build.verbose(&format!("executing step {:?}", step)); (self.rules[step.name].run)(step); } } + /// Performs topological sort of dependencies rooted at the `step` + /// specified, pushing all results onto the `order` vector provided. 
+ /// + /// In other words, when this method returns, the `order` vector will + /// contain a list of steps which if executed in order will eventually + /// complete the `step` specified as well. + /// + /// The `added` set specified here is the set of steps that are already + /// present in `order` (and hence don't need to be added again). fn fill(&self, step: Step<'a>, order: &mut Vec>, diff --git a/src/bootstrap/util.rs b/src/bootstrap/util.rs index e028c52236..c9e756b6f9 100644 --- a/src/bootstrap/util.rs +++ b/src/bootstrap/util.rs @@ -18,6 +18,7 @@ use std::ffi::OsString; use std::fs; use std::path::{Path, PathBuf}; use std::process::Command; +use std::time::Instant; use filetime::FileTime; @@ -40,6 +41,12 @@ pub fn mtime(path: &Path) -> FileTime { /// Copies a file from `src` to `dst`, attempting to use hard links and then /// falling back to an actually filesystem copy if necessary. pub fn copy(src: &Path, dst: &Path) { + // A call to `hard_link` will fail if `dst` exists, so remove it if it + // already exists so we can try to help `hard_link` succeed. + let _ = fs::remove_file(&dst); + + // Attempt to "easy copy" by creating a hard link (symlinks don't work on + // windows), but if that fails just fall back to a slow `copy` operation. let res = fs::hard_link(src, dst); let res = res.or_else(|_| fs::copy(src, dst).map(|_| ())); if let Err(e) = res { @@ -189,3 +196,19 @@ pub fn push_exe_path(mut buf: PathBuf, components: &[&str]) -> PathBuf { buf } + +pub struct TimeIt(Instant); + +/// Returns an RAII structure that prints out how long it took to drop. +pub fn timeit() -> TimeIt { + TimeIt(Instant::now()) +} + +impl Drop for TimeIt { + fn drop(&mut self) { + let time = self.0.elapsed(); + println!("\tfinished in {}.{:03}", + time.as_secs(), + time.subsec_nanos() / 1_000_000); + } +} diff --git a/src/build_helper/lib.rs b/src/build_helper/lib.rs index 38844fb6c9..d0d588f46a 100644 --- a/src/build_helper/lib.rs +++ b/src/build_helper/lib.rs @@ -21,7 +21,8 @@ pub fn run(cmd: &mut Command) { pub fn run_silent(cmd: &mut Command) { let status = match cmd.status() { Ok(status) => status, - Err(e) => fail(&format!("failed to execute command: {}", e)), + Err(e) => fail(&format!("failed to execute command: {:?}\nerror: {}", + cmd, e)), }; if !status.success() { fail(&format!("command did not execute successfully: {:?}\n\ @@ -46,6 +47,8 @@ pub fn cc2ar(cc: &Path, target: &str) -> Option { None } else if target.contains("musl") { Some(PathBuf::from("ar")) + } else if target.contains("openbsd") { + Some(PathBuf::from("ar")) } else { let parent = cc.parent().unwrap(); let file = cc.file_name().unwrap().to_str().unwrap(); @@ -60,10 +63,21 @@ pub fn cc2ar(cc: &Path, target: &str) -> Option { } } +pub fn make(host: &str) -> PathBuf { + if host.contains("bitrig") || host.contains("dragonfly") || + host.contains("freebsd") || host.contains("netbsd") || + host.contains("openbsd") { + PathBuf::from("gmake") + } else { + PathBuf::from("make") + } +} + pub fn output(cmd: &mut Command) -> String { let output = match cmd.stderr(Stdio::inherit()).output() { Ok(status) => status, - Err(e) => fail(&format!("failed to execute command: {}", e)), + Err(e) => fail(&format!("failed to execute command: {:?}\nerror: {}", + cmd, e)), }; if !output.status.success() { panic!("command did not execute successfully: {:?}\n\ diff --git a/src/ci/docker/arm-android/Dockerfile b/src/ci/docker/arm-android/Dockerfile new file mode 100644 index 0000000000..8911b4ff0c --- /dev/null +++ b/src/ci/docker/arm-android/Dockerfile @@ 
-0,0 +1,54 @@ +FROM ubuntu:16.04 + +RUN dpkg --add-architecture i386 && \ + apt-get update && \ + apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + unzip \ + expect \ + openjdk-9-jre \ + sudo \ + libstdc++6:i386 \ + xz-utils + +WORKDIR /android/ +ENV PATH=$PATH:/android/ndk-arm-9/bin:/android/sdk/tools:/android/sdk/platform-tools + +COPY install-ndk.sh install-sdk.sh accept-licenses.sh /android/ +RUN sh /android/install-ndk.sh +RUN sh /android/install-sdk.sh + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb + +COPY start-emulator.sh /android/ + +ENTRYPOINT ["/usr/bin/dumb-init", "--", "/android/start-emulator.sh"] + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +ENV TARGETS=arm-linux-androideabi +ENV TARGETS=$TARGETS,i686-linux-android +ENV TARGETS=$TARGETS,aarch64-linux-android +ENV TARGETS=$TARGETS,armv7-linux-androideabi + +ENV RUST_CONFIGURE_ARGS \ + --target=$TARGETS \ + --arm-linux-androideabi-ndk=/android/ndk-arm-9 \ + --armv7-linux-androideabi-ndk=/android/ndk-arm-9 \ + --i686-linux-android-ndk=/android/ndk-x86-9 \ + --aarch64-linux-android-ndk=/android/ndk-aarch64 +ENV XPY_CHECK test --target arm-linux-androideabi +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/arm-android/accept-licenses.sh b/src/ci/docker/arm-android/accept-licenses.sh new file mode 100755 index 0000000000..8d8f60a5ec --- /dev/null +++ b/src/ci/docker/arm-android/accept-licenses.sh @@ -0,0 +1,15 @@ +#!/usr/bin/expect -f +# ignore-license + +set timeout 1800 +set cmd [lindex $argv 0] +set licenses [lindex $argv 1] + +spawn {*}$cmd +expect { + "Do you accept the license '*'*" { + exp_send "y\r" + exp_continue + } + eof +} diff --git a/src/ci/docker/arm-android/install-ndk.sh b/src/ci/docker/arm-android/install-ndk.sh new file mode 100644 index 0000000000..418ce69c5b --- /dev/null +++ b/src/ci/docker/arm-android/install-ndk.sh @@ -0,0 +1,45 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. 
+ +set -ex + +cpgdb() { + cp android-ndk-r11c/prebuilt/linux-x86_64/bin/gdb /android/$1/bin/$2-gdb + cp android-ndk-r11c/prebuilt/linux-x86_64/bin/gdb-orig /android/$1/bin/gdb-orig + cp -r android-ndk-r11c/prebuilt/linux-x86_64/share /android/$1/share +} + +# Prep the Android NDK +# +# See https://github.com/servo/servo/wiki/Building-for-Android +curl -O https://dl.google.com/android/repository/android-ndk-r11c-linux-x86_64.zip +unzip -q android-ndk-r11c-linux-x86_64.zip +bash android-ndk-r11c/build/tools/make-standalone-toolchain.sh \ + --platform=android-9 \ + --toolchain=arm-linux-androideabi-4.9 \ + --install-dir=/android/ndk-arm-9 \ + --ndk-dir=/android/android-ndk-r11c \ + --arch=arm +cpgdb ndk-arm-9 arm-linux-androideabi +bash android-ndk-r11c/build/tools/make-standalone-toolchain.sh \ + --platform=android-21 \ + --toolchain=aarch64-linux-android-4.9 \ + --install-dir=/android/ndk-aarch64 \ + --ndk-dir=/android/android-ndk-r11c \ + --arch=arm64 +bash android-ndk-r11c/build/tools/make-standalone-toolchain.sh \ + --platform=android-9 \ + --toolchain=x86-4.9 \ + --install-dir=/android/ndk-x86-9 \ + --ndk-dir=/android/android-ndk-r11c \ + --arch=x86 + +rm -rf ./android-ndk-r11c-linux-x86_64.zip ./android-ndk-r11c diff --git a/src/ci/docker/arm-android/install-sdk.sh b/src/ci/docker/arm-android/install-sdk.sh new file mode 100644 index 0000000000..2db1d46ba2 --- /dev/null +++ b/src/ci/docker/arm-android/install-sdk.sh @@ -0,0 +1,33 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -ex + +# Prep the SDK and emulator +# +# Note that the update process requires that we accept a bunch of licenses, and +# we can't just pipe `yes` into it for some reason, so we take the same strategy +# located in https://github.com/appunite/docker by just wrapping it in a script +# which apparently magically accepts the licenses. + +mkdir sdk +curl https://dl.google.com/android/android-sdk_r24.4-linux.tgz | \ + tar xzf - -C sdk --strip-components=1 + +filter="platform-tools,android-18" +filter="$filter,sys-img-armeabi-v7a-android-18" + +./accept-licenses.sh "android - update sdk -a --no-ui --filter $filter" + +echo "no" | android create avd \ + --name arm-18 \ + --target android-18 \ + --abi armeabi-v7a diff --git a/src/ci/docker/arm-android/start-emulator.sh b/src/ci/docker/arm-android/start-emulator.sh new file mode 100755 index 0000000000..93f20b28b8 --- /dev/null +++ b/src/ci/docker/arm-android/start-emulator.sh @@ -0,0 +1,15 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. 
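On the `timeit` helper added to `src/bootstrap/util.rs` above: `TimeIt` is an RAII guard, so timing a block only requires keeping the guard alive for the duration of that block; the `Drop` impl prints the elapsed time when it goes out of scope. A minimal, self-contained usage sketch (the workload in `main` is made up for illustration):

```rust
use std::time::Instant;

pub struct TimeIt(Instant);

/// Returns an RAII guard that reports how long it was alive when dropped.
pub fn timeit() -> TimeIt {
    TimeIt(Instant::now())
}

impl Drop for TimeIt {
    fn drop(&mut self) {
        let time = self.0.elapsed();
        println!("\tfinished in {}.{:03}",
                 time.as_secs(),
                 time.subsec_nanos() / 1_000_000);
    }
}

fn main() {
    // Bind the guard to a name so it lives to the end of the scope;
    // `let _ = timeit();` would drop it immediately and time nothing.
    let _time = timeit();
    let sum: u64 = (0..10_000_000u64).sum();
    println!("sum = {}", sum);
} // `_time` is dropped here, printing the elapsed time.
```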
+ +set -ex +ANDROID_EMULATOR_FORCE_32BIT=true \ + emulator @arm-18 -no-window -partition-size 2047 & +exec "$@" diff --git a/src/ci/docker/cross/Dockerfile b/src/ci/docker/cross/Dockerfile new file mode 100644 index 0000000000..08b436313f --- /dev/null +++ b/src/ci/docker/cross/Dockerfile @@ -0,0 +1,75 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gcc-aarch64-linux-gnu libc6-dev-arm64-cross \ + gcc-arm-linux-gnueabi libc6-dev-armel-cross \ + gcc-arm-linux-gnueabihf libc6-dev-armhf-cross \ + gcc-mips-linux-gnu libc6-dev-mips-cross \ + gcc-mipsel-linux-gnu libc6-dev-mipsel-cross \ + gcc-mips64-linux-gnuabi64 libc6-dev-mips64-cross \ + gcc-mips64el-linux-gnuabi64 libc6-dev-mips64el-cross \ + gcc-powerpc-linux-gnu libc6-dev-powerpc-cross \ + gcc-powerpc64-linux-gnu libc6-dev-ppc64-cross \ + gcc-powerpc64le-linux-gnu libc6-dev-ppc64el-cross \ + gcc-s390x-linux-gnu libc6-dev-s390x-cross \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV TARGETS=aarch64-unknown-linux-gnu +ENV TARGETS=$TARGETS,arm-unknown-linux-gnueabi +ENV TARGETS=$TARGETS,arm-unknown-linux-gnueabihf +ENV TARGETS=$TARGETS,armv7-unknown-linux-gnueabihf +ENV TARGETS=$TARGETS,asmjs-unknown-emscripten +ENV TARGETS=$TARGETS,mips-unknown-linux-gnu +ENV TARGETS=$TARGETS,mips64-unknown-linux-gnuabi64 +ENV TARGETS=$TARGETS,mips64el-unknown-linux-gnuabi64 +ENV TARGETS=$TARGETS,mipsel-unknown-linux-gnu +ENV TARGETS=$TARGETS,powerpc-unknown-linux-gnu +ENV TARGETS=$TARGETS,powerpc64-unknown-linux-gnu +ENV TARGETS=$TARGETS,powerpc64le-unknown-linux-gnu +ENV TARGETS=$TARGETS,s390x-unknown-linux-gnu +ENV TARGETS=$TARGETS,wasm32-unknown-emscripten + +#ENV TARGETS=$TARGETS,mips-unknown-linux-musl +#ENV TARGETS=$TARGETS,arm-unknown-linux-musleabi +#ENV TARGETS=$TARGETS,arm-unknown-linux-musleabihf +#ENV TARGETS=$TARGETS,armv7-unknown-linux-musleabihf +#ENV TARGETS=$TARGETS,x86_64-rumprun-netbsd + +ENV RUST_CONFIGURE_ARGS \ + --target=$TARGETS \ + --enable-rustbuild +ENV RUST_CHECK_TARGET "" + +ENV AR_s390x_unknown_linux_gnu=s390x-linux-gnu-ar \ + CC_s390x_unknown_linux_gnu=s390x-linux-gnu-gcc \ + AR_mips64_unknown_linux_gnuabi64=mips64-linux-gnuabi64-ar \ + CC_mips64_unknown_linux_gnuabi64=mips64-linux-gnuabi64-gcc \ + AR_mips64el_unknown_linux_gnuabi64=mips64el-linux-gnuabi64-ar \ + CC_mips64el_unknown_linux_gnuabi64=mips64el-linux-gnuabi64-gcc \ + AR_powerpc64_unknown_linux_gnu=powerpc64-linux-gnu-ar \ + CC_powerpc64_unknown_linux_gnu=powerpc64-linux-gnu-gcc + +# FIXME(rust-lang/rust#36150): powerpc unfortunately aborts right now +ENV NO_LLVM_ASSERTIONS=1 + +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/i686-gnu-nopt/Dockerfile b/src/ci/docker/i686-gnu-nopt/Dockerfile new file mode 100644 index 0000000000..1da33c94c7 --- /dev/null +++ b/src/ci/docker/i686-gnu-nopt/Dockerfile @@ -0,0 +1,29 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++-multilib \ + make \ + file \ + curl \ + 
ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=i686-unknown-linux-gnu --disable-optimize-tests +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/i686-gnu/Dockerfile b/src/ci/docker/i686-gnu/Dockerfile new file mode 100644 index 0000000000..9e5b0e0435 --- /dev/null +++ b/src/ci/docker/i686-gnu/Dockerfile @@ -0,0 +1,29 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++-multilib \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=i686-unknown-linux-gnu +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/run.sh b/src/ci/docker/run.sh new file mode 100755 index 0000000000..ce8b49a92d --- /dev/null +++ b/src/ci/docker/run.sh @@ -0,0 +1,46 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. 
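One more aside on the `build_helper` changes near the start of this section: command failures now report the command itself (formatted with `{:?}`) alongside the OS error, and the new `make()` helper selects `gmake` on the BSD family of hosts. A small sketch combining both ideas; the host triple and the `--version` invocation are only illustrative:

```rust
use std::process::Command;

// Mirrors the new `build_helper::make()`: the BSDs ship GNU make as `gmake`.
fn make(host: &str) -> &'static str {
    if host.contains("bitrig") || host.contains("dragonfly") ||
       host.contains("freebsd") || host.contains("netbsd") ||
       host.contains("openbsd") {
        "gmake"
    } else {
        "make"
    }
}

fn main() {
    let host = "x86_64-unknown-linux-gnu"; // Illustrative host triple.
    let mut cmd = Command::new(make(host));
    cmd.arg("--version");

    // As in the patched `run_silent`, say *which* command failed, not just
    // what the OS error was.
    match cmd.status() {
        Ok(status) if status.success() => println!("`{:?}` succeeded", cmd),
        Ok(status) => panic!("command did not execute successfully: {:?}\nstatus: {}",
                             cmd, status),
        Err(e) => panic!("failed to execute command: {:?}\nerror: {}", cmd, e),
    }
}
```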
+ +set -e + +script=`cd $(dirname $0) && pwd`/`basename $0` +image=$1 + +docker_dir="`dirname $script`" +ci_dir="`dirname $docker_dir`" +src_dir="`dirname $ci_dir`" +root_dir="`dirname $src_dir`" + +docker \ + build \ + --rm \ + -t rust-ci \ + "`dirname "$script"`/$image" + +mkdir -p $HOME/.cargo +mkdir -p $root_dir/obj + +exec docker \ + run \ + --volume "$root_dir:/checkout:ro" \ + --volume "$root_dir/obj:/checkout/obj" \ + --workdir /checkout/obj \ + --env SRC=/checkout \ + --env SCCACHE_BUCKET=$SCCACHE_BUCKET \ + --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \ + --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \ + --env CARGO_HOME=/cargo \ + --env LOCAL_USER_ID=`id -u` \ + --volume "$HOME/.cargo:/cargo" \ + --interactive \ + --tty \ + rust-ci \ + /checkout/src/ci/run.sh diff --git a/src/ci/docker/x86_64-freebsd/Dockerfile b/src/ci/docker/x86_64-freebsd/Dockerfile new file mode 100644 index 0000000000..75f3174e2c --- /dev/null +++ b/src/ci/docker/x86_64-freebsd/Dockerfile @@ -0,0 +1,37 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + bzip2 \ + xz-utils \ + wget + +COPY build-toolchain.sh /tmp/ +RUN sh /tmp/build-toolchain.sh + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +ENV \ + AR_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-ar \ + CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-gcc + +ENV RUST_CONFIGURE_ARGS --target=x86_64-unknown-freebsd +ENV RUST_CHECK_TARGET "" +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-freebsd/build-toolchain.sh b/src/ci/docker/x86_64-freebsd/build-toolchain.sh new file mode 100644 index 0000000000..d4bc886d50 --- /dev/null +++ b/src/ci/docker/x86_64-freebsd/build-toolchain.sh @@ -0,0 +1,96 @@ +#!/bin/bash +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -ex + +ARCH=x86_64 +BINUTILS=2.25.1 +GCC=5.3.0 + +mkdir binutils +cd binutils + +# First up, build binutils +curl https://ftp.gnu.org/gnu/binutils/binutils-$BINUTILS.tar.bz2 | tar xjf - +mkdir binutils-build +cd binutils-build +../binutils-$BINUTILS/configure \ + --target=$ARCH-unknown-freebsd10 +make -j10 +make install +cd ../.. 
+rm -rf binutils + +# Next, download the FreeBSD libc and relevant header files + +mkdir freebsd +case "$ARCH" in + x86_64) + URL=ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.2-RELEASE/base.txz + ;; + i686) + URL=ftp://ftp.freebsd.org/pub/FreeBSD/releases/i386/10.2-RELEASE/base.txz + ;; +esac +curl $URL | tar xJf - -C freebsd ./usr/include ./usr/lib ./lib + +dst=/usr/local/$ARCH-unknown-freebsd10 + +cp -r freebsd/usr/include $dst/ +cp freebsd/usr/lib/crt1.o $dst/lib +cp freebsd/usr/lib/Scrt1.o $dst/lib +cp freebsd/usr/lib/crti.o $dst/lib +cp freebsd/usr/lib/crtn.o $dst/lib +cp freebsd/usr/lib/libc.a $dst/lib +cp freebsd/usr/lib/libutil.a $dst/lib +cp freebsd/usr/lib/libutil_p.a $dst/lib +cp freebsd/usr/lib/libm.a $dst/lib +cp freebsd/usr/lib/librt.so.1 $dst/lib +cp freebsd/usr/lib/libexecinfo.so.1 $dst/lib +cp freebsd/lib/libc.so.7 $dst/lib +cp freebsd/lib/libm.so.5 $dst/lib +cp freebsd/lib/libutil.so.9 $dst/lib +cp freebsd/lib/libthr.so.3 $dst/lib/libpthread.so + +ln -s libc.so.7 $dst/lib/libc.so +ln -s libm.so.5 $dst/lib/libm.so +ln -s librt.so.1 $dst/lib/librt.so +ln -s libutil.so.9 $dst/lib/libutil.so +ln -s libexecinfo.so.1 $dst/lib/libexecinfo.so +rm -rf freebsd + +# Finally, download and build gcc to target FreeBSD +mkdir gcc +cd gcc +curl https://ftp.gnu.org/gnu/gcc/gcc-$GCC/gcc-$GCC.tar.bz2 | tar xjf - +cd gcc-$GCC +./contrib/download_prerequisites + +mkdir ../gcc-build +cd ../gcc-build +../gcc-$GCC/configure \ + --enable-languages=c \ + --target=$ARCH-unknown-freebsd10 \ + --disable-multilib \ + --disable-nls \ + --disable-libgomp \ + --disable-libquadmath \ + --disable-libssp \ + --disable-libvtv \ + --disable-libcilkrts \ + --disable-libada \ + --disable-libsanitizer \ + --disable-libquadmath-support \ + --disable-lto +make -j10 +make install +cd ../.. 
+rm -rf gcc diff --git a/src/ci/docker/x86_64-gnu-cargotest/Dockerfile b/src/ci/docker/x86_64-gnu-cargotest/Dockerfile new file mode 100644 index 0000000000..0d9835e86d --- /dev/null +++ b/src/ci/docker/x86_64-gnu-cargotest/Dockerfile @@ -0,0 +1,30 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + libssl-dev \ + sudo \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu +ENV RUST_CHECK_TARGET check-cargotest +ENV NO_VENDOR 1 +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-gnu-debug/Dockerfile b/src/ci/docker/x86_64-gnu-debug/Dockerfile new file mode 100644 index 0000000000..eec8844229 --- /dev/null +++ b/src/ci/docker/x86_64-gnu-debug/Dockerfile @@ -0,0 +1,32 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS \ + --build=x86_64-unknown-linux-gnu \ + --enable-debug \ + --enable-optimize +ENV RUST_CHECK_TARGET "" +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-gnu-llvm-3.7/Dockerfile b/src/ci/docker/x86_64-gnu-llvm-3.7/Dockerfile new file mode 100644 index 0000000000..4c9198d88e --- /dev/null +++ b/src/ci/docker/x86_64-gnu-llvm-3.7/Dockerfile @@ -0,0 +1,34 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + llvm-3.7-tools \ + libedit-dev \ + zlib1g-dev \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS \ + --build=x86_64-unknown-linux-gnu \ + --llvm-root=/usr/lib/llvm-3.7 +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-gnu-make/Dockerfile b/src/ci/docker/x86_64-gnu-make/Dockerfile new file mode 100644 index 0000000000..1c503aea13 --- /dev/null +++ 
b/src/ci/docker/x86_64-gnu-make/Dockerfile @@ -0,0 +1,29 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu --disable-rustbuild +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-gnu-nopt/Dockerfile b/src/ci/docker/x86_64-gnu-nopt/Dockerfile new file mode 100644 index 0000000000..66de6ea13a --- /dev/null +++ b/src/ci/docker/x86_64-gnu-nopt/Dockerfile @@ -0,0 +1,29 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu --disable-optimize-tests +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-gnu/Dockerfile b/src/ci/docker/x86_64-gnu/Dockerfile new file mode 100644 index 0000000000..3d71b7ffb9 --- /dev/null +++ b/src/ci/docker/x86_64-gnu/Dockerfile @@ -0,0 +1,29 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + sudo \ + gdb \ + xz-utils + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu +ENV RUST_CHECK_TARGET check +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-musl/Dockerfile b/src/ci/docker/x86_64-musl/Dockerfile new file mode 100644 index 0000000000..96b38067cb --- /dev/null +++ b/src/ci/docker/x86_64-musl/Dockerfile @@ -0,0 +1,38 @@ +FROM ubuntu:16.04 + +RUN apt-get update && apt-get install -y --no-install-recommends \ + g++ \ + make \ + file \ + curl \ + ca-certificates \ + python2.7 \ + git \ + cmake \ + ccache \ + xz-utils \ + sudo \ + gdb + +WORKDIR /build/ +COPY build-musl.sh /build/ +RUN sh /build/build-musl.sh && rm -rf 
/build + +RUN curl -OL https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64.deb && \ + dpkg -i dumb-init_*.deb && \ + rm dumb-init_*.deb +ENTRYPOINT ["/usr/bin/dumb-init", "--"] + +ENV SCCACHE_DIGEST=7237e38e029342fa27b7ac25412cb9d52554008b12389727320bd533fd7f05b6a96d55485f305caf95e5c8f5f97c3313e10012ccad3e752aba2518f3522ba783 +RUN curl -L https://api.pub.build.mozilla.org/tooltool/sha512/$SCCACHE_DIGEST | \ + tar xJf - -C /usr/local/bin --strip-components=1 + +ENV RUST_CONFIGURE_ARGS \ + --target=x86_64-unknown-linux-musl \ + --musl-root-x86_64=/musl-x86_64 +ENV RUST_CHECK_TARGET check-stage2-T-x86_64-unknown-linux-musl-H-x86_64-unknown-linux-gnu +ENV PATH=$PATH:/musl-x86_64/bin +ENV XPY_CHECK test --target x86_64-unknown-linux-musl + +RUN mkdir /tmp/obj +RUN chmod 777 /tmp/obj diff --git a/src/ci/docker/x86_64-musl/build-musl.sh b/src/ci/docker/x86_64-musl/build-musl.sh new file mode 100644 index 0000000000..2bfbd646b7 --- /dev/null +++ b/src/ci/docker/x86_64-musl/build-musl.sh @@ -0,0 +1,33 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -ex + +export CFLAGS="-fPIC" +MUSL=musl-1.1.14 +curl https://www.musl-libc.org/releases/$MUSL.tar.gz | tar xzf - +cd $MUSL +./configure --prefix=/musl-x86_64 --disable-shared +make -j10 +make install +make clean +cd .. + +# To build MUSL we're going to need a libunwind lying around, so acquire that +# here and build it. +curl -L https://github.com/llvm-mirror/llvm/archive/release_37.tar.gz | tar xzf - +curl -L https://github.com/llvm-mirror/libunwind/archive/release_37.tar.gz | tar xzf - +mkdir libunwind-build +cd libunwind-build +cmake ../libunwind-release_37 -DLLVM_PATH=/build/llvm-release_37 \ + -DLIBUNWIND_ENABLE_SHARED=0 +make -j10 +cp lib/libunwind.a /musl-x86_64/lib diff --git a/src/ci/run.sh b/src/ci/run.sh new file mode 100755 index 0000000000..152694346a --- /dev/null +++ b/src/ci/run.sh @@ -0,0 +1,52 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -e + +if [ "$LOCAL_USER_ID" != "" ]; then + useradd --shell /bin/bash -u $LOCAL_USER_ID -o -c "" -m user + export HOME=/home/user + unset LOCAL_USER_ID + exec su --preserve-environment -c "env PATH=$PATH \"$0\"" user +fi + +if [ "$NO_LLVM_ASSERTIONS" = "" ]; then + ENABLE_LLVM_ASSERTIONS=--enable-llvm-assertions +fi + +if [ "$NO_VENDOR" = "" ]; then + ENABLE_VENDOR=--enable-vendor +fi + +set -ex + +$SRC/configure \ + --disable-manage-submodules \ + --enable-debug-assertions \ + --enable-quiet-tests \ + --enable-sccache \ + $ENABLE_VENDOR \ + $ENABLE_LLVM_ASSERTIONS \ + $RUST_CONFIGURE_ARGS + +if [ "$TRAVIS_OS_NAME" = "osx" ]; then + ncpus=$(sysctl -n hw.ncpu) +else + ncpus=$(nproc) +fi + +make -j $ncpus tidy +make -j $ncpus +if [ ! 
-z "$XPY_CHECK" ]; then + exec python2.7 $SRC/x.py $XPY_CHECK +else + exec make $RUST_CHECK_TARGET -j $ncpus +fi diff --git a/src/compiler-rt/lib/builtins/int_lib.h b/src/compiler-rt/lib/builtins/int_lib.h index 8dfe5672d1..a02e85b741 100644 --- a/src/compiler-rt/lib/builtins/int_lib.h +++ b/src/compiler-rt/lib/builtins/int_lib.h @@ -72,6 +72,28 @@ /* Include internal utility function declarations. */ #include "int_util.h" +/* + * Workaround for LLVM bug 11663. Prevent endless recursion in + * __c?zdi2(), where calls to __builtin_c?z() are expanded to + * __c?zdi2() instead of __c?zsi2(). + * + * Instead of placing this workaround in c?zdi2.c, put it in this + * global header to prevent other C files from making the detour + * through __c?zdi2() as well. + * + * This problem has been observed on FreeBSD for sparc64 and + * mips64 with GCC 4.2.1, and for riscv with GCC 5.2.0. + * Presumably it's any version of GCC, and targeting an arch that + * does not have dedicated bit counting instructions. + */ +#if (defined(__sparc64__) || defined(__mips_n64) || defined(__mips_o64) || defined(__riscv__) \ + || (defined(_MIPS_SIM) && ((_MIPS_SIM == _ABI64) || (_MIPS_SIM == _ABIO64)))) +si_int __clzsi2(si_int); +si_int __ctzsi2(si_int); +#define __builtin_clz __clzsi2 +#define __builtin_ctz __ctzsi2 +#endif /* sparc64 || mips_n64 || mips_o64 || riscv */ + COMPILER_RT_ABI si_int __paritysi2(si_int a); COMPILER_RT_ABI si_int __paritydi2(di_int a); diff --git a/src/doc/book/SUMMARY.md b/src/doc/book/SUMMARY.md index 18aa9f2458..babbafa078 100644 --- a/src/doc/book/SUMMARY.md +++ b/src/doc/book/SUMMARY.md @@ -52,6 +52,7 @@ * [Borrow and AsRef](borrow-and-asref.md) * [Release Channels](release-channels.md) * [Using Rust without the standard library](using-rust-without-the-standard-library.md) + * [Procedural Macros (and custom derive)](procedural-macros.md) * [Nightly Rust](nightly-rust.md) * [Compiler Plugins](compiler-plugins.md) * [Inline Assembly](inline-assembly.md) diff --git a/src/doc/book/associated-types.md b/src/doc/book/associated-types.md index 0998a88c4d..f416e60041 100644 --- a/src/doc/book/associated-types.md +++ b/src/doc/book/associated-types.md @@ -11,7 +11,7 @@ this: trait Graph { fn has_edge(&self, &N, &N) -> bool; fn edges(&self, &N) -> Vec; - // etc + // Etc. } ``` @@ -36,7 +36,7 @@ trait Graph { fn has_edge(&self, &Self::N, &Self::N) -> bool; fn edges(&self, &Self::N) -> Vec; - // etc + // Etc. } ``` diff --git a/src/doc/book/benchmark-tests.md b/src/doc/book/benchmark-tests.md index 797ec94774..e054736eb3 100644 --- a/src/doc/book/benchmark-tests.md +++ b/src/doc/book/benchmark-tests.md @@ -110,7 +110,7 @@ computation entirely. This could be done for the example above by adjusting the # struct X; # impl X { fn iter(&self, _: F) where F: FnMut() -> T {} } let b = X; b.iter(|| { - // note lack of `;` (could also use an explicit `return`). + // Note lack of `;` (could also use an explicit `return`). (0..1000).fold(0, |old, new| old ^ new) }); ``` diff --git a/src/doc/book/box-syntax-and-patterns.md b/src/doc/book/box-syntax-and-patterns.md index 8d83b64d68..cbf65dfa9b 100644 --- a/src/doc/book/box-syntax-and-patterns.md +++ b/src/doc/book/box-syntax-and-patterns.md @@ -38,7 +38,7 @@ so as to avoid copying a large data structure. For example: struct BigStruct { one: i32, two: i32, - // etc + // Etc. one_hundred: i32, } @@ -68,7 +68,7 @@ This is an antipattern in Rust. Instead, write this: struct BigStruct { one: i32, two: i32, - // etc + // Etc. 
one_hundred: i32, } diff --git a/src/doc/book/casting-between-types.md b/src/doc/book/casting-between-types.md index a101f397c3..296384ab6e 100644 --- a/src/doc/book/casting-between-types.md +++ b/src/doc/book/casting-between-types.md @@ -106,7 +106,7 @@ from integers, and to cast between pointers to different types subject to some constraints. It is only unsafe to dereference the pointer: ```rust -let a = 300 as *const char; // a pointer to location 300 +let a = 300 as *const char; // `a` is a pointer to location 300. let b = a as u32; ``` @@ -135,14 +135,14 @@ cast four bytes into a `u32`: ```rust,ignore let a = [0u8, 0u8, 0u8, 0u8]; -let b = a as u32; // four u8s makes a u32 +let b = a as u32; // Four u8s makes a u32. ``` This errors with: ```text error: non-scalar cast: `[u8; 4]` as `u32` -let b = a as u32; // four u8s makes a u32 +let b = a as u32; // Four u8s makes a u32. ^~~~~~~~ ``` @@ -170,7 +170,7 @@ fn main() { let a = [0u8, 1u8, 0u8, 0u8]; let b = mem::transmute::<[u8; 4], u32>(a); println!("{}", b); // 256 - // or, more concisely: + // Or, more concisely: let c: u32 = mem::transmute(a); println!("{}", c); // 256 } diff --git a/src/doc/book/choosing-your-guarantees.md b/src/doc/book/choosing-your-guarantees.md index d88f619260..9dca3479d3 100644 --- a/src/doc/book/choosing-your-guarantees.md +++ b/src/doc/book/choosing-your-guarantees.md @@ -25,7 +25,7 @@ the following: ```rust let x = Box::new(1); let y = x; -// x no longer accessible here +// `x` is no longer accessible here. ``` Here, the box was _moved_ into `y`. As `x` no longer owns it, the compiler will no longer allow the @@ -291,9 +291,9 @@ the inner data (mutably), and the lock will be released when the guard goes out ```rust,ignore { let guard = mutex.lock(); - // guard dereferences mutably to the inner type + // `guard` dereferences mutably to the inner type. *guard += 1; -} // lock released when destructor runs +} // Lock is released when destructor runs. ``` diff --git a/src/doc/book/closures.md b/src/doc/book/closures.md index fa9f66d43b..a3c7333c6b 100644 --- a/src/doc/book/closures.md +++ b/src/doc/book/closures.md @@ -116,7 +116,7 @@ let mut num = 5; { let plus_num = |x: i32| x + num; -} // plus_num goes out of scope, borrow of num ends +} // `plus_num` goes out of scope; borrow of `num` ends. let y = &mut num; ``` diff --git a/src/doc/book/comments.md b/src/doc/book/comments.md index e7eb48dc42..8fa397cd9a 100644 --- a/src/doc/book/comments.md +++ b/src/doc/book/comments.md @@ -10,7 +10,7 @@ and *doc comments*. ```rust // Line comments are anything after ‘//’ and extend to the end of the line. -let x = 5; // this is also a line comment. +let x = 5; // This is also a line comment. // If you have a long explanation for something, you can put line comments next // to each other. Put a space between the // and your comment so that it’s diff --git a/src/doc/book/compiler-plugins.md b/src/doc/book/compiler-plugins.md index a9a81843ab..ff29358df9 100644 --- a/src/doc/book/compiler-plugins.md +++ b/src/doc/book/compiler-plugins.md @@ -48,7 +48,7 @@ extern crate rustc_plugin; use syntax::parse::token; use syntax::tokenstream::TokenTree; use syntax::ext::base::{ExtCtxt, MacResult, DummyResult, MacEager}; -use syntax::ext::build::AstBuilder; // trait for expr_usize +use syntax::ext::build::AstBuilder; // A trait for expr_usize. 
use syntax::ext::quote::rt::Span; use rustc_plugin::Registry; diff --git a/src/doc/book/concurrency.md b/src/doc/book/concurrency.md index 41d8345b72..67d89d5484 100644 --- a/src/doc/book/concurrency.md +++ b/src/doc/book/concurrency.md @@ -213,10 +213,10 @@ fn main() { let mut data = Rc::new(vec![1, 2, 3]); for i in 0..3 { - // create a new owned reference + // Create a new owned reference: let data_ref = data.clone(); - // use it in a thread + // Use it in a thread: thread::spawn(move || { data_ref[0] += i; }); @@ -390,8 +390,8 @@ use std::sync::mpsc; fn main() { let data = Arc::new(Mutex::new(0)); - // `tx` is the "transmitter" or "sender" - // `rx` is the "receiver" + // `tx` is the "transmitter" or "sender". + // `rx` is the "receiver". let (tx, rx) = mpsc::channel(); for _ in 0..10 { diff --git a/src/doc/book/crates-and-modules.md b/src/doc/book/crates-and-modules.md index fcb7e0bc7e..0e33663523 100644 --- a/src/doc/book/crates-and-modules.md +++ b/src/doc/book/crates-and-modules.md @@ -126,7 +126,7 @@ Instead of declaring a module like this: ```rust,ignore mod english { - // contents of our module go here + // Contents of our module go here. } ``` diff --git a/src/doc/book/custom-allocators.md b/src/doc/book/custom-allocators.md index d69ef6cf7e..1996305f09 100644 --- a/src/doc/book/custom-allocators.md +++ b/src/doc/book/custom-allocators.md @@ -41,7 +41,7 @@ which allocator is in use is done simply by linking to the desired allocator: extern crate alloc_system; fn main() { - let a = Box::new(4); // allocates from the system allocator + let a = Box::new(4); // Allocates from the system allocator. println!("{}", a); } ``` @@ -57,7 +57,7 @@ uses jemalloc by default one would write: extern crate alloc_jemalloc; pub fn foo() { - let a = Box::new(4); // allocates from jemalloc + let a = Box::new(4); // Allocates from jemalloc. println!("{}", a); } # fn main() {} @@ -72,11 +72,11 @@ crate which implements the allocator API (e.g. the same as `alloc_system` or annotated version of `alloc_system` ```rust,no_run -# // only needed for rustdoc --test down below +# // Only needed for rustdoc --test down below. # #![feature(lang_items)] // The compiler needs to be instructed that this crate is an allocator in order // to realize that when this is linked in another allocator like jemalloc should -// not be linked in +// not be linked in. #![feature(allocator)] #![allocator] @@ -85,7 +85,7 @@ annotated version of `alloc_system` // however, can use all of libcore. #![no_std] -// Let's give a unique name to our custom allocator +// Let's give a unique name to our custom allocator: #![crate_name = "my_allocator"] #![crate_type = "rlib"] @@ -126,7 +126,7 @@ pub extern fn __rust_reallocate(ptr: *mut u8, _old_size: usize, size: usize, #[no_mangle] pub extern fn __rust_reallocate_inplace(_ptr: *mut u8, old_size: usize, _size: usize, _align: usize) -> usize { - old_size // this api is not supported by libc + old_size // This api is not supported by libc. 
} #[no_mangle] @@ -134,7 +134,7 @@ pub extern fn __rust_usable_size(size: usize, _align: usize) -> usize { size } -# // only needed to get rustdoc to test this +# // Only needed to get rustdoc to test this: # fn main() {} # #[lang = "panic_fmt"] fn panic_fmt() {} # #[lang = "eh_personality"] fn eh_personality() {} @@ -149,7 +149,7 @@ After we compile this crate, it can be used as follows: extern crate my_allocator; fn main() { - let a = Box::new(8); // allocates memory via our custom allocator crate + let a = Box::new(8); // Allocates memory via our custom allocator crate. println!("{}", a); } ``` diff --git a/src/doc/book/deref-coercions.md b/src/doc/book/deref-coercions.md index cabe66f5b2..864cd282d9 100644 --- a/src/doc/book/deref-coercions.md +++ b/src/doc/book/deref-coercions.md @@ -33,13 +33,13 @@ automatically coerce to a `&T`. Here’s an example: ```rust fn foo(s: &str) { - // borrow a string for a second + // Borrow a string for a second. } -// String implements Deref +// String implements Deref. let owned = "Hello".to_string(); -// therefore, this works: +// Therefore, this works: foo(&owned); ``` @@ -55,14 +55,14 @@ type implements `Deref`, so this works: use std::rc::Rc; fn foo(s: &str) { - // borrow a string for a second + // Borrow a string for a second. } -// String implements Deref +// String implements Deref. let owned = "Hello".to_string(); let counted = Rc::new(owned); -// therefore, this works: +// Therefore, this works: foo(&counted); ``` @@ -76,10 +76,10 @@ Another very common implementation provided by the standard library is: ```rust fn foo(s: &[i32]) { - // borrow a slice for a second + // Borrow a slice for a second. } -// Vec implements Deref +// Vec implements Deref. let owned = vec![1, 2, 3]; foo(&owned); diff --git a/src/doc/book/documentation.md b/src/doc/book/documentation.md index 6292ba9aac..f30a95b4e7 100644 --- a/src/doc/book/documentation.md +++ b/src/doc/book/documentation.md @@ -28,7 +28,7 @@ code. You can use documentation comments for this purpose: /// let five = Rc::new(5); /// ``` pub fn new(value: T) -> Rc { - // implementation goes here + // Implementation goes here. } ``` @@ -483,7 +483,7 @@ you have a module in `foo.rs`, you'll often open its code and see this: ```rust //! A module for using `foo`s. //! -//! The `foo` module contains a lot of useful functionality blah blah blah +//! The `foo` module contains a lot of useful functionality blah blah blah... ``` ### Crate documentation diff --git a/src/doc/book/drop.md b/src/doc/book/drop.md index 5513523e56..0b7ddcfbe8 100644 --- a/src/doc/book/drop.md +++ b/src/doc/book/drop.md @@ -18,9 +18,9 @@ impl Drop for HasDrop { fn main() { let x = HasDrop; - // do stuff + // Do stuff. -} // x goes out of scope here +} // `x` goes out of scope here. ``` When `x` goes out of scope at the end of `main()`, the code for `Drop` will diff --git a/src/doc/book/enums.md b/src/doc/book/enums.md index 5e05b4ebbd..790d6ff854 100644 --- a/src/doc/book/enums.md +++ b/src/doc/book/enums.md @@ -51,7 +51,7 @@ possible variants: ```rust,ignore fn process_color_change(msg: Message) { - let Message::ChangeColor(r, g, b) = msg; // compile-time error + let Message::ChangeColor(r, g, b) = msg; // This causes a compile-time error. } ``` diff --git a/src/doc/book/error-handling.md b/src/doc/book/error-handling.md index a62e1b7dfa..0d9f49d66c 100644 --- a/src/doc/book/error-handling.md +++ b/src/doc/book/error-handling.md @@ -65,7 +65,7 @@ and in most cases, the entire program aborts.) 
Here's an example: ```rust,should_panic // Guess a number between 1 and 10. -// If it matches the number we had in mind, return true. Else, return false. +// If it matches the number we had in mind, return `true`. Else, return `false`. fn guess(n: i32) -> bool { if n < 1 || n > 10 { panic!("Invalid number: {}", n); @@ -350,7 +350,7 @@ fn file_path_ext_explicit(file_path: &str) -> Option<&str> { } fn file_name(file_path: &str) -> Option<&str> { - // implementation elided + // Implementation elided. unimplemented!() } ``` @@ -360,7 +360,7 @@ analysis, but its type doesn't quite fit... ```rust,ignore fn file_path_ext(file_path: &str) -> Option<&str> { - file_name(file_path).map(|x| extension(x)) //Compilation error + file_name(file_path).map(|x| extension(x)) // This causes a compilation error. } ``` @@ -1235,11 +1235,11 @@ use std::fs; use std::io; use std::num; -// We have to jump through some hoops to actually get error values. +// We have to jump through some hoops to actually get error values: let io_err: io::Error = io::Error::last_os_error(); let parse_err: num::ParseIntError = "not a number".parse::().unwrap_err(); -// OK, here are the conversions. +// OK, here are the conversions: let err1: Box = From::from(io_err); let err2: Box = From::from(parse_err); ``` @@ -1609,7 +1609,7 @@ fn main() { let data_path = &matches.free[0]; let city: &str = &matches.free[1]; - // Do stuff with information + // Do stuff with information. } ``` @@ -1747,7 +1747,7 @@ simply ignoring that row. use std::path::Path; struct Row { - // unchanged + // This struct remains unchanged. } struct PopulationCount { @@ -1769,7 +1769,7 @@ fn search>(file_path: P, city: &str) -> Vec { for row in rdr.decode::() { let row = row.unwrap(); match row.population { - None => { } // skip it + None => { } // Skip it. Some(count) => if row.city == city { found.push(PopulationCount { city: row.city, @@ -1825,7 +1825,7 @@ Let's try it: ```rust,ignore use std::error::Error; -// The rest of the code before this is unchanged +// The rest of the code before this is unchanged. fn search> (file_path: P, city: &str) @@ -1836,7 +1836,7 @@ fn search> for row in rdr.decode::() { let row = try!(row); match row.population { - None => { } // skip it + None => { } // Skip it. Some(count) => if row.city == city { found.push(PopulationCount { city: row.city, @@ -1957,7 +1957,7 @@ that it is generic on some type parameter `R` that satisfies ```rust,ignore use std::io; -// The rest of the code before this is unchanged +// The rest of the code before this is unchanged. fn search> (file_path: &Option
<P>
, city: &str) @@ -2070,7 +2070,7 @@ fn search> for row in rdr.decode::() { let row = try!(row); match row.population { - None => { } // skip it + None => { } // Skip it. Some(count) => if row.city == city { found.push(PopulationCount { city: row.city, diff --git a/src/doc/book/ffi.md b/src/doc/book/ffi.md index 8709c3f4b7..b53af69442 100644 --- a/src/doc/book/ffi.md +++ b/src/doc/book/ffi.md @@ -95,7 +95,7 @@ internal details. Wrapping the functions which expect buffers involves using the `slice::raw` module to manipulate Rust vectors as pointers to memory. Rust's vectors are guaranteed to be a contiguous block of memory. The -length is number of elements currently contained, and the capacity is the total size in elements of +length is the number of elements currently contained, and the capacity is the total size in elements of the allocated memory. The length is less than or equal to the capacity. ```rust @@ -277,7 +277,7 @@ extern { fn main() { unsafe { register_callback(callback); - trigger_callback(); // Triggers the callback + trigger_callback(); // Triggers the callback. } } ``` @@ -294,7 +294,7 @@ int32_t register_callback(rust_callback callback) { } void trigger_callback() { - cb(7); // Will call callback(7) in Rust + cb(7); // Will call callback(7) in Rust. } ``` @@ -320,13 +320,13 @@ Rust code: #[repr(C)] struct RustObject { a: i32, - // other members + // Other members... } extern "C" fn callback(target: *mut RustObject, a: i32) { println!("I'm called from C with value {0}", a); unsafe { - // Update the value in RustObject with the value received from the callback + // Update the value in RustObject with the value received from the callback: (*target).a = a; } } @@ -339,7 +339,7 @@ extern { } fn main() { - // Create the object that will be referenced in the callback + // Create the object that will be referenced in the callback: let mut rust_object = Box::new(RustObject { a: 5 }); unsafe { @@ -363,7 +363,7 @@ int32_t register_callback(void* callback_target, rust_callback callback) { } void trigger_callback() { - cb(cb_target, 7); // Will call callback(&rustObject, 7) in Rust + cb(cb_target, 7); // Will call callback(&rustObject, 7) in Rust. } ``` @@ -606,7 +606,7 @@ use libc::c_int; # #[cfg(hidden)] extern "C" { - /// Register the callback. + /// Registers the callback. fn register(cb: Option c_int>, c_int) -> c_int>); } # unsafe fn register(_: Option c_int>, @@ -662,26 +662,31 @@ attribute turns off Rust's name mangling, so that it is easier to link to. It’s important to be mindful of `panic!`s when working with FFI. A `panic!` across an FFI boundary is undefined behavior. If you’re writing code that may -panic, you should run it in another thread, so that the panic doesn’t bubble up -to C: +panic, you should run it in a closure with [`catch_unwind()`]: ```rust -use std::thread; +use std::panic::catch_unwind; #[no_mangle] pub extern fn oh_no() -> i32 { - let h = thread::spawn(|| { + let result = catch_unwind(|| { panic!("Oops!"); }); - - match h.join() { - Ok(_) => 1, - Err(_) => 0, + match result { + Ok(_) => 0, + Err(_) => 1, } } -# fn main() {} + +fn main() {} ``` +Please note that [`catch_unwind()`] will only catch unwinding panics, not +those who abort the process. See the documentation of [`catch_unwind()`] +for more information. 
+ +[`catch_unwind()`]: https://doc.rust-lang.org/std/panic/fn.catch_unwind.html + # Representing opaque structs Sometimes, a C library wants to provide a pointer to something, but not let you diff --git a/src/doc/book/functions.md b/src/doc/book/functions.md index b040684d05..b453936fe0 100644 --- a/src/doc/book/functions.md +++ b/src/doc/book/functions.md @@ -135,7 +135,7 @@ In Rust, however, using `let` to introduce a binding is _not_ an expression. The following will produce a compile-time error: ```rust,ignore -let x = (let y = 5); // expected identifier, found keyword `let` +let x = (let y = 5); // Expected identifier, found keyword `let`. ``` The compiler is telling us here that it was expecting to see the beginning of @@ -151,7 +151,7 @@ other returned value would be too surprising: ```rust let mut y = 5; -let x = (y = 6); // x has the value `()`, not `6` +let x = (y = 6); // `x` has the value `()`, not `6`. ``` The second kind of statement in Rust is the *expression statement*. Its @@ -183,7 +183,7 @@ But what about early returns? Rust does have a keyword for that, `return`: fn foo(x: i32) -> i32 { return x; - // we never run this code! + // We never run this code! x + 1 } ``` @@ -307,10 +307,10 @@ fn plus_one(i: i32) -> i32 { i + 1 } -// without type inference +// Without type inference: let f: fn(i32) -> i32 = plus_one; -// with type inference +// With type inference: let f = plus_one; ``` diff --git a/src/doc/book/generics.md b/src/doc/book/generics.md index 9ab601419c..eafad6a05f 100644 --- a/src/doc/book/generics.md +++ b/src/doc/book/generics.md @@ -78,7 +78,7 @@ We can write functions that take generic types with a similar syntax: ```rust fn takes_anything(x: T) { - // do something with x + // Do something with `x`. } ``` diff --git a/src/doc/book/getting-started.md b/src/doc/book/getting-started.md index 203dbf16ca..5aae693ad6 100644 --- a/src/doc/book/getting-started.md +++ b/src/doc/book/getting-started.md @@ -4,112 +4,25 @@ This first chapter of the book will get us going with Rust and its tooling. First, we’ll install Rust. Then, the classic ‘Hello World’ program. Finally, we’ll talk about Cargo, Rust’s build system and package manager. -# Installing Rust - -The first step to using Rust is to install it. Generally speaking, you’ll need -an Internet connection to run the commands in this section, as we’ll be -downloading Rust from the Internet. - We’ll be showing off a number of commands using a terminal, and those lines all start with `$`. You don't need to type in the `$`s, they are there to indicate the start of each command. We’ll see many tutorials and examples around the web that follow this convention: `$` for commands run as our regular user, and `#` for commands we should be running as an administrator. -## Platform support +# Installing Rust -The Rust compiler runs on, and compiles to, a great number of platforms, though -not all platforms are equally supported. Rust's support levels are organized -into three tiers, each with a different set of guarantees. +The first step to using Rust is to install it. Generally speaking, you’ll need +an Internet connection to run the commands in this section, as we’ll be +downloading Rust from the Internet. -Platforms are identified by their "target triple" which is the string to inform -the compiler what kind of output should be produced. The columns below indicate -whether the corresponding component works on the specified platform. 
+The Rust compiler runs on, and compiles to, a great number of platforms, but is +best supported on Linux, Mac, and Windows, on the x86 and x86-64 CPU +architecture. There are official builds of the Rust compiler and standard +library for these platforms and more. [For full details on Rust platform support +see the website][platform-support]. -### Tier 1 - -Tier 1 platforms can be thought of as "guaranteed to build and work". -Specifically they will each satisfy the following requirements: - -* Automated testing is set up to run tests for the platform. -* Landing changes to the `rust-lang/rust` repository's master branch is gated on - tests passing. -* Official release artifacts are provided for the platform. -* Documentation for how to use and how to build the platform is available. - -| Target | std |rustc|cargo| notes | -|-------------------------------|-----|-----|-----|----------------------------| -| `i686-apple-darwin` | ✓ | ✓ | ✓ | 32-bit OSX (10.7+, Lion+) | -| `i686-pc-windows-gnu` | ✓ | ✓ | ✓ | 32-bit MinGW (Windows 7+) | -| `i686-pc-windows-msvc` | ✓ | ✓ | ✓ | 32-bit MSVC (Windows 7+) | -| `i686-unknown-linux-gnu` | ✓ | ✓ | ✓ | 32-bit Linux (2.6.18+) | -| `x86_64-apple-darwin` | ✓ | ✓ | ✓ | 64-bit OSX (10.7+, Lion+) | -| `x86_64-pc-windows-gnu` | ✓ | ✓ | ✓ | 64-bit MinGW (Windows 7+) | -| `x86_64-pc-windows-msvc` | ✓ | ✓ | ✓ | 64-bit MSVC (Windows 7+) | -| `x86_64-unknown-linux-gnu` | ✓ | ✓ | ✓ | 64-bit Linux (2.6.18+) | - -### Tier 2 - -Tier 2 platforms can be thought of as "guaranteed to build". Automated tests -are not run so it's not guaranteed to produce a working build, but platforms -often work to quite a good degree and patches are always welcome! Specifically, -these platforms are required to have each of the following: - -* Automated building is set up, but may not be running tests. -* Landing changes to the `rust-lang/rust` repository's master branch is gated on - platforms **building**. Note that this means for some platforms only the - standard library is compiled, but for others the full bootstrap is run. -* Official release artifacts are provided for the platform. 
- -| Target | std |rustc|cargo| notes | -|-------------------------------|-----|-----|-----|----------------------------| -| `aarch64-apple-ios` | ✓ | | | ARM64 iOS | -| `aarch64-unknown-linux-gnu` | ✓ | ✓ | ✓ | ARM64 Linux (2.6.18+) | -| `arm-linux-androideabi` | ✓ | | | ARM Android | -| `arm-unknown-linux-gnueabi` | ✓ | ✓ | ✓ | ARM Linux (2.6.18+) | -| `arm-unknown-linux-gnueabihf` | ✓ | ✓ | ✓ | ARM Linux (2.6.18+) | -| `armv7-apple-ios` | ✓ | | | ARM iOS | -|`armv7-unknown-linux-gnueabihf`| ✓ | ✓ | ✓ | ARMv7 Linux (2.6.18+) | -| `armv7s-apple-ios` | ✓ | | | ARM iOS | -| `i386-apple-ios` | ✓ | | | 32-bit x86 iOS | -| `i586-pc-windows-msvc` | ✓ | | | 32-bit Windows w/o SSE | -| `mips-unknown-linux-gnu` | ✓ | | | MIPS Linux (2.6.18+) | -| `mips-unknown-linux-musl` | ✓ | | | MIPS Linux with MUSL | -| `mipsel-unknown-linux-gnu` | ✓ | | | MIPS (LE) Linux (2.6.18+) | -| `mipsel-unknown-linux-musl` | ✓ | | | MIPS (LE) Linux with MUSL | -| `powerpc-unknown-linux-gnu` | ✓ | | | PowerPC Linux (2.6.18+) | -| `powerpc64-unknown-linux-gnu` | ✓ | | | PPC64 Linux (2.6.18+) | -|`powerpc64le-unknown-linux-gnu`| ✓ | | | PPC64LE Linux (2.6.18+) | -| `x86_64-apple-ios` | ✓ | | | 64-bit x86 iOS | -| `x86_64-rumprun-netbsd` | ✓ | | | 64-bit NetBSD Rump Kernel | -| `x86_64-unknown-freebsd` | ✓ | ✓ | ✓ | 64-bit FreeBSD | -| `x86_64-unknown-linux-musl` | ✓ | | | 64-bit Linux with MUSL | -| `x86_64-unknown-netbsd` | ✓ | ✓ | ✓ | 64-bit NetBSD | - -### Tier 3 - -Tier 3 platforms are those which Rust has support for, but landing changes is -not gated on the platform either building or passing tests. Working builds for -these platforms may be spotty as their reliability is often defined in terms of -community contributions. Additionally, release artifacts and installers are not -provided, but there may be community infrastructure producing these in -unofficial locations. - -| Target | std |rustc|cargo| notes | -|-------------------------------|-----|-----|-----|----------------------------| -| `aarch64-linux-android` | ✓ | | | ARM64 Android | -| `armv7-linux-androideabi` | ✓ | | | ARM-v7a Android | -| `i686-linux-android` | ✓ | | | 32-bit x86 Android | -| `i686-pc-windows-msvc` (XP) | ✓ | | | Windows XP support | -| `i686-unknown-freebsd` | ✓ | ✓ | ✓ | 32-bit FreeBSD | -| `x86_64-pc-windows-msvc` (XP) | ✓ | | | Windows XP support | -| `x86_64-sun-solaris` | ✓ | ✓ | | 64-bit Solaris/SunOS | -| `x86_64-unknown-bitrig` | ✓ | ✓ | | 64-bit Bitrig | -| `x86_64-unknown-dragonfly` | ✓ | ✓ | | 64-bit DragonFlyBSD | -| `x86_64-unknown-openbsd` | ✓ | ✓ | | 64-bit OpenBSD | - -Note that this table can be expanded over time, this isn't the exhaustive set of -tier 3 platforms that will ever be! +[platform-support]: https://forge.rust-lang.org/platform-support.html ## Installing Rust diff --git a/src/doc/book/guessing-game.md b/src/doc/book/guessing-game.md index a3ab4803bc..c854b7c373 100644 --- a/src/doc/book/guessing-game.md +++ b/src/doc/book/guessing-game.md @@ -19,6 +19,7 @@ has a command that does that for us. Let’s give it a shot: ```bash $ cd ~/projects $ cargo new guessing_game --bin + Created binary (application) `guessing_game` project $ cd guessing_game ``` @@ -51,6 +52,7 @@ Let’s try compiling what Cargo gave us: ```{bash} $ cargo build Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.53 secs ``` Excellent! Open up your `src/main.rs` again. We’ll be writing all of @@ -61,6 +63,7 @@ Remember the `run` command from last chapter? 
Try it out again here: ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs Running `target/debug/guessing_game` Hello, world! ``` @@ -155,8 +158,8 @@ take a name on the left hand side of the assignment, it actually accepts a to use for now: ```rust -let foo = 5; // immutable. -let mut bar = 5; // mutable +let foo = 5; // `foo` is immutable. +let mut bar = 5; // `bar` is mutable. ``` [immutable]: mutability.html @@ -282,10 +285,13 @@ we’ll get a warning: ```bash $ cargo build Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) -src/main.rs:10:5: 10:39 warning: unused result which must be used, -#[warn(unused_must_use)] on by default -src/main.rs:10 io::stdin().read_line(&mut guess); - ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +warning: unused result which must be used, #[warn(unused_must_use)] on by default + --> src/main.rs:10:5 + | +10 | io::stdin().read_line(&mut guess); + | ^ + + Finished debug [unoptimized + debuginfo] target(s) in 0.42 secs ``` Rust warns us that we haven’t used the `Result` value. This warning comes from @@ -321,6 +327,7 @@ Anyway, that’s the tour. We can run what we have with `cargo run`: ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.44 secs Running `target/debug/guessing_game` Guess the number! Please input your guess. @@ -373,11 +380,12 @@ Now, without changing any of our code, let’s build our project: ```bash $ cargo build Updating registry `https://github.com/rust-lang/crates.io-index` - Downloading rand v0.3.8 - Downloading libc v0.1.6 - Compiling libc v0.1.6 - Compiling rand v0.3.8 + Downloading rand v0.3.14 + Downloading libc v0.2.17 + Compiling libc v0.2.17 + Compiling rand v0.3.14 Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 5.88 secs ``` (You may see different versions, of course.) @@ -399,22 +407,24 @@ If we run `cargo build` again, we’ll get different output: ```bash $ cargo build + Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs ``` -That’s right, no output! Cargo knows that our project has been built, and that +That’s right, nothing was done! Cargo knows that our project has been built, and that all of its dependencies are built, and so there’s no reason to do all that stuff. With nothing to do, it simply exits. If we open up `src/main.rs` again, -make a trivial change, and then save it again, we’ll only see one line: +make a trivial change, and then save it again, we’ll only see two lines: ```bash $ cargo build Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.45 secs ``` So, we told Cargo we wanted any `0.3.x` version of `rand`, and so it fetched the latest -version at the time this was written, `v0.3.8`. But what happens when next -week, version `v0.3.9` comes out, with an important bugfix? While getting -bugfixes is important, what if `0.3.9` contains a regression that breaks our +version at the time this was written, `v0.3.14`. But what happens when next +week, version `v0.3.15` comes out, with an important bugfix? While getting +bugfixes is important, what if `0.3.15` contains a regression that breaks our code? 
The answer to this problem is the `Cargo.lock` file you’ll now find in your @@ -423,11 +433,11 @@ figures out all of the versions that fit your criteria, and then writes them to the `Cargo.lock` file. When you build your project in the future, Cargo will see that the `Cargo.lock` file exists, and then use that specific version rather than do all the work of figuring out versions again. This lets you -have a repeatable build automatically. In other words, we’ll stay at `0.3.8` +have a repeatable build automatically. In other words, we’ll stay at `0.3.14` until we explicitly upgrade, and so will anyone who we share our code with, thanks to the lock file. -What about when we _do_ want to use `v0.3.9`? Cargo has another command, +What about when we _do_ want to use `v0.3.15`? Cargo has another command, `update`, which says ‘ignore the lock, figure out all the latest versions that fit what we’ve specified. If that works, write those versions out to the lock file’. But, by default, Cargo will only look for versions larger than `0.3.0` @@ -510,6 +520,7 @@ Try running our new program a few times: ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.55 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 7 @@ -517,6 +528,7 @@ Please input your guess. 4 You guessed: 4 $ cargo run + Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 83 @@ -618,15 +630,20 @@ I did mention that this won’t quite compile yet, though. Let’s try it: ```bash $ cargo build Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) -src/main.rs:28:21: 28:35 error: mismatched types: - expected `&collections::string::String`, - found `&_` -(expected struct `collections::string::String`, - found integral variable) [E0308] -src/main.rs:28 match guess.cmp(&secret_number) { - ^~~~~~~~~~~~~~ +error[E0308]: mismatched types + --> src/main.rs:23:21 + | +23 | match guess.cmp(&secret_number) { + | ^^^^^^^^^^^^^^ expected struct `std::string::String`, found integral variable + | + = note: expected type `&std::string::String` + = note: found type `&{integer}` + error: aborting due to previous error -Could not compile `guessing_game`. + +error: Could not compile `guessing_game`. + +To learn more, run the command again with --verbose. ``` Whew! This is a big error. The core of it is that we have ‘mismatched types’. @@ -722,6 +739,7 @@ Let’s try our program out! ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.57 secs Running `target/guessing_game` Guess the number! The secret number is: 58 @@ -785,6 +803,7 @@ and quit. Observe: ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.58 secs Running `target/guessing_game` Guess the number! The secret number is: 59 @@ -919,6 +938,7 @@ Now we should be good! Let’s try: ```bash $ cargo run Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game) + Finished debug [unoptimized + debuginfo] target(s) in 0.57 secs Running `target/guessing_game` Guess the number! 
The secret number is: 61 diff --git a/src/doc/book/inline-assembly.md b/src/doc/book/inline-assembly.md index 62e196a7cc..e531d5d7fc 100644 --- a/src/doc/book/inline-assembly.md +++ b/src/doc/book/inline-assembly.md @@ -34,7 +34,7 @@ fn foo() { } } -// other platforms +// Other platforms: #[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))] fn foo() { /* ... */ } @@ -130,7 +130,7 @@ stay valid. # #![feature(asm)] # #[cfg(any(target_arch = "x86", target_arch = "x86_64"))] # fn main() { unsafe { -// Put the value 0x200 in eax +// Put the value 0x200 in eax: asm!("mov $$0x200, %eax" : /* no outputs */ : /* no inputs */ : "eax"); # } } # #[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))] diff --git a/src/doc/book/lang-items.md b/src/doc/book/lang-items.md index de7dbab3f1..6a08c1b6bb 100644 --- a/src/doc/book/lang-items.md +++ b/src/doc/book/lang-items.md @@ -32,7 +32,7 @@ pub struct Box(*mut T); unsafe fn allocate(size: usize, _align: usize) -> *mut u8 { let p = libc::malloc(size as libc::size_t) as *mut u8; - // malloc failed + // Check if `malloc` failed: if p as usize == 0 { abort(); } @@ -46,8 +46,8 @@ unsafe fn deallocate(ptr: *mut u8, _size: usize, _align: usize) { } #[lang = "box_free"] -unsafe fn box_free(ptr: *mut T) { - deallocate(ptr as *mut u8, ::core::mem::size_of::(), ::core::mem::align_of::()); +unsafe fn box_free(ptr: *mut T) { + deallocate(ptr as *mut u8, ::core::mem::size_of_val(&*ptr), ::core::mem::align_of_val(&*ptr)); } #[start] diff --git a/src/doc/book/lifetimes.md b/src/doc/book/lifetimes.md index df1ee5a293..140e27d192 100644 --- a/src/doc/book/lifetimes.md +++ b/src/doc/book/lifetimes.md @@ -54,13 +54,13 @@ dangling pointer or ‘use after free’, when the resource is memory. A small example of such a situation would be: ```rust,compile_fail -let r; // Introduce reference: r +let r; // Introduce reference: `r`. { - let i = 1; // Introduce scoped value: i - r = &i; // Store reference of i in r -} // i goes out of scope and is dropped. + let i = 1; // Introduce scoped value: `i`. + r = &i; // Store reference of `i` in `r`. +} // `i` goes out of scope and is dropped. -println!("{}", r); // r still refers to i +println!("{}", r); // `r` still refers to `i`. ``` To fix this, we have to make sure that step four never happens after step @@ -81,9 +81,9 @@ let lang = "en"; let v; { - let p = format!("lang:{}=", lang); // -+ p goes into scope + let p = format!("lang:{}=", lang); // -+ `p` comes into scope. v = skip_prefix(line, p.as_str()); // | -} // -+ p goes out of scope +} // -+ `p` goes out of scope. println!("{}", v); ``` @@ -191,7 +191,7 @@ struct Foo<'a> { } fn main() { - let y = &5; // this is the same as `let _y = 5; let y = &_y;` + let y = &5; // This is the same as `let _y = 5; let y = &_y;`. let f = Foo { x: y }; println!("{}", f.x); @@ -233,7 +233,7 @@ impl<'a> Foo<'a> { } fn main() { - let y = &5; // this is the same as `let _y = 5; let y = &_y;` + let y = &5; // This is the same as `let _y = 5; let y = &_y;`. let f = Foo { x: y }; println!("x is: {}", f.x()); @@ -274,11 +274,11 @@ valid for. For example: ```rust fn main() { - let y = &5; // -+ y goes into scope + let y = &5; // -+ `y` comes into scope. // | - // stuff // | + // Stuff... // | // | -} // -+ y goes out of scope +} // -+ `y` goes out of scope. ``` Adding in our `Foo`: @@ -289,11 +289,12 @@ struct Foo<'a> { } fn main() { - let y = &5; // -+ y goes into scope - let f = Foo { x: y }; // -+ f goes into scope - // stuff // | + let y = &5; // -+ `y` comes into scope. 
+ let f = Foo { x: y }; // -+ `f` comes into scope. // | -} // -+ f and y go out of scope + // Stuff... // | + // | +} // -+ `f` and `y` go out of scope. ``` Our `f` lives within the scope of `y`, so everything works. What if it didn’t? @@ -305,16 +306,16 @@ struct Foo<'a> { } fn main() { - let x; // -+ x goes into scope + let x; // -+ `x` comes into scope. // | { // | - let y = &5; // ---+ y goes into scope - let f = Foo { x: y }; // ---+ f goes into scope - x = &f.x; // | | error here - } // ---+ f and y go out of scope + let y = &5; // ---+ `y` comes into scope. + let f = Foo { x: y }; // ---+ `f` comes into scope. + x = &f.x; // | | This causes an error. + } // ---+ `f` and y go out of scope. // | println!("{}", x); // | -} // -+ x goes out of scope +} // -+ `x` goes out of scope. ``` Whew! As you can see here, the scopes of `f` and `y` are smaller than the scope @@ -351,7 +352,7 @@ to it. Rust supports powerful local type inference in the bodies of functions but not in their item signatures. It's forbidden to allow reasoning about types based on the item signature alone. However, for ergonomic reasons, a very restricted secondary inference algorithm called -“lifetime elision” does apply when judging lifetimes. Lifetime elision is concerned solely to infer +“lifetime elision” does apply when judging lifetimes. Lifetime elision is concerned solely with inferring lifetime parameters using three easily memorizable and unambiguous rules. This means lifetime elision acts as a shorthand for writing an item signature, while not hiding away the actual types involved as full local inference would if applied to it. diff --git a/src/doc/book/loops.md b/src/doc/book/loops.md index e4cb861d3b..688e8c5526 100644 --- a/src/doc/book/loops.md +++ b/src/doc/book/loops.md @@ -202,8 +202,8 @@ of the outer loops, you can use labels to specify which loop the `break` or ```rust 'outer: for x in 0..10 { 'inner: for y in 0..10 { - if x % 2 == 0 { continue 'outer; } // continues the loop over x - if y % 2 == 0 { continue 'inner; } // continues the loop over y + if x % 2 == 0 { continue 'outer; } // Continues the loop over `x`. + if y % 2 == 0 { continue 'inner; } // Continues the loop over `y`. println!("x: {}, y: {}", x, y); } } diff --git a/src/doc/book/macros.md b/src/doc/book/macros.md index 78fe07ec1b..7f52b33948 100644 --- a/src/doc/book/macros.md +++ b/src/doc/book/macros.md @@ -533,33 +533,33 @@ An example: ```rust macro_rules! m1 { () => (()) } -// visible here: m1 +// Visible here: `m1`. mod foo { - // visible here: m1 + // Visible here: `m1`. #[macro_export] macro_rules! m2 { () => (()) } - // visible here: m1, m2 + // Visible here: `m1`, `m2`. } -// visible here: m1 +// Visible here: `m1`. macro_rules! m3 { () => (()) } -// visible here: m1, m3 +// Visible here: `m1`, `m3`. #[macro_use] mod bar { - // visible here: m1, m3 + // Visible here: `m1`, `m3`. macro_rules! m4 { () => (()) } - // visible here: m1, m3, m4 + // Visible here: `m1`, `m3`, `m4`. } -// visible here: m1, m3, m4 +// Visible here: `m1`, `m3`, `m4`. # fn main() { } ``` @@ -644,7 +644,7 @@ macro_rules! 
bct { (1, $p:tt, $($ps:tt),* ; $($ds:tt),*) => (bct!($($ps),*, 1, $p ; $($ds),*)); - // halt on empty data string + // Halt on empty data string: ( $($ps:tt),* ; ) => (()); } @@ -694,7 +694,7 @@ Like this: assert!(true); assert_eq!(5, 3 + 2); -// nope :( +// Nope :( assert!(5 < 3); assert_eq!(5, 3); diff --git a/src/doc/book/mutability.md b/src/doc/book/mutability.md index a0a49d55e1..18017cc4a5 100644 --- a/src/doc/book/mutability.md +++ b/src/doc/book/mutability.md @@ -6,7 +6,7 @@ status: ```rust,ignore let x = 5; -x = 6; // error! +x = 6; // Error! ``` We can introduce mutability with the `mut` keyword: @@ -14,7 +14,7 @@ We can introduce mutability with the `mut` keyword: ```rust let mut x = 5; -x = 6; // no problem! +x = 6; // No problem! ``` This is a mutable [variable binding][vb]. When a binding is mutable, it means @@ -136,7 +136,7 @@ some fields mutable and some immutable: ```rust,ignore struct Point { x: i32, - mut y: i32, // nope + mut y: i32, // Nope. } ``` @@ -154,7 +154,7 @@ a.x = 10; let b = Point { x: 5, y: 6}; -b.x = 10; // error: cannot assign to immutable field `b.x` +b.x = 10; // Error: cannot assign to immutable field `b.x`. ``` [struct]: structs.html diff --git a/src/doc/book/no-stdlib.md b/src/doc/book/no-stdlib.md index 2604ca8d4c..a06de35c0c 100644 --- a/src/doc/book/no-stdlib.md +++ b/src/doc/book/no-stdlib.md @@ -41,10 +41,10 @@ in the same format as C: #![feature(start)] #![no_std] -// Pull in the system libc library for what crt0.o likely requires +// Pull in the system libc library for what crt0.o likely requires. extern crate libc; -// Entry point for this program +// Entry point for this program. #[start] fn start(_argc: isize, _argv: *const *const u8) -> isize { 0 @@ -84,10 +84,10 @@ compiler's name mangling too: #![no_std] #![no_main] -// Pull in the system libc library for what crt0.o likely requires +// Pull in the system libc library for what crt0.o likely requires. extern crate libc; -// Entry point for this program +// Entry point for this program. #[no_mangle] // ensure that this symbol is called `main` in the output pub extern fn main(_argc: i32, _argv: *const *const u8) -> i32 { 0 diff --git a/src/doc/book/operators-and-overloading.md b/src/doc/book/operators-and-overloading.md index 424e2cda61..78ff871046 100644 --- a/src/doc/book/operators-and-overloading.md +++ b/src/doc/book/operators-and-overloading.md @@ -69,7 +69,7 @@ impl Add for Point { type Output = f64; fn add(self, rhs: i32) -> f64 { - // add an i32 to a Point and get an f64 + // Add an i32 to a Point and get an f64. # 1.0 } } diff --git a/src/doc/book/ownership.md b/src/doc/book/ownership.md index a711397b21..11eda399ad 100644 --- a/src/doc/book/ownership.md +++ b/src/doc/book/ownership.md @@ -107,7 +107,7 @@ try to use something after we’ve passed it as an argument: ```rust,ignore fn take(v: Vec) { - // what happens here isn’t important. + // What happens here isn’t important. } let v = vec![1, 2, 3]; @@ -264,9 +264,9 @@ Of course, if we had to hand ownership back with every function we wrote: ```rust fn foo(v: Vec) -> Vec { - // do stuff with v + // Do stuff with `v`. - // hand back ownership + // Hand back ownership. v } ``` @@ -275,9 +275,9 @@ This would get very tedious. It gets worse the more things we want to take owner ```rust fn foo(v1: Vec, v2: Vec) -> (Vec, Vec, i32) { - // do stuff with v1 and v2 + // Do stuff with `v1` and `v2`. - // hand back ownership, and the result of our function + // Hand back ownership, and the result of our function. 
(v1, v2, 42) } diff --git a/src/doc/book/patterns.md b/src/doc/book/patterns.md index 910b137547..b50fa01b8e 100644 --- a/src/doc/book/patterns.md +++ b/src/doc/book/patterns.md @@ -163,7 +163,7 @@ ignore parts of a larger structure: ```rust fn coordinate() -> (i32, i32, i32) { - // generate and return some sort of triple tuple + // Generate and return some sort of triple tuple. # (1, 2, 3) } @@ -182,7 +182,7 @@ let tuple: (u32, String) = (5, String::from("five")); // Here, tuple is moved, because the String moved: let (x, _s) = tuple; -// The next line would give "error: use of partially moved value: `tuple`" +// The next line would give "error: use of partially moved value: `tuple`". // println!("Tuple is: {:?}", tuple); // However, diff --git a/src/doc/book/primitive-types.md b/src/doc/book/primitive-types.md index ea0bdf29fc..c4169d64cc 100644 --- a/src/doc/book/primitive-types.md +++ b/src/doc/book/primitive-types.md @@ -54,9 +54,9 @@ bigger numbers. If a number literal has nothing to cause its type to be inferred, it defaults: ```rust -let x = 42; // x has type i32 +let x = 42; // `x` has type `i32`. -let y = 1.0; // y has type f64 +let y = 1.0; // `y` has type `f64`. ``` Here’s a list of the different numeric types, with links to their documentation @@ -177,8 +177,8 @@ length of the slice: ```rust let a = [0, 1, 2, 3, 4]; -let complete = &a[..]; // A slice containing all of the elements in a -let middle = &a[1..4]; // A slice of a: only the elements 1, 2, and 3 +let complete = &a[..]; // A slice containing all of the elements in `a`. +let middle = &a[1..4]; // A slice of `a`: only the elements `1`, `2`, and `3`. ``` Slices have type `&[T]`. We’ll talk about that `T` when we cover @@ -264,8 +264,8 @@ You can disambiguate a single-element tuple from a value in parentheses with a comma: ```rust -(0,); // single-element tuple -(0); // zero in parentheses +(0,); // A single-element tuple. +(0); // A zero in parentheses. ``` ## Tuple Indexing diff --git a/src/doc/book/procedural-macros.md b/src/doc/book/procedural-macros.md new file mode 100644 index 0000000000..37d3d20c06 --- /dev/null +++ b/src/doc/book/procedural-macros.md @@ -0,0 +1,213 @@ +% Procedural Macros (and custom Derive) + +As you've seen throughout the rest of the book, Rust provides a mechanism +called "derive" that lets you implement traits easily. For example, + +```rust +#[derive(Debug)] +struct Point { + x: i32, + y: i32, +} +``` + +is a lot simpler than + +```rust +struct Point { + x: i32, + y: i32, +} + +use std::fmt; + +impl fmt::Debug for Point { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "Point {{ x: {}, y: {} }}", self.x, self.y) + } +} +``` + +Rust includes several traits that you can derive, but it also lets you define +your own. We can accomplish this task through a feature of Rust called +"procedural macros." Eventually, procedural macros will allow for all sorts of +advanced metaprogramming in Rust, but today, they're only for custom derive. + +Let's build a very simple trait, and derive it with custom derive. + +## Hello World + +So the first thing we need to do is start a new crate for our project. + +```bash +$ cargo new --bin hello-world +``` + +All we want is to be able to call `hello_world()` on a derived type. Something +like this: + +```rust,ignore +#[derive(HelloWorld)] +struct Pancakes; + +fn main() { + Pancakes::hello_world(); +} +``` + +With some kind of nice output, like `Hello, World! My name is Pancakes.`. 
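Before introducing any macro machinery, it may help to see roughly what the derive has to produce. Written out by hand, the end result we are aiming for is something like this (a sketch of the expanded code, using the trait and output described above):

```rust
trait HelloWorld {
    fn hello_world();
}

struct Pancakes;

// This impl is what `#[derive(HelloWorld)]` will eventually generate for us.
impl HelloWorld for Pancakes {
    fn hello_world() {
        println!("Hello, World! My name is {}", stringify!(Pancakes));
    }
}

fn main() {
    Pancakes::hello_world();
}
```

Keeping this target in mind makes the rest of the chapter easier to follow: everything from here on is about generating that `impl` automatically.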
+ +Let's go ahead and write up what we think our macro will look like from a user +perspective. In `src/main.rs` we write: + +```rust,ignore +#[macro_use] +extern crate hello_world_derive; + +trait HelloWorld { + fn hello_world(); +} + +#[derive(HelloWorld)] +struct FrenchToast; + +#[derive(HelloWorld)] +struct Waffles; + +fn main() { + FrenchToast::hello_world(); + Waffles::hello_world(); +} +``` + +Great. So now we just need to actually write the procedural macro. At the +moment, procedural macros need to be in their own crate. Eventually, this +restriction may be lifted, but for now, it's required. As such, there's a +convention; for a crate named `foo`, a custom derive procedural macro is called +`foo-derive`. Let's start a new crate called `hello-world-derive` inside our +`hello-world` project. + +```bash +$ cargo new hello-world-derive +``` + +To make sure that our `hello-world` crate is able to find this new crate we've +created, we'll add it to our toml: + +```toml +[dependencies] +hello-world-derive = { path = "hello-world-derive" } +``` + +As for our the source of our `hello-world-derive` crate, here's an example: + +```rust,ignore +extern crate proc_macro; +extern crate syn; +#[macro_use] +extern crate quote; + +use proc_macro::TokenStream; + +#[proc_macro_derive(HelloWorld)] +pub fn hello_world(input: TokenStream) -> TokenStream { + // Construct a string representation of the type definition + let s = input.to_string(); + + // Parse the string representation + let ast = syn::parse_macro_input(&s).unwrap(); + + // Build the impl + let gen = impl_hello_world(&ast); + + // Return the generated impl + gen.parse().unwrap() +} +``` + +So there is a lot going on here. We have introduced two new crates: [`syn`] and +[`quote`]. As you may have noticed, `input: TokenSteam` is immediately converted +to a `String`. This `String` is a string representation of the Rust code for which +we are deriving `HelloWorld` for. At the moment, the only thing you can do with a +`TokenStream` is convert it to a string. A richer API will exist in the future. + +So what we really need is to be able to _parse_ Rust code into something +usable. This is where `syn` comes to play. `syn` is a crate for parsing Rust +code. The other crate we've introduced is `quote`. It's essentially the dual of +`syn` as it will make generating Rust code really easy. We could write this +stuff on our own, but it's much simpler to use these libraries. Writing a full +parser for Rust code is no simple task. + +[`syn`]: https://crates.io/crates/syn +[`quote`]: https://crates.io/crates/quote + +The comments seem to give us a pretty good idea of our overall strategy. We +are going to take a `String` of the Rust code for the type we are deriving, parse +it using `syn`, construct the implementation of `hello_world` (using `quote`), +then pass it back to Rust compiler. + +One last note: you'll see some `unwrap()`s there. If you want to provide an +error for a procedural macro, then you should `panic!` with the error message. +In this case, we're keeping it as simple as possible. + +Great, so let's write `impl_hello_world(&ast)`. + +```rust,ignore +fn impl_hello_world(ast: &syn::MacroInput) -> quote::Tokens { + let name = &ast.ident; + quote! { + impl HelloWorld for #name { + fn hello_world() { + println!("Hello, World! My name is {}", stringify!(#name)); + } + } + } +} +``` + +So this is where quotes comes in. The `ast` argument is a struct that gives us +a representation of our type (which can be either a `struct` or an `enum`). 
+Check out the [docs](https://docs.rs/syn/0.10.5/syn/struct.MacroInput.html), +there is some useful information there. We are able to get the name of the +type using `ast.ident`. The `quote!` macro let's us write up the Rust code +that we wish to return and convert it into `Tokens`. `quote!` let's us use some +really cool templating mechanics; we simply write `#name` and `quote!` will +replace it with the variable named `name`. You can even do some repetition +similar to regular macros work. You should check out the +[docs](https://docs.rs/quote) for a good introduction. + +So I think that's it. Oh, well, we do need to add dependencies for `syn` and +`quote` in the `cargo.toml` for `hello-world-derive`. + +```toml +[dependencies] +syn = "0.10.5" +quote = "0.3.10" +``` + +That should be it. Let's try to compile `hello-world`. + +```bash +error: the `#[proc_macro_derive]` attribute is only usable with crates of the `proc-macro` crate type + --> hello-world-derive/src/lib.rs:8:3 + | +8 | #[proc_macro_derive(HelloWorld)] + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +``` + +Oh, so it appears that we need to declare that our `hello-world-derive` crate is +a `proc-macro` crate type. How do we do this? Like this: + +```toml +[lib] +proc-macro = true +``` + +Ok so now, let's compile `hello-world`. Executing `cargo run` now yields: + +```bash +Hello, World! My name is FrenchToast +Hello, World! My name is Waffles +``` + +We've done it! diff --git a/src/doc/book/raw-pointers.md b/src/doc/book/raw-pointers.md index ae100aec3b..2386475d15 100644 --- a/src/doc/book/raw-pointers.md +++ b/src/doc/book/raw-pointers.md @@ -101,11 +101,11 @@ programmer *must* guarantee this. The recommended method for the conversion is: ```rust -// explicit cast +// Explicit cast: let i: u32 = 1; let p_imm: *const u32 = &i as *const u32; -// implicit coercion +// Implicit coercion: let mut m: u32 = 2; let p_mut: *mut u32 = &mut m; diff --git a/src/doc/book/references-and-borrowing.md b/src/doc/book/references-and-borrowing.md index 1e2f061b06..6c9c4fa7dd 100644 --- a/src/doc/book/references-and-borrowing.md +++ b/src/doc/book/references-and-borrowing.md @@ -46,9 +46,9 @@ like this: ```rust fn foo(v1: Vec, v2: Vec) -> (Vec, Vec, i32) { - // do stuff with v1 and v2 + // Do stuff with `v1` and `v2`. - // hand back ownership, and the result of our function + // Hand back ownership, and the result of our function. (v1, v2, 42) } @@ -63,9 +63,9 @@ the first step: ```rust fn foo(v1: &Vec, v2: &Vec) -> i32 { - // do stuff with v1 and v2 + // Do stuff with `v1` and `v2`. - // return the answer + // Return the answer. 42 } @@ -74,7 +74,7 @@ let v2 = vec![1, 2, 3]; let answer = foo(&v1, &v2); -// we can use v1 and v2 here! +// We can use `v1` and `v2` here! ``` A more concrete example: @@ -88,10 +88,10 @@ fn main() { // Borrow two vectors and sum them. // This kind of borrowing does not allow mutation through the borrowed reference. fn foo(v1: &Vec, v2: &Vec) -> i32 { - // do stuff with v1 and v2 + // Do stuff with `v1` and `v2`. let s1 = sum_vec(v1); let s2 = sum_vec(v2); - // return the answer + // Return the answer. s1 + s2 } @@ -248,12 +248,12 @@ scopes look like this: fn main() { let mut x = 5; - let y = &mut x; // -+ &mut borrow of x starts here + let y = &mut x; // -+ &mut borrow of `x` starts here. // | *y += 1; // | // | - println!("{}", x); // -+ - try to borrow x here -} // -+ &mut borrow of x ends here + println!("{}", x); // -+ - Try to borrow `x` here. +} // -+ &mut borrow of `x` ends here. 
``` @@ -265,11 +265,11 @@ So when we add the curly braces: let mut x = 5; { - let y = &mut x; // -+ &mut borrow starts here + let y = &mut x; // -+ &mut borrow starts here. *y += 1; // | -} // -+ ... and ends here +} // -+ ... and ends here. -println!("{}", x); // <- try to borrow x here +println!("{}", x); // <- Try to borrow `x` here. ``` There’s no problem. Our mutable borrow goes out of scope before we create an diff --git a/src/doc/book/strings.md b/src/doc/book/strings.md index 135778c38b..6af15d8768 100644 --- a/src/doc/book/strings.md +++ b/src/doc/book/strings.md @@ -83,10 +83,10 @@ converted using `&*`. ```rust,no_run use std::net::TcpStream; -TcpStream::connect("192.168.0.1:3000"); // &str parameter +TcpStream::connect("192.168.0.1:3000"); // Parameter is of type &str. let addr_string = "192.168.0.1:3000".to_string(); -TcpStream::connect(&*addr_string); // convert addr_string to &str +TcpStream::connect(&*addr_string); // Convert `addr_string` to &str. ``` Viewing a `String` as a `&str` is cheap, but converting the `&str` to a @@ -138,7 +138,7 @@ You can get something similar to an index like this: ```rust # let hachiko = "忠犬ハチ公"; -let dog = hachiko.chars().nth(1); // kinda like hachiko[1] +let dog = hachiko.chars().nth(1); // Kinda like `hachiko[1]`. ``` This emphasizes that we have to walk from the beginning of the list of `chars`. diff --git a/src/doc/book/structs.md b/src/doc/book/structs.md index 328db25b81..cfd00cf997 100644 --- a/src/doc/book/structs.md +++ b/src/doc/book/structs.md @@ -61,7 +61,7 @@ write something like this: ```rust,ignore struct Point { - mut x: i32, + mut x: i32, // This causes an error. y: i32, } ``` @@ -82,9 +82,9 @@ fn main() { point.x = 5; - let point = point; // now immutable + let point = point; // `point` is now immutable. - point.y = 6; // this causes an error + point.y = 6; // This causes an error. } ``` @@ -234,10 +234,10 @@ rather than positions. You can define a `struct` with no members at all: ```rust -struct Electron {} // use empty braces... -struct Proton; // ...or just a semicolon +struct Electron {} // Use empty braces... +struct Proton; // ...or just a semicolon. -// whether you declared the struct with braces or not, do the same when creating one +// Whether you declared the struct with braces or not, do the same when creating one. let x = Electron {}; let y = Proton; ``` diff --git a/src/doc/book/testing.md b/src/doc/book/testing.md index 0e6cdb8f09..ebeb992319 100644 --- a/src/doc/book/testing.md +++ b/src/doc/book/testing.md @@ -23,7 +23,11 @@ $ cd adder Cargo will automatically generate a simple test when you make a new project. Here's the contents of `src/lib.rs`: -```rust +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# #[cfg(test)] mod tests { #[test] @@ -32,6 +36,18 @@ mod tests { } ``` +For now, let's remove the `mod` bit, and focus on just the function: + +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# +#[test] +fn it_works() { +} +``` + Note the `#[test]`. This attribute indicates that this is a test function. It currently has no body. That's good enough to pass! We can run the tests with `cargo test`: @@ -39,10 +55,11 @@ currently has no body. That's good enough to pass! 
We can run the tests with ```bash $ cargo test Compiling adder v0.1.0 (file:///home/you/projects/adder) - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.15 secs + Running target/debug/deps/adder-941f01916ca4a642 running 1 test -test tests::it_works ... ok +test it_works ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured @@ -58,13 +75,15 @@ for the test we wrote, and another for documentation tests. We'll talk about those later. For now, see this line: ```text -test tests::it_works ... ok +test it_works ... ok ``` Note the `it_works`. This comes from the name of our function: ```rust +# fn main() { fn it_works() { +} # } ``` @@ -77,8 +96,11 @@ test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured So why does our do-nothing test pass? Any test which doesn't `panic!` passes, and any test that does `panic!` fails. Let's make our test fail: -```rust -# fn main() {} +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# #[test] fn it_works() { assert!(false); @@ -92,19 +114,21 @@ run our tests again: ```bash $ cargo test Compiling adder v0.1.0 (file:///home/you/projects/adder) - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.17 secs + Running target/debug/deps/adder-941f01916ca4a642 running 1 test -test tests::it_works ... FAILED +test it_works ... FAILED failures: ----- test::it_works stdout ---- - thread 'tests::it_works' panicked at 'assertion failed: false', src/lib.rs:5 +---- it_works stdout ---- + thread 'it_works' panicked at 'assertion failed: false', src/lib.rs:5 +note: Run with `RUST_BACKTRACE=1` for a backtrace. failures: - tests::it_works + it_works test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured @@ -114,7 +138,7 @@ error: test failed Rust indicates that our test failed: ```text -test tests::it_works ... FAILED +test it_works ... FAILED ``` And that's reflected in the summary line: @@ -147,8 +171,11 @@ This is useful if you want to integrate `cargo test` into other tooling. We can invert our test's failure with another attribute: `should_panic`: -```rust -# fn main() {} +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# #[test] #[should_panic] fn it_works() { @@ -161,10 +188,11 @@ This test will now succeed if we `panic!` and fail if we complete. Let's try it: ```bash $ cargo test Compiling adder v0.1.0 (file:///home/you/projects/adder) - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.17 secs + Running target/debug/deps/adder-941f01916ca4a642 running 1 test -test tests::it_works ... ok +test it_works ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured @@ -178,8 +206,11 @@ test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured Rust provides another macro, `assert_eq!`, that compares two arguments for equality: -```rust -# fn main() {} +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# #[test] #[should_panic] fn it_works() { @@ -193,10 +224,11 @@ passes: ```bash $ cargo test Compiling adder v0.1.0 (file:///home/you/projects/adder) - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.21 secs + Running target/debug/deps/adder-941f01916ca4a642 running 1 test -test tests::it_works ... 
ok +test it_works ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured @@ -213,8 +245,11 @@ parameter can be added to the `should_panic` attribute. The test harness will make sure that the failure message contains the provided text. A safer version of the example above would be: -```rust -# fn main() {} +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# #[test] #[should_panic(expected = "assertion failed")] fn it_works() { @@ -225,7 +260,10 @@ fn it_works() { That's all there is to the basics! Let's write one 'real' test: ```rust,ignore -# fn main() {} +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# pub fn add_two(a: i32) -> i32 { a + 2 } @@ -244,8 +282,15 @@ some known arguments and compare it to the expected output. Sometimes a few specific tests can be very time-consuming to execute. These can be disabled by default by using the `ignore` attribute: -```rust -# fn main() {} +```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# +pub fn add_two(a: i32) -> i32 { + a + 2 +} + #[test] fn it_works() { assert_eq!(4, add_two(2)); @@ -254,7 +299,7 @@ fn it_works() { #[test] #[ignore] fn expensive_test() { - // code that takes an hour to run + // Code that takes an hour to run... } ``` @@ -264,7 +309,8 @@ not: ```bash $ cargo test Compiling adder v0.1.0 (file:///home/you/projects/adder) - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.20 secs + Running target/debug/deps/adder-941f01916ca4a642 running 2 tests test expensive_test ... ignored @@ -283,7 +329,8 @@ The expensive tests can be run explicitly using `cargo test -- --ignored`: ```bash $ cargo test -- --ignored - Running target/debug/deps/adder-91b3e234d4ed382a + Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs + Running target/debug/deps/adder-941f01916ca4a642 running 1 test test expensive_test ... ok @@ -310,7 +357,10 @@ was missing from our last example. Let's explain what this does. The idiomatic way of writing our example looks like this: ```rust,ignore -# fn main() {} +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# pub fn add_two(a: i32) -> i32 { a + 2 } @@ -339,7 +389,10 @@ a large module, and so this is a common use of globs. Let's change our `src/lib.rs` to make use of it: ```rust,ignore -# fn main() {} +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# pub fn add_two(a: i32) -> i32 { a + 2 } @@ -389,9 +442,14 @@ To write an integration test, let's make a `tests` directory and put a `tests/integration_test.rs` file inside with this as its contents: ```rust,ignore +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# +# // Sadly, this code will not work in play.rust-lang.org, because we have no +# // crate adder to import. You'll need to try this part on your own machine. extern crate adder; -# fn main() {} #[test] fn it_works() { assert_eq!(4, adder::add_two(2)); @@ -452,7 +510,10 @@ running examples in your documentation (**note:** this only works in library crates, not binary crates). 
Here's a fleshed-out `src/lib.rs` with examples: ```rust,ignore -# fn main() {} +# // The next line exists to trick play.rust-lang.org into running our code as a +# // test: +# // fn main +# //! The `adder` crate provides functions that add numbers to other numbers. //! //! # Examples @@ -525,3 +586,45 @@ you add more examples. We haven’t covered all of the details with writing documentation tests. For more, please see the [Documentation chapter](documentation.html). + +# Testing and concurrency + +One thing that is important to note when writing tests is that they may be run +concurrently using threads. For this reason you should take care that your tests +are written in such a way as to not depend on each-other, or on any shared +state. "Shared state" can also include the environment, such as the current +working directory, or environment variables. + +If this is an issue it is possible to control this concurrency, either by +setting the environment variable `RUST_TEST_THREADS`, or by passing the argument +`--test-threads` to the tests: + +```bash +$ RUST_TEST_THREADS=1 cargo test # Run tests with no concurrency +... +$ cargo test -- --test-threads=1 # Same as above +... +``` + +# Test output + +By default Rust's test library captures and discards output to standard +out/error, e.g. output from `println!()`. This too can be controlled using the +environment or a switch: + + +```bash +$ RUST_TEST_NOCAPTURE=1 cargo test # Preserve stdout/stderr +... +$ cargo test -- --nocapture # Same as above +... +``` + +However a better method avoiding capture is to use logging rather than raw +output. Rust has a [standard logging API][log], which provides a frontend to +multiple logging implementations. This can be used in conjunction with the +default [env_logger] to output any debugging information in a manner that can be +controlled at runtime. + +[log]: https://crates.io/crates/log +[env_logger]: https://crates.io/crates/env_logger diff --git a/src/doc/book/trait-objects.md b/src/doc/book/trait-objects.md index b1aee579aa..a0396a75fa 100644 --- a/src/doc/book/trait-objects.md +++ b/src/doc/book/trait-objects.md @@ -221,8 +221,8 @@ struct FooVtable { // u8: fn call_method_on_u8(x: *const ()) -> String { - // the compiler guarantees that this function is only called - // with `x` pointing to a u8 + // The compiler guarantees that this function is only called + // with `x` pointing to a u8. let byte: &u8 = unsafe { &*(x as *const u8) }; byte.method() @@ -233,7 +233,7 @@ static Foo_for_u8_vtable: FooVtable = FooVtable { size: 1, align: 1, - // cast to a function pointer + // Cast to a function pointer: method: call_method_on_u8 as fn(*const ()) -> String, }; @@ -241,8 +241,8 @@ static Foo_for_u8_vtable: FooVtable = FooVtable { // String: fn call_method_on_String(x: *const ()) -> String { - // the compiler guarantees that this function is only called - // with `x` pointing to a String + // The compiler guarantees that this function is only called + // with `x` pointing to a String. let string: &String = unsafe { &*(x as *const String) }; string.method() @@ -250,7 +250,7 @@ fn call_method_on_String(x: *const ()) -> String { static Foo_for_String_vtable: FooVtable = FooVtable { destructor: /* compiler magic */, - // values for a 64-bit computer, halve them for 32-bit ones + // Values for a 64-bit computer, halve them for 32-bit ones. 
size: 24, align: 8, @@ -278,17 +278,17 @@ let x: u8 = 1; // let b: &Foo = &a; let b = TraitObject { - // store the data + // Store the data: data: &a, - // store the methods + // Store the methods: vtable: &Foo_for_String_vtable }; // let y: &Foo = x; let y = TraitObject { - // store the data + // Store the data: data: &x, - // store the methods + // Store the methods: vtable: &Foo_for_u8_vtable }; diff --git a/src/doc/book/traits.md b/src/doc/book/traits.md index b0d954adf6..4747869b65 100644 --- a/src/doc/book/traits.md +++ b/src/doc/book/traits.md @@ -243,28 +243,22 @@ to know more about [operator traits][operators-and-overloading]. # Rules for implementing traits So far, we’ve only added trait implementations to structs, but you can -implement a trait for any type. So technically, we _could_ implement `HasArea` -for `i32`: +implement a trait for any type such as `f32`: ```rust -trait HasArea { - fn area(&self) -> f64; +trait ApproxEqual { + fn approx_equal(&self, other: &Self) -> bool; } - -impl HasArea for i32 { - fn area(&self) -> f64 { - println!("this is silly"); - - *self as f64 +impl ApproxEqual for f32 { + fn approx_equal(&self, other: &Self) -> bool { + // Appropriate for `self` and `other` being close to 1.0. + (self - other).abs() <= ::std::f32::EPSILON } } -5.area(); +println!("{}", 1.0.approx_equal(&1.00000001)); ``` -It is considered poor style to implement methods on such primitive types, even -though it is possible. - This may seem like the Wild West, but there are two restrictions around implementing traits that prevent this from getting out of hand. The first is that if the trait isn’t defined in your scope, it doesn’t apply. Here’s an @@ -276,9 +270,9 @@ won’t have its methods: ```rust,ignore let mut f = std::fs::File::create("foo.txt").expect("Couldn’t create foo.txt"); -let buf = b"whatever"; // byte string literal. buf: &[u8; 8] +let buf = b"whatever"; // buf: &[u8; 8], a byte string literal. let result = f.write(buf); -# result.unwrap(); // ignore the error +# result.unwrap(); // Ignore the error. ``` Here’s the error: @@ -297,7 +291,7 @@ use std::io::Write; let mut f = std::fs::File::create("foo.txt").expect("Couldn’t create foo.txt"); let buf = b"whatever"; let result = f.write(buf); -# result.unwrap(); // ignore the error +# result.unwrap(); // Ignore the error. ``` This will compile without error. @@ -419,14 +413,14 @@ impl ConvertTo for i32 { fn convert(&self) -> i64 { *self as i64 } } -// can be called with T == i32 +// Can be called with T == i32. fn normal>(x: &T) -> i64 { x.convert() } -// can be called with T == i64 +// Can be called with T == i64. fn inverse(x: i32) -> T - // this is using ConvertTo as if it were "ConvertTo" + // This is using ConvertTo as if it were "ConvertTo". where i32: ConvertTo { x.convert() } @@ -476,15 +470,15 @@ impl Foo for OverrideDefault { fn is_invalid(&self) -> bool { println!("Called OverrideDefault.is_invalid!"); - true // overrides the expected value of is_invalid() + true // Overrides the expected value of `is_invalid()`. } } let default = UseDefault; -assert!(!default.is_invalid()); // prints "Called UseDefault.is_valid." +assert!(!default.is_invalid()); // Prints "Called UseDefault.is_valid." let over = OverrideDefault; -assert!(over.is_invalid()); // prints "Called OverrideDefault.is_invalid!" +assert!(over.is_invalid()); // Prints "Called OverrideDefault.is_invalid!" 
``` # Inheritance diff --git a/src/doc/book/unsafe.md b/src/doc/book/unsafe.md index 9cab586b82..a272afa70b 100644 --- a/src/doc/book/unsafe.md +++ b/src/doc/book/unsafe.md @@ -12,7 +12,7 @@ four contexts. The first one is to mark a function as unsafe: ```rust unsafe fn danger_will_robinson() { - // scary stuff + // Scary stuff... } ``` @@ -23,7 +23,7 @@ The second use of `unsafe` is an unsafe block: ```rust unsafe { - // scary stuff + // Scary stuff... } ``` diff --git a/src/doc/book/variable-bindings.md b/src/doc/book/variable-bindings.md index 03f17371de..37b6c0513f 100644 --- a/src/doc/book/variable-bindings.md +++ b/src/doc/book/variable-bindings.md @@ -47,7 +47,7 @@ let x: i32 = 5; ``` If I asked you to read this out loud to the rest of the class, you’d say “`x` -is a binding with the type `i32` and the value `five`.” +is a binding with the type `i32` and the value `5`.” In this case we chose to represent `x` as a 32-bit signed integer. Rust has many different primitive integer types. They begin with `i` for signed integers @@ -194,7 +194,7 @@ fn main() { let y: i32 = 3; println!("The value of x is {} and value of y is {}", x, y); } - println!("The value of x is {} and value of y is {}", x, y); // This won't work + println!("The value of x is {} and value of y is {}", x, y); // This won't work. } ``` @@ -207,7 +207,7 @@ Instead we get this error: $ cargo build Compiling hello v0.1.0 (file:///home/you/projects/hello_world) main.rs:7:62: 7:63 error: unresolved name `y`. Did you mean `x`? [E0425] -main.rs:7 println!("The value of x is {} and value of y is {}", x, y); // This won't work +main.rs:7 println!("The value of x is {} and value of y is {}", x, y); // This won't work. ^ note: in expansion of format_args! :2:25: 2:56 note: expansion site @@ -229,13 +229,13 @@ scope will override the previous binding. ```rust let x: i32 = 8; { - println!("{}", x); // Prints "8" + println!("{}", x); // Prints "8". let x = 12; - println!("{}", x); // Prints "12" + println!("{}", x); // Prints "12". } -println!("{}", x); // Prints "8" +println!("{}", x); // Prints "8". let x = 42; -println!("{}", x); // Prints "42" +println!("{}", x); // Prints "42". ``` Shadowing and mutable bindings may appear as two sides of the same coin, but @@ -249,8 +249,8 @@ by any means. ```rust let mut x: i32 = 1; x = 7; -let x = x; // x is now immutable and is bound to 7 +let x = x; // `x` is now immutable and is bound to `7`. let y = 4; -let y = "I can also be bound to text!"; // y is now of a different type +let y = "I can also be bound to text!"; // `y` is now of a different type. ``` diff --git a/src/doc/book/vectors.md b/src/doc/book/vectors.md index cb6781cdf2..b948a54f44 100644 --- a/src/doc/book/vectors.md +++ b/src/doc/book/vectors.md @@ -17,7 +17,7 @@ situation, this is just convention.) There’s an alternate form of `vec!` for repeating an initial value: ```rust -let v = vec![0; 10]; // ten zeroes +let v = vec![0; 10]; // A vector of ten zeroes. ``` Vectors store their contents as contiguous arrays of `T` on the heap. This means @@ -46,10 +46,10 @@ let v = vec![1, 2, 3, 4, 5]; let i: usize = 0; let j: i32 = 0; -// works +// Works: v[i]; -// doesn’t +// Doesn’t: v[j]; ``` diff --git a/src/doc/index.md b/src/doc/index.md index f8a1ec134d..71dfcf0b06 100644 --- a/src/doc/index.md +++ b/src/doc/index.md @@ -17,7 +17,7 @@ the language. [**The Rust Reference**][ref]. While Rust does not have a specification, the reference tries to describe its working in -detail. It tends to be out of date. +detail. 
It is accurate, but not necessarily complete. [**Standard Library API Reference**][api]. Documentation for the standard library. diff --git a/src/doc/reference.md b/src/doc/reference.md index 711a13d21f..e7cfe4bb2e 100644 --- a/src/doc/reference.md +++ b/src/doc/reference.md @@ -21,6 +21,11 @@ separately by extracting documentation attributes from their source code. Many of the features that one might expect to be language features are library features in Rust, so what you're looking for may be there, not here. +Finally, this document is not normative. It may include details that are +specific to `rustc` itself, and should not be taken as a specification for +the Rust language. We intend to produce such a document someday, but this +is what we have for now. + You may also be interested in the [grammar]. [book]: book/index.html @@ -550,26 +555,24 @@ mod a { # fn main() {} ``` -# Syntax extensions +# Macros A number of minor features of Rust are not central enough to have their own syntax, and yet are not implementable as functions. Instead, they are given names, and invoked through a consistent syntax: `some_extension!(...)`. -Users of `rustc` can define new syntax extensions in two ways: - -* [Compiler plugins][plugin] can include arbitrary Rust code that - manipulates syntax trees at compile time. Note that the interface - for compiler plugins is considered highly unstable. +Users of `rustc` can define new macros in two ways: * [Macros](book/macros.html) define new syntax in a higher-level, declarative way. +* [Procedural Macros][procedural macros] can be used to implement custom derive. + +And one unstable way: [compiler plugins][plugin]. ## Macros `macro_rules` allows users to define syntax extension in a declarative way. We -call such extensions "macros by example" or simply "macros" — to be distinguished -from the "procedural macros" defined in [compiler plugins][plugin]. +call such extensions "macros by example" or simply "macros". Currently, macros can expand to expressions, statements, items, or patterns. @@ -598,7 +601,8 @@ syntax named by _designator_. Valid designators are: * `ty`: a [type](#types) * `ident`: an [identifier](#identifiers) * `path`: a [path](#paths) -* `tt`: either side of the `=>` in macro rules +* `tt`: a token tree (a single [token](#tokens) or a sequence of token trees surrounded + by matching `()`, `[]`, or `{}`) * `meta`: the contents of an [attribute](#attributes) In the transcriber, the @@ -646,6 +650,28 @@ Rust syntax is restricted in two ways: [RFC 550]: https://github.com/rust-lang/rfcs/blob/master/text/0550-macro-future-proofing.md +## Procedural Macros + +"Procedural macros" are the second way to implement a macro. For now, the only +thing they can be used for is to implement derive on your own types. See +[the book][procedural macros] for a tutorial. + +Procedural macros involve a few different parts of the language and its +standard libraries. First is the `proc_macro` crate, included with Rust, +that defines an interface for building a procedural macro. The +`#[proc_macro_derive(Foo)]` attribute is used to mark the deriving +function. This function must have the type signature: + +```rust,ignore +use proc_macro::TokenStream; + +#[proc_macro_derive(Hello)] +pub fn hello_world(input: TokenStream) -> TokenStream +``` + +Finally, procedural macros must be in their own crate, with the `proc-macro` crate type.
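For readers who want to see these pieces together, here is a minimal, self-contained sketch of such a crate's `src/lib.rs`, written without any helper libraries. The naive string handling is purely illustrative (it assumes the input item looks like `struct Name ...`); the `syn`/`quote` approach in the book's procedural-macros chapter is the more robust route.

```rust,ignore
// lib.rs of a hypothetical derive crate built with `proc-macro = true` in its
// Cargo.toml (i.e. the `proc-macro` crate type).
extern crate proc_macro;

use proc_macro::TokenStream;

#[proc_macro_derive(Hello)]
pub fn hello(input: TokenStream) -> TokenStream {
    // Currently the main way to inspect the input is through its string form.
    let source = input.to_string();

    // Extremely naive: assume the item starts with `struct Name`.
    let name = source
        .split_whitespace()
        .nth(1)
        .expect("expected a type name")
        .trim_matches(|c| c == ';' || c == '{')
        .to_string();

    // The returned tokens are appended after the type definition, so we emit
    // only the new impl, not the original item.
    format!("impl Hello for {} {{ fn hello() {{ println!(\"Hello, {}!\"); }} }}",
            name, name)
        .parse()
        .unwrap()
}
```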
+ # Crates and source files Although Rust, like any other language, can be implemented by an interpreter as @@ -735,13 +761,14 @@ There are several kinds of item: * [`extern crate` declarations](#extern-crate-declarations) * [`use` declarations](#use-declarations) * [modules](#modules) -* [functions](#functions) +* [function definitions](#functions) +* [`extern` blocks](#external-blocks) * [type definitions](grammar.html#type-definitions) -* [structs](#structs) -* [enumerations](#enumerations) +* [struct definitions](#structs) +* [enumeration definitions](#enumerations) * [constant items](#constant-items) * [static items](#static-items) -* [traits](#traits) +* [trait definitions](#traits) * [implementations](#implementations) Some items form an implicit scope for the declaration of sub-items. In other @@ -2302,6 +2329,9 @@ impl PartialEq for Foo { } ``` +You can implement `derive` for your own type through [procedural +macros](#procedural-macros). + ### Compiler Features Certain aspects of Rust may be implemented in the compiler, but they're not @@ -2457,11 +2487,6 @@ The currently implemented features of the reference compiler are: * `unboxed_closures` - Rust's new closure design, which is currently a work in progress feature with many known bugs. -* `unmarked_api` - Allows use of items within a `#![staged_api]` crate - which have not been marked with a stability marker. - Such items should not be allowed by the compiler to exist, - so if you need this there probably is a compiler bug. - * `allow_internal_unstable` - Allows `macro_rules!` macros to be tagged with the `#[allow_internal_unstable]` attribute, designed to allow `std` macros to call @@ -2469,18 +2494,19 @@ The currently implemented features of the reference compiler are: internally without imposing on callers (i.e. making them behave like function calls in terms of encapsulation). -* - `default_type_parameter_fallback` - Allows type parameter defaults to - influence type inference. -* - `stmt_expr_attributes` - Allows attributes on expressions. +* `default_type_parameter_fallback` - Allows type parameter defaults to + influence type inference. -* - `type_ascription` - Allows type ascription expressions `expr: Type`. +* `stmt_expr_attributes` - Allows attributes on expressions. -* - `abi_vectorcall` - Allows the usage of the vectorcall calling convention - (e.g. `extern "vectorcall" func fn_();`) +* `type_ascription` - Allows type ascription expressions `expr: Type`. -* - `abi_sysv64` - Allows the usage of the system V AMD64 calling convention - (e.g. `extern "sysv64" func fn_();`) +* `abi_vectorcall` - Allows the usage of the vectorcall calling convention + (e.g. `extern "vectorcall" func fn_();`) + +* `abi_sysv64` - Allows the usage of the system V AMD64 calling convention + (e.g. `extern "sysv64" func fn_();`) If a feature is promoted to a language feature, then all existing programs will start to receive compilation warnings about `#![feature]` directives which enabled @@ -4109,6 +4135,16 @@ be ignored in favor of only building the artifacts specified by command line. in dynamic libraries. This form of output is used to produce statically linked executables as well as `staticlib` outputs. +* `--crate-type=proc-macro`, `#[crate_type = "proc-macro"]` - The output + produced is not specified, but if a `-L` path is provided to it then the + compiler will recognize the output artifacts as a macro and it can be loaded + for a program. 
If a crate is compiled with the `proc-macro` crate type it + will forbid exporting any items in the crate other than those functions + tagged `#[proc_macro_derive]` and those functions must also be placed at the + crate root. Finally, the compiler will automatically set the + `cfg(proc_macro)` annotation whenever any crate type of a compilation is the + `proc-macro` crate type. + Note that these outputs are stackable in the sense that if multiple are specified, then the compiler will produce each form of output at once without having to recompile. However, this only applies for outputs specified by the @@ -4286,3 +4322,4 @@ that have since been removed): [ffi]: book/ffi.html [plugin]: book/compiler-plugins.html +[procedural macros]: book/procedural-macros.html diff --git a/src/doc/rust.css b/src/doc/rust.css index 932594b991..664bc0fdab 100644 --- a/src/doc/rust.css +++ b/src/doc/rust.css @@ -44,7 +44,9 @@ font-family: 'Source Code Pro'; font-style: normal; font-weight: 400; - src: local('Source Code Pro'), url("SourceCodePro-Regular.woff") format('woff'); + /* Avoid using locally installed font because bad versions are in circulation: + * see https://github.com/rust-lang/rust/issues/24355 */ + src: url("SourceCodePro-Regular.woff") format('woff'); } *:not(body) { diff --git a/src/etc/char_private.py b/src/etc/char_private.py index 3566d14352..9d15f98e06 100644 --- a/src/etc/char_private.py +++ b/src/etc/char_private.py @@ -11,11 +11,16 @@ # except according to those terms. # This script uses the following Unicode tables: -# - Categories.txt +# - UnicodeData.txt + +from collections import namedtuple +import csv import os import subprocess +NUM_CODEPOINTS=0x110000 + def to_ranges(iter): current = None for i in iter: @@ -28,10 +33,10 @@ def to_ranges(iter): if current is not None: yield tuple(current) -def get_escaped(dictionary): - for i in range(0x110000): - if dictionary.get(i, "Cn") in "Cc Cf Cs Co Cn Zl Zp Zs".split() and i != ord(' '): - yield i +def get_escaped(codepoints): + for c in codepoints: + if (c.class_ or "Cn") in "Cc Cf Cs Co Cn Zl Zp Zs".split() and c.value != ord(' '): + yield c.value def get_file(f): try: @@ -40,10 +45,41 @@ def get_file(f): subprocess.run(["curl", "-O", f], check=True) return open(os.path.basename(f)) -def main(): - file = get_file("http://www.unicode.org/notes/tn36/Categories.txt") +Codepoint = namedtuple('Codepoint', 'value class_') - dictionary = {int(line.split()[0], 16): line.split()[1] for line in file} +def get_codepoints(f): + r = csv.reader(f, delimiter=";") + prev_codepoint = 0 + class_first = None + for row in r: + codepoint = int(row[0], 16) + name = row[1] + class_ = row[2] + + if class_first is not None: + if not name.endswith("Last>"): + raise ValueError("Missing Last after First") + + for c in range(prev_codepoint + 1, codepoint): + yield Codepoint(c, class_first) + + class_first = None + if name.endswith("First>"): + class_first = class_ + + yield Codepoint(codepoint, class_) + prev_codepoint = codepoint + + if class_first != None: + raise ValueError("Missing Last after First") + + for c in range(prev_codepoint + 1, NUM_CODEPOINTS): + yield Codepoint(c, None) + +def main(): + file = get_file("http://www.unicode.org/Public/UNIDATA/UnicodeData.txt") + + codepoints = get_codepoints(file) CUTOFF=0x10000 singletons0 = [] @@ -52,7 +88,7 @@ def main(): normal1 = [] extra = [] - for a, b in to_ranges(get_escaped(dictionary)): + for a, b in to_ranges(get_escaped(codepoints)): if a > 2 * CUTOFF: extra.append((a, b - a)) elif a == b - 1: diff --git 
a/src/etc/generate-deriving-span-tests.py b/src/etc/generate-deriving-span-tests.py index 790fc89428..6642da858e 100755 --- a/src/etc/generate-deriving-span-tests.py +++ b/src/etc/generate-deriving-span-tests.py @@ -37,8 +37,6 @@ TEMPLATE = """// Copyright {year} The Rust Project Developers. See the COPYRIGHT // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - {error_deriving} struct Error; {code} @@ -106,7 +104,6 @@ STRUCT = 2 ALL = STRUCT | ENUM traits = { - 'Zero': (STRUCT, [], 1), 'Default': (STRUCT, [], 1), 'FromPrimitive': (0, [], 0), # only works for C-like enums @@ -116,7 +113,7 @@ traits = { for (trait, supers, errs) in [('Clone', [], 1), ('PartialEq', [], 2), - ('PartialOrd', ['PartialEq'], 8), + ('PartialOrd', ['PartialEq'], 9), ('Eq', ['PartialEq'], 1), ('Ord', ['Eq', 'PartialOrd', 'PartialEq'], 1), ('Debug', [], 1), diff --git a/src/grammar/README.md b/src/grammar/README.md new file mode 100644 index 0000000000..cd2dd38de3 --- /dev/null +++ b/src/grammar/README.md @@ -0,0 +1,33 @@ +# Reference grammar. + +Uses [antlr4](http://www.antlr.org/) and a custom Rust tool to compare +ASTs/token streams generated. You can use the `make check-lexer` target to +run all of the available tests. + +The build of the rust part is included with `make tidy` and can be run with `make check-build-lexer-verifier`. + +# Manual build + +To use manually, assuming antlr4 ist installed at `/usr/share/java/antlr-complete.jar`: + +``` +antlr4 RustLexer.g4 +javac -classpath /usr/share/java/antlr-complete.jar *.java +rustc -O verify.rs +for file in ../*/**.rs; do + echo $file; + grun RustLexer tokens -tokens < "$file" | ./verify "$file" RustLexer.tokens || break +done +``` + +Note That the `../*/**.rs` glob will match every `*.rs` file in the above +directory and all of its recursive children. This is a zsh extension. + + +## Cleanup + +To cleanup you can use a command like this: + +```bash +rm -f verify *.class *.java *.tokens +``` diff --git a/src/grammar/RustLexer.g4 b/src/grammar/RustLexer.g4 new file mode 100644 index 0000000000..a63fc59e50 --- /dev/null +++ b/src/grammar/RustLexer.g4 @@ -0,0 +1,197 @@ +lexer grammar RustLexer; + +@lexer::members { + public boolean is_at(int pos) { + return _input.index() == pos; + } +} + + +tokens { + EQ, LT, LE, EQEQ, NE, GE, GT, ANDAND, OROR, NOT, TILDE, PLUS, + MINUS, STAR, SLASH, PERCENT, CARET, AND, OR, SHL, SHR, BINOP, + BINOPEQ, LARROW, AT, DOT, DOTDOT, DOTDOTDOT, COMMA, SEMI, COLON, + MOD_SEP, RARROW, FAT_ARROW, LPAREN, RPAREN, LBRACKET, RBRACKET, + LBRACE, RBRACE, POUND, DOLLAR, UNDERSCORE, LIT_CHAR, LIT_BYTE, + LIT_INTEGER, LIT_FLOAT, LIT_STR, LIT_STR_RAW, LIT_BYTE_STR, + LIT_BYTE_STR_RAW, QUESTION, IDENT, LIFETIME, WHITESPACE, DOC_COMMENT, + COMMENT, SHEBANG, UTF8_BOM +} + +import xidstart , xidcontinue; + + +/* Expression-operator symbols */ + +EQ : '=' ; +LT : '<' ; +LE : '<=' ; +EQEQ : '==' ; +NE : '!=' ; +GE : '>=' ; +GT : '>' ; +ANDAND : '&&' ; +OROR : '||' ; +NOT : '!' ; +TILDE : '~' ; +PLUS : '+' ; +MINUS : '-' ; +STAR : '*' ; +SLASH : '/' ; +PERCENT : '%' ; +CARET : '^' ; +AND : '&' ; +OR : '|' ; +SHL : '<<' ; +SHR : '>>' ; +LARROW : '<-' ; + +BINOP + : PLUS + | SLASH + | MINUS + | STAR + | PERCENT + | CARET + | AND + | OR + | SHL + | SHR + | LARROW + ; + +BINOPEQ : BINOP EQ ; + +/* "Structural symbols" */ + +AT : '@' ; +DOT : '.' ; +DOTDOT : '..' ; +DOTDOTDOT : '...' 
; +COMMA : ',' ; +SEMI : ';' ; +COLON : ':' ; +MOD_SEP : '::' ; +RARROW : '->' ; +FAT_ARROW : '=>' ; +LPAREN : '(' ; +RPAREN : ')' ; +LBRACKET : '[' ; +RBRACKET : ']' ; +LBRACE : '{' ; +RBRACE : '}' ; +POUND : '#'; +DOLLAR : '$' ; +UNDERSCORE : '_' ; + +// Literals + +fragment HEXIT + : [0-9a-fA-F] + ; + +fragment CHAR_ESCAPE + : [nrt\\'"0] + | [xX] HEXIT HEXIT + | 'u' HEXIT HEXIT HEXIT HEXIT + | 'U' HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT + | 'u{' HEXIT '}' + | 'u{' HEXIT HEXIT '}' + | 'u{' HEXIT HEXIT HEXIT '}' + | 'u{' HEXIT HEXIT HEXIT HEXIT '}' + | 'u{' HEXIT HEXIT HEXIT HEXIT HEXIT '}' + | 'u{' HEXIT HEXIT HEXIT HEXIT HEXIT HEXIT '}' + ; + +fragment SUFFIX + : IDENT + ; + +fragment INTEGER_SUFFIX + : { _input.LA(1) != 'e' && _input.LA(1) != 'E' }? SUFFIX + ; + +LIT_CHAR + : '\'' ( '\\' CHAR_ESCAPE + | ~[\\'\n\t\r] + | '\ud800' .. '\udbff' '\udc00' .. '\udfff' + ) + '\'' SUFFIX? + ; + +LIT_BYTE + : 'b\'' ( '\\' ( [xX] HEXIT HEXIT + | [nrt\\'"0] ) + | ~[\\'\n\t\r] '\udc00'..'\udfff'? + ) + '\'' SUFFIX? + ; + +LIT_INTEGER + + : [0-9][0-9_]* INTEGER_SUFFIX? + | '0b' [01_]+ INTEGER_SUFFIX? + | '0o' [0-7_]+ INTEGER_SUFFIX? + | '0x' [0-9a-fA-F_]+ INTEGER_SUFFIX? + ; + +LIT_FLOAT + : [0-9][0-9_]* ('.' { + /* dot followed by another dot is a range, not a float */ + _input.LA(1) != '.' && + /* dot followed by an identifier is an integer with a function call, not a float */ + _input.LA(1) != '_' && + !(_input.LA(1) >= 'a' && _input.LA(1) <= 'z') && + !(_input.LA(1) >= 'A' && _input.LA(1) <= 'Z') + }? | ('.' [0-9][0-9_]*)? ([eE] [-+]? [0-9][0-9_]*)? SUFFIX?) + ; + +LIT_STR + : '"' ('\\\n' | '\\\r\n' | '\\' CHAR_ESCAPE | .)*? '"' SUFFIX? + ; + +LIT_BYTE_STR : 'b' LIT_STR ; +LIT_BYTE_STR_RAW : 'b' LIT_STR_RAW ; + +/* this is a bit messy */ + +fragment LIT_STR_RAW_INNER + : '"' .*? '"' + | LIT_STR_RAW_INNER2 + ; + +fragment LIT_STR_RAW_INNER2 + : POUND LIT_STR_RAW_INNER POUND + ; + +LIT_STR_RAW + : 'r' LIT_STR_RAW_INNER SUFFIX? + ; + + +QUESTION : '?'; + +IDENT : XID_Start XID_Continue* ; + +fragment QUESTION_IDENTIFIER : QUESTION? IDENT; + +LIFETIME : '\'' IDENT ; + +WHITESPACE : [ \r\n\t]+ ; + +UNDOC_COMMENT : '////' ~[\n]* -> type(COMMENT) ; +YESDOC_COMMENT : '///' ~[\r\n]* -> type(DOC_COMMENT) ; +OUTER_DOC_COMMENT : '//!' ~[\r\n]* -> type(DOC_COMMENT) ; +LINE_COMMENT : '//' ( ~[/\n] ~[\n]* )? -> type(COMMENT) ; + +DOC_BLOCK_COMMENT + : ('/**' ~[*] | '/*!') (DOC_BLOCK_COMMENT | .)*? '*/' -> type(DOC_COMMENT) + ; + +BLOCK_COMMENT : '/*' (BLOCK_COMMENT | .)*? '*/' -> type(COMMENT) ; + +/* these appear at the beginning of a file */ + +SHEBANG : '#!' { is_at(2) && _input.LA(1) != '[' }? ~[\r\n]* -> type(SHEBANG) ; + +UTF8_BOM : '\ufeff' { is_at(1) }? -> skip ; diff --git a/src/grammar/check.sh b/src/grammar/check.sh new file mode 100755 index 0000000000..70a8f6fca2 --- /dev/null +++ b/src/grammar/check.sh @@ -0,0 +1,52 @@ +#!/bin/sh + +# ignore-license + +# Run the reference lexer against libsyntax and compare the tokens and spans. +# If "// ignore-lexer-test" is present in the file, it will be ignored. + + +# Argument $1 is the file to check, $2 is the classpath to use, $3 is the path +# to the grun binary, $4 is the path to the verify binary, $5 is the path to +# RustLexer.tokens +if [ "${VERBOSE}" == "1" ]; then + set -x +fi + +passed=0 +failed=0 +skipped=0 + +check() { + grep --silent "// ignore-lexer-test" "$1"; + + # if it is *not* found... + if [ $? -eq 1 ]; then + cd $2 # This `cd` is so java will pick up RustLexer.class. 
I could not + # figure out how to wrangle the CLASSPATH, just adding build/grammar + # did not seem to have any effect. + if $3 RustLexer tokens -tokens < $1 | $4 $1 $5; then + echo "pass: $1" + passed=`expr $passed + 1` + else + echo "fail: $1" + failed=`expr $failed + 1` + fi + else + echo "skip: $1" + skipped=`expr $skipped + 1` + fi +} + +for file in $(find $1 -iname '*.rs' ! -path '*/test/compile-fail*'); do + check "$file" $2 $3 $4 $5 +done + +printf "\ntest result: " + +if [ $failed -eq 0 ]; then + printf "ok. $passed passed; $failed failed; $skipped skipped\n\n" +else + printf "failed. $passed passed; $failed failed; $skipped skipped\n\n" + exit 1 +fi diff --git a/src/grammar/lexer.l b/src/grammar/lexer.l new file mode 100644 index 0000000000..77737c9949 --- /dev/null +++ b/src/grammar/lexer.l @@ -0,0 +1,343 @@ +%{ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#include +#include + +static int num_hashes; +static int end_hashes; +static int saw_non_hash; + +%} + +%option stack +%option yylineno + +%x str +%x rawstr +%x rawstr_esc_begin +%x rawstr_esc_body +%x rawstr_esc_end +%x byte +%x bytestr +%x rawbytestr +%x rawbytestr_nohash +%x pound +%x shebang_or_attr +%x ltorchar +%x linecomment +%x doc_line +%x blockcomment +%x doc_block +%x suffix + +ident [a-zA-Z\x80-\xff_][a-zA-Z0-9\x80-\xff_]* + +%% + +{ident} { BEGIN(INITIAL); } +(.|\n) { yyless(0); BEGIN(INITIAL); } + +[ \n\t\r] { } + +\xef\xbb\xbf { + // UTF-8 byte order mark (BOM), ignore if in line 1, error otherwise + if (yyget_lineno() != 1) { + return -1; + } +} + +\/\/(\/|\!) { BEGIN(doc_line); yymore(); } +\n { BEGIN(INITIAL); + yyleng--; + yytext[yyleng] = 0; + return ((yytext[2] == '!') ? INNER_DOC_COMMENT : OUTER_DOC_COMMENT); + } +[^\n]* { yymore(); } + +\/\/|\/\/\/\/ { BEGIN(linecomment); } +\n { BEGIN(INITIAL); } +[^\n]* { } + +\/\*(\*|\!)[^*] { yy_push_state(INITIAL); yy_push_state(doc_block); yymore(); } +\/\* { yy_push_state(doc_block); yymore(); } +\*\/ { + yy_pop_state(); + if (yy_top_state() == doc_block) { + yymore(); + } else { + return ((yytext[2] == '!') ? 
INNER_DOC_COMMENT : OUTER_DOC_COMMENT); + } +} +(.|\n) { yymore(); } + +\/\* { yy_push_state(blockcomment); } +\/\* { yy_push_state(blockcomment); } +\*\/ { yy_pop_state(); } +(.|\n) { } + +_ { return UNDERSCORE; } +as { return AS; } +box { return BOX; } +break { return BREAK; } +const { return CONST; } +continue { return CONTINUE; } +crate { return CRATE; } +else { return ELSE; } +enum { return ENUM; } +extern { return EXTERN; } +false { return FALSE; } +fn { return FN; } +for { return FOR; } +if { return IF; } +impl { return IMPL; } +in { return IN; } +let { return LET; } +loop { return LOOP; } +match { return MATCH; } +mod { return MOD; } +move { return MOVE; } +mut { return MUT; } +priv { return PRIV; } +proc { return PROC; } +pub { return PUB; } +ref { return REF; } +return { return RETURN; } +self { return SELF; } +static { return STATIC; } +struct { return STRUCT; } +trait { return TRAIT; } +true { return TRUE; } +type { return TYPE; } +typeof { return TYPEOF; } +unsafe { return UNSAFE; } +use { return USE; } +where { return WHERE; } +while { return WHILE; } + +{ident} { return IDENT; } + +0x[0-9a-fA-F_]+ { BEGIN(suffix); return LIT_INTEGER; } +0o[0-8_]+ { BEGIN(suffix); return LIT_INTEGER; } +0b[01_]+ { BEGIN(suffix); return LIT_INTEGER; } +[0-9][0-9_]* { BEGIN(suffix); return LIT_INTEGER; } +[0-9][0-9_]*\.(\.|[a-zA-Z]) { yyless(yyleng - 2); BEGIN(suffix); return LIT_INTEGER; } + +[0-9][0-9_]*\.[0-9_]*([eE][-\+]?[0-9_]+)? { BEGIN(suffix); return LIT_FLOAT; } +[0-9][0-9_]*(\.[0-9_]*)?[eE][-\+]?[0-9_]+ { BEGIN(suffix); return LIT_FLOAT; } + +; { return ';'; } +, { return ','; } +\.\.\. { return DOTDOTDOT; } +\.\. { return DOTDOT; } +\. { return '.'; } +\( { return '('; } +\) { return ')'; } +\{ { return '{'; } +\} { return '}'; } +\[ { return '['; } +\] { return ']'; } +@ { return '@'; } +# { BEGIN(pound); yymore(); } +\! { BEGIN(shebang_or_attr); yymore(); } +\[ { + BEGIN(INITIAL); + yyless(2); + return SHEBANG; +} +[^\[\n]*\n { + // Since the \n was eaten as part of the token, yylineno will have + // been incremented to the value 2 if the shebang was on the first + // line. This yyless undoes that, setting yylineno back to 1. + yyless(yyleng - 1); + if (yyget_lineno() == 1) { + BEGIN(INITIAL); + return SHEBANG_LINE; + } else { + BEGIN(INITIAL); + yyless(2); + return SHEBANG; + } +} +. { BEGIN(INITIAL); yyless(1); return '#'; } + +\~ { return '~'; } +:: { return MOD_SEP; } +: { return ':'; } +\$ { return '$'; } +\? { return '?'; } + +== { return EQEQ; } +=> { return FAT_ARROW; } += { return '='; } +\!= { return NE; } +\! 
{ return '!'; } +\<= { return LE; } +\<\< { return SHL; } +\<\<= { return SHLEQ; } +\< { return '<'; } +\>= { return GE; } +\>\> { return SHR; } +\>\>= { return SHREQ; } +\> { return '>'; } + +\x27 { BEGIN(ltorchar); yymore(); } +static { BEGIN(INITIAL); return STATIC_LIFETIME; } +{ident} { BEGIN(INITIAL); return LIFETIME; } +\\[nrt\\\x27\x220]\x27 { BEGIN(suffix); return LIT_CHAR; } +\\x[0-9a-fA-F]{2}\x27 { BEGIN(suffix); return LIT_CHAR; } +\\u\{[0-9a-fA-F]?{6}\}\x27 { BEGIN(suffix); return LIT_CHAR; } +.\x27 { BEGIN(suffix); return LIT_CHAR; } +[\x80-\xff]{2,4}\x27 { BEGIN(suffix); return LIT_CHAR; } +<> { BEGIN(INITIAL); return -1; } + +b\x22 { BEGIN(bytestr); yymore(); } +\x22 { BEGIN(suffix); return LIT_BYTE_STR; } + +<> { return -1; } +\\[n\nrt\\\x27\x220] { yymore(); } +\\x[0-9a-fA-F]{2} { yymore(); } +\\u\{[0-9a-fA-F]?{6}\} { yymore(); } +\\[^n\nrt\\\x27\x220] { return -1; } +(.|\n) { yymore(); } + +br\x22 { BEGIN(rawbytestr_nohash); yymore(); } +\x22 { BEGIN(suffix); return LIT_BYTE_STR_RAW; } +(.|\n) { yymore(); } +<> { return -1; } + +br/# { + BEGIN(rawbytestr); + yymore(); + num_hashes = 0; + saw_non_hash = 0; + end_hashes = 0; +} +# { + if (!saw_non_hash) { + num_hashes++; + } else if (end_hashes != 0) { + end_hashes++; + if (end_hashes == num_hashes) { + BEGIN(INITIAL); + return LIT_BYTE_STR_RAW; + } + } + yymore(); +} +\x22# { + end_hashes = 1; + if (end_hashes == num_hashes) { + BEGIN(INITIAL); + return LIT_BYTE_STR_RAW; + } + yymore(); +} +(.|\n) { + if (!saw_non_hash) { + saw_non_hash = 1; + } + if (end_hashes != 0) { + end_hashes = 0; + } + yymore(); +} +<> { return -1; } + +b\x27 { BEGIN(byte); yymore(); } +\\[nrt\\\x27\x220]\x27 { BEGIN(INITIAL); return LIT_BYTE; } +\\x[0-9a-fA-F]{2}\x27 { BEGIN(INITIAL); return LIT_BYTE; } +\\u[0-9a-fA-F]{4}\x27 { BEGIN(INITIAL); return LIT_BYTE; } +\\U[0-9a-fA-F]{8}\x27 { BEGIN(INITIAL); return LIT_BYTE; } +.\x27 { BEGIN(INITIAL); return LIT_BYTE; } +<> { BEGIN(INITIAL); return -1; } + +r\x22 { BEGIN(rawstr); yymore(); } +\x22 { BEGIN(suffix); return LIT_STR_RAW; } +(.|\n) { yymore(); } +<> { return -1; } + +r/# { + BEGIN(rawstr_esc_begin); + yymore(); + num_hashes = 0; + saw_non_hash = 0; + end_hashes = 0; +} + +# { + num_hashes++; + yymore(); +} +\x22 { + BEGIN(rawstr_esc_body); + yymore(); +} +(.|\n) { return -1; } + +\x22/# { + BEGIN(rawstr_esc_end); + yymore(); + } +(.|\n) { + yymore(); + } + +# { + end_hashes++; + if (end_hashes == num_hashes) { + BEGIN(INITIAL); + return LIT_STR_RAW; + } + yymore(); + } +[^#] { + end_hashes = 0; + BEGIN(rawstr_esc_body); + yymore(); + } + +<> { return -1; } + +\x22 { BEGIN(str); yymore(); } +\x22 { BEGIN(suffix); return LIT_STR; } + +<> { return -1; } +\\[n\nr\rt\\\x27\x220] { yymore(); } +\\x[0-9a-fA-F]{2} { yymore(); } +\\u\{[0-9a-fA-F]?{6}\} { yymore(); } +\\[^n\nrt\\\x27\x220] { return -1; } +(.|\n) { yymore(); } + +\<- { return LARROW; } +-\> { return RARROW; } +- { return '-'; } +-= { return MINUSEQ; } +&& { return ANDAND; } +& { return '&'; } +&= { return ANDEQ; } +\|\| { return OROR; } +\| { return '|'; } +\|= { return OREQ; } +\+ { return '+'; } +\+= { return PLUSEQ; } +\* { return '*'; } +\*= { return STAREQ; } +\/ { return '/'; } +\/= { return SLASHEQ; } +\^ { return '^'; } +\^= { return CARETEQ; } +% { return '%'; } +%= { return PERCENTEQ; } + +<> { return 0; } + +%% diff --git a/src/grammar/parser-lalr-main.c b/src/grammar/parser-lalr-main.c new file mode 100644 index 0000000000..db88a1f299 --- /dev/null +++ b/src/grammar/parser-lalr-main.c @@ -0,0 +1,203 @@ +// Copyright 2015 
The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#include +#include +#include +#include + +extern int yylex(); +extern int rsparse(); + +#define PUSHBACK_LEN 4 + +static char pushback[PUSHBACK_LEN]; +static int verbose; + +void print(const char* format, ...) { + va_list args; + va_start(args, format); + if (verbose) { + vprintf(format, args); + } + va_end(args); +} + +// If there is a non-null char at the head of the pushback queue, +// dequeue it and shift the rest of the queue forwards. Otherwise, +// return the token from calling yylex. +int rslex() { + if (pushback[0] == '\0') { + return yylex(); + } else { + char c = pushback[0]; + memmove(pushback, pushback + 1, PUSHBACK_LEN - 1); + pushback[PUSHBACK_LEN - 1] = '\0'; + return c; + } +} + +// Note: this does nothing if the pushback queue is full. As long as +// there aren't more than PUSHBACK_LEN consecutive calls to push_back +// in an action, this shouldn't be a problem. +void push_back(char c) { + for (int i = 0; i < PUSHBACK_LEN; ++i) { + if (pushback[i] == '\0') { + pushback[i] = c; + break; + } + } +} + +extern int rsdebug; + +struct node { + struct node *next; + struct node *prev; + int own_string; + char const *name; + int n_elems; + struct node *elems[]; +}; + +struct node *nodes = NULL; +int n_nodes; + +struct node *mk_node(char const *name, int n, ...) { + va_list ap; + int i = 0; + unsigned sz = sizeof(struct node) + (n * sizeof(struct node *)); + struct node *nn, *nd = (struct node *)malloc(sz); + + print("# New %d-ary node: %s = %p\n", n, name, nd); + + nd->own_string = 0; + nd->prev = NULL; + nd->next = nodes; + if (nodes) { + nodes->prev = nd; + } + nodes = nd; + + nd->name = name; + nd->n_elems = n; + + va_start(ap, n); + while (i < n) { + nn = va_arg(ap, struct node *); + print("# arg[%d]: %p\n", i, nn); + print("# (%s ...)\n", nn->name); + nd->elems[i++] = nn; + } + va_end(ap); + n_nodes++; + return nd; +} + +struct node *mk_atom(char *name) { + struct node *nd = mk_node((char const *)strdup(name), 0); + nd->own_string = 1; + return nd; +} + +struct node *mk_none() { + return mk_atom(""); +} + +struct node *ext_node(struct node *nd, int n, ...) 
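+/* Descriptive note (added, inferred from the body below): grows an
+   already-built node in place. `nd` is unlinked from the global
+   `nodes` list, realloc'd so its elems[] array can hold `n` additional
+   child pointers, relinked at the head of the list, and the vararg
+   children are appended after the existing ones. */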
{ + va_list ap; + int i = 0, c = nd->n_elems + n; + unsigned sz = sizeof(struct node) + (c * sizeof(struct node *)); + struct node *nn; + + print("# Extending %d-ary node by %d nodes: %s = %p", + nd->n_elems, c, nd->name, nd); + + if (nd->next) { + nd->next->prev = nd->prev; + } + if (nd->prev) { + nd->prev->next = nd->next; + } + nd = realloc(nd, sz); + nd->prev = NULL; + nd->next = nodes; + nodes->prev = nd; + nodes = nd; + + print(" ==> %p\n", nd); + + va_start(ap, n); + while (i < n) { + nn = va_arg(ap, struct node *); + print("# arg[%d]: %p\n", i, nn); + print("# (%s ...)\n", nn->name); + nd->elems[nd->n_elems++] = nn; + ++i; + } + va_end(ap); + return nd; +} + +int const indent_step = 4; + +void print_indent(int depth) { + while (depth) { + if (depth-- % indent_step == 0) { + print("|"); + } else { + print(" "); + } + } +} + +void print_node(struct node *n, int depth) { + int i = 0; + print_indent(depth); + if (n->n_elems == 0) { + print("%s\n", n->name); + } else { + print("(%s\n", n->name); + for (i = 0; i < n->n_elems; ++i) { + print_node(n->elems[i], depth + indent_step); + } + print_indent(depth); + print(")\n"); + } +} + +int main(int argc, char **argv) { + if (argc == 2 && strcmp(argv[1], "-v") == 0) { + verbose = 1; + } else { + verbose = 0; + } + int ret = 0; + struct node *tmp; + memset(pushback, '\0', PUSHBACK_LEN); + ret = rsparse(); + print("--- PARSE COMPLETE: ret:%d, n_nodes:%d ---\n", ret, n_nodes); + if (nodes) { + print_node(nodes, 0); + } + while (nodes) { + tmp = nodes; + nodes = tmp->next; + if (tmp->own_string) { + free((void*)tmp->name); + } + free(tmp); + } + return ret; +} + +void rserror(char const *s) { + fprintf(stderr, "%s\n", s); +} diff --git a/src/grammar/parser-lalr.y b/src/grammar/parser-lalr.y new file mode 100644 index 0000000000..3aa76d168d --- /dev/null +++ b/src/grammar/parser-lalr.y @@ -0,0 +1,1938 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
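+
+// Informal overview (added note): the semantic actions below build an
+// untyped tree of `struct node` values through the
+// mk_node/mk_atom/mk_none/ext_node helpers declared in the prologue
+// and defined in parser-lalr-main.c; that driver pretty-prints the
+// resulting tree so it can be compared against the parse produced by
+// rustc itself (cf. src/grammar/README.md).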
+ +%{ +#define YYERROR_VERBOSE +#define YYSTYPE struct node * +struct node; +extern int yylex(); +extern void yyerror(char const *s); +extern struct node *mk_node(char const *name, int n, ...); +extern struct node *mk_atom(char *text); +extern struct node *mk_none(); +extern struct node *ext_node(struct node *nd, int n, ...); +extern void push_back(char c); +extern char *yytext; +%} +%debug + +%token SHL +%token SHR +%token LE +%token EQEQ +%token NE +%token GE +%token ANDAND +%token OROR +%token SHLEQ +%token SHREQ +%token MINUSEQ +%token ANDEQ +%token OREQ +%token PLUSEQ +%token STAREQ +%token SLASHEQ +%token CARETEQ +%token PERCENTEQ +%token DOTDOT +%token DOTDOTDOT +%token MOD_SEP +%token RARROW +%token LARROW +%token FAT_ARROW +%token LIT_BYTE +%token LIT_CHAR +%token LIT_INTEGER +%token LIT_FLOAT +%token LIT_STR +%token LIT_STR_RAW +%token LIT_BYTE_STR +%token LIT_BYTE_STR_RAW +%token IDENT +%token UNDERSCORE +%token LIFETIME + +// keywords +%token SELF +%token STATIC +%token AS +%token BREAK +%token CRATE +%token ELSE +%token ENUM +%token EXTERN +%token FALSE +%token FN +%token FOR +%token IF +%token IMPL +%token IN +%token LET +%token LOOP +%token MATCH +%token MOD +%token MOVE +%token MUT +%token PRIV +%token PUB +%token REF +%token RETURN +%token STRUCT +%token TRUE +%token TRAIT +%token TYPE +%token UNSAFE +%token USE +%token WHILE +%token CONTINUE +%token PROC +%token BOX +%token CONST +%token WHERE +%token TYPEOF +%token INNER_DOC_COMMENT +%token OUTER_DOC_COMMENT + +%token SHEBANG +%token SHEBANG_LINE +%token STATIC_LIFETIME + + /* + Quoting from the Bison manual: + + "Finally, the resolution of conflicts works by comparing the precedence + of the rule being considered with that of the lookahead token. If the + token's precedence is higher, the choice is to shift. If the rule's + precedence is higher, the choice is to reduce. If they have equal + precedence, the choice is made based on the associativity of that + precedence level. The verbose output file made by ‘-v’ (see Invoking + Bison) says how each conflict was resolved" + */ + +// We expect no shift/reduce or reduce/reduce conflicts in this grammar; +// all potential ambiguities are scrutinized and eliminated manually. +%expect 0 + +// fake-precedence symbol to cause '|' bars in lambda context to parse +// at low precedence, permit things like |x| foo = bar, where '=' is +// otherwise lower-precedence than '|'. Also used for proc() to cause +// things like proc() a + b to parse as proc() { a + b }. +%precedence LAMBDA + +%precedence SELF + +// MUT should be lower precedence than IDENT so that in the pat rule, +// "& MUT pat" has higher precedence than "binding_mode ident [@ pat]" +%precedence MUT + +// IDENT needs to be lower than '{' so that 'foo {' is shifted when +// trying to decide if we've got a struct-construction expr (esp. in +// contexts like 'if foo { .') +// +// IDENT also needs to be lower precedence than '<' so that '<' in +// 'foo:bar . <' is shifted (in a trait reference occurring in a +// bounds list), parsing as foo:(bar) rather than (foo:bar). +%precedence IDENT + +// A couple fake-precedence symbols to use in rules associated with + +// and < in trailing type contexts. These come up when you have a type +// in the RHS of operator-AS, such as "foo as bar". The "<" there +// has to be shifted so the parser keeps trying to parse a type, even +// though it might well consider reducing the type "bar" and then +// going on to "<" as a subsequent binop. 
The "+" case is with +// trailing type-bounds ("foo as bar:A+B"), for the same reason. +%precedence SHIFTPLUS + +%precedence MOD_SEP +%precedence RARROW ':' + +// In where clauses, "for" should have greater precedence when used as +// a higher ranked constraint than when used as the beginning of a +// for_in_type (which is a ty) +%precedence FORTYPE +%precedence FOR + +// Binops & unops, and their precedences +%precedence BOX +%precedence BOXPLACE +%nonassoc DOTDOT + +// RETURN needs to be lower-precedence than tokens that start +// prefix_exprs +%precedence RETURN + +%right '=' SHLEQ SHREQ MINUSEQ ANDEQ OREQ PLUSEQ STAREQ SLASHEQ CARETEQ PERCENTEQ +%right LARROW +%left OROR +%left ANDAND +%left EQEQ NE +%left '<' '>' LE GE +%left '|' +%left '^' +%left '&' +%left SHL SHR +%left '+' '-' +%precedence AS +%left '*' '/' '%' +%precedence '!' + +%precedence '{' '[' '(' '.' + +%precedence RANGE + +%start crate + +%% + +//////////////////////////////////////////////////////////////////////// +// Part 1: Items and attributes +//////////////////////////////////////////////////////////////////////// + +crate +: maybe_shebang inner_attrs maybe_mod_items { mk_node("crate", 2, $2, $3); } +| maybe_shebang maybe_mod_items { mk_node("crate", 1, $2); } +; + +maybe_shebang +: SHEBANG_LINE +| %empty +; + +maybe_inner_attrs +: inner_attrs +| %empty { $$ = mk_none(); } +; + +inner_attrs +: inner_attr { $$ = mk_node("InnerAttrs", 1, $1); } +| inner_attrs inner_attr { $$ = ext_node($1, 1, $2); } +; + +inner_attr +: SHEBANG '[' meta_item ']' { $$ = mk_node("InnerAttr", 1, $3); } +| INNER_DOC_COMMENT { $$ = mk_node("InnerAttr", 1, mk_node("doc-comment", 1, mk_atom(yytext))); } +; + +maybe_outer_attrs +: outer_attrs +| %empty { $$ = mk_none(); } +; + +outer_attrs +: outer_attr { $$ = mk_node("OuterAttrs", 1, $1); } +| outer_attrs outer_attr { $$ = ext_node($1, 1, $2); } +; + +outer_attr +: '#' '[' meta_item ']' { $$ = $3; } +| OUTER_DOC_COMMENT { $$ = mk_node("doc-comment", 1, mk_atom(yytext)); } +; + +meta_item +: ident { $$ = mk_node("MetaWord", 1, $1); } +| ident '=' lit { $$ = mk_node("MetaNameValue", 2, $1, $3); } +| ident '(' meta_seq ')' { $$ = mk_node("MetaList", 2, $1, $3); } +| ident '(' meta_seq ',' ')' { $$ = mk_node("MetaList", 2, $1, $3); } +; + +meta_seq +: %empty { $$ = mk_none(); } +| meta_item { $$ = mk_node("MetaItems", 1, $1); } +| meta_seq ',' meta_item { $$ = ext_node($1, 1, $3); } +; + +maybe_mod_items +: mod_items +| %empty { $$ = mk_none(); } +; + +mod_items +: mod_item { $$ = mk_node("Items", 1, $1); } +| mod_items mod_item { $$ = ext_node($1, 1, $2); } +; + +attrs_and_vis +: maybe_outer_attrs visibility { $$ = mk_node("AttrsAndVis", 2, $1, $2); } +; + +mod_item +: attrs_and_vis item { $$ = mk_node("Item", 2, $1, $2); } +; + +// items that can appear outside of a fn block +item +: stmt_item +| item_macro +; + +// items that can appear in "stmts" +stmt_item +: item_static +| item_const +| item_type +| block_item +| view_item +; + +item_static +: STATIC ident ':' ty '=' expr ';' { $$ = mk_node("ItemStatic", 3, $2, $4, $6); } +| STATIC MUT ident ':' ty '=' expr ';' { $$ = mk_node("ItemStatic", 3, $3, $5, $7); } +; + +item_const +: CONST ident ':' ty '=' expr ';' { $$ = mk_node("ItemConst", 3, $2, $4, $6); } +; + +item_macro +: path_expr '!' maybe_ident parens_delimited_token_trees ';' { $$ = mk_node("ItemMacro", 3, $1, $3, $4); } +| path_expr '!' maybe_ident braces_delimited_token_trees { $$ = mk_node("ItemMacro", 3, $1, $3, $4); } +| path_expr '!' 
maybe_ident brackets_delimited_token_trees ';'{ $$ = mk_node("ItemMacro", 3, $1, $3, $4); } +; + +view_item +: use_item +| extern_fn_item +| EXTERN CRATE ident ';' { $$ = mk_node("ViewItemExternCrate", 1, $3); } +| EXTERN CRATE ident AS ident ';' { $$ = mk_node("ViewItemExternCrate", 2, $3, $5); } +; + +extern_fn_item +: EXTERN maybe_abi item_fn { $$ = mk_node("ViewItemExternFn", 2, $2, $3); } +; + +use_item +: USE view_path ';' { $$ = mk_node("ViewItemUse", 1, $2); } +; + +view_path +: path_no_types_allowed { $$ = mk_node("ViewPathSimple", 1, $1); } +| path_no_types_allowed MOD_SEP '{' '}' { $$ = mk_node("ViewPathList", 2, $1, mk_atom("ViewPathListEmpty")); } +| MOD_SEP '{' '}' { $$ = mk_node("ViewPathList", 1, mk_atom("ViewPathListEmpty")); } +| path_no_types_allowed MOD_SEP '{' idents_or_self '}' { $$ = mk_node("ViewPathList", 2, $1, $4); } +| MOD_SEP '{' idents_or_self '}' { $$ = mk_node("ViewPathList", 1, $3); } +| path_no_types_allowed MOD_SEP '{' idents_or_self ',' '}' { $$ = mk_node("ViewPathList", 2, $1, $4); } +| MOD_SEP '{' idents_or_self ',' '}' { $$ = mk_node("ViewPathList", 1, $3); } +| path_no_types_allowed MOD_SEP '*' { $$ = mk_node("ViewPathGlob", 1, $1); } +| '{' '}' { $$ = mk_atom("ViewPathListEmpty"); } +| '{' idents_or_self '}' { $$ = mk_node("ViewPathList", 1, $2); } +| '{' idents_or_self ',' '}' { $$ = mk_node("ViewPathList", 1, $2); } +| path_no_types_allowed AS ident { $$ = mk_node("ViewPathSimple", 2, $1, $3); } +; + +block_item +: item_fn +| item_unsafe_fn +| item_mod +| item_foreign_mod { $$ = mk_node("ItemForeignMod", 1, $1); } +| item_struct +| item_enum +| item_trait +| item_impl +; + +maybe_ty_ascription +: ':' ty_sum { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +maybe_init_expr +: '=' expr { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +// structs +item_struct +: STRUCT ident generic_params maybe_where_clause struct_decl_args +{ + $$ = mk_node("ItemStruct", 4, $2, $3, $4, $5); +} +| STRUCT ident generic_params struct_tuple_args maybe_where_clause ';' +{ + $$ = mk_node("ItemStruct", 4, $2, $3, $4, $5); +} +| STRUCT ident generic_params maybe_where_clause ';' +{ + $$ = mk_node("ItemStruct", 3, $2, $3, $4); +} +; + +struct_decl_args +: '{' struct_decl_fields '}' { $$ = $2; } +| '{' struct_decl_fields ',' '}' { $$ = $2; } +; + +struct_tuple_args +: '(' struct_tuple_fields ')' { $$ = $2; } +| '(' struct_tuple_fields ',' ')' { $$ = $2; } +; + +struct_decl_fields +: struct_decl_field { $$ = mk_node("StructFields", 1, $1); } +| struct_decl_fields ',' struct_decl_field { $$ = ext_node($1, 1, $3); } +| %empty { $$ = mk_none(); } +; + +struct_decl_field +: attrs_and_vis ident ':' ty_sum { $$ = mk_node("StructField", 3, $1, $2, $4); } +; + +struct_tuple_fields +: struct_tuple_field { $$ = mk_node("StructFields", 1, $1); } +| struct_tuple_fields ',' struct_tuple_field { $$ = ext_node($1, 1, $3); } +; + +struct_tuple_field +: attrs_and_vis ty_sum { $$ = mk_node("StructField", 2, $1, $2); } +; + +// enums +item_enum +: ENUM ident generic_params maybe_where_clause '{' enum_defs '}' { $$ = mk_node("ItemEnum", 0); } +| ENUM ident generic_params maybe_where_clause '{' enum_defs ',' '}' { $$ = mk_node("ItemEnum", 0); } +; + +enum_defs +: enum_def { $$ = mk_node("EnumDefs", 1, $1); } +| enum_defs ',' enum_def { $$ = ext_node($1, 1, $3); } +| %empty { $$ = mk_none(); } +; + +enum_def +: attrs_and_vis ident enum_args { $$ = mk_node("EnumDef", 3, $1, $2, $3); } +; + +enum_args +: '{' struct_decl_fields '}' { $$ = mk_node("EnumArgs", 1, $2); } +| '{' struct_decl_fields ',' '}' 
{ $$ = mk_node("EnumArgs", 1, $2); } +| '(' maybe_ty_sums ')' { $$ = mk_node("EnumArgs", 1, $2); } +| '=' expr { $$ = mk_node("EnumArgs", 1, $2); } +| %empty { $$ = mk_none(); } +; + +item_mod +: MOD ident ';' { $$ = mk_node("ItemMod", 1, $2); } +| MOD ident '{' maybe_mod_items '}' { $$ = mk_node("ItemMod", 2, $2, $4); } +| MOD ident '{' inner_attrs maybe_mod_items '}' { $$ = mk_node("ItemMod", 3, $2, $4, $5); } +; + +item_foreign_mod +: EXTERN maybe_abi '{' maybe_foreign_items '}' { $$ = mk_node("ItemForeignMod", 1, $4); } +| EXTERN maybe_abi '{' inner_attrs maybe_foreign_items '}' { $$ = mk_node("ItemForeignMod", 2, $4, $5); } +; + +maybe_abi +: str +| %empty { $$ = mk_none(); } +; + +maybe_foreign_items +: foreign_items +| %empty { $$ = mk_none(); } +; + +foreign_items +: foreign_item { $$ = mk_node("ForeignItems", 1, $1); } +| foreign_items foreign_item { $$ = ext_node($1, 1, $2); } +; + +foreign_item +: attrs_and_vis STATIC item_foreign_static { $$ = mk_node("ForeignItem", 2, $1, $3); } +| attrs_and_vis item_foreign_fn { $$ = mk_node("ForeignItem", 2, $1, $2); } +| attrs_and_vis UNSAFE item_foreign_fn { $$ = mk_node("ForeignItem", 2, $1, $3); } +; + +item_foreign_static +: maybe_mut ident ':' ty ';' { $$ = mk_node("StaticItem", 3, $1, $2, $4); } +; + +item_foreign_fn +: FN ident generic_params fn_decl_allow_variadic maybe_where_clause ';' { $$ = mk_node("ForeignFn", 4, $2, $3, $4, $5); } +; + +fn_decl_allow_variadic +: fn_params_allow_variadic ret_ty { $$ = mk_node("FnDecl", 2, $1, $2); } +; + +fn_params_allow_variadic +: '(' ')' { $$ = mk_none(); } +| '(' params ')' { $$ = $2; } +| '(' params ',' ')' { $$ = $2; } +| '(' params ',' DOTDOTDOT ')' { $$ = $2; } +; + +visibility +: PUB { $$ = mk_atom("Public"); } +| %empty { $$ = mk_atom("Inherited"); } +; + +idents_or_self +: ident_or_self { $$ = mk_node("IdentsOrSelf", 1, $1); } +| ident_or_self AS ident { $$ = mk_node("IdentsOrSelf", 2, $1, $3); } +| idents_or_self ',' ident_or_self { $$ = ext_node($1, 1, $3); } +; + +ident_or_self +: ident +| SELF { $$ = mk_atom(yytext); } +; + +item_type +: TYPE ident generic_params maybe_where_clause '=' ty_sum ';' { $$ = mk_node("ItemTy", 4, $2, $3, $4, $6); } +; + +for_sized +: FOR '?' ident { $$ = mk_node("ForSized", 1, $3); } +| FOR ident '?' 
{ $$ = mk_node("ForSized", 1, $2); } +| %empty { $$ = mk_none(); } +; + +item_trait +: maybe_unsafe TRAIT ident generic_params for_sized maybe_ty_param_bounds maybe_where_clause '{' maybe_trait_items '}' +{ + $$ = mk_node("ItemTrait", 7, $1, $3, $4, $5, $6, $7, $9); +} +; + +maybe_trait_items +: trait_items +| %empty { $$ = mk_none(); } +; + +trait_items +: trait_item { $$ = mk_node("TraitItems", 1, $1); } +| trait_items trait_item { $$ = ext_node($1, 1, $2); } +; + +trait_item +: trait_const +| trait_type +| trait_method +; + +trait_const +: maybe_outer_attrs CONST ident maybe_ty_ascription maybe_const_default ';' { $$ = mk_node("ConstTraitItem", 4, $1, $3, $4, $5); } +; + +maybe_const_default +: '=' expr { $$ = mk_node("ConstDefault", 1, $2); } +| %empty { $$ = mk_none(); } +; + +trait_type +: maybe_outer_attrs TYPE ty_param ';' { $$ = mk_node("TypeTraitItem", 2, $1, $3); } +; + +maybe_unsafe +: UNSAFE { $$ = mk_atom("Unsafe"); } +| %empty { $$ = mk_none(); } +; + +trait_method +: type_method { $$ = mk_node("Required", 1, $1); } +| method { $$ = mk_node("Provided", 1, $1); } +; + +type_method +: attrs_and_vis maybe_unsafe FN ident generic_params fn_decl_with_self_allow_anon_params maybe_where_clause ';' +{ + $$ = mk_node("TypeMethod", 6, $1, $2, $4, $5, $6, $7); +} +| attrs_and_vis maybe_unsafe EXTERN maybe_abi FN ident generic_params fn_decl_with_self_allow_anon_params maybe_where_clause ';' +{ + $$ = mk_node("TypeMethod", 7, $1, $2, $4, $6, $7, $8, $9); +} +; + +method +: attrs_and_vis maybe_unsafe FN ident generic_params fn_decl_with_self_allow_anon_params maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("Method", 7, $1, $2, $4, $5, $6, $7, $8); +} +| attrs_and_vis maybe_unsafe EXTERN maybe_abi FN ident generic_params fn_decl_with_self_allow_anon_params maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("Method", 8, $1, $2, $4, $6, $7, $8, $9, $10); +} +; + +impl_method +: attrs_and_vis maybe_unsafe FN ident generic_params fn_decl_with_self maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("Method", 7, $1, $2, $4, $5, $6, $7, $8); +} +| attrs_and_vis maybe_unsafe EXTERN maybe_abi FN ident generic_params fn_decl_with_self maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("Method", 8, $1, $2, $4, $6, $7, $8, $9, $10); +} +; + +// There are two forms of impl: +// +// impl (<...>)? TY { ... } +// impl (<...>)? TRAIT for TY { ... } +// +// Unfortunately since TY can begin with '<' itself -- as part of a +// TyQualifiedPath type -- there's an s/r conflict when we see '<' after IMPL: +// should we reduce one of the early rules of TY (such as maybe_once) +// or shall we continue shifting into the generic_params list for the +// impl? +// +// The production parser disambiguates a different case here by +// permitting / requiring the user to provide parens around types when +// they are ambiguous with traits. We do the same here, regrettably, +// by splitting ty into ty and ty_prim. +item_impl +: maybe_unsafe IMPL generic_params ty_prim_sum maybe_where_clause '{' maybe_inner_attrs maybe_impl_items '}' +{ + $$ = mk_node("ItemImpl", 6, $1, $3, $4, $5, $7, $8); +} +| maybe_unsafe IMPL generic_params '(' ty ')' maybe_where_clause '{' maybe_inner_attrs maybe_impl_items '}' +{ + $$ = mk_node("ItemImpl", 6, $1, $3, 5, $6, $9, $10); +} +| maybe_unsafe IMPL generic_params trait_ref FOR ty_sum maybe_where_clause '{' maybe_inner_attrs maybe_impl_items '}' +{ + $$ = mk_node("ItemImpl", 6, $3, $4, $6, $7, $9, $10); +} +| maybe_unsafe IMPL generic_params '!' 
trait_ref FOR ty_sum maybe_where_clause '{' maybe_inner_attrs maybe_impl_items '}' +{ + $$ = mk_node("ItemImplNeg", 7, $1, $3, $5, $7, $8, $10, $11); +} +| maybe_unsafe IMPL generic_params trait_ref FOR DOTDOT '{' '}' +{ + $$ = mk_node("ItemImplDefault", 3, $1, $3, $4); +} +| maybe_unsafe IMPL generic_params '!' trait_ref FOR DOTDOT '{' '}' +{ + $$ = mk_node("ItemImplDefaultNeg", 3, $1, $3, $4); +} +; + +maybe_impl_items +: impl_items +| %empty { $$ = mk_none(); } +; + +impl_items +: impl_item { $$ = mk_node("ImplItems", 1, $1); } +| impl_item impl_items { $$ = ext_node($1, 1, $2); } +; + +impl_item +: impl_method +| attrs_and_vis item_macro { $$ = mk_node("ImplMacroItem", 2, $1, $2); } +| impl_const +| impl_type +; + +impl_const +: attrs_and_vis item_const { $$ = mk_node("ImplConst", 1, $1, $2); } +; + +impl_type +: attrs_and_vis TYPE ident generic_params '=' ty_sum ';' { $$ = mk_node("ImplType", 4, $1, $3, $4, $6); } +; + +item_fn +: FN ident generic_params fn_decl maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("ItemFn", 5, $2, $3, $4, $5, $6); +} +; + +item_unsafe_fn +: UNSAFE FN ident generic_params fn_decl maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("ItemUnsafeFn", 5, $3, $4, $5, $6, $7); +} +| UNSAFE EXTERN maybe_abi FN ident generic_params fn_decl maybe_where_clause inner_attrs_and_block +{ + $$ = mk_node("ItemUnsafeFn", 6, $3, $5, $6, $7, $8, $9); +} +; + +fn_decl +: fn_params ret_ty { $$ = mk_node("FnDecl", 2, $1, $2); } +; + +fn_decl_with_self +: fn_params_with_self ret_ty { $$ = mk_node("FnDecl", 2, $1, $2); } +; + +fn_decl_with_self_allow_anon_params +: fn_anon_params_with_self ret_ty { $$ = mk_node("FnDecl", 2, $1, $2); } +; + +fn_params +: '(' maybe_params ')' { $$ = $2; } +; + +fn_anon_params +: '(' anon_param anon_params_allow_variadic_tail ')' { $$ = ext_node($2, 1, $3); } +| '(' ')' { $$ = mk_none(); } +; + +fn_params_with_self +: '(' maybe_mut SELF maybe_ty_ascription maybe_comma_params ')' { $$ = mk_node("SelfValue", 3, $2, $4, $5); } +| '(' '&' maybe_mut SELF maybe_ty_ascription maybe_comma_params ')' { $$ = mk_node("SelfRegion", 3, $3, $5, $6); } +| '(' '&' lifetime maybe_mut SELF maybe_ty_ascription maybe_comma_params ')' { $$ = mk_node("SelfRegion", 4, $3, $4, $6, $7); } +| '(' maybe_params ')' { $$ = mk_node("SelfStatic", 1, $2); } +; + +fn_anon_params_with_self +: '(' maybe_mut SELF maybe_ty_ascription maybe_comma_anon_params ')' { $$ = mk_node("SelfValue", 3, $2, $4, $5); } +| '(' '&' maybe_mut SELF maybe_ty_ascription maybe_comma_anon_params ')' { $$ = mk_node("SelfRegion", 3, $3, $5, $6); } +| '(' '&' lifetime maybe_mut SELF maybe_ty_ascription maybe_comma_anon_params ')' { $$ = mk_node("SelfRegion", 4, $3, $4, $6, $7); } +| '(' maybe_anon_params ')' { $$ = mk_node("SelfStatic", 1, $2); } +; + +maybe_params +: params +| params ',' +| %empty { $$ = mk_none(); } +; + +params +: param { $$ = mk_node("Args", 1, $1); } +| params ',' param { $$ = ext_node($1, 1, $3); } +; + +param +: pat ':' ty_sum { $$ = mk_node("Arg", 2, $1, $3); } +; + +inferrable_params +: inferrable_param { $$ = mk_node("InferrableParams", 1, $1); } +| inferrable_params ',' inferrable_param { $$ = ext_node($1, 1, $3); } +; + +inferrable_param +: pat maybe_ty_ascription { $$ = mk_node("InferrableParam", 2, $1, $2); } +; + +maybe_unboxed_closure_kind +: %empty +| ':' +| '&' maybe_mut ':' +; + +maybe_comma_params +: ',' { $$ = mk_none(); } +| ',' params { $$ = $2; } +| ',' params ',' { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +maybe_comma_anon_params +: ',' { $$ = 
mk_none(); } +| ',' anon_params { $$ = $2; } +| ',' anon_params ',' { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +maybe_anon_params +: anon_params +| anon_params ',' +| %empty { $$ = mk_none(); } +; + +anon_params +: anon_param { $$ = mk_node("Args", 1, $1); } +| anon_params ',' anon_param { $$ = ext_node($1, 1, $3); } +; + +// anon means it's allowed to be anonymous (type-only), but it can +// still have a name +anon_param +: named_arg ':' ty { $$ = mk_node("Arg", 2, $1, $3); } +| ty +; + +anon_params_allow_variadic_tail +: ',' DOTDOTDOT { $$ = mk_none(); } +| ',' anon_param anon_params_allow_variadic_tail { $$ = mk_node("Args", 2, $2, $3); } +| %empty { $$ = mk_none(); } +; + +named_arg +: ident +| UNDERSCORE { $$ = mk_atom("PatWild"); } +| '&' ident { $$ = $2; } +| '&' UNDERSCORE { $$ = mk_atom("PatWild"); } +| ANDAND ident { $$ = $2; } +| ANDAND UNDERSCORE { $$ = mk_atom("PatWild"); } +| MUT ident { $$ = $2; } +; + +ret_ty +: RARROW '!' { $$ = mk_none(); } +| RARROW ty { $$ = mk_node("ret-ty", 1, $2); } +| %prec IDENT %empty { $$ = mk_none(); } +; + +generic_params +: '<' lifetimes '>' { $$ = mk_node("Generics", 2, $2, mk_none()); } +| '<' lifetimes ',' '>' { $$ = mk_node("Generics", 2, $2, mk_none()); } +| '<' lifetimes SHR { push_back('>'); $$ = mk_node("Generics", 2, $2, mk_none()); } +| '<' lifetimes ',' SHR { push_back('>'); $$ = mk_node("Generics", 2, $2, mk_none()); } +| '<' lifetimes ',' ty_params '>' { $$ = mk_node("Generics", 2, $2, $4); } +| '<' lifetimes ',' ty_params ',' '>' { $$ = mk_node("Generics", 2, $2, $4); } +| '<' lifetimes ',' ty_params SHR { push_back('>'); $$ = mk_node("Generics", 2, $2, $4); } +| '<' lifetimes ',' ty_params ',' SHR { push_back('>'); $$ = mk_node("Generics", 2, $2, $4); } +| '<' ty_params '>' { $$ = mk_node("Generics", 2, mk_none(), $2); } +| '<' ty_params ',' '>' { $$ = mk_node("Generics", 2, mk_none(), $2); } +| '<' ty_params SHR { push_back('>'); $$ = mk_node("Generics", 2, mk_none(), $2); } +| '<' ty_params ',' SHR { push_back('>'); $$ = mk_node("Generics", 2, mk_none(), $2); } +| %empty { $$ = mk_none(); } +; + +maybe_where_clause +: %empty { $$ = mk_none(); } +| where_clause +; + +where_clause +: WHERE where_predicates { $$ = mk_node("WhereClause", 1, $2); } +| WHERE where_predicates ',' { $$ = mk_node("WhereClause", 1, $2); } +; + +where_predicates +: where_predicate { $$ = mk_node("WherePredicates", 1, $1); } +| where_predicates ',' where_predicate { $$ = ext_node($1, 1, $3); } +; + +where_predicate +: maybe_for_lifetimes lifetime ':' bounds { $$ = mk_node("WherePredicate", 3, $1, $2, $4); } +| maybe_for_lifetimes ty ':' ty_param_bounds { $$ = mk_node("WherePredicate", 3, $1, $2, $4); } +; + +maybe_for_lifetimes +: FOR '<' lifetimes '>' { $$ = mk_none(); } +| %prec FORTYPE %empty { $$ = mk_none(); } + +ty_params +: ty_param { $$ = mk_node("TyParams", 1, $1); } +| ty_params ',' ty_param { $$ = ext_node($1, 1, $3); } +; + +// A path with no type parameters; e.g. `foo::bar::Baz` +// +// These show up in 'use' view-items, because these are processed +// without respect to types. +path_no_types_allowed +: ident { $$ = mk_node("ViewPath", 1, $1); } +| MOD_SEP ident { $$ = mk_node("ViewPath", 1, $2); } +| SELF { $$ = mk_node("ViewPath", 1, mk_atom("Self")); } +| MOD_SEP SELF { $$ = mk_node("ViewPath", 1, mk_atom("Self")); } +| path_no_types_allowed MOD_SEP ident { $$ = ext_node($1, 1, $3); } +; + +// A path with a lifetime and type parameters, with no double colons +// before the type parameters; e.g. 
`foo::bar<'a>::Baz` +// +// These show up in "trait references", the components of +// type-parameter bounds lists, as well as in the prefix of the +// path_generic_args_and_bounds rule, which is the full form of a +// named typed expression. +// +// They do not have (nor need) an extra '::' before '<' because +// unlike in expr context, there are no "less-than" type exprs to +// be ambiguous with. +path_generic_args_without_colons +: %prec IDENT + ident { $$ = mk_node("components", 1, $1); } +| %prec IDENT + ident generic_args { $$ = mk_node("components", 2, $1, $2); } +| %prec IDENT + ident '(' maybe_ty_sums ')' ret_ty { $$ = mk_node("components", 2, $1, $3); } +| %prec IDENT + path_generic_args_without_colons MOD_SEP ident { $$ = ext_node($1, 1, $3); } +| %prec IDENT + path_generic_args_without_colons MOD_SEP ident generic_args { $$ = ext_node($1, 2, $3, $4); } +| %prec IDENT + path_generic_args_without_colons MOD_SEP ident '(' maybe_ty_sums ')' ret_ty { $$ = ext_node($1, 2, $3, $5); } +; + +generic_args +: '<' generic_values '>' { $$ = $2; } +| '<' generic_values SHR { push_back('>'); $$ = $2; } +| '<' generic_values GE { push_back('='); $$ = $2; } +| '<' generic_values SHREQ { push_back('>'); push_back('='); $$ = $2; } +// If generic_args starts with "<<", the first arg must be a +// TyQualifiedPath because that's the only type that can start with a +// '<'. This rule parses that as the first ty_sum and then continues +// with the rest of generic_values. +| SHL ty_qualified_path_and_generic_values '>' { $$ = $2; } +| SHL ty_qualified_path_and_generic_values SHR { push_back('>'); $$ = $2; } +| SHL ty_qualified_path_and_generic_values GE { push_back('='); $$ = $2; } +| SHL ty_qualified_path_and_generic_values SHREQ { push_back('>'); push_back('='); $$ = $2; } +; + +generic_values +: maybe_lifetimes maybe_ty_sums_and_or_bindings { $$ = mk_node("GenericValues", 2, $1, $2); } +; + +maybe_ty_sums_and_or_bindings +: ty_sums +| ty_sums ',' +| ty_sums ',' bindings { $$ = mk_node("TySumsAndBindings", 2, $1, $3); } +| bindings +| bindings ',' +| %empty { $$ = mk_none(); } +; + +maybe_bindings +: ',' bindings { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +//////////////////////////////////////////////////////////////////////// +// Part 2: Patterns +//////////////////////////////////////////////////////////////////////// + +pat +: UNDERSCORE { $$ = mk_atom("PatWild"); } +| '&' pat { $$ = mk_node("PatRegion", 1, $2); } +| '&' MUT pat { $$ = mk_node("PatRegion", 1, $3); } +| ANDAND pat { $$ = mk_node("PatRegion", 1, mk_node("PatRegion", 1, $2)); } +| '(' ')' { $$ = mk_atom("PatUnit"); } +| '(' pat_tup ')' { $$ = mk_node("PatTup", 1, $2); } +| '(' pat_tup ',' ')' { $$ = mk_node("PatTup", 1, $2); } +| '[' pat_vec ']' { $$ = mk_node("PatVec", 1, $2); } +| lit_or_path +| lit_or_path DOTDOTDOT lit_or_path { $$ = mk_node("PatRange", 2, $1, $3); } +| path_expr '{' pat_struct '}' { $$ = mk_node("PatStruct", 2, $1, $3); } +| path_expr '(' DOTDOT ')' { $$ = mk_node("PatEnum", 1, $1); } +| path_expr '(' pat_tup ')' { $$ = mk_node("PatEnum", 2, $1, $3); } +| path_expr '!' 
maybe_ident delimited_token_trees { $$ = mk_node("PatMac", 3, $1, $3, $4); } +| binding_mode ident { $$ = mk_node("PatIdent", 2, $1, $2); } +| ident '@' pat { $$ = mk_node("PatIdent", 3, mk_node("BindByValue", 1, mk_atom("MutImmutable")), $1, $3); } +| binding_mode ident '@' pat { $$ = mk_node("PatIdent", 3, $1, $2, $4); } +| BOX pat { $$ = mk_node("PatUniq", 1, $2); } +| '<' ty_sum maybe_as_trait_ref '>' MOD_SEP ident { $$ = mk_node("PatQualifiedPath", 3, $2, $3, $6); } +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident maybe_as_trait_ref '>' MOD_SEP ident +{ + $$ = mk_node("PatQualifiedPath", 3, mk_node("PatQualifiedPath", 3, $2, $3, $6), $7, $10); +} +; + +pats_or +: pat { $$ = mk_node("Pats", 1, $1); } +| pats_or '|' pat { $$ = ext_node($1, 1, $3); } +; + +binding_mode +: REF { $$ = mk_node("BindByRef", 1, mk_atom("MutImmutable")); } +| REF MUT { $$ = mk_node("BindByRef", 1, mk_atom("MutMutable")); } +| MUT { $$ = mk_node("BindByValue", 1, mk_atom("MutMutable")); } +; + +lit_or_path +: path_expr { $$ = mk_node("PatLit", 1, $1); } +| lit { $$ = mk_node("PatLit", 1, $1); } +| '-' lit { $$ = mk_node("PatLit", 1, $2); } +; + +pat_field +: ident { $$ = mk_node("PatField", 1, $1); } +| binding_mode ident { $$ = mk_node("PatField", 2, $1, $2); } +| BOX ident { $$ = mk_node("PatField", 2, mk_atom("box"), $2); } +| BOX binding_mode ident { $$ = mk_node("PatField", 3, mk_atom("box"), $2, $3); } +| ident ':' pat { $$ = mk_node("PatField", 2, $1, $3); } +| binding_mode ident ':' pat { $$ = mk_node("PatField", 3, $1, $2, $4); } +; + +pat_fields +: pat_field { $$ = mk_node("PatFields", 1, $1); } +| pat_fields ',' pat_field { $$ = ext_node($1, 1, $3); } +; + +pat_struct +: pat_fields { $$ = mk_node("PatStruct", 2, $1, mk_atom("false")); } +| pat_fields ',' { $$ = mk_node("PatStruct", 2, $1, mk_atom("false")); } +| pat_fields ',' DOTDOT { $$ = mk_node("PatStruct", 2, $1, mk_atom("true")); } +| DOTDOT { $$ = mk_node("PatStruct", 1, mk_atom("true")); } +; + +pat_tup +: pat { $$ = mk_node("pat_tup", 1, $1); } +| pat_tup ',' pat { $$ = ext_node($1, 1, $3); } +; + +pat_vec +: pat_vec_elts { $$ = mk_node("PatVec", 2, $1, mk_none()); } +| pat_vec_elts ',' { $$ = mk_node("PatVec", 2, $1, mk_none()); } +| pat_vec_elts DOTDOT { $$ = mk_node("PatVec", 2, $1, mk_none()); } +| pat_vec_elts ',' DOTDOT { $$ = mk_node("PatVec", 2, $1, mk_none()); } +| pat_vec_elts DOTDOT ',' pat_vec_elts { $$ = mk_node("PatVec", 2, $1, $4); } +| pat_vec_elts DOTDOT ',' pat_vec_elts ',' { $$ = mk_node("PatVec", 2, $1, $4); } +| pat_vec_elts ',' DOTDOT ',' pat_vec_elts { $$ = mk_node("PatVec", 2, $1, $5); } +| pat_vec_elts ',' DOTDOT ',' pat_vec_elts ',' { $$ = mk_node("PatVec", 2, $1, $5); } +| DOTDOT ',' pat_vec_elts { $$ = mk_node("PatVec", 2, mk_none(), $3); } +| DOTDOT ',' pat_vec_elts ',' { $$ = mk_node("PatVec", 2, mk_none(), $3); } +| DOTDOT { $$ = mk_node("PatVec", 2, mk_none(), mk_none()); } +| %empty { $$ = mk_node("PatVec", 2, mk_none(), mk_none()); } +; + +pat_vec_elts +: pat { $$ = mk_node("PatVecElts", 1, $1); } +| pat_vec_elts ',' pat { $$ = ext_node($1, 1, $3); } +; + +//////////////////////////////////////////////////////////////////////// +// Part 3: Types +//////////////////////////////////////////////////////////////////////// + +ty +: ty_prim +| ty_closure +| '<' ty_sum maybe_as_trait_ref '>' MOD_SEP ident { $$ = mk_node("TyQualifiedPath", 3, $2, $3, $6); } +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident maybe_as_trait_ref '>' MOD_SEP ident { $$ = mk_node("TyQualifiedPath", 3, mk_node("TyQualifiedPath", 3, $2, 
$3, $6), $7, $10); } +| '(' ty_sums ')' { $$ = mk_node("TyTup", 1, $2); } +| '(' ty_sums ',' ')' { $$ = mk_node("TyTup", 1, $2); } +| '(' ')' { $$ = mk_atom("TyNil"); } +; + +ty_prim +: %prec IDENT path_generic_args_without_colons { $$ = mk_node("TyPath", 2, mk_node("global", 1, mk_atom("false")), $1); } +| %prec IDENT MOD_SEP path_generic_args_without_colons { $$ = mk_node("TyPath", 2, mk_node("global", 1, mk_atom("true")), $2); } +| %prec IDENT SELF MOD_SEP path_generic_args_without_colons { $$ = mk_node("TyPath", 2, mk_node("self", 1, mk_atom("true")), $3); } +| BOX ty { $$ = mk_node("TyBox", 1, $2); } +| '*' maybe_mut_or_const ty { $$ = mk_node("TyPtr", 2, $2, $3); } +| '&' ty { $$ = mk_node("TyRptr", 2, mk_atom("MutImmutable"), $2); } +| '&' MUT ty { $$ = mk_node("TyRptr", 2, mk_atom("MutMutable"), $3); } +| ANDAND ty { $$ = mk_node("TyRptr", 1, mk_node("TyRptr", 2, mk_atom("MutImmutable"), $2)); } +| ANDAND MUT ty { $$ = mk_node("TyRptr", 1, mk_node("TyRptr", 2, mk_atom("MutMutable"), $3)); } +| '&' lifetime maybe_mut ty { $$ = mk_node("TyRptr", 3, $2, $3, $4); } +| ANDAND lifetime maybe_mut ty { $$ = mk_node("TyRptr", 1, mk_node("TyRptr", 3, $2, $3, $4)); } +| '[' ty ']' { $$ = mk_node("TyVec", 1, $2); } +| '[' ty ',' DOTDOT expr ']' { $$ = mk_node("TyFixedLengthVec", 2, $2, $5); } +| '[' ty ';' expr ']' { $$ = mk_node("TyFixedLengthVec", 2, $2, $4); } +| TYPEOF '(' expr ')' { $$ = mk_node("TyTypeof", 1, $3); } +| UNDERSCORE { $$ = mk_atom("TyInfer"); } +| ty_bare_fn +| ty_proc +| for_in_type +; + +ty_bare_fn +: FN ty_fn_decl { $$ = $2; } +| UNSAFE FN ty_fn_decl { $$ = $3; } +| EXTERN maybe_abi FN ty_fn_decl { $$ = $4; } +| UNSAFE EXTERN maybe_abi FN ty_fn_decl { $$ = $5; } +; + +ty_fn_decl +: generic_params fn_anon_params ret_ty { $$ = mk_node("TyFnDecl", 3, $1, $2, $3); } +; + +ty_closure +: UNSAFE '|' anon_params '|' maybe_bounds ret_ty { $$ = mk_node("TyClosure", 3, $3, $5, $6); } +| '|' anon_params '|' maybe_bounds ret_ty { $$ = mk_node("TyClosure", 3, $2, $4, $5); } +| UNSAFE OROR maybe_bounds ret_ty { $$ = mk_node("TyClosure", 2, $3, $4); } +| OROR maybe_bounds ret_ty { $$ = mk_node("TyClosure", 2, $2, $3); } +; + +ty_proc +: PROC generic_params fn_params maybe_bounds ret_ty { $$ = mk_node("TyProc", 4, $2, $3, $4, $5); } +; + +for_in_type +: FOR '<' maybe_lifetimes '>' for_in_type_suffix { $$ = mk_node("ForInType", 2, $3, $5); } +; + +for_in_type_suffix +: ty_proc +| ty_bare_fn +| trait_ref +| ty_closure +; + +maybe_mut +: MUT { $$ = mk_atom("MutMutable"); } +| %prec MUT %empty { $$ = mk_atom("MutImmutable"); } +; + +maybe_mut_or_const +: MUT { $$ = mk_atom("MutMutable"); } +| CONST { $$ = mk_atom("MutImmutable"); } +| %empty { $$ = mk_atom("MutImmutable"); } +; + +ty_qualified_path_and_generic_values +: ty_qualified_path maybe_bindings +{ + $$ = mk_node("GenericValues", 3, mk_none(), mk_node("TySums", 1, mk_node("TySum", 1, $1)), $2); +} +| ty_qualified_path ',' ty_sums maybe_bindings +{ + $$ = mk_node("GenericValues", 3, mk_none(), mk_node("TySums", 2, $1, $3), $4); +} +; + +ty_qualified_path +: ty_sum AS trait_ref '>' MOD_SEP ident { $$ = mk_node("TyQualifiedPath", 3, $1, $3, $6); } +| ty_sum AS trait_ref '>' MOD_SEP ident '+' ty_param_bounds { $$ = mk_node("TyQualifiedPath", 3, $1, $3, $6); } +; + +maybe_ty_sums +: ty_sums +| ty_sums ',' +| %empty { $$ = mk_none(); } +; + +ty_sums +: ty_sum { $$ = mk_node("TySums", 1, $1); } +| ty_sums ',' ty_sum { $$ = ext_node($1, 1, $3); } +; + +ty_sum +: ty { $$ = mk_node("TySum", 1, $1); } +| ty '+' ty_param_bounds { $$ = 
mk_node("TySum", 2, $1, $3); } +; + +ty_prim_sum +: ty_prim { $$ = mk_node("TySum", 1, $1); } +| ty_prim '+' ty_param_bounds { $$ = mk_node("TySum", 2, $1, $3); } +; + +maybe_ty_param_bounds +: ':' ty_param_bounds { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +ty_param_bounds +: boundseq +| %empty { $$ = mk_none(); } +; + +boundseq +: polybound +| boundseq '+' polybound { $$ = ext_node($1, 1, $3); } +; + +polybound +: FOR '<' maybe_lifetimes '>' bound { $$ = mk_node("PolyBound", 2, $3, $5); } +| bound +| '?' bound { $$ = $2; } +; + +bindings +: binding { $$ = mk_node("Bindings", 1, $1); } +| bindings ',' binding { $$ = ext_node($1, 1, $3); } +; + +binding +: ident '=' ty { mk_node("Binding", 2, $1, $3); } +; + +ty_param +: ident maybe_ty_param_bounds maybe_ty_default { $$ = mk_node("TyParam", 3, $1, $2, $3); } +| ident '?' ident maybe_ty_param_bounds maybe_ty_default { $$ = mk_node("TyParam", 4, $1, $3, $4, $5); } +; + +maybe_bounds +: %prec SHIFTPLUS + ':' bounds { $$ = $2; } +| %prec SHIFTPLUS %empty { $$ = mk_none(); } +; + +bounds +: bound { $$ = mk_node("bounds", 1, $1); } +| bounds '+' bound { $$ = ext_node($1, 1, $3); } +; + +bound +: lifetime +| trait_ref +; + +maybe_ltbounds +: %prec SHIFTPLUS + ':' ltbounds { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +ltbounds +: lifetime { $$ = mk_node("ltbounds", 1, $1); } +| ltbounds '+' lifetime { $$ = ext_node($1, 1, $3); } +; + +maybe_ty_default +: '=' ty_sum { $$ = mk_node("TyDefault", 1, $2); } +| %empty { $$ = mk_none(); } +; + +maybe_lifetimes +: lifetimes +| lifetimes ',' +| %empty { $$ = mk_none(); } +; + +lifetimes +: lifetime_and_bounds { $$ = mk_node("Lifetimes", 1, $1); } +| lifetimes ',' lifetime_and_bounds { $$ = ext_node($1, 1, $3); } +; + +lifetime_and_bounds +: LIFETIME maybe_ltbounds { $$ = mk_node("lifetime", 2, mk_atom(yytext), $2); } +| STATIC_LIFETIME { $$ = mk_atom("static_lifetime"); } +; + +lifetime +: LIFETIME { $$ = mk_node("lifetime", 1, mk_atom(yytext)); } +| STATIC_LIFETIME { $$ = mk_atom("static_lifetime"); } +; + +trait_ref +: %prec IDENT path_generic_args_without_colons +| %prec IDENT MOD_SEP path_generic_args_without_colons { $$ = $2; } +; + +//////////////////////////////////////////////////////////////////////// +// Part 4: Blocks, statements, and expressions +//////////////////////////////////////////////////////////////////////// + +inner_attrs_and_block +: '{' maybe_inner_attrs maybe_stmts '}' { $$ = mk_node("ExprBlock", 2, $2, $3); } +; + +block +: '{' maybe_stmts '}' { $$ = mk_node("ExprBlock", 1, $2); } +; + +maybe_stmts +: stmts +| stmts nonblock_expr { $$ = ext_node($1, 1, $2); } +| nonblock_expr +| %empty { $$ = mk_none(); } +; + +// There are two sub-grammars within a "stmts: exprs" derivation +// depending on whether each stmt-expr is a block-expr form; this is to +// handle the "semicolon rule" for stmt sequencing that permits +// writing +// +// if foo { bar } 10 +// +// as a sequence of two stmts (one if-expr stmt, one lit-10-expr +// stmt). Unfortunately by permitting juxtaposition of exprs in +// sequence like that, the non-block expr grammar has to have a +// second limited sub-grammar that excludes the prefix exprs that +// are ambiguous with binops. That is to say: +// +// {10} - 1 +// +// should parse as (progn (progn 10) (- 1)) not (- (progn 10) 1), that +// is to say, two statements rather than one, at least according to +// the mainline rust parser. +// +// So we wind up with a 3-way split in exprs that occur in stmt lists: +// block, nonblock-prefix, and nonblock-nonprefix. 
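+//
+// In the rules below these correspond (roughly) to `block` /
+// `block_expr`, `nonblock_prefix_expr`, and `nonblock_expr`: note how
+// `stmt` lists `block`, `full_block_expr` and `nonblock_expr ';'` as
+// separate alternatives rather than a single `expr ';'`.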
+// +// In non-stmts contexts, expr can relax this trichotomy. +// +// There is also one other expr subtype: nonparen_expr disallows exprs +// surrounded by parens (including tuple expressions), this is +// necessary for BOX (place) expressions, so a parens expr following +// the BOX is always parsed as the place. + +stmts +: stmt { $$ = mk_node("stmts", 1, $1); } +| stmts stmt { $$ = ext_node($1, 1, $2); } +; + +stmt +: let +| stmt_item +| PUB stmt_item { $$ = $2; } +| outer_attrs stmt_item { $$ = $2; } +| outer_attrs PUB stmt_item { $$ = $3; } +| full_block_expr +| block +| nonblock_expr ';' +| ';' { $$ = mk_none(); } +; + +maybe_exprs +: exprs +| exprs ',' +| %empty { $$ = mk_none(); } +; + +maybe_expr +: expr +| %empty { $$ = mk_none(); } +; + +exprs +: expr { $$ = mk_node("exprs", 1, $1); } +| exprs ',' expr { $$ = ext_node($1, 1, $3); } +; + +path_expr +: path_generic_args_with_colons +| MOD_SEP path_generic_args_with_colons { $$ = $2; } +| SELF MOD_SEP path_generic_args_with_colons { $$ = mk_node("SelfPath", 1, $3); } +; + +// A path with a lifetime and type parameters with double colons before +// the type parameters; e.g. `foo::bar::<'a>::Baz::` +// +// These show up in expr context, in order to disambiguate from "less-than" +// expressions. +path_generic_args_with_colons +: ident { $$ = mk_node("components", 1, $1); } +| path_generic_args_with_colons MOD_SEP ident { $$ = ext_node($1, 1, $3); } +| path_generic_args_with_colons MOD_SEP generic_args { $$ = ext_node($1, 1, $3); } +; + +// the braces-delimited macro is a block_expr so it doesn't appear here +macro_expr +: path_expr '!' maybe_ident parens_delimited_token_trees { $$ = mk_node("MacroExpr", 3, $1, $3, $4); } +| path_expr '!' maybe_ident brackets_delimited_token_trees { $$ = mk_node("MacroExpr", 3, $1, $3, $4); } +; + +nonblock_expr +: lit { $$ = mk_node("ExprLit", 1, $1); } +| %prec IDENT + path_expr { $$ = mk_node("ExprPath", 1, $1); } +| SELF { $$ = mk_node("ExprPath", 1, mk_node("ident", 1, mk_atom("self"))); } +| macro_expr { $$ = mk_node("ExprMac", 1, $1); } +| path_expr '{' struct_expr_fields '}' { $$ = mk_node("ExprStruct", 2, $1, $3); } +| nonblock_expr '.' path_generic_args_with_colons { $$ = mk_node("ExprField", 2, $1, $3); } +| nonblock_expr '.' 
LIT_INTEGER { $$ = mk_node("ExprTupleIndex", 1, $1); } +| nonblock_expr '[' maybe_expr ']' { $$ = mk_node("ExprIndex", 2, $1, $3); } +| nonblock_expr '(' maybe_exprs ')' { $$ = mk_node("ExprCall", 2, $1, $3); } +| '[' vec_expr ']' { $$ = mk_node("ExprVec", 1, $2); } +| '(' maybe_exprs ')' { $$ = mk_node("ExprParen", 1, $2); } +| CONTINUE { $$ = mk_node("ExprAgain", 0); } +| CONTINUE lifetime { $$ = mk_node("ExprAgain", 1, $2); } +| RETURN { $$ = mk_node("ExprRet", 0); } +| RETURN expr { $$ = mk_node("ExprRet", 1, $2); } +| BREAK { $$ = mk_node("ExprBreak", 0); } +| BREAK lifetime { $$ = mk_node("ExprBreak", 1, $2); } +| nonblock_expr LARROW expr { $$ = mk_node("ExprInPlace", 2, $1, $3); } +| nonblock_expr '=' expr { $$ = mk_node("ExprAssign", 2, $1, $3); } +| nonblock_expr SHLEQ expr { $$ = mk_node("ExprAssignShl", 2, $1, $3); } +| nonblock_expr SHREQ expr { $$ = mk_node("ExprAssignShr", 2, $1, $3); } +| nonblock_expr MINUSEQ expr { $$ = mk_node("ExprAssignSub", 2, $1, $3); } +| nonblock_expr ANDEQ expr { $$ = mk_node("ExprAssignBitAnd", 2, $1, $3); } +| nonblock_expr OREQ expr { $$ = mk_node("ExprAssignBitOr", 2, $1, $3); } +| nonblock_expr PLUSEQ expr { $$ = mk_node("ExprAssignAdd", 2, $1, $3); } +| nonblock_expr STAREQ expr { $$ = mk_node("ExprAssignMul", 2, $1, $3); } +| nonblock_expr SLASHEQ expr { $$ = mk_node("ExprAssignDiv", 2, $1, $3); } +| nonblock_expr CARETEQ expr { $$ = mk_node("ExprAssignBitXor", 2, $1, $3); } +| nonblock_expr PERCENTEQ expr { $$ = mk_node("ExprAssignRem", 2, $1, $3); } +| nonblock_expr OROR expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiOr"), $1, $3); } +| nonblock_expr ANDAND expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAnd"), $1, $3); } +| nonblock_expr EQEQ expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiEq"), $1, $3); } +| nonblock_expr NE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiNe"), $1, $3); } +| nonblock_expr '<' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLt"), $1, $3); } +| nonblock_expr '>' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGt"), $1, $3); } +| nonblock_expr LE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLe"), $1, $3); } +| nonblock_expr GE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGe"), $1, $3); } +| nonblock_expr '|' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitOr"), $1, $3); } +| nonblock_expr '^' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitXor"), $1, $3); } +| nonblock_expr '&' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitAnd"), $1, $3); } +| nonblock_expr SHL expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShl"), $1, $3); } +| nonblock_expr SHR expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShr"), $1, $3); } +| nonblock_expr '+' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAdd"), $1, $3); } +| nonblock_expr '-' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiSub"), $1, $3); } +| nonblock_expr '*' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiMul"), $1, $3); } +| nonblock_expr '/' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiDiv"), $1, $3); } +| nonblock_expr '%' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiRem"), $1, $3); } +| nonblock_expr DOTDOT { $$ = mk_node("ExprRange", 2, $1, mk_none()); } +| nonblock_expr DOTDOT expr { $$ = mk_node("ExprRange", 2, $1, $3); } +| DOTDOT expr { $$ = mk_node("ExprRange", 2, mk_none(), $2); } +| DOTDOT { $$ = mk_node("ExprRange", 2, mk_none(), mk_none()); } +| nonblock_expr AS ty { $$ = mk_node("ExprCast", 2, $1, $3); } +| BOX nonparen_expr { $$ = mk_node("ExprBox", 1, $2); } +| %prec BOXPLACE BOX '(' maybe_expr ')' nonblock_expr { $$ = 
mk_node("ExprBox", 2, $3, $5); } +| expr_qualified_path +| nonblock_prefix_expr +; + +expr +: lit { $$ = mk_node("ExprLit", 1, $1); } +| %prec IDENT + path_expr { $$ = mk_node("ExprPath", 1, $1); } +| SELF { $$ = mk_node("ExprPath", 1, mk_node("ident", 1, mk_atom("self"))); } +| macro_expr { $$ = mk_node("ExprMac", 1, $1); } +| path_expr '{' struct_expr_fields '}' { $$ = mk_node("ExprStruct", 2, $1, $3); } +| expr '.' path_generic_args_with_colons { $$ = mk_node("ExprField", 2, $1, $3); } +| expr '.' LIT_INTEGER { $$ = mk_node("ExprTupleIndex", 1, $1); } +| expr '[' maybe_expr ']' { $$ = mk_node("ExprIndex", 2, $1, $3); } +| expr '(' maybe_exprs ')' { $$ = mk_node("ExprCall", 2, $1, $3); } +| '(' maybe_exprs ')' { $$ = mk_node("ExprParen", 1, $2); } +| '[' vec_expr ']' { $$ = mk_node("ExprVec", 1, $2); } +| CONTINUE { $$ = mk_node("ExprAgain", 0); } +| CONTINUE ident { $$ = mk_node("ExprAgain", 1, $2); } +| RETURN { $$ = mk_node("ExprRet", 0); } +| RETURN expr { $$ = mk_node("ExprRet", 1, $2); } +| BREAK { $$ = mk_node("ExprBreak", 0); } +| BREAK ident { $$ = mk_node("ExprBreak", 1, $2); } +| expr LARROW expr { $$ = mk_node("ExprInPlace", 2, $1, $3); } +| expr '=' expr { $$ = mk_node("ExprAssign", 2, $1, $3); } +| expr SHLEQ expr { $$ = mk_node("ExprAssignShl", 2, $1, $3); } +| expr SHREQ expr { $$ = mk_node("ExprAssignShr", 2, $1, $3); } +| expr MINUSEQ expr { $$ = mk_node("ExprAssignSub", 2, $1, $3); } +| expr ANDEQ expr { $$ = mk_node("ExprAssignBitAnd", 2, $1, $3); } +| expr OREQ expr { $$ = mk_node("ExprAssignBitOr", 2, $1, $3); } +| expr PLUSEQ expr { $$ = mk_node("ExprAssignAdd", 2, $1, $3); } +| expr STAREQ expr { $$ = mk_node("ExprAssignMul", 2, $1, $3); } +| expr SLASHEQ expr { $$ = mk_node("ExprAssignDiv", 2, $1, $3); } +| expr CARETEQ expr { $$ = mk_node("ExprAssignBitXor", 2, $1, $3); } +| expr PERCENTEQ expr { $$ = mk_node("ExprAssignRem", 2, $1, $3); } +| expr OROR expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiOr"), $1, $3); } +| expr ANDAND expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAnd"), $1, $3); } +| expr EQEQ expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiEq"), $1, $3); } +| expr NE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiNe"), $1, $3); } +| expr '<' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLt"), $1, $3); } +| expr '>' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGt"), $1, $3); } +| expr LE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLe"), $1, $3); } +| expr GE expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGe"), $1, $3); } +| expr '|' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitOr"), $1, $3); } +| expr '^' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitXor"), $1, $3); } +| expr '&' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitAnd"), $1, $3); } +| expr SHL expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShl"), $1, $3); } +| expr SHR expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShr"), $1, $3); } +| expr '+' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAdd"), $1, $3); } +| expr '-' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiSub"), $1, $3); } +| expr '*' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiMul"), $1, $3); } +| expr '/' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiDiv"), $1, $3); } +| expr '%' expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiRem"), $1, $3); } +| expr DOTDOT { $$ = mk_node("ExprRange", 2, $1, mk_none()); } +| expr DOTDOT expr { $$ = mk_node("ExprRange", 2, $1, $3); } +| DOTDOT expr { $$ = mk_node("ExprRange", 2, mk_none(), $2); } +| DOTDOT { $$ = mk_node("ExprRange", 2, mk_none(), 
mk_none()); } +| expr AS ty { $$ = mk_node("ExprCast", 2, $1, $3); } +| BOX nonparen_expr { $$ = mk_node("ExprBox", 1, $2); } +| %prec BOXPLACE BOX '(' maybe_expr ')' expr { $$ = mk_node("ExprBox", 2, $3, $5); } +| expr_qualified_path +| block_expr +| block +| nonblock_prefix_expr +; + +nonparen_expr +: lit { $$ = mk_node("ExprLit", 1, $1); } +| %prec IDENT + path_expr { $$ = mk_node("ExprPath", 1, $1); } +| SELF { $$ = mk_node("ExprPath", 1, mk_node("ident", 1, mk_atom("self"))); } +| macro_expr { $$ = mk_node("ExprMac", 1, $1); } +| path_expr '{' struct_expr_fields '}' { $$ = mk_node("ExprStruct", 2, $1, $3); } +| nonparen_expr '.' path_generic_args_with_colons { $$ = mk_node("ExprField", 2, $1, $3); } +| nonparen_expr '.' LIT_INTEGER { $$ = mk_node("ExprTupleIndex", 1, $1); } +| nonparen_expr '[' maybe_expr ']' { $$ = mk_node("ExprIndex", 2, $1, $3); } +| nonparen_expr '(' maybe_exprs ')' { $$ = mk_node("ExprCall", 2, $1, $3); } +| '[' vec_expr ']' { $$ = mk_node("ExprVec", 1, $2); } +| CONTINUE { $$ = mk_node("ExprAgain", 0); } +| CONTINUE ident { $$ = mk_node("ExprAgain", 1, $2); } +| RETURN { $$ = mk_node("ExprRet", 0); } +| RETURN expr { $$ = mk_node("ExprRet", 1, $2); } +| BREAK { $$ = mk_node("ExprBreak", 0); } +| BREAK ident { $$ = mk_node("ExprBreak", 1, $2); } +| nonparen_expr LARROW nonparen_expr { $$ = mk_node("ExprInPlace", 2, $1, $3); } +| nonparen_expr '=' nonparen_expr { $$ = mk_node("ExprAssign", 2, $1, $3); } +| nonparen_expr SHLEQ nonparen_expr { $$ = mk_node("ExprAssignShl", 2, $1, $3); } +| nonparen_expr SHREQ nonparen_expr { $$ = mk_node("ExprAssignShr", 2, $1, $3); } +| nonparen_expr MINUSEQ nonparen_expr { $$ = mk_node("ExprAssignSub", 2, $1, $3); } +| nonparen_expr ANDEQ nonparen_expr { $$ = mk_node("ExprAssignBitAnd", 2, $1, $3); } +| nonparen_expr OREQ nonparen_expr { $$ = mk_node("ExprAssignBitOr", 2, $1, $3); } +| nonparen_expr PLUSEQ nonparen_expr { $$ = mk_node("ExprAssignAdd", 2, $1, $3); } +| nonparen_expr STAREQ nonparen_expr { $$ = mk_node("ExprAssignMul", 2, $1, $3); } +| nonparen_expr SLASHEQ nonparen_expr { $$ = mk_node("ExprAssignDiv", 2, $1, $3); } +| nonparen_expr CARETEQ nonparen_expr { $$ = mk_node("ExprAssignBitXor", 2, $1, $3); } +| nonparen_expr PERCENTEQ nonparen_expr { $$ = mk_node("ExprAssignRem", 2, $1, $3); } +| nonparen_expr OROR nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiOr"), $1, $3); } +| nonparen_expr ANDAND nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAnd"), $1, $3); } +| nonparen_expr EQEQ nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiEq"), $1, $3); } +| nonparen_expr NE nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiNe"), $1, $3); } +| nonparen_expr '<' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLt"), $1, $3); } +| nonparen_expr '>' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGt"), $1, $3); } +| nonparen_expr LE nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiLe"), $1, $3); } +| nonparen_expr GE nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiGe"), $1, $3); } +| nonparen_expr '|' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitOr"), $1, $3); } +| nonparen_expr '^' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitXor"), $1, $3); } +| nonparen_expr '&' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitAnd"), $1, $3); } +| nonparen_expr SHL nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShl"), $1, $3); } +| nonparen_expr SHR nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiShr"), $1, $3); } 
+| nonparen_expr '+' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiAdd"), $1, $3); } +| nonparen_expr '-' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiSub"), $1, $3); } +| nonparen_expr '*' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiMul"), $1, $3); } +| nonparen_expr '/' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiDiv"), $1, $3); } +| nonparen_expr '%' nonparen_expr { $$ = mk_node("ExprBinary", 3, mk_atom("BiRem"), $1, $3); } +| nonparen_expr DOTDOT { $$ = mk_node("ExprRange", 2, $1, mk_none()); } +| nonparen_expr DOTDOT nonparen_expr { $$ = mk_node("ExprRange", 2, $1, $3); } +| DOTDOT nonparen_expr { $$ = mk_node("ExprRange", 2, mk_none(), $2); } +| DOTDOT { $$ = mk_node("ExprRange", 2, mk_none(), mk_none()); } +| nonparen_expr AS ty { $$ = mk_node("ExprCast", 2, $1, $3); } +| BOX nonparen_expr { $$ = mk_node("ExprBox", 1, $2); } +| %prec BOXPLACE BOX '(' maybe_expr ')' expr { $$ = mk_node("ExprBox", 1, $3, $5); } +| expr_qualified_path +| block_expr +| block +| nonblock_prefix_expr +; + +expr_nostruct +: lit { $$ = mk_node("ExprLit", 1, $1); } +| %prec IDENT + path_expr { $$ = mk_node("ExprPath", 1, $1); } +| SELF { $$ = mk_node("ExprPath", 1, mk_node("ident", 1, mk_atom("self"))); } +| macro_expr { $$ = mk_node("ExprMac", 1, $1); } +| expr_nostruct '.' path_generic_args_with_colons { $$ = mk_node("ExprField", 2, $1, $3); } +| expr_nostruct '.' LIT_INTEGER { $$ = mk_node("ExprTupleIndex", 1, $1); } +| expr_nostruct '[' maybe_expr ']' { $$ = mk_node("ExprIndex", 2, $1, $3); } +| expr_nostruct '(' maybe_exprs ')' { $$ = mk_node("ExprCall", 2, $1, $3); } +| '[' vec_expr ']' { $$ = mk_node("ExprVec", 1, $2); } +| '(' maybe_exprs ')' { $$ = mk_node("ExprParen", 1, $2); } +| CONTINUE { $$ = mk_node("ExprAgain", 0); } +| CONTINUE ident { $$ = mk_node("ExprAgain", 1, $2); } +| RETURN { $$ = mk_node("ExprRet", 0); } +| RETURN expr { $$ = mk_node("ExprRet", 1, $2); } +| BREAK { $$ = mk_node("ExprBreak", 0); } +| BREAK ident { $$ = mk_node("ExprBreak", 1, $2); } +| expr_nostruct LARROW expr_nostruct { $$ = mk_node("ExprInPlace", 2, $1, $3); } +| expr_nostruct '=' expr_nostruct { $$ = mk_node("ExprAssign", 2, $1, $3); } +| expr_nostruct SHLEQ expr_nostruct { $$ = mk_node("ExprAssignShl", 2, $1, $3); } +| expr_nostruct SHREQ expr_nostruct { $$ = mk_node("ExprAssignShr", 2, $1, $3); } +| expr_nostruct MINUSEQ expr_nostruct { $$ = mk_node("ExprAssignSub", 2, $1, $3); } +| expr_nostruct ANDEQ expr_nostruct { $$ = mk_node("ExprAssignBitAnd", 2, $1, $3); } +| expr_nostruct OREQ expr_nostruct { $$ = mk_node("ExprAssignBitOr", 2, $1, $3); } +| expr_nostruct PLUSEQ expr_nostruct { $$ = mk_node("ExprAssignAdd", 2, $1, $3); } +| expr_nostruct STAREQ expr_nostruct { $$ = mk_node("ExprAssignMul", 2, $1, $3); } +| expr_nostruct SLASHEQ expr_nostruct { $$ = mk_node("ExprAssignDiv", 2, $1, $3); } +| expr_nostruct CARETEQ expr_nostruct { $$ = mk_node("ExprAssignBitXor", 2, $1, $3); } +| expr_nostruct PERCENTEQ expr_nostruct { $$ = mk_node("ExprAssignRem", 2, $1, $3); } +| expr_nostruct OROR expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiOr"), $1, $3); } +| expr_nostruct ANDAND expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiAnd"), $1, $3); } +| expr_nostruct EQEQ expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiEq"), $1, $3); } +| expr_nostruct NE expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiNe"), $1, $3); } +| expr_nostruct '<' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiLt"), $1, $3); } +| expr_nostruct '>' 
expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiGt"), $1, $3); } +| expr_nostruct LE expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiLe"), $1, $3); } +| expr_nostruct GE expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiGe"), $1, $3); } +| expr_nostruct '|' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitOr"), $1, $3); } +| expr_nostruct '^' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitXor"), $1, $3); } +| expr_nostruct '&' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiBitAnd"), $1, $3); } +| expr_nostruct SHL expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiShl"), $1, $3); } +| expr_nostruct SHR expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiShr"), $1, $3); } +| expr_nostruct '+' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiAdd"), $1, $3); } +| expr_nostruct '-' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiSub"), $1, $3); } +| expr_nostruct '*' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiMul"), $1, $3); } +| expr_nostruct '/' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiDiv"), $1, $3); } +| expr_nostruct '%' expr_nostruct { $$ = mk_node("ExprBinary", 3, mk_atom("BiRem"), $1, $3); } +| expr_nostruct DOTDOT %prec RANGE { $$ = mk_node("ExprRange", 2, $1, mk_none()); } +| expr_nostruct DOTDOT expr_nostruct { $$ = mk_node("ExprRange", 2, $1, $3); } +| DOTDOT expr_nostruct { $$ = mk_node("ExprRange", 2, mk_none(), $2); } +| DOTDOT { $$ = mk_node("ExprRange", 2, mk_none(), mk_none()); } +| expr_nostruct AS ty { $$ = mk_node("ExprCast", 2, $1, $3); } +| BOX nonparen_expr { $$ = mk_node("ExprBox", 1, $2); } +| %prec BOXPLACE BOX '(' maybe_expr ')' expr_nostruct { $$ = mk_node("ExprBox", 1, $3, $5); } +| expr_qualified_path +| block_expr +| block +| nonblock_prefix_expr_nostruct +; + +nonblock_prefix_expr_nostruct +: '-' expr_nostruct { $$ = mk_node("ExprUnary", 2, mk_atom("UnNeg"), $2); } +| '!' expr_nostruct { $$ = mk_node("ExprUnary", 2, mk_atom("UnNot"), $2); } +| '*' expr_nostruct { $$ = mk_node("ExprUnary", 2, mk_atom("UnDeref"), $2); } +| '&' maybe_mut expr_nostruct { $$ = mk_node("ExprAddrOf", 2, $2, $3); } +| ANDAND maybe_mut expr_nostruct { $$ = mk_node("ExprAddrOf", 1, mk_node("ExprAddrOf", 2, $2, $3)); } +| lambda_expr_nostruct +| MOVE lambda_expr_nostruct { $$ = $2; } +| proc_expr_nostruct +; + +nonblock_prefix_expr +: '-' expr { $$ = mk_node("ExprUnary", 2, mk_atom("UnNeg"), $2); } +| '!' 
expr { $$ = mk_node("ExprUnary", 2, mk_atom("UnNot"), $2); } +| '*' expr { $$ = mk_node("ExprUnary", 2, mk_atom("UnDeref"), $2); } +| '&' maybe_mut expr { $$ = mk_node("ExprAddrOf", 2, $2, $3); } +| ANDAND maybe_mut expr { $$ = mk_node("ExprAddrOf", 1, mk_node("ExprAddrOf", 2, $2, $3)); } +| lambda_expr +| MOVE lambda_expr { $$ = $2; } +| proc_expr +; + +expr_qualified_path +: '<' ty_sum maybe_as_trait_ref '>' MOD_SEP ident maybe_qpath_params +{ + $$ = mk_node("ExprQualifiedPath", 4, $2, $3, $6, $7); +} +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident maybe_as_trait_ref '>' MOD_SEP ident +{ + $$ = mk_node("ExprQualifiedPath", 3, mk_node("ExprQualifiedPath", 3, $2, $3, $6), $7, $10); +} +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident generic_args maybe_as_trait_ref '>' MOD_SEP ident +{ + $$ = mk_node("ExprQualifiedPath", 3, mk_node("ExprQualifiedPath", 4, $2, $3, $6, $7), $8, $11); +} +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident maybe_as_trait_ref '>' MOD_SEP ident generic_args +{ + $$ = mk_node("ExprQualifiedPath", 4, mk_node("ExprQualifiedPath", 3, $2, $3, $6), $7, $10, $11); +} +| SHL ty_sum maybe_as_trait_ref '>' MOD_SEP ident generic_args maybe_as_trait_ref '>' MOD_SEP ident generic_args +{ + $$ = mk_node("ExprQualifiedPath", 4, mk_node("ExprQualifiedPath", 4, $2, $3, $6, $7), $8, $11, $12); +} + +maybe_qpath_params +: MOD_SEP generic_args { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +maybe_as_trait_ref +: AS trait_ref { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +lambda_expr +: %prec LAMBDA + OROR ret_ty expr { $$ = mk_node("ExprFnBlock", 3, mk_none(), $2, $3); } +| %prec LAMBDA + '|' maybe_unboxed_closure_kind '|' ret_ty expr { $$ = mk_node("ExprFnBlock", 3, mk_none(), $4, $5); } +| %prec LAMBDA + '|' inferrable_params '|' ret_ty expr { $$ = mk_node("ExprFnBlock", 3, $2, $4, $5); } +| %prec LAMBDA + '|' '&' maybe_mut ':' inferrable_params '|' ret_ty expr { $$ = mk_node("ExprFnBlock", 3, $5, $7, $8); } +| %prec LAMBDA + '|' ':' inferrable_params '|' ret_ty expr { $$ = mk_node("ExprFnBlock", 3, $3, $5, $6); } +; + +lambda_expr_nostruct +: %prec LAMBDA + OROR expr_nostruct { $$ = mk_node("ExprFnBlock", 2, mk_none(), $2); } +| %prec LAMBDA + '|' maybe_unboxed_closure_kind '|' expr_nostruct { $$ = mk_node("ExprFnBlock", 2, mk_none(), $4); } +| %prec LAMBDA + '|' inferrable_params '|' expr_nostruct { $$ = mk_node("ExprFnBlock", 2, $2, $4); } +| %prec LAMBDA + '|' '&' maybe_mut ':' inferrable_params '|' expr_nostruct { $$ = mk_node("ExprFnBlock", 2, $5, $7); } +| %prec LAMBDA + '|' ':' inferrable_params '|' expr_nostruct { $$ = mk_node("ExprFnBlock", 2, $3, $5); } + +; + +proc_expr +: %prec LAMBDA + PROC '(' ')' expr { $$ = mk_node("ExprProc", 2, mk_none(), $4); } +| %prec LAMBDA + PROC '(' inferrable_params ')' expr { $$ = mk_node("ExprProc", 2, $3, $5); } +; + +proc_expr_nostruct +: %prec LAMBDA + PROC '(' ')' expr_nostruct { $$ = mk_node("ExprProc", 2, mk_none(), $4); } +| %prec LAMBDA + PROC '(' inferrable_params ')' expr_nostruct { $$ = mk_node("ExprProc", 2, $3, $5); } +; + +vec_expr +: maybe_exprs +| exprs ';' expr { $$ = mk_node("VecRepeat", 2, $1, $3); } +; + +struct_expr_fields +: field_inits +| field_inits ',' +| maybe_field_inits default_field_init { $$ = ext_node($1, 1, $2); } +; + +maybe_field_inits +: field_inits +| field_inits ',' +| %empty { $$ = mk_none(); } +; + +field_inits +: field_init { $$ = mk_node("FieldInits", 1, $1); } +| field_inits ',' field_init { $$ = ext_node($1, 1, $3); } +; + +field_init +: ident ':' expr { $$ = mk_node("FieldInit", 2, $1, 
$3); } +; + +default_field_init +: DOTDOT expr { $$ = mk_node("DefaultFieldInit", 1, $2); } +; + +block_expr +: expr_match +| expr_if +| expr_if_let +| expr_while +| expr_while_let +| expr_loop +| expr_for +| UNSAFE block { $$ = mk_node("UnsafeBlock", 1, $2); } +| path_expr '!' maybe_ident braces_delimited_token_trees { $$ = mk_node("Macro", 3, $1, $3, $4); } +; + +full_block_expr +: block_expr +| full_block_expr '.' path_generic_args_with_colons %prec IDENT { $$ = mk_node("ExprField", 2, $1, $3); } +| full_block_expr '.' path_generic_args_with_colons '[' maybe_expr ']' { $$ = mk_node("ExprIndex", 3, $1, $3, $5); } +| full_block_expr '.' path_generic_args_with_colons '(' maybe_exprs ')' { $$ = mk_node("ExprCall", 3, $1, $3, $5); } +| full_block_expr '.' LIT_INTEGER { $$ = mk_node("ExprTupleIndex", 1, $1); } +; + +expr_match +: MATCH expr_nostruct '{' '}' { $$ = mk_node("ExprMatch", 1, $2); } +| MATCH expr_nostruct '{' match_clauses '}' { $$ = mk_node("ExprMatch", 2, $2, $4); } +| MATCH expr_nostruct '{' match_clauses nonblock_match_clause '}' { $$ = mk_node("ExprMatch", 2, $2, ext_node($4, 1, $5)); } +| MATCH expr_nostruct '{' nonblock_match_clause '}' { $$ = mk_node("ExprMatch", 2, $2, mk_node("Arms", 1, $4)); } +; + +match_clauses +: match_clause { $$ = mk_node("Arms", 1, $1); } +| match_clauses match_clause { $$ = ext_node($1, 1, $2); } +; + +match_clause +: nonblock_match_clause ',' +| block_match_clause +| block_match_clause ',' +; + +nonblock_match_clause +: maybe_outer_attrs pats_or maybe_guard FAT_ARROW nonblock_expr { $$ = mk_node("Arm", 4, $1, $2, $3, $5); } +| maybe_outer_attrs pats_or maybe_guard FAT_ARROW full_block_expr { $$ = mk_node("Arm", 4, $1, $2, $3, $5); } +; + +block_match_clause +: maybe_outer_attrs pats_or maybe_guard FAT_ARROW block { $$ = mk_node("Arm", 4, $1, $2, $3, $5); } +; + +maybe_guard +: IF expr_nostruct { $$ = $2; } +| %empty { $$ = mk_none(); } +; + +expr_if +: IF expr_nostruct block { $$ = mk_node("ExprIf", 2, $2, $3); } +| IF expr_nostruct block ELSE block_or_if { $$ = mk_node("ExprIf", 3, $2, $3, $5); } +; + +expr_if_let +: IF LET pat '=' expr_nostruct block { $$ = mk_node("ExprIfLet", 3, $3, $5, $6); } +| IF LET pat '=' expr_nostruct block ELSE block_or_if { $$ = mk_node("ExprIfLet", 4, $3, $5, $6, $8); } +; + +block_or_if +: block +| expr_if +| expr_if_let +; + +expr_while +: maybe_label WHILE expr_nostruct block { $$ = mk_node("ExprWhile", 3, $1, $3, $4); } +; + +expr_while_let +: maybe_label WHILE LET pat '=' expr_nostruct block { $$ = mk_node("ExprWhileLet", 4, $1, $4, $6, $7); } +; + +expr_loop +: maybe_label LOOP block { $$ = mk_node("ExprLoop", 2, $1, $3); } +; + +expr_for +: maybe_label FOR pat IN expr_nostruct block { $$ = mk_node("ExprForLoop", 4, $1, $3, $5, $6); } +; + +maybe_label +: lifetime ':' +| %empty { $$ = mk_none(); } +; + +let +: LET pat maybe_ty_ascription maybe_init_expr ';' { $$ = mk_node("DeclLocal", 3, $2, $3, $4); } +; + +//////////////////////////////////////////////////////////////////////// +// Part 5: Macros and misc. 
rules +//////////////////////////////////////////////////////////////////////// + +lit +: LIT_BYTE { $$ = mk_node("LitByte", 1, mk_atom(yytext)); } +| LIT_CHAR { $$ = mk_node("LitChar", 1, mk_atom(yytext)); } +| LIT_INTEGER { $$ = mk_node("LitInteger", 1, mk_atom(yytext)); } +| LIT_FLOAT { $$ = mk_node("LitFloat", 1, mk_atom(yytext)); } +| TRUE { $$ = mk_node("LitBool", 1, mk_atom(yytext)); } +| FALSE { $$ = mk_node("LitBool", 1, mk_atom(yytext)); } +| str +; + +str +: LIT_STR { $$ = mk_node("LitStr", 1, mk_atom(yytext), mk_atom("CookedStr")); } +| LIT_STR_RAW { $$ = mk_node("LitStr", 1, mk_atom(yytext), mk_atom("RawStr")); } +| LIT_BYTE_STR { $$ = mk_node("LitByteStr", 1, mk_atom(yytext), mk_atom("ByteStr")); } +| LIT_BYTE_STR_RAW { $$ = mk_node("LitByteStr", 1, mk_atom(yytext), mk_atom("RawByteStr")); } +; + +maybe_ident +: %empty { $$ = mk_none(); } +| ident +; + +ident +: IDENT { $$ = mk_node("ident", 1, mk_atom(yytext)); } +; + +unpaired_token +: SHL { $$ = mk_atom(yytext); } +| SHR { $$ = mk_atom(yytext); } +| LE { $$ = mk_atom(yytext); } +| EQEQ { $$ = mk_atom(yytext); } +| NE { $$ = mk_atom(yytext); } +| GE { $$ = mk_atom(yytext); } +| ANDAND { $$ = mk_atom(yytext); } +| OROR { $$ = mk_atom(yytext); } +| LARROW { $$ = mk_atom(yytext); } +| SHLEQ { $$ = mk_atom(yytext); } +| SHREQ { $$ = mk_atom(yytext); } +| MINUSEQ { $$ = mk_atom(yytext); } +| ANDEQ { $$ = mk_atom(yytext); } +| OREQ { $$ = mk_atom(yytext); } +| PLUSEQ { $$ = mk_atom(yytext); } +| STAREQ { $$ = mk_atom(yytext); } +| SLASHEQ { $$ = mk_atom(yytext); } +| CARETEQ { $$ = mk_atom(yytext); } +| PERCENTEQ { $$ = mk_atom(yytext); } +| DOTDOT { $$ = mk_atom(yytext); } +| DOTDOTDOT { $$ = mk_atom(yytext); } +| MOD_SEP { $$ = mk_atom(yytext); } +| RARROW { $$ = mk_atom(yytext); } +| FAT_ARROW { $$ = mk_atom(yytext); } +| LIT_BYTE { $$ = mk_atom(yytext); } +| LIT_CHAR { $$ = mk_atom(yytext); } +| LIT_INTEGER { $$ = mk_atom(yytext); } +| LIT_FLOAT { $$ = mk_atom(yytext); } +| LIT_STR { $$ = mk_atom(yytext); } +| LIT_STR_RAW { $$ = mk_atom(yytext); } +| LIT_BYTE_STR { $$ = mk_atom(yytext); } +| LIT_BYTE_STR_RAW { $$ = mk_atom(yytext); } +| IDENT { $$ = mk_atom(yytext); } +| UNDERSCORE { $$ = mk_atom(yytext); } +| LIFETIME { $$ = mk_atom(yytext); } +| SELF { $$ = mk_atom(yytext); } +| STATIC { $$ = mk_atom(yytext); } +| AS { $$ = mk_atom(yytext); } +| BREAK { $$ = mk_atom(yytext); } +| CRATE { $$ = mk_atom(yytext); } +| ELSE { $$ = mk_atom(yytext); } +| ENUM { $$ = mk_atom(yytext); } +| EXTERN { $$ = mk_atom(yytext); } +| FALSE { $$ = mk_atom(yytext); } +| FN { $$ = mk_atom(yytext); } +| FOR { $$ = mk_atom(yytext); } +| IF { $$ = mk_atom(yytext); } +| IMPL { $$ = mk_atom(yytext); } +| IN { $$ = mk_atom(yytext); } +| LET { $$ = mk_atom(yytext); } +| LOOP { $$ = mk_atom(yytext); } +| MATCH { $$ = mk_atom(yytext); } +| MOD { $$ = mk_atom(yytext); } +| MOVE { $$ = mk_atom(yytext); } +| MUT { $$ = mk_atom(yytext); } +| PRIV { $$ = mk_atom(yytext); } +| PUB { $$ = mk_atom(yytext); } +| REF { $$ = mk_atom(yytext); } +| RETURN { $$ = mk_atom(yytext); } +| STRUCT { $$ = mk_atom(yytext); } +| TRUE { $$ = mk_atom(yytext); } +| TRAIT { $$ = mk_atom(yytext); } +| TYPE { $$ = mk_atom(yytext); } +| UNSAFE { $$ = mk_atom(yytext); } +| USE { $$ = mk_atom(yytext); } +| WHILE { $$ = mk_atom(yytext); } +| CONTINUE { $$ = mk_atom(yytext); } +| PROC { $$ = mk_atom(yytext); } +| BOX { $$ = mk_atom(yytext); } +| CONST { $$ = mk_atom(yytext); } +| WHERE { $$ = mk_atom(yytext); } +| TYPEOF { $$ = mk_atom(yytext); } +| INNER_DOC_COMMENT { $$ = 
mk_atom(yytext); } +| OUTER_DOC_COMMENT { $$ = mk_atom(yytext); } +| SHEBANG { $$ = mk_atom(yytext); } +| STATIC_LIFETIME { $$ = mk_atom(yytext); } +| ';' { $$ = mk_atom(yytext); } +| ',' { $$ = mk_atom(yytext); } +| '.' { $$ = mk_atom(yytext); } +| '@' { $$ = mk_atom(yytext); } +| '#' { $$ = mk_atom(yytext); } +| '~' { $$ = mk_atom(yytext); } +| ':' { $$ = mk_atom(yytext); } +| '$' { $$ = mk_atom(yytext); } +| '=' { $$ = mk_atom(yytext); } +| '?' { $$ = mk_atom(yytext); } +| '!' { $$ = mk_atom(yytext); } +| '<' { $$ = mk_atom(yytext); } +| '>' { $$ = mk_atom(yytext); } +| '-' { $$ = mk_atom(yytext); } +| '&' { $$ = mk_atom(yytext); } +| '|' { $$ = mk_atom(yytext); } +| '+' { $$ = mk_atom(yytext); } +| '*' { $$ = mk_atom(yytext); } +| '/' { $$ = mk_atom(yytext); } +| '^' { $$ = mk_atom(yytext); } +| '%' { $$ = mk_atom(yytext); } +; + +token_trees +: %empty { $$ = mk_node("TokenTrees", 0); } +| token_trees token_tree { $$ = ext_node($1, 1, $2); } +; + +token_tree +: delimited_token_trees +| unpaired_token { $$ = mk_node("TTTok", 1, $1); } +; + +delimited_token_trees +: parens_delimited_token_trees +| braces_delimited_token_trees +| brackets_delimited_token_trees +; + +parens_delimited_token_trees +: '(' token_trees ')' +{ + $$ = mk_node("TTDelim", 3, + mk_node("TTTok", 1, mk_atom("(")), + $2, + mk_node("TTTok", 1, mk_atom(")"))); +} +; + +braces_delimited_token_trees +: '{' token_trees '}' +{ + $$ = mk_node("TTDelim", 3, + mk_node("TTTok", 1, mk_atom("{")), + $2, + mk_node("TTTok", 1, mk_atom("}"))); +} +; + +brackets_delimited_token_trees +: '[' token_trees ']' +{ + $$ = mk_node("TTDelim", 3, + mk_node("TTTok", 1, mk_atom("[")), + $2, + mk_node("TTTok", 1, mk_atom("]"))); +} +; diff --git a/src/grammar/raw-string-literal-ambiguity.md b/src/grammar/raw-string-literal-ambiguity.md new file mode 100644 index 0000000000..c909f23331 --- /dev/null +++ b/src/grammar/raw-string-literal-ambiguity.md @@ -0,0 +1,64 @@ +Rust's lexical grammar is not context-free. Raw string literals are the source +of the problem. Informally, a raw string literal is an `r`, followed by `N` +hashes (where N can be zero), a quote, any characters, then a quote followed +by `N` hashes. Critically, once inside the first pair of quotes, +another quote cannot be followed by `N` consecutive hashes. e.g. +`r###""###"###` is invalid. + +This grammar describes this as best possible: + + R -> 'r' S + S -> '"' B '"' + S -> '#' S '#' + B -> . B + B -> ε + +Where `.` represents any character, and `ε` the empty string. Consider the +string `r#""#"#`. This string is not a valid raw string literal, but can be +accepted as one by the above grammar, using the derivation: + + R : #""#"# + S : ""#" + S : "# + B : # + B : ε + +(Where `T : U` means the rule `T` is applied, and `U` is the remainder of the +string.) The difficulty arises from the fact that it is fundamentally +context-sensitive. In particular, the context needed is the number of hashes. + +To prove that Rust's string literals are not context-free, we will use +the fact that context-free languages are closed under intersection with +regular languages, and the +[pumping lemma for context-free languages](https://en.wikipedia.org/wiki/Pumping_lemma_for_context-free_languages). + +Consider the regular language `R = r#+""#*"#+`. If Rust's raw string literals are +context-free, then their intersection with `R`, `R'`, should also be context-free. +Therefore, to prove that raw string literals are not context-free, +it is sufficient to prove that `R'` is not context-free. 
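
As an aside, the practical consequence of this context-sensitivity is that a scanner has to carry the number of opening hashes as state. The following is a minimal, hypothetical sketch of such a hash-counting scanner (it is not the lexer rustc actually uses); the counter `hashes` is exactly the piece of context that a context-free rule cannot carry:

```rust
/// Scan a raw string literal at the start of `input`, returning the number
/// of bytes it occupies, or `None` if no raw string literal starts here.
/// This is an illustrative sketch only, based on the informal definition above.
fn scan_raw_string(input: &str) -> Option<usize> {
    let mut chars = input.char_indices().peekable();

    // A raw string literal starts with `r`.
    match chars.next() {
        Some((_, 'r')) => {}
        _ => return None,
    }

    // Count the opening hashes; this count must be remembered.
    let mut hashes = 0usize;
    while let Some(&(_, '#')) = chars.peek() {
        hashes += 1;
        chars.next();
    }

    // Then an opening quote.
    match chars.next() {
        Some((_, '"')) => {}
        _ => return None,
    }

    // Scan the body: a quote only closes the literal when it is followed by
    // exactly `hashes` hashes.
    while let Some((i, c)) = chars.next() {
        if c == '"' {
            let mut trailing = 0;
            while trailing < hashes {
                match chars.peek() {
                    Some(&(_, '#')) => {
                        trailing += 1;
                        chars.next();
                    }
                    _ => break,
                }
            }
            if trailing == hashes {
                // `"` and each `#` are one byte, so this is the literal's length.
                return Some(i + 1 + hashes);
            }
        }
    }
    None
}
```

Under this sketch, `r##"a"#"##` is consumed as one 10-byte literal (the single-hash `"#` inside does not close it), while for `r#""#"#` the scanner stops after the first 5 bytes, `r#""#`, leaving `"#` behind, which matches the informal definition above.
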
+ +The language `R'` is `{r#^n""#^m"#^n | m < n}`. + +Assume `R'` *is* context-free. Then `R'` has some pumping length `p > 0` for which +the pumping lemma applies. Consider the following string `s` in `R'`: + +`r#^p""#^{p-1}"#^p` + +e.g. for `p = 2`: `s = r##""#"##` + +Then `s = uvwxy` for some choice of `uvwxy` such that `vx` is non-empty, +`|vwx| < p+1`, and `uv^iwx^iy` is in `R'` for all `i >= 0`. + +Neither `v` nor `x` can contain a `"` or `r`, as the number of these characters +in any string in `R'` is fixed. So `v` and `x` contain only hashes. +Consequently, of the three sequences of hashes, `v` and `x` combined +can only pump two of them. +If we ever choose the central sequence of hashes, then one of the outer sequences +will not grow when we pump, leading to an imbalance between the outer sequences. +Therefore, we must pump both outer sequences of hashes. However, +there are `p+2` characters between these two sequences of hashes, and `|vwx|` must +be less than `p+1`. Therefore we have a contradiction, and `R'` must not be +context-free. + +Since `R'` is not context-free, it follows that the Rust's raw string literals +must not be context-free. diff --git a/src/grammar/testparser.py b/src/grammar/testparser.py new file mode 100755 index 0000000000..37be41b935 --- /dev/null +++ b/src/grammar/testparser.py @@ -0,0 +1,76 @@ +#!/usr/bin/env python +# +# Copyright 2015 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +# ignore-tidy-linelength + +import sys + +import os +import subprocess +import argparse + +# usage: testparser.py [-h] [-p PARSER [PARSER ...]] -s SOURCE_DIR + +# Parsers should read from stdin and return exit status 0 for a +# successful parse, and nonzero for an unsuccessful parse + +parser = argparse.ArgumentParser() +parser.add_argument('-p', '--parser', nargs='+') +parser.add_argument('-s', '--source-dir', nargs=1, required=True) +args = parser.parse_args(sys.argv[1:]) + +total = 0 +ok = {} +bad = {} +for parser in args.parser: + ok[parser] = 0 + bad[parser] = [] +devnull = open(os.devnull, 'w') +print("\n") + +for base, dirs, files in os.walk(args.source_dir[0]): + for f in filter(lambda p: p.endswith('.rs'), files): + p = os.path.join(base, f) + parse_fail = 'parse-fail' in p + if sys.version_info.major == 3: + lines = open(p, encoding='utf-8').readlines() + else: + lines = open(p).readlines() + if any('ignore-test' in line or 'ignore-lexer-test' in line for line in lines): + continue + total += 1 + for parser in args.parser: + if subprocess.call(parser, stdin=open(p), stderr=subprocess.STDOUT, stdout=devnull) == 0: + if parse_fail: + bad[parser].append(p) + else: + ok[parser] += 1 + else: + if parse_fail: + ok[parser] += 1 + else: + bad[parser].append(p) + parser_stats = ', '.join(['{}: {}'.format(parser, ok[parser]) for parser in args.parser]) + sys.stdout.write("\033[K\r total: {}, {}, scanned {}" + .format(total, os.path.relpath(parser_stats), os.path.relpath(p))) + +devnull.close() + +print("\n") + +for parser in args.parser: + filename = os.path.basename(parser) + '.bad' + print("writing {} files that did not yield the correct result with {} to {}".format(len(bad[parser]), parser, filename)) + with open(filename, "w") as f: + for p in bad[parser]: + f.write(p) + f.write("\n") diff 
--git a/src/grammar/tokens.h b/src/grammar/tokens.h new file mode 100644 index 0000000000..081bd05025 --- /dev/null +++ b/src/grammar/tokens.h @@ -0,0 +1,91 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +enum Token { + SHL = 257, // Parser generators reserve 0-256 for char literals + SHR, + LE, + EQEQ, + NE, + GE, + ANDAND, + OROR, + SHLEQ, + SHREQ, + MINUSEQ, + ANDEQ, + OREQ, + PLUSEQ, + STAREQ, + SLASHEQ, + CARETEQ, + PERCENTEQ, + DOTDOT, + DOTDOTDOT, + MOD_SEP, + RARROW, + FAT_ARROW, + LIT_BYTE, + LIT_CHAR, + LIT_INTEGER, + LIT_FLOAT, + LIT_STR, + LIT_STR_RAW, + LIT_BYTE_STR, + LIT_BYTE_STR_RAW, + IDENT, + UNDERSCORE, + LIFETIME, + + // keywords + SELF, + STATIC, + AS, + BREAK, + CRATE, + ELSE, + ENUM, + EXTERN, + FALSE, + FN, + FOR, + IF, + IMPL, + IN, + LET, + LOOP, + MATCH, + MOD, + MOVE, + MUT, + PRIV, + PUB, + REF, + RETURN, + STRUCT, + TRUE, + TRAIT, + TYPE, + UNSAFE, + USE, + WHILE, + CONTINUE, + PROC, + BOX, + CONST, + WHERE, + TYPEOF, + INNER_DOC_COMMENT, + OUTER_DOC_COMMENT, + + SHEBANG, + SHEBANG_LINE, + STATIC_LIFETIME +}; diff --git a/src/grammar/verify.rs b/src/grammar/verify.rs new file mode 100644 index 0000000000..919fc98e43 --- /dev/null +++ b/src/grammar/verify.rs @@ -0,0 +1,361 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![feature(plugin, rustc_private)] + +extern crate syntax; +extern crate syntax_pos; +extern crate rustc; + +#[macro_use] +extern crate log; + +use std::collections::HashMap; +use std::env; +use std::fs::File; +use std::io::{BufRead, Read}; +use std::path::Path; + +use syntax::parse::lexer; +use rustc::dep_graph::DepGraph; +use rustc::session::{self, config}; +use rustc::middle::cstore::DummyCrateStore; + +use std::rc::Rc; +use syntax::ast; +use syntax::codemap; +use syntax::parse::token::{self, BinOpToken, DelimToken, Lit, Token}; +use syntax::parse::lexer::TokenAndSpan; +use syntax_pos::Pos; + +use syntax::symbol::{Symbol, keywords}; + +fn parse_token_list(file: &str) -> HashMap { + fn id() -> token::Token { + Token::Ident(ast::Ident::with_empty_ctxt(keywords::Invalid.name())) + } + + let mut res = HashMap::new(); + + res.insert("-1".to_string(), Token::Eof); + + for line in file.split('\n') { + let eq = match line.trim().rfind('=') { + Some(val) => val, + None => continue + }; + + let val = &line[..eq]; + let num = &line[eq + 1..]; + + let tok = match val { + "SHR" => Token::BinOp(BinOpToken::Shr), + "DOLLAR" => Token::Dollar, + "LT" => Token::Lt, + "STAR" => Token::BinOp(BinOpToken::Star), + "FLOAT_SUFFIX" => id(), + "INT_SUFFIX" => id(), + "SHL" => Token::BinOp(BinOpToken::Shl), + "LBRACE" => Token::OpenDelim(DelimToken::Brace), + "RARROW" => Token::RArrow, + "LIT_STR" => Token::Literal(Lit::Str_(keywords::Invalid.name()), None), + "DOTDOT" => Token::DotDot, + "MOD_SEP" => Token::ModSep, + "DOTDOTDOT" => Token::DotDotDot, + "NOT" => Token::Not, + "AND" => Token::BinOp(BinOpToken::And), + "LPAREN" => Token::OpenDelim(DelimToken::Paren), + "ANDAND" => Token::AndAnd, + "AT" => Token::At, + "LBRACKET" => Token::OpenDelim(DelimToken::Bracket), + "LIT_STR_RAW" => Token::Literal(Lit::StrRaw(keywords::Invalid.name(), 0), None), + "RPAREN" => Token::CloseDelim(DelimToken::Paren), + "SLASH" => Token::BinOp(BinOpToken::Slash), + "COMMA" => Token::Comma, + "LIFETIME" => Token::Lifetime( + ast::Ident::with_empty_ctxt(keywords::Invalid.name())), + "CARET" => Token::BinOp(BinOpToken::Caret), + "TILDE" => Token::Tilde, + "IDENT" => id(), + "PLUS" => Token::BinOp(BinOpToken::Plus), + "LIT_CHAR" => Token::Literal(Lit::Char(keywords::Invalid.name()), None), + "LIT_BYTE" => Token::Literal(Lit::Byte(keywords::Invalid.name()), None), + "EQ" => Token::Eq, + "RBRACKET" => Token::CloseDelim(DelimToken::Bracket), + "COMMENT" => Token::Comment, + "DOC_COMMENT" => Token::DocComment(keywords::Invalid.name()), + "DOT" => Token::Dot, + "EQEQ" => Token::EqEq, + "NE" => Token::Ne, + "GE" => Token::Ge, + "PERCENT" => Token::BinOp(BinOpToken::Percent), + "RBRACE" => Token::CloseDelim(DelimToken::Brace), + "BINOP" => Token::BinOp(BinOpToken::Plus), + "POUND" => Token::Pound, + "OROR" => Token::OrOr, + "LIT_INTEGER" => Token::Literal(Lit::Integer(keywords::Invalid.name()), None), + "BINOPEQ" => Token::BinOpEq(BinOpToken::Plus), + "LIT_FLOAT" => Token::Literal(Lit::Float(keywords::Invalid.name()), None), + "WHITESPACE" => Token::Whitespace, + "UNDERSCORE" => Token::Underscore, + "MINUS" => Token::BinOp(BinOpToken::Minus), + "SEMI" => Token::Semi, + "COLON" => Token::Colon, + "FAT_ARROW" => Token::FatArrow, + "OR" => Token::BinOp(BinOpToken::Or), + "GT" => Token::Gt, + "LE" => Token::Le, + "LIT_BINARY" => Token::Literal(Lit::ByteStr(keywords::Invalid.name()), None), + "LIT_BINARY_RAW" => Token::Literal( + Lit::ByteStrRaw(keywords::Invalid.name(), 0), None), + "QUESTION" => Token::Question, + "SHEBANG" => 
Token::Shebang(keywords::Invalid.name()), + _ => continue, + }; + + res.insert(num.to_string(), tok); + } + + debug!("Token map: {:?}", res); + res +} + +fn str_to_binop(s: &str) -> token::BinOpToken { + match s { + "+" => BinOpToken::Plus, + "/" => BinOpToken::Slash, + "-" => BinOpToken::Minus, + "*" => BinOpToken::Star, + "%" => BinOpToken::Percent, + "^" => BinOpToken::Caret, + "&" => BinOpToken::And, + "|" => BinOpToken::Or, + "<<" => BinOpToken::Shl, + ">>" => BinOpToken::Shr, + _ => panic!("Bad binop str `{}`", s), + } +} + +/// Assuming a string/byte string literal, strip out the leading/trailing +/// hashes and surrounding quotes/raw/byte prefix. +fn fix(mut lit: &str) -> ast::Name { + let prefix: Vec = lit.chars().take(2).collect(); + if prefix[0] == 'r' { + if prefix[1] == 'b' { + lit = &lit[2..] + } else { + lit = &lit[1..]; + } + } else if prefix[0] == 'b' { + lit = &lit[1..]; + } + + let leading_hashes = count(lit); + + // +1/-1 to adjust for single quotes + Symbol::intern(&lit[leading_hashes + 1..lit.len() - leading_hashes - 1]) +} + +/// Assuming a char/byte literal, strip the 'b' prefix and the single quotes. +fn fixchar(mut lit: &str) -> ast::Name { + let prefix = lit.chars().next().unwrap(); + if prefix == 'b' { + lit = &lit[1..]; + } + + Symbol::intern(&lit[1..lit.len() - 1]) +} + +fn count(lit: &str) -> usize { + lit.chars().take_while(|c| *c == '#').count() +} + +fn parse_antlr_token(s: &str, tokens: &HashMap, surrogate_pairs_pos: &[usize], + has_bom: bool) + -> TokenAndSpan { + // old regex: + // \[@(?P\d+),(?P\d+):(?P\d+)='(?P.+?)',<(?P-?\d+)>,\d+:\d+] + let start = s.find("[@").unwrap(); + let comma = start + s[start..].find(",").unwrap(); + let colon = comma + s[comma..].find(":").unwrap(); + let content_start = colon + s[colon..].find("='").unwrap(); + // Use rfind instead of find, because we don't want to stop at the content + let content_end = content_start + s[content_start..].rfind("',<").unwrap(); + let toknum_end = content_end + s[content_end..].find(">,").unwrap(); + + let start = &s[comma + 1 .. colon]; + let end = &s[colon + 1 .. content_start]; + let content = &s[content_start + 2 .. content_end]; + let toknum = &s[content_end + 3 .. toknum_end]; + + let not_found = format!("didn't find token {:?} in the map", toknum); + let proto_tok = tokens.get(toknum).expect(¬_found[..]); + + let nm = Symbol::intern(content); + + debug!("What we got: content (`{}`), proto: {:?}", content, proto_tok); + + let real_tok = match *proto_tok { + Token::BinOp(..) => Token::BinOp(str_to_binop(content)), + Token::BinOpEq(..) => Token::BinOpEq(str_to_binop(&content[..content.len() - 1])), + Token::Literal(Lit::Str_(..), n) => Token::Literal(Lit::Str_(fix(content)), n), + Token::Literal(Lit::StrRaw(..), n) => Token::Literal(Lit::StrRaw(fix(content), + count(content)), n), + Token::Literal(Lit::Char(..), n) => Token::Literal(Lit::Char(fixchar(content)), n), + Token::Literal(Lit::Byte(..), n) => Token::Literal(Lit::Byte(fixchar(content)), n), + Token::DocComment(..) => Token::DocComment(nm), + Token::Literal(Lit::Integer(..), n) => Token::Literal(Lit::Integer(nm), n), + Token::Literal(Lit::Float(..), n) => Token::Literal(Lit::Float(nm), n), + Token::Literal(Lit::ByteStr(..), n) => Token::Literal(Lit::ByteStr(nm), n), + Token::Literal(Lit::ByteStrRaw(..), n) => Token::Literal(Lit::ByteStrRaw(fix(content), + count(content)), n), + Token::Ident(..) => Token::Ident(ast::Ident::with_empty_ctxt(nm)), + Token::Lifetime(..) 
=> Token::Lifetime(ast::Ident::with_empty_ctxt(nm)), + ref t => t.clone() + }; + + let start_offset = if real_tok == Token::Eof { + 1 + } else { + 0 + }; + + let offset = if has_bom { 1 } else { 0 }; + + let mut lo = start.parse::().unwrap() - start_offset - offset; + let mut hi = end.parse::().unwrap() + 1 - offset; + + // Adjust the span: For each surrogate pair already encountered, subtract one position. + lo -= surrogate_pairs_pos.binary_search(&(lo as usize)).unwrap_or_else(|x| x) as u32; + hi -= surrogate_pairs_pos.binary_search(&(hi as usize)).unwrap_or_else(|x| x) as u32; + + let sp = syntax_pos::Span { + lo: syntax_pos::BytePos(lo), + hi: syntax_pos::BytePos(hi), + expn_id: syntax_pos::NO_EXPANSION + }; + + TokenAndSpan { + tok: real_tok, + sp: sp + } +} + +fn tok_cmp(a: &token::Token, b: &token::Token) -> bool { + match a { + &Token::Ident(id) => match b { + &Token::Ident(id2) => id == id2, + _ => false + }, + _ => a == b + } +} + +fn span_cmp(antlr_sp: codemap::Span, rust_sp: codemap::Span, cm: &codemap::CodeMap) -> bool { + antlr_sp.expn_id == rust_sp.expn_id && + antlr_sp.lo.to_usize() == cm.bytepos_to_file_charpos(rust_sp.lo).to_usize() && + antlr_sp.hi.to_usize() == cm.bytepos_to_file_charpos(rust_sp.hi).to_usize() +} + +fn main() { + fn next(r: &mut lexer::StringReader) -> TokenAndSpan { + use syntax::parse::lexer::Reader; + r.next_token() + } + + let mut args = env::args().skip(1); + let filename = args.next().unwrap(); + if filename.find("parse-fail").is_some() { + return; + } + + // Rust's lexer + let mut code = String::new(); + File::open(&Path::new(&filename)).unwrap().read_to_string(&mut code).unwrap(); + + let surrogate_pairs_pos: Vec = code.chars().enumerate() + .filter(|&(_, c)| c as usize > 0xFFFF) + .map(|(n, _)| n) + .enumerate() + .map(|(x, n)| x + n) + .collect(); + + let has_bom = code.starts_with("\u{feff}"); + + debug!("Pairs: {:?}", surrogate_pairs_pos); + + let options = config::basic_options(); + let session = session::build_session(options, &DepGraph::new(false), None, + syntax::errors::registry::Registry::new(&[]), + Rc::new(DummyCrateStore)); + let filemap = session.parse_sess.codemap() + .new_filemap("".to_string(), None, code); + let mut lexer = lexer::StringReader::new(session.diagnostic(), filemap); + let cm = session.codemap(); + + // ANTLR + let mut token_file = File::open(&Path::new(&args.next().unwrap())).unwrap(); + let mut token_list = String::new(); + token_file.read_to_string(&mut token_list).unwrap(); + let token_map = parse_token_list(&token_list[..]); + + let stdin = std::io::stdin(); + let lock = stdin.lock(); + let lines = lock.lines(); + let antlr_tokens = lines.map(|l| parse_antlr_token(l.unwrap().trim(), + &token_map, + &surrogate_pairs_pos[..], + has_bom)); + + for antlr_tok in antlr_tokens { + let rustc_tok = next(&mut lexer); + if rustc_tok.tok == Token::Eof && antlr_tok.tok == Token::Eof { + continue + } + + assert!(span_cmp(antlr_tok.sp, rustc_tok.sp, cm), "{:?} and {:?} have different spans", + rustc_tok, + antlr_tok); + + macro_rules! 
matches { + ( $($x:pat),+ ) => ( + match rustc_tok.tok { + $($x => match antlr_tok.tok { + $x => { + if !tok_cmp(&rustc_tok.tok, &antlr_tok.tok) { + // FIXME #15677: needs more robust escaping in + // antlr + warn!("Different names for {:?} and {:?}", rustc_tok, antlr_tok); + } + } + _ => panic!("{:?} is not {:?}", antlr_tok, rustc_tok) + },)* + ref c => assert!(c == &antlr_tok.tok, "{:?} is not {:?}", antlr_tok, rustc_tok) + } + ) + } + + matches!( + Token::Literal(Lit::Byte(..), _), + Token::Literal(Lit::Char(..), _), + Token::Literal(Lit::Integer(..), _), + Token::Literal(Lit::Float(..), _), + Token::Literal(Lit::Str_(..), _), + Token::Literal(Lit::StrRaw(..), _), + Token::Literal(Lit::ByteStr(..), _), + Token::Literal(Lit::ByteStrRaw(..), _), + Token::Ident(..), + Token::Lifetime(..), + Token::Interpolated(..), + Token::DocComment(..), + Token::Shebang(..) + ); + } +} diff --git a/src/grammar/xidcontinue.g4 b/src/grammar/xidcontinue.g4 new file mode 100644 index 0000000000..f3a1a3b40f --- /dev/null +++ b/src/grammar/xidcontinue.g4 @@ -0,0 +1,473 @@ +lexer grammar Xidcontinue; + +fragment XID_Continue: + '\u0030' .. '\u0039' + | '\u0041' .. '\u005a' + | '\u005f' + | '\u0061' .. '\u007a' + | '\u00aa' + | '\u00b5' + | '\u00b7' + | '\u00ba' + | '\u00c0' .. '\u00d6' + | '\u00d8' .. '\u00f6' + | '\u00f8' .. '\u0236' + | '\u0250' .. '\u02c1' + | '\u02c6' .. '\u02d1' + | '\u02e0' .. '\u02e4' + | '\u02ee' + | '\u0300' .. '\u0357' + | '\u035d' .. '\u036f' + | '\u0386' + | '\u0388' .. '\u038a' + | '\u038c' + | '\u038e' .. '\u03a1' + | '\u03a3' .. '\u03ce' + | '\u03d0' .. '\u03f5' + | '\u03f7' .. '\u03fb' + | '\u0400' .. '\u0481' + | '\u0483' .. '\u0486' + | '\u048a' .. '\u04ce' + | '\u04d0' .. '\u04f5' + | '\u04f8' .. '\u04f9' + | '\u0500' .. '\u050f' + | '\u0531' .. '\u0556' + | '\u0559' + | '\u0561' .. '\u0587' + | '\u0591' .. '\u05a1' + | '\u05a3' .. '\u05b9' + | '\u05bb' .. '\u05bd' + | '\u05bf' + | '\u05c1' .. '\u05c2' + | '\u05c4' + | '\u05d0' .. '\u05ea' + | '\u05f0' .. '\u05f2' + | '\u0610' .. '\u0615' + | '\u0621' .. '\u063a' + | '\u0640' .. '\u0658' + | '\u0660' .. '\u0669' + | '\u066e' .. '\u06d3' + | '\u06d5' .. '\u06dc' + | '\u06df' .. '\u06e8' + | '\u06ea' .. '\u06fc' + | '\u06ff' + | '\u0710' .. '\u074a' + | '\u074d' .. '\u074f' + | '\u0780' .. '\u07b1' + | '\u0901' .. '\u0939' + | '\u093c' .. '\u094d' + | '\u0950' .. '\u0954' + | '\u0958' .. '\u0963' + | '\u0966' .. '\u096f' + | '\u0981' .. '\u0983' + | '\u0985' .. '\u098c' + | '\u098f' .. '\u0990' + | '\u0993' .. '\u09a8' + | '\u09aa' .. '\u09b0' + | '\u09b2' + | '\u09b6' .. '\u09b9' + | '\u09bc' .. '\u09c4' + | '\u09c7' .. '\u09c8' + | '\u09cb' .. '\u09cd' + | '\u09d7' + | '\u09dc' .. '\u09dd' + | '\u09df' .. '\u09e3' + | '\u09e6' .. '\u09f1' + | '\u0a01' .. '\u0a03' + | '\u0a05' .. '\u0a0a' + | '\u0a0f' .. '\u0a10' + | '\u0a13' .. '\u0a28' + | '\u0a2a' .. '\u0a30' + | '\u0a32' .. '\u0a33' + | '\u0a35' .. '\u0a36' + | '\u0a38' .. '\u0a39' + | '\u0a3c' + | '\u0a3e' .. '\u0a42' + | '\u0a47' .. '\u0a48' + | '\u0a4b' .. '\u0a4d' + | '\u0a59' .. '\u0a5c' + | '\u0a5e' + | '\u0a66' .. '\u0a74' + | '\u0a81' .. '\u0a83' + | '\u0a85' .. '\u0a8d' + | '\u0a8f' .. '\u0a91' + | '\u0a93' .. '\u0aa8' + | '\u0aaa' .. '\u0ab0' + | '\u0ab2' .. '\u0ab3' + | '\u0ab5' .. '\u0ab9' + | '\u0abc' .. '\u0ac5' + | '\u0ac7' .. '\u0ac9' + | '\u0acb' .. '\u0acd' + | '\u0ad0' + | '\u0ae0' .. '\u0ae3' + | '\u0ae6' .. '\u0aef' + | '\u0b01' .. '\u0b03' + | '\u0b05' .. '\u0b0c' + | '\u0b0f' .. '\u0b10' + | '\u0b13' .. '\u0b28' + | '\u0b2a' .. 
'\u0b30' + | '\u0b32' .. '\u0b33' + | '\u0b35' .. '\u0b39' + | '\u0b3c' .. '\u0b43' + | '\u0b47' .. '\u0b48' + | '\u0b4b' .. '\u0b4d' + | '\u0b56' .. '\u0b57' + | '\u0b5c' .. '\u0b5d' + | '\u0b5f' .. '\u0b61' + | '\u0b66' .. '\u0b6f' + | '\u0b71' + | '\u0b82' .. '\u0b83' + | '\u0b85' .. '\u0b8a' + | '\u0b8e' .. '\u0b90' + | '\u0b92' .. '\u0b95' + | '\u0b99' .. '\u0b9a' + | '\u0b9c' + | '\u0b9e' .. '\u0b9f' + | '\u0ba3' .. '\u0ba4' + | '\u0ba8' .. '\u0baa' + | '\u0bae' .. '\u0bb5' + | '\u0bb7' .. '\u0bb9' + | '\u0bbe' .. '\u0bc2' + | '\u0bc6' .. '\u0bc8' + | '\u0bca' .. '\u0bcd' + | '\u0bd7' + | '\u0be7' .. '\u0bef' + | '\u0c01' .. '\u0c03' + | '\u0c05' .. '\u0c0c' + | '\u0c0e' .. '\u0c10' + | '\u0c12' .. '\u0c28' + | '\u0c2a' .. '\u0c33' + | '\u0c35' .. '\u0c39' + | '\u0c3e' .. '\u0c44' + | '\u0c46' .. '\u0c48' + | '\u0c4a' .. '\u0c4d' + | '\u0c55' .. '\u0c56' + | '\u0c60' .. '\u0c61' + | '\u0c66' .. '\u0c6f' + | '\u0c82' .. '\u0c83' + | '\u0c85' .. '\u0c8c' + | '\u0c8e' .. '\u0c90' + | '\u0c92' .. '\u0ca8' + | '\u0caa' .. '\u0cb3' + | '\u0cb5' .. '\u0cb9' + | '\u0cbc' .. '\u0cc4' + | '\u0cc6' .. '\u0cc8' + | '\u0cca' .. '\u0ccd' + | '\u0cd5' .. '\u0cd6' + | '\u0cde' + | '\u0ce0' .. '\u0ce1' + | '\u0ce6' .. '\u0cef' + | '\u0d02' .. '\u0d03' + | '\u0d05' .. '\u0d0c' + | '\u0d0e' .. '\u0d10' + | '\u0d12' .. '\u0d28' + | '\u0d2a' .. '\u0d39' + | '\u0d3e' .. '\u0d43' + | '\u0d46' .. '\u0d48' + | '\u0d4a' .. '\u0d4d' + | '\u0d57' + | '\u0d60' .. '\u0d61' + | '\u0d66' .. '\u0d6f' + | '\u0d82' .. '\u0d83' + | '\u0d85' .. '\u0d96' + | '\u0d9a' .. '\u0db1' + | '\u0db3' .. '\u0dbb' + | '\u0dbd' + | '\u0dc0' .. '\u0dc6' + | '\u0dca' + | '\u0dcf' .. '\u0dd4' + | '\u0dd6' + | '\u0dd8' .. '\u0ddf' + | '\u0df2' .. '\u0df3' + | '\u0e01' .. '\u0e3a' + | '\u0e40' .. '\u0e4e' + | '\u0e50' .. '\u0e59' + | '\u0e81' .. '\u0e82' + | '\u0e84' + | '\u0e87' .. '\u0e88' + | '\u0e8a' + | '\u0e8d' + | '\u0e94' .. '\u0e97' + | '\u0e99' .. '\u0e9f' + | '\u0ea1' .. '\u0ea3' + | '\u0ea5' + | '\u0ea7' + | '\u0eaa' .. '\u0eab' + | '\u0ead' .. '\u0eb9' + | '\u0ebb' .. '\u0ebd' + | '\u0ec0' .. '\u0ec4' + | '\u0ec6' + | '\u0ec8' .. '\u0ecd' + | '\u0ed0' .. '\u0ed9' + | '\u0edc' .. '\u0edd' + | '\u0f00' + | '\u0f18' .. '\u0f19' + | '\u0f20' .. '\u0f29' + | '\u0f35' + | '\u0f37' + | '\u0f39' + | '\u0f3e' .. '\u0f47' + | '\u0f49' .. '\u0f6a' + | '\u0f71' .. '\u0f84' + | '\u0f86' .. '\u0f8b' + | '\u0f90' .. '\u0f97' + | '\u0f99' .. '\u0fbc' + | '\u0fc6' + | '\u1000' .. '\u1021' + | '\u1023' .. '\u1027' + | '\u1029' .. '\u102a' + | '\u102c' .. '\u1032' + | '\u1036' .. '\u1039' + | '\u1040' .. '\u1049' + | '\u1050' .. '\u1059' + | '\u10a0' .. '\u10c5' + | '\u10d0' .. '\u10f8' + | '\u1100' .. '\u1159' + | '\u115f' .. '\u11a2' + | '\u11a8' .. '\u11f9' + | '\u1200' .. '\u1206' + | '\u1208' .. '\u1246' + | '\u1248' + | '\u124a' .. '\u124d' + | '\u1250' .. '\u1256' + | '\u1258' + | '\u125a' .. '\u125d' + | '\u1260' .. '\u1286' + | '\u1288' + | '\u128a' .. '\u128d' + | '\u1290' .. '\u12ae' + | '\u12b0' + | '\u12b2' .. '\u12b5' + | '\u12b8' .. '\u12be' + | '\u12c0' + | '\u12c2' .. '\u12c5' + | '\u12c8' .. '\u12ce' + | '\u12d0' .. '\u12d6' + | '\u12d8' .. '\u12ee' + | '\u12f0' .. '\u130e' + | '\u1310' + | '\u1312' .. '\u1315' + | '\u1318' .. '\u131e' + | '\u1320' .. '\u1346' + | '\u1348' .. '\u135a' + | '\u1369' .. '\u1371' + | '\u13a0' .. '\u13f4' + | '\u1401' .. '\u166c' + | '\u166f' .. '\u1676' + | '\u1681' .. '\u169a' + | '\u16a0' .. '\u16ea' + | '\u16ee' .. '\u16f0' + | '\u1700' .. '\u170c' + | '\u170e' .. '\u1714' + | '\u1720' .. 
'\u1734' + | '\u1740' .. '\u1753' + | '\u1760' .. '\u176c' + | '\u176e' .. '\u1770' + | '\u1772' .. '\u1773' + | '\u1780' .. '\u17b3' + | '\u17b6' .. '\u17d3' + | '\u17d7' + | '\u17dc' .. '\u17dd' + | '\u17e0' .. '\u17e9' + | '\u180b' .. '\u180d' + | '\u1810' .. '\u1819' + | '\u1820' .. '\u1877' + | '\u1880' .. '\u18a9' + | '\u1900' .. '\u191c' + | '\u1920' .. '\u192b' + | '\u1930' .. '\u193b' + | '\u1946' .. '\u196d' + | '\u1970' .. '\u1974' + | '\u1d00' .. '\u1d6b' + | '\u1e00' .. '\u1e9b' + | '\u1ea0' .. '\u1ef9' + | '\u1f00' .. '\u1f15' + | '\u1f18' .. '\u1f1d' + | '\u1f20' .. '\u1f45' + | '\u1f48' .. '\u1f4d' + | '\u1f50' .. '\u1f57' + | '\u1f59' + | '\u1f5b' + | '\u1f5d' + | '\u1f5f' .. '\u1f7d' + | '\u1f80' .. '\u1fb4' + | '\u1fb6' .. '\u1fbc' + | '\u1fbe' + | '\u1fc2' .. '\u1fc4' + | '\u1fc6' .. '\u1fcc' + | '\u1fd0' .. '\u1fd3' + | '\u1fd6' .. '\u1fdb' + | '\u1fe0' .. '\u1fec' + | '\u1ff2' .. '\u1ff4' + | '\u1ff6' .. '\u1ffc' + | '\u203f' .. '\u2040' + | '\u2054' + | '\u2071' + | '\u207f' + | '\u20d0' .. '\u20dc' + | '\u20e1' + | '\u20e5' .. '\u20ea' + | '\u2102' + | '\u2107' + | '\u210a' .. '\u2113' + | '\u2115' + | '\u2118' .. '\u211d' + | '\u2124' + | '\u2126' + | '\u2128' + | '\u212a' .. '\u2131' + | '\u2133' .. '\u2139' + | '\u213d' .. '\u213f' + | '\u2145' .. '\u2149' + | '\u2160' .. '\u2183' + | '\u3005' .. '\u3007' + | '\u3021' .. '\u302f' + | '\u3031' .. '\u3035' + | '\u3038' .. '\u303c' + | '\u3041' .. '\u3096' + | '\u3099' .. '\u309a' + | '\u309d' .. '\u309f' + | '\u30a1' .. '\u30ff' + | '\u3105' .. '\u312c' + | '\u3131' .. '\u318e' + | '\u31a0' .. '\u31b7' + | '\u31f0' .. '\u31ff' + | '\u3400' .. '\u4db5' + | '\u4e00' .. '\u9fa5' + | '\ua000' .. '\ua48c' + | '\uac00' .. '\ud7a3' + | '\uf900' .. '\ufa2d' + | '\ufa30' .. '\ufa6a' + | '\ufb00' .. '\ufb06' + | '\ufb13' .. '\ufb17' + | '\ufb1d' .. '\ufb28' + | '\ufb2a' .. '\ufb36' + | '\ufb38' .. '\ufb3c' + | '\ufb3e' + | '\ufb40' .. '\ufb41' + | '\ufb43' .. '\ufb44' + | '\ufb46' .. '\ufbb1' + | '\ufbd3' .. '\ufc5d' + | '\ufc64' .. '\ufd3d' + | '\ufd50' .. '\ufd8f' + | '\ufd92' .. '\ufdc7' + | '\ufdf0' .. '\ufdf9' + | '\ufe00' .. '\ufe0f' + | '\ufe20' .. '\ufe23' + | '\ufe33' .. '\ufe34' + | '\ufe4d' .. '\ufe4f' + | '\ufe71' + | '\ufe73' + | '\ufe77' + | '\ufe79' + | '\ufe7b' + | '\ufe7d' + | '\ufe7f' .. '\ufefc' + | '\uff10' .. '\uff19' + | '\uff21' .. '\uff3a' + | '\uff3f' + | '\uff41' .. '\uff5a' + | '\uff65' .. '\uffbe' + | '\uffc2' .. '\uffc7' + | '\uffca' .. '\uffcf' + | '\uffd2' .. '\uffd7' + | '\uffda' .. '\uffdc' + | '\ud800' '\udc00' .. '\udc0a' + | '\ud800' '\udc0d' .. '\udc25' + | '\ud800' '\udc28' .. '\udc39' + | '\ud800' '\udc3c' .. '\udc3c' + | '\ud800' '\udc3f' .. '\udc4c' + | '\ud800' '\udc50' .. '\udc5c' + | '\ud800' '\udc80' .. '\udcf9' + | '\ud800' '\udf00' .. '\udf1d' + | '\ud800' '\udf30' .. '\udf49' + | '\ud800' '\udf80' .. '\udf9c' + | '\ud801' '\ue000' .. '\ue09c' + | '\ud801' '\ue0a0' .. '\ue0a8' + | '\ud802' '\ue400' .. '\ue404' + | '\ud802' '\u0808' + | '\ud802' '\ue40a' .. '\ue434' + | '\ud802' '\ue437' .. '\ue437' + | '\ud802' '\u083c' + | '\ud802' '\u083f' + | '\ud834' '\uad65' .. '\uad68' + | '\ud834' '\uad6d' .. '\uad71' + | '\ud834' '\uad7b' .. '\uad81' + | '\ud834' '\uad85' .. '\uad8a' + | '\ud834' '\uadaa' .. '\uadac' + | '\ud835' '\ub000' .. '\ub053' + | '\ud835' '\ub056' .. '\ub09b' + | '\ud835' '\ub09e' .. '\ub09e' + | '\ud835' '\ud4a2' + | '\ud835' '\ub0a5' .. '\ub0a5' + | '\ud835' '\ub0a9' .. '\ub0ab' + | '\ud835' '\ub0ae' .. '\ub0b8' + | '\ud835' '\ud4bb' + | '\ud835' '\ub0bd' .. 
'\ub0c2' + | '\ud835' '\ub0c5' .. '\ub104' + | '\ud835' '\ub107' .. '\ub109' + | '\ud835' '\ub10d' .. '\ub113' + | '\ud835' '\ub116' .. '\ub11b' + | '\ud835' '\ub11e' .. '\ub138' + | '\ud835' '\ub13b' .. '\ub13d' + | '\ud835' '\ub140' .. '\ub143' + | '\ud835' '\ud546' + | '\ud835' '\ub14a' .. '\ub14f' + | '\ud835' '\ub152' .. '\ub2a2' + | '\ud835' '\ub2a8' .. '\ub2bf' + | '\ud835' '\ub2c2' .. '\ub2d9' + | '\ud835' '\ub2dc' .. '\ub2f9' + | '\ud835' '\ub2fc' .. '\ub313' + | '\ud835' '\ub316' .. '\ub333' + | '\ud835' '\ub336' .. '\ub34d' + | '\ud835' '\ub350' .. '\ub36d' + | '\ud835' '\ub370' .. '\ub387' + | '\ud835' '\ub38a' .. '\ub3a7' + | '\ud835' '\ub3aa' .. '\ub3c1' + | '\ud835' '\ub3c4' .. '\ub3c8' + | '\ud835' '\ub3ce' .. '\ub3fe' + | '\ud840' '\udc00' .. '\udffe' + | '\ud841' '\ue000' .. '\ue3fe' + | '\ud842' '\ue400' .. '\ue7fe' + | '\ud843' '\ue800' .. '\uebfe' + | '\ud844' '\uec00' .. '\ueffe' + | '\ud845' '\uf000' .. '\uf3fe' + | '\ud846' '\uf400' .. '\uf7fe' + | '\ud847' '\uf800' .. '\ufbfe' + | '\ud848' '\ufc00' .. '\ufffe' + | '\ud849' '\u0000' .. '\u03fe' + | '\ud84a' '\u0400' .. '\u07fe' + | '\ud84b' '\u0800' .. '\u0bfe' + | '\ud84c' '\u0c00' .. '\u0ffe' + | '\ud84d' '\u1000' .. '\u13fe' + | '\ud84e' '\u1400' .. '\u17fe' + | '\ud84f' '\u1800' .. '\u1bfe' + | '\ud850' '\u1c00' .. '\u1ffe' + | '\ud851' '\u2000' .. '\u23fe' + | '\ud852' '\u2400' .. '\u27fe' + | '\ud853' '\u2800' .. '\u2bfe' + | '\ud854' '\u2c00' .. '\u2ffe' + | '\ud855' '\u3000' .. '\u33fe' + | '\ud856' '\u3400' .. '\u37fe' + | '\ud857' '\u3800' .. '\u3bfe' + | '\ud858' '\u3c00' .. '\u3ffe' + | '\ud859' '\u4000' .. '\u43fe' + | '\ud85a' '\u4400' .. '\u47fe' + | '\ud85b' '\u4800' .. '\u4bfe' + | '\ud85c' '\u4c00' .. '\u4ffe' + | '\ud85d' '\u5000' .. '\u53fe' + | '\ud85e' '\u5400' .. '\u57fe' + | '\ud85f' '\u5800' .. '\u5bfe' + | '\ud860' '\u5c00' .. '\u5ffe' + | '\ud861' '\u6000' .. '\u63fe' + | '\ud862' '\u6400' .. '\u67fe' + | '\ud863' '\u6800' .. '\u6bfe' + | '\ud864' '\u6c00' .. '\u6ffe' + | '\ud865' '\u7000' .. '\u73fe' + | '\ud866' '\u7400' .. '\u77fe' + | '\ud867' '\u7800' .. '\u7bfe' + | '\ud868' '\u7c00' .. '\u7ffe' + | '\ud869' '\u8000' .. '\u82d5' + | '\ud87e' '\ud400' .. '\ud61c' + | '\udb40' '\udd00' .. '\uddee' + ; diff --git a/src/grammar/xidstart.g4 b/src/grammar/xidstart.g4 new file mode 100644 index 0000000000..53fb50f458 --- /dev/null +++ b/src/grammar/xidstart.g4 @@ -0,0 +1,379 @@ +lexer grammar Xidstart; + +fragment XID_Start : + '\u0041' .. '\u005a' + | '_' + | '\u0061' .. '\u007a' + | '\u00aa' + | '\u00b5' + | '\u00ba' + | '\u00c0' .. '\u00d6' + | '\u00d8' .. '\u00f6' + | '\u00f8' .. '\u0236' + | '\u0250' .. '\u02c1' + | '\u02c6' .. '\u02d1' + | '\u02e0' .. '\u02e4' + | '\u02ee' + | '\u0386' + | '\u0388' .. '\u038a' + | '\u038c' + | '\u038e' .. '\u03a1' + | '\u03a3' .. '\u03ce' + | '\u03d0' .. '\u03f5' + | '\u03f7' .. '\u03fb' + | '\u0400' .. '\u0481' + | '\u048a' .. '\u04ce' + | '\u04d0' .. '\u04f5' + | '\u04f8' .. '\u04f9' + | '\u0500' .. '\u050f' + | '\u0531' .. '\u0556' + | '\u0559' + | '\u0561' .. '\u0587' + | '\u05d0' .. '\u05ea' + | '\u05f0' .. '\u05f2' + | '\u0621' .. '\u063a' + | '\u0640' .. '\u064a' + | '\u066e' .. '\u066f' + | '\u0671' .. '\u06d3' + | '\u06d5' + | '\u06e5' .. '\u06e6' + | '\u06ee' .. '\u06ef' + | '\u06fa' .. '\u06fc' + | '\u06ff' + | '\u0710' + | '\u0712' .. '\u072f' + | '\u074d' .. '\u074f' + | '\u0780' .. '\u07a5' + | '\u07b1' + | '\u0904' .. '\u0939' + | '\u093d' + | '\u0950' + | '\u0958' .. '\u0961' + | '\u0985' .. '\u098c' + | '\u098f' .. 
'\u0990' + | '\u0993' .. '\u09a8' + | '\u09aa' .. '\u09b0' + | '\u09b2' + | '\u09b6' .. '\u09b9' + | '\u09bd' + | '\u09dc' .. '\u09dd' + | '\u09df' .. '\u09e1' + | '\u09f0' .. '\u09f1' + | '\u0a05' .. '\u0a0a' + | '\u0a0f' .. '\u0a10' + | '\u0a13' .. '\u0a28' + | '\u0a2a' .. '\u0a30' + | '\u0a32' .. '\u0a33' + | '\u0a35' .. '\u0a36' + | '\u0a38' .. '\u0a39' + | '\u0a59' .. '\u0a5c' + | '\u0a5e' + | '\u0a72' .. '\u0a74' + | '\u0a85' .. '\u0a8d' + | '\u0a8f' .. '\u0a91' + | '\u0a93' .. '\u0aa8' + | '\u0aaa' .. '\u0ab0' + | '\u0ab2' .. '\u0ab3' + | '\u0ab5' .. '\u0ab9' + | '\u0abd' + | '\u0ad0' + | '\u0ae0' .. '\u0ae1' + | '\u0b05' .. '\u0b0c' + | '\u0b0f' .. '\u0b10' + | '\u0b13' .. '\u0b28' + | '\u0b2a' .. '\u0b30' + | '\u0b32' .. '\u0b33' + | '\u0b35' .. '\u0b39' + | '\u0b3d' + | '\u0b5c' .. '\u0b5d' + | '\u0b5f' .. '\u0b61' + | '\u0b71' + | '\u0b83' + | '\u0b85' .. '\u0b8a' + | '\u0b8e' .. '\u0b90' + | '\u0b92' .. '\u0b95' + | '\u0b99' .. '\u0b9a' + | '\u0b9c' + | '\u0b9e' .. '\u0b9f' + | '\u0ba3' .. '\u0ba4' + | '\u0ba8' .. '\u0baa' + | '\u0bae' .. '\u0bb5' + | '\u0bb7' .. '\u0bb9' + | '\u0c05' .. '\u0c0c' + | '\u0c0e' .. '\u0c10' + | '\u0c12' .. '\u0c28' + | '\u0c2a' .. '\u0c33' + | '\u0c35' .. '\u0c39' + | '\u0c60' .. '\u0c61' + | '\u0c85' .. '\u0c8c' + | '\u0c8e' .. '\u0c90' + | '\u0c92' .. '\u0ca8' + | '\u0caa' .. '\u0cb3' + | '\u0cb5' .. '\u0cb9' + | '\u0cbd' + | '\u0cde' + | '\u0ce0' .. '\u0ce1' + | '\u0d05' .. '\u0d0c' + | '\u0d0e' .. '\u0d10' + | '\u0d12' .. '\u0d28' + | '\u0d2a' .. '\u0d39' + | '\u0d60' .. '\u0d61' + | '\u0d85' .. '\u0d96' + | '\u0d9a' .. '\u0db1' + | '\u0db3' .. '\u0dbb' + | '\u0dbd' + | '\u0dc0' .. '\u0dc6' + | '\u0e01' .. '\u0e30' + | '\u0e32' + | '\u0e40' .. '\u0e46' + | '\u0e81' .. '\u0e82' + | '\u0e84' + | '\u0e87' .. '\u0e88' + | '\u0e8a' + | '\u0e8d' + | '\u0e94' .. '\u0e97' + | '\u0e99' .. '\u0e9f' + | '\u0ea1' .. '\u0ea3' + | '\u0ea5' + | '\u0ea7' + | '\u0eaa' .. '\u0eab' + | '\u0ead' .. '\u0eb0' + | '\u0eb2' + | '\u0ebd' + | '\u0ec0' .. '\u0ec4' + | '\u0ec6' + | '\u0edc' .. '\u0edd' + | '\u0f00' + | '\u0f40' .. '\u0f47' + | '\u0f49' .. '\u0f6a' + | '\u0f88' .. '\u0f8b' + | '\u1000' .. '\u1021' + | '\u1023' .. '\u1027' + | '\u1029' .. '\u102a' + | '\u1050' .. '\u1055' + | '\u10a0' .. '\u10c5' + | '\u10d0' .. '\u10f8' + | '\u1100' .. '\u1159' + | '\u115f' .. '\u11a2' + | '\u11a8' .. '\u11f9' + | '\u1200' .. '\u1206' + | '\u1208' .. '\u1246' + | '\u1248' + | '\u124a' .. '\u124d' + | '\u1250' .. '\u1256' + | '\u1258' + | '\u125a' .. '\u125d' + | '\u1260' .. '\u1286' + | '\u1288' + | '\u128a' .. '\u128d' + | '\u1290' .. '\u12ae' + | '\u12b0' + | '\u12b2' .. '\u12b5' + | '\u12b8' .. '\u12be' + | '\u12c0' + | '\u12c2' .. '\u12c5' + | '\u12c8' .. '\u12ce' + | '\u12d0' .. '\u12d6' + | '\u12d8' .. '\u12ee' + | '\u12f0' .. '\u130e' + | '\u1310' + | '\u1312' .. '\u1315' + | '\u1318' .. '\u131e' + | '\u1320' .. '\u1346' + | '\u1348' .. '\u135a' + | '\u13a0' .. '\u13f4' + | '\u1401' .. '\u166c' + | '\u166f' .. '\u1676' + | '\u1681' .. '\u169a' + | '\u16a0' .. '\u16ea' + | '\u16ee' .. '\u16f0' + | '\u1700' .. '\u170c' + | '\u170e' .. '\u1711' + | '\u1720' .. '\u1731' + | '\u1740' .. '\u1751' + | '\u1760' .. '\u176c' + | '\u176e' .. '\u1770' + | '\u1780' .. '\u17b3' + | '\u17d7' + | '\u17dc' + | '\u1820' .. '\u1877' + | '\u1880' .. '\u18a8' + | '\u1900' .. '\u191c' + | '\u1950' .. '\u196d' + | '\u1970' .. '\u1974' + | '\u1d00' .. '\u1d6b' + | '\u1e00' .. '\u1e9b' + | '\u1ea0' .. '\u1ef9' + | '\u1f00' .. '\u1f15' + | '\u1f18' .. '\u1f1d' + | '\u1f20' .. 
'\u1f45' + | '\u1f48' .. '\u1f4d' + | '\u1f50' .. '\u1f57' + | '\u1f59' + | '\u1f5b' + | '\u1f5d' + | '\u1f5f' .. '\u1f7d' + | '\u1f80' .. '\u1fb4' + | '\u1fb6' .. '\u1fbc' + | '\u1fbe' + | '\u1fc2' .. '\u1fc4' + | '\u1fc6' .. '\u1fcc' + | '\u1fd0' .. '\u1fd3' + | '\u1fd6' .. '\u1fdb' + | '\u1fe0' .. '\u1fec' + | '\u1ff2' .. '\u1ff4' + | '\u1ff6' .. '\u1ffc' + | '\u2071' + | '\u207f' + | '\u2102' + | '\u2107' + | '\u210a' .. '\u2113' + | '\u2115' + | '\u2118' .. '\u211d' + | '\u2124' + | '\u2126' + | '\u2128' + | '\u212a' .. '\u2131' + | '\u2133' .. '\u2139' + | '\u213d' .. '\u213f' + | '\u2145' .. '\u2149' + | '\u2160' .. '\u2183' + | '\u3005' .. '\u3007' + | '\u3021' .. '\u3029' + | '\u3031' .. '\u3035' + | '\u3038' .. '\u303c' + | '\u3041' .. '\u3096' + | '\u309d' .. '\u309f' + | '\u30a1' .. '\u30fa' + | '\u30fc' .. '\u30ff' + | '\u3105' .. '\u312c' + | '\u3131' .. '\u318e' + | '\u31a0' .. '\u31b7' + | '\u31f0' .. '\u31ff' + | '\u3400' .. '\u4db5' + | '\u4e00' .. '\u9fa5' + | '\ua000' .. '\ua48c' + | '\uac00' .. '\ud7a3' + | '\uf900' .. '\ufa2d' + | '\ufa30' .. '\ufa6a' + | '\ufb00' .. '\ufb06' + | '\ufb13' .. '\ufb17' + | '\ufb1d' + | '\ufb1f' .. '\ufb28' + | '\ufb2a' .. '\ufb36' + | '\ufb38' .. '\ufb3c' + | '\ufb3e' + | '\ufb40' .. '\ufb41' + | '\ufb43' .. '\ufb44' + | '\ufb46' .. '\ufbb1' + | '\ufbd3' .. '\ufc5d' + | '\ufc64' .. '\ufd3d' + | '\ufd50' .. '\ufd8f' + | '\ufd92' .. '\ufdc7' + | '\ufdf0' .. '\ufdf9' + | '\ufe71' + | '\ufe73' + | '\ufe77' + | '\ufe79' + | '\ufe7b' + | '\ufe7d' + | '\ufe7f' .. '\ufefc' + | '\uff21' .. '\uff3a' + | '\uff41' .. '\uff5a' + | '\uff66' .. '\uff9d' + | '\uffa0' .. '\uffbe' + | '\uffc2' .. '\uffc7' + | '\uffca' .. '\uffcf' + | '\uffd2' .. '\uffd7' + | '\uffda' .. '\uffdc' + | '\ud800' '\udc00' .. '\udc0a' + | '\ud800' '\udc0d' .. '\udc25' + | '\ud800' '\udc28' .. '\udc39' + | '\ud800' '\udc3c' .. '\udc3c' + | '\ud800' '\udc3f' .. '\udc4c' + | '\ud800' '\udc50' .. '\udc5c' + | '\ud800' '\udc80' .. '\udcf9' + | '\ud800' '\udf00' .. '\udf1d' + | '\ud800' '\udf30' .. '\udf49' + | '\ud800' '\udf80' .. '\udf9c' + | '\ud801' '\ue000' .. '\ue09c' + | '\ud802' '\ue400' .. '\ue404' + | '\ud802' '\u0808' + | '\ud802' '\ue40a' .. '\ue434' + | '\ud802' '\ue437' .. '\ue437' + | '\ud802' '\u083c' + | '\ud802' '\u083f' + | '\ud835' '\ub000' .. '\ub053' + | '\ud835' '\ub056' .. '\ub09b' + | '\ud835' '\ub09e' .. '\ub09e' + | '\ud835' '\ud4a2' + | '\ud835' '\ub0a5' .. '\ub0a5' + | '\ud835' '\ub0a9' .. '\ub0ab' + | '\ud835' '\ub0ae' .. '\ub0b8' + | '\ud835' '\ud4bb' + | '\ud835' '\ub0bd' .. '\ub0c2' + | '\ud835' '\ub0c5' .. '\ub104' + | '\ud835' '\ub107' .. '\ub109' + | '\ud835' '\ub10d' .. '\ub113' + | '\ud835' '\ub116' .. '\ub11b' + | '\ud835' '\ub11e' .. '\ub138' + | '\ud835' '\ub13b' .. '\ub13d' + | '\ud835' '\ub140' .. '\ub143' + | '\ud835' '\ud546' + | '\ud835' '\ub14a' .. '\ub14f' + | '\ud835' '\ub152' .. '\ub2a2' + | '\ud835' '\ub2a8' .. '\ub2bf' + | '\ud835' '\ub2c2' .. '\ub2d9' + | '\ud835' '\ub2dc' .. '\ub2f9' + | '\ud835' '\ub2fc' .. '\ub313' + | '\ud835' '\ub316' .. '\ub333' + | '\ud835' '\ub336' .. '\ub34d' + | '\ud835' '\ub350' .. '\ub36d' + | '\ud835' '\ub370' .. '\ub387' + | '\ud835' '\ub38a' .. '\ub3a7' + | '\ud835' '\ub3aa' .. '\ub3c1' + | '\ud835' '\ub3c4' .. '\ub3c8' + | '\ud840' '\udc00' .. '\udffe' + | '\ud841' '\ue000' .. '\ue3fe' + | '\ud842' '\ue400' .. '\ue7fe' + | '\ud843' '\ue800' .. '\uebfe' + | '\ud844' '\uec00' .. '\ueffe' + | '\ud845' '\uf000' .. '\uf3fe' + | '\ud846' '\uf400' .. '\uf7fe' + | '\ud847' '\uf800' .. 
'\ufbfe' + | '\ud848' '\ufc00' .. '\ufffe' + | '\ud849' '\u0000' .. '\u03fe' + | '\ud84a' '\u0400' .. '\u07fe' + | '\ud84b' '\u0800' .. '\u0bfe' + | '\ud84c' '\u0c00' .. '\u0ffe' + | '\ud84d' '\u1000' .. '\u13fe' + | '\ud84e' '\u1400' .. '\u17fe' + | '\ud84f' '\u1800' .. '\u1bfe' + | '\ud850' '\u1c00' .. '\u1ffe' + | '\ud851' '\u2000' .. '\u23fe' + | '\ud852' '\u2400' .. '\u27fe' + | '\ud853' '\u2800' .. '\u2bfe' + | '\ud854' '\u2c00' .. '\u2ffe' + | '\ud855' '\u3000' .. '\u33fe' + | '\ud856' '\u3400' .. '\u37fe' + | '\ud857' '\u3800' .. '\u3bfe' + | '\ud858' '\u3c00' .. '\u3ffe' + | '\ud859' '\u4000' .. '\u43fe' + | '\ud85a' '\u4400' .. '\u47fe' + | '\ud85b' '\u4800' .. '\u4bfe' + | '\ud85c' '\u4c00' .. '\u4ffe' + | '\ud85d' '\u5000' .. '\u53fe' + | '\ud85e' '\u5400' .. '\u57fe' + | '\ud85f' '\u5800' .. '\u5bfe' + | '\ud860' '\u5c00' .. '\u5ffe' + | '\ud861' '\u6000' .. '\u63fe' + | '\ud862' '\u6400' .. '\u67fe' + | '\ud863' '\u6800' .. '\u6bfe' + | '\ud864' '\u6c00' .. '\u6ffe' + | '\ud865' '\u7000' .. '\u73fe' + | '\ud866' '\u7400' .. '\u77fe' + | '\ud867' '\u7800' .. '\u7bfe' + | '\ud868' '\u7c00' .. '\u7ffe' + | '\ud869' '\u8000' .. '\u82d5' + | '\ud87e' '\ud400' .. '\ud61c' + ; diff --git a/src/liballoc/arc.rs b/src/liballoc/arc.rs index d6096d5894..1cad8f7f40 100644 --- a/src/liballoc/arc.rs +++ b/src/liballoc/arc.rs @@ -274,6 +274,68 @@ impl Arc { Ok(elem) } } + + /// Consumes the `Arc`, returning the wrapped pointer. + /// + /// To avoid a memory leak the pointer must be converted back to an `Arc` using + /// [`Arc::from_raw`][from_raw]. + /// + /// [from_raw]: struct.Arc.html#method.from_raw + /// + /// # Examples + /// + /// ``` + /// #![feature(rc_raw)] + /// + /// use std::sync::Arc; + /// + /// let x = Arc::new(10); + /// let x_ptr = Arc::into_raw(x); + /// assert_eq!(unsafe { *x_ptr }, 10); + /// ``` + #[unstable(feature = "rc_raw", issue = "37197")] + pub fn into_raw(this: Self) -> *mut T { + let ptr = unsafe { &mut (**this.ptr).data as *mut _ }; + mem::forget(this); + ptr + } + + /// Constructs an `Arc` from a raw pointer. + /// + /// The raw pointer must have been previously returned by a call to a + /// [`Arc::into_raw`][into_raw]. + /// + /// This function is unsafe because improper use may lead to memory problems. For example, a + /// double-free may occur if the function is called twice on the same raw pointer. + /// + /// [into_raw]: struct.Arc.html#method.into_raw + /// + /// # Examples + /// + /// ``` + /// #![feature(rc_raw)] + /// + /// use std::sync::Arc; + /// + /// let x = Arc::new(10); + /// let x_ptr = Arc::into_raw(x); + /// + /// unsafe { + /// // Convert back to an `Arc` to prevent leak. + /// let x = Arc::from_raw(x_ptr); + /// assert_eq!(*x, 10); + /// + /// // Further calls to `Arc::from_raw(x_ptr)` would be memory unsafe. + /// } + /// + /// // The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling! + /// ``` + #[unstable(feature = "rc_raw", issue = "37197")] + pub unsafe fn from_raw(ptr: *mut T) -> Self { + // To find the corresponding pointer to the `ArcInner` we need to subtract the offset of the + // `data` field from the pointer. + Arc { ptr: Shared::new((ptr as *mut u8).offset(-offset_of!(ArcInner, data)) as *mut _) } + } } impl Arc { @@ -319,16 +381,17 @@ impl Arc { /// Gets the number of [`Weak`][weak] pointers to this value. /// - /// Be careful how you use this information, because another thread - /// may change the weak count at any time. 
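The new `Arc::from_raw` recovers the `ArcInner<T>` allocation by stepping the data pointer back by the byte offset of the `data` field. The standalone sketch below is not part of the patch: `Inner` is a made-up stand-in for `ArcInner<T>`, and the offset is recovered from two live pointers here, whereas the patch computes it with a private `offset_of!` macro added later in this diff.

```rust
// Illustrative only: how a field offset lets `from_raw` walk back from a
// pointer to one field to the containing allocation.
#[allow(dead_code)]
struct Inner {
    strong: usize,
    weak: usize,
    data: u32,
}

fn main() {
    let boxed = Box::new(Inner { strong: 1, weak: 1, data: 42 });
    let inner_ptr: *const Inner = &*boxed;

    // A pointer to the `data` field, as `into_raw` would hand out.
    let data_ptr: *const u32 = unsafe { &(*inner_ptr).data };

    // The field offset, recovered from the two pointers (the patch uses
    // `offset_of!(ArcInner<T>, data)` instead).
    let offset = data_ptr as usize - inner_ptr as usize;

    // `from_raw` subtracts the same offset to get the container back.
    let recovered = (data_ptr as usize - offset) as *const Inner;
    assert_eq!(recovered, inner_ptr);
    assert_eq!(unsafe { (*recovered).data }, 42);
}
```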
- /// /// [weak]: struct.Weak.html /// + /// # Safety + /// + /// This method by itself is safe, but using it correctly requires extra care. + /// Another thread can change the weak count at any time, + /// including potentially between calling this method and acting on the result. + /// /// # Examples /// /// ``` - /// #![feature(arc_counts)] - /// /// use std::sync::Arc; /// /// let five = Arc::new(5); @@ -339,22 +402,22 @@ impl Arc { /// assert_eq!(1, Arc::weak_count(&five)); /// ``` #[inline] - #[unstable(feature = "arc_counts", reason = "not clearly useful, and racy", - issue = "28356")] + #[stable(feature = "arc_counts", since = "1.15.0")] pub fn weak_count(this: &Self) -> usize { this.inner().weak.load(SeqCst) - 1 } /// Gets the number of strong (`Arc`) pointers to this value. /// - /// Be careful how you use this information, because another thread - /// may change the strong count at any time. + /// # Safety + /// + /// This method by itself is safe, but using it correctly requires extra care. + /// Another thread can change the strong count at any time, + /// including potentially between calling this method and acting on the result. /// /// # Examples /// /// ``` - /// #![feature(arc_counts)] - /// /// use std::sync::Arc; /// /// let five = Arc::new(5); @@ -365,8 +428,7 @@ impl Arc { /// assert_eq!(2, Arc::strong_count(&five)); /// ``` #[inline] - #[unstable(feature = "arc_counts", reason = "not clearly useful, and racy", - issue = "28356")] + #[stable(feature = "arc_counts", since = "1.15.0")] pub fn strong_count(this: &Self) -> usize { this.inner().strong.load(SeqCst) } @@ -1183,6 +1245,23 @@ mod tests { assert_eq!(Arc::try_unwrap(x), Ok(5)); } + #[test] + fn into_from_raw() { + let x = Arc::new(box "hello"); + let y = x.clone(); + + let x_ptr = Arc::into_raw(x); + drop(y); + unsafe { + assert_eq!(**x_ptr, "hello"); + + let x = Arc::from_raw(x_ptr); + assert_eq!(**x, "hello"); + + assert_eq!(Arc::try_unwrap(x).map(|x| *x), Ok("hello")); + } + } + #[test] fn test_cowarc_clone_make_mut() { let mut cow0 = Arc::new(75); diff --git a/src/liballoc/boxed.rs b/src/liballoc/boxed.rs index 28f4dda140..addb056f53 100644 --- a/src/liballoc/boxed.rs +++ b/src/liballoc/boxed.rs @@ -524,6 +524,9 @@ impl Iterator for Box { fn size_hint(&self) -> (usize, Option) { (**self).size_hint() } + fn nth(&mut self, n: usize) -> Option { + (**self).nth(n) + } } #[stable(feature = "rust1", since = "1.0.0")] impl DoubleEndedIterator for Box { @@ -532,7 +535,14 @@ impl DoubleEndedIterator for Box { } } #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for Box {} +impl ExactSizeIterator for Box { + fn len(&self) -> usize { + (**self).len() + } + fn is_empty(&self) -> bool { + (**self).is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for Box {} diff --git a/src/liballoc/heap.rs b/src/liballoc/heap.rs index bfed8a8e83..12809171b7 100644 --- a/src/liballoc/heap.rs +++ b/src/liballoc/heap.rs @@ -17,7 +17,7 @@ use core::{isize, usize}; #[cfg(not(test))] -use core::intrinsics::{min_align_of, size_of}; +use core::intrinsics::{min_align_of_val, size_of_val}; #[allow(improper_ctypes)] extern "C" { @@ -152,11 +152,12 @@ unsafe fn exchange_free(ptr: *mut u8, old_size: usize, align: usize) { #[cfg(not(test))] #[lang = "box_free"] #[inline] -unsafe fn box_free(ptr: *mut T) { - let size = size_of::(); +unsafe fn box_free(ptr: *mut T) { + let size = size_of_val(&*ptr); + let align = min_align_of_val(&*ptr); // We do not allocate for Box when T is ZST, so deallocation 
is also not necessary. if size != 0 { - deallocate(ptr as *mut u8, size, min_align_of::()); + deallocate(ptr as *mut u8, size, align); } } diff --git a/src/liballoc/lib.rs b/src/liballoc/lib.rs index 31491106d9..f9dfdc0e07 100644 --- a/src/liballoc/lib.rs +++ b/src/liballoc/lib.rs @@ -74,11 +74,13 @@ #![feature(allocator)] #![feature(box_syntax)] +#![feature(cfg_target_has_atomic)] #![feature(coerce_unsized)] #![feature(const_fn)] #![feature(core_intrinsics)] #![feature(custom_attribute)] #![feature(dropck_parametricity)] +#![cfg_attr(not(test), feature(exact_size_is_empty))] #![feature(fundamental)] #![feature(lang_items)] #![feature(needs_allocator)] @@ -99,6 +101,10 @@ #[macro_use] extern crate std; +// Module with internal macros used by other modules (needs to be included before other modules). +#[macro_use] +mod macros; + // Heaps provided for low-level allocation strategies pub mod heap; @@ -117,6 +123,7 @@ mod boxed { } #[cfg(test)] mod boxed_test; +#[cfg(target_has_atomic = "ptr")] pub mod arc; pub mod rc; pub mod raw_vec; diff --git a/src/liballoc/macros.rs b/src/liballoc/macros.rs new file mode 100644 index 0000000000..7da91c87e9 --- /dev/null +++ b/src/liballoc/macros.rs @@ -0,0 +1,28 @@ +// Copyright 2013-2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Private macro to get the offset of a struct field in bytes from the address of the struct. +macro_rules! offset_of { + ($container:path, $field:ident) => {{ + // Make sure the field actually exists. This line ensures that a compile-time error is + // generated if $field is accessed through a Deref impl. + let $container { $field : _, .. }; + + // Create an (invalid) instance of the container and calculate the offset to its + // field. Using a null pointer might be UB if `&(*(0 as *const T)).field` is interpreted to + // be nullptr deref. + let invalid: $container = ::core::mem::uninitialized(); + let offset = &invalid.$field as *const _ as usize - &invalid as *const _ as usize; + + // Do not run destructors on the made up invalid instance. + ::core::mem::forget(invalid); + offset as isize + }}; +} diff --git a/src/liballoc/oom.rs b/src/liballoc/oom.rs index d355d59185..3640156fec 100644 --- a/src/liballoc/oom.rs +++ b/src/liballoc/oom.rs @@ -8,12 +8,10 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use core::sync::atomic::{AtomicPtr, Ordering}; -use core::mem; +#[cfg(target_has_atomic = "ptr")] +pub use self::imp::set_oom_handler; use core::intrinsics; -static OOM_HANDLER: AtomicPtr<()> = AtomicPtr::new(default_oom_handler as *mut ()); - fn default_oom_handler() -> ! { // The default handler can't do much more since we can't assume the presence // of libc or any way of printing an error message. @@ -26,17 +24,38 @@ fn default_oom_handler() -> ! { #[unstable(feature = "oom", reason = "not a scrutinized interface", issue = "27700")] pub fn oom() -> ! { - let value = OOM_HANDLER.load(Ordering::SeqCst); - let handler: fn() -> ! = unsafe { mem::transmute(value) }; - handler(); + self::imp::oom() } -/// Set a custom handler for out-of-memory conditions -/// -/// To avoid recursive OOM failures, it is critical that the OOM handler does -/// not allocate any memory itself. 
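The `box_free` change above switches from `size_of::<T>()`/`min_align_of::<T>()` to the `*_val` intrinsics so that `Box<T: ?Sized>` (boxed slices, trait objects) frees the right amount of memory. A small sketch of the difference, using the stable `std::mem` wrappers rather than the intrinsics the patch calls:

```rust
use std::mem;

fn main() {
    // A boxed slice is an unsized value: its size is only known from the
    // value itself (via the fat pointer's length), not from the element
    // type alone.
    let boxed: Box<[u32]> = vec![1, 2, 3, 4].into_boxed_slice();

    assert_eq!(mem::size_of_val(&*boxed), 4 * mem::size_of::<u32>());
    assert_eq!(mem::align_of_val(&*boxed), mem::align_of::<u32>());
}
```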
-#[unstable(feature = "oom", reason = "not a scrutinized interface", - issue = "27700")] -pub fn set_oom_handler(handler: fn() -> !) { - OOM_HANDLER.store(handler as *mut (), Ordering::SeqCst); +#[cfg(target_has_atomic = "ptr")] +mod imp { + use core::mem; + use core::sync::atomic::{AtomicPtr, Ordering}; + + static OOM_HANDLER: AtomicPtr<()> = AtomicPtr::new(super::default_oom_handler as *mut ()); + + #[inline(always)] + pub fn oom() -> ! { + let value = OOM_HANDLER.load(Ordering::SeqCst); + let handler: fn() -> ! = unsafe { mem::transmute(value) }; + handler(); + } + + /// Set a custom handler for out-of-memory conditions + /// + /// To avoid recursive OOM failures, it is critical that the OOM handler does + /// not allocate any memory itself. + #[unstable(feature = "oom", reason = "not a scrutinized interface", + issue = "27700")] + pub fn set_oom_handler(handler: fn() -> !) { + OOM_HANDLER.store(handler as *mut (), Ordering::SeqCst); + } +} + +#[cfg(not(target_has_atomic = "ptr"))] +mod imp { + #[inline(always)] + pub fn oom() -> ! { + super::default_oom_handler() + } } diff --git a/src/liballoc/rc.rs b/src/liballoc/rc.rs index 740d13c476..86f8c74664 100644 --- a/src/liballoc/rc.rs +++ b/src/liballoc/rc.rs @@ -12,35 +12,35 @@ //! Single-threaded reference-counting pointers. //! -//! The type [`Rc`][rc] provides shared ownership of a value of type `T`, -//! allocated in the heap. Invoking [`clone`][clone] on `Rc` produces a new -//! pointer to the same value in the heap. When the last `Rc` pointer to a +//! The type [`Rc`][`Rc`] provides shared ownership of a value of type `T`, +//! allocated in the heap. Invoking [`clone()`][clone] on [`Rc`] produces a new +//! pointer to the same value in the heap. When the last [`Rc`] pointer to a //! given value is destroyed, the pointed-to value is also destroyed. //! //! Shared references in Rust disallow mutation by default, and `Rc` is no -//! exception. If you need to mutate through an `Rc`, use [`Cell`][cell] or -//! [`RefCell`][refcell]. +//! exception. If you need to mutate through an [`Rc`], use [`Cell`] or +//! [`RefCell`]. //! -//! `Rc` uses non-atomic reference counting. This means that overhead is very -//! low, but an `Rc` cannot be sent between threads, and consequently `Rc` +//! [`Rc`] uses non-atomic reference counting. This means that overhead is very +//! low, but an [`Rc`] cannot be sent between threads, and consequently [`Rc`] //! does not implement [`Send`][send]. As a result, the Rust compiler -//! will check *at compile time* that you are not sending `Rc`s between +//! will check *at compile time* that you are not sending [`Rc`]s between //! threads. If you need multi-threaded, atomic reference counting, use //! [`sync::Arc`][arc]. //! -//! The [`downgrade`][downgrade] method can be used to create a non-owning -//! [`Weak`][weak] pointer. A `Weak` pointer can be [`upgrade`][upgrade]d -//! to an `Rc`, but this will return [`None`][option] if the value has +//! The [`downgrade()`][downgrade] method can be used to create a non-owning +//! [`Weak`] pointer. A [`Weak`] pointer can be [`upgrade`][upgrade]d +//! to an [`Rc`], but this will return [`None`] if the value has //! already been dropped. //! -//! A cycle between `Rc` pointers will never be deallocated. For this reason, -//! `Weak` is used to break cycles. For example, a tree could have strong -//! `Rc` pointers from parent nodes to children, and `Weak` pointers from +//! A cycle between [`Rc`] pointers will never be deallocated. For this reason, +//! 
[`Weak`] is used to break cycles. For example, a tree could have strong +//! [`Rc`] pointers from parent nodes to children, and [`Weak`] pointers from //! children back to their parents. //! -//! `Rc` automatically dereferences to `T` (via the [`Deref`][deref] trait), -//! so you can call `T`'s methods on a value of type `Rc`. To avoid name -//! clashes with `T`'s methods, the methods of `Rc` itself are [associated +//! `Rc` automatically dereferences to `T` (via the [`Deref`] trait), +//! so you can call `T`'s methods on a value of type [`Rc`][`Rc`]. To avoid name +//! clashes with `T`'s methods, the methods of [`Rc`][`Rc`] itself are [associated //! functions][assoc], called using function-like syntax: //! //! ``` @@ -50,28 +50,15 @@ //! Rc::downgrade(&my_rc); //! ``` //! -//! `Weak` does not auto-dereference to `T`, because the value may have +//! [`Weak`][`Weak`] does not auto-dereference to `T`, because the value may have //! already been destroyed. //! -//! [rc]: struct.Rc.html -//! [weak]: struct.Weak.html -//! [clone]: ../../std/clone/trait.Clone.html#tymethod.clone -//! [cell]: ../../std/cell/struct.Cell.html -//! [refcell]: ../../std/cell/struct.RefCell.html -//! [send]: ../../std/marker/trait.Send.html -//! [arc]: ../../std/sync/struct.Arc.html -//! [deref]: ../../std/ops/trait.Deref.html -//! [downgrade]: struct.Rc.html#method.downgrade -//! [upgrade]: struct.Weak.html#method.upgrade -//! [option]: ../../std/option/enum.Option.html -//! [assoc]: ../../book/method-syntax.html#associated-functions -//! //! # Examples //! //! Consider a scenario where a set of `Gadget`s are owned by a given `Owner`. //! We want to have our `Gadget`s point to their `Owner`. We can't do this with //! unique ownership, because more than one gadget may belong to the same -//! `Owner`. `Rc` allows us to share an `Owner` between multiple `Gadget`s, +//! `Owner`. [`Rc`] allows us to share an `Owner` between multiple `Gadget`s, //! and have the `Owner` remain allocated as long as any `Gadget` points at it. //! //! ``` @@ -127,20 +114,20 @@ //! ``` //! //! If our requirements change, and we also need to be able to traverse from -//! `Owner` to `Gadget`, we will run into problems. An `Rc` pointer from `Owner` +//! `Owner` to `Gadget`, we will run into problems. An [`Rc`] pointer from `Owner` //! to `Gadget` introduces a cycle between the values. This means that their //! reference counts can never reach 0, and the values will remain allocated -//! forever: a memory leak. In order to get around this, we can use `Weak` +//! forever: a memory leak. In order to get around this, we can use [`Weak`] //! pointers. //! //! Rust actually makes it somewhat difficult to produce this loop in the first //! place. In order to end up with two values that point at each other, one of -//! them needs to be mutable. This is difficult because `Rc` enforces +//! them needs to be mutable. This is difficult because [`Rc`] enforces //! memory safety by only giving out shared references to the value it wraps, //! and these don't allow direct mutation. We need to wrap the part of the -//! value we wish to mutate in a [`RefCell`][refcell], which provides *interior +//! value we wish to mutate in a [`RefCell`], which provides *interior //! mutability*: a method to achieve mutability through a shared reference. -//! `RefCell` enforces Rust's borrowing rules at runtime. +//! [`RefCell`] enforces Rust's borrowing rules at runtime. //! //! ``` //! use std::rc::Rc; @@ -214,6 +201,19 @@ //! // Gadget Man, so he gets destroyed as well. //! 
} //! ``` +//! +//! [`Rc`]: struct.Rc.html +//! [`Weak`]: struct.Weak.html +//! [clone]: ../../std/clone/trait.Clone.html#tymethod.clone +//! [`Cell`]: ../../std/cell/struct.Cell.html +//! [`RefCell`]: ../../std/cell/struct.RefCell.html +//! [send]: ../../std/marker/trait.Send.html +//! [arc]: ../../std/sync/struct.Arc.html +//! [`Deref`]: ../../std/ops/trait.Deref.html +//! [downgrade]: struct.Rc.html#method.downgrade +//! [upgrade]: struct.Weak.html#method.upgrade +//! [`None`]: ../../std/option/enum.Option.html#variant.None +//! [assoc]: ../../book/method-syntax.html#associated-functions #![stable(feature = "rust1", since = "1.0.0")] @@ -251,9 +251,11 @@ struct RcBox { /// See the [module-level documentation](./index.html) for more details. /// /// The inherent methods of `Rc` are all associated functions, which means -/// that you have to call them as e.g. `Rc::get_mut(&value)` instead of -/// `value.get_mut()`. This avoids conflicts with methods of the inner +/// that you have to call them as e.g. [`Rc::get_mut(&value)`][get_mut] instead of +/// `value.get_mut()`. This avoids conflicts with methods of the inner /// type `T`. +/// +/// [get_mut]: #method.get_mut #[stable(feature = "rust1", since = "1.0.0")] pub struct Rc { ptr: Shared>, @@ -318,7 +320,7 @@ impl Rc { #[inline] #[stable(feature = "rc_unique", since = "1.4.0")] pub fn try_unwrap(this: Self) -> Result { - if Rc::would_unwrap(&this) { + if Rc::strong_count(&this) == 1 { unsafe { let val = ptr::read(&*this); // copy the contained object @@ -337,32 +339,78 @@ impl Rc { } /// Checks whether [`Rc::try_unwrap`][try_unwrap] would return - /// [`Ok`][result]. + /// [`Ok`]. /// /// [try_unwrap]: struct.Rc.html#method.try_unwrap - /// [result]: ../../std/result/enum.Result.html + /// [`Ok`]: ../../std/result/enum.Result.html#variant.Ok + #[unstable(feature = "rc_would_unwrap", + reason = "just added for niche usecase", + issue = "28356")] + #[rustc_deprecated(since = "1.15.0", reason = "too niche; use `strong_count` instead")] + pub fn would_unwrap(this: &Self) -> bool { + Rc::strong_count(&this) == 1 + } + + /// Consumes the `Rc`, returning the wrapped pointer. + /// + /// To avoid a memory leak the pointer must be converted back to an `Rc` using + /// [`Rc::from_raw`][from_raw]. + /// + /// [from_raw]: struct.Rc.html#method.from_raw /// /// # Examples /// /// ``` - /// #![feature(rc_would_unwrap)] + /// #![feature(rc_raw)] /// /// use std::rc::Rc; /// - /// let x = Rc::new(3); - /// assert!(Rc::would_unwrap(&x)); - /// assert_eq!(Rc::try_unwrap(x), Ok(3)); - /// - /// let x = Rc::new(4); - /// let _y = x.clone(); - /// assert!(!Rc::would_unwrap(&x)); - /// assert_eq!(*Rc::try_unwrap(x).unwrap_err(), 4); + /// let x = Rc::new(10); + /// let x_ptr = Rc::into_raw(x); + /// assert_eq!(unsafe { *x_ptr }, 10); /// ``` - #[unstable(feature = "rc_would_unwrap", - reason = "just added for niche usecase", - issue = "28356")] - pub fn would_unwrap(this: &Self) -> bool { - Rc::strong_count(&this) == 1 + #[unstable(feature = "rc_raw", issue = "37197")] + pub fn into_raw(this: Self) -> *mut T { + let ptr = unsafe { &mut (**this.ptr).value as *mut _ }; + mem::forget(this); + ptr + } + + /// Constructs an `Rc` from a raw pointer. + /// + /// The raw pointer must have been previously returned by a call to a + /// [`Rc::into_raw`][into_raw]. + /// + /// This function is unsafe because improper use may lead to memory problems. For example, a + /// double-free may occur if the function is called twice on the same raw pointer. 
+ /// + /// [into_raw]: struct.Rc.html#method.into_raw + /// + /// # Examples + /// + /// ``` + /// #![feature(rc_raw)] + /// + /// use std::rc::Rc; + /// + /// let x = Rc::new(10); + /// let x_ptr = Rc::into_raw(x); + /// + /// unsafe { + /// // Convert back to an `Rc` to prevent leak. + /// let x = Rc::from_raw(x_ptr); + /// assert_eq!(*x, 10); + /// + /// // Further calls to `Rc::from_raw(x_ptr)` would be memory unsafe. + /// } + /// + /// // The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling! + /// ``` + #[unstable(feature = "rc_raw", issue = "37197")] + pub unsafe fn from_raw(ptr: *mut T) -> Self { + // To find the corresponding pointer to the `RcBox` we need to subtract the offset of the + // `value` field from the pointer. + Rc { ptr: Shared::new((ptr as *mut u8).offset(-offset_of!(RcBox, value)) as *mut _) } } } @@ -418,8 +466,6 @@ impl Rc { /// # Examples /// /// ``` - /// #![feature(rc_counts)] - /// /// use std::rc::Rc; /// /// let five = Rc::new(5); @@ -428,8 +474,7 @@ impl Rc { /// assert_eq!(1, Rc::weak_count(&five)); /// ``` #[inline] - #[unstable(feature = "rc_counts", reason = "not clearly useful", - issue = "28356")] + #[stable(feature = "rc_counts", since = "1.15.0")] pub fn weak_count(this: &Self) -> usize { this.weak() - 1 } @@ -439,8 +484,6 @@ impl Rc { /// # Examples /// /// ``` - /// #![feature(rc_counts)] - /// /// use std::rc::Rc; /// /// let five = Rc::new(5); @@ -449,8 +492,7 @@ impl Rc { /// assert_eq!(2, Rc::strong_count(&five)); /// ``` #[inline] - #[unstable(feature = "rc_counts", reason = "not clearly useful", - issue = "28356")] + #[stable(feature = "rc_counts", since = "1.15.0")] pub fn strong_count(this: &Self) -> usize { this.strong() } @@ -459,21 +501,11 @@ impl Rc { /// this inner value. /// /// [weak]: struct.Weak.html - /// - /// # Examples - /// - /// ``` - /// #![feature(rc_counts)] - /// - /// use std::rc::Rc; - /// - /// let five = Rc::new(5); - /// - /// assert!(Rc::is_unique(&five)); - /// ``` #[inline] - #[unstable(feature = "rc_counts", reason = "uniqueness has unclear meaning", + #[unstable(feature = "is_unique", reason = "uniqueness has unclear meaning", issue = "28356")] + #[rustc_deprecated(since = "1.15.0", + reason = "too niche; use `strong_count` and `weak_count` instead")] pub fn is_unique(this: &Self) -> bool { Rc::weak_count(this) == 0 && Rc::strong_count(this) == 1 } @@ -481,14 +513,14 @@ impl Rc { /// Returns a mutable reference to the inner value, if there are /// no other `Rc` or [`Weak`][weak] pointers to the same value. /// - /// Returns [`None`][option] otherwise, because it is not safe to + /// Returns [`None`] otherwise, because it is not safe to /// mutate a shared value. /// /// See also [`make_mut`][make_mut], which will [`clone`][clone] /// the inner value when it's shared. 
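With the count accessors stabilized and `would_unwrap`/`is_unique` deprecated in this patch, callers are expected to express the same checks through `strong_count` and `weak_count`. A possible migration, assuming the stabilized methods as proposed here:

```rust
use std::rc::Rc;

fn main() {
    let x = Rc::new(3);
    let _weak = Rc::downgrade(&x);

    // Stand-in for the deprecated `Rc::would_unwrap(&x)`:
    let would_unwrap = Rc::strong_count(&x) == 1;

    // Stand-in for the deprecated `Rc::is_unique(&x)`:
    let is_unique = Rc::strong_count(&x) == 1 && Rc::weak_count(&x) == 0;

    assert!(would_unwrap);
    assert!(!is_unique); // a weak pointer is still alive
}
```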
/// /// [weak]: struct.Weak.html - /// [option]: ../../std/option/enum.Option.html + /// [`None`]: ../../std/option/enum.Option.html#variant.None /// [make_mut]: struct.Rc.html#method.make_mut /// [clone]: ../../std/clone/trait.Clone.html#tymethod.clone /// @@ -1287,6 +1319,23 @@ mod tests { assert_eq!(Rc::try_unwrap(x), Ok(5)); } + #[test] + fn into_from_raw() { + let x = Rc::new(box "hello"); + let y = x.clone(); + + let x_ptr = Rc::into_raw(x); + drop(y); + unsafe { + assert_eq!(**x_ptr, "hello"); + + let x = Rc::from_raw(x_ptr); + assert_eq!(**x, "hello"); + + assert_eq!(Rc::try_unwrap(x).map(|x| *x), Ok("hello")); + } + } + #[test] fn get_mut() { let mut x = Rc::new(3); diff --git a/src/liballoc_jemalloc/Cargo.toml b/src/liballoc_jemalloc/Cargo.toml index 25b3c8a3a0..01393be994 100644 --- a/src/liballoc_jemalloc/Cargo.toml +++ b/src/liballoc_jemalloc/Cargo.toml @@ -9,6 +9,7 @@ links = "jemalloc" name = "alloc_jemalloc" path = "lib.rs" test = false +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/liballoc_jemalloc/build.rs b/src/liballoc_jemalloc/build.rs index 08a1f8ae8c..60b7875a97 100644 --- a/src/liballoc_jemalloc/build.rs +++ b/src/liballoc_jemalloc/build.rs @@ -69,6 +69,7 @@ fn main() { .read_dir() .unwrap() .map(|e| e.unwrap()) + .filter(|e| &*e.file_name() != ".git") .collect::>(); while let Some(entry) = stack.pop() { let path = entry.path(); @@ -150,11 +151,17 @@ fn main() { cmd.arg(format!("--build={}", build_helper::gnu_target(&host))); run(&mut cmd); - run(Command::new("make") - .current_dir(&build_dir) - .arg("build_lib_static") - .arg("-j") - .arg(env::var("NUM_JOBS").expect("NUM_JOBS was not set"))); + let mut make = Command::new(build_helper::make(&host)); + make.current_dir(&build_dir) + .arg("build_lib_static"); + + // mingw make seems... buggy? unclear... 
+ if !host.contains("windows") { + make.arg("-j") + .arg(env::var("NUM_JOBS").expect("NUM_JOBS was not set")); + } + + run(&mut make); if target.contains("windows") { println!("cargo:rustc-link-lib=static=jemalloc"); diff --git a/src/liballoc_system/Cargo.toml b/src/liballoc_system/Cargo.toml index 88e8e2d7ad..8e3c2c0b9c 100644 --- a/src/liballoc_system/Cargo.toml +++ b/src/liballoc_system/Cargo.toml @@ -7,6 +7,7 @@ version = "0.0.0" name = "alloc_system" path = "lib.rs" test = false +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/libcollections/Cargo.toml b/src/libcollections/Cargo.toml index 65d456e750..ab882fde9c 100644 --- a/src/libcollections/Cargo.toml +++ b/src/libcollections/Cargo.toml @@ -10,8 +10,12 @@ path = "lib.rs" [dependencies] alloc = { path = "../liballoc" } core = { path = "../libcore" } -rustc_unicode = { path = "../librustc_unicode" } +std_unicode = { path = "../libstd_unicode" } [[test]] name = "collectionstest" path = "../libcollectionstest/lib.rs" + +[[bench]] +name = "collectionstest" +path = "../libcollectionstest/lib.rs" diff --git a/src/libcollections/binary_heap.rs b/src/libcollections/binary_heap.rs index b4be8a4321..8d0c76c364 100644 --- a/src/libcollections/binary_heap.rs +++ b/src/libcollections/binary_heap.rs @@ -986,7 +986,11 @@ impl<'a, T> DoubleEndedIterator for Iter<'a, T> { } #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T> ExactSizeIterator for Iter<'a, T> {} +impl<'a, T> ExactSizeIterator for Iter<'a, T> { + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for Iter<'a, T> {} @@ -1022,7 +1026,11 @@ impl DoubleEndedIterator for IntoIter { } #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for IntoIter {} +impl ExactSizeIterator for IntoIter { + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for IntoIter {} @@ -1057,7 +1065,11 @@ impl<'a, T: 'a> DoubleEndedIterator for Drain<'a, T> { } #[stable(feature = "drain", since = "1.6.0")] -impl<'a, T: 'a> ExactSizeIterator for Drain<'a, T> {} +impl<'a, T: 'a> ExactSizeIterator for Drain<'a, T> { + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T: 'a> FusedIterator for Drain<'a, T> {} diff --git a/src/libcollections/enum_set.rs b/src/libcollections/enum_set.rs index 2d12b4ccff..79e0021b14 100644 --- a/src/libcollections/enum_set.rs +++ b/src/libcollections/enum_set.rs @@ -16,7 +16,7 @@ #![unstable(feature = "enumset", reason = "matches collection reform specification, \ waiting for dust to settle", - issue = "0")] + issue = "37966")] use core::marker; use core::fmt; diff --git a/src/libcollections/lib.rs b/src/libcollections/lib.rs index 23d6edd6d7..68b067012d 100644 --- a/src/libcollections/lib.rs +++ b/src/libcollections/lib.rs @@ -36,6 +36,7 @@ #![cfg_attr(not(test), feature(char_escape_debug))] #![feature(core_intrinsics)] #![feature(dropck_parametricity)] +#![feature(exact_size_is_empty)] #![feature(fmt_internals)] #![feature(fused)] #![feature(heap_api)] @@ -46,18 +47,19 @@ #![feature(placement_in)] #![feature(placement_new_protocol)] #![feature(shared)] +#![feature(slice_get_slice)] #![feature(slice_patterns)] #![feature(specialization)] #![feature(staged_api)] -#![feature(step_by)] #![feature(trusted_len)] #![feature(unicode)] #![feature(unique)] +#![feature(untagged_unions)] #![cfg_attr(test, feature(rand, test))] 
#![no_std] -extern crate rustc_unicode; +extern crate std_unicode; extern crate alloc; #[cfg(test)] diff --git a/src/libcollections/slice.rs b/src/libcollections/slice.rs index 75796cf94b..5fb8cd6e1e 100644 --- a/src/libcollections/slice.rs +++ b/src/libcollections/slice.rs @@ -98,8 +98,7 @@ #![cfg_attr(test, allow(unused_imports, dead_code))] use alloc::boxed::Box; -use core::cmp::Ordering::{self, Greater, Less}; -use core::cmp; +use core::cmp::Ordering::{self, Greater}; use core::mem::size_of; use core::mem; use core::ptr; @@ -118,6 +117,8 @@ pub use core::slice::{SplitMut, ChunksMut, Split}; pub use core::slice::{SplitN, RSplitN, SplitNMut, RSplitNMut}; #[stable(feature = "rust1", since = "1.0.0")] pub use core::slice::{from_raw_parts, from_raw_parts_mut}; +#[unstable(feature = "slice_get_slice", issue = "35729")] +pub use core::slice::SliceIndex; //////////////////////////////////////////////////////////////////////////////// // Basic slice extension methods @@ -353,7 +354,9 @@ impl [T] { /// ``` #[stable(feature = "rust1", since = "1.0.0")] #[inline] - pub fn get(&self, index: usize) -> Option<&T> { + pub fn get(&self, index: I) -> Option<&I::Output> + where I: SliceIndex + { core_slice::SliceExt::get(self, index) } @@ -372,7 +375,9 @@ impl [T] { /// or `None` if the index is out of bounds #[stable(feature = "rust1", since = "1.0.0")] #[inline] - pub fn get_mut(&mut self, index: usize) -> Option<&mut T> { + pub fn get_mut(&mut self, index: I) -> Option<&mut I::Output> + where I: SliceIndex + { core_slice::SliceExt::get_mut(self, index) } @@ -390,7 +395,9 @@ impl [T] { /// ``` #[stable(feature = "rust1", since = "1.0.0")] #[inline] - pub unsafe fn get_unchecked(&self, index: usize) -> &T { + pub unsafe fn get_unchecked(&self, index: I) -> &I::Output + where I: SliceIndex + { core_slice::SliceExt::get_unchecked(self, index) } @@ -410,7 +417,9 @@ impl [T] { /// ``` #[stable(feature = "rust1", since = "1.0.0")] #[inline] - pub unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut T { + pub unsafe fn get_unchecked_mut(&mut self, index: I) -> &mut I::Output + where I: SliceIndex + { core_slice::SliceExt::get_unchecked_mut(self, index) } @@ -1032,8 +1041,8 @@ impl [T] { /// This is equivalent to `self.sort_by(|a, b| a.cmp(b))`. /// - /// This sort is stable and `O(n log n)` worst-case but allocates - /// approximately `2 * n` where `n` is the length of `self`. + /// This sort is stable and `O(n log n)` worst-case, but allocates + /// temporary storage half the size of `self`. /// /// # Examples /// @@ -1054,8 +1063,8 @@ impl [T] { /// Sorts the slice, in place, using `f` to extract a key by which to /// order the sort by. /// - /// This sort is stable and `O(n log n)` worst-case but allocates - /// approximately `2 * n`, where `n` is the length of `self`. + /// This sort is stable and `O(n log n)` worst-case, but allocates + /// temporary storage half the size of `self`. /// /// # Examples /// @@ -1076,8 +1085,8 @@ impl [T] { /// Sorts the slice, in place, using `compare` to compare /// elements. /// - /// This sort is stable and `O(n log n)` worst-case but allocates - /// approximately `2 * n`, where `n` is the length of `self`. + /// This sort is stable and `O(n log n)` worst-case, but allocates + /// temporary storage half the size of `self`. 
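The `get`/`get_mut`/`get_unchecked`/`get_unchecked_mut` signatures above are generalized over `SliceIndex`, so a single method accepts either a plain index or a range. A quick illustration of the resulting call sites:

```rust
fn main() {
    let v = [10, 40, 30];

    // A single index yields Option<&T>.
    assert_eq!(v.get(1), Some(&40));
    assert_eq!(v.get(3), None);

    // A range yields Option<&[T]>, through the same `get` method.
    assert_eq!(v.get(0..2), Some(&[10, 40][..]));
    assert_eq!(v.get(0..4), None);
}
```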
/// /// # Examples /// @@ -1295,213 +1304,333 @@ impl ToOwned for [T] { // Sorting //////////////////////////////////////////////////////////////////////////////// -fn insertion_sort(v: &mut [T], mut compare: F) +/// Inserts `v[0]` into pre-sorted sequence `v[1..]` so that whole `v[..]` becomes sorted. +/// +/// This is the integral subroutine of insertion sort. +fn insert_head(v: &mut [T], compare: &mut F) where F: FnMut(&T, &T) -> Ordering { - let len = v.len() as isize; - let buf_v = v.as_mut_ptr(); - - // 1 <= i < len; - for i in 1..len { - // j satisfies: 0 <= j <= i; - let mut j = i; + if v.len() >= 2 && compare(&v[0], &v[1]) == Greater { unsafe { - // `i` is in bounds. - let read_ptr = buf_v.offset(i) as *const T; + // There are three ways to implement insertion here: + // + // 1. Swap adjacent elements until the first one gets to its final destination. + // However, this way we copy data around more than is necessary. If elements are big + // structures (costly to copy), this method will be slow. + // + // 2. Iterate until the right place for the first element is found. Then shift the + // elements succeeding it to make room for it and finally place it into the + // remaining hole. This is a good method. + // + // 3. Copy the first element into a temporary variable. Iterate until the right place + // for it is found. As we go along, copy every traversed element into the slot + // preceding it. Finally, copy data from the temporary variable into the remaining + // hole. This method is very good. Benchmarks demonstrated slightly better + // performance than with the 2nd method. + // + // All methods were benchmarked, and the 3rd showed best results. So we chose that one. + let mut tmp = NoDrop { value: ptr::read(&v[0]) }; - // find where to insert, we need to do strict <, - // rather than <=, to maintain stability. + // Intermediate state of the insertion process is always tracked by `hole`, which + // serves two purposes: + // 1. Protects integrity of `v` from panics in `compare`. + // 2. Fills the remaining hole in `v` in the end. + // + // Panic safety: + // + // If `compare` panics at any point during the process, `hole` will get dropped and + // fill the hole in `v` with `tmp`, thus ensuring that `v` still holds every object it + // initially held exactly once. + let mut hole = InsertionHole { + src: &mut tmp.value, + dest: &mut v[1], + }; + ptr::copy_nonoverlapping(&v[1], &mut v[0], 1); - // 0 <= j - 1 < len, so .offset(j - 1) is in bounds. - while j > 0 && compare(&*read_ptr, &*buf_v.offset(j - 1)) == Less { - j -= 1; + for i in 2..v.len() { + if compare(&tmp.value, &v[i]) != Greater { + break; + } + ptr::copy_nonoverlapping(&v[i], &mut v[i - 1], 1); + hole.dest = &mut v[i]; } + // `hole` gets dropped and thus copies `tmp` into the remaining hole in `v`. + } + } - // shift everything to the right, to make space to - // insert this value. + // Holds a value, but never drops it. + #[allow(unions_with_drop_fields)] + union NoDrop { + value: T + } - // j + 1 could be `len` (for the last `i`), but in - // that case, `i == j` so we don't copy. The - // `.offset(j)` is always in bounds. + // When dropped, copies from `src` into `dest`. 
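To make the role of `insert_head` concrete, here is a safe, behavior-only model using the adjacent-swap strategy that the comment above lists as method 1; the patch itself uses method 3 (temporary value plus hole) for speed and panic safety. The helper name below is illustrative, not from the patch.

```rust
use std::cmp::Ordering;

// Insert v[0] into the already-sorted tail v[1..] so the whole slice ends
// up sorted, by repeatedly swapping it with its right neighbor.
fn insert_head_naive<T, F>(v: &mut [T], compare: &mut F)
where
    F: FnMut(&T, &T) -> Ordering,
{
    let mut i = 0;
    while i + 1 < v.len() && compare(&v[i], &v[i + 1]) == Ordering::Greater {
        v.swap(i, i + 1);
        i += 1;
    }
}

fn main() {
    // v[1..] is already sorted; only v[0] is out of place.
    let mut v = [5, 1, 2, 4, 6];
    insert_head_naive(&mut v, &mut |a: &i32, b: &i32| a.cmp(b));
    assert_eq!(v, [1, 2, 4, 5, 6]);
}
```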
+ struct InsertionHole { + src: *mut T, + dest: *mut T, + } - if i != j { - let tmp = ptr::read(read_ptr); - ptr::copy(&*buf_v.offset(j), buf_v.offset(j + 1), (i - j) as usize); - ptr::copy_nonoverlapping(&tmp, buf_v.offset(j), 1); - mem::forget(tmp); - } + impl Drop for InsertionHole { + fn drop(&mut self) { + unsafe { ptr::copy_nonoverlapping(self.src, self.dest, 1); } } } } -fn merge_sort(v: &mut [T], mut compare: F) +/// Merges non-decreasing runs `v[..mid]` and `v[mid..]` using `buf` as temporary storage, and +/// stores the result into `v[..]`. +/// +/// # Safety +/// +/// The two slices must be non-empty and `mid` must be in bounds. Buffer `buf` must be long enough +/// to hold a copy of the shorter slice. Also, `T` must not be a zero-sized type. +unsafe fn merge(v: &mut [T], mid: usize, buf: *mut T, compare: &mut F) where F: FnMut(&T, &T) -> Ordering { - // warning: this wildly uses unsafe. - const BASE_INSERTION: usize = 32; - const LARGE_INSERTION: usize = 16; - - // FIXME #12092: smaller insertion runs seems to make sorting - // vectors of large elements a little faster on some platforms, - // but hasn't been tested/tuned extensively - let insertion = if size_of::() <= 16 { - BASE_INSERTION - } else { - LARGE_INSERTION - }; - let len = v.len(); + let v = v.as_mut_ptr(); + let v_mid = v.offset(mid as isize); + let v_end = v.offset(len as isize); - // short vectors get sorted in-place via insertion sort to avoid allocations - if len <= insertion { - insertion_sort(v, compare); - return; - } + // The merge process first copies the shorter run into `buf`. Then it traces the newly copied + // run and the longer run forwards (or backwards), comparing their next unconsumed elements and + // copying the lesser (or greater) one into `v`. + // + // As soon as the shorter run is fully consumed, the process is done. If the longer run gets + // consumed first, then we must copy whatever is left of the shorter run into the remaining + // hole in `v`. + // + // Intermediate state of the process is always tracked by `hole`, which serves two purposes: + // 1. Protects integrity of `v` from panics in `compare`. + // 2. Fills the remaining hole in `v` if the longer run gets consumed first. + // + // Panic safety: + // + // If `compare` panics at any point during the process, `hole` will get dropped and fill the + // hole in `v` with the unconsumed range in `buf`, thus ensuring that `v` still holds every + // object it initially held exactly once. + let mut hole; - // allocate some memory to use as scratch memory, we keep the - // length 0 so we can keep shallow copies of the contents of `v` - // without risking the dtors running on an object twice if - // `compare` panics. - let mut working_space = Vec::with_capacity(2 * len); - // these both are buffers of length `len`. - let mut buf_dat = working_space.as_mut_ptr(); - let mut buf_tmp = unsafe { buf_dat.offset(len as isize) }; + if mid <= len - mid { + // The left run is shorter. + ptr::copy_nonoverlapping(v, buf, mid); + hole = MergeHole { + start: buf, + end: buf.offset(mid as isize), + dest: v, + }; - // length `len`. - let buf_v = v.as_ptr(); + // Initially, these pointers point to the beginnings of their arrays. + let left = &mut hole.start; + let mut right = v_mid; + let out = &mut hole.dest; - // step 1. sort short runs with insertion sort. This takes the - // values from `v` and sorts them into `buf_dat`, leaving that - // with sorted runs of length INSERTION. + while *left < hole.end && right < v_end { + // Consume the lesser side. 
+ // If equal, prefer the left run to maintain stability. + let to_copy = if compare(&**left, &*right) == Greater { + get_and_increment(&mut right) + } else { + get_and_increment(left) + }; + ptr::copy_nonoverlapping(to_copy, get_and_increment(out), 1); + } + } else { + // The right run is shorter. + ptr::copy_nonoverlapping(v_mid, buf, len - mid); + hole = MergeHole { + start: buf, + end: buf.offset((len - mid) as isize), + dest: v_mid, + }; - // We could hardcode the sorting comparisons here, and we could - // manipulate/step the pointers themselves, rather than repeatedly - // .offset-ing. - for start in (0..len).step_by(insertion) { - // start <= i < len; - for i in start..cmp::min(start + insertion, len) { - // j satisfies: start <= j <= i; - let mut j = i as isize; - unsafe { - // `i` is in bounds. - let read_ptr = buf_v.offset(i as isize); + // Initially, these pointers point past the ends of their arrays. + let left = &mut hole.dest; + let right = &mut hole.end; + let mut out = v_end; - // find where to insert, we need to do strict <, - // rather than <=, to maintain stability. - - // start <= j - 1 < len, so .offset(j - 1) is in - // bounds. - while j > start as isize && compare(&*read_ptr, &*buf_dat.offset(j - 1)) == Less { - j -= 1; - } - - // shift everything to the right, to make space to - // insert this value. - - // j + 1 could be `len` (for the last `i`), but in - // that case, `i == j` so we don't copy. The - // `.offset(j)` is always in bounds. - ptr::copy(&*buf_dat.offset(j), buf_dat.offset(j + 1), i - j as usize); - ptr::copy_nonoverlapping(read_ptr, buf_dat.offset(j), 1); - } + while v < *left && buf < *right { + // Consume the greater side. + // If equal, prefer the right run to maintain stability. + let to_copy = if compare(&*left.offset(-1), &*right.offset(-1)) == Greater { + decrement_and_get(left) + } else { + decrement_and_get(right) + }; + ptr::copy_nonoverlapping(to_copy, decrement_and_get(&mut out), 1); } } + // Finally, `hole` gets dropped. If the shorter run was not fully consumed, whatever remains of + // it will now be copied into the hole in `v`. - // step 2. merge the sorted runs. - let mut width = insertion; - while width < len { - // merge the sorted runs of length `width` in `buf_dat` two at - // a time, placing the result in `buf_tmp`. - - // 0 <= start <= len. - for start in (0..len).step_by(2 * width) { - // manipulate pointers directly for speed (rather than - // using a `for` loop with `range` and `.offset` inside - // that loop). - unsafe { - // the end of the first run & start of the - // second. Offset of `len` is defined, since this is - // precisely one byte past the end of the object. - let right_start = buf_dat.offset(cmp::min(start + width, len) as isize); - // end of the second. Similar reasoning to the above re safety. - let right_end_idx = cmp::min(start + 2 * width, len); - let right_end = buf_dat.offset(right_end_idx as isize); - - // the pointers to the elements under consideration - // from the two runs. - - // both of these are in bounds. - let mut left = buf_dat.offset(start as isize); - let mut right = right_start; - - // where we're putting the results, it is a run of - // length `2*width`, so we step it once for each step - // of either `left` or `right`. `buf_tmp` has length - // `len`, so these are in bounds. 
- let mut out = buf_tmp.offset(start as isize); - let out_end = buf_tmp.offset(right_end_idx as isize); - - // If left[last] <= right[0], they are already in order: - // fast-forward the left side (the right side is handled - // in the loop). - // If `right` is not empty then left is not empty, and - // the offsets are in bounds. - if right != right_end && compare(&*right.offset(-1), &*right) != Greater { - let elems = (right_start as usize - left as usize) / mem::size_of::(); - ptr::copy_nonoverlapping(&*left, out, elems); - out = out.offset(elems as isize); - left = right_start; - } - - while out < out_end { - // Either the left or the right run are exhausted, - // so just copy the remainder from the other run - // and move on; this gives a huge speed-up (order - // of 25%) for mostly sorted vectors (the best - // case). - if left == right_start { - // the number remaining in this run. - let elems = (right_end as usize - right as usize) / mem::size_of::(); - ptr::copy_nonoverlapping(&*right, out, elems); - break; - } else if right == right_end { - let elems = (right_start as usize - left as usize) / mem::size_of::(); - ptr::copy_nonoverlapping(&*left, out, elems); - break; - } - - // check which side is smaller, and that's the - // next element for the new run. - - // `left < right_start` and `right < right_end`, - // so these are valid. - let to_copy = if compare(&*left, &*right) == Greater { - step(&mut right) - } else { - step(&mut left) - }; - ptr::copy_nonoverlapping(&*to_copy, out, 1); - step(&mut out); - } - } - } - - mem::swap(&mut buf_dat, &mut buf_tmp); - - width *= 2; - } - - // write the result to `v` in one go, so that there are never two copies - // of the same object in `v`. - unsafe { - ptr::copy_nonoverlapping(&*buf_dat, v.as_mut_ptr(), len); - } - - // increment the pointer, returning the old pointer. - #[inline(always)] - unsafe fn step(ptr: &mut *mut T) -> *mut T { + unsafe fn get_and_increment(ptr: &mut *mut T) -> *mut T { let old = *ptr; *ptr = ptr.offset(1); old } + + unsafe fn decrement_and_get(ptr: &mut *mut T) -> *mut T { + *ptr = ptr.offset(-1); + *ptr + } + + // When dropped, copies the range `start..end` into `dest..`. + struct MergeHole { + start: *mut T, + end: *mut T, + dest: *mut T, + } + + impl Drop for MergeHole { + fn drop(&mut self) { + // `T` is not a zero-sized type, so it's okay to divide by it's size. + let len = (self.end as usize - self.start as usize) / mem::size_of::(); + unsafe { ptr::copy_nonoverlapping(self.start, self.dest, len); } + } + } +} + +/// This merge sort borrows some (but not all) ideas from TimSort, which is described in detail +/// [here](http://svn.python.org/projects/python/trunk/Objects/listsort.txt). +/// +/// The algorithm identifies strictly descending and non-descending subsequences, which are called +/// natural runs. There is a stack of pending runs yet to be merged. Each newly found run is pushed +/// onto the stack, and then some pairs of adjacent runs are merged until these two invariants are +/// satisfied, for every `i` in `0 .. runs.len() - 2`: +/// +/// 1. `runs[i].len > runs[i + 1].len` +/// 2. `runs[i].len > runs[i + 1].len + runs[i + 2].len` +/// +/// The invariants ensure that the total running time is `O(n log n)` worst-case. +fn merge_sort(v: &mut [T], mut compare: F) + where F: FnMut(&T, &T) -> Ordering +{ + // Sorting has no meaningful behavior on zero-sized types. + if size_of::() == 0 { + return; + } + + // FIXME #12092: These numbers are platform-specific and need more extensive testing/tuning. 
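The two run-length invariants described above are what the `collapse` helper (further down in this patch) enforces on the stack of pending runs. The following standalone toy mirrors that logic so the check can be exercised directly; the run lengths and start positions are made up for illustration.

```rust
// Toy model of the run stack: each run covers v[start..start + len].
struct Run {
    start: usize,
    len: usize,
}

// Returns Some(r) if runs[r] and runs[r + 1] must be merged next.
fn collapse(runs: &[Run]) -> Option<usize> {
    let n = runs.len();
    if n >= 2
        && (runs[n - 1].start == 0
            || runs[n - 2].len <= runs[n - 1].len
            || (n >= 3 && runs[n - 3].len <= runs[n - 2].len + runs[n - 1].len)
            || (n >= 4 && runs[n - 4].len <= runs[n - 3].len + runs[n - 2].len))
    {
        if n >= 3 && runs[n - 3].len < runs[n - 1].len {
            Some(n - 3)
        } else {
            Some(n - 2)
        }
    } else {
        None
    }
}

fn main() {
    // Invariants hold (30 > 20), so no merge is demanded yet.
    let ok = [Run { start: 70, len: 30 }, Run { start: 50, len: 20 }];
    assert_eq!(collapse(&ok), None);

    // The top run is at least as long as the one below it (20 <= 25), so
    // the two most recently found runs (indices 1 and 2) must be merged.
    let top_heavy = [
        Run { start: 70, len: 30 },
        Run { start: 50, len: 20 },
        Run { start: 25, len: 25 },
    ];
    assert_eq!(collapse(&top_heavy), Some(1));

    // The oldest run is too short (10 <= 30 + 20) and also shorter than the
    // top run, so the two runs lower in the stack (indices 0 and 1) go first.
    let bottom_light = [
        Run { start: 90, len: 10 },
        Run { start: 60, len: 30 },
        Run { start: 40, len: 20 },
    ];
    assert_eq!(collapse(&bottom_light), Some(0));
}
```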
+ // + // If `v` has length up to `insertion_len`, simply switch to insertion sort because it is going + // to perform better than merge sort. For bigger types `T`, the threshold is smaller. + // + // Short runs are extended using insertion sort to span at least `min_run` elements, in order + // to improve performance. + let (max_insertion, min_run) = if size_of::() <= 16 { + (64, 32) + } else { + (32, 16) + }; + + let len = v.len(); + + // Short arrays get sorted in-place via insertion sort to avoid allocations. + if len <= max_insertion { + if len >= 2 { + for i in (0..len-1).rev() { + insert_head(&mut v[i..], &mut compare); + } + } + return; + } + + // Allocate a buffer to use as scratch memory. We keep the length 0 so we can keep in it + // shallow copies of the contents of `v` without risking the dtors running on copies if + // `compare` panics. When merging two sorted runs, this buffer holds a copy of the shorter run, + // which will always have length at most `len / 2`. + let mut buf = Vec::with_capacity(len / 2); + + // In order to identify natural runs in `v`, we traverse it backwards. That might seem like a + // strange decision, but consider the fact that merges more often go in the opposite direction + // (forwards). According to benchmarks, merging forwards is slightly faster than merging + // backwards. To conclude, identifying runs by traversing backwards improves performance. + let mut runs = vec![]; + let mut end = len; + while end > 0 { + // Find the next natural run, and reverse it if it's strictly descending. + let mut start = end - 1; + if start > 0 { + start -= 1; + if compare(&v[start], &v[start + 1]) == Greater { + while start > 0 && compare(&v[start - 1], &v[start]) == Greater { + start -= 1; + } + v[start..end].reverse(); + } else { + while start > 0 && compare(&v[start - 1], &v[start]) != Greater { + start -= 1; + } + } + } + + // Insert some more elements into the run if it's too short. Insertion sort is faster than + // merge sort on short sequences, so this significantly improves performance. + while start > 0 && end - start < min_run { + start -= 1; + insert_head(&mut v[start..end], &mut compare); + } + + // Push this run onto the stack. + runs.push(Run { + start: start, + len: end - start, + }); + end = start; + + // Merge some pairs of adjacent runs to satisfy the invariants. + while let Some(r) = collapse(&runs) { + let left = runs[r + 1]; + let right = runs[r]; + unsafe { + merge(&mut v[left.start .. right.start + right.len], left.len, buf.as_mut_ptr(), + &mut compare); + } + runs[r] = Run { + start: left.start, + len: left.len + right.len, + }; + runs.remove(r + 1); + } + } + + // Finally, exactly one run must remain in the stack. + debug_assert!(runs.len() == 1 && runs[0].start == 0 && runs[0].len == len); + + // Examines the stack of runs and identifies the next pair of runs to merge. More specifically, + // if `Some(r)` is returned, that means `runs[r]` and `runs[r + 1]` must be merged next. If the + // algorithm should continue building a new run instead, `None` is returned. + // + // TimSort is infamous for it's buggy implementations, as described here: + // http://envisage-project.eu/timsort-specification-and-verification/ + // + // The gist of the story is: we must enforce the invariants on the top four runs on the stack. + // Enforcing them on just top three is not sufficient to ensure that the invariants will still + // hold for *all* runs in the stack. + // + // This function correctly checks invariants for the top four runs. 
Additionally, if the top + // run starts at index 0, it will always demand a merge operation until the stack is fully + // collapsed, in order to complete the sort. + #[inline] + fn collapse(runs: &[Run]) -> Option { + let n = runs.len(); + if n >= 2 && (runs[n - 1].start == 0 || + runs[n - 2].len <= runs[n - 1].len || + (n >= 3 && runs[n - 3].len <= runs[n - 2].len + runs[n - 1].len) || + (n >= 4 && runs[n - 4].len <= runs[n - 3].len + runs[n - 2].len)) { + if n >= 3 && runs[n - 3].len < runs[n - 1].len { + Some(n - 3) + } else { + Some(n - 2) + } + } else { + None + } + } + + #[derive(Clone, Copy)] + struct Run { + start: usize, + len: usize, + } } diff --git a/src/libcollections/str.rs b/src/libcollections/str.rs index 48a74bdecb..d4be0914f1 100644 --- a/src/libcollections/str.rs +++ b/src/libcollections/str.rs @@ -24,12 +24,12 @@ use core::str::pattern::Pattern; use core::str::pattern::{Searcher, ReverseSearcher, DoubleEndedSearcher}; use core::mem; use core::iter::FusedIterator; -use rustc_unicode::str::{UnicodeStr, Utf16Encoder}; +use std_unicode::str::{UnicodeStr, Utf16Encoder}; use vec_deque::VecDeque; use borrow::{Borrow, ToOwned}; use string::String; -use rustc_unicode; +use std_unicode; use vec::Vec; use slice::SliceConcatExt; use boxed::Box; @@ -54,7 +54,7 @@ pub use core::str::{from_utf8, Chars, CharIndices, Bytes}; #[stable(feature = "rust1", since = "1.0.0")] pub use core::str::{from_utf8_unchecked, ParseBoolError}; #[stable(feature = "rust1", since = "1.0.0")] -pub use rustc_unicode::str::SplitWhitespace; +pub use std_unicode::str::SplitWhitespace; #[stable(feature = "rust1", since = "1.0.0")] pub use core::str::pattern; @@ -1705,7 +1705,7 @@ impl str { } fn case_ignoreable_then_cased>(iter: I) -> bool { - use rustc_unicode::derived_property::{Cased, Case_Ignorable}; + use std_unicode::derived_property::{Cased, Case_Ignorable}; match iter.skip_while(|&c| Case_Ignorable(c)).next() { Some(c) => Cased(c), None => false, diff --git a/src/libcollections/string.rs b/src/libcollections/string.rs index 348eb6fb5f..b4c41a99a6 100644 --- a/src/libcollections/string.rs +++ b/src/libcollections/string.rs @@ -63,8 +63,8 @@ use core::mem; use core::ops::{self, Add, AddAssign, Index, IndexMut}; use core::ptr; use core::str::pattern::Pattern; -use rustc_unicode::char::{decode_utf16, REPLACEMENT_CHARACTER}; -use rustc_unicode::str as unicode_str; +use std_unicode::char::{decode_utf16, REPLACEMENT_CHARACTER}; +use std_unicode::str as unicode_str; use borrow::{Cow, ToOwned}; use range::RangeArgument; @@ -1129,8 +1129,6 @@ impl String { #[inline] #[stable(feature = "rust1", since = "1.0.0")] pub fn insert(&mut self, idx: usize, ch: char) { - let len = self.len(); - assert!(idx <= len); assert!(self.is_char_boundary(idx)); let mut bits = [0; 4]; let bits = ch.encode_utf8(&mut bits).as_bytes(); @@ -1184,7 +1182,6 @@ impl String { reason = "recent addition", issue = "35553")] pub fn insert_str(&mut self, idx: usize, string: &str) { - assert!(idx <= self.len()); assert!(self.is_char_boundary(idx)); unsafe { @@ -1260,6 +1257,38 @@ impl String { self.len() == 0 } + /// Divide one string into two at an index. + /// + /// The argument, `mid`, should be a byte offset from the start of the string. It must also + /// be on the boundary of a UTF-8 code point. + /// + /// The two strings returned go from the start of the string to `mid`, and from `mid` to the end + /// of the string. 
+ /// + /// # Panics + /// + /// Panics if `mid` is not on a `UTF-8` code point boundary, or if it is beyond the last + /// code point of the string. + /// + /// # Examples + /// + /// ``` + /// # #![feature(string_split_off)] + /// # fn main() { + /// let mut hello = String::from("Hello, World!"); + /// let world = hello.split_off(7); + /// assert_eq!(hello, "Hello, "); + /// assert_eq!(world, "World!"); + /// # } + /// ``` + #[inline] + #[unstable(feature = "string_split_off", issue = "38080")] + pub fn split_off(&mut self, mid: usize) -> String { + assert!(self.is_char_boundary(mid)); + let other = self.vec.split_off(mid); + unsafe { String::from_utf8_unchecked(other) } + } + /// Truncates this `String`, removing all contents. /// /// While this means the `String` will have a length of zero, it does not diff --git a/src/libcollections/vec.rs b/src/libcollections/vec.rs index 71c49ee616..f2ef54f6e5 100644 --- a/src/libcollections/vec.rs +++ b/src/libcollections/vec.rs @@ -1244,7 +1244,7 @@ impl Vec { /// ``` #[stable(feature = "vec_extend_from_slice", since = "1.6.0")] pub fn extend_from_slice(&mut self, other: &[T]) { - self.extend(other.iter().cloned()) + self.spec_extend(other.iter()) } } @@ -1499,26 +1499,7 @@ impl ops::DerefMut for Vec { impl FromIterator for Vec { #[inline] fn from_iter>(iter: I) -> Vec { - // Unroll the first iteration, as the vector is going to be - // expanded on this iteration in every case when the iterable is not - // empty, but the loop in extend_desugared() is not going to see the - // vector being full in the few subsequent loop iterations. - // So we get better branch prediction. - let mut iterator = iter.into_iter(); - let mut vector = match iterator.next() { - None => return Vec::new(), - Some(element) => { - let (lower, _) = iterator.size_hint(); - let mut vector = Vec::with_capacity(lower.saturating_add(1)); - unsafe { - ptr::write(vector.get_unchecked_mut(0), element); - vector.set_len(1); - } - vector - } - }; - vector.extend_desugared(iterator); - vector + >::from_iter(iter.into_iter()) } } @@ -1586,36 +1567,64 @@ impl<'a, T> IntoIterator for &'a mut Vec { impl Extend for Vec { #[inline] fn extend>(&mut self, iter: I) { - self.extend_desugared(iter.into_iter()) + self.spec_extend(iter.into_iter()) } } -trait IsTrustedLen : Iterator { - fn trusted_len(&self) -> Option { None } +// Specialization trait used for Vec::from_iter and Vec::extend +trait SpecExtend { + fn from_iter(iter: I) -> Self; + fn spec_extend(&mut self, iter: I); } -impl IsTrustedLen for I where I: Iterator { } -impl IsTrustedLen for I where I: TrustedLen +impl SpecExtend for Vec + where I: Iterator, { - fn trusted_len(&self) -> Option { - let (low, high) = self.size_hint(); + default fn from_iter(mut iterator: I) -> Self { + // Unroll the first iteration, as the vector is going to be + // expanded on this iteration in every case when the iterable is not + // empty, but the loop in extend_desugared() is not going to see the + // vector being full in the few subsequent loop iterations. + // So we get better branch prediction. 
+ let mut vector = match iterator.next() { + None => return Vec::new(), + Some(element) => { + let (lower, _) = iterator.size_hint(); + let mut vector = Vec::with_capacity(lower.saturating_add(1)); + unsafe { + ptr::write(vector.get_unchecked_mut(0), element); + vector.set_len(1); + } + vector + } + }; + vector.spec_extend(iterator); + vector + } + + default fn spec_extend(&mut self, iter: I) { + self.extend_desugared(iter) + } +} + +impl SpecExtend for Vec + where I: TrustedLen, +{ + fn from_iter(iterator: I) -> Self { + let mut vector = Vec::new(); + vector.spec_extend(iterator); + vector + } + + fn spec_extend(&mut self, iterator: I) { + // This is the case for a TrustedLen iterator. + let (low, high) = iterator.size_hint(); if let Some(high_value) = high { debug_assert_eq!(low, high_value, "TrustedLen iterator's size hint is not exact: {:?}", (low, high)); } - high - } -} - -impl Vec { - fn extend_desugared>(&mut self, mut iterator: I) { - // This function should be the moral equivalent of: - // - // for item in iterator { - // self.push(item); - // } - if let Some(additional) = iterator.trusted_len() { + if let Some(additional) = high { self.reserve(additional); unsafe { let mut ptr = self.as_mut_ptr().offset(self.len() as isize); @@ -1628,17 +1637,57 @@ impl Vec { } } } else { - while let Some(element) = iterator.next() { - let len = self.len(); - if len == self.capacity() { - let (lower, _) = iterator.size_hint(); - self.reserve(lower.saturating_add(1)); - } - unsafe { - ptr::write(self.get_unchecked_mut(len), element); - // NB can't overflow since we would have had to alloc the address space - self.set_len(len + 1); - } + self.extend_desugared(iterator) + } + } +} + +impl<'a, T: 'a, I> SpecExtend<&'a T, I> for Vec + where I: Iterator, + T: Clone, +{ + default fn from_iter(iterator: I) -> Self { + SpecExtend::from_iter(iterator.cloned()) + } + + default fn spec_extend(&mut self, iterator: I) { + self.spec_extend(iterator.cloned()) + } +} + +impl<'a, T: 'a> SpecExtend<&'a T, slice::Iter<'a, T>> for Vec + where T: Copy, +{ + fn spec_extend(&mut self, iterator: slice::Iter<'a, T>) { + let slice = iterator.as_slice(); + self.reserve(slice.len()); + unsafe { + let len = self.len(); + self.set_len(len + slice.len()); + self.get_unchecked_mut(len..).copy_from_slice(slice); + } + } +} + +impl Vec { + fn extend_desugared>(&mut self, mut iterator: I) { + // This is the case for a general iterator. 
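`SpecExtend` relies on trait specialization: a `default` blanket impl for any `Iterator`, overridden for iterators with stronger guarantees (`TrustedLen`, and `slice::Iter` over `Copy` elements). A minimal sketch of that pattern, with illustrative names and requiring a nightly compiler for `#![feature(specialization)]`:

```rust
#![feature(specialization)]

trait SpecLen {
    fn spec_len(&self) -> Option<usize>;
}

// Default: nothing is known about the iterator's exact length.
impl<I: Iterator> SpecLen for I {
    default fn spec_len(&self) -> Option<usize> {
        None
    }
}

// Specialization: exact-size iterators can report their length up front,
// which is what lets a `spec_extend`-style method reserve capacity once.
impl<I: ExactSizeIterator> SpecLen for I {
    fn spec_len(&self) -> Option<usize> {
        Some(self.len())
    }
}

fn main() {
    assert_eq!((0..5).spec_len(), Some(5));
    assert_eq!((0..).take_while(|&x| x < 5).spec_len(), None);
}
```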
+ // + // This function should be the moral equivalent of: + // + // for item in iterator { + // self.push(item); + // } + while let Some(element) = iterator.next() { + let len = self.len(); + if len == self.capacity() { + let (lower, _) = iterator.size_hint(); + self.reserve(lower.saturating_add(1)); + } + unsafe { + ptr::write(self.get_unchecked_mut(len), element); + // NB can't overflow since we would have had to alloc the address space + self.set_len(len + 1); } } } @@ -1647,7 +1696,7 @@ impl Vec { #[stable(feature = "extend_ref", since = "1.2.0")] impl<'a, T: 'a + Copy> Extend<&'a T> for Vec { fn extend>(&mut self, iter: I) { - self.extend(iter.into_iter().map(|&x| x)) + self.spec_extend(iter.into_iter()) } } @@ -1853,14 +1902,13 @@ impl IntoIter { /// # Examples /// /// ``` - /// # #![feature(vec_into_iter_as_slice)] /// let vec = vec!['a', 'b', 'c']; /// let mut into_iter = vec.into_iter(); /// assert_eq!(into_iter.as_slice(), &['a', 'b', 'c']); /// let _ = into_iter.next().unwrap(); /// assert_eq!(into_iter.as_slice(), &['b', 'c']); /// ``` - #[unstable(feature = "vec_into_iter_as_slice", issue = "35601")] + #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")] pub fn as_slice(&self) -> &[T] { unsafe { slice::from_raw_parts(self.ptr, self.len()) @@ -1872,7 +1920,6 @@ impl IntoIter { /// # Examples /// /// ``` - /// # #![feature(vec_into_iter_as_slice)] /// let vec = vec!['a', 'b', 'c']; /// let mut into_iter = vec.into_iter(); /// assert_eq!(into_iter.as_slice(), &['a', 'b', 'c']); @@ -1881,7 +1928,7 @@ impl IntoIter { /// assert_eq!(into_iter.next().unwrap(), 'b'); /// assert_eq!(into_iter.next().unwrap(), 'z'); /// ``` - #[unstable(feature = "vec_into_iter_as_slice", issue = "35601")] + #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")] pub fn as_mut_slice(&self) -> &mut [T] { unsafe { slice::from_raw_parts_mut(self.ptr as *mut T, self.len()) @@ -1966,7 +2013,11 @@ impl DoubleEndedIterator for IntoIter { } #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for IntoIter {} +impl ExactSizeIterator for IntoIter { + fn is_empty(&self) -> bool { + self.ptr == self.end + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for IntoIter {} @@ -2060,7 +2111,11 @@ impl<'a, T> Drop for Drain<'a, T> { #[stable(feature = "drain", since = "1.6.0")] -impl<'a, T> ExactSizeIterator for Drain<'a, T> {} +impl<'a, T> ExactSizeIterator for Drain<'a, T> { + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for Drain<'a, T> {} diff --git a/src/libcollections/vec_deque.rs b/src/libcollections/vec_deque.rs index 5397193cab..dbe3fec205 100644 --- a/src/libcollections/vec_deque.rs +++ b/src/libcollections/vec_deque.rs @@ -810,7 +810,7 @@ impl VecDeque { /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn is_empty(&self) -> bool { - self.len() == 0 + self.tail == self.head } /// Create a draining iterator that removes the specified range in the @@ -1916,7 +1916,11 @@ impl<'a, T> DoubleEndedIterator for Iter<'a, T> { } #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T> ExactSizeIterator for Iter<'a, T> {} +impl<'a, T> ExactSizeIterator for Iter<'a, T> { + fn is_empty(&self) -> bool { + self.head == self.tail + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for Iter<'a, T> {} @@ -1980,7 +1984,11 @@ impl<'a, T> DoubleEndedIterator for IterMut<'a, T> { } #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T> 
ExactSizeIterator for IterMut<'a, T> {} +impl<'a, T> ExactSizeIterator for IterMut<'a, T> { + fn is_empty(&self) -> bool { + self.head == self.tail + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for IterMut<'a, T> {} @@ -2017,7 +2025,11 @@ impl DoubleEndedIterator for IntoIter { } #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for IntoIter {} +impl ExactSizeIterator for IntoIter { + fn is_empty(&self) -> bool { + self.inner.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for IntoIter {} diff --git a/src/libcollectionstest/lib.rs b/src/libcollectionstest/lib.rs index 14ec8d58be..d4fb5ea03a 100644 --- a/src/libcollectionstest/lib.rs +++ b/src/libcollectionstest/lib.rs @@ -18,20 +18,21 @@ #![feature(const_fn)] #![feature(dedup_by)] #![feature(enumset)] +#![feature(exact_size_is_empty)] #![feature(pattern)] #![feature(rand)] #![feature(repeat_str)] #![feature(step_by)] #![feature(str_escape)] #![feature(str_replacen)] +#![feature(string_split_off)] #![feature(test)] #![feature(unboxed_closures)] #![feature(unicode)] -#![feature(vec_into_iter_as_slice)] extern crate collections; extern crate test; -extern crate rustc_unicode; +extern crate std_unicode; use std::hash::{Hash, Hasher}; use std::collections::hash_map::DefaultHasher; diff --git a/src/libcollectionstest/slice.rs b/src/libcollectionstest/slice.rs index a6230ef471..1b52214dee 100644 --- a/src/libcollectionstest/slice.rs +++ b/src/libcollectionstest/slice.rs @@ -383,7 +383,7 @@ fn test_reverse() { #[test] fn test_sort() { - for len in 4..25 { + for len in (2..25).chain(500..510) { for _ in 0..100 { let mut v: Vec<_> = thread_rng().gen_iter::().take(len).collect(); let mut v1 = v.clone(); @@ -410,7 +410,7 @@ fn test_sort() { #[test] fn test_sort_stability() { - for len in 4..25 { + for len in (2..25).chain(500..510) { for _ in 0..10 { let mut counts = [0; 10]; @@ -441,6 +441,13 @@ fn test_sort_stability() { } } +#[test] +fn test_sort_zero_sized_type() { + // Should not panic. 
+ [(); 10].sort(); + [(); 100].sort(); +} + #[test] fn test_concat() { let v: [Vec; 0] = []; @@ -633,6 +640,16 @@ fn test_iter_clone() { assert_eq!(it.next(), jt.next()); } +#[test] +fn test_iter_is_empty() { + let xs = [1, 2, 5, 10, 11]; + for i in 0..xs.len() { + for j in i..xs.len() { + assert_eq!(xs[i..j].iter().is_empty(), xs[i..j].is_empty()); + } + } +} + #[test] fn test_mut_iterator() { let mut xs = [1, 2, 3, 4, 5]; @@ -1328,89 +1345,104 @@ mod bench { }) } - #[bench] - fn sort_random_small(b: &mut Bencher) { + fn gen_ascending(len: usize) -> Vec { + (0..len as u64).collect() + } + + fn gen_descending(len: usize) -> Vec { + (0..len as u64).rev().collect() + } + + fn gen_random(len: usize) -> Vec { let mut rng = thread_rng(); - b.iter(|| { - let mut v: Vec<_> = rng.gen_iter::().take(5).collect(); - v.sort(); - }); - b.bytes = 5 * mem::size_of::() as u64; + rng.gen_iter::().take(len).collect() } - #[bench] - fn sort_random_medium(b: &mut Bencher) { + fn gen_mostly_ascending(len: usize) -> Vec { let mut rng = thread_rng(); - b.iter(|| { - let mut v: Vec<_> = rng.gen_iter::().take(100).collect(); - v.sort(); - }); - b.bytes = 100 * mem::size_of::() as u64; + let mut v = gen_ascending(len); + for _ in (0usize..).take_while(|x| x * x <= len) { + let x = rng.gen::() % len; + let y = rng.gen::() % len; + v.swap(x, y); + } + v } - #[bench] - fn sort_random_large(b: &mut Bencher) { + fn gen_mostly_descending(len: usize) -> Vec { let mut rng = thread_rng(); - b.iter(|| { - let mut v: Vec<_> = rng.gen_iter::().take(10000).collect(); - v.sort(); - }); - b.bytes = 10000 * mem::size_of::() as u64; + let mut v = gen_descending(len); + for _ in (0usize..).take_while(|x| x * x <= len) { + let x = rng.gen::() % len; + let y = rng.gen::() % len; + v.swap(x, y); + } + v } - #[bench] - fn sort_sorted(b: &mut Bencher) { - let mut v: Vec<_> = (0..10000).collect(); - b.iter(|| { - v.sort(); - }); - b.bytes = (v.len() * mem::size_of_val(&v[0])) as u64; - } - - type BigSortable = (u64, u64, u64, u64); - - #[bench] - fn sort_big_random_small(b: &mut Bencher) { + fn gen_big_random(len: usize) -> Vec<[u64; 16]> { let mut rng = thread_rng(); - b.iter(|| { - let mut v = rng.gen_iter::() - .take(5) - .collect::>(); - v.sort(); - }); - b.bytes = 5 * mem::size_of::() as u64; + rng.gen_iter().map(|x| [x; 16]).take(len).collect() } - #[bench] - fn sort_big_random_medium(b: &mut Bencher) { - let mut rng = thread_rng(); - b.iter(|| { - let mut v = rng.gen_iter::() - .take(100) - .collect::>(); - v.sort(); - }); - b.bytes = 100 * mem::size_of::() as u64; + fn gen_big_ascending(len: usize) -> Vec<[u64; 16]> { + (0..len as u64).map(|x| [x; 16]).take(len).collect() } - #[bench] - fn sort_big_random_large(b: &mut Bencher) { - let mut rng = thread_rng(); - b.iter(|| { - let mut v = rng.gen_iter::() - .take(10000) - .collect::>(); - v.sort(); - }); - b.bytes = 10000 * mem::size_of::() as u64; + fn gen_big_descending(len: usize) -> Vec<[u64; 16]> { + (0..len as u64).rev().map(|x| [x; 16]).take(len).collect() } + macro_rules! 
sort_bench { + ($name:ident, $gen:expr, $len:expr) => { + #[bench] + fn $name(b: &mut Bencher) { + b.iter(|| $gen($len).sort()); + b.bytes = $len * mem::size_of_val(&$gen(1)[0]) as u64; + } + } + } + + sort_bench!(sort_small_random, gen_random, 10); + sort_bench!(sort_small_ascending, gen_ascending, 10); + sort_bench!(sort_small_descending, gen_descending, 10); + + sort_bench!(sort_small_big_random, gen_big_random, 10); + sort_bench!(sort_small_big_ascending, gen_big_ascending, 10); + sort_bench!(sort_small_big_descending, gen_big_descending, 10); + + sort_bench!(sort_medium_random, gen_random, 100); + sort_bench!(sort_medium_ascending, gen_ascending, 100); + sort_bench!(sort_medium_descending, gen_descending, 100); + + sort_bench!(sort_large_random, gen_random, 10000); + sort_bench!(sort_large_ascending, gen_ascending, 10000); + sort_bench!(sort_large_descending, gen_descending, 10000); + sort_bench!(sort_large_mostly_ascending, gen_mostly_ascending, 10000); + sort_bench!(sort_large_mostly_descending, gen_mostly_descending, 10000); + + sort_bench!(sort_large_big_random, gen_big_random, 10000); + sort_bench!(sort_large_big_ascending, gen_big_ascending, 10000); + sort_bench!(sort_large_big_descending, gen_big_descending, 10000); + #[bench] - fn sort_big_sorted(b: &mut Bencher) { - let mut v: Vec = (0..10000).map(|i| (i, i, i, i)).collect(); + fn sort_large_random_expensive(b: &mut Bencher) { + let len = 10000; b.iter(|| { - v.sort(); + let mut count = 0; + let cmp = move |a: &u64, b: &u64| { + count += 1; + if count % 1_000_000_000 == 0 { + panic!("should not happen"); + } + (*a as f64).cos().partial_cmp(&(*b as f64).cos()).unwrap() + }; + + let mut v = gen_random(len); + v.sort_by(cmp); + + black_box(count); }); - b.bytes = (v.len() * mem::size_of_val(&v[0])) as u64; + b.bytes = len as u64 * mem::size_of::() as u64; } } diff --git a/src/libcollectionstest/str.rs b/src/libcollectionstest/str.rs index cc56bbf489..384579ce6b 100644 --- a/src/libcollectionstest/str.rs +++ b/src/libcollectionstest/str.rs @@ -530,7 +530,7 @@ fn from_utf8_mostly_ascii() { #[test] fn test_is_utf16() { - use rustc_unicode::str::is_utf16; + use std_unicode::str::is_utf16; macro_rules! 
pos { ($($e:expr),*) => { { $(assert!(is_utf16($e));)* } } @@ -767,6 +767,7 @@ fn test_iterator() { pos += 1; } assert_eq!(pos, v.len()); + assert_eq!(s.chars().count(), v.len()); } #[test] @@ -814,6 +815,14 @@ fn test_iterator_clone() { assert!(it.clone().zip(it).all(|(x,y)| x == y)); } +#[test] +fn test_iterator_last() { + let s = "ศไทย中华Việt Nam"; + let mut it = s.chars(); + it.next(); + assert_eq!(it.last(), Some('m')); +} + #[test] fn test_bytesator() { let s = "ศไทย中华Việt Nam"; @@ -911,6 +920,14 @@ fn test_char_indices_revator() { assert_eq!(pos, p.len()); } +#[test] +fn test_char_indices_last() { + let s = "ศไทย中华Việt Nam"; + let mut it = s.char_indices(); + it.next(); + assert_eq!(it.last(), Some((27, 'm'))); +} + #[test] fn test_splitn_char_iterator() { let data = "\nMäry häd ä little lämb\nLittle lämb\n"; @@ -1169,7 +1186,7 @@ fn test_rev_split_char_iterator_no_trailing() { #[test] fn test_utf16_code_units() { - use rustc_unicode::str::Utf16Encoder; + use std_unicode::str::Utf16Encoder; assert_eq!(Utf16Encoder::new(vec!['é', '\u{1F4A9}'].into_iter()).collect::>(), [0xE9, 0xD83D, 0xDCA9]) } diff --git a/src/libcollectionstest/string.rs b/src/libcollectionstest/string.rs index 98de33bdaa..a7d85d0bea 100644 --- a/src/libcollectionstest/string.rs +++ b/src/libcollectionstest/string.rs @@ -132,7 +132,7 @@ fn test_from_utf16() { let s_as_utf16 = s.encode_utf16().collect::>(); let u_as_string = String::from_utf16(&u).unwrap(); - assert!(::rustc_unicode::str::is_utf16(&u)); + assert!(::std_unicode::str::is_utf16(&u)); assert_eq!(s_as_utf16, u); assert_eq!(u_as_string, s); @@ -231,6 +231,45 @@ fn test_pop() { assert_eq!(data, "ประเทศไทย中"); } +#[test] +fn test_split_off_empty() { + let orig = "Hello, world!"; + let mut split = String::from(orig); + let empty: String = split.split_off(orig.len()); + assert!(empty.is_empty()); +} + +#[test] +#[should_panic] +fn test_split_off_past_end() { + let orig = "Hello, world!"; + let mut split = String::from(orig); + split.split_off(orig.len() + 1); +} + +#[test] +#[should_panic] +fn test_split_off_mid_char() { + let mut orig = String::from("山"); + orig.split_off(1); +} + +#[test] +fn test_split_off_ascii() { + let mut ab = String::from("ABCD"); + let cd = ab.split_off(2); + assert_eq!(ab, "AB"); + assert_eq!(cd, "CD"); +} + +#[test] +fn test_split_off_unicode() { + let mut nihon = String::from("日本語"); + let go = nihon.split_off("日本".len()); + assert_eq!(nihon, "日本"); + assert_eq!(go, "語"); +} + #[test] fn test_str_truncate() { let mut s = String::from("12345"); diff --git a/src/libcollectionstest/vec_deque.rs b/src/libcollectionstest/vec_deque.rs index f1ea85a6c5..cdf022e4f0 100644 --- a/src/libcollectionstest/vec_deque.rs +++ b/src/libcollectionstest/vec_deque.rs @@ -1007,3 +1007,24 @@ fn assert_covariance() { d } } + +#[test] +fn test_is_empty() { + let mut v = VecDeque::::new(); + assert!(v.is_empty()); + assert!(v.iter().is_empty()); + assert!(v.iter_mut().is_empty()); + v.extend(&[2, 3, 4]); + assert!(!v.is_empty()); + assert!(!v.iter().is_empty()); + assert!(!v.iter_mut().is_empty()); + while let Some(_) = v.pop_front() { + assert_eq!(v.is_empty(), v.len() == 0); + assert_eq!(v.iter().is_empty(), v.iter().len() == 0); + assert_eq!(v.iter_mut().is_empty(), v.iter_mut().len() == 0); + } + assert!(v.is_empty()); + assert!(v.iter().is_empty()); + assert!(v.iter_mut().is_empty()); + assert!(v.into_iter().is_empty()); +} diff --git a/src/libcompiler_builtins/Cargo.toml b/src/libcompiler_builtins/Cargo.toml index a52873fc32..1a549ae823 100644 --- 
a/src/libcompiler_builtins/Cargo.toml +++ b/src/libcompiler_builtins/Cargo.toml @@ -7,6 +7,9 @@ version = "0.0.0" [lib] name = "compiler_builtins" path = "lib.rs" +test = false +bench = false +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/libcompiler_builtins/build.rs b/src/libcompiler_builtins/build.rs index acbd39bb16..f61e2281a5 100644 --- a/src/libcompiler_builtins/build.rs +++ b/src/libcompiler_builtins/build.rs @@ -94,6 +94,7 @@ fn main() { cfg.flag("-fvisibility=hidden"); cfg.flag("-fomit-frame-pointer"); cfg.flag("-ffreestanding"); + cfg.define("VISIBILITY_HIDDEN", None); } let mut sources = Sources::new(); diff --git a/src/libcore/Cargo.toml b/src/libcore/Cargo.toml index 3b406ac044..a72c712ad1 100644 --- a/src/libcore/Cargo.toml +++ b/src/libcore/Cargo.toml @@ -7,7 +7,12 @@ version = "0.0.0" name = "core" path = "lib.rs" test = false +bench = false [[test]] name = "coretest" path = "../libcoretest/lib.rs" + +[[bench]] +name = "coretest" +path = "../libcoretest/lib.rs" diff --git a/src/libcore/cell.rs b/src/libcore/cell.rs index 64a7a8c5ef..c3f862e7c5 100644 --- a/src/libcore/cell.rs +++ b/src/libcore/cell.rs @@ -393,6 +393,8 @@ pub struct RefCell { /// An enumeration of values returned from the `state` method on a `RefCell`. #[derive(Copy, Clone, PartialEq, Eq, Debug)] #[unstable(feature = "borrow_state", issue = "27733")] +#[rustc_deprecated(since = "1.15.0", reason = "use `try_borrow` instead")] +#[allow(deprecated)] pub enum BorrowState { /// The cell is currently being read, there is at least one active `borrow`. Reading, @@ -511,6 +513,8 @@ impl RefCell { /// } /// ``` #[unstable(feature = "borrow_state", issue = "27733")] + #[rustc_deprecated(since = "1.15.0", reason = "use `try_borrow` instead")] + #[allow(deprecated)] #[inline] pub fn borrow_state(&self) -> BorrowState { match self.borrow.get() { @@ -888,9 +892,7 @@ impl<'b, T: ?Sized> Ref<'b, T> { /// `Ref::clone(...)`. A `Clone` implementation or a method would interfere /// with the widespread use of `r.borrow().clone()` to clone the contents of /// a `RefCell`. - #[unstable(feature = "cell_extras", - reason = "likely to be moved to a method, pending language changes", - issue = "27746")] + #[stable(feature = "cell_extras", since = "1.15.0")] #[inline] pub fn clone(orig: &Ref<'b, T>) -> Ref<'b, T> { Ref { diff --git a/src/libcore/char.rs b/src/libcore/char.rs index 26d28049a4..c14ae6e089 100644 --- a/src/libcore/char.rs +++ b/src/libcore/char.rs @@ -10,7 +10,7 @@ //! Character manipulation. //! -//! For more details, see ::rustc_unicode::char (a.k.a. std::char) +//! For more details, see ::std_unicode::char (a.k.a. std::char) #![allow(non_snake_case)] #![stable(feature = "core_char", since = "1.2.0")] @@ -238,7 +238,7 @@ impl fmt::Display for CharTryFromError { /// A 'radix' here is sometimes also called a 'base'. A radix of two /// indicates a binary number, a radix of ten, decimal, and a radix of /// sixteen, hexadecimal, to give some common values. Arbitrary -/// radicum are supported. +/// radices are supported. /// /// `from_digit()` will return `None` if the input is not a digit in /// the given radix. 
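As a usage note (not part of the patch itself): the `libcore/cell.rs` hunk above deprecates the unstable `BorrowState` enum and `RefCell::borrow_state` in favour of the already-stable `try_borrow`/`try_borrow_mut`. A minimal sketch of what the migration looks like in caller code, using only the stable `RefCell` API:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    // Hold a mutable borrow so the cell is temporarily in a "Writing" state.
    let guard = cell.borrow_mut();

    // Instead of inspecting `cell.borrow_state()` (deprecated above),
    // attempt the borrow and branch on the returned `Result`.
    match cell.try_borrow() {
        Ok(value) => println!("readable: {}", *value),
        Err(_) => println!("already mutably borrowed"),
    }

    drop(guard);
    assert!(cell.try_borrow().is_ok());
}
```

The same `try_borrow` pattern is what the `Debug for RefCell` change later in this patch switches to.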
@@ -327,9 +327,9 @@ pub trait CharExt { fn len_utf8(self) -> usize; #[stable(feature = "core", since = "1.6.0")] fn len_utf16(self) -> usize; - #[unstable(feature = "unicode", issue = "27784")] + #[stable(feature = "unicode_encode_char", since = "1.15.0")] fn encode_utf8(self, dst: &mut [u8]) -> &mut str; - #[unstable(feature = "unicode", issue = "27784")] + #[stable(feature = "unicode_encode_char", since = "1.15.0")] fn encode_utf16(self, dst: &mut [u16]) -> &mut [u16]; } diff --git a/src/libcore/char_private.rs b/src/libcore/char_private.rs index 708e7cc15e..ddc473592a 100644 --- a/src/libcore/char_private.rs +++ b/src/libcore/char_private.rs @@ -11,6 +11,8 @@ // NOTE: The following code was generated by "src/etc/char_private.py", // do not edit directly! +use slice::SliceExt; + fn check(x: u16, singletons: &[u16], normal: &[u16]) -> bool { for &s in singletons { if x == s { @@ -42,7 +44,16 @@ pub fn is_printable(x: char) -> bool { } else if x < 0x20000 { check(lower, SINGLETONS1, NORMAL1) } else { - if 0x20000 <= x && x < 0x2f800 { + if 0x2a6d7 <= x && x < 0x2a700 { + return false; + } + if 0x2b735 <= x && x < 0x2b740 { + return false; + } + if 0x2b81e <= x && x < 0x2b820 { + return false; + } + if 0x2cea2 <= x && x < 0x2f800 { return false; } if 0x2fa1e <= x && x < 0xe0100 { @@ -62,10 +73,13 @@ const SINGLETONS0: &'static [u16] = &[ 0x38b, 0x38d, 0x3a2, + 0x530, 0x557, 0x558, 0x560, 0x588, + 0x58b, + 0x58c, 0x590, 0x61c, 0x61d, @@ -79,10 +93,8 @@ const SINGLETONS0: &'static [u16] = &[ 0x83f, 0x85c, 0x85d, - 0x8a1, - 0x8ff, - 0x978, - 0x980, + 0x8b5, + 0x8e2, 0x984, 0x98d, 0x98e, @@ -154,14 +166,11 @@ const SINGLETONS0: &'static [u16] = &[ 0xc0d, 0xc11, 0xc29, - 0xc34, 0xc45, 0xc49, 0xc57, 0xc64, 0xc65, - 0xc80, - 0xc81, 0xc84, 0xc8d, 0xc91, @@ -193,6 +202,8 @@ const SINGLETONS0: &'static [u16] = &[ 0xdbf, 0xdd5, 0xdd7, + 0xdf0, + 0xdf1, 0xe83, 0xe85, 0xe86, @@ -245,6 +256,10 @@ const SINGLETONS0: &'static [u16] = &[ 0x1317, 0x135b, 0x135c, + 0x13f6, + 0x13f7, + 0x13fe, + 0x13ff, 0x1680, 0x170d, 0x176d, @@ -253,6 +268,7 @@ const SINGLETONS0: &'static [u16] = &[ 0x17df, 0x180e, 0x180f, + 0x191f, 0x196e, 0x196f, 0x1a1c, @@ -260,6 +276,9 @@ const SINGLETONS0: &'static [u16] = &[ 0x1a5f, 0x1a7d, 0x1a7e, + 0x1aae, + 0x1aaf, + 0x1cf7, 0x1f16, 0x1f17, 0x1f1e, @@ -285,7 +304,12 @@ const SINGLETONS0: &'static [u16] = &[ 0x2072, 0x2073, 0x208f, - 0x2700, + 0x23ff, + 0x2b74, + 0x2b75, + 0x2b96, + 0x2b97, + 0x2bc9, 0x2c2f, 0x2c5f, 0x2d26, @@ -306,8 +330,11 @@ const SINGLETONS0: &'static [u16] = &[ 0x318f, 0x321f, 0x32ff, - 0xa78f, + 0xa7af, + 0xa8fe, + 0xa8ff, 0xa9ce, + 0xa9ff, 0xaa4e, 0xaa4f, 0xaa5a, @@ -317,6 +344,7 @@ const SINGLETONS0: &'static [u16] = &[ 0xab0f, 0xab10, 0xab27, + 0xab2f, 0xabee, 0xabef, 0xfa6e, @@ -350,7 +378,7 @@ const SINGLETONS1: &'static [u16] = &[ 0x3e, 0x4e, 0x4f, - 0x31f, + 0x18f, 0x39e, 0x49e, 0x49f, @@ -361,6 +389,9 @@ const SINGLETONS1: &'static [u16] = &[ 0x83d, 0x83e, 0x856, + 0x8f3, + 0x9d0, + 0x9d1, 0xa04, 0xa14, 0xa18, @@ -368,6 +399,49 @@ const SINGLETONS1: &'static [u16] = &[ 0xb57, 0x10bd, 0x1135, + 0x11ce, + 0x11cf, + 0x11e0, + 0x1212, + 0x1287, + 0x1289, + 0x128e, + 0x129e, + 0x1304, + 0x130d, + 0x130e, + 0x1311, + 0x1312, + 0x1329, + 0x1331, + 0x1334, + 0x133a, + 0x133b, + 0x1345, + 0x1346, + 0x1349, + 0x134a, + 0x134e, + 0x134f, + 0x1364, + 0x1365, + 0x145a, + 0x145c, + 0x15b6, + 0x15b7, + 0x1c09, + 0x1c37, + 0x1c90, + 0x1c91, + 0x1ca8, + 0x246f, + 0x6a5f, + 0x6aee, + 0x6aef, + 0x6b5a, + 0x6b62, + 0xbc9a, + 0xbc9b, 0xd127, 0xd128, 0xd455, @@ -395,6 
+469,14 @@ const SINGLETONS1: &'static [u16] = &[ 0xd6a7, 0xd7cc, 0xd7cd, + 0xdaa0, + 0xe007, + 0xe019, + 0xe01a, + 0xe022, + 0xe025, + 0xe8c5, + 0xe8c6, 0xee04, 0xee20, 0xee23, @@ -429,31 +511,25 @@ const SINGLETONS1: &'static [u16] = &[ 0xeeaa, 0xf0af, 0xf0b0, - 0xf0bf, 0xf0c0, 0xf0d0, 0xf12f, - 0xf336, - 0xf3c5, - 0xf43f, - 0xf441, - 0xf4f8, - 0xf53e, - 0xf53f, + 0xf91f, + 0xf931, + 0xf932, + 0xf93f, ]; const NORMAL0: &'static [u16] = &[ 0x0, 0x20, 0x7f, 0x22, - 0x37f, 0x5, - 0x528, 0x9, - 0x58b, 0x4, + 0x380, 0x4, 0x5c8, 0x8, 0x5eb, 0x5, 0x5f5, 0x11, 0x7b2, 0xe, 0x7fb, 0x5, 0x85f, 0x41, - 0x8ad, 0x37, + 0x8be, 0x16, 0x9b3, 0x3, 0x9cf, 0x8, 0x9d8, 0x4, @@ -465,7 +541,8 @@ const NORMAL0: &'static [u16] = &[ 0xa5f, 0x7, 0xa76, 0xb, 0xad1, 0xf, - 0xaf2, 0xf, + 0xaf2, 0x7, + 0xafa, 0x7, 0xb4e, 0x8, 0xb58, 0x4, 0xb78, 0xa, @@ -478,21 +555,19 @@ const NORMAL0: &'static [u16] = &[ 0xbc3, 0x3, 0xbd1, 0x6, 0xbd8, 0xe, - 0xbfb, 0x6, + 0xbfb, 0x5, 0xc3a, 0x3, 0xc4e, 0x7, - 0xc5a, 0x6, + 0xc5b, 0x5, 0xc70, 0x8, 0xcce, 0x7, 0xcd7, 0x7, - 0xcf3, 0xf, - 0xd4f, 0x8, - 0xd58, 0x8, - 0xd76, 0x3, + 0xcf3, 0xe, + 0xd50, 0x4, 0xd97, 0x3, 0xdc7, 0x3, 0xdcb, 0x4, - 0xde0, 0x12, + 0xde0, 0x6, 0xdf5, 0xc, 0xe3b, 0x4, 0xe5c, 0x25, @@ -503,9 +578,8 @@ const NORMAL0: &'static [u16] = &[ 0x10c8, 0x5, 0x137d, 0x3, 0x139a, 0x6, - 0x13f5, 0xb, 0x169d, 0x3, - 0x16f1, 0xf, + 0x16f9, 0x7, 0x1715, 0xb, 0x1737, 0x9, 0x1754, 0xc, @@ -516,7 +590,6 @@ const NORMAL0: &'static [u16] = &[ 0x1878, 0x8, 0x18ab, 0x5, 0x18f6, 0xa, - 0x191d, 0x3, 0x192c, 0x4, 0x193c, 0x4, 0x1941, 0x3, @@ -526,34 +599,34 @@ const NORMAL0: &'static [u16] = &[ 0x19db, 0x3, 0x1a8a, 0x6, 0x1a9a, 0x6, - 0x1aae, 0x52, + 0x1abf, 0x41, 0x1b4c, 0x4, 0x1b7d, 0x3, 0x1bf4, 0x8, 0x1c38, 0x3, 0x1c4a, 0x3, - 0x1c80, 0x40, + 0x1c89, 0x37, 0x1cc8, 0x8, - 0x1cf7, 0x9, - 0x1de7, 0x15, + 0x1cfa, 0x6, + 0x1df6, 0x5, 0x1fff, 0x11, 0x2028, 0x8, 0x205f, 0x11, 0x209d, 0x3, - 0x20ba, 0x16, + 0x20bf, 0x11, 0x20f1, 0xf, - 0x218a, 0x6, - 0x23f4, 0xc, + 0x218c, 0x4, 0x2427, 0x19, 0x244b, 0x15, - 0x2b4d, 0x3, - 0x2b5a, 0xa6, + 0x2bba, 0x3, + 0x2bd2, 0x1a, + 0x2bf0, 0x10, 0x2cf4, 0x5, 0x2d28, 0x5, 0x2d68, 0x7, 0x2d71, 0xe, 0x2d97, 0x9, - 0x2e3c, 0x44, + 0x2e45, 0x3b, 0x2ef4, 0xc, 0x2fd6, 0x1a, 0x2ffc, 0x5, @@ -561,32 +634,28 @@ const NORMAL0: &'static [u16] = &[ 0x312e, 0x3, 0x31bb, 0x5, 0x31e4, 0xc, - 0x3400, 0x19c0, - 0x4e00, 0x5200, + 0x4db6, 0xa, + 0x9fd6, 0x2a, 0xa48d, 0x3, 0xa4c7, 0x9, 0xa62c, 0x14, - 0xa698, 0x7, 0xa6f8, 0x8, - 0xa794, 0xc, - 0xa7ab, 0x4d, + 0xa7b8, 0x3f, 0xa82c, 0x4, 0xa83a, 0x6, 0xa878, 0x8, - 0xa8c5, 0x9, + 0xa8c6, 0x8, 0xa8da, 0x6, - 0xa8fc, 0x4, 0xa954, 0xb, 0xa97d, 0x3, 0xa9da, 0x4, - 0xa9e0, 0x20, 0xaa37, 0x9, - 0xaa7c, 0x4, 0xaac3, 0x18, 0xaaf7, 0xa, 0xab17, 0x9, - 0xab2f, 0x91, - 0xabfa, 0x2bb6, + 0xab66, 0xa, + 0xabfa, 0x6, + 0xd7a4, 0xc, 0xd7c7, 0x4, 0xd7fc, 0x2104, 0xfada, 0x26, @@ -596,7 +665,6 @@ const NORMAL0: &'static [u16] = &[ 0xfd40, 0x10, 0xfdc8, 0x28, 0xfe1a, 0x6, - 0xfe27, 0x9, 0xfe6c, 0x4, 0xfefd, 0x4, 0xffbf, 0x3, @@ -608,61 +676,123 @@ const NORMAL1: &'static [u16] = &[ 0xfb, 0x5, 0x103, 0x4, 0x134, 0x3, - 0x18b, 0x5, - 0x19c, 0x34, + 0x19c, 0x4, + 0x1a1, 0x2f, 0x1fe, 0x82, 0x29d, 0x3, - 0x2d1, 0x2f, + 0x2d1, 0xf, + 0x2fc, 0x4, 0x324, 0xc, - 0x34b, 0x35, + 0x34b, 0x5, + 0x37b, 0x5, 0x3c4, 0x4, 0x3d6, 0x2a, - 0x4aa, 0x356, + 0x4aa, 0x6, + 0x4d4, 0x4, + 0x4fc, 0x4, + 0x528, 0x8, + 0x564, 0xb, + 0x570, 0x90, + 0x737, 0x9, + 0x756, 0xa, + 0x768, 0x98, 0x839, 0x3, - 0x860, 0xa0, + 0x89f, 0x8, + 0x8b0, 0x30, + 0x8f6, 0x5, 0x91c, 0x3, 0x93a, 
0x5, 0x940, 0x40, - 0x9b8, 0x6, - 0x9c0, 0x40, + 0x9b8, 0x4, 0xa07, 0x5, 0xa34, 0x4, 0xa3b, 0x4, 0xa48, 0x8, 0xa59, 0x7, - 0xa80, 0x80, + 0xaa0, 0x20, + 0xae7, 0x4, + 0xaf7, 0x9, 0xb36, 0x3, 0xb73, 0x5, - 0xb80, 0x80, - 0xc49, 0x217, + 0xb92, 0x7, + 0xb9d, 0xc, + 0xbb0, 0x50, + 0xc49, 0x37, + 0xcb3, 0xd, + 0xcf3, 0x7, + 0xd00, 0x160, 0xe7f, 0x181, 0x104e, 0x4, - 0x1070, 0x10, + 0x1070, 0xf, 0x10c2, 0xe, 0x10e9, 0x7, 0x10fa, 0x6, - 0x1144, 0x3c, - 0x11c9, 0x7, - 0x11da, 0x4a6, + 0x1144, 0xc, + 0x1177, 0x9, + 0x11f5, 0xb, + 0x123f, 0x41, + 0x12aa, 0x6, + 0x12eb, 0x5, + 0x12fa, 0x6, + 0x1351, 0x6, + 0x1358, 0x5, + 0x136d, 0x3, + 0x1375, 0x8b, + 0x145e, 0x22, + 0x14c8, 0x8, + 0x14da, 0xa6, + 0x15de, 0x22, + 0x1645, 0xb, + 0x165a, 0x6, + 0x166d, 0x13, 0x16b8, 0x8, - 0x16ca, 0x936, - 0x236f, 0x91, - 0x2463, 0xd, - 0x2474, 0xb8c, - 0x342f, 0x33d1, - 0x6a39, 0x4c7, + 0x16ca, 0x36, + 0x171a, 0x3, + 0x172c, 0x4, + 0x1740, 0x160, + 0x18f3, 0xc, + 0x1900, 0x1c0, + 0x1af9, 0x107, + 0x1c46, 0xa, + 0x1c6d, 0x3, + 0x1cb7, 0x349, + 0x239a, 0x66, + 0x2475, 0xb, + 0x2544, 0xabc, + 0x342f, 0xfd1, + 0x4647, 0x21b9, + 0x6a39, 0x7, + 0x6a6a, 0x4, + 0x6a70, 0x60, + 0x6af6, 0xa, + 0x6b46, 0xa, + 0x6b78, 0x5, + 0x6b90, 0x370, 0x6f45, 0xb, 0x6f7f, 0x10, - 0x6fa0, 0x4060, - 0xb002, 0x1ffe, + 0x6fa0, 0x40, + 0x6fe1, 0x1f, + 0x87ed, 0x13, + 0x8af3, 0x250d, + 0xb002, 0xbfe, + 0xbc6b, 0x5, + 0xbc7d, 0x3, + 0xbc89, 0x7, + 0xbca0, 0x1360, 0xd0f6, 0xa, 0xd173, 0x8, - 0xd1de, 0x22, + 0xd1e9, 0x17, 0xd246, 0xba, 0xd357, 0x9, 0xd372, 0x8e, 0xd547, 0x3, - 0xd800, 0x1600, + 0xda8c, 0xf, + 0xdab0, 0x550, + 0xe02b, 0x7d5, + 0xe8d7, 0x29, + 0xe94b, 0x5, + 0xe95a, 0x4, + 0xe960, 0x4a0, 0xee3c, 0x6, 0xee43, 0x4, 0xee9c, 0x5, @@ -670,24 +800,27 @@ const NORMAL1: &'static [u16] = &[ 0xeef2, 0x10e, 0xf02c, 0x4, 0xf094, 0xc, - 0xf0e0, 0x20, - 0xf10b, 0x5, + 0xf0f6, 0xa, + 0xf10d, 0x3, 0xf16c, 0x4, - 0xf19b, 0x4b, + 0xf1ad, 0x39, 0xf203, 0xd, - 0xf23b, 0x5, + 0xf23c, 0x4, 0xf249, 0x7, 0xf252, 0xae, - 0xf321, 0xf, - 0xf37d, 0x3, - 0xf394, 0xc, - 0xf3cb, 0x15, - 0xf3f1, 0xf, - 0xf4fd, 0x3, - 0xf544, 0xc, - 0xf568, 0x93, - 0xf641, 0x4, - 0xf650, 0x30, - 0xf6c6, 0x3a, - 0xf774, 0x88c, + 0xf6d3, 0xd, + 0xf6ed, 0x3, + 0xf6f7, 0x9, + 0xf774, 0xc, + 0xf7d5, 0x2b, + 0xf80c, 0x4, + 0xf848, 0x8, + 0xf85a, 0x6, + 0xf888, 0x8, + 0xf8ae, 0x62, + 0xf928, 0x8, + 0xf94c, 0x4, + 0xf95f, 0x21, + 0xf992, 0x2e, + 0xf9c1, 0x63f, ]; diff --git a/src/libcore/fmt/mod.rs b/src/libcore/fmt/mod.rs index 2d75a8ec42..2ba7d6e8bd 100644 --- a/src/libcore/fmt/mod.rs +++ b/src/libcore/fmt/mod.rs @@ -12,7 +12,7 @@ #![stable(feature = "rust1", since = "1.0.0")] -use cell::{UnsafeCell, Cell, RefCell, Ref, RefMut, BorrowState}; +use cell::{UnsafeCell, Cell, RefCell, Ref, RefMut}; use marker::PhantomData; use mem; use num::flt2dec; @@ -166,7 +166,9 @@ pub struct Formatter<'a> { // NB. Argument is essentially an optimized partially applied formatting function, // equivalent to `exists T.(&T, fn(&T, &mut Formatter) -> Result`. -enum Void {} +struct Void { + _priv: (), +} /// This struct represents the generic "argument" which is taken by the Xprintf /// family of functions. It contains a function to format the given value. 
At @@ -1632,13 +1634,13 @@ impl Debug for Cell { #[stable(feature = "rust1", since = "1.0.0")] impl Debug for RefCell { fn fmt(&self, f: &mut Formatter) -> Result { - match self.borrow_state() { - BorrowState::Unused | BorrowState::Reading => { + match self.try_borrow() { + Ok(borrow) => { f.debug_struct("RefCell") - .field("value", &self.borrow()) + .field("value", &borrow) .finish() } - BorrowState::Writing => { + Err(_) => { f.debug_struct("RefCell") .field("value", &"") .finish() diff --git a/src/libcore/hash/mod.rs b/src/libcore/hash/mod.rs index ac36cbaace..18b465d85a 100644 --- a/src/libcore/hash/mod.rs +++ b/src/libcore/hash/mod.rs @@ -255,10 +255,44 @@ pub trait BuildHasher { fn build_hasher(&self) -> Self::Hasher; } -/// A structure which implements `BuildHasher` for all `Hasher` types which also -/// implement `Default`. +/// The `BuildHasherDefault` structure is used in scenarios where one has a +/// type that implements [`Hasher`] and [`Default`], but needs that type to +/// implement [`BuildHasher`]. /// -/// This struct is 0-sized and does not need construction. +/// This structure is zero-sized and does not need construction. +/// +/// # Examples +/// +/// Using `BuildHasherDefault` to specify a custom [`BuildHasher`] for +/// [`HashMap`]: +/// +/// ``` +/// use std::collections::HashMap; +/// use std::hash::{BuildHasherDefault, Hasher}; +/// +/// #[derive(Default)] +/// struct MyHasher; +/// +/// impl Hasher for MyHasher { +/// fn write(&mut self, bytes: &[u8]) { +/// // Your hashing algorithm goes here! +/// unimplemented!() +/// } +/// +/// fn finish(&self) -> u64 { +/// // Your hashing algorithm goes here! +/// unimplemented!() +/// } +/// } +/// +/// type MyBuildHasher = BuildHasherDefault; +/// +/// let hash_map = HashMap::::default(); +/// ``` +/// +/// [`BuildHasher`]: trait.BuildHasher.html +/// [`Default`]: ../default/trait.Default.html +/// [`Hasher`]: trait.Hasher.html #[stable(since = "1.7.0", feature = "build_hasher")] pub struct BuildHasherDefault(marker::PhantomData); diff --git a/src/libcore/intrinsics.rs b/src/libcore/intrinsics.rs index e844a15848..3726eee9a9 100644 --- a/src/libcore/intrinsics.rs +++ b/src/libcore/intrinsics.rs @@ -51,76 +51,472 @@ extern "rust-intrinsic" { // NB: These intrinsics take raw pointers because they mutate aliased // memory, which is not valid for either `&` or `&mut`. + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_acq(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. 
+ /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_rel(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_acqrel(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_relaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_failacq(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). 
pub fn atomic_cxchg_acq_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange). pub fn atomic_cxchg_acqrel_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_acq(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_rel(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_acqrel(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as both the `success` and `failure` parameters. 
For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_relaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_failacq(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_acq_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Stores a value if the current value is the same as the `old` value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `compare_exchange_weak` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `success` and + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `failure` parameters. For example, + /// [`AtomicBool::compare_exchange_weak`] + /// (../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak). pub fn atomic_cxchgweak_acqrel_failrelaxed(dst: *mut T, old: T, src: T) -> (T, bool); + /// Loads the current value of the pointer. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `load` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::load`](../../std/sync/atomic/struct.AtomicBool.html#method.load). pub fn atomic_load(src: *const T) -> T; + /// Loads the current value of the pointer. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `load` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. 
For example, + /// [`AtomicBool::load`](../../std/sync/atomic/struct.AtomicBool.html#method.load). pub fn atomic_load_acq(src: *const T) -> T; + /// Loads the current value of the pointer. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `load` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::load`](../../std/sync/atomic/struct.AtomicBool.html#method.load). pub fn atomic_load_relaxed(src: *const T) -> T; pub fn atomic_load_unordered(src: *const T) -> T; + /// Stores the value at the specified memory location. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `store` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::store`](../../std/sync/atomic/struct.AtomicBool.html#method.store). pub fn atomic_store(dst: *mut T, val: T); + /// Stores the value at the specified memory location. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `store` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::store`](../../std/sync/atomic/struct.AtomicBool.html#method.store). pub fn atomic_store_rel(dst: *mut T, val: T); + /// Stores the value at the specified memory location. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `store` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::store`](../../std/sync/atomic/struct.AtomicBool.html#method.store). pub fn atomic_store_relaxed(dst: *mut T, val: T); pub fn atomic_store_unordered(dst: *mut T, val: T); + /// Stores the value at the specified memory location, returning the old value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `swap` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap). pub fn atomic_xchg(dst: *mut T, src: T) -> T; + /// Stores the value at the specified memory location, returning the old value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `swap` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap). pub fn atomic_xchg_acq(dst: *mut T, src: T) -> T; + /// Stores the value at the specified memory location, returning the old value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `swap` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap). pub fn atomic_xchg_rel(dst: *mut T, src: T) -> T; + /// Stores the value at the specified memory location, returning the old value. 
+ /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `swap` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap). pub fn atomic_xchg_acqrel(dst: *mut T, src: T) -> T; + /// Stores the value at the specified memory location, returning the old value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `swap` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap). pub fn atomic_xchg_relaxed(dst: *mut T, src: T) -> T; + /// Add to the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_add` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add). pub fn atomic_xadd(dst: *mut T, src: T) -> T; + /// Add to the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_add` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add). pub fn atomic_xadd_acq(dst: *mut T, src: T) -> T; + /// Add to the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_add` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add). pub fn atomic_xadd_rel(dst: *mut T, src: T) -> T; + /// Add to the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_add` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add). pub fn atomic_xadd_acqrel(dst: *mut T, src: T) -> T; + /// Add to the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_add` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add). pub fn atomic_xadd_relaxed(dst: *mut T, src: T) -> T; + /// Subtract from the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_sub` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub). 
pub fn atomic_xsub(dst: *mut T, src: T) -> T; + /// Subtract from the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_sub` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub). pub fn atomic_xsub_acq(dst: *mut T, src: T) -> T; + /// Subtract from the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_sub` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub). pub fn atomic_xsub_rel(dst: *mut T, src: T) -> T; + /// Subtract from the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_sub` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub). pub fn atomic_xsub_acqrel(dst: *mut T, src: T) -> T; + /// Subtract from the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_sub` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub). pub fn atomic_xsub_relaxed(dst: *mut T, src: T) -> T; + /// Bitwise and with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_and` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and). pub fn atomic_and(dst: *mut T, src: T) -> T; + /// Bitwise and with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_and` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and). pub fn atomic_and_acq(dst: *mut T, src: T) -> T; + /// Bitwise and with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_and` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and). pub fn atomic_and_rel(dst: *mut T, src: T) -> T; + /// Bitwise and with the current value, returning the previous value. 
+ /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_and` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and). pub fn atomic_and_acqrel(dst: *mut T, src: T) -> T; + /// Bitwise and with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_and` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and). pub fn atomic_and_relaxed(dst: *mut T, src: T) -> T; + /// Bitwise nand with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand). pub fn atomic_nand(dst: *mut T, src: T) -> T; + /// Bitwise nand with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand). pub fn atomic_nand_acq(dst: *mut T, src: T) -> T; + /// Bitwise nand with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand). pub fn atomic_nand_rel(dst: *mut T, src: T) -> T; + /// Bitwise nand with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand). pub fn atomic_nand_acqrel(dst: *mut T, src: T) -> T; + /// Bitwise nand with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand). pub fn atomic_nand_relaxed(dst: *mut T, src: T) -> T; + /// Bitwise or with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_or` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. 
For example, + /// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or). pub fn atomic_or(dst: *mut T, src: T) -> T; + /// Bitwise or with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_or` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or). pub fn atomic_or_acq(dst: *mut T, src: T) -> T; + /// Bitwise or with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_or` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or). pub fn atomic_or_rel(dst: *mut T, src: T) -> T; + /// Bitwise or with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_or` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or). pub fn atomic_or_acqrel(dst: *mut T, src: T) -> T; + /// Bitwise or with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_or` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or). pub fn atomic_or_relaxed(dst: *mut T, src: T) -> T; + /// Bitwise xor with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_xor` method by passing + /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor). pub fn atomic_xor(dst: *mut T, src: T) -> T; + /// Bitwise xor with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_xor` method by passing + /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor). pub fn atomic_xor_acq(dst: *mut T, src: T) -> T; + /// Bitwise xor with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_xor` method by passing + /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor). pub fn atomic_xor_rel(dst: *mut T, src: T) -> T; + /// Bitwise xor with the current value, returning the previous value. 
+ /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_xor` method by passing + /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor). pub fn atomic_xor_acqrel(dst: *mut T, src: T) -> T; + /// Bitwise xor with the current value, returning the previous value. + /// The stabilized version of this intrinsic is available on the + /// `std::sync::atomic` types via the `fetch_xor` method by passing + /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html) + /// as the `order`. For example, + /// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor). pub fn atomic_xor_relaxed(dst: *mut T, src: T) -> T; pub fn atomic_max(dst: *mut T, src: T) -> T; @@ -631,8 +1027,12 @@ extern "rust-intrinsic" { pub fn volatile_set_memory(dst: *mut T, val: u8, count: usize); /// Perform a volatile load from the `src` pointer. + /// The stabilized version of this intrinsic is + /// [`std::ptr::read_volatile`](../../std/ptr/fn.read_volatile.html). pub fn volatile_load(src: *const T) -> T; /// Perform a volatile store to the `dst` pointer. + /// The stabilized version of this intrinsic is + /// [`std::ptr::write_volatile`](../../std/ptr/fn.write_volatile.html). pub fn volatile_store(dst: *mut T, val: T); /// Returns the square root of an `f32` @@ -766,12 +1166,21 @@ extern "rust-intrinsic" { pub fn bswap(x: T) -> T; /// Performs checked integer addition. + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `overflowing_add` method. For example, + /// [`std::u32::overflowing_add`](../../std/primitive.u32.html#method.overflowing_add) pub fn add_with_overflow(x: T, y: T) -> (T, bool); /// Performs checked integer subtraction + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `overflowing_sub` method. For example, + /// [`std::u32::overflowing_sub`](../../std/primitive.u32.html#method.overflowing_sub) pub fn sub_with_overflow(x: T, y: T) -> (T, bool); /// Performs checked integer multiplication + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `overflowing_mul` method. For example, + /// [`std::u32::overflowing_mul`](../../std/primitive.u32.html#method.overflowing_mul) pub fn mul_with_overflow(x: T, y: T) -> (T, bool); /// Performs an unchecked division, resulting in undefined behavior @@ -782,10 +1191,19 @@ extern "rust-intrinsic" { pub fn unchecked_rem(x: T, y: T) -> T; /// Returns (a + b) mod 2^N, where N is the width of T in bits. + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `wrapping_add` method. For example, + /// [`std::u32::wrapping_add`](../../std/primitive.u32.html#method.wrapping_add) pub fn overflowing_add(a: T, b: T) -> T; /// Returns (a - b) mod 2^N, where N is the width of T in bits. + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `wrapping_sub` method. For example, + /// [`std::u32::wrapping_sub`](../../std/primitive.u32.html#method.wrapping_sub) pub fn overflowing_sub(a: T, b: T) -> T; /// Returns (a * b) mod 2^N, where N is the width of T in bits. + /// The stabilized versions of this intrinsic are available on the integer + /// primitives via the `wrapping_mul` method. 
For example, + /// [`std::u32::wrapping_mul`](../../std/primitive.u32.html#method.wrapping_mul) pub fn overflowing_mul(a: T, b: T) -> T; /// Returns the value of the discriminant for the variant in 'v', diff --git a/src/libcore/iter/iterator.rs b/src/libcore/iter/iterator.rs index 5a12f5db19..ec590d2bd0 100644 --- a/src/libcore/iter/iterator.rs +++ b/src/libcore/iter/iterator.rs @@ -35,11 +35,14 @@ pub trait Iterator { /// Advances the iterator and returns the next value. /// - /// Returns `None` when iteration is finished. Individual iterator + /// Returns [`None`] when iteration is finished. Individual iterator /// implementations may choose to resume iteration, and so calling `next()` - /// again may or may not eventually start returning `Some(Item)` again at some + /// again may or may not eventually start returning [`Some(Item)`] again at some /// point. /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`Some(Item)`]: ../../std/option/enum.Option.html#variant.Some + /// /// # Examples /// /// Basic usage: @@ -69,9 +72,9 @@ pub trait Iterator { /// Specifically, `size_hint()` returns a tuple where the first element /// is the lower bound, and the second element is the upper bound. /// - /// The second half of the tuple that is returned is an `Option`. A - /// `None` here means that either there is no known upper bound, or the - /// upper bound is larger than `usize`. + /// The second half of the tuple that is returned is an [`Option`]`<`[`usize`]`>`. + /// A [`None`] here means that either there is no known upper bound, or the + /// upper bound is larger than [`usize`]. /// /// # Implementation notes /// @@ -91,6 +94,10 @@ pub trait Iterator { /// The default implementation returns `(0, None)` which is correct for any /// iterator. /// + /// [`usize`]: ../../std/primitive.usize.html + /// [`Option`]: ../../std/option/enum.Option.html + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -134,23 +141,26 @@ pub trait Iterator { /// Consumes the iterator, counting the number of iterations and returning it. /// /// This method will evaluate the iterator until its [`next()`] returns - /// `None`. Once `None` is encountered, `count()` returns the number of + /// [`None`]. Once [`None`] is encountered, `count()` returns the number of /// times it called [`next()`]. /// /// [`next()`]: #tymethod.next + /// [`None`]: ../../std/option/enum.Option.html#variant.None /// /// # Overflow Behavior /// /// The method does no guarding against overflows, so counting elements of - /// an iterator with more than `usize::MAX` elements either produces the + /// an iterator with more than [`usize::MAX`] elements either produces the /// wrong result or panics. If debug assertions are enabled, a panic is /// guaranteed. /// /// # Panics /// - /// This function might panic if the iterator has more than `usize::MAX` + /// This function might panic if the iterator has more than [`usize::MAX`] /// elements. /// + /// [`usize::MAX`]: ../../std/isize/constant.MAX.html + /// /// # Examples /// /// Basic usage: @@ -172,10 +182,12 @@ pub trait Iterator { /// Consumes the iterator, returning the last element. /// - /// This method will evaluate the iterator until it returns `None`. While - /// doing so, it keeps track of the current element. After `None` is + /// This method will evaluate the iterator until it returns [`None`]. While + /// doing so, it keeps track of the current element. 
After [`None`] is /// returned, `last()` will then return the last element it saw. /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -202,9 +214,11 @@ pub trait Iterator { /// Like most indexing operations, the count starts from zero, so `nth(0)` /// returns the first value, `nth(1)` the second, and so on. /// - /// `nth()` will return `None` if `n` is greater than or equal to the length of the + /// `nth()` will return [`None`] if `n` is greater than or equal to the length of the /// iterator. /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -233,7 +247,7 @@ pub trait Iterator { /// ``` #[inline] #[stable(feature = "rust1", since = "1.0.0")] - fn nth(&mut self, mut n: usize) -> Option where Self: Sized { + fn nth(&mut self, mut n: usize) -> Option { for x in self { if n == 0 { return Some(x) } n -= 1; @@ -306,8 +320,8 @@ pub trait Iterator { /// /// In other words, it zips two iterators together, into a single one. /// - /// When either iterator returns `None`, all further calls to `next()` - /// will return `None`. + /// When either iterator returns [`None`], all further calls to [`next()`] + /// will return [`None`]. /// /// # Examples /// @@ -346,7 +360,7 @@ pub trait Iterator { /// ``` /// /// `zip()` is often used to zip an infinite iterator to a finite one. - /// This works because the finite iterator will eventually return `None`, + /// This works because the finite iterator will eventually return [`None`], /// ending the zipper. Zipping with `(0..)` can look a lot like [`enumerate()`]: /// /// ``` @@ -365,6 +379,8 @@ pub trait Iterator { /// ``` /// /// [`enumerate()`]: trait.Iterator.html#method.enumerate + /// [`next()`]: ../../std/iter/trait.Iterator.html#tymethod.next + /// [`None`]: ../../std/option/enum.Option.html#variant.None #[inline] #[stable(feature = "rust1", since = "1.0.0")] fn zip(self, other: U) -> Zip where @@ -501,11 +517,9 @@ pub trait Iterator { /// /// The closure must return an [`Option`]. `filter_map()` creates an /// iterator which calls this closure on each element. If the closure - /// returns `Some(element)`, then that element is returned. If the - /// closure returns `None`, it will try again, and call the closure on the - /// next element, seeing if it will return `Some`. - /// - /// [`Option`]: ../../std/option/enum.Option.html + /// returns [`Some(element)`][`Some`], then that element is returned. If the + /// closure returns [`None`], it will try again, and call the closure on the + /// next element, seeing if it will return [`Some`]. /// /// Why `filter_map()` and not just [`filter()`].[`map()`]? The key is in this /// part: @@ -513,11 +527,11 @@ pub trait Iterator { /// [`filter()`]: #method.filter /// [`map()`]: #method.map /// - /// > If the closure returns `Some(element)`, then that element is returned. + /// > If the closure returns [`Some(element)`][`Some`], then that element is returned. /// /// In other words, it removes the [`Option`] layer automatically. If your /// mapping is already returning an [`Option`] and you want to skip over - /// `None`s, then `filter_map()` is much, much nicer to use. + /// [`None`]s, then `filter_map()` is much, much nicer to use. /// /// # Examples /// @@ -547,7 +561,11 @@ pub trait Iterator { /// assert_eq!(iter.next(), None); /// ``` /// - /// There's an extra layer of `Some` in there. + /// There's an extra layer of [`Some`] in there. 
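
The `filter_map()` documentation above explains that the adapter strips the `Option` layer that a plain `filter().map()` (or `map()` alone) would leave behind. A condensed, hedged sketch of that difference; the `words` data is illustrative only:

```rust
fn main() {
    let words = ["1", "two", "3"];

    // `filter_map` unwraps the `Some` layer produced by the closure...
    let parsed: Vec<i32> = words.iter().filter_map(|s| s.parse().ok()).collect();
    assert_eq!(parsed, [1, 3]);

    // ...whereas `map` alone leaves an extra layer of `Option` to deal with.
    let wrapped: Vec<Option<i32>> = words.iter().map(|s| s.parse().ok()).collect();
    assert_eq!(wrapped, [Some(1), None, Some(3)]);
}
```
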
+ /// + /// [`Option`]: ../../std/option/enum.Option.html + /// [`Some`]: ../../std/option/enum.Option.html#variant.Some + /// [`None`]: ../../std/option/enum.Option.html#variant.None #[inline] #[stable(feature = "rust1", since = "1.0.0")] fn filter_map(self, f: F) -> FilterMap where @@ -567,21 +585,20 @@ pub trait Iterator { /// different sized integer, the [`zip()`] function provides similar /// functionality. /// - /// [`usize`]: ../../std/primitive.usize.html - /// [`zip()`]: #method.zip - /// /// # Overflow Behavior /// /// The method does no guarding against overflows, so enumerating more than /// [`usize::MAX`] elements either produces the wrong result or panics. If /// debug assertions are enabled, a panic is guaranteed. /// - /// [`usize::MAX`]: ../../std/usize/constant.MAX.html - /// /// # Panics /// /// The returned iterator might panic if the to-be-returned index would - /// overflow a `usize`. + /// overflow a [`usize`]. + /// + /// [`usize::MAX`]: ../../std/usize/constant.MAX.html + /// [`usize`]: ../../std/primitive.usize.html + /// [`zip()`]: #method.zip /// /// # Examples /// @@ -607,12 +624,13 @@ pub trait Iterator { /// Adds a [`peek()`] method to an iterator. See its documentation for /// more information. /// - /// Note that the underlying iterator is still advanced when `peek` is + /// Note that the underlying iterator is still advanced when [`peek()`] is /// called for the first time: In order to retrieve the next element, - /// `next` is called on the underlying iterator, hence any side effects of - /// the `next` method will occur. + /// [`next()`] is called on the underlying iterator, hence any side effects of + /// the [`next()`] method will occur. /// /// [`peek()`]: struct.Peekable.html#method.peek + /// [`next()`]: ../../std/iter/trait.Iterator.html#tymethod.next /// /// # Examples /// @@ -894,12 +912,12 @@ pub trait Iterator { /// an extra layer of indirection. `flat_map()` will remove this extra layer /// on its own. /// - /// [`map()`]: #method.map - /// /// Another way of thinking about `flat_map()`: [`map()`]'s closure returns /// one item for each element, and `flat_map()`'s closure returns an /// iterator for each element. /// + /// [`map()`]: #method.map + /// /// # Examples /// /// Basic usage: @@ -921,11 +939,14 @@ pub trait Iterator { FlatMap{iter: self, f: f, frontiter: None, backiter: None } } - /// Creates an iterator which ends after the first `None`. + /// Creates an iterator which ends after the first [`None`]. /// - /// After an iterator returns `None`, future calls may or may not yield - /// `Some(T)` again. `fuse()` adapts an iterator, ensuring that after a - /// `None` is given, it will always return `None` forever. + /// After an iterator returns [`None`], future calls may or may not yield + /// [`Some(T)`] again. `fuse()` adapts an iterator, ensuring that after a + /// [`None`] is given, it will always return [`None`] forever. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`Some(T)`]: ../../std/option/enum.Option.html#variant.Some /// /// # Examples /// @@ -1082,19 +1103,15 @@ pub trait Iterator { /// library, used in a variety of contexts. /// /// The most basic pattern in which `collect()` is used is to turn one - /// collection into another. You take a collection, call `iter()` on it, + /// collection into another. You take a collection, call [`iter()`] on it, /// do a bunch of transformations, and then `collect()` at the end. 
/// /// One of the keys to `collect()`'s power is that many things you might /// not think of as 'collections' actually are. For example, a [`String`] /// is a collection of [`char`]s. And a collection of [`Result`] can - /// be thought of as single `Result, E>`. See the examples + /// be thought of as single [`Result`]`, E>`. See the examples /// below for more. /// - /// [`String`]: ../../std/string/struct.String.html - /// [`Result`]: ../../std/result/enum.Result.html - /// [`char`]: ../../std/primitive.char.html - /// /// Because `collect()` is so general, it can cause problems with type /// inference. As such, `collect()` is one of the few times you'll see /// the syntax affectionately known as the 'turbofish': `::<>`. This @@ -1172,7 +1189,7 @@ pub trait Iterator { /// assert_eq!("hello", hello); /// ``` /// - /// If you have a list of [`Result`]s, you can use `collect()` to + /// If you have a list of [`Result`][`Result`]s, you can use `collect()` to /// see if any of them failed: /// /// ``` @@ -1190,6 +1207,11 @@ pub trait Iterator { /// // gives us the list of answers /// assert_eq!(Ok(vec![1, 3]), result); /// ``` + /// + /// [`iter()`]: ../../std/iter/trait.Iterator.html#tymethod.next + /// [`String`]: ../../std/string/struct.String.html + /// [`char`]: ../../std/primitive.char.html + /// [`Result`]: ../../std/result/enum.Result.html #[inline] #[stable(feature = "rust1", since = "1.0.0")] fn collect>(self) -> B where Self: Sized { @@ -1281,6 +1303,8 @@ pub trait Iterator { /// use a `for` loop with a list of things to build up a result. Those /// can be turned into `fold()`s: /// + /// [`for`]: ../../book/loops.html#for + /// /// ``` /// let numbers = [1, 2, 3, 4, 5]; /// @@ -1414,8 +1438,8 @@ pub trait Iterator { /// /// `find()` takes a closure that returns `true` or `false`. It applies /// this closure to each element of the iterator, and if any of them return - /// `true`, then `find()` returns `Some(element)`. If they all return - /// `false`, it returns `None`. + /// `true`, then `find()` returns [`Some(element)`]. If they all return + /// `false`, it returns [`None`]. /// /// `find()` is short-circuiting; in other words, it will stop processing /// as soon as the closure returns `true`. @@ -1425,6 +1449,9 @@ pub trait Iterator { /// argument is a double reference. You can see this effect in the /// examples below, with `&&x`. /// + /// [`Some(element)`]: ../../std/option/enum.Option.html#variant.Some + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -1465,8 +1492,8 @@ pub trait Iterator { /// /// `position()` takes a closure that returns `true` or `false`. It applies /// this closure to each element of the iterator, and if one of them - /// returns `true`, then `position()` returns `Some(index)`. If all of - /// them return `false`, it returns `None`. + /// returns `true`, then `position()` returns [`Some(index)`]. If all of + /// them return `false`, it returns [`None`]. /// /// `position()` is short-circuiting; in other words, it will stop /// processing as soon as it finds a `true`. @@ -1474,7 +1501,7 @@ pub trait Iterator { /// # Overflow Behavior /// /// The method does no guarding against overflows, so if there are more - /// than `usize::MAX` non-matching elements, it either produces the wrong + /// than [`usize::MAX`] non-matching elements, it either produces the wrong /// result or panics. If debug assertions are enabled, a panic is /// guaranteed. 
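
As a small illustration of the `Some(index)` / `None` behaviour that the `position()` docs above describe (the array is a made-up example, not taken from the diff):

```rust
fn main() {
    let a = [1, 2, 3, 4];

    // The first element matching the predicate yields Some(index)...
    assert_eq!(a.iter().position(|&x| x % 2 == 0), Some(1));

    // ...and None is returned when nothing matches.
    assert_eq!(a.iter().position(|&x| x > 10), None);
}
```
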
/// @@ -1483,6 +1510,10 @@ pub trait Iterator { /// This function might panic if the iterator has more than `usize::MAX` /// non-matching elements. /// + /// [`Some(index)`]: ../../std/option/enum.Option.html#variant.Some + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`usize::MAX`]: ../../std/usize/constant.MAX.html + /// /// # Examples /// /// Basic usage: @@ -1528,11 +1559,14 @@ pub trait Iterator { /// `rposition()` takes a closure that returns `true` or `false`. It applies /// this closure to each element of the iterator, starting from the end, /// and if one of them returns `true`, then `rposition()` returns - /// `Some(index)`. If all of them return `false`, it returns `None`. + /// [`Some(index)`]. If all of them return `false`, it returns [`None`]. /// /// `rposition()` is short-circuiting; in other words, it will stop /// processing as soon as it finds a `true`. /// + /// [`Some(index)`]: ../../std/option/enum.Option.html#variant.Some + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -1662,12 +1696,11 @@ pub trait Iterator { /// # Examples /// /// ``` - /// #![feature(iter_max_by)] /// let a = [-3_i32, 0, 1, 5, -10]; /// assert_eq!(*a.iter().max_by(|x, y| x.cmp(y)).unwrap(), 5); /// ``` #[inline] - #[unstable(feature = "iter_max_by", issue="36105")] + #[stable(feature = "iter_max_by", since = "1.15.0")] fn max_by(self, mut compare: F) -> Option where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, { @@ -1712,12 +1745,11 @@ pub trait Iterator { /// # Examples /// /// ``` - /// #![feature(iter_min_by)] /// let a = [-3_i32, 0, 1, 5, -10]; /// assert_eq!(*a.iter().min_by(|x, y| x.cmp(y)).unwrap(), -10); /// ``` #[inline] - #[unstable(feature = "iter_min_by", issue="36105")] + #[stable(feature = "iter_min_by", since = "1.15.0")] fn min_by(self, mut compare: F) -> Option where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, { @@ -1798,11 +1830,13 @@ pub trait Iterator { (ts, us) } - /// Creates an iterator which `clone()`s all of its elements. + /// Creates an iterator which [`clone()`]s all of its elements. /// /// This is useful when you have an iterator over `&T`, but you need an /// iterator over `T`. /// + /// [`clone()`]: ../../std/clone/trait.Clone.html#tymethod.clone + /// /// # Examples /// /// Basic usage: @@ -1827,10 +1861,12 @@ pub trait Iterator { /// Repeats an iterator endlessly. /// - /// Instead of stopping at `None`, the iterator will instead start again, + /// Instead of stopping at [`None`], the iterator will instead start again, /// from the beginning. After iterating again, it will start at the /// beginning again. And again. And again. Forever. /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// /// # Examples /// /// Basic usage: @@ -1862,7 +1898,7 @@ pub trait Iterator { /// /// # Panics /// - /// When calling `sum` and a primitive integer type is being returned, this + /// When calling `sum()` and a primitive integer type is being returned, this /// method will panic if the computation overflows and debug assertions are /// enabled. /// @@ -1890,7 +1926,7 @@ pub trait Iterator { /// /// # Panics /// - /// When calling `product` and a primitive integer type is being returned, + /// When calling `product()` and a primitive integer type is being returned, /// method will panic if the computation overflows and debug assertions are /// enabled. 
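
The `sum()` / `product()` notes above document a panic on integer overflow when debug assertions are enabled; the non-overflowing path looks roughly like this (values chosen arbitrarily):

```rust
fn main() {
    let a = [1, 2, 3, 4];

    // `sum()` and `product()` fold the iterator into a single value; with a
    // primitive integer result they panic on overflow under debug assertions.
    let total: u32 = a.iter().sum();
    let product: u32 = a.iter().product();
    assert_eq!(total, 10);
    assert_eq!(product, 24);
}
```
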
/// @@ -2141,4 +2177,7 @@ impl<'a, I: Iterator + ?Sized> Iterator for &'a mut I { type Item = I::Item; fn next(&mut self) -> Option { (**self).next() } fn size_hint(&self) -> (usize, Option) { (**self).size_hint() } + fn nth(&mut self, n: usize) -> Option { + (**self).nth(n) + } } diff --git a/src/libcore/iter/mod.rs b/src/libcore/iter/mod.rs index cd2e0cb11d..3999db0d63 100644 --- a/src/libcore/iter/mod.rs +++ b/src/libcore/iter/mod.rs @@ -225,12 +225,12 @@ //! often called 'iterator adapters', as they're a form of the 'adapter //! pattern'. //! -//! Common iterator adapters include [`map()`], [`take()`], and [`collect()`]. +//! Common iterator adapters include [`map()`], [`take()`], and [`filter()`]. //! For more, see their documentation. //! //! [`map()`]: trait.Iterator.html#method.map //! [`take()`]: trait.Iterator.html#method.take -//! [`collect()`]: trait.Iterator.html#method.collect +//! [`filter()`]: trait.Iterator.html#method.filter //! //! # Laziness //! @@ -268,7 +268,7 @@ //! [`map()`]: trait.Iterator.html#method.map //! //! The two most common ways to evaluate an iterator are to use a `for` loop -//! like this, or using the [`collect()`] adapter to produce a new collection. +//! like this, or using the [`collect()`] method to produce a new collection. //! //! [`collect()`]: trait.Iterator.html#method.collect //! @@ -368,7 +368,16 @@ impl DoubleEndedIterator for Rev where I: DoubleEndedIterator { #[stable(feature = "rust1", since = "1.0.0")] impl ExactSizeIterator for Rev - where I: ExactSizeIterator + DoubleEndedIterator {} + where I: ExactSizeIterator + DoubleEndedIterator +{ + fn len(&self) -> usize { + self.iter.len() + } + + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for Rev @@ -425,7 +434,15 @@ impl<'a, I, T: 'a> DoubleEndedIterator for Cloned #[stable(feature = "iter_cloned", since = "1.1.0")] impl<'a, I, T: 'a> ExactSizeIterator for Cloned where I: ExactSizeIterator, T: Clone -{} +{ + fn len(&self) -> usize { + self.it.len() + } + + fn is_empty(&self) -> bool { + self.it.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, I, T: 'a> FusedIterator for Cloned @@ -920,7 +937,7 @@ unsafe impl TrustedLen for Zip /// you can also [`map()`] backwards: /// /// ```rust -/// let v: Vec = vec![1, 2, 3].into_iter().rev().map(|x| x + 1).collect(); +/// let v: Vec = vec![1, 2, 3].into_iter().map(|x| x + 1).rev().collect(); /// /// assert_eq!(v, [4, 3, 2]); /// ``` @@ -1007,7 +1024,16 @@ impl DoubleEndedIterator for Map where #[stable(feature = "rust1", since = "1.0.0")] impl ExactSizeIterator for Map - where F: FnMut(I::Item) -> B {} + where F: FnMut(I::Item) -> B +{ + fn len(&self) -> usize { + self.iter.len() + } + + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for Map @@ -1236,7 +1262,15 @@ impl DoubleEndedIterator for Enumerate where } #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for Enumerate where I: ExactSizeIterator {} +impl ExactSizeIterator for Enumerate where I: ExactSizeIterator { + fn len(&self) -> usize { + self.iter.len() + } + + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[doc(hidden)] unsafe impl TrustedRandomAccess for Enumerate @@ -1273,54 +1307,68 @@ unsafe impl TrustedLen for Enumerate #[stable(feature = "rust1", since = "1.0.0")] pub struct Peekable { iter: I, - peeked: Option, + /// Remember a peeked value, even if it was None. 
+ peeked: Option>, } +// Peekable must remember if a None has been seen in the `.peek()` method. +// It ensures that `.peek(); .peek();` or `.peek(); .next();` only advances the +// underlying iterator at most once. This does not by itself make the iterator +// fused. #[stable(feature = "rust1", since = "1.0.0")] impl Iterator for Peekable { type Item = I::Item; #[inline] fn next(&mut self) -> Option { - match self.peeked { - Some(_) => self.peeked.take(), + match self.peeked.take() { + Some(v) => v, None => self.iter.next(), } } #[inline] #[rustc_inherit_overflow_checks] - fn count(self) -> usize { - (if self.peeked.is_some() { 1 } else { 0 }) + self.iter.count() + fn count(mut self) -> usize { + match self.peeked.take() { + Some(None) => 0, + Some(Some(_)) => 1 + self.iter.count(), + None => self.iter.count(), + } } #[inline] fn nth(&mut self, n: usize) -> Option { - match self.peeked { - Some(_) if n == 0 => self.peeked.take(), - Some(_) => { - self.peeked = None; - self.iter.nth(n-1) - }, - None => self.iter.nth(n) + match self.peeked.take() { + // the .take() below is just to avoid "move into pattern guard" + Some(ref mut v) if n == 0 => v.take(), + Some(None) => None, + Some(Some(_)) => self.iter.nth(n - 1), + None => self.iter.nth(n), } } #[inline] - fn last(self) -> Option { - self.iter.last().or(self.peeked) + fn last(mut self) -> Option { + let peek_opt = match self.peeked.take() { + Some(None) => return None, + Some(v) => v, + None => None, + }; + self.iter.last().or(peek_opt) } #[inline] fn size_hint(&self) -> (usize, Option) { + let peek_len = match self.peeked { + Some(None) => return (0, Some(0)), + Some(Some(_)) => 1, + None => 0, + }; let (lo, hi) = self.iter.size_hint(); - if self.peeked.is_some() { - let lo = lo.saturating_add(1); - let hi = hi.and_then(|x| x.checked_add(1)); - (lo, hi) - } else { - (lo, hi) - } + let lo = lo.saturating_add(peek_len); + let hi = hi.and_then(|x| x.checked_add(peek_len)); + (lo, hi) } } @@ -1372,9 +1420,13 @@ impl Peekable { #[stable(feature = "rust1", since = "1.0.0")] pub fn peek(&mut self) -> Option<&I::Item> { if self.peeked.is_none() { - self.peeked = self.iter.next(); + self.peeked = Some(self.iter.next()); + } + match self.peeked { + Some(Some(ref value)) => Some(value), + Some(None) => None, + _ => unreachable!(), } - self.peeked.as_ref() } } @@ -1927,7 +1979,15 @@ impl DoubleEndedIterator for Fuse #[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for Fuse where I: ExactSizeIterator {} +impl ExactSizeIterator for Fuse where I: ExactSizeIterator { + fn len(&self) -> usize { + self.iter.len() + } + + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} /// An iterator that calls a function with a reference to each element before /// yielding it. 
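
The `Peekable` rewrite above changes `peeked` to an `Option<Option<Item>>` so that a peeked `None` is remembered and the underlying iterator is advanced at most once. A minimal sketch of the user-visible guarantee:

```rust
fn main() {
    let xs = [1, 2, 3];
    let mut iter = xs.iter().peekable();

    // Peeking twice only advances the underlying iterator once; the peeked
    // value (even a `None`) is remembered until `next()` consumes it.
    assert_eq!(iter.peek(), Some(&&1));
    assert_eq!(iter.peek(), Some(&&1));
    assert_eq!(iter.next(), Some(&1));
    assert_eq!(iter.next(), Some(&2));
}
```
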
@@ -1994,7 +2054,16 @@ impl DoubleEndedIterator for Inspect #[stable(feature = "rust1", since = "1.0.0")] impl ExactSizeIterator for Inspect - where F: FnMut(&I::Item) {} + where F: FnMut(&I::Item) +{ + fn len(&self) -> usize { + self.iter.len() + } + + fn is_empty(&self) -> bool { + self.iter.is_empty() + } +} #[unstable(feature = "fused", issue = "35602")] impl FusedIterator for Inspect diff --git a/src/libcore/iter/traits.rs b/src/libcore/iter/traits.rs index bc4be073c5..c5465549ad 100644 --- a/src/libcore/iter/traits.rs +++ b/src/libcore/iter/traits.rs @@ -552,14 +552,25 @@ pub trait ExactSizeIterator: Iterator { } #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, I: ExactSizeIterator + ?Sized> ExactSizeIterator for &'a mut I {} +impl<'a, I: ExactSizeIterator + ?Sized> ExactSizeIterator for &'a mut I { + fn len(&self) -> usize { + (**self).len() + } + fn is_empty(&self) -> bool { + (**self).is_empty() + } +} /// Trait to represent types that can be created by summing up an iterator. /// -/// This trait is used to implement the `sum` method on iterators. Types which -/// implement the trait can be generated by the `sum` method. Like -/// `FromIterator` this trait should rarely be called directly and instead -/// interacted with through `Iterator::sum`. +/// This trait is used to implement the [`sum()`] method on iterators. Types which +/// implement the trait can be generated by the [`sum()`] method. Like +/// [`FromIterator`] this trait should rarely be called directly and instead +/// interacted with through [`Iterator::sum()`]. +/// +/// [`sum()`]: ../../std/iter/trait.Sum.html#tymethod.sum +/// [`FromIterator`]: ../../std/iter/trait.FromIterator.html +/// [`Iterator::sum()`]: ../../std/iter/trait.Iterator.html#method.sum #[stable(feature = "iter_arith_traits", since = "1.12.0")] pub trait Sum: Sized { /// Method which takes an iterator and generates `Self` from the elements by @@ -571,10 +582,14 @@ pub trait Sum: Sized { /// Trait to represent types that can be created by multiplying elements of an /// iterator. /// -/// This trait is used to implement the `product` method on iterators. Types -/// which implement the trait can be generated by the `product` method. Like -/// `FromIterator` this trait should rarely be called directly and instead -/// interacted with through `Iterator::product`. +/// This trait is used to implement the [`product()`] method on iterators. Types +/// which implement the trait can be generated by the [`product()`] method. Like +/// [`FromIterator`] this trait should rarely be called directly and instead +/// interacted with through [`Iterator::product()`]. +/// +/// [`product()`]: ../../std/iter/trait.Product.html#tymethod.product +/// [`FromIterator`]: ../../std/iter/trait.FromIterator.html +/// [`Iterator::product()`]: ../../std/iter/trait.Iterator.html#method.product #[stable(feature = "iter_arith_traits", since = "1.12.0")] pub trait Product: Sized { /// Method which takes an iterator and generates `Self` from the elements by @@ -658,13 +673,17 @@ float_sum_product! { f32 f64 } /// An iterator that always continues to yield `None` when exhausted. /// /// Calling next on a fused iterator that has returned `None` once is guaranteed -/// to return `None` again. This trait is should be implemented by all iterators +/// to return [`None`] again. This trait is should be implemented by all iterators /// that behave this way because it allows for some significant optimizations. 
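
The `FusedIterator` documentation above (and its advice to call `fuse()` rather than require the bound) can be illustrated with a deliberately ill-behaved iterator; `Flaky` is a hypothetical type invented for this sketch:

```rust
// A deliberately ill-behaved iterator: it yields a value again after `None`.
struct Flaky(u32);

impl Iterator for Flaky {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0 += 1;
        match self.0 {
            1 => Some(1),
            2 => None,
            _ => Some(3),
        }
    }
}

fn main() {
    let mut raw = Flaky(0);
    assert_eq!(raw.next(), Some(1));
    assert_eq!(raw.next(), None);
    assert_eq!(raw.next(), Some(3)); // resumes after None

    // `fuse()` guarantees that once None has been returned it stays None.
    let mut fused = Flaky(0).fuse();
    assert_eq!(fused.next(), Some(1));
    assert_eq!(fused.next(), None);
    assert_eq!(fused.next(), None);
}
```
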
/// /// Note: In general, you should not use `FusedIterator` in generic bounds if -/// you need a fused iterator. Instead, you should just call `Iterator::fused()` -/// on the iterator. If the iterator is already fused, the additional `Fuse` +/// you need a fused iterator. Instead, you should just call [`Iterator::fuse()`] +/// on the iterator. If the iterator is already fused, the additional [`Fuse`] /// wrapper will be a no-op with no performance penalty. +/// +/// [`None`]: ../../std/option/enum.Option.html#variant.None +/// [`Iterator::fuse()`]: ../../std/iter/trait.Iterator.html#method.fuse +/// [`Fuse`]: ../../std/iter/struct.Fuse.html #[unstable(feature = "fused", issue = "35602")] pub trait FusedIterator: Iterator {} @@ -674,16 +693,20 @@ impl<'a, I: FusedIterator + ?Sized> FusedIterator for &'a mut I {} /// An iterator that reports an accurate length using size_hint. /// /// The iterator reports a size hint where it is either exact -/// (lower bound is equal to upper bound), or the upper bound is `None`. -/// The upper bound must only be `None` if the actual iterator length is -/// larger than `usize::MAX`. +/// (lower bound is equal to upper bound), or the upper bound is [`None`]. +/// The upper bound must only be [`None`] if the actual iterator length is +/// larger than [`usize::MAX`]. /// /// The iterator must produce exactly the number of elements it reported. /// /// # Safety /// /// This trait must only be implemented when the contract is upheld. -/// Consumers of this trait must inspect `.size_hint()`’s upper bound. +/// Consumers of this trait must inspect [`.size_hint()`]’s upper bound. +/// +/// [`None`]: ../../std/option/enum.Option.html#variant.None +/// [`usize::MAX`]: ../../std/usize/constant.MAX.html +/// [`.size_hint()`]: ../../std/iter/trait.Iterator.html#method.size_hint #[unstable(feature = "trusted_len", issue = "37572")] pub unsafe trait TrustedLen : Iterator {} diff --git a/src/libcore/lib.rs b/src/libcore/lib.rs index 07f5e725e2..9834fca5fd 100644 --- a/src/libcore/lib.rs +++ b/src/libcore/lib.rs @@ -89,7 +89,6 @@ #![feature(specialization)] #![feature(staged_api)] #![feature(unboxed_closures)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(never_type)] #![feature(prelude_import)] diff --git a/src/libcore/macros.rs b/src/libcore/macros.rs index 23c2e2142c..b3f5363f5b 100644 --- a/src/libcore/macros.rs +++ b/src/libcore/macros.rs @@ -350,6 +350,21 @@ macro_rules! try { /// /// assert_eq!(w, b"testformatted arguments"); /// ``` +/// +/// A module can import both `std::fmt::Write` and `std::io::Write` and call `write!` on objects +/// implementing either, as objects do not typically implement both. However, the module must +/// import the traits qualified so their names do not conflict: +/// +/// ``` +/// use std::fmt::Write as FmtWrite; +/// use std::io::Write as IoWrite; +/// +/// let mut s = String::new(); +/// let mut v = Vec::new(); +/// write!(&mut s, "{} {}", "abc", 123).unwrap(); // uses fmt::Write::write_fmt +/// write!(&mut v, "s = {:?}", s).unwrap(); // uses io::Write::write_fmt +/// assert_eq!(v, b"s = \"abc 123\""); +/// ``` #[macro_export] #[stable(feature = "core", since = "1.6.0")] macro_rules! write { @@ -394,6 +409,21 @@ macro_rules! write { /// /// assert_eq!(&w[..], "test\nformatted arguments\n".as_bytes()); /// ``` +/// +/// A module can import both `std::fmt::Write` and `std::io::Write` and call `write!` on objects +/// implementing either, as objects do not typically implement both. 
However, the module must +/// import the traits qualified so their names do not conflict: +/// +/// ``` +/// use std::fmt::Write as FmtWrite; +/// use std::io::Write as IoWrite; +/// +/// let mut s = String::new(); +/// let mut v = Vec::new(); +/// writeln!(&mut s, "{} {}", "abc", 123).unwrap(); // uses fmt::Write::write_fmt +/// writeln!(&mut v, "s = {:?}", s).unwrap(); // uses io::Write::write_fmt +/// assert_eq!(v, b"s = \"abc 123\\n\"\n"); +/// ``` #[macro_export] #[stable(feature = "rust1", since = "1.0.0")] macro_rules! writeln { diff --git a/src/libcore/marker.rs b/src/libcore/marker.rs index bdb0dd8e7d..9af10966ed 100644 --- a/src/libcore/marker.rs +++ b/src/libcore/marker.rs @@ -26,15 +26,15 @@ use hash::Hasher; /// appropriate. /// /// An example of a non-`Send` type is the reference-counting pointer -/// [`rc::Rc`][rc]. If two threads attempt to clone `Rc`s that point to the same +/// [`rc::Rc`][`Rc`]. If two threads attempt to clone [`Rc`]s that point to the same /// reference-counted value, they might try to update the reference count at the -/// same time, which is [undefined behavior][ub] because `Rc` doesn't use atomic +/// same time, which is [undefined behavior][ub] because [`Rc`] doesn't use atomic /// operations. Its cousin [`sync::Arc`][arc] does use atomic operations (incurring /// some overhead) and thus is `Send`. /// /// See [the Nomicon](../../nomicon/send-and-sync.html) for more details. /// -/// [rc]: ../../std/rc/struct.Rc.html +/// [`Rc`]: ../../std/rc/struct.Rc.html /// [arc]: ../../std/sync/struct.Arc.html /// [ub]: ../../reference.html#behavior-considered-undefined #[stable(feature = "rust1", since = "1.0.0")] @@ -183,20 +183,17 @@ pub trait Unsize { /// Copies happen implicitly, for example as part of an assignment `y = x`. The behavior of /// `Copy` is not overloadable; it is always a simple bit-wise copy. /// -/// Cloning is an explicit action, `x.clone()`. The implementation of [`Clone`][clone] can +/// Cloning is an explicit action, `x.clone()`. The implementation of [`Clone`] can /// provide any type-specific behavior necessary to duplicate values safely. For example, -/// the implementation of `Clone` for [`String`][string] needs to copy the pointed-to string -/// buffer in the heap. A simple bitwise copy of `String` values would merely copy the -/// pointer, leading to a double free down the line. For this reason, `String` is `Clone` +/// the implementation of [`Clone`] for [`String`] needs to copy the pointed-to string +/// buffer in the heap. A simple bitwise copy of [`String`] values would merely copy the +/// pointer, leading to a double free down the line. For this reason, [`String`] is [`Clone`] /// but not `Copy`. /// -/// `Clone` is a supertrait of `Copy`, so everything which is `Copy` must also implement -/// `Clone`. If a type is `Copy` then its `Clone` implementation need only return `*self` +/// [`Clone`] is a supertrait of `Copy`, so everything which is `Copy` must also implement +/// [`Clone`]. If a type is `Copy` then its [`Clone`] implementation need only return `*self` /// (see the example above). /// -/// [clone]: ../clone/trait.Clone.html -/// [string]: ../../std/string/struct.String.html -/// /// ## When can my type be `Copy`? /// /// A type can implement `Copy` if all of its components implement `Copy`. For example, this @@ -210,7 +207,7 @@ pub trait Unsize { /// } /// ``` /// -/// A struct can be `Copy`, and `i32` is `Copy`, therefore `Point` is eligible to be `Copy`. 
+/// A struct can be `Copy`, and [`i32`] is `Copy`, therefore `Point` is eligible to be `Copy`. /// By contrast, consider /// /// ``` @@ -231,8 +228,8 @@ pub trait Unsize { /// ## When *can't* my type be `Copy`? /// /// Some types can't be copied safely. For example, copying `&mut T` would create an aliased -/// mutable reference. Copying [`String`] would duplicate responsibility for managing the `String`'s -/// buffer, leading to a double free. +/// mutable reference. Copying [`String`] would duplicate responsibility for managing the +/// [`String`]'s buffer, leading to a double free. /// /// Generalizing the latter case, any type implementing [`Drop`] can't be `Copy`, because it's /// managing some resource besides its own [`size_of::()`] bytes. @@ -255,6 +252,9 @@ pub trait Unsize { /// [`String`]: ../../std/string/struct.String.html /// [`Drop`]: ../../std/ops/trait.Drop.html /// [`size_of::()`]: ../../std/mem/fn.size_of.html +/// [`Clone`]: ../clone/trait.Clone.html +/// [`String`]: ../../std/string/struct.String.html +/// [`i32`]: ../../std/primitive.i32.html #[stable(feature = "rust1", since = "1.0.0")] #[lang = "copy"] pub trait Copy : Clone { @@ -290,20 +290,20 @@ pub trait Copy : Clone { /// mutability" in a non-thread-safe form, such as [`cell::Cell`][cell] /// and [`cell::RefCell`][refcell]. These types allow for mutation of /// their contents even through an immutable, shared reference. For -/// example the `set` method on `Cell` takes `&self`, so it requires -/// only a shared reference `&Cell`. The method performs no -/// synchronization, thus `Cell` cannot be `Sync`. +/// example the `set` method on [`Cell`][cell] takes `&self`, so it requires +/// only a shared reference [`&Cell`][cell]. The method performs no +/// synchronization, thus [`Cell`][cell] cannot be `Sync`. /// /// Another example of a non-`Sync` type is the reference-counting -/// pointer [`rc::Rc`][rc]. Given any reference `&Rc`, you can clone -/// a new `Rc`, modifying the reference counts in a non-atomic way. +/// pointer [`rc::Rc`][rc]. Given any reference [`&Rc`][rc], you can clone +/// a new [`Rc`][rc], modifying the reference counts in a non-atomic way. /// /// For cases when one does need thread-safe interior mutability, /// Rust provides [atomic data types], as well as explicit locking via /// [`sync::Mutex`][mutex] and [`sync::RWLock`][rwlock]. These types /// ensure that any mutation cannot cause data races, hence the types /// are `Sync`. Likewise, [`sync::Arc`][arc] provides a thread-safe -/// analogue of `Rc`. +/// analogue of [`Rc`][rc]. /// /// Any types with interior mutability must also use the /// [`cell::UnsafeCell`][unsafecell] wrapper around the value(s) which diff --git a/src/libcore/mem.rs b/src/libcore/mem.rs index e0aa25724c..209107ef92 100644 --- a/src/libcore/mem.rs +++ b/src/libcore/mem.rs @@ -337,7 +337,7 @@ pub unsafe fn zeroed() -> T { /// Bypasses Rust's normal memory-initialization checks by pretending to /// produce a value of type `T`, while doing nothing at all. /// -/// **This is incredibly dangerous, and should not be done lightly. Deeply +/// **This is incredibly dangerous and should not be done lightly. Deeply /// consider initializing your memory with a default value instead.** /// /// This is useful for [FFI] functions and initializing arrays sometimes, @@ -352,24 +352,18 @@ pub unsafe fn zeroed() -> T { /// a boolean, your program may take one, both, or neither of the branches. /// /// Writing to the uninitialized value is similarly dangerous. 
Rust believes the -/// value is initialized, and will therefore try to [`Drop`][drop] the uninitialized +/// value is initialized, and will therefore try to [`Drop`] the uninitialized /// value and its fields if you try to overwrite it in a normal manner. The only way /// to safely initialize an uninitialized value is with [`ptr::write`][write], /// [`ptr::copy`][copy], or [`ptr::copy_nonoverlapping`][copy_no]. /// -/// If the value does implement `Drop`, it must be initialized before +/// If the value does implement [`Drop`], it must be initialized before /// it goes out of scope (and therefore would be dropped). Note that this /// includes a `panic` occurring and unwinding the stack suddenly. /// -/// [ub]: ../../reference.html#behavior-considered-undefined -/// [write]: ../ptr/fn.write.html -/// [copy]: ../intrinsics/fn.copy.html -/// [copy_no]: ../intrinsics/fn.copy_nonoverlapping.html -/// [drop]: ../ops/trait.Drop.html -/// /// # Examples /// -/// Here's how to safely initialize an array of `Vec`s. +/// Here's how to safely initialize an array of [`Vec`]s. /// /// ``` /// use std::mem; @@ -410,8 +404,8 @@ pub unsafe fn zeroed() -> T { /// ``` /// /// This example emphasizes exactly how delicate and dangerous using `mem::uninitialized` -/// can be. Note that the `vec!` macro *does* let you initialize every element with a -/// value that is only `Clone`, so the following is semantically equivalent and +/// can be. Note that the [`vec!`] macro *does* let you initialize every element with a +/// value that is only [`Clone`], so the following is semantically equivalent and /// vastly less dangerous, as long as you can live with an extra heap /// allocation: /// @@ -419,6 +413,15 @@ pub unsafe fn zeroed() -> T { /// let data: Vec> = vec![Vec::new(); 1000]; /// println!("{:?}", &data[0]); /// ``` +/// +/// [`Vec`]: ../../std/vec/struct.Vec.html +/// [`vec!`]: ../../std/macro.vec.html +/// [`Clone`]: ../../std/clone/trait.Clone.html +/// [ub]: ../../reference.html#behavior-considered-undefined +/// [write]: ../ptr/fn.write.html +/// [copy]: ../intrinsics/fn.copy.html +/// [copy_no]: ../intrinsics/fn.copy_nonoverlapping.html +/// [`Drop`]: ../ops/trait.Drop.html #[inline] #[stable(feature = "rust1", since = "1.0.0")] pub unsafe fn uninitialized() -> T { @@ -492,7 +495,7 @@ pub fn swap(x: &mut T, y: &mut T) { /// } /// ``` /// -/// Note that `T` does not necessarily implement `Clone`, so it can't even clone and reset +/// Note that `T` does not necessarily implement [`Clone`], so it can't even clone and reset /// `self.buf`. But `replace` can be used to disassociate the original value of `self.buf` from /// `self`, allowing it to be returned: /// @@ -507,6 +510,8 @@ pub fn swap(x: &mut T, y: &mut T) { /// } /// } /// ``` +/// +/// [`Clone`]: ../../std/clone/trait.Clone.html #[inline] #[stable(feature = "rust1", since = "1.0.0")] pub fn replace(dest: &mut T, mut src: T) -> T { @@ -571,8 +576,8 @@ pub fn replace(dest: &mut T, mut src: T) -> T { /// v.push(4); // no problems /// ``` /// -/// Since `RefCell` enforces the borrow rules at runtime, `drop` can -/// release a `RefCell` borrow: +/// Since [`RefCell`] enforces the borrow rules at runtime, `drop` can +/// release a [`RefCell`] borrow: /// /// ``` /// use std::cell::RefCell; @@ -588,7 +593,7 @@ pub fn replace(dest: &mut T, mut src: T) -> T { /// println!("{}", *borrow); /// ``` /// -/// Integers and other types implementing `Copy` are unaffected by `drop`. +/// Integers and other types implementing [`Copy`] are unaffected by `drop`. 
/// /// ``` /// #[derive(Copy, Clone)] @@ -602,6 +607,8 @@ pub fn replace(dest: &mut T, mut src: T) -> T { /// println!("x: {}, y: {}", x, y.0); // still available /// ``` /// +/// [`RefCell`]: ../../std/cell/struct.RefCell.html +/// [`Copy`]: ../../std/marker/trait.Copy.html #[inline] #[stable(feature = "rust1", since = "1.0.0")] pub fn drop(_x: T) { } diff --git a/src/libcore/option.rs b/src/libcore/option.rs index 607e16887a..8871e1fa84 100644 --- a/src/libcore/option.rs +++ b/src/libcore/option.rs @@ -659,6 +659,16 @@ impl Option { impl<'a, T: Clone> Option<&'a T> { /// Maps an `Option<&T>` to an `Option` by cloning the contents of the /// option. + /// + /// # Examples + /// + /// ``` + /// let x = 12; + /// let opt_x = Some(&x); + /// assert_eq!(opt_x, Some(&12)); + /// let cloned = opt_x.cloned(); + /// assert_eq!(cloned, Some(12)); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn cloned(self) -> Option { self.map(|t| t.clone()) diff --git a/src/libcore/ptr.rs b/src/libcore/ptr.rs index 2ad38de72b..e3ca8eca76 100644 --- a/src/libcore/ptr.rs +++ b/src/libcore/ptr.rs @@ -117,6 +117,8 @@ pub unsafe fn replace(dest: *mut T, mut src: T) -> T { /// `zero_memory`, or `copy_memory`). Note that `*src = foo` counts as a use /// because it will attempt to drop the value previously at `*src`. /// +/// The pointer must be aligned; use `read_unaligned` if that is not the case. +/// /// # Examples /// /// Basic usage: @@ -137,6 +139,44 @@ pub unsafe fn read(src: *const T) -> T { tmp } +/// Reads the value from `src` without moving it. This leaves the +/// memory in `src` unchanged. +/// +/// Unlike `read`, the pointer may be unaligned. +/// +/// # Safety +/// +/// Beyond accepting a raw pointer, this is unsafe because it semantically +/// moves the value out of `src` without preventing further usage of `src`. +/// If `T` is not `Copy`, then care must be taken to ensure that the value at +/// `src` is not used before the data is overwritten again (e.g. with `write`, +/// `zero_memory`, or `copy_memory`). Note that `*src = foo` counts as a use +/// because it will attempt to drop the value previously at `*src`. +/// +/// # Examples +/// +/// Basic usage: +/// +/// ``` +/// #![feature(ptr_unaligned)] +/// +/// let x = 12; +/// let y = &x as *const i32; +/// +/// unsafe { +/// assert_eq!(std::ptr::read_unaligned(y), 12); +/// } +/// ``` +#[inline(always)] +#[unstable(feature = "ptr_unaligned", issue = "37955")] +pub unsafe fn read_unaligned(src: *const T) -> T { + let mut tmp: T = mem::uninitialized(); + copy_nonoverlapping(src as *const u8, + &mut tmp as *mut T as *mut u8, + mem::size_of::()); + tmp +} + /// Overwrites a memory location with the given value without reading or /// dropping the old value. /// @@ -151,6 +191,8 @@ pub unsafe fn read(src: *const T) -> T { /// This is appropriate for initializing uninitialized memory, or overwriting /// memory that has previously been `read` from. /// +/// The pointer must be aligned; use `write_unaligned` if that is not the case. +/// /// # Examples /// /// Basic usage: @@ -171,6 +213,47 @@ pub unsafe fn write(dst: *mut T, src: T) { intrinsics::move_val_init(&mut *dst, src) } +/// Overwrites a memory location with the given value without reading or +/// dropping the old value. +/// +/// Unlike `write`, the pointer may be unaligned. +/// +/// # Safety +/// +/// This operation is marked unsafe because it accepts a raw pointer. +/// +/// It does not drop the contents of `dst`. 
This is safe, but it could leak +/// allocations or resources, so care must be taken not to overwrite an object +/// that should be dropped. +/// +/// This is appropriate for initializing uninitialized memory, or overwriting +/// memory that has previously been `read` from. +/// +/// # Examples +/// +/// Basic usage: +/// +/// ``` +/// #![feature(ptr_unaligned)] +/// +/// let mut x = 0; +/// let y = &mut x as *mut i32; +/// let z = 12; +/// +/// unsafe { +/// std::ptr::write_unaligned(y, z); +/// assert_eq!(std::ptr::read_unaligned(y), 12); +/// } +/// ``` +#[inline] +#[unstable(feature = "ptr_unaligned", issue = "37955")] +pub unsafe fn write_unaligned(dst: *mut T, src: T) { + copy_nonoverlapping(&src as *const T as *const u8, + dst as *mut u8, + mem::size_of::()); + mem::forget(src); +} + /// Performs a volatile read of the value from `src` without moving it. This /// leaves the memory in `src` unchanged. /// diff --git a/src/libcore/slice.rs b/src/libcore/slice.rs index 871b63145c..a4a90e7a9d 100644 --- a/src/libcore/slice.rs +++ b/src/libcore/slice.rs @@ -38,10 +38,14 @@ use cmp; use fmt; use intrinsics::assume; use iter::*; -use ops::{self, RangeFull}; +use ops::{FnMut, self}; +use option::Option; +use option::Option::{None, Some}; +use result::Result; +use result::Result::{Ok, Err}; use ptr; use mem; -use marker; +use marker::{Copy, Send, Sync, Sized, self}; use iter_private::TrustedRandomAccess; #[repr(C)] @@ -80,7 +84,8 @@ pub trait SliceExt { #[stable(feature = "core", since = "1.6.0")] fn chunks(&self, size: usize) -> Chunks; #[stable(feature = "core", since = "1.6.0")] - fn get(&self, index: usize) -> Option<&Self::Item>; + fn get(&self, index: I) -> Option<&I::Output> + where I: SliceIndex; #[stable(feature = "core", since = "1.6.0")] fn first(&self) -> Option<&Self::Item>; #[stable(feature = "core", since = "1.6.0")] @@ -90,7 +95,8 @@ pub trait SliceExt { #[stable(feature = "core", since = "1.6.0")] fn last(&self) -> Option<&Self::Item>; #[stable(feature = "core", since = "1.6.0")] - unsafe fn get_unchecked(&self, index: usize) -> &Self::Item; + unsafe fn get_unchecked(&self, index: I) -> &I::Output + where I: SliceIndex; #[stable(feature = "core", since = "1.6.0")] fn as_ptr(&self) -> *const Self::Item; #[stable(feature = "core", since = "1.6.0")] @@ -108,7 +114,8 @@ pub trait SliceExt { #[stable(feature = "core", since = "1.6.0")] fn is_empty(&self) -> bool { self.len() == 0 } #[stable(feature = "core", since = "1.6.0")] - fn get_mut(&mut self, index: usize) -> Option<&mut Self::Item>; + fn get_mut(&mut self, index: I) -> Option<&mut I::Output> + where I: SliceIndex; #[stable(feature = "core", since = "1.6.0")] fn iter_mut(&mut self) -> IterMut; #[stable(feature = "core", since = "1.6.0")] @@ -137,7 +144,8 @@ pub trait SliceExt { #[stable(feature = "core", since = "1.6.0")] fn reverse(&mut self); #[stable(feature = "core", since = "1.6.0")] - unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut Self::Item; + unsafe fn get_unchecked_mut(&mut self, index: I) -> &mut I::Output + where I: SliceIndex; #[stable(feature = "core", since = "1.6.0")] fn as_mut_ptr(&mut self) -> *mut Self::Item; @@ -258,8 +266,10 @@ impl SliceExt for [T] { } #[inline] - fn get(&self, index: usize) -> Option<&T> { - if index < self.len() { Some(&self[index]) } else { None } + fn get(&self, index: I) -> Option<&I::Output> + where I: SliceIndex + { + index.get(self) } #[inline] @@ -284,8 +294,10 @@ impl SliceExt for [T] { } #[inline] - unsafe fn get_unchecked(&self, index: usize) -> &T { - 
&*(self.as_ptr().offset(index as isize)) + unsafe fn get_unchecked(&self, index: I) -> &I::Output + where I: SliceIndex + { + index.get_unchecked(self) } #[inline] @@ -323,8 +335,10 @@ impl SliceExt for [T] { } #[inline] - fn get_mut(&mut self, index: usize) -> Option<&mut T> { - if index < self.len() { Some(&mut self[index]) } else { None } + fn get_mut(&mut self, index: I) -> Option<&mut I::Output> + where I: SliceIndex + { + index.get_mut(self) } #[inline] @@ -451,8 +465,10 @@ impl SliceExt for [T] { } #[inline] - unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut T { - &mut *self.as_mut_ptr().offset(index as isize) + unsafe fn get_unchecked_mut(&mut self, index: I) -> &mut I::Output + where I: SliceIndex + { + index.get_unchecked_mut(self) } #[inline] @@ -515,23 +531,26 @@ impl SliceExt for [T] { } #[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index for [T] { - type Output = T; +#[rustc_on_unimplemented = "slice indices are of type `usize` or ranges of `usize`"] +impl ops::Index for [T] + where I: SliceIndex +{ + type Output = I::Output; - fn index(&self, index: usize) -> &T { - // NB built-in indexing - &(*self)[index] + #[inline] + fn index(&self, index: I) -> &I::Output { + index.index(self) } } #[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut for [T] { +#[rustc_on_unimplemented = "slice indices are of type `usize` or ranges of `usize`"] +impl ops::IndexMut for [T] + where I: SliceIndex +{ #[inline] - fn index_mut(&mut self, index: usize) -> &mut T { - // NB built-in indexing - &mut (*self)[index] + fn index_mut(&mut self, index: I) -> &mut I::Output { + index.index_mut(self) } } @@ -547,205 +566,349 @@ fn slice_index_order_fail(index: usize, end: usize) -> ! { panic!("slice index starts at {} but ends at {}", index, end); } +/// A helper trait used for indexing operations. +#[unstable(feature = "slice_get_slice", issue = "35729")] +#[rustc_on_unimplemented = "slice indices are of type `usize` or ranges of `usize`"] +pub trait SliceIndex { + /// The output type returned by methods. + type Output: ?Sized; -/// Implements slicing with syntax `&self[begin .. end]`. -/// -/// Returns a slice of self for the index range [`begin`..`end`). -/// -/// This operation is `O(1)`. -/// -/// # Panics -/// -/// Requires that `begin <= end` and `end <= self.len()`, -/// otherwise slicing will panic. -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index> for [T] { + /// Returns a shared reference to the output at this location, if in + /// bounds. + fn get(self, slice: &[T]) -> Option<&Self::Output>; + + /// Returns a mutable reference to the output at this location, if in + /// bounds. + fn get_mut(self, slice: &mut [T]) -> Option<&mut Self::Output>; + + /// Returns a shared reference to the output at this location, without + /// performing any bounds checking. + unsafe fn get_unchecked(self, slice: &[T]) -> &Self::Output; + + /// Returns a mutable reference to the output at this location, without + /// performing any bounds checking. + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut Self::Output; + + /// Returns a shared reference to the output at this location, panicking + /// if out of bounds. + fn index(self, slice: &[T]) -> &Self::Output; + + /// Returns a mutable reference to the output at this location, panicking + /// if out of bounds. 
+ fn index_mut(self, slice: &mut [T]) -> &mut Self::Output; +} + +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for usize { + type Output = T; + + #[inline] + fn get(self, slice: &[T]) -> Option<&T> { + if self < slice.len() { + unsafe { + Some(self.get_unchecked(slice)) + } + } else { + None + } + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut T> { + if self < slice.len() { + unsafe { + Some(self.get_unchecked_mut(slice)) + } + } else { + None + } + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &T { + &*slice.as_ptr().offset(self as isize) + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut T { + &mut *slice.as_mut_ptr().offset(self as isize) + } + + #[inline] + fn index(self, slice: &[T]) -> &T { + // NB: use intrinsic indexing + &(*slice)[self] + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut T { + // NB: use intrinsic indexing + &mut (*slice)[self] + } +} + +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::Range { type Output = [T]; #[inline] - fn index(&self, index: ops::Range) -> &[T] { - if index.start > index.end { - slice_index_order_fail(index.start, index.end); - } else if index.end > self.len() { - slice_index_len_fail(index.end, self.len()); + fn get(self, slice: &[T]) -> Option<&[T]> { + if self.start > self.end || self.end > slice.len() { + None + } else { + unsafe { + Some(self.get_unchecked(slice)) + } + } + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + if self.start > self.end || self.end > slice.len() { + None + } else { + unsafe { + Some(self.get_unchecked_mut(slice)) + } + } + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + from_raw_parts(slice.as_ptr().offset(self.start as isize), self.end - self.start) + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + from_raw_parts_mut(slice.as_mut_ptr().offset(self.start as isize), self.end - self.start) + } + + #[inline] + fn index(self, slice: &[T]) -> &[T] { + if self.start > self.end { + slice_index_order_fail(self.start, self.end); + } else if self.end > slice.len() { + slice_index_len_fail(self.end, slice.len()); } unsafe { - from_raw_parts ( - self.as_ptr().offset(index.start as isize), - index.end - index.start - ) + self.get_unchecked(slice) + } + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + if self.start > self.end { + slice_index_order_fail(self.start, self.end); + } else if self.end > slice.len() { + slice_index_len_fail(self.end, slice.len()); + } + unsafe { + self.get_unchecked_mut(slice) } } } -/// Implements slicing with syntax `&self[.. end]`. -/// -/// Returns a slice of self from the beginning until but not including -/// the index `end`. -/// -/// Equivalent to `&self[0 .. end]` -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index> for [T] { +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::RangeTo { type Output = [T]; #[inline] - fn index(&self, index: ops::RangeTo) -> &[T] { - self.index(0 .. 
index.end) + fn get(self, slice: &[T]) -> Option<&[T]> { + (0..self.end).get(slice) + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + (0..self.end).get_mut(slice) + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + (0..self.end).get_unchecked(slice) + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + (0..self.end).get_unchecked_mut(slice) + } + + #[inline] + fn index(self, slice: &[T]) -> &[T] { + (0..self.end).index(slice) + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + (0..self.end).index_mut(slice) } } -/// Implements slicing with syntax `&self[begin ..]`. -/// -/// Returns a slice of self from and including the index `begin` until the end. -/// -/// Equivalent to `&self[begin .. self.len()]` -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index> for [T] { +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::RangeFrom { type Output = [T]; #[inline] - fn index(&self, index: ops::RangeFrom) -> &[T] { - self.index(index.start .. self.len()) + fn get(self, slice: &[T]) -> Option<&[T]> { + (self.start..slice.len()).get(slice) + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + (self.start..slice.len()).get_mut(slice) + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + (self.start..slice.len()).get_unchecked(slice) + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + (self.start..slice.len()).get_unchecked_mut(slice) + } + + #[inline] + fn index(self, slice: &[T]) -> &[T] { + (self.start..slice.len()).index(slice) + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + (self.start..slice.len()).index_mut(slice) } } -/// Implements slicing with syntax `&self[..]`. -/// -/// Returns a slice of the whole slice. This operation cannot panic. -/// -/// Equivalent to `&self[0 .. self.len()]` -#[stable(feature = "rust1", since = "1.0.0")] -impl ops::Index for [T] { +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::RangeFull { type Output = [T]; #[inline] - fn index(&self, _index: RangeFull) -> &[T] { - self + fn get(self, slice: &[T]) -> Option<&[T]> { + Some(slice) + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + Some(slice) + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + slice + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + slice + } + + #[inline] + fn index(self, slice: &[T]) -> &[T] { + slice + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + slice } } -#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index> for [T] { + +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::RangeInclusive { type Output = [T]; #[inline] - fn index(&self, index: ops::RangeInclusive) -> &[T] { - match index { + fn get(self, slice: &[T]) -> Option<&[T]> { + match self { + ops::RangeInclusive::Empty { .. } => Some(&[]), + ops::RangeInclusive::NonEmpty { end, .. 
} if end == usize::max_value() => None, + ops::RangeInclusive::NonEmpty { start, end } => (start..end + 1).get(slice), + } + } + + #[inline] + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + match self { + ops::RangeInclusive::Empty { .. } => Some(&mut []), + ops::RangeInclusive::NonEmpty { end, .. } if end == usize::max_value() => None, + ops::RangeInclusive::NonEmpty { start, end } => (start..end + 1).get_mut(slice), + } + } + + #[inline] + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + match self { ops::RangeInclusive::Empty { .. } => &[], - ops::RangeInclusive::NonEmpty { end, .. } if end == usize::max_value() => - panic!("attempted to index slice up to maximum usize"), - ops::RangeInclusive::NonEmpty { start, end } => - self.index(start .. end+1) + ops::RangeInclusive::NonEmpty { start, end } => (start..end + 1).get_unchecked(slice), + } + } + + #[inline] + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + match self { + ops::RangeInclusive::Empty { .. } => &mut [], + ops::RangeInclusive::NonEmpty { start, end } => { + (start..end + 1).get_unchecked_mut(slice) + } + } + } + + #[inline] + fn index(self, slice: &[T]) -> &[T] { + match self { + ops::RangeInclusive::Empty { .. } => &[], + ops::RangeInclusive::NonEmpty { end, .. } if end == usize::max_value() => { + panic!("attempted to index slice up to maximum usize"); + }, + ops::RangeInclusive::NonEmpty { start, end } => (start..end + 1).index(slice), + } + } + + #[inline] + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + match self { + ops::RangeInclusive::Empty { .. } => &mut [], + ops::RangeInclusive::NonEmpty { end, .. } if end == usize::max_value() => { + panic!("attempted to index slice up to maximum usize"); + }, + ops::RangeInclusive::NonEmpty { start, end } => (start..end + 1).index_mut(slice), } } } -#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::Index> for [T] { + +#[stable(feature = "slice-get-slice-impls", since = "1.13.0")] +impl SliceIndex for ops::RangeToInclusive { type Output = [T]; #[inline] - fn index(&self, index: ops::RangeToInclusive) -> &[T] { - self.index(0...index.end) + fn get(self, slice: &[T]) -> Option<&[T]> { + (0...self.end).get(slice) } -} -/// Implements mutable slicing with syntax `&mut self[begin .. end]`. -/// -/// Returns a slice of self for the index range [`begin`..`end`). -/// -/// This operation is `O(1)`. -/// -/// # Panics -/// -/// Requires that `begin <= end` and `end <= self.len()`, -/// otherwise slicing will panic. -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut> for [T] { #[inline] - fn index_mut(&mut self, index: ops::Range) -> &mut [T] { - if index.start > index.end { - slice_index_order_fail(index.start, index.end); - } else if index.end > self.len() { - slice_index_len_fail(index.end, self.len()); - } - unsafe { - from_raw_parts_mut( - self.as_mut_ptr().offset(index.start as isize), - index.end - index.start - ) - } + fn get_mut(self, slice: &mut [T]) -> Option<&mut [T]> { + (0...self.end).get_mut(slice) } -} -/// Implements mutable slicing with syntax `&mut self[.. end]`. -/// -/// Returns a slice of self from the beginning until but not including -/// the index `end`. -/// -/// Equivalent to `&mut self[0 .. 
end]` -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut> for [T] { #[inline] - fn index_mut(&mut self, index: ops::RangeTo) -> &mut [T] { - self.index_mut(0 .. index.end) + unsafe fn get_unchecked(self, slice: &[T]) -> &[T] { + (0...self.end).get_unchecked(slice) } -} -/// Implements mutable slicing with syntax `&mut self[begin ..]`. -/// -/// Returns a slice of self from and including the index `begin` until the end. -/// -/// Equivalent to `&mut self[begin .. self.len()]` -#[stable(feature = "rust1", since = "1.0.0")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut> for [T] { #[inline] - fn index_mut(&mut self, index: ops::RangeFrom) -> &mut [T] { - let len = self.len(); - self.index_mut(index.start .. len) + unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] { + (0...self.end).get_unchecked_mut(slice) } -} -/// Implements mutable slicing with syntax `&mut self[..]`. -/// -/// Returns a slice of the whole slice. This operation can not panic. -/// -/// Equivalent to `&mut self[0 .. self.len()]` -#[stable(feature = "rust1", since = "1.0.0")] -impl ops::IndexMut for [T] { #[inline] - fn index_mut(&mut self, _index: RangeFull) -> &mut [T] { - self + fn index(self, slice: &[T]) -> &[T] { + (0...self.end).index(slice) } -} -#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut> for [T] { #[inline] - fn index_mut(&mut self, index: ops::RangeInclusive) -> &mut [T] { - match index { - ops::RangeInclusive::Empty { .. } => &mut [], - ops::RangeInclusive::NonEmpty { end, .. } if end == usize::max_value() => - panic!("attempted to index slice up to maximum usize"), - ops::RangeInclusive::NonEmpty { start, end } => - self.index_mut(start .. 
end+1) - } - } -} -#[unstable(feature = "inclusive_range", reason = "recently added, follows RFC", issue = "28237")] -#[rustc_on_unimplemented = "slice indices are of type `usize`"] -impl ops::IndexMut> for [T] { - #[inline] - fn index_mut(&mut self, index: ops::RangeToInclusive) -> &mut [T] { - self.index_mut(0...index.end) + fn index_mut(self, slice: &mut [T]) -> &mut [T] { + (0...self.end).index_mut(slice) } } @@ -983,7 +1146,11 @@ impl<'a, T> Iter<'a, T> { iterator!{struct Iter -> *const T, &'a T} #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T> ExactSizeIterator for Iter<'a, T> {} +impl<'a, T> ExactSizeIterator for Iter<'a, T> { + fn is_empty(&self) -> bool { + self.ptr == self.end + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for Iter<'a, T> {} @@ -1107,7 +1274,11 @@ impl<'a, T> IterMut<'a, T> { iterator!{struct IterMut -> *mut T, &'a mut T} #[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T> ExactSizeIterator for IterMut<'a, T> {} +impl<'a, T> ExactSizeIterator for IterMut<'a, T> { + fn is_empty(&self) -> bool { + self.ptr == self.end + } +} #[unstable(feature = "fused", issue = "35602")] impl<'a, T> FusedIterator for IterMut<'a, T> {} diff --git a/src/libcore/str/mod.rs b/src/libcore/str/mod.rs index 196750254a..de418b831c 100644 --- a/src/libcore/str/mod.rs +++ b/src/libcore/str/mod.rs @@ -424,6 +424,17 @@ impl<'a> Iterator for Chars<'a> { }) } + #[inline] + fn count(self) -> usize { + // length in `char` is equal to the number of non-continuation bytes + let bytes_len = self.iter.len(); + let mut cont_bytes = 0; + for &byte in self.iter { + cont_bytes += utf8_is_cont_byte(byte) as usize; + } + bytes_len - cont_bytes + } + #[inline] fn size_hint(&self) -> (usize, Option) { let len = self.iter.len(); @@ -432,6 +443,12 @@ impl<'a> Iterator for Chars<'a> { // `isize::MAX` (that's well below `usize::MAX`). ((len + 3) / 4, Some(len)) } + + #[inline] + fn last(mut self) -> Option { + // No need to go through the entire string. + self.next_back() + } } #[stable(feature = "rust1", since = "1.0.0")] @@ -501,10 +518,21 @@ impl<'a> Iterator for CharIndices<'a> { } } + #[inline] + fn count(self) -> usize { + self.iter.count() + } + #[inline] fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } + + #[inline] + fn last(mut self) -> Option<(usize, char)> { + // No need to go through the entire string. 
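To spell out the reasoning behind the `Chars::count` specialization above: every `char` encodes to exactly one non-continuation byte in UTF-8, so the char count is the byte length minus the number of continuation bytes. A standalone sketch of the same bookkeeping (the `char_count` helper here is illustrative, not the libcore code):

```rust
// Continuation bytes in UTF-8 have the bit pattern 0b10xxxxxx.
fn char_count(s: &str) -> usize {
    let cont_bytes = s.bytes()
        .filter(|&b| (b & 0b1100_0000) == 0b1000_0000)
        .count();
    s.len() - cont_bytes
}

fn main() {
    let s = "héllo, 世界";
    assert_eq!(char_count(s), s.chars().count());
}
```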
+ self.next_back() + } } #[stable(feature = "rust1", since = "1.0.0")] @@ -590,6 +618,11 @@ impl<'a> ExactSizeIterator for Bytes<'a> { fn len(&self) -> usize { self.0.len() } + + #[inline] + fn is_empty(&self) -> bool { + self.0.is_empty() + } } #[unstable(feature = "fused", issue = "35602")] diff --git a/src/libcore/sync/atomic.rs b/src/libcore/sync/atomic.rs index c10f7e39fc..198db0e7c0 100644 --- a/src/libcore/sync/atomic.rs +++ b/src/libcore/sync/atomic.rs @@ -203,7 +203,6 @@ impl AtomicBool { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::{AtomicBool, Ordering}; /// /// let mut some_bool = AtomicBool::new(true); @@ -212,7 +211,7 @@ impl AtomicBool { /// assert_eq!(some_bool.load(Ordering::SeqCst), false); /// ``` #[inline] - #[unstable(feature = "atomic_access", issue = "35603")] + #[stable(feature = "atomic_access", since = "1.15.0")] pub fn get_mut(&mut self) -> &mut bool { unsafe { &mut *(self.v.get() as *mut bool) } } @@ -225,14 +224,13 @@ impl AtomicBool { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::AtomicBool; /// /// let some_bool = AtomicBool::new(true); /// assert_eq!(some_bool.into_inner(), true); /// ``` #[inline] - #[unstable(feature = "atomic_access", issue = "35603")] + #[stable(feature = "atomic_access", since = "1.15.0")] pub fn into_inner(self) -> bool { unsafe { self.v.into_inner() != 0 } } @@ -588,7 +586,6 @@ impl AtomicPtr { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::{AtomicPtr, Ordering}; /// /// let mut atomic_ptr = AtomicPtr::new(&mut 10); @@ -596,7 +593,7 @@ impl AtomicPtr { /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5); /// ``` #[inline] - #[unstable(feature = "atomic_access", issue = "35603")] + #[stable(feature = "atomic_access", since = "1.15.0")] pub fn get_mut(&mut self) -> &mut *mut T { unsafe { &mut *self.p.get() } } @@ -609,14 +606,13 @@ impl AtomicPtr { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::AtomicPtr; /// /// let atomic_ptr = AtomicPtr::new(&mut 5); /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5); /// ``` #[inline] - #[unstable(feature = "atomic_access", issue = "35603")] + #[stable(feature = "atomic_access", since = "1.15.0")] pub fn into_inner(self) -> *mut T { unsafe { self.p.into_inner() } } @@ -883,7 +879,6 @@ macro_rules! atomic_int { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::{AtomicIsize, Ordering}; /// /// let mut some_isize = AtomicIsize::new(10); @@ -905,7 +900,6 @@ macro_rules! 
atomic_int { /// # Examples /// /// ``` - /// #![feature(atomic_access)] /// use std::sync::atomic::AtomicIsize; /// /// let some_isize = AtomicIsize::new(5); @@ -1261,7 +1255,7 @@ atomic_int!{ stable(feature = "rust1", since = "1.0.0"), stable(feature = "extended_compare_and_swap", since = "1.10.0"), stable(feature = "atomic_debug", since = "1.3.0"), - unstable(feature = "atomic_access", issue = "35603"), + stable(feature = "atomic_access", since = "1.15.0"), isize AtomicIsize ATOMIC_ISIZE_INIT } #[cfg(target_has_atomic = "ptr")] @@ -1269,7 +1263,7 @@ atomic_int!{ stable(feature = "rust1", since = "1.0.0"), stable(feature = "extended_compare_and_swap", since = "1.10.0"), stable(feature = "atomic_debug", since = "1.3.0"), - unstable(feature = "atomic_access", issue = "35603"), + stable(feature = "atomic_access", since = "1.15.0"), usize AtomicUsize ATOMIC_USIZE_INIT } diff --git a/src/libcore/tuple.rs b/src/libcore/tuple.rs index c3608b60a3..55d55079dd 100644 --- a/src/libcore/tuple.rs +++ b/src/libcore/tuple.rs @@ -13,11 +13,6 @@ use cmp::*; use cmp::Ordering::*; -// FIXME(#19630) Remove this work-around -macro_rules! e { - ($e:expr) => { $e } -} - // macro for implementing n-ary tuple functions and operations macro_rules! tuple_impls { ($( @@ -29,7 +24,7 @@ macro_rules! tuple_impls { #[stable(feature = "rust1", since = "1.0.0")] impl<$($T:Clone),+> Clone for ($($T,)+) { fn clone(&self) -> ($($T,)+) { - ($(e!(self.$idx.clone()),)+) + ($(self.$idx.clone(),)+) } } @@ -37,11 +32,11 @@ macro_rules! tuple_impls { impl<$($T:PartialEq),+> PartialEq for ($($T,)+) { #[inline] fn eq(&self, other: &($($T,)+)) -> bool { - e!($(self.$idx == other.$idx)&&+) + $(self.$idx == other.$idx)&&+ } #[inline] fn ne(&self, other: &($($T,)+)) -> bool { - e!($(self.$idx != other.$idx)||+) + $(self.$idx != other.$idx)||+ } } diff --git a/src/libcoretest/cell.rs b/src/libcoretest/cell.rs index a7c230ba97..724a312ea7 100644 --- a/src/libcoretest/cell.rs +++ b/src/libcoretest/cell.rs @@ -59,22 +59,22 @@ fn double_imm_borrow() { fn no_mut_then_imm_borrow() { let x = RefCell::new(0); let _b1 = x.borrow_mut(); - assert_eq!(x.borrow_state(), BorrowState::Writing); + assert!(x.try_borrow().is_err()); } #[test] fn no_imm_then_borrow_mut() { let x = RefCell::new(0); let _b1 = x.borrow(); - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow_mut().is_err()); } #[test] fn no_double_borrow_mut() { let x = RefCell::new(0); - assert_eq!(x.borrow_state(), BorrowState::Unused); + assert!(x.try_borrow().is_ok()); let _b1 = x.borrow_mut(); - assert_eq!(x.borrow_state(), BorrowState::Writing); + assert!(x.try_borrow().is_err()); } #[test] @@ -102,7 +102,8 @@ fn double_borrow_single_release_no_borrow_mut() { { let _b2 = x.borrow(); } - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); } #[test] @@ -119,14 +120,18 @@ fn ref_clone_updates_flag() { let x = RefCell::new(0); { let b1 = x.borrow(); - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); { let _b2 = Ref::clone(&b1); - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); } - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); } - assert_eq!(x.borrow_state(), BorrowState::Unused); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_ok()); } 
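The `libcoretest/cell.rs` changes above replace the removed `borrow_state` queries with `try_borrow`/`try_borrow_mut` probes; a minimal standalone sketch of that pattern, using only the stable `RefCell` API:

```rust
use std::cell::RefCell;

fn main() {
    let x = RefCell::new(0);
    {
        let _guard = x.borrow_mut();
        // While a mutable borrow is live, both probes report an error
        // (the old code would have checked for BorrowState::Writing).
        assert!(x.try_borrow().is_err());
        assert!(x.try_borrow_mut().is_err());
    }
    // After the guard is dropped the cell is borrowable again
    // (previously BorrowState::Unused).
    assert!(x.try_borrow().is_ok());
    assert!(x.try_borrow_mut().is_ok());
}
```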
#[test] @@ -134,15 +139,19 @@ fn ref_map_does_not_update_flag() { let x = RefCell::new(Some(5)); { let b1: Ref> = x.borrow(); - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); { let b2: Ref = Ref::map(b1, |o| o.as_ref().unwrap()); assert_eq!(*b2, 5); - assert_eq!(x.borrow_state(), BorrowState::Reading); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_err()); } - assert_eq!(x.borrow_state(), BorrowState::Unused); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_ok()); } - assert_eq!(x.borrow_state(), BorrowState::Unused); + assert!(x.try_borrow().is_ok()); + assert!(x.try_borrow_mut().is_ok()); } #[test] @@ -247,5 +256,3 @@ fn refcell_ref_coercion() { assert_eq!(&*coerced, comp); } } - - diff --git a/src/libcoretest/char.rs b/src/libcoretest/char.rs index 7da0b6902f..b4088ffbf8 100644 --- a/src/libcoretest/char.rs +++ b/src/libcoretest/char.rs @@ -162,6 +162,8 @@ fn test_escape_debug() { assert_eq!(s, "~"); let s = string('é'); assert_eq!(s, "é"); + let s = string('文'); + assert_eq!(s, "文"); let s = string('\x00'); assert_eq!(s, "\\u{0}"); let s = string('\x1f'); diff --git a/src/libcoretest/iter.rs b/src/libcoretest/iter.rs index 58b6444ef8..274539dfa6 100644 --- a/src/libcoretest/iter.rs +++ b/src/libcoretest/iter.rs @@ -274,6 +274,74 @@ fn test_iterator_peekable_last() { let mut it = ys.iter().peekable(); assert_eq!(it.peek(), Some(&&0)); assert_eq!(it.last(), Some(&0)); + + let mut it = ys.iter().peekable(); + assert_eq!(it.next(), Some(&0)); + assert_eq!(it.peek(), None); + assert_eq!(it.last(), None); +} + +/// This is an iterator that follows the Iterator contract, +/// but it is not fused. After having returned None once, it will start +/// producing elements if .next() is called again. 
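Before the `CycleIter` definition that follows, a quick illustration of what "not fused" means and why `Peekable` must remember a peeked `None`. The `Blinker` type below is a hypothetical stand-in (not the test helper), and the final assertions rely on the `Peekable` behaviour these new tests exercise:

```rust
// An iterator that alternates Some(0), None, Some(0), ... -- legal under the
// Iterator contract, which makes no promise about calls after the first None.
struct Blinker(bool);

impl Iterator for Blinker {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0 = !self.0;
        if self.0 { Some(0) } else { None }
    }
}

fn main() {
    let mut raw = Blinker(false);
    assert_eq!(raw.next(), Some(0));
    assert_eq!(raw.next(), None);
    assert_eq!(raw.next(), Some(0)); // resumes after None: not fused

    // Peekable caches the None it observed via peek(), so the wrapped
    // iterator's resurrection does not leak through the following next().
    let mut peekable = Blinker(false).peekable();
    assert_eq!(peekable.next(), Some(0));
    assert_eq!(peekable.peek(), None);
    assert_eq!(peekable.next(), None);
}
```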
+pub struct CycleIter<'a, T: 'a> { + index: usize, + data: &'a [T], +} + +pub fn cycle(data: &[T]) -> CycleIter { + CycleIter { + index: 0, + data: data, + } +} + +impl<'a, T> Iterator for CycleIter<'a, T> { + type Item = &'a T; + fn next(&mut self) -> Option { + let elt = self.data.get(self.index); + self.index += 1; + self.index %= 1 + self.data.len(); + elt + } +} + +#[test] +fn test_iterator_peekable_remember_peek_none_1() { + // Check that the loop using .peek() terminates + let data = [1, 2, 3]; + let mut iter = cycle(&data).peekable(); + + let mut n = 0; + while let Some(_) = iter.next() { + let is_the_last = iter.peek().is_none(); + assert_eq!(is_the_last, n == data.len() - 1); + n += 1; + if n > data.len() { break; } + } + assert_eq!(n, data.len()); +} + +#[test] +fn test_iterator_peekable_remember_peek_none_2() { + let data = [0]; + let mut iter = cycle(&data).peekable(); + iter.next(); + assert_eq!(iter.peek(), None); + assert_eq!(iter.last(), None); +} + +#[test] +fn test_iterator_peekable_remember_peek_none_3() { + let data = [0]; + let mut iter = cycle(&data).peekable(); + iter.peek(); + assert_eq!(iter.nth(0), Some(&0)); + + let mut iter = cycle(&data).peekable(); + iter.next(); + assert_eq!(iter.peek(), None); + assert_eq!(iter.nth(0), None); } #[test] diff --git a/src/libcoretest/lib.rs b/src/libcoretest/lib.rs index b8c01e570f..d12616a97a 100644 --- a/src/libcoretest/lib.rs +++ b/src/libcoretest/lib.rs @@ -10,9 +10,7 @@ #![deny(warnings)] -#![feature(borrow_state)] #![feature(box_syntax)] -#![feature(cell_extras)] #![feature(char_escape_debug)] #![feature(const_fn)] #![feature(core_private_bignum)] @@ -32,15 +30,14 @@ #![feature(try_from)] #![feature(unicode)] #![feature(unique)] -#![feature(iter_max_by)] -#![feature(iter_min_by)] #![feature(ordering_chaining)] #![feature(result_unwrap_or_default)] +#![feature(ptr_unaligned)] extern crate core; extern crate test; extern crate libc; -extern crate rustc_unicode; +extern crate std_unicode; extern crate rand; mod any; diff --git a/src/libcoretest/num/flt2dec/strategy/dragon.rs b/src/libcoretest/num/flt2dec/strategy/dragon.rs index 79dcca7671..08c2cd0a73 100644 --- a/src/libcoretest/num/flt2dec/strategy/dragon.rs +++ b/src/libcoretest/num/flt2dec/strategy/dragon.rs @@ -11,7 +11,6 @@ use std::prelude::v1::*; use std::{i16, f64}; use super::super::*; -use core::num::flt2dec::*; use core::num::bignum::Big32x40 as Big; use core::num::flt2dec::strategy::dragon::*; diff --git a/src/libcoretest/num/flt2dec/strategy/grisu.rs b/src/libcoretest/num/flt2dec/strategy/grisu.rs index 2d4afceda1..311bd25235 100644 --- a/src/libcoretest/num/flt2dec/strategy/grisu.rs +++ b/src/libcoretest/num/flt2dec/strategy/grisu.rs @@ -10,7 +10,6 @@ use std::{i16, f64}; use super::super::*; -use core::num::flt2dec::*; use core::num::flt2dec::strategy::grisu::*; #[test] diff --git a/src/libcoretest/ptr.rs b/src/libcoretest/ptr.rs index f7fe61896f..7f6f472bfb 100644 --- a/src/libcoretest/ptr.rs +++ b/src/libcoretest/ptr.rs @@ -9,6 +9,7 @@ // except according to those terms. use core::ptr::*; +use core::cell::RefCell; #[test] fn test() { @@ -189,3 +190,25 @@ pub fn test_variadic_fnptr() { let mut s = SipHasher::new(); assert_eq!(p.hash(&mut s), q.hash(&mut s)); } + +#[test] +fn write_unaligned_drop() { + thread_local! 
{ + static DROPS: RefCell> = RefCell::new(Vec::new()); + } + + struct Dropper(u32); + + impl Drop for Dropper { + fn drop(&mut self) { + DROPS.with(|d| d.borrow_mut().push(self.0)); + } + } + + { + let c = Dropper(0); + let mut t = Dropper(1); + unsafe { write_unaligned(&mut t, c); } + } + DROPS.with(|d| assert_eq!(*d.borrow(), [0])); +} diff --git a/src/libcoretest/slice.rs b/src/libcoretest/slice.rs index f82ab44ada..ad39e6b081 100644 --- a/src/libcoretest/slice.rs +++ b/src/libcoretest/slice.rs @@ -180,3 +180,47 @@ fn test_windows_last() { let c2 = v2.windows(2); assert_eq!(c2.last().unwrap()[0], 3); } + +#[test] +fn get_range() { + let v: &[i32] = &[0, 1, 2, 3, 4, 5]; + assert_eq!(v.get(..), Some(&[0, 1, 2, 3, 4, 5][..])); + assert_eq!(v.get(..2), Some(&[0, 1][..])); + assert_eq!(v.get(2..), Some(&[2, 3, 4, 5][..])); + assert_eq!(v.get(1..4), Some(&[1, 2, 3][..])); + assert_eq!(v.get(7..), None); + assert_eq!(v.get(7..10), None); +} + +#[test] +fn get_mut_range() { + let mut v: &mut [i32] = &mut [0, 1, 2, 3, 4, 5]; + assert_eq!(v.get_mut(..), Some(&mut [0, 1, 2, 3, 4, 5][..])); + assert_eq!(v.get_mut(..2), Some(&mut [0, 1][..])); + assert_eq!(v.get_mut(2..), Some(&mut [2, 3, 4, 5][..])); + assert_eq!(v.get_mut(1..4), Some(&mut [1, 2, 3][..])); + assert_eq!(v.get_mut(7..), None); + assert_eq!(v.get_mut(7..10), None); +} + +#[test] +fn get_unchecked_range() { + unsafe { + let v: &[i32] = &[0, 1, 2, 3, 4, 5]; + assert_eq!(v.get_unchecked(..), &[0, 1, 2, 3, 4, 5][..]); + assert_eq!(v.get_unchecked(..2), &[0, 1][..]); + assert_eq!(v.get_unchecked(2..), &[2, 3, 4, 5][..]); + assert_eq!(v.get_unchecked(1..4), &[1, 2, 3][..]); + } +} + +#[test] +fn get_unchecked_mut_range() { + unsafe { + let v: &mut [i32] = &mut [0, 1, 2, 3, 4, 5]; + assert_eq!(v.get_unchecked_mut(..), &mut [0, 1, 2, 3, 4, 5][..]); + assert_eq!(v.get_unchecked_mut(..2), &mut [0, 1][..]); + assert_eq!(v.get_unchecked_mut(2..), &mut[2, 3, 4, 5][..]); + assert_eq!(v.get_unchecked_mut(1..4), &mut [1, 2, 3][..]); + } +} diff --git a/src/libfmt_macros/lib.rs b/src/libfmt_macros/lib.rs index e7d401f092..b179a16e55 100644 --- a/src/libfmt_macros/lib.rs +++ b/src/libfmt_macros/lib.rs @@ -139,7 +139,7 @@ pub struct Parser<'a> { input: &'a str, cur: iter::Peekable>, /// Error messages accumulated during parsing - pub errors: Vec, + pub errors: Vec<(string::String, Option)>, /// Current position of implicit positional argument pointer curarg: usize, } @@ -165,7 +165,9 @@ impl<'a> Iterator for Parser<'a> { if self.consume('}') { Some(String(self.string(pos + 1))) } else { - self.err("unmatched `}` found"); + self.err_with_note("unmatched `}` found", + "if you intended to print `}`, \ + you can escape it using `}}`"); None } } @@ -192,7 +194,14 @@ impl<'a> Parser<'a> { /// String, but I think it does when this eventually uses conditions so it /// might as well start using it now. fn err(&mut self, msg: &str) { - self.errors.push(msg.to_owned()); + self.errors.push((msg.to_owned(), None)); + } + + /// Notifies of an error. The message doesn't actually need to be of type + /// String, but I think it does when this eventually uses conditions so it + /// might as well start using it now. + fn err_with_note(&mut self, msg: &str, note: &str) { + self.errors.push((msg.to_owned(), Some(note.to_owned()))); } /// Optionally consumes the specified character. 
If the character is not at @@ -222,7 +231,13 @@ impl<'a> Parser<'a> { self.err(&format!("expected `{:?}`, found `{:?}`", c, maybe)); } } else { - self.err(&format!("expected `{:?}` but string was terminated", c)); + let msg = &format!("expected `{:?}` but string was terminated", c); + if c == '}' { + self.err_with_note(msg, + "if you intended to print `{`, you can escape it using `{{`"); + } else { + self.err(msg); + } } } diff --git a/src/libgraphviz/lib.rs b/src/libgraphviz/lib.rs index 03057af4a8..220051c9d3 100644 --- a/src/libgraphviz/lib.rs +++ b/src/libgraphviz/lib.rs @@ -295,7 +295,6 @@ #![cfg_attr(not(stage0), deny(warnings))] #![feature(str_escape)] -#![cfg_attr(stage0, feature(question_mark))] use self::LabelText::*; diff --git a/src/liblibc/.travis.yml b/src/liblibc/.travis.yml index 1b05af7bc6..71c771d2b7 100644 --- a/src/liblibc/.travis.yml +++ b/src/liblibc/.travis.yml @@ -20,7 +20,7 @@ script: osx_image: xcode7.3 env: global: - secure: eIDEoQdTyglcsTD13zSGotAX2HDhRSXIaaTnVZTThqLSrySOc3/6KY3qmOc2Msf7XaBqfFy9QA+alk7OwfePp253eiy1Kced67ffjjFOytEcRT7FlQiYpcYQD6WNHZEj62/bJBO4LTM9sGtWNCTJVEDKW0WM8mUK7qNuC+honPM= + secure: "e2/3QjgRN9atOuSHp22TrYG7QVKcYUWY48Hi9b60w+r1+BhPkTseIJLte7WefRhdXtqpjjUJTooKDhnurFOeHaCT+nmBgiv+FPU893sBl4bhesY4m0vgUJVbNZcs6lTImYekWVb+aqjGdgV/XAgCw7c3kPmrZV0MzGDWL64Xaps=" matrix: include: # 1.0.0 compat @@ -33,7 +33,7 @@ matrix: # build documentation - os: linux env: TARGET=x86_64-unknown-linux-gnu - rust: stable + rust: nightly-2016-11-26 script: sh ci/dox.sh # stable compat @@ -87,7 +87,10 @@ matrix: rust: stable - os: linux env: TARGET=mips64-unknown-linux-gnuabi64 - rust: nightly + rust: beta + - os: linux + env: TARGET=mips-unknown-linux-gnu + rust: beta # beta - os: linux @@ -100,14 +103,10 @@ matrix: # nightly - os: linux env: TARGET=x86_64-unknown-linux-gnu - rust: nightly + rust: nightly-2016-11-26 - os: osx env: TARGET=x86_64-apple-darwin - rust: nightly - - os: linux - env: TARGET=mips-unknown-linux-gnu - # not sure why this has to be nightly... - rust: nightly + rust: nightly-2016-11-26 # QEMU based targets that compile in an emulator - os: linux @@ -119,6 +118,8 @@ matrix: script: sh ci/run-docker.sh $TARGET install: +cache: cargo + notifications: email: on_success: never diff --git a/src/liblibc/Cargo.lock b/src/liblibc/Cargo.lock index 5332bf28bf..53b07dcc3f 100644 --- a/src/liblibc/Cargo.lock +++ b/src/liblibc/Cargo.lock @@ -3,7 +3,7 @@ name = "libc-test" version = "0.1.0" dependencies = [ "ctest 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)", - "libc 0.2.17", + "libc 0.2.18", ] [[package]] @@ -48,7 +48,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" [[package]] name = "libc" -version = "0.2.17" +version = "0.2.18" [[package]] name = "log" diff --git a/src/liblibc/Cargo.toml b/src/liblibc/Cargo.toml index c08ab3aab9..7ac523e27a 100644 --- a/src/liblibc/Cargo.toml +++ b/src/liblibc/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "libc" -version = "0.2.17" +version = "0.2.18" authors = ["The Rust Project Developers"] license = "MIT/Apache-2.0" readme = "README.md" diff --git a/src/liblibc/README.md b/src/liblibc/README.md index 5ea812320f..2552836435 100644 --- a/src/liblibc/README.md +++ b/src/liblibc/README.md @@ -5,7 +5,7 @@ A Rust library with native bindings to the types and functions commonly found on various systems, including libc. 
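Circling back to the `libfmt_macros` parser change above: the new notes point users at brace escaping when an unmatched `}` is encountered. A tiny illustration of the escape sequence the note recommends:

```rust
fn main() {
    // `{{` and `}}` produce literal braces in format strings, which is what
    // the parser's new note suggests for an unmatched `}`.
    println!("set {{x}} to {}", 42); // prints: set {x} to 42
}
```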
[![Build Status](https://travis-ci.org/rust-lang/libc.svg?branch=master)](https://travis-ci.org/rust-lang/libc) -[![Build status](https://ci.appveyor.com/api/projects/status/34csq3uurnw7c0rl?svg=true)](https://ci.appveyor.com/project/alexcrichton/libc) +[![Build status](https://ci.appveyor.com/api/projects/status/github/rust-lang/libc?svg=true)](https://ci.appveyor.com/project/rust-lang-libs/libc) [Documentation](#platforms-and-documentation) diff --git a/src/liblibc/ci/docker/arm-linux-androideabi/Dockerfile b/src/liblibc/ci/docker/arm-linux-androideabi/Dockerfile index 0e41ba6dbe..1911fbd879 100644 --- a/src/liblibc/ci/docker/arm-linux-androideabi/Dockerfile +++ b/src/liblibc/ci/docker/arm-linux-androideabi/Dockerfile @@ -1,4 +1,34 @@ -FROM alexcrichton/rust-slave-android:2015-11-22 -ENV CARGO_TARGET_ARM_LINUX_ANDROIDEABI_LINKER=arm-linux-androideabi-gcc \ - PATH=$PATH:/rust/bin -ENTRYPOINT ["sh"] +FROM ubuntu:16.04 + +RUN dpkg --add-architecture i386 && \ + apt-get update && \ + apt-get install -y --no-install-recommends \ + file \ + curl \ + ca-certificates \ + python \ + unzip \ + expect \ + openjdk-9-jre \ + libstdc++6:i386 \ + gcc \ + libc6-dev + +WORKDIR /android/ + +COPY install-ndk.sh /android/ +RUN sh /android/install-ndk.sh + +ENV PATH=$PATH:/android/ndk-arm/bin:/android/sdk/tools:/android/sdk/platform-tools + +COPY install-sdk.sh accept-licenses.sh /android/ +RUN sh /android/install-sdk.sh + +ENV PATH=$PATH:/rust/bin \ + CARGO_TARGET_ARM_LINUX_ANDROIDEABI_LINKER=arm-linux-androideabi-gcc \ + ANDROID_EMULATOR_FORCE_32BIT=1 \ + HOME=/tmp +RUN chmod 755 /android/sdk/tools/* + +RUN cp -r /root/.android /tmp +RUN chmod 777 -R /tmp/.android diff --git a/src/liblibc/ci/docker/arm-linux-androideabi/accept-licenses.sh b/src/liblibc/ci/docker/arm-linux-androideabi/accept-licenses.sh new file mode 100755 index 0000000000..8d8f60a5ec --- /dev/null +++ b/src/liblibc/ci/docker/arm-linux-androideabi/accept-licenses.sh @@ -0,0 +1,15 @@ +#!/usr/bin/expect -f +# ignore-license + +set timeout 1800 +set cmd [lindex $argv 0] +set licenses [lindex $argv 1] + +spawn {*}$cmd +expect { + "Do you accept the license '*'*" { + exp_send "y\r" + exp_continue + } + eof +} diff --git a/src/liblibc/ci/docker/arm-linux-androideabi/install-ndk.sh b/src/liblibc/ci/docker/arm-linux-androideabi/install-ndk.sh new file mode 100644 index 0000000000..566a319184 --- /dev/null +++ b/src/liblibc/ci/docker/arm-linux-androideabi/install-ndk.sh @@ -0,0 +1,21 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -ex + +curl -O https://dl.google.com/android/repository/android-ndk-r13b-linux-x86_64.zip +unzip -q android-ndk-r13b-linux-x86_64.zip +android-ndk-r13b/build/tools/make_standalone_toolchain.py \ + --install-dir /android/ndk-arm \ + --arch arm \ + --api 24 + +rm -rf ./android-ndk-r13b-linux-x86_64.zip ./android-ndk-r13b diff --git a/src/liblibc/ci/docker/arm-linux-androideabi/install-sdk.sh b/src/liblibc/ci/docker/arm-linux-androideabi/install-sdk.sh new file mode 100644 index 0000000000..3f20837fe0 --- /dev/null +++ b/src/liblibc/ci/docker/arm-linux-androideabi/install-sdk.sh @@ -0,0 +1,33 @@ +#!/bin/sh +# Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +# file at the top-level directory of this distribution and at +# http://rust-lang.org/COPYRIGHT. +# +# Licensed under the Apache License, Version 2.0 or the MIT license +# , at your +# option. This file may not be copied, modified, or distributed +# except according to those terms. + +set -ex + +# Prep the SDK and emulator +# +# Note that the update process requires that we accept a bunch of licenses, and +# we can't just pipe `yes` into it for some reason, so we take the same strategy +# located in https://github.com/appunite/docker by just wrapping it in a script +# which apparently magically accepts the licenses. + +mkdir sdk +curl https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz | \ + tar xzf - -C sdk --strip-components=1 + +filter="platform-tools,android-21" +filter="$filter,sys-img-armeabi-v7a-android-21" + +./accept-licenses.sh "android - update sdk -a --no-ui --filter $filter" + +echo "no" | android create avd \ + --name arm-21 \ + --target android-21 \ + --abi armeabi-v7a diff --git a/src/liblibc/ci/docker/mips-unknown-linux-musl/Dockerfile b/src/liblibc/ci/docker/mips-unknown-linux-musl/Dockerfile index 77c6adb435..cbc41c24d1 100644 --- a/src/liblibc/ci/docker/mips-unknown-linux-musl/Dockerfile +++ b/src/liblibc/ci/docker/mips-unknown-linux-musl/Dockerfile @@ -6,7 +6,10 @@ RUN apt-get install -y --no-install-recommends \ bzip2 RUN mkdir /toolchain -RUN curl -L https://downloads.openwrt.org/snapshots/trunk/ar71xx/generic/OpenWrt-SDK-ar71xx-generic_gcc-5.3.0_musl-1.1.15.Linux-x86_64.tar.bz2 | \ + +# Note that this originally came from: +# https://downloads.openwrt.org/snapshots/trunk/ar71xx/generic/OpenWrt-SDK-ar71xx-generic_gcc-5.3.0_musl-1.1.15.Linux-x86_64.tar.bz2 +RUN curl -L https://s3.amazonaws.com/rust-lang-ci/libc/OpenWrt-SDK-ar71xx-generic_gcc-5.3.0_musl-1.1.15.Linux-x86_64.tar.bz2 | \ tar xjf - -C /toolchain --strip-components=1 ENV PATH=$PATH:/rust/bin:/toolchain/staging_dir/toolchain-mips_34kc_gcc-5.3.0_musl-1.1.15/bin \ diff --git a/src/liblibc/ci/docker/x86_64-unknown-freebsd/Dockerfile b/src/liblibc/ci/docker/x86_64-unknown-freebsd/Dockerfile index b127338222..12b0bdffcc 100644 --- a/src/liblibc/ci/docker/x86_64-unknown-freebsd/Dockerfile +++ b/src/liblibc/ci/docker/x86_64-unknown-freebsd/Dockerfile @@ -8,6 +8,6 @@ RUN apt-get install -y --no-install-recommends \ ENTRYPOINT ["sh"] ENV PATH=$PATH:/rust/bin \ - QEMU=freebsd.qcow2.gz \ + QEMU=2016-11-06/freebsd.qcow2.gz \ CAN_CROSS=1 \ CARGO_TARGET_X86_64_UNKNOWN_FREEBSD_LINKER=x86_64-unknown-freebsd10-gcc diff --git a/src/liblibc/ci/docker/x86_64-unknown-openbsd/Dockerfile b/src/liblibc/ci/docker/x86_64-unknown-openbsd/Dockerfile index 26340a5ed1..518baf8702 100644 --- a/src/liblibc/ci/docker/x86_64-unknown-openbsd/Dockerfile +++ b/src/liblibc/ci/docker/x86_64-unknown-openbsd/Dockerfile @@ -5,4 +5,4 @@ RUN apt-get install -y --no-install-recommends \ gcc libc6-dev qemu curl ca-certificates \ genext2fs ENV PATH=$PATH:/rust/bin \ - QEMU=2016-09-07/openbsd-6.0-without-pkgs.qcow2 + QEMU=2016-11-06/openbsd-6.0-without-pkgs.qcow2 diff --git a/src/liblibc/ci/run-docker.sh b/src/liblibc/ci/run-docker.sh index e34e65ffcd..a7702ae1dc 100644 --- a/src/liblibc/ci/run-docker.sh +++ b/src/liblibc/ci/run-docker.sh @@ -6,14 +6,21 @@ set -ex run() { echo $1 docker build -t libc ci/docker/$1 + mkdir -p target docker run \ + --user `id -u`:`id -g` \ --rm \ - -v `rustc --print sysroot`:/rust:ro \ - -v `pwd`:/checkout:ro \ - -e CARGO_TARGET_DIR=/tmp/target \ - -w /checkout \ + --volume $HOME/.cargo:/cargo \ + 
--env CARGO_HOME=/cargo \ + --volume `rustc --print sysroot`:/rust:ro \ + --volume `pwd`:/checkout:ro \ + --volume `pwd`/target:/checkout/target \ + --env CARGO_TARGET_DIR=/checkout/target \ + --workdir /checkout \ --privileged \ - -it libc \ + --interactive \ + --tty \ + libc \ ci/run.sh $1 } diff --git a/src/liblibc/ci/run.sh b/src/liblibc/ci/run.sh index 15721ab965..179fe7a885 100755 --- a/src/liblibc/ci/run.sh +++ b/src/liblibc/ci/run.sh @@ -21,14 +21,14 @@ if [ "$QEMU" != "" ]; then # image is .gz : download and uncompress it qemufile=$(echo ${QEMU%.gz} | sed 's/\//__/g') if [ ! -f $tmpdir/$qemufile ]; then - curl https://people.mozilla.org/~acrichton/libc-test/qemu/$QEMU | \ + curl https://s3.amazonaws.com/rust-lang-ci/libc/$QEMU | \ gunzip -d > $tmpdir/$qemufile fi else # plain qcow2 image: just download it qemufile=$(echo ${QEMU} | sed 's/\//__/g') if [ ! -f $tmpdir/$qemufile ]; then - curl https://people.mozilla.org/~acrichton/libc-test/qemu/$QEMU \ + curl https://s3.amazonaws.com/rust-lang-ci/libc/$QEMU \ > $tmpdir/$qemufile fi fi diff --git a/src/liblibc/libc-test/build.rs b/src/liblibc/libc-test/build.rs index bbd9f8cfcf..7b241dcb73 100644 --- a/src/liblibc/libc-test/build.rs +++ b/src/liblibc/libc-test/build.rs @@ -162,6 +162,7 @@ fn main() { cfg.header("sys/ipc.h"); cfg.header("sys/msg.h"); cfg.header("sys/shm.h"); + cfg.header("sys/fsuid.h"); cfg.header("pty.h"); cfg.header("shadow.h"); } @@ -215,6 +216,10 @@ fn main() { cfg.header("sys/ioctl_compat.h"); } + if linux || freebsd || netbsd || apple { + cfg.header("aio.h"); + } + cfg.type_name(move |ty, is_struct| { match ty { // Just pass all these through, no need for a "struct" prefix @@ -293,6 +298,9 @@ fn main() { // The alignment of this is 4 on 64-bit OSX... "kevent" if apple && x86_64 => true, + // This is actually a union, not a struct + "sigval" => true, + _ => false } }); @@ -353,6 +361,19 @@ fn main() { "QFMT_VFS_OLD" | "QFMT_VFS_V0" if mips && linux => true, + // These constants were removed in FreeBSD 11 (svn r273250) but will + // still be accepted and ignored at runtime. + "MAP_RENAME" | + "MAP_NORESERVE" if freebsd => true, + + // These constants were removed in FreeBSD 11 (svn r262489), + // and they've never had any legitimate use outside of the + // base system anyway. + "CTL_MAXID" | + "KERN_MAXID" | + "HW_MAXID" | + "USER_MAXID" if freebsd => true, + _ => false, } }); @@ -417,11 +438,23 @@ fn main() { // [3]: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/sys/eventfd.h;h=6295f32e937e779e74318eb9d3bdbe76aef8a8f3;hb=4e42b5b8f89f0e288e68be7ad70f9525aebc2cff#l34 "eventfd" if linux => true, - // The `uname` funcion in freebsd is now an inline wrapper that + // The `uname` function in freebsd is now an inline wrapper that // delegates to another, but the symbol still exists, so don't check // the symbol. "uname" if freebsd => true, + // aio_waitcomplete's return type changed between FreeBSD 10 and 11. + "aio_waitcomplete" if freebsd => true, + + // lio_listio confuses the checker, probably because one of its + // arguments is an array + "lio_listio" if freebsd => true, + "lio_listio" if musl => true, + + // Apparently the NDK doesn't have this defined on android, but + // it's in a header file? 
+ "endpwent" if android => true, + _ => false, } }); @@ -441,7 +474,13 @@ fn main() { // sighandler_t type is super weird (struct_ == "sigaction" && field == "sa_sigaction") || // __timeval type is a patch which doesn't exist in glibc - (linux && struct_ == "utmpx" && field == "ut_tv") + (linux && struct_ == "utmpx" && field == "ut_tv") || + // sigval is actually a union, but we pretend it's a struct + (struct_ == "sigevent" && field == "sigev_value") || + // aio_buf is "volatile void*" and Rust doesn't understand volatile + (struct_ == "aiocb" && field == "aio_buf") || + // stack_t.ss_sp's type changed from FreeBSD 10 to 11 in svn r294930 + (freebsd && struct_ == "stack_t" && field == "ss_sp") }); cfg.skip_field(move |struct_, field| { @@ -451,7 +490,9 @@ fn main() { // musl names this __dummy1 but it's still there (musl && struct_ == "glob_t" && field == "gl_flags") || // musl seems to define this as an *anonymous* bitfield - (musl && struct_ == "statvfs" && field == "__f_unused") + (musl && struct_ == "statvfs" && field == "__f_unused") || + // sigev_notify_thread_id is actually part of a sigev_un union + (struct_ == "sigevent" && field == "sigev_notify_thread_id") }); cfg.fn_cname(move |name, cname| { diff --git a/src/liblibc/src/lib.rs b/src/liblibc/src/lib.rs index dcc4791f9a..5b80aca66c 100644 --- a/src/liblibc/src/lib.rs +++ b/src/liblibc/src/lib.rs @@ -75,6 +75,7 @@ // Attributes needed when building as part of the standard library #![cfg_attr(stdbuild, feature(no_std, core, core_slice_ext, staged_api, custom_attribute, cfg_target_vendor))] +#![cfg_attr(stdbuild, feature(link_cfg))] #![cfg_attr(stdbuild, no_std)] #![cfg_attr(stdbuild, staged_api)] #![cfg_attr(stdbuild, allow(warnings))] @@ -265,6 +266,9 @@ cfg_if! { if #[cfg(windows)] { mod windows; pub use windows::*; + } else if #[cfg(target_os = "redox")] { + mod redox; + pub use redox::*; } else if #[cfg(unix)] { mod unix; pub use unix::*; diff --git a/src/liblibc/src/redox.rs b/src/liblibc/src/redox.rs new file mode 100644 index 0000000000..7a05a3957b --- /dev/null +++ b/src/liblibc/src/redox.rs @@ -0,0 +1,49 @@ +pub type c_char = i8; +pub type c_long = i64; +pub type c_ulong = u64; + +pub type wchar_t = i16; + +pub type off_t = usize; +pub type mode_t = u16; +pub type time_t = i64; +pub type pid_t = usize; +pub type gid_t = usize; +pub type uid_t = usize; + +pub type in_addr_t = u32; +pub type in_port_t = u16; + +pub type socklen_t = u32; +pub type sa_family_t = u16; + +s! { + pub struct in_addr { + pub s_addr: in_addr_t, + } + + pub struct in6_addr { + pub s6_addr: [u8; 16], + __align: [u32; 0], + } + + pub struct sockaddr { + pub sa_family: sa_family_t, + pub sa_data: [::c_char; 14], + } + + pub struct sockaddr_in { + pub sin_family: sa_family_t, + pub sin_port: ::in_port_t, + pub sin_addr: ::in_addr, + pub sin_zero: [u8; 8], + } + + pub struct sockaddr_in6 { + pub sin6_family: sa_family_t, + pub sin6_port: in_port_t, + pub sin6_flowinfo: u32, + pub sin6_addr: ::in6_addr, + pub sin6_scope_id: u32, + } +} diff --git a/src/liblibc/src/unix/bsd/apple/mod.rs b/src/liblibc/src/unix/bsd/apple/mod.rs index aa4d1ac745..919d12680a 100644 --- a/src/liblibc/src/unix/bsd/apple/mod.rs +++ b/src/liblibc/src/unix/bsd/apple/mod.rs @@ -25,6 +25,16 @@ pub type sem_t = ::c_int; pub enum timezone {} s! 
{ + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_offset: ::off_t, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_reqprio: ::c_int, + pub aio_sigevent: sigevent, + pub aio_lio_opcode: ::c_int + } + pub struct utmpx { pub ut_user: [::c_char; _UTX_USERSIZE], pub ut_id: [::c_char; _UTX_IDSIZE], @@ -303,6 +313,14 @@ s! { pub int_p_sign_posn: ::c_char, pub int_n_sign_posn: ::c_char, } + + pub struct sigevent { + pub sigev_notify: ::c_int, + pub sigev_signo: ::c_int, + pub sigev_value: ::sigval, + __unused1: *mut ::c_void, //actually a function pointer + pub sigev_notify_attributes: *mut ::pthread_attr_t + } } pub const _UTX_USERSIZE: usize = 256; @@ -1331,6 +1349,20 @@ pub const PRIO_DARWIN_NONUI: ::c_int = 0x1001; pub const SEM_FAILED: *mut sem_t = -1isize as *mut ::sem_t; +pub const SIGEV_NONE: ::c_int = 0; +pub const SIGEV_SIGNAL: ::c_int = 1; +pub const SIGEV_THREAD: ::c_int = 3; + +pub const AIO_CANCELED: ::c_int = 2; +pub const AIO_NOTCANCELED: ::c_int = 4; +pub const AIO_ALLDONE: ::c_int = 1; +pub const AIO_LISTIO_MAX: ::c_int = 16; +pub const LIO_NOP: ::c_int = 0; +pub const LIO_WRITE: ::c_int = 2; +pub const LIO_READ: ::c_int = 1; +pub const LIO_WAIT: ::c_int = 2; +pub const LIO_NOWAIT: ::c_int = 1; + f! { pub fn WSTOPSIG(status: ::c_int) -> ::c_int { status >> 8 @@ -1354,6 +1386,19 @@ f! { } extern { + pub fn aio_read(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_write(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_fsync(op: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_error(aiocbp: *const aiocb) -> ::c_int; + pub fn aio_return(aiocbp: *mut aiocb) -> ::ssize_t; + #[cfg_attr(all(target_os = "macos", target_arch = "x86"), + link_name = "aio_suspend$UNIX2003")] + pub fn aio_suspend(aiocb_list: *const *const aiocb, nitems: ::c_int, + timeout: *const ::timespec) -> ::c_int; + pub fn aio_cancel(fd: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn lio_listio(mode: ::c_int, aiocb_list: *const *mut aiocb, + nitems: ::c_int, sevp: *mut sigevent) -> ::c_int; + pub fn lutimes(file: *const ::c_char, times: *const ::timeval) -> ::c_int; pub fn getutxent() -> *mut utmpx; diff --git a/src/liblibc/src/unix/bsd/freebsdlike/dragonfly/mod.rs b/src/liblibc/src/unix/bsd/freebsdlike/dragonfly/mod.rs index 179cd913b3..3a91bca008 100644 --- a/src/liblibc/src/unix/bsd/freebsdlike/dragonfly/mod.rs +++ b/src/liblibc/src/unix/bsd/freebsdlike/dragonfly/mod.rs @@ -15,6 +15,18 @@ pub type fsblkcnt_t = u64; pub type fsfilcnt_t = u64; s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_offset: ::off_t, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: sigevent, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + _aio_val: ::c_int, + _aio_err: ::c_int + } + pub struct dirent { pub d_fileno: ::ino_t, pub d_namlen: u16, @@ -33,6 +45,15 @@ s! 
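As an aside on the `aiocb`/`aio_*` bindings declared above: a rough sketch of how they might be driven from user code, assuming a platform where the libc crate exposes them (the BSDs and macOS in this patch). Error handling is elided, and zero-initialising the control block is only illustrative; this is not an example from the patch itself:

```rust
extern crate libc;

use std::fs::File;
use std::mem;
use std::os::unix::io::AsRawFd;

fn main() {
    let file = File::open("/etc/hosts").expect("open failed");
    let mut buf = [0u8; 128];

    unsafe {
        // Zero the control block, then fill in the portable fields.
        let mut cb: libc::aiocb = mem::zeroed();
        cb.aio_fildes = file.as_raw_fd();
        cb.aio_buf = buf.as_mut_ptr() as *mut libc::c_void;
        cb.aio_nbytes = buf.len();
        cb.aio_offset = 0;

        // Queue the read, then poll until the kernel reports completion.
        assert_eq!(libc::aio_read(&mut cb), 0);
        while libc::aio_error(&cb) == libc::EINPROGRESS {}

        let n = libc::aio_return(&mut cb);
        println!("read {} bytes asynchronously", n);
    }
}
```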
{ pub node: [u8; 6], } + pub struct sigevent { + pub sigev_notify: ::c_int, + pub sigev_signo: ::c_int, //actually a union + #[cfg(target_pointer_width = "64")] + __unused1: ::c_int, + pub sigev_value: ::sigval, + __unused2: *mut ::c_void //actually a function pointer + } + pub struct statvfs { pub f_bsize: ::c_ulong, pub f_frsize: ::c_ulong, diff --git a/src/liblibc/src/unix/bsd/freebsdlike/freebsd/mod.rs b/src/liblibc/src/unix/bsd/freebsdlike/freebsd/mod.rs index a89440ebde..9cd0a34ba2 100644 --- a/src/liblibc/src/unix/bsd/freebsdlike/freebsd/mod.rs +++ b/src/liblibc/src/unix/bsd/freebsdlike/freebsd/mod.rs @@ -1,6 +1,7 @@ pub type fflags_t = u32; pub type clock_t = i32; pub type ino_t = u32; +pub type lwpid_t = i32; pub type nlink_t = u16; pub type blksize_t = u32; pub type clockid_t = ::c_int; @@ -9,6 +10,22 @@ pub type fsblkcnt_t = ::uint64_t; pub type fsfilcnt_t = ::uint64_t; s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_offset: ::off_t, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + __unused1: [::c_int; 2], + __unused2: *mut ::c_void, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + // unused 3 through 5 are the __aiocb_private structure + __unused3: ::c_long, + __unused4: ::c_long, + __unused5: *mut ::c_void, + pub aio_sigevent: sigevent + } + pub struct dirent { pub d_fileno: u32, pub d_reclen: u16, @@ -17,6 +34,18 @@ s! { pub d_name: [::c_char; 256], } + pub struct sigevent { + pub sigev_notify: ::c_int, + pub sigev_signo: ::c_int, + pub sigev_value: ::sigval, + //The rest of the structure is actually a union. We expose only + //sigev_notify_thread_id because it's the most useful union member. + pub sigev_notify_thread_id: ::lwpid_t, + #[cfg(target_pointer_width = "64")] + __unused1: ::c_int, + __unused2: [::c_long; 7] + } + pub struct statvfs { pub f_bavail: ::fsblkcnt_t, pub f_bfree: ::fsblkcnt_t, @@ -32,6 +61,8 @@ s! 
{ } } +pub const SIGEV_THREAD_ID: ::c_int = 4; + pub const RAND_MAX: ::c_int = 0x7fff_fffd; pub const PTHREAD_STACK_MIN: ::size_t = 2048; pub const SIGSTKSZ: ::size_t = 34816; @@ -138,7 +169,6 @@ pub const CTL_HW: ::c_int = 6; pub const CTL_MACHDEP: ::c_int = 7; pub const CTL_USER: ::c_int = 8; pub const CTL_P1003_1B: ::c_int = 9; -pub const CTL_MAXID: ::c_int = 10; pub const KERN_OSTYPE: ::c_int = 1; pub const KERN_OSRELEASE: ::c_int = 2; pub const KERN_OSREV: ::c_int = 3; @@ -176,7 +206,6 @@ pub const KERN_LOGSIGEXIT: ::c_int = 34; pub const KERN_IOV_MAX: ::c_int = 35; pub const KERN_HOSTUUID: ::c_int = 36; pub const KERN_ARND: ::c_int = 37; -pub const KERN_MAXID: ::c_int = 38; pub const KERN_PROC_ALL: ::c_int = 0; pub const KERN_PROC_PID: ::c_int = 1; pub const KERN_PROC_PGRP: ::c_int = 2; @@ -223,7 +252,6 @@ pub const HW_DISKSTATS: ::c_int = 9; pub const HW_FLOATINGPT: ::c_int = 10; pub const HW_MACHINE_ARCH: ::c_int = 11; pub const HW_REALMEM: ::c_int = 12; -pub const HW_MAXID: ::c_int = 13; pub const USER_CS_PATH: ::c_int = 1; pub const USER_BC_BASE_MAX: ::c_int = 2; pub const USER_BC_DIM_MAX: ::c_int = 3; @@ -244,7 +272,6 @@ pub const USER_POSIX2_SW_DEV: ::c_int = 17; pub const USER_POSIX2_UPE: ::c_int = 18; pub const USER_STREAM_MAX: ::c_int = 19; pub const USER_TZNAME_MAX: ::c_int = 20; -pub const USER_MAXID: ::c_int = 21; pub const CTL_P1003_1B_ASYNCHRONOUS_IO: ::c_int = 1; pub const CTL_P1003_1B_MAPPED_FILES: ::c_int = 2; pub const CTL_P1003_1B_MEMLOCK: ::c_int = 3; @@ -270,6 +297,20 @@ pub const CTL_P1003_1B_SEM_NSEMS_MAX: ::c_int = 22; pub const CTL_P1003_1B_SEM_VALUE_MAX: ::c_int = 23; pub const CTL_P1003_1B_SIGQUEUE_MAX: ::c_int = 24; pub const CTL_P1003_1B_TIMER_MAX: ::c_int = 25; + +// The *_MAXID constants never should've been used outside of the +// FreeBSD base system. And with the exception of CTL_P1003_1B_MAXID, +// they were all removed in svn r262489. They remain here for backwards +// compatibility only, and are scheduled to be removed in libc 1.0.0. +#[doc(hidden)] +pub const CTL_MAXID: ::c_int = 10; +#[doc(hidden)] +pub const KERN_MAXID: ::c_int = 38; +#[doc(hidden)] +pub const HW_MAXID: ::c_int = 13; +#[doc(hidden)] +pub const USER_MAXID: ::c_int = 21; +#[doc(hidden)] pub const CTL_P1003_1B_MAXID: ::c_int = 26; pub const MSG_NOSIGNAL: ::c_int = 0x20000; diff --git a/src/liblibc/src/unix/bsd/freebsdlike/mod.rs b/src/liblibc/src/unix/bsd/freebsdlike/mod.rs index 2cfb323c06..0e8d69adb8 100644 --- a/src/liblibc/src/unix/bsd/freebsdlike/mod.rs +++ b/src/liblibc/src/unix/bsd/freebsdlike/mod.rs @@ -92,6 +92,7 @@ s! { } pub struct stack_t { + // In FreeBSD 11 and later, ss_sp is actually a void* pub ss_sp: *mut ::c_char, pub ss_size: ::size_t, pub ss_flags: ::c_int, @@ -176,6 +177,21 @@ s! 
{ } } +pub const AIO_LISTIO_MAX: ::c_int = 16; +pub const AIO_CANCELED: ::c_int = 1; +pub const AIO_NOTCANCELED: ::c_int = 2; +pub const AIO_ALLDONE: ::c_int = 3; +pub const LIO_NOP: ::c_int = 0; +pub const LIO_WRITE: ::c_int = 1; +pub const LIO_READ: ::c_int = 2; +pub const LIO_WAIT: ::c_int = 1; +pub const LIO_NOWAIT: ::c_int = 0; + +pub const SIGEV_NONE: ::c_int = 0; +pub const SIGEV_SIGNAL: ::c_int = 1; +pub const SIGEV_THREAD: ::c_int = 2; +pub const SIGEV_KEVENT: ::c_int = 3; + pub const EMPTY: ::c_short = 0; pub const BOOT_TIME: ::c_short = 1; pub const OLD_TIME: ::c_short = 2; @@ -419,6 +435,7 @@ pub const ENOPROTOOPT: ::c_int = 42; pub const EPROTONOSUPPORT: ::c_int = 43; pub const ESOCKTNOSUPPORT: ::c_int = 44; pub const EOPNOTSUPP: ::c_int = 45; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 46; pub const EAFNOSUPPORT: ::c_int = 47; pub const EADDRINUSE: ::c_int = 48; @@ -745,6 +762,19 @@ extern { #[link(name = "util")] extern { + pub fn aio_read(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_write(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_fsync(op: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_error(aiocbp: *const aiocb) -> ::c_int; + pub fn aio_return(aiocbp: *mut aiocb) -> ::ssize_t; + pub fn aio_suspend(aiocb_list: *const *const aiocb, nitems: ::c_int, + timeout: *const ::timespec) -> ::c_int; + pub fn aio_cancel(fd: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn lio_listio(mode: ::c_int, aiocb_list: *const *mut aiocb, + nitems: ::c_int, sevp: *mut sigevent) -> ::c_int; + pub fn aio_waitcomplete(iocbp: *mut *mut aiocb, + timeout: *mut ::timespec) -> ::ssize_t; + pub fn getnameinfo(sa: *const ::sockaddr, salen: ::socklen_t, host: *mut ::c_char, diff --git a/src/liblibc/src/unix/bsd/netbsdlike/netbsd/mod.rs b/src/liblibc/src/unix/bsd/netbsdlike/netbsd/mod.rs index aa46aff191..57a34130c9 100644 --- a/src/liblibc/src/unix/bsd/netbsdlike/netbsd/mod.rs +++ b/src/liblibc/src/unix/bsd/netbsdlike/netbsd/mod.rs @@ -6,6 +6,19 @@ pub type fsblkcnt_t = ::uint64_t; pub type fsfilcnt_t = ::uint64_t; s! { + pub struct aiocb { + pub aio_offset: ::off_t, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_sigevent: ::sigevent, + _state: ::c_int, + _errno: ::c_int, + _retval: ::ssize_t + } + pub struct dirent { pub d_fileno: ::ino_t, pub d_reclen: u16, @@ -30,6 +43,14 @@ s! 
{ __unused8: *mut ::c_void, } + pub struct sigevent { + pub sigev_notify: ::c_int, + pub sigev_signo: ::c_int, + pub sigev_value: ::sigval, + __unused1: *mut ::c_void, //actually a function pointer + pub sigev_notify_attributes: *mut ::c_void + } + pub struct sigset_t { __bits: [u32; 4], } @@ -549,7 +570,32 @@ pub const KERN_PROC_RGID: ::c_int = 8; pub const EAI_SYSTEM: ::c_int = 11; +pub const AIO_CANCELED: ::c_int = 1; +pub const AIO_NOTCANCELED: ::c_int = 2; +pub const AIO_ALLDONE: ::c_int = 3; +pub const LIO_NOP: ::c_int = 0; +pub const LIO_WRITE: ::c_int = 1; +pub const LIO_READ: ::c_int = 2; +pub const LIO_WAIT: ::c_int = 1; +pub const LIO_NOWAIT: ::c_int = 0; + +pub const SIGEV_NONE: ::c_int = 0; +pub const SIGEV_SIGNAL: ::c_int = 1; +pub const SIGEV_THREAD: ::c_int = 2; + extern { + pub fn aio_read(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_write(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_fsync(op: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_error(aiocbp: *const aiocb) -> ::c_int; + pub fn aio_return(aiocbp: *mut aiocb) -> ::ssize_t; + #[link_name = "__aio_suspend50"] + pub fn aio_suspend(aiocb_list: *const *const aiocb, nitems: ::c_int, + timeout: *const ::timespec) -> ::c_int; + pub fn aio_cancel(fd: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn lio_listio(mode: ::c_int, aiocb_list: *const *mut aiocb, + nitems: ::c_int, sevp: *mut sigevent) -> ::c_int; + pub fn lutimes(file: *const ::c_char, times: *const ::timeval) -> ::c_int; pub fn getnameinfo(sa: *const ::sockaddr, salen: ::socklen_t, diff --git a/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/mod.rs b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/mod.rs index d7afb11edd..9d49250833 100644 --- a/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/mod.rs +++ b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/mod.rs @@ -1,7 +1,5 @@ -pub type c_long = i64; -pub type c_ulong = u64; pub type clock_t = i64; -pub type suseconds_t = i64; +pub type suseconds_t = ::c_long; pub type dev_t = i32; pub type sigset_t = ::c_uint; pub type blksize_t = ::int32_t; @@ -110,6 +108,9 @@ s! { pub si_code: ::c_int, pub si_errno: ::c_int, pub si_addr: *mut ::c_char, + #[cfg(target_pointer_width = "32")] + __pad: [u8; 112], + #[cfg(target_pointer_width = "64")] __pad: [u8; 108], } @@ -433,6 +434,8 @@ extern { newlen: ::size_t) -> ::c_int; pub fn getentropy(buf: *mut ::c_void, buflen: ::size_t) -> ::c_int; + pub fn pledge(promises: *const ::c_char, + paths: *mut *const ::c_char) -> ::c_int; } cfg_if! { @@ -446,3 +449,6 @@ cfg_if! 
{ // Unknown target_os } } + +mod other; +pub use self::other::*; diff --git a/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b32/mod.rs b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b32/mod.rs new file mode 100644 index 0000000000..9b0b338b91 --- /dev/null +++ b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b32/mod.rs @@ -0,0 +1,2 @@ +pub type c_long = i32; +pub type c_ulong = u32; diff --git a/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b64/mod.rs b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b64/mod.rs new file mode 100644 index 0000000000..b07c476aa4 --- /dev/null +++ b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/b64/mod.rs @@ -0,0 +1,2 @@ +pub type c_long = i64; +pub type c_ulong = u64; diff --git a/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/mod.rs b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/mod.rs new file mode 100644 index 0000000000..e4087da7bc --- /dev/null +++ b/src/liblibc/src/unix/bsd/netbsdlike/openbsdlike/other/mod.rs @@ -0,0 +1,11 @@ +cfg_if! { + if #[cfg(target_arch = "x86_64")] { + mod b64; + pub use self::b64::*; + } else if #[cfg(target_arch = "x86")] { + mod b32; + pub use self::b32::*; + } else { + // Unknown target_arch + } +} diff --git a/src/liblibc/src/unix/mod.rs b/src/liblibc/src/unix/mod.rs index 3e03eea675..cf3a87ab48 100644 --- a/src/liblibc/src/unix/mod.rs +++ b/src/liblibc/src/unix/mod.rs @@ -119,6 +119,11 @@ s! { pub l_onoff: ::c_int, pub l_linger: ::c_int, } + + pub struct sigval { + // Actually a union of an int and a void* + pub sival_ptr: *mut ::c_void + } } pub const SIG_DFL: sighandler_t = 0 as sighandler_t; @@ -205,7 +210,8 @@ cfg_if! { // cargo build, don't pull in anything extra as the libstd dep // already pulls in all libs. } else if #[cfg(any(all(target_env = "musl", not(target_arch = "mips"))))] { - #[link(name = "c", kind = "static")] + #[link(name = "c", kind = "static", cfg(target_feature = "crt-static"))] + #[link(name = "c", cfg(not(target_feature = "crt-static")))] extern {} } else if #[cfg(target_os = "emscripten")] { #[link(name = "c")] @@ -425,6 +431,7 @@ extern { pub fn nanosleep(rqtp: *const timespec, rmtp: *mut timespec) -> ::c_int; pub fn tcgetpgrp(fd: ::c_int) -> pid_t; + pub fn tcsetpgrp(fd: ::c_int, pgrp: ::pid_t) -> ::c_int; pub fn ttyname(fd: ::c_int) -> *mut c_char; pub fn unlink(c: *const c_char) -> ::c_int; #[cfg_attr(all(target_os = "macos", target_arch = "x86"), diff --git a/src/liblibc/src/unix/notbsd/android/mod.rs b/src/liblibc/src/unix/notbsd/android/mod.rs index efc136817d..a88f8ef7da 100644 --- a/src/liblibc/src/unix/notbsd/android/mod.rs +++ b/src/liblibc/src/unix/notbsd/android/mod.rs @@ -331,6 +331,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; @@ -641,6 +642,8 @@ pub const NLA_F_NESTED: ::c_int = 1 << 15; pub const NLA_F_NET_BYTEORDER: ::c_int = 1 << 14; pub const NLA_TYPE_MASK: ::c_int = !(NLA_F_NESTED | NLA_F_NET_BYTEORDER); +pub const SIGEV_THREAD_ID: ::c_int = 4; + f! 
{ pub fn sigemptyset(set: *mut sigset_t) -> ::c_int { *set = 0; diff --git a/src/liblibc/src/unix/notbsd/linux/mips/mips32.rs b/src/liblibc/src/unix/notbsd/linux/mips/mips32.rs new file mode 100644 index 0000000000..8b9c5ca767 --- /dev/null +++ b/src/liblibc/src/unix/notbsd/linux/mips/mips32.rs @@ -0,0 +1,281 @@ +pub type c_char = i8; +pub type c_long = i32; +pub type c_ulong = u32; +pub type clock_t = i32; +pub type time_t = i32; +pub type suseconds_t = i32; +pub type wchar_t = i32; +pub type off_t = i32; +pub type ino_t = u32; +pub type blkcnt_t = i32; +pub type blksize_t = i32; +pub type nlink_t = u32; +pub type fsblkcnt_t = ::c_ulong; +pub type fsfilcnt_t = ::c_ulong; +pub type rlim_t = c_ulong; + +s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: ::sigevent, + __next_prio: *mut aiocb, + __abs_prio: ::c_int, + __policy: ::c_int, + __error_code: ::c_int, + __return_value: ::ssize_t, + pub aio_offset: off_t, + __unused1: [::c_char; 4], + __glibc_reserved: [::c_char; 32] + } + + pub struct stat { + pub st_dev: ::c_ulong, + st_pad1: [::c_long; 3], + pub st_ino: ::ino_t, + pub st_mode: ::mode_t, + pub st_nlink: ::nlink_t, + pub st_uid: ::uid_t, + pub st_gid: ::gid_t, + pub st_rdev: ::c_ulong, + pub st_pad2: [::c_long; 2], + pub st_size: ::off_t, + st_pad3: ::c_long, + pub st_atime: ::time_t, + pub st_atime_nsec: ::c_long, + pub st_mtime: ::time_t, + pub st_mtime_nsec: ::c_long, + pub st_ctime: ::time_t, + pub st_ctime_nsec: ::c_long, + pub st_blksize: ::blksize_t, + pub st_blocks: ::blkcnt_t, + st_pad5: [::c_long; 14], + } + + pub struct stat64 { + pub st_dev: ::c_ulong, + st_pad1: [::c_long; 3], + pub st_ino: ::ino64_t, + pub st_mode: ::mode_t, + pub st_nlink: ::nlink_t, + pub st_uid: ::uid_t, + pub st_gid: ::gid_t, + pub st_rdev: ::c_ulong, + st_pad2: [::c_long; 2], + pub st_size: ::off64_t, + pub st_atime: ::time_t, + pub st_atime_nsec: ::c_long, + pub st_mtime: ::time_t, + pub st_mtime_nsec: ::c_long, + pub st_ctime: ::time_t, + pub st_ctime_nsec: ::c_long, + pub st_blksize: ::blksize_t, + st_pad3: ::c_long, + pub st_blocks: ::blkcnt64_t, + st_pad5: [::c_long; 14], + } + + pub struct pthread_attr_t { + __size: [u32; 9] + } + + pub struct sigaction { + pub sa_flags: ::c_int, + pub sa_sigaction: ::sighandler_t, + pub sa_mask: sigset_t, + _restorer: *mut ::c_void, + _resv: [::c_int; 1], + } + + pub struct stack_t { + pub ss_sp: *mut ::c_void, + pub ss_size: ::size_t, + pub ss_flags: ::c_int, + } + + pub struct sigset_t { + __val: [::c_ulong; 32], + } + + pub struct siginfo_t { + pub si_signo: ::c_int, + pub si_code: ::c_int, + pub si_errno: ::c_int, + pub _pad: [::c_int; 29], + } + + pub struct glob64_t { + pub gl_pathc: ::size_t, + pub gl_pathv: *mut *mut ::c_char, + pub gl_offs: ::size_t, + pub gl_flags: ::c_int, + + __unused1: *mut ::c_void, + __unused2: *mut ::c_void, + __unused3: *mut ::c_void, + __unused4: *mut ::c_void, + __unused5: *mut ::c_void, + } + + pub struct ipc_perm { + pub __key: ::key_t, + pub uid: ::uid_t, + pub gid: ::gid_t, + pub cuid: ::uid_t, + pub cgid: ::gid_t, + pub mode: ::c_uint, + pub __seq: ::c_ushort, + __pad1: ::c_ushort, + __unused1: ::c_ulong, + __unused2: ::c_ulong + } + + pub struct shmid_ds { + pub shm_perm: ::ipc_perm, + pub shm_segsz: ::size_t, + pub shm_atime: ::time_t, + pub shm_dtime: ::time_t, + pub shm_ctime: ::time_t, + pub shm_cpid: ::pid_t, + pub shm_lpid: ::pid_t, + pub shm_nattch: ::shmatt_t, + 
__unused4: ::c_ulong, + __unused5: ::c_ulong + } + + pub struct msqid_ds { + pub msg_perm: ::ipc_perm, + #[cfg(target_endian = "big")] + __glibc_reserved1: ::c_ulong, + pub msg_stime: ::time_t, + #[cfg(target_endian = "little")] + __glibc_reserved1: ::c_ulong, + #[cfg(target_endian = "big")] + __glibc_reserved2: ::c_ulong, + pub msg_rtime: ::time_t, + #[cfg(target_endian = "little")] + __glibc_reserved2: ::c_ulong, + #[cfg(target_endian = "big")] + __glibc_reserved3: ::c_ulong, + pub msg_ctime: ::time_t, + #[cfg(target_endian = "little")] + __glibc_reserved3: ::c_ulong, + __msg_cbytes: ::c_ulong, + pub msg_qnum: ::msgqnum_t, + pub msg_qbytes: ::msglen_t, + pub msg_lspid: ::pid_t, + pub msg_lrpid: ::pid_t, + __glibc_reserved4: ::c_ulong, + __glibc_reserved5: ::c_ulong, + } + + pub struct statfs { + pub f_type: ::c_long, + pub f_bsize: ::c_long, + pub f_frsize: ::c_long, + pub f_blocks: ::fsblkcnt_t, + pub f_bfree: ::fsblkcnt_t, + pub f_files: ::fsblkcnt_t, + pub f_ffree: ::fsblkcnt_t, + pub f_bavail: ::fsblkcnt_t, + pub f_fsid: ::fsid_t, + + pub f_namelen: ::c_long, + f_spare: [::c_long; 6], + } + + pub struct msghdr { + pub msg_name: *mut ::c_void, + pub msg_namelen: ::socklen_t, + pub msg_iov: *mut ::iovec, + pub msg_iovlen: ::size_t, + pub msg_control: *mut ::c_void, + pub msg_controllen: ::size_t, + pub msg_flags: ::c_int, + } + + pub struct termios { + pub c_iflag: ::tcflag_t, + pub c_oflag: ::tcflag_t, + pub c_cflag: ::tcflag_t, + pub c_lflag: ::tcflag_t, + pub c_line: ::cc_t, + pub c_cc: [::cc_t; ::NCCS], + } + + pub struct flock { + pub l_type: ::c_short, + pub l_whence: ::c_short, + pub l_start: ::off_t, + pub l_len: ::off_t, + pub l_sysid: ::c_long, + pub l_pid: ::pid_t, + pad: [::c_long; 4], + } + + pub struct sysinfo { + pub uptime: ::c_long, + pub loads: [::c_ulong; 3], + pub totalram: ::c_ulong, + pub freeram: ::c_ulong, + pub sharedram: ::c_ulong, + pub bufferram: ::c_ulong, + pub totalswap: ::c_ulong, + pub freeswap: ::c_ulong, + pub procs: ::c_ushort, + pub pad: ::c_ushort, + pub totalhigh: ::c_ulong, + pub freehigh: ::c_ulong, + pub mem_unit: ::c_uint, + pub _f: [::c_char; 8], + } + + // FIXME this is actually a union + pub struct sem_t { + #[cfg(target_pointer_width = "32")] + __size: [::c_char; 16], + #[cfg(target_pointer_width = "64")] + __size: [::c_char; 32], + __align: [::c_long; 0], + } +} + +pub const __SIZEOF_PTHREAD_CONDATTR_T: usize = 4; +pub const __SIZEOF_PTHREAD_MUTEX_T: usize = 24; +pub const __SIZEOF_PTHREAD_RWLOCK_T: usize = 32; +pub const __SIZEOF_PTHREAD_MUTEXATTR_T: usize = 4; + +pub const RLIM_INFINITY: ::rlim_t = 0x7fffffff; + +pub const SYS_gettid: ::c_long = 4222; // Valid for O32 + +#[link(name = "util")] +extern { + pub fn sysctl(name: *mut ::c_int, + namelen: ::c_int, + oldp: *mut ::c_void, + oldlenp: *mut ::size_t, + newp: *mut ::c_void, + newlen: ::size_t) + -> ::c_int; + pub fn ioctl(fd: ::c_int, request: ::c_ulong, ...) -> ::c_int; + pub fn backtrace(buf: *mut *mut ::c_void, + sz: ::c_int) -> ::c_int; + pub fn glob64(pattern: *const ::c_char, + flags: ::c_int, + errfunc: ::dox::Option ::c_int>, + pglob: *mut glob64_t) -> ::c_int; + pub fn globfree64(pglob: *mut glob64_t); + pub fn ptrace(request: ::c_uint, ...) 
-> ::c_long; + pub fn pthread_attr_getaffinity_np(attr: *const ::pthread_attr_t, + cpusetsize: ::size_t, + cpuset: *mut ::cpu_set_t) -> ::c_int; + pub fn pthread_attr_setaffinity_np(attr: *mut ::pthread_attr_t, + cpusetsize: ::size_t, + cpuset: *const ::cpu_set_t) -> ::c_int; +} diff --git a/src/liblibc/src/unix/notbsd/linux/mips/mips64.rs b/src/liblibc/src/unix/notbsd/linux/mips/mips64.rs new file mode 100644 index 0000000000..71dfb71d49 --- /dev/null +++ b/src/liblibc/src/unix/notbsd/linux/mips/mips64.rs @@ -0,0 +1,219 @@ +pub type blkcnt_t = i64; +pub type blksize_t = i64; +pub type c_char = i8; +pub type c_long = i64; +pub type c_ulong = u64; +pub type fsblkcnt_t = ::c_ulong; +pub type fsfilcnt_t = ::c_ulong; +pub type ino_t = u64; +pub type nlink_t = u64; +pub type off_t = i64; +pub type rlim_t = ::c_ulong; +pub type suseconds_t = i64; +pub type time_t = i64; +pub type wchar_t = i32; + +s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: ::sigevent, + __next_prio: *mut aiocb, + __abs_prio: ::c_int, + __policy: ::c_int, + __error_code: ::c_int, + __return_value: ::ssize_t, + pub aio_offset: off_t, + __glibc_reserved: [::c_char; 32] + } + + pub struct stat { + pub st_dev: ::c_ulong, + st_pad1: [::c_long; 2], + pub st_ino: ::ino_t, + pub st_mode: ::mode_t, + pub st_nlink: ::nlink_t, + pub st_uid: ::uid_t, + pub st_gid: ::gid_t, + pub st_rdev: ::c_ulong, + st_pad2: [::c_ulong; 1], + pub st_size: ::off_t, + st_pad3: ::c_long, + pub st_atime: ::time_t, + pub st_atime_nsec: ::c_long, + pub st_mtime: ::time_t, + pub st_mtime_nsec: ::c_long, + pub st_ctime: ::time_t, + pub st_ctime_nsec: ::c_long, + pub st_blksize: ::blksize_t, + st_pad4: ::c_long, + pub st_blocks: ::blkcnt_t, + st_pad5: [::c_long; 7], + } + + pub struct stat64 { + pub st_dev: ::c_ulong, + st_pad1: [::c_long; 2], + pub st_ino: ::ino64_t, + pub st_mode: ::mode_t, + pub st_nlink: ::nlink_t, + pub st_uid: ::uid_t, + pub st_gid: ::gid_t, + pub st_rdev: ::c_ulong, + st_pad2: [::c_long; 2], + pub st_size: ::off64_t, + pub st_atime: ::time_t, + pub st_atime_nsec: ::c_long, + pub st_mtime: ::time_t, + pub st_mtime_nsec: ::c_long, + pub st_ctime: ::time_t, + pub st_ctime_nsec: ::c_long, + pub st_blksize: ::blksize_t, + st_pad3: ::c_long, + pub st_blocks: ::blkcnt64_t, + st_pad5: [::c_long; 7], + } + + pub struct pthread_attr_t { + __size: [::c_ulong; 7] + } + + pub struct sigaction { + pub sa_flags: ::c_int, + pub sa_sigaction: ::sighandler_t, + pub sa_mask: sigset_t, + _restorer: *mut ::c_void, + } + + pub struct stack_t { + pub ss_sp: *mut ::c_void, + pub ss_size: ::size_t, + pub ss_flags: ::c_int, + } + + pub struct sigset_t { + __size: [::c_ulong; 16], + } + + pub struct siginfo_t { + pub si_signo: ::c_int, + pub si_code: ::c_int, + pub si_errno: ::c_int, + _pad: ::c_int, + _pad2: [::c_long; 14], + } + + pub struct ipc_perm { + pub __key: ::key_t, + pub uid: ::uid_t, + pub gid: ::gid_t, + pub cuid: ::uid_t, + pub cgid: ::gid_t, + pub mode: ::c_uint, + pub __seq: ::c_ushort, + __pad1: ::c_ushort, + __unused1: ::c_ulong, + __unused2: ::c_ulong + } + + pub struct shmid_ds { + pub shm_perm: ::ipc_perm, + pub shm_segsz: ::size_t, + pub shm_atime: ::time_t, + pub shm_dtime: ::time_t, + pub shm_ctime: ::time_t, + pub shm_cpid: ::pid_t, + pub shm_lpid: ::pid_t, + pub shm_nattch: ::shmatt_t, + __unused4: ::c_ulong, + __unused5: ::c_ulong + } + + pub struct msqid_ds { + pub msg_perm: ::ipc_perm, + pub 
msg_stime: ::time_t, + pub msg_rtime: ::time_t, + pub msg_ctime: ::time_t, + __msg_cbytes: ::c_ulong, + pub msg_qnum: ::msgqnum_t, + pub msg_qbytes: ::msglen_t, + pub msg_lspid: ::pid_t, + pub msg_lrpid: ::pid_t, + __glibc_reserved4: ::c_ulong, + __glibc_reserved5: ::c_ulong, + } + + pub struct statfs { + pub f_type: ::c_long, + pub f_bsize: ::c_long, + pub f_frsize: ::c_long, + pub f_blocks: ::fsblkcnt_t, + pub f_bfree: ::fsblkcnt_t, + pub f_files: ::fsblkcnt_t, + pub f_ffree: ::fsblkcnt_t, + pub f_bavail: ::fsblkcnt_t, + pub f_fsid: ::fsid_t, + + pub f_namelen: ::c_long, + f_spare: [::c_long; 6], + } + + pub struct msghdr { + pub msg_name: *mut ::c_void, + pub msg_namelen: ::socklen_t, + pub msg_iov: *mut ::iovec, + pub msg_iovlen: ::size_t, + pub msg_control: *mut ::c_void, + pub msg_controllen: ::size_t, + pub msg_flags: ::c_int, + } + + pub struct termios { + pub c_iflag: ::tcflag_t, + pub c_oflag: ::tcflag_t, + pub c_cflag: ::tcflag_t, + pub c_lflag: ::tcflag_t, + pub c_line: ::cc_t, + pub c_cc: [::cc_t; ::NCCS], + } + + pub struct sysinfo { + pub uptime: ::c_long, + pub loads: [::c_ulong; 3], + pub totalram: ::c_ulong, + pub freeram: ::c_ulong, + pub sharedram: ::c_ulong, + pub bufferram: ::c_ulong, + pub totalswap: ::c_ulong, + pub freeswap: ::c_ulong, + pub procs: ::c_ushort, + pub pad: ::c_ushort, + pub totalhigh: ::c_ulong, + pub freehigh: ::c_ulong, + pub mem_unit: ::c_uint, + pub _f: [::c_char; 0], + } + + // FIXME this is actually a union + pub struct sem_t { + __size: [::c_char; 32], + __align: [::c_long; 0], + } +} + +pub const __SIZEOF_PTHREAD_CONDATTR_T: usize = 4; +pub const __SIZEOF_PTHREAD_MUTEXATTR_T: usize = 4; +pub const __SIZEOF_PTHREAD_MUTEX_T: usize = 40; +pub const __SIZEOF_PTHREAD_RWLOCK_T: usize = 56; + +pub const RLIM_INFINITY: ::rlim_t = 0xffff_ffff_ffff_ffff; + +pub const SYS_gettid: ::c_long = 5178; // Valid for n64 + +#[link(name = "util")] +extern { + pub fn ioctl(fd: ::c_int, request: ::c_ulong, ...) 
-> ::c_int; +} diff --git a/src/liblibc/src/unix/notbsd/linux/mips/mod.rs b/src/liblibc/src/unix/notbsd/linux/mips/mod.rs new file mode 100644 index 0000000000..7952e3a40d --- /dev/null +++ b/src/liblibc/src/unix/notbsd/linux/mips/mod.rs @@ -0,0 +1,400 @@ +pub const BUFSIZ: ::c_uint = 8192; +pub const TMP_MAX: ::c_uint = 238328; +pub const FOPEN_MAX: ::c_uint = 16; +pub const POSIX_FADV_DONTNEED: ::c_int = 4; +pub const POSIX_FADV_NOREUSE: ::c_int = 5; +pub const POSIX_MADV_DONTNEED: ::c_int = 4; +pub const _SC_2_C_VERSION: ::c_int = 96; +pub const O_ACCMODE: ::c_int = 3; +pub const O_DIRECT: ::c_int = 0x8000; +pub const O_DIRECTORY: ::c_int = 0x10000; +pub const O_NOFOLLOW: ::c_int = 0x20000; +pub const ST_RELATIME: ::c_ulong = 4096; +pub const NI_MAXHOST: ::socklen_t = 1025; + +pub const RLIMIT_NOFILE: ::c_int = 5; +pub const RLIMIT_AS: ::c_int = 6; +pub const RLIMIT_RSS: ::c_int = 7; +pub const RLIMIT_NPROC: ::c_int = 8; +pub const RLIMIT_MEMLOCK: ::c_int = 9; +pub const RLIMIT_NLIMITS: ::c_int = 16; + +pub const O_APPEND: ::c_int = 8; +pub const O_CREAT: ::c_int = 256; +pub const O_EXCL: ::c_int = 1024; +pub const O_NOCTTY: ::c_int = 2048; +pub const O_NONBLOCK: ::c_int = 128; +pub const O_SYNC: ::c_int = 0x4010; +pub const O_RSYNC: ::c_int = 0x4010; +pub const O_DSYNC: ::c_int = 0x10; +pub const O_FSYNC: ::c_int = 0x4010; +pub const O_ASYNC: ::c_int = 0x1000; +pub const O_NDELAY: ::c_int = 0x80; + +pub const SOCK_NONBLOCK: ::c_int = 128; + +pub const EDEADLK: ::c_int = 45; +pub const ENAMETOOLONG: ::c_int = 78; +pub const ENOLCK: ::c_int = 46; +pub const ENOSYS: ::c_int = 89; +pub const ENOTEMPTY: ::c_int = 93; +pub const ELOOP: ::c_int = 90; +pub const ENOMSG: ::c_int = 35; +pub const EIDRM: ::c_int = 36; +pub const ECHRNG: ::c_int = 37; +pub const EL2NSYNC: ::c_int = 38; +pub const EL3HLT: ::c_int = 39; +pub const EL3RST: ::c_int = 40; +pub const ELNRNG: ::c_int = 41; +pub const EUNATCH: ::c_int = 42; +pub const ENOCSI: ::c_int = 43; +pub const EL2HLT: ::c_int = 44; +pub const EBADE: ::c_int = 50; +pub const EBADR: ::c_int = 51; +pub const EXFULL: ::c_int = 52; +pub const ENOANO: ::c_int = 53; +pub const EBADRQC: ::c_int = 54; +pub const EBADSLT: ::c_int = 55; +pub const EDEADLOCK: ::c_int = 56; +pub const EMULTIHOP: ::c_int = 74; +pub const EOVERFLOW: ::c_int = 79; +pub const ENOTUNIQ: ::c_int = 80; +pub const EBADFD: ::c_int = 81; +pub const EBADMSG: ::c_int = 77; +pub const EREMCHG: ::c_int = 82; +pub const ELIBACC: ::c_int = 83; +pub const ELIBBAD: ::c_int = 84; +pub const ELIBSCN: ::c_int = 85; +pub const ELIBMAX: ::c_int = 86; +pub const ELIBEXEC: ::c_int = 87; +pub const EILSEQ: ::c_int = 88; +pub const ERESTART: ::c_int = 91; +pub const ESTRPIPE: ::c_int = 92; +pub const EUSERS: ::c_int = 94; +pub const ENOTSOCK: ::c_int = 95; +pub const EDESTADDRREQ: ::c_int = 96; +pub const EMSGSIZE: ::c_int = 97; +pub const EPROTOTYPE: ::c_int = 98; +pub const ENOPROTOOPT: ::c_int = 99; +pub const EPROTONOSUPPORT: ::c_int = 120; +pub const ESOCKTNOSUPPORT: ::c_int = 121; +pub const EOPNOTSUPP: ::c_int = 122; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; +pub const EPFNOSUPPORT: ::c_int = 123; +pub const EAFNOSUPPORT: ::c_int = 124; +pub const EADDRINUSE: ::c_int = 125; +pub const EADDRNOTAVAIL: ::c_int = 126; +pub const ENETDOWN: ::c_int = 127; +pub const ENETUNREACH: ::c_int = 128; +pub const ENETRESET: ::c_int = 129; +pub const ECONNABORTED: ::c_int = 130; +pub const ECONNRESET: ::c_int = 131; +pub const ENOBUFS: ::c_int = 132; +pub const EISCONN: ::c_int = 133; +pub const ENOTCONN: ::c_int 
= 134; +pub const ESHUTDOWN: ::c_int = 143; +pub const ETOOMANYREFS: ::c_int = 144; +pub const ETIMEDOUT: ::c_int = 145; +pub const ECONNREFUSED: ::c_int = 146; +pub const EHOSTDOWN: ::c_int = 147; +pub const EHOSTUNREACH: ::c_int = 148; +pub const EALREADY: ::c_int = 149; +pub const EINPROGRESS: ::c_int = 150; +pub const ESTALE: ::c_int = 151; +pub const EUCLEAN: ::c_int = 135; +pub const ENOTNAM: ::c_int = 137; +pub const ENAVAIL: ::c_int = 138; +pub const EISNAM: ::c_int = 139; +pub const EREMOTEIO: ::c_int = 140; +pub const EDQUOT: ::c_int = 1133; +pub const ENOMEDIUM: ::c_int = 159; +pub const EMEDIUMTYPE: ::c_int = 160; +pub const ECANCELED: ::c_int = 158; +pub const ENOKEY: ::c_int = 161; +pub const EKEYEXPIRED: ::c_int = 162; +pub const EKEYREVOKED: ::c_int = 163; +pub const EKEYREJECTED: ::c_int = 164; +pub const EOWNERDEAD: ::c_int = 165; +pub const ENOTRECOVERABLE: ::c_int = 166; +pub const ERFKILL: ::c_int = 167; + +pub const LC_PAPER: ::c_int = 7; +pub const LC_NAME: ::c_int = 8; +pub const LC_ADDRESS: ::c_int = 9; +pub const LC_TELEPHONE: ::c_int = 10; +pub const LC_MEASUREMENT: ::c_int = 11; +pub const LC_IDENTIFICATION: ::c_int = 12; +pub const LC_PAPER_MASK: ::c_int = (1 << LC_PAPER); +pub const LC_NAME_MASK: ::c_int = (1 << LC_NAME); +pub const LC_ADDRESS_MASK: ::c_int = (1 << LC_ADDRESS); +pub const LC_TELEPHONE_MASK: ::c_int = (1 << LC_TELEPHONE); +pub const LC_MEASUREMENT_MASK: ::c_int = (1 << LC_MEASUREMENT); +pub const LC_IDENTIFICATION_MASK: ::c_int = (1 << LC_IDENTIFICATION); +pub const LC_ALL_MASK: ::c_int = ::LC_CTYPE_MASK + | ::LC_NUMERIC_MASK + | ::LC_TIME_MASK + | ::LC_COLLATE_MASK + | ::LC_MONETARY_MASK + | ::LC_MESSAGES_MASK + | LC_PAPER_MASK + | LC_NAME_MASK + | LC_ADDRESS_MASK + | LC_TELEPHONE_MASK + | LC_MEASUREMENT_MASK + | LC_IDENTIFICATION_MASK; + +pub const MAP_NORESERVE: ::c_int = 0x400; +pub const MAP_ANON: ::c_int = 0x800; +pub const MAP_ANONYMOUS: ::c_int = 0x800; +pub const MAP_GROWSDOWN: ::c_int = 0x1000; +pub const MAP_DENYWRITE: ::c_int = 0x2000; +pub const MAP_EXECUTABLE: ::c_int = 0x4000; +pub const MAP_LOCKED: ::c_int = 0x8000; +pub const MAP_POPULATE: ::c_int = 0x10000; +pub const MAP_NONBLOCK: ::c_int = 0x20000; +pub const MAP_STACK: ::c_int = 0x40000; + +pub const SOCK_STREAM: ::c_int = 2; +pub const SOCK_DGRAM: ::c_int = 1; +pub const SOCK_SEQPACKET: ::c_int = 5; + +pub const SOL_SOCKET: ::c_int = 0xffff; + +pub const SO_REUSEADDR: ::c_int = 4; +pub const SO_REUSEPORT: ::c_int = 0x200; +pub const SO_TYPE: ::c_int = 4104; +pub const SO_ERROR: ::c_int = 4103; +pub const SO_DONTROUTE: ::c_int = 16; +pub const SO_BROADCAST: ::c_int = 32; +pub const SO_SNDBUF: ::c_int = 4097; +pub const SO_RCVBUF: ::c_int = 4098; +pub const SO_KEEPALIVE: ::c_int = 8; +pub const SO_OOBINLINE: ::c_int = 256; +pub const SO_LINGER: ::c_int = 128; +pub const SO_RCVLOWAT: ::c_int = 4100; +pub const SO_SNDLOWAT: ::c_int = 4099; +pub const SO_RCVTIMEO: ::c_int = 4102; +pub const SO_SNDTIMEO: ::c_int = 4101; +pub const SO_ACCEPTCONN: ::c_int = 4105; + +pub const FIOCLEX: ::c_ulong = 0x6601; +pub const FIONBIO: ::c_ulong = 0x667e; + +pub const SA_ONSTACK: ::c_int = 0x08000000; +pub const SA_SIGINFO: ::c_int = 0x00000008; +pub const SA_NOCLDWAIT: ::c_int = 0x00010000; + +pub const SIGCHLD: ::c_int = 18; +pub const SIGBUS: ::c_int = 10; +pub const SIGTTIN: ::c_int = 26; +pub const SIGTTOU: ::c_int = 27; +pub const SIGXCPU: ::c_int = 30; +pub const SIGXFSZ: ::c_int = 31; +pub const SIGVTALRM: ::c_int = 28; +pub const SIGPROF: ::c_int = 29; +pub const SIGWINCH: ::c_int = 
20; +pub const SIGUSR1: ::c_int = 16; +pub const SIGUSR2: ::c_int = 17; +pub const SIGCONT: ::c_int = 25; +pub const SIGSTOP: ::c_int = 23; +pub const SIGTSTP: ::c_int = 24; +pub const SIGURG: ::c_int = 21; +pub const SIGIO: ::c_int = 22; +pub const SIGSYS: ::c_int = 12; +pub const SIGPOLL: ::c_int = 22; +pub const SIGPWR: ::c_int = 19; +pub const SIG_SETMASK: ::c_int = 3; +pub const SIG_BLOCK: ::c_int = 0x1; +pub const SIG_UNBLOCK: ::c_int = 0x2; + +pub const POLLRDNORM: ::c_short = 0x040; +pub const POLLWRNORM: ::c_short = 0x004; +pub const POLLRDBAND: ::c_short = 0x080; +pub const POLLWRBAND: ::c_short = 0x100; + +pub const PTHREAD_STACK_MIN: ::size_t = 131072; + +pub const ADFS_SUPER_MAGIC: ::c_long = 0x0000adf5; +pub const AFFS_SUPER_MAGIC: ::c_long = 0x0000adff; +pub const CODA_SUPER_MAGIC: ::c_long = 0x73757245; +pub const CRAMFS_MAGIC: ::c_long = 0x28cd3d45; +pub const EFS_SUPER_MAGIC: ::c_long = 0x00414a53; +pub const EXT2_SUPER_MAGIC: ::c_long = 0x0000ef53; +pub const EXT3_SUPER_MAGIC: ::c_long = 0x0000ef53; +pub const EXT4_SUPER_MAGIC: ::c_long = 0x0000ef53; +pub const HPFS_SUPER_MAGIC: ::c_long = 0xf995e849; +pub const HUGETLBFS_MAGIC: ::c_long = 0x958458f6; +pub const ISOFS_SUPER_MAGIC: ::c_long = 0x00009660; +pub const JFFS2_SUPER_MAGIC: ::c_long = 0x000072b6; +pub const MINIX_SUPER_MAGIC: ::c_long = 0x0000137f; +pub const MINIX_SUPER_MAGIC2: ::c_long = 0x0000138f; +pub const MINIX2_SUPER_MAGIC: ::c_long = 0x00002468; +pub const MINIX2_SUPER_MAGIC2: ::c_long = 0x00002478; +pub const MSDOS_SUPER_MAGIC: ::c_long = 0x00004d44; +pub const NCP_SUPER_MAGIC: ::c_long = 0x0000564c; +pub const NFS_SUPER_MAGIC: ::c_long = 0x00006969; +pub const OPENPROM_SUPER_MAGIC: ::c_long = 0x00009fa1; +pub const PROC_SUPER_MAGIC: ::c_long = 0x00009fa0; +pub const QNX4_SUPER_MAGIC: ::c_long = 0x0000002f; +pub const REISERFS_SUPER_MAGIC: ::c_long = 0x52654973; +pub const SMB_SUPER_MAGIC: ::c_long = 0x0000517b; +pub const TMPFS_MAGIC: ::c_long = 0x01021994; +pub const USBDEVICE_SUPER_MAGIC: ::c_long = 0x00009fa2; + +pub const VEOF: usize = 16; +pub const VEOL: usize = 17; +pub const VEOL2: usize = 6; +pub const VMIN: usize = 4; +pub const IEXTEN: ::tcflag_t = 0x00000100; +pub const TOSTOP: ::tcflag_t = 0x00008000; +pub const FLUSHO: ::tcflag_t = 0x00002000; +pub const IUTF8: ::tcflag_t = 0x00004000; +pub const TCSANOW: ::c_int = 0x540e; +pub const TCSADRAIN: ::c_int = 0x540f; +pub const TCSAFLUSH: ::c_int = 0x5410; + +pub const CPU_SETSIZE: ::c_int = 0x400; + +pub const PTRACE_TRACEME: ::c_uint = 0; +pub const PTRACE_PEEKTEXT: ::c_uint = 1; +pub const PTRACE_PEEKDATA: ::c_uint = 2; +pub const PTRACE_PEEKUSER: ::c_uint = 3; +pub const PTRACE_POKETEXT: ::c_uint = 4; +pub const PTRACE_POKEDATA: ::c_uint = 5; +pub const PTRACE_POKEUSER: ::c_uint = 6; +pub const PTRACE_CONT: ::c_uint = 7; +pub const PTRACE_KILL: ::c_uint = 8; +pub const PTRACE_SINGLESTEP: ::c_uint = 9; +pub const PTRACE_ATTACH: ::c_uint = 16; +pub const PTRACE_DETACH: ::c_uint = 17; +pub const PTRACE_SYSCALL: ::c_uint = 24; +pub const PTRACE_SETOPTIONS: ::c_uint = 0x4200; +pub const PTRACE_GETEVENTMSG: ::c_uint = 0x4201; +pub const PTRACE_GETSIGINFO: ::c_uint = 0x4202; +pub const PTRACE_SETSIGINFO: ::c_uint = 0x4203; +pub const PTRACE_GETFPREGS: ::c_uint = 14; +pub const PTRACE_SETFPREGS: ::c_uint = 15; +pub const PTRACE_GETFPXREGS: ::c_uint = 18; +pub const PTRACE_SETFPXREGS: ::c_uint = 19; +pub const PTRACE_GETREGS: ::c_uint = 12; +pub const PTRACE_SETREGS: ::c_uint = 13; + +pub const MAP_HUGETLB: ::c_int = 0x080000; + +pub const 
EFD_NONBLOCK: ::c_int = 0x80; + +pub const F_GETLK: ::c_int = 14; +pub const F_GETOWN: ::c_int = 23; +pub const F_SETOWN: ::c_int = 24; +pub const F_SETLK: ::c_int = 6; +pub const F_SETLKW: ::c_int = 7; + +pub const SFD_NONBLOCK: ::c_int = 0x80; + +pub const TCGETS: ::c_ulong = 0x540d; +pub const TCSETS: ::c_ulong = 0x540e; +pub const TCSETSW: ::c_ulong = 0x540f; +pub const TCSETSF: ::c_ulong = 0x5410; +pub const TCGETA: ::c_ulong = 0x5401; +pub const TCSETA: ::c_ulong = 0x5402; +pub const TCSETAW: ::c_ulong = 0x5403; +pub const TCSETAF: ::c_ulong = 0x5404; +pub const TCSBRK: ::c_ulong = 0x5405; +pub const TCXONC: ::c_ulong = 0x5406; +pub const TCFLSH: ::c_ulong = 0x5407; +pub const TIOCGSOFTCAR: ::c_ulong = 0x5481; +pub const TIOCSSOFTCAR: ::c_ulong = 0x5482; +pub const TIOCINQ: ::c_ulong = 0x467f; +pub const TIOCLINUX: ::c_ulong = 0x5483; +pub const TIOCGSERIAL: ::c_ulong = 0x5484; +pub const TIOCEXCL: ::c_ulong = 0x740d; +pub const TIOCNXCL: ::c_ulong = 0x740e; +pub const TIOCSCTTY: ::c_ulong = 0x5480; +pub const TIOCGPGRP: ::c_ulong = 0x40047477; +pub const TIOCSPGRP: ::c_ulong = 0x80047476; +pub const TIOCOUTQ: ::c_ulong = 0x7472; +pub const TIOCSTI: ::c_ulong = 0x5472; +pub const TIOCGWINSZ: ::c_ulong = 0x40087468; +pub const TIOCSWINSZ: ::c_ulong = 0x80087467; +pub const TIOCMGET: ::c_ulong = 0x741d; +pub const TIOCMBIS: ::c_ulong = 0x741b; +pub const TIOCMBIC: ::c_ulong = 0x741c; +pub const TIOCMSET: ::c_ulong = 0x741a; +pub const FIONREAD: ::c_ulong = 0x467f; +pub const TIOCCONS: ::c_ulong = 0x80047478; + +pub const RTLD_DEEPBIND: ::c_int = 0x10; +pub const RTLD_GLOBAL: ::c_int = 0x4; +pub const RTLD_NOLOAD: ::c_int = 0x8; + +pub const LINUX_REBOOT_MAGIC1: ::c_int = 0xfee1dead; +pub const LINUX_REBOOT_MAGIC2: ::c_int = 672274793; +pub const LINUX_REBOOT_MAGIC2A: ::c_int = 85072278; +pub const LINUX_REBOOT_MAGIC2B: ::c_int = 369367448; +pub const LINUX_REBOOT_MAGIC2C: ::c_int = 537993216; + +pub const LINUX_REBOOT_CMD_RESTART: ::c_int = 0x01234567; +pub const LINUX_REBOOT_CMD_HALT: ::c_int = 0xCDEF0123; +pub const LINUX_REBOOT_CMD_CAD_ON: ::c_int = 0x89ABCDEF; +pub const LINUX_REBOOT_CMD_CAD_OFF: ::c_int = 0x00000000; +pub const LINUX_REBOOT_CMD_POWER_OFF: ::c_int = 0x4321FEDC; +pub const LINUX_REBOOT_CMD_RESTART2: ::c_int = 0xA1B2C3D4; +pub const LINUX_REBOOT_CMD_SW_SUSPEND: ::c_int = 0xD000FCE2; +pub const LINUX_REBOOT_CMD_KEXEC: ::c_int = 0x45584543; + +pub const MCL_CURRENT: ::c_int = 0x0001; +pub const MCL_FUTURE: ::c_int = 0x0002; + +pub const SIGSTKSZ: ::size_t = 8192; +pub const CBAUD: ::tcflag_t = 0o0010017; +pub const TAB1: ::c_int = 0x00000800; +pub const TAB2: ::c_int = 0x00001000; +pub const TAB3: ::c_int = 0x00001800; +pub const CR1: ::c_int = 0x00000200; +pub const CR2: ::c_int = 0x00000400; +pub const CR3: ::c_int = 0x00000600; +pub const FF1: ::c_int = 0x00008000; +pub const BS1: ::c_int = 0x00002000; +pub const VT1: ::c_int = 0x00004000; +pub const VWERASE: usize = 14; +pub const VREPRINT: usize = 12; +pub const VSUSP: usize = 10; +pub const VSTART: usize = 8; +pub const VSTOP: usize = 9; +pub const VDISCARD: usize = 13; +pub const VTIME: usize = 5; +pub const IXON: ::tcflag_t = 0x00000400; +pub const IXOFF: ::tcflag_t = 0x00001000; +pub const ONLCR: ::tcflag_t = 0x4; +pub const CSIZE: ::tcflag_t = 0x00000030; +pub const CS6: ::tcflag_t = 0x00000010; +pub const CS7: ::tcflag_t = 0x00000020; +pub const CS8: ::tcflag_t = 0x00000030; +pub const CSTOPB: ::tcflag_t = 0x00000040; +pub const CREAD: ::tcflag_t = 0x00000080; +pub const PARENB: ::tcflag_t = 0x00000100; +pub 
const PARODD: ::tcflag_t = 0x00000200; +pub const HUPCL: ::tcflag_t = 0x00000400; +pub const CLOCAL: ::tcflag_t = 0x00000800; +pub const ECHOKE: ::tcflag_t = 0x00000800; +pub const ECHOE: ::tcflag_t = 0x00000010; +pub const ECHOK: ::tcflag_t = 0x00000020; +pub const ECHONL: ::tcflag_t = 0x00000040; +pub const ECHOPRT: ::tcflag_t = 0x00000400; +pub const ECHOCTL: ::tcflag_t = 0x00000200; +pub const ISIG: ::tcflag_t = 0x00000001; +pub const ICANON: ::tcflag_t = 0x00000002; +pub const PENDIN: ::tcflag_t = 0x00004000; +pub const NOFLSH: ::tcflag_t = 0x00000080; + +cfg_if! { + if #[cfg(target_arch = "mips")] { + mod mips32; + pub use self::mips32::*; + } else if #[cfg(target_arch = "mips64")] { + mod mips64; + pub use self::mips64::*; + } else { + // Unknown target_arch + } +} diff --git a/src/liblibc/src/unix/notbsd/linux/mod.rs b/src/liblibc/src/unix/notbsd/linux/mod.rs index c011b8c48e..08efed2ea0 100644 --- a/src/liblibc/src/unix/notbsd/linux/mod.rs +++ b/src/liblibc/src/unix/notbsd/linux/mod.rs @@ -66,21 +66,21 @@ s! { } pub struct pthread_mutex_t { - #[cfg(any(target_arch = "mips", target_arch = "mipsel", - target_arch = "arm", target_arch = "powerpc"))] + #[cfg(any(target_arch = "mips", target_arch = "arm", + target_arch = "powerpc"))] __align: [::c_long; 0], - #[cfg(not(any(target_arch = "mips", target_arch = "mipsel", - target_arch = "arm", target_arch = "powerpc")))] + #[cfg(not(any(target_arch = "mips", target_arch = "arm", + target_arch = "powerpc")))] __align: [::c_longlong; 0], size: [u8; __SIZEOF_PTHREAD_MUTEX_T], } pub struct pthread_rwlock_t { - #[cfg(any(target_arch = "mips", target_arch = "mipsel", - target_arch = "arm", target_arch = "powerpc"))] + #[cfg(any(target_arch = "mips", target_arch = "arm", + target_arch = "powerpc"))] __align: [::c_long; 0], - #[cfg(not(any(target_arch = "mips", target_arch = "mipsel", - target_arch = "arm", target_arch = "powerpc")))] + #[cfg(not(any(target_arch = "mips", target_arch = "arm", + target_arch = "powerpc")))] __align: [::c_longlong; 0], size: [u8; __SIZEOF_PTHREAD_RWLOCK_T], } @@ -524,6 +524,15 @@ pub const SYNC_FILE_RANGE_WAIT_AFTER: ::c_uint = 4; pub const EAI_SYSTEM: ::c_int = -11; +pub const AIO_CANCELED: ::c_int = 0; +pub const AIO_NOTCANCELED: ::c_int = 1; +pub const AIO_ALLDONE: ::c_int = 2; +pub const LIO_READ: ::c_int = 0; +pub const LIO_WRITE: ::c_int = 1; +pub const LIO_NOP: ::c_int = 2; +pub const LIO_WAIT: ::c_int = 0; +pub const LIO_NOWAIT: ::c_int = 1; + f! { pub fn CPU_ZERO(cpuset: &mut cpu_set_t) -> () { for slot in cpuset.bits.iter_mut() { @@ -557,6 +566,17 @@ f! 
{ } extern { + pub fn aio_read(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_write(aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_fsync(op: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn aio_error(aiocbp: *const aiocb) -> ::c_int; + pub fn aio_return(aiocbp: *mut aiocb) -> ::ssize_t; + pub fn aio_suspend(aiocb_list: *const *const aiocb, nitems: ::c_int, + timeout: *const ::timespec) -> ::c_int; + pub fn aio_cancel(fd: ::c_int, aiocbp: *mut aiocb) -> ::c_int; + pub fn lio_listio(mode: ::c_int, aiocb_list: *const *mut aiocb, + nitems: ::c_int, sevp: *mut ::sigevent) -> ::c_int; + pub fn lutimes(file: *const ::c_char, times: *const ::timeval) -> ::c_int; pub fn setpwent(); @@ -714,6 +734,8 @@ extern { riovcnt: ::c_ulong, flags: ::c_ulong) -> isize; pub fn reboot(how_to: ::c_int) -> ::c_int; + pub fn setfsgid(gid: ::gid_t) -> ::c_int; + pub fn setfsuid(uid: ::uid_t) -> ::c_int; pub fn setresgid(rgid: ::gid_t, egid: ::gid_t, sgid: ::gid_t) -> ::c_int; pub fn setresuid(ruid: ::uid_t, euid: ::uid_t, suid: ::uid_t) -> ::c_int; @@ -734,15 +756,13 @@ cfg_if! { target_os = "emscripten"))] { mod musl; pub use self::musl::*; - } else if #[cfg(any(target_arch = "mips", target_arch = "mipsel"))] { + } else if #[cfg(any(target_arch = "mips", + target_arch = "mips64"))] { mod mips; pub use self::mips::*; } else if #[cfg(any(target_arch = "s390x"))] { mod s390x; pub use self::s390x::*; - } else if #[cfg(any(target_arch = "mips64"))] { - mod mips64; - pub use self::mips64::*; } else { mod other; pub use self::other::*; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/b32/arm.rs b/src/liblibc/src/unix/notbsd/linux/musl/b32/arm.rs index 998580d3e2..540c1abb5f 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/b32/arm.rs +++ b/src/liblibc/src/unix/notbsd/linux/musl/b32/arm.rs @@ -196,6 +196,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/b32/asmjs.rs b/src/liblibc/src/unix/notbsd/linux/musl/b32/asmjs.rs index 91a96c185a..e890a34585 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/b32/asmjs.rs +++ b/src/liblibc/src/unix/notbsd/linux/musl/b32/asmjs.rs @@ -196,6 +196,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/b32/mips.rs b/src/liblibc/src/unix/notbsd/linux/musl/b32/mips.rs index 9ebfe4a68f..363d7c5d66 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/b32/mips.rs +++ b/src/liblibc/src/unix/notbsd/linux/musl/b32/mips.rs @@ -198,6 +198,7 @@ pub const ENOPROTOOPT: ::c_int = 99; pub const EPROTONOSUPPORT: ::c_int = 120; pub const ESOCKTNOSUPPORT: ::c_int = 121; pub const EOPNOTSUPP: ::c_int = 122; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 123; pub const EAFNOSUPPORT: ::c_int = 124; pub const EADDRINUSE: ::c_int = 125; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/b32/x86.rs b/src/liblibc/src/unix/notbsd/linux/musl/b32/x86.rs index 194b8fd8bd..dede8ffc5f 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/b32/x86.rs +++ 
b/src/liblibc/src/unix/notbsd/linux/musl/b32/x86.rs @@ -209,6 +209,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/b64/mod.rs b/src/liblibc/src/unix/notbsd/linux/musl/b64/mod.rs index fdaf52e166..9c5d43419d 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/b64/mod.rs +++ b/src/liblibc/src/unix/notbsd/linux/musl/b64/mod.rs @@ -218,6 +218,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; diff --git a/src/liblibc/src/unix/notbsd/linux/musl/mod.rs b/src/liblibc/src/unix/notbsd/linux/musl/mod.rs index 1808e44359..e80850608e 100644 --- a/src/liblibc/src/unix/notbsd/linux/musl/mod.rs +++ b/src/liblibc/src/unix/notbsd/linux/musl/mod.rs @@ -11,6 +11,26 @@ pub type fsfilcnt_t = ::c_ulonglong; pub type rlim_t = ::c_ulonglong; s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: ::sigevent, + __td: *mut ::c_void, + __lock: [::c_int; 2], + __err: ::c_int, + __ret: ::ssize_t, + pub aio_offset: off_t, + __next: *mut ::c_void, + __prev: *mut ::c_void, + #[cfg(target_pointer_width = "32")] + __dummy4: [::c_char; 24], + #[cfg(target_pointer_width = "64")] + __dummy4: [::c_char; 16], + } + pub struct sigaction { pub sa_sigaction: ::sighandler_t, pub sa_mask: ::sigset_t, diff --git a/src/liblibc/src/unix/notbsd/linux/other/mod.rs b/src/liblibc/src/unix/notbsd/linux/other/mod.rs index 97c28ea81c..774040803f 100644 --- a/src/liblibc/src/unix/notbsd/linux/other/mod.rs +++ b/src/liblibc/src/unix/notbsd/linux/other/mod.rs @@ -4,6 +4,24 @@ pub type rlim_t = c_ulong; pub type __priority_which_t = ::c_uint; s! 
{ + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: ::sigevent, + __next_prio: *mut aiocb, + __abs_prio: ::c_int, + __policy: ::c_int, + __error_code: ::c_int, + __return_value: ::ssize_t, + pub aio_offset: off_t, + #[cfg(target_pointer_width = "32")] + __unused1: [::c_char; 4], + __glibc_reserved: [::c_char; 32] + } + pub struct __exit_status { pub e_termination: ::c_short, pub e_exit: ::c_short, @@ -247,6 +265,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const EADDRINUSE: ::c_int = 98; @@ -345,6 +364,8 @@ pub const SIG_SETMASK: ::c_int = 2; pub const SIG_BLOCK: ::c_int = 0x000000; pub const SIG_UNBLOCK: ::c_int = 0x01; +pub const SIGEV_THREAD_ID: ::c_int = 4; + pub const POLLRDNORM: ::c_short = 0x040; pub const POLLWRNORM: ::c_short = 0x100; pub const POLLRDBAND: ::c_short = 0x080; diff --git a/src/liblibc/src/unix/notbsd/linux/s390x.rs b/src/liblibc/src/unix/notbsd/linux/s390x.rs index be12d72fc5..76e9aefe81 100644 --- a/src/liblibc/src/unix/notbsd/linux/s390x.rs +++ b/src/liblibc/src/unix/notbsd/linux/s390x.rs @@ -18,6 +18,24 @@ pub type __fsword_t = ::c_long; pub type __priority_which_t = ::c_uint; s! { + pub struct aiocb { + pub aio_fildes: ::c_int, + pub aio_lio_opcode: ::c_int, + pub aio_reqprio: ::c_int, + pub aio_buf: *mut ::c_void, + pub aio_nbytes: ::size_t, + pub aio_sigevent: ::sigevent, + __next_prio: *mut aiocb, + __abs_prio: ::c_int, + __policy: ::c_int, + __error_code: ::c_int, + __return_value: ::ssize_t, + pub aio_offset: off_t, + #[cfg(target_pointer_width = "32")] + __unused1: [::c_char; 4], + __glibc_reserved: [::c_char; 32] + } + pub struct stat { pub st_dev: ::dev_t, pub st_ino: ::ino_t, @@ -389,6 +407,7 @@ pub const ENOPROTOOPT: ::c_int = 92; pub const EPROTONOSUPPORT: ::c_int = 93; pub const ESOCKTNOSUPPORT: ::c_int = 94; pub const EOPNOTSUPP: ::c_int = 95; +pub const ENOTSUP: ::c_int = EOPNOTSUPP; pub const EPFNOSUPPORT: ::c_int = 96; pub const EAFNOSUPPORT: ::c_int = 97; pub const ENETDOWN: ::c_int = 100; diff --git a/src/liblibc/src/unix/notbsd/mod.rs b/src/liblibc/src/unix/notbsd/mod.rs index d11202b7b7..03a2b44ae0 100644 --- a/src/liblibc/src/unix/notbsd/mod.rs +++ b/src/liblibc/src/unix/notbsd/mod.rs @@ -163,6 +163,19 @@ s! { pub int_p_sign_posn: ::c_char, pub int_n_sign_posn: ::c_char, } + + pub struct sigevent { + pub sigev_value: ::sigval, + pub sigev_signo: ::c_int, + pub sigev_notify: ::c_int, + // Actually a union. We only expose sigev_notify_thread_id because it's + // the most useful member + pub sigev_notify_thread_id: ::c_int, + #[cfg(target_pointer_width = "64")] + __unused1: [::c_int; 11], + #[cfg(target_pointer_width = "32")] + __unused1: [::c_int; 12] + } } // intentionally not public, only used for fd_set @@ -632,6 +645,10 @@ pub const PIPE_BUF: usize = 4096; pub const SI_LOAD_SHIFT: ::c_uint = 16; +pub const SIGEV_SIGNAL: ::c_int = 0; +pub const SIGEV_NONE: ::c_int = 1; +pub const SIGEV_THREAD: ::c_int = 2; + f! 
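// ---------------------------------------------------------------------------
// Illustrative aside, not part of the patch: a rough, untested sketch of how a
// consumer of the `libc` crate could drive the aio_* bindings and the
// `sigevent`/`SIGEV_*` items added above on Linux/glibc. The file path, buffer
// size, and the busy-wait on aio_error are placeholders chosen for brevity.
extern crate libc;

use std::fs::File;
use std::mem;
use std::os::unix::io::AsRawFd;

fn main() {
    let file = File::open("/etc/hostname").unwrap();
    let mut buf = [0u8; 64];
    unsafe {
        // Zero-initialise the control block; only the public fields are set.
        let mut cb: libc::aiocb = mem::zeroed();
        cb.aio_fildes = file.as_raw_fd();
        cb.aio_buf = buf.as_mut_ptr() as *mut libc::c_void;
        cb.aio_nbytes = buf.len();
        cb.aio_offset = 0;
        // No completion notification; uses the SIGEV_NONE constant added above.
        cb.aio_sigevent.sigev_notify = libc::SIGEV_NONE;

        assert_eq!(libc::aio_read(&mut cb), 0);
        // Naive polling loop; real code would block in aio_suspend instead.
        while libc::aio_error(&cb) == libc::EINPROGRESS {}
        let n = libc::aio_return(&mut cb);
        println!("read {} bytes asynchronously", n);
    }
}
// ---------------------------------------------------------------------------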
{ pub fn FD_CLR(fd: ::c_int, set: *mut fd_set) -> () { let fd = fd as usize; diff --git a/src/liblibc/src/unix/solaris/mod.rs b/src/liblibc/src/unix/solaris/mod.rs index 63cd1249a4..c2db06bc50 100644 --- a/src/liblibc/src/unix/solaris/mod.rs +++ b/src/liblibc/src/unix/solaris/mod.rs @@ -602,6 +602,7 @@ pub const EMLINK: ::c_int = 31; pub const EPIPE: ::c_int = 32; pub const EDOM: ::c_int = 33; pub const ERANGE: ::c_int = 34; +pub const ENOTSUP: ::c_int = 48; pub const EAGAIN: ::c_int = 11; pub const EWOULDBLOCK: ::c_int = 11; pub const EINPROGRESS: ::c_int = 150; diff --git a/src/liblibc/src/windows.rs b/src/liblibc/src/windows.rs index b916fd4bd6..6c8332a62e 100644 --- a/src/liblibc/src/windows.rs +++ b/src/liblibc/src/windows.rs @@ -146,8 +146,9 @@ pub const ENOTEMPTY: ::c_int = 41; pub const EILSEQ: ::c_int = 42; pub const STRUNCATE: ::c_int = 80; -#[cfg(target_env = "msvc")] // " if " -- appease style checker -#[link(name = "msvcrt")] +#[cfg(all(target_env = "msvc", stdbuild))] // " if " -- appease style checker +#[link(name = "msvcrt", cfg(not(target_feature = "crt-static")))] +#[link(name = "libcmt", cfg(target_feature = "crt-static"))] extern {} extern { diff --git a/src/libpanic_abort/Cargo.toml b/src/libpanic_abort/Cargo.toml index 9d62be64fc..e0eac41f49 100644 --- a/src/libpanic_abort/Cargo.toml +++ b/src/libpanic_abort/Cargo.toml @@ -6,6 +6,8 @@ version = "0.0.0" [lib] path = "lib.rs" test = false +bench = false +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/libpanic_abort/lib.rs b/src/libpanic_abort/lib.rs index b87160dd75..8f85bfe2c6 100644 --- a/src/libpanic_abort/lib.rs +++ b/src/libpanic_abort/lib.rs @@ -28,7 +28,7 @@ #![panic_runtime] #![feature(panic_runtime)] #![cfg_attr(unix, feature(libc))] -#![cfg_attr(windows, feature(core_intrinsics))] +#![cfg_attr(any(target_os = "redox", windows), feature(core_intrinsics))] // Rust's "try" function, but if we're aborting on panics we just call the // function as there's nothing else we need to do here. @@ -53,7 +53,7 @@ pub unsafe extern fn __rust_maybe_catch_panic(f: fn(*mut u8), // now hopefully. #[no_mangle] pub unsafe extern fn __rust_start_panic(_data: usize, _vtable: usize) -> u32 { - return abort(); + abort(); #[cfg(unix)] unsafe fn abort() -> ! { @@ -61,7 +61,7 @@ pub unsafe extern fn __rust_start_panic(_data: usize, _vtable: usize) -> u32 { libc::abort(); } - #[cfg(windows)] + #[cfg(any(target_os = "redox", windows))] unsafe fn abort() -> ! { core::intrinsics::abort(); } diff --git a/src/libpanic_unwind/Cargo.toml b/src/libpanic_unwind/Cargo.toml index 18f37a8bb1..a978ea16e9 100644 --- a/src/libpanic_unwind/Cargo.toml +++ b/src/libpanic_unwind/Cargo.toml @@ -6,6 +6,8 @@ version = "0.0.0" [lib] path = "lib.rs" test = false +bench = false +doc = false [dependencies] alloc = { path = "../liballoc" } diff --git a/src/libpanic_unwind/lib.rs b/src/libpanic_unwind/lib.rs index ff483fa823..b75d9ec652 100644 --- a/src/libpanic_unwind/lib.rs +++ b/src/libpanic_unwind/lib.rs @@ -69,6 +69,7 @@ mod imp; // i686-pc-windows-gnu and all others #[cfg(any(all(unix, not(target_os = "emscripten")), + target_os = "redox", all(windows, target_arch = "x86", target_env = "gnu")))] #[path = "gcc.rs"] mod imp; diff --git a/src/libproc_macro/lib.rs b/src/libproc_macro/lib.rs index 1d2c64d6d9..22b365fa64 100644 --- a/src/libproc_macro/lib.rs +++ b/src/libproc_macro/lib.rs @@ -15,19 +15,16 @@ //! Currently the primary use of this crate is to provide the ability to define //! 
new custom derive modes through `#[proc_macro_derive]`. //! -//! Added recently as part of [RFC 1681] this crate is currently *unstable* and -//! requires the `#![feature(proc_macro_lib)]` directive to use. -//! -//! [RFC 1681]: https://github.com/rust-lang/rfcs/blob/master/text/1681-macros-1.1.md -//! //! Note that this crate is intentionally very bare-bones currently. The main //! type, `TokenStream`, only supports `fmt::Display` and `FromStr` //! implementations, indicating that it can only go to and come from a string. //! This functionality is intended to be expanded over time as more surface //! area for macro authors is stabilized. +//! +//! See [the book](../../book/procedural-macros.html) for more. #![crate_name = "proc_macro"] -#![unstable(feature = "proc_macro_lib", issue = "27812")] +#![stable(feature = "proc_macro_lib", since = "1.15.0")] #![crate_type = "rlib"] #![crate_type = "dylib"] #![cfg_attr(not(stage0), deny(warnings))] @@ -55,12 +52,14 @@ use syntax::ptr::P; /// /// The API of this type is intentionally bare-bones, but it'll be expanded over /// time! +#[stable(feature = "proc_macro_lib", since = "1.15.0")] pub struct TokenStream { inner: Vec>, } /// Error returned from `TokenStream::from_str`. #[derive(Debug)] +#[stable(feature = "proc_macro_lib", since = "1.15.0")] pub struct LexError { _inner: (), } @@ -95,7 +94,8 @@ pub mod __internal { pub trait Registry { fn register_custom_derive(&mut self, trait_name: &str, - expand: fn(TokenStream) -> TokenStream); + expand: fn(TokenStream) -> TokenStream, + attributes: &[&'static str]); } // Emulate scoped_thread_local!() here essentially @@ -130,6 +130,7 @@ pub mod __internal { } } +#[stable(feature = "proc_macro_lib", since = "1.15.0")] impl FromStr for TokenStream { type Err = LexError; @@ -153,6 +154,7 @@ impl FromStr for TokenStream { } } +#[stable(feature = "proc_macro_lib", since = "1.15.0")] impl fmt::Display for TokenStream { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { for item in self.inner.iter() { diff --git a/src/libproc_macro_plugin/qquote.rs b/src/libproc_macro_plugin/qquote.rs index e5a3abc2ea..1ae906e0aa 100644 --- a/src/libproc_macro_plugin/qquote.rs +++ b/src/libproc_macro_plugin/qquote.rs @@ -34,8 +34,9 @@ use syntax::codemap::Span; use syntax::ext::base::*; use syntax::ext::base; use syntax::ext::proc_macro_shim::build_block_emitter; -use syntax::parse::token::{self, Token, gensym_ident, str_to_ident}; +use syntax::parse::token::{self, Token}; use syntax::print::pprust; +use syntax::symbol::Symbol; use syntax::tokenstream::{TokenTree, TokenStream}; // ____________________________________________________________________________________________ @@ -124,7 +125,7 @@ fn qquote_iter<'cx>(cx: &'cx mut ExtCtxt, depth: i64, ts: TokenStream) -> (Bindi } // produce an error or something first let exp = vec![exp.unwrap().to_owned()]; debug!("RHS: {:?}", exp.clone()); - let new_id = gensym_ident("tmp"); + let new_id = Ident::with_empty_ctxt(Symbol::gensym("tmp")); debug!("RHS TS: {:?}", TokenStream::from_tts(exp.clone())); debug!("RHS TS TT: {:?}", TokenStream::from_tts(exp.clone()).to_vec()); bindings.push((new_id, TokenStream::from_tts(exp))); @@ -179,7 +180,7 @@ fn unravel_concats(tss: Vec) -> TokenStream { }; while let Some(ts) = pushes.pop() { - output = build_fn_call(str_to_ident("concat"), + output = build_fn_call(Ident::from_str("concat"), concat(concat(ts, from_tokens(vec![Token::Comma])), output)); @@ -209,18 +210,19 @@ fn convert_complex_tts<'cx>(cx: &'cx mut ExtCtxt, tts: Vec) -> (Bindings, T 
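// ---------------------------------------------------------------------------
// Illustrative aside, not part of the patch: a toy custom derive built on the
// string-based TokenStream API being stabilised above. It must live in a crate
// compiled as a proc-macro crate; the hard-coded type name `Foo` and the
// `Hello` trait are assumptions made purely to keep the sketch short.
extern crate proc_macro;

use proc_macro::TokenStream;

#[proc_macro_derive(Hello)]
pub fn derive_hello(input: TokenStream) -> TokenStream {
    // TokenStream currently only goes to and from strings (Display/FromStr).
    let source = input.to_string();
    assert!(source.contains("Foo"), "this sketch only handles `struct Foo`");
    "impl Hello for Foo { fn hello(&self) { println!(\"hello\"); } }"
        .parse()
        .unwrap()
}
// ---------------------------------------------------------------------------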
// FIXME handle sequence repetition tokens QTT::QDL(qdl) => { debug!(" QDL: {:?} ", qdl.tts); - let new_id = gensym_ident("qdl_tmp"); + let new_id = Ident::with_empty_ctxt(Symbol::gensym("qdl_tmp")); let mut cct_rec = convert_complex_tts(cx, qdl.tts); bindings.append(&mut cct_rec.0); bindings.push((new_id, cct_rec.1)); let sep = build_delim_tok(qdl.delim); - pushes.push(build_mod_call(vec![str_to_ident("proc_macro_tokens"), - str_to_ident("build"), - str_to_ident("build_delimited")], - concat(from_tokens(vec![Token::Ident(new_id)]), - concat(lex(","), sep)))); + pushes.push(build_mod_call( + vec![Ident::from_str("proc_macro_tokens"), + Ident::from_str("build"), + Ident::from_str("build_delimited")], + concat(from_tokens(vec![Token::Ident(new_id)]), concat(lex(","), sep)), + )); } QTT::QIdent(t) => { pushes.push(TokenStream::from_tts(vec![t])); @@ -250,13 +252,13 @@ fn unravel(binds: Bindings) -> TokenStream { /// Checks if the Ident is `unquote`. fn is_unquote(id: Ident) -> bool { - let qq = str_to_ident("unquote"); + let qq = Ident::from_str("unquote"); id.name == qq.name // We disregard context; unquote is _reserved_ } /// Checks if the Ident is `quote`. fn is_qquote(id: Ident) -> bool { - let qq = str_to_ident("qquote"); + let qq = Ident::from_str("qquote"); id.name == qq.name // We disregard context; qquote is _reserved_ } @@ -266,7 +268,8 @@ mod int_build { use syntax::ast::{self, Ident}; use syntax::codemap::{DUMMY_SP}; - use syntax::parse::token::{self, Token, keywords, str_to_ident}; + use syntax::parse::token::{self, Token, Lit}; + use syntax::symbol::keywords; use syntax::tokenstream::{TokenTree, TokenStream}; // ____________________________________________________________________________________________ @@ -277,19 +280,19 @@ mod int_build { build_paren_delimited(build_vec(build_token_tt(t)))) } - pub fn emit_lit(l: token::Lit, n: Option) -> TokenStream { + pub fn emit_lit(l: Lit, n: Option) -> TokenStream { let suf = match n { - Some(n) => format!("Some(ast::Name({}))", n.0), + Some(n) => format!("Some(ast::Name({}))", n.as_u32()), None => "None".to_string(), }; let lit = match l { - token::Lit::Byte(n) => format!("Lit::Byte(token::intern(\"{}\"))", n.to_string()), - token::Lit::Char(n) => format!("Lit::Char(token::intern(\"{}\"))", n.to_string()), - token::Lit::Integer(n) => format!("Lit::Integer(token::intern(\"{}\"))", n.to_string()), - token::Lit::Float(n) => format!("Lit::Float(token::intern(\"{}\"))", n.to_string()), - token::Lit::Str_(n) => format!("Lit::Str_(token::intern(\"{}\"))", n.to_string()), - token::Lit::ByteStr(n) => format!("Lit::ByteStr(token::intern(\"{}\"))", n.to_string()), + Lit::Byte(n) => format!("Lit::Byte(Symbol::intern(\"{}\"))", n.to_string()), + Lit::Char(n) => format!("Lit::Char(Symbol::intern(\"{}\"))", n.to_string()), + Lit::Float(n) => format!("Lit::Float(Symbol::intern(\"{}\"))", n.to_string()), + Lit::Str_(n) => format!("Lit::Str_(Symbol::intern(\"{}\"))", n.to_string()), + Lit::Integer(n) => format!("Lit::Integer(Symbol::intern(\"{}\"))", n.to_string()), + Lit::ByteStr(n) => format!("Lit::ByteStr(Symbol::intern(\"{}\"))", n.to_string()), _ => panic!("Unsupported literal"), }; @@ -388,9 +391,10 @@ mod int_build { Token::Underscore => lex("_"), Token::Literal(lit, sfx) => emit_lit(lit, sfx), // fix ident expansion information... 
somehow - Token::Ident(ident) => lex(&format!("Token::Ident(str_to_ident(\"{}\"))", ident.name)), - Token::Lifetime(ident) => lex(&format!("Token::Ident(str_to_ident(\"{}\"))", - ident.name)), + Token::Ident(ident) => + lex(&format!("Token::Ident(Ident::from_str(\"{}\"))", ident.name)), + Token::Lifetime(ident) => + lex(&format!("Token::Ident(Ident::from_str(\"{}\"))", ident.name)), _ => panic!("Unhandled case!"), } } @@ -408,7 +412,7 @@ mod int_build { /// Takes `input` and returns `vec![input]`. pub fn build_vec(ts: TokenStream) -> TokenStream { - build_mac_call(str_to_ident("vec"), ts) + build_mac_call(Ident::from_str("vec"), ts) // tts.clone().to_owned() } diff --git a/src/libproc_macro_tokens/build.rs b/src/libproc_macro_tokens/build.rs index 7b7590b863..d39aba0aa7 100644 --- a/src/libproc_macro_tokens/build.rs +++ b/src/libproc_macro_tokens/build.rs @@ -13,7 +13,8 @@ extern crate syntax_pos; use syntax::ast::Ident; use syntax::codemap::DUMMY_SP; -use syntax::parse::token::{self, Token, keywords, str_to_ident}; +use syntax::parse::token::{self, Token}; +use syntax::symbol::keywords; use syntax::tokenstream::{self, TokenTree, TokenStream}; use std::rc::Rc; @@ -43,13 +44,13 @@ pub fn ident_eq(tident: &TokenTree, id: Ident) -> bool { /// Convert a `&str` into a Token. pub fn str_to_token_ident(s: &str) -> Token { - Token::Ident(str_to_ident(s)) + Token::Ident(Ident::from_str(s)) } /// Converts a keyword (from `syntax::parse::token::keywords`) into a Token that /// corresponds to it. pub fn keyword_to_token_ident(kw: keywords::Keyword) -> Token { - Token::Ident(str_to_ident(&kw.name().as_str()[..])) + Token::Ident(Ident::from_str(&kw.name().as_str()[..])) } // ____________________________________________________________________________________________ diff --git a/src/libproc_macro_tokens/parse.rs b/src/libproc_macro_tokens/parse.rs index 9af8a68cdc..5ab4fcd5da 100644 --- a/src/libproc_macro_tokens/parse.rs +++ b/src/libproc_macro_tokens/parse.rs @@ -15,12 +15,12 @@ extern crate syntax; use syntax::parse::{ParseSess, filemap_to_tts}; use syntax::tokenstream::TokenStream; -/// Map a string to tts, using a made-up filename. For example, `lex(15)` will return a +/// Map a string to tts, using a made-up filename. For example, `lex("15")` will return a /// TokenStream containing the literal 15. pub fn lex(source_str: &str) -> TokenStream { let ps = ParseSess::new(); TokenStream::from_tts(filemap_to_tts(&ps, - ps.codemap().new_filemap("procmacro_lex".to_string(), + ps.codemap().new_filemap("".to_string(), None, source_str.to_owned()))) } diff --git a/src/librand/Cargo.toml b/src/librand/Cargo.toml index 86b061db05..eda5f21756 100644 --- a/src/librand/Cargo.toml +++ b/src/librand/Cargo.toml @@ -6,6 +6,7 @@ version = "0.0.0" [lib] name = "rand" path = "lib.rs" +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/librustc/cfg/construct.rs b/src/librustc/cfg/construct.rs index a2fc6e044e..f21d98a0fc 100644 --- a/src/librustc/cfg/construct.rs +++ b/src/librustc/cfg/construct.rs @@ -10,8 +10,6 @@ use rustc_data_structures::graph; use cfg::*; -use hir::def::Def; -use hir::pat_util; use ty::{self, TyCtxt}; use syntax::ast; use syntax::ptr::P; @@ -33,16 +31,16 @@ struct LoopScope { } pub fn construct<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - blk: &hir::Block) -> CFG { + body: &hir::Expr) -> CFG { let mut graph = graph::Graph::new(); let entry = graph.add_node(CFGNodeData::Entry); // `fn_exit` is target of return exprs, which lies somewhere - // outside input `blk`. 
(Distinguishing `fn_exit` and `block_exit` + // outside input `body`. (Distinguishing `fn_exit` and `body_exit` // also resolves chicken-and-egg problem that arises if you try to - // have return exprs jump to `block_exit` during construction.) + // have return exprs jump to `body_exit` during construction.) let fn_exit = graph.add_node(CFGNodeData::Exit); - let block_exit; + let body_exit; let mut cfg_builder = CFGBuilder { graph: graph, @@ -50,8 +48,8 @@ pub fn construct<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, tcx: tcx, loop_scopes: Vec::new() }; - block_exit = cfg_builder.block(blk, entry); - cfg_builder.add_contained_edge(block_exit, fn_exit); + body_exit = cfg_builder.expr(body, entry); + cfg_builder.add_contained_edge(body_exit, fn_exit); let CFGBuilder {graph, ..} = cfg_builder; CFG {graph: graph, entry: entry, @@ -100,7 +98,7 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { fn pat(&mut self, pat: &hir::Pat, pred: CFGIndex) -> CFGIndex { match pat.node { PatKind::Binding(.., None) | - PatKind::Path(..) | + PatKind::Path(_) | PatKind::Lit(..) | PatKind::Range(..) | PatKind::Wild => { @@ -223,7 +221,7 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { expr_exit } - hir::ExprLoop(ref body, _) => { + hir::ExprLoop(ref body, _, _) => { // // [pred] // | @@ -282,16 +280,17 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { self.add_unreachable_node() } - hir::ExprBreak(label) => { - let loop_scope = self.find_scope(expr, label.map(|l| l.node)); - let b = self.add_ast_node(expr.id, &[pred]); + hir::ExprBreak(label, ref opt_expr) => { + let v = self.opt_expr(opt_expr, pred); + let loop_scope = self.find_scope(expr, label); + let b = self.add_ast_node(expr.id, &[v]); self.add_exiting_edge(expr, b, loop_scope, loop_scope.break_index); self.add_unreachable_node() } hir::ExprAgain(label) => { - let loop_scope = self.find_scope(expr, label.map(|l| l.node)); + let loop_scope = self.find_scope(expr, label); let a = self.add_ast_node(expr.id, &[pred]); self.add_exiting_edge(expr, a, loop_scope, loop_scope.continue_index); @@ -299,15 +298,15 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { } hir::ExprArray(ref elems) => { - self.straightline(expr, pred, elems.iter().map(|e| &**e)) + self.straightline(expr, pred, elems.iter().map(|e| &*e)) } hir::ExprCall(ref func, ref args) => { - self.call(expr, pred, &func, args.iter().map(|e| &**e)) + self.call(expr, pred, &func, args.iter().map(|e| &*e)) } hir::ExprMethodCall(.., ref args) => { - self.call(expr, pred, &args[0], args[1..].iter().map(|e| &**e)) + self.call(expr, pred, &args[0], args[1..].iter().map(|e| &*e)) } hir::ExprIndex(ref l, ref r) | @@ -320,7 +319,7 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { } hir::ExprTup(ref exprs) => { - self.straightline(expr, pred, exprs.iter().map(|e| &**e)) + self.straightline(expr, pred, exprs.iter().map(|e| &*e)) } hir::ExprStruct(_, ref fields, ref base) => { @@ -353,14 +352,14 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { } hir::ExprInlineAsm(_, ref outputs, ref inputs) => { - let post_outputs = self.exprs(outputs.iter().map(|e| &**e), pred); - let post_inputs = self.exprs(inputs.iter().map(|e| &**e), post_outputs); + let post_outputs = self.exprs(outputs.iter().map(|e| &*e), pred); + let post_inputs = self.exprs(inputs.iter().map(|e| &*e), post_outputs); self.add_ast_node(expr.id, &[post_inputs]) } hir::ExprClosure(..) | hir::ExprLit(..) | - hir::ExprPath(..) 
=> { + hir::ExprPath(_) => { self.straightline(expr, pred, None::.iter()) } } @@ -456,7 +455,7 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { // Visit the guard expression let guard_exit = self.expr(&guard, guard_start); - let this_has_bindings = pat_util::pat_contains_bindings_or_wild(&pat); + let this_has_bindings = pat.contains_bindings_or_wild(); // If both this pattern and the previous pattern // were free of bindings, they must consist only @@ -569,23 +568,16 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { fn find_scope(&self, expr: &hir::Expr, - label: Option) -> LoopScope { - if label.is_none() { - return *self.loop_scopes.last().unwrap(); - } - - match self.tcx.expect_def(expr.id) { - Def::Label(loop_id) => { + label: Option) -> LoopScope { + match label { + None => *self.loop_scopes.last().unwrap(), + Some(label) => { for l in &self.loop_scopes { - if l.loop_id == loop_id { + if l.loop_id == label.loop_id { return *l; } } - span_bug!(expr.span, "no loop scope for id {}", loop_id); - } - - r => { - span_bug!(expr.span, "bad entry `{:?}` in def_map for label", r); + span_bug!(expr.span, "no loop scope for id {}", label.loop_id); } } } diff --git a/src/librustc/cfg/mod.rs b/src/librustc/cfg/mod.rs index d06f51073d..43434b884c 100644 --- a/src/librustc/cfg/mod.rs +++ b/src/librustc/cfg/mod.rs @@ -59,8 +59,8 @@ pub type CFGEdge = graph::Edge; impl CFG { pub fn new<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - blk: &hir::Block) -> CFG { - construct::construct(tcx, blk) + body: &hir::Expr) -> CFG { + construct::construct(tcx, body) } pub fn node_is_reachable(&self, id: ast::NodeId) -> bool { diff --git a/src/librustc/dep_graph/dep_node.rs b/src/librustc/dep_graph/dep_node.rs index e99ffa95ed..e261c699b6 100644 --- a/src/librustc/dep_graph/dep_node.rs +++ b/src/librustc/dep_graph/dep_node.rs @@ -42,6 +42,10 @@ pub enum DepNode { // Represents the HIR node with the given node-id Hir(D), + // Represents the body of a function or method. The def-id is that of the + // function/method. + HirBody(D), + // Represents the metadata for a given HIR node, typically found // in an extern crate. MetaData(D), @@ -59,6 +63,7 @@ pub enum DepNode { PluginRegistrar, StabilityIndex, CollectItem(D), + CollectItemSig(D), Coherence, EffectCheck, Liveness, @@ -90,7 +95,7 @@ pub enum DepNode { RvalueCheck(D), Reachability, DeadCheck, - StabilityCheck, + StabilityCheck(D), LateLintCheck, TransCrate, TransCrateItem(D), @@ -103,11 +108,10 @@ pub enum DepNode { // nodes. Often we map multiple tables to the same node if there // is no point in distinguishing them (e.g., both the type and // predicates for an item wind up in `ItemSignature`). - ImplOrTraitItems(D), + AssociatedItems(D), ItemSignature(D), - FieldTy(D), SizedConstraint(D), - ImplOrTraitItemDefIds(D), + AssociatedItemDefIds(D), InherentImpls(D), // The set of impls for a given trait. 
Ultimately, it would be @@ -150,13 +154,13 @@ impl DepNode { CollectItem, BorrowCheck, Hir, + HirBody, TransCrateItem, TypeckItemType, TypeckItemBody, - ImplOrTraitItems, + AssociatedItems, ItemSignature, - FieldTy, - ImplOrTraitItemDefIds, + AssociatedItemDefIds, InherentImpls, TraitImpls, ReprHints, @@ -189,7 +193,6 @@ impl DepNode { Privacy => Some(Privacy), Reachability => Some(Reachability), DeadCheck => Some(DeadCheck), - StabilityCheck => Some(StabilityCheck), LateLintCheck => Some(LateLintCheck), TransCrate => Some(TransCrate), TransWriteMetadata => Some(TransWriteMetadata), @@ -200,8 +203,10 @@ impl DepNode { WorkProduct(ref id) => Some(WorkProduct(id.clone())), Hir(ref d) => op(d).map(Hir), + HirBody(ref d) => op(d).map(HirBody), MetaData(ref d) => op(d).map(MetaData), CollectItem(ref d) => op(d).map(CollectItem), + CollectItemSig(ref d) => op(d).map(CollectItemSig), CoherenceCheckImpl(ref d) => op(d).map(CoherenceCheckImpl), CoherenceOverlapCheck(ref d) => op(d).map(CoherenceOverlapCheck), CoherenceOverlapCheckSpecial(ref d) => op(d).map(CoherenceOverlapCheckSpecial), @@ -217,13 +222,13 @@ impl DepNode { Mir(ref d) => op(d).map(Mir), BorrowCheck(ref d) => op(d).map(BorrowCheck), RvalueCheck(ref d) => op(d).map(RvalueCheck), + StabilityCheck(ref d) => op(d).map(StabilityCheck), TransCrateItem(ref d) => op(d).map(TransCrateItem), TransInlinedItem(ref d) => op(d).map(TransInlinedItem), - ImplOrTraitItems(ref d) => op(d).map(ImplOrTraitItems), + AssociatedItems(ref d) => op(d).map(AssociatedItems), ItemSignature(ref d) => op(d).map(ItemSignature), - FieldTy(ref d) => op(d).map(FieldTy), SizedConstraint(ref d) => op(d).map(SizedConstraint), - ImplOrTraitItemDefIds(ref d) => op(d).map(ImplOrTraitItemDefIds), + AssociatedItemDefIds(ref d) => op(d).map(AssociatedItemDefIds), InherentImpls(ref d) => op(d).map(InherentImpls), TraitImpls(ref d) => op(d).map(TraitImpls), TraitItems(ref d) => op(d).map(TraitItems), diff --git a/src/librustc/dep_graph/dep_tracking_map.rs b/src/librustc/dep_graph/dep_tracking_map.rs index 51f7890c7a..50dfe9d22f 100644 --- a/src/librustc/dep_graph/dep_tracking_map.rs +++ b/src/librustc/dep_graph/dep_tracking_map.rs @@ -9,7 +9,7 @@ // except according to those terms. use hir::def_id::DefId; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use std::cell::RefCell; use std::ops::Index; use std::hash::Hash; @@ -24,7 +24,7 @@ use super::{DepNode, DepGraph}; pub struct DepTrackingMap { phantom: PhantomData, graph: DepGraph, - map: FnvHashMap, + map: FxHashMap, } pub trait DepTrackingMapConfig { @@ -38,7 +38,7 @@ impl DepTrackingMap { DepTrackingMap { phantom: PhantomData, graph: graph, - map: FnvHashMap() + map: FxHashMap() } } @@ -112,15 +112,15 @@ impl MemoizationMap for RefCell> { /// switched to `Map(key)`. Therefore, if `op` makes use of any /// HIR nodes or shared state accessed through its closure /// environment, it must explicitly register a read of that - /// state. As an example, see `type_scheme_of_item` in `collect`, + /// state. 
As an example, see `type_of_item` in `collect`, /// which looks something like this: /// /// ``` - /// fn type_scheme_of_item(..., item: &hir::Item) -> ty::TypeScheme<'tcx> { + /// fn type_of_item(..., item: &hir::Item) -> Ty<'tcx> { /// let item_def_id = ccx.tcx.map.local_def_id(it.id); - /// ccx.tcx.tcache.memoized(item_def_id, || { + /// ccx.tcx.item_types.memoized(item_def_id, || { /// ccx.tcx.dep_graph.read(DepNode::Hir(item_def_id)); // (*) - /// compute_type_scheme_of_item(ccx, item) + /// compute_type_of_item(ccx, item) /// }); /// } /// ``` diff --git a/src/librustc/dep_graph/edges.rs b/src/librustc/dep_graph/edges.rs index 10f3d21f2a..8657a3e5a5 100644 --- a/src/librustc/dep_graph/edges.rs +++ b/src/librustc/dep_graph/edges.rs @@ -8,15 +8,15 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use rustc_data_structures::fnv::{FnvHashMap, FnvHashSet}; +use rustc_data_structures::fx::{FxHashMap, FxHashSet}; use std::fmt::Debug; use std::hash::Hash; use super::{DepGraphQuery, DepNode}; pub struct DepGraphEdges { nodes: Vec>, - indices: FnvHashMap, IdIndex>, - edges: FnvHashSet<(IdIndex, IdIndex)>, + indices: FxHashMap, IdIndex>, + edges: FxHashSet<(IdIndex, IdIndex)>, open_nodes: Vec, } @@ -46,8 +46,8 @@ impl DepGraphEdges { pub fn new() -> DepGraphEdges { DepGraphEdges { nodes: vec![], - indices: FnvHashMap(), - edges: FnvHashSet(), + indices: FxHashMap(), + edges: FxHashSet(), open_nodes: Vec::new() } } diff --git a/src/librustc/dep_graph/graph.rs b/src/librustc/dep_graph/graph.rs index fac3586afc..2637d34c5c 100644 --- a/src/librustc/dep_graph/graph.rs +++ b/src/librustc/dep_graph/graph.rs @@ -9,7 +9,7 @@ // except according to those terms. use hir::def_id::DefId; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use session::config::OutputType; use std::cell::{Ref, RefCell}; use std::rc::Rc; @@ -34,10 +34,10 @@ struct DepGraphData { /// things available to us. If we find that they are not dirty, we /// load the path to the file storing those work-products here into /// this map. We can later look for and extract that data. - previous_work_products: RefCell, WorkProduct>>, + previous_work_products: RefCell, WorkProduct>>, /// Work-products that we generate in this run. - work_products: RefCell, WorkProduct>>, + work_products: RefCell, WorkProduct>>, } impl DepGraph { @@ -45,8 +45,8 @@ impl DepGraph { DepGraph { data: Rc::new(DepGraphData { thread: DepGraphThreadData::new(enabled), - previous_work_products: RefCell::new(FnvHashMap()), - work_products: RefCell::new(FnvHashMap()), + previous_work_products: RefCell::new(FxHashMap()), + work_products: RefCell::new(FxHashMap()), }) } } @@ -117,7 +117,7 @@ impl DepGraph { /// Access the map of work-products created during this run. Only /// used during saving of the dep-graph. 
- pub fn work_products(&self) -> Ref, WorkProduct>> { + pub fn work_products(&self) -> Ref, WorkProduct>> { self.data.work_products.borrow() } } diff --git a/src/librustc/dep_graph/mod.rs b/src/librustc/dep_graph/mod.rs index 9c00e95c17..e365cea6d0 100644 --- a/src/librustc/dep_graph/mod.rs +++ b/src/librustc/dep_graph/mod.rs @@ -25,5 +25,5 @@ pub use self::dep_node::WorkProductId; pub use self::graph::DepGraph; pub use self::graph::WorkProduct; pub use self::query::DepGraphQuery; -pub use self::visit::visit_all_items_in_krate; +pub use self::visit::visit_all_item_likes_in_krate; pub use self::raii::DepTask; diff --git a/src/librustc/dep_graph/query.rs b/src/librustc/dep_graph/query.rs index 7a780c1d4a..4c791f9655 100644 --- a/src/librustc/dep_graph/query.rs +++ b/src/librustc/dep_graph/query.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::graph::{Direction, INCOMING, Graph, NodeIndex, OUTGOING}; use std::fmt::Debug; use std::hash::Hash; @@ -17,7 +17,7 @@ use super::DepNode; pub struct DepGraphQuery { pub graph: Graph, ()>, - pub indices: FnvHashMap, NodeIndex>, + pub indices: FxHashMap, NodeIndex>, } impl DepGraphQuery { @@ -25,7 +25,7 @@ impl DepGraphQuery { edges: &[(DepNode, DepNode)]) -> DepGraphQuery { let mut graph = Graph::new(); - let mut indices = FnvHashMap(); + let mut indices = FxHashMap(); for node in nodes { indices.insert(node.clone(), graph.next_node_index()); graph.add_node(node.clone()); diff --git a/src/librustc/dep_graph/shadow.rs b/src/librustc/dep_graph/shadow.rs index 06def4bf19..5d4190a8ae 100644 --- a/src/librustc/dep_graph/shadow.rs +++ b/src/librustc/dep_graph/shadow.rs @@ -27,7 +27,7 @@ //! created. See `./README.md` for details. use hir::def_id::DefId; -use std::cell::{BorrowState, RefCell}; +use std::cell::RefCell; use std::env; use super::DepNode; @@ -71,15 +71,11 @@ impl ShadowGraph { pub fn enqueue(&self, message: &DepMessage) { if ENABLED { - match self.stack.borrow_state() { - BorrowState::Unused => {} - _ => { - // When we apply edge filters, that invokes the - // Debug trait on DefIds, which in turn reads from - // various bits of state and creates reads! Ignore - // those recursive reads. - return; - } + if self.stack.try_borrow().is_err() { + // When we apply edge filters, that invokes the Debug trait on + // DefIds, which in turn reads from various bits of state and + // creates reads! Ignore those recursive reads. + return; } let mut stack = self.stack.borrow_mut(); diff --git a/src/librustc/dep_graph/visit.rs b/src/librustc/dep_graph/visit.rs index d085c24036..600732fc6f 100644 --- a/src/librustc/dep_graph/visit.rs +++ b/src/librustc/dep_graph/visit.rs @@ -10,22 +10,21 @@ use hir; use hir::def_id::DefId; -use hir::intravisit::Visitor; +use hir::itemlikevisit::ItemLikeVisitor; use ty::TyCtxt; use super::dep_node::DepNode; - /// Visit all the items in the krate in some order. When visiting a /// particular item, first create a dep-node by calling `dep_node_fn` /// and push that onto the dep-graph stack of tasks, and also create a /// read edge from the corresponding AST node. This is used in /// compiler passes to automatically record the item that they are /// working on. 
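The doc comment above describes the pattern this helper automates: one dep-graph task per item, named by `dep_node_fn`, plus a read edge from the item's HIR node. A hypothetical call site, sketched on the assumption that `MyPassVisitor` (an invented name) implements the `ItemLikeVisitor` trait added later in this patch, with `TyCtxt`, `DepNode` and the helper itself in scope; `DepNode::CollectItem` is one of the real variants from `dep_node.rs` above:

```rust
// Sketch only, not actual rustc code: wiring a per-item pass into the dep
// graph. The dep-node constructor is passed as `dep_node_fn`, so each item's
// DefId is mapped to the task that owns it before the visitor runs on it.
pub fn run_my_pass<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) {
    let mut visitor = MyPassVisitor { tcx: tcx };
    visit_all_item_likes_in_krate(tcx,
                                  DepNode::CollectItem, // FnMut(DefId) -> DepNode
                                  &mut visitor);
}
```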
-pub fn visit_all_items_in_krate<'a, 'tcx, V, F>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - mut dep_node_fn: F, - visitor: &mut V) - where F: FnMut(DefId) -> DepNode, V: Visitor<'tcx> +pub fn visit_all_item_likes_in_krate<'a, 'tcx, V, F>(tcx: TyCtxt<'a, 'tcx, 'tcx>, + mut dep_node_fn: F, + visitor: &mut V) + where F: FnMut(DefId) -> DepNode, V: ItemLikeVisitor<'tcx> { struct TrackingVisitor<'visit, 'tcx: 'visit, F: 'visit, V: 'visit> { tcx: TyCtxt<'visit, 'tcx, 'tcx>, @@ -33,8 +32,8 @@ pub fn visit_all_items_in_krate<'a, 'tcx, V, F>(tcx: TyCtxt<'a, 'tcx, 'tcx>, visitor: &'visit mut V } - impl<'visit, 'tcx, F, V> Visitor<'tcx> for TrackingVisitor<'visit, 'tcx, F, V> - where F: FnMut(DefId) -> DepNode, V: Visitor<'tcx> + impl<'visit, 'tcx, F, V> ItemLikeVisitor<'tcx> for TrackingVisitor<'visit, 'tcx, F, V> + where F: FnMut(DefId) -> DepNode, V: ItemLikeVisitor<'tcx> { fn visit_item(&mut self, i: &'tcx hir::Item) { let item_def_id = self.tcx.map.local_def_id(i.id); @@ -46,6 +45,17 @@ pub fn visit_all_items_in_krate<'a, 'tcx, V, F>(tcx: TyCtxt<'a, 'tcx, 'tcx>, self.visitor.visit_item(i); debug!("Ended task {:?}", task_id); } + + fn visit_impl_item(&mut self, i: &'tcx hir::ImplItem) { + let impl_item_def_id = self.tcx.map.local_def_id(i.id); + let task_id = (self.dep_node_fn)(impl_item_def_id); + let _task = self.tcx.dep_graph.in_task(task_id.clone()); + debug!("Started task {:?}", task_id); + assert!(!self.tcx.map.is_inlined_def_id(impl_item_def_id)); + self.tcx.dep_graph.read(DepNode::Hir(impl_item_def_id)); + self.visitor.visit_impl_item(i); + debug!("Ended task {:?}", task_id); + } } let krate = tcx.dep_graph.with_ignore(|| tcx.map.krate()); @@ -54,5 +64,5 @@ pub fn visit_all_items_in_krate<'a, 'tcx, V, F>(tcx: TyCtxt<'a, 'tcx, 'tcx>, dep_node_fn: &mut dep_node_fn, visitor: visitor }; - krate.visit_all_items(&mut tracking_visitor) + krate.visit_all_item_likes(&mut tracking_visitor) } diff --git a/src/librustc/diagnostics.rs b/src/librustc/diagnostics.rs index 465a09505e..ec09877ae1 100644 --- a/src/librustc/diagnostics.rs +++ b/src/librustc/diagnostics.rs @@ -672,120 +672,6 @@ extern "C" { ``` "##, -E0269: r##" -A returned value was expected but not all control paths return one. - -Erroneous code example: - -```compile_fail,E0269 -fn abracada_FAIL() -> String { - "this won't work".to_string(); - // error: not all control paths return a value -} -``` - -In the previous code, the function is supposed to return a `String`, however, -the code returns nothing (because of the ';'). Another erroneous code would be: - -```compile_fail -fn abracada_FAIL(b: bool) -> u32 { - if b { - 0 - } else { - "a" // It fails because an `u32` was expected and something else is - // returned. - } -} -``` - -It is advisable to find out what the unhandled cases are and check for them, -returning an appropriate value or panicking if necessary. Check if you need -to remove a semicolon from the last expression, like in the first erroneous -code example. -"##, - -E0270: r##" -Rust lets you define functions which are known to never return, i.e. are -'diverging', by marking its return type as `!`. - -For example, the following functions never return: - -```no_run -fn foo() -> ! { - loop {} -} - -fn bar() -> ! { - foo() // foo() is diverging, so this will diverge too -} - -fn baz() -> ! { - panic!(); // this macro internally expands to a call to a diverging function -} -``` - -Such functions can be used in a place where a value is expected without -returning a value of that type, for instance: - -```no_run -fn foo() -> ! 
{ - loop {} -} - -let x = 3; - -let y = match x { - 1 => 1, - 2 => 4, - _ => foo() // diverging function called here -}; - -println!("{}", y) -``` - -If the third arm of the match block is reached, since `foo()` doesn't ever -return control to the match block, it is fine to use it in a place where an -integer was expected. The `match` block will never finish executing, and any -point where `y` (like the print statement) is needed will not be reached. - -However, if we had a diverging function that actually does finish execution: - -```ignore -fn foo() -> ! { - loop {break;} -} -``` - -Then we would have an unknown value for `y` in the following code: - -```no_run -fn foo() -> ! { - loop {} -} - -let x = 3; - -let y = match x { - 1 => 1, - 2 => 4, - _ => foo() -}; - -println!("{}", y); -``` - -In the previous example, the print statement was never reached when the -wildcard match arm was hit, so we were okay with `foo()` not returning an -integer that we could set to `y`. But in this example, `foo()` actually does -return control, so the print statement will be executed with an uninitialized -value. - -Obviously we cannot have functions which are allowed to be used in such -positions and yet can return control. So, if you are defining a function that -returns `!`, make sure that there is no way for it to actually finish -executing. -"##, - E0271: r##" This is because of a type mismatch between the associated type of some trait (e.g. `T::Bar`, where `T` implements `trait Quux { type Bar; }`) diff --git a/src/librustc/hir/check_attr.rs b/src/librustc/hir/check_attr.rs index 8ba52cdb64..6f5f548aa7 100644 --- a/src/librustc/hir/check_attr.rs +++ b/src/librustc/hir/check_attr.rs @@ -64,7 +64,7 @@ impl<'a> CheckAttrVisitor<'a> { None => continue, }; - let (message, label) = match &*name { + let (message, label) = match &*name.as_str() { "C" => { conflicting_reprs += 1; if target != Target::Struct && @@ -120,7 +120,7 @@ impl<'a> CheckAttrVisitor<'a> { } fn check_attribute(&self, attr: &ast::Attribute, target: Target) { - let name: &str = &attr.name(); + let name: &str = &attr.name().as_str(); match name { "inline" => self.check_inline(attr, target), "repr" => self.check_repr(attr, target), @@ -129,8 +129,8 @@ impl<'a> CheckAttrVisitor<'a> { } } -impl<'a> Visitor for CheckAttrVisitor<'a> { - fn visit_item(&mut self, item: &ast::Item) { +impl<'a> Visitor<'a> for CheckAttrVisitor<'a> { + fn visit_item(&mut self, item: &'a ast::Item) { let target = Target::from_item(item); for attr in &item.attrs { self.check_attribute(attr, target); diff --git a/src/librustc/hir/def.rs b/src/librustc/hir/def.rs index 8b9cee1d2f..b6fce2d6ca 100644 --- a/src/librustc/hir/def.rs +++ b/src/librustc/hir/def.rs @@ -52,6 +52,9 @@ pub enum Def { ast::NodeId), // expr node that creates the closure Label(ast::NodeId), + // Macro namespace + Macro(DefId), + // Both namespaces Err, } @@ -80,14 +83,6 @@ impl PathResolution { PathResolution { base_def: def, depth: 0 } } - /// Get the definition, if fully resolved, otherwise panic. - pub fn full_def(&self) -> Def { - if self.depth != 0 { - bug!("path not fully resolved: {:?}", self); - } - self.base_def - } - pub fn kind_name(&self) -> &'static str { if self.depth != 0 { "associated item" @@ -103,7 +98,7 @@ pub type DefMap = NodeMap; // within. pub type ExportMap = NodeMap>; -#[derive(Copy, Clone, RustcEncodable, RustcDecodable)] +#[derive(Copy, Clone, Debug, RustcEncodable, RustcDecodable)] pub struct Export { pub name: ast::Name, // The name of the target. 
pub def: Def, // The definition of the target. @@ -133,7 +128,7 @@ impl Def { Def::Variant(id) | Def::VariantCtor(id, ..) | Def::Enum(id) | Def::TyAlias(id) | Def::AssociatedTy(id) | Def::TyParam(id) | Def::Struct(id) | Def::StructCtor(id, ..) | Def::Union(id) | Def::Trait(id) | Def::Method(id) | Def::Const(id) | - Def::AssociatedConst(id) | Def::Local(id) | Def::Upvar(id, ..) => { + Def::AssociatedConst(id) | Def::Local(id) | Def::Upvar(id, ..) | Def::Macro(id) => { id } @@ -173,6 +168,7 @@ impl Def { Def::Upvar(..) => "closure capture", Def::Label(..) => "label", Def::SelfTy(..) => "self type", + Def::Macro(..) => "macro", Def::Err => "unresolved item", } } diff --git a/src/librustc/hir/def_id.rs b/src/librustc/hir/def_id.rs index 399243551d..d3771b1755 100644 --- a/src/librustc/hir/def_id.rs +++ b/src/librustc/hir/def_id.rs @@ -34,6 +34,10 @@ impl Idx for CrateNum { /// LOCAL_CRATE in their DefId. pub const LOCAL_CRATE: CrateNum = CrateNum(0); +/// Virtual crate for builtin macros +// FIXME(jseyfried): this is also used for custom derives until proc-macro crates get `CrateNum`s. +pub const BUILTIN_MACROS_CRATE: CrateNum = CrateNum(!0); + impl CrateNum { pub fn new(x: usize) -> CrateNum { assert!(x < (u32::MAX as usize)); diff --git a/src/librustc/hir/intravisit.rs b/src/librustc/hir/intravisit.rs index b1771f52da..625bde2ca8 100644 --- a/src/librustc/hir/intravisit.rs +++ b/src/librustc/hir/intravisit.rs @@ -8,7 +8,15 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -//! HIR walker. Each overridden visit method has full control over what +//! HIR walker for walking the contents of nodes. +//! +//! **For an overview of the visitor strategy, see the docs on the +//! `super::itemlikevisit::ItemLikeVisitor` trait.** +//! +//! If you have decided to use this visitor, here are some general +//! notes on how to do it: +//! +//! Each overridden visit method has full control over what //! happens with its node, it can do its own traversal of the node's children, //! call `intravisit::walk_*` to apply the default traversal algorithm, or prevent //! deeper traversal by doing nothing. @@ -30,6 +38,9 @@ use syntax::ast::{NodeId, CRATE_NODE_ID, Name, Attribute}; use syntax::codemap::Spanned; use syntax_pos::Span; use hir::*; +use hir::def::Def; +use hir::map::Map; +use super::itemlikevisit::DeepVisitor; use std::cmp; use std::u32; @@ -56,6 +67,62 @@ impl<'a> FnKind<'a> { } } +/// Specifies what nested things a visitor wants to visit. The most +/// common choice is `OnlyBodies`, which will cause the visitor to +/// visit fn bodies for fns that it encounters, but skip over nested +/// item-like things. +/// +/// See the comments on `ItemLikeVisitor` for more details on the overall +/// visit strategy. +pub enum NestedVisitorMap<'this, 'tcx: 'this> { + /// Do not visit any nested things. When you add a new + /// "non-nested" thing, you will want to audit such uses to see if + /// they remain valid. + /// + /// Use this if you are only walking some particular kind of tree + /// (i.e., a type, or fn signature) and you don't want to thread a + /// HIR map around. + None, + + /// Do not visit nested item-like things, but visit nested things + /// that are inside of an item-like. + /// + /// **This is the most common choice.** A very commmon pattern is + /// to use `tcx.visit_all_item_likes_in_krate()` as an outer loop, + /// and to have the visitor that visits the contents of each item + /// using this setting. 
+ OnlyBodies(&'this Map<'tcx>), + + /// Visit all nested things, including item-likes. + /// + /// **This is an unusual choice.** It is used when you want to + /// process everything within their lexical context. Typically you + /// kick off the visit by doing `walk_krate()`. + All(&'this Map<'tcx>), +} + +impl<'this, 'tcx> NestedVisitorMap<'this, 'tcx> { + /// Returns the map to use for an "intra item-like" thing (if any). + /// e.g., function body. + pub fn intra(self) -> Option<&'this Map<'tcx>> { + match self { + NestedVisitorMap::None => None, + NestedVisitorMap::OnlyBodies(map) => Some(map), + NestedVisitorMap::All(map) => Some(map), + } + } + + /// Returns the map to use for an "item-like" thing (if any). + /// e.g., item, impl-item. + pub fn inter(self) -> Option<&'this Map<'tcx>> { + match self { + NestedVisitorMap::None => None, + NestedVisitorMap::OnlyBodies(_) => None, + NestedVisitorMap::All(map) => Some(map), + } + } +} + /// Each method of the Visitor trait is a hook to be potentially /// overridden. Each method's default implementation recursively visits /// the substructure of the input via the corresponding `walk` method; @@ -76,27 +143,86 @@ pub trait Visitor<'v> : Sized { /////////////////////////////////////////////////////////////////////////// // Nested items. - /// Invoked when a nested item is encountered. By default, does - /// nothing. If you want a deep walk, you need to override to - /// fetch the item contents. But most of the time, it is easier - /// (and better) to invoke `Crate::visit_all_items`, which visits - /// all items in the crate in some order (but doesn't respect - /// nesting). + /// The default versions of the `visit_nested_XXX` routines invoke + /// this method to get a map to use. By selecting an enum variant, + /// you control which kinds of nested HIR are visited; see + /// `NestedVisitorMap` for details. By "nested HIR", we are + /// referring to bits of HIR that are not directly embedded within + /// one another but rather indirectly, through a table in the + /// crate. This is done to control dependencies during incremental + /// compilation: the non-inline bits of HIR can be tracked and + /// hashed separately. + /// + /// **If for some reason you want the nested behavior, but don't + /// have a `Map` are your disposal:** then you should override the + /// `visit_nested_XXX` methods, and override this method to + /// `panic!()`. This way, if a new `visit_nested_XXX` variant is + /// added in the future, we will see the panic in your code and + /// fix it appropriately. + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'v>; + + /// Invoked when a nested item is encountered. By default does + /// nothing unless you override `nested_visit_map` to return + /// `Some(_)`, in which case it will walk the item. **You probably + /// don't want to override this method** -- instead, override + /// `nested_visit_map` or use the "shallow" or "deep" visit + /// patterns described on `itemlikevisit::ItemLikeVisitor`. The only + /// reason to override this method is if you want a nested pattern + /// but cannot supply a `Map`; see `nested_visit_map` for advice. #[allow(unused_variables)] fn visit_nested_item(&mut self, id: ItemId) { + let opt_item = self.nested_visit_map().inter().map(|map| map.expect_item(id.id)); + if let Some(item) = opt_item { + self.visit_item(item); + } } - /// Visit the top-level item and (optionally) nested items. See + /// Like `visit_nested_item()`, but for impl items. 
See + /// `visit_nested_item()` for advice on when to override this + /// method. + #[allow(unused_variables)] + fn visit_nested_impl_item(&mut self, id: ImplItemId) { + let opt_item = self.nested_visit_map().inter().map(|map| map.impl_item(id)); + if let Some(item) = opt_item { + self.visit_impl_item(item); + } + } + + /// Invoked to visit the body of a function, method or closure. Like + /// visit_nested_item, does nothing by default unless you override + /// `nested_visit_map` to return `Some(_)`, in which case it will walk the + /// body. + fn visit_body(&mut self, id: ExprId) { + let opt_expr = self.nested_visit_map().intra().map(|map| map.expr(id)); + if let Some(expr) = opt_expr { + self.visit_expr(expr); + } + } + + /// Visit the top-level item and (optionally) nested items / impl items. See /// `visit_nested_item` for details. fn visit_item(&mut self, i: &'v Item) { walk_item(self, i) } + /// When invoking `visit_all_item_likes()`, you need to supply an + /// item-like visitor. This method converts a "intra-visit" + /// visitor into an item-like visitor that walks the entire tree. + /// If you use this, you probably don't want to process the + /// contents of nested item-like things, since the outer loop will + /// visit them as well. + fn as_deep_visitor<'s>(&'s mut self) -> DeepVisitor<'s, Self> { + DeepVisitor::new(self) + } + /////////////////////////////////////////////////////////////////////////// fn visit_id(&mut self, _node_id: NodeId) { // Nothing to do. } + fn visit_def_mention(&mut self, _def: Def) { + // Nothing to do. + } fn visit_name(&mut self, _span: Span, _name: Name) { // Nothing to do. } @@ -138,7 +264,7 @@ pub trait Visitor<'v> : Sized { fn visit_where_predicate(&mut self, predicate: &'v WherePredicate) { walk_where_predicate(self, predicate) } - fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl, b: &'v Block, s: Span, id: NodeId) { + fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl, b: ExprId, s: Span, id: NodeId) { walk_fn(self, fk, fd, b, s, id) } fn visit_trait_item(&mut self, ti: &'v TraitItem) { @@ -147,6 +273,9 @@ pub trait Visitor<'v> : Sized { fn visit_impl_item(&mut self, ii: &'v ImplItem) { walk_impl_item(self, ii) } + fn visit_impl_item_ref(&mut self, ii: &'v ImplItemRef) { + walk_impl_item_ref(self, ii) + } fn visit_trait_ref(&mut self, t: &'v TraitRef) { walk_trait_ref(self, t) } @@ -183,12 +312,12 @@ pub trait Visitor<'v> : Sized { fn visit_lifetime_def(&mut self, lifetime: &'v LifetimeDef) { walk_lifetime_def(self, lifetime) } + fn visit_qpath(&mut self, qpath: &'v QPath, id: NodeId, span: Span) { + walk_qpath(self, qpath, id, span) + } fn visit_path(&mut self, path: &'v Path, _id: NodeId) { walk_path(self, path) } - fn visit_path_list_item(&mut self, prefix: &'v Path, item: &'v PathListItem) { - walk_path_list_item(self, prefix, item) - } fn visit_path_segment(&mut self, path_span: Span, path_segment: &'v PathSegment) { walk_path_segment(self, path_span, path_segment) } @@ -206,6 +335,12 @@ pub trait Visitor<'v> : Sized { fn visit_vis(&mut self, vis: &'v Visibility) { walk_vis(self, vis) } + fn visit_associated_item_kind(&mut self, kind: &'v AssociatedItemKind) { + walk_associated_item_kind(self, kind); + } + fn visit_defaultness(&mut self, defaultness: &'v Defaultness) { + walk_defaultness(self, defaultness); + } } pub fn walk_opt_name<'v, V: Visitor<'v>>(visitor: &mut V, span: Span, opt_name: Option) { @@ -282,23 +417,9 @@ pub fn walk_item<'v, V: Visitor<'v>>(visitor: &mut V, item: &'v Item) { visitor.visit_id(item.id); 
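Putting the pieces above together, the common `OnlyBodies` configuration looks roughly like the following sketch (the visitor name and its expression-counting payload are invented, and the `intravisit` items defined above are assumed to be in scope):

```rust
// Sketch: an intra-item visitor that wants function bodies walked (bodies
// are now stored out-of-line, keyed by ExprId) but does not descend into
// nested item-like things.
struct ExprCounter<'a, 'tcx: 'a> {
    map: &'a Map<'tcx>,
    count: usize,
}

impl<'a, 'tcx> Visitor<'tcx> for ExprCounter<'a, 'tcx> {
    fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> {
        // Consulted by the default visit_body/visit_nested_* methods to
        // decide whether, and through which map, to fetch nested HIR.
        NestedVisitorMap::OnlyBodies(self.map)
    }

    fn visit_expr(&mut self, expr: &'tcx Expr) {
        self.count += 1;
        walk_expr(self, expr); // keep walking sub-expressions
    }
}
```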
walk_opt_name(visitor, item.span, opt_name) } - ItemUse(ref vp) => { + ItemUse(ref path, _) => { visitor.visit_id(item.id); - match vp.node { - ViewPathSimple(name, ref path) => { - visitor.visit_name(vp.span, name); - visitor.visit_path(path, item.id); - } - ViewPathGlob(ref path) => { - visitor.visit_path(path, item.id); - } - ViewPathList(ref prefix, ref list) => { - visitor.visit_path(prefix, item.id); - for item in list { - visitor.visit_path_list_item(prefix, item) - } - } - } + visitor.visit_path(path, item.id); } ItemStatic(ref typ, _, ref expr) | ItemConst(ref typ, ref expr) => { @@ -306,7 +427,7 @@ pub fn walk_item<'v, V: Visitor<'v>>(visitor: &mut V, item: &'v Item) { visitor.visit_ty(typ); visitor.visit_expr(expr); } - ItemFn(ref declaration, unsafety, constness, abi, ref generics, ref body) => { + ItemFn(ref declaration, unsafety, constness, abi, ref generics, body_id) => { visitor.visit_fn(FnKind::ItemFn(item.name, generics, unsafety, @@ -315,7 +436,7 @@ pub fn walk_item<'v, V: Visitor<'v>>(visitor: &mut V, item: &'v Item) { &item.vis, &item.attrs), declaration, - body, + body_id, item.span, item.id) } @@ -341,12 +462,14 @@ pub fn walk_item<'v, V: Visitor<'v>>(visitor: &mut V, item: &'v Item) { visitor.visit_id(item.id); visitor.visit_trait_ref(trait_ref) } - ItemImpl(.., ref type_parameters, ref opt_trait_reference, ref typ, ref impl_items) => { + ItemImpl(.., ref type_parameters, ref opt_trait_reference, ref typ, ref impl_item_refs) => { visitor.visit_id(item.id); visitor.visit_generics(type_parameters); walk_list!(visitor, visit_trait_ref, opt_trait_reference); visitor.visit_ty(typ); - walk_list!(visitor, visit_impl_item, impl_items); + for impl_item_ref in impl_item_refs { + visitor.visit_impl_item_ref(impl_item_ref); + } } ItemStruct(ref struct_definition, ref generics) | ItemUnion(ref struct_definition, ref generics) => { @@ -412,11 +535,8 @@ pub fn walk_ty<'v, V: Visitor<'v>>(visitor: &mut V, typ: &'v Ty) { walk_fn_decl(visitor, &function_declaration.decl); walk_list!(visitor, visit_lifetime_def, &function_declaration.lifetimes); } - TyPath(ref maybe_qself, ref path) => { - if let Some(ref qself) = *maybe_qself { - visitor.visit_ty(&qself.ty); - } - visitor.visit_path(path, typ.id); + TyPath(ref qpath) => { + visitor.visit_qpath(qpath, typ.id, typ.span); } TyObjectSum(ref ty, ref bounds) => { visitor.visit_ty(ty); @@ -439,18 +559,26 @@ pub fn walk_ty<'v, V: Visitor<'v>>(visitor: &mut V, typ: &'v Ty) { } } -pub fn walk_path<'v, V: Visitor<'v>>(visitor: &mut V, path: &'v Path) { - for segment in &path.segments { - visitor.visit_path_segment(path.span, segment); +pub fn walk_qpath<'v, V: Visitor<'v>>(visitor: &mut V, qpath: &'v QPath, id: NodeId, span: Span) { + match *qpath { + QPath::Resolved(ref maybe_qself, ref path) => { + if let Some(ref qself) = *maybe_qself { + visitor.visit_ty(qself); + } + visitor.visit_path(path, id) + } + QPath::TypeRelative(ref qself, ref segment) => { + visitor.visit_ty(qself); + visitor.visit_path_segment(span, segment); + } } } -pub fn walk_path_list_item<'v, V>(visitor: &mut V, _prefix: &'v Path, item: &'v PathListItem) - where V: Visitor<'v>, -{ - visitor.visit_id(item.node.id); - visitor.visit_name(item.span, item.node.name); - walk_opt_name(visitor, item.span, item.node.rename); +pub fn walk_path<'v, V: Visitor<'v>>(visitor: &mut V, path: &'v Path) { + visitor.visit_def_mention(path.def); + for segment in &path.segments { + visitor.visit_path_segment(path.span, segment); + } } pub fn walk_path_segment<'v, V: Visitor<'v>>(visitor: &mut 
V, @@ -486,18 +614,15 @@ pub fn walk_assoc_type_binding<'v, V: Visitor<'v>>(visitor: &mut V, pub fn walk_pat<'v, V: Visitor<'v>>(visitor: &mut V, pattern: &'v Pat) { visitor.visit_id(pattern.id); match pattern.node { - PatKind::TupleStruct(ref path, ref children, _) => { - visitor.visit_path(path, pattern.id); + PatKind::TupleStruct(ref qpath, ref children, _) => { + visitor.visit_qpath(qpath, pattern.id, pattern.span); walk_list!(visitor, visit_pat, children); } - PatKind::Path(ref opt_qself, ref path) => { - if let Some(ref qself) = *opt_qself { - visitor.visit_ty(&qself.ty); - } - visitor.visit_path(path, pattern.id) + PatKind::Path(ref qpath) => { + visitor.visit_qpath(qpath, pattern.id, pattern.span); } - PatKind::Struct(ref path, ref fields, _) => { - visitor.visit_path(path, pattern.id); + PatKind::Struct(ref qpath, ref fields, _) => { + visitor.visit_qpath(qpath, pattern.id, pattern.span); for field in fields { visitor.visit_name(field.span, field.node.name); visitor.visit_pat(&field.node.pat) @@ -510,7 +635,8 @@ pub fn walk_pat<'v, V: Visitor<'v>>(visitor: &mut V, pattern: &'v Pat) { PatKind::Ref(ref subpattern, _) => { visitor.visit_pat(subpattern) } - PatKind::Binding(_, ref pth1, ref optional_subpattern) => { + PatKind::Binding(_, def_id, ref pth1, ref optional_subpattern) => { + visitor.visit_def_mention(Def::Local(def_id)); visitor.visit_name(pth1.span, pth1.node); walk_list!(visitor, visit_pat, optional_subpattern); } @@ -635,13 +761,25 @@ pub fn walk_fn_kind<'v, V: Visitor<'v>>(visitor: &mut V, function_kind: FnKind<' pub fn walk_fn<'v, V: Visitor<'v>>(visitor: &mut V, function_kind: FnKind<'v>, function_declaration: &'v FnDecl, - function_body: &'v Block, + body_id: ExprId, _span: Span, id: NodeId) { visitor.visit_id(id); walk_fn_decl(visitor, function_declaration); walk_fn_kind(visitor, function_kind); - visitor.visit_block(function_body) + visitor.visit_body(body_id) +} + +pub fn walk_fn_with_body<'v, V: Visitor<'v>>(visitor: &mut V, + function_kind: FnKind<'v>, + function_declaration: &'v FnDecl, + body: &'v Expr, + _span: Span, + id: NodeId) { + visitor.visit_id(id); + walk_fn_decl(visitor, function_declaration); + walk_fn_kind(visitor, function_kind); + visitor.visit_expr(body) } pub fn walk_trait_item<'v, V: Visitor<'v>>(visitor: &mut V, trait_item: &'v TraitItem) { @@ -658,13 +796,13 @@ pub fn walk_trait_item<'v, V: Visitor<'v>>(visitor: &mut V, trait_item: &'v Trai visitor.visit_generics(&sig.generics); walk_fn_decl(visitor, &sig.decl); } - MethodTraitItem(ref sig, Some(ref body)) => { + MethodTraitItem(ref sig, Some(body_id)) => { visitor.visit_fn(FnKind::Method(trait_item.name, sig, None, &trait_item.attrs), &sig.decl, - body, + body_id, trait_item.span, trait_item.id); } @@ -677,22 +815,26 @@ pub fn walk_trait_item<'v, V: Visitor<'v>>(visitor: &mut V, trait_item: &'v Trai } pub fn walk_impl_item<'v, V: Visitor<'v>>(visitor: &mut V, impl_item: &'v ImplItem) { - visitor.visit_vis(&impl_item.vis); - visitor.visit_name(impl_item.span, impl_item.name); - walk_list!(visitor, visit_attribute, &impl_item.attrs); - match impl_item.node { + // NB: Deliberately force a compilation error if/when new fields are added. 
+ let ImplItem { id: _, name, ref vis, ref defaultness, ref attrs, ref node, span } = *impl_item; + + visitor.visit_name(span, name); + visitor.visit_vis(vis); + visitor.visit_defaultness(defaultness); + walk_list!(visitor, visit_attribute, attrs); + match *node { ImplItemKind::Const(ref ty, ref expr) => { visitor.visit_id(impl_item.id); visitor.visit_ty(ty); visitor.visit_expr(expr); } - ImplItemKind::Method(ref sig, ref body) => { + ImplItemKind::Method(ref sig, body_id) => { visitor.visit_fn(FnKind::Method(impl_item.name, sig, Some(&impl_item.vis), &impl_item.attrs), &sig.decl, - body, + body_id, impl_item.span, impl_item.id); } @@ -703,6 +845,17 @@ pub fn walk_impl_item<'v, V: Visitor<'v>>(visitor: &mut V, impl_item: &'v ImplIt } } +pub fn walk_impl_item_ref<'v, V: Visitor<'v>>(visitor: &mut V, impl_item_ref: &'v ImplItemRef) { + // NB: Deliberately force a compilation error if/when new fields are added. + let ImplItemRef { id, name, ref kind, span, ref vis, ref defaultness } = *impl_item_ref; + visitor.visit_nested_impl_item(id); + visitor.visit_name(span, name); + visitor.visit_associated_item_kind(kind); + visitor.visit_vis(vis); + visitor.visit_defaultness(defaultness); +} + + pub fn walk_struct_def<'v, V: Visitor<'v>>(visitor: &mut V, struct_definition: &'v VariantData) { visitor.visit_id(struct_definition.id()); walk_list!(visitor, visit_struct_field, struct_definition.fields()); @@ -756,8 +909,8 @@ pub fn walk_expr<'v, V: Visitor<'v>>(visitor: &mut V, expression: &'v Expr) { visitor.visit_expr(element); visitor.visit_expr(count) } - ExprStruct(ref path, ref fields, ref optional_base) => { - visitor.visit_path(path, expression.id); + ExprStruct(ref qpath, ref fields, ref optional_base) => { + visitor.visit_qpath(qpath, expression.id, expression.span); for field in fields { visitor.visit_name(field.name.span, field.name.node); visitor.visit_expr(&field.expr) @@ -798,7 +951,7 @@ pub fn walk_expr<'v, V: Visitor<'v>>(visitor: &mut V, expression: &'v Expr) { visitor.visit_block(block); walk_opt_sp_name(visitor, opt_sp_name); } - ExprLoop(ref block, ref opt_sp_name) => { + ExprLoop(ref block, ref opt_sp_name, _) => { visitor.visit_block(block); walk_opt_sp_name(visitor, opt_sp_name); } @@ -806,7 +959,7 @@ pub fn walk_expr<'v, V: Visitor<'v>>(visitor: &mut V, expression: &'v Expr) { visitor.visit_expr(subexpression); walk_list!(visitor, visit_arm, arms); } - ExprClosure(_, ref function_declaration, ref body, _fn_decl_span) => { + ExprClosure(_, ref function_declaration, body, _fn_decl_span) => { visitor.visit_fn(FnKind::Closure(&expression.attrs), function_declaration, body, @@ -833,14 +986,21 @@ pub fn walk_expr<'v, V: Visitor<'v>>(visitor: &mut V, expression: &'v Expr) { visitor.visit_expr(main_expression); visitor.visit_expr(index_expression) } - ExprPath(ref maybe_qself, ref path) => { - if let Some(ref qself) = *maybe_qself { - visitor.visit_ty(&qself.ty); - } - visitor.visit_path(path, expression.id) + ExprPath(ref qpath) => { + visitor.visit_qpath(qpath, expression.id, expression.span); } - ExprBreak(ref opt_sp_name) | ExprAgain(ref opt_sp_name) => { - walk_opt_sp_name(visitor, opt_sp_name); + ExprBreak(None, ref opt_expr) => { + walk_list!(visitor, visit_expr, opt_expr); + } + ExprBreak(Some(label), ref opt_expr) => { + visitor.visit_def_mention(Def::Label(label.loop_id)); + visitor.visit_name(label.span, label.name); + walk_list!(visitor, visit_expr, opt_expr); + } + ExprAgain(None) => {} + ExprAgain(Some(label)) => { + visitor.visit_def_mention(Def::Label(label.loop_id)); + 
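`ExprBreak` and `ExprAgain` labels now carry the `NodeId` of the loop they target (`label.loop_id`, resolved during lowering), and the walker reports it through `visit_def_mention(Def::Label(..))`. In surface Rust this is the labeled form (illustration only):

```rust
// `break 'rows` targets the outer loop; after lowering, the label stores
// that loop's NodeId directly, so the visitor above can report
// Def::Label(loop_id) without consulting a separate def map.
fn find_pair(grid: &[Vec<i32>], needle: i32) -> Option<(usize, usize)> {
    let mut hit = None;
    'rows: for (i, row) in grid.iter().enumerate() {
        for (j, &v) in row.iter().enumerate() {
            if v == needle {
                hit = Some((i, j));
                break 'rows;
            }
        }
    }
    hit
}
```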
visitor.visit_name(label.span, label.name); } ExprRet(ref optional_expression) => { walk_list!(visitor, visit_expr, optional_expression); @@ -872,6 +1032,18 @@ pub fn walk_vis<'v, V: Visitor<'v>>(visitor: &mut V, vis: &'v Visibility) { } } +pub fn walk_associated_item_kind<'v, V: Visitor<'v>>(_: &mut V, _: &'v AssociatedItemKind) { + // No visitable content here: this fn exists so you can call it if + // the right thing to do, should content be added in the future, + // would be to walk it. +} + +pub fn walk_defaultness<'v, V: Visitor<'v>>(_: &mut V, _: &'v Defaultness) { + // No visitable content here: this fn exists so you can call it if + // the right thing to do, should content be added in the future, + // would be to walk it. +} + #[derive(Copy, Clone, RustcEncodable, RustcDecodable, Debug, PartialEq, Eq)] pub struct IdRange { pub min: NodeId, @@ -902,13 +1074,14 @@ impl IdRange { } -pub struct IdRangeComputingVisitor { - pub result: IdRange, +pub struct IdRangeComputingVisitor<'a, 'ast: 'a> { + result: IdRange, + map: &'a map::Map<'ast>, } -impl IdRangeComputingVisitor { - pub fn new() -> IdRangeComputingVisitor { - IdRangeComputingVisitor { result: IdRange::max() } +impl<'a, 'ast> IdRangeComputingVisitor<'a, 'ast> { + pub fn new(map: &'a map::Map<'ast>) -> IdRangeComputingVisitor<'a, 'ast> { + IdRangeComputingVisitor { result: IdRange::max(), map: map } } pub fn result(&self) -> IdRange { @@ -916,20 +1089,25 @@ impl IdRangeComputingVisitor { } } -impl<'v> Visitor<'v> for IdRangeComputingVisitor { +impl<'a, 'ast> Visitor<'ast> for IdRangeComputingVisitor<'a, 'ast> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + NestedVisitorMap::OnlyBodies(&self.map) + } + fn visit_id(&mut self, id: NodeId) { self.result.add(id); } } /// Computes the id range for a single fn body, ignoring nested items. -pub fn compute_id_range_for_fn_body(fk: FnKind, - decl: &FnDecl, - body: &Block, - sp: Span, - id: NodeId) - -> IdRange { - let mut visitor = IdRangeComputingVisitor::new(); - visitor.visit_fn(fk, decl, body, sp, id); +pub fn compute_id_range_for_fn_body<'v>(fk: FnKind<'v>, + decl: &'v FnDecl, + body: &'v Expr, + sp: Span, + id: NodeId, + map: &map::Map<'v>) + -> IdRange { + let mut visitor = IdRangeComputingVisitor::new(map); + walk_fn_with_body(&mut visitor, fk, decl, body, sp, id); visitor.result() } diff --git a/src/librustc/hir/itemlikevisit.rs b/src/librustc/hir/itemlikevisit.rs new file mode 100644 index 0000000000..71ef713144 --- /dev/null +++ b/src/librustc/hir/itemlikevisit.rs @@ -0,0 +1,86 @@ +// Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use super::{Item, ImplItem}; +use super::intravisit::Visitor; + +/// The "item-like visitor" visitor defines only the top-level methods +/// that can be invoked by `Crate::visit_all_item_likes()`. Whether +/// this trait is the right one to implement will depend on the +/// overall pattern you need. Here are the three available patterns, +/// in roughly the order of desirability: +/// +/// 1. **Shallow visit**: Get a simple callback for every item (or item-like thing) in the HIR. +/// - Example: find all items with a `#[foo]` attribute on them. 
+/// - How: Implement `ItemLikeVisitor` and call `tcx.visit_all_item_likes_in_krate()`. +/// - Pro: Efficient; just walks the lists of item-like things, not the nodes themselves. +/// - Pro: Integrates well into dependency tracking. +/// - Con: Don't get information about nesting +/// - Con: Don't have methods for specific bits of HIR, like "on +/// every expr, do this". +/// 2. **Deep visit**: Want to scan for specific kinds of HIR nodes within +/// an item, but don't care about how item-like things are nested +/// within one another. +/// - Example: Examine each expression to look for its type and do some check or other. +/// - How: Implement `intravisit::Visitor` and use +/// `tcx.visit_all_item_likes_in_krate(visitor.as_deep_visitor())`. Within +/// your `intravisit::Visitor` impl, implement methods like +/// `visit_expr()`; don't forget to invoke +/// `intravisit::walk_visit_expr()` to keep walking the subparts. +/// - Pro: Visitor methods for any kind of HIR node, not just item-like things. +/// - Pro: Integrates well into dependency tracking. +/// - Con: Don't get information about nesting between items +/// 3. **Nested visit**: Want to visit the whole HIR and you care about the nesting between +/// item-like things. +/// - Example: Lifetime resolution, which wants to bring lifetimes declared on the +/// impl into scope while visiting the impl-items, and then back out again. +/// - How: Implement `intravisit::Visitor` and override the +/// `visit_nested_map()` methods to return +/// `NestedVisitorMap::All`. Walk your crate with +/// `intravisit::walk_crate()` invoked on `tcx.map.krate()`. +/// - Pro: Visitor methods for any kind of HIR node, not just item-like things. +/// - Pro: Preserves nesting information +/// - Con: Does not integrate well into dependency tracking. +/// +/// Note: the methods of `ItemLikeVisitor` intentionally have no +/// defaults, so that as we expand the list of item-like things, we +/// revisit the various visitors to see if they need to change. This +/// is harder to do with `intravisit::Visitor`, so when you add a new +/// `visit_nested_foo()` method, it is recommended that you search for +/// existing `fn visit_nested` methods to see where changes are +/// needed. 
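A concrete sketch of the "shallow visit" pattern (pattern 1 above): the `#[foo]` attribute and the collector type are invented for illustration, the `Item`/`ImplItem` types from this module are assumed to be in scope, and the pass would be driven from `tcx.visit_all_item_likes_in_krate()` as the "How" bullet describes.

```rust
// Sketch of pattern 1: a flat per-item callback with no nesting information.
struct FooCollector<'hir> {
    found: Vec<&'hir Item>,
}

impl<'hir> ItemLikeVisitor<'hir> for FooCollector<'hir> {
    fn visit_item(&mut self, item: &'hir Item) {
        // Attribute matching done the way other code in this patch does it,
        // via `attr.check_name(..)`.
        if item.attrs.iter().any(|attr| attr.check_name("foo")) {
            self.found.push(item);
        }
    }

    fn visit_impl_item(&mut self, _impl_item: &'hir ImplItem) {
        // Nothing to collect here in this sketch, but the method still has
        // to be written out: the trait deliberately provides no defaults.
    }
}
```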
+pub trait ItemLikeVisitor<'hir> { + fn visit_item(&mut self, item: &'hir Item); + fn visit_impl_item(&mut self, impl_item: &'hir ImplItem); +} + +pub struct DeepVisitor<'v, V: 'v> { + visitor: &'v mut V, +} + +impl<'v, 'hir, V> DeepVisitor<'v, V> + where V: Visitor<'hir> + 'v +{ + pub fn new(base: &'v mut V) -> Self { + DeepVisitor { visitor: base } + } +} + +impl<'v, 'hir, V> ItemLikeVisitor<'hir> for DeepVisitor<'v, V> + where V: Visitor<'hir> +{ + fn visit_item(&mut self, item: &'hir Item) { + self.visitor.visit_item(item); + } + + fn visit_impl_item(&mut self, impl_item: &'hir ImplItem) { + self.visitor.visit_impl_item(impl_item); + } +} diff --git a/src/librustc/hir/lowering.rs b/src/librustc/hir/lowering.rs index e1fec898e4..74876eb59e 100644 --- a/src/librustc/hir/lowering.rs +++ b/src/librustc/hir/lowering.rs @@ -46,15 +46,20 @@ use hir::map::definitions::DefPathData; use hir::def_id::{DefIndex, DefId}; use hir::def::{Def, PathResolution}; use session::Session; +use util::nodemap::NodeMap; +use rustc_data_structures::fnv::FnvHashMap; use std::collections::BTreeMap; use std::iter; +use std::mem; + use syntax::ast::*; use syntax::errors; use syntax::ptr::P; -use syntax::codemap::{respan, Spanned}; -use syntax::parse::token; +use syntax::codemap::{self, respan, Spanned}; use syntax::std_inject; +use syntax::symbol::{Symbol, keywords}; +use syntax::util::small_vector::SmallVector; use syntax::visit::{self, Visitor}; use syntax_pos::Span; @@ -66,19 +71,22 @@ pub struct LoweringContext<'a> { // the form of a DefIndex) so that if we create a new node which introduces // a definition, then we can properly create the def id. parent_def: Option, + exprs: FnvHashMap, resolver: &'a mut Resolver, + + /// The items being lowered are collected here. + items: BTreeMap, + + impl_items: BTreeMap, } pub trait Resolver { // Resolve a global hir path generated by the lowerer when expanding `for`, `if let`, etc. - fn resolve_generated_global_path(&mut self, path: &hir::Path, is_value: bool) -> Def; + fn resolve_hir_path(&mut self, path: &mut hir::Path, is_value: bool); // Obtain the resolution for a node id fn get_resolution(&mut self, id: NodeId) -> Option; - // Record the resolution of a path or binding generated by the lowerer when expanding. - fn record_resolution(&mut self, id: NodeId, def: Def); - // We must keep the set of definitions up to date as we add nodes that weren't in the AST. // This should only return `None` during testing. fn definitions(&mut self) -> &mut Definitions; @@ -97,49 +105,100 @@ pub fn lower_crate(sess: &Session, crate_root: std_inject::injected_crate_name(krate), sess: sess, parent_def: None, + exprs: FnvHashMap(), resolver: resolver, + items: BTreeMap::new(), + impl_items: BTreeMap::new(), }.lower_crate(krate) } +#[derive(Copy, Clone, PartialEq, Eq)] +enum ParamMode { + /// Any path in a type context. + Explicit, + /// The `module::Type` in `module::Type::method` in an expression. 
+ Optional +} + impl<'a> LoweringContext<'a> { - fn lower_crate(&mut self, c: &Crate) -> hir::Crate { + fn lower_crate(mut self, c: &Crate) -> hir::Crate { + self.lower_items(c); + let module = self.lower_mod(&c.module); + let attrs = self.lower_attrs(&c.attrs); + let exported_macros = c.exported_macros.iter().map(|m| self.lower_macro_def(m)).collect(); + + hir::Crate { + module: module, + attrs: attrs, + span: c.span, + exported_macros: exported_macros, + items: self.items, + impl_items: self.impl_items, + exprs: mem::replace(&mut self.exprs, FnvHashMap()), + } + } + + fn lower_items(&mut self, c: &Crate) { struct ItemLowerer<'lcx, 'interner: 'lcx> { - items: BTreeMap, lctx: &'lcx mut LoweringContext<'interner>, } - impl<'lcx, 'interner> Visitor for ItemLowerer<'lcx, 'interner> { - fn visit_item(&mut self, item: &Item) { - self.items.insert(item.id, self.lctx.lower_item(item)); + impl<'lcx, 'interner> Visitor<'lcx> for ItemLowerer<'lcx, 'interner> { + fn visit_item(&mut self, item: &'lcx Item) { + let hir_item = self.lctx.lower_item(item); + self.lctx.items.insert(item.id, hir_item); visit::walk_item(self, item); } + + fn visit_impl_item(&mut self, item: &'lcx ImplItem) { + let id = self.lctx.lower_impl_item_ref(item).id; + let hir_item = self.lctx.lower_impl_item(item); + self.lctx.impl_items.insert(id, hir_item); + visit::walk_impl_item(self, item); + } } - let items = { - let mut item_lowerer = ItemLowerer { items: BTreeMap::new(), lctx: self }; - visit::walk_crate(&mut item_lowerer, c); - item_lowerer.items - }; + let mut item_lowerer = ItemLowerer { lctx: self }; + visit::walk_crate(&mut item_lowerer, c); + } - hir::Crate { - module: self.lower_mod(&c.module), - attrs: self.lower_attrs(&c.attrs), - span: c.span, - exported_macros: c.exported_macros.iter().map(|m| self.lower_macro_def(m)).collect(), - items: items, - } + fn record_expr(&mut self, expr: hir::Expr) -> hir::ExprId { + let id = hir::ExprId(expr.id); + self.exprs.insert(id, expr); + id } fn next_id(&self) -> NodeId { self.sess.next_node_id() } + fn expect_full_def(&mut self, id: NodeId) -> Def { + self.resolver.get_resolution(id).map_or(Def::Err, |pr| { + if pr.depth != 0 { + bug!("path not fully resolved: {:?}", pr); + } + pr.base_def + }) + } + fn diagnostic(&self) -> &errors::Handler { self.sess.diagnostic() } fn str_to_ident(&self, s: &'static str) -> Name { - token::gensym(s) + Symbol::gensym(s) + } + + fn allow_internal_unstable(&self, reason: &'static str, mut span: Span) -> Span { + span.expn_id = self.sess.codemap().record_expansion(codemap::ExpnInfo { + call_site: span, + callee: codemap::NameAndSpan { + format: codemap::CompilerDesugaring(Symbol::intern(reason)), + span: Some(span), + allow_internal_unstable: true, + }, + }); + span } fn with_parent_def(&mut self, parent_id: NodeId, f: F) -> T @@ -161,47 +220,29 @@ impl<'a> LoweringContext<'a> { o_id.map(|sp_ident| respan(sp_ident.span, sp_ident.node.name)) } - fn lower_attrs(&mut self, attrs: &Vec) -> hir::HirVec { - attrs.clone().into() - } - - fn lower_view_path(&mut self, view_path: &ViewPath) -> P { - P(Spanned { - node: match view_path.node { - ViewPathSimple(ident, ref path) => { - hir::ViewPathSimple(ident.name, self.lower_path(path)) + fn lower_label(&mut self, id: NodeId, label: Option>) -> Option { + label.map(|sp_ident| { + hir::Label { + span: sp_ident.span, + name: sp_ident.node.name, + loop_id: match self.expect_full_def(id) { + Def::Label(loop_id) => loop_id, + _ => DUMMY_NODE_ID } - ViewPathGlob(ref path) => { - 
hir::ViewPathGlob(self.lower_path(path)) - } - ViewPathList(ref path, ref path_list_idents) => { - hir::ViewPathList(self.lower_path(path), - path_list_idents.iter() - .map(|item| self.lower_path_list_item(item)) - .collect()) - } - }, - span: view_path.span, + } }) } - fn lower_path_list_item(&mut self, path_list_ident: &PathListItem) -> hir::PathListItem { - Spanned { - node: hir::PathListItem_ { - id: path_list_ident.node.id, - name: path_list_ident.node.name.name, - rename: path_list_ident.node.rename.map(|rename| rename.name), - }, - span: path_list_ident.span, - } + fn lower_attrs(&mut self, attrs: &Vec) -> hir::HirVec { + attrs.clone().into() } fn lower_arm(&mut self, arm: &Arm) -> hir::Arm { hir::Arm { attrs: self.lower_attrs(&arm.attrs), pats: arm.pats.iter().map(|x| self.lower_pat(x)).collect(), - guard: arm.guard.as_ref().map(|ref x| self.lower_expr(x)), - body: self.lower_expr(&arm.body), + guard: arm.guard.as_ref().map(|ref x| P(self.lower_expr(x))), + body: P(self.lower_expr(&arm.body)), } } @@ -240,22 +281,16 @@ impl<'a> LoweringContext<'a> { return self.lower_ty(ty); } TyKind::Path(ref qself, ref path) => { - let qself = qself.as_ref().map(|&QSelf { ref ty, position }| { - hir::QSelf { - ty: self.lower_ty(ty), - position: position, - } - }); - hir::TyPath(qself, self.lower_path(path)) + hir::TyPath(self.lower_qpath(t.id, qself, path, ParamMode::Explicit)) } TyKind::ObjectSum(ref ty, ref bounds) => { hir::TyObjectSum(self.lower_ty(ty), self.lower_bounds(bounds)) } TyKind::Array(ref ty, ref e) => { - hir::TyArray(self.lower_ty(ty), self.lower_expr(e)) + hir::TyArray(self.lower_ty(ty), P(self.lower_expr(e))) } TyKind::Typeof(ref expr) => { - hir::TyTypeof(self.lower_expr(expr)) + hir::TyTypeof(P(self.lower_expr(expr))) } TyKind::PolyTraitRef(ref bounds) => { hir::TyPolyTraitRef(self.lower_bounds(bounds)) @@ -282,44 +317,146 @@ impl<'a> LoweringContext<'a> { name: v.node.name.name, attrs: self.lower_attrs(&v.node.attrs), data: self.lower_variant_data(&v.node.data), - disr_expr: v.node.disr_expr.as_ref().map(|e| self.lower_expr(e)), + disr_expr: v.node.disr_expr.as_ref().map(|e| P(self.lower_expr(e))), }, span: v.span, } } - fn lower_path(&mut self, p: &Path) -> hir::Path { + fn lower_qpath(&mut self, + id: NodeId, + qself: &Option, + p: &Path, + param_mode: ParamMode) + -> hir::QPath { + let qself_position = qself.as_ref().map(|q| q.position); + let qself = qself.as_ref().map(|q| self.lower_ty(&q.ty)); + + let resolution = self.resolver.get_resolution(id) + .unwrap_or(PathResolution::new(Def::Err)); + + let proj_start = p.segments.len() - resolution.depth; + let path = P(hir::Path { + global: p.global, + def: resolution.base_def, + segments: p.segments[..proj_start].iter().enumerate().map(|(i, segment)| { + let param_mode = match (qself_position, param_mode) { + (Some(j), ParamMode::Optional) if i < j => { + // This segment is part of the trait path in a + // qualified path - one of `a`, `b` or `Trait` + // in `::T::U::method`. + ParamMode::Explicit + } + _ => param_mode + }; + self.lower_path_segment(segment, param_mode) + }).collect(), + span: p.span, + }); + + // Simple case, either no projections, or only fully-qualified. + // E.g. `std::mem::size_of` or `::Item`. + if resolution.depth == 0 { + return hir::QPath::Resolved(qself, path); + } + + // Create the innermost type that we're projecting from. + let mut ty = if path.segments.is_empty() { + // If the base path is empty that means there exists a + // syntactical `Self`, e.g. `&i32` in `<&i32>::clone`. 
+ qself.expect("missing QSelf for ::...") + } else { + // Otherwise, the base path is an implicit `Self` type path, + // e.g. `Vec` in `Vec::new` or `::Item` in + // `::Item::default`. + self.ty(p.span, hir::TyPath(hir::QPath::Resolved(qself, path))) + }; + + // Anything after the base path are associated "extensions", + // out of which all but the last one are associated types, + // e.g. for `std::vec::Vec::::IntoIter::Item::clone`: + // * base path is `std::vec::Vec` + // * "extensions" are `IntoIter`, `Item` and `clone` + // * type nodes are: + // 1. `std::vec::Vec` (created above) + // 2. `>::IntoIter` + // 3. `<>::IntoIter>::Item` + // * final path is `<<>::IntoIter>::Item>::clone` + for (i, segment) in p.segments.iter().enumerate().skip(proj_start) { + let segment = P(self.lower_path_segment(segment, param_mode)); + let qpath = hir::QPath::TypeRelative(ty, segment); + + // It's finished, return the extension of the right node type. + if i == p.segments.len() - 1 { + return qpath; + } + + // Wrap the associated extension in another type node. + ty = self.ty(p.span, hir::TyPath(qpath)); + } + + // Should've returned in the for loop above. + span_bug!(p.span, "lower_qpath: no final extension segment in {}..{}", + proj_start, p.segments.len()) + } + + fn lower_path_extra(&mut self, + id: NodeId, + p: &Path, + name: Option, + param_mode: ParamMode) + -> hir::Path { hir::Path { global: p.global, - segments: p.segments - .iter() - .map(|&PathSegment { identifier, ref parameters }| { - hir::PathSegment { - name: identifier.name, - parameters: self.lower_path_parameters(parameters), - } - }) - .collect(), + def: self.expect_full_def(id), + segments: p.segments.iter().map(|segment| { + self.lower_path_segment(segment, param_mode) + }).chain(name.map(|name| { + hir::PathSegment { + name: name, + parameters: hir::PathParameters::none() + } + })).collect(), span: p.span, } } - fn lower_path_parameters(&mut self, path_parameters: &PathParameters) -> hir::PathParameters { - match *path_parameters { - PathParameters::AngleBracketed(ref data) => - hir::AngleBracketedParameters(self.lower_angle_bracketed_parameter_data(data)), + fn lower_path(&mut self, + id: NodeId, + p: &Path, + param_mode: ParamMode) + -> hir::Path { + self.lower_path_extra(id, p, None, param_mode) + } + + fn lower_path_segment(&mut self, + segment: &PathSegment, + param_mode: ParamMode) + -> hir::PathSegment { + let parameters = match segment.parameters { + PathParameters::AngleBracketed(ref data) => { + let data = self.lower_angle_bracketed_parameter_data(data, param_mode); + hir::AngleBracketedParameters(data) + } PathParameters::Parenthesized(ref data) => hir::ParenthesizedParameters(self.lower_parenthesized_parameter_data(data)), + }; + + hir::PathSegment { + name: segment.identifier.name, + parameters: parameters, } } fn lower_angle_bracketed_parameter_data(&mut self, - data: &AngleBracketedParameterData) + data: &AngleBracketedParameterData, + param_mode: ParamMode) -> hir::AngleBracketedParameterData { let &AngleBracketedParameterData { ref lifetimes, ref types, ref bindings } = data; hir::AngleBracketedParameterData { lifetimes: self.lower_lifetimes(lifetimes), types: types.iter().map(|ty| self.lower_ty(ty)).collect(), + infer_types: types.is_empty() && param_mode == ParamMode::Optional, bindings: bindings.iter().map(|b| self.lower_ty_binding(b)).collect(), } } @@ -340,7 +477,7 @@ impl<'a> LoweringContext<'a> { id: l.id, ty: l.ty.as_ref().map(|t| self.lower_ty(t)), pat: self.lower_pat(&l.pat), - init: 
l.init.as_ref().map(|e| self.lower_expr(e)), + init: l.init.as_ref().map(|e| P(self.lower_expr(e))), span: l.span, attrs: l.attrs.clone(), }) @@ -384,28 +521,36 @@ impl<'a> LoweringContext<'a> { } } - fn lower_ty_param(&mut self, tp: &TyParam) -> hir::TyParam { + fn lower_ty_param(&mut self, tp: &TyParam, add_bounds: &[TyParamBound]) -> hir::TyParam { let mut name = tp.ident.name; // Don't expose `Self` (recovered "keyword used as ident" parse error). // `rustc::ty` expects `Self` to be only used for a trait's `Self`. // Instead, use gensym("Self") to create a distinct name that looks the same. - if name == token::keywords::SelfType.name() { - name = token::gensym("Self"); + if name == keywords::SelfType.name() { + name = Symbol::gensym("Self"); + } + + let mut bounds = self.lower_bounds(&tp.bounds); + if !add_bounds.is_empty() { + bounds = bounds.into_iter().chain(self.lower_bounds(add_bounds).into_iter()).collect(); } hir::TyParam { id: tp.id, name: name, - bounds: self.lower_bounds(&tp.bounds), + bounds: bounds, default: tp.default.as_ref().map(|x| self.lower_ty(x)), span: tp.span, pure_wrt_drop: tp.attrs.iter().any(|attr| attr.check_name("may_dangle")), } } - fn lower_ty_params(&mut self, tps: &P<[TyParam]>) -> hir::HirVec { - tps.iter().map(|tp| self.lower_ty_param(tp)).collect() + fn lower_ty_params(&mut self, tps: &P<[TyParam]>, add_bounds: &NodeMap>) + -> hir::HirVec { + tps.iter().map(|tp| { + self.lower_ty_param(tp, add_bounds.get(&tp.id).map_or(&[][..], |x| &x)) + }).collect() } fn lower_lifetime(&mut self, l: &Lifetime) -> hir::Lifetime { @@ -437,8 +582,47 @@ impl<'a> LoweringContext<'a> { } fn lower_generics(&mut self, g: &Generics) -> hir::Generics { + // Collect `?Trait` bounds in where clause and move them to parameter definitions. + let mut add_bounds = NodeMap(); + for pred in &g.where_clause.predicates { + if let WherePredicate::BoundPredicate(ref bound_pred) = *pred { + 'next_bound: for bound in &bound_pred.bounds { + if let TraitTyParamBound(_, TraitBoundModifier::Maybe) = *bound { + let report_error = |this: &mut Self| { + this.diagnostic().span_err(bound_pred.bounded_ty.span, + "`?Trait` bounds are only permitted at the \ + point where a type parameter is declared"); + }; + // Check if the where clause type is a plain type parameter. 
+ match bound_pred.bounded_ty.node { + TyKind::Path(None, ref path) + if !path.global && path.segments.len() == 1 && + bound_pred.bound_lifetimes.is_empty() => { + if let Some(Def::TyParam(def_id)) = + self.resolver.get_resolution(bound_pred.bounded_ty.id) + .map(|d| d.base_def) { + if let Some(node_id) = + self.resolver.definitions().as_local_node_id(def_id) { + for ty_param in &g.ty_params { + if node_id == ty_param.id { + add_bounds.entry(ty_param.id).or_insert(Vec::new()) + .push(bound.clone()); + continue 'next_bound; + } + } + } + } + report_error(self) + } + _ => report_error(self) + } + } + } + } + } + hir::Generics { - ty_params: self.lower_ty_params(&g.ty_params), + ty_params: self.lower_ty_params(&g.ty_params, &add_bounds), lifetimes: self.lower_lifetime_defs(&g.lifetimes), where_clause: self.lower_where_clause(&g.where_clause), span: g.span, @@ -464,7 +648,11 @@ impl<'a> LoweringContext<'a> { hir::WherePredicate::BoundPredicate(hir::WhereBoundPredicate { bound_lifetimes: self.lower_lifetime_defs(bound_lifetimes), bounded_ty: self.lower_ty(bounded_ty), - bounds: bounds.iter().map(|x| self.lower_ty_param_bound(x)).collect(), + bounds: bounds.iter().filter_map(|bound| match *bound { + // Ignore `?Trait` bounds, they were copied into type parameters already. + TraitTyParamBound(_, TraitBoundModifier::Maybe) => None, + _ => Some(self.lower_ty_param_bound(bound)) + }).collect(), span: span, }) } @@ -483,7 +671,7 @@ impl<'a> LoweringContext<'a> { span}) => { hir::WherePredicate::EqPredicate(hir::WhereEqPredicate { id: id, - path: self.lower_path(path), + path: self.lower_path(id, path, ParamMode::Explicit), ty: self.lower_ty(ty), span: span, }) @@ -513,7 +701,7 @@ impl<'a> LoweringContext<'a> { fn lower_trait_ref(&mut self, p: &TraitRef) -> hir::TraitRef { hir::TraitRef { - path: self.lower_path(&p.path), + path: self.lower_path(p.ref_id, &p.path, ParamMode::Explicit), ref_id: p.ref_id, } } @@ -530,7 +718,7 @@ impl<'a> LoweringContext<'a> { hir::StructField { span: f.span, id: f.id, - name: f.ident.map(|ident| ident.name).unwrap_or(token::intern(&index.to_string())), + name: f.ident.map(|ident| ident.name).unwrap_or(Symbol::intern(&index.to_string())), vis: self.lower_visibility(&f.vis), ty: self.lower_ty(&f.ty), attrs: self.lower_attrs(&f.attrs), @@ -540,7 +728,7 @@ impl<'a> LoweringContext<'a> { fn lower_field(&mut self, f: &Field) -> hir::Field { hir::Field { name: respan(f.ident.span, f.ident.node.name), - expr: self.lower_expr(&f.expr), + expr: P(self.lower_expr(&f.expr)), span: f.span, is_shorthand: f.is_shorthand, } @@ -553,17 +741,15 @@ impl<'a> LoweringContext<'a> { } } - fn lower_bounds(&mut self, bounds: &TyParamBounds) -> hir::TyParamBounds { + fn lower_bounds(&mut self, bounds: &[TyParamBound]) -> hir::TyParamBounds { bounds.iter().map(|bound| self.lower_ty_param_bound(bound)).collect() } fn lower_block(&mut self, b: &Block) -> P { - let mut stmts = Vec::new(); let mut expr = None; - if let Some((last, rest)) = b.stmts.split_last() { - stmts = rest.iter().map(|s| self.lower_stmt(s)).collect::>(); - let last = self.lower_stmt(last); + let mut stmts = b.stmts.iter().flat_map(|s| self.lower_stmt(s)).collect::>(); + if let Some(last) = stmts.pop() { if let hir::StmtExpr(e, _) = last.node { expr = Some(e); } else { @@ -580,27 +766,85 @@ impl<'a> LoweringContext<'a> { }) } - fn lower_item_kind(&mut self, i: &ItemKind) -> hir::Item_ { + fn lower_item_kind(&mut self, + id: NodeId, + name: &mut Name, + attrs: &hir::HirVec, + vis: &mut hir::Visibility, + i: &ItemKind) + -> 
hir::Item_ { match *i { ItemKind::ExternCrate(string) => hir::ItemExternCrate(string), ItemKind::Use(ref view_path) => { - hir::ItemUse(self.lower_view_path(view_path)) + let path = match view_path.node { + ViewPathSimple(_, ref path) => path, + ViewPathGlob(ref path) => path, + ViewPathList(ref path, ref path_list_idents) => { + for &Spanned { node: ref import, span } in path_list_idents { + // `use a::{self as x, b as y};` lowers to + // `use a as x; use a::b as y;` + let mut ident = import.name; + let suffix = if ident.name == keywords::SelfValue.name() { + if let Some(last) = path.segments.last() { + ident = last.identifier; + } + None + } else { + Some(ident.name) + }; + + let mut path = self.lower_path_extra(import.id, path, suffix, + ParamMode::Explicit); + path.span = span; + self.items.insert(import.id, hir::Item { + id: import.id, + name: import.rename.unwrap_or(ident).name, + attrs: attrs.clone(), + node: hir::ItemUse(P(path), hir::UseKind::Single), + vis: vis.clone(), + span: span, + }); + } + path + } + }; + let path = P(self.lower_path(id, path, ParamMode::Explicit)); + let kind = match view_path.node { + ViewPathSimple(ident, _) => { + *name = ident.name; + hir::UseKind::Single + } + ViewPathGlob(_) => { + hir::UseKind::Glob + } + ViewPathList(..) => { + // Privatize the degenerate import base, used only to check + // the stability of `use a::{};`, to avoid it showing up as + // a reexport by accident when `pub`, e.g. in documentation. + *vis = hir::Inherited; + hir::UseKind::ListStem + } + }; + hir::ItemUse(path, kind) } ItemKind::Static(ref t, m, ref e) => { hir::ItemStatic(self.lower_ty(t), self.lower_mutability(m), - self.lower_expr(e)) + P(self.lower_expr(e))) } ItemKind::Const(ref t, ref e) => { - hir::ItemConst(self.lower_ty(t), self.lower_expr(e)) + hir::ItemConst(self.lower_ty(t), P(self.lower_expr(e))) } ItemKind::Fn(ref decl, unsafety, constness, abi, ref generics, ref body) => { + let body = self.lower_block(body); + let body = self.expr_block(body, ThinVec::new()); + let body_id = self.record_expr(body); hir::ItemFn(self.lower_fn_decl(decl), self.lower_unsafety(unsafety), self.lower_constness(constness), abi, self.lower_generics(generics), - self.lower_block(body)) + body_id) } ItemKind::Mod(ref m) => hir::ItemMod(self.lower_mod(m)), ItemKind::ForeignMod(ref nm) => hir::ItemForeignMod(self.lower_foreign_mod(nm)), @@ -630,7 +874,7 @@ impl<'a> LoweringContext<'a> { } ItemKind::Impl(unsafety, polarity, ref generics, ref ifce, ref ty, ref impl_items) => { let new_impl_items = impl_items.iter() - .map(|item| self.lower_impl_item(item)) + .map(|item| self.lower_impl_item_ref(item)) .collect(); let ifce = ifce.as_ref().map(|trait_ref| self.lower_trait_ref(trait_ref)); hir::ItemImpl(self.lower_unsafety(unsafety), @@ -661,11 +905,15 @@ impl<'a> LoweringContext<'a> { node: match i.node { TraitItemKind::Const(ref ty, ref default) => { hir::ConstTraitItem(this.lower_ty(ty), - default.as_ref().map(|x| this.lower_expr(x))) + default.as_ref().map(|x| P(this.lower_expr(x)))) } TraitItemKind::Method(ref sig, ref body) => { hir::MethodTraitItem(this.lower_method_sig(sig), - body.as_ref().map(|x| this.lower_block(x))) + body.as_ref().map(|x| { + let body = this.lower_block(x); + let expr = this.expr_block(body, ThinVec::new()); + this.record_expr(expr) + })) } TraitItemKind::Type(ref bounds, ref default) => { hir::TypeTraitItem(this.lower_bounds(bounds), @@ -685,14 +933,16 @@ impl<'a> LoweringContext<'a> { name: i.ident.name, attrs: this.lower_attrs(&i.attrs), vis: 
this.lower_visibility(&i.vis), - defaultness: this.lower_defaultness(i.defaultness), + defaultness: this.lower_defaultness(i.defaultness, true /* [1] */), node: match i.node { ImplItemKind::Const(ref ty, ref expr) => { - hir::ImplItemKind::Const(this.lower_ty(ty), this.lower_expr(expr)) + hir::ImplItemKind::Const(this.lower_ty(ty), P(this.lower_expr(expr))) } ImplItemKind::Method(ref sig, ref body) => { - hir::ImplItemKind::Method(this.lower_method_sig(sig), - this.lower_block(body)) + let body = this.lower_block(body); + let expr = this.expr_block(body, ThinVec::new()); + let expr_id = this.record_expr(expr); + hir::ImplItemKind::Method(this.lower_method_sig(sig), expr_id) } ImplItemKind::Type(ref ty) => hir::ImplItemKind::Type(this.lower_ty(ty)), ImplItemKind::Macro(..) => panic!("Shouldn't exist any more"), @@ -700,12 +950,34 @@ impl<'a> LoweringContext<'a> { span: i.span, } }) + + // [1] since `default impl` is not yet implemented, this is always true in impls + } + + fn lower_impl_item_ref(&mut self, i: &ImplItem) -> hir::ImplItemRef { + hir::ImplItemRef { + id: hir::ImplItemId { node_id: i.id }, + name: i.ident.name, + span: i.span, + vis: self.lower_visibility(&i.vis), + defaultness: self.lower_defaultness(i.defaultness, true /* [1] */), + kind: match i.node { + ImplItemKind::Const(..) => hir::AssociatedItemKind::Const, + ImplItemKind::Type(..) => hir::AssociatedItemKind::Type, + ImplItemKind::Method(ref sig, _) => hir::AssociatedItemKind::Method { + has_self: sig.decl.get_self().is_some(), + }, + ImplItemKind::Macro(..) => unimplemented!(), + }, + } + + // [1] since `default impl` is not yet implemented, this is always true in impls } fn lower_mod(&mut self, m: &Mod) -> hir::Mod { hir::Mod { inner: m.inner, - item_ids: m.items.iter().map(|x| self.lower_item_id(x)).collect(), + item_ids: m.items.iter().flat_map(|x| self.lower_item_id(x)).collect(), } } @@ -721,21 +993,30 @@ impl<'a> LoweringContext<'a> { } } - fn lower_item_id(&mut self, i: &Item) -> hir::ItemId { - hir::ItemId { id: i.id } + fn lower_item_id(&mut self, i: &Item) -> SmallVector { + if let ItemKind::Use(ref view_path) = i.node { + if let ViewPathList(_, ref imports) = view_path.node { + return iter::once(i.id).chain(imports.iter().map(|import| import.node.id)) + .map(|id| hir::ItemId { id: id }).collect(); + } + } + SmallVector::one(hir::ItemId { id: i.id }) } pub fn lower_item(&mut self, i: &Item) -> hir::Item { + let mut name = i.ident.name; + let attrs = self.lower_attrs(&i.attrs); + let mut vis = self.lower_visibility(&i.vis); let node = self.with_parent_def(i.id, |this| { - this.lower_item_kind(&i.node) + this.lower_item_kind(i.id, &mut name, &attrs, &mut vis, &i.node) }); hir::Item { id: i.id, - name: i.ident.name, - attrs: self.lower_attrs(&i.attrs), + name: name, + attrs: attrs, node: node, - vis: self.lower_visibility(&i.vis), + vis: vis, span: i.span, } } @@ -838,29 +1119,41 @@ impl<'a> LoweringContext<'a> { self.with_parent_def(p.id, |this| { match this.resolver.get_resolution(p.id).map(|d| d.base_def) { // `None` can occur in body-less function signatures - None | Some(Def::Local(..)) => { + def @ None | def @ Some(Def::Local(_)) => { + let def_id = def.map(|d| d.def_id()).unwrap_or_else(|| { + this.resolver.definitions().local_def_id(p.id) + }); hir::PatKind::Binding(this.lower_binding_mode(binding_mode), + def_id, respan(pth1.span, pth1.node.name), sub.as_ref().map(|x| this.lower_pat(x))) } - _ => hir::PatKind::Path(None, hir::Path::from_name(pth1.span, - pth1.node.name)) + Some(def) => { + 
hir::PatKind::Path(hir::QPath::Resolved(None, P(hir::Path { + span: pth1.span, + global: false, + def: def, + segments: hir_vec![ + hir::PathSegment::from_name(pth1.node.name) + ], + }))) + } } }) } - PatKind::Lit(ref e) => hir::PatKind::Lit(self.lower_expr(e)), + PatKind::Lit(ref e) => hir::PatKind::Lit(P(self.lower_expr(e))), PatKind::TupleStruct(ref path, ref pats, ddpos) => { - hir::PatKind::TupleStruct(self.lower_path(path), - pats.iter().map(|x| self.lower_pat(x)).collect(), ddpos) + let qpath = self.lower_qpath(p.id, &None, path, ParamMode::Optional); + hir::PatKind::TupleStruct(qpath, + pats.iter().map(|x| self.lower_pat(x)).collect(), + ddpos) } - PatKind::Path(ref opt_qself, ref path) => { - let opt_qself = opt_qself.as_ref().map(|qself| { - hir::QSelf { ty: self.lower_ty(&qself.ty), position: qself.position } - }); - hir::PatKind::Path(opt_qself, self.lower_path(path)) + PatKind::Path(ref qself, ref path) => { + hir::PatKind::Path(self.lower_qpath(p.id, qself, path, ParamMode::Optional)) } - PatKind::Struct(ref pth, ref fields, etc) => { - let pth = self.lower_path(pth); + PatKind::Struct(ref path, ref fields, etc) => { + let qpath = self.lower_qpath(p.id, &None, path, ParamMode::Optional); + let fs = fields.iter() .map(|f| { Spanned { @@ -873,7 +1166,7 @@ impl<'a> LoweringContext<'a> { } }) .collect(); - hir::PatKind::Struct(pth, fs, etc) + hir::PatKind::Struct(qpath, fs, etc) } PatKind::Tuple(ref elts, ddpos) => { hir::PatKind::Tuple(elts.iter().map(|x| self.lower_pat(x)).collect(), ddpos) @@ -883,7 +1176,7 @@ impl<'a> LoweringContext<'a> { hir::PatKind::Ref(self.lower_pat(inner), self.lower_mutability(mutbl)) } PatKind::Range(ref e1, ref e2) => { - hir::PatKind::Range(self.lower_expr(e1), self.lower_expr(e2)) + hir::PatKind::Range(P(self.lower_expr(e1)), P(self.lower_expr(e2))) } PatKind::Slice(ref before, ref slice, ref after) => { hir::PatKind::Slice(before.iter().map(|x| self.lower_pat(x)).collect(), @@ -896,8 +1189,8 @@ impl<'a> LoweringContext<'a> { }) } - fn lower_expr(&mut self, e: &Expr) -> P { - P(hir::Expr { + fn lower_expr(&mut self, e: &Expr) -> hir::Expr { + hir::Expr { id: e.id, node: match e.node { // Issue #22181: @@ -917,7 +1210,7 @@ impl<'a> LoweringContext<'a> { // // But for now there are type-inference issues doing that. 
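+            // Note: the desugaring sketched above is not applied yet; the arm below
+            // simply lowers `box EXPR` to `hir::ExprBox` around the lowered
+            // sub-expression.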
ExprKind::Box(ref e) => { - hir::ExprBox(self.lower_expr(e)) + hir::ExprBox(P(self.lower_expr(e))) } // Desugar ExprBox: `in (PLACE) EXPR` @@ -931,8 +1224,8 @@ impl<'a> LoweringContext<'a> { // std::intrinsics::move_val_init(raw_place, pop_unsafe!( EXPR )); // InPlace::finalize(place) // }) - let placer_expr = self.lower_expr(placer); - let value_expr = self.lower_expr(value_expr); + let placer_expr = P(self.lower_expr(placer)); + let value_expr = P(self.lower_expr(value_expr)); let placer_ident = self.str_to_ident("placer"); let place_ident = self.str_to_ident("place"); @@ -943,10 +1236,10 @@ impl<'a> LoweringContext<'a> { let move_val_init = ["intrinsics", "move_val_init"]; let inplace_finalize = ["ops", "InPlace", "finalize"]; + let unstable_span = self.allow_internal_unstable("<-", e.span); let make_call = |this: &mut LoweringContext, p, args| { - let path = this.std_path(e.span, p); - let path = this.expr_path(path, ThinVec::new()); - this.expr_call(e.span, path, args) + let path = P(this.expr_std_path(unstable_span, p, ThinVec::new())); + P(this.expr_call(e.span, path, args)) }; let mk_stmt_let = |this: &mut LoweringContext, bind, expr| { @@ -959,11 +1252,6 @@ impl<'a> LoweringContext<'a> { // let placer = ; let (s1, placer_binding) = { - let placer_expr = self.signal_block_expr(hir_vec![], - placer_expr, - e.span, - hir::PopUnstableBlock, - ThinVec::new()); mk_stmt_let(self, placer_ident, placer_expr) }; @@ -976,7 +1264,7 @@ impl<'a> LoweringContext<'a> { // let p_ptr = Place::pointer(&mut place); let (s3, p_ptr_binding) = { - let agent = self.expr_ident(e.span, place_ident, place_binding); + let agent = P(self.expr_ident(e.span, place_ident, place_binding)); let args = hir_vec![self.expr_mut_addr_of(e.span, agent)]; let call = make_call(self, &place_pointer, args); mk_stmt_let(self, p_ptr_ident, call) @@ -984,11 +1272,6 @@ impl<'a> LoweringContext<'a> { // pop_unsafe!(EXPR)); let pop_unsafe_expr = { - let value_expr = self.signal_block_expr(hir_vec![], - value_expr, - e.span, - hir::PopUnstableBlock, - ThinVec::new()); self.signal_block_expr(hir_vec![], value_expr, e.span, @@ -1010,33 +1293,31 @@ impl<'a> LoweringContext<'a> { let place = self.expr_ident(e.span, place_ident, place_binding); let call = make_call(self, &inplace_finalize, hir_vec![place]); - self.signal_block_expr(hir_vec![call_move_val_init], - call, - e.span, - hir::PushUnsafeBlock(hir::CompilerGenerated), - ThinVec::new()) + P(self.signal_block_expr(hir_vec![call_move_val_init], + call, + e.span, + hir::PushUnsafeBlock(hir::CompilerGenerated), + ThinVec::new())) }; - return self.signal_block_expr(hir_vec![s1, s2, s3], - expr, - e.span, - hir::PushUnstableBlock, - e.attrs.clone()); + let block = self.block_all(e.span, hir_vec![s1, s2, s3], Some(expr)); + // add the attributes to the outer returned expr node + return self.expr_block(P(block), e.attrs.clone()); } ExprKind::Vec(ref exprs) => { hir::ExprArray(exprs.iter().map(|x| self.lower_expr(x)).collect()) } ExprKind::Repeat(ref expr, ref count) => { - let expr = self.lower_expr(expr); - let count = self.lower_expr(count); + let expr = P(self.lower_expr(expr)); + let count = P(self.lower_expr(count)); hir::ExprRepeat(expr, count) } ExprKind::Tup(ref elts) => { hir::ExprTup(elts.iter().map(|x| self.lower_expr(x)).collect()) } ExprKind::Call(ref f, ref args) => { - let f = self.lower_expr(f); + let f = P(self.lower_expr(f)); hir::ExprCall(f, args.iter().map(|x| self.lower_expr(x)).collect()) } ExprKind::MethodCall(i, ref tps, ref args) => { @@ -1046,27 +1327,27 @@ 
impl<'a> LoweringContext<'a> { } ExprKind::Binary(binop, ref lhs, ref rhs) => { let binop = self.lower_binop(binop); - let lhs = self.lower_expr(lhs); - let rhs = self.lower_expr(rhs); + let lhs = P(self.lower_expr(lhs)); + let rhs = P(self.lower_expr(rhs)); hir::ExprBinary(binop, lhs, rhs) } ExprKind::Unary(op, ref ohs) => { let op = self.lower_unop(op); - let ohs = self.lower_expr(ohs); + let ohs = P(self.lower_expr(ohs)); hir::ExprUnary(op, ohs) } ExprKind::Lit(ref l) => hir::ExprLit(P((**l).clone())), ExprKind::Cast(ref expr, ref ty) => { - let expr = self.lower_expr(expr); + let expr = P(self.lower_expr(expr)); hir::ExprCast(expr, self.lower_ty(ty)) } ExprKind::Type(ref expr, ref ty) => { - let expr = self.lower_expr(expr); + let expr = P(self.lower_expr(expr)); hir::ExprType(expr, self.lower_ty(ty)) } ExprKind::AddrOf(m, ref ohs) => { let m = self.lower_mutability(m); - let ohs = self.lower_expr(ohs); + let ohs = P(self.lower_expr(ohs)); hir::ExprAddrOf(m, ohs) } // More complicated than you might expect because the else branch @@ -1077,7 +1358,7 @@ impl<'a> LoweringContext<'a> { ExprKind::IfLet(..) => { // wrap the if-let expr in a block let span = els.span; - let els = self.lower_expr(els); + let els = P(self.lower_expr(els)); let id = self.next_id(); let blk = P(hir::Block { stmts: hir_vec![], @@ -1086,84 +1367,77 @@ impl<'a> LoweringContext<'a> { rules: hir::DefaultBlock, span: span, }); - self.expr_block(blk, ThinVec::new()) + P(self.expr_block(blk, ThinVec::new())) } - _ => self.lower_expr(els), + _ => P(self.lower_expr(els)), } }); - hir::ExprIf(self.lower_expr(cond), self.lower_block(blk), else_opt) + hir::ExprIf(P(self.lower_expr(cond)), self.lower_block(blk), else_opt) } ExprKind::While(ref cond, ref body, opt_ident) => { - hir::ExprWhile(self.lower_expr(cond), self.lower_block(body), + hir::ExprWhile(P(self.lower_expr(cond)), self.lower_block(body), self.lower_opt_sp_ident(opt_ident)) } ExprKind::Loop(ref body, opt_ident) => { - hir::ExprLoop(self.lower_block(body), self.lower_opt_sp_ident(opt_ident)) + hir::ExprLoop(self.lower_block(body), + self.lower_opt_sp_ident(opt_ident), + hir::LoopSource::Loop) } ExprKind::Match(ref expr, ref arms) => { - hir::ExprMatch(self.lower_expr(expr), + hir::ExprMatch(P(self.lower_expr(expr)), arms.iter().map(|x| self.lower_arm(x)).collect(), hir::MatchSource::Normal) } ExprKind::Closure(capture_clause, ref decl, ref body, fn_decl_span) => { self.with_parent_def(e.id, |this| { + let expr = this.lower_expr(body); hir::ExprClosure(this.lower_capture_clause(capture_clause), this.lower_fn_decl(decl), - this.lower_block(body), + this.record_expr(expr), fn_decl_span) }) } ExprKind::Block(ref blk) => hir::ExprBlock(self.lower_block(blk)), ExprKind::Assign(ref el, ref er) => { - hir::ExprAssign(self.lower_expr(el), self.lower_expr(er)) + hir::ExprAssign(P(self.lower_expr(el)), P(self.lower_expr(er))) } ExprKind::AssignOp(op, ref el, ref er) => { hir::ExprAssignOp(self.lower_binop(op), - self.lower_expr(el), - self.lower_expr(er)) + P(self.lower_expr(el)), + P(self.lower_expr(er))) } ExprKind::Field(ref el, ident) => { - hir::ExprField(self.lower_expr(el), respan(ident.span, ident.node.name)) + hir::ExprField(P(self.lower_expr(el)), respan(ident.span, ident.node.name)) } ExprKind::TupField(ref el, ident) => { - hir::ExprTupField(self.lower_expr(el), ident) + hir::ExprTupField(P(self.lower_expr(el)), ident) } ExprKind::Index(ref el, ref er) => { - hir::ExprIndex(self.lower_expr(el), self.lower_expr(er)) + hir::ExprIndex(P(self.lower_expr(el)), 
P(self.lower_expr(er))) } ExprKind::Range(ref e1, ref e2, lims) => { fn make_struct(this: &mut LoweringContext, ast_expr: &Expr, path: &[&str], - fields: &[(&str, &P)]) -> P { - let struct_path = this.std_path(ast_expr.span, - &iter::once(&"ops").chain(path) - .map(|s| *s) - .collect::>()); + fields: &[(&str, &P)]) -> hir::Expr { + let struct_path = &iter::once(&"ops").chain(path).map(|s| *s) + .collect::>(); + let unstable_span = this.allow_internal_unstable("...", ast_expr.span); - let hir_expr = if fields.len() == 0 { - this.expr_path(struct_path, ast_expr.attrs.clone()) + if fields.len() == 0 { + this.expr_std_path(unstable_span, struct_path, + ast_expr.attrs.clone()) } else { let fields = fields.into_iter().map(|&(s, e)| { - let expr = this.lower_expr(&e); - let signal_block = this.signal_block_expr(hir_vec![], - expr, - e.span, - hir::PopUnstableBlock, - ThinVec::new()); - this.field(token::intern(s), signal_block, ast_expr.span) + let expr = P(this.lower_expr(&e)); + let unstable_span = this.allow_internal_unstable("...", e.span); + this.field(Symbol::intern(s), expr, unstable_span) }).collect(); let attrs = ast_expr.attrs.clone(); - this.expr_struct(ast_expr.span, struct_path, fields, None, attrs) - }; - - this.signal_block_expr(hir_vec![], - hir_expr, - ast_expr.span, - hir::PushUnstableBlock, - ThinVec::new()) + this.expr_std_struct(unstable_span, struct_path, fields, None, attrs) + } } use syntax::ast::RangeLimits::*; @@ -1197,17 +1471,14 @@ impl<'a> LoweringContext<'a> { }; } ExprKind::Path(ref qself, ref path) => { - let hir_qself = qself.as_ref().map(|&QSelf { ref ty, position }| { - hir::QSelf { - ty: self.lower_ty(ty), - position: position, - } - }); - hir::ExprPath(hir_qself, self.lower_path(path)) + hir::ExprPath(self.lower_qpath(e.id, qself, path, ParamMode::Optional)) } - ExprKind::Break(opt_ident) => hir::ExprBreak(self.lower_opt_sp_ident(opt_ident)), - ExprKind::Continue(opt_ident) => hir::ExprAgain(self.lower_opt_sp_ident(opt_ident)), - ExprKind::Ret(ref e) => hir::ExprRet(e.as_ref().map(|x| self.lower_expr(x))), + ExprKind::Break(opt_ident, ref opt_expr) => { + hir::ExprBreak(self.lower_label(e.id, opt_ident), + opt_expr.as_ref().map(|x| P(self.lower_expr(x)))) + } + ExprKind::Continue(opt_ident) => hir::ExprAgain(self.lower_label(e.id, opt_ident)), + ExprKind::Ret(ref e) => hir::ExprRet(e.as_ref().map(|x| P(self.lower_expr(x)))), ExprKind::InlineAsm(ref asm) => { let hir_asm = hir::InlineAsm { inputs: asm.inputs.iter().map(|&(ref c, _)| c.clone()).collect(), @@ -1233,22 +1504,21 @@ impl<'a> LoweringContext<'a> { hir::ExprInlineAsm(P(hir_asm), outputs, inputs) } ExprKind::Struct(ref path, ref fields, ref maybe_expr) => { - hir::ExprStruct(P(self.lower_path(path)), + hir::ExprStruct(self.lower_qpath(e.id, &None, path, ParamMode::Optional), fields.iter().map(|x| self.lower_field(x)).collect(), - maybe_expr.as_ref().map(|x| self.lower_expr(x))) + maybe_expr.as_ref().map(|x| P(self.lower_expr(x)))) } ExprKind::Paren(ref ex) => { - return self.lower_expr(ex).map(|mut ex| { - // include parens in span, but only if it is a super-span. - if e.span.contains(ex.span) { - ex.span = e.span; - } - // merge attributes into the inner expression. - let mut attrs = e.attrs.clone(); - attrs.extend::>(ex.attrs.into()); - ex.attrs = attrs; - ex - }); + let mut ex = self.lower_expr(ex); + // include parens in span, but only if it is a super-span. + if e.span.contains(ex.span) { + ex.span = e.span; + } + // merge attributes into the inner expression. 
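+                // Illustrative example (hypothetical input): for `#[allow(unused)] (x + y)`
+                // the parentheses disappear during lowering, so the outer span (which
+                // includes the parens) and the `allow` attribute are carried over onto
+                // the lowered `x + y` by the code below.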
+ let mut attrs = e.attrs.clone(); + attrs.extend::>(ex.attrs.into()); + ex.attrs = attrs; + return ex; } // Desugar ExprIfLet @@ -1265,13 +1535,13 @@ impl<'a> LoweringContext<'a> { // ` => ` let pat_arm = { let body = self.lower_block(body); - let body_expr = self.expr_block(body, ThinVec::new()); + let body_expr = P(self.expr_block(body, ThinVec::new())); let pat = self.lower_pat(pat); self.arm(hir_vec![pat], body_expr) }; // `[_ if => ,]` - let mut else_opt = else_opt.as_ref().map(|e| self.lower_expr(e)); + let mut else_opt = else_opt.as_ref().map(|e| P(self.lower_expr(e))); let else_if_arms = { let mut arms = vec![]; loop { @@ -1285,7 +1555,7 @@ impl<'a> LoweringContext<'a> { attrs: hir_vec![], pats: hir_vec![pat_under], guard: Some(cond), - body: self.expr_block(then, ThinVec::new()), + body: P(self.expr_block(then, ThinVec::new())), }); else_opt.map(|else_opt| (else_opt, true)) } @@ -1325,7 +1595,7 @@ impl<'a> LoweringContext<'a> { arms.extend(else_if_arms); arms.push(else_arm); - let sub_expr = self.lower_expr(sub_expr); + let sub_expr = P(self.lower_expr(sub_expr)); // add attributes to the outer returned expr node return self.expr(e.span, hir::ExprMatch(sub_expr, @@ -1351,7 +1621,7 @@ impl<'a> LoweringContext<'a> { // ` => ` let pat_arm = { let body = self.lower_block(body); - let body_expr = self.expr_block(body, ThinVec::new()); + let body_expr = P(self.expr_block(body, ThinVec::new())); let pat = self.lower_pat(pat); self.arm(hir_vec![pat], body_expr) }; @@ -1365,7 +1635,7 @@ impl<'a> LoweringContext<'a> { // `match { ... }` let arms = hir_vec![pat_arm, break_arm]; - let sub_expr = self.lower_expr(sub_expr); + let sub_expr = P(self.lower_expr(sub_expr)); let match_expr = self.expr(e.span, hir::ExprMatch(sub_expr, arms, @@ -1373,11 +1643,12 @@ impl<'a> LoweringContext<'a> { ThinVec::new()); // `[opt_ident]: loop { ... }` - let loop_block = self.block_expr(match_expr); - let loop_expr = hir::ExprLoop(loop_block, self.lower_opt_sp_ident(opt_ident)); + let loop_block = P(self.block_expr(P(match_expr))); + let loop_expr = hir::ExprLoop(loop_block, self.lower_opt_sp_ident(opt_ident), + hir::LoopSource::WhileLet); // add attributes to the outer returned expr node let attrs = e.attrs.clone(); - return P(hir::Expr { id: e.id, node: loop_expr, span: e.span, attrs: attrs }); + return hir::Expr { id: e.id, node: loop_expr, span: e.span, attrs: attrs }; } // Desugar ExprForLoop @@ -1433,21 +1704,24 @@ impl<'a> LoweringContext<'a> { // `match ::std::iter::Iterator::next(&mut iter) { ... }` let match_expr = { - let next_path = self.std_path(e.span, &["iter", "Iterator", "next"]); - let iter = self.expr_ident(e.span, iter, iter_pat.id); + let iter = P(self.expr_ident(e.span, iter, iter_pat.id)); let ref_mut_iter = self.expr_mut_addr_of(e.span, iter); - let next_path = self.expr_path(next_path, ThinVec::new()); - let next_expr = self.expr_call(e.span, next_path, hir_vec![ref_mut_iter]); + let next_path = &["iter", "Iterator", "next"]; + let next_path = P(self.expr_std_path(e.span, next_path, ThinVec::new())); + let next_expr = P(self.expr_call(e.span, next_path, + hir_vec![ref_mut_iter])); let arms = hir_vec![pat_arm, break_arm]; - self.expr(e.span, - hir::ExprMatch(next_expr, arms, hir::MatchSource::ForLoopDesugar), - ThinVec::new()) + P(self.expr(e.span, + hir::ExprMatch(next_expr, arms, + hir::MatchSource::ForLoopDesugar), + ThinVec::new())) }; // `[opt_ident]: loop { ... 
}` - let loop_block = self.block_expr(match_expr); - let loop_expr = hir::ExprLoop(loop_block, self.lower_opt_sp_ident(opt_ident)); + let loop_block = P(self.block_expr(match_expr)); + let loop_expr = hir::ExprLoop(loop_block, self.lower_opt_sp_ident(opt_ident), + hir::LoopSource::ForLoop); let loop_expr = P(hir::Expr { id: e.id, node: loop_expr, @@ -1460,17 +1734,16 @@ impl<'a> LoweringContext<'a> { // `match ::std::iter::IntoIterator::into_iter() { ... }` let into_iter_expr = { - let into_iter_path = self.std_path(e.span, - &["iter", "IntoIterator", "into_iter"]); - - let into_iter = self.expr_path(into_iter_path, ThinVec::new()); - self.expr_call(e.span, into_iter, hir_vec![head]) + let into_iter_path = &["iter", "IntoIterator", "into_iter"]; + let into_iter = P(self.expr_std_path(e.span, into_iter_path, + ThinVec::new())); + P(self.expr_call(e.span, into_iter, hir_vec![head])) }; - let match_expr = self.expr_match(e.span, - into_iter_expr, - hir_vec![iter_arm], - hir::MatchSource::ForLoopDesugar); + let match_expr = P(self.expr_match(e.span, + into_iter_expr, + hir_vec![iter_arm], + hir::MatchSource::ForLoopDesugar)); // `{ let _result = ...; _result }` // underscore prevents an unused_variables lint if the head diverges @@ -1478,8 +1751,8 @@ impl<'a> LoweringContext<'a> { let (let_stmt, let_stmt_binding) = self.stmt_let(e.span, false, result_ident, match_expr); - let result = self.expr_ident(e.span, result_ident, let_stmt_binding); - let block = self.block_all(e.span, hir_vec![let_stmt], Some(result)); + let result = P(self.expr_ident(e.span, result_ident, let_stmt_binding)); + let block = P(self.block_all(e.span, hir_vec![let_stmt], Some(result))); // add the attributes to the outer returned expr node return self.expr_block(block, e.attrs.clone()); } @@ -1489,70 +1762,56 @@ impl<'a> LoweringContext<'a> { ExprKind::Try(ref sub_expr) => { // to: // - // { - // match { Carrier::translate( { } ) } { - // Ok(val) => val, - // Err(err) => { return Carrier::from_error(From::from(err)); } - // } + // match Carrier::translate() { + // Ok(val) => val, + // Err(err) => return Carrier::from_error(From::from(err)) // } + let unstable_span = self.allow_internal_unstable("?", e.span); - // { Carrier::translate( { } ) } + // Carrier::translate() let discr = { // expand let sub_expr = self.lower_expr(sub_expr); - let sub_expr = self.signal_block_expr(hir_vec![], - sub_expr, - e.span, - hir::PopUnstableBlock, - ThinVec::new()); - let path = self.std_path(e.span, &["ops", "Carrier", "translate"]); - let path = self.expr_path(path, ThinVec::new()); - let call = self.expr_call(e.span, path, hir_vec![sub_expr]); - - self.signal_block_expr(hir_vec![], - call, - e.span, - hir::PushUnstableBlock, - ThinVec::new()) + let path = &["ops", "Carrier", "translate"]; + let path = P(self.expr_std_path(unstable_span, path, ThinVec::new())); + P(self.expr_call(e.span, path, hir_vec![sub_expr])) }; // Ok(val) => val let ok_arm = { let val_ident = self.str_to_ident("val"); let val_pat = self.pat_ident(e.span, val_ident); - let val_expr = self.expr_ident(e.span, val_ident, val_pat.id); + let val_expr = P(self.expr_ident(e.span, val_ident, val_pat.id)); let ok_pat = self.pat_ok(e.span, val_pat); self.arm(hir_vec![ok_pat], val_expr) }; - // Err(err) => { return Carrier::from_error(From::from(err)); } + // Err(err) => return Carrier::from_error(From::from(err)) let err_arm = { let err_ident = self.str_to_ident("err"); let err_local = self.pat_ident(e.span, err_ident); let from_expr = { - let path = self.std_path(e.span, 
&["convert", "From", "from"]); - let from = self.expr_path(path, ThinVec::new()); + let path = &["convert", "From", "from"]; + let from = P(self.expr_std_path(e.span, path, ThinVec::new())); let err_expr = self.expr_ident(e.span, err_ident, err_local.id); self.expr_call(e.span, from, hir_vec![err_expr]) }; let from_err_expr = { - let path = self.std_path(e.span, &["ops", "Carrier", "from_error"]); - let from_err = self.expr_path(path, ThinVec::new()); - self.expr_call(e.span, from_err, hir_vec![from_expr]) + let path = &["ops", "Carrier", "from_error"]; + let from_err = P(self.expr_std_path(unstable_span, path, + ThinVec::new())); + P(self.expr_call(e.span, from_err, hir_vec![from_expr])) }; - let ret_expr = self.expr(e.span, - hir::Expr_::ExprRet(Some(from_err_expr)), - ThinVec::new()); - let ret_stmt = self.stmt_expr(ret_expr); - let block = self.signal_block_stmt(ret_stmt, e.span, - hir::PushUnstableBlock, ThinVec::new()); + let ret_expr = P(self.expr(e.span, + hir::Expr_::ExprRet(Some(from_err_expr)), + ThinVec::new())); let err_pat = self.pat_err(e.span, err_local); - self.arm(hir_vec![err_pat], block) + self.arm(hir_vec![err_pat], ret_expr) }; return self.expr_match(e.span, discr, hir_vec![err_arm, ok_arm], @@ -1563,11 +1822,11 @@ impl<'a> LoweringContext<'a> { }, span: e.span, attrs: e.attrs.clone(), - }) + } } - fn lower_stmt(&mut self, s: &Stmt) -> hir::Stmt { - match s.node { + fn lower_stmt(&mut self, s: &Stmt) -> SmallVector { + SmallVector::one(match s.node { StmtKind::Local(ref l) => Spanned { node: hir::StmtDecl(P(Spanned { node: hir::DeclLocal(self.lower_local(l)), @@ -1575,27 +1834,31 @@ impl<'a> LoweringContext<'a> { }), s.id), span: s.span, }, - StmtKind::Item(ref it) => Spanned { - node: hir::StmtDecl(P(Spanned { - node: hir::DeclItem(self.lower_item_id(it)), + StmtKind::Item(ref it) => { + // Can only use the ID once. + let mut id = Some(s.id); + return self.lower_item_id(it).into_iter().map(|item_id| Spanned { + node: hir::StmtDecl(P(Spanned { + node: hir::DeclItem(item_id), + span: s.span, + }), id.take().unwrap_or_else(|| self.next_id())), span: s.span, - }), s.id), - span: s.span, - }, + }).collect(); + } StmtKind::Expr(ref e) => { Spanned { - node: hir::StmtExpr(self.lower_expr(e), s.id), + node: hir::StmtExpr(P(self.lower_expr(e)), s.id), span: s.span, } } StmtKind::Semi(ref e) => { Spanned { - node: hir::StmtSemi(self.lower_expr(e), s.id), + node: hir::StmtSemi(P(self.lower_expr(e)), s.id), span: s.span, } } StmtKind::Mac(..) 
=> panic!("Shouldn't exist here"), - } + }) } fn lower_capture_clause(&mut self, c: CaptureBy) -> hir::CaptureClause { @@ -1609,16 +1872,23 @@ impl<'a> LoweringContext<'a> { match *v { Visibility::Public => hir::Public, Visibility::Crate(_) => hir::Visibility::Crate, - Visibility::Restricted { ref path, id } => - hir::Visibility::Restricted { path: P(self.lower_path(path)), id: id }, + Visibility::Restricted { ref path, id } => { + hir::Visibility::Restricted { + path: P(self.lower_path(id, path, ParamMode::Explicit)), + id: id + } + } Visibility::Inherited => hir::Inherited, } } - fn lower_defaultness(&mut self, d: Defaultness) -> hir::Defaultness { + fn lower_defaultness(&mut self, d: Defaultness, has_value: bool) -> hir::Defaultness { match d { - Defaultness::Default => hir::Defaultness::Default, - Defaultness::Final => hir::Defaultness::Final, + Defaultness::Default => hir::Defaultness::Default { has_value: has_value }, + Defaultness::Final => { + assert!(has_value); + hir::Defaultness::Final + } } } @@ -1681,36 +1951,41 @@ impl<'a> LoweringContext<'a> { } fn expr_break(&mut self, span: Span, attrs: ThinVec) -> P { - self.expr(span, hir::ExprBreak(None), attrs) + P(self.expr(span, hir::ExprBreak(None, None), attrs)) } - fn expr_call(&mut self, span: Span, e: P, args: hir::HirVec>) - -> P { + fn expr_call(&mut self, span: Span, e: P, args: hir::HirVec) + -> hir::Expr { self.expr(span, hir::ExprCall(e, args), ThinVec::new()) } - fn expr_ident(&mut self, span: Span, id: Name, binding: NodeId) -> P { - let expr_path = hir::ExprPath(None, self.path_ident(span, id)); - let expr = self.expr(span, expr_path, ThinVec::new()); - + fn expr_ident(&mut self, span: Span, id: Name, binding: NodeId) -> hir::Expr { let def = { let defs = self.resolver.definitions(); Def::Local(defs.local_def_id(binding)) }; - self.resolver.record_resolution(expr.id, def); - expr + let expr_path = hir::ExprPath(hir::QPath::Resolved(None, P(hir::Path { + span: span, + global: false, + def: def, + segments: hir_vec![hir::PathSegment::from_name(id)], + }))); + + self.expr(span, expr_path, ThinVec::new()) } - fn expr_mut_addr_of(&mut self, span: Span, e: P) -> P { + fn expr_mut_addr_of(&mut self, span: Span, e: P) -> hir::Expr { self.expr(span, hir::ExprAddrOf(hir::MutMutable, e), ThinVec::new()) } - fn expr_path(&mut self, path: hir::Path, attrs: ThinVec) -> P { - let def = self.resolver.resolve_generated_global_path(&path, true); - let expr = self.expr(path.span, hir::ExprPath(None, path), attrs); - self.resolver.record_resolution(expr.id, def); - expr + fn expr_std_path(&mut self, + span: Span, + components: &[&str], + attrs: ThinVec) + -> hir::Expr { + let path = self.std_path(span, components, true); + self.expr(span, hir::ExprPath(hir::QPath::Resolved(None, P(path))), attrs) } fn expr_match(&mut self, @@ -1718,37 +1993,36 @@ impl<'a> LoweringContext<'a> { arg: P, arms: hir::HirVec, source: hir::MatchSource) - -> P { + -> hir::Expr { self.expr(span, hir::ExprMatch(arg, arms, source), ThinVec::new()) } - fn expr_block(&mut self, b: P, attrs: ThinVec) -> P { + fn expr_block(&mut self, b: P, attrs: ThinVec) -> hir::Expr { self.expr(b.span, hir::ExprBlock(b), attrs) } - fn expr_tuple(&mut self, sp: Span, exprs: hir::HirVec>) -> P { - self.expr(sp, hir::ExprTup(exprs), ThinVec::new()) + fn expr_tuple(&mut self, sp: Span, exprs: hir::HirVec) -> P { + P(self.expr(sp, hir::ExprTup(exprs), ThinVec::new())) } - fn expr_struct(&mut self, - sp: Span, - path: hir::Path, - fields: hir::HirVec, - e: Option>, - attrs: ThinVec) -> P { 
- let def = self.resolver.resolve_generated_global_path(&path, false); - let expr = self.expr(sp, hir::ExprStruct(P(path), fields, e), attrs); - self.resolver.record_resolution(expr.id, def); - expr + fn expr_std_struct(&mut self, + span: Span, + components: &[&str], + fields: hir::HirVec, + e: Option>, + attrs: ThinVec) -> hir::Expr { + let path = self.std_path(span, components, false); + let qpath = hir::QPath::Resolved(None, P(path)); + self.expr(span, hir::ExprStruct(qpath, fields, e), attrs) } - fn expr(&mut self, span: Span, node: hir::Expr_, attrs: ThinVec) -> P { - P(hir::Expr { + fn expr(&mut self, span: Span, node: hir::Expr_, attrs: ThinVec) -> hir::Expr { + hir::Expr { id: self.next_id(), node: node, span: span, attrs: attrs, - }) + } } fn stmt_let(&mut self, sp: Span, mutbl: bool, ident: Name, ex: P) @@ -1771,61 +2045,50 @@ impl<'a> LoweringContext<'a> { (respan(sp, hir::StmtDecl(P(decl), self.next_id())), pat_id) } - // Turns `` into `;`, note that this produces a StmtSemi, not a - // StmtExpr. - fn stmt_expr(&self, expr: P) -> hir::Stmt { - hir::Stmt { - span: expr.span, - node: hir::StmtSemi(expr, self.next_id()), - } - } - - fn block_expr(&mut self, expr: P) -> P { + fn block_expr(&mut self, expr: P) -> hir::Block { self.block_all(expr.span, hir::HirVec::new(), Some(expr)) } fn block_all(&mut self, span: Span, stmts: hir::HirVec, expr: Option>) - -> P { - P(hir::Block { + -> hir::Block { + hir::Block { stmts: stmts, expr: expr, id: self.next_id(), rules: hir::DefaultBlock, span: span, - }) + } } fn pat_ok(&mut self, span: Span, pat: P) -> P { - let path = self.std_path(span, &["result", "Result", "Ok"]); - self.pat_enum(span, path, hir_vec![pat]) + self.pat_std_enum(span, &["result", "Result", "Ok"], hir_vec![pat]) } fn pat_err(&mut self, span: Span, pat: P) -> P { - let path = self.std_path(span, &["result", "Result", "Err"]); - self.pat_enum(span, path, hir_vec![pat]) + self.pat_std_enum(span, &["result", "Result", "Err"], hir_vec![pat]) } fn pat_some(&mut self, span: Span, pat: P) -> P { - let path = self.std_path(span, &["option", "Option", "Some"]); - self.pat_enum(span, path, hir_vec![pat]) + self.pat_std_enum(span, &["option", "Option", "Some"], hir_vec![pat]) } fn pat_none(&mut self, span: Span) -> P { - let path = self.std_path(span, &["option", "Option", "None"]); - self.pat_enum(span, path, hir_vec![]) + self.pat_std_enum(span, &["option", "Option", "None"], hir_vec![]) } - fn pat_enum(&mut self, span: Span, path: hir::Path, subpats: hir::HirVec>) - -> P { - let def = self.resolver.resolve_generated_global_path(&path, true); + fn pat_std_enum(&mut self, + span: Span, + components: &[&str], + subpats: hir::HirVec>) + -> P { + let path = self.std_path(span, components, true); + let qpath = hir::QPath::Resolved(None, P(path)); let pt = if subpats.is_empty() { - hir::PatKind::Path(None, path) + hir::PatKind::Path(qpath) } else { - hir::PatKind::TupleStruct(path, subpats, None) + hir::PatKind::TupleStruct(qpath, subpats, None) }; - let pat = self.pat(span, pt); - self.resolver.record_resolution(pat.id, def); - pat + self.pat(span, pt) } fn pat_ident(&mut self, span: Span, name: Name) -> P { @@ -1834,25 +2097,26 @@ impl<'a> LoweringContext<'a> { fn pat_ident_binding_mode(&mut self, span: Span, name: Name, bm: hir::BindingMode) -> P { - let pat_ident = hir::PatKind::Binding(bm, - Spanned { - span: span, - node: name, - }, - None); - - let pat = self.pat(span, pat_ident); - + let id = self.next_id(); let parent_def = self.parent_def; - let def = { + let def_id = { let 
defs = self.resolver.definitions(); let def_path_data = DefPathData::Binding(name.as_str()); - let def_index = defs.create_def_with_parent(parent_def, pat.id, def_path_data); - Def::Local(DefId::local(def_index)) + let def_index = defs.create_def_with_parent(parent_def, id, def_path_data); + DefId::local(def_index) }; - self.resolver.record_resolution(pat.id, def); - pat + P(hir::Pat { + id: id, + node: hir::PatKind::Binding(bm, + def_id, + Spanned { + span: span, + node: name, + }, + None), + span: span, + }) } fn pat_wild(&mut self, span: Span) -> P { @@ -1867,63 +2131,25 @@ impl<'a> LoweringContext<'a> { }) } - fn path_ident(&mut self, span: Span, id: Name) -> hir::Path { - self.path(span, vec![id]) - } + /// Given suffix ["b","c","d"], returns path `::std::b::c::d` when + /// `fld.cx.use_std`, and `::core::b::c::d` otherwise. + /// The path is also resolved according to `is_value`. + fn std_path(&mut self, span: Span, components: &[&str], is_value: bool) -> hir::Path { + let idents = self.crate_root.iter().chain(components); - fn path(&mut self, span: Span, strs: Vec) -> hir::Path { - self.path_all(span, false, strs, hir::HirVec::new(), hir::HirVec::new(), hir::HirVec::new()) - } - - fn path_global(&mut self, span: Span, strs: Vec) -> hir::Path { - self.path_all(span, true, strs, hir::HirVec::new(), hir::HirVec::new(), hir::HirVec::new()) - } - - fn path_all(&mut self, - sp: Span, - global: bool, - mut names: Vec, - lifetimes: hir::HirVec, - types: hir::HirVec>, - bindings: hir::HirVec) - -> hir::Path { - let last_identifier = names.pop().unwrap(); - let mut segments: Vec = names.into_iter().map(|name| { - hir::PathSegment { - name: name, - parameters: hir::PathParameters::none(), - } + let segments: Vec<_> = idents.map(|name| { + hir::PathSegment::from_name(Symbol::intern(name)) }).collect(); - segments.push(hir::PathSegment { - name: last_identifier, - parameters: hir::AngleBracketedParameters(hir::AngleBracketedParameterData { - lifetimes: lifetimes, - types: types, - bindings: bindings, - }), - }); - hir::Path { - span: sp, - global: global, + let mut path = hir::Path { + span: span, + global: true, + def: Def::Err, segments: segments.into(), - } - } + }; - fn std_path_components(&mut self, components: &[&str]) -> Vec { - let mut v = Vec::new(); - if let Some(s) = self.crate_root { - v.push(token::intern(s)); - } - v.extend(components.iter().map(|s| token::intern(s))); - return v; - } - - // Given suffix ["b","c","d"], returns path `::std::b::c::d` when - // `fld.cx.use_std`, and `::core::b::c::d` otherwise. 
- fn std_path(&mut self, span: Span, components: &[&str]) -> hir::Path { - let idents = self.std_path_components(components); - self.path_global(span, idents) + self.resolver.resolve_hir_path(&mut path, is_value); + path } fn signal_block_expr(&mut self, @@ -1932,7 +2158,7 @@ impl<'a> LoweringContext<'a> { span: Span, rule: hir::BlockCheckMode, attrs: ThinVec) - -> P { + -> hir::Expr { let id = self.next_id(); let block = P(hir::Block { rules: rule, @@ -1944,20 +2170,11 @@ impl<'a> LoweringContext<'a> { self.expr_block(block, attrs) } - fn signal_block_stmt(&mut self, - stmt: hir::Stmt, - span: Span, - rule: hir::BlockCheckMode, - attrs: ThinVec) - -> P { - let id = self.next_id(); - let block = P(hir::Block { - rules: rule, + fn ty(&mut self, span: Span, node: hir::Ty_) -> P { + P(hir::Ty { + id: self.next_id(), + node: node, span: span, - id: id, - stmts: hir_vec![stmt], - expr: None, - }); - self.expr_block(block, attrs) + }) } } diff --git a/src/librustc/hir/map/blocks.rs b/src/librustc/hir/map/blocks.rs index 4487234885..068e7ed862 100644 --- a/src/librustc/hir/map/blocks.rs +++ b/src/librustc/hir/map/blocks.rs @@ -21,11 +21,9 @@ //! nested within a uniquely determined `FnLike`), and users can ask //! for the `Code` associated with a particular NodeId. -pub use self::Code::*; - use hir as ast; use hir::map::{self, Node}; -use hir::{Block, FnDecl}; +use hir::{Expr, FnDecl}; use hir::intravisit::FnKind; use syntax::abi; use syntax::ast::{Attribute, Name, NodeId}; @@ -50,7 +48,7 @@ pub trait MaybeFnLike { fn is_fn_like(&self) -> bool; } /// Components shared by fn-like things (fn items, methods, closures). pub struct FnParts<'a> { pub decl: &'a FnDecl, - pub body: &'a Block, + pub body: ast::ExprId, pub kind: FnKind<'a>, pub span: Span, pub id: NodeId, @@ -77,29 +75,32 @@ impl MaybeFnLike for ast::Expr { } } -/// Carries either an FnLikeNode or a Block, as these are the two +/// Carries either an FnLikeNode or a Expr, as these are the two /// constructs that correspond to "code" (as in, something from which /// we can construct a control-flow graph). #[derive(Copy, Clone)] pub enum Code<'a> { - FnLikeCode(FnLikeNode<'a>), - BlockCode(&'a Block), + FnLike(FnLikeNode<'a>), + Expr(&'a Expr), } impl<'a> Code<'a> { pub fn id(&self) -> NodeId { match *self { - FnLikeCode(node) => node.id(), - BlockCode(block) => block.id, + Code::FnLike(node) => node.id(), + Code::Expr(block) => block.id, } } - /// Attempts to construct a Code from presumed FnLike or Block node input. - pub fn from_node(node: Node) -> Option { - if let map::NodeBlock(block) = node { - Some(BlockCode(block)) - } else { - FnLikeNode::from_node(node).map(|fn_like| FnLikeCode(fn_like)) + /// Attempts to construct a Code from presumed FnLike or Expr node input. + pub fn from_node(map: &map::Map<'a>, id: NodeId) -> Option> { + match map.get(id) { + map::NodeBlock(_) => { + // Use the parent, hopefully an expression node. + Code::from_node(map, map.get_parent_node(id)) + } + map::NodeExpr(expr) => Some(Code::Expr(expr)), + node => FnLikeNode::from_node(node).map(Code::FnLike) } } } @@ -114,7 +115,7 @@ struct ItemFnParts<'a> { abi: abi::Abi, vis: &'a ast::Visibility, generics: &'a ast::Generics, - body: &'a Block, + body: ast::ExprId, id: NodeId, span: Span, attrs: &'a [Attribute], @@ -124,14 +125,14 @@ struct ItemFnParts<'a> { /// for use when implementing FnLikeNode operations. 
struct ClosureParts<'a> { decl: &'a FnDecl, - body: &'a Block, + body: ast::ExprId, id: NodeId, span: Span, attrs: &'a [Attribute], } impl<'a> ClosureParts<'a> { - fn new(d: &'a FnDecl, b: &'a Block, id: NodeId, s: Span, attrs: &'a [Attribute]) -> Self { + fn new(d: &'a FnDecl, b: ast::ExprId, id: NodeId, s: Span, attrs: &'a [Attribute]) -> Self { ClosureParts { decl: d, body: b, @@ -171,9 +172,9 @@ impl<'a> FnLikeNode<'a> { } } - pub fn body(self) -> &'a Block { - self.handle(|i: ItemFnParts<'a>| &*i.body, - |_, _, _: &'a ast::MethodSig, _, body: &'a ast::Block, _, _| body, + pub fn body(self) -> ast::ExprId { + self.handle(|i: ItemFnParts<'a>| i.body, + |_, _, _: &'a ast::MethodSig, _, body: ast::ExprId, _, _| body, |c: ClosureParts<'a>| c.body) } @@ -195,6 +196,18 @@ impl<'a> FnLikeNode<'a> { |c: ClosureParts| c.id) } + pub fn constness(self) -> ast::Constness { + match self.kind() { + FnKind::ItemFn(_, _, _, constness, ..) => { + constness + } + FnKind::Method(_, m, ..) => { + m.constness + } + _ => ast::Constness::NotConst + } + } + pub fn kind(self) -> FnKind<'a> { let item = |p: ItemFnParts<'a>| -> FnKind<'a> { FnKind::ItemFn(p.name, p.generics, p.unsafety, p.constness, p.abi, p.vis, p.attrs) @@ -214,7 +227,7 @@ impl<'a> FnLikeNode<'a> { Name, &'a ast::MethodSig, Option<&'a ast::Visibility>, - &'a ast::Block, + ast::ExprId, Span, &'a [Attribute]) -> A, @@ -222,13 +235,13 @@ impl<'a> FnLikeNode<'a> { { match self.node { map::NodeItem(i) => match i.node { - ast::ItemFn(ref decl, unsafety, constness, abi, ref generics, ref block) => + ast::ItemFn(ref decl, unsafety, constness, abi, ref generics, block) => item_fn(ItemFnParts { id: i.id, name: i.name, decl: &decl, unsafety: unsafety, - body: &block, + body: block, generics: generics, abi: abi, vis: &i.vis, @@ -239,24 +252,24 @@ impl<'a> FnLikeNode<'a> { _ => bug!("item FnLikeNode that is not fn-like"), }, map::NodeTraitItem(ti) => match ti.node { - ast::MethodTraitItem(ref sig, Some(ref body)) => { + ast::MethodTraitItem(ref sig, Some(body)) => { method(ti.id, ti.name, sig, None, body, ti.span, &ti.attrs) } _ => bug!("trait method FnLikeNode that is not fn-like"), }, map::NodeImplItem(ii) => { match ii.node { - ast::ImplItemKind::Method(ref sig, ref body) => { + ast::ImplItemKind::Method(ref sig, body) => { method(ii.id, ii.name, sig, Some(&ii.vis), body, ii.span, &ii.attrs) } _ => { bug!("impl method FnLikeNode that is not fn-like") } } - } + }, map::NodeExpr(e) => match e.node { - ast::ExprClosure(_, ref decl, ref block, _fn_decl_span) => - closure(ClosureParts::new(&decl, &block, e.id, e.span, &e.attrs)), + ast::ExprClosure(_, ref decl, block, _fn_decl_span) => + closure(ClosureParts::new(&decl, block, e.id, e.span, &e.attrs)), _ => bug!("expr FnLikeNode that is not fn-like"), }, _ => bug!("other FnLikeNode that is not fn-like"), diff --git a/src/librustc/hir/map/collector.rs b/src/librustc/hir/map/collector.rs index 3d9031a136..c46c8f044e 100644 --- a/src/librustc/hir/map/collector.rs +++ b/src/librustc/hir/map/collector.rs @@ -9,10 +9,8 @@ // except according to those terms. use super::*; -use super::MapEntry::*; -use hir::*; -use hir::intravisit::Visitor; +use hir::intravisit::{Visitor, NestedVisitorMap}; use hir::def_id::DefId; use middle::cstore::InlinedItem; use std::iter::repeat; @@ -92,6 +90,11 @@ impl<'ast> Visitor<'ast> for NodeCollector<'ast> { /// Because we want to track parent items and so forth, enable /// deep walking so that we walk nested items in the context of /// their outer items. 
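+    // Note: nested visiting is implemented by hand in this visitor (see
+    // `visit_nested_item`, `visit_nested_impl_item` and `visit_body` below),
+    // so `nested_visit_map` deliberately panics if it is ever called.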
+ + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + panic!("visit_nested_xxx must be manually implemented in this visitor") + } + fn visit_nested_item(&mut self, item: ItemId) { debug!("visit_nested_item: {:?}", item); if !self.ignore_nested_items { @@ -99,6 +102,14 @@ impl<'ast> Visitor<'ast> for NodeCollector<'ast> { } } + fn visit_nested_impl_item(&mut self, item_id: ImplItemId) { + self.visit_impl_item(self.krate.impl_item(item_id)) + } + + fn visit_body(&mut self, id: ExprId) { + self.visit_expr(self.krate.expr(id)) + } + fn visit_item(&mut self, i: &'ast Item) { debug!("visit_item: {:?}", i); @@ -117,23 +128,6 @@ impl<'ast> Visitor<'ast> for NodeCollector<'ast> { this.insert(struct_def.id(), NodeStructCtor(struct_def)); } } - ItemTrait(.., ref bounds, _) => { - for b in bounds.iter() { - if let TraitTyParamBound(ref t, TraitBoundModifier::None) = *b { - this.insert(t.trait_ref.ref_id, NodeItem(i)); - } - } - } - ItemUse(ref view_path) => { - match view_path.node { - ViewPathList(_, ref paths) => { - for path in paths { - this.insert(path.node.id, NodeItem(i)); - } - } - _ => () - } - } _ => {} } intravisit::walk_item(this, i); @@ -210,8 +204,16 @@ impl<'ast> Visitor<'ast> for NodeCollector<'ast> { }); } + fn visit_trait_ref(&mut self, tr: &'ast TraitRef) { + self.insert(tr.ref_id, NodeTraitRef(tr)); + + self.with_parent(tr.ref_id, |this| { + intravisit::walk_trait_ref(this, tr); + }); + } + fn visit_fn(&mut self, fk: intravisit::FnKind<'ast>, fd: &'ast FnDecl, - b: &'ast Block, s: Span, id: NodeId) { + b: ExprId, s: Span, id: NodeId) { assert_eq!(self.parent_node, id); intravisit::walk_fn(self, fk, fd, b, s, id); } @@ -226,4 +228,29 @@ impl<'ast> Visitor<'ast> for NodeCollector<'ast> { fn visit_lifetime(&mut self, lifetime: &'ast Lifetime) { self.insert(lifetime.id, NodeLifetime(lifetime)); } + + fn visit_vis(&mut self, visibility: &'ast Visibility) { + match *visibility { + Visibility::Public | + Visibility::Crate | + Visibility::Inherited => {} + Visibility::Restricted { id, .. } => { + self.insert(id, NodeVisibility(visibility)); + self.with_parent(id, |this| { + intravisit::walk_vis(this, visibility); + }); + } + } + } + + fn visit_macro_def(&mut self, macro_def: &'ast MacroDef) { + self.insert_entry(macro_def.id, NotPresent); + } + + fn visit_struct_field(&mut self, field: &'ast StructField) { + self.insert(field.id, NodeField(field)); + self.with_parent(field.id, |this| { + intravisit::walk_struct_field(this, field); + }); + } } diff --git a/src/librustc/hir/map/def_collector.rs b/src/librustc/hir/map/def_collector.rs index 421843a7f1..eb5a89f320 100644 --- a/src/librustc/hir/map/def_collector.rs +++ b/src/librustc/hir/map/def_collector.rs @@ -8,10 +8,10 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use super::*; +use hir::map::definitions::*; use hir; -use hir::intravisit; +use hir::intravisit::{self, Visitor, NestedVisitorMap}; use hir::def_id::{CRATE_DEF_INDEX, DefId, DefIndex}; use middle::cstore::InlinedItem; @@ -19,7 +19,7 @@ use middle::cstore::InlinedItem; use syntax::ast::*; use syntax::ext::hygiene::Mark; use syntax::visit; -use syntax::parse::token::{self, keywords}; +use syntax::symbol::{Symbol, keywords}; /// Creates def ids for nodes in the HIR. 
pub struct DefCollector<'a> { @@ -135,8 +135,8 @@ impl<'a> DefCollector<'a> { } } -impl<'a> visit::Visitor for DefCollector<'a> { - fn visit_item(&mut self, i: &Item) { +impl<'a> visit::Visitor<'a> for DefCollector<'a> { + fn visit_item(&mut self, i: &'a Item) { debug!("visit_item: {:?}", i); // Pick the def data. This need not be unique, but the more @@ -155,7 +155,20 @@ impl<'a> visit::Visitor for DefCollector<'a> { DefPathData::ValueNs(i.ident.name.as_str()), ItemKind::Mac(..) if i.id == DUMMY_NODE_ID => return, // Scope placeholder ItemKind::Mac(..) => return self.visit_macro_invoc(i.id, false), - ItemKind::Use(..) => DefPathData::Misc, + ItemKind::Use(ref view_path) => { + match view_path.node { + ViewPathGlob(..) => {} + + // FIXME(eddyb) Should use the real name. Which namespace? + ViewPathSimple(..) => {} + ViewPathList(_, ref imports) => { + for import in imports { + self.create_def(import.node.id, DefPathData::Misc); + } + } + } + DefPathData::Misc + } }; let def = self.create_def(i.id, def_data); @@ -169,7 +182,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { this.with_parent(variant_def_index, |this| { for (index, field) in v.node.data.fields().iter().enumerate() { let name = field.ident.map(|ident| ident.name) - .unwrap_or_else(|| token::intern(&index.to_string())); + .unwrap_or_else(|| Symbol::intern(&index.to_string())); this.create_def(field.id, DefPathData::Field(name.as_str())); } @@ -188,7 +201,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { for (index, field) in struct_def.fields().iter().enumerate() { let name = field.ident.map(|ident| ident.name.as_str()) - .unwrap_or(token::intern(&index.to_string()).as_str()); + .unwrap_or(Symbol::intern(&index.to_string()).as_str()); this.create_def(field.id, DefPathData::Field(name)); } } @@ -198,7 +211,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { }); } - fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) { + fn visit_foreign_item(&mut self, foreign_item: &'a ForeignItem) { let def = self.create_def(foreign_item.id, DefPathData::ValueNs(foreign_item.ident.name.as_str())); @@ -207,7 +220,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { }); } - fn visit_generics(&mut self, generics: &Generics) { + fn visit_generics(&mut self, generics: &'a Generics) { for ty_param in generics.ty_params.iter() { self.create_def(ty_param.id, DefPathData::TypeParam(ty_param.ident.name.as_str())); } @@ -215,7 +228,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { visit::walk_generics(self, generics); } - fn visit_trait_item(&mut self, ti: &TraitItem) { + fn visit_trait_item(&mut self, ti: &'a TraitItem) { let def_data = match ti.node { TraitItemKind::Method(..) | TraitItemKind::Const(..) => DefPathData::ValueNs(ti.ident.name.as_str()), @@ -233,7 +246,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { }); } - fn visit_impl_item(&mut self, ii: &ImplItem) { + fn visit_impl_item(&mut self, ii: &'a ImplItem) { let def_data = match ii.node { ImplItemKind::Method(..) | ImplItemKind::Const(..) 
=> DefPathData::ValueNs(ii.ident.name.as_str()), @@ -251,7 +264,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { }); } - fn visit_pat(&mut self, pat: &Pat) { + fn visit_pat(&mut self, pat: &'a Pat) { let parent_def = self.parent_def; match pat.node { @@ -267,7 +280,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { self.parent_def = parent_def; } - fn visit_expr(&mut self, expr: &Expr) { + fn visit_expr(&mut self, expr: &'a Expr) { let parent_def = self.parent_def; match expr.node { @@ -284,7 +297,7 @@ impl<'a> visit::Visitor for DefCollector<'a> { self.parent_def = parent_def; } - fn visit_ty(&mut self, ty: &Ty) { + fn visit_ty(&mut self, ty: &'a Ty) { match ty.node { TyKind::Mac(..) => return self.visit_macro_invoc(ty.id, false), TyKind::Array(_, ref length) => self.visit_ast_const_integer(length), @@ -296,15 +309,15 @@ impl<'a> visit::Visitor for DefCollector<'a> { visit::walk_ty(self, ty); } - fn visit_lifetime_def(&mut self, def: &LifetimeDef) { + fn visit_lifetime_def(&mut self, def: &'a LifetimeDef) { self.create_def(def.lifetime.id, DefPathData::LifetimeDef(def.lifetime.name.as_str())); } - fn visit_macro_def(&mut self, macro_def: &MacroDef) { + fn visit_macro_def(&mut self, macro_def: &'a MacroDef) { self.create_def(macro_def.id, DefPathData::MacroDef(macro_def.ident.name.as_str())); } - fn visit_stmt(&mut self, stmt: &Stmt) { + fn visit_stmt(&mut self, stmt: &'a Stmt) { match stmt.node { StmtKind::Mac(..) => self.visit_macro_invoc(stmt.id, false), _ => visit::walk_stmt(self, stmt), @@ -313,7 +326,18 @@ impl<'a> visit::Visitor for DefCollector<'a> { } // We walk the HIR rather than the AST when reading items from metadata. -impl<'ast> intravisit::Visitor<'ast> for DefCollector<'ast> { +impl<'ast> Visitor<'ast> for DefCollector<'ast> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + // note however that we override `visit_body` below + NestedVisitorMap::None + } + + fn visit_body(&mut self, id: hir::ExprId) { + if let Some(krate) = self.hir_crate { + self.visit_expr(krate.expr(id)); + } + } + fn visit_item(&mut self, i: &'ast hir::Item) { debug!("visit_item: {:?}", i); @@ -423,7 +447,7 @@ impl<'ast> intravisit::Visitor<'ast> for DefCollector<'ast> { fn visit_pat(&mut self, pat: &'ast hir::Pat) { let parent_def = self.parent_def; - if let hir::PatKind::Binding(_, name, _) = pat.node { + if let hir::PatKind::Binding(_, _, name, _) = pat.node { let def = self.create_def(pat.id, DefPathData::Binding(name.node.as_str())); self.parent_def = Some(def); } diff --git a/src/librustc/hir/map/definitions.rs b/src/librustc/hir/map/definitions.rs index e8b3714bbe..a684563512 100644 --- a/src/librustc/hir/map/definitions.rs +++ b/src/librustc/hir/map/definitions.rs @@ -9,12 +9,12 @@ // except according to those terms. 
use hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE}; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; +use rustc_data_structures::stable_hasher::StableHasher; use std::fmt::Write; use std::hash::{Hash, Hasher}; -use std::collections::hash_map::DefaultHasher; use syntax::ast; -use syntax::parse::token::{self, InternedString}; +use syntax::symbol::{Symbol, InternedString}; use ty::TyCtxt; use util::nodemap::NodeMap; @@ -22,7 +22,7 @@ use util::nodemap::NodeMap; #[derive(Clone)] pub struct Definitions { data: Vec, - key_map: FnvHashMap, + key_map: FxHashMap, node_map: NodeMap, } @@ -115,9 +115,9 @@ impl DefPath { pub fn to_string(&self, tcx: TyCtxt) -> String { let mut s = String::with_capacity(self.data.len() * 16); - s.push_str(&tcx.original_crate_name(self.krate)); + s.push_str(&tcx.original_crate_name(self.krate).as_str()); s.push_str("/"); - s.push_str(&tcx.crate_disambiguator(self.krate)); + s.push_str(&tcx.crate_disambiguator(self.krate).as_str()); for component in &self.data { write!(s, @@ -131,14 +131,15 @@ impl DefPath { } pub fn deterministic_hash(&self, tcx: TyCtxt) -> u64 { - let mut state = DefaultHasher::new(); + debug!("deterministic_hash({:?})", self); + let mut state = StableHasher::new(); self.deterministic_hash_to(tcx, &mut state); state.finish() } pub fn deterministic_hash_to(&self, tcx: TyCtxt, state: &mut H) { - tcx.original_crate_name(self.krate).hash(state); - tcx.crate_disambiguator(self.krate).hash(state); + tcx.original_crate_name(self.krate).as_str().hash(state); + tcx.crate_disambiguator(self.krate).as_str().hash(state); self.data.hash(state); } } @@ -219,7 +220,7 @@ impl Definitions { pub fn new() -> Definitions { Definitions { data: vec![], - key_map: FnvHashMap(), + key_map: FxHashMap(), node_map: NodeMap(), } } @@ -328,7 +329,7 @@ impl DefPathData { LifetimeDef(ref name) | EnumVariant(ref name) | Binding(ref name) | - Field(ref name) => Some(token::intern(name)), + Field(ref name) => Some(Symbol::intern(name)), Impl | CrateRoot | @@ -343,7 +344,7 @@ impl DefPathData { pub fn as_interned_str(&self) -> InternedString { use self::DefPathData::*; - match *self { + let s = match *self { TypeNs(ref name) | ValueNs(ref name) | Module(ref name) | @@ -353,47 +354,27 @@ impl DefPathData { EnumVariant(ref name) | Binding(ref name) | Field(ref name) => { - name.clone() - } - - Impl => { - InternedString::new("{{impl}}") + return name.clone(); } // note that this does not show up in user printouts - CrateRoot => { - InternedString::new("{{root}}") - } + CrateRoot => "{{root}}", // note that this does not show up in user printouts - InlinedRoot(_) => { - InternedString::new("{{inlined-root}}") - } + InlinedRoot(_) => "{{inlined-root}}", - Misc => { - InternedString::new("{{?}}") - } + Impl => "{{impl}}", + Misc => "{{?}}", + ClosureExpr => "{{closure}}", + StructCtor => "{{constructor}}", + Initializer => "{{initializer}}", + ImplTrait => "{{impl-Trait}}", + }; - ClosureExpr => { - InternedString::new("{{closure}}") - } - - StructCtor => { - InternedString::new("{{constructor}}") - } - - Initializer => { - InternedString::new("{{initializer}}") - } - - ImplTrait => { - InternedString::new("{{impl-Trait}}") - } - } + Symbol::intern(s).as_str() } pub fn to_string(&self) -> String { self.as_interned_str().to_string() } } - diff --git a/src/librustc/hir/map/mod.rs b/src/librustc/hir/map/mod.rs index 39114ec423..117edcf14a 100644 --- a/src/librustc/hir/map/mod.rs +++ b/src/librustc/hir/map/mod.rs @@ -18,7 +18,6 @@ pub use 
self::definitions::{Definitions, DefKey, DefPath, DefPathData, use dep_graph::{DepGraph, DepNode}; use middle::cstore::InlinedItem; -use middle::cstore::InlinedItem as II; use hir::def_id::{CRATE_DEF_INDEX, DefId, DefIndex}; use syntax::abi::Abi; @@ -46,9 +45,11 @@ pub enum Node<'ast> { NodeTraitItem(&'ast TraitItem), NodeImplItem(&'ast ImplItem), NodeVariant(&'ast Variant), + NodeField(&'ast StructField), NodeExpr(&'ast Expr), NodeStmt(&'ast Stmt), NodeTy(&'ast Ty), + NodeTraitRef(&'ast TraitRef), NodeLocal(&'ast Pat), NodePat(&'ast Pat), NodeBlock(&'ast Block), @@ -57,7 +58,10 @@ pub enum Node<'ast> { NodeStructCtor(&'ast VariantData), NodeLifetime(&'ast Lifetime), - NodeTyParam(&'ast TyParam) + NodeTyParam(&'ast TyParam), + NodeVisibility(&'ast Visibility), + + NodeInlinedItem(&'ast InlinedItem), } /// Represents an entry and its parent NodeID. @@ -73,15 +77,18 @@ pub enum MapEntry<'ast> { EntryTraitItem(NodeId, &'ast TraitItem), EntryImplItem(NodeId, &'ast ImplItem), EntryVariant(NodeId, &'ast Variant), + EntryField(NodeId, &'ast StructField), EntryExpr(NodeId, &'ast Expr), EntryStmt(NodeId, &'ast Stmt), EntryTy(NodeId, &'ast Ty), + EntryTraitRef(NodeId, &'ast TraitRef), EntryLocal(NodeId, &'ast Pat), EntryPat(NodeId, &'ast Pat), EntryBlock(NodeId, &'ast Block), EntryStructCtor(NodeId, &'ast VariantData), EntryLifetime(NodeId, &'ast Lifetime), EntryTyParam(NodeId, &'ast TyParam), + EntryVisibility(NodeId, &'ast Visibility), /// Roots for node trees. RootCrate, @@ -102,15 +109,20 @@ impl<'ast> MapEntry<'ast> { NodeTraitItem(n) => EntryTraitItem(p, n), NodeImplItem(n) => EntryImplItem(p, n), NodeVariant(n) => EntryVariant(p, n), + NodeField(n) => EntryField(p, n), NodeExpr(n) => EntryExpr(p, n), NodeStmt(n) => EntryStmt(p, n), NodeTy(n) => EntryTy(p, n), + NodeTraitRef(n) => EntryTraitRef(p, n), NodeLocal(n) => EntryLocal(p, n), NodePat(n) => EntryPat(p, n), NodeBlock(n) => EntryBlock(p, n), NodeStructCtor(n) => EntryStructCtor(p, n), NodeLifetime(n) => EntryLifetime(p, n), NodeTyParam(n) => EntryTyParam(p, n), + NodeVisibility(n) => EntryVisibility(p, n), + + NodeInlinedItem(n) => RootInlinedParent(n), } } @@ -121,15 +133,18 @@ impl<'ast> MapEntry<'ast> { EntryTraitItem(id, _) => id, EntryImplItem(id, _) => id, EntryVariant(id, _) => id, + EntryField(id, _) => id, EntryExpr(id, _) => id, EntryStmt(id, _) => id, EntryTy(id, _) => id, + EntryTraitRef(id, _) => id, EntryLocal(id, _) => id, EntryPat(id, _) => id, EntryBlock(id, _) => id, EntryStructCtor(id, _) => id, EntryLifetime(id, _) => id, EntryTyParam(id, _) => id, + EntryVisibility(id, _) => id, NotPresent | RootCrate | @@ -144,15 +159,19 @@ impl<'ast> MapEntry<'ast> { EntryTraitItem(_, n) => NodeTraitItem(n), EntryImplItem(_, n) => NodeImplItem(n), EntryVariant(_, n) => NodeVariant(n), + EntryField(_, n) => NodeField(n), EntryExpr(_, n) => NodeExpr(n), EntryStmt(_, n) => NodeStmt(n), EntryTy(_, n) => NodeTy(n), + EntryTraitRef(_, n) => NodeTraitRef(n), EntryLocal(_, n) => NodeLocal(n), EntryPat(_, n) => NodePat(n), EntryBlock(_, n) => NodeBlock(n), EntryStructCtor(_, n) => NodeStructCtor(n), EntryLifetime(_, n) => NodeLifetime(n), EntryTyParam(_, n) => NodeTyParam(n), + EntryVisibility(_, n) => NodeVisibility(n), + RootInlinedParent(n) => NodeInlinedItem(n), _ => return None }) } @@ -237,40 +256,63 @@ impl<'ast> Map<'ast> { let map = self.map.borrow(); let mut id = id0; if !self.is_inlined_node_id(id) { + let mut last_expr = None; loop { match map[id.as_usize()] { EntryItem(_, item) => { - let def_id = self.local_def_id(item.id); - 
// NB ^~~~~~~ - // - // You would expect that `item.id == id`, but this - // is not always the case. In particular, for a - // ViewPath item like `use self::{mem, foo}`, we - // map the ids for `mem` and `foo` to the - // enclosing view path item. This seems mega super - // ultra wrong, but then who am I to judge? - // -nmatsakis + assert_eq!(id, item.id); + let def_id = self.local_def_id(id); assert!(!self.is_inlined_def_id(def_id)); + + if let Some(last_id) = last_expr { + // The body of the item may have a separate dep node + // (Note that trait items don't currently have + // their own dep node, so there's also just one + // HirBody node for all the items) + if self.is_body(last_id, item) { + return DepNode::HirBody(def_id); + } + } + return DepNode::Hir(def_id); + } + + EntryImplItem(_, item) => { + let def_id = self.local_def_id(id); + assert!(!self.is_inlined_def_id(def_id)); + + if let Some(last_id) = last_expr { + // The body of the item may have a separate dep node + if self.is_impl_item_body(last_id, item) { + return DepNode::HirBody(def_id); + } + } return DepNode::Hir(def_id); } EntryForeignItem(p, _) | EntryTraitItem(p, _) | - EntryImplItem(p, _) | EntryVariant(p, _) | - EntryExpr(p, _) | + EntryField(p, _) | EntryStmt(p, _) | EntryTy(p, _) | + EntryTraitRef(p, _) | EntryLocal(p, _) | EntryPat(p, _) | EntryBlock(p, _) | EntryStructCtor(p, _) | EntryLifetime(p, _) | - EntryTyParam(p, _) => + EntryTyParam(p, _) | + EntryVisibility(p, _) => id = p, - RootCrate => - return DepNode::Krate, + EntryExpr(p, _) => { + last_expr = Some(id); + id = p; + } + + RootCrate => { + return DepNode::Hir(DefId::local(CRATE_DEF_INDEX)); + } RootInlinedParent(_) => bug!("node {} has inlined ancestor but is not inlined", id0), @@ -299,23 +341,22 @@ impl<'ast> Map<'ast> { EntryTraitItem(p, _) | EntryImplItem(p, _) | EntryVariant(p, _) | + EntryField(p, _) | EntryExpr(p, _) | EntryStmt(p, _) | EntryTy(p, _) | + EntryTraitRef(p, _) | EntryLocal(p, _) | EntryPat(p, _) | EntryBlock(p, _) | EntryStructCtor(p, _) | EntryLifetime(p, _) | - EntryTyParam(p, _) => + EntryTyParam(p, _) | + EntryVisibility(p, _) => id = p, - RootInlinedParent(parent) => match *parent { - InlinedItem::Item(def_id, _) | - InlinedItem::TraitItem(def_id, _) | - InlinedItem::ImplItem(def_id, _) => - return DepNode::MetaData(def_id) - }, + RootInlinedParent(parent) => + return DepNode::MetaData(parent.def_id), RootCrate => bug!("node {} has crate ancestor but is inlined", id0), @@ -327,6 +368,29 @@ impl<'ast> Map<'ast> { } } + fn is_body(&self, node_id: NodeId, item: &Item) -> bool { + match item.node { + ItemFn(_, _, _, _, _, body) => body.node_id() == node_id, + // Since trait items currently don't get their own dep nodes, + // we check here whether node_id is the body of any of the items. 
+ // If they get their own dep nodes, this can go away + ItemTrait(_, _, _, ref trait_items) => { + trait_items.iter().any(|trait_item| { match trait_item.node { + MethodTraitItem(_, Some(body)) => body.node_id() == node_id, + _ => false + }}) + } + _ => false + } + } + + fn is_impl_item_body(&self, node_id: NodeId, item: &ImplItem) -> bool { + match item.node { + ImplItemKind::Method(_, body) => body.node_id() == node_id, + _ => false + } + } + pub fn num_local_def_ids(&self) -> usize { self.definitions.borrow().len() } @@ -378,6 +442,14 @@ impl<'ast> Map<'ast> { self.forest.krate() } + pub fn impl_item(&self, id: ImplItemId) -> &'ast ImplItem { + self.read(id.node_id); + + // NB: intentionally bypass `self.forest.krate()` so that we + // do not trigger a read of the whole krate here + self.forest.krate.impl_item(id) + } + /// Get the attributes on the krate. This is preferable to /// invoking `krate.attrs` because it registers a tighter /// dep-graph access. @@ -530,8 +602,7 @@ impl<'ast> Map<'ast> { pub fn get_parent_did(&self, id: NodeId) -> DefId { let parent = self.get_parent(id); match self.find_entry(parent) { - Some(RootInlinedParent(&II::TraitItem(did, _))) | - Some(RootInlinedParent(&II::ImplItem(did, _))) => did, + Some(RootInlinedParent(ii)) => ii.def_id, _ => self.local_def_id(parent) } } @@ -629,6 +700,10 @@ impl<'ast> Map<'ast> { } } + pub fn expr(&self, id: ExprId) -> &'ast Expr { + self.expect_expr(id.node_id()) + } + /// Returns the name associated with the given NodeId's AST. pub fn name(&self, id: NodeId) -> Name { match self.get(id) { @@ -637,9 +712,10 @@ impl<'ast> Map<'ast> { NodeImplItem(ii) => ii.name, NodeTraitItem(ti) => ti.name, NodeVariant(v) => v.node.name, + NodeField(f) => f.name, NodeLifetime(lt) => lt.name, NodeTyParam(tp) => tp.name, - NodeLocal(&Pat { node: PatKind::Binding(_,l,_), .. }) => l.node, + NodeLocal(&Pat { node: PatKind::Binding(_,_,l,_), .. 
}) => l.node, NodeStructCtor(_) => self.name(self.get_parent(id)), _ => bug!("no name for {}", self.node_to_string(id)) } @@ -655,6 +731,7 @@ impl<'ast> Map<'ast> { Some(NodeTraitItem(ref ti)) => Some(&ti.attrs[..]), Some(NodeImplItem(ref ii)) => Some(&ii.attrs[..]), Some(NodeVariant(ref v)) => Some(&v.node.attrs[..]), + Some(NodeField(ref f)) => Some(&f.attrs[..]), Some(NodeExpr(ref e)) => Some(&*e.attrs), Some(NodeStmt(ref s)) => Some(s.node.attrs()), // unit/tuple structs take the attributes straight from @@ -684,44 +761,40 @@ impl<'ast> Map<'ast> { } } - pub fn opt_span(&self, id: NodeId) -> Option { - let sp = match self.find(id) { - Some(NodeItem(item)) => item.span, - Some(NodeForeignItem(foreign_item)) => foreign_item.span, - Some(NodeTraitItem(trait_method)) => trait_method.span, - Some(NodeImplItem(ref impl_item)) => impl_item.span, - Some(NodeVariant(variant)) => variant.span, - Some(NodeExpr(expr)) => expr.span, - Some(NodeStmt(stmt)) => stmt.span, - Some(NodeTy(ty)) => ty.span, - Some(NodeLocal(pat)) => pat.span, - Some(NodePat(pat)) => pat.span, - Some(NodeBlock(block)) => block.span, - Some(NodeStructCtor(_)) => self.expect_item(self.get_parent(id)).span, - Some(NodeTyParam(ty_param)) => ty_param.span, - _ => return None, - }; - Some(sp) - } - pub fn span(&self, id: NodeId) -> Span { self.read(id); // reveals span from node - self.opt_span(id) - .unwrap_or_else(|| bug!("AstMap.span: could not find span for id {:?}", id)) + match self.find_entry(id) { + Some(EntryItem(_, item)) => item.span, + Some(EntryForeignItem(_, foreign_item)) => foreign_item.span, + Some(EntryTraitItem(_, trait_method)) => trait_method.span, + Some(EntryImplItem(_, impl_item)) => impl_item.span, + Some(EntryVariant(_, variant)) => variant.span, + Some(EntryField(_, field)) => field.span, + Some(EntryExpr(_, expr)) => expr.span, + Some(EntryStmt(_, stmt)) => stmt.span, + Some(EntryTy(_, ty)) => ty.span, + Some(EntryTraitRef(_, tr)) => tr.path.span, + Some(EntryLocal(_, pat)) => pat.span, + Some(EntryPat(_, pat)) => pat.span, + Some(EntryBlock(_, block)) => block.span, + Some(EntryStructCtor(_, _)) => self.expect_item(self.get_parent(id)).span, + Some(EntryLifetime(_, lifetime)) => lifetime.span, + Some(EntryTyParam(_, ty_param)) => ty_param.span, + Some(EntryVisibility(_, &Visibility::Restricted { ref path, .. })) => path.span, + Some(EntryVisibility(_, v)) => bug!("unexpected Visibility {:?}", v), + + Some(RootCrate) => self.forest.krate.span, + Some(RootInlinedParent(parent)) => parent.body.span, + Some(NotPresent) | None => { + bug!("hir::map::Map::span: id not in map: {:?}", id) + } + } } pub fn span_if_local(&self, id: DefId) -> Option { self.as_local_node_id(id).map(|id| self.span(id)) } - pub fn def_id_span(&self, def_id: DefId, fallback: Span) -> Span { - if let Some(node_id) = self.as_local_node_id(def_id) { - self.opt_span(node_id).unwrap_or(fallback) - } else { - fallback - } - } - pub fn node_to_string(&self, id: NodeId) -> String { node_id_to_string(self, id, true) } @@ -752,7 +825,7 @@ impl<'a, 'ast> NodesMatchingSuffix<'a, 'ast> { None => return false, Some((node_id, name)) => (node_id, name), }; - if &part[..] != mod_name.as_str() { + if mod_name != &**part { return false; } cursor = self.map.get_parent(mod_id); @@ -790,8 +863,7 @@ impl<'a, 'ast> NodesMatchingSuffix<'a, 'ast> { // We are looking at some node `n` with a given name and parent // id; do their names match what I am seeking? 
fn matches_names(&self, parent_of_n: NodeId, name: Name) -> bool { - name.as_str() == &self.item_name[..] && - self.suffix_matches(parent_of_n) + name == &**self.item_name && self.suffix_matches(parent_of_n) } } @@ -811,6 +883,7 @@ impl<'a, 'ast> Iterator for NodesMatchingSuffix<'a, 'ast> { Some(EntryTraitItem(_, n)) => n.name(), Some(EntryImplItem(_, n)) => n.name(), Some(EntryVariant(_, n)) => n.name(), + Some(EntryField(_, n)) => n.name(), _ => continue, }; if self.matches_names(self.map.get_parent(idx), name) { @@ -829,6 +902,7 @@ impl Named for Spanned { fn name(&self) -> Name { self.node.name() } impl Named for Item { fn name(&self) -> Name { self.name } } impl Named for ForeignItem { fn name(&self) -> Name { self.name } } impl Named for Variant_ { fn name(&self) -> Name { self.name } } +impl Named for StructField { fn name(&self) -> Name { self.name } } impl Named for TraitItem { fn name(&self) -> Name { self.name } } impl Named for ImplItem { fn name(&self) -> Name { self.name } } @@ -914,15 +988,20 @@ impl<'a> NodePrinter for pprust::State<'a> { NodeExpr(a) => self.print_expr(&a), NodeStmt(a) => self.print_stmt(&a), NodeTy(a) => self.print_type(&a), + NodeTraitRef(a) => self.print_trait_ref(&a), NodePat(a) => self.print_pat(&a), NodeBlock(a) => self.print_block(&a), NodeLifetime(a) => self.print_lifetime(&a), + NodeVisibility(a) => self.print_visibility(&a), NodeTyParam(_) => bug!("cannot print TyParam"), + NodeField(_) => bug!("cannot print StructField"), // these cases do not carry enough information in the // ast_map to reconstruct their full structure for pretty // printing. NodeLocal(_) => bug!("cannot print isolated Local"), NodeStructCtor(_) => bug!("cannot print isolated StructCtor"), + + NodeInlinedItem(_) => bug!("cannot print inlined item"), } } } @@ -997,6 +1076,11 @@ fn node_id_to_string(map: &Map, id: NodeId, include_id: bool) -> String { variant.node.name, path_str(), id_str) } + Some(NodeField(ref field)) => { + format!("field {} in {}{}", + field.name, + path_str(), id_str) + } Some(NodeExpr(ref expr)) => { format!("expr {}{}", pprust::expr_to_string(&expr), id_str) } @@ -1006,6 +1090,9 @@ fn node_id_to_string(map: &Map, id: NodeId, include_id: bool) -> String { Some(NodeTy(ref ty)) => { format!("type {}{}", pprust::ty_to_string(&ty), id_str) } + Some(NodeTraitRef(ref tr)) => { + format!("trait_ref {}{}", pprust::path_to_string(&tr.path), id_str) + } Some(NodeLocal(ref pat)) => { format!("local {}{}", pprust::pat_to_string(&pat), id_str) } @@ -1025,6 +1112,12 @@ fn node_id_to_string(map: &Map, id: NodeId, include_id: bool) -> String { Some(NodeTyParam(ref ty_param)) => { format!("typaram {:?}{}", ty_param, id_str) } + Some(NodeVisibility(ref vis)) => { + format!("visibility {:?}{}", vis, id_str) + } + Some(NodeInlinedItem(_)) => { + format!("inlined item {}", id_str) + } None => { format!("unknown node{}", id_str) } diff --git a/src/librustc/hir/mod.rs b/src/librustc/hir/mod.rs index 5f57ceac35..4fd8f96ba0 100644 --- a/src/librustc/hir/mod.rs +++ b/src/librustc/hir/mod.rs @@ -27,21 +27,21 @@ pub use self::Ty_::*; pub use self::TyParamBound::*; pub use self::UnOp::*; pub use self::UnsafeSource::*; -pub use self::ViewPath_::*; pub use self::Visibility::{Public, Inherited}; pub use self::PathParameters::*; use hir::def::Def; use hir::def_id::DefId; -use util::nodemap::{NodeMap, FnvHashSet}; +use util::nodemap::{NodeMap, FxHashSet}; +use rustc_data_structures::fnv::FnvHashMap; use syntax_pos::{mk_sp, Span, ExpnId, DUMMY_SP}; use syntax::codemap::{self, respan, 
Spanned}; use syntax::abi::Abi; use syntax::ast::{Name, NodeId, DUMMY_NODE_ID, AsmDialect}; use syntax::ast::{Attribute, Lit, StrStyle, FloatTy, IntTy, UintTy, MetaItem}; -use syntax::parse::token::{keywords, InternedString}; use syntax::ptr::P; +use syntax::symbol::{Symbol, keywords}; use syntax::tokenstream::TokenTree; use syntax::util::ThinVec; @@ -68,6 +68,7 @@ pub mod check_attr; pub mod def; pub mod def_id; pub mod intravisit; +pub mod itemlikevisit; pub mod lowering; pub mod map; pub mod pat_util; @@ -107,6 +108,8 @@ pub struct Path { /// A `::foo` path, is relative to the crate root rather than current /// module (like paths in an import). pub global: bool, + /// The definition that the path resolved to. + pub def: Def, /// The segments in the path: the things separated by `::`. pub segments: HirVec, } @@ -123,21 +126,6 @@ impl fmt::Display for Path { } } -impl Path { - /// Convert a span and an identifier to the corresponding - /// 1-segment path. - pub fn from_name(s: Span, name: Name) -> Path { - Path { - span: s, - global: false, - segments: hir_vec![PathSegment { - name: name, - parameters: PathParameters::none() - }], - } - } -} - /// A segment of a path: an identifier, an optional lifetime, and a set of /// types. #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] @@ -153,6 +141,16 @@ pub struct PathSegment { pub parameters: PathParameters, } +impl PathSegment { + /// Convert an identifier to the corresponding segment. + pub fn from_name(name: Name) -> PathSegment { + PathSegment { + name: name, + parameters: PathParameters::none() + } + } +} + #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub enum PathParameters { /// The `<'a, A,B,C>` in `foo::bar::baz::<'a, A,B,C>` @@ -166,6 +164,7 @@ impl PathParameters { AngleBracketedParameters(AngleBracketedParameterData { lifetimes: HirVec::new(), types: HirVec::new(), + infer_types: true, bindings: HirVec::new(), }) } @@ -240,6 +239,11 @@ pub struct AngleBracketedParameterData { pub lifetimes: HirVec, /// The type parameters for this path segment, if present. pub types: HirVec>, + /// Whether to infer remaining type parameters, if any. + /// This only applies to expression and pattern paths, and + /// out of those only the segments with no type parameters + /// to begin with, e.g. `Vec::new` is `>::new::<..>`. + pub infer_types: bool, /// Bindings (equality constraints) on associated types, if present. /// E.g., `Foo`. pub bindings: HirVec, @@ -423,6 +427,9 @@ pub struct Crate { // detected, which in turn can make compile-fail tests yield // slightly different results. pub items: BTreeMap, + + pub impl_items: BTreeMap, + pub exprs: FnvHashMap, } impl Crate { @@ -430,6 +437,10 @@ impl Crate { &self.items[&id] } + pub fn impl_item(&self, id: ImplItemId) -> &ImplItem { + &self.impl_items[&id] + } + /// Visits all items in the crate in some determinstic (but /// unspecified) order. If you just need to process every item, /// but don't care about nesting, this method is the best choice. @@ -438,12 +449,20 @@ impl Crate { /// follows lexical scoping rules -- then you want a different /// approach. You should override `visit_nested_item` in your /// visitor and then call `intravisit::walk_crate` instead. 
- pub fn visit_all_items<'hir, V>(&'hir self, visitor: &mut V) - where V: intravisit::Visitor<'hir> + pub fn visit_all_item_likes<'hir, V>(&'hir self, visitor: &mut V) + where V: itemlikevisit::ItemLikeVisitor<'hir> { for (_, item) in &self.items { visitor.visit_item(item); } + + for (_, impl_item) in &self.impl_items { + visitor.visit_impl_item(impl_item); + } + } + + pub fn expr(&self, id: ExprId) -> &Expr { + &self.exprs[&id] } } @@ -516,7 +535,7 @@ impl Pat { PatKind::Lit(_) | PatKind::Range(..) | PatKind::Binding(..) | - PatKind::Path(..) => { + PatKind::Path(_) => { true } } @@ -555,20 +574,20 @@ pub enum PatKind { Wild, /// A fresh binding `ref mut binding @ OPT_SUBPATTERN`. - Binding(BindingMode, Spanned, Option>), + /// The `DefId` is for the definition of the variable being bound. + Binding(BindingMode, DefId, Spanned, Option>), /// A struct or struct variant pattern, e.g. `Variant {x, y, ..}`. /// The `bool` is `true` in the presence of a `..`. - Struct(Path, HirVec>, bool), + Struct(QPath, HirVec>, bool), /// A tuple struct/variant pattern `Variant(x, y, .., z)`. /// If the `..` pattern fragment is present, then `Option` denotes its position. /// 0 <= position <= subpats.len() - TupleStruct(Path, HirVec>, Option), + TupleStruct(QPath, HirVec>, Option), - /// A possibly qualified path pattern. - /// Such pattern can be resolved to a unit struct/variant or a constant. - Path(Option, Path), + /// A path pattern for an unit struct/variant or a (maybe-associated) constant. + Path(QPath), /// A tuple pattern `(a, b)`. /// If the `..` pattern fragment is present, then `Option` denotes its position. @@ -825,9 +844,6 @@ pub enum BlockCheckMode { UnsafeBlock(UnsafeSource), PushUnsafeBlock(UnsafeSource), PopUnsafeBlock(UnsafeSource), - // Within this block (but outside a PopUnstableBlock), we suspend checking of stability. - PushUnstableBlock, - PopUnstableBlock, } #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)] @@ -836,6 +852,15 @@ pub enum UnsafeSource { UserProvided, } +#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] +pub struct ExprId(NodeId); + +impl ExprId { + pub fn node_id(self) -> NodeId { + self.0 + } +} + /// An expression #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)] pub struct Expr { @@ -845,6 +870,12 @@ pub struct Expr { pub attrs: ThinVec, } +impl Expr { + pub fn expr_id(&self) -> ExprId { + ExprId(self.id) + } +} + impl fmt::Debug for Expr { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "expr({}: {})", self.id, print::expr_to_string(self)) @@ -856,12 +887,12 @@ pub enum Expr_ { /// A `box x` expression. ExprBox(P), /// An array (`[a, b, c, d]`) - ExprArray(HirVec>), + ExprArray(HirVec), /// A function call /// /// The first field resolves to the function itself (usually an `ExprPath`), /// and the second field is the list of arguments - ExprCall(P, HirVec>), + ExprCall(P, HirVec), /// A method call (`x.foo::(a, b, c, d)`) /// /// The `Spanned` is the identifier for the method name. @@ -874,9 +905,9 @@ pub enum Expr_ { /// /// Thus, `x.foo::(a, b, c, d)` is represented as /// `ExprMethodCall(foo, [Bar, Baz], [x, a, b, c, d])`. 
- ExprMethodCall(Spanned, HirVec>, HirVec>), + ExprMethodCall(Spanned, HirVec>, HirVec), /// A tuple (`(a, b, c ,d)`) - ExprTup(HirVec>), + ExprTup(HirVec), /// A binary operation (For example: `a + b`, `a * b`) ExprBinary(BinOp, P, P), /// A unary operation (For example: `!x`, `*x`) @@ -897,14 +928,14 @@ pub enum Expr_ { /// Conditionless loop (can be exited with break, continue, or return) /// /// `'label: loop { block }` - ExprLoop(P, Option>), + ExprLoop(P, Option>, LoopSource), /// A `match` block, with a source that indicates whether or not it is /// the result of a desugaring, and if so, which kind. ExprMatch(P, HirVec, MatchSource), /// A closure (for example, `move |a, b, c| {a + b + c}`). /// /// The final span is the span of the argument block `|...|` - ExprClosure(CaptureClause, P, P, Span), + ExprClosure(CaptureClause, P, ExprId, Span), /// A block (`{ ... }`) ExprBlock(P), @@ -923,30 +954,26 @@ pub enum Expr_ { /// An indexing operation (`foo[2]`) ExprIndex(P, P), - /// Variable reference, possibly containing `::` and/or type - /// parameters, e.g. foo::bar::. - /// - /// Optionally "qualified", - /// e.g. ` as SomeTrait>::SomeType`. - ExprPath(Option, Path), + /// Path to a definition, possibly containing lifetime or type parameters. + ExprPath(QPath), /// A referencing operation (`&a` or `&mut a`) ExprAddrOf(Mutability, P), /// A `break`, with an optional label to break - ExprBreak(Option>), + ExprBreak(Option(...)`. - if !method.generics.types.is_empty() { + if !self.item_generics(method.def_id).types.is_empty() { return Some(MethodViolationCode::Generic); } @@ -331,8 +319,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // Compute supertraits of current trait lazily. if supertraits.is_none() { - let trait_def = self.lookup_trait_def(trait_def_id); - let trait_ref = ty::Binder(trait_def.trait_ref.clone()); + let trait_ref = ty::Binder(ty::TraitRef { + def_id: trait_def_id, + substs: Substs::identity_for_item(self, trait_def_id) + }); supertraits = Some(traits::supertraits(self, trait_ref).collect()); } diff --git a/src/librustc/traits/project.rs b/src/librustc/traits/project.rs index 9db9e8812f..6f645b5f94 100644 --- a/src/librustc/traits/project.rs +++ b/src/librustc/traits/project.rs @@ -24,17 +24,16 @@ use super::VtableImplData; use super::util; use hir::def_id::DefId; -use infer::{InferOk, TypeOrigin}; +use infer::InferOk; +use infer::type_variable::TypeVariableOrigin; use rustc_data_structures::snapshot_map::{Snapshot, SnapshotMap}; -use syntax::parse::token; use syntax::ast; +use syntax::symbol::Symbol; use ty::subst::Subst; use ty::{self, ToPredicate, ToPolyTraitRef, Ty, TyCtxt}; use ty::fold::{TypeFoldable, TypeFolder}; use util::common::FN_OUTPUT_NAME; -use std::rc::Rc; - /// Depending on the stage of compilation, we want projection to be /// more or less conservative. #[derive(Debug, Copy, Clone, PartialEq, Eq)] @@ -211,11 +210,8 @@ fn project_and_unify_type<'cx, 'gcx, 'tcx>( obligations); let infcx = selcx.infcx(); - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); - match infcx.eq_types(true, origin, normalized_ty, obligation.predicate.ty) { - Ok(InferOk { obligations: inferred_obligations, .. 
}) => { - // FIXME(#32730) once obligations are generated in inference, drop this assertion - assert!(inferred_obligations.is_empty()); + match infcx.eq_types(true, &obligation.cause, normalized_ty, obligation.predicate.ty) { + Ok(InferOk { obligations: inferred_obligations, value: () }) => { obligations.extend(inferred_obligations); Ok(Some(obligations)) }, @@ -313,7 +309,7 @@ impl<'a, 'b, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for AssociatedTypeNormalizer<'a, ty::TyAnon(def_id, substs) if !substs.has_escaping_regions() => { // (*) // Only normalize `impl Trait` after type-checking, usually in trans. if self.selcx.projection_mode() == Reveal::All { - let generic_ty = self.tcx().lookup_item_type(def_id).ty; + let generic_ty = self.tcx().item_type(def_id); let concrete_ty = generic_ty.subst(self.tcx(), substs); self.fold_ty(concrete_ty) } else { @@ -387,7 +383,12 @@ pub fn normalize_projection_type<'a, 'b, 'gcx, 'tcx>( // and a deferred predicate to resolve this when more type // information is available. - let ty_var = selcx.infcx().next_ty_var(); + let tcx = selcx.infcx().tcx; + let def_id = tcx.associated_items(projection_ty.trait_ref.def_id).find(|i| + i.name == projection_ty.item_name && i.kind == ty::AssociatedKind::Type + ).map(|i| i.def_id).unwrap(); + let ty_var = selcx.infcx().next_ty_var( + TypeVariableOrigin::NormalizeProjectionType(tcx.def_span(def_id))); let projection = ty::Binder(ty::ProjectionPredicate { projection_ty: projection_ty, ty: ty_var @@ -601,7 +602,12 @@ fn normalize_to_error<'a, 'gcx, 'tcx>(selcx: &mut SelectionContext<'a, 'gcx, 'tc let trait_obligation = Obligation { cause: cause, recursion_depth: depth, predicate: trait_ref.to_predicate() }; - let new_value = selcx.infcx().next_ty_var(); + let tcx = selcx.infcx().tcx; + let def_id = tcx.associated_items(projection_ty.trait_ref.def_id).find(|i| + i.name == projection_ty.item_name && i.kind == ty::AssociatedKind::Type + ).map(|i| i.def_id).unwrap(); + let new_value = selcx.infcx().next_ty_var( + TypeVariableOrigin::NormalizeProjectionType(tcx.def_span(def_id))); Normalized { value: new_value, obligations: vec![trait_obligation] @@ -811,7 +817,7 @@ fn assemble_candidates_from_trait_def<'cx, 'gcx, 'tcx>( }; // If so, extract what we know from the trait and try to come up with a good answer. - let trait_predicates = selcx.tcx().lookup_predicates(def_id); + let trait_predicates = selcx.tcx().item_predicates(def_id); let bounds = trait_predicates.instantiate(selcx.tcx(), substs); let bounds = elaborate_predicates(selcx.tcx(), bounds.predicates); assemble_candidates_from_predicates(selcx, @@ -842,18 +848,18 @@ fn assemble_candidates_from_predicates<'cx, 'gcx, 'tcx, I>( let same_name = data.item_name() == obligation.predicate.item_name; let is_match = same_name && infcx.probe(|_| { - let origin = TypeOrigin::Misc(obligation.cause.span); let data_poly_trait_ref = data.to_poly_trait_ref(); let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref(); infcx.sub_poly_trait_refs(false, - origin, + obligation.cause.clone(), data_poly_trait_ref, obligation_poly_trait_ref) - // FIXME(#32730) once obligations are propagated from unification in - // inference, drop this assertion - .map(|InferOk { obligations, .. }| assert!(obligations.is_empty())) + .map(|InferOk { obligations: _, value: () }| { + // FIXME(#32730) -- do we need to take obligations + // into account in any way? At the moment, no. 
+ }) .is_ok() }); @@ -945,7 +951,7 @@ fn assemble_candidates_from_impls<'cx, 'gcx, 'tcx>( // an error when we confirm the candidate // (which will ultimately lead to `normalize_to_error` // being invoked). - node_item.item.ty.is_some() + node_item.item.defaultness.has_value() } else { node_item.item.defaultness.is_default() }; @@ -1009,8 +1015,9 @@ fn assemble_candidates_from_impls<'cx, 'gcx, 'tcx>( // types, which appear not to unify -- so the // overlap check succeeds, when it should // fail. - bug!("Tried to project an inherited associated type during \ - coherence checking, which is currently not supported."); + span_bug!(obligation.cause.span, + "Tried to project an inherited associated type during \ + coherence checking, which is currently not supported."); }; candidate_set.vec.extend(new_candidate); } @@ -1127,7 +1134,7 @@ fn confirm_object_candidate<'cx, 'gcx, 'tcx>( debug!("confirm_object_candidate(object_ty={:?})", object_ty); let data = match object_ty.sty { - ty::TyTrait(ref data) => data, + ty::TyDynamic(ref data, ..) => data, _ => { span_bug!( obligation.cause.span, @@ -1135,7 +1142,7 @@ fn confirm_object_candidate<'cx, 'gcx, 'tcx>( object_ty) } }; - let env_predicates = data.projection_bounds.iter().map(|p| { + let env_predicates = data.projection_bounds().map(|p| { p.with_self_ty(selcx.tcx(), object_ty).to_predicate() }).collect(); let env_predicate = { @@ -1155,12 +1162,11 @@ fn confirm_object_candidate<'cx, 'gcx, 'tcx>( // select those with a relevant trait-ref let mut env_predicates = env_predicates.filter(|data| { - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); let data_poly_trait_ref = data.to_poly_trait_ref(); let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref(); selcx.infcx().probe(|_| { selcx.infcx().sub_poly_trait_refs(false, - origin, + obligation.cause.clone(), data_poly_trait_ref, obligation_poly_trait_ref).is_ok() }) @@ -1189,12 +1195,10 @@ fn confirm_fn_pointer_candidate<'cx, 'gcx, 'tcx>( fn_pointer_vtable: VtableFnPointerData<'tcx, PredicateObligation<'tcx>>) -> Progress<'tcx> { - // FIXME(#32730) drop this assertion once obligations are propagated from inference (fn pointer - // vtable nested obligations ONLY come from unification in inference) - assert!(fn_pointer_vtable.nested.is_empty()); let fn_type = selcx.infcx().shallow_resolve(fn_pointer_vtable.fn_ty); let sig = fn_type.fn_sig(); confirm_callable_candidate(selcx, obligation, sig, util::TupleArgumentsFlag::Yes) + .with_addl_obligations(fn_pointer_vtable.nested) } fn confirm_closure_candidate<'cx, 'gcx, 'tcx>( @@ -1252,7 +1256,7 @@ fn confirm_callable_candidate<'cx, 'gcx, 'tcx>( let predicate = ty::Binder(ty::ProjectionPredicate { // (1) recreate binder here projection_ty: ty::ProjectionTy { trait_ref: trait_ref, - item_name: token::intern(FN_OUTPUT_NAME), + item_name: Symbol::intern(FN_OUTPUT_NAME), }, ty: ret_type }); @@ -1267,12 +1271,10 @@ fn confirm_param_env_candidate<'cx, 'gcx, 'tcx>( -> Progress<'tcx> { let infcx = selcx.infcx(); - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); + let cause = obligation.cause.clone(); let trait_ref = obligation.predicate.trait_ref; - match infcx.match_poly_projection_predicate(origin, poly_projection, trait_ref) { + match infcx.match_poly_projection_predicate(cause, poly_projection, trait_ref) { Ok(InferOk { value: ty_match, obligations }) => { - // FIXME(#32730) once obligations are generated in inference, drop this assertion - assert!(obligations.is_empty()); Progress { ty: ty_match.value, 
obligations: obligations, @@ -1305,7 +1307,7 @@ fn confirm_impl_candidate<'cx, 'gcx, 'tcx>( match assoc_ty { Some(node_item) => { - let ty = node_item.item.ty.unwrap_or_else(|| { + let ty = if !node_item.item.defaultness.has_value() { // This means that the impl is missing a definition for the // associated type. This error will be reported by the type // checker method `check_impl_items_against_trait`, so here we @@ -1314,7 +1316,9 @@ fn confirm_impl_candidate<'cx, 'gcx, 'tcx>( node_item.item.name, obligation.predicate.trait_ref); tcx.types.err - }); + } else { + tcx.item_type(node_item.item.def_id) + }; let substs = translate_substs(selcx.infcx(), impl_def_id, substs, node_item.node); Progress { ty: ty.subst(tcx, substs), @@ -1339,27 +1343,25 @@ fn assoc_ty_def<'cx, 'gcx, 'tcx>( selcx: &SelectionContext<'cx, 'gcx, 'tcx>, impl_def_id: DefId, assoc_ty_name: ast::Name) - -> Option>>> + -> Option> { let trait_def_id = selcx.tcx().impl_trait_ref(impl_def_id).unwrap().def_id; if selcx.projection_mode() == Reveal::ExactMatch { let impl_node = specialization_graph::Node::Impl(impl_def_id); for item in impl_node.items(selcx.tcx()) { - if let ty::TypeTraitItem(assoc_ty) = item { - if assoc_ty.name == assoc_ty_name { - return Some(specialization_graph::NodeItem { - node: specialization_graph::Node::Impl(impl_def_id), - item: assoc_ty, - }); - } + if item.kind == ty::AssociatedKind::Type && item.name == assoc_ty_name { + return Some(specialization_graph::NodeItem { + node: specialization_graph::Node::Impl(impl_def_id), + item: item, + }); } } None } else { selcx.tcx().lookup_trait_def(trait_def_id) .ancestors(impl_def_id) - .type_defs(selcx.tcx(), assoc_ty_name) + .defs(selcx.tcx(), assoc_ty_name, ty::AssociatedKind::Type) .next() } } diff --git a/src/librustc/traits/select.rs b/src/librustc/traits/select.rs index e75c8bd433..23cfc25175 100644 --- a/src/librustc/traits/select.rs +++ b/src/librustc/traits/select.rs @@ -35,12 +35,13 @@ use super::util; use hir::def_id::DefId; use infer; -use infer::{InferCtxt, InferOk, TypeFreshener, TypeOrigin}; +use infer::{InferCtxt, InferOk, TypeFreshener}; use ty::subst::{Kind, Subst, Substs}; use ty::{self, ToPredicate, ToPolyTraitRef, Ty, TyCtxt, TypeFoldable}; use traits; use ty::fast_reject; use ty::relate::TypeRelation; +use middle::lang_items; use rustc_data_structures::bitvec::BitVector; use rustc_data_structures::snapshot_vec::{SnapshotVecDelegate, SnapshotVec}; @@ -51,7 +52,7 @@ use std::mem; use std::rc::Rc; use syntax::abi::Abi; use hir; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; struct InferredObligationsSnapshotVecDelegate<'tcx> { phantom: PhantomData<&'tcx i32>, @@ -104,8 +105,8 @@ struct TraitObligationStack<'prev, 'tcx: 'prev> { #[derive(Clone)] pub struct SelectionCache<'tcx> { - hashmap: RefCell, - SelectionResult<'tcx, SelectionCandidate<'tcx>>>>, + hashmap: RefCell, + SelectionResult<'tcx, SelectionCandidate<'tcx>>>>, } pub enum MethodMatchResult { @@ -306,7 +307,7 @@ enum EvaluationResult { #[derive(Clone)] pub struct EvaluationCache<'tcx> { - hashmap: RefCell, EvaluationResult>> + hashmap: RefCell, EvaluationResult>> } impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { @@ -418,9 +419,6 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { None => Ok(None), Some(candidate) => { let mut candidate = self.confirm_candidate(obligation, candidate)?; - // FIXME(#32730) remove this assertion once inferred obligations are propagated - // from inference - assert!(self.inferred_obligations.len() == 0); let 
inferred_obligations = (*self.inferred_obligations).into_iter().cloned(); candidate.nested_obligations_mut().extend(inferred_obligations); Ok(Some(candidate)) @@ -521,7 +519,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { ty::Predicate::Equate(ref p) => { // does this code ever run? - match self.infcx.equality_predicate(obligation.cause.span, p) { + match self.infcx.equality_predicate(&obligation.cause, p) { Ok(InferOk { obligations, .. }) => { self.inferred_obligations.extend(obligations); EvaluatedToOk @@ -1094,40 +1092,31 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // Other bounds. Consider both in-scope bounds from fn decl // and applicable impls. There is a certain set of precedence rules here. - match self.tcx().lang_items.to_builtin_kind(obligation.predicate.def_id()) { - Some(ty::BoundCopy) => { - debug!("obligation self ty is {:?}", - obligation.predicate.0.self_ty()); + let def_id = obligation.predicate.def_id(); + if self.tcx().lang_items.copy_trait() == Some(def_id) { + debug!("obligation self ty is {:?}", + obligation.predicate.0.self_ty()); - // User-defined copy impls are permitted, but only for - // structs and enums. - self.assemble_candidates_from_impls(obligation, &mut candidates)?; + // User-defined copy impls are permitted, but only for + // structs and enums. + self.assemble_candidates_from_impls(obligation, &mut candidates)?; - // For other types, we'll use the builtin rules. - let copy_conditions = self.copy_conditions(obligation); - self.assemble_builtin_bound_candidates(copy_conditions, &mut candidates)?; - } - Some(ty::BoundSized) => { - // Sized is never implementable by end-users, it is - // always automatically computed. - let sized_conditions = self.sized_conditions(obligation); - self.assemble_builtin_bound_candidates(sized_conditions, - &mut candidates)?; - } - - None if self.tcx().lang_items.unsize_trait() == - Some(obligation.predicate.def_id()) => { - self.assemble_candidates_for_unsizing(obligation, &mut candidates); - } - - Some(ty::BoundSend) | - Some(ty::BoundSync) | - None => { - self.assemble_closure_candidates(obligation, &mut candidates)?; - self.assemble_fn_pointer_candidates(obligation, &mut candidates)?; - self.assemble_candidates_from_impls(obligation, &mut candidates)?; - self.assemble_candidates_from_object_ty(obligation, &mut candidates); - } + // For other types, we'll use the builtin rules. + let copy_conditions = self.copy_conditions(obligation); + self.assemble_builtin_bound_candidates(copy_conditions, &mut candidates)?; + } else if self.tcx().lang_items.sized_trait() == Some(def_id) { + // Sized is never implementable by end-users, it is + // always automatically computed. 
+ let sized_conditions = self.sized_conditions(obligation); + self.assemble_builtin_bound_candidates(sized_conditions, + &mut candidates)?; + } else if self.tcx().lang_items.unsize_trait() == Some(def_id) { + self.assemble_candidates_for_unsizing(obligation, &mut candidates); + } else { + self.assemble_closure_candidates(obligation, &mut candidates)?; + self.assemble_fn_pointer_candidates(obligation, &mut candidates)?; + self.assemble_candidates_from_impls(obligation, &mut candidates)?; + self.assemble_candidates_from_object_ty(obligation, &mut candidates); } self.assemble_candidates_from_projected_tys(obligation, &mut candidates); @@ -1200,7 +1189,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { def_id={:?}, substs={:?}", def_id, substs); - let item_predicates = self.tcx().lookup_predicates(def_id); + let item_predicates = self.tcx().item_predicates(def_id); let bounds = item_predicates.instantiate(self.tcx(), substs); debug!("match_projection_obligation_against_definition_bounds: \ bounds={:?}", @@ -1247,9 +1236,9 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { -> bool { assert!(!skol_trait_ref.has_escaping_regions()); - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); + let cause = obligation.cause.clone(); match self.infcx.sub_poly_trait_refs(false, - origin, + cause, trait_bound.clone(), ty::Binder(skol_trait_ref.clone())) { Ok(InferOk { obligations, .. }) => { @@ -1379,21 +1368,13 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { ty::TyFnDef(.., &ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: _, - output: _, - variadic: false - }) + ref sig, }) | ty::TyFnPtr(&ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: _, - output: _, - variadic: false - }) - }) => { + ref sig + }) if !sig.variadic() => { candidates.vec.push(FnPointerCandidate); } @@ -1448,7 +1429,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { if self.tcx().trait_has_default_impl(def_id) { match self_ty.sty { - ty::TyTrait(..) => { + ty::TyDynamic(..) => { // For object types, we don't know what the closed // over types are. For most traits, this means we // conservatively say nothing; a candidate may be @@ -1518,20 +1499,18 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // any LBR. let self_ty = this.tcx().erase_late_bound_regions(&obligation.self_ty()); let poly_trait_ref = match self_ty.sty { - ty::TyTrait(ref data) => { - match this.tcx().lang_items.to_builtin_kind(obligation.predicate.def_id()) { - Some(bound @ ty::BoundSend) | Some(bound @ ty::BoundSync) => { - if data.builtin_bounds.contains(&bound) { - debug!("assemble_candidates_from_object_ty: matched builtin bound, \ - pushing candidate"); - candidates.vec.push(BuiltinObjectCandidate); - return; - } - } - _ => {} + ty::TyDynamic(ref data, ..) 
=> { + if data.auto_traits().any(|did| did == obligation.predicate.def_id()) { + debug!("assemble_candidates_from_object_ty: matched builtin bound, \ + pushing candidate"); + candidates.vec.push(BuiltinObjectCandidate); + return; } - data.principal.with_self_ty(this.tcx(), self_ty) + match data.principal() { + Some(p) => p.with_self_ty(this.tcx(), self_ty), + None => return, + } } ty::TyInfer(ty::TyVar(_)) => { debug!("assemble_candidates_from_object_ty: ambiguous"); @@ -1602,7 +1581,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { let may_apply = match (&source.sty, &target.sty) { // Trait+Kx+'a -> Trait+Ky+'b (upcasts). - (&ty::TyTrait(ref data_a), &ty::TyTrait(ref data_b)) => { + (&ty::TyDynamic(ref data_a, ..), &ty::TyDynamic(ref data_b, ..)) => { // Upcasts permit two things: // // 1. Dropping builtin bounds, e.g. `Foo+Send` to `Foo` @@ -1614,12 +1593,17 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // // We always upcast when we can because of reason // #2 (region bounds). - data_a.principal.def_id() == data_b.principal.def_id() && - data_a.builtin_bounds.is_superset(&data_b.builtin_bounds) + match (data_a.principal(), data_b.principal()) { + (Some(a), Some(b)) => a.def_id() == b.def_id() && + data_b.auto_traits() + // All of a's auto traits need to be in b's auto traits. + .all(|b| data_a.auto_traits().any(|a| a == b)), + _ => false + } } // T -> Trait. - (_, &ty::TyTrait(_)) => true, + (_, &ty::TyDynamic(..)) => true, // Ambiguous handling is below T -> Trait, because inference // variables can still implement Unsize and nested @@ -1771,7 +1755,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { Where(ty::Binder(Vec::new())) } - ty::TyStr | ty::TySlice(_) | ty::TyTrait(..) => Never, + ty::TyStr | ty::TySlice(_) | ty::TyDynamic(..) => Never, ty::TyTuple(tys) => { Where(ty::Binder(tys.last().into_iter().cloned().collect())) @@ -1817,7 +1801,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { Where(ty::Binder(Vec::new())) } - ty::TyBox(_) | ty::TyTrait(..) | ty::TyStr | ty::TySlice(..) | + ty::TyBox(_) | ty::TyDynamic(..) | ty::TyStr | ty::TySlice(..) | ty::TyClosure(..) | ty::TyRef(_, ty::TypeAndMut { ty: _, mutbl: hir::MutMutable }) => { Never @@ -1882,7 +1866,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { Vec::new() } - ty::TyTrait(..) | + ty::TyDynamic(..) | ty::TyParam(..) | ty::TyProjection(..) | ty::TyAnon(..) | @@ -1912,16 +1896,16 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { tys.to_vec() } - ty::TyClosure(_, ref substs) => { + ty::TyClosure(def_id, ref substs) => { // FIXME(#27086). We are invariant w/r/t our - // substs.func_substs, but we don't see them as + // func_substs, but we don't see them as // constituent types; this seems RIGHT but also like // something that a normal type couldn't simulate. Is // this just a gap with the way that PhantomData and // OIBIT interact? That is, there is no way to say // "make me invariant with respect to this TYPE, but // do not act as though I can reach it" - substs.upvar_tys.to_vec() + substs.upvar_tys(def_id, self.tcx()).collect() } // for `PhantomData`, we pass `T` @@ -2168,10 +2152,11 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // OK to skip binder, it is reintroduced below let self_ty = self.infcx.shallow_resolve(obligation.predicate.skip_binder().self_ty()); match self_ty.sty { - ty::TyTrait(ref data) => { + ty::TyDynamic(ref data, ..) 
=> { // OK to skip the binder, it is reintroduced below - let input_types = data.principal.input_types(); - let assoc_types = data.projection_bounds.iter() + let principal = data.principal().unwrap(); + let input_types = principal.input_types(); + let assoc_types = data.projection_bounds() .map(|pb| pb.skip_binder().ty); let all_types: Vec<_> = input_types.chain(assoc_types) .collect(); @@ -2303,8 +2288,8 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // case that results. -nmatsakis let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder()); let poly_trait_ref = match self_ty.sty { - ty::TyTrait(ref data) => { - data.principal.with_self_ty(self.tcx(), self_ty) + ty::TyDynamic(ref data, ..) => { + data.principal().unwrap().with_self_ty(self.tcx(), self_ty) } _ => { span_bug!(obligation.cause.span, @@ -2439,16 +2424,14 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { /// selection of the impl. Therefore, if there is a mismatch, we /// report an error to the user. fn confirm_poly_trait_refs(&mut self, - obligation_cause: ObligationCause, + obligation_cause: ObligationCause<'tcx>, obligation_trait_ref: ty::PolyTraitRef<'tcx>, expected_trait_ref: ty::PolyTraitRef<'tcx>) -> Result<(), SelectionError<'tcx>> { - let origin = TypeOrigin::RelateOutputImplTypes(obligation_cause.span); - let obligation_trait_ref = obligation_trait_ref.clone(); self.infcx.sub_poly_trait_refs(false, - origin, + obligation_cause.clone(), expected_trait_ref.clone(), obligation_trait_ref.clone()) .map(|InferOk { obligations, .. }| self.inferred_obligations.extend(obligations)) @@ -2474,17 +2457,18 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { let mut nested = vec![]; match (&source.sty, &target.sty) { // Trait+Kx+'a -> Trait+Ky+'b (upcasts). - (&ty::TyTrait(ref data_a), &ty::TyTrait(ref data_b)) => { + (&ty::TyDynamic(ref data_a, r_a), &ty::TyDynamic(ref data_b, r_b)) => { // See assemble_candidates_for_unsizing for more info. - let new_trait = tcx.mk_trait(ty::TraitObject { - principal: data_a.principal, - region_bound: data_b.region_bound, - builtin_bounds: data_b.builtin_bounds, - projection_bounds: data_a.projection_bounds.clone(), - }); - let origin = TypeOrigin::Misc(obligation.cause.span); + // Binders reintroduced below in call to mk_existential_predicates. + let principal = data_a.skip_binder().principal(); + let iter = principal.into_iter().map(ty::ExistentialPredicate::Trait) + .chain(data_a.skip_binder().projection_bounds() + .map(|x| ty::ExistentialPredicate::Projection(x))) + .chain(data_b.auto_traits().map(ty::ExistentialPredicate::AutoTrait)); + let new_trait = tcx.mk_dynamic( + ty::Binder(tcx.mk_existential_predicates(iter)), r_b); let InferOk { obligations, .. } = - self.infcx.sub_types(false, origin, new_trait, target) + self.infcx.sub_types(false, &obligation.cause, new_trait, target) .map_err(|_| Unimplemented)?; self.inferred_obligations.extend(obligations); @@ -2492,20 +2476,16 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { let cause = ObligationCause::new(obligation.cause.span, obligation.cause.body_id, ObjectCastObligation(target)); - let outlives = ty::OutlivesPredicate(data_a.region_bound, - data_b.region_bound); + let outlives = ty::OutlivesPredicate(r_a, r_b); nested.push(Obligation::with_depth(cause, obligation.recursion_depth + 1, ty::Binder(outlives).to_predicate())); } // T -> Trait. 
- (_, &ty::TyTrait(ref data)) => { + (_, &ty::TyDynamic(ref data, r)) => { let mut object_dids = - data.builtin_bounds.iter().flat_map(|bound| { - tcx.lang_items.from_builtin_kind(bound).ok() - }) - .chain(Some(data.principal.def_id())); + data.auto_traits().chain(data.principal().map(|p| p.def_id())); if let Some(did) = object_dids.find(|did| { !tcx.is_object_safe(*did) }) { @@ -2521,41 +2501,33 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { predicate)); }; - // Create the obligation for casting from T to Trait. - push(data.principal.with_self_ty(tcx, source).to_predicate()); + // Create obligations: + // - Casting T to Trait + // - For all the various builtin bounds attached to the object cast. (In other + // words, if the object type is Foo+Send, this would create an obligation for the + // Send check.) + // - Projection predicates + for predicate in data.iter() { + push(predicate.with_self_ty(tcx, source)); + } // We can only make objects from sized types. - let mut builtin_bounds = data.builtin_bounds; - builtin_bounds.insert(ty::BoundSized); - - // Create additional obligations for all the various builtin - // bounds attached to the object cast. (In other words, if the - // object type is Foo+Send, this would create an obligation - // for the Send check.) - for bound in &builtin_bounds { - if let Ok(tr) = tcx.trait_ref_for_builtin_bound(bound, source) { - push(tr.to_predicate()); - } else { - return Err(Unimplemented); - } - } - - // Create obligations for the projection predicates. - for bound in &data.projection_bounds { - push(bound.with_self_ty(tcx, source).to_predicate()); - } + let tr = ty::TraitRef { + def_id: tcx.require_lang_item(lang_items::SizedTraitLangItem), + substs: tcx.mk_substs_trait(source, &[]), + }; + push(tr.to_predicate()); // If the type is `Foo+'a`, ensures that the type // being cast to `Foo+'a` outlives `'a`: - let outlives = ty::OutlivesPredicate(source, data.region_bound); + let outlives = ty::OutlivesPredicate(source, r); push(ty::Binder(outlives).to_predicate()); } // [T; n] -> [T]. (&ty::TyArray(a, _), &ty::TySlice(b)) => { - let origin = TypeOrigin::Misc(obligation.cause.span); let InferOk { obligations, .. } = - self.infcx.sub_types(false, origin, a, b) + self.infcx.sub_types(false, &obligation.cause, a, b) .map_err(|_| Unimplemented)?; self.inferred_obligations.extend(obligations); } @@ -2564,7 +2536,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { (&ty::TyAdt(def, substs_a), &ty::TyAdt(_, substs_b)) => { let fields = def .all_fields() - .map(|f| f.unsubst_ty()) + .map(|f| tcx.item_type(f.did)) .collect::>(); // The last field of the structure has to exist and contain type parameters. @@ -2617,9 +2589,8 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { } }); let new_struct = tcx.mk_adt(def, tcx.mk_substs(params)); - let origin = TypeOrigin::Misc(obligation.cause.span); let InferOk { obligations, .. } = - self.infcx.sub_types(false, origin, new_struct, target) + self.infcx.sub_types(false, &obligation.cause, new_struct, target) .map_err(|_| Unimplemented)?; self.inferred_obligations.extend(obligations); @@ -2705,10 +2676,9 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { impl_trait_ref, skol_obligation_trait_ref); - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); let InferOk { obligations, .. 
} = self.infcx.eq_trait_refs(false, - origin, + &obligation.cause, impl_trait_ref.value.clone(), skol_obligation_trait_ref) .map_err(|e| { @@ -2780,9 +2750,8 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { obligation, poly_trait_ref); - let origin = TypeOrigin::RelateOutputImplTypes(obligation.cause.span); self.infcx.sub_poly_trait_refs(false, - origin, + obligation.cause.clone(), poly_trait_ref, obligation.predicate.to_poly_trait_ref()) .map(|InferOk { obligations, .. }| self.inferred_obligations.extend(obligations)) @@ -2884,7 +2853,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> { // obligation will normalize to `<$0 as Iterator>::Item = $1` and // `$1: Copy`, so we must ensure the obligations are emitted in // that order. - let predicates = tcx.lookup_predicates(def_id); + let predicates = tcx.item_predicates(def_id); assert_eq!(predicates.parent, None); let predicates = predicates.predicates.iter().flat_map(|predicate| { let predicate = normalize_with_depth(self, cause.clone(), recursion_depth, @@ -2937,7 +2906,7 @@ impl<'tcx> TraitObligation<'tcx> { impl<'tcx> SelectionCache<'tcx> { pub fn new() -> SelectionCache<'tcx> { SelectionCache { - hashmap: RefCell::new(FnvHashMap()) + hashmap: RefCell::new(FxHashMap()) } } } @@ -2945,7 +2914,7 @@ impl<'tcx> SelectionCache<'tcx> { impl<'tcx> EvaluationCache<'tcx> { pub fn new() -> EvaluationCache<'tcx> { EvaluationCache { - hashmap: RefCell::new(FnvHashMap()) + hashmap: RefCell::new(FxHashMap()) } } } diff --git a/src/librustc/traits/specialize/mod.rs b/src/librustc/traits/specialize/mod.rs index 909247d1cb..59e3d398b2 100644 --- a/src/librustc/traits/specialize/mod.rs +++ b/src/librustc/traits/specialize/mod.rs @@ -20,9 +20,9 @@ use super::{SelectionContext, FulfillmentContext}; use super::util::impl_trait_ref_and_oblig; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use hir::def_id::DefId; -use infer::{InferCtxt, InferOk, TypeOrigin}; +use infer::{InferCtxt, InferOk}; use middle::region; use ty::subst::{Subst, Substs}; use traits::{self, Reveal, ObligationCause}; @@ -120,12 +120,14 @@ pub fn find_method<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let trait_def_id = tcx.trait_id_of_impl(impl_data.impl_def_id).unwrap(); let trait_def = tcx.lookup_trait_def(trait_def_id); - match trait_def.ancestors(impl_data.impl_def_id).fn_defs(tcx, name).next() { + let ancestors = trait_def.ancestors(impl_data.impl_def_id); + match ancestors.defs(tcx, name, ty::AssociatedKind::Method).next() { Some(node_item) => { let substs = tcx.infer_ctxt(None, None, Reveal::All).enter(|infcx| { let substs = substs.rebase_onto(tcx, trait_def_id, impl_data.substs); let substs = translate_substs(&infcx, impl_data.impl_def_id, substs, node_item.node); + let substs = infcx.tcx.erase_regions(&substs); tcx.lift(&substs).unwrap_or_else(|| { bug!("find_method: translate_substs \ returned {:?} which contains inference types/regions", @@ -222,8 +224,10 @@ fn fulfill_implication<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>, target_substs); // do the impls unify? If not, no specialization. - match infcx.eq_trait_refs(true, TypeOrigin::Misc(DUMMY_SP), source_trait_ref, - target_trait_ref) { + match infcx.eq_trait_refs(true, + &ObligationCause::dummy(), + source_trait_ref, + target_trait_ref) { Ok(InferOk { obligations, .. 
}) => { // FIXME(#32730) propagate obligations assert!(obligations.is_empty()) @@ -270,13 +274,13 @@ fn fulfill_implication<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>, } pub struct SpecializesCache { - map: FnvHashMap<(DefId, DefId), bool> + map: FxHashMap<(DefId, DefId), bool> } impl SpecializesCache { pub fn new() -> Self { SpecializesCache { - map: FnvHashMap() + map: FxHashMap() } } diff --git a/src/librustc/traits/specialize/specialization_graph.rs b/src/librustc/traits/specialize/specialization_graph.rs index 1374719ef4..a41523f2de 100644 --- a/src/librustc/traits/specialize/specialization_graph.rs +++ b/src/librustc/traits/specialize/specialization_graph.rs @@ -8,16 +8,14 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use std::rc::Rc; - use super::{OverlapError, specializes}; use hir::def_id::DefId; use traits::{self, Reveal}; -use ty::{self, TyCtxt, ImplOrTraitItem, TraitDef, TypeFoldable}; +use ty::{self, TyCtxt, TraitDef, TypeFoldable}; use ty::fast_reject::{self, SimplifiedType}; use syntax::ast::Name; -use util::nodemap::{DefIdMap, FnvHashMap}; +use util::nodemap::{DefIdMap, FxHashMap}; /// A per-trait graph of impls in specialization order. At the moment, this /// graph forms a tree rooted with the trait itself, with all other nodes @@ -57,7 +55,7 @@ struct Children { // the specialization graph. /// Impls of the trait. - nonblanket_impls: FnvHashMap>, + nonblanket_impls: FxHashMap>, /// Blanket impls associated with the trait. blanket_impls: Vec, @@ -78,7 +76,7 @@ enum Inserted { impl<'a, 'gcx, 'tcx> Children { fn new() -> Children { Children { - nonblanket_impls: FnvHashMap(), + nonblanket_impls: FxHashMap(), blanket_impls: vec![], } } @@ -285,12 +283,10 @@ impl<'a, 'gcx, 'tcx> Node { } /// Iterate over the items defined directly by the given (impl or trait) node. - pub fn items(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> NodeItems<'a, 'gcx> { - NodeItems { - tcx: tcx.global_tcx(), - items: tcx.impl_or_trait_items(self.def_id()), - idx: 0, - } + #[inline] // FIXME(#35870) Avoid closures being unexported due to impl Trait. + pub fn items(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) + -> impl Iterator + 'a { + tcx.associated_items(self.def_id()) } pub fn def_id(&self) -> DefId { @@ -301,40 +297,18 @@ impl<'a, 'gcx, 'tcx> Node { } } -/// An iterator over the items defined within a trait or impl. 
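As an aside from the patch itself: the hunks above and below replace the hand-written `NodeItems` iterator type with methods that simply return `impl Iterator`. A minimal standalone sketch of that return-position `impl Trait` pattern, using made-up types rather than rustc's `Node`/`TyCtxt`:

```rust
// Illustrative only: a method can hand back a lazily filtered iterator
// without naming (or boxing) the concrete iterator type.
struct Node {
    items: Vec<String>,
}

impl Node {
    fn items_named<'a>(&'a self, name: &'a str) -> impl Iterator<Item = &'a String> + 'a {
        self.items.iter().filter(move |item| item.as_str() == name)
    }
}

fn main() {
    let node = Node { items: vec!["new".into(), "len".into(), "new".into()] };
    assert_eq!(node.items_named("new").count(), 2);
}
```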
-pub struct NodeItems<'a, 'tcx: 'a> { - tcx: TyCtxt<'a, 'tcx, 'tcx>, - items: Rc>, - idx: usize -} - -impl<'a, 'tcx> Iterator for NodeItems<'a, 'tcx> { - type Item = ImplOrTraitItem<'tcx>; - fn next(&mut self) -> Option> { - if self.idx < self.items.len() { - let item_def_id = self.items[self.idx]; - let items_table = self.tcx.impl_or_trait_items.borrow(); - let item = items_table[&item_def_id].clone(); - self.idx += 1; - Some(item) - } else { - None - } - } -} - -pub struct Ancestors<'a, 'tcx: 'a> { - trait_def: &'a TraitDef<'tcx>, +pub struct Ancestors<'a> { + trait_def: &'a TraitDef, current_source: Option, } -impl<'a, 'tcx> Iterator for Ancestors<'a, 'tcx> { +impl<'a> Iterator for Ancestors<'a> { type Item = Node; fn next(&mut self) -> Option { let cur = self.current_source.take(); if let Some(Node::Impl(cur_impl)) = cur { let parent = self.trait_def.specialization_graph.borrow().parent(cur_impl); - if parent == self.trait_def.def_id() { + if parent == self.trait_def.def_id { self.current_source = Some(Node::Trait(parent)); } else { self.current_source = Some(Node::Impl(parent)); @@ -358,112 +332,22 @@ impl NodeItem { } } -pub struct TypeDefs<'a, 'tcx: 'a> { - // generally only invoked once or twice, so the box doesn't hurt - iter: Box>>> + 'a>, -} - -impl<'a, 'tcx> Iterator for TypeDefs<'a, 'tcx> { - type Item = NodeItem>>; - fn next(&mut self) -> Option { - self.iter.next() - } -} - -pub struct FnDefs<'a, 'tcx: 'a> { - // generally only invoked once or twice, so the box doesn't hurt - iter: Box>>> + 'a>, -} - -impl<'a, 'tcx> Iterator for FnDefs<'a, 'tcx> { - type Item = NodeItem>>; - fn next(&mut self) -> Option { - self.iter.next() - } -} - -pub struct ConstDefs<'a, 'tcx: 'a> { - // generally only invoked once or twice, so the box doesn't hurt - iter: Box>>> + 'a>, -} - -impl<'a, 'tcx> Iterator for ConstDefs<'a, 'tcx> { - type Item = NodeItem>>; - fn next(&mut self) -> Option { - self.iter.next() - } -} - -impl<'a, 'gcx, 'tcx> Ancestors<'a, 'tcx> { - /// Search the items from the given ancestors, returning each type definition - /// with the given name. - pub fn type_defs(self, tcx: TyCtxt<'a, 'gcx, 'tcx>, name: Name) -> TypeDefs<'a, 'gcx> { - let iter = self.flat_map(move |node| { - node.items(tcx) - .filter_map(move |item| { - if let ty::TypeTraitItem(assoc_ty) = item { - if assoc_ty.name == name { - return Some(NodeItem { - node: node, - item: assoc_ty, - }); - } - } - None - }) - - }); - TypeDefs { iter: Box::new(iter) } - } - - /// Search the items from the given ancestors, returning each fn definition - /// with the given name. - pub fn fn_defs(self, tcx: TyCtxt<'a, 'gcx, 'tcx>, name: Name) -> FnDefs<'a, 'gcx> { - let iter = self.flat_map(move |node| { - node.items(tcx) - .filter_map(move |item| { - if let ty::MethodTraitItem(method) = item { - if method.name == name { - return Some(NodeItem { - node: node, - item: method, - }); - } - } - None - }) - - }); - FnDefs { iter: Box::new(iter) } - } - - /// Search the items from the given ancestors, returning each const - /// definition with the given name. 
- pub fn const_defs(self, tcx: TyCtxt<'a, 'gcx, 'tcx>, name: Name) -> ConstDefs<'a, 'gcx> { - let iter = self.flat_map(move |node| { - node.items(tcx) - .filter_map(move |item| { - if let ty::ConstTraitItem(konst) = item { - if konst.name == name { - return Some(NodeItem { - node: node, - item: konst, - }); - } - } - None - }) - - }); - ConstDefs { iter: Box::new(iter) } +impl<'a, 'gcx, 'tcx> Ancestors<'a> { + /// Search the items from the given ancestors, returning each definition + /// with the given name and the given kind. + #[inline] // FIXME(#35870) Avoid closures being unexported due to impl Trait. + pub fn defs(self, tcx: TyCtxt<'a, 'gcx, 'tcx>, name: Name, kind: ty::AssociatedKind) + -> impl Iterator> + 'a { + self.flat_map(move |node| { + node.items(tcx).filter(move |item| item.kind == kind && item.name == name) + .map(move |item| NodeItem { node: node, item: item }) + }) } } /// Walk up the specialization ancestors of a given impl, starting with that /// impl itself. -pub fn ancestors<'a, 'tcx>(trait_def: &'a TraitDef<'tcx>, - start_from_impl: DefId) - -> Ancestors<'a, 'tcx> { +pub fn ancestors<'a>(trait_def: &'a TraitDef, start_from_impl: DefId) -> Ancestors<'a> { Ancestors { trait_def: trait_def, current_source: Some(Node::Impl(start_from_impl)), diff --git a/src/librustc/traits/structural_impls.rs b/src/librustc/traits/structural_impls.rs index d33e8b5675..dedb126d7f 100644 --- a/src/librustc/traits/structural_impls.rs +++ b/src/librustc/traits/structural_impls.rs @@ -190,9 +190,6 @@ impl<'a, 'tcx> Lift<'tcx> for traits::ObligationCauseCode<'a> { super::VariableType(id) => Some(super::VariableType(id)), super::ReturnType => Some(super::ReturnType), super::RepeatVec => Some(super::RepeatVec), - super::ClosureCapture(node_id, span, bound) => { - Some(super::ClosureCapture(node_id, span, bound)) - } super::FieldSized => Some(super::FieldSized), super::ConstSized => Some(super::ConstSized), super::SharedStatic => Some(super::SharedStatic), @@ -213,6 +210,34 @@ impl<'a, 'tcx> Lift<'tcx> for traits::ObligationCauseCode<'a> { lint_id: lint_id, }) } + super::ExprAssignable => { + Some(super::ExprAssignable) + } + super::MatchExpressionArm { arm_span, source } => { + Some(super::MatchExpressionArm { arm_span: arm_span, + source: source }) + } + super::IfExpression => { + Some(super::IfExpression) + } + super::IfExpressionWithNoElse => { + Some(super::IfExpressionWithNoElse) + } + super::EquatePredicate => { + Some(super::EquatePredicate) + } + super::MainFunctionType => { + Some(super::MainFunctionType) + } + super::StartFunctionType => { + Some(super::StartFunctionType) + } + super::IntrinsicType => { + Some(super::IntrinsicType) + } + super::MethodReceiver => { + Some(super::MethodReceiver) + } } } } @@ -461,6 +486,15 @@ impl<'tcx, T: TypeFoldable<'tcx>> TypeFoldable<'tcx> for Normalized<'tcx, T> { impl<'tcx> TypeFoldable<'tcx> for traits::ObligationCauseCode<'tcx> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { match *self { + super::ExprAssignable | + super::MatchExpressionArm { arm_span: _, source: _ } | + super::IfExpression | + super::IfExpressionWithNoElse | + super::EquatePredicate | + super::MainFunctionType | + super::StartFunctionType | + super::IntrinsicType | + super::MethodReceiver | super::MiscObligation | super::SliceOrArrayElem | super::TupleElem | @@ -470,7 +504,6 @@ impl<'tcx> TypeFoldable<'tcx> for traits::ObligationCauseCode<'tcx> { super::VariableType(_) | super::ReturnType | super::RepeatVec | - 
super::ClosureCapture(..) | super::FieldSized | super::ConstSized | super::SharedStatic | @@ -497,6 +530,15 @@ impl<'tcx> TypeFoldable<'tcx> for traits::ObligationCauseCode<'tcx> { fn super_visit_with>(&self, visitor: &mut V) -> bool { match *self { + super::ExprAssignable | + super::MatchExpressionArm { arm_span: _, source: _ } | + super::IfExpression | + super::IfExpressionWithNoElse | + super::EquatePredicate | + super::MainFunctionType | + super::StartFunctionType | + super::IntrinsicType | + super::MethodReceiver | super::MiscObligation | super::SliceOrArrayElem | super::TupleElem | @@ -506,7 +548,6 @@ impl<'tcx> TypeFoldable<'tcx> for traits::ObligationCauseCode<'tcx> { super::VariableType(_) | super::ReturnType | super::RepeatVec | - super::ClosureCapture(..) | super::FieldSized | super::ConstSized | super::SharedStatic | diff --git a/src/librustc/traits/util.rs b/src/librustc/traits/util.rs index a3d974216b..cebd8bf87d 100644 --- a/src/librustc/traits/util.rs +++ b/src/librustc/traits/util.rs @@ -12,8 +12,7 @@ use hir::def_id::DefId; use ty::subst::{Subst, Substs}; use ty::{self, Ty, TyCtxt, ToPredicate, ToPolyTraitRef}; use ty::outlives::Component; -use util::common::ErrorReported; -use util::nodemap::FnvHashSet; +use util::nodemap::FxHashSet; use super::{Obligation, ObligationCause, PredicateObligation, SelectionContext, Normalized}; @@ -50,12 +49,12 @@ fn anonymize_predicate<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, struct PredicateSet<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { tcx: TyCtxt<'a, 'gcx, 'tcx>, - set: FnvHashSet>, + set: FxHashSet>, } impl<'a, 'gcx, 'tcx> PredicateSet<'a, 'gcx, 'tcx> { fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>) -> PredicateSet<'a, 'gcx, 'tcx> { - PredicateSet { tcx: tcx, set: FnvHashSet() } + PredicateSet { tcx: tcx, set: FxHashSet() } } fn insert(&mut self, pred: &ty::Predicate<'tcx>) -> bool { @@ -128,7 +127,7 @@ impl<'cx, 'gcx, 'tcx> Elaborator<'cx, 'gcx, 'tcx> { match *predicate { ty::Predicate::Trait(ref data) => { // Predicates declared on the trait. 
- let predicates = tcx.lookup_super_predicates(data.def_id()); + let predicates = tcx.item_super_predicates(data.def_id()); let mut predicates: Vec<_> = predicates.predicates @@ -272,7 +271,7 @@ pub fn transitive_bounds<'cx, 'gcx, 'tcx>(tcx: TyCtxt<'cx, 'gcx, 'tcx>, pub struct SupertraitDefIds<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { tcx: TyCtxt<'a, 'gcx, 'tcx>, stack: Vec, - visited: FnvHashSet, + visited: FxHashSet, } pub fn supertrait_def_ids<'cx, 'gcx, 'tcx>(tcx: TyCtxt<'cx, 'gcx, 'tcx>, @@ -295,7 +294,7 @@ impl<'cx, 'gcx, 'tcx> Iterator for SupertraitDefIds<'cx, 'gcx, 'tcx> { None => { return None; } }; - let predicates = self.tcx.lookup_super_predicates(def_id); + let predicates = self.tcx.item_super_predicates(def_id); let visited = &mut self.visited; self.stack.extend( predicates.predicates @@ -362,7 +361,7 @@ pub fn impl_trait_ref_and_oblig<'a, 'gcx, 'tcx>(selcx: &mut SelectionContext<'a, let Normalized { value: impl_trait_ref, obligations: normalization_obligations1 } = super::normalize(selcx, ObligationCause::dummy(), &impl_trait_ref); - let predicates = selcx.tcx().lookup_predicates(impl_def_id); + let predicates = selcx.tcx().item_predicates(impl_def_id); let predicates = predicates.instantiate(selcx.tcx(), impl_substs); let Normalized { value: predicates, obligations: normalization_obligations2 } = super::normalize(selcx, ObligationCause::dummy(), &predicates); @@ -408,25 +407,6 @@ pub fn predicate_for_trait_ref<'tcx>( } impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { - pub fn trait_ref_for_builtin_bound(self, - builtin_bound: ty::BuiltinBound, - param_ty: Ty<'tcx>) - -> Result, ErrorReported> - { - match self.lang_items.from_builtin_kind(builtin_bound) { - Ok(def_id) => { - Ok(ty::TraitRef { - def_id: def_id, - substs: self.mk_substs_trait(param_ty, &[]) - }) - } - Err(e) => { - self.sess.err(&e); - Err(ErrorReported) - } - } - } - pub fn predicate_for_trait_def(self, cause: ObligationCause<'tcx>, trait_def_id: DefId, @@ -442,17 +422,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { predicate_for_trait_ref(cause, trait_ref, recursion_depth) } - pub fn predicate_for_builtin_bound(self, - cause: ObligationCause<'tcx>, - builtin_bound: ty::BuiltinBound, - recursion_depth: usize, - param_ty: Ty<'tcx>) - -> Result, ErrorReported> - { - let trait_ref = self.trait_ref_for_builtin_bound(builtin_bound, param_ty)?; - Ok(predicate_for_trait_ref(cause, trait_ref, recursion_depth)) - } - /// Cast a trait reference into a reference to one of its super /// traits; returns `None` if `target_trait_def_id` is not a /// supertrait. @@ -477,8 +446,8 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { let mut entries = 0; // Count number of methods and add them to the total offset. // Skip over associated types and constants. - for trait_item in &self.trait_items(trait_ref.def_id())[..] { - if let ty::MethodTraitItem(_) = *trait_item { + for trait_item in self.associated_items(trait_ref.def_id()) { + if trait_item.kind == ty::AssociatedKind::Method { entries += 1; } } @@ -495,17 +464,13 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // add them to the total offset. // Skip over associated types and constants. let mut entries = object.vtable_base; - for trait_item in &self.trait_items(object.upcast_trait_ref.def_id())[..] { - if trait_item.def_id() == method_def_id { + for trait_item in self.associated_items(object.upcast_trait_ref.def_id()) { + if trait_item.def_id == method_def_id { // The item with the ID we were given really ought to be a method. 
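As an aside from the patch itself: the vtable code just above counts method entries (skipping associated types and constants) to compute an item's slot. A toy re-statement of that offset computation, with plain tuples standing in for rustc's associated-item descriptors; names and values here are illustrative only:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Kind { Method, Type, Const }

// Walk the trait's items in declaration order, counting only methods, and
// return the running count when the requested item is reached.
fn vtable_index(items: &[(u32, Kind)], vtable_base: usize, target_id: u32) -> usize {
    let mut entries = vtable_base;
    for &(id, kind) in items {
        if id == target_id {
            // The item we were given really ought to be a method.
            assert_eq!(kind, Kind::Method);
            return entries;
        }
        if kind == Kind::Method {
            entries += 1;
        }
    }
    panic!("item not found in trait");
}

fn main() {
    let items = [
        (10, Kind::Const),
        (11, Kind::Method),
        (12, Kind::Type),
        (13, Kind::Method),
    ];
    // One method precedes item 13, so its slot is vtable_base + 1.
    assert_eq!(vtable_index(&items, 3, 13), 4);
}
```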
- assert!(match *trait_item { - ty::MethodTraitItem(_) => true, - _ => false - }); - + assert_eq!(trait_item.kind, ty::AssociatedKind::Method); return entries; } - if let ty::MethodTraitItem(_) = *trait_item { + if trait_item.kind == ty::AssociatedKind::Method { entries += 1; } } @@ -522,14 +487,15 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { -> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>)> { let arguments_tuple = match tuple_arguments { - TupleArgumentsFlag::No => sig.0.inputs[0], - TupleArgumentsFlag::Yes => self.intern_tup(&sig.0.inputs[..]), + TupleArgumentsFlag::No => sig.skip_binder().inputs()[0], + TupleArgumentsFlag::Yes => + self.intern_tup(sig.skip_binder().inputs()), }; let trait_ref = ty::TraitRef { def_id: fn_trait_def_id, substs: self.mk_substs_trait(self_ty, &[arguments_tuple]), }; - ty::Binder((trait_ref, sig.0.output)) + ty::Binder((trait_ref, sig.skip_binder().output())) } } diff --git a/src/librustc/ty/contents.rs b/src/librustc/ty/contents.rs index b499e1346e..8c3cb79294 100644 --- a/src/librustc/ty/contents.rs +++ b/src/librustc/ty/contents.rs @@ -11,7 +11,7 @@ use hir::def_id::{DefId}; use ty::{self, Ty, TyCtxt}; use util::common::MemoizationMap; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; use std::fmt; use std::ops; @@ -98,10 +98,11 @@ impl TypeContents { TC::OwnsOwned | (*self & TC::OwnsAll) } - pub fn union(v: &[T], mut f: F) -> TypeContents where - F: FnMut(&T) -> TypeContents, + pub fn union(v: I, mut f: F) -> TypeContents where + I: IntoIterator, + F: FnMut(T) -> TypeContents, { - v.iter().fold(TC::None, |tc, ty| tc | f(ty)) + v.into_iter().fold(TC::None, |tc, ty| tc | f(ty)) } pub fn has_dtor(&self) -> bool { @@ -141,11 +142,11 @@ impl fmt::Debug for TypeContents { impl<'a, 'tcx> ty::TyS<'tcx> { pub fn type_contents(&'tcx self, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> TypeContents { - return tcx.tc_cache.memoize(self, || tc_ty(tcx, self, &mut FnvHashMap())); + return tcx.tc_cache.memoize(self, || tc_ty(tcx, self, &mut FxHashMap())); fn tc_ty<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, ty: Ty<'tcx>, - cache: &mut FnvHashMap, TypeContents>) -> TypeContents + cache: &mut FxHashMap, TypeContents>) -> TypeContents { // Subtle: Note that we are *not* using tcx.tc_cache here but rather a // private cache for this walk. This is needed in the case of cyclic @@ -194,7 +195,7 @@ impl<'a, 'tcx> ty::TyS<'tcx> { tc_ty(tcx, typ, cache).owned_pointer() } - ty::TyTrait(_) => { + ty::TyDynamic(..) 
=> { TC::All - TC::InteriorParam } @@ -215,8 +216,10 @@ impl<'a, 'tcx> ty::TyS<'tcx> { } ty::TyStr => TC::None, - ty::TyClosure(_, ref substs) => { - TypeContents::union(&substs.upvar_tys, |ty| tc_ty(tcx, &ty, cache)) + ty::TyClosure(def_id, ref substs) => { + TypeContents::union( + substs.upvar_tys(def_id, tcx), + |ty| tc_ty(tcx, &ty, cache)) } ty::TyTuple(ref tys) => { diff --git a/src/librustc/ty/context.rs b/src/librustc/ty/context.rs index 7e5e10435d..4854a14f73 100644 --- a/src/librustc/ty/context.rs +++ b/src/librustc/ty/context.rs @@ -14,7 +14,7 @@ use dep_graph::{DepGraph, DepTrackingMap}; use session::Session; use middle; use hir::TraitMap; -use hir::def::DefMap; +use hir::def::Def; use hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE}; use hir::map as ast_map; use hir::map::{DefKey, DefPathData, DisambiguatedDefPathData}; @@ -29,14 +29,14 @@ use ty::{self, TraitRef, Ty, TypeAndMut}; use ty::{TyS, TypeVariants, Slice}; use ty::{AdtKind, AdtDef, ClosureSubsts, Region}; use hir::FreevarMap; -use ty::{BareFnTy, InferTy, ParamTy, ProjectionTy, TraitObject}; +use ty::{BareFnTy, InferTy, ParamTy, ProjectionTy, ExistentialPredicate}; use ty::{TyVar, TyVid, IntVar, IntVid, FloatVar, FloatVid}; use ty::TypeVariants::*; use ty::layout::{Layout, TargetDataLayout}; use ty::maps; use util::common::MemoizationMap; use util::nodemap::{NodeMap, NodeSet, DefIdMap, DefIdSet}; -use util::nodemap::{FnvHashMap, FnvHashSet}; +use util::nodemap::{FxHashMap, FxHashSet}; use rustc_data_structures::accumulate_vec::AccumulateVec; use arena::TypedArena; @@ -47,9 +47,10 @@ use std::mem; use std::ops::Deref; use std::rc::Rc; use std::iter; +use std::cmp::Ordering; use syntax::ast::{self, Name, NodeId}; use syntax::attr; -use syntax::parse::token::{self, keywords}; +use syntax::symbol::{Symbol, keywords}; use hir; @@ -63,11 +64,12 @@ pub struct CtxtArenas<'tcx> { region: TypedArena, stability: TypedArena, layout: TypedArena, + existential_predicates: TypedArena>, // references generics: TypedArena>, - trait_def: TypedArena>, - adt_def: TypedArena>, + trait_def: TypedArena, + adt_def: TypedArena, mir: TypedArena>>, } @@ -81,6 +83,7 @@ impl<'tcx> CtxtArenas<'tcx> { region: TypedArena::new(), stability: TypedArena::new(), layout: TypedArena::new(), + existential_predicates: TypedArena::new(), generics: TypedArena::new(), trait_def: TypedArena::new(), @@ -96,26 +99,28 @@ pub struct CtxtInterners<'tcx> { /// Specifically use a speedy hash algorithm for these hash sets, /// they're accessed quite often. 
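As an aside from the patch itself: the doc comment above notes that the interner sets use a speedy hash algorithm, and the patch swaps `FnvHashMap`/`FnvHashSet` for `FxHashMap`/`FxHashSet` throughout. A minimal sketch of how such aliases are typically wired up with `BuildHasherDefault`; `DefaultHasher` is only a stand-in here for the faster, non-DoS-resistant hasher the real aliases use:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::{HashMap, HashSet};
use std::hash::BuildHasherDefault;

// DefaultHasher stands in for the real hasher; the aliases in the patch plug
// FxHasher into the same BuildHasherDefault slot.
type FastHashMap<K, V> = HashMap<K, V, BuildHasherDefault<DefaultHasher>>;
type FastHashSet<T> = HashSet<T, BuildHasherDefault<DefaultHasher>>;

fn main() {
    let mut map: FastHashMap<&str, u32> = FastHashMap::default();
    map.insert("selection-cache", 1);

    let mut set: FastHashSet<u32> = FastHashSet::default();
    set.insert(42);

    assert_eq!(map["selection-cache"], 1);
    assert!(set.contains(&42));
}
```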
- type_: RefCell>>>, - type_list: RefCell>>>>, - substs: RefCell>>>, - bare_fn: RefCell>>>, - region: RefCell>>, - stability: RefCell>, - layout: RefCell>, + type_: RefCell>>>, + type_list: RefCell>>>>, + substs: RefCell>>>, + bare_fn: RefCell>>>, + region: RefCell>>, + stability: RefCell>, + layout: RefCell>, + existential_predicates: RefCell>>>>, } impl<'gcx: 'tcx, 'tcx> CtxtInterners<'tcx> { fn new(arenas: &'tcx CtxtArenas<'tcx>) -> CtxtInterners<'tcx> { CtxtInterners { arenas: arenas, - type_: RefCell::new(FnvHashSet()), - type_list: RefCell::new(FnvHashSet()), - substs: RefCell::new(FnvHashSet()), - bare_fn: RefCell::new(FnvHashSet()), - region: RefCell::new(FnvHashSet()), - stability: RefCell::new(FnvHashSet()), - layout: RefCell::new(FnvHashSet()) + type_: RefCell::new(FxHashSet()), + type_list: RefCell::new(FxHashSet()), + substs: RefCell::new(FxHashSet()), + bare_fn: RefCell::new(FxHashSet()), + region: RefCell::new(FxHashSet()), + stability: RefCell::new(FxHashSet()), + layout: RefCell::new(FxHashSet()), + existential_predicates: RefCell::new(FxHashSet()), } } @@ -201,6 +206,9 @@ pub struct CommonTypes<'tcx> { } pub struct Tables<'tcx> { + /// Resolved definitions for `::X` associated paths. + pub type_relative_path_defs: NodeMap, + /// Stores the types for various nodes in the AST. Note that this table /// is not guaranteed to be populated until after typeck. See /// typeck::check::fn_ctxt for details. @@ -244,11 +252,12 @@ pub struct Tables<'tcx> { impl<'a, 'gcx, 'tcx> Tables<'tcx> { pub fn empty() -> Tables<'tcx> { Tables { - node_types: FnvHashMap(), + type_relative_path_defs: NodeMap(), + node_types: FxHashMap(), item_substs: NodeMap(), adjustments: NodeMap(), - method_map: FnvHashMap(), - upvar_capture_map: FnvHashMap(), + method_map: FxHashMap(), + upvar_capture_map: FxHashMap(), closure_tys: DefIdMap(), closure_kinds: DefIdMap(), liberated_fn_sigs: NodeMap(), @@ -256,6 +265,16 @@ impl<'a, 'gcx, 'tcx> Tables<'tcx> { } } + /// Returns the final resolution of a `QPath` in an `Expr` or `Pat` node. + pub fn qpath_def(&self, qpath: &hir::QPath, id: NodeId) -> Def { + match *qpath { + hir::QPath::Resolved(_, ref path) => path.def, + hir::QPath::TypeRelative(..) => { + self.type_relative_path_defs.get(&id).cloned().unwrap_or(Def::Err) + } + } + } + pub fn node_id_to_type(&self, id: NodeId) -> Ty<'tcx> { match self.node_id_to_type_opt(id) { Some(ty) => ty, @@ -379,11 +398,6 @@ pub struct GlobalCtxt<'tcx> { pub sess: &'tcx Session, - /// Map from path id to the results from resolve; generated - /// initially by resolve and updated during typeck in some cases - /// (e.g., UFCS paths) - pub def_map: RefCell, - /// Map indicating what traits are in scope for places where this /// is relevant; generated by resolve. pub trait_map: TraitMap, @@ -403,18 +417,15 @@ pub struct GlobalCtxt<'tcx> { pub tables: RefCell>, /// Maps from a trait item to the trait item "descriptor" - pub impl_or_trait_items: RefCell>>, + pub associated_items: RefCell>>, /// Maps from an impl/trait def-id to a list of the def-ids of its items - pub impl_or_trait_item_def_ids: RefCell>>, - - /// A cache for the trait_items() routine; note that the routine - /// itself pushes the `TraitItems` dependency node. - trait_items_cache: RefCell>>, + pub associated_item_def_ids: RefCell>>, pub impl_trait_refs: RefCell>>, pub trait_defs: RefCell>>, pub adt_defs: RefCell>>, + pub adt_sized_constraint: RefCell>>, /// Maps from the def-id of an item (trait/struct/enum/fn) to its /// associated generics and predicates. 
@@ -448,19 +459,19 @@ pub struct GlobalCtxt<'tcx> { pub maybe_unused_trait_imports: NodeSet, // Records the type of every item. - pub tcache: RefCell>>, + pub item_types: RefCell>>, // Internal cache for metadata decoding. No need to track deps on this. - pub rcache: RefCell>>, + pub rcache: RefCell>>, // Cache for the type-contents routine. FIXME -- track deps? - pub tc_cache: RefCell, ty::contents::TypeContents>>, + pub tc_cache: RefCell, ty::contents::TypeContents>>, // FIXME no dep tracking, but we should be able to remove this pub ty_param_defs: RefCell>>, // FIXME dep tracking -- should be harmless enough - pub normalized_cache: RefCell, Ty<'tcx>>>, + pub normalized_cache: RefCell, Ty<'tcx>>>, pub lang_items: middle::lang_items::LanguageItems, @@ -565,20 +576,20 @@ pub struct GlobalCtxt<'tcx> { /// The definite name of the current crate after taking into account /// attributes, commandline parameters, etc. - pub crate_name: token::InternedString, + pub crate_name: Symbol, /// Data layout specification for the current target. pub data_layout: TargetDataLayout, /// Cache for layouts computed from types. - pub layout_cache: RefCell, &'tcx Layout>>, + pub layout_cache: RefCell, &'tcx Layout>>, /// Used to prevent layout from recursing too deeply. pub layout_depth: Cell, /// Map from function to the `#[derive]` mode that it's defining. Only used /// by `proc-macro` crates. - pub derive_macros: RefCell>, + pub derive_macros: RefCell>, } impl<'tcx> GlobalCtxt<'tcx> { @@ -592,15 +603,15 @@ impl<'tcx> GlobalCtxt<'tcx> { } impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { - pub fn crate_name(self, cnum: CrateNum) -> token::InternedString { + pub fn crate_name(self, cnum: CrateNum) -> Symbol { if cnum == LOCAL_CRATE { - self.crate_name.clone() + self.crate_name } else { self.sess.cstore.crate_name(cnum) } } - pub fn original_crate_name(self, cnum: CrateNum) -> token::InternedString { + pub fn original_crate_name(self, cnum: CrateNum) -> Symbol { if cnum == LOCAL_CRATE { self.crate_name.clone() } else { @@ -608,7 +619,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } - pub fn crate_disambiguator(self, cnum: CrateNum) -> token::InternedString { + pub fn crate_disambiguator(self, cnum: CrateNum) -> Symbol { if cnum == LOCAL_CRATE { self.sess.local_crate_disambiguator() } else { @@ -669,10 +680,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.ty_param_defs.borrow().get(&node_id).unwrap().clone() } - pub fn node_type_insert(self, id: NodeId, ty: Ty<'gcx>) { - self.tables.borrow_mut().node_types.insert(id, ty); - } - pub fn alloc_generics(self, generics: ty::Generics<'gcx>) -> &'gcx ty::Generics<'gcx> { self.global_interners.arenas.generics.alloc(generics) @@ -682,38 +689,17 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.global_interners.arenas.mir.alloc(RefCell::new(mir)) } - pub fn intern_trait_def(self, def: ty::TraitDef<'gcx>) - -> &'gcx ty::TraitDef<'gcx> { - let did = def.trait_ref.def_id; - let interned = self.alloc_trait_def(def); - if let Some(prev) = self.trait_defs.borrow_mut().insert(did, interned) { - bug!("Tried to overwrite interned TraitDef: {:?}", prev) - } - self.generics.borrow_mut().insert(did, interned.generics); - interned - } - - pub fn alloc_trait_def(self, def: ty::TraitDef<'gcx>) - -> &'gcx ty::TraitDef<'gcx> { + pub fn alloc_trait_def(self, def: ty::TraitDef) -> &'gcx ty::TraitDef { self.global_interners.arenas.trait_def.alloc(def) } - pub fn insert_adt_def(self, did: DefId, adt_def: ty::AdtDefMaster<'gcx>) { - // this will need a transmute when 
reverse-variance is removed - if let Some(prev) = self.adt_defs.borrow_mut().insert(did, adt_def) { - bug!("Tried to overwrite interned AdtDef: {:?}", prev) - } - } - - pub fn intern_adt_def(self, - did: DefId, - kind: AdtKind, - variants: Vec>) - -> ty::AdtDefMaster<'gcx> { - let def = ty::AdtDefData::new(self, did, kind, variants); - let interned = self.global_interners.arenas.adt_def.alloc(def); - self.insert_adt_def(did, interned); - interned + pub fn alloc_adt_def(self, + did: DefId, + kind: AdtKind, + variants: Vec) + -> &'gcx ty::AdtDef { + let def = ty::AdtDef::new(self, did, kind, variants); + self.global_interners.arenas.adt_def.alloc(def) } pub fn intern_stability(self, stab: attr::Stability) -> &'gcx attr::Stability { @@ -776,7 +762,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// reference to the context, to allow formatting values that need it. pub fn create_and_enter(s: &'tcx Session, arenas: &'tcx CtxtArenas<'tcx>, - def_map: DefMap, trait_map: TraitMap, named_region_map: resolve_lifetime::NamedRegionMap, map: ast_map::Map<'tcx>, @@ -801,16 +786,16 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { types: common_types, named_region_map: named_region_map, region_maps: region_maps, - free_region_maps: RefCell::new(FnvHashMap()), + free_region_maps: RefCell::new(FxHashMap()), item_variance_map: RefCell::new(DepTrackingMap::new(dep_graph.clone())), variance_computed: Cell::new(false), sess: s, - def_map: RefCell::new(def_map), trait_map: trait_map, tables: RefCell::new(Tables::empty()), impl_trait_refs: RefCell::new(DepTrackingMap::new(dep_graph.clone())), trait_defs: RefCell::new(DepTrackingMap::new(dep_graph.clone())), adt_defs: RefCell::new(DepTrackingMap::new(dep_graph.clone())), + adt_sized_constraint: RefCell::new(DepTrackingMap::new(dep_graph.clone())), generics: RefCell::new(DepTrackingMap::new(dep_graph.clone())), predicates: RefCell::new(DepTrackingMap::new(dep_graph.clone())), super_predicates: RefCell::new(DepTrackingMap::new(dep_graph.clone())), @@ -819,14 +804,13 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { mir_map: RefCell::new(DepTrackingMap::new(dep_graph.clone())), freevars: RefCell::new(freevars), maybe_unused_trait_imports: maybe_unused_trait_imports, - tcache: RefCell::new(DepTrackingMap::new(dep_graph.clone())), - rcache: RefCell::new(FnvHashMap()), - tc_cache: RefCell::new(FnvHashMap()), - impl_or_trait_items: RefCell::new(DepTrackingMap::new(dep_graph.clone())), - impl_or_trait_item_def_ids: RefCell::new(DepTrackingMap::new(dep_graph.clone())), - trait_items_cache: RefCell::new(DepTrackingMap::new(dep_graph.clone())), + item_types: RefCell::new(DepTrackingMap::new(dep_graph.clone())), + rcache: RefCell::new(FxHashMap()), + tc_cache: RefCell::new(FxHashMap()), + associated_items: RefCell::new(DepTrackingMap::new(dep_graph.clone())), + associated_item_def_ids: RefCell::new(DepTrackingMap::new(dep_graph.clone())), ty_param_defs: RefCell::new(NodeMap()), - normalized_cache: RefCell::new(FnvHashMap()), + normalized_cache: RefCell::new(FxHashMap()), lang_items: lang_items, inherent_impls: RefCell::new(DepTrackingMap::new(dep_graph.clone())), used_unsafe: RefCell::new(NodeSet()), @@ -844,9 +828,9 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { custom_coerce_unsized_kinds: RefCell::new(DefIdMap()), cast_kinds: RefCell::new(NodeMap()), fragment_infos: RefCell::new(DefIdMap()), - crate_name: token::intern_and_get_ident(crate_name), + crate_name: Symbol::intern(crate_name), data_layout: data_layout, - layout_cache: RefCell::new(FnvHashMap()), + 
layout_cache: RefCell::new(FxHashMap()), layout_depth: Cell::new(0), derive_macros: RefCell::new(NodeMap()), }, f) @@ -960,6 +944,27 @@ impl<'a, 'tcx> Lift<'tcx> for &'a Slice> { } } +impl<'a, 'tcx> Lift<'tcx> for &'a Slice> { + type Lifted = &'tcx Slice>; + fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) + -> Option<&'tcx Slice>> { + if self.is_empty() { + return Some(Slice::empty()); + } + if let Some(&Interned(eps)) = tcx.interners.existential_predicates.borrow().get(&self[..]) { + if *self as *const _ == eps as *const _ { + return Some(eps); + } + } + // Also try in the global tcx if we're not that. + if !tcx.is_global() { + self.lift_to_tcx(tcx.global_tcx()) + } else { + None + } + } +} + impl<'a, 'tcx> Lift<'tcx> for &'a BareFnTy<'a> { type Lifted = &'tcx BareFnTy<'tcx>; fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) @@ -1128,7 +1133,7 @@ impl<'a, 'tcx> TyCtxt<'a, 'tcx, 'tcx> { sty_debug_print!( self, TyAdt, TyBox, TyArray, TySlice, TyRawPtr, TyRef, TyFnDef, TyFnPtr, - TyTrait, TyClosure, TyTuple, TyParam, TyInfer, TyProjection, TyAnon); + TyDynamic, TyClosure, TyTuple, TyParam, TyInfer, TyProjection, TyAnon); println!("Substs interner: #{}", self.interners.substs.borrow().len()); println!("BareFnTy interner: #{}", self.interners.bare_fn.borrow().len()); @@ -1202,6 +1207,13 @@ impl<'tcx> Borrow for Interned<'tcx, Region> { } } +impl<'tcx: 'lcx, 'lcx> Borrow<[ExistentialPredicate<'lcx>]> + for Interned<'tcx, Slice>> { + fn borrow<'a>(&'a self) -> &'a [ExistentialPredicate<'lcx>] { + &self.0[..] + } +} + macro_rules! intern_method { ($lt_tcx:tt, $name:ident: $method:ident($alloc:ty, $alloc_method:ident, @@ -1299,6 +1311,7 @@ macro_rules! slice_interners { } slice_interners!( + existential_predicates: _intern_existential_predicates(ExistentialPredicate), type_list: _intern_type_list(Ty), substs: _intern_substs(Kind) ); @@ -1360,7 +1373,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.mk_imm_ref(self.mk_region(ty::ReStatic), self.mk_str()) } - pub fn mk_adt(self, def: AdtDef<'tcx>, substs: &'tcx Substs<'tcx>) -> Ty<'tcx> { + pub fn mk_adt(self, def: &'tcx AdtDef, substs: &'tcx Substs<'tcx>) -> Ty<'tcx> { // take a copy of substs so that we own the vectors inside self.mk_ty(TyAdt(def, substs)) } @@ -1439,28 +1452,29 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.mk_ty(TyFnPtr(fty)) } - pub fn mk_trait(self, mut obj: TraitObject<'tcx>) -> Ty<'tcx> { - obj.projection_bounds.sort_by_key(|b| b.sort_key(self)); - self.mk_ty(TyTrait(box obj)) + pub fn mk_dynamic( + self, + obj: ty::Binder<&'tcx Slice>>, + reg: &'tcx ty::Region + ) -> Ty<'tcx> { + self.mk_ty(TyDynamic(obj, reg)) } pub fn mk_projection(self, trait_ref: TraitRef<'tcx>, item_name: Name) - -> Ty<'tcx> { - // take a copy of substs so that we own the vectors inside - let inner = ProjectionTy { trait_ref: trait_ref, item_name: item_name }; - self.mk_ty(TyProjection(inner)) - } + -> Ty<'tcx> { + // take a copy of substs so that we own the vectors inside + let inner = ProjectionTy { trait_ref: trait_ref, item_name: item_name }; + self.mk_ty(TyProjection(inner)) + } pub fn mk_closure(self, closure_id: DefId, - substs: &'tcx Substs<'tcx>, - tys: &[Ty<'tcx>]) - -> Ty<'tcx> { + substs: &'tcx Substs<'tcx>) + -> Ty<'tcx> { self.mk_closure_from_closure_substs(closure_id, ClosureSubsts { - func_substs: substs, - upvar_tys: self.intern_type_list(tys) + substs: substs }) } @@ -1505,6 +1519,13 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.mk_ty(TyAnon(def_id, substs)) } + pub fn 
intern_existential_predicates(self, eps: &[ExistentialPredicate<'tcx>]) + -> &'tcx Slice> { + assert!(!eps.is_empty()); + assert!(eps.windows(2).all(|w| w[0].cmp(self, &w[1]) != Ordering::Greater)); + self._intern_existential_predicates(eps) + } + pub fn intern_type_list(self, ts: &[Ty<'tcx>]) -> &'tcx Slice> { if ts.len() == 0 { Slice::empty() @@ -1521,6 +1542,23 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } + pub fn mk_fn_sig(self, inputs: I, output: I::Item, variadic: bool) + -> , ty::FnSig<'tcx>>>::Output + where I: Iterator, + I::Item: InternIteratorElement, ty::FnSig<'tcx>> + { + inputs.chain(iter::once(output)).intern_with(|xs| ty::FnSig { + inputs_and_output: self.intern_type_list(xs), + variadic: variadic + }) + } + + pub fn mk_existential_predicates], + &'tcx Slice>>>(self, iter: I) + -> I::Output { + iter.intern_with(|xs| self.intern_existential_predicates(xs)) + } + pub fn mk_type_list], &'tcx Slice>>>(self, iter: I) -> I::Output { iter.intern_with(|xs| self.intern_type_list(xs)) @@ -1539,15 +1577,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.mk_substs(iter::once(s).chain(t.into_iter().cloned()).map(Kind::from)) } - pub fn trait_items(self, trait_did: DefId) -> Rc>> { - self.trait_items_cache.memoize(trait_did, || { - let def_ids = self.impl_or_trait_items(trait_did); - Rc::new(def_ids.iter() - .map(|&def_id| self.impl_or_trait_item(def_id)) - .collect()) - }) - } - /// Obtain the representation annotation for a struct definition. pub fn lookup_repr_hints(self, did: DefId) -> Rc> { self.repr_hint_cache.memoize(did, || { @@ -1592,4 +1621,3 @@ impl InternIteratorElement for Result { Ok(f(&iter.collect::, _>>()?)) } } - diff --git a/src/librustc/ty/error.rs b/src/librustc/ty/error.rs index 9b345c2d02..e95ce97e13 100644 --- a/src/librustc/ty/error.rs +++ b/src/librustc/ty/error.rs @@ -45,12 +45,12 @@ pub enum TypeError<'tcx> { IntMismatch(ExpectedFound), FloatMismatch(ExpectedFound), Traits(ExpectedFound), - BuiltinBoundsMismatch(ExpectedFound), VariadicMismatch(ExpectedFound), CyclicTy, ProjectionNameMismatched(ExpectedFound), ProjectionBoundsLength(ExpectedFound), - TyParamDefaultMismatch(ExpectedFound>) + TyParamDefaultMismatch(ExpectedFound>), + ExistentialMismatch(ExpectedFound<&'tcx ty::Slice>>), } #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Eq, Hash, Debug, Copy)] @@ -135,19 +135,6 @@ impl<'tcx> fmt::Display for TypeError<'tcx> { format!("trait `{}`", tcx.item_path_str(values.found))) }), - BuiltinBoundsMismatch(values) => { - if values.expected.is_empty() { - write!(f, "expected no bounds, found `{}`", - values.found) - } else if values.found.is_empty() { - write!(f, "expected bounds `{}`, found no bounds", - values.expected) - } else { - write!(f, "expected bounds `{}`, found bounds `{}`", - values.expected, - values.found) - } - } IntMismatch(ref values) => { write!(f, "expected `{:?}`, found `{:?}`", values.expected, @@ -178,6 +165,10 @@ impl<'tcx> fmt::Display for TypeError<'tcx> { values.expected.ty, values.found.ty) } + ExistentialMismatch(ref values) => { + report_maybe_different(f, format!("trait `{}`", values.expected), + format!("trait `{}`", values.found)) + } } } } @@ -214,8 +205,9 @@ impl<'a, 'gcx, 'lcx, 'tcx> ty::TyS<'tcx> { } ty::TyFnDef(..) => format!("fn item"), ty::TyFnPtr(_) => "fn pointer".to_string(), - ty::TyTrait(ref inner) => { - format!("trait {}", tcx.item_path_str(inner.principal.def_id())) + ty::TyDynamic(ref inner, ..) 
=> { + inner.principal().map_or_else(|| "trait".to_string(), + |p| format!("trait {}", tcx.item_path_str(p.def_id()))) } ty::TyClosure(..) => "closure".to_string(), ty::TyTuple(_) => "tuple".to_string(), @@ -291,10 +283,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { expected.ty, found.ty)); - match - self.map.as_local_node_id(expected.def_id) - .and_then(|node_id| self.map.opt_span(node_id)) - { + match self.map.span_if_local(expected.def_id) { Some(span) => { db.span_note(span, "a default was defined here..."); } @@ -308,10 +297,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { expected.origin_span, "...that was applied to an unconstrained type variable here"); - match - self.map.as_local_node_id(found.def_id) - .and_then(|node_id| self.map.opt_span(node_id)) - { + match self.map.span_if_local(found.def_id) { Some(span) => { db.span_note(span, "a second default was defined here..."); } diff --git a/src/librustc/ty/fast_reject.rs b/src/librustc/ty/fast_reject.rs index befc9533c3..7b4d76ad49 100644 --- a/src/librustc/ty/fast_reject.rs +++ b/src/librustc/ty/fast_reject.rs @@ -11,6 +11,7 @@ use hir::def_id::DefId; use ty::{self, Ty, TyCtxt}; use syntax::ast; +use middle::lang_items::OwnedBoxLangItem; use self::SimplifiedType::*; @@ -59,8 +60,8 @@ pub fn simplify_type<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, ty::TyStr => Some(StrSimplifiedType), ty::TyArray(..) | ty::TySlice(_) => Some(ArraySimplifiedType), ty::TyRawPtr(_) => Some(PtrSimplifiedType), - ty::TyTrait(ref trait_info) => { - Some(TraitSimplifiedType(trait_info.principal.def_id())) + ty::TyDynamic(ref trait_info, ..) => { + trait_info.principal().map(|p| TraitSimplifiedType(p.def_id())) } ty::TyRef(_, mt) => { // since we introduce auto-refs during method lookup, we @@ -70,10 +71,7 @@ pub fn simplify_type<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, } ty::TyBox(_) => { // treat like we would treat `Box` - match tcx.lang_items.require_owned_box() { - Ok(def_id) => Some(AdtSimplifiedType(def_id)), - Err(msg) => tcx.sess.fatal(&msg), - } + Some(AdtSimplifiedType(tcx.require_lang_item(OwnedBoxLangItem))) } ty::TyClosure(def_id, _) => { Some(ClosureSimplifiedType(def_id)) @@ -83,7 +81,7 @@ pub fn simplify_type<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, Some(TupleSimplifiedType(tys.len())) } ty::TyFnDef(.., ref f) | ty::TyFnPtr(ref f) => { - Some(FunctionSimplifiedType(f.sig.0.inputs.len())) + Some(FunctionSimplifiedType(f.sig.skip_binder().inputs().len())) } ty::TyProjection(_) | ty::TyParam(_) => { if can_simplify_params { diff --git a/src/librustc/ty/flags.rs b/src/librustc/ty/flags.rs index 649d78f9d9..a06d3ed6cf 100644 --- a/src/librustc/ty/flags.rs +++ b/src/librustc/ty/flags.rs @@ -88,8 +88,7 @@ impl FlagComputation { &ty::TyClosure(_, ref substs) => { self.add_flags(TypeFlags::HAS_TY_CLOSURE); self.add_flags(TypeFlags::HAS_LOCAL_NAMES); - self.add_substs(&substs.func_substs); - self.add_tys(&substs.upvar_tys); + self.add_substs(&substs.substs); } &ty::TyInfer(infer) => { @@ -122,16 +121,21 @@ impl FlagComputation { self.add_substs(substs); } - &ty::TyTrait(ref obj) => { + &ty::TyDynamic(ref obj, r) => { let mut computation = FlagComputation::new(); - computation.add_substs(obj.principal.skip_binder().substs); - for projection_bound in &obj.projection_bounds { - let mut proj_computation = FlagComputation::new(); - proj_computation.add_existential_projection(&projection_bound.0); - self.add_bound_computation(&proj_computation); + for predicate in obj.skip_binder().iter() { + match *predicate { + 
ty::ExistentialPredicate::Trait(tr) => computation.add_substs(tr.substs), + ty::ExistentialPredicate::Projection(p) => { + let mut proj_computation = FlagComputation::new(); + proj_computation.add_existential_projection(&p); + self.add_bound_computation(&proj_computation); + } + ty::ExistentialPredicate::AutoTrait(_) => {} + } } self.add_bound_computation(&computation); - self.add_region(obj.region_bound); + self.add_region(r); } &ty::TyBox(tt) | &ty::TyArray(tt, _) | &ty::TySlice(tt) => { @@ -176,8 +180,8 @@ impl FlagComputation { fn add_fn_sig(&mut self, fn_sig: &ty::PolyFnSig) { let mut computation = FlagComputation::new(); - computation.add_tys(&fn_sig.0.inputs); - computation.add_ty(fn_sig.0.output); + computation.add_tys(fn_sig.skip_binder().inputs()); + computation.add_ty(fn_sig.skip_binder().output()); self.add_bound_computation(&computation); } diff --git a/src/librustc/ty/fold.rs b/src/librustc/ty/fold.rs index b79ebdb14f..10754825a8 100644 --- a/src/librustc/ty/fold.rs +++ b/src/librustc/ty/fold.rs @@ -45,7 +45,7 @@ use ty::adjustment; use ty::{self, Binder, Ty, TyCtxt, TypeFlags}; use std::fmt; -use util::nodemap::{FnvHashMap, FnvHashSet}; +use util::nodemap::{FxHashMap, FxHashSet}; /// The TypeFoldable trait is implemented for every type that can be folded. /// Basically, every type that has a corresponding method in TypeFolder. @@ -191,6 +191,10 @@ pub trait TypeVisitor<'tcx> : Sized { t.super_visit_with(self) } + fn visit_trait_ref(&mut self, trait_ref: ty::TraitRef<'tcx>) -> bool { + trait_ref.super_visit_with(self) + } + fn visit_region(&mut self, r: &'tcx ty::Region) -> bool { r.super_visit_with(self) } @@ -225,7 +229,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// whether any late-bound regions were skipped pub fn collect_regions(self, value: &T, - region_set: &mut FnvHashSet<&'tcx ty::Region>) + region_set: &mut FxHashSet<&'tcx ty::Region>) -> bool where T : TypeFoldable<'tcx> { @@ -319,14 +323,14 @@ struct RegionReplacer<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { tcx: TyCtxt<'a, 'gcx, 'tcx>, current_depth: u32, fld_r: &'a mut (FnMut(ty::BoundRegion) -> &'tcx ty::Region + 'a), - map: FnvHashMap + map: FxHashMap } impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { pub fn replace_late_bound_regions(self, value: &Binder, mut f: F) - -> (T, FnvHashMap) + -> (T, FxHashMap) where F : FnMut(ty::BoundRegion) -> &'tcx ty::Region, T : TypeFoldable<'tcx>, { @@ -390,7 +394,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// variables and equate `value` with something else, those /// variables will also be equated. pub fn collect_constrained_late_bound_regions(&self, value: &Binder) - -> FnvHashSet + -> FxHashSet where T : TypeFoldable<'tcx> { self.collect_late_bound_regions(value, true) @@ -398,14 +402,14 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// Returns a set of all late-bound regions that appear in `value` anywhere. 
pub fn collect_referenced_late_bound_regions(&self, value: &Binder) - -> FnvHashSet + -> FxHashSet where T : TypeFoldable<'tcx> { self.collect_late_bound_regions(value, false) } fn collect_late_bound_regions(&self, value: &Binder, just_constraint: bool) - -> FnvHashSet + -> FxHashSet where T : TypeFoldable<'tcx> { let mut collector = LateBoundRegionsCollector::new(just_constraint); @@ -450,7 +454,7 @@ impl<'a, 'gcx, 'tcx> RegionReplacer<'a, 'gcx, 'tcx> { tcx: tcx, current_depth: 1, fld_r: fld_r, - map: FnvHashMap() + map: FxHashMap() } } } @@ -650,7 +654,7 @@ impl<'tcx> TypeVisitor<'tcx> for HasTypeFlagsVisitor { /// Collects all the late-bound regions it finds into a hash set. struct LateBoundRegionsCollector { current_depth: u32, - regions: FnvHashSet, + regions: FxHashSet, just_constrained: bool, } @@ -658,7 +662,7 @@ impl LateBoundRegionsCollector { fn new(just_constrained: bool) -> Self { LateBoundRegionsCollector { current_depth: 1, - regions: FnvHashSet(), + regions: FxHashSet(), just_constrained: just_constrained, } } diff --git a/src/librustc/ty/item_path.rs b/src/librustc/ty/item_path.rs index fdf5185eb6..440a391678 100644 --- a/src/librustc/ty/item_path.rs +++ b/src/librustc/ty/item_path.rs @@ -12,7 +12,7 @@ use hir::map::DefPathData; use hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX, LOCAL_CRATE}; use ty::{self, Ty, TyCtxt}; use syntax::ast; -use syntax::parse::token; +use syntax::symbol::Symbol; use std::cell::Cell; @@ -94,14 +94,14 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { if let Some(extern_crate_def_id) = opt_extern_crate { self.push_item_path(buffer, extern_crate_def_id); } else { - buffer.push(&self.crate_name(cnum)); + buffer.push(&self.crate_name(cnum).as_str()); } } } RootMode::Absolute => { // In absolute mode, just write the crate name // unconditionally. - buffer.push(&self.original_crate_name(cnum)); + buffer.push(&self.original_crate_name(cnum).as_str()); } } } @@ -126,7 +126,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { return true; } None => { - buffer.push(&self.crate_name(cur_def.krate)); + buffer.push(&self.crate_name(cur_def.krate).as_str()); cur_path.iter().rev().map(|segment| buffer.push(&segment.as_str())).count(); return true; } @@ -136,7 +136,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { cur_path.push(self.sess.cstore.def_key(cur_def) .disambiguated_data.data.get_opt_name().unwrap_or_else(|| - token::intern(""))); + Symbol::intern(""))); match visible_parent_map.get(&cur_def) { Some(&def) => cur_def = def, None => return false, @@ -218,7 +218,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // users may find it useful. Currently, we omit the parent if // the impl is either in the same module as the self-type or // as the trait. - let self_ty = self.lookup_item_type(impl_def_id).ty; + let self_ty = self.item_type(impl_def_id); let in_self_mod = match characteristic_def_id_of_type(self_ty) { None => false, Some(ty_def_id) => self.parent_def_id(ty_def_id) == Some(parent_def_id), @@ -316,7 +316,7 @@ pub fn characteristic_def_id_of_type(ty: Ty) -> Option { match ty.sty { ty::TyAdt(adt_def, _) => Some(adt_def.did), - ty::TyTrait(ref data) => Some(data.principal.def_id()), + ty::TyDynamic(data, ..) => data.principal().map(|p| p.def_id()), ty::TyArray(subty, _) | ty::TySlice(subty) | diff --git a/src/librustc/ty/ivar.rs b/src/librustc/ty/ivar.rs deleted file mode 100644 index 634599406a..0000000000 --- a/src/librustc/ty/ivar.rs +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright 2015 The Rust Project Developers. 
See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -use dep_graph::DepNode; -use hir::def_id::DefId; -use ty::{Ty, TyS}; -use ty::tls; - -use rustc_data_structures::ivar; - -use std::fmt; -use std::marker::PhantomData; -use core::nonzero::NonZero; - -/// An IVar that contains a Ty. 'lt is a (reverse-variant) upper bound -/// on the lifetime of the IVar. This is required because of variance -/// problems: the IVar needs to be variant with respect to 'tcx (so -/// it can be referred to from Ty) but can only be modified if its -/// lifetime is exactly 'tcx. -/// -/// Safety invariants: -/// (A) self.0, if fulfilled, is a valid Ty<'tcx> -/// (B) no aliases to this value with a 'tcx longer than this -/// value's 'lt exist -/// -/// Dependency tracking: each ivar does not know what node in the -/// dependency graph it is associated with, so when you get/fulfill -/// you must supply a `DepNode` id. This should always be the same id! -/// -/// NonZero is used rather than Unique because Unique isn't Copy. -pub struct TyIVar<'tcx, 'lt: 'tcx>(ivar::Ivar>>, - PhantomData)->TyS<'tcx>>); - -impl<'tcx, 'lt> TyIVar<'tcx, 'lt> { - #[inline] - pub fn new() -> Self { - // Invariant (A) satisfied because the IVar is unfulfilled - // Invariant (B) because 'lt : 'tcx - TyIVar(ivar::Ivar::new(), PhantomData) - } - - #[inline] - pub fn get(&self, dep_node: DepNode) -> Option> { - tls::with(|tcx| tcx.dep_graph.read(dep_node)); - self.untracked_get() - } - - /// Reads the ivar without registered a dep-graph read. Use with - /// caution. - #[inline] - pub fn untracked_get(&self) -> Option> { - match self.0.get() { - None => None, - // valid because of invariant (A) - Some(v) => Some(unsafe { &*(*v as *const TyS<'tcx>) }) - } - } - - #[inline] - pub fn unwrap(&self, dep_node: DepNode) -> Ty<'tcx> { - self.get(dep_node).unwrap() - } - - pub fn fulfill(&self, dep_node: DepNode, value: Ty<'lt>) { - tls::with(|tcx| tcx.dep_graph.write(dep_node)); - - // Invariant (A) is fulfilled, because by (B), every alias - // of this has a 'tcx longer than 'lt. - let value: *const TyS<'lt> = value; - // FIXME(27214): unneeded [as *const ()] - let value = value as *const () as *const TyS<'static>; - self.0.fulfill(unsafe { NonZero::new(value) }) - } -} - -impl<'tcx, 'lt> fmt::Debug for TyIVar<'tcx, 'lt> { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - match self.untracked_get() { - Some(val) => write!(f, "TyIVar({:?})", val), - None => f.write_str("TyIVar()") - } - } -} diff --git a/src/librustc/ty/layout.rs b/src/librustc/ty/layout.rs index 5ce43d905e..ebac30c8e5 100644 --- a/src/librustc/ty/layout.rs +++ b/src/librustc/ty/layout.rs @@ -24,6 +24,7 @@ use syntax_pos::DUMMY_SP; use std::cmp; use std::fmt; use std::i64; +use std::iter; /// Parsed [Data layout](http://llvm.org/docs/LangRef.html#data-layout) /// for a target, which contains everything needed to compute layouts. @@ -415,7 +416,7 @@ impl Integer { /// signed discriminant range and #[repr] attribute. /// N.B.: u64 values above i64::MAX will be treated as signed, but /// that shouldn't affect anything, other than maybe debuginfo. 
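As an aside from the patch itself: the doc comment above (and the `repr_discr` change that follows) is about choosing the smallest integer type that can hold an enum's discriminant range. A rough standalone sketch of that fitting logic, mirroring only the shape of the computation rather than rustc's exact rules:

```rust
use std::cmp;

// Smallest signed width (in bits) that can represent `x`.
fn fit_signed(x: i64) -> u32 {
    if -128 <= x && x <= 127 { 8 }
    else if -32_768 <= x && x <= 32_767 { 16 }
    else if -2_147_483_648 <= x && x <= 2_147_483_647 { 32 }
    else { 64 }
}

// Smallest unsigned width (in bits) that can represent `x`.
fn fit_unsigned(x: u64) -> u32 {
    if x <= 0xff { 8 }
    else if x <= 0xffff { 16 }
    else if x <= 0xffff_ffff { 32 }
    else { 64 }
}

// Pick a width and signedness for the discriminant range [min, max].
fn discr_width(min: i64, max: i64) -> (u32, bool) {
    if min >= 0 {
        // No negative values: the unsigned fit suffices.
        (fit_unsigned(max as u64), false)
    } else {
        (cmp::max(fit_signed(min), fit_signed(max)), true)
    }
}

fn main() {
    assert_eq!(discr_width(0, 200), (8, false));
    assert_eq!(discr_width(-1, 200), (16, true));
}
```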
- pub fn repr_discr(tcx: TyCtxt, ty: Ty, hint: attr::ReprAttr, min: i64, max: i64) + fn repr_discr(tcx: TyCtxt, ty: Ty, hints: &[attr::ReprAttr], min: i64, max: i64) -> (Integer, bool) { // Theoretically, negative values could be larger in unsigned representation // than the unsigned representation of the signed minimum. However, if there @@ -424,34 +425,41 @@ impl Integer { let unsigned_fit = Integer::fit_unsigned(cmp::max(min as u64, max as u64)); let signed_fit = cmp::max(Integer::fit_signed(min), Integer::fit_signed(max)); - let at_least = match hint { - attr::ReprInt(ity) => { - let discr = Integer::from_attr(&tcx.data_layout, ity); - let fit = if ity.is_signed() { signed_fit } else { unsigned_fit }; - if discr < fit { - bug!("Integer::repr_discr: `#[repr]` hint too small for \ - discriminant range of enum `{}", ty) + let mut min_from_extern = None; + let min_default = I8; + + for &r in hints.iter() { + match r { + attr::ReprInt(ity) => { + let discr = Integer::from_attr(&tcx.data_layout, ity); + let fit = if ity.is_signed() { signed_fit } else { unsigned_fit }; + if discr < fit { + bug!("Integer::repr_discr: `#[repr]` hint too small for \ + discriminant range of enum `{}", ty) + } + return (discr, ity.is_signed()); } - return (discr, ity.is_signed()); - } - attr::ReprExtern => { - match &tcx.sess.target.target.arch[..] { - // WARNING: the ARM EABI has two variants; the one corresponding - // to `at_least == I32` appears to be used on Linux and NetBSD, - // but some systems may use the variant corresponding to no - // lower bound. However, we don't run on those yet...? - "arm" => I32, - _ => I32, + attr::ReprExtern => { + match &tcx.sess.target.target.arch[..] { + // WARNING: the ARM EABI has two variants; the one corresponding + // to `at_least == I32` appears to be used on Linux and NetBSD, + // but some systems may use the variant corresponding to no + // lower bound. However, we don't run on those yet...? + "arm" => min_from_extern = Some(I32), + _ => min_from_extern = Some(I32), + } + } + attr::ReprAny => {}, + attr::ReprPacked => { + bug!("Integer::repr_discr: found #[repr(packed)] on enum `{}", ty); + } + attr::ReprSimd => { + bug!("Integer::repr_discr: found #[repr(simd)] on enum `{}", ty); } } - attr::ReprAny => I8, - attr::ReprPacked => { - bug!("Integer::repr_discr: found #[repr(packed)] on enum `{}", ty); - } - attr::ReprSimd => { - bug!("Integer::repr_discr: found #[repr(simd)] on enum `{}", ty); - } - }; + } + + let at_least = min_from_extern.unwrap_or(min_default); // If there are no negative values, we can use the unsigned fit. if min >= 0 { @@ -511,67 +519,162 @@ pub struct Struct { /// If true, the size is exact, otherwise it's only a lower bound. pub sized: bool, - /// Offsets for the first byte of each field. + /// Offsets for the first byte of each field, ordered to match the source definition order. + /// This vector does not go in increasing order. /// FIXME(eddyb) use small vector optimization for the common case. pub offsets: Vec, + /// Maps source order field indices to memory order indices, depending how fields were permuted. + /// FIXME (camlorn) also consider small vector optimization here. + pub memory_index: Vec, + pub min_size: Size, } +// Info required to optimize struct layout. +#[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Debug)] +enum StructKind { + // A tuple, closure, or univariant which cannot be coerced to unsized. + AlwaysSizedUnivariant, + // A univariant, the last field of which may be coerced to unsized. 
+ MaybeUnsizedUnivariant, + // A univariant, but part of an enum. + EnumVariant, +} + impl<'a, 'gcx, 'tcx> Struct { - pub fn new(dl: &TargetDataLayout, packed: bool) -> Struct { - Struct { + // FIXME(camlorn): reprs need a better representation to deal with multiple reprs on one type. + fn new(dl: &TargetDataLayout, fields: &Vec<&'a Layout>, + reprs: &[attr::ReprAttr], kind: StructKind, + scapegoat: Ty<'gcx>) -> Result> { + let packed = reprs.contains(&attr::ReprPacked); + let mut ret = Struct { align: if packed { dl.i8_align } else { dl.aggregate_align }, packed: packed, sized: true, offsets: vec![], + memory_index: vec![], min_size: Size::from_bytes(0), + }; + + // Anything with ReprExtern or ReprPacked doesn't optimize. + // Neither do 1-member and 2-member structs. + // In addition, code in trans assume that 2-element structs can become pairs. + // It's easier to just short-circuit here. + let mut can_optimize = fields.len() > 2 || StructKind::EnumVariant == kind; + if can_optimize { + // This exhaustive match makes new reprs force the adder to modify this function. + // Otherwise, things can silently break. + // Note the inversion, return true to stop optimizing. + can_optimize = !reprs.iter().any(|r| { + match *r { + attr::ReprAny | attr::ReprInt(_) => false, + attr::ReprExtern | attr::ReprPacked => true, + attr::ReprSimd => bug!("Simd vectors should be represented as layout::Vector") + } + }); } - } - /// Extend the Struct with more fields. - pub fn extend(&mut self, dl: &TargetDataLayout, - fields: I, - scapegoat: Ty<'gcx>) - -> Result<(), LayoutError<'gcx>> - where I: Iterator>> { - self.offsets.reserve(fields.size_hint().0); + // Disable field reordering until we can decide what to do. + // The odd pattern here avoids a warning about the value never being read. + if can_optimize { can_optimize = false } - let mut offset = self.min_size; + let (optimize, sort_ascending) = match kind { + StructKind::AlwaysSizedUnivariant => (can_optimize, false), + StructKind::MaybeUnsizedUnivariant => (can_optimize, false), + StructKind::EnumVariant => { + assert!(fields.len() >= 1, "Enum variants must have discriminants."); + (can_optimize && fields[0].size(dl).bytes() == 1, true) + } + }; - for field in fields { - if !self.sized { - bug!("Struct::extend: field #{} of `{}` comes after unsized field", - self.offsets.len(), scapegoat); + ret.offsets = vec![Size::from_bytes(0); fields.len()]; + let mut inverse_memory_index: Vec = (0..fields.len() as u32).collect(); + + if optimize { + let start = if let StructKind::EnumVariant = kind { 1 } else { 0 }; + let end = if let StructKind::MaybeUnsizedUnivariant = kind { + fields.len() - 1 + } else { + fields.len() + }; + if end > start { + let optimizing = &mut inverse_memory_index[start..end]; + if sort_ascending { + optimizing.sort_by_key(|&x| fields[x as usize].align(dl).abi()); + } else { + optimizing.sort_by(| &a, &b | { + let a = fields[a as usize].align(dl).abi(); + let b = fields[b as usize].align(dl).abi(); + b.cmp(&a) + }); + } + } + } + + // inverse_memory_index holds field indices by increasing memory offset. + // That is, if field 5 has offset 0, the first element of inverse_memory_index is 5. + // We now write field offsets to the corresponding offset slot; + // field 5 with offset 0 puts 0 in offsets[5]. + // At the bottom of this function, we use inverse_memory_index to produce memory_index. 
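As an aside from the patch itself: the comments above describe `inverse_memory_index` (source-order field indices listed by increasing memory offset) and its inverse, `memory_index`. A small standalone illustration of that permutation bookkeeping, with made-up field alignments:

```rust
fn main() {
    // Alignment of each field in source order (made-up values).
    let aligns = [1u64, 8, 2, 4];

    // Memory order: source indices sorted by decreasing alignment.
    let mut inverse_memory_index: Vec<u32> = (0..aligns.len() as u32).collect();
    inverse_memory_index.sort_by(|&a, &b| aligns[b as usize].cmp(&aligns[a as usize]));
    assert_eq!(inverse_memory_index, vec![1, 3, 2, 0]);

    // Invert the permutation: memory_index maps a source-order index to its
    // position in memory, as described in the comments above.
    let mut memory_index = vec![0u32; inverse_memory_index.len()];
    for (mem_pos, &src) in inverse_memory_index.iter().enumerate() {
        memory_index[src as usize] = mem_pos as u32;
    }
    assert_eq!(memory_index, vec![3, 0, 2, 1]);
}
```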
+ + if let StructKind::EnumVariant = kind { + assert_eq!(inverse_memory_index[0], 0, + "Enum variant discriminants must have the lowest offset."); + } + + let mut offset = Size::from_bytes(0); + + for i in inverse_memory_index.iter() { + let field = fields[*i as usize]; + if !ret.sized { + bug!("Struct::new: field #{} of `{}` comes after unsized field", + ret.offsets.len(), scapegoat); } - let field = field?; if field.is_unsized() { - self.sized = false; + ret.sized = false; } // Invariant: offset < dl.obj_size_bound() <= 1<<61 - if !self.packed { + if !ret.packed { let align = field.align(dl); - self.align = self.align.max(align); + ret.align = ret.align.max(align); offset = offset.abi_align(align); } - self.offsets.push(offset); - + debug!("Struct::new offset: {:?} field: {:?} {:?}", offset, field, field.size(dl)); + ret.offsets[*i as usize] = offset; offset = offset.checked_add(field.size(dl), dl) .map_or(Err(LayoutError::SizeOverflow(scapegoat)), Ok)?; } - self.min_size = offset; - Ok(()) + debug!("Struct::new min_size: {:?}", offset); + ret.min_size = offset; + + // As stated above, inverse_memory_index holds field indices by increasing offset. + // This makes it an already-sorted view of the offsets vec. + // To invert it, consider: + // If field 5 has offset 0, offsets[0] is 5, and memory_index[5] should be 0. + // Field 5 would be the first element, so memory_index is i: + // Note: if we didn't optimize, it's already right. + + if optimize { + ret.memory_index = vec![0; inverse_memory_index.len()]; + + for i in 0..inverse_memory_index.len() { + ret.memory_index[inverse_memory_index[i] as usize] = i as u32; + } + } else { + ret.memory_index = inverse_memory_index; + } + + Ok(ret) } - /// Get the size without trailing alignment padding. - - /// Get the size with trailing aligment padding. + /// Get the size with trailing alignment padding. pub fn stride(&self) -> Size { self.min_size.abi_align(self.align) } @@ -589,18 +692,45 @@ impl<'a, 'gcx, 'tcx> Struct { Ok(true) } + /// Get indices of the tys that made this struct by increasing offset. + #[inline] + pub fn field_index_by_increasing_offset<'b>(&'b self) -> impl iter::Iterator+'b { + let mut inverse_small = [0u8; 64]; + let mut inverse_big = vec![]; + let use_small = self.memory_index.len() <= inverse_small.len(); + + // We have to write this logic twice in order to keep the array small. + if use_small { + for i in 0..self.memory_index.len() { + inverse_small[self.memory_index[i] as usize] = i as u8; + } + } else { + inverse_big = vec![0; self.memory_index.len()]; + for i in 0..self.memory_index.len() { + inverse_big[self.memory_index[i] as usize] = i as u32; + } + } + + (0..self.memory_index.len()).map(move |i| { + if use_small { inverse_small[i] as usize } + else { inverse_big[i] as usize } + }) + } + /// Find the path leading to a non-zero leaf field, starting from /// the given type and recursing through aggregates. + /// The tuple is `(path, source_path)`, + /// where `path` is in memory order and `source_path` in source order. // FIXME(eddyb) track value ranges and traverse already optimized enums. - pub fn non_zero_field_in_type(infcx: &InferCtxt<'a, 'gcx, 'tcx>, - ty: Ty<'gcx>) - -> Result, LayoutError<'gcx>> { + fn non_zero_field_in_type(infcx: &InferCtxt<'a, 'gcx, 'tcx>, + ty: Ty<'gcx>) + -> Result, LayoutError<'gcx>> { let tcx = infcx.tcx.global_tcx(); match (ty.layout(infcx)?, &ty.sty) { (&Scalar { non_zero: true, .. }, _) | - (&CEnum { non_zero: true, .. }, _) => Ok(Some(vec![])), + (&CEnum { non_zero: true, .. 
}, _) => Ok(Some((vec![], vec![]))), (&FatPointer { non_zero: true, .. }, _) => { - Ok(Some(vec![FAT_PTR_ADDR as u32])) + Ok(Some((vec![FAT_PTR_ADDR as u32], vec![FAT_PTR_ADDR as u32]))) } // Is this the NonZero lang item wrapping a pointer or integer type? @@ -611,10 +741,11 @@ impl<'a, 'gcx, 'tcx> Struct { // FIXME(eddyb) also allow floating-point types here. Scalar { value: Int(_), non_zero: false } | Scalar { value: Pointer, non_zero: false } => { - Ok(Some(vec![0])) + Ok(Some((vec![0], vec![0]))) } FatPointer { non_zero: false, .. } => { - Ok(Some(vec![FAT_PTR_ADDR as u32, 0])) + let tmp = vec![FAT_PTR_ADDR as u32, 0]; + Ok(Some((tmp.clone(), tmp))) } _ => Ok(None) } @@ -622,25 +753,30 @@ impl<'a, 'gcx, 'tcx> Struct { // Perhaps one of the fields of this struct is non-zero // let's recurse and find out - (_, &ty::TyAdt(def, substs)) if def.is_struct() => { - Struct::non_zero_field_path(infcx, def.struct_variant().fields + (&Univariant { ref variant, .. }, &ty::TyAdt(def, substs)) if def.is_struct() => { + Struct::non_zero_field_paths(infcx, def.struct_variant().fields .iter().map(|field| { field.ty(tcx, substs) - })) + }), + Some(&variant.memory_index[..])) } // Perhaps one of the upvars of this closure is non-zero - // Let's recurse and find out! - (_, &ty::TyClosure(_, ty::ClosureSubsts { upvar_tys: tys, .. })) | + (&Univariant { ref variant, .. }, &ty::TyClosure(def, substs)) => { + let upvar_tys = substs.upvar_tys(def, tcx); + Struct::non_zero_field_paths(infcx, upvar_tys, + Some(&variant.memory_index[..])) + } // Can we use one of the fields in this tuple? - (_, &ty::TyTuple(tys)) => { - Struct::non_zero_field_path(infcx, tys.iter().cloned()) + (&Univariant { ref variant, .. }, &ty::TyTuple(tys)) => { + Struct::non_zero_field_paths(infcx, tys.iter().cloned(), + Some(&variant.memory_index[..])) } // Is this a fixed-size array of something non-zero // with at least one element? (_, &ty::TyArray(ety, d)) if d > 0 => { - Struct::non_zero_field_path(infcx, Some(ety).into_iter()) + Struct::non_zero_field_paths(infcx, Some(ety).into_iter(), None) } (_, &ty::TyProjection(_)) | (_, &ty::TyAnon(..)) => { @@ -658,14 +794,23 @@ impl<'a, 'gcx, 'tcx> Struct { /// Find the path leading to a non-zero leaf field, starting from /// the given set of fields and recursing through aggregates. - pub fn non_zero_field_path(infcx: &InferCtxt<'a, 'gcx, 'tcx>, - fields: I) - -> Result, LayoutError<'gcx>> + /// Returns Some((path, source_path)) on success. + /// `path` is translated to memory order. `source_path` is not. + fn non_zero_field_paths(infcx: &InferCtxt<'a, 'gcx, 'tcx>, + fields: I, + permutation: Option<&[u32]>) + -> Result, LayoutError<'gcx>> where I: Iterator> { for (i, ty) in fields.enumerate() { - if let Some(mut path) = Struct::non_zero_field_in_type(infcx, ty)? { - path.push(i as u32); - return Ok(Some(path)); + if let Some((mut path, mut source_path)) = Struct::non_zero_field_in_type(infcx, ty)? { + source_path.push(i as u32); + let index = if let Some(p) = permutation { + p[i] as usize + } else { + i + }; + path.push(index as u32); + return Ok(Some((path, source_path))); } } Ok(None) @@ -705,16 +850,20 @@ impl<'a, 'gcx, 'tcx> Union { index, scapegoat); } + debug!("Union::extend field: {:?} {:?}", field, field.size(dl)); + if !self.packed { self.align = self.align.max(field.align(dl)); } self.min_size = cmp::max(self.min_size, field.size(dl)); } + debug!("Union::extend min-size: {:?}", self.min_size); + Ok(()) } - /// Get the size with trailing aligment padding. 
+ /// Get the size with trailing alignment padding. pub fn stride(&self) -> Size { self.min_size.abi_align(self.align) } @@ -824,7 +973,9 @@ pub enum Layout { nndiscr: u64, nonnull: Struct, // N.B. There is a 0 at the start, for LLVM GEP through a pointer. - discrfield: FieldPath + discrfield: FieldPath, + // Like discrfield, but in source order. For debuginfo. + discrfield_source: FieldPath } } @@ -878,6 +1029,7 @@ impl<'a, 'gcx, 'tcx> Layout { let dl = &tcx.data_layout; assert!(!ty.has_infer_types()); + let layout = match ty.sty { // Basic scalars. ty::TyBool => Scalar { value: Int(I1), non_zero: false }, @@ -899,7 +1051,11 @@ impl<'a, 'gcx, 'tcx> Layout { ty::TyFnPtr(_) => Scalar { value: Pointer, non_zero: true }, // The never type. - ty::TyNever => Univariant { variant: Struct::new(dl, false), non_zero: false }, + ty::TyNever => Univariant { + variant: Struct::new(dl, &vec![], &[], + StructKind::AlwaysSizedUnivariant, ty)?, + non_zero: false + }, // Potentially-fat pointers. ty::TyBox(pointee) | @@ -915,7 +1071,7 @@ impl<'a, 'gcx, 'tcx> Layout { ty::TySlice(_) | ty::TyStr => { Int(dl.ptr_sized_integer()) } - ty::TyTrait(_) => Pointer, + ty::TyDynamic(..) => Pointer, _ => return Err(LayoutError::Unknown(unsized_part)) }; FatPointer { metadata: meta, non_zero: non_zero } @@ -950,21 +1106,36 @@ impl<'a, 'gcx, 'tcx> Layout { // Odd unit types. ty::TyFnDef(..) => { Univariant { - variant: Struct::new(dl, false), + variant: Struct::new(dl, &vec![], + &[], StructKind::AlwaysSizedUnivariant, ty)?, non_zero: false } } - ty::TyTrait(_) => { - let mut unit = Struct::new(dl, false); + ty::TyDynamic(..) => { + let mut unit = Struct::new(dl, &vec![], &[], + StructKind::AlwaysSizedUnivariant, ty)?; unit.sized = false; Univariant { variant: unit, non_zero: false } } // Tuples and closures. - ty::TyClosure(_, ty::ClosureSubsts { upvar_tys: tys, .. }) | + ty::TyClosure(def_id, ref substs) => { + let tys = substs.upvar_tys(def_id, tcx); + let st = Struct::new(dl, + &tys.map(|ty| ty.layout(infcx)) + .collect::, _>>()?, + &[], + StructKind::AlwaysSizedUnivariant, ty)?; + Univariant { variant: st, non_zero: false } + } + ty::TyTuple(tys) => { - let mut st = Struct::new(dl, false); - st.extend(dl, tys.iter().map(|ty| ty.layout(infcx)), ty)?; + // FIXME(camlorn): if we ever allow unsized tuples, this needs to be checked. + // See the univariant case below to learn how. + let st = Struct::new(dl, + &tys.iter().map(|ty| ty.layout(infcx)) + .collect::, _>>()?, + &[], StructKind::AlwaysSizedUnivariant, ty)?; Univariant { variant: st, non_zero: false } } @@ -988,16 +1159,16 @@ impl<'a, 'gcx, 'tcx> Layout { // ADTs. ty::TyAdt(def, substs) => { - let hint = *tcx.lookup_repr_hints(def.did).get(0) - .unwrap_or(&attr::ReprAny); + let hints = &tcx.lookup_repr_hints(def.did)[..]; if def.variants.is_empty() { // Uninhabitable; represent as unit // (Typechecking will reject discriminant-sizing attrs.) 
- assert_eq!(hint, attr::ReprAny); + assert_eq!(hints.len(), 0); return success(Univariant { - variant: Struct::new(dl, false), + variant: Struct::new(dl, &vec![], + &hints[..], StructKind::AlwaysSizedUnivariant, ty)?, non_zero: false }); } @@ -1012,7 +1183,7 @@ impl<'a, 'gcx, 'tcx> Layout { if x > max { max = x; } } - let (discr, signed) = Integer::repr_discr(tcx, ty, hint, min, max); + let (discr, signed) = Integer::repr_discr(tcx, ty, &hints[..], min, max); return success(CEnum { discr: discr, signed: signed, @@ -1022,21 +1193,35 @@ impl<'a, 'gcx, 'tcx> Layout { }); } - if !def.is_enum() || def.variants.len() == 1 && hint == attr::ReprAny { + if !def.is_enum() || def.variants.len() == 1 && hints.is_empty() { // Struct, or union, or univariant enum equivalent to a struct. // (Typechecking will reject discriminant-sizing attrs.) + let kind = if def.is_enum() || def.variants[0].fields.len() == 0{ + StructKind::AlwaysSizedUnivariant + } else { + use middle::region::ROOT_CODE_EXTENT; + let param_env = tcx.construct_parameter_environment(DUMMY_SP, + def.did, ROOT_CODE_EXTENT); + let fields = &def.variants[0].fields; + let last_field = &fields[fields.len()-1]; + let always_sized = last_field.ty(tcx, param_env.free_substs) + .is_sized(tcx, ¶m_env, DUMMY_SP); + if !always_sized { StructKind::MaybeUnsizedUnivariant } + else { StructKind::AlwaysSizedUnivariant } + }; + let fields = def.variants[0].fields.iter().map(|field| { field.ty(tcx, substs).layout(infcx) - }); + }).collect::, _>>()?; let packed = tcx.lookup_packed(def.did); let layout = if def.is_union() { let mut un = Union::new(dl, packed); - un.extend(dl, fields, ty)?; + un.extend(dl, fields.iter().map(|&f| Ok(f)), ty)?; UntaggedUnion { variants: un } } else { - let mut st = Struct::new(dl, packed); - st.extend(dl, fields, ty)?; + let st = Struct::new(dl, &fields, &hints[..], + kind, ty)?; let non_zero = Some(def.did) == tcx.lang_items.non_zero(); Univariant { variant: st, non_zero: non_zero } }; @@ -1058,7 +1243,7 @@ impl<'a, 'gcx, 'tcx> Layout { v.fields.iter().map(|field| field.ty(tcx, substs)).collect::>() }).collect::>(); - if variants.len() == 2 && hint == attr::ReprAny { + if variants.len() == 2 && hints.is_empty() { // Nullable pointer optimization for discr in 0..2 { let other_fields = variants[1 - discr].iter().map(|ty| { @@ -1067,9 +1252,11 @@ impl<'a, 'gcx, 'tcx> Layout { if !Struct::would_be_zero_sized(dl, other_fields)? { continue; } - let path = Struct::non_zero_field_path(infcx, - variants[discr].iter().cloned())?; - let mut path = if let Some(p) = path { p } else { continue }; + let paths = Struct::non_zero_field_paths(infcx, + variants[discr].iter().cloned(), + None)?; + let (mut path, mut path_source) = if let Some(p) = paths { p } + else { continue }; // FIXME(eddyb) should take advantage of a newtype. if path == &[0] && variants[discr].len() == 1 { @@ -1086,14 +1273,25 @@ impl<'a, 'gcx, 'tcx> Layout { }); } + let st = Struct::new(dl, + &variants[discr].iter().map(|ty| ty.layout(infcx)) + .collect::, _>>()?, + &hints[..], StructKind::AlwaysSizedUnivariant, ty)?; + + // We have to fix the last element of path here. + let mut i = *path.last().unwrap(); + i = st.memory_index[i as usize]; + *path.last_mut().unwrap() = i; path.push(0); // For GEP through a pointer. 
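
The path bookkeeping here feeds the existing nullable-pointer layout: when one variant is entirely zero-sized and the other contains a field known to be non-zero, the enum reuses that field's zero value as the discriminant, and `discrfield` (now joined by the source-order `discrfield_source` for debuginfo) records where that field lives as a GEP-style path. The effect is visible from ordinary code; a minimal, self-contained demonstration, independent of this patch:

```rust
use std::mem::size_of;

fn main() {
    // A reference and a Box are non-null, so `None` can be encoded as the null
    // pointer and the Option needs no separate discriminant field.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());

    // With no non-zero field available, the discriminant must be stored,
    // so the Option grows.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
}
```
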
path.reverse(); - let mut st = Struct::new(dl, false); - st.extend(dl, variants[discr].iter().map(|ty| ty.layout(infcx)), ty)?; + path_source.push(0); + path_source.reverse(); + return success(StructWrappedNullablePointer { nndiscr: discr as u64, nonnull: st, - discrfield: path + discrfield: path, + discrfield_source: path_source }); } } @@ -1101,7 +1299,7 @@ impl<'a, 'gcx, 'tcx> Layout { // The general case. let discr_max = (variants.len() - 1) as i64; assert!(discr_max >= 0); - let (min_ity, _) = Integer::repr_discr(tcx, ty, hint, 0, discr_max); + let (min_ity, _) = Integer::repr_discr(tcx, ty, &hints[..], 0, discr_max); let mut align = dl.aggregate_align; let mut size = Size::from_bytes(0); @@ -1111,24 +1309,26 @@ impl<'a, 'gcx, 'tcx> Layout { // Create the set of structs that represent each variant // Use the minimum integer type we figured out above - let discr = Some(Scalar { value: Int(min_ity), non_zero: false }); + let discr = Scalar { value: Int(min_ity), non_zero: false }; let mut variants = variants.into_iter().map(|fields| { - let mut found_start = false; - let fields = fields.into_iter().map(|field| { - let field = field.layout(infcx)?; - if !found_start { - // Find the first field we can't move later - // to make room for a larger discriminant. - let field_align = field.align(dl); - if field.size(dl).bytes() != 0 || field_align.abi() != 1 { - start_align = start_align.min(field_align); - found_start = true; - } + let mut fields = fields.into_iter().map(|field| { + field.layout(infcx) + }).collect::, _>>()?; + fields.insert(0, &discr); + let st = Struct::new(dl, + &fields, + &hints[..], StructKind::EnumVariant, ty)?; + // Find the first field we can't move later + // to make room for a larger discriminant. + // It is important to skip the first field. + for i in st.field_index_by_increasing_offset().skip(1) { + let field = fields[i]; + let field_align = field.align(dl); + if field.size(dl).bytes() != 0 || field_align.abi() != 1 { + start_align = start_align.min(field_align); + break; } - Ok(field) - }); - let mut st = Struct::new(dl, false); - st.extend(dl, discr.iter().map(Ok).chain(fields), ty)?; + } size = cmp::max(size, st.min_size); align = align.max(st.align); Ok(st) @@ -1162,11 +1362,12 @@ impl<'a, 'gcx, 'tcx> Layout { let old_ity_size = Int(min_ity).size(dl); let new_ity_size = Int(ity).size(dl); for variant in &mut variants { - for offset in &mut variant.offsets[1..] { - if *offset > old_ity_size { - break; + for i in variant.offsets.iter_mut() { + // The first field is the discrimminant, at offset 0. + // These aren't in order, and we need to skip it. + if *i <= old_ity_size && *i > Size::from_bytes(0) { + *i = new_ity_size; } - *offset = new_ity_size; } // We might be making the struct larger. if variant.min_size <= old_ity_size { diff --git a/src/librustc/ty/maps.rs b/src/librustc/ty/maps.rs index cad87081a9..42b3544421 100644 --- a/src/librustc/ty/maps.rs +++ b/src/librustc/ty/maps.rs @@ -16,7 +16,7 @@ use ty::{self, Ty}; use std::cell::RefCell; use std::marker::PhantomData; use std::rc::Rc; -use syntax::{attr, ast}; +use syntax::attr; macro_rules! dep_map_ty { ($ty_name:ident : $node_name:ident ($key:ty) -> $value:ty) => { @@ -32,18 +32,17 @@ macro_rules! dep_map_ty { } } -dep_map_ty! { ImplOrTraitItems: ImplOrTraitItems(DefId) -> ty::ImplOrTraitItem<'tcx> } -dep_map_ty! { Tcache: ItemSignature(DefId) -> Ty<'tcx> } +dep_map_ty! { AssociatedItems: AssociatedItems(DefId) -> ty::AssociatedItem } +dep_map_ty! { Types: ItemSignature(DefId) -> Ty<'tcx> } dep_map_ty! 
{ Generics: ItemSignature(DefId) -> &'tcx ty::Generics<'tcx> } dep_map_ty! { Predicates: ItemSignature(DefId) -> ty::GenericPredicates<'tcx> } dep_map_ty! { SuperPredicates: ItemSignature(DefId) -> ty::GenericPredicates<'tcx> } -dep_map_ty! { ImplOrTraitItemDefIds: ImplOrTraitItemDefIds(DefId) -> Rc> } +dep_map_ty! { AssociatedItemDefIds: AssociatedItemDefIds(DefId) -> Rc> } dep_map_ty! { ImplTraitRefs: ItemSignature(DefId) -> Option> } -dep_map_ty! { TraitDefs: ItemSignature(DefId) -> &'tcx ty::TraitDef<'tcx> } -dep_map_ty! { AdtDefs: ItemSignature(DefId) -> ty::AdtDefMaster<'tcx> } +dep_map_ty! { TraitDefs: ItemSignature(DefId) -> &'tcx ty::TraitDef } +dep_map_ty! { AdtDefs: ItemSignature(DefId) -> &'tcx ty::AdtDef } +dep_map_ty! { AdtSizedConstraint: SizedConstraint(DefId) -> Ty<'tcx> } dep_map_ty! { ItemVariances: ItemSignature(DefId) -> Rc> } dep_map_ty! { InherentImpls: InherentImpls(DefId) -> Vec } -dep_map_ty! { TraitItems: TraitItems(DefId) -> Rc>> } dep_map_ty! { ReprHints: ReprHints(DefId) -> Rc> } -dep_map_ty! { InlinedClosures: Hir(DefId) -> ast::NodeId } dep_map_ty! { Mir: Mir(DefId) -> &'tcx RefCell> } diff --git a/src/librustc/ty/mod.rs b/src/librustc/ty/mod.rs index 2c15f08e89..dd1a6caa02 100644 --- a/src/librustc/ty/mod.rs +++ b/src/librustc/ty/mod.rs @@ -10,9 +10,8 @@ pub use self::Variance::*; pub use self::DtorKind::*; -pub use self::ImplOrTraitItemContainer::*; +pub use self::AssociatedItemContainer::*; pub use self::BorrowKind::*; -pub use self::ImplOrTraitItem::*; pub use self::IntVarValue::*; pub use self::LvaluePreference::*; pub use self::fold::TypeFoldable; @@ -20,7 +19,7 @@ pub use self::fold::TypeFoldable; use dep_graph::{self, DepNode}; use hir::map as ast_map; use middle; -use hir::def::{Def, CtorKind, PathResolution, ExportMap}; +use hir::def::{Def, CtorKind, ExportMap}; use hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX, LOCAL_CRATE}; use middle::lang_items::{FnTraitLangItem, FnMutTraitLangItem, FnOnceTraitLangItem}; use middle::region::{CodeExtent, ROOT_CODE_EXTENT}; @@ -30,8 +29,7 @@ use ty; use ty::subst::{Subst, Substs}; use ty::walk::TypeWalker; use util::common::MemoizationMap; -use util::nodemap::NodeSet; -use util::nodemap::FnvHashMap; +use util::nodemap::{NodeSet, NodeMap, FxHashMap, FxHashSet}; use serialize::{self, Encodable, Encoder}; use std::borrow::Cow; @@ -45,18 +43,18 @@ use std::vec::IntoIter; use std::mem; use syntax::ast::{self, Name, NodeId}; use syntax::attr; -use syntax::parse::token::{self, InternedString}; +use syntax::symbol::{Symbol, InternedString}; use syntax_pos::{DUMMY_SP, Span}; use rustc_const_math::ConstInt; +use rustc_data_structures::accumulate_vec::IntoIter as AccIntoIter; use hir; -use hir::intravisit::Visitor; +use hir::itemlikevisit::ItemLikeVisitor; pub use self::sty::{Binder, DebruijnIndex}; -pub use self::sty::{BuiltinBound, BuiltinBounds}; pub use self::sty::{BareFnTy, FnSig, PolyFnSig}; -pub use self::sty::{ClosureTy, InferTy, ParamTy, ProjectionTy, TraitObject}; +pub use self::sty::{ClosureTy, InferTy, ParamTy, ProjectionTy, ExistentialPredicate}; pub use self::sty::{ClosureSubsts, TypeAndMut}; pub use self::sty::{TraitRef, TypeVariants, PolyTraitRef}; pub use self::sty::{ExistentialTraitRef, PolyExistentialTraitRef}; @@ -69,11 +67,6 @@ pub use self::sty::InferTy::*; pub use self::sty::Region::*; pub use self::sty::TypeVariants::*; -pub use self::sty::BuiltinBound::Send as BoundSend; -pub use self::sty::BuiltinBound::Sized as BoundSized; -pub use self::sty::BuiltinBound::Copy as BoundCopy; -pub use 
self::sty::BuiltinBound::Sync as BoundSync; - pub use self::contents::TypeContents; pub use self::context::{TyCtxt, tls}; pub use self::context::{CtxtArenas, Lift, Tables}; @@ -100,7 +93,6 @@ pub mod util; mod contents; mod context; mod flags; -mod ivar; mod structural_impls; mod sty; @@ -111,12 +103,13 @@ pub type Disr = ConstInt; /// The complete set of all analyses described in this module. This is /// produced by the driver and fed to trans and later passes. #[derive(Clone)] -pub struct CrateAnalysis<'a> { +pub struct CrateAnalysis<'tcx> { pub export_map: ExportMap, pub access_levels: middle::privacy::AccessLevels, pub reachable: NodeSet, - pub name: &'a str, + pub name: String, pub glob_map: Option, + pub hir_ty_to_ty: NodeMap>, } #[derive(Copy, Clone)] @@ -135,12 +128,12 @@ impl DtorKind { } #[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub enum ImplOrTraitItemContainer { +pub enum AssociatedItemContainer { TraitContainer(DefId), ImplContainer(DefId), } -impl ImplOrTraitItemContainer { +impl AssociatedItemContainer { pub fn id(&self) -> DefId { match *self { TraitContainer(id) => id, @@ -170,9 +163,9 @@ impl<'a, 'gcx, 'tcx> ImplHeader<'tcx> { let header = ImplHeader { impl_def_id: impl_def_id, - self_ty: tcx.lookup_item_type(impl_def_id).ty, + self_ty: tcx.item_type(impl_def_id), trait_ref: tcx.impl_trait_ref(impl_def_id), - predicates: tcx.lookup_predicates(impl_def_id).predicates + predicates: tcx.item_predicates(impl_def_id).predicates }.subst(tcx, impl_substs); let traits::Normalized { value: mut header, obligations } = @@ -183,58 +176,33 @@ impl<'a, 'gcx, 'tcx> ImplHeader<'tcx> { } } -#[derive(Clone)] -pub enum ImplOrTraitItem<'tcx> { - ConstTraitItem(Rc>), - MethodTraitItem(Rc>), - TypeTraitItem(Rc>), +#[derive(Copy, Clone, Debug)] +pub struct AssociatedItem { + pub def_id: DefId, + pub name: Name, + pub kind: AssociatedKind, + pub vis: Visibility, + pub defaultness: hir::Defaultness, + pub container: AssociatedItemContainer, + + /// Whether this is a method with an explicit self + /// as its first argument, allowing method calls. 
+ pub method_has_self_argument: bool, } -impl<'tcx> ImplOrTraitItem<'tcx> { +#[derive(Copy, Clone, PartialEq, Eq, Debug, RustcEncodable, RustcDecodable)] +pub enum AssociatedKind { + Const, + Method, + Type +} + +impl AssociatedItem { pub fn def(&self) -> Def { - match *self { - ConstTraitItem(ref associated_const) => Def::AssociatedConst(associated_const.def_id), - MethodTraitItem(ref method) => Def::Method(method.def_id), - TypeTraitItem(ref ty) => Def::AssociatedTy(ty.def_id), - } - } - - pub fn def_id(&self) -> DefId { - match *self { - ConstTraitItem(ref associated_const) => associated_const.def_id, - MethodTraitItem(ref method) => method.def_id, - TypeTraitItem(ref associated_type) => associated_type.def_id, - } - } - - pub fn name(&self) -> Name { - match *self { - ConstTraitItem(ref associated_const) => associated_const.name, - MethodTraitItem(ref method) => method.name, - TypeTraitItem(ref associated_type) => associated_type.name, - } - } - - pub fn vis(&self) -> Visibility { - match *self { - ConstTraitItem(ref associated_const) => associated_const.vis, - MethodTraitItem(ref method) => method.vis, - TypeTraitItem(ref associated_type) => associated_type.vis, - } - } - - pub fn container(&self) -> ImplOrTraitItemContainer { - match *self { - ConstTraitItem(ref associated_const) => associated_const.container, - MethodTraitItem(ref method) => method.container, - TypeTraitItem(ref associated_type) => associated_type.container, - } - } - - pub fn as_opt_method(&self) -> Option>> { - match *self { - MethodTraitItem(ref m) => Some((*m).clone()), - _ => None, + match self.kind { + AssociatedKind::Const => Def::AssociatedConst(self.def_id), + AssociatedKind::Method => Def::Method(self.def_id), + AssociatedKind::Type => Def::AssociatedTy(self.def_id), } } } @@ -272,7 +240,7 @@ impl Visibility { match *visibility { hir::Public => Visibility::Public, hir::Visibility::Crate => Visibility::Restricted(ast::CRATE_NODE_ID), - hir::Visibility::Restricted { id, .. } => match tcx.expect_def(id) { + hir::Visibility::Restricted { ref path, .. } => match path.def { // If there is no resolution, `resolve` will have already reported an error, so // assume that the visibility is public to avoid reporting more privacy errors. 
Def::Err => Visibility::Public, @@ -308,64 +276,6 @@ impl Visibility { } } -#[derive(Clone, Debug)] -pub struct Method<'tcx> { - pub name: Name, - pub generics: &'tcx Generics<'tcx>, - pub predicates: GenericPredicates<'tcx>, - pub fty: &'tcx BareFnTy<'tcx>, - pub explicit_self: ExplicitSelfCategory<'tcx>, - pub vis: Visibility, - pub defaultness: hir::Defaultness, - pub has_body: bool, - pub def_id: DefId, - pub container: ImplOrTraitItemContainer, -} - -impl<'tcx> Method<'tcx> { - pub fn container_id(&self) -> DefId { - match self.container { - TraitContainer(id) => id, - ImplContainer(id) => id, - } - } -} - -impl<'tcx> PartialEq for Method<'tcx> { - #[inline] - fn eq(&self, other: &Self) -> bool { self.def_id == other.def_id } -} - -impl<'tcx> Eq for Method<'tcx> {} - -impl<'tcx> Hash for Method<'tcx> { - #[inline] - fn hash(&self, s: &mut H) { - self.def_id.hash(s) - } -} - -#[derive(Clone, Copy, Debug)] -pub struct AssociatedConst<'tcx> { - pub name: Name, - pub ty: Ty<'tcx>, - pub vis: Visibility, - pub defaultness: hir::Defaultness, - pub def_id: DefId, - pub container: ImplOrTraitItemContainer, - pub has_value: bool -} - -#[derive(Clone, Copy, Debug)] -pub struct AssociatedType<'tcx> { - pub name: Name, - pub ty: Option>, - pub vis: Visibility, - pub defaultness: hir::Defaultness, - pub def_id: DefId, - pub container: ImplOrTraitItemContainer, -} - #[derive(Clone, PartialEq, RustcDecodable, RustcEncodable, Copy)] pub enum Variance { Covariant, // T <: T iff A <: B -- e.g., function return type @@ -418,7 +328,7 @@ impl MethodCall { // maps from an expression id that corresponds to a method call to the details // of the method to be invoked -pub type MethodMap<'tcx> = FnvHashMap>; +pub type MethodMap<'tcx> = FxHashMap>; // Contains information needed to resolve types and (in the future) look up // the types of AST nodes. @@ -650,7 +560,7 @@ pub struct UpvarBorrow<'tcx> { pub region: &'tcx ty::Region, } -pub type UpvarCaptureMap<'tcx> = FnvHashMap>; +pub type UpvarCaptureMap<'tcx> = FxHashMap>; #[derive(Copy, Clone)] pub struct ClosureUpvar<'tcx> { @@ -712,10 +622,6 @@ pub struct RegionParameterDef<'tcx> { } impl<'tcx> RegionParameterDef<'tcx> { - pub fn to_early_bound_region(&self) -> ty::Region { - ty::ReEarlyBound(self.to_early_bound_region_data()) - } - pub fn to_early_bound_region_data(&self) -> ty::EarlyBoundRegion { ty::EarlyBoundRegion { index: self.index, @@ -791,7 +697,7 @@ impl<'a, 'gcx, 'tcx> GenericPredicates<'tcx> { instantiated: &mut InstantiatedPredicates<'tcx>, substs: &Substs<'tcx>) { if let Some(def_id) = self.parent { - tcx.lookup_predicates(def_id).instantiate_into(tcx, instantiated, substs); + tcx.item_predicates(def_id).instantiate_into(tcx, instantiated, substs); } instantiated.predicates.extend(self.predicates.iter().map(|p| p.subst(tcx, substs))) } @@ -1251,10 +1157,10 @@ pub struct ParameterEnvironment<'tcx> { pub free_id_outlive: CodeExtent, /// A cache for `moves_by_default`. 
- pub is_copy_cache: RefCell, bool>>, + pub is_copy_cache: RefCell, bool>>, /// A cache for `type_is_sized` - pub is_sized_cache: RefCell, bool>>, + pub is_sized_cache: RefCell, bool>>, } impl<'a, 'tcx> ParameterEnvironment<'tcx> { @@ -1267,8 +1173,8 @@ impl<'a, 'tcx> ParameterEnvironment<'tcx> { implicit_region_bound: self.implicit_region_bound, caller_bounds: caller_bounds, free_id_outlive: self.free_id_outlive, - is_copy_cache: RefCell::new(FnvHashMap()), - is_sized_cache: RefCell::new(FnvHashMap()), + is_copy_cache: RefCell::new(FxHashMap()), + is_sized_cache: RefCell::new(FxHashMap()), } } @@ -1288,19 +1194,10 @@ impl<'a, 'tcx> ParameterEnvironment<'tcx> { tcx.region_maps.item_extent(id)) } hir::ImplItemKind::Method(_, ref body) => { - let method_def_id = tcx.map.local_def_id(id); - match tcx.impl_or_trait_item(method_def_id) { - MethodTraitItem(ref method_ty) => { - tcx.construct_parameter_environment( - impl_item.span, - method_ty.def_id, - tcx.region_maps.call_site_extent(id, body.id)) - } - _ => { - bug!("ParameterEnvironment::for_item(): \ - got non-method item from impl method?!") - } - } + tcx.construct_parameter_environment( + impl_item.span, + tcx.map.local_def_id(id), + tcx.region_maps.call_site_extent(id, body.node_id())) } } } @@ -1319,40 +1216,30 @@ impl<'a, 'tcx> ParameterEnvironment<'tcx> { // Use call-site for extent (unless this is a // trait method with no default; then fallback // to the method id). - let method_def_id = tcx.map.local_def_id(id); - match tcx.impl_or_trait_item(method_def_id) { - MethodTraitItem(ref method_ty) => { - let extent = if let Some(ref body) = *body { - // default impl: use call_site extent as free_id_outlive bound. - tcx.region_maps.call_site_extent(id, body.id) - } else { - // no default impl: use item extent as free_id_outlive bound. - tcx.region_maps.item_extent(id) - }; - tcx.construct_parameter_environment( - trait_item.span, - method_ty.def_id, - extent) - } - _ => { - bug!("ParameterEnvironment::for_item(): \ - got non-method item from provided \ - method?!") - } - } + let extent = if let Some(body_id) = *body { + // default impl: use call_site extent as free_id_outlive bound. + tcx.region_maps.call_site_extent(id, body_id.node_id()) + } else { + // no default impl: use item extent as free_id_outlive bound. + tcx.region_maps.item_extent(id) + }; + tcx.construct_parameter_environment( + trait_item.span, + tcx.map.local_def_id(id), + extent) } } } Some(ast_map::NodeItem(item)) => { match item.node { - hir::ItemFn(.., ref body) => { + hir::ItemFn(.., body_id) => { // We assume this is a function. let fn_def_id = tcx.map.local_def_id(id); tcx.construct_parameter_environment( item.span, fn_def_id, - tcx.region_maps.call_site_extent(id, body.id)) + tcx.region_maps.call_site_extent(id, body_id.node_id())) } hir::ItemEnum(..) | hir::ItemStruct(..) | @@ -1382,8 +1269,13 @@ impl<'a, 'tcx> ParameterEnvironment<'tcx> { } Some(ast_map::NodeExpr(expr)) => { // This is a convenience to allow closures to work. - if let hir::ExprClosure(..) 
= expr.node { - ParameterEnvironment::for_item(tcx, tcx.map.get_parent(id)) + if let hir::ExprClosure(.., body, _) = expr.node { + let def_id = tcx.map.local_def_id(id); + let base_def_id = tcx.closure_base_def_id(def_id); + tcx.construct_parameter_environment( + expr.span, + base_def_id, + tcx.region_maps.call_site_extent(id, body.node_id())) } else { tcx.empty_parameter_environment() } @@ -1403,31 +1295,6 @@ impl<'a, 'tcx> ParameterEnvironment<'tcx> { } } -/// A "type scheme", in ML terminology, is a type combined with some -/// set of generic types that the type is, well, generic over. In Rust -/// terms, it is the "type" of a fn item or struct -- this type will -/// include various generic parameters that must be substituted when -/// the item/struct is referenced. That is called converting the type -/// scheme to a monotype. -/// -/// - `generics`: the set of type parameters and their bounds -/// - `ty`: the base types, which may reference the parameters defined -/// in `generics` -/// -/// Note that TypeSchemes are also sometimes called "polytypes" (and -/// in fact this struct used to carry that name, so you may find some -/// stray references in a comment or something). We try to reserve the -/// "poly" prefix to refer to higher-ranked things, as in -/// `PolyTraitRef`. -/// -/// Note that each item also comes with predicates, see -/// `lookup_predicates`. -#[derive(Clone, Debug)] -pub struct TypeScheme<'tcx> { - pub generics: &'tcx Generics<'tcx>, - pub ty: Ty<'tcx>, -} - bitflags! { flags AdtFlags: u32 { const NO_ADT_FLAGS = 0, @@ -1441,92 +1308,64 @@ bitflags! { } } -pub type AdtDef<'tcx> = &'tcx AdtDefData<'tcx, 'static>; -pub type VariantDef<'tcx> = &'tcx VariantDefData<'tcx, 'static>; -pub type FieldDef<'tcx> = &'tcx FieldDefData<'tcx, 'static>; - -// See comment on AdtDefData for explanation -pub type AdtDefMaster<'tcx> = &'tcx AdtDefData<'tcx, 'tcx>; -pub type VariantDefMaster<'tcx> = &'tcx VariantDefData<'tcx, 'tcx>; -pub type FieldDefMaster<'tcx> = &'tcx FieldDefData<'tcx, 'tcx>; - -pub struct VariantDefData<'tcx, 'container: 'tcx> { +pub struct VariantDef { /// The variant's DefId. If this is a tuple-like struct, /// this is the DefId of the struct's ctor. pub did: DefId, pub name: Name, // struct's name if this is a struct pub disr_val: Disr, - pub fields: Vec>, + pub fields: Vec, pub ctor_kind: CtorKind, } -pub struct FieldDefData<'tcx, 'container: 'tcx> { - /// The field's DefId. NOTE: the fields of tuple-like enum variants - /// are not real items, and don't have entries in tcache etc. +pub struct FieldDef { pub did: DefId, pub name: Name, pub vis: Visibility, - /// TyIVar is used here to allow for variance (see the doc at - /// AdtDefData). - /// - /// Note: direct accesses to `ty` must also add dep edges. - ty: ivar::TyIVar<'tcx, 'container> } /// The definition of an abstract data type - a struct or enum. /// /// These are all interned (by intern_adt_def) into the adt_defs /// table. -/// -/// Because of the possibility of nested tcx-s, this type -/// needs 2 lifetimes: the traditional variant lifetime ('tcx) -/// bounding the lifetime of the inner types is of course necessary. -/// However, it is not sufficient - types from a child tcx must -/// not be leaked into the master tcx by being stored in an AdtDefData. -/// -/// The 'container lifetime ensures that by outliving the container -/// tcx and preventing shorter-lived types from being inserted. 
When -/// write access is not needed, the 'container lifetime can be -/// erased to 'static, which can be done by the AdtDef wrapper. -pub struct AdtDefData<'tcx, 'container: 'tcx> { +pub struct AdtDef { pub did: DefId, - pub variants: Vec>, + pub variants: Vec, destructor: Cell>, - flags: Cell, - sized_constraint: ivar::TyIVar<'tcx, 'container>, + flags: Cell } -impl<'tcx, 'container> PartialEq for AdtDefData<'tcx, 'container> { - // AdtDefData are always interned and this is part of TyS equality +impl PartialEq for AdtDef { + // AdtDef are always interned and this is part of TyS equality #[inline] fn eq(&self, other: &Self) -> bool { self as *const _ == other as *const _ } } -impl<'tcx, 'container> Eq for AdtDefData<'tcx, 'container> {} +impl Eq for AdtDef {} -impl<'tcx, 'container> Hash for AdtDefData<'tcx, 'container> { +impl Hash for AdtDef { #[inline] fn hash(&self, s: &mut H) { - (self as *const AdtDefData).hash(s) + (self as *const AdtDef).hash(s) } } -impl<'tcx> serialize::UseSpecializedEncodable for AdtDef<'tcx> { +impl<'tcx> serialize::UseSpecializedEncodable for &'tcx AdtDef { fn default_encode(&self, s: &mut S) -> Result<(), S::Error> { self.did.encode(s) } } -impl<'tcx> serialize::UseSpecializedDecodable for AdtDef<'tcx> {} +impl<'tcx> serialize::UseSpecializedDecodable for &'tcx AdtDef {} #[derive(Copy, Clone, Debug, Eq, PartialEq)] pub enum AdtKind { Struct, Union, Enum } -impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { +impl<'a, 'gcx, 'tcx> AdtDef { fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>, did: DefId, kind: AdtKind, - variants: Vec>) -> Self { + variants: Vec) -> Self { let mut flags = AdtFlags::NO_ADT_FLAGS; let attrs = tcx.get_attrs(did); if attr::contains_name(&attrs, "fundamental") { @@ -1543,12 +1382,11 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { AdtKind::Union => flags = flags | AdtFlags::IS_UNION, AdtKind::Struct => {} } - AdtDefData { + AdtDef { did: did, variants: variants, flags: Cell::new(flags), destructor: Cell::new(None), - sized_constraint: ivar::TyIVar::new(), } } @@ -1559,6 +1397,20 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { self.flags.set(self.flags.get() | AdtFlags::IS_DTORCK_VALID) } + #[inline] + pub fn is_uninhabited_recurse(&self, + visited: &mut FxHashSet<(DefId, &'tcx Substs<'tcx>)>, + block: Option, + tcx: TyCtxt<'a, 'gcx, 'tcx>, + substs: &'tcx Substs<'tcx>) -> bool { + if !visited.insert((self.did, substs)) { + return false; + }; + self.variants.iter().all(|v| { + v.is_uninhabited_recurse(visited, block, tcx, substs, self.is_union()) + }) + } + #[inline] pub fn is_struct(&self) -> bool { !self.is_union() && !self.is_enum() @@ -1638,37 +1490,21 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { /// Asserts this is a struct and returns the struct's unique /// variant. - pub fn struct_variant(&self) -> &VariantDefData<'gcx, 'container> { + pub fn struct_variant(&self) -> &VariantDef { assert!(!self.is_enum()); &self.variants[0] } - #[inline] - pub fn type_scheme(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> TypeScheme<'gcx> { - tcx.lookup_item_type(self.did) - } - #[inline] pub fn predicates(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> GenericPredicates<'gcx> { - tcx.lookup_predicates(self.did) + tcx.item_predicates(self.did) } /// Returns an iterator over all fields contained /// by this ADT. 
#[inline] - pub fn all_fields(&self) -> - iter::FlatMap< - slice::Iter>, - slice::Iter>, - for<'s> fn(&'s VariantDefData<'gcx, 'container>) - -> slice::Iter<'s, FieldDefData<'gcx, 'container>> - > { - self.variants.iter().flat_map(VariantDefData::fields_iter) - } - - #[inline] - pub fn is_empty(&self) -> bool { - self.variants.is_empty() + pub fn all_fields<'s>(&'s self) -> impl Iterator { + self.variants.iter().flat_map(|v| v.fields.iter()) } #[inline] @@ -1681,7 +1517,7 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { self.variants.iter().all(|v| v.fields.is_empty()) } - pub fn variant_with_id(&self, vid: DefId) -> &VariantDefData<'gcx, 'container> { + pub fn variant_with_id(&self, vid: DefId) -> &VariantDef { self.variants .iter() .find(|v| v.did == vid) @@ -1695,7 +1531,7 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { .expect("variant_index_with_id: unknown variant") } - pub fn variant_of_def(&self, def: Def) -> &VariantDefData<'gcx, 'container> { + pub fn variant_of_def(&self, def: Def) -> &VariantDef { match def { Def::Variant(vid) | Def::VariantCtor(vid, ..) => self.variant_with_id(vid), Def::Struct(..) | Def::StructCtor(..) | Def::Union(..) | @@ -1718,9 +1554,7 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'gcx, 'container> { None => NoDtor, } } -} -impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'tcx, 'container> { /// Returns a simpler type such that `Self: Sized` if and only /// if that type is Sized, or `TyErr` if this type is recursive. /// @@ -1739,19 +1573,9 @@ impl<'a, 'gcx, 'tcx, 'container> AdtDefData<'tcx, 'container> { /// Due to normalization being eager, this applies even if /// the associated type is behind a pointer, e.g. issue #31299. pub fn sized_constraint(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Ty<'tcx> { - match self.sized_constraint.get(DepNode::SizedConstraint(self.did)) { - None => { - let global_tcx = tcx.global_tcx(); - let this = global_tcx.lookup_adt_def_master(self.did); - this.calculate_sized_constraint_inner(global_tcx, &mut Vec::new()); - self.sized_constraint(tcx) - } - Some(ty) => ty - } + self.calculate_sized_constraint_inner(tcx.global_tcx(), &mut Vec::new()) } -} -impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { /// Calculates the Sized-constraint. /// /// As the Sized-constraint of enums can be a *set* of types, @@ -1767,42 +1591,41 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { /// such. /// - a TyError, if a type contained itself. The representability /// check should catch this case. - fn calculate_sized_constraint_inner(&'tcx self, + fn calculate_sized_constraint_inner(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, - stack: &mut Vec>) + stack: &mut Vec) + -> Ty<'tcx> { - let dep_node = || DepNode::SizedConstraint(self.did); + if let Some(ty) = tcx.adt_sized_constraint.borrow().get(&self.did) { + return ty; + } // Follow the memoization pattern: push the computation of // DepNode::SizedConstraint as our current task. - let _task = tcx.dep_graph.in_task(dep_node()); - if self.sized_constraint.untracked_get().is_some() { - // --------------- - // can skip the dep-graph read since we just pushed the task - return; - } + let _task = tcx.dep_graph.in_task(DepNode::SizedConstraint(self.did)); - if stack.contains(&self) { + if stack.contains(&self.did) { debug!("calculate_sized_constraint: {:?} is recursive", self); // This should be reported as an error by `check_representable`. // // Consider the type as Sized in the meanwhile to avoid // further errors. 
- self.sized_constraint.fulfill(dep_node(), tcx.types.err); - return; + tcx.adt_sized_constraint.borrow_mut().insert(self.did, tcx.types.err); + return tcx.types.err; } - stack.push(self); + stack.push(self.did); let tys : Vec<_> = self.variants.iter().flat_map(|v| { v.fields.last() }).flat_map(|f| { - self.sized_constraint_for_ty(tcx, stack, f.unsubst_ty()) + let ty = tcx.item_type(f.did); + self.sized_constraint_for_ty(tcx, stack, ty) }).collect(); let self_ = stack.pop().unwrap(); - assert_eq!(self_, self); + assert_eq!(self_, self.did); let ty = match tys.len() { _ if tys.references_error() => tcx.types.err, @@ -1811,24 +1634,26 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { _ => tcx.intern_tup(&tys[..]) }; - match self.sized_constraint.get(dep_node()) { + let old = tcx.adt_sized_constraint.borrow().get(&self.did).cloned(); + match old { Some(old_ty) => { debug!("calculate_sized_constraint: {:?} recurred", self); - assert_eq!(old_ty, tcx.types.err) + assert_eq!(old_ty, tcx.types.err); + old_ty } None => { debug!("calculate_sized_constraint: {:?} => {:?}", self, ty); - self.sized_constraint.fulfill(dep_node(), ty) + tcx.adt_sized_constraint.borrow_mut().insert(self.did, ty); + ty } } } - fn sized_constraint_for_ty( - &'tcx self, - tcx: TyCtxt<'a, 'tcx, 'tcx>, - stack: &mut Vec>, - ty: Ty<'tcx> - ) -> Vec> { + fn sized_constraint_for_ty(&self, + tcx: TyCtxt<'a, 'tcx, 'tcx>, + stack: &mut Vec, + ty: Ty<'tcx>) + -> Vec> { let result = match ty.sty { TyBool | TyChar | TyInt(..) | TyUint(..) | TyFloat(..) | TyBox(..) | TyRawPtr(..) | TyRef(..) | TyFnDef(..) | TyFnPtr(_) | @@ -1836,7 +1661,7 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { vec![] } - TyStr | TyTrait(..) | TySlice(_) | TyError => { + TyStr | TyDynamic(..) | TySlice(_) | TyError => { // these are never sized - return the target type vec![ty] } @@ -1850,12 +1675,9 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { TyAdt(adt, substs) => { // recursive case - let adt = tcx.lookup_adt_def_master(adt.did); - adt.calculate_sized_constraint_inner(tcx, stack); let adt_ty = - adt.sized_constraint - .unwrap(DepNode::SizedConstraint(adt.did)) - .subst(tcx, substs); + adt.calculate_sized_constraint_inner(tcx, stack) + .subst(tcx, substs); debug!("sized_constraint_for_ty({:?}) intermediate = {:?}", ty, adt_ty); if let ty::TyTuple(ref tys) = adt_ty.sty { @@ -1886,7 +1708,7 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { def_id: sized_trait, substs: tcx.mk_substs_trait(ty, &[]) }).to_predicate(); - let predicates = tcx.lookup_predicates(self.did).predicates; + let predicates = tcx.item_predicates(self.did).predicates; if predicates.into_iter().any(|p| p == sized_predicate) { vec![] } else { @@ -1904,16 +1726,11 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> { } } -impl<'tcx, 'container> VariantDefData<'tcx, 'container> { - #[inline] - fn fields_iter(&self) -> slice::Iter> { - self.fields.iter() - } - +impl<'a, 'gcx, 'tcx> VariantDef { #[inline] pub fn find_field_named(&self, name: ast::Name) - -> Option<&FieldDefData<'tcx, 'container>> { + -> Option<&FieldDef> { self.fields.iter().find(|f| f.name == name) } @@ -1925,33 +1742,38 @@ impl<'tcx, 'container> VariantDefData<'tcx, 'container> { } #[inline] - pub fn field_named(&self, name: ast::Name) -> &FieldDefData<'tcx, 'container> { + pub fn field_named(&self, name: ast::Name) -> &FieldDef { self.find_field_named(name).unwrap() } + + #[inline] + pub fn is_uninhabited_recurse(&self, + visited: &mut FxHashSet<(DefId, &'tcx Substs<'tcx>)>, + block: Option, + tcx: TyCtxt<'a, 'gcx, 'tcx>, + substs: &'tcx Substs<'tcx>, + 
is_union: bool) -> bool { + if is_union { + self.fields.iter().all(|f| f.is_uninhabited_recurse(visited, block, tcx, substs)) + } else { + self.fields.iter().any(|f| f.is_uninhabited_recurse(visited, block, tcx, substs)) + } + } } -impl<'a, 'gcx, 'tcx, 'container> FieldDefData<'tcx, 'container> { - pub fn new(did: DefId, - name: Name, - vis: Visibility) -> Self { - FieldDefData { - did: did, - name: name, - vis: vis, - ty: ivar::TyIVar::new() - } - } - +impl<'a, 'gcx, 'tcx> FieldDef { pub fn ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, subst: &Substs<'tcx>) -> Ty<'tcx> { - self.unsubst_ty().subst(tcx, subst) + tcx.item_type(self.did).subst(tcx, subst) } - pub fn unsubst_ty(&self) -> Ty<'tcx> { - self.ty.unwrap(DepNode::FieldTy(self.did)) - } - - pub fn fulfill_ty(&self, ty: Ty<'container>) { - self.ty.fulfill(DepNode::FieldTy(self.did), ty); + #[inline] + pub fn is_uninhabited_recurse(&self, + visited: &mut FxHashSet<(DefId, &'tcx Substs<'tcx>)>, + block: Option, + tcx: TyCtxt<'a, 'gcx, 'tcx>, + substs: &'tcx Substs<'tcx>) -> bool { + block.map_or(true, |b| self.vis.is_accessible_from(b, &tcx.map)) && + self.ty(tcx, substs).is_uninhabited_recurse(visited, block, tcx) } } @@ -1974,18 +1796,14 @@ pub enum ClosureKind { impl<'a, 'tcx> ClosureKind { pub fn trait_did(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>) -> DefId { - let result = match *self { - ClosureKind::Fn => tcx.lang_items.require(FnTraitLangItem), + match *self { + ClosureKind::Fn => tcx.require_lang_item(FnTraitLangItem), ClosureKind::FnMut => { - tcx.lang_items.require(FnMutTraitLangItem) + tcx.require_lang_item(FnMutTraitLangItem) } ClosureKind::FnOnce => { - tcx.lang_items.require(FnOnceTraitLangItem) + tcx.require_lang_item(FnOnceTraitLangItem) } - }; - match result { - Ok(trait_did) => trait_did, - Err(err) => tcx.sess.fatal(&err[..]), } } @@ -2022,7 +1840,7 @@ impl<'tcx> TyS<'tcx> { /// Iterator that walks the immediate children of `self`. Hence /// `Foo, u32>` yields the sequence `[Bar, u32]` /// (but not `i32`, like `walk`). - pub fn walk_shallow(&'tcx self) -> IntoIter> { + pub fn walk_shallow(&'tcx self) -> AccIntoIter> { walk::walk_shallow(self) } @@ -2065,7 +1883,7 @@ impl LvaluePreference { } /// Helper for looking things up in the various maps that are populated during -/// typeck::collect (e.g., `tcx.impl_or_trait_items`, `tcx.tcache`, etc). All of +/// typeck::collect (e.g., `tcx.associated_items`, `tcx.types`, etc). All of /// these share the pattern that if the id is local, it should have been loaded /// into the map by the `typeck::collect` phase. If the def-id is external, /// then we have to go consult the crate loading code (and cache the result for @@ -2142,7 +1960,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { match self.map.find(id) { Some(ast_map::NodeLocal(pat)) => { match pat.node { - hir::PatKind::Binding(_, ref path1, _) => path1.node.as_str(), + hir::PatKind::Binding(_, _, ref path1, _) => path1.node.as_str(), _ => { bug!("Variable id {} maps to {:?}, not local", id, pat); }, @@ -2154,11 +1972,8 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { pub fn expr_is_lval(self, expr: &hir::Expr) -> bool { match expr.node { - hir::ExprPath(..) => { - // This function can be used during type checking when not all paths are - // fully resolved. Partially resolved paths in expressions can only legally - // refer to associated items which are always rvalues. - match self.expect_resolution(expr.id).base_def { + hir::ExprPath(hir::QPath::Resolved(_, ref path)) => { + match path.def { Def::Local(..) | Def::Upvar(..) 
| Def::Static(..) | Def::Err => true, _ => false, } @@ -2175,6 +1990,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { true } + // Partially qualified paths in expressions can only legally + // refer to associated items which are always rvalues. + hir::ExprPath(hir::QPath::TypeRelative(..)) | + hir::ExprCall(..) | hir::ExprMethodCall(..) | hir::ExprStruct(..) | @@ -2204,13 +2023,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } - pub fn provided_trait_methods(self, id: DefId) -> Vec>> { - self.impl_or_trait_items(id).iter().filter_map(|&def_id| { - match self.impl_or_trait_item(def_id) { - MethodTraitItem(ref m) if m.has_body => Some(m.clone()), - _ => None - } - }).collect() + pub fn provided_trait_methods(self, id: DefId) -> Vec { + self.associated_items(id) + .filter(|item| item.kind == AssociatedKind::Method && item.defaultness.has_value()) + .collect() } pub fn trait_impl_polarity(self, id: DefId) -> hir::ImplPolarity { @@ -2243,17 +2059,148 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { }) } - pub fn impl_or_trait_item(self, id: DefId) -> ImplOrTraitItem<'gcx> { - lookup_locally_or_in_crate_store( - "impl_or_trait_items", id, &self.impl_or_trait_items, - || self.sess.cstore.impl_or_trait_item(self.global_tcx(), id) - .expect("missing ImplOrTraitItem in metadata")) + pub fn associated_item(self, def_id: DefId) -> AssociatedItem { + self.associated_items.memoize(def_id, || { + if !def_id.is_local() { + return self.sess.cstore.associated_item(def_id) + .expect("missing AssociatedItem in metadata"); + } + + // When the user asks for a given associated item, we + // always go ahead and convert all the associated items in + // the container. Note that we are also careful only to + // ever register a read on the *container* of the assoc + // item, not the assoc item itself. This prevents changes + // in the details of an item (for example, the type to + // which an associated type is bound) from contaminating + // those tasks that just need to scan the names of items + // and so forth. 
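
The caching strategy described in the comment above, where requesting any one associated item converts and caches every item of its container so that only the container is ever read as a dependency, has roughly the following shape. This is a sketch with made-up types and numeric ids; the real code keys on `DefId` and goes through the dep-graph and the crate store.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Item {
    parent: u32,
    name: &'static str,
}

struct Cache {
    items: RefCell<HashMap<u32, Item>>,
}

impl Cache {
    fn item(&self, id: u32) -> Item {
        if let Some(item) = self.items.borrow().get(&id) {
            return item.clone();
        }
        // Cache miss: convert every item of the enclosing container at once,
        // so a later lookup of a sibling is a pure cache hit.
        let parent = id / 100;
        for (child, name) in load_container(parent) {
            self.items.borrow_mut().insert(child, Item { parent, name });
        }
        self.items.borrow()[&id].clone()
    }
}

// Stand-in for walking the HIR of the parent impl or trait.
fn load_container(parent: u32) -> Vec<(u32, &'static str)> {
    vec![(parent * 100 + 1, "method_a"), (parent * 100 + 2, "method_b")]
}

fn main() {
    let cache = Cache { items: RefCell::new(HashMap::new()) };
    let a = cache.item(101); // populates 101 and 102
    let b = cache.item(102); // served from the cache
    assert_eq!(a.name, "method_a");
    assert_eq!(b.name, "method_b");
}
```
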
+ + let id = self.map.as_local_node_id(def_id).unwrap(); + let parent_id = self.map.get_parent(id); + let parent_def_id = self.map.local_def_id(parent_id); + let parent_item = self.map.expect_item(parent_id); + match parent_item.node { + hir::ItemImpl(.., ref impl_trait_ref, _, ref impl_item_refs) => { + for impl_item_ref in impl_item_refs { + let assoc_item = + self.associated_item_from_impl_item_ref(parent_def_id, + impl_trait_ref.is_some(), + impl_item_ref); + self.associated_items.borrow_mut().insert(assoc_item.def_id, assoc_item); + } + } + + hir::ItemTrait(.., ref trait_items) => { + for trait_item in trait_items { + let assoc_item = + self.associated_item_from_trait_item_ref(parent_def_id, trait_item); + self.associated_items.borrow_mut().insert(assoc_item.def_id, assoc_item); + } + } + + ref r => { + panic!("unexpected container of associated items: {:?}", r) + } + } + + // memoize wants us to return something, so return + // the one we generated for this def-id + *self.associated_items.borrow().get(&def_id).unwrap() + }) } - pub fn impl_or_trait_items(self, id: DefId) -> Rc> { - lookup_locally_or_in_crate_store( - "impl_or_trait_items", id, &self.impl_or_trait_item_def_ids, - || Rc::new(self.sess.cstore.impl_or_trait_items(id))) + fn associated_item_from_trait_item_ref(self, + parent_def_id: DefId, + trait_item: &hir::TraitItem) + -> AssociatedItem { + let def_id = self.map.local_def_id(trait_item.id); + + let (kind, has_self, has_value) = match trait_item.node { + hir::MethodTraitItem(ref sig, ref body) => { + (AssociatedKind::Method, sig.decl.get_self().is_some(), + body.is_some()) + } + hir::ConstTraitItem(_, ref value) => { + (AssociatedKind::Const, false, value.is_some()) + } + hir::TypeTraitItem(_, ref ty) => { + (AssociatedKind::Type, false, ty.is_some()) + } + }; + + AssociatedItem { + name: trait_item.name, + kind: kind, + vis: Visibility::from_hir(&hir::Inherited, trait_item.id, self), + defaultness: hir::Defaultness::Default { has_value: has_value }, + def_id: def_id, + container: TraitContainer(parent_def_id), + method_has_self_argument: has_self + } + } + + fn associated_item_from_impl_item_ref(self, + parent_def_id: DefId, + from_trait_impl: bool, + impl_item_ref: &hir::ImplItemRef) + -> AssociatedItem { + let def_id = self.map.local_def_id(impl_item_ref.id.node_id); + let (kind, has_self) = match impl_item_ref.kind { + hir::AssociatedItemKind::Const => (ty::AssociatedKind::Const, false), + hir::AssociatedItemKind::Method { has_self } => { + (ty::AssociatedKind::Method, has_self) + } + hir::AssociatedItemKind::Type => (ty::AssociatedKind::Type, false), + }; + + // Trait impl items are always public. 
+ let public = hir::Public; + let vis = if from_trait_impl { &public } else { &impl_item_ref.vis }; + + ty::AssociatedItem { + name: impl_item_ref.name, + kind: kind, + vis: ty::Visibility::from_hir(vis, impl_item_ref.id.node_id, self), + defaultness: impl_item_ref.defaultness, + def_id: def_id, + container: ImplContainer(parent_def_id), + method_has_self_argument: has_self + } + } + + pub fn associated_item_def_ids(self, def_id: DefId) -> Rc> { + self.associated_item_def_ids.memoize(def_id, || { + if !def_id.is_local() { + return Rc::new(self.sess.cstore.associated_item_def_ids(def_id)); + } + + let id = self.map.as_local_node_id(def_id).unwrap(); + let item = self.map.expect_item(id); + let vec: Vec<_> = match item.node { + hir::ItemTrait(.., ref trait_items) => { + trait_items.iter() + .map(|trait_item| trait_item.id) + .map(|id| self.map.local_def_id(id)) + .collect() + } + hir::ItemImpl(.., ref impl_item_refs) => { + impl_item_refs.iter() + .map(|impl_item_ref| impl_item_ref.id) + .map(|id| self.map.local_def_id(id.node_id)) + .collect() + } + _ => span_bug!(item.span, "associated_item_def_ids: not impl or trait") + }; + Rc::new(vec) + }) + } + + #[inline] // FIXME(#35870) Avoid closures being unexported due to impl Trait. + pub fn associated_items(self, def_id: DefId) + -> impl Iterator + 'a { + let def_ids = self.associated_item_def_ids(def_id); + (0..def_ids.len()).map(move |i| self.associated_item(def_ids[i])) } /// Returns the trait-ref corresponding to a given impl, or None if it is @@ -2264,25 +2211,9 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { || self.sess.cstore.impl_trait_ref(self.global_tcx(), id)) } - /// Returns a path resolution for node id if it exists, panics otherwise. - pub fn expect_resolution(self, id: NodeId) -> PathResolution { - *self.def_map.borrow().get(&id).expect("no def-map entry for node id") - } - - /// Returns a fully resolved definition for node id if it exists, panics otherwise. - pub fn expect_def(self, id: NodeId) -> Def { - self.expect_resolution(id).full_def() - } - - /// Returns a fully resolved definition for node id if it exists, or none if no - /// definition exists, panics on partial resolutions to catch errors. - pub fn expect_def_or_none(self, id: NodeId) -> Option { - self.def_map.borrow().get(&id).map(|resolution| resolution.full_def()) - } - // Returns `ty::VariantDef` if `def` refers to a struct, // or variant or their constructors, panics otherwise. - pub fn expect_variant_def(self, def: Def) -> VariantDef<'tcx> { + pub fn expect_variant_def(self, def: Def) -> &'tcx VariantDef { match def { Def::Variant(did) | Def::VariantCtor(did, ..) => { let enum_did = self.parent_def_id(did).unwrap(); @@ -2347,11 +2278,19 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } + pub fn def_span(self, def_id: DefId) -> Span { + if let Some(id) = self.map.as_local_node_id(def_id) { + self.map.span(id) + } else { + self.sess.cstore.def_span(&self.sess, def_id) + } + } + pub fn item_name(self, id: DefId) -> ast::Name { if let Some(id) = self.map.as_local_node_id(id) { self.map.name(id) } else if id.index == CRATE_DEF_INDEX { - token::intern(&self.sess.cstore.original_crate_name(id.krate)) + self.sess.cstore.original_crate_name(id.krate) } else { let def_key = self.sess.cstore.def_key(id); // The name of a StructCtor is that of its struct parent. 
@@ -2368,81 +2307,45 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } - // Register a given item type - pub fn register_item_type(self, did: DefId, scheme: TypeScheme<'gcx>) { - self.tcache.borrow_mut().insert(did, scheme.ty); - self.generics.borrow_mut().insert(did, scheme.generics); - } - // If the given item is in an external crate, looks up its type and adds it to // the type cache. Returns the type parameters and type. - pub fn lookup_item_type(self, did: DefId) -> TypeScheme<'gcx> { - let ty = lookup_locally_or_in_crate_store( - "tcache", did, &self.tcache, - || self.sess.cstore.item_type(self.global_tcx(), did)); - - TypeScheme { - ty: ty, - generics: self.lookup_generics(did) - } - } - - pub fn opt_lookup_item_type(self, did: DefId) -> Option> { - if did.krate != LOCAL_CRATE { - return Some(self.lookup_item_type(did)); - } - - if let Some(ty) = self.tcache.borrow().get(&did).cloned() { - Some(TypeScheme { - ty: ty, - generics: self.lookup_generics(did) - }) - } else { - None - } + pub fn item_type(self, did: DefId) -> Ty<'gcx> { + lookup_locally_or_in_crate_store( + "item_types", did, &self.item_types, + || self.sess.cstore.item_type(self.global_tcx(), did)) } /// Given the did of a trait, returns its canonical trait ref. - pub fn lookup_trait_def(self, did: DefId) -> &'gcx TraitDef<'gcx> { + pub fn lookup_trait_def(self, did: DefId) -> &'gcx TraitDef { lookup_locally_or_in_crate_store( "trait_defs", did, &self.trait_defs, || self.alloc_trait_def(self.sess.cstore.trait_def(self.global_tcx(), did)) ) } - /// Given the did of an ADT, return a master reference to its - /// definition. Unless you are planning on fulfilling the ADT's fields, - /// use lookup_adt_def instead. - pub fn lookup_adt_def_master(self, did: DefId) -> AdtDefMaster<'gcx> { + /// Given the did of an ADT, return a reference to its definition. + pub fn lookup_adt_def(self, did: DefId) -> &'gcx AdtDef { lookup_locally_or_in_crate_store( "adt_defs", did, &self.adt_defs, - || self.sess.cstore.adt_def(self.global_tcx(), did) - ) - } - - /// Given the did of an ADT, return a reference to its definition. - pub fn lookup_adt_def(self, did: DefId) -> AdtDef<'gcx> { - // when reverse-variance goes away, a transmute:: - // would be needed here. - self.lookup_adt_def_master(did) + || self.sess.cstore.adt_def(self.global_tcx(), did)) } /// Given the did of an item, returns its generics. - pub fn lookup_generics(self, did: DefId) -> &'gcx Generics<'gcx> { + pub fn item_generics(self, did: DefId) -> &'gcx Generics<'gcx> { lookup_locally_or_in_crate_store( "generics", did, &self.generics, || self.alloc_generics(self.sess.cstore.item_generics(self.global_tcx(), did))) } /// Given the did of an item, returns its full set of predicates. - pub fn lookup_predicates(self, did: DefId) -> GenericPredicates<'gcx> { + pub fn item_predicates(self, did: DefId) -> GenericPredicates<'gcx> { lookup_locally_or_in_crate_store( "predicates", did, &self.predicates, || self.sess.cstore.item_predicates(self.global_tcx(), did)) } /// Given the did of a trait, returns its superpredicates. 
- pub fn lookup_super_predicates(self, did: DefId) -> GenericPredicates<'gcx> { + pub fn item_super_predicates(self, did: DefId) -> GenericPredicates<'gcx> { lookup_locally_or_in_crate_store( "super_predicates", did, &self.super_predicates, || self.sess.cstore.item_super_predicates(self.global_tcx(), did)) @@ -2539,31 +2442,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { def.flags.set(def.flags.get() | TraitFlags::HAS_DEFAULT_IMPL) } - /// Load primitive inherent implementations if necessary - pub fn populate_implementations_for_primitive_if_necessary(self, - primitive_def_id: DefId) { - if primitive_def_id.is_local() { - return - } - - // The primitive is not local, hence we are reading this out - // of metadata. - let _ignore = self.dep_graph.in_ignore(); - - if self.populated_external_primitive_impls.borrow().contains(&primitive_def_id) { - return - } - - debug!("populate_implementations_for_primitive_if_necessary: searching for {:?}", - primitive_def_id); - - let impl_items = self.sess.cstore.impl_or_trait_items(primitive_def_id); - - // Store the implementation info. - self.impl_or_trait_item_def_ids.borrow_mut().insert(primitive_def_id, Rc::new(impl_items)); - self.populated_external_primitive_impls.borrow_mut().insert(primitive_def_id); - } - /// Populates the type context with all the inherent implementations for /// the given type if necessary. pub fn populate_inherent_implementations_for_type_if_necessary(self, @@ -2584,11 +2462,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { type_id); let inherent_impls = self.sess.cstore.inherent_implementations_for_type(type_id); - for &impl_def_id in &inherent_impls { - // Store the implementation info. - let impl_items = self.sess.cstore.impl_or_trait_items(impl_def_id); - self.impl_or_trait_item_def_ids.borrow_mut().insert(impl_def_id, Rc::new(impl_items)); - } self.inherent_impls.borrow_mut().insert(type_id, inherent_impls); self.populated_external_types.borrow_mut().insert(type_id); @@ -2617,23 +2490,11 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } for impl_def_id in self.sess.cstore.implementations_of_trait(Some(trait_id)) { - let impl_items = self.sess.cstore.impl_or_trait_items(impl_def_id); let trait_ref = self.impl_trait_ref(impl_def_id).unwrap(); // Record the trait->implementation mapping. let parent = self.sess.cstore.impl_parent(impl_def_id).unwrap_or(trait_id); def.record_remote_impl(self, impl_def_id, trait_ref, parent); - - // For any methods that use a default implementation, add them to - // the map. This is a bit unfortunate. - for &impl_item_def_id in &impl_items { - // load impl items eagerly for convenience - // FIXME: we may want to load these lazily - self.impl_or_trait_item(impl_item_def_id); - } - - // Store the implementation info. - self.impl_or_trait_item_def_ids.borrow_mut().insert(impl_def_id, Rc::new(impl_items)); } def.flags.set(def.flags.get() | TraitFlags::IMPLS_VALID); @@ -2661,12 +2522,12 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // tables by typeck; else, it will be retreived from // the external crate metadata. if let Some(ty) = self.tables.borrow().closure_tys.get(&def_id) { - return ty.subst(self, substs.func_substs); + return ty.subst(self, substs.substs); } let ty = self.sess.cstore.closure_ty(self.global_tcx(), def_id); self.tables.borrow_mut().closure_tys.insert(def_id, ty.clone()); - ty.subst(self, substs.func_substs) + ty.subst(self, substs.substs) } /// Given the def_id of an impl, return the def_id of the trait it implements. 
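For call sites, the renames in the hunks above boil down to dropping `TypeScheme` and switching each `lookup_*` query to its `item_*` counterpart. A minimal migration sketch, with the helper function and the logging line purely illustrative:

```rust
// Sketch only: fetching an item's type, generics and predicates after the rename.
fn describe_item<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, did: DefId) {
    // Old (removed): let scheme = tcx.lookup_item_type(did);
    //                let (ty, generics) = (scheme.ty, scheme.generics);
    let ty = tcx.item_type(did);
    let generics = tcx.item_generics(did);
    let predicates = tcx.item_predicates(did);
    debug!("{:?}: {:?}, {} type params, {} predicates",
           did, ty, generics.types.len(), predicates.predicates.len());
}
```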
@@ -2679,17 +2540,17 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// ID of the impl that the method belongs to. Otherwise, return `None`. pub fn impl_of_method(self, def_id: DefId) -> Option { if def_id.krate != LOCAL_CRATE { - return self.sess.cstore.impl_or_trait_item(self.global_tcx(), def_id) + return self.sess.cstore.associated_item(def_id) .and_then(|item| { - match item.container() { + match item.container { TraitContainer(_) => None, ImplContainer(def_id) => Some(def_id), } }); } - match self.impl_or_trait_items.borrow().get(&def_id).cloned() { + match self.associated_items.borrow().get(&def_id).cloned() { Some(trait_item) => { - match trait_item.container() { + match trait_item.container { TraitContainer(_) => None, ImplContainer(def_id) => Some(def_id), } @@ -2705,9 +2566,9 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { if def_id.krate != LOCAL_CRATE { return self.sess.cstore.trait_of_item(def_id); } - match self.impl_or_trait_items.borrow().get(&def_id) { - Some(impl_or_trait_item) => { - match impl_or_trait_item.container() { + match self.associated_items.borrow().get(&def_id) { + Some(associated_item) => { + match associated_item.container { TraitContainer(def_id) => Some(def_id), ImplContainer(_) => None } @@ -2716,30 +2577,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } - /// If the given def ID describes an item belonging to a trait, (either a - /// default method or an implementation of a trait method), return the ID of - /// the method inside trait definition (this means that if the given def ID - /// is already that of the original trait method, then the return value is - /// the same). - /// Otherwise, return `None`. - pub fn trait_item_of_item(self, def_id: DefId) -> Option { - let impl_or_trait_item = match self.impl_or_trait_items.borrow().get(&def_id) { - Some(m) => m.clone(), - None => return None, - }; - match impl_or_trait_item.container() { - TraitContainer(_) => Some(impl_or_trait_item.def_id()), - ImplContainer(def_id) => { - self.trait_id_of_impl(def_id).and_then(|trait_did| { - let name = impl_or_trait_item.name(); - self.trait_items(trait_did).iter() - .find(|item| item.name() == name) - .map(|item| item.def_id()) - }) - } - } - } - /// Construct a parameter environment suitable for static contexts or other contexts where there /// are no free type/lifetime parameters in scope. 
pub fn empty_parameter_environment(self) -> ParameterEnvironment<'tcx> { @@ -2752,8 +2589,8 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { caller_bounds: Vec::new(), implicit_region_bound: self.mk_region(ty::ReEmpty), free_id_outlive: free_id_outlive, - is_copy_cache: RefCell::new(FnvHashMap()), - is_sized_cache: RefCell::new(FnvHashMap()), + is_copy_cache: RefCell::new(FxHashMap()), + is_sized_cache: RefCell::new(FxHashMap()), } } @@ -2801,7 +2638,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // let tcx = self.global_tcx(); - let generic_predicates = tcx.lookup_predicates(def_id); + let generic_predicates = tcx.item_predicates(def_id); let bounds = generic_predicates.instantiate(tcx, free_substs); let bounds = tcx.liberate_late_bound_regions(free_id_outlive, &ty::Binder(bounds)); let predicates = bounds.predicates; @@ -2824,8 +2661,8 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { implicit_region_bound: tcx.mk_region(ty::ReScope(free_id_outlive)), caller_bounds: predicates, free_id_outlive: free_id_outlive, - is_copy_cache: RefCell::new(FnvHashMap()), - is_sized_cache: RefCell::new(FnvHashMap()), + is_copy_cache: RefCell::new(FxHashMap()), + is_sized_cache: RefCell::new(FxHashMap()), }; let cause = traits::ObligationCause::misc(span, free_id_outlive.node_id(&self.region_maps)); @@ -2836,17 +2673,17 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { self.mk_region(ty::ReScope(self.region_maps.node_extent(id))) } - pub fn visit_all_items_in_krate(self, - dep_node_fn: F, - visitor: &mut V) - where F: FnMut(DefId) -> DepNode, V: Visitor<'gcx> + pub fn visit_all_item_likes_in_krate(self, + dep_node_fn: F, + visitor: &mut V) + where F: FnMut(DefId) -> DepNode, V: ItemLikeVisitor<'gcx> { - dep_graph::visit_all_items_in_krate(self.global_tcx(), dep_node_fn, visitor); + dep_graph::visit_all_item_likes_in_krate(self.global_tcx(), dep_node_fn, visitor); } /// Looks up the span of `impl_did` if the impl is local; otherwise returns `Err` /// with the name of the crate containing the impl. - pub fn span_of_impl(self, impl_did: DefId) -> Result { + pub fn span_of_impl(self, impl_did: DefId) -> Result { if impl_did.is_local() { let node_id = self.map.as_local_node_id(impl_did).unwrap(); Ok(self.map.span(node_id)) @@ -2856,15 +2693,6 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { } } -/// The category of explicit self. -#[derive(Clone, Copy, Eq, PartialEq, Debug, RustcEncodable, RustcDecodable)] -pub enum ExplicitSelfCategory<'tcx> { - Static, - ByValue, - ByReference(&'tcx Region, hir::Mutability), - ByBox, -} - impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { pub fn with_freevars(self, fid: NodeId, f: F) -> T where F: FnOnce(&[hir::Freevar]) -> T, diff --git a/src/librustc/ty/outlives.rs b/src/librustc/ty/outlives.rs index 51feab9d40..eb384eec6a 100644 --- a/src/librustc/ty/outlives.rs +++ b/src/librustc/ty/outlives.rs @@ -72,7 +72,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // in the `subtys` iterator (e.g., when encountering a // projection). match ty.sty { - ty::TyClosure(_, ref substs) => { + ty::TyClosure(def_id, ref substs) => { // FIXME(#27086). We do not accumulate from substs, since they // don't represent reachable data. This means that, in // practice, some of the lifetime parameters might not @@ -110,7 +110,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // what func/type parameters are used and unused, // taking into consideration UFCS and so forth. 
- for &upvar_ty in substs.upvar_tys { + for upvar_ty in substs.upvar_tys(def_id, *self) { self.compute_components(upvar_ty, out); } } @@ -177,7 +177,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { ty::TyTuple(..) | // ... ty::TyFnDef(..) | // OutlivesFunction (*) ty::TyFnPtr(_) | // OutlivesFunction (*) - ty::TyTrait(..) | // OutlivesObject, OutlivesFragment (*) + ty::TyDynamic(..) | // OutlivesObject, OutlivesFragment (*) ty::TyError => { // (*) Bare functions and traits are both binders. In the // RFC, this means we would add the bound regions to the diff --git a/src/librustc/ty/relate.rs b/src/librustc/ty/relate.rs index cb90e6392c..76c26d01ac 100644 --- a/src/librustc/ty/relate.rs +++ b/src/librustc/ty/relate.rs @@ -18,8 +18,10 @@ use ty::subst::{Kind, Substs}; use ty::{self, Ty, TyCtxt, TypeFoldable}; use ty::error::{ExpectedFound, TypeError}; use std::rc::Rc; +use std::iter; use syntax::abi; use hir as ast; +use rustc_data_structures::accumulate_vec::AccumulateVec; pub type RelateResult<'tcx, T> = Result>; @@ -185,32 +187,28 @@ impl<'tcx> Relate<'tcx> for ty::FnSig<'tcx> { expected_found(relation, &a.variadic, &b.variadic))); } - let inputs = relate_arg_vecs(relation, - &a.inputs, - &b.inputs)?; - let output = relation.relate(&a.output, &b.output)?; + if a.inputs().len() != b.inputs().len() { + return Err(TypeError::ArgCount); + } - Ok(ty::FnSig {inputs: inputs, - output: output, - variadic: a.variadic}) + let inputs_and_output = a.inputs().iter().cloned() + .zip(b.inputs().iter().cloned()) + .map(|x| (x, false)) + .chain(iter::once(((a.output(), b.output()), true))) + .map(|((a, b), is_output)| { + if is_output { + relation.relate(&a, &b) + } else { + relation.relate_with_variance(ty::Contravariant, &a, &b) + } + }).collect::, _>>()?; + Ok(ty::FnSig { + inputs_and_output: relation.tcx().intern_type_list(&inputs_and_output), + variadic: a.variadic + }) } } -fn relate_arg_vecs<'a, 'gcx, 'tcx, R>(relation: &mut R, - a_args: &[Ty<'tcx>], - b_args: &[Ty<'tcx>]) - -> RelateResult<'tcx, Vec>> - where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a -{ - if a_args.len() != b_args.len() { - return Err(TypeError::ArgCount); - } - - a_args.iter().zip(b_args) - .map(|(a, b)| relation.relate_with_variance(ty::Contravariant, a, b)) - .collect() -} - impl<'tcx> Relate<'tcx> for ast::Unsafety { fn relate<'a, 'gcx, R>(relation: &mut R, a: &ast::Unsafety, @@ -302,23 +300,6 @@ impl<'tcx> Relate<'tcx> for Vec> { } } -impl<'tcx> Relate<'tcx> for ty::BuiltinBounds { - fn relate<'a, 'gcx, R>(relation: &mut R, - a: &ty::BuiltinBounds, - b: &ty::BuiltinBounds) - -> RelateResult<'tcx, ty::BuiltinBounds> - where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a - { - // Two sets of builtin bounds are only relatable if they are - // precisely the same (but see the coercion code). 
- if a != b { - Err(TypeError::BuiltinBoundsMismatch(expected_found(relation, a, b))) - } else { - Ok(*a) - } - } -} - impl<'tcx> Relate<'tcx> for ty::TraitRef<'tcx> { fn relate<'a, 'gcx, R>(relation: &mut R, a: &ty::TraitRef<'tcx>, @@ -415,23 +396,15 @@ pub fn super_relate_tys<'a, 'gcx, 'tcx, R>(relation: &mut R, Ok(tcx.mk_adt(a_def, substs)) } - (&ty::TyTrait(ref a_obj), &ty::TyTrait(ref b_obj)) => - { - let principal = relation.relate(&a_obj.principal, &b_obj.principal)?; - let r = - relation.with_cause( - Cause::ExistentialRegionBound, - |relation| relation.relate_with_variance(ty::Contravariant, - &a_obj.region_bound, - &b_obj.region_bound))?; - let nb = relation.relate(&a_obj.builtin_bounds, &b_obj.builtin_bounds)?; - let pb = relation.relate(&a_obj.projection_bounds, &b_obj.projection_bounds)?; - Ok(tcx.mk_trait(ty::TraitObject { - principal: principal, - region_bound: r, - builtin_bounds: nb, - projection_bounds: pb - })) + (&ty::TyDynamic(ref a_obj, ref a_region), &ty::TyDynamic(ref b_obj, ref b_region)) => { + let region_bound = relation.with_cause(Cause::ExistentialRegionBound, + |relation| { + relation.relate_with_variance( + ty::Contravariant, + a_region, + b_region) + })?; + Ok(tcx.mk_dynamic(relation.relate(a_obj, b_obj)?, region_bound)) } (&ty::TyClosure(a_id, a_substs), @@ -527,6 +500,31 @@ pub fn super_relate_tys<'a, 'gcx, 'tcx, R>(relation: &mut R, } } +impl<'tcx> Relate<'tcx> for &'tcx ty::Slice> { + fn relate<'a, 'gcx, R>(relation: &mut R, + a: &Self, + b: &Self) + -> RelateResult<'tcx, Self> + where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a { + + if a.len() != b.len() { + return Err(TypeError::ExistentialMismatch(expected_found(relation, a, b))); + } + + let tcx = relation.tcx(); + let v = a.iter().zip(b.iter()).map(|(ep_a, ep_b)| { + use ty::ExistentialPredicate::*; + match (*ep_a, *ep_b) { + (Trait(ref a), Trait(ref b)) => Ok(Trait(relation.relate(a, b)?)), + (Projection(ref a), Projection(ref b)) => Ok(Projection(relation.relate(a, b)?)), + (AutoTrait(ref a), AutoTrait(ref b)) if a == b => Ok(AutoTrait(*a)), + _ => Err(TypeError::ExistentialMismatch(expected_found(relation, a, b))) + } + }); + Ok(tcx.mk_existential_predicates(v)?) + } +} + impl<'tcx> Relate<'tcx> for ty::ClosureSubsts<'tcx> { fn relate<'a, 'gcx, R>(relation: &mut R, a: &ty::ClosureSubsts<'tcx>, @@ -534,13 +532,8 @@ impl<'tcx> Relate<'tcx> for ty::ClosureSubsts<'tcx> { -> RelateResult<'tcx, ty::ClosureSubsts<'tcx>> where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a { - let substs = relate_substs(relation, None, a.func_substs, b.func_substs)?; - assert_eq!(a.upvar_tys.len(), b.upvar_tys.len()); - Ok(ty::ClosureSubsts { - func_substs: substs, - upvar_tys: relation.tcx().mk_type_list( - a.upvar_tys.iter().zip(b.upvar_tys).map(|(a, b)| relation.relate(a, b)))? 
- }) + let substs = relate_substs(relation, None, a.substs, b.substs)?; + Ok(ty::ClosureSubsts { substs: substs }) } } diff --git a/src/librustc/ty/structural_impls.rs b/src/librustc/ty/structural_impls.rs index 9ca911837b..0f0478bc8c 100644 --- a/src/librustc/ty/structural_impls.rs +++ b/src/librustc/ty/structural_impls.rs @@ -198,11 +198,8 @@ impl<'tcx, T: Lift<'tcx>> Lift<'tcx> for ty::Binder { impl<'a, 'tcx> Lift<'tcx> for ty::ClosureSubsts<'a> { type Lifted = ty::ClosureSubsts<'tcx>; fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) -> Option { - tcx.lift(&(self.func_substs, self.upvar_tys)).map(|(substs, upvar_tys)| { - ty::ClosureSubsts { - func_substs: substs, - upvar_tys: upvar_tys - } + tcx.lift(&self.substs).map(|substs| { + ty::ClosureSubsts { substs: substs } }) } } @@ -235,14 +232,11 @@ impl<'a, 'tcx> Lift<'tcx> for ty::adjustment::AutoBorrow<'a> { impl<'a, 'tcx> Lift<'tcx> for ty::FnSig<'a> { type Lifted = ty::FnSig<'tcx>; fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) -> Option { - tcx.lift(&self.inputs[..]).and_then(|inputs| { - tcx.lift(&self.output).map(|output| { - ty::FnSig { - inputs: inputs, - output: output, - variadic: self.variadic - } - }) + tcx.lift(&self.inputs_and_output).map(|x| { + ty::FnSig { + inputs_and_output: x, + variadic: self.variadic + } }) } } @@ -318,7 +312,6 @@ impl<'a, 'tcx> Lift<'tcx> for ty::error::TypeError<'a> { IntMismatch(x) => IntMismatch(x), FloatMismatch(x) => FloatMismatch(x), Traits(x) => Traits(x), - BuiltinBoundsMismatch(x) => BuiltinBoundsMismatch(x), VariadicMismatch(x) => VariadicMismatch(x), CyclicTy => CyclicTy, ProjectionNameMismatched(x) => ProjectionNameMismatched(x), @@ -328,6 +321,7 @@ impl<'a, 'tcx> Lift<'tcx> for ty::error::TypeError<'a> { TyParamDefaultMismatch(ref x) => { return tcx.lift(x).map(TyParamDefaultMismatch) } + ExistentialMismatch(ref x) => return tcx.lift(x).map(ExistentialMismatch) }) } } @@ -430,20 +424,33 @@ impl<'tcx, T:TypeFoldable<'tcx>> TypeFoldable<'tcx> for ty::Binder { } } -impl<'tcx> TypeFoldable<'tcx> for ty::TraitObject<'tcx> { +impl<'tcx> TypeFoldable<'tcx> for &'tcx ty::Slice> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { - ty::TraitObject { - principal: self.principal.fold_with(folder), - region_bound: self.region_bound.fold_with(folder), - builtin_bounds: self.builtin_bounds, - projection_bounds: self.projection_bounds.fold_with(folder), + let v = self.iter().map(|p| p.fold_with(folder)).collect::>(); + folder.tcx().intern_existential_predicates(&v) + } + + fn super_visit_with>(&self, visitor: &mut V) -> bool { + self.iter().any(|p| p.visit_with(visitor)) + } +} + +impl<'tcx> TypeFoldable<'tcx> for ty::ExistentialPredicate<'tcx> { + fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { + use ty::ExistentialPredicate::*; + match *self { + Trait(ref tr) => Trait(tr.fold_with(folder)), + Projection(ref p) => Projection(p.fold_with(folder)), + AutoTrait(did) => AutoTrait(did), } } fn super_visit_with>(&self, visitor: &mut V) -> bool { - self.principal.visit_with(visitor) || - self.region_bound.visit_with(visitor) || - self.projection_bounds.visit_with(visitor) + match *self { + ty::ExistentialPredicate::Trait(ref tr) => tr.visit_with(visitor), + ty::ExistentialPredicate::Projection(ref p) => p.visit_with(visitor), + ty::ExistentialPredicate::AutoTrait(_) => false, + } } } @@ -466,7 +473,8 @@ impl<'tcx> TypeFoldable<'tcx> for Ty<'tcx> { ty::TyArray(typ, sz) => 
ty::TyArray(typ.fold_with(folder), sz), ty::TySlice(typ) => ty::TySlice(typ.fold_with(folder)), ty::TyAdt(tid, substs) => ty::TyAdt(tid, substs.fold_with(folder)), - ty::TyTrait(ref trait_ty) => ty::TyTrait(trait_ty.fold_with(folder)), + ty::TyDynamic(ref trait_ty, ref region) => + ty::TyDynamic(trait_ty.fold_with(folder), region.fold_with(folder)), ty::TyTuple(ts) => ty::TyTuple(ts.fold_with(folder)), ty::TyFnDef(def_id, substs, f) => { ty::TyFnDef(def_id, @@ -482,9 +490,14 @@ impl<'tcx> TypeFoldable<'tcx> for Ty<'tcx> { ty::TyAnon(did, substs) => ty::TyAnon(did, substs.fold_with(folder)), ty::TyBool | ty::TyChar | ty::TyStr | ty::TyInt(_) | ty::TyUint(_) | ty::TyFloat(_) | ty::TyError | ty::TyInfer(_) | - ty::TyParam(..) | ty::TyNever => self.sty.clone(), + ty::TyParam(..) | ty::TyNever => return self }; - folder.tcx().mk_ty(sty) + + if self.sty == sty { + self + } else { + folder.tcx().mk_ty(sty) + } } fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { @@ -498,7 +511,8 @@ impl<'tcx> TypeFoldable<'tcx> for Ty<'tcx> { ty::TyArray(typ, _sz) => typ.visit_with(visitor), ty::TySlice(typ) => typ.visit_with(visitor), ty::TyAdt(_, substs) => substs.visit_with(visitor), - ty::TyTrait(ref trait_ty) => trait_ty.visit_with(visitor), + ty::TyDynamic(ref trait_ty, ref reg) => + trait_ty.visit_with(visitor) || reg.visit_with(visitor), ty::TyTuple(ts) => ts.visit_with(visitor), ty::TyFnDef(_, substs, ref f) => { substs.visit_with(visitor) || f.visit_with(visitor) @@ -572,9 +586,11 @@ impl<'tcx> TypeFoldable<'tcx> for ty::TypeAndMut<'tcx> { impl<'tcx> TypeFoldable<'tcx> for ty::FnSig<'tcx> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { - ty::FnSig { inputs: self.inputs.fold_with(folder), - output: self.output.fold_with(folder), - variadic: self.variadic } + let inputs_and_output = self.inputs_and_output.fold_with(folder); + ty::FnSig { + inputs_and_output: folder.tcx().intern_type_list(&inputs_and_output), + variadic: self.variadic, + } } fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { @@ -582,7 +598,8 @@ impl<'tcx> TypeFoldable<'tcx> for ty::FnSig<'tcx> { } fn super_visit_with>(&self, visitor: &mut V) -> bool { - self.inputs.visit_with(visitor) || self.output.visit_with(visitor) + self.inputs().iter().any(|i| i.visit_with(visitor)) || + self.output().visit_with(visitor) } } @@ -597,6 +614,10 @@ impl<'tcx> TypeFoldable<'tcx> for ty::TraitRef<'tcx> { fn super_visit_with>(&self, visitor: &mut V) -> bool { self.substs.visit_with(visitor) } + + fn visit_with>(&self, visitor: &mut V) -> bool { + visitor.visit_trait_ref(*self) + } } impl<'tcx> TypeFoldable<'tcx> for ty::ExistentialTraitRef<'tcx> { @@ -654,13 +675,12 @@ impl<'tcx> TypeFoldable<'tcx> for &'tcx ty::Region { impl<'tcx> TypeFoldable<'tcx> for ty::ClosureSubsts<'tcx> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { ty::ClosureSubsts { - func_substs: self.func_substs.fold_with(folder), - upvar_tys: self.upvar_tys.fold_with(folder), + substs: self.substs.fold_with(folder), } } fn super_visit_with>(&self, visitor: &mut V) -> bool { - self.func_substs.visit_with(visitor) || self.upvar_tys.visit_with(visitor) + self.substs.visit_with(visitor) } } @@ -698,16 +718,6 @@ impl<'tcx> TypeFoldable<'tcx> for ty::adjustment::AutoBorrow<'tcx> { } } -impl<'tcx> TypeFoldable<'tcx> for ty::BuiltinBounds { - fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, _folder: &mut F) -> Self { - *self - } 
- - fn super_visit_with>(&self, _visitor: &mut V) -> bool { - false - } -} - impl<'tcx> TypeFoldable<'tcx> for ty::TypeParameterDef<'tcx> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { ty::TypeParameterDef { @@ -765,6 +775,36 @@ impl<'tcx> TypeFoldable<'tcx> for ty::RegionParameterDef<'tcx> { } } +impl<'tcx> TypeFoldable<'tcx> for ty::Generics<'tcx> { + fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { + ty::Generics { + parent: self.parent, + parent_regions: self.parent_regions, + parent_types: self.parent_types, + regions: self.regions.fold_with(folder), + types: self.types.fold_with(folder), + has_self: self.has_self, + } + } + + fn super_visit_with>(&self, visitor: &mut V) -> bool { + self.regions.visit_with(visitor) || self.types.visit_with(visitor) + } +} + +impl<'tcx> TypeFoldable<'tcx> for ty::GenericPredicates<'tcx> { + fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { + ty::GenericPredicates { + parent: self.parent, + predicates: self.predicates.fold_with(folder), + } + } + + fn super_visit_with>(&self, visitor: &mut V) -> bool { + self.predicates.visit_with(visitor) + } +} + impl<'tcx> TypeFoldable<'tcx> for ty::Predicate<'tcx> { fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { match *self { diff --git a/src/librustc/ty/sty.rs b/src/librustc/ty/sty.rs index 92dfb883ef..3b7c46ef7f 100644 --- a/src/librustc/ty/sty.rs +++ b/src/librustc/ty/sty.rs @@ -11,18 +11,20 @@ //! This module contains TypeVariants and its major components use hir::def_id::DefId; + use middle::region; use ty::subst::Substs; -use ty::{self, AdtDef, ToPredicate, TypeFlags, Ty, TyCtxt, TypeFoldable}; +use ty::{self, AdtDef, TypeFlags, Ty, TyCtxt, TypeFoldable}; use ty::{Slice, TyS}; -use util::common::ErrorReported; +use ty::subst::Kind; -use collections::enum_set::{self, EnumSet, CLike}; use std::fmt; -use std::ops; +use std::iter; +use std::cmp::Ordering; use syntax::abi; -use syntax::ast::{self, Name}; -use syntax::parse::token::{keywords, InternedString}; +use syntax::ast::{self, Name, NodeId}; +use syntax::symbol::{keywords, InternedString}; +use util::nodemap::FxHashSet; use serialize; @@ -111,7 +113,7 @@ pub enum TypeVariants<'tcx> { /// That is, even after substitution it is possible that there are type /// variables. This happens when the `TyAdt` corresponds to an ADT /// definition and not a concrete use of it. - TyAdt(AdtDef<'tcx>, &'tcx Substs<'tcx>), + TyAdt(&'tcx AdtDef, &'tcx Substs<'tcx>), /// `Box`; this is nominally a struct in the documentation, but is /// special-cased internally. For example, it is possible to implicitly @@ -145,7 +147,7 @@ pub enum TypeVariants<'tcx> { TyFnPtr(&'tcx BareFnTy<'tcx>), /// A trait, defined with `trait`. - TyTrait(Box>), + TyDynamic(Binder<&'tcx Slice>>, &'tcx ty::Region), /// The anonymous type of a closure. Used to represent the type of /// `|a| a`. @@ -164,7 +166,7 @@ pub enum TypeVariants<'tcx> { /// Anonymized (`impl Trait`) type found in a return type. /// The DefId comes from the `impl Trait` ast::Ty node, and the /// substitutions are for the generics of the function in question. - /// After typeck, the concrete type can be found in the `tcache` map. + /// After typeck, the concrete type can be found in the `types` map. TyAnon(DefId, &'tcx Substs<'tcx>), /// A type parameter; for example, `T` in `fn f(x: T) {} @@ -254,23 +256,123 @@ pub enum TypeVariants<'tcx> { /// handle). 
Plus it fixes an ICE. :P #[derive(Copy, Clone, PartialEq, Eq, Hash, Debug, RustcEncodable, RustcDecodable)] pub struct ClosureSubsts<'tcx> { - /// Lifetime and type parameters from the enclosing function. + /// Lifetime and type parameters from the enclosing function, + /// concatenated with the types of the upvars. + /// /// These are separated out because trans wants to pass them around /// when monomorphizing. - pub func_substs: &'tcx Substs<'tcx>, - - /// The types of the upvars. The list parallels the freevars and - /// `upvar_borrows` lists. These are kept distinct so that we can - /// easily index into them. - pub upvar_tys: &'tcx Slice> + pub substs: &'tcx Substs<'tcx>, } -#[derive(Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)] -pub struct TraitObject<'tcx> { - pub principal: PolyExistentialTraitRef<'tcx>, - pub region_bound: &'tcx ty::Region, - pub builtin_bounds: BuiltinBounds, - pub projection_bounds: Vec>, +impl<'a, 'gcx, 'acx, 'tcx> ClosureSubsts<'tcx> { + #[inline] + pub fn upvar_tys(self, def_id: DefId, tcx: TyCtxt<'a, 'gcx, 'acx>) -> + impl Iterator> + 'tcx + { + let generics = tcx.item_generics(def_id); + self.substs[self.substs.len()-generics.own_count()..].iter().map( + |t| t.as_type().expect("unexpected region in upvars")) + } +} + +#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)] +pub enum ExistentialPredicate<'tcx> { + // e.g. Iterator + Trait(ExistentialTraitRef<'tcx>), + // e.g. Iterator::Item = T + Projection(ExistentialProjection<'tcx>), + // e.g. Send + AutoTrait(DefId), +} + +impl<'a, 'gcx, 'tcx> ExistentialPredicate<'tcx> { + pub fn cmp(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, other: &Self) -> Ordering { + use self::ExistentialPredicate::*; + match (*self, *other) { + (Trait(_), Trait(_)) => Ordering::Equal, + (Projection(ref a), Projection(ref b)) => a.sort_key(tcx).cmp(&b.sort_key(tcx)), + (AutoTrait(ref a), AutoTrait(ref b)) => + tcx.lookup_trait_def(*a).def_path_hash.cmp(&tcx.lookup_trait_def(*b).def_path_hash), + (Trait(_), _) => Ordering::Less, + (Projection(_), Trait(_)) => Ordering::Greater, + (Projection(_), _) => Ordering::Less, + (AutoTrait(_), _) => Ordering::Greater, + } + } + +} + +impl<'a, 'gcx, 'tcx> Binder> { + pub fn with_self_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, self_ty: Ty<'tcx>) + -> ty::Predicate<'tcx> { + use ty::ToPredicate; + match *self.skip_binder() { + ExistentialPredicate::Trait(tr) => Binder(tr).with_self_ty(tcx, self_ty).to_predicate(), + ExistentialPredicate::Projection(p) => + ty::Predicate::Projection(Binder(p.with_self_ty(tcx, self_ty))), + ExistentialPredicate::AutoTrait(did) => { + let trait_ref = Binder(ty::TraitRef { + def_id: did, + substs: tcx.mk_substs_trait(self_ty, &[]), + }); + trait_ref.to_predicate() + } + } + } +} + +impl<'tcx> serialize::UseSpecializedDecodable for &'tcx Slice> {} + +impl<'tcx> Slice> { + pub fn principal(&self) -> Option> { + match self.get(0) { + Some(&ExistentialPredicate::Trait(tr)) => Some(tr), + _ => None + } + } + + #[inline] + pub fn projection_bounds<'a>(&'a self) -> + impl Iterator> + 'a { + self.iter().filter_map(|predicate| { + match *predicate { + ExistentialPredicate::Projection(p) => Some(p), + _ => None, + } + }) + } + + #[inline] + pub fn auto_traits<'a>(&'a self) -> impl Iterator + 'a { + self.iter().filter_map(|predicate| { + match *predicate { + ExistentialPredicate::AutoTrait(d) => Some(d), + _ => None + } + }) + } +} + +impl<'tcx> Binder<&'tcx Slice>> { + pub fn principal(&self) -> Option> { + 
self.skip_binder().principal().map(Binder) + } + + #[inline] + pub fn projection_bounds<'a>(&'a self) -> + impl Iterator> + 'a { + self.skip_binder().projection_bounds().map(Binder) + } + + #[inline] + pub fn auto_traits<'a>(&'a self) -> impl Iterator + 'a { + self.skip_binder().auto_traits() + } + + pub fn iter<'a>(&'a self) + -> impl DoubleEndedIterator>> + 'tcx { + self.skip_binder().iter().cloned().map(Binder) + } } /// A complete reference to a trait. These take numerous guises in syntax, @@ -334,14 +436,30 @@ pub struct ExistentialTraitRef<'tcx> { pub substs: &'tcx Substs<'tcx>, } -impl<'tcx> ExistentialTraitRef<'tcx> { - pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator> + 'a { +impl<'a, 'gcx, 'tcx> ExistentialTraitRef<'tcx> { + pub fn input_types<'b>(&'b self) -> impl DoubleEndedIterator> + 'b { // Select only the "input types" from a trait-reference. For // now this is all the types that appear in the // trait-reference, but it should eventually exclude // associated types. self.substs.types() } + + /// Object types don't have a self-type specified. Therefore, when + /// we convert the principal trait-ref into a normal trait-ref, + /// you must give *some* self-type. A common choice is `mk_err()` + /// or some skolemized type. + pub fn with_self_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, self_ty: Ty<'tcx>) + -> ty::TraitRef<'tcx> { + // otherwise the escaping regions would be captured by the binder + assert!(!self_ty.has_escaping_regions()); + + ty::TraitRef { + def_id: self.def_id, + substs: tcx.mk_substs( + iter::once(Kind::from(self_ty)).chain(self.substs.iter().cloned())) + } + } } pub type PolyExistentialTraitRef<'tcx> = Binder>; @@ -445,22 +563,31 @@ pub struct ClosureTy<'tcx> { /// - `variadic` indicates whether this is a variadic function. (only true for foreign fns) #[derive(Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)] pub struct FnSig<'tcx> { - pub inputs: Vec>, - pub output: Ty<'tcx>, + pub inputs_and_output: &'tcx Slice>, pub variadic: bool } +impl<'tcx> FnSig<'tcx> { + pub fn inputs(&self) -> &[Ty<'tcx>] { + &self.inputs_and_output[..self.inputs_and_output.len() - 1] + } + + pub fn output(&self) -> Ty<'tcx> { + self.inputs_and_output[self.inputs_and_output.len() - 1] + } +} + pub type PolyFnSig<'tcx> = Binder>; impl<'tcx> PolyFnSig<'tcx> { - pub fn inputs(&self) -> ty::Binder>> { - self.map_bound_ref(|fn_sig| fn_sig.inputs.clone()) + pub fn inputs(&self) -> Binder<&[Ty<'tcx>]> { + Binder(self.skip_binder().inputs()) } pub fn input(&self, index: usize) -> ty::Binder> { - self.map_bound_ref(|fn_sig| fn_sig.inputs[index]) + self.map_bound_ref(|fn_sig| fn_sig.inputs()[index]) } pub fn output(&self) -> ty::Binder> { - self.map_bound_ref(|fn_sig| fn_sig.output.clone()) + self.map_bound_ref(|fn_sig| fn_sig.output().clone()) } pub fn variadic(&self) -> bool { self.skip_binder().variadic @@ -556,7 +683,7 @@ pub struct DebruijnIndex { /// /// These are regions that are stored behind a binder and must be substituted /// with some concrete region before being used. There are 2 kind of -/// bound regions: early-bound, which are bound in a TypeScheme/TraitDef, +/// bound regions: early-bound, which are bound in an item's Generics, /// and are substituted by a Substs, and late-bound, which are part of /// higher-ranked types (e.g. `for<'a> fn(&'a ())`) and are substituted by /// the likes of `liberate_late_bound_regions`. 
The distinction exists @@ -703,122 +830,53 @@ pub struct ExistentialProjection<'tcx> { pub type PolyExistentialProjection<'tcx> = Binder>; -impl<'a, 'tcx, 'gcx> PolyExistentialProjection<'tcx> { +impl<'a, 'tcx, 'gcx> ExistentialProjection<'tcx> { pub fn item_name(&self) -> Name { - self.0.item_name // safe to skip the binder to access a name + self.item_name // safe to skip the binder to access a name } pub fn sort_key(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> (u64, InternedString) { // We want something here that is stable across crate boundaries. // The DefId isn't but the `deterministic_hash` of the corresponding // DefPath is. - let trait_def = tcx.lookup_trait_def(self.0.trait_ref.def_id); + let trait_def = tcx.lookup_trait_def(self.trait_ref.def_id); let def_path_hash = trait_def.def_path_hash; // An `ast::Name` is also not stable (it's just an index into an // interning table), so map to the corresponding `InternedString`. - let item_name = self.0.item_name.as_str(); + let item_name = self.item_name.as_str(); (def_path_hash, item_name) } pub fn with_self_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, self_ty: Ty<'tcx>) - -> ty::PolyProjectionPredicate<'tcx> + -> ty::ProjectionPredicate<'tcx> { // otherwise the escaping regions would be captured by the binders assert!(!self_ty.has_escaping_regions()); - let trait_ref = self.map_bound(|proj| proj.trait_ref); - self.map_bound(|proj| ty::ProjectionPredicate { + ty::ProjectionPredicate { projection_ty: ty::ProjectionTy { - trait_ref: trait_ref.with_self_ty(tcx, self_ty).0, - item_name: proj.item_name + trait_ref: self.trait_ref.with_self_ty(tcx, self_ty), + item_name: self.item_name }, - ty: proj.ty - }) - } -} - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, RustcEncodable, RustcDecodable)] -pub struct BuiltinBounds(EnumSet); - -impl<'a, 'gcx, 'tcx> BuiltinBounds { - pub fn empty() -> BuiltinBounds { - BuiltinBounds(EnumSet::new()) - } - - pub fn iter(&self) -> enum_set::Iter { - self.into_iter() - } - - pub fn to_predicates(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, - self_ty: Ty<'tcx>) - -> Vec> { - self.iter().filter_map(|builtin_bound| - match tcx.trait_ref_for_builtin_bound(builtin_bound, self_ty) { - Ok(trait_ref) => Some(trait_ref.to_predicate()), - Err(ErrorReported) => { None } - } - ).collect() - } -} - -impl ops::Deref for BuiltinBounds { - type Target = EnumSet; - fn deref(&self) -> &Self::Target { &self.0 } -} - -impl ops::DerefMut for BuiltinBounds { - fn deref_mut(&mut self) -> &mut Self::Target { &mut self.0 } -} - -impl<'a> IntoIterator for &'a BuiltinBounds { - type Item = BuiltinBound; - type IntoIter = enum_set::Iter; - fn into_iter(self) -> Self::IntoIter { - (**self).into_iter() - } -} - -#[derive(Clone, RustcEncodable, PartialEq, Eq, RustcDecodable, Hash, - Debug, Copy)] -pub enum BuiltinBound { - Send = 0, - Sized = 1, - Copy = 2, - Sync = 3, -} - -impl CLike for BuiltinBound { - fn to_usize(&self) -> usize { - *self as usize - } - fn from_usize(v: usize) -> BuiltinBound { - match v { - 0 => BuiltinBound::Send, - 1 => BuiltinBound::Sized, - 2 => BuiltinBound::Copy, - 3 => BuiltinBound::Sync, - _ => bug!("{} is not a valid BuiltinBound", v) + ty: self.ty } } } -impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { - pub fn try_add_builtin_trait(self, - trait_def_id: DefId, - builtin_bounds: &mut EnumSet) - -> bool - { - //! Checks whether `trait_ref` refers to one of the builtin - //! traits, like `Send`, and adds the corresponding - //! bound to the set `builtin_bounds` if so. Returns true if `trait_ref` - //! is a builtin trait. 
+impl<'a, 'tcx, 'gcx> PolyExistentialProjection<'tcx> { + pub fn item_name(&self) -> Name { + self.skip_binder().item_name() + } - match self.lang_items.to_builtin_kind(trait_def_id) { - Some(bound) => { builtin_bounds.insert(bound); true } - None => false - } + pub fn sort_key(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) -> (u64, InternedString) { + self.skip_binder().sort_key(tcx) + } + + pub fn with_self_ty(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, self_ty: Ty<'tcx>) + -> ty::PolyProjectionPredicate<'tcx> { + self.map_bound(|p| p.with_self_ty(tcx, self_ty)) } } @@ -920,19 +978,27 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> { } } - pub fn is_uninhabited(&self, _cx: TyCtxt) -> bool { - // FIXME(#24885): be smarter here, the AdtDefData::is_empty method could easily be made - // more complete. + /// Checks whether a type is uninhabited. + /// If `block` is `Some(id)` it also checks that the uninhabited-ness is visible from `id`. + pub fn is_uninhabited(&self, block: Option, cx: TyCtxt<'a, 'gcx, 'tcx>) -> bool { + let mut visited = FxHashSet::default(); + self.is_uninhabited_recurse(&mut visited, block, cx) + } + + pub fn is_uninhabited_recurse(&self, + visited: &mut FxHashSet<(DefId, &'tcx Substs<'tcx>)>, + block: Option, + cx: TyCtxt<'a, 'gcx, 'tcx>) -> bool { match self.sty { - TyAdt(def, _) => def.is_empty(), + TyAdt(def, substs) => { + def.is_uninhabited_recurse(visited, block, cx, substs) + }, - // FIXME(canndrew): There's no reason why these can't be uncommented, they're tested - // and they don't break anything. But I'm keeping my changes small for now. - //TyNever => true, - //TyTuple(ref tys) => tys.iter().any(|ty| ty.is_uninhabited(cx)), + TyNever => true, + TyTuple(ref tys) => tys.iter().any(|ty| ty.is_uninhabited_recurse(visited, block, cx)), + TyArray(ty, len) => len > 0 && ty.is_uninhabited_recurse(visited, block, cx), + TyRef(_, ref tm) => tm.ty.is_uninhabited_recurse(visited, block, cx), - // FIXME(canndrew): this line breaks core::fmt - //TyRef(_, ref tm) => tm.ty.is_uninhabited(cx), _ => false, } } @@ -1070,7 +1136,7 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> { pub fn is_trait(&self) -> bool { match self.sty { - TyTrait(..) => true, + TyDynamic(..) => true, _ => false } } @@ -1186,7 +1252,7 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> { } // Type accessors for substructures of types - pub fn fn_args(&self) -> ty::Binder>> { + pub fn fn_args(&self) -> ty::Binder<&[Ty<'tcx>]> { self.fn_sig().inputs() } @@ -1203,14 +1269,14 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> { pub fn ty_to_def_id(&self) -> Option { match self.sty { - TyTrait(ref tt) => Some(tt.principal.def_id()), + TyDynamic(ref tt, ..) 
=> tt.principal().map(|p| p.def_id()), TyAdt(def, _) => Some(def.did), TyClosure(id, _) => Some(id), _ => None } } - pub fn ty_adt_def(&self) -> Option> { + pub fn ty_adt_def(&self) -> Option<&'tcx AdtDef> { match self.sty { TyAdt(adt, _) => Some(adt), _ => None @@ -1225,16 +1291,18 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> { TyRef(region, _) => { vec![region] } - TyTrait(ref obj) => { - let mut v = vec![obj.region_bound]; - v.extend(obj.principal.skip_binder().substs.regions()); + TyDynamic(ref obj, region) => { + let mut v = vec![region]; + if let Some(p) = obj.principal() { + v.extend(p.skip_binder().substs.regions()); + } v } TyAdt(_, substs) | TyAnon(_, substs) => { substs.regions().collect() } TyClosure(_, ref substs) => { - substs.func_substs.regions().collect() + substs.substs.regions().collect() } TyProjection(ref data) => { data.trait_ref.substs.regions().collect() diff --git a/src/librustc/ty/subst.rs b/src/librustc/ty/subst.rs index a4ceecd8c9..d6f61a12a3 100644 --- a/src/librustc/ty/subst.rs +++ b/src/librustc/ty/subst.rs @@ -165,6 +165,14 @@ impl<'tcx> Decodable for Kind<'tcx> { pub type Substs<'tcx> = Slice>; impl<'a, 'gcx, 'tcx> Substs<'tcx> { + /// Creates a Substs that maps each generic parameter to itself. + pub fn identity_for_item(tcx: TyCtxt<'a, 'gcx, 'tcx>, def_id: DefId) + -> &'tcx Substs<'tcx> { + Substs::for_item(tcx, def_id, |def, _| { + tcx.mk_region(ty::ReEarlyBound(def.to_early_bound_region_data())) + }, |def, _| tcx.mk_param_from_def(def)) + } + /// Creates a Substs for generic parameter definitions, /// by calling closures to obtain each region and type. /// The closures get to observe the Substs as they're @@ -177,12 +185,28 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> { -> &'tcx Substs<'tcx> where FR: FnMut(&ty::RegionParameterDef, &[Kind<'tcx>]) -> &'tcx ty::Region, FT: FnMut(&ty::TypeParameterDef<'tcx>, &[Kind<'tcx>]) -> Ty<'tcx> { - let defs = tcx.lookup_generics(def_id); + let defs = tcx.item_generics(def_id); let mut substs = Vec::with_capacity(defs.count()); Substs::fill_item(&mut substs, tcx, defs, &mut mk_region, &mut mk_type); tcx.intern_substs(&substs) } + pub fn extend_to(&self, + tcx: TyCtxt<'a, 'gcx, 'tcx>, + def_id: DefId, + mut mk_region: FR, + mut mk_type: FT) + -> &'tcx Substs<'tcx> + where FR: FnMut(&ty::RegionParameterDef, &[Kind<'tcx>]) -> &'tcx ty::Region, + FT: FnMut(&ty::TypeParameterDef<'tcx>, &[Kind<'tcx>]) -> Ty<'tcx> + { + let defs = tcx.item_generics(def_id); + let mut result = Vec::with_capacity(defs.count()); + result.extend(self[..].iter().cloned()); + Substs::fill_single(&mut result, defs, &mut mk_region, &mut mk_type); + tcx.intern_substs(&result) + } + fn fill_item(substs: &mut Vec>, tcx: TyCtxt<'a, 'gcx, 'tcx>, defs: &ty::Generics<'tcx>, @@ -192,10 +216,18 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> { FT: FnMut(&ty::TypeParameterDef<'tcx>, &[Kind<'tcx>]) -> Ty<'tcx> { if let Some(def_id) = defs.parent { - let parent_defs = tcx.lookup_generics(def_id); + let parent_defs = tcx.item_generics(def_id); Substs::fill_item(substs, tcx, parent_defs, mk_region, mk_type); } + Substs::fill_single(substs, defs, mk_region, mk_type) + } + fn fill_single(substs: &mut Vec>, + defs: &ty::Generics<'tcx>, + mk_region: &mut FR, + mk_type: &mut FT) + where FR: FnMut(&ty::RegionParameterDef, &[Kind<'tcx>]) -> &'tcx ty::Region, + FT: FnMut(&ty::TypeParameterDef<'tcx>, &[Kind<'tcx>]) -> Ty<'tcx> { // Handle Self first, before all regions. 
let mut types = defs.types.iter(); if defs.parent.is_none() && defs.has_self { @@ -271,9 +303,14 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> { source_ancestor: DefId, target_substs: &Substs<'tcx>) -> &'tcx Substs<'tcx> { - let defs = tcx.lookup_generics(source_ancestor); + let defs = tcx.item_generics(source_ancestor); tcx.mk_substs(target_substs.iter().chain(&self[defs.own_count()..]).cloned()) } + + pub fn truncate_to(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, generics: &ty::Generics<'tcx>) + -> &'tcx Substs<'tcx> { + tcx.mk_substs(self.iter().take(generics.count()).cloned()) + } } impl<'tcx> TypeFoldable<'tcx> for &'tcx Substs<'tcx> { @@ -519,7 +556,7 @@ impl<'a, 'gcx, 'tcx> ty::TraitRef<'tcx> { trait_id: DefId, substs: &Substs<'tcx>) -> ty::TraitRef<'tcx> { - let defs = tcx.lookup_generics(trait_id); + let defs = tcx.item_generics(trait_id); ty::TraitRef { def_id: trait_id, diff --git a/src/librustc/ty/trait_def.rs b/src/librustc/ty/trait_def.rs index 3ff2ed76e5..c6d862b23b 100644 --- a/src/librustc/ty/trait_def.rs +++ b/src/librustc/ty/trait_def.rs @@ -16,10 +16,12 @@ use ty::fast_reject; use ty::{Ty, TyCtxt, TraitRef}; use std::cell::{Cell, RefCell}; use hir; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; + +/// A trait's definition with type information. +pub struct TraitDef { + pub def_id: DefId, -/// As `TypeScheme` but for a trait ref. -pub struct TraitDef<'tcx> { pub unsafety: hir::Unsafety, /// If `true`, then this trait had the `#[rustc_paren_sugar]` @@ -28,15 +30,6 @@ pub struct TraitDef<'tcx> { /// be usable with the sugar (or without it). pub paren_sugar: bool, - /// Generic type definitions. Note that `Self` is listed in here - /// as having a single bound, the trait itself (e.g., in the trait - /// `Eq`, there is a single bound `Self : Eq`). This is so that - /// default methods get to assume that the `Self` parameters - /// implements the trait. - pub generics: &'tcx ty::Generics<'tcx>, - - pub trait_ref: ty::TraitRef<'tcx>, - // Impls of a trait. To allow for quicker lookup, the impls are indexed by a // simplified version of their `Self` type: impls with a simplifiable `Self` // are stored in `nonblanket_impls` keyed by it, while all other impls are @@ -55,7 +48,7 @@ pub struct TraitDef<'tcx> { /// Impls of the trait. nonblanket_impls: RefCell< - FnvHashMap> + FxHashMap> >, /// Blanket impls associated with the trait. 
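The `subst.rs` hunk above adds `identity_for_item`, `extend_to` and `truncate_to`. A short sketch of how the first and last compose; it leans on the assumption, visible in `fill_item`, that parent (trait) parameters are filled in before the item's own, and the helper name plus the assumed `use ty::subst::Substs;` import are illustrative:

```rust
// Sketch only: build identity substs for a trait method, then cut them down
// to just the enclosing trait's parameters.
fn trait_substs_of_method<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                    method_def_id: DefId,
                                    trait_def_id: DefId)
                                    -> &'tcx Substs<'tcx> {
    // Each generic parameter of the method mapped to itself.
    let substs = Substs::identity_for_item(tcx, method_def_id);
    // Keep only the leading parameters, i.e. those of the enclosing trait.
    substs.truncate_to(tcx, tcx.item_generics(trait_def_id))
}
```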
@@ -72,19 +65,17 @@ pub struct TraitDef<'tcx> { pub def_path_hash: u64, } -impl<'a, 'gcx, 'tcx> TraitDef<'tcx> { - pub fn new(unsafety: hir::Unsafety, +impl<'a, 'gcx, 'tcx> TraitDef { + pub fn new(def_id: DefId, + unsafety: hir::Unsafety, paren_sugar: bool, - generics: &'tcx ty::Generics<'tcx>, - trait_ref: ty::TraitRef<'tcx>, def_path_hash: u64) - -> TraitDef<'tcx> { + -> TraitDef { TraitDef { + def_id: def_id, paren_sugar: paren_sugar, unsafety: unsafety, - generics: generics, - trait_ref: trait_ref, - nonblanket_impls: RefCell::new(FnvHashMap()), + nonblanket_impls: RefCell::new(FxHashMap()), blanket_impls: RefCell::new(vec![]), flags: Cell::new(ty::TraitFlags::NO_TRAIT_FLAGS), specialization_graph: RefCell::new(traits::specialization_graph::Graph::new()), @@ -92,10 +83,6 @@ impl<'a, 'gcx, 'tcx> TraitDef<'tcx> { } } - pub fn def_id(&self) -> DefId { - self.trait_ref.def_id - } - // returns None if not yet calculated pub fn object_safety(&self) -> Option { if self.flags.get().intersects(TraitFlags::OBJECT_SAFETY_VALID) { @@ -117,11 +104,11 @@ impl<'a, 'gcx, 'tcx> TraitDef<'tcx> { } fn write_trait_impls(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) { - tcx.dep_graph.write(DepNode::TraitImpls(self.trait_ref.def_id)); + tcx.dep_graph.write(DepNode::TraitImpls(self.def_id)); } fn read_trait_impls(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>) { - tcx.dep_graph.read(DepNode::TraitImpls(self.trait_ref.def_id)); + tcx.dep_graph.read(DepNode::TraitImpls(self.def_id)); } /// Records a basic trait-to-implementation mapping. @@ -203,13 +190,13 @@ impl<'a, 'gcx, 'tcx> TraitDef<'tcx> { .insert(tcx, impl_def_id) } - pub fn ancestors(&'a self, of_impl: DefId) -> specialization_graph::Ancestors<'a, 'tcx> { + pub fn ancestors(&'a self, of_impl: DefId) -> specialization_graph::Ancestors<'a> { specialization_graph::ancestors(self, of_impl) } pub fn for_each_impl(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>, mut f: F) { self.read_trait_impls(tcx); - tcx.populate_implementations_for_trait_if_necessary(self.trait_ref.def_id); + tcx.populate_implementations_for_trait_if_necessary(self.def_id); for &impl_def_id in self.blanket_impls.borrow().iter() { f(impl_def_id); @@ -231,7 +218,7 @@ impl<'a, 'gcx, 'tcx> TraitDef<'tcx> { { self.read_trait_impls(tcx); - tcx.populate_implementations_for_trait_if_necessary(self.trait_ref.def_id); + tcx.populate_implementations_for_trait_if_necessary(self.def_id); for &impl_def_id in self.blanket_impls.borrow().iter() { f(impl_def_id); diff --git a/src/librustc/ty/util.rs b/src/librustc/ty/util.rs index cca4069ba5..b4ac6b9d25 100644 --- a/src/librustc/ty/util.rs +++ b/src/librustc/ty/util.rs @@ -11,23 +11,24 @@ //! misc. 
type-system utilities too small to deserve their own file use hir::def_id::DefId; +use hir::map::DefPathData; use infer::InferCtxt; use hir::map as ast_map; -use hir::pat_util; use traits::{self, Reveal}; use ty::{self, Ty, AdtKind, TyCtxt, TypeAndMut, TypeFlags, TypeFoldable}; use ty::{Disr, ParameterEnvironment}; use ty::fold::TypeVisitor; use ty::layout::{Layout, LayoutError}; use ty::TypeVariants::*; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; +use middle::lang_items; use rustc_const_math::{ConstInt, ConstIsize, ConstUsize}; +use rustc_data_structures::stable_hasher::{StableHasher, StableHasherResult}; use std::cell::RefCell; use std::cmp; -use std::hash::{Hash, Hasher}; -use std::collections::hash_map::DefaultHasher; +use std::hash::Hash; use std::intrinsics; use syntax::ast::{self, Name}; use syntax::attr::{self, SignedInt, UnsignedInt}; @@ -179,14 +180,6 @@ impl<'tcx> ParameterEnvironment<'tcx> { } impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { - pub fn pat_contains_ref_binding(self, pat: &hir::Pat) -> Option { - pat_util::pat_contains_ref_binding(pat) - } - - pub fn arm_contains_ref_binding(self, arm: &hir::Arm) -> Option { - pat_util::arm_contains_ref_binding(arm) - } - pub fn has_error_field(self, ty: Ty<'tcx>) -> bool { match ty.sty { ty::TyAdt(def, substs) => { @@ -356,7 +349,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// Creates a hash of the type `Ty` which will be the same no matter what crate /// context it's calculated within. This is used by the `type_id` intrinsic. pub fn type_id_hash(self, ty: Ty<'tcx>) -> u64 { - let mut hasher = TypeIdHasher::new(self, DefaultHasher::default()); + let mut hasher = TypeIdHasher::new(self); hasher.visit_ty(ty); hasher.finish() } @@ -373,7 +366,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { /// `adt` that do not strictly outlive the adt value itself. /// (This allows programs to make cyclic structures without /// resorting to unasfe means; see RFCs 769 and 1238). - pub fn is_adt_dtorck(self, adt: ty::AdtDef) -> bool { + pub fn is_adt_dtorck(self, adt: &ty::AdtDef) -> bool { let dtor_method = match adt.destructor() { Some(dtor) => dtor, None => return false @@ -390,98 +383,38 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> { // (e.g. calling `foo.0.clone()` of `Foo`). return !self.has_attr(dtor_method, "unsafe_destructor_blind_to_params"); } -} -/// When hashing a type this ends up affecting properties like symbol names. We -/// want these symbol names to be calculated independent of other factors like -/// what architecture you're compiling *from*. -/// -/// The hashing just uses the standard `Hash` trait, but the implementations of -/// `Hash` for the `usize` and `isize` types are *not* architecture independent -/// (e.g. they has 4 or 8 bytes). As a result we want to avoid `usize` and -/// `isize` completely when hashing. To ensure that these don't leak in we use a -/// custom hasher implementation here which inflates the size of these to a `u64` -/// and `i64`. -/// -/// The same goes for endianess: We always convert multi-byte integers to little -/// endian before hashing. 
-#[derive(Debug)] -pub struct ArchIndependentHasher { - inner: H, -} - -impl ArchIndependentHasher { - pub fn new(inner: H) -> ArchIndependentHasher { - ArchIndependentHasher { inner: inner } - } - - pub fn into_inner(self) -> H { - self.inner - } -} - -impl Hasher for ArchIndependentHasher { - fn write(&mut self, bytes: &[u8]) { - self.inner.write(bytes) - } - - fn finish(&self) -> u64 { - self.inner.finish() - } - - fn write_u8(&mut self, i: u8) { - self.inner.write_u8(i) - } - fn write_u16(&mut self, i: u16) { - self.inner.write_u16(i.to_le()) - } - fn write_u32(&mut self, i: u32) { - self.inner.write_u32(i.to_le()) - } - fn write_u64(&mut self, i: u64) { - self.inner.write_u64(i.to_le()) - } - fn write_usize(&mut self, i: usize) { - self.inner.write_u64((i as u64).to_le()) - } - fn write_i8(&mut self, i: i8) { - self.inner.write_i8(i) - } - fn write_i16(&mut self, i: i16) { - self.inner.write_i16(i.to_le()) - } - fn write_i32(&mut self, i: i32) { - self.inner.write_i32(i.to_le()) - } - fn write_i64(&mut self, i: i64) { - self.inner.write_i64(i.to_le()) - } - fn write_isize(&mut self, i: isize) { - self.inner.write_i64((i as i64).to_le()) - } -} - -pub struct TypeIdHasher<'a, 'gcx: 'a+'tcx, 'tcx: 'a, H> { - tcx: TyCtxt<'a, 'gcx, 'tcx>, - state: ArchIndependentHasher, -} - -impl<'a, 'gcx, 'tcx, H: Hasher> TypeIdHasher<'a, 'gcx, 'tcx, H> { - pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>, state: H) -> Self { - TypeIdHasher { - tcx: tcx, - state: ArchIndependentHasher::new(state), + pub fn closure_base_def_id(&self, def_id: DefId) -> DefId { + let mut def_id = def_id; + while self.def_key(def_id).disambiguated_data.data == DefPathData::ClosureExpr { + def_id = self.parent_def_id(def_id).unwrap_or_else(|| { + bug!("closure {:?} has no parent", def_id); + }); } + def_id + } +} + +pub struct TypeIdHasher<'a, 'gcx: 'a+'tcx, 'tcx: 'a, W> { + tcx: TyCtxt<'a, 'gcx, 'tcx>, + state: StableHasher, +} + +impl<'a, 'gcx, 'tcx, W> TypeIdHasher<'a, 'gcx, 'tcx, W> + where W: StableHasherResult +{ + pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>) -> Self { + TypeIdHasher { tcx: tcx, state: StableHasher::new() } + } + + pub fn finish(self) -> W { + self.state.finish() } pub fn hash(&mut self, x: T) { x.hash(&mut self.state); } - pub fn finish(self) -> u64 { - self.state.finish() - } - fn hash_discriminant_u8(&mut self, x: &T) { let v = unsafe { intrinsics::discriminant_value(x) @@ -501,13 +434,11 @@ impl<'a, 'gcx, 'tcx, H: Hasher> TypeIdHasher<'a, 'gcx, 'tcx, H> { pub fn def_path(&mut self, def_path: &ast_map::DefPath) { def_path.deterministic_hash_to(self.tcx, &mut self.state); } - - pub fn into_inner(self) -> H { - self.state.inner - } } -impl<'a, 'gcx, 'tcx, H: Hasher> TypeVisitor<'tcx> for TypeIdHasher<'a, 'gcx, 'tcx, H> { +impl<'a, 'gcx, 'tcx, W> TypeVisitor<'tcx> for TypeIdHasher<'a, 'gcx, 'tcx, W> + where W: StableHasherResult +{ fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool { // Distinguish between the Ty variants uniformly. self.hash_discriminant_u8(&ty.sty); @@ -527,11 +458,15 @@ impl<'a, 'gcx, 'tcx, H: Hasher> TypeVisitor<'tcx> for TypeIdHasher<'a, 'gcx, 'tc self.hash(f.unsafety); self.hash(f.abi); self.hash(f.sig.variadic()); - self.hash(f.sig.inputs().skip_binder().len()); + self.hash(f.sig.skip_binder().inputs().len()); } - TyTrait(ref data) => { - self.def_id(data.principal.def_id()); - self.hash(data.builtin_bounds); + TyDynamic(ref data, ..) 
=> { + if let Some(p) = data.principal() { + self.def_id(p.def_id()); + } + for d in data.auto_traits() { + self.def_id(d); + } } TyTuple(tys) => { self.hash(tys.len()); @@ -593,8 +528,8 @@ impl<'a, 'gcx, 'tcx, H: Hasher> TypeVisitor<'tcx> for TypeIdHasher<'a, 'gcx, 'tc impl<'a, 'tcx> ty::TyS<'tcx> { fn impls_bound(&'tcx self, tcx: TyCtxt<'a, 'tcx, 'tcx>, param_env: &ParameterEnvironment<'tcx>, - bound: ty::BuiltinBound, - cache: &RefCell, bool>>, + def_id: DefId, + cache: &RefCell, bool>>, span: Span) -> bool { if self.has_param_types() || self.has_self_ty() { @@ -605,7 +540,7 @@ impl<'a, 'tcx> ty::TyS<'tcx> { let result = tcx.infer_ctxt(None, Some(param_env.clone()), Reveal::ExactMatch) .enter(|infcx| { - traits::type_known_to_meet_builtin_bound(&infcx, self, bound, span) + traits::type_known_to_meet_bound(&infcx, self, def_id, span) }); if self.has_param_types() || self.has_self_ty() { cache.borrow_mut().insert(self, result); @@ -634,12 +569,13 @@ impl<'a, 'tcx> ty::TyS<'tcx> { mutbl: hir::MutMutable, .. }) => Some(true), - TyArray(..) | TySlice(..) | TyTrait(..) | TyTuple(..) | + TyArray(..) | TySlice(..) | TyDynamic(..) | TyTuple(..) | TyClosure(..) | TyAdt(..) | TyAnon(..) | TyProjection(..) | TyParam(..) | TyInfer(..) | TyError => None }.unwrap_or_else(|| { - !self.impls_bound(tcx, param_env, ty::BoundCopy, ¶m_env.is_copy_cache, span) - }); + !self.impls_bound(tcx, param_env, + tcx.require_lang_item(lang_items::CopyTraitLangItem), + ¶m_env.is_copy_cache, span) }); if !self.has_param_types() && !self.has_self_ty() { self.flags.set(self.flags.get() | if result { @@ -675,13 +611,13 @@ impl<'a, 'tcx> ty::TyS<'tcx> { TyBox(..) | TyRawPtr(..) | TyRef(..) | TyFnDef(..) | TyFnPtr(_) | TyArray(..) | TyTuple(..) | TyClosure(..) | TyNever => Some(true), - TyStr | TyTrait(..) | TySlice(_) => Some(false), + TyStr | TyDynamic(..) | TySlice(_) => Some(false), TyAdt(..) | TyProjection(..) | TyParam(..) | TyInfer(..) | TyAnon(..) | TyError => None }.unwrap_or_else(|| { - self.impls_bound(tcx, param_env, ty::BoundSized, ¶m_env.is_sized_cache, span) - }); + self.impls_bound(tcx, param_env, tcx.require_lang_item(lang_items::SizedTraitLangItem), + ¶m_env.is_sized_cache, span) }); if !self.has_param_types() && !self.has_self_ty() { self.flags.set(self.flags.get() | if result { @@ -765,7 +701,7 @@ impl<'a, 'tcx> ty::TyS<'tcx> { } } - fn same_struct_or_enum<'tcx>(ty: Ty<'tcx>, def: ty::AdtDef<'tcx>) -> bool { + fn same_struct_or_enum<'tcx>(ty: Ty<'tcx>, def: &'tcx ty::AdtDef) -> bool { match ty.sty { TyAdt(ty_def, _) => { ty_def == def diff --git a/src/librustc/ty/walk.rs b/src/librustc/ty/walk.rs index bebdebf127..3fa7a80314 100644 --- a/src/librustc/ty/walk.rs +++ b/src/librustc/ty/walk.rs @@ -12,17 +12,22 @@ //! WARNING: this does not keep track of the region depth. use ty::{self, Ty}; -use std::iter::Iterator; -use std::vec::IntoIter; +use rustc_data_structures::small_vec::SmallVec; +use rustc_data_structures::accumulate_vec::IntoIter as AccIntoIter; + +// The TypeWalker's stack is hot enough that it's worth going to some effort to +// avoid heap allocations. 
+pub type TypeWalkerArray<'tcx> = [Ty<'tcx>; 8]; +pub type TypeWalkerStack<'tcx> = SmallVec>; pub struct TypeWalker<'tcx> { - stack: Vec>, + stack: TypeWalkerStack<'tcx>, last_subtree: usize, } impl<'tcx> TypeWalker<'tcx> { pub fn new(ty: Ty<'tcx>) -> TypeWalker<'tcx> { - TypeWalker { stack: vec![ty], last_subtree: 1, } + TypeWalker { stack: SmallVec::one(ty), last_subtree: 1, } } /// Skips the subtree of types corresponding to the last type @@ -61,8 +66,8 @@ impl<'tcx> Iterator for TypeWalker<'tcx> { } } -pub fn walk_shallow<'tcx>(ty: Ty<'tcx>) -> IntoIter> { - let mut stack = vec![]; +pub fn walk_shallow<'tcx>(ty: Ty<'tcx>) -> AccIntoIter> { + let mut stack = SmallVec::new(); push_subtypes(&mut stack, ty); stack.into_iter() } @@ -73,7 +78,7 @@ pub fn walk_shallow<'tcx>(ty: Ty<'tcx>) -> IntoIter> { // known to be significant to any code, but it seems like the // natural order one would expect (basically, the order of the // types as they are written). -fn push_subtypes<'tcx>(stack: &mut Vec>, parent_ty: Ty<'tcx>) { +fn push_subtypes<'tcx>(stack: &mut TypeWalkerStack<'tcx>, parent_ty: Ty<'tcx>) { match parent_ty.sty { ty::TyBool | ty::TyChar | ty::TyInt(_) | ty::TyUint(_) | ty::TyFloat(_) | ty::TyStr | ty::TyInfer(_) | ty::TyParam(_) | ty::TyNever | ty::TyError => { @@ -87,18 +92,25 @@ fn push_subtypes<'tcx>(stack: &mut Vec>, parent_ty: Ty<'tcx>) { ty::TyProjection(ref data) => { stack.extend(data.trait_ref.substs.types().rev()); } - ty::TyTrait(ref obj) => { - stack.extend(obj.principal.input_types().rev()); - stack.extend(obj.projection_bounds.iter().map(|pred| { - pred.0.ty - }).rev()); + ty::TyDynamic(ref obj, ..) => { + stack.extend(obj.iter().rev().flat_map(|predicate| { + let (substs, opt_ty) = match *predicate.skip_binder() { + ty::ExistentialPredicate::Trait(tr) => (tr.substs, None), + ty::ExistentialPredicate::Projection(p) => + (p.trait_ref.substs, Some(p.ty)), + ty::ExistentialPredicate::AutoTrait(_) => + // Empty iterator + (ty::Substs::empty(), None), + }; + + substs.types().rev().chain(opt_ty) + })); } ty::TyAdt(_, substs) | ty::TyAnon(_, substs) => { stack.extend(substs.types().rev()); } ty::TyClosure(_, ref substs) => { - stack.extend(substs.func_substs.types().rev()); - stack.extend(substs.upvar_tys.iter().cloned().rev()); + stack.extend(substs.substs.types().rev()); } ty::TyTuple(ts) => { stack.extend(ts.iter().cloned().rev()); @@ -113,7 +125,7 @@ fn push_subtypes<'tcx>(stack: &mut Vec>, parent_ty: Ty<'tcx>) { } } -fn push_sig_subtypes<'tcx>(stack: &mut Vec>, sig: &ty::PolyFnSig<'tcx>) { - stack.push(sig.0.output); - stack.extend(sig.0.inputs.iter().cloned().rev()); +fn push_sig_subtypes<'tcx>(stack: &mut TypeWalkerStack<'tcx>, sig: &ty::PolyFnSig<'tcx>) { + stack.push(sig.skip_binder().output()); + stack.extend(sig.skip_binder().inputs().iter().cloned().rev()); } diff --git a/src/librustc/ty/wf.rs b/src/librustc/ty/wf.rs index 155fa4989e..bab9964651 100644 --- a/src/librustc/ty/wf.rs +++ b/src/librustc/ty/wf.rs @@ -17,7 +17,7 @@ use ty::{self, ToPredicate, Ty, TyCtxt, TypeFoldable}; use std::iter::once; use syntax::ast; use syntax_pos::Span; -use util::common::ErrorReported; +use middle::lang_items; /// Returns the set of obligations needed to make `ty` well-formed. 
/// If `ty` contains unresolved inference variables, this may include @@ -282,14 +282,11 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { fn require_sized(&mut self, subty: Ty<'tcx>, cause: traits::ObligationCauseCode<'tcx>) { if !subty.has_escaping_regions() { let cause = self.cause(cause); - match self.infcx.tcx.trait_ref_for_builtin_bound(ty::BoundSized, subty) { - Ok(trait_ref) => { - self.out.push( - traits::Obligation::new(cause, - trait_ref.to_predicate())); - } - Err(ErrorReported) => { } - } + let trait_ref = ty::TraitRef { + def_id: self.infcx.tcx.require_lang_item(lang_items::SizedTraitLangItem), + substs: self.infcx.tcx.mk_substs_trait(subty, &[]), + }; + self.out.push(traits::Obligation::new(cause, trait_ref.to_predicate())); } } @@ -298,7 +295,6 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { /// is WF. Returns false if `ty0` is an unresolved type variable, /// in which case we are not able to simplify at all. fn compute(&mut self, ty0: Ty<'tcx>) -> bool { - let tcx = self.infcx.tcx; let mut subtys = ty0.walk(); while let Some(ty) = subtys.next() { match ty.sty { @@ -377,12 +373,12 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { // of whatever returned this exact `impl Trait`. } - ty::TyTrait(ref data) => { + ty::TyDynamic(data, r) => { // WfObject // // Here, we defer WF checking due to higher-ranked // regions. This is perhaps not ideal. - self.from_object_ty(ty, data); + self.from_object_ty(ty, data, r); // FIXME(#27579) RFC also considers adding trait // obligations that don't refer to Self and @@ -391,15 +387,12 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { let cause = self.cause(traits::MiscObligation); let component_traits = - data.builtin_bounds.iter().flat_map(|bound| { - tcx.lang_items.from_builtin_kind(bound).ok() - }) - .chain(Some(data.principal.def_id())); + data.auto_traits().chain(data.principal().map(|p| p.def_id())); self.out.extend( - component_traits.map(|did| { traits::Obligation::new( + component_traits.map(|did| traits::Obligation::new( cause.clone(), ty::Predicate::ObjectSafe(did) - )}) + )) ); } @@ -446,7 +439,7 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { -> Vec> { let predicates = - self.infcx.tcx.lookup_predicates(def_id) + self.infcx.tcx.item_predicates(def_id) .instantiate(self.infcx.tcx, substs); let cause = self.cause(traits::ItemObligation(def_id)); predicates.predicates @@ -456,7 +449,9 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { .collect() } - fn from_object_ty(&mut self, ty: Ty<'tcx>, data: &ty::TraitObject<'tcx>) { + fn from_object_ty(&mut self, ty: Ty<'tcx>, + data: ty::Binder<&'tcx ty::Slice>>, + region: &'tcx ty::Region) { // Imagine a type like this: // // trait Foo { } @@ -491,11 +486,9 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { if !data.has_escaping_regions() { let implicit_bounds = - object_region_bounds(self.infcx.tcx, - data.principal, - data.builtin_bounds); + object_region_bounds(self.infcx.tcx, data); - let explicit_bound = data.region_bound; + let explicit_bound = region; for implicit_bound in implicit_bounds { let cause = self.cause(traits::ObjectTypeBound(ty, explicit_bound)); @@ -514,8 +507,7 @@ impl<'a, 'gcx, 'tcx> WfPredicates<'a, 'gcx, 'tcx> { /// `ty::required_region_bounds`, see that for more information. 
pub fn object_region_bounds<'a, 'gcx, 'tcx>( tcx: TyCtxt<'a, 'gcx, 'tcx>, - principal: ty::PolyExistentialTraitRef<'tcx>, - others: ty::BuiltinBounds) + existential_predicates: ty::Binder<&'tcx ty::Slice>>) -> Vec<&'tcx ty::Region> { // Since we don't actually *know* the self type for an object, @@ -523,8 +515,13 @@ pub fn object_region_bounds<'a, 'gcx, 'tcx>( // a skolemized type. let open_ty = tcx.mk_infer(ty::FreshTy(0)); - let mut predicates = others.to_predicates(tcx, open_ty); - predicates.push(principal.with_self_ty(tcx, open_ty).to_predicate()); + let predicates = existential_predicates.iter().filter_map(|predicate| { + if let ty::ExistentialPredicate::Projection(_) = *predicate.skip_binder() { + None + } else { + Some(predicate.with_self_ty(tcx, open_ty)) + } + }).collect(); tcx.required_region_bounds(open_ty, predicates) } diff --git a/src/librustc/util/common.rs b/src/librustc/util/common.rs index 7cd5fd78df..e01856b2a4 100644 --- a/src/librustc/util/common.rs +++ b/src/librustc/util/common.rs @@ -19,10 +19,6 @@ use std::iter::repeat; use std::path::Path; use std::time::{Duration, Instant}; -use hir; -use hir::intravisit; -use hir::intravisit::Visitor; - // The name of the associated type for `Fn` return types pub const FN_OUTPUT_NAME: &'static str = "Output"; @@ -186,57 +182,6 @@ pub fn indenter() -> Indenter { Indenter { _cannot_construct_outside_of_this_module: () } } -struct LoopQueryVisitor
<P>
-    where P: FnMut(&hir::Expr_) -> bool {
-    p: P,
-    flag: bool,
-}
-
-impl<'v, P> Visitor<'v> for LoopQueryVisitor<P>
-    where P: FnMut(&hir::Expr_) -> bool {
-    fn visit_expr(&mut self, e: &hir::Expr) {
-        self.flag |= (self.p)(&e.node);
-        match e.node {
-            // Skip inner loops, since a break in the inner loop isn't a
-            // break inside the outer loop
-            hir::ExprLoop(..) | hir::ExprWhile(..) => {}
-            _ => intravisit::walk_expr(self, e)
-        }
-    }
-}
-
-// Takes a predicate p, returns true iff p is true for any subexpressions
-// of b -- skipping any inner loops (loop, while, loop_body)
-pub fn loop_query<P>(b: &hir::Block, p: P) -> bool where P: FnMut(&hir::Expr_) -> bool {
-    let mut v = LoopQueryVisitor {
-        p: p,
-        flag: false,
-    };
-    intravisit::walk_block(&mut v, b);
-    return v.flag;
-}
-
-struct BlockQueryVisitor<P>
-    where P: FnMut(&hir::Expr) -> bool {
-    p: P,
-    flag: bool,
-}
-
-impl<'v, P> Visitor<'v> for BlockQueryVisitor<P>
-    where P: FnMut(&hir::Expr) -> bool {
-    fn visit_expr(&mut self, e: &hir::Expr) {
-        self.flag |= (self.p)(e);
-        intravisit::walk_expr(self, e)
-    }
-}
-
-// Takes a predicate p, returns true iff p is true for any subexpressions
-// of b -- skipping any inner loops (loop, while, loop_body)
-pub fn block_query<P>
(b: &hir::Block, p: P) -> bool where P: FnMut(&hir::Expr) -> bool { - let mut v = BlockQueryVisitor { - p: p, - flag: false, - }; - intravisit::walk_block(&mut v, &b); - return v.flag; -} - pub trait MemoizationMap { type Key: Clone; type Value: Clone; diff --git a/src/librustc/util/nodemap.rs b/src/librustc/util/nodemap.rs index 69bcc9cbff..b03011fcb2 100644 --- a/src/librustc/util/nodemap.rs +++ b/src/librustc/util/nodemap.rs @@ -15,17 +15,17 @@ use hir::def_id::DefId; use syntax::ast; -pub use rustc_data_structures::fnv::FnvHashMap; -pub use rustc_data_structures::fnv::FnvHashSet; +pub use rustc_data_structures::fx::FxHashMap; +pub use rustc_data_structures::fx::FxHashSet; -pub type NodeMap = FnvHashMap; -pub type DefIdMap = FnvHashMap; +pub type NodeMap = FxHashMap; +pub type DefIdMap = FxHashMap; -pub type NodeSet = FnvHashSet; -pub type DefIdSet = FnvHashSet; +pub type NodeSet = FxHashSet; +pub type DefIdSet = FxHashSet; -pub fn NodeMap() -> NodeMap { FnvHashMap() } -pub fn DefIdMap() -> DefIdMap { FnvHashMap() } -pub fn NodeSet() -> NodeSet { FnvHashSet() } -pub fn DefIdSet() -> DefIdSet { FnvHashSet() } +pub fn NodeMap() -> NodeMap { FxHashMap() } +pub fn DefIdMap() -> DefIdMap { FxHashMap() } +pub fn NodeSet() -> NodeSet { FxHashSet() } +pub fn DefIdSet() -> DefIdSet { FxHashSet() } diff --git a/src/librustc/util/ppaux.rs b/src/librustc/util/ppaux.rs index 31304fb7b4..38b38e5b49 100644 --- a/src/librustc/util/ppaux.rs +++ b/src/librustc/util/ppaux.rs @@ -16,17 +16,16 @@ use ty::{TyBool, TyChar, TyAdt}; use ty::{TyError, TyStr, TyArray, TySlice, TyFloat, TyFnDef, TyFnPtr}; use ty::{TyParam, TyRawPtr, TyRef, TyNever, TyTuple}; use ty::{TyClosure, TyProjection, TyAnon}; -use ty::{TyBox, TyTrait, TyInt, TyUint, TyInfer}; +use ty::{TyBox, TyDynamic, TyInt, TyUint, TyInfer}; use ty::{self, Ty, TyCtxt, TypeFoldable}; -use ty::fold::{TypeFolder, TypeVisitor}; use std::cell::Cell; use std::fmt; use std::usize; use syntax::abi::Abi; -use syntax::parse::token; use syntax::ast::CRATE_NODE_ID; +use syntax::symbol::Symbol; use hir; pub fn verbose() -> bool { @@ -105,7 +104,7 @@ pub fn parameterized(f: &mut fmt::Formatter, } } } - let mut generics = tcx.lookup_generics(item_def_id); + let mut generics = tcx.item_generics(item_def_id); let mut path_def_id = did; verbose = tcx.sess.verbose(); has_self = generics.has_self; @@ -115,7 +114,7 @@ pub fn parameterized(f: &mut fmt::Formatter, // Methods. assert!(is_value_path); child_types = generics.types.len(); - generics = tcx.lookup_generics(def_id); + generics = tcx.item_generics(def_id); num_regions = generics.regions.len(); num_types = generics.types.len(); @@ -284,7 +283,7 @@ fn in_binder<'a, 'gcx, 'tcx, T, U>(f: &mut fmt::Formatter, ty::BrAnon(_) | ty::BrFresh(_) | ty::BrEnv => { - let name = token::intern("'r"); + let name = Symbol::intern("'r"); let _ = write!(f, "{}", name); ty::BrNamed(tcx.map.local_def_id(CRATE_NODE_ID), name, @@ -298,75 +297,32 @@ fn in_binder<'a, 'gcx, 'tcx, T, U>(f: &mut fmt::Formatter, write!(f, "{}", new_value) } -/// This curious type is here to help pretty-print trait objects. In -/// a trait object, the projections are stored separately from the -/// main trait bound, but in fact we want to package them together -/// when printing out; they also have separate binders, but we want -/// them to share a binder when we print them out. (And the binder -/// pretty-printing logic is kind of clever and we don't want to -/// reproduce it.) So we just repackage up the structure somewhat. 
-/// -/// Right now there is only one trait in an object that can have -/// projection bounds, so we just stuff them altogether. But in -/// reality we should eventually sort things out better. -#[derive(Clone, Debug)] -struct TraitAndProjections<'tcx>(ty::TraitRef<'tcx>, - Vec>); - -impl<'tcx> TypeFoldable<'tcx> for TraitAndProjections<'tcx> { - fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self { - TraitAndProjections(self.0.fold_with(folder), self.1.fold_with(folder)) - } - - fn super_visit_with>(&self, visitor: &mut V) -> bool { - self.0.visit_with(visitor) || self.1.visit_with(visitor) - } -} - -impl<'tcx> fmt::Display for TraitAndProjections<'tcx> { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - let TraitAndProjections(ref trait_ref, ref projection_bounds) = *self; - parameterized(f, trait_ref.substs, - trait_ref.def_id, - projection_bounds) - } -} - -impl<'tcx> fmt::Display for ty::TraitObject<'tcx> { +impl<'tcx> fmt::Display for &'tcx ty::Slice> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { // Generate the main trait ref, including associated types. ty::tls::with(|tcx| { // Use a type that can't appear in defaults of type parameters. let dummy_self = tcx.mk_infer(ty::FreshTy(0)); - let principal = tcx.lift(&self.principal) - .expect("could not lift TraitRef for printing") - .with_self_ty(tcx, dummy_self).0; - let projections = self.projection_bounds.iter().map(|p| { - tcx.lift(p) - .expect("could not lift projection for printing") - .with_self_ty(tcx, dummy_self).0 - }).collect(); + if let Some(p) = self.principal() { + let principal = tcx.lift(&p).expect("could not lift TraitRef for printing") + .with_self_ty(tcx, dummy_self); + let projections = self.projection_bounds().map(|p| { + tcx.lift(&p) + .expect("could not lift projection for printing") + .with_self_ty(tcx, dummy_self) + }).collect::>(); + parameterized(f, principal.substs, principal.def_id, &projections)?; + } - let tap = ty::Binder(TraitAndProjections(principal, projections)); - in_binder(f, tcx, &ty::Binder(""), Some(tap)) + // Builtin bounds. + for did in self.auto_traits() { + write!(f, " + {}", tcx.item_path_str(did))?; + } + + Ok(()) })?; - // Builtin bounds. - for bound in &self.builtin_bounds { - write!(f, " + {:?}", bound)?; - } - - // FIXME: It'd be nice to compute from context when this bound - // is implied, but that's non-trivial -- we'd perhaps have to - // use thread-local data of some kind? There are also - // advantages to just showing the region, since it makes - // people aware that it's there. 
- let bound = self.region_bound.to_string(); - if !bound.is_empty() { - write!(f, " + {}", bound)?; - } - Ok(()) } } @@ -432,14 +388,15 @@ impl<'tcx> fmt::Debug for ty::ExistentialTraitRef<'tcx> { } } -impl<'tcx> fmt::Debug for ty::TraitDef<'tcx> { +impl fmt::Debug for ty::TraitDef { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - write!(f, "TraitDef(generics={:?}, trait_ref={:?})", - self.generics, self.trait_ref) + ty::tls::with(|tcx| { + write!(f, "{}", tcx.item_path_str(self.def_id)) + }) } } -impl<'tcx, 'container> fmt::Debug for ty::AdtDefData<'tcx, 'container> { +impl fmt::Debug for ty::AdtDef { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { ty::tls::with(|tcx| { write!(f, "{}", tcx.item_path_str(self.did)) @@ -453,41 +410,6 @@ impl<'tcx> fmt::Debug for ty::adjustment::Adjustment<'tcx> { } } -impl<'tcx> fmt::Debug for ty::TraitObject<'tcx> { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - let mut empty = true; - let mut maybe_continue = |f: &mut fmt::Formatter| { - if empty { - empty = false; - Ok(()) - } else { - write!(f, " + ") - } - }; - - maybe_continue(f)?; - write!(f, "{:?}", self.principal)?; - - let region_str = format!("{:?}", self.region_bound); - if !region_str.is_empty() { - maybe_continue(f)?; - write!(f, "{}", region_str)?; - } - - for bound in &self.builtin_bounds { - maybe_continue(f)?; - write!(f, "{:?}", bound)?; - } - - for projection_bound in &self.projection_bounds { - maybe_continue(f)?; - write!(f, "{:?}", projection_bound)?; - } - - Ok(()) - } -} - impl<'tcx> fmt::Debug for ty::Predicate<'tcx> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { @@ -670,35 +592,10 @@ impl<'tcx> fmt::Debug for ty::InstantiatedPredicates<'tcx> { } } -impl<'tcx> fmt::Debug for ty::ImplOrTraitItem<'tcx> { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - write!(f, "ImplOrTraitItem(")?; - match *self { - ty::ImplOrTraitItem::MethodTraitItem(ref i) => write!(f, "{:?}", i), - ty::ImplOrTraitItem::ConstTraitItem(ref i) => write!(f, "{:?}", i), - ty::ImplOrTraitItem::TypeTraitItem(ref i) => write!(f, "{:?}", i), - }?; - write!(f, ")") - } -} - impl<'tcx> fmt::Display for ty::FnSig<'tcx> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "fn")?; - fn_sig(f, &self.inputs, self.variadic, self.output) - } -} - -impl fmt::Display for ty::BuiltinBounds { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - let mut bounds = self.iter(); - if let Some(bound) = bounds.next() { - write!(f, "{:?}", bound)?; - for bound in bounds { - write!(f, " + {:?}", bound)?; - } - } - Ok(()) + fn_sig(f, self.inputs(), self.variadic, self.output()) } } @@ -728,7 +625,7 @@ impl fmt::Debug for ty::RegionVid { impl<'tcx> fmt::Debug for ty::FnSig<'tcx> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - write!(f, "({:?}; variadic: {})->{:?}", self.inputs, self.variadic, self.output) + write!(f, "({:?}; variadic: {})->{:?}", self.inputs(), self.variadic, self.output()) } } @@ -765,6 +662,12 @@ impl fmt::Debug for ty::IntVarValue { } }*/ +impl<'tcx> fmt::Display for ty::Binder<&'tcx ty::Slice>> { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + ty::tls::with(|tcx| in_binder(f, tcx, self, tcx.lift(self))) + } +} + impl<'tcx> fmt::Display for ty::Binder> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { ty::tls::with(|tcx| in_binder(f, tcx, self, tcx.lift(self))) @@ -877,20 +780,28 @@ impl<'tcx> fmt::Display for ty::TypeVariants<'tcx> { TyAdt(def, substs) => { ty::tls::with(|tcx| { if def.did.is_local() && - 
!tcx.tcache.borrow().contains_key(&def.did) { + !tcx.item_types.borrow().contains_key(&def.did) { write!(f, "{}<..>", tcx.item_path_str(def.did)) } else { parameterized(f, substs, def.did, &[]) } }) } - TyTrait(ref data) => write!(f, "{}", data), + TyDynamic(data, r) => { + write!(f, "{}", data)?; + let r = r.to_string(); + if !r.is_empty() { + write!(f, " + {}", r) + } else { + Ok(()) + } + } TyProjection(ref data) => write!(f, "{}", data), TyAnon(def_id, substs) => { ty::tls::with(|tcx| { // Grab the "TraitA + TraitB" from `impl TraitA + TraitB`, // by looking up the projections associated with the def_id. - let item_predicates = tcx.lookup_predicates(def_id); + let item_predicates = tcx.item_predicates(def_id); let substs = tcx.lift(&substs).unwrap_or_else(|| { tcx.intern_substs(&[]) }); @@ -919,13 +830,14 @@ impl<'tcx> fmt::Display for ty::TypeVariants<'tcx> { } TyStr => write!(f, "str"), TyClosure(did, substs) => ty::tls::with(|tcx| { + let upvar_tys = substs.upvar_tys(did, tcx); write!(f, "[closure")?; if let Some(node_id) = tcx.map.as_local_node_id(did) { write!(f, "@{:?}", tcx.map.span(node_id))?; let mut sep = " "; tcx.with_freevars(node_id, |freevars| { - for (freevar, upvar_ty) in freevars.iter().zip(substs.upvar_tys) { + for (freevar, upvar_ty) in freevars.iter().zip(upvar_tys) { let def_id = freevar.def.def_id(); let node_id = tcx.map.as_local_node_id(def_id).unwrap(); write!(f, @@ -942,7 +854,7 @@ impl<'tcx> fmt::Display for ty::TypeVariants<'tcx> { // visible in trans bug reports, I imagine. write!(f, "@{:?}", did)?; let mut sep = " "; - for (index, upvar_ty) in substs.upvar_tys.iter().enumerate() { + for (index, upvar_ty) in upvar_tys.enumerate() { write!(f, "{}{}:{}", sep, index, upvar_ty)?; sep = ", "; } @@ -995,20 +907,6 @@ impl fmt::Display for ty::InferTy { } } -impl<'tcx> fmt::Display for ty::ExplicitSelfCategory<'tcx> { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - f.write_str(match *self { - ty::ExplicitSelfCategory::Static => "static", - ty::ExplicitSelfCategory::ByValue => "self", - ty::ExplicitSelfCategory::ByReference(_, hir::MutMutable) => { - "&mut self" - } - ty::ExplicitSelfCategory::ByReference(_, hir::MutImmutable) => "&self", - ty::ExplicitSelfCategory::ByBox => "Box", - }) - } -} - impl fmt::Display for ty::ParamTy { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{}", self.name) diff --git a/src/librustc_back/lib.rs b/src/librustc_back/lib.rs index da5f787bdf..3dc577b3c6 100644 --- a/src/librustc_back/lib.rs +++ b/src/librustc_back/lib.rs @@ -36,7 +36,6 @@ #![feature(rand)] #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #![cfg_attr(test, feature(rand))] extern crate syntax; diff --git a/src/librustc_back/target/aarch64_linux_android.rs b/src/librustc_back/target/aarch64_linux_android.rs index 140195c780..54eead9498 100644 --- a/src/librustc_back/target/aarch64_linux_android.rs +++ b/src/librustc_back/target/aarch64_linux_android.rs @@ -10,6 +10,9 @@ use target::{Target, TargetOptions, TargetResult}; +// See https://developer.android.com/ndk/guides/abis.html#arm64-v8a +// for target ABI requirements. 
+ pub fn target() -> TargetResult { let mut base = super::android_base::opts(); base.max_atomic_width = Some(128); diff --git a/src/librustc_back/target/armv5te_unknown_linux_gnueabi.rs b/src/librustc_back/target/armv5te_unknown_linux_gnueabi.rs new file mode 100644 index 0000000000..37216e2076 --- /dev/null +++ b/src/librustc_back/target/armv5te_unknown_linux_gnueabi.rs @@ -0,0 +1,34 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use target::{Target, TargetOptions, TargetResult}; + +pub fn target() -> TargetResult { + let base = super::linux_base::opts(); + Ok(Target { + llvm_target: "armv5te-unknown-linux-gnueabi".to_string(), + target_endian: "little".to_string(), + target_pointer_width: "32".to_string(), + data_layout: "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64".to_string(), + arch: "arm".to_string(), + target_os: "linux".to_string(), + target_env: "gnu".to_string(), + target_vendor: "unknown".to_string(), + + options: TargetOptions { + features: "+soft-float".to_string(), + // No atomic instructions on ARMv5 + max_atomic_width: Some(0), + abi_blacklist: super::arm_base::abi_blacklist(), + .. base + } + }) +} + diff --git a/src/librustc_back/target/armv7_linux_androideabi.rs b/src/librustc_back/target/armv7_linux_androideabi.rs index 42f0deaa3f..36f409b794 100644 --- a/src/librustc_back/target/armv7_linux_androideabi.rs +++ b/src/librustc_back/target/armv7_linux_androideabi.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,9 +10,12 @@ use target::{Target, TargetOptions, TargetResult}; +// See https://developer.android.com/ndk/guides/abis.html#v7a +// for target ABI requirements. + pub fn target() -> TargetResult { let mut base = super::android_base::opts(); - base.features = "+v7,+thumb2,+vfp3,+d16".to_string(); + base.features = "+v7,+thumb2,+vfp3,+d16,-neon".to_string(); base.max_atomic_width = Some(64); Ok(Target { diff --git a/src/librustc_back/target/i686_linux_android.rs b/src/librustc_back/target/i686_linux_android.rs index a2c007d496..f8a8f5a350 100644 --- a/src/librustc_back/target/i686_linux_android.rs +++ b/src/librustc_back/target/i686_linux_android.rs @@ -10,6 +10,9 @@ use target::{Target, TargetResult}; +// See https://developer.android.com/ndk/guides/abis.html#x86 +// for target ABI requirements. + pub fn target() -> TargetResult { let mut base = super::android_base::opts(); diff --git a/src/librustc_back/target/i686_unknown_openbsd.rs b/src/librustc_back/target/i686_unknown_openbsd.rs new file mode 100644 index 0000000000..81efd37386 --- /dev/null +++ b/src/librustc_back/target/i686_unknown_openbsd.rs @@ -0,0 +1,30 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +use target::{Target, TargetResult}; + +pub fn target() -> TargetResult { + let mut base = super::openbsd_base::opts(); + base.cpu = "pentium4".to_string(); + base.max_atomic_width = Some(64); + base.pre_link_args.push("-m32".to_string()); + + Ok(Target { + llvm_target: "i686-unknown-openbsd".to_string(), + target_endian: "little".to_string(), + target_pointer_width: "32".to_string(), + data_layout: "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128".to_string(), + arch: "x86".to_string(), + target_os: "openbsd".to_string(), + target_env: "".to_string(), + target_vendor: "unknown".to_string(), + options: base, + }) +} diff --git a/src/librustc_back/target/linux_musl_base.rs b/src/librustc_back/target/linux_musl_base.rs index d55907aeed..18cca425a3 100644 --- a/src/librustc_back/target/linux_musl_base.rs +++ b/src/librustc_back/target/linux_musl_base.rs @@ -16,7 +16,6 @@ pub fn opts() -> TargetOptions { // Make sure that the linker/gcc really don't pull in anything, including // default objects, libs, etc. base.pre_link_args.push("-nostdlib".to_string()); - base.pre_link_args.push("-static".to_string()); // At least when this was tested, the linker would not add the // `GNU_EH_FRAME` program header to executables generated, which is required @@ -67,5 +66,8 @@ pub fn opts() -> TargetOptions { base.has_rpath = false; base.position_independent_executables = false; + // These targets statically link libc by default + base.crt_static_default = true; + base } diff --git a/src/librustc_back/target/mod.rs b/src/librustc_back/target/mod.rs index 4d9315a1a3..351d469ea2 100644 --- a/src/librustc_back/target/mod.rs +++ b/src/librustc_back/target/mod.rs @@ -145,6 +145,7 @@ supported_targets! { ("arm-unknown-linux-gnueabihf", arm_unknown_linux_gnueabihf), ("arm-unknown-linux-musleabi", arm_unknown_linux_musleabi), ("arm-unknown-linux-musleabihf", arm_unknown_linux_musleabihf), + ("armv5te-unknown-linux-gnueabi", armv5te_unknown_linux_gnueabi), ("armv7-unknown-linux-gnueabihf", armv7_unknown_linux_gnueabihf), ("armv7-unknown-linux-musleabihf", armv7_unknown_linux_musleabihf), ("aarch64-unknown-linux-gnu", aarch64_unknown_linux_gnu), @@ -167,7 +168,10 @@ supported_targets! { ("x86_64-unknown-dragonfly", x86_64_unknown_dragonfly), ("x86_64-unknown-bitrig", x86_64_unknown_bitrig), + + ("i686-unknown-openbsd", i686_unknown_openbsd), ("x86_64-unknown-openbsd", x86_64_unknown_openbsd), + ("x86_64-unknown-netbsd", x86_64_unknown_netbsd), ("x86_64-rumprun-netbsd", x86_64_rumprun_netbsd), @@ -298,6 +302,9 @@ pub struct TargetOptions { pub staticlib_suffix: String, /// OS family to use for conditional compilation. Valid options: "unix", "windows". pub target_family: Option, + /// Whether the target toolchain is like OpenBSD's. + /// Only useful for compiling against OpenBSD, for configuring abi when returning a struct. + pub is_like_openbsd: bool, /// Whether the target toolchain is like OSX's. Only useful for compiling against iOS/OS X, in /// particular running dsymutil and some other stuff like `-dead_strip`. Defaults to false. pub is_like_osx: bool, @@ -358,6 +365,11 @@ pub struct TargetOptions { // will 'just work'. pub obj_is_bitcode: bool, + // LLVM can't produce object files for this target. Instead, we'll make LLVM + // emit assembly and then use `gcc` to turn that assembly into an object + // file + pub no_integrated_as: bool, + /// Don't use this field; instead use the `.max_atomic_width()` method. 
pub max_atomic_width: Option, @@ -367,6 +379,9 @@ pub struct TargetOptions { /// A blacklist of ABIs unsupported by the current target. Note that generic /// ABIs are considered to be supported on all platforms and cannot be blacklisted. pub abi_blacklist: Vec, + + /// Whether or not the CRT is statically linked by default. + pub crt_static_default: bool, } impl Default for TargetOptions { @@ -394,6 +409,7 @@ impl Default for TargetOptions { staticlib_prefix: "lib".to_string(), staticlib_suffix: ".a".to_string(), target_family: None, + is_like_openbsd: false, is_like_osx: false, is_like_solaris: false, is_like_windows: false, @@ -415,9 +431,11 @@ impl Default for TargetOptions { allow_asm: true, has_elf_tls: false, obj_is_bitcode: false, + no_integrated_as: false, max_atomic_width: None, panic_strategy: PanicStrategy::Unwind, abi_blacklist: vec![], + crt_static_default: false, } } } @@ -558,6 +576,7 @@ impl Target { key!(staticlib_prefix); key!(staticlib_suffix); key!(target_family, optional); + key!(is_like_openbsd, bool); key!(is_like_osx, bool); key!(is_like_solaris, bool); key!(is_like_windows, bool); @@ -575,8 +594,10 @@ impl Target { key!(exe_allocation_crate); key!(has_elf_tls, bool); key!(obj_is_bitcode, bool); + key!(no_integrated_as, bool); key!(max_atomic_width, Option); try!(key!(panic_strategy, PanicStrategy)); + key!(crt_static_default, bool); if let Some(array) = obj.find("abi-blacklist").and_then(Json::as_array) { for name in array.iter().filter_map(|abi| abi.as_string()) { @@ -717,6 +738,7 @@ impl ToJson for Target { target_option_val!(staticlib_prefix); target_option_val!(staticlib_suffix); target_option_val!(target_family); + target_option_val!(is_like_openbsd); target_option_val!(is_like_osx); target_option_val!(is_like_solaris); target_option_val!(is_like_windows); @@ -734,8 +756,10 @@ impl ToJson for Target { target_option_val!(exe_allocation_crate); target_option_val!(has_elf_tls); target_option_val!(obj_is_bitcode); + target_option_val!(no_integrated_as); target_option_val!(max_atomic_width); target_option_val!(panic_strategy); + target_option_val!(crt_static_default); if default.abi_blacklist != self.options.abi_blacklist { d.insert("abi-blacklist".to_string(), self.options.abi_blacklist.iter() diff --git a/src/librustc_back/target/openbsd_base.rs b/src/librustc_back/target/openbsd_base.rs index 90e6631841..1f74170e39 100644 --- a/src/librustc_back/target/openbsd_base.rs +++ b/src/librustc_back/target/openbsd_base.rs @@ -17,6 +17,7 @@ pub fn opts() -> TargetOptions { executables: true, linker_is_gnu: true, has_rpath: true, + is_like_openbsd: true, pre_link_args: vec![ // GNU-style linkers will use this to omit linking to libraries // which don't actually fulfill any relocations, but only for diff --git a/src/librustc_borrowck/borrowck/check_loans.rs b/src/librustc_borrowck/borrowck/check_loans.rs index b2032e6a1b..5ed628d7dc 100644 --- a/src/librustc_borrowck/borrowck/check_loans.rs +++ b/src/librustc_borrowck/borrowck/check_loans.rs @@ -190,7 +190,7 @@ pub fn check_loans<'a, 'b, 'c, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, all_loans: &[Loan<'tcx>], fn_id: ast::NodeId, decl: &hir::FnDecl, - body: &hir::Block) { + body: &hir::Expr) { debug!("check_loans(body id={})", body.id); let param_env = ty::ParameterEnvironment::for_item(bccx.tcx, fn_id); diff --git a/src/librustc_borrowck/borrowck/fragments.rs b/src/librustc_borrowck/borrowck/fragments.rs index 515868c460..b0a1b34985 100644 --- a/src/librustc_borrowck/borrowck/fragments.rs +++ 
b/src/librustc_borrowck/borrowck/fragments.rs @@ -496,8 +496,8 @@ fn add_fragment_siblings_for_extension<'a, 'tcx>(this: &MoveData<'tcx>, }, ref ty => { - let opt_span = origin_id.and_then(|id|tcx.map.opt_span(id)); - span_bug!(opt_span.unwrap_or(DUMMY_SP), + let span = origin_id.map_or(DUMMY_SP, |id| tcx.map.span(id)); + span_bug!(span, "type {:?} ({:?}) is not fragmentable", parent_ty, ty); } diff --git a/src/librustc_borrowck/borrowck/gather_loans/gather_moves.rs b/src/librustc_borrowck/borrowck/gather_loans/gather_moves.rs index 51574868f9..2c277c04a5 100644 --- a/src/librustc_borrowck/borrowck/gather_loans/gather_moves.rs +++ b/src/librustc_borrowck/borrowck/gather_loans/gather_moves.rs @@ -98,7 +98,7 @@ pub fn gather_move_from_pat<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, move_pat: &hir::Pat, cmt: mc::cmt<'tcx>) { let pat_span_path_opt = match move_pat.node { - PatKind::Binding(_, ref path1, _) => { + PatKind::Binding(_, _, ref path1, _) => { Some(MoveSpanAndPath{span: move_pat.span, name: path1.node}) }, diff --git a/src/librustc_borrowck/borrowck/gather_loans/mod.rs b/src/librustc_borrowck/borrowck/gather_loans/mod.rs index 763c012a8f..5d59b58b84 100644 --- a/src/librustc_borrowck/borrowck/gather_loans/mod.rs +++ b/src/librustc_borrowck/borrowck/gather_loans/mod.rs @@ -30,7 +30,7 @@ use syntax_pos::Span; use rustc::hir; use rustc::hir::Expr; use rustc::hir::intravisit; -use rustc::hir::intravisit::Visitor; +use rustc::hir::intravisit::{Visitor, NestedVisitorMap}; use self::restrictions::RestrictionResult; @@ -42,7 +42,7 @@ mod move_error; pub fn gather_loans_in_fn<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, fn_id: NodeId, decl: &hir::FnDecl, - body: &hir::Block) + body: &hir::Expr) -> (Vec>, move_data::MoveData<'tcx>) { let mut glcx = GatherLoanCtxt { @@ -520,8 +520,12 @@ struct StaticInitializerCtxt<'a, 'tcx: 'a> { item_id: ast::NodeId } -impl<'a, 'tcx, 'v> Visitor<'v> for StaticInitializerCtxt<'a, 'tcx> { - fn visit_expr(&mut self, ex: &Expr) { +impl<'a, 'tcx> Visitor<'tcx> for StaticInitializerCtxt<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.bccx.tcx.map) + } + + fn visit_expr(&mut self, ex: &'tcx Expr) { if let hir::ExprAddrOf(mutbl, ref base) = ex.node { let param_env = ty::ParameterEnvironment::for_item(self.bccx.tcx, self.item_id); @@ -542,9 +546,9 @@ impl<'a, 'tcx, 'v> Visitor<'v> for StaticInitializerCtxt<'a, 'tcx> { } } -pub fn gather_loans_in_static_initializer(bccx: &mut BorrowckCtxt, - item_id: ast::NodeId, - expr: &hir::Expr) { +pub fn gather_loans_in_static_initializer<'a, 'tcx>(bccx: &mut BorrowckCtxt<'a, 'tcx>, + item_id: ast::NodeId, + expr: &'tcx hir::Expr) { debug!("gather_loans_in_static_initializer(expr={:?})", expr); diff --git a/src/librustc_borrowck/borrowck/mir/dataflow/graphviz.rs b/src/librustc_borrowck/borrowck/mir/dataflow/graphviz.rs index 28f5872386..8461f6d061 100644 --- a/src/librustc_borrowck/borrowck/mir/dataflow/graphviz.rs +++ b/src/librustc_borrowck/borrowck/mir/dataflow/graphviz.rs @@ -88,7 +88,7 @@ pub trait MirWithFlowState<'tcx> { } impl<'a, 'tcx: 'a, BD> MirWithFlowState<'tcx> for MirBorrowckCtxtPreDataflow<'a, 'tcx, BD> - where 'a, 'tcx: 'a, BD: BitDenotation> + where 'tcx: 'a, BD: BitDenotation> { type BD = BD; fn node_id(&self) -> NodeId { self.node_id } diff --git a/src/librustc_borrowck/borrowck/mir/dataflow/sanity_check.rs b/src/librustc_borrowck/borrowck/mir/dataflow/sanity_check.rs index b8c26a0512..916d17dcc9 100644 --- 
a/src/librustc_borrowck/borrowck/mir/dataflow/sanity_check.rs +++ b/src/librustc_borrowck/borrowck/mir/dataflow/sanity_check.rs @@ -169,7 +169,7 @@ fn is_rustc_peek<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, { let name = tcx.item_name(def_id); if abi == Abi::RustIntrinsic || abi == Abi::PlatformIntrinsic { - if name.as_str() == "rustc_peek" { + if name == "rustc_peek" { return Some((args, source_info.span)); } } diff --git a/src/librustc_borrowck/borrowck/mir/elaborate_drops.rs b/src/librustc_borrowck/borrowck/mir/elaborate_drops.rs index 191cd981b6..88e5bae483 100644 --- a/src/librustc_borrowck/borrowck/mir/elaborate_drops.rs +++ b/src/librustc_borrowck/borrowck/mir/elaborate_drops.rs @@ -21,7 +21,7 @@ use rustc::mir::*; use rustc::mir::transform::{Pass, MirPass, MirSource}; use rustc::middle::const_val::ConstVal; use rustc::middle::lang_items; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::FxHashMap; use rustc_data_structures::indexed_set::IdxSetBuf; use rustc_data_structures::indexed_vec::Idx; use syntax_pos::Span; @@ -63,7 +63,7 @@ impl<'tcx> MirPass<'tcx> for ElaborateDrops { env: &env, flow_inits: flow_inits, flow_uninits: flow_uninits, - drop_flags: FnvHashMap(), + drop_flags: FxHashMap(), patch: MirPatch::new(mir), }.elaborate() }; @@ -118,7 +118,7 @@ struct ElaborateDropsCtxt<'a, 'tcx: 'a> { env: &'a MoveDataParamEnv<'tcx>, flow_inits: DataflowResults>, flow_uninits: DataflowResults>, - drop_flags: FnvHashMap, + drop_flags: FxHashMap, patch: MirPatch<'tcx>, } @@ -442,7 +442,7 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { fn move_paths_for_fields(&self, base_lv: &Lvalue<'tcx>, variant_path: MovePathIndex, - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, substs: &'tcx Substs<'tcx>) -> Vec<(Lvalue<'tcx>, Option)> { @@ -481,54 +481,55 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { is_cleanup: bool) -> Vec { - let mut succ = succ; let mut unwind_succ = if is_cleanup { None } else { c.unwind }; - let mut update_drop_flag = true; + + let mut succ = self.new_block( + c, c.is_cleanup, TerminatorKind::Goto { target: succ } + ); + + // Always clear the "master" drop flag at the bottom of the + // ladder. This is needed because the "master" drop flag + // protects the ADT's discriminant, which is invalidated + // after the ADT is dropped. + self.set_drop_flag( + Location { block: succ, statement_index: 0 }, + c.path, + DropFlagState::Absent + ); fields.iter().rev().enumerate().map(|(i, &(ref lv, path))| { - let drop_block = match path { - Some(path) => { - debug!("drop_ladder: for std field {} ({:?})", i, lv); + succ = if let Some(path) = path { + debug!("drop_ladder: for std field {} ({:?})", i, lv); - self.elaborated_drop_block(&DropCtxt { - source_info: c.source_info, - is_cleanup: is_cleanup, - init_data: c.init_data, - lvalue: lv, - path: path, - succ: succ, - unwind: unwind_succ, - }) - } - None => { - debug!("drop_ladder: for rest field {} ({:?})", i, lv); + self.elaborated_drop_block(&DropCtxt { + source_info: c.source_info, + is_cleanup: is_cleanup, + init_data: c.init_data, + lvalue: lv, + path: path, + succ: succ, + unwind: unwind_succ, + }) + } else { + debug!("drop_ladder: for rest field {} ({:?})", i, lv); - let blk = self.complete_drop(&DropCtxt { - source_info: c.source_info, - is_cleanup: is_cleanup, - init_data: c.init_data, - lvalue: lv, - path: c.path, - succ: succ, - unwind: unwind_succ, - }, update_drop_flag); - - // the drop flag has been updated - updating - // it again would clobber it. 
- update_drop_flag = false; - - blk - } + self.complete_drop(&DropCtxt { + source_info: c.source_info, + is_cleanup: is_cleanup, + init_data: c.init_data, + lvalue: lv, + path: c.path, + succ: succ, + unwind: unwind_succ, + }, false) }; - succ = drop_block; unwind_succ = unwind_ladder.as_ref().map(|p| p[i]); - - drop_block + succ }).collect() } @@ -619,7 +620,7 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { fn open_drop_for_variant<'a>(&mut self, c: &DropCtxt<'a, 'tcx>, drop_block: &mut Option, - adt: ty::AdtDef<'tcx>, + adt: &'tcx ty::AdtDef, substs: &'tcx Substs<'tcx>, variant_index: usize) -> BasicBlock @@ -652,7 +653,7 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { } fn open_drop_for_adt<'a>(&mut self, c: &DropCtxt<'a, 'tcx>, - adt: ty::AdtDef<'tcx>, substs: &'tcx Substs<'tcx>) + adt: &'tcx ty::AdtDef, substs: &'tcx Substs<'tcx>) -> BasicBlock { debug!("open_drop_for_adt({:?}, {:?}, {:?})", c, adt, substs); @@ -709,9 +710,11 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { ty::TyAdt(def, substs) => { self.open_drop_for_adt(c, def, substs) } - ty::TyTuple(tys) | ty::TyClosure(_, ty::ClosureSubsts { - upvar_tys: tys, .. - }) => { + ty::TyClosure(def_id, substs) => { + let tys : Vec<_> = substs.upvar_tys(def_id, self.tcx).collect(); + self.open_drop_for_tuple(c, &tys) + } + ty::TyTuple(tys) => { self.open_drop_for_tuple(c, tys) } ty::TyBox(ty) => { @@ -855,10 +858,9 @@ impl<'b, 'tcx> ElaborateDropsCtxt<'b, 'tcx> { let tcx = self.tcx; let unit_temp = Lvalue::Local(self.patch.new_temp(tcx.mk_nil())); - let free_func = tcx.lang_items.require(lang_items::BoxFreeFnLangItem) - .unwrap_or_else(|e| tcx.sess.fatal(&e)); + let free_func = tcx.require_lang_item(lang_items::BoxFreeFnLangItem); let substs = tcx.mk_substs(iter::once(Kind::from(ty))); - let fty = tcx.lookup_item_type(free_func).ty.subst(tcx, substs); + let fty = tcx.item_type(free_func).subst(tcx, substs); self.patch.new_block(BasicBlockData { statements: statements, diff --git a/src/librustc_borrowck/borrowck/mir/gather_moves.rs b/src/librustc_borrowck/borrowck/mir/gather_moves.rs index 1dc5769e63..02064b52cb 100644 --- a/src/librustc_borrowck/borrowck/mir/gather_moves.rs +++ b/src/librustc_borrowck/borrowck/mir/gather_moves.rs @@ -11,7 +11,7 @@ use rustc::ty::{self, TyCtxt, ParameterEnvironment}; use rustc::mir::*; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::FxHashMap; use rustc_data_structures::indexed_vec::{IndexVec}; use syntax::codemap::DUMMY_SP; @@ -181,7 +181,7 @@ pub struct MovePathLookup<'tcx> { /// subsequent search so that it is solely relative to that /// base-lvalue). For the remaining lookup, we map the projection /// elem to the associated MovePathIndex. 
- projections: FnvHashMap<(MovePathIndex, AbstractElem<'tcx>), MovePathIndex> + projections: FxHashMap<(MovePathIndex, AbstractElem<'tcx>), MovePathIndex> } struct MoveDataBuilder<'a, 'tcx: 'a> { @@ -215,7 +215,7 @@ impl<'a, 'tcx> MoveDataBuilder<'a, 'tcx> { locals: mir.local_decls.indices().map(Lvalue::Local).map(|v| { Self::new_move_path(&mut move_paths, &mut path_map, None, v) }).collect(), - projections: FnvHashMap(), + projections: FxHashMap(), }, move_paths: move_paths, path_map: path_map, diff --git a/src/librustc_borrowck/borrowck/mir/mod.rs b/src/librustc_borrowck/borrowck/mir/mod.rs index cea9170da9..9035c2ab3c 100644 --- a/src/librustc_borrowck/borrowck/mir/mod.rs +++ b/src/librustc_borrowck/borrowck/mir/mod.rs @@ -11,7 +11,6 @@ use borrowck::BorrowckCtxt; use syntax::ast::{self, MetaItem}; -use syntax::ptr::P; use syntax_pos::{Span, DUMMY_SP}; use rustc::hir; @@ -35,7 +34,7 @@ use self::dataflow::{MaybeInitializedLvals, MaybeUninitializedLvals}; use self::dataflow::{DefinitelyInitializedLvals}; use self::gather_moves::{MoveData, MovePathIndex, LookupResult}; -fn has_rustc_mir_with(attrs: &[ast::Attribute], name: &str) -> Option> { +fn has_rustc_mir_with(attrs: &[ast::Attribute], name: &str) -> Option { for attr in attrs { if attr.check_name("rustc_mir") { let items = attr.meta_item_list(); @@ -58,7 +57,7 @@ pub struct MoveDataParamEnv<'tcx> { pub fn borrowck_mir(bcx: &mut BorrowckCtxt, fk: FnKind, _decl: &hir::FnDecl, - body: &hir::Block, + body: &hir::Expr, _sp: Span, id: ast::NodeId, attributes: &[ast::Attribute]) { diff --git a/src/librustc_borrowck/borrowck/mod.rs b/src/librustc_borrowck/borrowck/mod.rs index 2f74ea3e47..ecf5c3ef17 100644 --- a/src/librustc_borrowck/borrowck/mod.rs +++ b/src/librustc_borrowck/borrowck/mod.rs @@ -24,7 +24,7 @@ use self::InteriorKind::*; use rustc::dep_graph::DepNode; use rustc::hir::map as hir_map; -use rustc::hir::map::blocks::FnParts; +use rustc::hir::map::blocks::{FnParts, FnLikeNode}; use rustc::cfg; use rustc::middle::dataflow::DataFlowContext; use rustc::middle::dataflow::BitwiseOperator; @@ -47,9 +47,7 @@ use syntax_pos::{MultiSpan, Span}; use errors::DiagnosticBuilder; use rustc::hir; -use rustc::hir::{FnDecl, Block}; -use rustc::hir::intravisit; -use rustc::hir::intravisit::{Visitor, FnKind}; +use rustc::hir::intravisit::{self, Visitor, FnKind, NestedVisitorMap}; pub mod check_loans; @@ -64,9 +62,13 @@ pub struct LoanDataFlowOperator; pub type LoanDataFlow<'a, 'tcx> = DataFlowContext<'a, 'tcx, LoanDataFlowOperator>; -impl<'a, 'tcx, 'v> Visitor<'v> for BorrowckCtxt<'a, 'tcx> { - fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl, - b: &'v Block, s: Span, id: ast::NodeId) { +impl<'a, 'tcx> Visitor<'tcx> for BorrowckCtxt<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + + fn visit_fn(&mut self, fk: FnKind<'tcx>, fd: &'tcx hir::FnDecl, + b: hir::ExprId, s: Span, id: ast::NodeId) { match fk { FnKind::ItemFn(..) | FnKind::Method(..) 
=> { @@ -81,18 +83,18 @@ impl<'a, 'tcx, 'v> Visitor<'v> for BorrowckCtxt<'a, 'tcx> { } } - fn visit_item(&mut self, item: &hir::Item) { + fn visit_item(&mut self, item: &'tcx hir::Item) { borrowck_item(self, item); } - fn visit_trait_item(&mut self, ti: &hir::TraitItem) { + fn visit_trait_item(&mut self, ti: &'tcx hir::TraitItem) { if let hir::ConstTraitItem(_, Some(ref expr)) = ti.node { gather_loans::gather_loans_in_static_initializer(self, ti.id, &expr); } intravisit::walk_trait_item(self, ti); } - fn visit_impl_item(&mut self, ii: &hir::ImplItem) { + fn visit_impl_item(&mut self, ii: &'tcx hir::ImplItem) { if let hir::ImplItemKind::Const(_, ref expr) = ii.node { gather_loans::gather_loans_in_static_initializer(self, ii.id, &expr); } @@ -112,7 +114,7 @@ pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { } }; - tcx.visit_all_items_in_krate(DepNode::BorrowCheck, &mut bccx); + tcx.visit_all_item_likes_in_krate(DepNode::BorrowCheck, &mut bccx.as_deep_visitor()); if tcx.sess.borrowck_stats() { println!("--- borrowck stats ---"); @@ -133,7 +135,7 @@ pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { } } -fn borrowck_item(this: &mut BorrowckCtxt, item: &hir::Item) { +fn borrowck_item<'a, 'tcx>(this: &mut BorrowckCtxt<'a, 'tcx>, item: &'tcx hir::Item) { // Gather loans for items. Note that we don't need // to check loans for single expressions. The check // loan step is intended for things that have a data @@ -156,15 +158,17 @@ pub struct AnalysisData<'a, 'tcx: 'a> { pub move_data: move_data::FlowedMoveData<'a, 'tcx>, } -fn borrowck_fn(this: &mut BorrowckCtxt, - fk: FnKind, - decl: &hir::FnDecl, - body: &hir::Block, - sp: Span, - id: ast::NodeId, - attributes: &[ast::Attribute]) { +fn borrowck_fn<'a, 'tcx>(this: &mut BorrowckCtxt<'a, 'tcx>, + fk: FnKind<'tcx>, + decl: &'tcx hir::FnDecl, + body_id: hir::ExprId, + sp: Span, + id: ast::NodeId, + attributes: &[ast::Attribute]) { debug!("borrowck_fn(id={})", id); + let body = this.tcx.map.expr(body_id); + if attributes.iter().any(|item| item.check_name("rustc_mir_borrowck")) { this.with_temp_region_map(id, |this| { mir::borrowck_mir(this, fk, decl, body, sp, id, attributes) @@ -193,21 +197,21 @@ fn borrowck_fn(this: &mut BorrowckCtxt, decl, body); - intravisit::walk_fn(this, fk, decl, body, sp, id); + intravisit::walk_fn(this, fk, decl, body_id, sp, id); } fn build_borrowck_dataflow_data<'a, 'tcx>(this: &mut BorrowckCtxt<'a, 'tcx>, - fk: FnKind, - decl: &hir::FnDecl, + fk: FnKind<'tcx>, + decl: &'tcx hir::FnDecl, cfg: &cfg::CFG, - body: &hir::Block, + body: &'tcx hir::Expr, sp: Span, id: ast::NodeId) -> AnalysisData<'a, 'tcx> { // Check the body of fn items. let tcx = this.tcx; - let id_range = intravisit::compute_id_range_for_fn_body(fk, decl, body, sp, id); + let id_range = intravisit::compute_id_range_for_fn_body(fk, decl, body, sp, id, &tcx.map); let (all_loans, move_data) = gather_loans::gather_loans_in_fn(this, id, decl, body); @@ -243,7 +247,7 @@ fn build_borrowck_dataflow_data<'a, 'tcx>(this: &mut BorrowckCtxt<'a, 'tcx>, /// the `BorrowckCtxt` itself , e.g. the flowgraph visualizer. 
pub fn build_borrowck_dataflow_data_for_fn<'a, 'tcx>( tcx: TyCtxt<'a, 'tcx, 'tcx>, - fn_parts: FnParts<'a>, + fn_parts: FnParts<'tcx>, cfg: &cfg::CFG) -> (BorrowckCtxt<'a, 'tcx>, AnalysisData<'a, 'tcx>) { @@ -259,11 +263,13 @@ pub fn build_borrowck_dataflow_data_for_fn<'a, 'tcx>( } }; + let body = tcx.map.expr(fn_parts.body); + let dataflow_data = build_borrowck_dataflow_data(&mut bccx, fn_parts.kind, &fn_parts.decl, cfg, - &fn_parts.body, + body, fn_parts.span, fn_parts.id); @@ -409,8 +415,8 @@ pub fn closure_to_block(closure_id: ast::NodeId, tcx: TyCtxt) -> ast::NodeId { match tcx.map.get(closure_id) { hir_map::NodeExpr(expr) => match expr.node { - hir::ExprClosure(.., ref block, _) => { - block.id + hir::ExprClosure(.., body_id, _) => { + body_id.node_id() } _ => { bug!("encountered non-closure id: {}", closure_id) @@ -972,50 +978,14 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { pub fn note_and_explain_bckerr(&self, db: &mut DiagnosticBuilder, err: BckError<'tcx>, error_span: Span) { - let code = err.code; - match code { - err_mutbl => { - match err.cmt.note { - mc::NoteClosureEnv(upvar_id) | mc::NoteUpvarRef(upvar_id) => { - // If this is an `Fn` closure, it simply can't mutate upvars. - // If it's an `FnMut` closure, the original variable was declared immutable. - // We need to determine which is the case here. - let kind = match err.cmt.upvar().unwrap().cat { - Categorization::Upvar(mc::Upvar { kind, .. }) => kind, - _ => bug!() - }; - if kind == ty::ClosureKind::Fn { - db.span_help( - self.tcx.map.span(upvar_id.closure_expr_id), - "consider changing this closure to take \ - self by mutable reference"); - } - } - _ => { - if let Categorization::Local(local_id) = err.cmt.cat { - let span = self.tcx.map.span(local_id); - if let Ok(snippet) = self.tcx.sess.codemap().span_to_snippet(span) { - if snippet.starts_with("ref ") { - db.span_label(span, - &format!("use `{}` here to make mutable", - snippet.replace("ref ", "ref mut "))); - } else if snippet != "self" { - db.span_label(span, - &format!("use `mut {}` here to make mutable", snippet)); - } - } - db.span_label(error_span, &format!("cannot borrow mutably")); - } - } - } - } - + match err.code { + err_mutbl => self.note_and_explain_mutbl_error(db, &err, &error_span), err_out_of_scope(super_scope, sub_scope, cause) => { - let (value_kind, value_msg, is_temporary) = match err.cmt.cat { + let (value_kind, value_msg) = match err.cmt.cat { mc::Categorization::Rvalue(_) => - ("temporary value", "temporary value created here", true), + ("temporary value", "temporary value created here"), _ => - ("borrowed value", "does not live long enough", false) + ("borrowed value", "borrow occurs here") }; let is_closure = match cause { @@ -1028,14 +998,14 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { Some(primary) => { db.span = MultiSpan::from_span(s); db.span_label(primary, &format!("capture occurs here")); - db.span_label(s, &value_msg); + db.span_label(s, &"does not live long enough"); true } None => false } } _ => { - db.span_label(error_span, &value_msg); + db.span_label(error_span, &"does not live long enough"); false } }; @@ -1045,11 +1015,11 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { match (sub_span, super_span) { (Some(s1), Some(s2)) if s1 == s2 => { - if !is_temporary && !is_closure { + if !is_closure { db.span = MultiSpan::from_span(s1); - db.span_label(error_span, &format!("borrow occurs here")); + db.span_label(error_span, &value_msg); let msg = match opt_loan_path(&err.cmt) { - None => "borrowed value".to_string(), + None => 
value_kind.to_string(), Some(lp) => { format!("`{}`", self.loan_path_to_string(&lp)) } @@ -1062,17 +1032,16 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { db.note("values in a scope are dropped in the opposite order \ they are created"); } - (Some(s1), Some(s2)) if !is_temporary && !is_closure => { + (Some(s1), Some(s2)) if !is_closure => { db.span = MultiSpan::from_span(s2); - db.span_label(error_span, &format!("borrow occurs here")); + db.span_label(error_span, &value_msg); let msg = match opt_loan_path(&err.cmt) { - None => "borrowed value".to_string(), + None => value_kind.to_string(), Some(lp) => { format!("`{}`", self.loan_path_to_string(&lp)) } }; - db.span_label(s2, - &format!("{} dropped here while still borrowed", msg)); + db.span_label(s2, &format!("{} dropped here while still borrowed", msg)); db.span_label(s1, &format!("{} needs to live until here", value_kind)); } _ => { @@ -1131,6 +1100,86 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { } } + fn note_and_explain_mutbl_error(&self, db: &mut DiagnosticBuilder, err: &BckError<'tcx>, + error_span: &Span) { + match err.cmt.note { + mc::NoteClosureEnv(upvar_id) | mc::NoteUpvarRef(upvar_id) => { + // If this is an `Fn` closure, it simply can't mutate upvars. + // If it's an `FnMut` closure, the original variable was declared immutable. + // We need to determine which is the case here. + let kind = match err.cmt.upvar().unwrap().cat { + Categorization::Upvar(mc::Upvar { kind, .. }) => kind, + _ => bug!() + }; + if kind == ty::ClosureKind::Fn { + db.span_help(self.tcx.map.span(upvar_id.closure_expr_id), + "consider changing this closure to take \ + self by mutable reference"); + } + } + _ => { + if let Categorization::Deref(ref inner_cmt, ..) = err.cmt.cat { + if let Categorization::Local(local_id) = inner_cmt.cat { + let parent = self.tcx.map.get_parent_node(local_id); + let opt_fn_decl = FnLikeNode::from_node(self.tcx.map.get(parent)) + .map(|fn_like| fn_like.decl()); + + if let Some(fn_decl) = opt_fn_decl { + if let Some(ref arg) = fn_decl.inputs.iter() + .find(|ref arg| arg.pat.id == local_id) { + if let hir::TyRptr( + opt_lifetime, + hir::MutTy{mutbl: hir::Mutability::MutImmutable, ref ty}) = + arg.ty.node { + if let Some(lifetime) = opt_lifetime { + if let Ok(snippet) = self.tcx.sess.codemap() + .span_to_snippet(ty.span) { + if let Ok(lifetime_snippet) = self.tcx.sess.codemap() + .span_to_snippet(lifetime.span) { + db.span_label(arg.ty.span, + &format!("use `&{} mut {}` \ + here to make mutable", + lifetime_snippet, + snippet)); + } + } + } + else if let Ok(snippet) = self.tcx.sess.codemap() + .span_to_snippet(arg.ty.span) { + if snippet.starts_with("&") { + db.span_label(arg.ty.span, + &format!("use `{}` here to make mutable", + snippet.replace("&", "&mut "))); + } + } + } + } + } + } + } else if let Categorization::Local(local_id) = err.cmt.cat { + let span = self.tcx.map.span(local_id); + if let Ok(snippet) = self.tcx.sess.codemap().span_to_snippet(span) { + if snippet.starts_with("ref mut ") || snippet.starts_with("&mut ") { + db.span_label(*error_span, &format!("cannot reborrow mutably")); + db.span_label(*error_span, &format!("try removing `&mut` here")); + } else { + if snippet.starts_with("ref ") { + db.span_label(span, &format!("use `{}` here to make mutable", + snippet.replace("ref ", "ref mut "))); + } else if snippet != "self" { + db.span_label(span, + &format!("use `mut {}` here to make mutable", + snippet)); + } + db.span_label(*error_span, &format!("cannot borrow mutably")); + } + } else { + db.span_label(*error_span, 
&format!("cannot borrow mutably")); + } + } + } + } + } pub fn append_loan_path_to_string(&self, loan_path: &LoanPath<'tcx>, out: &mut String) { diff --git a/src/librustc_borrowck/borrowck/move_data.rs b/src/librustc_borrowck/borrowck/move_data.rs index ba036f1a8b..32bda5e116 100644 --- a/src/librustc_borrowck/borrowck/move_data.rs +++ b/src/librustc_borrowck/borrowck/move_data.rs @@ -23,7 +23,7 @@ use rustc::middle::expr_use_visitor as euv; use rustc::middle::expr_use_visitor::MutateMode; use rustc::middle::mem_categorization as mc; use rustc::ty::{self, TyCtxt}; -use rustc::util::nodemap::{FnvHashMap, NodeSet}; +use rustc::util::nodemap::{FxHashMap, NodeSet}; use std::cell::RefCell; use std::rc::Rc; @@ -41,7 +41,7 @@ pub struct MoveData<'tcx> { pub paths: RefCell>>, /// Cache of loan path to move path index, for easy lookup. - pub path_map: RefCell>, MovePathIndex>>, + pub path_map: RefCell>, MovePathIndex>>, /// Each move or uninitialized variable gets an entry here. pub moves: RefCell>, @@ -217,7 +217,7 @@ impl<'a, 'tcx> MoveData<'tcx> { pub fn new() -> MoveData<'tcx> { MoveData { paths: RefCell::new(Vec::new()), - path_map: RefCell::new(FnvHashMap()), + path_map: RefCell::new(FxHashMap()), moves: RefCell::new(Vec::new()), path_assignments: RefCell::new(Vec::new()), var_assignments: RefCell::new(Vec::new()), @@ -656,7 +656,7 @@ impl<'a, 'tcx> FlowedMoveData<'a, 'tcx> { cfg: &cfg::CFG, id_range: IdRange, decl: &hir::FnDecl, - body: &hir::Block) + body: &hir::Expr) -> FlowedMoveData<'a, 'tcx> { let mut dfcx_moves = DataFlowContext::new(tcx, diff --git a/src/librustc_borrowck/lib.rs b/src/librustc_borrowck/lib.rs index 2cd709dbd3..1ff232da42 100644 --- a/src/librustc_borrowck/lib.rs +++ b/src/librustc_borrowck/lib.rs @@ -19,14 +19,12 @@ #![allow(non_camel_case_types)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(quote)] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(staged_api)] #![feature(associated_consts)] #![feature(nonzero)] -#![cfg_attr(stage0, feature(question_mark))] #[macro_use] extern crate log; #[macro_use] extern crate syntax; extern crate syntax_pos; diff --git a/src/librustc_const_eval/_match.rs b/src/librustc_const_eval/_match.rs index 7f5eb31612..23771f4bae 100644 --- a/src/librustc_const_eval/_match.rs +++ b/src/librustc_const_eval/_match.rs @@ -17,14 +17,14 @@ use eval::{compare_const_vals}; use rustc_const_math::ConstInt; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::indexed_vec::Idx; use pattern::{FieldPattern, Pattern, PatternKind}; use pattern::{PatternFoldable, PatternFolder}; -use rustc::hir::def_id::{DefId}; -use rustc::hir::pat_util::def_to_path; +use rustc::hir::def::Def; +use rustc::hir::def_id::DefId; use rustc::ty::{self, Ty, TyCtxt, TypeFoldable}; use rustc::hir; @@ -39,7 +39,7 @@ use syntax_pos::{Span, DUMMY_SP}; use arena::TypedArena; -use std::cmp::Ordering; +use std::cmp::{self, Ordering}; use std::fmt; use std::iter::{FromIterator, IntoIterator, repeat}; @@ -160,7 +160,7 @@ pub struct MatchCheckCtxt<'a, 'tcx: 'a> { /// associated types to get field types. 
pub wild_pattern: &'a Pattern<'tcx>, pub pattern_arena: &'a TypedArena>, - pub byte_array_map: FnvHashMap<*const Pattern<'tcx>, Vec<&'a Pattern<'tcx>>>, + pub byte_array_map: FxHashMap<*const Pattern<'tcx>, Vec<&'a Pattern<'tcx>>>, } impl<'a, 'tcx> MatchCheckCtxt<'a, 'tcx> { @@ -181,7 +181,7 @@ impl<'a, 'tcx> MatchCheckCtxt<'a, 'tcx> { tcx: tcx, wild_pattern: &wild_pattern, pattern_arena: &pattern_arena, - byte_array_map: FnvHashMap(), + byte_array_map: FxHashMap(), }) } @@ -223,10 +223,8 @@ pub enum Constructor { Slice(usize), } -impl Constructor { - fn variant_for_adt<'tcx, 'container, 'a>(&self, - adt: &'a ty::AdtDefData<'tcx, 'container>) - -> &'a ty::VariantDefData<'tcx, 'container> { +impl<'tcx> Constructor { + fn variant_for_adt(&self, adt: &'tcx ty::AdtDef) -> &'tcx ty::VariantDef { match self { &Variant(vid) => adt.variant_with_id(vid), &Single => { @@ -324,6 +322,12 @@ impl Witness { ty::TyAdt(adt, _) => { let v = ctor.variant_for_adt(adt); + let qpath = hir::QPath::Resolved(None, P(hir::Path { + span: DUMMY_SP, + global: false, + def: Def::Err, + segments: vec![hir::PathSegment::from_name(v.name)].into(), + })); match v.ctor_kind { CtorKind::Fictive => { let field_pats: hir::HirVec<_> = v.fields.iter() @@ -338,16 +342,12 @@ impl Witness { } }).collect(); let has_more_fields = field_pats.len() < arity; - PatKind::Struct( - def_to_path(cx.tcx, v.did), field_pats, has_more_fields) + PatKind::Struct(qpath, field_pats, has_more_fields) } CtorKind::Fn => { - PatKind::TupleStruct( - def_to_path(cx.tcx, v.did), pats.collect(), None) - } - CtorKind::Const => { - PatKind::Path(None, def_to_path(cx.tcx, v.did)) + PatKind::TupleStruct(qpath, pats.collect(), None) } + CtorKind::Const => PatKind::Path(qpath) } } @@ -419,6 +419,99 @@ fn all_constructors(_cx: &mut MatchCheckCtxt, pcx: PatternContext) -> Vec( + _cx: &mut MatchCheckCtxt<'a, 'tcx>, + patterns: I) -> usize + where I: Iterator> +{ + // The exhaustiveness-checking paper does not include any details on + // checking variable-length slice patterns. However, they are matched + // by an infinite collection of fixed-length array patterns. + // + // Checking the infinite set directly would take an infinite amount + // of time. However, it turns out that for each finite set of + // patterns `P`, all sufficiently large array lengths are equivalent: + // + // Each slice `s` with a "sufficiently-large" length `l ≥ L` that applies + // to exactly the subset `Pₜ` of `P` can be transformed to a slice + // `sₘ` for each sufficiently-large length `m` that applies to exactly + // the same subset of `P`. + // + // Because of that, each witness for reachability-checking from one + // of the sufficiently-large lengths can be transformed to an + // equally-valid witness from any other length, so we only have + // to check slice lengths from the "minimal sufficiently-large length" + // and below. + // + // Note that the fact that there is a *single* `sₘ` for each `m` + // not depending on the specific pattern in `P` is important: if + // you look at the pair of patterns + // `[true, ..]` + // `[.., false]` + // Then any slice of length ≥1 that matches one of these two + // patterns can be be trivially turned to a slice of any + // other length ≥1 that matches them and vice-versa - for + // but the slice from length 2 `[false, true]` that matches neither + // of these patterns can't be turned to a slice from length 1 that + // matches neither of these patterns, so we have to consider + // slices from length 2 there. 
+ // + // Now, to see that that length exists and find it, observe that slice + // patterns are either "fixed-length" patterns (`[_, _, _]`) or + // "variable-length" patterns (`[_, .., _]`). + // + // For fixed-length patterns, all slices with lengths *longer* than + // the pattern's length have the same outcome (of not matching), so + // as long as `L` is greater than the pattern's length we can pick + // any `sₘ` from that length and get the same result. + // + // For variable-length patterns, the situation is more complicated, + // because as seen above the precise value of `sₘ` matters. + // + // However, for each variable-length pattern `p` with a prefix of length + // `plₚ` and suffix of length `slₚ`, only the first `plₚ` and the last + // `slₚ` elements are examined. + // + // Therefore, as long as `L` is positive (to avoid concerns about empty + // types), all elements after the maximum prefix length and before + // the maximum suffix length are not examined by any variable-length + // pattern, and therefore can be added/removed without affecting + // them - creating equivalent patterns from any sufficiently-large + // length. + // + // Of course, if fixed-length patterns exist, we must be sure + // that our length is large enough to miss them all, so + // we can pick `L = max(FIXED_LEN+1 ∪ {max(PREFIX_LEN) + max(SUFFIX_LEN)})` + // + // for example, with the above pair of patterns, all elements + // but the first and last can be added/removed, so any + // witness of length ≥2 (say, `[false, false, true]`) can be + // turned to a witness from any other length ≥2. + + let mut max_prefix_len = 0; + let mut max_suffix_len = 0; + let mut max_fixed_len = 0; + + for row in patterns { + match *row.kind { + PatternKind::Constant { value: ConstVal::ByteStr(ref data) } => { + max_fixed_len = cmp::max(max_fixed_len, data.len()); + } + PatternKind::Slice { ref prefix, slice: None, ref suffix } => { + let fixed_len = prefix.len() + suffix.len(); + max_fixed_len = cmp::max(max_fixed_len, fixed_len); + } + PatternKind::Slice { ref prefix, slice: Some(_), ref suffix } => { + max_prefix_len = cmp::max(max_prefix_len, prefix.len()); + max_suffix_len = cmp::max(max_suffix_len, suffix.len()); + } + _ => {} + } + } + + cmp::max(max_fixed_len + 1, max_prefix_len + max_suffix_len) +} + /// Algorithm from http://moscova.inria.fr/~maranget/papers/warn/index.html /// /// Whether a vector `v` of patterns is 'useful' in relation to a set of such @@ -453,16 +546,12 @@ pub fn is_useful<'a, 'tcx>(cx: &mut MatchCheckCtxt<'a, 'tcx>, let &Matrix(ref rows) = matrix; assert!(rows.iter().all(|r| r.len() == v.len())); + + let pcx = PatternContext { ty: rows.iter().map(|r| r[0].ty).find(|ty| !ty.references_error()) .unwrap_or(v[0].ty), - max_slice_length: rows.iter().filter_map(|row| match *row[0].kind { - PatternKind::Slice { ref prefix, slice: _, ref suffix } => - Some(prefix.len() + suffix.len()), - PatternKind::Constant { value: ConstVal::ByteStr(ref data) } => - Some(data.len()), - _ => None - }).max().map_or(0, |v| v + 1) + max_slice_length: max_slice_length(cx, rows.iter().map(|r| r[0]).chain(Some(v[0]))) }; debug!("is_useful_expand_first_col: pcx={:?}, expanding {:?}", pcx, v[0]); diff --git a/src/librustc_const_eval/check_match.rs b/src/librustc_const_eval/check_match.rs index 615aca90db..786b59e818 100644 --- a/src/librustc_const_eval/check_match.rs +++ b/src/librustc_const_eval/check_match.rs @@ -19,8 +19,6 @@ use eval::report_const_eval_err; use rustc::dep_graph::DepNode; -use 
rustc::hir::pat_util::{pat_bindings, pat_contains_bindings}; - use rustc::middle::expr_use_visitor::{ConsumeMode, Delegate, ExprUseVisitor}; use rustc::middle::expr_use_visitor::{LoanCause, MutateMode}; use rustc::middle::expr_use_visitor as euv; @@ -31,7 +29,7 @@ use rustc::ty::{self, TyCtxt}; use rustc_errors::DiagnosticBuilder; use rustc::hir::def::*; -use rustc::hir::intravisit::{self, Visitor, FnKind}; +use rustc::hir::intravisit::{self, Visitor, FnKind, NestedVisitorMap}; use rustc::hir::print::pat_to_string; use rustc::hir::{self, Pat, PatKind}; @@ -43,12 +41,16 @@ use syntax_pos::Span; struct OuterVisitor<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx> } -impl<'a, 'v, 'tcx> Visitor<'v> for OuterVisitor<'a, 'tcx> { - fn visit_expr(&mut self, _expr: &hir::Expr) { +impl<'a, 'tcx> Visitor<'tcx> for OuterVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::None + } + + fn visit_expr(&mut self, _expr: &'tcx hir::Expr) { return // const, static and N in [T; N] - shouldn't contain anything } - fn visit_trait_item(&mut self, item: &hir::TraitItem) { + fn visit_trait_item(&mut self, item: &'tcx hir::TraitItem) { if let hir::ConstTraitItem(..) = item.node { return // nothing worth match checking in a constant } else { @@ -56,7 +58,7 @@ impl<'a, 'v, 'tcx> Visitor<'v> for OuterVisitor<'a, 'tcx> { } } - fn visit_impl_item(&mut self, item: &hir::ImplItem) { + fn visit_impl_item(&mut self, item: &'tcx hir::ImplItem) { if let hir::ImplItemKind::Const(..) = item.node { return // nothing worth match checking in a constant } else { @@ -64,8 +66,8 @@ impl<'a, 'v, 'tcx> Visitor<'v> for OuterVisitor<'a, 'tcx> { } } - fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v hir::FnDecl, - b: &'v hir::Block, s: Span, id: ast::NodeId) { + fn visit_fn(&mut self, fk: FnKind<'tcx>, fd: &'tcx hir::FnDecl, + b: hir::ExprId, s: Span, id: ast::NodeId) { if let FnKind::Closure(..) 
= fk { span_bug!(s, "check_match: closure outside of function") } @@ -78,7 +80,8 @@ impl<'a, 'v, 'tcx> Visitor<'v> for OuterVisitor<'a, 'tcx> { } pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { - tcx.visit_all_items_in_krate(DepNode::MatchCheck, &mut OuterVisitor { tcx: tcx }); + tcx.visit_all_item_likes_in_krate(DepNode::MatchCheck, + &mut OuterVisitor { tcx: tcx }.as_deep_visitor()); tcx.sess.abort_if_errors(); } @@ -91,8 +94,12 @@ struct MatchVisitor<'a, 'tcx: 'a> { param_env: &'a ty::ParameterEnvironment<'tcx> } -impl<'a, 'tcx, 'v> Visitor<'v> for MatchVisitor<'a, 'tcx> { - fn visit_expr(&mut self, ex: &hir::Expr) { +impl<'a, 'tcx> Visitor<'tcx> for MatchVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + + fn visit_expr(&mut self, ex: &'tcx hir::Expr) { intravisit::walk_expr(self, ex); match ex.node { @@ -103,7 +110,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for MatchVisitor<'a, 'tcx> { } } - fn visit_local(&mut self, loc: &hir::Local) { + fn visit_local(&mut self, loc: &'tcx hir::Local) { intravisit::walk_local(self, loc); self.check_irrefutable(&loc.pat, false); @@ -112,8 +119,8 @@ impl<'a, 'tcx, 'v> Visitor<'v> for MatchVisitor<'a, 'tcx> { self.check_patterns(false, slice::ref_slice(&loc.pat)); } - fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v hir::FnDecl, - b: &'v hir::Block, s: Span, n: ast::NodeId) { + fn visit_fn(&mut self, fk: FnKind<'tcx>, fd: &'tcx hir::FnDecl, + b: hir::ExprId, s: Span, n: ast::NodeId) { intravisit::walk_fn(self, fk, fd, b, s, n); for input in &fd.inputs { @@ -203,7 +210,7 @@ impl<'a, 'tcx> MatchVisitor<'a, 'tcx> { // Check for empty enum, because is_useful only works on inhabited types. let pat_ty = self.tcx.tables().node_id_to_type(scrut.id); if inlined_arms.is_empty() { - if !pat_ty.is_uninhabited(self.tcx) { + if !pat_ty.is_uninhabited(Some(scrut.id), self.tcx) { // We know the type is inhabited, so this must be wrong let mut err = create_e0004(self.tcx.sess, span, format!("non-exhaustive patterns: type {} \ @@ -261,26 +268,22 @@ impl<'a, 'tcx> MatchVisitor<'a, 'tcx> { fn check_for_bindings_named_the_same_as_variants(cx: &MatchVisitor, pat: &Pat) { pat.walk(|p| { - if let PatKind::Binding(hir::BindByValue(hir::MutImmutable), name, None) = p.node { + if let PatKind::Binding(hir::BindByValue(hir::MutImmutable), _, name, None) = p.node { let pat_ty = cx.tcx.tables().pat_ty(p); if let ty::TyAdt(edef, _) = pat_ty.sty { - if edef.is_enum() { - if let Def::Local(..) 
= cx.tcx.expect_def(p.id) { - if edef.variants.iter().any(|variant| { - variant.name == name.node && variant.ctor_kind == CtorKind::Const - }) { - let ty_path = cx.tcx.item_path_str(edef.did); - let mut err = struct_span_warn!(cx.tcx.sess, p.span, E0170, - "pattern binding `{}` is named the same as one \ - of the variants of the type `{}`", - name.node, ty_path); - help!(err, - "if you meant to match on a variant, \ - consider making the path in the pattern qualified: `{}::{}`", - ty_path, name.node); - err.emit(); - } - } + if edef.is_enum() && edef.variants.iter().any(|variant| { + variant.name == name.node && variant.ctor_kind == CtorKind::Const + }) { + let ty_path = cx.tcx.item_path_str(edef.did); + let mut err = struct_span_warn!(cx.tcx.sess, p.span, E0170, + "pattern binding `{}` is named the same as one \ + of the variants of the type `{}`", + name.node, ty_path); + help!(err, + "if you meant to match on a variant, \ + consider making the path in the pattern qualified: `{}::{}`", + ty_path, name.node); + err.emit(); } } } @@ -289,13 +292,13 @@ fn check_for_bindings_named_the_same_as_variants(cx: &MatchVisitor, pat: &Pat) { } /// Checks for common cases of "catchall" patterns that may not be intended as such. -fn pat_is_catchall(dm: &DefMap, pat: &Pat) -> bool { +fn pat_is_catchall(pat: &Pat) -> bool { match pat.node { PatKind::Binding(.., None) => true, - PatKind::Binding(.., Some(ref s)) => pat_is_catchall(dm, s), - PatKind::Ref(ref s, _) => pat_is_catchall(dm, s), + PatKind::Binding(.., Some(ref s)) => pat_is_catchall(s), + PatKind::Ref(ref s, _) => pat_is_catchall(s), PatKind::Tuple(ref v, _) => v.iter().all(|p| { - pat_is_catchall(dm, &p) + pat_is_catchall(&p) }), _ => false } @@ -373,7 +376,7 @@ fn check_arms<'a, 'tcx>(cx: &mut MatchCheckCtxt<'a, 'tcx>, } if guard.is_none() { seen.push(v); - if catchall.is_none() && pat_is_catchall(&cx.tcx.def_map.borrow(), hir_pat) { + if catchall.is_none() && pat_is_catchall(hir_pat) { catchall = Some(pat.span); } } @@ -453,7 +456,7 @@ fn check_legality_of_move_bindings(cx: &MatchVisitor, pats: &[P]) { let mut by_ref_span = None; for pat in pats { - pat_bindings(&pat, |bm, _, span, _path| { + pat.each_binding(|bm, _, span, _path| { if let hir::BindByRef(..) = bm { by_ref_span = Some(span); } @@ -464,7 +467,7 @@ fn check_legality_of_move_bindings(cx: &MatchVisitor, // check legality of moving out of the enum // x @ Foo(..) is legal, but x @ Foo(y) isn't. 
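For readers unfamiliar with the E0007 check being updated here, a minimal user-level example of the `x @ Foo(..)` versus `x @ Foo(y)` distinction mentioned in the comment (the enum and the strings are invented for illustration):

```rust
enum Foo {
    Bar(String),
}

fn main() {
    let foo = Foo::Bar(String::from("hello"));
    match foo {
        // Legal: `x` moves the whole value; the sub-pattern binds nothing.
        x @ Foo::Bar(..) => drop(x),
    }

    // Rejected with E0007 ("cannot bind by-move with sub-bindings"):
    // `x` would move the whole `Foo` while `y` simultaneously moves the
    // `String` out of it.
    //
    // match Foo::Bar(String::from("hello")) {
    //     x @ Foo::Bar(y) => { drop(x); drop(y); }
    // }
}
```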
- if sub.map_or(false, |p| pat_contains_bindings(&p)) { + if sub.map_or(false, |p| p.contains_bindings()) { struct_span_err!(cx.tcx.sess, p.span, E0007, "cannot bind by-move with sub-bindings") .span_label(p.span, &format!("binds an already bound by-move value by moving it")) @@ -485,7 +488,7 @@ fn check_legality_of_move_bindings(cx: &MatchVisitor, for pat in pats { pat.walk(|p| { - if let PatKind::Binding(hir::BindByValue(..), _, ref sub) = p.node { + if let PatKind::Binding(hir::BindByValue(..), _, _, ref sub) = p.node { let pat_ty = cx.tcx.tables().node_id_to_type(p.id); if pat_ty.moves_by_default(cx.tcx, cx.param_env, pat.span) { check_move(p, sub.as_ref().map(|p| &**p)); @@ -562,6 +565,10 @@ struct AtBindingPatternVisitor<'a, 'b:'a, 'tcx:'b> { } impl<'a, 'b, 'tcx, 'v> Visitor<'v> for AtBindingPatternVisitor<'a, 'b, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'v> { + NestedVisitorMap::None + } + fn visit_pat(&mut self, pat: &Pat) { match pat.node { PatKind::Binding(.., ref subpat) => { diff --git a/src/librustc_const_eval/diagnostics.rs b/src/librustc_const_eval/diagnostics.rs index db72057636..83b0d9dec6 100644 --- a/src/librustc_const_eval/diagnostics.rs +++ b/src/librustc_const_eval/diagnostics.rs @@ -40,7 +40,9 @@ Ensure the ordering of the match arm is correct and remove any superfluous arms. "##, -/*E0002: r##" +E0002: r##" +## Note: this error code is no longer emitted by the compiler. + This error indicates that an empty match expression is invalid because the type it is matching on is non-empty (there exist values of this type). In safe code it is impossible to create an instance of an empty type, so empty match @@ -68,10 +70,11 @@ fn foo(x: Option) { } } ``` -"##,*/ +"##, +E0003: r##" +## Note: this error code is no longer emitted by the compiler. -/*E0003: r##" Not-a-Number (NaN) values cannot be compared for equality and hence can never match the input to a match expression. So, the following will not compile: @@ -98,7 +101,6 @@ match number { } ``` "##, -*/ E0004: r##" This error indicates that the compiler cannot guarantee a matching pattern for diff --git a/src/librustc_const_eval/eval.rs b/src/librustc_const_eval/eval.rs index 57a5400eca..9fcab12398 100644 --- a/src/librustc_const_eval/eval.rs +++ b/src/librustc_const_eval/eval.rs @@ -19,9 +19,8 @@ use rustc::hir::map as ast_map; use rustc::hir::map::blocks::FnLikeNode; use rustc::middle::cstore::InlinedItem; use rustc::traits; -use rustc::hir::def::{Def, CtorKind, PathResolution}; +use rustc::hir::def::{Def, CtorKind}; use rustc::hir::def_id::DefId; -use rustc::hir::pat_util::def_to_path; use rustc::ty::{self, Ty, TyCtxt}; use rustc::ty::util::IntTypeExt; use rustc::ty::subst::Substs; @@ -34,7 +33,6 @@ use graphviz::IntoCow; use syntax::ast; use rustc::hir::{Expr, PatKind}; use rustc::hir; -use rustc::hir::intravisit::FnKind; use syntax::ptr::P; use syntax::codemap; use syntax::attr::IntType; @@ -42,7 +40,6 @@ use syntax_pos::{self, Span}; use std::borrow::Cow; use std::cmp::Ordering; -use std::collections::hash_map::Entry::Vacant; use rustc_const_math::*; use rustc_errors::DiagnosticBuilder; @@ -105,14 +102,16 @@ pub fn lookup_const_by_id<'a, 'tcx: 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, _ => None }, Some(ast_map::NodeTraitItem(ti)) => match ti.node { - hir::ConstTraitItem(..) 
=> { + hir::ConstTraitItem(ref ty, ref expr_option) => { if let Some(substs) = substs { // If we have a trait item and the substitutions for it, // `resolve_trait_associated_const` will select an impl // or the default. let trait_id = tcx.map.get_parent(node_id); let trait_id = tcx.map.local_def_id(trait_id); - resolve_trait_associated_const(tcx, ti, trait_id, substs) + let default_value = expr_option.as_ref() + .map(|expr| (&**expr, tcx.ast_ty_to_prim_ty(ty))); + resolve_trait_associated_const(tcx, def_id, default_value, trait_id, substs) } else { // Technically, without knowing anything about the // expression that generates the obligation, we could @@ -143,33 +142,31 @@ pub fn lookup_const_by_id<'a, 'tcx: 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } let mut used_substs = false; let expr_ty = match tcx.sess.cstore.maybe_get_item_ast(tcx, def_id) { - Some((&InlinedItem::Item(_, ref item), _)) => match item.node { - hir::ItemConst(ref ty, ref const_expr) => { - Some((&**const_expr, tcx.ast_ty_to_prim_ty(ty))) - }, - _ => None - }, - Some((&InlinedItem::TraitItem(trait_id, ref ti), _)) => match ti.node { - hir::ConstTraitItem(..) => { + Some((&InlinedItem { body: ref const_expr, .. }, _)) => { + Some((&**const_expr, Some(tcx.sess.cstore.item_type(tcx, def_id)))) + } + _ => None + }; + let expr_ty = match tcx.sess.cstore.describe_def(def_id) { + Some(Def::AssociatedConst(_)) => { + let trait_id = tcx.sess.cstore.trait_of_item(def_id); + // As mentioned in the comments above for in-crate + // constants, we only try to find the expression for a + // trait-associated const if the caller gives us the + // substitutions for the reference to it. + if let Some(trait_id) = trait_id { used_substs = true; + if let Some(substs) = substs { - // As mentioned in the comments above for in-crate - // constants, we only try to find the expression for - // a trait-associated const if the caller gives us - // the substitutions for the reference to it. 
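A hedged illustration, using invented trait and impl names, of why the substitutions are needed before a trait-associated constant can be resolved to a concrete value:

```rust
trait HasSize {
    const SIZE: usize;
}

impl HasSize for u8 {
    const SIZE: usize = 1;
}

impl HasSize for u32 {
    const SIZE: usize = 4;
}

// `<T as HasSize>::SIZE` cannot be evaluated in the generic body: without
// substitutions for `T`, the constant could come from either impl.
fn size_of_t<T: HasSize>() -> usize {
    T::SIZE
}

fn main() {
    // With concrete substitutions the associated const resolves to one impl.
    assert_eq!(size_of_t::<u8>(), 1);
    assert_eq!(size_of_t::<u32>(), 4);
}
```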
- resolve_trait_associated_const(tcx, ti, trait_id, substs) + resolve_trait_associated_const(tcx, def_id, expr_ty, trait_id, substs) } else { None } + } else { + expr_ty } - _ => None - }, - Some((&InlinedItem::ImplItem(_, ref ii), _)) => match ii.node { - hir::ImplItemKind::Const(ref ty, ref expr) => { - Some((&**expr, tcx.ast_ty_to_prim_ty(ty))) - }, - _ => None }, + Some(Def::Const(..)) => expr_ty, _ => None }; // If we used the substitutions, particularly to choose an impl @@ -198,24 +195,29 @@ fn inline_const_fn_from_external_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, return None; } - let fn_id = match tcx.sess.cstore.maybe_get_item_ast(tcx, def_id) { - Some((&InlinedItem::Item(_, ref item), _)) => Some(item.id), - Some((&InlinedItem::ImplItem(_, ref item), _)) => Some(item.id), - _ => None - }; + let fn_id = tcx.sess.cstore.maybe_get_item_ast(tcx, def_id).map(|t| t.1); tcx.extern_const_fns.borrow_mut().insert(def_id, fn_id.unwrap_or(ast::DUMMY_NODE_ID)); fn_id } +pub enum ConstFnNode<'tcx> { + Local(FnLikeNode<'tcx>), + Inlined(&'tcx InlinedItem) +} + pub fn lookup_const_fn_by_id<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) - -> Option> + -> Option> { let fn_id = if let Some(node_id) = tcx.map.as_local_node_id(def_id) { node_id } else { if let Some(fn_id) = inline_const_fn_from_external_crate(tcx, def_id) { - fn_id + if let ast_map::NodeInlinedItem(ii) = tcx.map.get(fn_id) { + return Some(ConstFnNode::Inlined(ii)); + } else { + bug!("Got const fn from external crate, but it's not inlined") + } } else { return None; } @@ -226,18 +228,10 @@ pub fn lookup_const_fn_by_id<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefI None => return None }; - match fn_like.kind() { - FnKind::ItemFn(_, _, _, hir::Constness::Const, ..) => { - Some(fn_like) - } - FnKind::Method(_, m, ..) => { - if m.constness == hir::Constness::Const { - Some(fn_like) - } else { - None - } - } - _ => None + if fn_like.constness() == hir::Constness::Const { + Some(ConstFnNode::Local(fn_like)) + } else { + None } } @@ -282,27 +276,37 @@ pub fn const_expr_to_pat<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, .collect::>()?, None), hir::ExprCall(ref callee, ref args) => { - let def = tcx.expect_def(callee.id); - if let Vacant(entry) = tcx.def_map.borrow_mut().entry(expr.id) { - entry.insert(PathResolution::new(def)); - } - let path = match def { - Def::StructCtor(def_id, CtorKind::Fn) | - Def::VariantCtor(def_id, CtorKind::Fn) => def_to_path(tcx, def_id), - Def::Fn(..) | Def::Method(..) 
=> return Ok(P(hir::Pat { - id: expr.id, - node: PatKind::Lit(P(expr.clone())), - span: span, - })), + let qpath = match callee.node { + hir::ExprPath(ref qpath) => qpath, _ => bug!() }; - let pats = args.iter() - .map(|expr| const_expr_to_pat(tcx, &**expr, pat_id, span)) - .collect::>()?; - PatKind::TupleStruct(path, pats, None) + let def = tcx.tables().qpath_def(qpath, callee.id); + let ctor_path = if let hir::QPath::Resolved(_, ref path) = *qpath { + match def { + Def::StructCtor(_, CtorKind::Fn) | + Def::VariantCtor(_, CtorKind::Fn) => { + Some(path.clone()) + } + _ => None + } + } else { + None + }; + match (def, ctor_path) { + (Def::Fn(..), None) | (Def::Method(..), None) => { + PatKind::Lit(P(expr.clone())) + } + (_, Some(ctor_path)) => { + let pats = args.iter() + .map(|expr| const_expr_to_pat(tcx, expr, pat_id, span)) + .collect::>()?; + PatKind::TupleStruct(hir::QPath::Resolved(None, ctor_path), pats, None) + } + _ => bug!() + } } - hir::ExprStruct(ref path, ref fields, None) => { + hir::ExprStruct(ref qpath, ref fields, None) => { let field_pats = fields.iter() .map(|field| Ok(codemap::Spanned { @@ -314,7 +318,7 @@ pub fn const_expr_to_pat<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, }, })) .collect::>()?; - PatKind::Struct((**path).clone(), field_pats, false) + PatKind::Struct(qpath.clone(), field_pats, false) } hir::ExprArray(ref exprs) => { @@ -324,10 +328,18 @@ pub fn const_expr_to_pat<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, PatKind::Slice(pats, None, hir::HirVec::new()) } - hir::ExprPath(_, ref path) => { - match tcx.expect_def(expr.id) { + hir::ExprPath(ref qpath) => { + let def = tcx.tables().qpath_def(qpath, expr.id); + match def { Def::StructCtor(_, CtorKind::Const) | - Def::VariantCtor(_, CtorKind::Const) => PatKind::Path(None, path.clone()), + Def::VariantCtor(_, CtorKind::Const) => { + match expr.node { + hir::ExprPath(hir::QPath::Resolved(_, ref path)) => { + PatKind::Path(hir::QPath::Resolved(None, path.clone())) + } + _ => bug!() + } + } Def::Const(def_id) | Def::AssociatedConst(def_id) => { let substs = Some(tcx.tables().node_id_item_substs(expr.id) .unwrap_or_else(|| tcx.intern_substs(&[]))); @@ -788,14 +800,9 @@ pub fn eval_const_expr_partial<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, Err(kind) => return Err(ConstEvalErr { span: e.span, kind: kind }), } } - hir::ExprPath(..) => { - // This function can be used before type checking when not all paths are fully resolved. - // FIXME: There's probably a better way to make sure we don't panic here. 
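As background for the `const_expr_to_pat` changes above (example invented for illustration): when a named constant appears in a pattern, its defining expression is expanded into an equivalent structural pattern, which is why path, struct, and tuple-struct expressions all need pattern counterparts here.

```rust
#[derive(PartialEq, Eq)]
struct Point {
    x: i32,
    y: i32,
}

const ORIGIN: Point = Point { x: 0, y: 0 };

fn describe(p: Point) -> &'static str {
    match p {
        // The constant `ORIGIN` is expanded into the struct pattern
        // `Point { x: 0, y: 0 }` before usefulness checking.
        ORIGIN => "origin",
        Point { x: 0, .. } => "on the y axis",
        _ => "somewhere else",
    }
}

fn main() {
    assert_eq!(describe(Point { x: 0, y: 0 }), "origin");
    assert_eq!(describe(Point { x: 0, y: 7 }), "on the y axis");
    assert_eq!(describe(Point { x: 3, y: 4 }), "somewhere else");
}
```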
- let resolution = tcx.expect_resolution(e.id); - if resolution.depth != 0 { - signal!(e, UnresolvedPath); - } - match resolution.base_def { + hir::ExprPath(ref qpath) => { + let def = tcx.tables().qpath_def(qpath, e.id); + match def { Def::Const(def_id) | Def::AssociatedConst(def_id) => { let substs = if let ExprTypeChecked = ty_hint { @@ -845,6 +852,7 @@ pub fn eval_const_expr_partial<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } }, Def::Method(id) | Def::Fn(id) => Function(id), + Def::Err => signal!(e, UnresolvedPath), _ => signal!(e, NonConstPath), } } @@ -856,16 +864,22 @@ pub fn eval_const_expr_partial<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, Struct(_) => signal!(e, UnimplementedConstVal("tuple struct constructors")), callee => signal!(e, CallOn(callee)), }; - let (decl, result) = if let Some(fn_like) = lookup_const_fn_by_id(tcx, did) { - (fn_like.decl(), &fn_like.body().expr) - } else { - signal!(e, NonConstPath) + let (arg_defs, body_id) = match lookup_const_fn_by_id(tcx, did) { + Some(ConstFnNode::Inlined(ii)) => (ii.const_fn_args.clone(), ii.body.expr_id()), + Some(ConstFnNode::Local(fn_like)) => + (fn_like.decl().inputs.iter() + .map(|arg| match arg.pat.node { + hir::PatKind::Binding(_, def_id, _, _) => Some(def_id), + _ => None + }).collect(), + fn_like.body()), + None => signal!(e, NonConstPath), }; - let result = result.as_ref().expect("const fn has no result expression"); - assert_eq!(decl.inputs.len(), args.len()); + let result = tcx.map.expr(body_id); + assert_eq!(arg_defs.len(), args.len()); let mut call_args = DefIdMap(); - for (arg, arg_expr) in decl.inputs.iter().zip(args.iter()) { + for (arg, arg_expr) in arg_defs.into_iter().zip(args.iter()) { let arg_hint = ty_hint.erase_hint(); let arg_val = eval_const_expr_partial( tcx, @@ -874,8 +888,9 @@ pub fn eval_const_expr_partial<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, fn_args )?; debug!("const call arg: {:?}", arg); - let old = call_args.insert(tcx.expect_def(arg.pat.id).def_id(), arg_val); - assert!(old.is_none()); + if let Some(def_id) = arg { + assert!(call_args.insert(def_id, arg_val).is_none()); + } } debug!("const call({:?})", call_args); eval_const_expr_partial(tcx, &result, ty_hint, Some(&call_args))? @@ -1056,11 +1071,13 @@ fn infer<'a, 'tcx>(i: ConstInt, } } -fn resolve_trait_associated_const<'a, 'tcx: 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - ti: &'tcx hir::TraitItem, - trait_id: DefId, - rcvr_substs: &'tcx Substs<'tcx>) - -> Option<(&'tcx Expr, Option>)> +fn resolve_trait_associated_const<'a, 'tcx: 'a>( + tcx: TyCtxt<'a, 'tcx, 'tcx>, + trait_item_id: DefId, + default_value: Option<(&'tcx Expr, Option>)>, + trait_id: DefId, + rcvr_substs: &'tcx Substs<'tcx> +) -> Option<(&'tcx Expr, Option>)> { let trait_ref = ty::Binder(ty::TraitRef::new(trait_id, rcvr_substs)); debug!("resolve_trait_associated_const: trait_ref={:?}", @@ -1091,26 +1108,16 @@ fn resolve_trait_associated_const<'a, 'tcx: 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, // when constructing the inference context above. 
match selection { traits::VtableImpl(ref impl_data) => { - let ac = tcx.impl_or_trait_items(impl_data.impl_def_id) - .iter().filter_map(|&def_id| { - match tcx.impl_or_trait_item(def_id) { - ty::ConstTraitItem(ic) => Some(ic), - _ => None - } - }).find(|ic| ic.name == ti.name); + let name = tcx.associated_item(trait_item_id).name; + let ac = tcx.associated_items(impl_data.impl_def_id) + .find(|item| item.kind == ty::AssociatedKind::Const && item.name == name); match ac { Some(ic) => lookup_const_by_id(tcx, ic.def_id, None), - None => match ti.node { - hir::ConstTraitItem(ref ty, Some(ref expr)) => { - Some((&*expr, tcx.ast_ty_to_prim_ty(ty))) - }, - _ => None, - }, + None => default_value, } } _ => { - span_bug!(ti.span, - "resolve_trait_associated_const: unexpected vtable type") + bug!("resolve_trait_associated_const: unexpected vtable type") } } }) @@ -1227,7 +1234,7 @@ fn lit_to_const<'a, 'tcx>(lit: &ast::LitKind, use syntax::ast::*; use syntax::ast::LitIntType::*; match *lit { - LitKind::Str(ref s, _) => Ok(Str((*s).clone())), + LitKind::Str(ref s, _) => Ok(Str(s.as_str())), LitKind::ByteStr(ref data) => Ok(ByteStr(data.clone())), LitKind::Byte(n) => Ok(Integral(U8(n))), LitKind::Int(n, Signed(ity)) => { @@ -1255,15 +1262,15 @@ fn lit_to_const<'a, 'tcx>(lit: &ast::LitKind, infer(Infer(n), tcx, &ty::TyUint(ity)).map(Integral) }, - LitKind::Float(ref n, fty) => { - parse_float(n, Some(fty)).map(Float) + LitKind::Float(n, fty) => { + parse_float(&n.as_str(), Some(fty)).map(Float) } - LitKind::FloatUnsuffixed(ref n) => { + LitKind::FloatUnsuffixed(n) => { let fty_hint = match ty_hint.map(|t| &t.sty) { Some(&ty::TyFloat(fty)) => Some(fty), _ => None }; - parse_float(n, fty_hint).map(Float) + parse_float(&n.as_str(), fty_hint).map(Float) } LitKind::Bool(b) => Ok(Bool(b)), LitKind::Char(c) => Ok(Char(c)), @@ -1364,17 +1371,10 @@ pub fn eval_length<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let mut diag = report_const_eval_err( tcx, &err, count_expr.span, reason); - match count_expr.node { - hir::ExprPath(None, hir::Path { - global: false, - ref segments, - .. - }) if segments.len() == 1 => { - if let Some(Def::Local(..)) = tcx.expect_def_or_none(count_expr.id) { - diag.note(&format!("`{}` is a variable", segments[0].name)); - } + if let hir::ExprPath(hir::QPath::Resolved(None, ref path)) = count_expr.node { + if let Def::Local(..) = path.def { + diag.note(&format!("`{}` is a variable", path)); } - _ => {} } diag.emit(); diff --git a/src/librustc_const_eval/lib.rs b/src/librustc_const_eval/lib.rs index 3fa60f86c9..7a6cc49372 100644 --- a/src/librustc_const_eval/lib.rs +++ b/src/librustc_const_eval/lib.rs @@ -22,12 +22,10 @@ html_favicon_url = "https://doc.rust-lang.org/favicon.ico", html_root_url = "https://doc.rust-lang.org/nightly/")] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(rustc_private)] #![feature(staged_api)] #![feature(rustc_diagnostic_macros)] #![feature(slice_patterns)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(box_patterns)] #![feature(box_syntax)] diff --git a/src/librustc_const_eval/pattern.rs b/src/librustc_const_eval/pattern.rs index 241920f294..e93178c89c 100644 --- a/src/librustc_const_eval/pattern.rs +++ b/src/librustc_const_eval/pattern.rs @@ -66,7 +66,7 @@ pub enum PatternKind<'tcx> { /// Foo(...) 
or Foo{...} or Foo, where `Foo` is a variant name from an adt with >1 variants Variant { - adt_def: AdtDef<'tcx>, + adt_def: &'tcx AdtDef, variant_index: usize, subpatterns: Vec>, }, @@ -163,8 +163,9 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { } } - PatKind::Path(..) => { - match self.tcx.expect_def(pat.id) { + PatKind::Path(ref qpath) => { + let def = self.tcx.tables().qpath_def(qpath, pat.id); + match def { Def::Const(def_id) | Def::AssociatedConst(def_id) => { let tcx = self.tcx.global_tcx(); let substs = tcx.tables().node_id_item_substs(pat.id) @@ -188,7 +189,7 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { } } } - _ => self.lower_variant_or_leaf(pat, vec![]) + _ => self.lower_variant_or_leaf(def, vec![]) } } @@ -242,8 +243,7 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { } } - PatKind::Binding(bm, ref ident, ref sub) => { - let def_id = self.tcx.expect_def(pat.id).def_id(); + PatKind::Binding(bm, def_id, ref ident, ref sub) => { let id = self.tcx.map.as_local_node_id(def_id).unwrap(); let var_ty = self.tcx.tables().node_id_to_type(pat.id); let region = match var_ty.sty { @@ -281,13 +281,14 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { } } - PatKind::TupleStruct(_, ref subpatterns, ddpos) => { + PatKind::TupleStruct(ref qpath, ref subpatterns, ddpos) => { + let def = self.tcx.tables().qpath_def(qpath, pat.id); let pat_ty = self.tcx.tables().node_id_to_type(pat.id); let adt_def = match pat_ty.sty { ty::TyAdt(adt_def, _) => adt_def, _ => span_bug!(pat.span, "tuple struct pattern not applied to an ADT"), }; - let variant_def = adt_def.variant_of_def(self.tcx.expect_def(pat.id)); + let variant_def = adt_def.variant_of_def(def); let subpatterns = subpatterns.iter() @@ -297,10 +298,11 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { pattern: self.lower_pattern(field), }) .collect(); - self.lower_variant_or_leaf(pat, subpatterns) + self.lower_variant_or_leaf(def, subpatterns) } - PatKind::Struct(_, ref fields, _) => { + PatKind::Struct(ref qpath, ref fields, _) => { + let def = self.tcx.tables().qpath_def(qpath, pat.id); let pat_ty = self.tcx.tables().node_id_to_type(pat.id); let adt_def = match pat_ty.sty { ty::TyAdt(adt_def, _) => adt_def, @@ -310,7 +312,7 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { "struct pattern not applied to an ADT"); } }; - let variant_def = adt_def.variant_of_def(self.tcx.expect_def(pat.id)); + let variant_def = adt_def.variant_of_def(def); let subpatterns = fields.iter() @@ -329,7 +331,7 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { }) .collect(); - self.lower_variant_or_leaf(pat, subpatterns) + self.lower_variant_or_leaf(def, subpatterns) } }; @@ -418,11 +420,11 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { fn lower_variant_or_leaf( &mut self, - pat: &hir::Pat, + def: Def, subpatterns: Vec>) -> PatternKind<'tcx> { - match self.tcx.expect_def(pat.id) { + match def { Def::Variant(variant_id) | Def::VariantCtor(variant_id, ..) => { let enum_id = self.tcx.parent_def_id(variant_id).unwrap(); let adt_def = self.tcx.lookup_adt_def(enum_id); @@ -442,9 +444,7 @@ impl<'a, 'gcx, 'tcx> PatternContext<'a, 'gcx, 'tcx> { PatternKind::Leaf { subpatterns: subpatterns } } - def => { - span_bug!(pat.span, "inappropriate def for pattern: {:?}", def); - } + _ => bug!() } } } @@ -487,32 +487,22 @@ impl<'tcx, T: PatternFoldable<'tcx>> PatternFoldable<'tcx> for Option { } } -macro_rules! CopyImpls { - ($($ty:ty),+) => { +macro_rules! 
CloneImpls { + (<$lt_tcx:tt> $($ty:ty),+) => { $( - impl<'tcx> PatternFoldable<'tcx> for $ty { - fn super_fold_with>(&self, _: &mut F) -> Self { - self.clone() - } - } - )+ - } -} - -macro_rules! TcxCopyImpls { - ($($ty:ident),+) => { - $( - impl<'tcx> PatternFoldable<'tcx> for $ty<'tcx> { - fn super_fold_with>(&self, _: &mut F) -> Self { - *self + impl<$lt_tcx> PatternFoldable<$lt_tcx> for $ty { + fn super_fold_with>(&self, _: &mut F) -> Self { + Clone::clone(self) } } )+ } } -CopyImpls!{ Span, Field, Mutability, ast::Name, ast::NodeId, usize, ConstVal } -TcxCopyImpls!{ Ty, BindingMode, AdtDef } +CloneImpls!{ <'tcx> + Span, Field, Mutability, ast::Name, ast::NodeId, usize, ConstVal, + Ty<'tcx>, BindingMode<'tcx>, &'tcx AdtDef +} impl<'tcx> PatternFoldable<'tcx> for FieldPattern<'tcx> { fn super_fold_with>(&self, folder: &mut F) -> Self { diff --git a/src/librustc_const_math/lib.rs b/src/librustc_const_math/lib.rs index 31fccb41ce..f667ff23b2 100644 --- a/src/librustc_const_math/lib.rs +++ b/src/librustc_const_math/lib.rs @@ -25,7 +25,6 @@ #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #[macro_use] extern crate log; #[macro_use] extern crate syntax; diff --git a/src/librustc_data_structures/accumulate_vec.rs b/src/librustc_data_structures/accumulate_vec.rs index 3894db4027..937cb3f600 100644 --- a/src/librustc_data_structures/accumulate_vec.rs +++ b/src/librustc_data_structures/accumulate_vec.rs @@ -11,22 +11,68 @@ //! A vector type intended to be used for collecting from iterators onto the stack. //! //! Space for up to N elements is provided on the stack. If more elements are collected, Vec is -//! used to store the values on the heap. This type does not support re-allocating onto the heap, -//! and there is no way to push more elements onto the existing storage. +//! used to store the values on the heap. //! //! The N above is determined by Array's implementor, by way of an associatated constant. 
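A rough sketch of the stack-then-heap idea this type implements, using a plain fixed-size array instead of the crate's `Array`/`ManuallyDrop` machinery; the `TinyAccumulate` name and the `Copy + Default` bounds are simplifications made only to keep the illustration short.

```rust
use std::iter::FromIterator;

/// Toy version of "accumulate on the stack, spill to the heap": up to 8
/// elements live inline, anything beyond that moves everything into a Vec.
enum TinyAccumulate<T> {
    Array { storage: [T; 8], len: usize },
    Heap(Vec<T>),
}

impl<T: Copy + Default> FromIterator<T> for TinyAccumulate<T> {
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
        let mut iter = iter.into_iter();
        let mut storage = [T::default(); 8];
        let mut len = 0;
        while len < 8 {
            match iter.next() {
                Some(el) => {
                    storage[len] = el;
                    len += 1;
                }
                None => return TinyAccumulate::Array { storage, len },
            }
        }
        // Inline storage is full but the iterator may have more: spill to a Vec.
        match iter.next() {
            None => TinyAccumulate::Array { storage, len },
            Some(el) => {
                let mut v = storage.to_vec();
                v.push(el);
                v.extend(iter);
                TinyAccumulate::Heap(v)
            }
        }
    }
}

fn main() {
    let small: TinyAccumulate<u32> = (0..4).collect();
    let large: TinyAccumulate<u32> = (0..20).collect();
    assert!(matches!(small, TinyAccumulate::Array { len: 4, .. }));
    assert!(matches!(large, TinyAccumulate::Heap(ref v) if v.len() == 20));
}
```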
-use std::ops::Deref; -use std::iter::{IntoIterator, FromIterator}; +use std::ops::{Deref, DerefMut}; +use std::iter::{self, IntoIterator, FromIterator}; +use std::slice; +use std::vec; -use array_vec::{Array, ArrayVec}; +use rustc_serialize::{Encodable, Encoder, Decodable, Decoder}; -#[derive(Debug)] +use array_vec::{self, Array, ArrayVec}; + +#[derive(PartialEq, Eq, Hash, Debug)] pub enum AccumulateVec { Array(ArrayVec), Heap(Vec) } +impl Clone for AccumulateVec + where A: Array, + A::Element: Clone { + fn clone(&self) -> Self { + match *self { + AccumulateVec::Array(ref arr) => AccumulateVec::Array(arr.clone()), + AccumulateVec::Heap(ref vec) => AccumulateVec::Heap(vec.clone()), + } + } +} + +impl AccumulateVec { + pub fn new() -> AccumulateVec { + AccumulateVec::Array(ArrayVec::new()) + } + + pub fn one(el: A::Element) -> Self { + iter::once(el).collect() + } + + pub fn many>(iter: I) -> Self { + iter.into_iter().collect() + } + + pub fn len(&self) -> usize { + match *self { + AccumulateVec::Array(ref arr) => arr.len(), + AccumulateVec::Heap(ref vec) => vec.len(), + } + } + + pub fn is_empty(&self) -> bool { + self.len() == 0 + } + + pub fn pop(&mut self) -> Option { + match *self { + AccumulateVec::Array(ref mut arr) => arr.pop(), + AccumulateVec::Heap(ref mut vec) => vec.pop(), + } + } +} + impl Deref for AccumulateVec { type Target = [A::Element]; fn deref(&self) -> &Self::Target { @@ -37,6 +83,15 @@ impl Deref for AccumulateVec { } } +impl DerefMut for AccumulateVec { + fn deref_mut(&mut self) -> &mut [A::Element] { + match *self { + AccumulateVec::Array(ref mut v) => &mut v[..], + AccumulateVec::Heap(ref mut v) => &mut v[..], + } + } +} + impl FromIterator for AccumulateVec { fn from_iter(iter: I) -> AccumulateVec where I: IntoIterator { let iter = iter.into_iter(); @@ -50,3 +105,94 @@ impl FromIterator for AccumulateVec { } } +pub struct IntoIter { + repr: IntoIterRepr, +} + +enum IntoIterRepr { + Array(array_vec::Iter), + Heap(vec::IntoIter), +} + +impl Iterator for IntoIter { + type Item = A::Element; + + fn next(&mut self) -> Option { + match self.repr { + IntoIterRepr::Array(ref mut arr) => arr.next(), + IntoIterRepr::Heap(ref mut iter) => iter.next(), + } + } + + fn size_hint(&self) -> (usize, Option) { + match self.repr { + IntoIterRepr::Array(ref iter) => iter.size_hint(), + IntoIterRepr::Heap(ref iter) => iter.size_hint(), + } + } +} + +impl IntoIterator for AccumulateVec { + type Item = A::Element; + type IntoIter = IntoIter; + fn into_iter(self) -> Self::IntoIter { + IntoIter { + repr: match self { + AccumulateVec::Array(arr) => IntoIterRepr::Array(arr.into_iter()), + AccumulateVec::Heap(vec) => IntoIterRepr::Heap(vec.into_iter()), + } + } + } +} + +impl<'a, A: Array> IntoIterator for &'a AccumulateVec { + type Item = &'a A::Element; + type IntoIter = slice::Iter<'a, A::Element>; + fn into_iter(self) -> Self::IntoIter { + self.iter() + } +} + +impl<'a, A: Array> IntoIterator for &'a mut AccumulateVec { + type Item = &'a mut A::Element; + type IntoIter = slice::IterMut<'a, A::Element>; + fn into_iter(self) -> Self::IntoIter { + self.iter_mut() + } +} + +impl From> for AccumulateVec { + fn from(v: Vec) -> AccumulateVec { + AccumulateVec::many(v) + } +} + +impl Default for AccumulateVec { + fn default() -> AccumulateVec { + AccumulateVec::new() + } +} + +impl Encodable for AccumulateVec + where A: Array, + A::Element: Encodable { + fn encode(&self, s: &mut S) -> Result<(), S::Error> { + s.emit_seq(self.len(), |s| { + for (i, e) in self.iter().enumerate() { + 
try!(s.emit_seq_elt(i, |s| e.encode(s))); + } + Ok(()) + }) + } +} + +impl Decodable for AccumulateVec + where A: Array, + A::Element: Decodable { + fn decode(d: &mut D) -> Result, D::Error> { + d.read_seq(|d, len| { + Ok(try!((0..len).map(|i| d.read_seq_elt(i, |d| Decodable::decode(d))).collect())) + }) + } +} + diff --git a/src/librustc_data_structures/array_vec.rs b/src/librustc_data_structures/array_vec.rs index f87426cee5..631cf2cfcf 100644 --- a/src/librustc_data_structures/array_vec.rs +++ b/src/librustc_data_structures/array_vec.rs @@ -9,15 +9,15 @@ // except according to those terms. //! A stack-allocated vector, allowing storage of N elements on the stack. -//! -//! Currently, only the N = 8 case is supported (due to Array only being impl-ed for [T; 8]). use std::marker::Unsize; use std::iter::Extend; -use std::ptr::drop_in_place; -use std::ops::{Deref, DerefMut}; +use std::ptr::{self, drop_in_place}; +use std::ops::{Deref, DerefMut, Range}; +use std::hash::{Hash, Hasher}; use std::slice; use std::fmt; +use std::mem; pub unsafe trait Array { type Element; @@ -25,6 +25,12 @@ pub unsafe trait Array { const LEN: usize; } +unsafe impl Array for [T; 1] { + type Element = T; + type PartialStorage = [ManuallyDrop; 1]; + const LEN: usize = 1; +} + unsafe impl Array for [T; 8] { type Element = T; type PartialStorage = [ManuallyDrop; 8]; @@ -36,6 +42,32 @@ pub struct ArrayVec { values: A::PartialStorage } +impl Hash for ArrayVec + where A: Array, + A::Element: Hash { + fn hash(&self, state: &mut H) where H: Hasher { + (&self[..]).hash(state); + } +} + +impl PartialEq for ArrayVec { + fn eq(&self, other: &Self) -> bool { + self == other + } +} + +impl Eq for ArrayVec {} + +impl Clone for ArrayVec + where A: Array, + A::Element: Clone { + fn clone(&self) -> Self { + let mut v = ArrayVec::new(); + v.extend(self.iter().cloned()); + v + } +} + impl ArrayVec { pub fn new() -> Self { ArrayVec { @@ -43,6 +75,41 @@ impl ArrayVec { values: Default::default(), } } + + pub fn len(&self) -> usize { + self.count + } + + pub unsafe fn set_len(&mut self, len: usize) { + self.count = len; + } + + /// Panics when the stack vector is full. 
+ pub fn push(&mut self, el: A::Element) { + let arr = &mut self.values as &mut [ManuallyDrop<_>]; + arr[self.count] = ManuallyDrop { value: el }; + self.count += 1; + } + + pub fn pop(&mut self) -> Option { + if self.count > 0 { + let arr = &mut self.values as &mut [ManuallyDrop<_>]; + self.count -= 1; + unsafe { + let value = ptr::read(&arr[self.count]); + Some(value.value) + } + } else { + None + } + } +} + +impl Default for ArrayVec + where A: Array { + fn default() -> Self { + ArrayVec::new() + } } impl fmt::Debug for ArrayVec @@ -81,15 +148,69 @@ impl Drop for ArrayVec { impl Extend for ArrayVec { fn extend(&mut self, iter: I) where I: IntoIterator { for el in iter { - unsafe { - let arr = &mut self.values as &mut [ManuallyDrop<_>]; - arr[self.count].value = el; - } - self.count += 1; + self.push(el); } } } +pub struct Iter { + indices: Range, + store: A::PartialStorage, +} + +impl Drop for Iter { + fn drop(&mut self) { + for _ in self {} + } +} + +impl Iterator for Iter { + type Item = A::Element; + + fn next(&mut self) -> Option { + let arr = &self.store as &[ManuallyDrop<_>]; + unsafe { + self.indices.next().map(|i| ptr::read(&arr[i]).value) + } + } + + fn size_hint(&self) -> (usize, Option) { + self.indices.size_hint() + } +} + +impl IntoIterator for ArrayVec { + type Item = A::Element; + type IntoIter = Iter; + fn into_iter(self) -> Self::IntoIter { + let store = unsafe { + ptr::read(&self.values) + }; + let indices = 0..self.count; + mem::forget(self); + Iter { + indices: indices, + store: store, + } + } +} + +impl<'a, A: Array> IntoIterator for &'a ArrayVec { + type Item = &'a A::Element; + type IntoIter = slice::Iter<'a, A::Element>; + fn into_iter(self) -> Self::IntoIter { + self.iter() + } +} + +impl<'a, A: Array> IntoIterator for &'a mut ArrayVec { + type Item = &'a mut A::Element; + type IntoIter = slice::IterMut<'a, A::Element>; + fn into_iter(self) -> Self::IntoIter { + self.iter_mut() + } +} + // FIXME: This should use repr(transparent) from rust-lang/rfcs#1758. #[allow(unions_with_drop_fields)] pub union ManuallyDrop { @@ -98,9 +219,17 @@ pub union ManuallyDrop { empty: (), } -impl Default for ManuallyDrop { - fn default() -> Self { - ManuallyDrop { empty: () } +impl ManuallyDrop { + fn new() -> ManuallyDrop { + ManuallyDrop { + empty: () + } + } +} + +impl Default for ManuallyDrop { + fn default() -> Self { + ManuallyDrop::new() } } diff --git a/src/librustc_data_structures/base_n.rs b/src/librustc_data_structures/base_n.rs new file mode 100644 index 0000000000..4359581a89 --- /dev/null +++ b/src/librustc_data_structures/base_n.rs @@ -0,0 +1,66 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +/// Convert unsigned integers into a string representation with some base. +/// Bases up to and including 36 can be used for case-insensitive things. 
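A minimal standalone sketch of the base-N encoding described in this module doc, restricted to the 36-symbol alphanumeric alphabet and allocating a fresh `String` per call (the patch's own version writes into a caller-supplied buffer and also supports a 64-symbol alphabet):

```rust
const DIGITS: &[u8; 36] = b"0123456789abcdefghijklmnopqrstuvwxyz";

/// Encode `n` in the given base (2..=36), most significant digit first.
fn encode(mut n: u64, base: u64) -> String {
    assert!(base >= 2 && base <= 36);
    let mut digits = Vec::new();
    loop {
        digits.push(DIGITS[(n % base) as usize]);
        n /= base;
        if n == 0 {
            break;
        }
    }
    digits.reverse();
    String::from_utf8(digits).unwrap()
}

fn main() {
    assert_eq!(encode(0, 16), "0");
    assert_eq!(encode(255, 16), "ff");
    assert_eq!(encode(35, 36), "z");
    // Round-trips through the standard library's radix parser.
    assert_eq!(u64::from_str_radix(&encode(123_456, 36), 36), Ok(123_456));
}
```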
+ +use std::str; + +pub const MAX_BASE: u64 = 64; +pub const ALPHANUMERIC_ONLY: u64 = 62; + +const BASE_64: &'static [u8; MAX_BASE as usize] = + b"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@$"; + +#[inline] +pub fn push_str(mut n: u64, base: u64, output: &mut String) { + debug_assert!(base >= 2 && base <= MAX_BASE); + let mut s = [0u8; 64]; + let mut index = 0; + + loop { + s[index] = BASE_64[(n % base) as usize]; + index += 1; + n /= base; + + if n == 0 { + break; + } + } + &mut s[0..index].reverse(); + output.push_str(str::from_utf8(&s[0..index]).unwrap()); +} + +#[inline] +pub fn encode(n: u64, base: u64) -> String { + let mut s = String::with_capacity(13); + push_str(n, base, &mut s); + s +} + +#[test] +fn test_encode() { + fn test(n: u64, base: u64) { + assert_eq!(Ok(n), u64::from_str_radix(&encode(n, base)[..], base as u32)); + } + + for base in 2..37 { + test(0, base); + test(1, base); + test(35, base); + test(36, base); + test(37, base); + test(u64::max_value(), base); + + for i in 0 .. 1_000 { + test(i * 983, base); + } + } +} diff --git a/src/librustc_data_structures/blake2b.rs b/src/librustc_data_structures/blake2b.rs index 8c82c135dc..31492e2621 100644 --- a/src/librustc_data_structures/blake2b.rs +++ b/src/librustc_data_structures/blake2b.rs @@ -113,17 +113,20 @@ fn blake2b_compress(ctx: &mut Blake2bCtx, last: bool) { } { - // Re-interpret the input buffer in the state as u64s + // Re-interpret the input buffer in the state as an array + // of little-endian u64s, converting them to machine + // endianness. It's OK to modify the buffer in place + // since this is the last time this data will be accessed + // before it's overwritten. + let m: &mut [u64; 16] = unsafe { let b: &mut [u8; 128] = &mut ctx.b; ::std::mem::transmute(b) }; - // It's OK to modify the buffer in place since this is the last time - // this data will be accessed before it's overwritten if cfg!(target_endian = "big") { for word in &mut m[..] { - *word = word.to_be(); + *word = u64::from_le(*word); } } @@ -209,9 +212,10 @@ fn blake2b_final(ctx: &mut Blake2bCtx) blake2b_compress(ctx, true); + // Modify our buffer to little-endian format as it will be read + // as a byte array. It's OK to modify the buffer in place since + // this is the last time this data will be accessed. 
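A small illustration (not from the patch) of the `from_le`/`to_le` conversions used in these hunks: `u64::from_le` turns a value whose bytes were laid out little-endian into the native representation, which is a byte swap on big-endian targets and a no-op on little-endian ones.

```rust
fn main() {
    // Bytes of the little-endian encoding of 0x0102030405060708.
    let le_bytes = [0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01u8];
    // Reinterpret the raw bytes as a native-endian u64 (roughly what the
    // transmute in blake2b_compress does), then normalize with from_le.
    let raw = u64::from_ne_bytes(le_bytes);
    // Holds on both little- and big-endian machines; only big-endian
    // targets actually perform a byte swap, hence the cfg! guard.
    assert_eq!(u64::from_le(raw), 0x0102030405060708);
}
```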
if cfg!(target_endian = "big") { - // Make sure that the data is in memory in little endian format, as is - // demanded by BLAKE2 for word in &mut ctx.h { *word = word.to_le(); } diff --git a/src/librustc_data_structures/flock.rs b/src/librustc_data_structures/flock.rs index 510c9ceef0..33d71ba862 100644 --- a/src/librustc_data_structures/flock.rs +++ b/src/librustc_data_structures/flock.rs @@ -31,6 +31,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_type: libc::c_short, pub l_whence: libc::c_short, @@ -53,6 +54,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_start: libc::off_t, pub l_len: libc::off_t, @@ -76,6 +78,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_start: libc::off_t, pub l_len: libc::off_t, @@ -98,6 +101,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_type: libc::c_short, pub l_whence: libc::c_short, @@ -119,6 +123,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_start: libc::off_t, pub l_len: libc::off_t, @@ -141,6 +146,7 @@ mod imp { mod os { use libc; + #[repr(C)] pub struct flock { pub l_type: libc::c_short, pub l_whence: libc::c_short, diff --git a/src/librustc_data_structures/fx.rs b/src/librustc_data_structures/fx.rs new file mode 100644 index 0000000000..1fb7673521 --- /dev/null +++ b/src/librustc_data_structures/fx.rs @@ -0,0 +1,115 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::collections::{HashMap, HashSet}; +use std::default::Default; +use std::hash::{Hasher, Hash, BuildHasherDefault}; +use std::ops::BitXor; + +pub type FxHashMap = HashMap>; +pub type FxHashSet = HashSet>; + +#[allow(non_snake_case)] +pub fn FxHashMap() -> FxHashMap { + HashMap::default() +} + +#[allow(non_snake_case)] +pub fn FxHashSet() -> FxHashSet { + HashSet::default() +} + +/// A speedy hash algorithm for use within rustc. The hashmap in libcollections +/// by default uses SipHash which isn't quite as speedy as we want. In the +/// compiler we're not really worried about DOS attempts, so we use a fast +/// non-cryptographic hash. +/// +/// This is the same as the algorithm used by Firefox -- which is a homespun +/// one not based on any widely-known algorithm -- though modified to produce +/// 64-bit hash values instead of 32-bit hash values. It consistently +/// out-performs an FNV-based hash within rustc itself -- the collision rate is +/// similar or slightly worse than FNV, but the speed of the hash function +/// itself is much higher because it works on up to 8 bytes at a time. 
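The aliases above rely on the standard `BuildHasherDefault` plumbing to plug a custom `Hasher` into `HashMap`. The sketch below shows that pattern with a deliberately trivial stand-in hasher (the names and the mixing step are invented; this is not the Fx algorithm itself):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// A deliberately weak hasher, standing in for FxHasher to show how the
/// FxHashMap/FxHashSet aliases wire a custom hasher into HashMap.
#[derive(Default)]
struct TrivialHasher {
    hash: u64,
}

impl Hasher for TrivialHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Not a good hash function; it only demonstrates the plumbing.
        for &b in bytes {
            self.hash = self.hash.rotate_left(5) ^ b as u64;
        }
    }
    fn finish(&self) -> u64 {
        self.hash
    }
}

type TrivialHashMap<K, V> = HashMap<K, V, BuildHasherDefault<TrivialHasher>>;

fn main() {
    // `HashMap::default()` works for any `S: BuildHasher + Default`,
    // which is why the capitalized helper constructors just call it.
    let mut map: TrivialHashMap<&str, u32> = HashMap::default();
    map.insert("a", 1);
    map.insert("b", 2);
    assert_eq!(map.get("a"), Some(&1));
}
```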
+pub struct FxHasher { + hash: usize +} + +#[cfg(target_pointer_width = "32")] +const K: usize = 0x9e3779b9; +#[cfg(target_pointer_width = "64")] +const K: usize = 0x517cc1b727220a95; + +impl Default for FxHasher { + #[inline] + fn default() -> FxHasher { + FxHasher { hash: 0 } + } +} + +impl FxHasher { + #[inline] + fn add_to_hash(&mut self, i: usize) { + self.hash = self.hash.rotate_left(5).bitxor(i).wrapping_mul(K); + } +} + +impl Hasher for FxHasher { + #[inline] + fn write(&mut self, bytes: &[u8]) { + for byte in bytes { + let i = *byte; + self.add_to_hash(i as usize); + } + } + + #[inline] + fn write_u8(&mut self, i: u8) { + self.add_to_hash(i as usize); + } + + #[inline] + fn write_u16(&mut self, i: u16) { + self.add_to_hash(i as usize); + } + + #[inline] + fn write_u32(&mut self, i: u32) { + self.add_to_hash(i as usize); + } + + #[cfg(target_pointer_width = "32")] + #[inline] + fn write_u64(&mut self, i: u64) { + self.add_to_hash(i as usize); + self.add_to_hash((i >> 32) as usize); + } + + #[cfg(target_pointer_width = "64")] + #[inline] + fn write_u64(&mut self, i: u64) { + self.add_to_hash(i as usize); + } + + #[inline] + fn write_usize(&mut self, i: usize) { + self.add_to_hash(i); + } + + #[inline] + fn finish(&self) -> u64 { + self.hash as u64 + } +} + +pub fn hash(v: &T) -> u64 { + let mut state = FxHasher::default(); + v.hash(&mut state); + state.finish() +} diff --git a/src/librustc_data_structures/graph/mod.rs b/src/librustc_data_structures/graph/mod.rs index fdb629ca5a..f94ed6b720 100644 --- a/src/librustc_data_structures/graph/mod.rs +++ b/src/librustc_data_structures/graph/mod.rs @@ -231,18 +231,30 @@ impl Graph { // # Iterating over nodes, edges + pub fn enumerated_nodes(&self) -> EnumeratedNodes { + EnumeratedNodes { + iter: self.nodes.iter().enumerate() + } + } + + pub fn enumerated_edges(&self) -> EnumeratedEdges { + EnumeratedEdges { + iter: self.edges.iter().enumerate() + } + } + pub fn each_node<'a, F>(&'a self, mut f: F) -> bool where F: FnMut(NodeIndex, &'a Node) -> bool { //! Iterates over all edges defined in the graph. - self.nodes.iter().enumerate().all(|(i, node)| f(NodeIndex(i), node)) + self.enumerated_nodes().all(|(node_idx, node)| f(node_idx, node)) } pub fn each_edge<'a, F>(&'a self, mut f: F) -> bool where F: FnMut(EdgeIndex, &'a Edge) -> bool { //! Iterates over all edges defined in the graph - self.edges.iter().enumerate().all(|(i, edge)| f(EdgeIndex(i), edge)) + self.enumerated_edges().all(|(edge_idx, edge)| f(edge_idx, edge)) } pub fn outgoing_edges(&self, source: NodeIndex) -> AdjacentEdges { @@ -270,14 +282,11 @@ impl Graph { self.incoming_edges(target).sources() } - // # Fixed-point iteration - // - // A common use for graphs in our compiler is to perform - // fixed-point iteration. In this case, each edge represents a - // constraint, and the nodes themselves are associated with - // variables or other bitsets. This method facilitates such a - // computation. - + /// A common use for graphs in our compiler is to perform + /// fixed-point iteration. In this case, each edge represents a + /// constraint, and the nodes themselves are associated with + /// variables or other bitsets. This method facilitates such a + /// computation. 
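As a hedged illustration of the fixed-point style described in this comment (the reachability example and its names are invented, not the compiler's): apply every edge constraint repeatedly until a full pass changes nothing, which is the same shape as the `iterate_until_fixed_point` driver that follows.

```rust
/// Toy fixed-point computation: each edge `(a, b)` is the constraint
/// "if node `a` is reachable then so is `b`", re-applied until stable.
fn reachable_from(num_nodes: usize, edges: &[(usize, usize)], start: usize) -> Vec<bool> {
    let mut reachable = vec![false; num_nodes];
    reachable[start] = true;

    let mut changed = true;
    while changed {
        changed = false;
        // One pass over every edge; `changed` records whether any constraint
        // added new information, in the same spirit as the driver below.
        for &(src, dst) in edges {
            if reachable[src] && !reachable[dst] {
                reachable[dst] = true;
                changed = true;
            }
        }
    }
    reachable
}

fn main() {
    // 0 -> 1 -> 2, plus an isolated node 3.
    let edges = [(0, 1), (1, 2)];
    assert_eq!(reachable_from(4, &edges, 0), vec![true, true, true, false]);
}
```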
pub fn iterate_until_fixed_point<'a, F>(&'a self, mut op: F) where F: FnMut(usize, EdgeIndex, &'a Edge) -> bool { @@ -286,8 +295,8 @@ impl Graph { while changed { changed = false; iteration += 1; - for (i, edge) in self.edges.iter().enumerate() { - changed |= op(iteration, EdgeIndex(i), edge); + for (edge_index, edge) in self.enumerated_edges() { + changed |= op(iteration, edge_index, edge); } } } @@ -298,10 +307,67 @@ impl Graph { -> DepthFirstTraversal<'a, N, E> { DepthFirstTraversal::with_start_node(self, start, direction) } + + /// Whether or not a node can be reached from itself. + pub fn is_node_cyclic(&self, starting_node_index: NodeIndex) -> bool { + // This is similar to depth traversal below, but we + // can't use that, because depth traversal doesn't show + // the starting node a second time. + let mut visited = BitVector::new(self.len_nodes()); + let mut stack = vec![starting_node_index]; + + while let Some(current_node_index) = stack.pop() { + visited.insert(current_node_index.0); + + // Directionality doesn't change the answer, + // so just use outgoing edges. + for (_, edge) in self.outgoing_edges(current_node_index) { + let target_node_index = edge.target(); + + if target_node_index == starting_node_index { + return true; + } + + if !visited.contains(target_node_index.0) { + stack.push(target_node_index); + } + } + } + + false + } } // # Iterators +pub struct EnumeratedNodes<'g, N> + where N: 'g, +{ + iter: ::std::iter::Enumerate<::std::slice::Iter<'g, Node>> +} + +impl<'g, N: Debug> Iterator for EnumeratedNodes<'g, N> { + type Item = (NodeIndex, &'g Node); + + fn next(&mut self) -> Option<(NodeIndex, &'g Node)> { + self.iter.next().map(|(idx, n)| (NodeIndex(idx), n)) + } +} + +pub struct EnumeratedEdges<'g, E> + where E: 'g, +{ + iter: ::std::iter::Enumerate<::std::slice::Iter<'g, Edge>> +} + +impl<'g, E: Debug> Iterator for EnumeratedEdges<'g, E> { + type Item = (EdgeIndex, &'g Edge); + + fn next(&mut self) -> Option<(EdgeIndex, &'g Edge)> { + self.iter.next().map(|(idx, e)| (EdgeIndex(idx), e)) + } +} + pub struct AdjacentEdges<'g, N, E> where N: 'g, E: 'g @@ -336,7 +402,7 @@ impl<'g, N: Debug, E: Debug> Iterator for AdjacentEdges<'g, N, E> { } } -pub struct AdjacentTargets<'g, N: 'g, E: 'g> +pub struct AdjacentTargets<'g, N, E> where N: 'g, E: 'g { @@ -351,7 +417,7 @@ impl<'g, N: Debug, E: Debug> Iterator for AdjacentTargets<'g, N, E> { } } -pub struct AdjacentSources<'g, N: 'g, E: 'g> +pub struct AdjacentSources<'g, N, E> where N: 'g, E: 'g { @@ -366,7 +432,10 @@ impl<'g, N: Debug, E: Debug> Iterator for AdjacentSources<'g, N, E> { } } -pub struct DepthFirstTraversal<'g, N: 'g, E: 'g> { +pub struct DepthFirstTraversal<'g, N, E> + where N: 'g, + E: 'g +{ graph: &'g Graph, stack: Vec, visited: BitVector, diff --git a/src/librustc_data_structures/graph/tests.rs b/src/librustc_data_structures/graph/tests.rs index be7f48d27e..bdefc39a61 100644 --- a/src/librustc_data_structures/graph/tests.rs +++ b/src/librustc_data_structures/graph/tests.rs @@ -11,8 +11,6 @@ use graph::*; use std::fmt::Debug; -type TestNode = Node<&'static str>; -type TestEdge = Edge<&'static str>; type TestGraph = Graph<&'static str, &'static str>; fn create_graph() -> TestGraph { @@ -20,10 +18,13 @@ fn create_graph() -> TestGraph { // Create a simple graph // - // A -+> B --> C - // | | ^ - // | v | - // F D --> E + // F + // | + // V + // A --> B --> C + // | ^ + // v | + // D --> E let a = graph.add_node("A"); let b = graph.add_node("B"); @@ -42,6 +43,29 @@ fn create_graph() -> TestGraph { return 
graph; } +fn create_graph_with_cycle() -> TestGraph { + let mut graph = Graph::new(); + + // Create a graph with a cycle. + // + // A --> B <-- + + // | | + // v | + // C --> D + + let a = graph.add_node("A"); + let b = graph.add_node("B"); + let c = graph.add_node("C"); + let d = graph.add_node("D"); + + graph.add_edge(a, b, "AB"); + graph.add_edge(b, c, "BC"); + graph.add_edge(c, d, "CD"); + graph.add_edge(d, b, "DB"); + + return graph; +} + #[test] fn each_node() { let graph = create_graph(); @@ -139,3 +163,15 @@ fn each_adjacent_from_d() { let graph = create_graph(); test_adjacent_edges(&graph, NodeIndex(3), "D", &[("BD", "B")], &[("DE", "E")]); } + +#[test] +fn is_node_cyclic_a() { + let graph = create_graph_with_cycle(); + assert!(!graph.is_node_cyclic(NodeIndex(0))); +} + +#[test] +fn is_node_cyclic_b() { + let graph = create_graph_with_cycle(); + assert!(graph.is_node_cyclic(NodeIndex(1))); +} diff --git a/src/librustc_data_structures/lib.rs b/src/librustc_data_structures/lib.rs index fc963dac94..86f244d65d 100644 --- a/src/librustc_data_structures/lib.rs +++ b/src/librustc_data_structures/lib.rs @@ -44,8 +44,12 @@ extern crate serialize as rustc_serialize; // used by deriving #[cfg(unix)] extern crate libc; +pub use rustc_serialize::hex::ToHex; + pub mod array_vec; pub mod accumulate_vec; +pub mod small_vec; +pub mod base_n; pub mod bitslice; pub mod blake2b; pub mod bitvec; @@ -57,9 +61,11 @@ pub mod indexed_vec; pub mod obligation_forest; pub mod snapshot_map; pub mod snapshot_vec; +pub mod stable_hasher; pub mod transitive_relation; pub mod unify; pub mod fnv; +pub mod fx; pub mod tuple_slice; pub mod veccell; pub mod control_flow_graph; diff --git a/src/librustc_data_structures/obligation_forest/mod.rs b/src/librustc_data_structures/obligation_forest/mod.rs index a2bfa784e8..a46238309b 100644 --- a/src/librustc_data_structures/obligation_forest/mod.rs +++ b/src/librustc_data_structures/obligation_forest/mod.rs @@ -15,7 +15,7 @@ //! in the first place). See README.md for a general overview of how //! to use this class. -use fnv::{FnvHashMap, FnvHashSet}; +use fx::{FxHashMap, FxHashSet}; use std::cell::Cell; use std::collections::hash_map::Entry; @@ -68,9 +68,9 @@ pub struct ObligationForest { /// backtrace iterator (which uses `split_at`). nodes: Vec>, /// A cache of predicates that have been successfully completed. - done_cache: FnvHashSet, + done_cache: FxHashSet, /// An cache of the nodes in `nodes`, indexed by predicate. - waiting_cache: FnvHashMap, + waiting_cache: FxHashMap, /// A list of the obligations added in snapshots, to allow /// for their removal. cache_list: Vec, @@ -158,8 +158,8 @@ impl ObligationForest { ObligationForest { nodes: vec![], snapshots: vec![], - done_cache: FnvHashSet(), - waiting_cache: FnvHashMap(), + done_cache: FxHashSet(), + waiting_cache: FxHashMap(), cache_list: vec![], scratch: Some(vec![]), } diff --git a/src/librustc_data_structures/small_vec.rs b/src/librustc_data_structures/small_vec.rs new file mode 100644 index 0000000000..4e2b378602 --- /dev/null +++ b/src/librustc_data_structures/small_vec.rs @@ -0,0 +1,215 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! 
A vector type intended to be used for collecting from iterators onto the stack. +//! +//! Space for up to N elements is provided on the stack. If more elements are collected, Vec is +//! used to store the values on the heap. SmallVec is similar to AccumulateVec, but adds +//! the ability to push elements. +//! +//! The N above is determined by Array's implementor, by way of an associatated constant. + +use std::ops::{Deref, DerefMut}; +use std::iter::{IntoIterator, FromIterator}; +use std::fmt::{self, Debug}; +use std::mem; +use std::ptr; + +use rustc_serialize::{Encodable, Encoder, Decodable, Decoder}; + +use accumulate_vec::{IntoIter, AccumulateVec}; +use array_vec::Array; + +pub struct SmallVec(AccumulateVec); + +impl Clone for SmallVec + where A: Array, + A::Element: Clone { + fn clone(&self) -> Self { + SmallVec(self.0.clone()) + } +} + +impl Debug for SmallVec + where A: Array + Debug, + A::Element: Debug { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.debug_tuple("SmallVec").field(&self.0).finish() + } +} + +impl SmallVec { + pub fn new() -> Self { + SmallVec(AccumulateVec::new()) + } + + pub fn with_capacity(cap: usize) -> Self { + let mut vec = SmallVec::new(); + vec.reserve(cap); + vec + } + + pub fn one(el: A::Element) -> Self { + SmallVec(AccumulateVec::one(el)) + } + + pub fn many>(els: I) -> Self { + SmallVec(AccumulateVec::many(els)) + } + + pub fn expect_one(self, err: &'static str) -> A::Element { + assert!(self.len() == 1, err); + match self.0 { + AccumulateVec::Array(arr) => arr.into_iter().next().unwrap(), + AccumulateVec::Heap(vec) => vec.into_iter().next().unwrap(), + } + } + + /// Will reallocate onto the heap if needed. + pub fn push(&mut self, el: A::Element) { + self.reserve(1); + match self.0 { + AccumulateVec::Array(ref mut array) => array.push(el), + AccumulateVec::Heap(ref mut vec) => vec.push(el), + } + } + + pub fn reserve(&mut self, n: usize) { + match self.0 { + AccumulateVec::Array(_) => { + if self.len() + n > A::LEN { + let len = self.len(); + let array = mem::replace(&mut self.0, + AccumulateVec::Heap(Vec::with_capacity(len + n))); + if let AccumulateVec::Array(array) = array { + match self.0 { + AccumulateVec::Heap(ref mut vec) => vec.extend(array), + _ => unreachable!() + } + } + } + } + AccumulateVec::Heap(ref mut vec) => vec.reserve(n) + } + } + + pub unsafe fn set_len(&mut self, len: usize) { + match self.0 { + AccumulateVec::Array(ref mut arr) => arr.set_len(len), + AccumulateVec::Heap(ref mut vec) => vec.set_len(len), + } + } + + pub fn insert(&mut self, index: usize, element: A::Element) { + let len = self.len(); + + // Reserve space for shifting elements to the right + self.reserve(1); + + assert!(index <= len); + + unsafe { + // infallible + // The spot to put the new value + { + let p = self.as_mut_ptr().offset(index as isize); + // Shift everything over to make space. (Duplicating the + // `index`th element into two consecutive places.) + ptr::copy(p, p.offset(1), len - index); + // Write it in, overwriting the first copy of the `index`th + // element. + ptr::write(p, element); + } + self.set_len(len + 1); + } + } + + pub fn truncate(&mut self, len: usize) { + unsafe { + while len < self.len() { + // Decrement len before the drop_in_place(), so a panic on Drop + // doesn't re-drop the just-failed value. 
+ let newlen = self.len() - 1; + self.set_len(newlen); + ::std::ptr::drop_in_place(self.get_unchecked_mut(newlen)); + } + } + } +} + +impl Deref for SmallVec { + type Target = AccumulateVec; + fn deref(&self) -> &Self::Target { + &self.0 + } +} + +impl DerefMut for SmallVec { + fn deref_mut(&mut self) -> &mut AccumulateVec { + &mut self.0 + } +} + +impl FromIterator for SmallVec { + fn from_iter(iter: I) -> Self where I: IntoIterator { + SmallVec(iter.into_iter().collect()) + } +} + +impl Extend for SmallVec { + fn extend>(&mut self, iter: I) { + let iter = iter.into_iter(); + self.reserve(iter.size_hint().0); + for el in iter { + self.push(el); + } + } +} + +impl IntoIterator for SmallVec { + type Item = A::Element; + type IntoIter = IntoIter; + fn into_iter(self) -> Self::IntoIter { + self.0.into_iter() + } +} + +impl Default for SmallVec { + fn default() -> SmallVec { + SmallVec::new() + } +} + +impl Encodable for SmallVec + where A: Array, + A::Element: Encodable { + fn encode(&self, s: &mut S) -> Result<(), S::Error> { + s.emit_seq(self.len(), |s| { + for (i, e) in self.iter().enumerate() { + try!(s.emit_seq_elt(i, |s| e.encode(s))); + } + Ok(()) + }) + } +} + +impl Decodable for SmallVec + where A: Array, + A::Element: Decodable { + fn decode(d: &mut D) -> Result, D::Error> { + d.read_seq(|d, len| { + let mut vec = SmallVec::with_capacity(len); + for i in 0..len { + vec.push(try!(d.read_seq_elt(i, |d| Decodable::decode(d)))); + } + Ok(vec) + }) + } +} diff --git a/src/librustc_data_structures/snapshot_map/mod.rs b/src/librustc_data_structures/snapshot_map/mod.rs index a4e6166032..cd7143ad3c 100644 --- a/src/librustc_data_structures/snapshot_map/mod.rs +++ b/src/librustc_data_structures/snapshot_map/mod.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use fnv::FnvHashMap; +use fx::FxHashMap; use std::hash::Hash; use std::ops; use std::mem; @@ -19,7 +19,7 @@ mod test; pub struct SnapshotMap where K: Hash + Clone + Eq { - map: FnvHashMap, + map: FxHashMap, undo_log: Vec>, } @@ -40,7 +40,7 @@ impl SnapshotMap { pub fn new() -> Self { SnapshotMap { - map: FnvHashMap(), + map: FxHashMap(), undo_log: vec![], } } diff --git a/src/librustc_data_structures/stable_hasher.rs b/src/librustc_data_structures/stable_hasher.rs new file mode 100644 index 0000000000..ed97c3dde5 --- /dev/null +++ b/src/librustc_data_structures/stable_hasher.rs @@ -0,0 +1,176 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::hash::Hasher; +use std::marker::PhantomData; +use std::mem; +use blake2b::Blake2bHasher; +use rustc_serialize::leb128; + +fn write_unsigned_leb128_to_buf(buf: &mut [u8; 16], value: u64) -> usize { + leb128::write_unsigned_leb128_to(value, |i, v| buf[i] = v) +} + +fn write_signed_leb128_to_buf(buf: &mut [u8; 16], value: i64) -> usize { + leb128::write_signed_leb128_to(value, |i, v| buf[i] = v) +} + +/// When hashing something that ends up affecting properties like symbol names. We +/// want these symbol names to be calculated independent of other factors like +/// what architecture you're compiling *from*. 
+/// +/// The hashing just uses the standard `Hash` trait, but the implementations of +/// `Hash` for the `usize` and `isize` types are *not* architecture independent +/// (e.g. they has 4 or 8 bytes). As a result we want to avoid `usize` and +/// `isize` completely when hashing. +/// +/// To do that, we encode all integers to be hashed with some +/// arch-independent encoding. +/// +/// At the moment, we pass i8/u8 straight through and encode +/// all other integers using leb128. +/// +/// This hasher currently always uses the stable Blake2b algorithm +/// and allows for variable output lengths through its type +/// parameter. +#[derive(Debug)] +pub struct StableHasher { + state: Blake2bHasher, + bytes_hashed: u64, + width: PhantomData, +} + +pub trait StableHasherResult: Sized { + fn finish(hasher: StableHasher) -> Self; +} + +impl StableHasher { + pub fn new() -> Self { + StableHasher { + state: Blake2bHasher::new(mem::size_of::(), &[]), + bytes_hashed: 0, + width: PhantomData, + } + } + + pub fn finish(self) -> W { + W::finish(self) + } +} + +impl StableHasherResult for [u8; 20] { + fn finish(mut hasher: StableHasher) -> Self { + let mut result: [u8; 20] = [0; 20]; + result.copy_from_slice(hasher.state.finalize()); + result + } +} + +impl StableHasherResult for u64 { + fn finish(mut hasher: StableHasher) -> Self { + hasher.state.finalize(); + hasher.state.finish() + } +} + +impl StableHasher { + #[inline] + pub fn finalize(&mut self) -> &[u8] { + self.state.finalize() + } + + #[inline] + pub fn bytes_hashed(&self) -> u64 { + self.bytes_hashed + } + + #[inline] + fn write_uleb128(&mut self, value: u64) { + let mut buf = [0; 16]; + let len = write_unsigned_leb128_to_buf(&mut buf, value); + self.state.write(&buf[..len]); + self.bytes_hashed += len as u64; + } + + #[inline] + fn write_ileb128(&mut self, value: i64) { + let mut buf = [0; 16]; + let len = write_signed_leb128_to_buf(&mut buf, value); + self.state.write(&buf[..len]); + self.bytes_hashed += len as u64; + } +} + +// For the non-u8 integer cases we leb128 encode them first. Because small +// integers dominate, this significantly and cheaply reduces the number of +// bytes hashed, which is good because blake2b is expensive. 
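A self-contained sketch of the LEB128 idea described above may help: integers are encoded seven bits at a time before being fed to the hasher, so the result does not depend on the host's `usize` width and small values still hash cheaply. The standard library's `DefaultHasher` stands in for the Blake2b-backed `StableHasher` here; the names and signatures are illustrative only.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Encode a u64 as unsigned LEB128: 7 bits per byte, high bit set on all but
// the last byte. Small values (the common case) take a single byte, and the
// encoding is the same regardless of the width of usize on the host.
fn unsigned_leb128(mut value: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // more bytes follow
        }
        out.push(byte);
        if value == 0 {
            return out;
        }
    }
}

// Feed an integer into a hasher through its LEB128 encoding so that the
// resulting hash does not depend on mem::size_of::<usize>().
fn hash_usize_portably(hasher: &mut DefaultHasher, value: usize) {
    hasher.write(&unsigned_leb128(value as u64));
}

fn main() {
    assert_eq!(unsigned_leb128(3), vec![0x03]);
    assert_eq!(unsigned_leb128(300), vec![0xac, 0x02]);

    let mut h = DefaultHasher::new();
    hash_usize_portably(&mut h, 300);
    println!("{:x}", h.finish());
}
```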
+impl Hasher for StableHasher { + fn finish(&self) -> u64 { + panic!("use StableHasher::finish instead"); + } + + #[inline] + fn write(&mut self, bytes: &[u8]) { + self.state.write(bytes); + self.bytes_hashed += bytes.len() as u64; + } + + #[inline] + fn write_u8(&mut self, i: u8) { + self.state.write_u8(i); + self.bytes_hashed += 1; + } + + #[inline] + fn write_u16(&mut self, i: u16) { + self.write_uleb128(i as u64); + } + + #[inline] + fn write_u32(&mut self, i: u32) { + self.write_uleb128(i as u64); + } + + #[inline] + fn write_u64(&mut self, i: u64) { + self.write_uleb128(i); + } + + #[inline] + fn write_usize(&mut self, i: usize) { + self.write_uleb128(i as u64); + } + + #[inline] + fn write_i8(&mut self, i: i8) { + self.state.write_i8(i); + self.bytes_hashed += 1; + } + + #[inline] + fn write_i16(&mut self, i: i16) { + self.write_ileb128(i as i64); + } + + #[inline] + fn write_i32(&mut self, i: i32) { + self.write_ileb128(i as i64); + } + + #[inline] + fn write_i64(&mut self, i: i64) { + self.write_ileb128(i); + } + + #[inline] + fn write_isize(&mut self, i: isize) { + self.write_ileb128(i as i64); + } +} diff --git a/src/librustc_data_structures/unify/mod.rs b/src/librustc_data_structures/unify/mod.rs index 1f4d09a922..e2d3a4f453 100644 --- a/src/librustc_data_structures/unify/mod.rs +++ b/src/librustc_data_structures/unify/mod.rs @@ -344,7 +344,7 @@ impl<'tcx, K, V> UnificationTable } pub fn probe(&mut self, a_id: K) -> Option { - self.get(a_id).value.clone() + self.get(a_id).value } pub fn unsolved_variables(&mut self) -> Vec { diff --git a/src/librustc_driver/derive_registrar.rs b/src/librustc_driver/derive_registrar.rs index ea7621e16e..4db620b2be 100644 --- a/src/librustc_driver/derive_registrar.rs +++ b/src/librustc_driver/derive_registrar.rs @@ -9,7 +9,7 @@ // except according to those terms. 
use rustc::dep_graph::DepNode; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir::map::Map; use rustc::hir; use syntax::ast; @@ -20,7 +20,7 @@ pub fn find(hir_map: &Map) -> Option { let krate = hir_map.krate(); let mut finder = Finder { registrar: None }; - krate.visit_all_items(&mut finder); + krate.visit_all_item_likes(&mut finder); finder.registrar } @@ -28,10 +28,14 @@ struct Finder { registrar: Option, } -impl<'v> Visitor<'v> for Finder { +impl<'v> ItemLikeVisitor<'v> for Finder { fn visit_item(&mut self, item: &hir::Item) { if attr::contains_name(&item.attrs, "rustc_derive_registrar") { self.registrar = Some(item.id); } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } + diff --git a/src/librustc_driver/driver.rs b/src/librustc_driver/driver.rs index d839184956..7a12569266 100644 --- a/src/librustc_driver/driver.rs +++ b/src/librustc_driver/driver.rs @@ -10,11 +10,8 @@ use rustc::hir; use rustc::hir::{map as hir_map, FreevarMap, TraitMap}; -use rustc::hir::def::DefMap; use rustc::hir::lowering::lower_crate; -use rustc_data_structures::blake2b::Blake2bHasher; -use rustc_data_structures::fmt_wrap::FmtWrap; -use rustc::ty::util::ArchIndependentHasher; +use rustc_data_structures::stable_hasher::StableHasher; use rustc_mir as mir; use rustc::session::{Session, CompileResult, compile_result_from_err_count}; use rustc::session::config::{self, Input, OutputFilenames, OutputType, @@ -25,9 +22,10 @@ use rustc::middle::{self, dependency_format, stability, reachable}; use rustc::middle::privacy::AccessLevels; use rustc::ty::{self, TyCtxt}; use rustc::util::common::time; -use rustc::util::nodemap::NodeSet; +use rustc::util::nodemap::{NodeSet, NodeMap}; use rustc_borrowck as borrowck; use rustc_incremental::{self, IncrementalHashesMap}; +use rustc_incremental::ich::Fingerprint; use rustc_resolve::{MakeGlobMap, Resolver}; use rustc_metadata::creader::CrateLoader; use rustc_metadata::cstore::CStore; @@ -38,7 +36,7 @@ use rustc_privacy; use rustc_plugin::registry::Registry; use rustc_plugin as plugin; use rustc_passes::{ast_validation, no_asm, loops, consts, rvalues, - static_recursion, hir_stats}; + static_recursion, hir_stats, mir_stats}; use rustc_const_eval::check_match; use super::Compilation; @@ -53,7 +51,8 @@ use std::path::{Path, PathBuf}; use syntax::{ast, diagnostics, visit}; use syntax::attr; use syntax::ext::base::ExtCtxt; -use syntax::parse::{self, PResult, token}; +use syntax::parse::{self, PResult}; +use syntax::symbol::Symbol; use syntax::util::node_count::NodeCounter; use syntax; use syntax_ext; @@ -62,7 +61,6 @@ use derive_registrar; #[derive(Clone)] pub struct Resolutions { - pub def_map: DefMap, pub freevars: FreevarMap, pub trait_map: TraitMap, pub maybe_unused_trait_imports: NodeSet, @@ -210,13 +208,14 @@ pub fn compile_input(sess: &Session, tcx.print_debug_stats(); } - // Discard interned strings as they are no longer required. - token::clear_ident_interner(); - Ok((outputs, trans)) })?? }; + if sess.opts.debugging_opts.print_type_sizes { + sess.code_stats.borrow().print_type_sizes(); + } + let phase5_result = phase_5_run_llvm_passes(sess, &trans, &outputs); controller_entry_point!(after_llvm, @@ -330,9 +329,9 @@ impl<'a> PhaseController<'a> { /// State that is passed to a callback. What state is available depends on when /// during compilation the callback is made. See the various constructor methods /// (`state_*`) in the impl to see which data is provided for any given entry point. 
-pub struct CompileState<'a, 'b, 'ast: 'a, 'tcx: 'b> where 'ast: 'tcx { +pub struct CompileState<'a, 'tcx: 'a> { pub input: &'a Input, - pub session: &'ast Session, + pub session: &'tcx Session, pub krate: Option, pub registry: Option>, pub cstore: Option<&'a CStore>, @@ -340,21 +339,21 @@ pub struct CompileState<'a, 'b, 'ast: 'a, 'tcx: 'b> where 'ast: 'tcx { pub output_filenames: Option<&'a OutputFilenames>, pub out_dir: Option<&'a Path>, pub out_file: Option<&'a Path>, - pub arenas: Option<&'ast ty::CtxtArenas<'ast>>, + pub arenas: Option<&'tcx ty::CtxtArenas<'tcx>>, pub expanded_crate: Option<&'a ast::Crate>, pub hir_crate: Option<&'a hir::Crate>, - pub ast_map: Option<&'a hir_map::Map<'ast>>, + pub ast_map: Option<&'a hir_map::Map<'tcx>>, pub resolutions: Option<&'a Resolutions>, - pub analysis: Option<&'a ty::CrateAnalysis<'a>>, - pub tcx: Option>, + pub analysis: Option<&'a ty::CrateAnalysis<'tcx>>, + pub tcx: Option>, pub trans: Option<&'a trans::CrateTranslation>, } -impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { +impl<'a, 'tcx> CompileState<'a, 'tcx> { fn empty(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { input: input, session: session, @@ -377,12 +376,12 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { } fn state_after_parse(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option, out_file: &'a Option, krate: ast::Crate, cstore: &'a CStore) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { // Initialize the registry before moving `krate` registry: Some(Registry::new(&session, krate.span)), @@ -394,13 +393,13 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { } fn state_after_expand(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option, out_file: &'a Option, cstore: &'a CStore, expanded_crate: &'a ast::Crate, crate_name: &'a str) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { crate_name: Some(crate_name), cstore: Some(cstore), @@ -411,18 +410,18 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { } fn state_after_hir_lowering(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option, out_file: &'a Option, - arenas: &'ast ty::CtxtArenas<'ast>, + arenas: &'tcx ty::CtxtArenas<'tcx>, cstore: &'a CStore, - hir_map: &'a hir_map::Map<'ast>, - analysis: &'a ty::CrateAnalysis, + hir_map: &'a hir_map::Map<'tcx>, + analysis: &'a ty::CrateAnalysis<'static>, resolutions: &'a Resolutions, krate: &'a ast::Crate, hir_crate: &'a hir::Crate, crate_name: &'a str) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { crate_name: Some(crate_name), arenas: Some(arenas), @@ -438,15 +437,15 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { } fn state_after_analysis(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option, out_file: &'a Option, krate: Option<&'a ast::Crate>, hir_crate: &'a hir::Crate, - analysis: &'a ty::CrateAnalysis<'a>, - tcx: TyCtxt<'b, 'tcx, 'tcx>, + analysis: &'a ty::CrateAnalysis<'tcx>, + tcx: TyCtxt<'a, 'tcx, 'tcx>, crate_name: &'a str) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { analysis: Some(analysis), tcx: Some(tcx), @@ -460,11 +459,11 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { fn state_after_llvm(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a 
Option, out_file: &'a Option, trans: &'a trans::CrateTranslation) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { trans: Some(trans), out_file: out_file.as_ref().map(|s| &**s), @@ -473,10 +472,10 @@ impl<'a, 'b, 'ast, 'tcx> CompileState<'a, 'b, 'ast, 'tcx> { } fn state_when_compilation_done(input: &'a Input, - session: &'ast Session, + session: &'tcx Session, out_dir: &'a Option, out_file: &'a Option) - -> CompileState<'a, 'b, 'ast, 'tcx> { + -> Self { CompileState { out_file: out_file.as_ref().map(|s| &**s), ..CompileState::empty(input, session, out_dir) @@ -530,10 +529,10 @@ fn count_nodes(krate: &ast::Crate) -> usize { // For continuing compilation after a parsed crate has been // modified -pub struct ExpansionResult<'a> { +pub struct ExpansionResult { pub expanded_crate: ast::Crate, pub defs: hir_map::Definitions, - pub analysis: ty::CrateAnalysis<'a>, + pub analysis: ty::CrateAnalysis<'static>, pub resolutions: Resolutions, pub hir_forest: hir_map::Forest, } @@ -545,15 +544,15 @@ pub struct ExpansionResult<'a> { /// standard library and prelude, and name resolution. /// /// Returns `None` if we're aborting after handling -W help. -pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, - cstore: &CStore, - krate: ast::Crate, - registry: Option, - crate_name: &'a str, - addl_plugins: Option>, - make_glob_map: MakeGlobMap, - after_expand: F) - -> Result, usize> +pub fn phase_2_configure_and_expand(sess: &Session, + cstore: &CStore, + krate: ast::Crate, + registry: Option, + crate_name: &str, + addl_plugins: Option>, + make_glob_map: MakeGlobMap, + after_expand: F) + -> Result where F: FnOnce(&ast::Crate) -> CompileResult, { let time_passes = sess.time_passes(); @@ -563,11 +562,10 @@ pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, *sess.features.borrow_mut() = features; *sess.crate_types.borrow_mut() = collect_crate_types(sess, &krate.attrs); - *sess.crate_disambiguator.borrow_mut() = - token::intern(&compute_crate_disambiguator(sess)).as_str(); + *sess.crate_disambiguator.borrow_mut() = Symbol::intern(&compute_crate_disambiguator(sess)); time(time_passes, "recursion limit", || { - middle::recursion_limit::update_recursion_limit(sess, &krate); + middle::recursion_limit::update_limits(sess, &krate); }); krate = time(time_passes, "crate injection", || { @@ -709,13 +707,14 @@ pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, let crate_types = sess.crate_types.borrow(); let num_crate_types = crate_types.len(); let is_proc_macro_crate = crate_types.contains(&config::CrateTypeProcMacro); + let is_test_crate = sess.opts.test; syntax_ext::proc_macro_registrar::modify(&sess.parse_sess, &mut resolver, krate, is_proc_macro_crate, + is_test_crate, num_crate_types, - sess.diagnostic(), - &sess.features.borrow()) + sess.diagnostic()) }); } @@ -755,8 +754,6 @@ pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, || ast_validation::check_crate(sess, &krate)); time(sess.time_passes(), "name resolution", || -> CompileResult { - resolver.resolve_imports(); - // Since import resolution will eventually happen in expansion, // don't perform `after_expand` until after import resolution. 
after_expand(&krate)?; @@ -788,11 +785,11 @@ pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, export_map: resolver.export_map, access_levels: AccessLevels::default(), reachable: NodeSet(), - name: crate_name, + name: crate_name.to_string(), glob_map: if resolver.make_glob_map { Some(resolver.glob_map) } else { None }, + hir_ty_to_ty: NodeMap(), }, resolutions: Resolutions { - def_map: resolver.def_map, freevars: resolver.freevars, trait_map: resolver.trait_map, maybe_unused_trait_imports: resolver.maybe_unused_trait_imports, @@ -806,14 +803,14 @@ pub fn phase_2_configure_and_expand<'a, F>(sess: &Session, /// structures carrying the results of the analysis. pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, hir_map: hir_map::Map<'tcx>, - mut analysis: ty::CrateAnalysis, + mut analysis: ty::CrateAnalysis<'tcx>, resolutions: Resolutions, arenas: &'tcx ty::CtxtArenas<'tcx>, name: &str, f: F) -> Result where F: for<'a> FnOnce(TyCtxt<'a, 'tcx, 'tcx>, - ty::CrateAnalysis, + ty::CrateAnalysis<'tcx>, IncrementalHashesMap, CompileResult) -> R { @@ -839,9 +836,7 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, let named_region_map = time(time_passes, "lifetime resolution", - || middle::resolve_lifetime::krate(sess, - &hir_map, - &resolutions.def_map))?; + || middle::resolve_lifetime::krate(sess, &hir_map))?; time(time_passes, "looking for entry point", @@ -862,13 +857,12 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, time(time_passes, "static item recursion checking", - || static_recursion::check_crate(sess, &resolutions.def_map, &hir_map))?; + || static_recursion::check_crate(sess, &hir_map))?; let index = stability::Index::new(&hir_map); TyCtxt::create_and_enter(sess, arenas, - resolutions.def_map, resolutions.trait_map, named_region_map, hir_map, @@ -887,8 +881,17 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, "load_dep_graph", || rustc_incremental::load_dep_graph(tcx, &incremental_hashes_map)); + time(time_passes, "stability index", || { + tcx.stability.borrow_mut().build(tcx) + }); + + time(time_passes, + "stability checking", + || stability::check_unstable_api_usage(tcx)); + // passes are timed inside typeck - try_with_f!(typeck::check_crate(tcx), (tcx, analysis, incremental_hashes_map)); + analysis.hir_ty_to_ty = + try_with_f!(typeck::check_crate(tcx), (tcx, analysis, incremental_hashes_map)); time(time_passes, "const checking", @@ -899,11 +902,6 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, rustc_privacy::check_crate(tcx, &analysis.export_map) }); - // Do not move this check past lint - time(time_passes, "stability index", || { - tcx.stability.borrow_mut().build(tcx, &analysis.access_levels) - }); - time(time_passes, "intrinsic checking", || middle::intrinsicck::check_crate(tcx)); @@ -932,6 +930,10 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, "MIR dump", || mir::mir_map::build_mir_for_crate(tcx)); + if sess.opts.debugging_opts.mir_stats { + mir_stats::print_mir_stats(tcx, "PRE CLEANUP MIR STATS"); + } + time(time_passes, "MIR cleanup and validation", || { let mut passes = sess.mir_passes.borrow_mut(); // Push all the built-in validation passes. 
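Nearly every pass above is wrapped in a `time(time_passes, "...", || ...)` call so the session can report where compilation time goes when pass timing is enabled. A minimal standalone sketch of that timing idiom (not rustc's actual `util::common::time` signature) might look like this:

```rust
use std::time::Instant;

// Run `f`, and if timing is enabled print how long it took under `what`.
// The result of `f` is returned either way, so call sites stay unchanged.
fn time<T, F: FnOnce() -> T>(do_time: bool, what: &str, f: F) -> T {
    if !do_time {
        return f();
    }
    let start = Instant::now();
    let result = f();
    let elapsed = start.elapsed();
    println!("time: {}.{:03}s\t{}", elapsed.as_secs(), elapsed.subsec_millis(), what);
    result
}

fn main() {
    let answer = time(true, "expensive pass", || (0..1_000_000u64).sum::<u64>());
    println!("{}", answer);
}
```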
@@ -972,14 +974,8 @@ pub fn phase_3_run_analysis_passes<'tcx, F, R>(sess: &'tcx Session, middle::dead::check_crate(tcx, &analysis.access_levels); }); - let ref lib_features_used = - time(time_passes, - "stability checking", - || stability::check_unstable_api_usage(tcx)); - time(time_passes, "unused lib feature checking", || { - stability::check_unused_or_stable_features(&tcx.sess, - lib_features_used) + stability::check_unused_or_stable_features(tcx, &analysis.access_levels) }); time(time_passes, @@ -1006,6 +1002,10 @@ pub fn phase_4_translate_to_llvm<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, "resolving dependency formats", || dependency_format::calculate(&tcx.sess)); + if tcx.sess.opts.debugging_opts.mir_stats { + mir_stats::print_mir_stats(tcx, "PRE OPTIMISATION MIR STATS"); + } + // Run the passes that transform the MIR into a more suitable form for translation to LLVM // code. time(time_passes, "MIR optimisations", || { @@ -1034,6 +1034,10 @@ pub fn phase_4_translate_to_llvm<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, passes.run_passes(tcx); }); + if tcx.sess.opts.debugging_opts.mir_stats { + mir_stats::print_mir_stats(tcx, "POST OPTIMISATION MIR STATS"); + } + let translation = time(time_passes, "translation", @@ -1056,7 +1060,11 @@ pub fn phase_4_translate_to_llvm<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, pub fn phase_5_run_llvm_passes(sess: &Session, trans: &trans::CrateTranslation, outputs: &OutputFilenames) -> CompileResult { - if sess.opts.cg.no_integrated_as { + if sess.opts.cg.no_integrated_as || + (sess.target.target.options.no_integrated_as && + (outputs.outputs.contains_key(&OutputType::Object) || + outputs.outputs.contains_key(&OutputType::Exe))) + { let output_types = OutputTypes::new(&[(OutputType::Assembly, None)]); time(sess.time_passes(), "LLVM passes", @@ -1064,6 +1072,17 @@ pub fn phase_5_run_llvm_passes(sess: &Session, write::run_assembler(sess, outputs); + // HACK the linker expects the object file to be named foo.0.o but + // `run_assembler` produces an object named just foo.o. Rename it if we + // are going to build an executable + if sess.opts.output_types.contains_key(&OutputType::Exe) { + let f = outputs.path(OutputType::Object); + fs::copy(&f, + f.with_file_name(format!("{}.0.o", + f.file_stem().unwrap().to_string_lossy()))).unwrap(); + fs::remove_file(f).unwrap(); + } + // Remove assembly source, unless --save-temps was specified if !sess.opts.cg.save_temps { fs::remove_file(&outputs.temp_path(OutputType::Assembly, None)).unwrap(); @@ -1092,7 +1111,7 @@ pub fn phase_6_link_output(sess: &Session, outputs: &OutputFilenames) { time(sess.time_passes(), "linking", - || link::link_binary(sess, trans, outputs, &trans.link.crate_name)); + || link::link_binary(sess, trans, outputs, &trans.link.crate_name.as_str())); } fn escape_dep_filename(filename: &str) -> String { @@ -1170,6 +1189,9 @@ pub fn collect_crate_types(session: &Session, attrs: &[ast::Attribute]) -> Vec { Some(config::CrateTypeRlib) } + Some(ref n) if *n == "metadata" => { + Some(config::CrateTypeMetadata) + } Some(ref n) if *n == "dylib" => { Some(config::CrateTypeDylib) } @@ -1250,7 +1272,7 @@ pub fn compute_crate_disambiguator(session: &Session) -> String { // FIXME(mw): It seems that the crate_disambiguator is used everywhere as // a hex-string instead of raw bytes. We should really use the // smaller representation. 
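The FIXME above notes that the crate disambiguator is consumed everywhere as a hex string. In the replacement code below, the order-independent `-C metadata` strings are hashed with the new `StableHasher` and the digest is hex-formatted, with an extra `-exe` suffix for executables. A rough standalone sketch of that "hash some strings, print the digest as hex, tack on a suffix" shape, using `DefaultHasher` as a stand-in and a hypothetical helper name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hash a set of free-form metadata strings into a short hex identifier.
// The inputs are sorted first so the result does not depend on the order in
// which they were passed on the command line.
fn disambiguator(mut metadata: Vec<String>, is_exe: bool) -> String {
    metadata.sort();
    let mut hasher = DefaultHasher::new();
    for s in &metadata {
        hasher.write_usize(s.len()); // length-prefix so "ab","c" != "a","bc"
        hasher.write(s.as_bytes());
    }
    format!("{:016x}{}", hasher.finish(), if is_exe { "-exe" } else { "" })
}

fn main() {
    let d = disambiguator(vec!["crate=foo".to_string(), "version=1".to_string()], true);
    println!("{}", d); // sixteen hex digits followed by "-exe"
}
```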
- let mut hasher = ArchIndependentHasher::new(Blake2bHasher::new(128 / 8, &[])); + let mut hasher = StableHasher::::new(); let mut metadata = session.opts.cg.metadata.clone(); // We don't want the crate_disambiguator to dependent on the order @@ -1268,14 +1290,11 @@ pub fn compute_crate_disambiguator(session: &Session) -> String { hasher.write(s.as_bytes()); } - let mut hash_state = hasher.into_inner(); - let hash_bytes = hash_state.finalize(); - // If this is an executable, add a special suffix, so that we don't get // symbol conflicts when linking against a library of the same name. let is_exe = session.crate_types.borrow().contains(&config::CrateTypeExecutable); - format!("{:x}{}", FmtWrap(hash_bytes), if is_exe { "-exe" } else {""}) + format!("{}{}", hasher.finish().to_hex(), if is_exe { "-exe" } else {""}) } pub fn build_output_filenames(input: &Input, @@ -1343,11 +1362,3 @@ pub fn build_output_filenames(input: &Input, } } } - -// For use by the `rusti` project (https://github.com/murarth/rusti). -pub fn reset_thread_local_state() { - // These may be left in an incoherent state after a previous compile. - syntax::ext::hygiene::reset_hygiene_data(); - // `clear_ident_interner` can be used to free memory, but it does not restore the initial state. - token::reset_ident_interner(); -} diff --git a/src/librustc_driver/lib.rs b/src/librustc_driver/lib.rs index 6ddbce7dc7..f84622c2f0 100644 --- a/src/librustc_driver/lib.rs +++ b/src/librustc_driver/lib.rs @@ -24,14 +24,12 @@ #![cfg_attr(not(stage0), deny(warnings))] #![feature(box_syntax)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(libc)] #![feature(quote)] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(set_stdio)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] extern crate arena; extern crate flate; @@ -75,13 +73,15 @@ use rustc::dep_graph::DepGraph; use rustc::session::{self, config, Session, build_session, CompileResult}; use rustc::session::config::{Input, PrintRequest, OutputType, ErrorOutputType}; use rustc::session::config::nightly_options; -use rustc::session::early_error; +use rustc::session::{early_error, early_warn}; use rustc::lint::Lint; use rustc::lint; use rustc_metadata::locator; use rustc_metadata::cstore::CStore; use rustc::util::common::time; +use serialize::json::ToJson; + use std::cmp::max; use std::cmp::Ordering::Equal; use std::default::Default; @@ -95,12 +95,11 @@ use std::str; use std::sync::{Arc, Mutex}; use std::thread; -use syntax::{ast, json}; +use syntax::ast; use syntax::codemap::{CodeMap, FileLoader, RealFileLoader}; use syntax::feature_gate::{GatedCfg, UnstableFeatures}; use syntax::parse::{self, PResult}; -use syntax_pos::MultiSpan; -use errors::emitter::Emitter; +use syntax_pos::{DUMMY_SP, MultiSpan}; #[cfg(test)] pub mod test; @@ -374,37 +373,11 @@ fn handle_explain(code: &str, } } -fn check_cfg(cfg: &ast::CrateConfig, - output: ErrorOutputType) { - let emitter: Box = match output { - config::ErrorOutputType::HumanReadable(color_config) => { - Box::new(errors::emitter::EmitterWriter::stderr(color_config, None)) - } - config::ErrorOutputType::Json => Box::new(json::JsonEmitter::basic()), - }; - let handler = errors::Handler::with_emitter(true, false, emitter); - - let mut saw_invalid_predicate = false; - for item in cfg.iter() { - if item.is_meta_item_list() { - saw_invalid_predicate = true; - handler.emit(&MultiSpan::new(), - &format!("invalid predicate in --cfg command line argument: `{}`", - item.name()), - errors::Level::Fatal); 
- } - } - - if saw_invalid_predicate { - panic!(errors::FatalError); - } -} - impl<'a> CompilerCalls<'a> for RustcDefaultCalls { fn early_callback(&mut self, matches: &getopts::Matches, _: &config::Options, - cfg: &ast::CrateConfig, + _: &ast::CrateConfig, descriptions: &errors::registry::Registry, output: ErrorOutputType) -> Compilation { @@ -413,7 +386,6 @@ impl<'a> CompilerCalls<'a> for RustcDefaultCalls { return Compilation::Stop; } - check_cfg(cfg, output); Compilation::Continue } @@ -455,8 +427,6 @@ impl<'a> CompilerCalls<'a> for RustcDefaultCalls { 1 => panic!("make_input should have provided valid inputs"), _ => early_error(sopts.error_format, "multiple input filenames provided"), } - - None } fn late_callback(&mut self, @@ -616,6 +586,7 @@ impl RustcDefaultCalls { println!("{}", targets.join("\n")); }, PrintRequest::Sysroot => println!("{}", sess.sysroot().display()), + PrintRequest::TargetSpec => println!("{}", sess.target.target.to_json().pretty()), PrintRequest::FileNames | PrintRequest::CrateName => { let input = match input { @@ -642,24 +613,27 @@ impl RustcDefaultCalls { let allow_unstable_cfg = UnstableFeatures::from_environment() .is_nightly_build(); - for cfg in &sess.parse_sess.config { - if !allow_unstable_cfg && GatedCfg::gate(cfg).is_some() { + let mut cfgs = Vec::new(); + for &(name, ref value) in sess.parse_sess.config.iter() { + let gated_cfg = GatedCfg::gate(&ast::MetaItem { + name: name, + node: ast::MetaItemKind::Word, + span: DUMMY_SP, + }); + if !allow_unstable_cfg && gated_cfg.is_some() { continue; } - if cfg.is_word() { - println!("{}", cfg.name()); - } else if let Some(s) = cfg.value_str() { - println!("{}=\"{}\"", cfg.name(), s); - } else if cfg.is_meta_item_list() { - // Right now there are not and should not be any - // MetaItemKind::List items in the configuration returned by - // `build_configuration`. - panic!("Found an unexpected list in cfg attribute '{}'!", cfg.name()) + cfgs.push(if let &Some(ref value) = value { + format!("{}=\"{}\"", name, value) } else { - // There also shouldn't be literals. 
- panic!("Found an unexpected literal in cfg attribute '{}'!", cfg.name()) - } + format!("{}", name) + }); + } + + cfgs.sort(); + for cfg in cfgs { + println!("{}", cfg); } } PrintRequest::TargetCPUs => { @@ -1011,6 +985,11 @@ pub fn handle_options(args: &[String]) -> Option { return None; } + if cg_flags.iter().any(|x| *x == "no-stack-check") { + early_warn(ErrorOutputType::default(), + "the --no-stack-check flag is deprecated and does nothing"); + } + if cg_flags.contains(&"passes=list".to_string()) { unsafe { ::llvm::LLVMRustPrintPasses(); diff --git a/src/librustc_driver/pretty.rs b/src/librustc_driver/pretty.rs index b4ab9da92e..b055b04372 100644 --- a/src/librustc_driver/pretty.rs +++ b/src/librustc_driver/pretty.rs @@ -200,7 +200,7 @@ impl PpSourceMode { fn call_with_pp_support_hir<'tcx, A, B, F>(&self, sess: &'tcx Session, ast_map: &hir_map::Map<'tcx>, - analysis: &ty::CrateAnalysis, + analysis: &ty::CrateAnalysis<'tcx>, resolutions: &Resolutions, arenas: &'tcx ty::CtxtArenas<'tcx>, id: &str, @@ -450,15 +450,15 @@ impl<'ast> PrinterSupport<'ast> for HygieneAnnotation<'ast> { impl<'ast> pprust::PpAnn for HygieneAnnotation<'ast> { fn post(&self, s: &mut pprust::State, node: pprust::AnnNode) -> io::Result<()> { match node { - pprust::NodeIdent(&ast::Ident { name: ast::Name(nm), ctxt }) => { + pprust::NodeIdent(&ast::Ident { name, ctxt }) => { pp::space(&mut s.s)?; // FIXME #16420: this doesn't display the connections // between syntax contexts - s.synth_comment(format!("{}{:?}", nm, ctxt)) + s.synth_comment(format!("{}{:?}", name.as_u32(), ctxt)) } - pprust::NodeName(&ast::Name(nm)) => { + pprust::NodeName(&name) => { pp::space(&mut s.s)?; - s.synth_comment(nm.to_string()) + s.synth_comment(name.as_u32().to_string()) } _ => Ok(()), } @@ -696,13 +696,16 @@ impl fold::Folder for ReplaceBodyWithLoop { fn print_flowgraph<'a, 'tcx, W: Write>(variants: Vec, tcx: TyCtxt<'a, 'tcx, 'tcx>, - code: blocks::Code, + code: blocks::Code<'tcx>, mode: PpFlowGraphMode, mut out: W) -> io::Result<()> { let cfg = match code { - blocks::BlockCode(block) => cfg::CFG::new(tcx, &block), - blocks::FnLikeCode(fn_like) => cfg::CFG::new(tcx, &fn_like.body()), + blocks::Code::Expr(expr) => cfg::CFG::new(tcx, expr), + blocks::Code::FnLike(fn_like) => { + let body = tcx.map.expr(fn_like.body()); + cfg::CFG::new(tcx, body) + }, }; let labelled_edges = mode != PpFlowGraphMode::UnlabelledEdges; let lcfg = LabelledCFG { @@ -717,12 +720,12 @@ fn print_flowgraph<'a, 'tcx, W: Write>(variants: Vec, let r = dot::render(&lcfg, &mut out); return expand_err_details(r); } - blocks::BlockCode(_) => { + blocks::Code::Expr(_) => { tcx.sess.err("--pretty flowgraph with -Z flowgraph-print annotations requires \ fn-like node id."); return Ok(()); } - blocks::FnLikeCode(fn_like) => { + blocks::Code::FnLike(fn_like) => { let (bccx, analysis_data) = borrowck::build_borrowck_dataflow_data_for_fn(tcx, fn_like.to_fn_parts(), &cfg); @@ -817,7 +820,7 @@ pub fn print_after_parsing(sess: &Session, pub fn print_after_hir_lowering<'tcx, 'a: 'tcx>(sess: &'a Session, ast_map: &hir_map::Map<'tcx>, - analysis: &ty::CrateAnalysis, + analysis: &ty::CrateAnalysis<'tcx>, resolutions: &Resolutions, input: &Input, krate: &ast::Crate, @@ -934,7 +937,7 @@ pub fn print_after_hir_lowering<'tcx, 'a: 'tcx>(sess: &'a Session, // Instead, we call that function ourselves. 
fn print_with_analysis<'tcx, 'a: 'tcx>(sess: &'a Session, ast_map: &hir_map::Map<'tcx>, - analysis: &ty::CrateAnalysis, + analysis: &ty::CrateAnalysis<'tcx>, resolutions: &Resolutions, crate_name: &str, arenas: &'tcx ty::CtxtArenas<'tcx>, @@ -990,8 +993,7 @@ fn print_with_analysis<'tcx, 'a: 'tcx>(sess: &'a Session, tcx.sess.fatal(&format!("--pretty flowgraph couldn't find id: {}", nodeid)) }); - let code = blocks::Code::from_node(node); - match code { + match blocks::Code::from_node(&tcx.map, nodeid) { Some(code) => { let variants = gather_flowgraph_variants(tcx.sess); @@ -1004,11 +1006,7 @@ fn print_with_analysis<'tcx, 'a: 'tcx>(sess: &'a Session, got {:?}", node); - // Point to what was found, if there's an accessible span. - match tcx.map.opt_span(nodeid) { - Some(sp) => tcx.sess.span_fatal(sp, &message), - None => tcx.sess.fatal(&message), - } + tcx.sess.span_fatal(tcx.map.span(nodeid), &message) } } } diff --git a/src/librustc_driver/target_features.rs b/src/librustc_driver/target_features.rs index ba51947a33..124e7aafcc 100644 --- a/src/librustc_driver/target_features.rs +++ b/src/librustc_driver/target_features.rs @@ -8,12 +8,12 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use syntax::{ast, attr}; +use syntax::ast; use llvm::LLVMRustHasFeature; use rustc::session::Session; use rustc_trans::back::write::create_target_machine; -use syntax::parse::token::InternedString; -use syntax::parse::token::intern_and_get_ident as intern; +use syntax::feature_gate::UnstableFeatures; +use syntax::symbol::Symbol; use libc::c_char; // WARNING: the features must be known to LLVM or the feature @@ -24,7 +24,8 @@ const ARM_WHITELIST: &'static [&'static str] = &["neon\0", "vfp2\0", "vfp3\0", " const X86_WHITELIST: &'static [&'static str] = &["avx\0", "avx2\0", "bmi\0", "bmi2\0", "sse\0", "sse2\0", "sse3\0", "sse4.1\0", "sse4.2\0", - "ssse3\0", "tbm\0"]; + "ssse3\0", "tbm\0", "lzcnt\0", "popcnt\0", + "sse4a\0"]; /// Add `target_feature = "..."` cfgs for a variety of platform /// specific features (SSE, NEON etc.). @@ -40,11 +41,39 @@ pub fn add_configuration(cfg: &mut ast::CrateConfig, sess: &Session) { _ => &[], }; - let tf = InternedString::new("target_feature"); + let tf = Symbol::intern("target_feature"); for feat in whitelist { assert_eq!(feat.chars().last(), Some('\0')); if unsafe { LLVMRustHasFeature(target_machine, feat.as_ptr() as *const c_char) } { - cfg.push(attr::mk_name_value_item_str(tf.clone(), intern(&feat[..feat.len() - 1]))) + cfg.insert((tf, Some(Symbol::intern(&feat[..feat.len() - 1])))); } } + + let requested_features = sess.opts.cg.target_feature.split(','); + let unstable_options = sess.opts.debugging_opts.unstable_options; + let is_nightly = UnstableFeatures::from_environment().is_nightly_build(); + let found_negative = requested_features.clone().any(|r| r == "-crt-static"); + let found_positive = requested_features.clone().any(|r| r == "+crt-static"); + + // If the target we're compiling for requests a static crt by default, + // then see if the `-crt-static` feature was passed to disable that. + // Otherwise if we don't have a static crt by default then see if the + // `+crt-static` feature was passed. + let crt_static = if sess.target.target.options.crt_static_default { + !found_negative + } else { + found_positive + }; + + // If we switched from the default then that's only allowed on nightly, so + // gate that here. 
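The two comments above describe a small decision table: respect the target's `crt_static_default`, let an explicit `+crt-static`/`-crt-static` in `-C target-feature` flip it, and reject any explicit override outside nightly with `-Z unstable-options`. A standalone sketch of just that decision (a hypothetical helper, not the function being patched):

```rust
// Decide whether the C runtime should be linked statically, given the target
// default and the `-C target-feature` string the user passed.
// Returns Err if the user tried to override the default but is not allowed to.
fn crt_static(
    default_static: bool,
    target_features: &str,
    is_nightly: bool,
    unstable_options: bool,
) -> Result<bool, &'static str> {
    let requested = target_features.split(',');
    let found_negative = requested.clone().any(|f| f == "-crt-static");
    let found_positive = requested.clone().any(|f| f == "+crt-static");

    // Overriding the target default is only allowed on nightly with -Z unstable-options.
    if (found_positive || found_negative) && (!is_nightly || !unstable_options) {
        return Err("specifying the `crt-static` target feature requires nightly");
    }

    // If the target links the CRT statically by default, `-crt-static` opts out;
    // otherwise `+crt-static` opts in.
    Ok(if default_static { !found_negative } else { found_positive })
}

fn main() {
    assert_eq!(crt_static(false, "+crt-static,sse2", true, true), Ok(true));
    assert_eq!(crt_static(true, "-crt-static", true, true), Ok(false));
    assert!(crt_static(false, "+crt-static", false, false).is_err());
}
```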
+ if (found_positive || found_negative) && (!is_nightly || !unstable_options) { + sess.fatal("specifying the `crt-static` target feature is only allowed \ + on the nightly channel with `-Z unstable-options` passed \ + as well"); + } + + if crt_static { + cfg.insert((tf, Some(Symbol::intern("crt-static")))); + } } diff --git a/src/librustc_driver/test.rs b/src/librustc_driver/test.rs index 8dc2155014..2f8550e5ac 100644 --- a/src/librustc_driver/test.rs +++ b/src/librustc_driver/test.rs @@ -21,9 +21,10 @@ use rustc::middle::region::CodeExtentData; use rustc::middle::resolve_lifetime; use rustc::middle::stability; use rustc::ty::subst::{Kind, Subst}; -use rustc::traits::Reveal; +use rustc::traits::{ObligationCause, Reveal}; use rustc::ty::{self, Ty, TyCtxt, TypeFoldable}; -use rustc::infer::{self, InferOk, InferResult, TypeOrigin}; +use rustc::infer::{self, InferOk, InferResult}; +use rustc::infer::type_variable::TypeVariableOrigin; use rustc_metadata::cstore::CStore; use rustc::hir::map as hir_map; use rustc::session::{self, config}; @@ -34,8 +35,8 @@ use syntax::codemap::CodeMap; use errors; use errors::emitter::Emitter; use errors::{Level, DiagnosticBuilder}; -use syntax::parse::token; use syntax::feature_gate::UnstableFeatures; +use syntax::symbol::Symbol; use syntax_pos::DUMMY_SP; use rustc::hir; @@ -132,12 +133,11 @@ fn test_env(source_string: &str, // run just enough stuff to build a tcx: let lang_items = lang_items::collect_language_items(&sess, &ast_map); - let named_region_map = resolve_lifetime::krate(&sess, &ast_map, &resolutions.def_map); + let named_region_map = resolve_lifetime::krate(&sess, &ast_map); let region_map = region::resolve_crate(&sess, &ast_map); let index = stability::Index::new(&ast_map); TyCtxt::create_and_enter(&sess, &arenas, - resolutions.def_map, resolutions.trait_map, named_region_map.unwrap(), ast_map, @@ -245,7 +245,7 @@ impl<'a, 'gcx, 'tcx> Env<'a, 'gcx, 'tcx> { } pub fn make_subtype(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> bool { - match self.infcx.sub_types(true, TypeOrigin::Misc(DUMMY_SP), a, b) { + match self.infcx.sub_types(true, &ObligationCause::dummy(), a, b) { Ok(_) => true, Err(ref e) => panic!("Encountered error: {}", e), } @@ -267,15 +267,10 @@ impl<'a, 'gcx, 'tcx> Env<'a, 'gcx, 'tcx> { } pub fn t_fn(&self, input_tys: &[Ty<'tcx>], output_ty: Ty<'tcx>) -> Ty<'tcx> { - let input_args = input_tys.iter().cloned().collect(); self.infcx.tcx.mk_fn_ptr(self.infcx.tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: input_args, - output: output_ty, - variadic: false, - }), + sig: ty::Binder(self.infcx.tcx.mk_fn_sig(input_tys.iter().cloned(), output_ty, false)), })) } @@ -289,11 +284,11 @@ impl<'a, 'gcx, 'tcx> Env<'a, 'gcx, 'tcx> { pub fn t_param(&self, index: u32) -> Ty<'tcx> { let name = format!("T{}", index); - self.infcx.tcx.mk_param(index, token::intern(&name[..])) + self.infcx.tcx.mk_param(index, Symbol::intern(&name[..])) } pub fn re_early_bound(&self, index: u32, name: &'static str) -> &'tcx ty::Region { - let name = token::intern(name); + let name = Symbol::intern(name); self.infcx.tcx.mk_region(ty::ReEarlyBound(ty::EarlyBoundRegion { index: index, name: name, @@ -496,7 +491,7 @@ fn sub_free_bound_false_infer() { //! does NOT hold for any instantiation of `_#1`. 
test_env(EMPTY_SOURCE_STR, errors(&[]), |env| { - let t_infer1 = env.infcx.next_ty_var(); + let t_infer1 = env.infcx.next_ty_var(TypeVariableOrigin::MiscVariable(DUMMY_SP)); let t_rptr_bound1 = env.t_rptr_late_bound(1); env.check_not_sub(env.t_fn(&[t_infer1], env.tcx().types.isize), env.t_fn(&[t_rptr_bound1], env.tcx().types.isize)); @@ -515,7 +510,7 @@ fn lub_free_bound_infer() { test_env(EMPTY_SOURCE_STR, errors(&[]), |env| { env.create_simple_region_hierarchy(); - let t_infer1 = env.infcx.next_ty_var(); + let t_infer1 = env.infcx.next_ty_var(TypeVariableOrigin::MiscVariable(DUMMY_SP)); let t_rptr_bound1 = env.t_rptr_late_bound(1); let t_rptr_free1 = env.t_rptr_free(1, 1); env.check_lub(env.t_fn(&[t_infer1], env.tcx().types.isize), @@ -635,7 +630,7 @@ fn glb_bound_free() { fn glb_bound_free_infer() { test_env(EMPTY_SOURCE_STR, errors(&[]), |env| { let t_rptr_bound1 = env.t_rptr_late_bound(1); - let t_infer1 = env.infcx.next_ty_var(); + let t_infer1 = env.infcx.next_ty_var(TypeVariableOrigin::MiscVariable(DUMMY_SP)); // compute GLB(fn(_) -> isize, for<'b> fn(&'b isize) -> isize), // which should yield for<'b> fn(&'b isize) -> isize diff --git a/src/librustc_errors/emitter.rs b/src/librustc_errors/emitter.rs index a307e9b696..808fe504b9 100644 --- a/src/librustc_errors/emitter.rs +++ b/src/librustc_errors/emitter.rs @@ -14,7 +14,7 @@ use syntax_pos::{COMMAND_LINE_SP, DUMMY_SP, FileMap, Span, MultiSpan, CharPos}; use {Level, CodeSuggestion, DiagnosticBuilder, SubDiagnostic, CodeMapper}; use RenderSpan::*; -use snippet::{StyledString, Style, Annotation, Line}; +use snippet::{Annotation, AnnotationType, Line, MultilineAnnotation, StyledString, Style}; use styled_buffer::StyledBuffer; use std::io::prelude::*; @@ -65,6 +65,7 @@ pub struct EmitterWriter { struct FileWithAnnotatedLines { file: Rc, lines: Vec, + multiline_depth: usize, } @@ -137,10 +138,12 @@ impl EmitterWriter { line_index: line_index, annotations: vec![ann], }], + multiline_depth: 0, }); } let mut output = vec![]; + let mut multiline_annotations = vec![]; if let Some(ref cm) = self.cm { for span_label in msp.span_labels() { @@ -151,8 +154,9 @@ impl EmitterWriter { let mut hi = cm.lookup_char_pos(span_label.span.hi); let mut is_minimized = false; - // If the span is multi-line, simplify down to the span of one character - if lo.line != hi.line { + // If the span is long multi-line, simplify down to the span of one character + let max_multiline_span_length = 8; + if lo.line != hi.line && (hi.line - lo.line) > max_multiline_span_length { hi.line = lo.line; hi.col = CharPos(lo.col.0 + 1); is_minimized = true; @@ -163,22 +167,76 @@ impl EmitterWriter { // 6..7. This is degenerate input, but it's best to degrade // gracefully -- and the parser likes to supply a span like // that for EOF, in particular. 
- if lo.col == hi.col { + if lo.col == hi.col && lo.line == hi.line { hi.col = CharPos(lo.col.0 + 1); } - add_annotation_to_file(&mut output, - lo.file, - lo.line, - Annotation { - start_col: lo.col.0, - end_col: hi.col.0, - is_primary: span_label.is_primary, - is_minimized: is_minimized, - label: span_label.label.clone(), - }); + let mut ann = Annotation { + start_col: lo.col.0, + end_col: hi.col.0, + is_primary: span_label.is_primary, + label: span_label.label.clone(), + annotation_type: AnnotationType::Singleline, + }; + if is_minimized { + ann.annotation_type = AnnotationType::Minimized; + } else if lo.line != hi.line { + let ml = MultilineAnnotation { + depth: 1, + line_start: lo.line, + line_end: hi.line, + start_col: lo.col.0, + end_col: hi.col.0, + is_primary: span_label.is_primary, + label: span_label.label.clone(), + }; + ann.annotation_type = AnnotationType::Multiline(ml.clone()); + multiline_annotations.push((lo.file.clone(), ml)); + }; + + if !ann.is_multiline() { + add_annotation_to_file(&mut output, + lo.file, + lo.line, + ann); + } } } + + // Find overlapping multiline annotations, put them at different depths + multiline_annotations.sort_by(|a, b| { + (a.1.line_start, a.1.line_end).cmp(&(b.1.line_start, b.1.line_end)) + }); + for item in multiline_annotations.clone() { + let ann = item.1; + for item in multiline_annotations.iter_mut() { + let ref mut a = item.1; + // Move all other multiline annotations overlapping with this one + // one level to the right. + if &ann != a && + num_overlap(ann.line_start, ann.line_end, a.line_start, a.line_end, true) + { + a.increase_depth(); + } else { + break; + } + } + } + + let mut max_depth = 0; // max overlapping multiline spans + for (file, ann) in multiline_annotations { + if ann.depth > max_depth { + max_depth = ann.depth; + } + add_annotation_to_file(&mut output, file.clone(), ann.line_start, ann.as_start()); + for line in ann.line_start + 1..ann.line_end { + add_annotation_to_file(&mut output, file.clone(), line, ann.as_line()); + } + add_annotation_to_file(&mut output, file, ann.line_end, ann.as_end()); + } + for file_vec in output.iter_mut() { + file_vec.multiline_depth = max_depth; + } output } @@ -186,14 +244,20 @@ impl EmitterWriter { buffer: &mut StyledBuffer, file: Rc, line: &Line, - width_offset: usize) { + width_offset: usize, + multiline_depth: usize) { let source_string = file.get_line(line.line_index - 1) .unwrap_or(""); let line_offset = buffer.num_lines(); + let code_offset = if multiline_depth == 0 { + width_offset + } else { + width_offset + multiline_depth + 1 + }; // First create the source line we will highlight. - buffer.puts(line_offset, width_offset, &source_string, Style::Quotation); + buffer.puts(line_offset, code_offset, &source_string, Style::Quotation); buffer.puts(line_offset, 0, &(line.line_index.to_string()), @@ -201,14 +265,10 @@ impl EmitterWriter { draw_col_separator(buffer, line_offset, width_offset - 2); - if line.annotations.is_empty() { - return; - } - // We want to display like this: // // vec.push(vec.pop().unwrap()); - // --- ^^^ _ previous borrow ends here + // --- ^^^ - previous borrow ends here // | | // | error occurs here // previous borrow of `vec` occurs here @@ -227,42 +287,22 @@ impl EmitterWriter { // Sort the annotations by (start, end col) let mut annotations = line.annotations.clone(); annotations.sort(); + annotations.reverse(); - // Next, create the highlight line. 
- for annotation in &annotations { - for p in annotation.start_col..annotation.end_col { - if annotation.is_primary { - buffer.putc(line_offset + 1, - width_offset + p, - '^', - Style::UnderlinePrimary); - if !annotation.is_minimized { - buffer.set_style(line_offset, width_offset + p, Style::UnderlinePrimary); - } - } else { - buffer.putc(line_offset + 1, - width_offset + p, - '-', - Style::UnderlineSecondary); - if !annotation.is_minimized { - buffer.set_style(line_offset, width_offset + p, Style::UnderlineSecondary); - } - } - } - } - draw_col_separator(buffer, line_offset + 1, width_offset - 2); - - // Now we are going to write labels in. To start, we'll exclude - // the annotations with no labels. - let (labeled_annotations, unlabeled_annotations): (Vec<_>, _) = annotations.into_iter() - .partition(|a| a.label.is_some()); - - // If there are no annotations that need text, we're done. - if labeled_annotations.is_empty() { - return; - } - // Now add the text labels. We try, when possible, to stick the rightmost - // annotation at the end of the highlight line: + // First, figure out where each label will be positioned. + // + // In the case where you have the following annotations: + // + // vec.push(vec.pop().unwrap()); + // -------- - previous borrow ends here [C] + // || + // |this makes no sense [B] + // previous borrow of `vec` occurs here [A] + // + // `annotations_position` will hold [(2, A), (1, B), (0, C)]. + // + // We try, when possible, to stick the rightmost annotation at the end + // of the highlight line: // // vec.push(vec.pop().unwrap()); // --- --- - previous borrow ends here @@ -296,66 +336,251 @@ impl EmitterWriter { // the rightmost span overlaps with any other span, we should // use the "hang below" version, so we can at least make it // clear where the span *starts*. 
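Whether the rightmost annotation "overlaps with any other span" is a plain one-dimensional interval test over column ranges, and the same test drives the multiline depth assignment earlier in this file. A minimal standalone version of that predicate (an illustration only, not the emitter's own `overlaps`/`num_overlap` helpers):

```rust
// Two half-open column ranges [a_start, a_end) and [b_start, b_end) overlap
// iff each one starts before the other ends. `inclusive` widens the test by
// one column, useful when even adjacent spans must not share a line.
fn overlaps(a_start: usize, a_end: usize, b_start: usize, b_end: usize, inclusive: bool) -> bool {
    let extra = if inclusive { 1 } else { 0 };
    a_start < b_end + extra && b_start < a_end + extra
}

fn main() {
    assert!(overlaps(0, 3, 2, 5, false));  // the two spans share column 2
    assert!(!overlaps(0, 3, 3, 5, false)); // adjacent, no shared column
    assert!(overlaps(0, 3, 3, 5, true));   // adjacent counts when inclusive
}
```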
- let mut labeled_annotations = &labeled_annotations[..]; - match labeled_annotations.split_last().unwrap() { - (last, previous) => { - if previous.iter() - .chain(&unlabeled_annotations) - .all(|a| !overlaps(a, last)) { - // append the label afterwards; we keep it in a separate - // string - let highlight_label: String = format!(" {}", last.label.as_ref().unwrap()); - if last.is_primary { - buffer.append(line_offset + 1, &highlight_label, Style::LabelPrimary); - } else { - buffer.append(line_offset + 1, &highlight_label, Style::LabelSecondary); - } - labeled_annotations = previous; + let mut annotations_position = vec![]; + let mut line_len = 0; + let mut p = 0; + let mut ann_iter = annotations.iter().peekable(); + while let Some(annotation) = ann_iter.next() { + let is_line = if let AnnotationType::MultilineLine(_) = annotation.annotation_type { + true + } else { + false + }; + let peek = ann_iter.peek(); + if let Some(next) = peek { + let next_is_line = if let AnnotationType::MultilineLine(_) = next.annotation_type { + true + } else { + false + }; + + if overlaps(next, annotation) && !is_line && !next_is_line { + p += 1; } } + annotations_position.push((p, annotation)); + if let Some(next) = peek { + let next_is_line = if let AnnotationType::MultilineLine(_) = next.annotation_type { + true + } else { + false + }; + let l = if let Some(ref label) = next.label { + label.len() + 2 + } else { + 0 + }; + if (overlaps(next, annotation) || next.end_col + l > annotation.start_col) + && !is_line && !next_is_line + { + p += 1; + } + } + if line_len < p { + line_len = p; + } + } + if line_len != 0 { + line_len += 1; } - // If that's the last annotation, we're done - if labeled_annotations.is_empty() { + // If there are no annotations or the only annotations on this line are + // MultilineLine, then there's only code being shown, stop processing. + if line.annotations.is_empty() || line.annotations.iter() + .filter(|a| { + // Set the multiline annotation vertical lines to the left of + // the code in this line. + if let AnnotationType::MultilineLine(depth) = a.annotation_type { + buffer.putc(line_offset, + width_offset + depth - 1, + '|', + if a.is_primary { + Style::UnderlinePrimary + } else { + Style::UnderlineSecondary + }); + false + } else { + true + } + }).collect::>().len() == 0 + { return; } - for (index, annotation) in labeled_annotations.iter().enumerate() { - // Leave: - // - 1 extra line - // - One line for each thing that comes after - let comes_after = labeled_annotations.len() - index - 1; - let blank_lines = 3 + comes_after; + for pos in 0..line_len + 1 { + draw_col_separator(buffer, line_offset + pos + 1, width_offset - 2); + buffer.putc(line_offset + pos + 1, + width_offset - 2, + '|', + Style::LineNumber); + } - // For each blank line, draw a `|` at our column. The - // text ought to be long enough for this. - for index in 2..blank_lines { - if annotation.is_primary { - buffer.putc(line_offset + index, - width_offset + annotation.start_col, - '|', - Style::UnderlinePrimary); - } else { - buffer.putc(line_offset + index, - width_offset + annotation.start_col, - '|', - Style::UnderlineSecondary); - } - draw_col_separator(buffer, line_offset + index, width_offset - 2); - } - - if annotation.is_primary { - buffer.puts(line_offset + blank_lines, - width_offset + annotation.start_col, - annotation.label.as_ref().unwrap(), - Style::LabelPrimary); + // Write the horizontal lines for multiline annotations + // (only the first and last lines need this). 
+ // + // After this we will have: + // + // 2 | fn foo() { + // | __________ + // | + // | + // 3 | + // 4 | } + // | _ + for &(pos, annotation) in &annotations_position { + let style = if annotation.is_primary { + Style::UnderlinePrimary } else { - buffer.puts(line_offset + blank_lines, - width_offset + annotation.start_col, - annotation.label.as_ref().unwrap(), - Style::LabelSecondary); + Style::UnderlineSecondary + }; + let pos = pos + 1; + match annotation.annotation_type { + AnnotationType::MultilineStart(depth) | + AnnotationType::MultilineEnd(depth) => { + draw_range(buffer, + '_', + line_offset + pos, + width_offset + depth, + code_offset + annotation.start_col, + style); + } + _ => (), + } + } + + // Write the vertical lines for multiline spans and for labels that are + // on a different line as the underline. + // + // After this we will have: + // + // 2 | fn foo() { + // | __________ + // | | | + // | | + // 3 | | + // 4 | | } + // | |_ + for &(pos, annotation) in &annotations_position { + let style = if annotation.is_primary { + Style::UnderlinePrimary + } else { + Style::UnderlineSecondary + }; + let pos = pos + 1; + if pos > 1 { + for p in line_offset + 1..line_offset + pos + 1 { + buffer.putc(p, + code_offset + annotation.start_col, + '|', + style); + } + } + match annotation.annotation_type { + AnnotationType::MultilineStart(depth) => { + for p in line_offset + pos + 1..line_offset + line_len + 2 { + buffer.putc(p, + width_offset + depth - 1, + '|', + style); + } + } + AnnotationType::MultilineEnd(depth) => { + for p in line_offset..line_offset + pos + 1 { + buffer.putc(p, + width_offset + depth - 1, + '|', + style); + } + } + AnnotationType::MultilineLine(depth) => { + // the first line will have already be filled when we checked + // wether there were any annotations for this line. + for p in line_offset + 1..line_offset + line_len + 2 { + buffer.putc(p, + width_offset + depth - 1, + '|', + style); + } + } + _ => (), + } + } + + // Write the labels on the annotations that actually have a label. + // + // After this we will have: + // + // 2 | fn foo() { + // | __________ starting here... + // | | | + // | | something about `foo` + // 3 | | + // 4 | | } + // | |_ ...ending here: test + for &(pos, annotation) in &annotations_position { + let style = if annotation.is_primary { + Style::LabelPrimary + } else { + Style::LabelSecondary + }; + let (pos, col) = if pos == 0 { + (pos + 1, annotation.end_col + 1) + } else { + (pos + 2, annotation.start_col) + }; + if let Some(ref label) = annotation.label { + buffer.puts(line_offset + pos, + code_offset + col, + &label, + style); + } + } + + // Sort from biggest span to smallest span so that smaller spans are + // represented in the output: + // + // x | fn foo() + // | ^^^---^^ + // | | | + // | | something about `foo` + // | something about `fn foo()` + annotations_position.sort_by(|a, b| { + fn len(a: &Annotation) -> usize { + // Account for usize underflows + if a.end_col > a.start_col { + a.end_col - a.start_col + } else { + a.start_col - a.end_col + } + } + // Decreasing order + len(a.1).cmp(&len(b.1)).reverse() + }); + + // Write the underlines. + // + // After this we will have: + // + // 2 | fn foo() { + // | ____-_____^ starting here... 
+ // | | | + // | | something about `foo` + // 3 | | + // 4 | | } + // | |_^ ...ending here: test + for &(_, annotation) in &annotations_position { + let (underline, style) = if annotation.is_primary { + ('^', Style::UnderlinePrimary) + } else { + ('-', Style::UnderlineSecondary) + }; + for p in annotation.start_col..annotation.end_col { + buffer.putc(line_offset + 1, + code_offset + p, + underline, + style); } - draw_col_separator(buffer, line_offset + blank_lines, width_offset - 2); } } @@ -577,7 +802,8 @@ impl EmitterWriter { self.render_source_line(&mut buffer, annotated_file.file.clone(), &annotated_file.lines[line_idx], - 3 + max_line_num_len); + 3 + max_line_num_len, + annotated_file.multiline_depth); // check to see if we need to print out or elide lines that come between // this annotated line and the next one @@ -729,16 +955,38 @@ fn draw_col_separator(buffer: &mut StyledBuffer, line: usize, col: usize) { } fn draw_col_separator_no_space(buffer: &mut StyledBuffer, line: usize, col: usize) { - buffer.puts(line, col, "|", Style::LineNumber); + draw_col_separator_no_space_with_style(buffer, line, col, Style::LineNumber); +} + +fn draw_col_separator_no_space_with_style(buffer: &mut StyledBuffer, + line: usize, + col: usize, + style: Style) { + buffer.putc(line, col, '|', style); +} + +fn draw_range(buffer: &mut StyledBuffer, symbol: char, line: usize, + col_from: usize, col_to: usize, style: Style) { + for col in col_from..col_to { + buffer.putc(line, col, symbol, style); + } } fn draw_note_separator(buffer: &mut StyledBuffer, line: usize, col: usize) { buffer.puts(line, col, "= ", Style::LineNumber); } +fn num_overlap(a_start: usize, a_end: usize, b_start: usize, b_end:usize, inclusive: bool) -> bool { + let extra = if inclusive { + 1 + } else { + 0 + }; + (b_start..b_end + extra).contains(a_start) || + (a_start..a_end + extra).contains(b_start) +} fn overlaps(a1: &Annotation, a2: &Annotation) -> bool { - (a2.start_col..a2.end_col).contains(a1.start_col) || - (a1.start_col..a1.end_col).contains(a2.start_col) + num_overlap(a1.start_col, a1.end_col, a2.start_col, a2.end_col, false) } fn emit_to_destination(rendered_buffer: &Vec>, diff --git a/src/librustc_errors/lib.rs b/src/librustc_errors/lib.rs index badee66b83..09a0c7f9be 100644 --- a/src/librustc_errors/lib.rs +++ b/src/librustc_errors/lib.rs @@ -21,7 +21,6 @@ #![allow(unused_attributes)] #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(range_contains)] #![feature(libc)] #![feature(unicode)] @@ -32,7 +31,7 @@ extern crate term; extern crate log; #[macro_use] extern crate libc; -extern crate rustc_unicode; +extern crate std_unicode; extern crate serialize as rustc_serialize; // used by deriving extern crate syntax_pos; diff --git a/src/librustc_errors/snippet.rs b/src/librustc_errors/snippet.rs index abfb71c861..b8c1726443 100644 --- a/src/librustc_errors/snippet.rs +++ b/src/librustc_errors/snippet.rs @@ -41,6 +41,86 @@ pub struct Line { pub annotations: Vec, } + +#[derive(Clone, Debug, PartialOrd, Ord, PartialEq, Eq)] +pub struct MultilineAnnotation { + pub depth: usize, + pub line_start: usize, + pub line_end: usize, + pub start_col: usize, + pub end_col: usize, + pub is_primary: bool, + pub label: Option, +} + +impl MultilineAnnotation { + pub fn increase_depth(&mut self) { + self.depth += 1; + } + + pub fn as_start(&self) -> Annotation { + Annotation { + start_col: self.start_col, + end_col: self.start_col + 1, + is_primary: self.is_primary, + label: Some("starting 
here...".to_owned()), + annotation_type: AnnotationType::MultilineStart(self.depth) + } + } + + pub fn as_end(&self) -> Annotation { + Annotation { + start_col: self.end_col - 1, + end_col: self.end_col, + is_primary: self.is_primary, + label: match self.label { + Some(ref label) => Some(format!("...ending here: {}", label)), + None => Some("...ending here".to_owned()), + }, + annotation_type: AnnotationType::MultilineEnd(self.depth) + } + } + + pub fn as_line(&self) -> Annotation { + Annotation { + start_col: 0, + end_col: 0, + is_primary: self.is_primary, + label: None, + annotation_type: AnnotationType::MultilineLine(self.depth) + } + } +} + +#[derive(Clone, Debug, PartialOrd, Ord, PartialEq, Eq)] +pub enum AnnotationType { + /// Annotation under a single line of code + Singleline, + + /// Annotation under the first character of a multiline span + Minimized, + + /// Annotation enclosing the first and last character of a multiline span + Multiline(MultilineAnnotation), + + // The Multiline type above is replaced with the following three in order + // to reuse the current label drawing code. + // + // Each of these corresponds to one part of the following diagram: + // + // x | foo(1 + bar(x, + // | _________^ starting here... < MultilineStart + // x | | y), < MultilineLine + // | |______________^ ...ending here: label < MultilineEnd + // x | z); + /// Annotation marking the first character of a fully shown multiline span + MultilineStart(usize), + /// Annotation marking the last character of a fully shown multiline span + MultilineEnd(usize), + /// Line at the left enclosing the lines of a fully shown multiline span + MultilineLine(usize), +} + #[derive(Clone, Debug, PartialOrd, Ord, PartialEq, Eq)] pub struct Annotation { /// Start column, 0-based indexing -- counting *characters*, not @@ -55,11 +135,32 @@ pub struct Annotation { /// Is this annotation derived from primary span pub is_primary: bool, - /// Is this a large span minimized down to a smaller span - pub is_minimized: bool, - /// Optional label to display adjacent to the annotation. pub label: Option, + + /// Is this a single line, multiline or multiline span minimized down to a + /// smaller span. 
+ pub annotation_type: AnnotationType, +} + +impl Annotation { + pub fn is_minimized(&self) -> bool { + match self.annotation_type { + AnnotationType::Minimized => true, + _ => false, + } + } + + pub fn is_multiline(&self) -> bool { + match self.annotation_type { + AnnotationType::Multiline(_) | + AnnotationType::MultilineStart(_) | + AnnotationType::MultilineLine(_) | + AnnotationType::MultilineEnd(_) => true, + _ => false, + } + } + } #[derive(Debug)] diff --git a/src/librustc_incremental/assert_dep_graph.rs b/src/librustc_incremental/assert_dep_graph.rs index 28aab1fdd4..87e6b2befd 100644 --- a/src/librustc_incremental/assert_dep_graph.rs +++ b/src/librustc_incremental/assert_dep_graph.rs @@ -48,16 +48,15 @@ use rustc::dep_graph::{DepGraphQuery, DepNode}; use rustc::dep_graph::debug::{DepNodeFilter, EdgeFilter}; use rustc::hir::def_id::DefId; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::FnvHashSet; +use rustc_data_structures::fx::FxHashSet; use rustc_data_structures::graph::{Direction, INCOMING, OUTGOING, NodeIndex}; use rustc::hir; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use graphviz::IntoCow; use std::env; use std::fs::File; use std::io::Write; use syntax::ast; -use syntax::parse::token::InternedString; use syntax_pos::Span; use {ATTR_IF_THIS_CHANGED, ATTR_THEN_THIS_WOULD_NEED}; @@ -81,7 +80,7 @@ pub fn assert_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { if_this_changed: vec![], then_this_would_need: vec![] }; visitor.process_attrs(ast::CRATE_NODE_ID, &tcx.map.krate().attrs); - tcx.map.krate().visit_all_items(&mut visitor); + tcx.map.krate().visit_all_item_likes(&mut visitor); (visitor.if_this_changed, visitor.then_this_would_need) }; @@ -97,7 +96,7 @@ pub fn assert_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { } type Sources = Vec<(Span, DefId, DepNode)>; -type Targets = Vec<(Span, InternedString, ast::NodeId, DepNode)>; +type Targets = Vec<(Span, ast::Name, ast::NodeId, DepNode)>; struct IfThisChanged<'a, 'tcx:'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, @@ -106,7 +105,7 @@ struct IfThisChanged<'a, 'tcx:'a> { } impl<'a, 'tcx> IfThisChanged<'a, 'tcx> { - fn argument(&self, attr: &ast::Attribute) -> Option { + fn argument(&self, attr: &ast::Attribute) -> Option { let mut value = None; for list_item in attr.meta_item_list().unwrap_or_default() { match list_item.word() { @@ -127,8 +126,8 @@ impl<'a, 'tcx> IfThisChanged<'a, 'tcx> { let dep_node_interned = self.argument(attr); let dep_node = match dep_node_interned { None => DepNode::Hir(def_id), - Some(ref n) => { - match DepNode::from_label_string(&n[..], def_id) { + Some(n) => { + match DepNode::from_label_string(&n.as_str(), def_id) { Ok(n) => n, Err(()) => { self.tcx.sess.span_fatal( @@ -142,8 +141,8 @@ impl<'a, 'tcx> IfThisChanged<'a, 'tcx> { } else if attr.check_name(ATTR_THEN_THIS_WOULD_NEED) { let dep_node_interned = self.argument(attr); let dep_node = match dep_node_interned { - Some(ref n) => { - match DepNode::from_label_string(&n[..], def_id) { + Some(n) => { + match DepNode::from_label_string(&n.as_str(), def_id) { Ok(n) => n, Err(()) => { self.tcx.sess.span_fatal( @@ -159,7 +158,7 @@ impl<'a, 'tcx> IfThisChanged<'a, 'tcx> { } }; self.then_this_would_need.push((attr.span, - dep_node_interned.clone().unwrap(), + dep_node_interned.unwrap(), node_id, dep_node)); } @@ -167,10 +166,14 @@ impl<'a, 'tcx> IfThisChanged<'a, 'tcx> { } } -impl<'a, 'tcx> Visitor<'tcx> for IfThisChanged<'a, 'tcx> { +impl<'a, 'tcx> ItemLikeVisitor<'tcx> for IfThisChanged<'a, 'tcx> { fn 
visit_item(&mut self, item: &'tcx hir::Item) { self.process_attrs(item.id, &item.attrs); } + + fn visit_impl_item(&mut self, impl_item: &'tcx hir::ImplItem) { + self.process_attrs(impl_item.id, &impl_item.attrs); + } } fn check_paths<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, @@ -244,7 +247,7 @@ fn dump_graph(tcx: TyCtxt) { } } -pub struct GraphvizDepGraph<'q>(FnvHashSet<&'q DepNode>, +pub struct GraphvizDepGraph<'q>(FxHashSet<&'q DepNode>, Vec<(&'q DepNode, &'q DepNode)>); impl<'a, 'tcx, 'q> dot::GraphWalk<'a> for GraphvizDepGraph<'q> { @@ -288,7 +291,7 @@ impl<'a, 'tcx, 'q> dot::Labeller<'a> for GraphvizDepGraph<'q> { // filter) or the set of nodes whose labels contain all of those // substrings. fn node_set<'q>(query: &'q DepGraphQuery, filter: &DepNodeFilter) - -> Option>> + -> Option>> { debug!("node_set(filter={:?})", filter); @@ -300,9 +303,9 @@ fn node_set<'q>(query: &'q DepGraphQuery, filter: &DepNodeFilter) } fn filter_nodes<'q>(query: &'q DepGraphQuery, - sources: &Option>>, - targets: &Option>>) - -> FnvHashSet<&'q DepNode> + sources: &Option>>, + targets: &Option>>) + -> FxHashSet<&'q DepNode> { if let &Some(ref sources) = sources { if let &Some(ref targets) = targets { @@ -318,11 +321,11 @@ fn filter_nodes<'q>(query: &'q DepGraphQuery, } fn walk_nodes<'q>(query: &'q DepGraphQuery, - starts: &FnvHashSet<&'q DepNode>, + starts: &FxHashSet<&'q DepNode>, direction: Direction) - -> FnvHashSet<&'q DepNode> + -> FxHashSet<&'q DepNode> { - let mut set = FnvHashSet(); + let mut set = FxHashSet(); for &start in starts { debug!("walk_nodes: start={:?} outgoing?={:?}", start, direction == OUTGOING); if set.insert(start) { @@ -342,9 +345,9 @@ fn walk_nodes<'q>(query: &'q DepGraphQuery, } fn walk_between<'q>(query: &'q DepGraphQuery, - sources: &FnvHashSet<&'q DepNode>, - targets: &FnvHashSet<&'q DepNode>) - -> FnvHashSet<&'q DepNode> + sources: &FxHashSet<&'q DepNode>, + targets: &FxHashSet<&'q DepNode>) + -> FxHashSet<&'q DepNode> { // This is a bit tricky. We want to include a node only if it is: // (a) reachable from a source and (b) will reach a target. And we @@ -410,7 +413,7 @@ fn walk_between<'q>(query: &'q DepGraphQuery, } fn filter_edges<'q>(query: &'q DepGraphQuery, - nodes: &FnvHashSet<&'q DepNode>) + nodes: &FxHashSet<&'q DepNode>) -> Vec<(&'q DepNode, &'q DepNode)> { query.edges() diff --git a/src/librustc_incremental/calculate_svh/hasher.rs b/src/librustc_incremental/calculate_svh/hasher.rs deleted file mode 100644 index d7d9c231a9..0000000000 --- a/src/librustc_incremental/calculate_svh/hasher.rs +++ /dev/null @@ -1,88 +0,0 @@ -// Copyright 2016 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. 
- -use std::mem; -use std::hash::Hasher; -use rustc_data_structures::blake2b::Blake2bHasher; -use rustc::ty::util::ArchIndependentHasher; -use ich::Fingerprint; -use rustc_serialize::leb128::write_unsigned_leb128; - -#[derive(Debug)] -pub struct IchHasher { - state: ArchIndependentHasher, - leb128_helper: Vec, - bytes_hashed: u64, -} - -impl IchHasher { - pub fn new() -> IchHasher { - let hash_size = mem::size_of::(); - IchHasher { - state: ArchIndependentHasher::new(Blake2bHasher::new(hash_size, &[])), - leb128_helper: vec![], - bytes_hashed: 0 - } - } - - pub fn bytes_hashed(&self) -> u64 { - self.bytes_hashed - } - - pub fn finish(self) -> Fingerprint { - let mut fingerprint = Fingerprint::zero(); - fingerprint.0.copy_from_slice(self.state.into_inner().finalize()); - fingerprint - } - - #[inline] - fn write_uleb128(&mut self, value: u64) { - let len = write_unsigned_leb128(&mut self.leb128_helper, 0, value); - self.state.write(&self.leb128_helper[0..len]); - self.bytes_hashed += len as u64; - } -} - -// For the non-u8 integer cases we leb128 encode them first. Because small -// integers dominate, this significantly and cheaply reduces the number of -// bytes hashed, which is good because blake2b is expensive. -impl Hasher for IchHasher { - fn finish(&self) -> u64 { - bug!("Use other finish() implementation to get the full 128-bit hash."); - } - - #[inline] - fn write(&mut self, bytes: &[u8]) { - self.state.write(bytes); - self.bytes_hashed += bytes.len() as u64; - } - - // There is no need to leb128-encode u8 values. - - #[inline] - fn write_u16(&mut self, i: u16) { - self.write_uleb128(i as u64); - } - - #[inline] - fn write_u32(&mut self, i: u32) { - self.write_uleb128(i as u64); - } - - #[inline] - fn write_u64(&mut self, i: u64) { - self.write_uleb128(i); - } - - #[inline] - fn write_usize(&mut self, i: usize) { - self.write_uleb128(i as u64); - } -} diff --git a/src/librustc_incremental/calculate_svh/mod.rs b/src/librustc_incremental/calculate_svh/mod.rs index 3b0b37bb01..eb31be4a8a 100644 --- a/src/librustc_incremental/calculate_svh/mod.rs +++ b/src/librustc_incremental/calculate_svh/mod.rs @@ -34,38 +34,40 @@ use rustc::dep_graph::DepNode; use rustc::hir; use rustc::hir::def_id::{CRATE_DEF_INDEX, DefId}; use rustc::hir::intravisit as visit; +use rustc::hir::intravisit::{Visitor, NestedVisitorMap}; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::stable_hasher::StableHasher; +use ich::Fingerprint; +use rustc_data_structures::fx::FxHashMap; use rustc::util::common::record_time; use rustc::session::config::DebugInfoLevel::NoDebugInfo; use self::def_path_hash::DefPathHashes; use self::svh_visitor::StrictVersionHashVisitor; use self::caching_codemap_view::CachingCodemapView; -use self::hasher::IchHasher; -use ich::Fingerprint; mod def_path_hash; mod svh_visitor; mod caching_codemap_view; -pub mod hasher; + +pub type IchHasher = StableHasher; pub struct IncrementalHashesMap { - hashes: FnvHashMap, Fingerprint>, + hashes: FxHashMap, Fingerprint>, // These are the metadata hashes for the current crate as they were stored // during the last compilation session. They are only loaded if // -Z query-dep-graph was specified and are needed for auto-tests using // the #[rustc_metadata_dirty] and #[rustc_metadata_clean] attributes to // check whether some metadata hash has changed in between two revisions. 
- pub prev_metadata_hashes: RefCell>, + pub prev_metadata_hashes: RefCell>, } impl IncrementalHashesMap { pub fn new() -> IncrementalHashesMap { IncrementalHashesMap { - hashes: FnvHashMap(), - prev_metadata_hashes: RefCell::new(FnvHashMap()), + hashes: FxHashMap(), + prev_metadata_hashes: RefCell::new(FxHashMap()), } } @@ -87,7 +89,12 @@ impl<'a> ::std::ops::Index<&'a DepNode> for IncrementalHashesMap { type Output = Fingerprint; fn index(&self, index: &'a DepNode) -> &Fingerprint { - &self.hashes[index] + match self.hashes.get(index) { + Some(fingerprint) => fingerprint, + None => { + bug!("Could not find ICH for {:?}", index); + } + } } } @@ -105,9 +112,15 @@ pub fn compute_incremental_hashes_map<'a, 'tcx: 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>) hash_spans: hash_spans, }; record_time(&tcx.sess.perf_stats.incr_comp_hashes_time, || { - visitor.calculate_def_id(DefId::local(CRATE_DEF_INDEX), - |v| visit::walk_crate(v, krate)); - krate.visit_all_items(&mut visitor); + visitor.calculate_def_id(DefId::local(CRATE_DEF_INDEX), |v| { + v.hash_crate_root_module(krate); + }); + krate.visit_all_item_likes(&mut visitor.as_deep_visitor()); + + for macro_def in krate.exported_macros.iter() { + visitor.calculate_node_id(macro_def.id, + |v| v.visit_macro_def(macro_def)); + } }); tcx.sess.perf_stats.incr_comp_hashes_count.set(visitor.hashes.len() as u64); @@ -137,19 +150,30 @@ impl<'a, 'tcx> HashItemsVisitor<'a, 'tcx> { { assert!(def_id.is_local()); debug!("HashItemsVisitor::calculate(def_id={:?})", def_id); + self.calculate_def_hash(DepNode::Hir(def_id), false, &mut walk_op); + self.calculate_def_hash(DepNode::HirBody(def_id), true, &mut walk_op); + } + + fn calculate_def_hash(&mut self, + dep_node: DepNode, + hash_bodies: bool, + walk_op: &mut W) + where W: for<'v> FnMut(&mut StrictVersionHashVisitor<'v, 'a, 'tcx>) + { let mut state = IchHasher::new(); walk_op(&mut StrictVersionHashVisitor::new(&mut state, self.tcx, &mut self.def_path_hashes, &mut self.codemap, - self.hash_spans)); + self.hash_spans, + hash_bodies)); let bytes_hashed = state.bytes_hashed(); let item_hash = state.finish(); - self.hashes.insert(DepNode::Hir(def_id), item_hash); - debug!("calculate_item_hash: def_id={:?} hash={:?}", def_id, item_hash); + debug!("calculate_def_hash: dep_node={:?} hash={:?}", dep_node, item_hash); + self.hashes.insert(dep_node, item_hash); let bytes_hashed = self.tcx.sess.perf_stats.incr_comp_bytes_hashed.get() + - bytes_hashed; + bytes_hashed; self.tcx.sess.perf_stats.incr_comp_bytes_hashed.set(bytes_hashed); } @@ -160,8 +184,8 @@ impl<'a, 'tcx> HashItemsVisitor<'a, 'tcx> { let crate_disambiguator = self.tcx.sess.local_crate_disambiguator(); "crate_disambiguator".hash(&mut crate_state); - crate_disambiguator.len().hash(&mut crate_state); - crate_disambiguator.hash(&mut crate_state); + crate_disambiguator.as_str().len().hash(&mut crate_state); + crate_disambiguator.as_str().hash(&mut crate_state); // add each item (in some deterministic order) to the overall // crate hash. 
@@ -188,7 +212,8 @@ impl<'a, 'tcx> HashItemsVisitor<'a, 'tcx> { self.tcx, &mut self.def_path_hashes, &mut self.codemap, - self.hash_spans); + self.hash_spans, + false); visitor.hash_attributes(&krate.attrs); } @@ -199,15 +224,23 @@ impl<'a, 'tcx> HashItemsVisitor<'a, 'tcx> { } -impl<'a, 'tcx> visit::Visitor<'tcx> for HashItemsVisitor<'a, 'tcx> { +impl<'a, 'tcx> Visitor<'tcx> for HashItemsVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::None + } + fn visit_item(&mut self, item: &'tcx hir::Item) { self.calculate_node_id(item.id, |v| v.visit_item(item)); visit::walk_item(self, item); } + fn visit_impl_item(&mut self, impl_item: &'tcx hir::ImplItem) { + self.calculate_node_id(impl_item.id, |v| v.visit_impl_item(impl_item)); + visit::walk_impl_item(self, impl_item); + } + fn visit_foreign_item(&mut self, item: &'tcx hir::ForeignItem) { self.calculate_node_id(item.id, |v| v.visit_foreign_item(item)); visit::walk_foreign_item(self, item); } } - diff --git a/src/librustc_incremental/calculate_svh/svh_visitor.rs b/src/librustc_incremental/calculate_svh/svh_visitor.rs index 2358d60d0d..ec44e19df1 100644 --- a/src/librustc_incremental/calculate_svh/svh_visitor.rs +++ b/src/librustc_incremental/calculate_svh/svh_visitor.rs @@ -8,11 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// FIXME (#14132): Even this SVH computation still has implementation -// artifacts: namely, the order of item declaration will affect the -// hash computation, but for many kinds of items the order of -// declaration should be irrelevant to the ABI. - use self::SawExprComponent::*; use self::SawAbiComponent::*; use self::SawItemComponent::*; @@ -23,10 +18,12 @@ use syntax::abi::Abi; use syntax::ast::{self, Name, NodeId}; use syntax::attr; use syntax::parse::token; +use syntax::symbol::{Symbol, InternedString}; use syntax_pos::{Span, NO_EXPANSION, COMMAND_LINE_EXPN, BytePos}; +use syntax::tokenstream; use rustc::hir; use rustc::hir::*; -use rustc::hir::def::{Def, PathResolution}; +use rustc::hir::def::Def; use rustc::hir::def_id::DefId; use rustc::hir::intravisit as visit; use rustc::ty::TyCtxt; @@ -35,7 +32,7 @@ use std::hash::Hash; use super::def_path_hash::DefPathHashes; use super::caching_codemap_view::CachingCodemapView; -use super::hasher::IchHasher; +use super::IchHasher; const IGNORED_ATTRIBUTES: &'static [&'static str] = &[ "cfg", @@ -55,6 +52,7 @@ pub struct StrictVersionHashVisitor<'a, 'hash: 'a, 'tcx: 'hash> { hash_spans: bool, codemap: &'a mut CachingCodemapView<'tcx>, overflow_checks_enabled: bool, + hash_bodies: bool, } impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { @@ -62,7 +60,8 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { tcx: TyCtxt<'hash, 'tcx, 'tcx>, def_path_hashes: &'a mut DefPathHashes<'hash, 'tcx>, codemap: &'a mut CachingCodemapView<'tcx>, - hash_spans: bool) + hash_spans: bool, + hash_bodies: bool) -> Self { let check_overflow = tcx.sess.opts.debugging_opts.force_overflow_checks .unwrap_or(tcx.sess.opts.debug_assertions); @@ -74,6 +73,7 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { hash_spans: hash_spans, codemap: codemap, overflow_checks_enabled: check_overflow, + hash_bodies: hash_bodies, } } @@ -173,8 +173,8 @@ enum SawAbiComponent<'a> { // FIXME (#14132): should we include (some function of) // ident.ctxt as well? 
- SawIdent(token::InternedString), - SawStructDef(token::InternedString), + SawIdent(InternedString), + SawStructDef(InternedString), SawLifetime, SawLifetimeDef(usize), @@ -188,10 +188,10 @@ enum SawAbiComponent<'a> { SawImplItem(SawTraitOrImplItemComponent), SawStructField, SawVariant, + SawQPath, SawPath(bool), SawPathSegment, SawPathParameters, - SawPathListItem, SawBlock, SawPat(SawPatComponent), SawLocal, @@ -199,6 +199,8 @@ enum SawAbiComponent<'a> { SawExpr(SawExprComponent<'a>), SawStmt, SawVis, + SawAssociatedItemKind(hir::AssociatedItemKind), + SawDefaultness(hir::Defaultness), SawWherePredicate, SawTyParamBound, SawPolyTraitRef, @@ -234,11 +236,11 @@ enum SawAbiComponent<'a> { #[derive(Hash)] enum SawExprComponent<'a> { - SawExprLoop(Option), - SawExprField(token::InternedString), + SawExprLoop(Option), + SawExprField(InternedString), SawExprTupField(usize), - SawExprBreak(Option), - SawExprAgain(Option), + SawExprBreak(Option), + SawExprAgain(Option), SawExprBox, SawExprArray, @@ -248,6 +250,8 @@ enum SawExprComponent<'a> { SawExprBinary(hir::BinOp_), SawExprUnary(hir::UnOp), SawExprLit(ast::LitKind), + SawExprLitStr(InternedString, ast::StrStyle), + SawExprLitFloat(InternedString, Option), SawExprCast, SawExprType, SawExprIf, @@ -258,7 +262,7 @@ enum SawExprComponent<'a> { SawExprAssign, SawExprAssignOp(hir::BinOp_), SawExprIndex, - SawExprPath(Option), + SawExprPath, SawExprAddrOf(hir::Mutability), SawExprRet, SawExprInlineAsm(&'a hir::InlineAsm), @@ -316,12 +320,12 @@ fn saw_expr<'a>(node: &'a Expr_, ExprUnary(op, _) => { (SawExprUnary(op), unop_can_panic_at_runtime(op)) } - ExprLit(ref lit) => (SawExprLit(lit.node.clone()), false), + ExprLit(ref lit) => (saw_lit(lit), false), ExprCast(..) => (SawExprCast, false), ExprType(..) => (SawExprType, false), ExprIf(..) => (SawExprIf, false), ExprWhile(..) => (SawExprWhile, false), - ExprLoop(_, id) => (SawExprLoop(id.map(|id| id.node.as_str())), false), + ExprLoop(_, id, _) => (SawExprLoop(id.map(|id| id.node.as_str())), false), ExprMatch(..) => (SawExprMatch, false), ExprClosure(cc, _, _, _) => (SawExprClosure(cc), false), ExprBlock(..) => (SawExprBlock, false), @@ -332,10 +336,10 @@ fn saw_expr<'a>(node: &'a Expr_, ExprField(_, name) => (SawExprField(name.node.as_str()), false), ExprTupField(_, id) => (SawExprTupField(id.node), false), ExprIndex(..) => (SawExprIndex, true), - ExprPath(ref qself, _) => (SawExprPath(qself.as_ref().map(|q| q.position)), false), + ExprPath(_) => (SawExprPath, false), ExprAddrOf(m, _) => (SawExprAddrOf(m), false), - ExprBreak(id) => (SawExprBreak(id.map(|id| id.node.as_str())), false), - ExprAgain(id) => (SawExprAgain(id.map(|id| id.node.as_str())), false), + ExprBreak(label, _) => (SawExprBreak(label.map(|l| l.name.as_str())), false), + ExprAgain(label) => (SawExprAgain(label.map(|l| l.name.as_str())), false), ExprRet(..) => (SawExprRet, false), ExprInlineAsm(ref a,..) => (SawExprInlineAsm(a), false), ExprStruct(..) 
=> (SawExprStruct, false), @@ -343,10 +347,19 @@ fn saw_expr<'a>(node: &'a Expr_, } } +fn saw_lit(lit: &ast::Lit) -> SawExprComponent<'static> { + match lit.node { + ast::LitKind::Str(s, style) => SawExprLitStr(s.as_str(), style), + ast::LitKind::Float(s, ty) => SawExprLitFloat(s.as_str(), Some(ty)), + ast::LitKind::FloatUnsuffixed(s) => SawExprLitFloat(s.as_str(), None), + ref node @ _ => SawExprLit(node.clone()), + } +} + #[derive(Hash)] enum SawItemComponent { SawItemExternCrate, - SawItemUse, + SawItemUse(UseKind), SawItemStatic(Mutability), SawItemConst, SawItemFn(Unsafety, Constness, Abi), @@ -364,7 +377,7 @@ enum SawItemComponent { fn saw_item(node: &Item_) -> SawItemComponent { match *node { ItemExternCrate(..) => SawItemExternCrate, - ItemUse(..) => SawItemUse, + ItemUse(_, kind) => SawItemUse(kind), ItemStatic(_, mutability, _) => SawItemStatic(mutability), ItemConst(..) =>SawItemConst, ItemFn(_, unsafety, constness, abi, _, _) => SawItemFn(unsafety, constness, abi), @@ -401,7 +414,7 @@ fn saw_pat(node: &PatKind) -> SawPatComponent { PatKind::Binding(bindingmode, ..) => SawPatBinding(bindingmode), PatKind::Struct(..) => SawPatStruct, PatKind::TupleStruct(..) => SawPatTupleStruct, - PatKind::Path(..) => SawPatPath, + PatKind::Path(_) => SawPatPath, PatKind::Tuple(..) => SawPatTuple, PatKind::Box(..) => SawPatBox, PatKind::Ref(_, mutability) => SawPatRef(mutability), @@ -437,7 +450,7 @@ fn saw_ty(node: &Ty_) -> SawTyComponent { TyBareFn(ref barefnty) => SawTyBareFn(barefnty.unsafety, barefnty.abi), TyNever => SawTyNever, TyTup(..) => SawTyTup, - TyPath(..) => SawTyPath, + TyPath(_) => SawTyPath, TyObjectSum(..) => SawTyObjectSum, TyPolyTraitRef(..) => SawTyPolyTraitRef, TyImplTrait(..) => SawTyImplTrait, @@ -449,15 +462,16 @@ fn saw_ty(node: &Ty_) -> SawTyComponent { #[derive(Hash)] enum SawTraitOrImplItemComponent { SawTraitOrImplItemConst, - SawTraitOrImplItemMethod(Unsafety, Constness, Abi), + // The boolean signifies whether a body is present + SawTraitOrImplItemMethod(Unsafety, Constness, Abi, bool), SawTraitOrImplItemType } fn saw_trait_item(ti: &TraitItem_) -> SawTraitOrImplItemComponent { match *ti { ConstTraitItem(..) => SawTraitOrImplItemConst, - MethodTraitItem(ref sig, _) => - SawTraitOrImplItemMethod(sig.unsafety, sig.constness, sig.abi), + MethodTraitItem(ref sig, ref body) => + SawTraitOrImplItemMethod(sig.unsafety, sig.constness, sig.abi, body.is_some()), TypeTraitItem(..) => SawTraitOrImplItemType } } @@ -466,7 +480,7 @@ fn saw_impl_item(ii: &ImplItemKind) -> SawTraitOrImplItemComponent { match *ii { ImplItemKind::Const(..) => SawTraitOrImplItemConst, ImplItemKind::Method(ref sig, _) => - SawTraitOrImplItemMethod(sig.unsafety, sig.constness, sig.abi), + SawTraitOrImplItemMethod(sig.unsafety, sig.constness, sig.abi, true), ImplItemKind::Type(..) => SawTraitOrImplItemType } } @@ -499,8 +513,12 @@ macro_rules! hash_span { } impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'hash, 'tcx> { - fn visit_nested_item(&mut self, _: ItemId) { - // Each item is hashed independently; ignore nested items. 
+ fn nested_visit_map<'this>(&'this mut self) -> visit::NestedVisitorMap<'this, 'tcx> {
+ if self.hash_bodies {
+ visit::NestedVisitorMap::OnlyBodies(&self.tcx.map)
+ } else {
+ visit::NestedVisitorMap::None
+ }
 }

 fn visit_variant_data(&mut self,
@@ -601,9 +619,11 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 visit::walk_item(self, i)
 }

- fn visit_mod(&mut self, m: &'tcx Mod, _s: Span, n: NodeId) {
+ fn visit_mod(&mut self, m: &'tcx Mod, span: Span, n: NodeId) {
 debug!("visit_mod: st={:?}", self.st);
- SawMod.hash(self.st); visit::walk_mod(self, m, n)
+ SawMod.hash(self.st);
+ hash_span!(self, span);
+ visit::walk_mod(self, m, n)
 }

 fn visit_ty(&mut self, t: &'tcx Ty) {
@@ -649,6 +669,13 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 visit::walk_struct_field(self, s)
 }

+ fn visit_qpath(&mut self, qpath: &'tcx QPath, id: NodeId, span: Span) {
+ debug!("visit_qpath: st={:?}", self.st);
+ SawQPath.hash(self.st);
+ self.hash_discriminant(qpath);
+ visit::walk_qpath(self, qpath, id, span)
+ }
+
 fn visit_path(&mut self, path: &'tcx Path, _: ast::NodeId) {
 debug!("visit_path: st={:?}", self.st);
 SawPath(path.global).hash(self.st);
@@ -656,6 +683,10 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 visit::walk_path(self, path)
 }

+ fn visit_def_mention(&mut self, def: Def) {
+ self.hash_def(def);
+ }
+
 fn visit_block(&mut self, b: &'tcx Block) {
 debug!("visit_block: st={:?}", self.st);
 SawBlock.hash(self.st);
@@ -697,6 +728,18 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 visit::walk_vis(self, v)
 }

+ fn visit_associated_item_kind(&mut self, kind: &'tcx AssociatedItemKind) {
+ debug!("visit_associated_item_kind: st={:?}", self.st);
+ SawAssociatedItemKind(*kind).hash(self.st);
+ visit::walk_associated_item_kind(self, kind);
+ }
+
+ fn visit_defaultness(&mut self, defaultness: &'tcx Defaultness) {
+ debug!("visit_defaultness: st={:?}", self.st);
+ SawDefaultness(*defaultness).hash(self.st);
+ visit::walk_defaultness(self, defaultness);
+ }
+
 fn visit_where_predicate(&mut self, predicate: &'tcx WherePredicate) {
 debug!("visit_where_predicate: st={:?}", self.st);
 SawWherePredicate.hash(self.st);
@@ -721,14 +764,6 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 visit::walk_poly_trait_ref(self, t, m)
 }

- fn visit_path_list_item(&mut self, prefix: &'tcx Path, item: &'tcx PathListItem) {
- debug!("visit_path_list_item: st={:?}", self.st);
- SawPathListItem.hash(self.st);
- self.hash_discriminant(&item.node);
- hash_span!(self, item.span);
- visit::walk_path_list_item(self, prefix, item)
- }
-
 fn visit_path_segment(&mut self, path_span: Span, path_segment: &'tcx PathSegment) {
 debug!("visit_path_segment: st={:?}", self.st);
 SawPathSegment.hash(self.st);
@@ -759,9 +794,10 @@ impl<'a, 'hash, 'tcx> visit::Visitor<'tcx> for StrictVersionHashVisitor<'a, 'has
 debug!("visit_macro_def: st={:?}", self.st);
 SawMacroDef.hash(self.st);
 hash_attrs!(self, &macro_def.attrs);
+ for tt in &macro_def.body {
+ self.hash_token_tree(tt);
+ }
 visit::walk_macro_def(self, macro_def)
- // FIXME(mw): We should hash the body of the macro too but we don't
- // have a stable way of doing so yet.
 }
 }

@@ -781,11 +817,6 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> {
 // or not an entry was present (we are already hashing what
 // variant it is above when we visit the HIR).
- if let Some(def) = self.tcx.def_map.borrow().get(&id) { - debug!("hash_resolve: id={:?} def={:?} st={:?}", id, def, self.st); - self.hash_partial_def(def); - } - if let Some(traits) = self.tcx.trait_map.get(&id) { debug!("hash_resolve: id={:?} traits={:?} st={:?}", id, traits, self.st); traits.len().hash(self.st); @@ -807,11 +838,6 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { self.compute_def_id_hash(def_id).hash(self.st); } - fn hash_partial_def(&mut self, def: &PathResolution) { - self.hash_def(def.base_def); - def.depth.hash(self.st); - } - fn hash_def(&mut self, def: Def) { match def { // Crucial point: for all of these variants, the variant + @@ -834,7 +860,8 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { Def::Const(..) | Def::AssociatedConst(..) | Def::Local(..) | - Def::Upvar(..) => { + Def::Upvar(..) | + Def::Macro(..) => { DefHash::SawDefId.hash(self.st); self.hash_def_id(def.def_id()); } @@ -866,22 +893,16 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { // ignoring span information, it doesn't matter here self.hash_discriminant(&meta_item.node); + meta_item.name.as_str().len().hash(self.st); + meta_item.name.as_str().hash(self.st); + match meta_item.node { - ast::MetaItemKind::Word(ref s) => { - s.len().hash(self.st); - s.hash(self.st); - } - ast::MetaItemKind::NameValue(ref s, ref lit) => { - s.len().hash(self.st); - s.hash(self.st); - lit.node.hash(self.st); - } - ast::MetaItemKind::List(ref s, ref items) => { - s.len().hash(self.st); - s.hash(self.st); + ast::MetaItemKind::Word => {} + ast::MetaItemKind::NameValue(ref lit) => saw_lit(lit).hash(self.st), + ast::MetaItemKind::List(ref items) => { // Sort subitems so the hash does not depend on their order let indices = self.indices_sorted_by(&items, |p| { - (p.name(), fnv::hash(&p.literal().map(|i| &i.node))) + (p.name().map(Symbol::as_str), fnv::hash(&p.literal().map(saw_lit))) }); items.len().hash(self.st); for (index, &item_index) in indices.iter().enumerate() { @@ -893,7 +914,7 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { self.hash_meta_item(meta_item); } ast::NestedMetaItemKind::Literal(ref lit) => { - lit.node.hash(self.st); + saw_lit(lit).hash(self.st); } } } @@ -906,11 +927,11 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { let indices = self.indices_sorted_by(attributes, |attr| attr.name()); for i in indices { - let attr = &attributes[i].node; + let attr = &attributes[i]; if !attr.is_sugared_doc && - !IGNORED_ATTRIBUTES.contains(&&*attr.value.name()) { + !IGNORED_ATTRIBUTES.contains(&&*attr.value.name().as_str()) { SawAttribute(attr.style).hash(self.st); - self.hash_meta_item(&*attr.value); + self.hash_meta_item(&attr.value); } } } @@ -930,4 +951,158 @@ impl<'a, 'hash, 'tcx> StrictVersionHashVisitor<'a, 'hash, 'tcx> { self.overflow_checks_enabled = true; } } + + fn hash_token_tree(&mut self, tt: &tokenstream::TokenTree) { + self.hash_discriminant(tt); + match *tt { + tokenstream::TokenTree::Token(span, ref token) => { + hash_span!(self, span); + self.hash_token(token, span); + } + tokenstream::TokenTree::Delimited(span, ref delimited) => { + hash_span!(self, span); + let tokenstream::Delimited { + ref delim, + open_span, + ref tts, + close_span, + } = **delimited; + + delim.hash(self.st); + hash_span!(self, open_span); + tts.len().hash(self.st); + for sub_tt in tts { + self.hash_token_tree(sub_tt); + } + hash_span!(self, close_span); + } + tokenstream::TokenTree::Sequence(span, ref sequence_repetition) => { 
+ hash_span!(self, span); + let tokenstream::SequenceRepetition { + ref tts, + ref separator, + op, + num_captures, + } = **sequence_repetition; + + tts.len().hash(self.st); + for sub_tt in tts { + self.hash_token_tree(sub_tt); + } + self.hash_discriminant(separator); + if let Some(ref separator) = *separator { + self.hash_token(separator, span); + } + op.hash(self.st); + num_captures.hash(self.st); + } + } + } + + fn hash_token(&mut self, + token: &token::Token, + error_reporting_span: Span) { + self.hash_discriminant(token); + match *token { + token::Token::Eq | + token::Token::Lt | + token::Token::Le | + token::Token::EqEq | + token::Token::Ne | + token::Token::Ge | + token::Token::Gt | + token::Token::AndAnd | + token::Token::OrOr | + token::Token::Not | + token::Token::Tilde | + token::Token::At | + token::Token::Dot | + token::Token::DotDot | + token::Token::DotDotDot | + token::Token::Comma | + token::Token::Semi | + token::Token::Colon | + token::Token::ModSep | + token::Token::RArrow | + token::Token::LArrow | + token::Token::FatArrow | + token::Token::Pound | + token::Token::Dollar | + token::Token::Question | + token::Token::Underscore | + token::Token::Whitespace | + token::Token::Comment | + token::Token::Eof => {} + + token::Token::BinOp(bin_op_token) | + token::Token::BinOpEq(bin_op_token) => bin_op_token.hash(self.st), + + token::Token::OpenDelim(delim_token) | + token::Token::CloseDelim(delim_token) => delim_token.hash(self.st), + + token::Token::Literal(ref lit, ref opt_name) => { + self.hash_discriminant(lit); + match *lit { + token::Lit::Byte(val) | + token::Lit::Char(val) | + token::Lit::Integer(val) | + token::Lit::Float(val) | + token::Lit::Str_(val) | + token::Lit::ByteStr(val) => val.as_str().hash(self.st), + token::Lit::StrRaw(val, n) | + token::Lit::ByteStrRaw(val, n) => { + val.as_str().hash(self.st); + n.hash(self.st); + } + }; + opt_name.map(ast::Name::as_str).hash(self.st); + } + + token::Token::Ident(ident) | + token::Token::Lifetime(ident) | + token::Token::SubstNt(ident) => ident.name.as_str().hash(self.st), + token::Token::MatchNt(ident1, ident2) => { + ident1.name.as_str().hash(self.st); + ident2.name.as_str().hash(self.st); + } + + token::Token::Interpolated(ref non_terminal) => { + // FIXME(mw): This could be implemented properly. It's just a + // lot of work, since we would need to hash the AST + // in a stable way, in addition to the HIR. + // Since this is hardly used anywhere, just emit a + // warning for now. + if self.tcx.sess.opts.debugging_opts.incremental.is_some() { + let msg = format!("Quasi-quoting might make incremental \ + compilation very inefficient: {:?}", + non_terminal); + self.tcx.sess.span_warn(error_reporting_span, &msg[..]); + } + + non_terminal.hash(self.st); + } + + token::Token::DocComment(val) | + token::Token::Shebang(val) => val.as_str().hash(self.st), + } + } + + pub fn hash_crate_root_module(&mut self, krate: &'tcx Crate) { + let hir::Crate { + ref module, + ref attrs, + span, + + // These fields are handled separately: + exported_macros: _, + items: _, + impl_items: _, + exprs: _, + } = *krate; + + visit::Visitor::visit_mod(self, module, span, ast::CRATE_NODE_ID); + // Crate attributes are not copied over to the root `Mod`, so hash them + // explicitly here. 
+ hash_attrs!(self, attrs); + } } diff --git a/src/librustc_incremental/ich/fingerprint.rs b/src/librustc_incremental/ich/fingerprint.rs index 005ac3896c..d296d8293f 100644 --- a/src/librustc_incremental/ich/fingerprint.rs +++ b/src/librustc_incremental/ich/fingerprint.rs @@ -9,6 +9,8 @@ // except according to those terms. use rustc_serialize::{Encodable, Decodable, Encoder, Decoder}; +use rustc_data_structures::stable_hasher; +use rustc_data_structures::ToHex; const FINGERPRINT_LENGTH: usize = 16; @@ -44,6 +46,10 @@ impl Fingerprint { ((self.0[6] as u64) << 48) | ((self.0[7] as u64) << 56) } + + pub fn to_hex(&self) -> String { + self.0.to_hex() + } } impl Encodable for Fingerprint { @@ -79,3 +85,12 @@ impl ::std::fmt::Display for Fingerprint { Ok(()) } } + + +impl stable_hasher::StableHasherResult for Fingerprint { + fn finish(mut hasher: stable_hasher::StableHasher) -> Self { + let mut fingerprint = Fingerprint::zero(); + fingerprint.0.copy_from_slice(hasher.finalize()); + fingerprint + } +} diff --git a/src/librustc_incremental/lib.rs b/src/librustc_incremental/lib.rs index 4a5a6b9bea..ce73b14ef2 100644 --- a/src/librustc_incremental/lib.rs +++ b/src/librustc_incremental/lib.rs @@ -19,8 +19,6 @@ html_root_url = "https://doc.rust-lang.org/nightly/")] #![cfg_attr(not(stage0), deny(warnings))] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] -#![cfg_attr(stage0, feature(question_mark))] #![feature(rustc_private)] #![feature(staged_api)] #![feature(rand)] @@ -50,6 +48,7 @@ pub mod ich; pub use assert_dep_graph::assert_dep_graph; pub use calculate_svh::compute_incremental_hashes_map; pub use calculate_svh::IncrementalHashesMap; +pub use calculate_svh::IchHasher; pub use persist::load_dep_graph; pub use persist::save_dep_graph; pub use persist::save_trans_partition; diff --git a/src/librustc_incremental/persist/data.rs b/src/librustc_incremental/persist/data.rs index 734ffe6a94..f0e4f4f99e 100644 --- a/src/librustc_incremental/persist/data.rs +++ b/src/librustc_incremental/persist/data.rs @@ -13,7 +13,7 @@ use rustc::dep_graph::{DepNode, WorkProduct, WorkProductId}; use rustc::hir::def_id::DefIndex; use std::sync::Arc; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use ich::Fingerprint; use super::directory::DefPathIndex; @@ -106,7 +106,7 @@ pub struct SerializedMetadataHashes { /// is only populated if -Z query-dep-graph is specified. It will be /// empty otherwise. Importing crates are perfectly happy with just having /// the DefIndex. 
- pub index_map: FnvHashMap + pub index_map: FxHashMap } /// The hash for some metadata that (when saving) will be exported diff --git a/src/librustc_incremental/persist/directory.rs b/src/librustc_incremental/persist/directory.rs index d238121872..546feb2122 100644 --- a/src/librustc_incremental/persist/directory.rs +++ b/src/librustc_incremental/persist/directory.rs @@ -84,8 +84,8 @@ impl DefIdDirectory { assert_eq!(old_info.krate, krate); let old_name: &str = &old_info.name; let old_disambiguator: &str = &old_info.disambiguator; - let new_name: &str = &tcx.crate_name(krate); - let new_disambiguator: &str = &tcx.crate_disambiguator(krate); + let new_name: &str = &tcx.crate_name(krate).as_str(); + let new_disambiguator: &str = &tcx.crate_disambiguator(krate).as_str(); old_name == new_name && old_disambiguator == new_disambiguator } } @@ -99,8 +99,8 @@ impl DefIdDirectory { let new_krates: HashMap<_, _> = once(LOCAL_CRATE) .chain(tcx.sess.cstore.crates()) - .map(|krate| (make_key(&tcx.crate_name(krate), - &tcx.crate_disambiguator(krate)), krate)) + .map(|krate| (make_key(&tcx.crate_name(krate).as_str(), + &tcx.crate_disambiguator(krate).as_str()), krate)) .collect(); let ids = self.paths.iter() diff --git a/src/librustc_incremental/persist/dirty_clean.rs b/src/librustc_incremental/persist/dirty_clean.rs index 94478f6603..40873011a7 100644 --- a/src/librustc_incremental/persist/dirty_clean.rs +++ b/src/librustc_incremental/persist/dirty_clean.rs @@ -45,10 +45,9 @@ use super::load::DirtyNodes; use rustc::dep_graph::{DepGraphQuery, DepNode}; use rustc::hir; use rustc::hir::def_id::DefId; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use syntax::ast::{self, Attribute, NestedMetaItem}; -use rustc_data_structures::fnv::{FnvHashSet, FnvHashMap}; -use syntax::parse::token::InternedString; +use rustc_data_structures::fx::{FxHashSet, FxHashMap}; use syntax_pos::Span; use rustc::ty::TyCtxt; use ich::Fingerprint; @@ -67,14 +66,14 @@ pub fn check_dirty_clean_annotations<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } let _ignore = tcx.dep_graph.in_ignore(); - let dirty_inputs: FnvHashSet> = + let dirty_inputs: FxHashSet> = dirty_inputs.iter() .filter_map(|d| retraced.map(d)) .collect(); let query = tcx.dep_graph.query(); debug!("query-nodes: {:?}", query.nodes()); let krate = tcx.map.krate(); - krate.visit_all_items(&mut DirtyCleanVisitor { + krate.visit_all_item_likes(&mut DirtyCleanVisitor { tcx: tcx, query: &query, dirty_inputs: dirty_inputs, @@ -84,16 +83,15 @@ pub fn check_dirty_clean_annotations<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, pub struct DirtyCleanVisitor<'a, 'tcx:'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, query: &'a DepGraphQuery, - dirty_inputs: FnvHashSet>, + dirty_inputs: FxHashSet>, } impl<'a, 'tcx> DirtyCleanVisitor<'a, 'tcx> { - fn dep_node(&self, attr: &Attribute, def_id: DefId) -> DepNode { for item in attr.meta_item_list().unwrap_or(&[]) { if item.check_name(LABEL) { let value = expect_associated_value(self.tcx, item); - match DepNode::from_label_string(&value[..], def_id) { + match DepNode::from_label_string(&value.as_str(), def_id) { Ok(def_id) => return def_id, Err(()) => { self.tcx.sess.span_fatal( @@ -116,7 +114,8 @@ impl<'a, 'tcx> DirtyCleanVisitor<'a, 'tcx> { match dep_node { DepNode::Krate | - DepNode::Hir(_) => { + DepNode::Hir(_) | + DepNode::HirBody(_) => { // HIR nodes are inputs, so if we are asserting that the HIR node is // dirty, we check the dirty input set. 
if !self.dirty_inputs.contains(&dep_node) { @@ -145,7 +144,8 @@ impl<'a, 'tcx> DirtyCleanVisitor<'a, 'tcx> { match dep_node { DepNode::Krate | - DepNode::Hir(_) => { + DepNode::Hir(_) | + DepNode::HirBody(_) => { // For HIR nodes, check the inputs. if self.dirty_inputs.contains(&dep_node) { let dep_node_str = self.dep_node_str(&dep_node); @@ -169,7 +169,7 @@ impl<'a, 'tcx> DirtyCleanVisitor<'a, 'tcx> { } } -impl<'a, 'tcx> Visitor<'tcx> for DirtyCleanVisitor<'a, 'tcx> { +impl<'a, 'tcx> ItemLikeVisitor<'tcx> for DirtyCleanVisitor<'a, 'tcx> { fn visit_item(&mut self, item: &'tcx hir::Item) { let def_id = self.tcx.map.local_def_id(item.id); for attr in self.tcx.get_attrs(def_id).iter() { @@ -184,18 +184,21 @@ impl<'a, 'tcx> Visitor<'tcx> for DirtyCleanVisitor<'a, 'tcx> { } } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } pub fn check_dirty_clean_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - prev_metadata_hashes: &FnvHashMap, - current_metadata_hashes: &FnvHashMap) { + prev_metadata_hashes: &FxHashMap, + current_metadata_hashes: &FxHashMap) { if !tcx.sess.opts.debugging_opts.query_dep_graph { return; } tcx.dep_graph.with_ignore(||{ let krate = tcx.map.krate(); - krate.visit_all_items(&mut DirtyCleanMetadataVisitor { + krate.visit_all_item_likes(&mut DirtyCleanMetadataVisitor { tcx: tcx, prev_metadata_hashes: prev_metadata_hashes, current_metadata_hashes: current_metadata_hashes, @@ -205,11 +208,11 @@ pub fn check_dirty_clean_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, pub struct DirtyCleanMetadataVisitor<'a, 'tcx:'a, 'm> { tcx: TyCtxt<'a, 'tcx, 'tcx>, - prev_metadata_hashes: &'m FnvHashMap, - current_metadata_hashes: &'m FnvHashMap, + prev_metadata_hashes: &'m FxHashMap, + current_metadata_hashes: &'m FxHashMap, } -impl<'a, 'tcx, 'm> Visitor<'tcx> for DirtyCleanMetadataVisitor<'a, 'tcx, 'm> { +impl<'a, 'tcx, 'm> ItemLikeVisitor<'tcx> for DirtyCleanMetadataVisitor<'a, 'tcx, 'm> { fn visit_item(&mut self, item: &'tcx hir::Item) { let def_id = self.tcx.map.local_def_id(item.id); @@ -225,6 +228,9 @@ impl<'a, 'tcx, 'm> Visitor<'tcx> for DirtyCleanMetadataVisitor<'a, 'tcx, 'm> { } } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } impl<'a, 'tcx, 'm> DirtyCleanMetadataVisitor<'a, 'tcx, 'm> { @@ -270,13 +276,7 @@ fn check_config(tcx: TyCtxt, attr: &ast::Attribute) -> bool { if item.check_name(CFG) { let value = expect_associated_value(tcx, item); debug!("check_config: searching for cfg {:?}", value); - for cfg in &config[..] 
{ - if cfg.check_name(&value[..]) { - debug!("check_config: matched {:?}", cfg); - return true; - } - } - return false; + return config.contains(&(value, None)); } } @@ -285,7 +285,7 @@ fn check_config(tcx: TyCtxt, attr: &ast::Attribute) -> bool { &format!("no cfg attribute")); } -fn expect_associated_value(tcx: TyCtxt, item: &NestedMetaItem) -> InternedString { +fn expect_associated_value(tcx: TyCtxt, item: &NestedMetaItem) -> ast::Name { if let Some(value) = item.value_str() { value } else { diff --git a/src/librustc_incremental/persist/file_format.rs b/src/librustc_incremental/persist/file_format.rs index 7c2b69e762..b67caa6750 100644 --- a/src/librustc_incremental/persist/file_format.rs +++ b/src/librustc_incremental/persist/file_format.rs @@ -24,6 +24,7 @@ use std::path::Path; use std::fs::File; use std::env; +use rustc::session::Session; use rustc::session::config::nightly_options; /// The first few bytes of files generated by incremental compilation @@ -59,7 +60,7 @@ pub fn write_file_header(stream: &mut W) -> io::Result<()> { /// incompatible version of the compiler. /// - Returns `Err(..)` if some kind of IO error occurred while reading the /// file. -pub fn read_file(path: &Path) -> io::Result>> { +pub fn read_file(sess: &Session, path: &Path) -> io::Result>> { if !path.exists() { return Ok(None); } @@ -72,6 +73,7 @@ pub fn read_file(path: &Path) -> io::Result>> { let mut file_magic = [0u8; 4]; file.read_exact(&mut file_magic)?; if file_magic != FILE_MAGIC { + report_format_mismatch(sess, path, "Wrong FILE_MAGIC"); return Ok(None) } } @@ -85,6 +87,7 @@ pub fn read_file(path: &Path) -> io::Result>> { ((header_format_version[1] as u16) << 8); if header_format_version != HEADER_FORMAT_VERSION { + report_format_mismatch(sess, path, "Wrong HEADER_FORMAT_VERSION"); return Ok(None) } } @@ -99,6 +102,7 @@ pub fn read_file(path: &Path) -> io::Result>> { file.read_exact(&mut buffer[..])?; if &buffer[..] != rustc_version().as_bytes() { + report_format_mismatch(sess, path, "Different compiler version"); return Ok(None); } } @@ -109,6 +113,16 @@ pub fn read_file(path: &Path) -> io::Result>> { Ok(Some(data)) } +fn report_format_mismatch(sess: &Session, file: &Path, message: &str) { + debug!("read_file: {}", message); + + if sess.opts.debugging_opts.incremental_info { + println!("incremental: ignoring cache artifact `{}`: {}", + file.file_name().unwrap().to_string_lossy(), + message); + } +} + fn rustc_version() -> String { if nightly_options::is_nightly_build() { if let Some(val) = env::var_os("RUSTC_FORCE_INCR_COMP_ARTIFACT_HEADER") { diff --git a/src/librustc_incremental/persist/fs.rs b/src/librustc_incremental/persist/fs.rs index ff7c3d0512..2ad37e98c7 100644 --- a/src/librustc_incremental/persist/fs.rs +++ b/src/librustc_incremental/persist/fs.rs @@ -119,8 +119,8 @@ use rustc::hir::svh::Svh; use rustc::session::Session; use rustc::ty::TyCtxt; use rustc::util::fs as fs_util; -use rustc_data_structures::flock; -use rustc_data_structures::fnv::{FnvHashSet, FnvHashMap}; +use rustc_data_structures::{flock, base_n}; +use rustc_data_structures::fx::{FxHashSet, FxHashMap}; use std::ffi::OsString; use std::fs as std_fs; @@ -135,6 +135,12 @@ const DEP_GRAPH_FILENAME: &'static str = "dep-graph.bin"; const WORK_PRODUCTS_FILENAME: &'static str = "work-products.bin"; const METADATA_HASHES_FILENAME: &'static str = "metadata.bin"; +// We encode integers using the following base, so they are shorter than decimal +// or hexadecimal numbers (we want short file and directory names). 
Since these +// numbers will be used in file names, we choose an encoding that is not +// case-sensitive (as opposed to base64, for example). +const INT_ENCODE_BASE: u64 = 36; + pub fn dep_graph_path(sess: &Session) -> PathBuf { in_incr_comp_dir_sess(sess, DEP_GRAPH_FILENAME) } @@ -195,7 +201,20 @@ pub fn prepare_session_directory(tcx: TyCtxt) -> Result { debug!("crate-dir: {}", crate_dir.display()); try!(create_dir(tcx.sess, &crate_dir, "crate")); - let mut source_directories_already_tried = FnvHashSet(); + // Hack: canonicalize the path *after creating the directory* + // because, on windows, long paths can cause problems; + // canonicalization inserts this weird prefix that makes windows + // tolerate long paths. + let crate_dir = match crate_dir.canonicalize() { + Ok(v) => v, + Err(err) => { + tcx.sess.err(&format!("incremental compilation: error canonicalizing path `{}`: {}", + crate_dir.display(), err)); + return Err(()); + } + }; + + let mut source_directories_already_tried = FxHashSet(); loop { // Generate a session directory of the form: @@ -327,7 +346,7 @@ pub fn finalize_session_directory(sess: &Session, svh: Svh) { let mut new_sub_dir_name = String::from(&old_sub_dir_name[.. dash_indices[2] + 1]); // Append the svh - new_sub_dir_name.push_str(&encode_base_36(svh.as_u64())); + base_n::push_str(svh.as_u64(), INT_ENCODE_BASE, &mut new_sub_dir_name); // Create the full path let new_path = incr_comp_session_dir.parent().unwrap().join(new_sub_dir_name); @@ -416,8 +435,8 @@ fn copy_files(target_dir: &Path, } if print_stats_on_success { - println!("incr. comp. session directory: {} files hard-linked", files_linked); - println!("incr. comp. session directory: {} files copied", files_copied); + println!("incremental: session directory: {} files hard-linked", files_linked); + println!("incremental: session directory: {} files copied", files_copied); } Ok(files_linked > 0 || files_copied == 0) @@ -433,7 +452,8 @@ fn generate_session_dir_path(crate_dir: &Path) -> PathBuf { let directory_name = format!("s-{}-{}-working", timestamp, - encode_base_36(random_number as u64)); + base_n::encode(random_number as u64, + INT_ENCODE_BASE)); debug!("generate_session_dir_path: directory_name = {}", directory_name); let directory_path = crate_dir.join(directory_name); debug!("generate_session_dir_path: directory_path = {}", directory_path.display()); @@ -490,7 +510,7 @@ fn delete_session_dir_lock_file(sess: &Session, /// Find the most recent published session directory that is not in the /// ignore-list. fn find_source_directory(crate_dir: &Path, - source_directories_already_tried: &FnvHashSet) + source_directories_already_tried: &FxHashSet) -> Option { let iter = crate_dir.read_dir() .unwrap() // FIXME @@ -500,7 +520,7 @@ fn find_source_directory(crate_dir: &Path, } fn find_source_directory_in_iter(iter: I, - source_directories_already_tried: &FnvHashSet) + source_directories_already_tried: &FxHashSet) -> Option where I: Iterator { @@ -562,27 +582,11 @@ fn extract_timestamp_from_session_dir(directory_name: &str) string_to_timestamp(&directory_name[dash_indices[0]+1 .. 
dash_indices[1]]) } -const BASE_36: &'static [u8] = b"0123456789abcdefghijklmnopqrstuvwxyz"; - -fn encode_base_36(mut n: u64) -> String { - let mut s = Vec::with_capacity(13); - loop { - s.push(BASE_36[(n % 36) as usize]); - n /= 36; - - if n == 0 { - break; - } - } - s.reverse(); - String::from_utf8(s).unwrap() -} - fn timestamp_to_string(timestamp: SystemTime) -> String { let duration = timestamp.duration_since(UNIX_EPOCH).unwrap(); let micros = duration.as_secs() * 1_000_000 + (duration.subsec_nanos() as u64) / 1000; - encode_base_36(micros) + base_n::encode(micros, INT_ENCODE_BASE) } fn string_to_timestamp(s: &str) -> Result { @@ -600,7 +604,7 @@ fn string_to_timestamp(s: &str) -> Result { } fn crate_path_tcx(tcx: TyCtxt, cnum: CrateNum) -> PathBuf { - crate_path(tcx.sess, &tcx.crate_name(cnum), &tcx.crate_disambiguator(cnum)) + crate_path(tcx.sess, &tcx.crate_name(cnum).as_str(), &tcx.crate_disambiguator(cnum).as_str()) } /// Finds the session directory containing the correct metadata hashes file for @@ -629,7 +633,7 @@ pub fn find_metadata_hashes_for(tcx: TyCtxt, cnum: CrateNum) -> Option }; let target_svh = tcx.sess.cstore.crate_hash(cnum); - let target_svh = encode_base_36(target_svh.as_u64()); + let target_svh = base_n::encode(target_svh.as_u64(), INT_ENCODE_BASE); let sub_dir = find_metadata_hashes_iter(&target_svh, dir_entries.filter_map(|e| { e.ok().map(|e| e.file_name().to_string_lossy().into_owned()) @@ -677,7 +681,9 @@ fn crate_path(sess: &Session, let mut hasher = DefaultHasher::new(); crate_disambiguator.hash(&mut hasher); - let crate_name = format!("{}-{}", crate_name, encode_base_36(hasher.finish())); + let crate_name = format!("{}-{}", + crate_name, + base_n::encode(hasher.finish(), INT_ENCODE_BASE)); incr_dir.join(crate_name) } @@ -704,8 +710,8 @@ pub fn garbage_collect_session_directories(sess: &Session) -> io::Result<()> { // First do a pass over the crate directory, collecting lock files and // session directories - let mut session_directories = FnvHashSet(); - let mut lock_files = FnvHashSet(); + let mut session_directories = FxHashSet(); + let mut lock_files = FxHashSet(); for dir_entry in try!(crate_directory.read_dir()) { let dir_entry = match dir_entry { @@ -731,7 +737,7 @@ pub fn garbage_collect_session_directories(sess: &Session) -> io::Result<()> { } // Now map from lock files to session directories - let lock_file_to_session_dir: FnvHashMap> = + let lock_file_to_session_dir: FxHashMap> = lock_files.into_iter() .map(|lock_file_name| { assert!(lock_file_name.ends_with(LOCK_FILE_EXT)); @@ -774,7 +780,7 @@ pub fn garbage_collect_session_directories(sess: &Session) -> io::Result<()> { } // Filter out `None` directories - let lock_file_to_session_dir: FnvHashMap = + let lock_file_to_session_dir: FxHashMap = lock_file_to_session_dir.into_iter() .filter_map(|(lock_file_name, directory_name)| { directory_name.map(|n| (lock_file_name, n)) @@ -898,7 +904,7 @@ pub fn garbage_collect_session_directories(sess: &Session) -> io::Result<()> { } fn all_except_most_recent(deletion_candidates: Vec<(SystemTime, PathBuf, Option)>) - -> FnvHashMap> { + -> FxHashMap> { let most_recent = deletion_candidates.iter() .map(|&(timestamp, ..)| timestamp) .max(); @@ -909,7 +915,7 @@ fn all_except_most_recent(deletion_candidates: Vec<(SystemTime, PathBuf, Option< .map(|(_, path, lock)| (path, lock)) .collect() } else { - FnvHashMap() + FxHashMap() } } @@ -946,19 +952,19 @@ fn test_all_except_most_recent() { (UNIX_EPOCH + Duration::new(5, 0), PathBuf::from("5"), None), (UNIX_EPOCH + 
Duration::new(3, 0), PathBuf::from("3"), None), (UNIX_EPOCH + Duration::new(2, 0), PathBuf::from("2"), None), - ]).keys().cloned().collect::>(), + ]).keys().cloned().collect::>(), vec![ PathBuf::from("1"), PathBuf::from("2"), PathBuf::from("3"), PathBuf::from("4"), - ].into_iter().collect::>() + ].into_iter().collect::>() ); assert_eq!(all_except_most_recent( vec![ - ]).keys().cloned().collect::>(), - FnvHashSet() + ]).keys().cloned().collect::>(), + FxHashSet() ); } @@ -973,7 +979,7 @@ fn test_timestamp_serialization() { #[test] fn test_find_source_directory_in_iter() { - let already_visited = FnvHashSet(); + let already_visited = FxHashSet(); // Find newest assert_eq!(find_source_directory_in_iter( @@ -1049,21 +1055,3 @@ fn test_find_metadata_hashes_iter() None ); } - -#[test] -fn test_encode_base_36() { - fn test(n: u64) { - assert_eq!(Ok(n), u64::from_str_radix(&encode_base_36(n)[..], 36)); - } - - test(0); - test(1); - test(35); - test(36); - test(37); - test(u64::max_value()); - - for i in 0 .. 1_000 { - test(i * 983); - } -} diff --git a/src/librustc_incremental/persist/hash.rs b/src/librustc_incremental/persist/hash.rs index e365cbbd3a..e5203ea02b 100644 --- a/src/librustc_incremental/persist/hash.rs +++ b/src/librustc_incremental/persist/hash.rs @@ -12,7 +12,7 @@ use rustc::dep_graph::DepNode; use rustc::hir::def_id::{CrateNum, DefId}; use rustc::hir::svh::Svh; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::flock; use rustc_serialize::Decodable; use rustc_serialize::opaque::Decoder; @@ -26,8 +26,8 @@ use super::file_format; pub struct HashContext<'a, 'tcx: 'a> { pub tcx: TyCtxt<'a, 'tcx, 'tcx>, incremental_hashes_map: &'a IncrementalHashesMap, - item_metadata_hashes: FnvHashMap, - crate_hashes: FnvHashMap, + item_metadata_hashes: FxHashMap, + crate_hashes: FxHashMap, } impl<'a, 'tcx> HashContext<'a, 'tcx> { @@ -37,15 +37,17 @@ impl<'a, 'tcx> HashContext<'a, 'tcx> { HashContext { tcx: tcx, incremental_hashes_map: incremental_hashes_map, - item_metadata_hashes: FnvHashMap(), - crate_hashes: FnvHashMap(), + item_metadata_hashes: FxHashMap(), + crate_hashes: FxHashMap(), } } pub fn is_hashable(dep_node: &DepNode) -> bool { match *dep_node { DepNode::Krate | - DepNode::Hir(_) => true, + DepNode::Hir(_) | + DepNode::HirBody(_) => + true, DepNode::MetaData(def_id) => !def_id.is_local(), _ => false, } @@ -58,7 +60,7 @@ impl<'a, 'tcx> HashContext<'a, 'tcx> { } // HIR nodes (which always come from our crate) are an input: - DepNode::Hir(def_id) => { + DepNode::Hir(def_id) | DepNode::HirBody(def_id) => { assert!(def_id.is_local(), "cannot hash HIR for non-local def-id {:?} => {:?}", def_id, @@ -154,7 +156,7 @@ impl<'a, 'tcx> HashContext<'a, 'tcx> { let hashes_file_path = metadata_hash_import_path(&session_dir); - match file_format::read_file(&hashes_file_path) + match file_format::read_file(self.tcx.sess, &hashes_file_path) { Ok(Some(data)) => { match self.load_from_data(cnum, &data, svh) { diff --git a/src/librustc_incremental/persist/load.rs b/src/librustc_incremental/persist/load.rs index 7cef246b6c..8ff04a565e 100644 --- a/src/librustc_incremental/persist/load.rs +++ b/src/librustc_incremental/persist/load.rs @@ -15,7 +15,7 @@ use rustc::hir::def_id::DefId; use rustc::hir::svh::Svh; use rustc::session::Session; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::{FnvHashSet, FnvHashMap}; +use rustc_data_structures::fx::{FxHashSet, FxHashMap}; use rustc_serialize::Decodable as RustcDecodable; 
use rustc_serialize::opaque::Decoder; use std::fs; @@ -30,7 +30,7 @@ use super::hash::*; use super::fs::*; use super::file_format; -pub type DirtyNodes = FnvHashSet>; +pub type DirtyNodes = FxHashSet>; /// If we are in incremental mode, and a previous dep-graph exists, /// then load up those nodes/edges that are still valid into the @@ -93,7 +93,7 @@ fn load_dep_graph_if_exists<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } fn load_data(sess: &Session, path: &Path) -> Option> { - match file_format::read_file(path) { + match file_format::read_file(sess, path) { Ok(Some(data)) => return Some(data), Ok(None) => { // The file either didn't exist or was produced by an incompatible @@ -132,6 +132,10 @@ pub fn decode_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let prev_commandline_args_hash = u64::decode(&mut dep_graph_decoder)?; if prev_commandline_args_hash != tcx.sess.opts.dep_tracking_hash() { + if tcx.sess.opts.debugging_opts.incremental_info { + println!("incremental: completely ignoring cache because of \ + differing commandline arguments"); + } // We can't reuse the cache, purge it. debug!("decode_dep_graph: differing commandline arg hashes"); for swp in work_products { @@ -183,7 +187,7 @@ pub fn decode_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, // Compute which work-products have an input that has changed or // been removed. Put the dirty ones into a set. - let mut dirty_target_nodes = FnvHashSet(); + let mut dirty_target_nodes = FxHashSet(); for &(raw_source_node, ref target_node) in &retraced_edges { if dirty_raw_source_nodes.contains(raw_source_node) { if !dirty_target_nodes.contains(target_node) { @@ -192,7 +196,8 @@ pub fn decode_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, if tcx.sess.opts.debugging_opts.incremental_info { // It'd be nice to pretty-print these paths better than just // using the `Debug` impls, but wev. - println!("module {:?} is dirty because {:?} changed or was removed", + println!("incremental: module {:?} is dirty because {:?} \ + changed or was removed", target_node, raw_source_node.map_def(|&index| { Some(directory.def_path_string(tcx, index)) @@ -239,7 +244,7 @@ fn dirty_nodes<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, retraced: &RetracedDefIdDirectory) -> DirtyNodes { let mut hcx = HashContext::new(tcx, incremental_hashes_map); - let mut dirty_nodes = FnvHashSet(); + let mut dirty_nodes = FxHashSet(); for hash in serialized_hashes { if let Some(dep_node) = retraced.map(&hash.dep_node) { @@ -250,11 +255,24 @@ fn dirty_nodes<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, current_hash); continue; } + + if tcx.sess.opts.debugging_opts.incremental_dump_hash { + println!("node {:?} is dirty as hash is {:?} was {:?}", + dep_node.map_def(|&def_id| Some(tcx.def_path(def_id))).unwrap(), + current_hash, + hash.hash); + } + debug!("initial_dirty_nodes: {:?} is dirty as hash is {:?}, was {:?}", dep_node.map_def(|&def_id| Some(tcx.def_path(def_id))).unwrap(), current_hash, hash.hash); } else { + if tcx.sess.opts.debugging_opts.incremental_dump_hash { + println!("node {:?} is dirty as it was removed", + hash.dep_node); + } + debug!("initial_dirty_nodes: {:?} is dirty as it was removed", hash.dep_node); } @@ -270,21 +288,26 @@ fn dirty_nodes<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, /// otherwise no longer applicable. 
fn reconcile_work_products<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, work_products: Vec, - dirty_target_nodes: &FnvHashSet>) { + dirty_target_nodes: &FxHashSet>) { debug!("reconcile_work_products({:?})", work_products); for swp in work_products { if dirty_target_nodes.contains(&DepNode::WorkProduct(swp.id.clone())) { debug!("reconcile_work_products: dep-node for {:?} is dirty", swp); delete_dirty_work_product(tcx, swp); } else { - let all_files_exist = - swp.work_product - .saved_files - .iter() - .all(|&(_, ref file_name)| { - let path = in_incr_comp_dir_sess(tcx.sess, &file_name); - path.exists() - }); + let mut all_files_exist = true; + for &(_, ref file_name) in swp.work_product.saved_files.iter() { + let path = in_incr_comp_dir_sess(tcx.sess, file_name); + if !path.exists() { + all_files_exist = false; + + if tcx.sess.opts.debugging_opts.incremental_info { + println!("incremental: could not find file for up-to-date work product: {}", + path.display()); + } + } + } + if all_files_exist { debug!("reconcile_work_products: all files for {:?} exist", swp); tcx.dep_graph.insert_previous_work_product(&swp.id, swp.work_product); @@ -314,7 +337,7 @@ fn delete_dirty_work_product(tcx: TyCtxt, fn load_prev_metadata_hashes(tcx: TyCtxt, retraced: &RetracedDefIdDirectory, - output: &mut FnvHashMap) { + output: &mut FxHashMap) { if !tcx.sess.opts.debugging_opts.query_dep_graph { return } @@ -331,7 +354,7 @@ fn load_prev_metadata_hashes(tcx: TyCtxt, debug!("load_prev_metadata_hashes() - File: {}", file_path.display()); - let data = match file_format::read_file(&file_path) { + let data = match file_format::read_file(tcx.sess, &file_path) { Ok(Some(data)) => data, Ok(None) => { debug!("load_prev_metadata_hashes() - File produced by incompatible \ diff --git a/src/librustc_incremental/persist/preds.rs b/src/librustc_incremental/persist/preds.rs index fe1d627253..e1968ce8d7 100644 --- a/src/librustc_incremental/persist/preds.rs +++ b/src/librustc_incremental/persist/preds.rs @@ -10,7 +10,7 @@ use rustc::dep_graph::{DepGraphQuery, DepNode}; use rustc::hir::def_id::DefId; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::graph::{DepthFirstTraversal, INCOMING, NodeIndex}; use super::hash::*; @@ -23,11 +23,11 @@ pub struct Predecessors<'query> { // nodes. 
// - Values: transitive predecessors of the key that are hashable // (e.g., HIR nodes, input meta-data nodes) - pub inputs: FnvHashMap<&'query DepNode, Vec<&'query DepNode>>, + pub inputs: FxHashMap<&'query DepNode, Vec<&'query DepNode>>, // - Keys: some hashable node // - Values: the hash thereof - pub hashes: FnvHashMap<&'query DepNode, Fingerprint>, + pub hashes: FxHashMap<&'query DepNode, Fingerprint>, } impl<'q> Predecessors<'q> { @@ -37,7 +37,7 @@ impl<'q> Predecessors<'q> { let all_nodes = query.graph.all_nodes(); let tcx = hcx.tcx; - let inputs: FnvHashMap<_, _> = all_nodes.iter() + let inputs: FxHashMap<_, _> = all_nodes.iter() .enumerate() .filter(|&(_, node)| match node.data { DepNode::WorkProduct(_) => true, @@ -60,7 +60,7 @@ impl<'q> Predecessors<'q> { }) .collect(); - let mut hashes = FnvHashMap(); + let mut hashes = FxHashMap(); for input in inputs.values().flat_map(|v| v.iter().cloned()) { hashes.entry(input) .or_insert_with(|| hcx.hash(input).unwrap()); diff --git a/src/librustc_incremental/persist/save.rs b/src/librustc_incremental/persist/save.rs index bc156b0e89..f3bbd02dff 100644 --- a/src/librustc_incremental/persist/save.rs +++ b/src/librustc_incremental/persist/save.rs @@ -13,7 +13,7 @@ use rustc::hir::def_id::DefId; use rustc::hir::svh::Svh; use rustc::session::Session; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_serialize::Encodable as RustcEncodable; use rustc_serialize::opaque::Encoder; use std::hash::Hash; @@ -30,7 +30,7 @@ use super::preds::*; use super::fs::*; use super::dirty_clean; use super::file_format; -use calculate_svh::hasher::IchHasher; +use calculate_svh::IchHasher; pub fn save_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, incremental_hashes_map: &IncrementalHashesMap, @@ -46,7 +46,7 @@ pub fn save_dep_graph<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let query = tcx.dep_graph.query(); let mut hcx = HashContext::new(tcx, incremental_hashes_map); let preds = Predecessors::new(&query, &mut hcx); - let mut current_metadata_hashes = FnvHashMap(); + let mut current_metadata_hashes = FxHashMap(); // IMPORTANT: We are saving the metadata hashes *before* the dep-graph, // since metadata-encoding might add new entries to the @@ -145,8 +145,8 @@ pub fn encode_dep_graph(preds: &Predecessors, for (&target, sources) in &preds.inputs { match *target { DepNode::MetaData(ref def_id) => { - // Metadata *targets* are always local metadata nodes. We handle - // those in `encode_metadata_hashes`, which comes later. + // Metadata *targets* are always local metadata nodes. We have + // already handled those in `encode_metadata_hashes`. assert!(def_id.is_local()); continue; } @@ -159,6 +159,12 @@ pub fn encode_dep_graph(preds: &Predecessors, } } + if tcx.sess.opts.debugging_opts.incremental_dump_hash { + for (dep_node, hash) in &preds.hashes { + println!("HIR hash for {:?} is {}", dep_node, hash); + } + } + // Create the serialized dep-graph. let graph = SerializedDepGraph { edges: edges, @@ -186,7 +192,7 @@ pub fn encode_metadata_hashes(tcx: TyCtxt, svh: Svh, preds: &Predecessors, builder: &mut DefIdDirectoryBuilder, - current_metadata_hashes: &mut FnvHashMap, + current_metadata_hashes: &mut FxHashMap, encoder: &mut Encoder) -> io::Result<()> { // For each `MetaData(X)` node where `X` is local, accumulate a @@ -198,10 +204,10 @@ pub fn encode_metadata_hashes(tcx: TyCtxt, // (I initially wrote this with an iterator, but it seemed harder to read.) 
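// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the patch: the FnvHashMap/FnvHashSet ->
// FxHashMap/FxHashSet renames running through these files only swap the
// default hasher behind a pair of type aliases in rustc_data_structures.
// DemoHasher below is a deliberately simplified stand-in (it is *not* the
// real FxHasher); the point is the aliasing pattern over std collections.
use std::collections::{HashMap, HashSet};
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct DemoHasher(u64);

impl Hasher for DemoHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Cheap, non-cryptographic mixing, one byte at a time.
        for &b in bytes {
            self.0 = (self.0.rotate_left(5) ^ b as u64).wrapping_mul(0x517c_c1b7_2722_0a95);
        }
    }
    fn finish(&self) -> u64 {
        self.0
    }
}

// These aliases mirror the shape of FxHashMap/FxHashSet.
type DemoHashMap<K, V> = HashMap<K, V, BuildHasherDefault<DemoHasher>>;
type DemoHashSet<T> = HashSet<T, BuildHasherDefault<DemoHasher>>;

fn demo_maps() {
    let mut session_directories: DemoHashSet<String> = DemoHashSet::default();
    session_directories.insert("s-abc123-working".to_string());

    let mut lock_file_to_session_dir: DemoHashMap<String, Option<String>> = DemoHashMap::default();
    lock_file_to_session_dir.insert("s-abc123.lock".to_string(), None);
}
// ---------------------------------------------------------------------------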
let mut serialized_hashes = SerializedMetadataHashes { hashes: vec![], - index_map: FnvHashMap() + index_map: FxHashMap() }; - let mut def_id_hashes = FnvHashMap(); + let mut def_id_hashes = FxHashMap(); for (&target, sources) in &preds.inputs { let def_id = match *target { @@ -248,6 +254,15 @@ pub fn encode_metadata_hashes(tcx: TyCtxt, let hash = state.finish(); debug!("save: metadata hash for {:?} is {}", def_id, hash); + + if tcx.sess.opts.debugging_opts.incremental_dump_hash { + println!("metadata hash for {:?} is {}", def_id, hash); + for dep_node in sources { + println!("metadata hash for {:?} depends on {:?} with hash {}", + def_id, dep_node, preds.hashes[dep_node]); + } + } + serialized_hashes.hashes.push(SerializedMetadataHash { def_index: def_id.index, hash: hash, diff --git a/src/librustc_lint/bad_style.rs b/src/librustc_lint/bad_style.rs index fea3de5952..2aa74407af 100644 --- a/src/librustc_lint/bad_style.rs +++ b/src/librustc_lint/bad_style.rs @@ -29,10 +29,10 @@ pub enum MethodLateContext { pub fn method_context(cx: &LateContext, id: ast::NodeId, span: Span) -> MethodLateContext { let def_id = cx.tcx.map.local_def_id(id); - match cx.tcx.impl_or_trait_items.borrow().get(&def_id) { + match cx.tcx.associated_items.borrow().get(&def_id) { None => span_bug!(span, "missing method descriptor?!"), Some(item) => { - match item.container() { + match item.container { ty::TraitContainer(..) => MethodLateContext::TraitDefaultImpl, ty::ImplContainer(cid) => { match cx.tcx.impl_trait_ref(cid) { @@ -81,19 +81,12 @@ impl NonCamelCaseTypes { .concat() } - let s = name.as_str(); - if !is_camel_case(name) { - let c = to_camel_case(&s); + let c = to_camel_case(&name.as_str()); let m = if c.is_empty() { - format!("{} `{}` should have a camel case name such as `CamelCase`", - sort, - s) + format!("{} `{}` should have a camel case name such as `CamelCase`", sort, name) } else { - format!("{} `{}` should have a camel case name such as `{}`", - sort, - s, - c) + format!("{} `{}` should have a camel case name such as `{}`", sort, name, c) }; cx.span_lint(NON_CAMEL_CASE_TYPES, span, &m[..]); } @@ -106,7 +99,7 @@ impl LintPass for NonCamelCaseTypes { } } -impl LateLintPass for NonCamelCaseTypes { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonCamelCaseTypes { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { let extern_repr_count = it.attrs .iter() @@ -233,7 +226,7 @@ impl LintPass for NonSnakeCase { } } -impl LateLintPass for NonSnakeCase { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonSnakeCase { fn check_crate(&mut self, cx: &LateContext, cr: &hir::Crate) { let attr_crate_name = cr.attrs .iter() @@ -241,8 +234,8 @@ impl LateLintPass for NonSnakeCase { .and_then(|at| at.value_str().map(|s| (at, s))); if let Some(ref name) = cx.tcx.sess.opts.crate_name { self.check_snake_case(cx, "crate", name, None); - } else if let Some((attr, ref name)) = attr_crate_name { - self.check_snake_case(cx, "crate", name, Some(attr.span)); + } else if let Some((attr, name)) = attr_crate_name { + self.check_snake_case(cx, "crate", &name.as_str(), Some(attr.span)); } } @@ -250,7 +243,7 @@ impl LateLintPass for NonSnakeCase { cx: &LateContext, fk: FnKind, _: &hir::FnDecl, - _: &hir::Block, + _: &hir::Expr, span: Span, id: ast::NodeId) { match fk { @@ -295,12 +288,17 @@ impl LateLintPass for NonSnakeCase { } fn check_pat(&mut self, cx: &LateContext, p: &hir::Pat) { - if let &PatKind::Binding(_, ref path1, _) = &p.node { - // Exclude parameter names from foreign functions (they have no `Def`) - if 
cx.tcx.expect_def_or_none(p.id).is_some() { - self.check_snake_case(cx, "variable", &path1.node.as_str(), Some(p.span)); + // Exclude parameter names from foreign functions + let parent_node = cx.tcx.map.get_parent_node(p.id); + if let hir::map::NodeForeignItem(item) = cx.tcx.map.get(parent_node) { + if let hir::ForeignItemFn(..) = item.node { + return; } } + + if let &PatKind::Binding(_, _, ref path1, _) = &p.node { + self.check_snake_case(cx, "variable", &path1.node.as_str(), Some(p.span)); + } } fn check_struct_def(&mut self, @@ -326,21 +324,19 @@ pub struct NonUpperCaseGlobals; impl NonUpperCaseGlobals { fn check_upper_case(cx: &LateContext, sort: &str, name: ast::Name, span: Span) { - let s = name.as_str(); - - if s.chars().any(|c| c.is_lowercase()) { - let uc = NonSnakeCase::to_snake_case(&s).to_uppercase(); - if uc != &s[..] { + if name.as_str().chars().any(|c| c.is_lowercase()) { + let uc = NonSnakeCase::to_snake_case(&name.as_str()).to_uppercase(); + if name != &*uc { cx.span_lint(NON_UPPER_CASE_GLOBALS, span, &format!("{} `{}` should have an upper case name such as `{}`", sort, - s, + name, uc)); } else { cx.span_lint(NON_UPPER_CASE_GLOBALS, span, - &format!("{} `{}` should have an upper case name", sort, s)); + &format!("{} `{}` should have an upper case name", sort, name)); } } } @@ -352,7 +348,7 @@ impl LintPass for NonUpperCaseGlobals { } } -impl LateLintPass for NonUpperCaseGlobals { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonUpperCaseGlobals { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { match it.node { hir::ItemStatic(..) => { @@ -385,9 +381,9 @@ impl LateLintPass for NonUpperCaseGlobals { fn check_pat(&mut self, cx: &LateContext, p: &hir::Pat) { // Lint for constants that look like binding identifiers (#7526) - if let PatKind::Path(None, ref path) = p.node { + if let PatKind::Path(hir::QPath::Resolved(None, ref path)) = p.node { if !path.global && path.segments.len() == 1 && path.segments[0].parameters.is_empty() { - if let Def::Const(..) = cx.tcx.expect_def(p.id) { + if let Def::Const(..) = path.def { NonUpperCaseGlobals::check_upper_case(cx, "constant in pattern", path.segments[0].name, diff --git a/src/librustc_lint/builtin.rs b/src/librustc_lint/builtin.rs index 28dc71fd59..cd414846af 100644 --- a/src/librustc_lint/builtin.rs +++ b/src/librustc_lint/builtin.rs @@ -30,14 +30,13 @@ use rustc::hir::def::Def; use rustc::hir::def_id::DefId; -use middle::stability; use rustc::cfg; use rustc::ty::subst::Substs; use rustc::ty::{self, Ty, TyCtxt}; use rustc::traits::{self, Reveal}; use rustc::hir::map as hir_map; use util::nodemap::NodeSet; -use lint::{Level, LateContext, LintContext, LintArray, Lint}; +use lint::{Level, LateContext, LintContext, LintArray}; use lint::{LintPass, LateLintPass, EarlyLintPass, EarlyContext}; use std::collections::HashSet; @@ -70,7 +69,7 @@ impl LintPass for WhileTrue { } } -impl LateLintPass for WhileTrue { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for WhileTrue { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { if let hir::ExprWhile(ref cond, ..) = e.node { if let hir::ExprLit(ref lit) = cond.node { @@ -94,7 +93,7 @@ declare_lint! 
{ pub struct BoxPointers; impl BoxPointers { - fn check_heap_type<'a, 'tcx>(&self, cx: &LateContext<'a, 'tcx>, span: Span, ty: Ty<'tcx>) { + fn check_heap_type<'a, 'tcx>(&self, cx: &LateContext, span: Span, ty: Ty) { for leaf_ty in ty.walk() { if let ty::TyBox(_) = leaf_ty.sty { let m = format!("type uses owned (Box type) pointers: {}", ty); @@ -110,7 +109,7 @@ impl LintPass for BoxPointers { } } -impl LateLintPass for BoxPointers { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for BoxPointers { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { match it.node { hir::ItemFn(..) | @@ -118,9 +117,10 @@ impl LateLintPass for BoxPointers { hir::ItemEnum(..) | hir::ItemStruct(..) | hir::ItemUnion(..) => { - self.check_heap_type(cx, it.span, cx.tcx.tables().node_id_to_type(it.id)) + let def_id = cx.tcx.map.local_def_id(it.id); + self.check_heap_type(cx, it.span, cx.tcx.item_type(def_id)) } - _ => (), + _ => () } // If it's a struct, we also have to check the fields' types @@ -128,9 +128,9 @@ impl LateLintPass for BoxPointers { hir::ItemStruct(ref struct_def, _) | hir::ItemUnion(ref struct_def, _) => { for struct_field in struct_def.fields() { - self.check_heap_type(cx, - struct_field.span, - cx.tcx.tables().node_id_to_type(struct_field.id)); + let def_id = cx.tcx.map.local_def_id(struct_field.id); + self.check_heap_type(cx, struct_field.span, + cx.tcx.item_type(def_id)); } } _ => (), @@ -158,14 +158,14 @@ impl LintPass for NonShorthandFieldPatterns { } } -impl LateLintPass for NonShorthandFieldPatterns { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonShorthandFieldPatterns { fn check_pat(&mut self, cx: &LateContext, pat: &hir::Pat) { if let PatKind::Struct(_, ref field_pats, _) = pat.node { for fieldpat in field_pats { if fieldpat.node.is_shorthand { continue; } - if let PatKind::Binding(_, ident, None) = fieldpat.node.pat.node { + if let PatKind::Binding(_, _, ident, None) = fieldpat.node.pat.node { if ident.node == fieldpat.node.name { cx.span_lint(NON_SHORTHAND_FIELD_PATTERNS, fieldpat.span, @@ -194,7 +194,7 @@ impl LintPass for UnsafeCode { } } -impl LateLintPass for UnsafeCode { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnsafeCode { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { if let hir::ExprBlock(ref blk) = e.node { // Don't warn about generated blocks, that'll just pollute the output. @@ -220,9 +220,9 @@ impl LateLintPass for UnsafeCode { fn check_fn(&mut self, cx: &LateContext, - fk: FnKind, + fk: FnKind<'tcx>, _: &hir::FnDecl, - _: &hir::Block, + _: &hir::Expr, span: Span, _: ast::NodeId) { match fk { @@ -327,7 +327,7 @@ impl LintPass for MissingDoc { } } -impl LateLintPass for MissingDoc { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingDoc { fn enter_lint_attrs(&mut self, _: &LateContext, attrs: &[ast::Attribute]) { let doc_hidden = self.doc_hidden() || attrs.iter().any(|attr| { @@ -340,7 +340,7 @@ impl LateLintPass for MissingDoc { self.doc_hidden_stack.push(doc_hidden); } - fn exit_lint_attrs(&mut self, _: &LateContext, _: &[ast::Attribute]) { + fn exit_lint_attrs(&mut self, _: &LateContext, _attrs: &[ast::Attribute]) { self.doc_hidden_stack.pop().expect("empty doc_hidden_stack"); } @@ -386,16 +386,16 @@ impl LateLintPass for MissingDoc { "a trait" } hir::ItemTy(..) => "a type alias", - hir::ItemImpl(.., Some(ref trait_ref), _, ref impl_items) => { + hir::ItemImpl(.., Some(ref trait_ref), _, ref impl_item_refs) => { // If the trait is private, add the impl items to private_traits so they don't get // reported for missing docs. 
- let real_trait = cx.tcx.expect_def(trait_ref.ref_id).def_id(); + let real_trait = trait_ref.path.def.def_id(); if let Some(node_id) = cx.tcx.map.as_local_node_id(real_trait) { match cx.tcx.map.find(node_id) { Some(hir_map::NodeItem(item)) => { if item.vis == hir::Visibility::Inherited { - for itm in impl_items { - self.private_traits.insert(itm.id); + for impl_item_ref in impl_item_refs { + self.private_traits.insert(impl_item_ref.id.node_id); } } } @@ -494,7 +494,7 @@ impl LintPass for MissingCopyImplementations { } } -impl LateLintPass for MissingCopyImplementations { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingCopyImplementations { fn check_item(&mut self, cx: &LateContext, item: &hir::Item) { if !cx.access_levels.is_reachable(item.id) { return; @@ -563,7 +563,7 @@ impl LintPass for MissingDebugImplementations { } } -impl LateLintPass for MissingDebugImplementations { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingDebugImplementations { fn check_item(&mut self, cx: &LateContext, item: &hir::Item) { if !cx.access_levels.is_reachable(item.id) { return; @@ -585,11 +585,9 @@ impl LateLintPass for MissingDebugImplementations { let debug_def = cx.tcx.lookup_trait_def(debug); let mut impls = NodeSet(); debug_def.for_each_impl(cx.tcx, |d| { - if let Some(n) = cx.tcx.map.as_local_node_id(d) { - if let Some(ty_def) = cx.tcx.tables().node_id_to_type(n).ty_to_def_id() { - if let Some(node_id) = cx.tcx.map.as_local_node_id(ty_def) { - impls.insert(node_id); - } + if let Some(ty_def) = cx.tcx.item_type(d).ty_to_def_id() { + if let Some(node_id) = cx.tcx.map.as_local_node_id(ty_def) { + impls.insert(node_id); } } }); @@ -607,142 +605,6 @@ impl LateLintPass for MissingDebugImplementations { } } -declare_lint! { - DEPRECATED, - Warn, - "detects use of deprecated items" -} - -/// Checks for use of items with `#[deprecated]` or `#[rustc_deprecated]` attributes -#[derive(Clone)] -pub struct Deprecated { - /// Tracks the `NodeId` of the current item. - /// - /// This is required since not all node ids are present in the hir map. - current_item: ast::NodeId, -} - -impl Deprecated { - pub fn new() -> Deprecated { - Deprecated { current_item: ast::CRATE_NODE_ID } - } - - fn lint(&self, - cx: &LateContext, - _id: DefId, - span: Span, - stability: &Option<&attr::Stability>, - deprecation: &Option) { - // Deprecated attributes apply in-crate and cross-crate. 
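// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the patch: throughout these lint files,
// `impl LateLintPass for Foo` becomes `impl<'a, 'tcx> LateLintPass<'a, 'tcx>
// for Foo`, i.e. the pass trait now carries the context lifetimes. Below is a
// minimal stand-alone model of that shape, with invented names (Context,
// LatePass, NoisyPass) rather than the real rustc types.
struct Context<'a, 'tcx> {
    // In rustc this would borrow the session and the TyCtxt.
    session_name: &'a str,
    crate_name: &'tcx str,
}

trait LatePass<'a, 'tcx> {
    fn check_crate(&mut self, cx: &Context<'a, 'tcx>);
}

struct NoisyPass;

impl<'a, 'tcx> LatePass<'a, 'tcx> for NoisyPass {
    fn check_crate(&mut self, cx: &Context<'a, 'tcx>) {
        println!("checking `{}` in session `{}`", cx.crate_name, cx.session_name);
    }
}

fn demo_pass() {
    let cx = Context { session_name: "incremental", crate_name: "librustc_lint" };
    let mut pass = NoisyPass;
    pass.check_crate(&cx);
}
// ---------------------------------------------------------------------------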
- if let Some(&attr::Stability{rustc_depr: Some(attr::RustcDeprecation{ref reason, ..}), ..}) - = *stability { - output(cx, DEPRECATED, span, Some(&reason)) - } else if let Some(ref depr_entry) = *deprecation { - if let Some(parent_depr) = cx.tcx.lookup_deprecation_entry(self.parent_def(cx)) { - if parent_depr.same_origin(depr_entry) { - return; - } - } - - output(cx, DEPRECATED, span, depr_entry.attr.note.as_ref().map(|x| &**x)) - } - - fn output(cx: &LateContext, lint: &'static Lint, span: Span, note: Option<&str>) { - let msg = if let Some(note) = note { - format!("use of deprecated item: {}", note) - } else { - format!("use of deprecated item") - }; - - cx.span_lint(lint, span, &msg); - } - } - - fn push_item(&mut self, item_id: ast::NodeId) { - self.current_item = item_id; - } - - fn item_post(&mut self, cx: &LateContext, item_id: ast::NodeId) { - assert_eq!(self.current_item, item_id); - self.current_item = cx.tcx.map.get_parent(item_id); - } - - fn parent_def(&self, cx: &LateContext) -> DefId { - cx.tcx.map.local_def_id(self.current_item) - } -} - -impl LintPass for Deprecated { - fn get_lints(&self) -> LintArray { - lint_array!(DEPRECATED) - } -} - -impl LateLintPass for Deprecated { - fn check_item(&mut self, cx: &LateContext, item: &hir::Item) { - self.push_item(item.id); - stability::check_item(cx.tcx, - item, - false, - &mut |id, sp, stab, depr| self.lint(cx, id, sp, &stab, &depr)); - } - - fn check_item_post(&mut self, cx: &LateContext, item: &hir::Item) { - self.item_post(cx, item.id); - } - - fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { - stability::check_expr(cx.tcx, - e, - &mut |id, sp, stab, depr| self.lint(cx, id, sp, &stab, &depr)); - } - - fn check_path(&mut self, cx: &LateContext, path: &hir::Path, id: ast::NodeId) { - stability::check_path(cx.tcx, - path, - id, - &mut |id, sp, stab, depr| self.lint(cx, id, sp, &stab, &depr)); - } - - fn check_path_list_item(&mut self, cx: &LateContext, item: &hir::PathListItem) { - stability::check_path_list_item(cx.tcx, - item, - &mut |id, sp, stab, depr| { - self.lint(cx, id, sp, &stab, &depr) - }); - } - - fn check_pat(&mut self, cx: &LateContext, pat: &hir::Pat) { - stability::check_pat(cx.tcx, - pat, - &mut |id, sp, stab, depr| self.lint(cx, id, sp, &stab, &depr)); - } - - fn check_impl_item(&mut self, _: &LateContext, item: &hir::ImplItem) { - self.push_item(item.id); - } - - fn check_impl_item_post(&mut self, cx: &LateContext, item: &hir::ImplItem) { - self.item_post(cx, item.id); - } - - fn check_trait_item(&mut self, _: &LateContext, item: &hir::TraitItem) { - self.push_item(item.id); - } - - fn check_trait_item_post(&mut self, cx: &LateContext, item: &hir::TraitItem) { - self.item_post(cx, item.id); - } - - fn check_foreign_item(&mut self, _: &LateContext, item: &hir::ForeignItem) { - self.push_item(item.id); - } - - fn check_foreign_item_post(&mut self, cx: &LateContext, item: &hir::ForeignItem) { - self.item_post(cx, item.id); - } -} - declare_lint! 
{ DEPRECATED_ATTR, Warn, @@ -773,9 +635,9 @@ impl LintPass for DeprecatedAttr { impl EarlyLintPass for DeprecatedAttr { fn check_attribute(&mut self, cx: &EarlyContext, attr: &ast::Attribute) { - let name = &*attr.name(); + let name = attr.name(); for &&(n, _, ref g) in &self.depr_attrs { - if n == name { + if name == n { if let &AttributeGate::Gated(Stability::Deprecated(link), ref name, ref reason, @@ -807,18 +669,18 @@ impl LintPass for UnconditionalRecursion { } } -impl LateLintPass for UnconditionalRecursion { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnconditionalRecursion { fn check_fn(&mut self, cx: &LateContext, fn_kind: FnKind, _: &hir::FnDecl, - blk: &hir::Block, + blk: &hir::Expr, sp: Span, id: ast::NodeId) { let method = match fn_kind { FnKind::ItemFn(..) => None, FnKind::Method(..) => { - cx.tcx.impl_or_trait_item(cx.tcx.map.local_def_id(id)).as_opt_method() + Some(cx.tcx.associated_item(cx.tcx.map.local_def_id(id))) } // closures can't recur, so they don't matter. FnKind::Closure(_) => return, @@ -928,8 +790,12 @@ impl LateLintPass for UnconditionalRecursion { fn expr_refers_to_this_fn(tcx: TyCtxt, fn_id: ast::NodeId, id: ast::NodeId) -> bool { match tcx.map.get(id) { hir_map::NodeExpr(&hir::Expr { node: hir::ExprCall(ref callee, _), .. }) => { - tcx.expect_def_or_none(callee.id) - .map_or(false, |def| def.def_id() == tcx.map.local_def_id(fn_id)) + let def = if let hir::ExprPath(ref qpath) = callee.node { + tcx.tables().qpath_def(qpath, callee.id) + } else { + return false; + }; + def.def_id() == tcx.map.local_def_id(fn_id) } _ => false, } @@ -937,7 +803,7 @@ impl LateLintPass for UnconditionalRecursion { // Check if the expression `id` performs a call to `method`. fn expr_refers_to_this_method<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - method: &ty::Method, + method: &ty::AssociatedItem, id: ast::NodeId) -> bool { use rustc::ty::adjustment::*; @@ -967,10 +833,13 @@ impl LateLintPass for UnconditionalRecursion { // Check for calls to methods via explicit paths (e.g. `T::method()`). match tcx.map.get(id) { hir_map::NodeExpr(&hir::Expr { node: hir::ExprCall(ref callee, _), .. }) => { - // The callee is an arbitrary expression, - // it doesn't necessarily have a definition. - match tcx.expect_def_or_none(callee.id) { - Some(Def::Method(def_id)) => { + let def = if let hir::ExprPath(ref qpath) = callee.node { + tcx.tables().qpath_def(qpath, callee.id) + } else { + return false; + }; + match def { + Def::Method(def_id) => { let substs = tcx.tables().node_id_item_substs(callee.id) .unwrap_or_else(|| tcx.intern_substs(&[])); method_call_refers_to_method( @@ -986,14 +855,14 @@ impl LateLintPass for UnconditionalRecursion { // Check if the method call to the method with the ID `callee_id` // and instantiated with `callee_substs` refers to method `method`. fn method_call_refers_to_method<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - method: &ty::Method, + method: &ty::AssociatedItem, callee_id: DefId, callee_substs: &Substs<'tcx>, expr_id: ast::NodeId) -> bool { - let callee_item = tcx.impl_or_trait_item(callee_id); + let callee_item = tcx.associated_item(callee_id); - match callee_item.container() { + match callee_item.container { // This is an inherent method, so the `def_id` refers // directly to the method definition. ty::ImplContainer(_) => callee_id == method.def_id, @@ -1034,7 +903,7 @@ impl LateLintPass for UnconditionalRecursion { let container = ty::ImplContainer(vtable_impl.impl_def_id); // It matches if it comes from the same impl, // and has the same method name. 
- container == method.container && callee_item.name() == method.name + container == method.container && callee_item.name == method.name } // There's no way to know if this call is @@ -1063,7 +932,7 @@ impl LintPass for PluginAsLibrary { } } -impl LateLintPass for PluginAsLibrary { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for PluginAsLibrary { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { if cx.sess().plugin_registrar_fn.get().is_some() { // We're compiling a plugin; it's fine to link other plugins. @@ -1129,7 +998,7 @@ impl LintPass for InvalidNoMangleItems { } } -impl LateLintPass for InvalidNoMangleItems { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for InvalidNoMangleItems { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { match it.node { hir::ItemFn(.., ref generics, _) => { @@ -1183,7 +1052,7 @@ impl LintPass for MutableTransmutes { } } -impl LateLintPass for MutableTransmutes { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MutableTransmutes { fn check_expr(&mut self, cx: &LateContext, expr: &hir::Expr) { use syntax::abi::Abi::RustIntrinsic; @@ -1203,19 +1072,20 @@ impl LateLintPass for MutableTransmutes { (cx: &LateContext<'a, 'tcx>, expr: &hir::Expr) -> Option<(&'tcx ty::TypeVariants<'tcx>, &'tcx ty::TypeVariants<'tcx>)> { - match expr.node { - hir::ExprPath(..) => (), - _ => return None, - } - if let Def::Fn(did) = cx.tcx.expect_def(expr.id) { + let def = if let hir::ExprPath(ref qpath) = expr.node { + cx.tcx.tables().qpath_def(qpath, expr.id) + } else { + return None; + }; + if let Def::Fn(did) = def { if !def_id_is_transmute(cx, did) { return None; } let typ = cx.tcx.tables().node_id_to_type(expr.id); match typ.sty { ty::TyFnDef(.., ref bare_fn) if bare_fn.abi == RustIntrinsic => { - let from = bare_fn.sig.0.inputs[0]; - let to = bare_fn.sig.0.output; + let from = bare_fn.sig.skip_binder().inputs()[0]; + let to = bare_fn.sig.skip_binder().output(); return Some((&from.sty, &to.sty)); } _ => (), @@ -1225,11 +1095,11 @@ impl LateLintPass for MutableTransmutes { } fn def_id_is_transmute(cx: &LateContext, def_id: DefId) -> bool { - match cx.tcx.lookup_item_type(def_id).ty.sty { + match cx.tcx.item_type(def_id).sty { ty::TyFnDef(.., ref bfty) if bfty.abi == RustIntrinsic => (), _ => return false, } - cx.tcx.item_name(def_id).as_str() == "transmute" + cx.tcx.item_name(def_id) == "transmute" } } } @@ -1250,7 +1120,7 @@ impl LintPass for UnstableFeatures { } } -impl LateLintPass for UnstableFeatures { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnstableFeatures { fn check_attribute(&mut self, ctx: &LateContext, attr: &ast::Attribute) { if attr.meta().check_name("feature") { if let Some(items) = attr.meta().meta_item_list() { @@ -1277,12 +1147,12 @@ impl LintPass for UnionsWithDropFields { } } -impl LateLintPass for UnionsWithDropFields { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnionsWithDropFields { fn check_item(&mut self, ctx: &LateContext, item: &hir::Item) { if let hir::ItemUnion(ref vdata, _) = item.node { let param_env = &ty::ParameterEnvironment::for_item(ctx.tcx, item.id); for field in vdata.fields() { - let field_ty = ctx.tcx.tables().node_id_to_type(field.id); + let field_ty = ctx.tcx.item_type(ctx.tcx.map.local_def_id(field.id)); if ctx.tcx.type_needs_drop_given_env(field_ty, param_env) { ctx.span_lint(UNIONS_WITH_DROP_FIELDS, field.span, diff --git a/src/librustc_lint/lib.rs b/src/librustc_lint/lib.rs index 114c0ea556..a53d43b2a2 100644 --- a/src/librustc_lint/lib.rs +++ b/src/librustc_lint/lib.rs @@ -31,7 +31,6 @@ #![cfg_attr(test, feature(test))] 
#![feature(box_patterns)] #![feature(box_syntax)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(quote)] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] @@ -111,6 +110,7 @@ pub fn register_builtins(store: &mut lint::LintStore, sess: Option<&Session>) { add_early_builtin!(sess, UnusedParens, + UnusedImportBraces, ); add_early_builtin_with_new!(sess, @@ -129,7 +129,6 @@ pub fn register_builtins(store: &mut lint::LintStore, sess: Option<&Session>) { NonCamelCaseTypes, NonSnakeCase, NonUpperCaseGlobals, - UnusedImportBraces, NonShorthandFieldPatterns, UnusedUnsafe, UnsafeCode, @@ -145,7 +144,6 @@ pub fn register_builtins(store: &mut lint::LintStore, sess: Option<&Session>) { ); add_builtin_with_new!(sess, - Deprecated, TypeLimits, MissingDoc, MissingDebugImplementations, @@ -232,6 +230,14 @@ pub fn register_builtins(store: &mut lint::LintStore, sess: Option<&Session>) { id: LintId::of(EXTRA_REQUIREMENT_IN_IMPL), reference: "issue #37166 ", }, + FutureIncompatibleInfo { + id: LintId::of(LEGACY_DIRECTORY_OWNERSHIP), + reference: "issue #37872 ", + }, + FutureIncompatibleInfo { + id: LintId::of(LEGACY_IMPORTS), + reference: "issue #38260 ", + }, ]); // Register renamed and removed lints diff --git a/src/librustc_lint/types.rs b/src/librustc_lint/types.rs index b04759955a..751c9c3440 100644 --- a/src/librustc_lint/types.rs +++ b/src/librustc_lint/types.rs @@ -18,7 +18,7 @@ use rustc::traits::Reveal; use middle::const_val::ConstVal; use rustc_const_eval::eval_const_expr_partial; use rustc_const_eval::EvalHint::ExprTypeChecked; -use util::nodemap::FnvHashSet; +use util::nodemap::FxHashSet; use lint::{LateContext, LintContext, LintArray}; use lint::{LintPass, LateLintPass}; @@ -103,7 +103,7 @@ impl LintPass for TypeLimits { } } -impl LateLintPass for TypeLimits { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for TypeLimits { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { match e.node { hir::ExprUnary(hir::UnNeg, ref expr) => { @@ -219,9 +219,9 @@ impl LateLintPass for TypeLimits { ty::TyFloat(t) => { let (min, max) = float_ty_range(t); let lit_val: f64 = match lit.node { - ast::LitKind::Float(ref v, _) | - ast::LitKind::FloatUnsuffixed(ref v) => { - match v.parse() { + ast::LitKind::Float(v, _) | + ast::LitKind::FloatUnsuffixed(v) => { + match v.as_str().parse() { Ok(f) => f, Err(_) => return, } @@ -396,7 +396,7 @@ enum FfiResult { /// expanded to cover NonZero raw pointers and newtypes. /// FIXME: This duplicates code in trans. fn is_repr_nullable_ptr<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - def: ty::AdtDef<'tcx>, + def: &'tcx ty::AdtDef, substs: &Substs<'tcx>) -> bool { if def.variants.len() == 2 { @@ -428,7 +428,7 @@ fn is_repr_nullable_ptr<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { /// Check if the given type is "ffi-safe" (has a stable, well-defined /// representation which can be exported to C code). - fn check_type_for_ffi(&self, cache: &mut FnvHashSet>, ty: Ty<'tcx>) -> FfiResult { + fn check_type_for_ffi(&self, cache: &mut FxHashSet>, ty: Ty<'tcx>) -> FfiResult { use self::FfiResult::*; let cx = self.cx.tcx; @@ -572,7 +572,7 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { consider using a raw pointer instead") } - ty::TyTrait(..) => { + ty::TyDynamic(..) 
=> { FfiUnsafe("found Rust trait type in foreign module, \ consider using a raw pointer instead") } @@ -603,8 +603,8 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { } let sig = cx.erase_late_bound_regions(&bare_fn.sig); - if !sig.output.is_nil() { - let r = self.check_type_for_ffi(cache, sig.output); + if !sig.output().is_nil() { + let r = self.check_type_for_ffi(cache, sig.output()); match r { FfiSafe => {} _ => { @@ -612,7 +612,7 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { } } } - for arg in sig.inputs { + for arg in sig.inputs() { let r = self.check_type_for_ffi(cache, arg); match r { FfiSafe => {} @@ -639,7 +639,7 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { // any generic types right now: let ty = self.cx.tcx.normalize_associated_type(&ty); - match self.check_type_for_ffi(&mut FnvHashSet(), ty) { + match self.check_type_for_ffi(&mut FxHashSet(), ty) { FfiResult::FfiSafe => {} FfiResult::FfiUnsafe(s) => { self.cx.span_lint(IMPROPER_CTYPES, sp, s); @@ -675,16 +675,15 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { fn check_foreign_fn(&mut self, id: ast::NodeId, decl: &hir::FnDecl) { let def_id = self.cx.tcx.map.local_def_id(id); - let scheme = self.cx.tcx.lookup_item_type(def_id); - let sig = scheme.ty.fn_sig(); + let sig = self.cx.tcx.item_type(def_id).fn_sig(); let sig = self.cx.tcx.erase_late_bound_regions(&sig); - for (&input_ty, input_hir) in sig.inputs.iter().zip(&decl.inputs) { - self.check_type_for_ffi_and_report_errors(input_hir.ty.span, &input_ty); + for (input_ty, input_hir) in sig.inputs().iter().zip(&decl.inputs) { + self.check_type_for_ffi_and_report_errors(input_hir.ty.span, input_ty); } if let hir::Return(ref ret_hir) = decl.output { - let ret_ty = sig.output; + let ret_ty = sig.output(); if !ret_ty.is_nil() { self.check_type_for_ffi_and_report_errors(ret_hir.span, ret_ty); } @@ -693,8 +692,8 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> { fn check_foreign_static(&mut self, id: ast::NodeId, span: Span) { let def_id = self.cx.tcx.map.local_def_id(id); - let scheme = self.cx.tcx.lookup_item_type(def_id); - self.check_type_for_ffi_and_report_errors(span, scheme.ty); + let ty = self.cx.tcx.item_type(def_id); + self.check_type_for_ffi_and_report_errors(span, ty); } } @@ -707,7 +706,7 @@ impl LintPass for ImproperCTypes { } } -impl LateLintPass for ImproperCTypes { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for ImproperCTypes { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { let mut vis = ImproperCTypesVisitor { cx: cx }; if let hir::ItemForeignMod(ref nmod) = it.node { @@ -735,22 +734,24 @@ impl LintPass for VariantSizeDifferences { } } -impl LateLintPass for VariantSizeDifferences { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for VariantSizeDifferences { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { if let hir::ItemEnum(ref enum_definition, ref gens) = it.node { if gens.ty_params.is_empty() { // sizes only make sense for non-generic types - let t = cx.tcx.tables().node_id_to_type(it.id); + let t = cx.tcx.item_type(cx.tcx.map.local_def_id(it.id)); let layout = cx.tcx.infer_ctxt(None, None, Reveal::All).enter(|infcx| { let ty = cx.tcx.erase_regions(&t); - ty.layout(&infcx) - .unwrap_or_else(|e| bug!("failed to get layout for `{}`: {}", t, e)) + ty.layout(&infcx).unwrap_or_else(|e| { + bug!("failed to get layout for `{}`: {}", t, e) + }) }); if let Layout::General { ref variants, ref size, discr, .. 
} = *layout { let discr_size = Primitive::Int(discr).size(&cx.tcx.data_layout).bytes(); - debug!("enum `{}` is {} bytes large", t, size.bytes()); + debug!("enum `{}` is {} bytes large with layout:\n{:#?}", + t, size.bytes(), layout); let (largest, slargest, largest_index) = enum_definition.variants .iter() diff --git a/src/librustc_lint/unused.rs b/src/librustc_lint/unused.rs index 15430a5c9f..429bfb8e3d 100644 --- a/src/librustc_lint/unused.rs +++ b/src/librustc_lint/unused.rs @@ -8,10 +8,9 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use rustc::hir::pat_util; use rustc::ty; use rustc::ty::adjustment; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; use lint::{LateContext, EarlyContext, LintContext, LintArray}; use lint::{LintPass, EarlyLintPass, LateLintPass}; @@ -19,8 +18,8 @@ use std::collections::hash_map::Entry::{Occupied, Vacant}; use syntax::ast; use syntax::attr; -use syntax::feature_gate::{KNOWN_ATTRIBUTES, AttributeType}; -use syntax::parse::token::keywords; +use syntax::feature_gate::{BUILTIN_ATTRIBUTES, AttributeType}; +use syntax::symbol::keywords; use syntax::ptr::P; use syntax_pos::Span; @@ -42,13 +41,13 @@ impl UnusedMut { // collect all mutable pattern and group their NodeIDs by their Identifier to // avoid false warnings in match arms with multiple patterns - let mut mutables = FnvHashMap(); + let mut mutables = FxHashMap(); for p in pats { - pat_util::pat_bindings(p, |mode, id, _, path1| { + p.each_binding(|mode, id, _, path1| { let name = path1.node; if let hir::BindByValue(hir::MutMutable) = mode { if !name.as_str().starts_with("_") { - match mutables.entry(name.0 as usize) { + match mutables.entry(name) { Vacant(entry) => { entry.insert(vec![id]); } @@ -78,7 +77,7 @@ impl LintPass for UnusedMut { } } -impl LateLintPass for UnusedMut { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedMut { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { if let hir::ExprMatch(_, ref arms, _) = e.node { for a in arms { @@ -99,7 +98,7 @@ impl LateLintPass for UnusedMut { cx: &LateContext, _: FnKind, decl: &hir::FnDecl, - _: &hir::Block, + _: &hir::Expr, _: Span, _: ast::NodeId) { for a in &decl.inputs { @@ -129,7 +128,7 @@ impl LintPass for UnusedResults { } } -impl LateLintPass for UnusedResults { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedResults { fn check_stmt(&mut self, cx: &LateContext, s: &hir::Stmt) { let expr = match s.node { hir::StmtSemi(ref expr, _) => &**expr, @@ -162,7 +161,7 @@ impl LateLintPass for UnusedResults { // check for #[must_use="..."] if let Some(s) = attr.value_str() { msg.push_str(": "); - msg.push_str(&s); + msg.push_str(&s.as_str()); } cx.span_lint(UNUSED_MUST_USE, sp, &msg); return true; @@ -188,7 +187,7 @@ impl LintPass for UnusedUnsafe { } } -impl LateLintPass for UnusedUnsafe { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedUnsafe { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { if let hir::ExprBlock(ref blk) = e.node { // Don't warn about generated blocks, that'll just pollute the output. @@ -215,10 +214,10 @@ impl LintPass for PathStatements { } } -impl LateLintPass for PathStatements { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for PathStatements { fn check_stmt(&mut self, cx: &LateContext, s: &hir::Stmt) { if let hir::StmtSemi(ref expr, _) = s.node { - if let hir::ExprPath(..) 
= expr.node { + if let hir::ExprPath(_) = expr.node { cx.span_lint(PATH_STATEMENTS, s.span, "path statement with no effect"); } } @@ -240,12 +239,12 @@ impl LintPass for UnusedAttributes { } } -impl LateLintPass for UnusedAttributes { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedAttributes { fn check_attribute(&mut self, cx: &LateContext, attr: &ast::Attribute) { debug!("checking attribute: {:?}", attr); // Note that check_name() marks the attribute as used if it matches. - for &(ref name, ty, _) in KNOWN_ATTRIBUTES { + for &(ref name, ty, _) in BUILTIN_ATTRIBUTES { match ty { AttributeType::Whitelisted if attr.check_name(name) => { debug!("{:?} is Whitelisted", name); @@ -267,17 +266,17 @@ impl LateLintPass for UnusedAttributes { debug!("Emitting warning for: {:?}", attr); cx.span_lint(UNUSED_ATTRIBUTES, attr.span, "unused attribute"); // Is it a builtin attribute that must be used at the crate level? - let known_crate = KNOWN_ATTRIBUTES.iter() + let known_crate = BUILTIN_ATTRIBUTES.iter() .find(|&&(name, ty, _)| attr.name() == name && ty == AttributeType::CrateLevel) .is_some(); // Has a plugin registered this attribute as one which must be used at // the crate level? let plugin_crate = plugin_attributes.iter() - .find(|&&(ref x, t)| &*attr.name() == x && AttributeType::CrateLevel == t) + .find(|&&(ref x, t)| attr.name() == &**x && AttributeType::CrateLevel == t) .is_some(); if known_crate || plugin_crate { - let msg = match attr.node.style { + let msg = match attr.style { ast::AttrStyle::Outer => { "crate-level attribute should be an inner attribute: add an exclamation \ mark: #![foo]" @@ -406,11 +405,11 @@ impl LintPass for UnusedImportBraces { } } -impl LateLintPass for UnusedImportBraces { - fn check_item(&mut self, cx: &LateContext, item: &hir::Item) { - if let hir::ItemUse(ref view_path) = item.node { - if let hir::ViewPathList(_, ref items) = view_path.node { - if items.len() == 1 && items[0].node.name != keywords::SelfValue.name() { +impl EarlyLintPass for UnusedImportBraces { + fn check_item(&mut self, cx: &EarlyContext, item: &ast::Item) { + if let ast::ItemKind::Use(ref view_path) = item.node { + if let ast::ViewPathList(_, ref items) = view_path.node { + if items.len() == 1 && items[0].node.name.name != keywords::SelfValue.name() { let msg = format!("braces around {} is unnecessary", items[0].node.name); cx.span_lint(UNUSED_IMPORT_BRACES, item.span, &msg); } @@ -434,7 +433,7 @@ impl LintPass for UnusedAllocation { } } -impl LateLintPass for UnusedAllocation { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedAllocation { fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) { match e.node { hir::ExprBox(_) => {} diff --git a/src/librustc_llvm/Cargo.lock b/src/librustc_llvm/Cargo.lock new file mode 100644 index 0000000000..17678ef2bb --- /dev/null +++ b/src/librustc_llvm/Cargo.lock @@ -0,0 +1,22 @@ +[root] +name = "rustc_llvm" +version = "0.0.0" +dependencies = [ + "build_helper 0.1.0", + "gcc 0.3.28 (registry+https://github.com/rust-lang/crates.io-index)", + "rustc_bitflags 0.0.0", +] + +[[package]] +name = "build_helper" +version = "0.1.0" + +[[package]] +name = "gcc" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" + +[[package]] +name = "rustc_bitflags" +version = "0.0.0" + diff --git a/src/librustc_llvm/build.rs b/src/librustc_llvm/build.rs index 4d3a4d09dc..86c40a0208 100644 --- a/src/librustc_llvm/build.rs +++ b/src/librustc_llvm/build.rs @@ -17,6 +17,35 @@ use std::path::{PathBuf, Path}; use build_helper::output; +fn 
detect_llvm_link(llvm_config: &Path) -> (&'static str, Option<&'static str>) { + let mut version_cmd = Command::new(llvm_config); + version_cmd.arg("--version"); + let version_output = output(&mut version_cmd); + let mut parts = version_output.split('.').take(2) + .filter_map(|s| s.parse::().ok()); + if let (Some(major), Some(minor)) = (parts.next(), parts.next()) { + if major > 3 || (major == 3 && minor >= 9) { + // Force the link mode we want, preferring static by default, but + // possibly overridden by `configure --enable-llvm-link-shared`. + if env::var_os("LLVM_LINK_SHARED").is_some() { + return ("dylib", Some("--link-shared")); + } else { + return ("static", Some("--link-static")); + } + } else if major == 3 && minor == 8 { + // Find out LLVM's default linking mode. + let mut mode_cmd = Command::new(llvm_config); + mode_cmd.arg("--shared-mode"); + if output(&mut mode_cmd).trim() == "shared" { + return ("dylib", None); + } else { + return ("static", None); + } + } + } + ("static", None) +} + fn main() { println!("cargo:rustc-cfg=cargobuild"); @@ -66,7 +95,7 @@ fn main() { let is_crossed = target != host; let optional_components = - ["x86", "arm", "aarch64", "mips", "powerpc", "pnacl", "systemz", "jsbackend"]; + ["x86", "arm", "aarch64", "mips", "powerpc", "pnacl", "systemz", "jsbackend", "msp430"]; // FIXME: surely we don't need all these components, right? Stuff like mcjit // or interpreter the compiler itself never uses. @@ -116,6 +145,9 @@ fn main() { cfg.flag("-DLLVM_RUSTLLVM"); } + println!("cargo:rerun-if-changed=../rustllvm/PassWrapper.cpp"); + println!("cargo:rerun-if-changed=../rustllvm/RustWrapper.cpp"); + println!("cargo:rerun-if-changed=../rustllvm/ArchiveWrapper.cpp"); cfg.file("../rustllvm/PassWrapper.cpp") .file("../rustllvm/RustWrapper.cpp") .file("../rustllvm/ArchiveWrapper.cpp") @@ -123,22 +155,16 @@ fn main() { .cpp_link_stdlib(None) // we handle this below .compile("librustllvm.a"); + let (llvm_kind, llvm_link_arg) = detect_llvm_link(&llvm_config); + // Link in all LLVM libraries, if we're uwring the "wrong" llvm-config then // we don't pick up system libs because unfortunately they're for the host // of llvm-config, not the target that we're attempting to link. let mut cmd = Command::new(&llvm_config); cmd.arg("--libs"); - // Force static linking with "--link-static" if available. 
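// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the patch: the decision logic inside the
// new detect_llvm_link above, pulled out so it can be exercised with a plain
// version string instead of running `llvm-config`. The function name and the
// `prefer_shared` flag (standing in for the LLVM_LINK_SHARED env check) are
// ours; only the version comparison mirrors the build script, and the 3.8
// `--shared-mode` probe is omitted here.
fn link_mode_for(version_output: &str, prefer_shared: bool) -> (&'static str, Option<&'static str>) {
    let mut parts = version_output
        .split('.')
        .take(2)
        .filter_map(|s| s.parse::<u32>().ok());
    if let (Some(major), Some(minor)) = (parts.next(), parts.next()) {
        if major > 3 || (major == 3 && minor >= 9) {
            // LLVM 3.9+ accepts an explicit --link-shared / --link-static.
            return if prefer_shared {
                ("dylib", Some("--link-shared"))
            } else {
                ("static", Some("--link-static"))
            };
        }
    }
    // Older LLVM: no explicit flag, default to static linking.
    ("static", None)
}

fn demo_link_mode() {
    assert_eq!(link_mode_for("3.9.1\n", false), ("static", Some("--link-static")));
    assert_eq!(link_mode_for("3.9.1\n", true), ("dylib", Some("--link-shared")));
    assert_eq!(link_mode_for("3.7.0\n", false), ("static", None));
}
// ---------------------------------------------------------------------------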
- let mut version_cmd = Command::new(&llvm_config); - version_cmd.arg("--version"); - let version_output = output(&mut version_cmd); - let mut parts = version_output.split('.'); - if let (Some(major), Some(minor)) = (parts.next().and_then(|s| s.parse::().ok()), - parts.next().and_then(|s| s.parse::().ok())) { - if major > 3 || (major == 3 && minor >= 8) { - cmd.arg("--link-static"); - } + if let Some(link_arg) = llvm_link_arg { + cmd.arg(link_arg); } if !is_crossed { @@ -174,7 +200,7 @@ fn main() { } let kind = if name.starts_with("LLVM") { - "static" + llvm_kind } else { "dylib" }; @@ -204,6 +230,13 @@ fn main() { } } + // OpenBSD has a particular C++ runtime library name + let stdcppname = if target.contains("openbsd") { + "estdc++" + } else { + "stdc++" + }; + // C++ runtime library if !target.contains("msvc") { if let Some(s) = env::var_os("LLVM_STATIC_STDCPP") { @@ -211,11 +244,11 @@ fn main() { let path = PathBuf::from(s); println!("cargo:rustc-link-search=native={}", path.parent().unwrap().display()); - println!("cargo:rustc-link-lib=static=stdc++"); + println!("cargo:rustc-link-lib=static={}", stdcppname); } else if cxxflags.contains("stdlib=libc++") { println!("cargo:rustc-link-lib=c++"); } else { - println!("cargo:rustc-link-lib=stdc++"); + println!("cargo:rustc-link-lib={}", stdcppname); } } } diff --git a/src/librustc_llvm/diagnostic.rs b/src/librustc_llvm/diagnostic.rs index 8767f03b3e..cef6199a74 100644 --- a/src/librustc_llvm/diagnostic.rs +++ b/src/librustc_llvm/diagnostic.rs @@ -13,7 +13,7 @@ pub use self::OptimizationDiagnosticKind::*; pub use self::Diagnostic::*; -use libc::{c_char, c_uint}; +use libc::c_uint; use std::ptr; use {DiagnosticInfoRef, TwineRef, ValueRef}; @@ -45,32 +45,37 @@ impl OptimizationDiagnosticKind { pub struct OptimizationDiagnostic { pub kind: OptimizationDiagnosticKind, - pub pass_name: *const c_char, + pub pass_name: String, pub function: ValueRef, pub debug_loc: DebugLocRef, - pub message: TwineRef, + pub message: String, } impl OptimizationDiagnostic { unsafe fn unpack(kind: OptimizationDiagnosticKind, di: DiagnosticInfoRef) -> OptimizationDiagnostic { + let mut function = ptr::null_mut(); + let mut debug_loc = ptr::null_mut(); - let mut opt = OptimizationDiagnostic { + let mut message = None; + let pass_name = super::build_string(|pass_name| + message = super::build_string(|message| + super::LLVMRustUnpackOptimizationDiagnostic(di, + pass_name, + &mut function, + &mut debug_loc, + message) + ) + ); + + OptimizationDiagnostic { kind: kind, - pass_name: ptr::null(), - function: ptr::null_mut(), - debug_loc: ptr::null_mut(), - message: ptr::null_mut(), - }; - - super::LLVMRustUnpackOptimizationDiagnostic(di, - &mut opt.pass_name, - &mut opt.function, - &mut opt.debug_loc, - &mut opt.message); - - opt + pass_name: pass_name.expect("got a non-UTF8 pass name from LLVM"), + function: function, + debug_loc: debug_loc, + message: message.expect("got a non-UTF8 OptimizationDiagnostic message from LLVM") + } } } diff --git a/src/librustc_llvm/ffi.rs b/src/librustc_llvm/ffi.rs index 78a9d67ed7..f3d4c17654 100644 --- a/src/librustc_llvm/ffi.rs +++ b/src/librustc_llvm/ffi.rs @@ -11,7 +11,7 @@ use debuginfo::{DIBuilderRef, DIDescriptor, DIFile, DILexicalBlock, DISubprogram, DIType, DIBasicType, DIDerivedType, DICompositeType, DIScope, DIVariable, DIGlobalVariable, DIArray, DISubrange, DITemplateTypeParameter, DIEnumerator, - DINameSpace}; + DINameSpace, DIFlags}; use libc::{c_uint, c_int, size_t, c_char}; use libc::{c_longlong, c_ulonglong, c_void}; @@ -41,6 
+41,7 @@ pub enum CallConv { ColdCallConv = 9, X86StdcallCallConv = 64, X86FastcallCallConv = 65, + ArmAapcsCallConv = 67, X86_64_SysV = 78, X86_64_Win64 = 79, X86_VectorCall = 80, @@ -63,6 +64,15 @@ pub enum Linkage { CommonLinkage = 10, } +// LLVMRustVisibility +#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)] +#[repr(C)] +pub enum Visibility { + Default = 0, + Hidden = 1, + Protected = 2, +} + /// LLVMDiagnosticSeverity #[derive(Copy, Clone, Debug)] #[repr(C)] @@ -82,59 +92,31 @@ pub enum DLLStorageClass { DllExport = 2, // Function to be accessible from DLL. } -bitflags! { - #[derive(Default, Debug)] - flags Attribute : u64 { - const ZExt = 1 << 0, - const SExt = 1 << 1, - const NoReturn = 1 << 2, - const InReg = 1 << 3, - const StructRet = 1 << 4, - const NoUnwind = 1 << 5, - const NoAlias = 1 << 6, - const ByVal = 1 << 7, - const Nest = 1 << 8, - const ReadNone = 1 << 9, - const ReadOnly = 1 << 10, - const NoInline = 1 << 11, - const AlwaysInline = 1 << 12, - const OptimizeForSize = 1 << 13, - const StackProtect = 1 << 14, - const StackProtectReq = 1 << 15, - const NoCapture = 1 << 21, - const NoRedZone = 1 << 22, - const NoImplicitFloat = 1 << 23, - const Naked = 1 << 24, - const InlineHint = 1 << 25, - const ReturnsTwice = 1 << 29, - const UWTable = 1 << 30, - const NonLazyBind = 1 << 31, - - // Some of these are missing from the LLVM C API, the rest are - // present, but commented out, and preceded by the following warning: - // FIXME: These attributes are currently not included in the C API as - // a temporary measure until the API/ABI impact to the C API is understood - // and the path forward agreed upon. - const SanitizeAddress = 1 << 32, - const MinSize = 1 << 33, - const NoDuplicate = 1 << 34, - const StackProtectStrong = 1 << 35, - const SanitizeThread = 1 << 36, - const SanitizeMemory = 1 << 37, - const NoBuiltin = 1 << 38, - const Returned = 1 << 39, - const Cold = 1 << 40, - const Builtin = 1 << 41, - const OptimizeNone = 1 << 42, - const InAlloca = 1 << 43, - const NonNull = 1 << 44, - const JumpTable = 1 << 45, - const Convergent = 1 << 46, - const SafeStack = 1 << 47, - const NoRecurse = 1 << 48, - const InaccessibleMemOnly = 1 << 49, - const InaccessibleMemOrArgMemOnly = 1 << 50, - } +/// Matches LLVMRustAttribute in rustllvm.h +/// Semantically a subset of the C++ enum llvm::Attribute::AttrKind, +/// though it is not ABI compatible (since it's a C++ enum) +#[repr(C)] +#[derive(Copy, Clone, Debug)] +pub enum Attribute { + AlwaysInline = 0, + ByVal = 1, + Cold = 2, + InlineHint = 3, + MinSize = 4, + Naked = 5, + NoAlias = 6, + NoCapture = 7, + NoInline = 8, + NonNull = 9, + NoRedZone = 10, + NoReturn = 11, + NoUnwind = 12, + OptimizeForSize = 13, + ReadOnly = 14, + SExt = 15, + StructRet = 16, + UWTable = 17, + ZExt = 18, } /// LLVMIntPredicate @@ -426,8 +408,8 @@ pub type OperandBundleDefRef = *mut OperandBundleDef_opaque; pub type DiagnosticHandler = unsafe extern "C" fn(DiagnosticInfoRef, *mut c_void); pub type InlineAsmDiagHandler = unsafe extern "C" fn(SMDiagnosticRef, *const c_void, c_uint); + pub mod debuginfo { - pub use self::DIDescriptorFlags::*; use super::MetadataRef; #[allow(missing_copy_implementations)] @@ -452,24 +434,29 @@ pub mod debuginfo { pub type DIEnumerator = DIDescriptor; pub type DITemplateTypeParameter = DIDescriptor; - #[derive(Copy, Clone)] - pub enum DIDescriptorFlags { - FlagPrivate = 1 << 0, - FlagProtected = 1 << 1, - FlagFwdDecl = 1 << 2, - FlagAppleBlock = 1 << 3, - FlagBlockByrefStruct = 1 << 4, - FlagVirtual = 1 << 5, - 
FlagArtificial = 1 << 6, - FlagExplicit = 1 << 7, - FlagPrototyped = 1 << 8, - FlagObjcClassComplete = 1 << 9, - FlagObjectPointer = 1 << 10, - FlagVector = 1 << 11, - FlagStaticMember = 1 << 12, - FlagIndirectVariable = 1 << 13, - FlagLValueReference = 1 << 14, - FlagRValueReference = 1 << 15, + // These values **must** match with LLVMRustDIFlags!! + bitflags! { + #[repr(C)] + #[derive(Debug, Default)] + flags DIFlags: ::libc::uint32_t { + const FlagZero = 0, + const FlagPrivate = 1, + const FlagProtected = 2, + const FlagPublic = 3, + const FlagFwdDecl = (1 << 2), + const FlagAppleBlock = (1 << 3), + const FlagBlockByrefStruct = (1 << 4), + const FlagVirtual = (1 << 5), + const FlagArtificial = (1 << 6), + const FlagExplicit = (1 << 7), + const FlagPrototyped = (1 << 8), + const FlagObjcClassComplete = (1 << 9), + const FlagObjectPointer = (1 << 10), + const FlagVector = (1 << 11), + const FlagStaticMember = (1 << 12), + const FlagLValueReference = (1 << 13), + const FlagRValueReference = (1 << 14), + } } } @@ -484,11 +471,9 @@ pub mod debuginfo { // generates an llvmdeps.rs file next to this one which will be // automatically updated whenever LLVM is updated to include an up-to-date // set of the libraries we need to link to LLVM for. -#[link(name = "rustllvm", kind = "static")] -#[cfg(not(cargobuild))] -extern "C" {} - -#[linked_from = "rustllvm"] // not quite true but good enough +#[cfg_attr(not(all(stage0,cargobuild)), + link(name = "rustllvm", kind = "static"))] // not quite true but good enough +#[cfg_attr(stage0, linked_from = "rustllvm")] extern "C" { // Create and destroy contexts. pub fn LLVMContextCreate() -> ContextRef; @@ -505,10 +490,6 @@ extern "C" { pub fn LLVMGetDataLayout(M: ModuleRef) -> *const c_char; pub fn LLVMSetDataLayout(M: ModuleRef, Triple: *const c_char); - /// Target triple. See Module::getTargetTriple. - pub fn LLVMGetTarget(M: ModuleRef) -> *const c_char; - pub fn LLVMSetTarget(M: ModuleRef, Triple: *const c_char); - /// See Module::dump. pub fn LLVMDumpModule(M: ModuleRef); @@ -518,8 +499,8 @@ extern "C" { /// See llvm::LLVMTypeKind::getTypeID. pub fn LLVMRustGetTypeKind(Ty: TypeRef) -> TypeKind; - /// See llvm::LLVMType::getContext. 
- pub fn LLVMGetTypeContext(Ty: TypeRef) -> ContextRef; + /// See llvm::Value::getContext + pub fn LLVMRustGetValueContext(V: ValueRef) -> ContextRef; // Operations on integer types pub fn LLVMInt1TypeInContext(C: ContextRef) -> TypeRef; @@ -534,9 +515,6 @@ extern "C" { // Operations on real types pub fn LLVMFloatTypeInContext(C: ContextRef) -> TypeRef; pub fn LLVMDoubleTypeInContext(C: ContextRef) -> TypeRef; - pub fn LLVMX86FP80TypeInContext(C: ContextRef) -> TypeRef; - pub fn LLVMFP128TypeInContext(C: ContextRef) -> TypeRef; - pub fn LLVMPPCFP128TypeInContext(C: ContextRef) -> TypeRef; // Operations on function types pub fn LLVMFunctionType(ReturnType: TypeRef, @@ -544,7 +522,6 @@ extern "C" { ParamCount: c_uint, IsVarArg: Bool) -> TypeRef; - pub fn LLVMIsFunctionVarArg(FunctionTy: TypeRef) -> Bool; pub fn LLVMGetReturnType(FunctionTy: TypeRef) -> TypeRef; pub fn LLVMCountParamTypes(FunctionTy: TypeRef) -> c_uint; pub fn LLVMGetParamTypes(FunctionTy: TypeRef, Dest: *mut TypeRef); @@ -566,20 +543,16 @@ extern "C" { pub fn LLVMGetElementType(Ty: TypeRef) -> TypeRef; pub fn LLVMGetArrayLength(ArrayTy: TypeRef) -> c_uint; - pub fn LLVMGetPointerAddressSpace(PointerTy: TypeRef) -> c_uint; - pub fn LLVMGetPointerToGlobal(EE: ExecutionEngineRef, V: ValueRef) -> *const c_void; pub fn LLVMGetVectorSize(VectorTy: TypeRef) -> c_uint; // Operations on other types pub fn LLVMVoidTypeInContext(C: ContextRef) -> TypeRef; - pub fn LLVMLabelTypeInContext(C: ContextRef) -> TypeRef; pub fn LLVMRustMetadataTypeInContext(C: ContextRef) -> TypeRef; // Operations on all values pub fn LLVMTypeOf(Val: ValueRef) -> TypeRef; pub fn LLVMGetValueName(Val: ValueRef) -> *const c_char; pub fn LLVMSetValueName(Val: ValueRef, Name: *const c_char); - pub fn LLVMDumpValue(Val: ValueRef); pub fn LLVMReplaceAllUsesWith(OldVal: ValueRef, NewVal: ValueRef); pub fn LLVMSetMetadata(Val: ValueRef, KindID: c_uint, Node: ValueRef); @@ -587,45 +560,25 @@ extern "C" { pub fn LLVMGetFirstUse(Val: ValueRef) -> UseRef; pub fn LLVMGetNextUse(U: UseRef) -> UseRef; pub fn LLVMGetUser(U: UseRef) -> ValueRef; - pub fn LLVMGetUsedValue(U: UseRef) -> ValueRef; // Operations on Users - pub fn LLVMGetNumOperands(Val: ValueRef) -> c_int; pub fn LLVMGetOperand(Val: ValueRef, Index: c_uint) -> ValueRef; - pub fn LLVMSetOperand(Val: ValueRef, Index: c_uint, Op: ValueRef); // Operations on constants of any type pub fn LLVMConstNull(Ty: TypeRef) -> ValueRef; - // all zeroes - pub fn LLVMConstAllOnes(Ty: TypeRef) -> ValueRef; pub fn LLVMConstICmp(Pred: IntPredicate, V1: ValueRef, V2: ValueRef) -> ValueRef; pub fn LLVMConstFCmp(Pred: RealPredicate, V1: ValueRef, V2: ValueRef) -> ValueRef; // only for isize/vector pub fn LLVMGetUndef(Ty: TypeRef) -> ValueRef; - pub fn LLVMIsConstant(Val: ValueRef) -> Bool; pub fn LLVMIsNull(Val: ValueRef) -> Bool; pub fn LLVMIsUndef(Val: ValueRef) -> Bool; - pub fn LLVMConstPointerNull(Ty: TypeRef) -> ValueRef; // Operations on metadata - pub fn LLVMMDStringInContext(C: ContextRef, Str: *const c_char, SLen: c_uint) -> ValueRef; pub fn LLVMMDNodeInContext(C: ContextRef, Vals: *const ValueRef, Count: c_uint) -> ValueRef; - pub fn LLVMAddNamedMetadataOperand(M: ModuleRef, Str: *const c_char, Val: ValueRef); // Operations on scalar constants pub fn LLVMConstInt(IntTy: TypeRef, N: c_ulonglong, SignExtend: Bool) -> ValueRef; - pub fn LLVMConstIntOfString(IntTy: TypeRef, Text: *const c_char, Radix: u8) -> ValueRef; - pub fn LLVMConstIntOfStringAndSize(IntTy: TypeRef, - Text: *const c_char, - SLen: c_uint, - Radix: u8) - -> 
ValueRef; pub fn LLVMConstReal(RealTy: TypeRef, N: f64) -> ValueRef; - pub fn LLVMConstRealOfString(RealTy: TypeRef, Text: *const c_char) -> ValueRef; - pub fn LLVMConstRealOfStringAndSize(RealTy: TypeRef, - Text: *const c_char, - SLen: c_uint) - -> ValueRef; pub fn LLVMConstIntGetZExtValue(ConstantVal: ValueRef) -> c_ulonglong; pub fn LLVMConstIntGetSExtValue(ConstantVal: ValueRef) -> c_longlong; @@ -649,28 +602,18 @@ extern "C" { pub fn LLVMConstVector(ScalarConstantVals: *const ValueRef, Size: c_uint) -> ValueRef; // Constant expressions - pub fn LLVMAlignOf(Ty: TypeRef) -> ValueRef; pub fn LLVMSizeOf(Ty: TypeRef) -> ValueRef; pub fn LLVMConstNeg(ConstantVal: ValueRef) -> ValueRef; - pub fn LLVMConstNSWNeg(ConstantVal: ValueRef) -> ValueRef; - pub fn LLVMConstNUWNeg(ConstantVal: ValueRef) -> ValueRef; pub fn LLVMConstFNeg(ConstantVal: ValueRef) -> ValueRef; pub fn LLVMConstNot(ConstantVal: ValueRef) -> ValueRef; pub fn LLVMConstAdd(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNSWAdd(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNUWAdd(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstFAdd(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstSub(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNSWSub(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNUWSub(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstFSub(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstMul(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNSWMul(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstNUWMul(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstFMul(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstUDiv(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstSDiv(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstExactSDiv(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstFDiv(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstURem(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstSRem(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; @@ -681,19 +624,8 @@ extern "C" { pub fn LLVMConstShl(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstLShr(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; pub fn LLVMConstAShr(LHSConstant: ValueRef, RHSConstant: ValueRef) -> ValueRef; - pub fn LLVMConstGEP(ConstantVal: ValueRef, - ConstantIndices: *const ValueRef, - NumIndices: c_uint) - -> ValueRef; - pub fn LLVMConstInBoundsGEP(ConstantVal: ValueRef, - ConstantIndices: *const ValueRef, - NumIndices: c_uint) - -> ValueRef; pub fn LLVMConstTrunc(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstSExt(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstZExt(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstFPTrunc(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstFPExt(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstUIToFP(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstSIToFP(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstFPToUI(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; @@ -701,42 +633,19 @@ extern 
"C" { pub fn LLVMConstPtrToInt(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstIntToPtr(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstBitCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstZExtOrBitCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstSExtOrBitCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstTruncOrBitCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstPointerCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; pub fn LLVMConstIntCast(ConstantVal: ValueRef, ToType: TypeRef, isSigned: Bool) -> ValueRef; pub fn LLVMConstFPCast(ConstantVal: ValueRef, ToType: TypeRef) -> ValueRef; - pub fn LLVMConstSelect(ConstantCondition: ValueRef, - ConstantIfTrue: ValueRef, - ConstantIfFalse: ValueRef) - -> ValueRef; - pub fn LLVMConstExtractElement(VectorConstant: ValueRef, IndexConstant: ValueRef) -> ValueRef; - pub fn LLVMConstInsertElement(VectorConstant: ValueRef, - ElementValueConstant: ValueRef, - IndexConstant: ValueRef) - -> ValueRef; - pub fn LLVMConstShuffleVector(VectorAConstant: ValueRef, - VectorBConstant: ValueRef, - MaskConstant: ValueRef) - -> ValueRef; pub fn LLVMConstExtractValue(AggConstant: ValueRef, IdxList: *const c_uint, NumIdx: c_uint) -> ValueRef; - pub fn LLVMConstInsertValue(AggConstant: ValueRef, - ElementValueConstant: ValueRef, - IdxList: *const c_uint, - NumIdx: c_uint) - -> ValueRef; pub fn LLVMConstInlineAsm(Ty: TypeRef, AsmString: *const c_char, Constraints: *const c_char, HasSideEffects: Bool, IsAlignStack: Bool) -> ValueRef; - pub fn LLVMBlockAddress(F: ValueRef, BB: BasicBlockRef) -> ValueRef; - // Operations on global variables, functions, and aliases (globals) @@ -746,8 +655,8 @@ extern "C" { pub fn LLVMRustSetLinkage(Global: ValueRef, RustLinkage: Linkage); pub fn LLVMGetSection(Global: ValueRef) -> *const c_char; pub fn LLVMSetSection(Global: ValueRef, Section: *const c_char); - pub fn LLVMGetVisibility(Global: ValueRef) -> c_uint; - pub fn LLVMSetVisibility(Global: ValueRef, Viz: c_uint); + pub fn LLVMRustGetVisibility(Global: ValueRef) -> Visibility; + pub fn LLVMRustSetVisibility(Global: ValueRef, Viz: Visibility); pub fn LLVMGetAlignment(Global: ValueRef) -> c_uint; pub fn LLVMSetAlignment(Global: ValueRef, Bytes: c_uint); pub fn LLVMSetDLLStorageClass(V: ValueRef, C: DLLStorageClass); @@ -756,126 +665,60 @@ extern "C" { // Operations on global variables pub fn LLVMIsAGlobalVariable(GlobalVar: ValueRef) -> ValueRef; pub fn LLVMAddGlobal(M: ModuleRef, Ty: TypeRef, Name: *const c_char) -> ValueRef; - pub fn LLVMAddGlobalInAddressSpace(M: ModuleRef, - Ty: TypeRef, - Name: *const c_char, - AddressSpace: c_uint) - -> ValueRef; pub fn LLVMGetNamedGlobal(M: ModuleRef, Name: *const c_char) -> ValueRef; pub fn LLVMRustGetOrInsertGlobal(M: ModuleRef, Name: *const c_char, T: TypeRef) -> ValueRef; pub fn LLVMGetFirstGlobal(M: ModuleRef) -> ValueRef; - pub fn LLVMGetLastGlobal(M: ModuleRef) -> ValueRef; pub fn LLVMGetNextGlobal(GlobalVar: ValueRef) -> ValueRef; - pub fn LLVMGetPreviousGlobal(GlobalVar: ValueRef) -> ValueRef; pub fn LLVMDeleteGlobal(GlobalVar: ValueRef); pub fn LLVMGetInitializer(GlobalVar: ValueRef) -> ValueRef; pub fn LLVMSetInitializer(GlobalVar: ValueRef, ConstantVal: ValueRef); - pub fn LLVMIsThreadLocal(GlobalVar: ValueRef) -> Bool; pub fn LLVMSetThreadLocal(GlobalVar: ValueRef, IsThreadLocal: Bool); pub fn LLVMIsGlobalConstant(GlobalVar: ValueRef) -> Bool; pub fn 
LLVMSetGlobalConstant(GlobalVar: ValueRef, IsConstant: Bool); pub fn LLVMRustGetNamedValue(M: ModuleRef, Name: *const c_char) -> ValueRef; - // Operations on aliases - pub fn LLVMAddAlias(M: ModuleRef, - Ty: TypeRef, - Aliasee: ValueRef, - Name: *const c_char) - -> ValueRef; - // Operations on functions pub fn LLVMAddFunction(M: ModuleRef, Name: *const c_char, FunctionTy: TypeRef) -> ValueRef; pub fn LLVMGetNamedFunction(M: ModuleRef, Name: *const c_char) -> ValueRef; pub fn LLVMGetFirstFunction(M: ModuleRef) -> ValueRef; - pub fn LLVMGetLastFunction(M: ModuleRef) -> ValueRef; pub fn LLVMGetNextFunction(Fn: ValueRef) -> ValueRef; - pub fn LLVMGetPreviousFunction(Fn: ValueRef) -> ValueRef; - pub fn LLVMDeleteFunction(Fn: ValueRef); pub fn LLVMRustGetOrInsertFunction(M: ModuleRef, Name: *const c_char, FunctionTy: TypeRef) -> ValueRef; - pub fn LLVMGetIntrinsicID(Fn: ValueRef) -> c_uint; - pub fn LLVMGetFunctionCallConv(Fn: ValueRef) -> c_uint; pub fn LLVMSetFunctionCallConv(Fn: ValueRef, CC: c_uint); - pub fn LLVMGetGC(Fn: ValueRef) -> *const c_char; - pub fn LLVMSetGC(Fn: ValueRef, Name: *const c_char); pub fn LLVMRustAddDereferenceableAttr(Fn: ValueRef, index: c_uint, bytes: u64); - pub fn LLVMRustAddFunctionAttribute(Fn: ValueRef, index: c_uint, PA: u64); - pub fn LLVMRustAddFunctionAttrString(Fn: ValueRef, index: c_uint, Name: *const c_char); + pub fn LLVMRustAddFunctionAttribute(Fn: ValueRef, index: c_uint, attr: Attribute); pub fn LLVMRustAddFunctionAttrStringValue(Fn: ValueRef, index: c_uint, Name: *const c_char, Value: *const c_char); - pub fn LLVMRustRemoveFunctionAttributes(Fn: ValueRef, index: c_uint, attr: u64); - pub fn LLVMRustRemoveFunctionAttrString(Fn: ValueRef, index: c_uint, Name: *const c_char); - pub fn LLVMGetFunctionAttr(Fn: ValueRef) -> c_uint; - pub fn LLVMRemoveFunctionAttr(Fn: ValueRef, val: c_uint); + pub fn LLVMRustRemoveFunctionAttributes(Fn: ValueRef, index: c_uint, attr: Attribute); // Operations on parameters pub fn LLVMCountParams(Fn: ValueRef) -> c_uint; - pub fn LLVMGetParams(Fn: ValueRef, Params: *const ValueRef); pub fn LLVMGetParam(Fn: ValueRef, Index: c_uint) -> ValueRef; - pub fn LLVMGetParamParent(Inst: ValueRef) -> ValueRef; - pub fn LLVMGetFirstParam(Fn: ValueRef) -> ValueRef; - pub fn LLVMGetLastParam(Fn: ValueRef) -> ValueRef; - pub fn LLVMGetNextParam(Arg: ValueRef) -> ValueRef; - pub fn LLVMGetPreviousParam(Arg: ValueRef) -> ValueRef; - pub fn LLVMAddAttribute(Arg: ValueRef, PA: c_uint); - pub fn LLVMRemoveAttribute(Arg: ValueRef, PA: c_uint); - pub fn LLVMGetAttribute(Arg: ValueRef) -> c_uint; - pub fn LLVMSetParamAlignment(Arg: ValueRef, align: c_uint); // Operations on basic blocks pub fn LLVMBasicBlockAsValue(BB: BasicBlockRef) -> ValueRef; - pub fn LLVMValueIsBasicBlock(Val: ValueRef) -> Bool; - pub fn LLVMValueAsBasicBlock(Val: ValueRef) -> BasicBlockRef; pub fn LLVMGetBasicBlockParent(BB: BasicBlockRef) -> ValueRef; - pub fn LLVMCountBasicBlocks(Fn: ValueRef) -> c_uint; - pub fn LLVMGetBasicBlocks(Fn: ValueRef, BasicBlocks: *const ValueRef); - pub fn LLVMGetFirstBasicBlock(Fn: ValueRef) -> BasicBlockRef; - pub fn LLVMGetLastBasicBlock(Fn: ValueRef) -> BasicBlockRef; - pub fn LLVMGetNextBasicBlock(BB: BasicBlockRef) -> BasicBlockRef; - pub fn LLVMGetPreviousBasicBlock(BB: BasicBlockRef) -> BasicBlockRef; - pub fn LLVMGetEntryBasicBlock(Fn: ValueRef) -> BasicBlockRef; - pub fn LLVMAppendBasicBlockInContext(C: ContextRef, Fn: ValueRef, Name: *const c_char) -> BasicBlockRef; - pub fn LLVMInsertBasicBlockInContext(C: ContextRef, - BB: 
BasicBlockRef, - Name: *const c_char) - -> BasicBlockRef; pub fn LLVMDeleteBasicBlock(BB: BasicBlockRef); - pub fn LLVMMoveBasicBlockAfter(BB: BasicBlockRef, MoveAfter: BasicBlockRef); - - pub fn LLVMMoveBasicBlockBefore(BB: BasicBlockRef, MoveBefore: BasicBlockRef); - // Operations on instructions pub fn LLVMGetInstructionParent(Inst: ValueRef) -> BasicBlockRef; pub fn LLVMGetFirstInstruction(BB: BasicBlockRef) -> ValueRef; - pub fn LLVMGetLastInstruction(BB: BasicBlockRef) -> ValueRef; - pub fn LLVMGetNextInstruction(Inst: ValueRef) -> ValueRef; - pub fn LLVMGetPreviousInstruction(Inst: ValueRef) -> ValueRef; pub fn LLVMInstructionEraseFromParent(Inst: ValueRef); // Operations on call sites pub fn LLVMSetInstructionCallConv(Instr: ValueRef, CC: c_uint); - pub fn LLVMGetInstructionCallConv(Instr: ValueRef) -> c_uint; - pub fn LLVMAddInstrAttribute(Instr: ValueRef, index: c_uint, IA: c_uint); - pub fn LLVMRemoveInstrAttribute(Instr: ValueRef, index: c_uint, IA: c_uint); - pub fn LLVMSetInstrParamAlignment(Instr: ValueRef, index: c_uint, align: c_uint); - pub fn LLVMRustAddCallSiteAttribute(Instr: ValueRef, index: c_uint, Val: u64); + pub fn LLVMRustAddCallSiteAttribute(Instr: ValueRef, index: c_uint, attr: Attribute); pub fn LLVMRustAddDereferenceableCallSiteAttr(Instr: ValueRef, index: c_uint, bytes: u64); - // Operations on call instructions (only) - pub fn LLVMIsTailCall(CallInst: ValueRef) -> Bool; - pub fn LLVMSetTailCall(CallInst: ValueRef, IsTailCall: Bool); - // Operations on load/store instructions (only) - pub fn LLVMGetVolatile(MemoryAccessInst: ValueRef) -> Bool; pub fn LLVMSetVolatile(MemoryAccessInst: ValueRef, volatile: Bool); // Operations on phi nodes @@ -883,9 +726,6 @@ extern "C" { IncomingValues: *const ValueRef, IncomingBlocks: *const BasicBlockRef, Count: c_uint); - pub fn LLVMCountIncoming(PhiNode: ValueRef) -> c_uint; - pub fn LLVMGetIncomingValue(PhiNode: ValueRef, Index: c_uint) -> ValueRef; - pub fn LLVMGetIncomingBlock(PhiNode: ValueRef, Index: c_uint) -> BasicBlockRef; // Instruction builders pub fn LLVMCreateBuilderInContext(C: ContextRef) -> BuilderRef; @@ -893,11 +733,6 @@ extern "C" { pub fn LLVMPositionBuilderBefore(Builder: BuilderRef, Instr: ValueRef); pub fn LLVMPositionBuilderAtEnd(Builder: BuilderRef, Block: BasicBlockRef); pub fn LLVMGetInsertBlock(Builder: BuilderRef) -> BasicBlockRef; - pub fn LLVMClearInsertionPosition(Builder: BuilderRef); - pub fn LLVMInsertIntoBuilder(Builder: BuilderRef, Instr: ValueRef); - pub fn LLVMInsertIntoBuilderWithName(Builder: BuilderRef, - Instr: ValueRef, - Name: *const c_char); pub fn LLVMDisposeBuilder(Builder: BuilderRef); // Metadata @@ -969,9 +804,6 @@ extern "C" { // Add a case to the switch instruction pub fn LLVMAddCase(Switch: ValueRef, OnVal: ValueRef, Dest: BasicBlockRef); - // Add a destination to the indirectbr instruction - pub fn LLVMAddDestination(IndirectBr: ValueRef, Dest: BasicBlockRef); - // Add a clause to the landing pad instruction pub fn LLVMAddClause(LandingPad: ValueRef, ClauseVal: ValueRef); @@ -1365,9 +1197,6 @@ extern "C" { /// Creates target data from a target layout string. pub fn LLVMCreateTargetData(StringRep: *const c_char) -> TargetDataRef; - /// Number of bytes clobbered when doing a Store to *T. - pub fn LLVMStoreSizeOfType(TD: TargetDataRef, Ty: TypeRef) -> c_ulonglong; - /// Number of bytes clobbered when doing a Store to *T. 
pub fn LLVMSizeOfTypeInBits(TD: TargetDataRef, Ty: TypeRef) -> c_ulonglong; @@ -1386,9 +1215,6 @@ extern "C" { Element: c_uint) -> c_ulonglong; - /// Returns the minimum alignment of a type when part of a call frame. - pub fn LLVMCallFrameAlignmentOfType(TD: TargetDataRef, Ty: TypeRef) -> c_uint; - /// Disposes target data. pub fn LLVMDisposeTargetData(TD: TargetDataRef); @@ -1404,65 +1230,12 @@ extern "C" { /// Runs a pass manager on a module. pub fn LLVMRunPassManager(PM: PassManagerRef, M: ModuleRef) -> Bool; - /// Runs the function passes on the provided function. - pub fn LLVMRunFunctionPassManager(FPM: PassManagerRef, F: ValueRef) -> Bool; - - /// Initializes all the function passes scheduled in the manager - pub fn LLVMInitializeFunctionPassManager(FPM: PassManagerRef) -> Bool; - - /// Finalizes all the function passes scheduled in the manager - pub fn LLVMFinalizeFunctionPassManager(FPM: PassManagerRef) -> Bool; - pub fn LLVMInitializePasses(); - /// Adds a verification pass. - pub fn LLVMAddVerifierPass(PM: PassManagerRef); - - pub fn LLVMAddGlobalOptimizerPass(PM: PassManagerRef); - pub fn LLVMAddIPSCCPPass(PM: PassManagerRef); - pub fn LLVMAddDeadArgEliminationPass(PM: PassManagerRef); - pub fn LLVMAddInstructionCombiningPass(PM: PassManagerRef); - pub fn LLVMAddCFGSimplificationPass(PM: PassManagerRef); - pub fn LLVMAddFunctionInliningPass(PM: PassManagerRef); - pub fn LLVMAddFunctionAttrsPass(PM: PassManagerRef); - pub fn LLVMAddScalarReplAggregatesPass(PM: PassManagerRef); - pub fn LLVMAddScalarReplAggregatesPassSSA(PM: PassManagerRef); - pub fn LLVMAddJumpThreadingPass(PM: PassManagerRef); - pub fn LLVMAddConstantPropagationPass(PM: PassManagerRef); - pub fn LLVMAddReassociatePass(PM: PassManagerRef); - pub fn LLVMAddLoopRotatePass(PM: PassManagerRef); - pub fn LLVMAddLICMPass(PM: PassManagerRef); - pub fn LLVMAddLoopUnswitchPass(PM: PassManagerRef); - pub fn LLVMAddLoopDeletionPass(PM: PassManagerRef); - pub fn LLVMAddLoopUnrollPass(PM: PassManagerRef); - pub fn LLVMAddGVNPass(PM: PassManagerRef); - pub fn LLVMAddMemCpyOptPass(PM: PassManagerRef); - pub fn LLVMAddSCCPPass(PM: PassManagerRef); - pub fn LLVMAddDeadStoreEliminationPass(PM: PassManagerRef); - pub fn LLVMAddStripDeadPrototypesPass(PM: PassManagerRef); - pub fn LLVMAddConstantMergePass(PM: PassManagerRef); - pub fn LLVMAddArgumentPromotionPass(PM: PassManagerRef); - pub fn LLVMAddTailCallEliminationPass(PM: PassManagerRef); - pub fn LLVMAddIndVarSimplifyPass(PM: PassManagerRef); - pub fn LLVMAddAggressiveDCEPass(PM: PassManagerRef); - pub fn LLVMAddGlobalDCEPass(PM: PassManagerRef); - pub fn LLVMAddCorrelatedValuePropagationPass(PM: PassManagerRef); - pub fn LLVMAddPruneEHPass(PM: PassManagerRef); - pub fn LLVMAddSimplifyLibCallsPass(PM: PassManagerRef); - pub fn LLVMAddLoopIdiomPass(PM: PassManagerRef); - pub fn LLVMAddEarlyCSEPass(PM: PassManagerRef); - pub fn LLVMAddTypeBasedAliasAnalysisPass(PM: PassManagerRef); - pub fn LLVMAddBasicAliasAnalysisPass(PM: PassManagerRef); - pub fn LLVMPassManagerBuilderCreate() -> PassManagerBuilderRef; pub fn LLVMPassManagerBuilderDispose(PMB: PassManagerBuilderRef); - pub fn LLVMPassManagerBuilderSetOptLevel(PMB: PassManagerBuilderRef, - OptimizationLevel: c_uint); pub fn LLVMPassManagerBuilderSetSizeLevel(PMB: PassManagerBuilderRef, Value: Bool); - pub fn LLVMPassManagerBuilderSetDisableUnitAtATime(PMB: PassManagerBuilderRef, Value: Bool); pub fn LLVMPassManagerBuilderSetDisableUnrollLoops(PMB: PassManagerBuilderRef, Value: Bool); - pub fn 
LLVMPassManagerBuilderSetDisableSimplifyLibCalls(PMB: PassManagerBuilderRef, - Value: Bool); pub fn LLVMPassManagerBuilderUseInlinerWithThreshold(PMB: PassManagerBuilderRef, threshold: c_uint); pub fn LLVMPassManagerBuilderPopulateModulePassManager(PMB: PassManagerBuilderRef, @@ -1475,10 +1248,6 @@ extern "C" { Internalize: Bool, RunInliner: Bool); - /// Destroys a memory buffer. - pub fn LLVMDisposeMemoryBuffer(MemBuf: MemoryBufferRef); - - // Stuff that's in rustllvm/ because it's not upstream yet. /// Opens an object file. @@ -1503,18 +1272,7 @@ extern "C" { /// Reads the given file and returns it as a memory buffer. Use /// LLVMDisposeMemoryBuffer() to get rid of it. pub fn LLVMRustCreateMemoryBufferWithContentsOfFile(Path: *const c_char) -> MemoryBufferRef; - /// Borrows the contents of the memory buffer (doesn't copy it) - pub fn LLVMCreateMemoryBufferWithMemoryRange(InputData: *const c_char, - InputDataLength: size_t, - BufferName: *const c_char, - RequiresNull: Bool) - -> MemoryBufferRef; - pub fn LLVMCreateMemoryBufferWithMemoryRangeCopy(InputData: *const c_char, - InputDataLength: size_t, - BufferName: *const c_char) - -> MemoryBufferRef; - pub fn LLVMIsMultithreaded() -> Bool; pub fn LLVMStartMultithreaded() -> Bool; /// Returns a string describing the last error caused by an LLVMRust* call. @@ -1590,7 +1348,7 @@ extern "C" { isLocalToUnit: bool, isDefinition: bool, ScopeLine: c_uint, - Flags: c_uint, + Flags: DIFlags, isOptimized: bool, Fn: ValueRef, TParam: DIArray, @@ -1618,7 +1376,7 @@ extern "C" { LineNumber: c_uint, SizeInBits: u64, AlignInBits: u64, - Flags: c_uint, + Flags: DIFlags, DerivedFrom: DIType, Elements: DIArray, RunTimeLang: c_uint, @@ -1634,7 +1392,7 @@ extern "C" { SizeInBits: u64, AlignInBits: u64, OffsetInBits: u64, - Flags: c_uint, + Flags: DIFlags, Ty: DIType) -> DIDerivedType; @@ -1659,7 +1417,8 @@ extern "C" { Ty: DIType, isLocalToUnit: bool, Val: ValueRef, - Decl: DIDescriptor) + Decl: DIDescriptor, + AlignInBits: u64) -> DIGlobalVariable; pub fn LLVMRustDIBuilderCreateVariable(Builder: DIBuilderRef, @@ -1670,8 +1429,9 @@ extern "C" { LineNo: c_uint, Ty: DIType, AlwaysPreserve: bool, - Flags: c_uint, - ArgNo: c_uint) + Flags: DIFlags, + ArgNo: c_uint, + AlignInBits: u64) -> DIVariable; pub fn LLVMRustDIBuilderCreateArrayType(Builder: DIBuilderRef, @@ -1730,7 +1490,7 @@ extern "C" { LineNumber: c_uint, SizeInBits: u64, AlignInBits: u64, - Flags: c_uint, + Flags: DIFlags, Elements: DIArray, RunTimeLang: c_uint, UniqueId: *const c_char) @@ -1771,9 +1531,6 @@ extern "C" { pub fn LLVMRustWriteTypeToString(Type: TypeRef, s: RustStringRef); pub fn LLVMRustWriteValueToString(value_ref: ValueRef, s: RustStringRef); - pub fn LLVMIsAArgument(value_ref: ValueRef) -> ValueRef; - - pub fn LLVMIsAAllocaInst(value_ref: ValueRef) -> ValueRef; pub fn LLVMIsAConstantInt(value_ref: ValueRef) -> ValueRef; pub fn LLVMRustPassKind(Pass: PassRef) -> PassKind; @@ -1843,17 +1600,16 @@ extern "C" { DiagnosticContext: *mut c_void); pub fn LLVMRustUnpackOptimizationDiagnostic(DI: DiagnosticInfoRef, - pass_name_out: *mut *const c_char, + pass_name_out: RustStringRef, function_out: *mut ValueRef, debugloc_out: *mut DebugLocRef, - message_out: *mut TwineRef); + message_out: RustStringRef); pub fn LLVMRustUnpackInlineAsmDiagnostic(DI: DiagnosticInfoRef, cookie_out: *mut c_uint, message_out: *mut TwineRef, instruction_out: *mut ValueRef); pub fn LLVMRustWriteDiagnosticInfoToString(DI: DiagnosticInfoRef, s: RustStringRef); - pub fn LLVMGetDiagInfoSeverity(DI: DiagnosticInfoRef) -> 
DiagnosticSeverity; pub fn LLVMRustGetDiagInfoKind(DI: DiagnosticInfoRef) -> DiagnosticKind; pub fn LLVMRustWriteDebugLocToString(C: ContextRef, DL: DebugLocRef, s: RustStringRef); diff --git a/src/librustc_llvm/lib.rs b/src/librustc_llvm/lib.rs index da09bfa66d..a15edcd44b 100644 --- a/src/librustc_llvm/lib.rs +++ b/src/librustc_llvm/lib.rs @@ -24,11 +24,12 @@ #![feature(associated_consts)] #![feature(box_syntax)] +#![feature(concat_idents)] #![feature(libc)] #![feature(link_args)] +#![cfg_attr(stage0, feature(linked_from))] #![feature(staged_api)] -#![feature(linked_from)] -#![feature(concat_idents)] +#![cfg_attr(not(stage0), feature(rustc_private))] extern crate libc; #[macro_use] @@ -67,63 +68,15 @@ impl LLVMRustResult { } } -#[derive(Copy, Clone, Default, Debug)] -pub struct Attributes { - regular: Attribute, - dereferenceable_bytes: u64, -} - -impl Attributes { - pub fn set(&mut self, attr: Attribute) -> &mut Self { - self.regular = self.regular | attr; - self - } - - pub fn unset(&mut self, attr: Attribute) -> &mut Self { - self.regular = self.regular - attr; - self - } - - pub fn set_dereferenceable(&mut self, bytes: u64) -> &mut Self { - self.dereferenceable_bytes = bytes; - self - } - - pub fn unset_dereferenceable(&mut self) -> &mut Self { - self.dereferenceable_bytes = 0; - self - } - - pub fn apply_llfn(&self, idx: AttributePlace, llfn: ValueRef) { - unsafe { - self.regular.apply_llfn(idx, llfn); - if self.dereferenceable_bytes != 0 { - LLVMRustAddDereferenceableAttr(llfn, idx.as_uint(), self.dereferenceable_bytes); - } - } - } - - pub fn apply_callsite(&self, idx: AttributePlace, callsite: ValueRef) { - unsafe { - self.regular.apply_callsite(idx, callsite); - if self.dereferenceable_bytes != 0 { - LLVMRustAddDereferenceableCallSiteAttr(callsite, - idx.as_uint(), - self.dereferenceable_bytes); - } - } - } -} - pub fn AddFunctionAttrStringValue(llfn: ValueRef, idx: AttributePlace, - attr: &'static str, - value: &'static str) { + attr: &CStr, + value: &CStr) { unsafe { LLVMRustAddFunctionAttrStringValue(llfn, idx.as_uint(), - attr.as_ptr() as *const _, - value.as_ptr() as *const _) + attr.as_ptr(), + value.as_ptr()) } } @@ -139,7 +92,7 @@ impl AttributePlace { AttributePlace::Argument(0) } - fn as_uint(self) -> c_uint { + pub fn as_uint(self) -> c_uint { match self { AttributePlace::Function => !0, AttributePlace::Argument(i) => i, @@ -228,15 +181,15 @@ pub fn set_thread_local(global: ValueRef, is_thread_local: bool) { impl Attribute { pub fn apply_llfn(&self, idx: AttributePlace, llfn: ValueRef) { - unsafe { LLVMRustAddFunctionAttribute(llfn, idx.as_uint(), self.bits()) } + unsafe { LLVMRustAddFunctionAttribute(llfn, idx.as_uint(), *self) } } pub fn apply_callsite(&self, idx: AttributePlace, callsite: ValueRef) { - unsafe { LLVMRustAddCallSiteAttribute(callsite, idx.as_uint(), self.bits()) } + unsafe { LLVMRustAddCallSiteAttribute(callsite, idx.as_uint(), *self) } } pub fn unapply_llfn(&self, idx: AttributePlace, llfn: ValueRef) { - unsafe { LLVMRustRemoveFunctionAttributes(llfn, idx.as_uint(), self.bits()) } + unsafe { LLVMRustRemoveFunctionAttributes(llfn, idx.as_uint(), *self) } } pub fn toggle_llfn(&self, idx: AttributePlace, llfn: ValueRef, set: bool) { @@ -412,6 +365,11 @@ pub fn initialize_available_targets() { LLVMInitializeJSBackendTargetInfo, LLVMInitializeJSBackendTarget, LLVMInitializeJSBackendTargetMC); + init_target!(llvm_component = "msp430", + LLVMInitializeMSP430TargetInfo, + LLVMInitializeMSP430Target, + LLVMInitializeMSP430TargetMC, + 
LLVMInitializeMSP430AsmPrinter); } pub fn last_error() -> Option { diff --git a/src/librustc_metadata/astencode.rs b/src/librustc_metadata/astencode.rs index e009955b92..6598b7dcc5 100644 --- a/src/librustc_metadata/astencode.rs +++ b/src/librustc_metadata/astencode.rs @@ -10,7 +10,7 @@ use rustc::hir::map as ast_map; -use rustc::hir::intravisit::{Visitor, IdRangeComputingVisitor, IdRange}; +use rustc::hir::intravisit::{Visitor, IdRangeComputingVisitor, IdRange, NestedVisitorMap}; use cstore::CrateMetadata; use encoder::EncodeContext; @@ -18,7 +18,7 @@ use schema::*; use rustc::middle::cstore::{InlinedItem, InlinedItemRef}; use rustc::middle::const_qualif::ConstQualif; -use rustc::hir::def::{self, Def}; +use rustc::hir::def::Def; use rustc::hir::def_id::DefId; use rustc::ty::{self, TyCtxt, Ty}; @@ -35,7 +35,7 @@ pub struct Ast<'tcx> { #[derive(RustcEncodable, RustcDecodable)] enum TableEntry<'tcx> { - Def(Def), + TypeRelativeDef(Def), NodeType(Ty<'tcx>), ItemSubsts(ty::ItemSubsts<'tcx>), Adjustment(ty::adjustment::Adjustment<'tcx>), @@ -43,13 +43,9 @@ enum TableEntry<'tcx> { } impl<'a, 'tcx> EncodeContext<'a, 'tcx> { - pub fn encode_inlined_item(&mut self, ii: InlinedItemRef) -> Lazy> { - let mut id_visitor = IdRangeComputingVisitor::new(); - match ii { - InlinedItemRef::Item(_, i) => id_visitor.visit_item(i), - InlinedItemRef::TraitItem(_, ti) => id_visitor.visit_trait_item(ti), - InlinedItemRef::ImplItem(_, ii) => id_visitor.visit_impl_item(ii), - } + pub fn encode_inlined_item(&mut self, ii: InlinedItemRef<'tcx>) -> Lazy> { + let mut id_visitor = IdRangeComputingVisitor::new(&self.tcx.map); + ii.visit(&mut id_visitor); let ii_pos = self.position(); ii.encode(self).unwrap(); @@ -60,11 +56,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { ecx: self, count: 0, }; - match ii { - InlinedItemRef::Item(_, i) => visitor.visit_item(i), - InlinedItemRef::TraitItem(_, ti) => visitor.visit_trait_item(ti), - InlinedItemRef::ImplItem(_, ii) => visitor.visit_impl_item(ii), - } + ii.visit(&mut visitor); visitor.count }; @@ -81,7 +73,11 @@ struct SideTableEncodingIdVisitor<'a, 'b: 'a, 'tcx: 'b> { count: usize, } -impl<'a, 'b, 'tcx, 'v> Visitor<'v> for SideTableEncodingIdVisitor<'a, 'b, 'tcx> { +impl<'a, 'b, 'tcx> Visitor<'tcx> for SideTableEncodingIdVisitor<'a, 'b, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.ecx.tcx.map) + } + fn visit_id(&mut self, id: ast::NodeId) { debug!("Encoding side tables for id {}", id); @@ -93,7 +89,8 @@ impl<'a, 'b, 'tcx, 'v> Visitor<'v> for SideTableEncodingIdVisitor<'a, 'b, 'tcx> } }; - encode(tcx.expect_def_or_none(id).map(TableEntry::Def)); + encode(tcx.tables().type_relative_path_defs.get(&id).cloned() + .map(TableEntry::TypeRelativeDef)); encode(tcx.tables().node_types.get(&id).cloned().map(TableEntry::NodeType)); encode(tcx.tables().item_substs.get(&id).cloned().map(TableEntry::ItemSubsts)); encode(tcx.tables().adjustments.get(&id).cloned().map(TableEntry::Adjustment)); @@ -121,27 +118,26 @@ pub fn decode_inlined_item<'a, 'tcx>(cdata: &CrateMetadata, }]; let ii = ast.item.decode((cdata, tcx, id_ranges)); + let item_node_id = tcx.sess.next_node_id(); let ii = ast_map::map_decoded_item(&tcx.map, parent_def_path, parent_did, ii, - tcx.sess.next_node_id()); + item_node_id); - let item_node_id = match ii { - &InlinedItem::Item(_, ref i) => i.id, - &InlinedItem::TraitItem(_, ref ti) => ti.id, - &InlinedItem::ImplItem(_, ref ii) => ii.id, - }; let inlined_did = tcx.map.local_def_id(item_node_id); - 
tcx.register_item_type(inlined_did, tcx.lookup_item_type(orig_did)); + let ty = tcx.item_type(orig_did); + let generics = tcx.item_generics(orig_did); + tcx.item_types.borrow_mut().insert(inlined_did, ty); + tcx.generics.borrow_mut().insert(inlined_did, generics); for (id, entry) in ast.side_tables.decode((cdata, tcx, id_ranges)) { match entry { - TableEntry::Def(def) => { - tcx.def_map.borrow_mut().insert(id, def::PathResolution::new(def)); + TableEntry::TypeRelativeDef(def) => { + tcx.tables.borrow_mut().type_relative_path_defs.insert(id, def); } TableEntry::NodeType(ty) => { - tcx.node_type_insert(id, ty); + tcx.tables.borrow_mut().node_types.insert(id, ty); } TableEntry::ItemSubsts(item_substs) => { tcx.tables.borrow_mut().item_substs.insert(id, item_substs); diff --git a/src/librustc_metadata/creader.rs b/src/librustc_metadata/creader.rs index e72ac84199..d36242537b 100644 --- a/src/librustc_metadata/creader.rs +++ b/src/librustc_metadata/creader.rs @@ -16,33 +16,35 @@ use schema::CrateRoot; use rustc::hir::def_id::{CrateNum, DefIndex}; use rustc::hir::svh::Svh; -use rustc::middle::cstore::LoadedMacros; +use rustc::middle::cstore::DepKind; use rustc::session::{config, Session}; use rustc_back::PanicStrategy; use rustc::session::search_paths::PathKind; use rustc::middle; use rustc::middle::cstore::{CrateStore, validate_crate_name, ExternCrate}; -use rustc::util::nodemap::{FnvHashMap, FnvHashSet}; +use rustc::util::nodemap::FxHashSet; +use rustc::middle::cstore::NativeLibrary; use rustc::hir::map::Definitions; use std::cell::{RefCell, Cell}; use std::ops::Deref; use std::path::PathBuf; use std::rc::Rc; -use std::fs; +use std::{cmp, fs}; use syntax::ast; use syntax::abi::Abi; -use syntax::parse; use syntax::attr; use syntax::ext::base::SyntaxExtension; -use syntax::parse::token::{InternedString, intern}; -use syntax_pos::{self, Span, mk_sp}; +use syntax::feature_gate::{self, GateIssue}; +use syntax::symbol::Symbol; +use syntax_pos::{Span, DUMMY_SP}; use log; pub struct Library { pub dylib: Option<(PathBuf, PathKind)>, pub rlib: Option<(PathBuf, PathKind)>, + pub rmeta: Option<(PathBuf, PathKind)>, pub metadata: MetadataBlob, } @@ -50,43 +52,36 @@ pub struct CrateLoader<'a> { pub sess: &'a Session, cstore: &'a CStore, next_crate_num: CrateNum, - foreign_item_map: FnvHashMap>, - local_crate_name: String, + local_crate_name: Symbol, } fn dump_crates(cstore: &CStore) { info!("resolved crates:"); - cstore.iter_crate_data_origins(|_, data, opt_source| { + cstore.iter_crate_data(|_, data| { info!(" name: {}", data.name()); info!(" cnum: {}", data.cnum); info!(" hash: {}", data.hash()); - info!(" reqd: {}", data.explicitly_linked.get()); - opt_source.map(|cs| { - let CrateSource { dylib, rlib, cnum: _ } = cs; - dylib.map(|dl| info!(" dylib: {}", dl.0.display())); - rlib.map(|rl| info!(" rlib: {}", rl.0.display())); - }); - }) -} - -fn should_link(i: &ast::Item) -> bool { - !attr::contains_name(&i.attrs, "no_link") + info!(" reqd: {:?}", data.dep_kind.get()); + let CrateSource { dylib, rlib, rmeta } = data.source.clone(); + dylib.map(|dl| info!(" dylib: {}", dl.0.display())); + rlib.map(|rl| info!(" rlib: {}", rl.0.display())); + rmeta.map(|rl| info!(" rmeta: {}", rl.0.display())); + }); } #[derive(Debug)] struct ExternCrateInfo { - ident: String, - name: String, + ident: Symbol, + name: Symbol, id: ast::NodeId, - should_link: bool, + dep_kind: DepKind, } fn register_native_lib(sess: &Session, cstore: &CStore, span: Option, - name: String, - kind: cstore::NativeLibraryKind) { - if name.is_empty() 
{ + lib: NativeLibrary) { + if lib.name.as_str().is_empty() { match span { Some(span) => { struct_span_err!(sess, span, E0454, @@ -101,17 +96,28 @@ fn register_native_lib(sess: &Session, return } let is_osx = sess.target.target.options.is_like_osx; - if kind == cstore::NativeFramework && !is_osx { + if lib.kind == cstore::NativeFramework && !is_osx { let msg = "native frameworks are only available on OSX targets"; match span { - Some(span) => { - span_err!(sess, span, E0455, - "{}", msg) - } + Some(span) => span_err!(sess, span, E0455, "{}", msg), None => sess.err(msg), } } - cstore.add_used_library(name, kind); + if lib.cfg.is_some() && !sess.features.borrow().link_cfg { + feature_gate::emit_feature_err(&sess.parse_sess, + "link_cfg", + span.unwrap(), + GateIssue::Language, + "is feature gated"); + } + cstore.add_used_library(lib); +} + +fn relevant_lib(sess: &Session, lib: &NativeLibrary) -> bool { + match lib.cfg { + Some(ref cfg) => attr::cfg_matches(cfg, &sess.parse_sess, None), + None => true, + } } // Extra info about a crate loaded for plugins or exported macros. @@ -148,8 +154,7 @@ impl<'a> CrateLoader<'a> { sess: sess, cstore: cstore, next_crate_num: cstore.next_crate_num(), - foreign_item_map: FnvHashMap(), - local_crate_name: local_crate_name.to_owned(), + local_crate_name: Symbol::intern(local_crate_name), } } @@ -162,22 +167,26 @@ impl<'a> CrateLoader<'a> { Some(name) => { validate_crate_name(Some(self.sess), &name.as_str(), Some(i.span)); - name.to_string() + name } - None => i.ident.to_string(), + None => i.ident.name, }; Some(ExternCrateInfo { - ident: i.ident.to_string(), + ident: i.ident.name, name: name, id: i.id, - should_link: should_link(i), + dep_kind: if attr::contains_name(&i.attrs, "no_link") { + DepKind::UnexportedMacrosOnly + } else { + DepKind::Explicit + }, }) } _ => None } } - fn existing_match(&self, name: &str, hash: Option<&Svh>, kind: PathKind) + fn existing_match(&self, name: Symbol, hash: Option<&Svh>, kind: PathKind) -> Option { let mut ret = None; self.cstore.iter_crate_data(|cnum, data| { @@ -199,7 +208,7 @@ impl<'a> CrateLoader<'a> { // `source` stores paths which are normalized which may be different // from the strings on the command line. let source = self.cstore.used_crate_source(cnum); - if let Some(locs) = self.sess.opts.externs.get(name) { + if let Some(locs) = self.sess.opts.externs.get(&*name.as_str()) { let found = locs.iter().any(|l| { let l = fs::canonicalize(l).ok(); source.dylib.as_ref().map(|p| &p.0) == l.as_ref() || @@ -231,7 +240,7 @@ impl<'a> CrateLoader<'a> { root: &CrateRoot) { // Check for (potential) conflicts with the local crate if self.local_crate_name == root.name && - self.sess.local_crate_disambiguator() == &root.disambiguator[..] 
{ + self.sess.local_crate_disambiguator() == root.disambiguator { span_fatal!(self.sess, span, E0519, "the current crate is indistinguishable from one of its \ dependencies: it has the same crate-name `{}` and was \ @@ -256,13 +265,12 @@ impl<'a> CrateLoader<'a> { fn register_crate(&mut self, root: &Option, - ident: &str, - name: &str, + ident: Symbol, + name: Symbol, span: Span, lib: Library, - explicitly_linked: bool) - -> (CrateNum, Rc, - cstore::CrateSource) { + dep_kind: DepKind) + -> (CrateNum, Rc) { info!("register crate `extern crate {} as {}`", name, ident); let crate_root = lib.metadata.get_root(); self.verify_no_symbol_conflicts(span, &crate_root); @@ -277,6 +285,7 @@ impl<'a> CrateLoader<'a> { ident: ident.to_string(), dylib: lib.dylib.clone().map(|p| p.0), rlib: lib.rlib.clone().map(|p| p.0), + rmeta: lib.rmeta.clone().map(|p| p.0), }) } else { None @@ -284,86 +293,108 @@ impl<'a> CrateLoader<'a> { // Maintain a reference to the top most crate. let root = if root.is_some() { root } else { &crate_paths }; - let Library { dylib, rlib, metadata } = lib; + let Library { dylib, rlib, rmeta, metadata } = lib; - let cnum_map = self.resolve_crate_deps(root, &crate_root, &metadata, cnum, span); + let cnum_map = self.resolve_crate_deps(root, &crate_root, &metadata, cnum, span, dep_kind); - if crate_root.macro_derive_registrar.is_some() { - self.sess.span_err(span, "crates of the `proc-macro` crate type \ - cannot be linked at runtime"); - } - - let cmeta = Rc::new(cstore::CrateMetadata { - name: name.to_string(), + let mut cmeta = cstore::CrateMetadata { + name: name, extern_crate: Cell::new(None), key_map: metadata.load_key_map(crate_root.index), + proc_macros: crate_root.macro_derive_registrar.map(|_| { + self.load_derive_macros(&crate_root, dylib.clone().map(|p| p.0), span) + }), root: crate_root, blob: metadata, cnum_map: RefCell::new(cnum_map), cnum: cnum, codemap_import_info: RefCell::new(vec![]), - explicitly_linked: Cell::new(explicitly_linked), - }); - - let source = cstore::CrateSource { - dylib: dylib, - rlib: rlib, - cnum: cnum, + dep_kind: Cell::new(dep_kind), + source: cstore::CrateSource { + dylib: dylib, + rlib: rlib, + rmeta: rmeta, + }, + dllimport_foreign_items: FxHashSet(), }; + let dllimports: Vec<_> = cmeta.get_native_libraries().iter() + .filter(|lib| relevant_lib(self.sess, lib) && + lib.kind == cstore::NativeLibraryKind::NativeUnknown) + .flat_map(|lib| &lib.foreign_items) + .map(|id| *id) + .collect(); + cmeta.dllimport_foreign_items.extend(dllimports); + + let cmeta = Rc::new(cmeta); self.cstore.set_crate_data(cnum, cmeta.clone()); - self.cstore.add_used_crate_source(source.clone()); - (cnum, cmeta, source) + (cnum, cmeta) } fn resolve_crate(&mut self, root: &Option, - ident: &str, - name: &str, + ident: Symbol, + name: Symbol, hash: Option<&Svh>, span: Span, - kind: PathKind, - explicitly_linked: bool) - -> (CrateNum, Rc, cstore::CrateSource) { + path_kind: PathKind, + mut dep_kind: DepKind) + -> (CrateNum, Rc) { info!("resolving crate `extern crate {} as {}`", name, ident); - let result = match self.existing_match(name, hash, kind) { - Some(cnum) => LoadResult::Previous(cnum), - None => { - info!("falling back to a load"); - let mut locate_ctxt = locator::Context { - sess: self.sess, - span: span, - ident: ident, - crate_name: name, - hash: hash.map(|a| &*a), - filesearch: self.sess.target_filesearch(kind), - target: &self.sess.target.target, - triple: &self.sess.opts.target_triple, - root: root, + let result = if let Some(cnum) = self.existing_match(name, 
hash, path_kind) { + LoadResult::Previous(cnum) + } else { + info!("falling back to a load"); + let mut locate_ctxt = locator::Context { + sess: self.sess, + span: span, + ident: ident, + crate_name: name, + hash: hash.map(|a| &*a), + filesearch: self.sess.target_filesearch(path_kind), + target: &self.sess.target.target, + triple: &self.sess.opts.target_triple, + root: root, + rejected_via_hash: vec![], + rejected_via_triple: vec![], + rejected_via_kind: vec![], + rejected_via_version: vec![], + rejected_via_filename: vec![], + should_match_name: true, + is_proc_macro: Some(false), + }; + + self.load(&mut locate_ctxt).or_else(|| { + dep_kind = DepKind::UnexportedMacrosOnly; + + let mut proc_macro_locator = locator::Context { + target: &self.sess.host, + triple: config::host_triple(), + filesearch: self.sess.host_filesearch(path_kind), rejected_via_hash: vec![], rejected_via_triple: vec![], rejected_via_kind: vec![], rejected_via_version: vec![], - should_match_name: true, + rejected_via_filename: vec![], + is_proc_macro: Some(true), + ..locate_ctxt }; - match self.load(&mut locate_ctxt) { - Some(result) => result, - None => locate_ctxt.report_errs(), - } - } + + self.load(&mut proc_macro_locator) + }).unwrap_or_else(|| locate_ctxt.report_errs()) }; match result { LoadResult::Previous(cnum) => { let data = self.cstore.get_crate_data(cnum); - if explicitly_linked && !data.explicitly_linked.get() { - data.explicitly_linked.set(explicitly_linked); + if data.root.macro_derive_registrar.is_some() { + dep_kind = DepKind::UnexportedMacrosOnly; } - (cnum, data, self.cstore.used_crate_source(cnum)) + data.dep_kind.set(cmp::max(data.dep_kind.get(), dep_kind)); + (cnum, data) } LoadResult::Loaded(library) => { - self.register_crate(root, ident, name, span, library, - explicitly_linked) + self.register_crate(root, ident, name, span, library, dep_kind) } } } @@ -401,7 +432,7 @@ impl<'a> CrateLoader<'a> { fn update_extern_crate(&mut self, cnum: CrateNum, mut extern_crate: ExternCrate, - visited: &mut FnvHashSet<(CrateNum, bool)>) + visited: &mut FxHashSet<(CrateNum, bool)>) { if !visited.insert((cnum, extern_crate.direct)) { return } @@ -436,46 +467,44 @@ impl<'a> CrateLoader<'a> { crate_root: &CrateRoot, metadata: &MetadataBlob, krate: CrateNum, - span: Span) + span: Span, + dep_kind: DepKind) -> cstore::CrateNumMap { debug!("resolving deps of external crate"); - // The map from crate numbers in the crate we're resolving to local crate - // numbers - let deps = crate_root.crate_deps.decode(metadata); - let map: FnvHashMap<_, _> = deps.enumerate().map(|(crate_num, dep)| { - debug!("resolving dep crate {} hash: `{}`", dep.name, dep.hash); - let (local_cnum, ..) = self.resolve_crate(root, - &dep.name.as_str(), - &dep.name.as_str(), - Some(&dep.hash), - span, - PathKind::Dependency, - dep.explicitly_linked); - (CrateNum::new(crate_num + 1), local_cnum) - }).collect(); + if crate_root.macro_derive_registrar.is_some() { + return cstore::CrateNumMap::new(); + } - let max_cnum = map.values().cloned().max().map(|cnum| cnum.as_u32()).unwrap_or(0); - - // we map 0 and all other holes in the map to our parent crate. The "additional" + // The map from crate numbers in the crate we're resolving to local crate numbers. + // We map 0 and all other holes in the map to our parent crate. The "additional" // self-dependencies should be harmless. 
- (0..max_cnum+1).map(|cnum| { - map.get(&CrateNum::from_u32(cnum)).cloned().unwrap_or(krate) - }).collect() + ::std::iter::once(krate).chain(crate_root.crate_deps.decode(metadata).map(|dep| { + debug!("resolving dep crate {} hash: `{}`", dep.name, dep.hash); + if dep.kind == DepKind::UnexportedMacrosOnly { + return krate; + } + let dep_kind = match dep_kind { + DepKind::MacrosOnly => DepKind::MacrosOnly, + _ => dep.kind, + }; + let (local_cnum, ..) = self.resolve_crate( + root, dep.name, dep.name, Some(&dep.hash), span, PathKind::Dependency, dep_kind, + ); + local_cnum + })).collect() } fn read_extension_crate(&mut self, span: Span, info: &ExternCrateInfo) -> ExtensionCrate { - info!("read extension crate {} `extern crate {} as {}` linked={}", - info.id, info.name, info.ident, info.should_link); + info!("read extension crate {} `extern crate {} as {}` dep_kind={:?}", + info.id, info.name, info.ident, info.dep_kind); let target_triple = &self.sess.opts.target_triple[..]; let is_cross = target_triple != config::host_triple(); let mut target_only = false; - let ident = info.ident.clone(); - let name = info.name.clone(); let mut locate_ctxt = locator::Context { sess: self.sess, span: span, - ident: &ident[..], - crate_name: &name[..], + ident: info.ident, + crate_name: info.name, hash: None, filesearch: self.sess.host_filesearch(PathKind::Crate), target: &self.sess.host, @@ -485,7 +514,9 @@ impl<'a> CrateLoader<'a> { rejected_via_triple: vec![], rejected_via_kind: vec![], rejected_via_version: vec![], + rejected_via_filename: vec![], should_match_name: true, + is_proc_macro: None, }; let library = self.load(&mut locate_ctxt).or_else(|| { if !is_cross { @@ -508,9 +539,8 @@ impl<'a> CrateLoader<'a> { let (dylib, metadata) = match library { LoadResult::Previous(cnum) => { - let dylib = self.cstore.opt_used_crate_source(cnum).unwrap().dylib; let data = self.cstore.get_crate_data(cnum); - (dylib, PMDSource::Registered(data)) + (data.source.dylib.clone(), PMDSource::Registered(data)) } LoadResult::Loaded(library) => { let dylib = library.dylib.clone(); @@ -526,68 +556,6 @@ impl<'a> CrateLoader<'a> { } } - fn read_macros(&mut self, item: &ast::Item, ekrate: &ExtensionCrate) -> LoadedMacros { - let root = ekrate.metadata.get_root(); - let source_name = format!("<{} macros>", item.ident); - let mut macro_rules = Vec::new(); - - for def in root.macro_defs.decode(&*ekrate.metadata) { - // NB: Don't use parse::parse_tts_from_source_str because it parses with - // quote_depth > 0. 
- let mut p = parse::new_parser_from_source_str(&self.sess.parse_sess, - source_name.clone(), - def.body); - let lo = p.span.lo; - let body = match p.parse_all_token_trees() { - Ok(body) => body, - Err(mut err) => { - err.emit(); - self.sess.abort_if_errors(); - unreachable!(); - } - }; - let local_span = mk_sp(lo, p.prev_span.hi); - - // Mark the attrs as used - for attr in &def.attrs { - attr::mark_used(attr); - } - - macro_rules.push(ast::MacroDef { - ident: ast::Ident::with_empty_ctxt(def.name), - id: ast::DUMMY_NODE_ID, - span: local_span, - imported_from: Some(item.ident), - allow_internal_unstable: attr::contains_name(&def.attrs, "allow_internal_unstable"), - attrs: def.attrs, - body: body, - }); - self.sess.imported_macro_spans.borrow_mut() - .insert(local_span, (def.name.as_str().to_string(), def.span)); - } - - if let Some(id) = root.macro_derive_registrar { - let dylib = match ekrate.dylib.clone() { - Some(dylib) => dylib, - None => span_bug!(item.span, "proc-macro crate not dylib"), - }; - if ekrate.target_only { - let message = format!("proc-macro crate is not available for \ - triple `{}` (only found {})", - config::host_triple(), - self.sess.opts.target_triple); - self.sess.span_fatal(item.span, &message); - } - - // custom derive crates currently should not have any macro_rules! - // exported macros, enforced elsewhere - assert_eq!(macro_rules.len(), 0); - LoadedMacros::ProcMacros(self.load_derive_macros(item, id, root.hash, dylib)) - } else { - LoadedMacros::MacroRules(macro_rules) - } - } - /// Load custom derive macros. /// /// Note that this is intentionally similar to how we load plugins today, @@ -595,38 +563,47 @@ impl<'a> CrateLoader<'a> { /// implemented as dynamic libraries, but we have a possible future where /// custom derive (and other macro-1.1 style features) are implemented via /// executables and custom IPC. - fn load_derive_macros(&mut self, item: &ast::Item, index: DefIndex, svh: Svh, path: PathBuf) - -> Vec<(ast::Name, SyntaxExtension)> { + fn load_derive_macros(&mut self, root: &CrateRoot, dylib: Option, span: Span) + -> Vec<(ast::Name, Rc)> { use std::{env, mem}; use proc_macro::TokenStream; use proc_macro::__internal::Registry; use rustc_back::dynamic_lib::DynamicLibrary; use syntax_ext::deriving::custom::CustomDerive; + let path = match dylib { + Some(dylib) => dylib, + None => span_bug!(span, "proc-macro crate not dylib"), + }; // Make sure the path contains a / or the linker will search for it. 
let path = env::current_dir().unwrap().join(path); let lib = match DynamicLibrary::open(Some(&path)) { Ok(lib) => lib, - Err(err) => self.sess.span_fatal(item.span, &err), + Err(err) => self.sess.span_fatal(span, &err), }; - let sym = self.sess.generate_derive_registrar_symbol(&svh, index); + let sym = self.sess.generate_derive_registrar_symbol(&root.hash, + root.macro_derive_registrar.unwrap()); let registrar = unsafe { let sym = match lib.symbol(&sym) { Ok(f) => f, - Err(err) => self.sess.span_fatal(item.span, &err), + Err(err) => self.sess.span_fatal(span, &err), }; mem::transmute::<*mut u8, fn(&mut Registry)>(sym) }; - struct MyRegistrar(Vec<(ast::Name, SyntaxExtension)>); + struct MyRegistrar(Vec<(ast::Name, Rc)>); impl Registry for MyRegistrar { fn register_custom_derive(&mut self, trait_name: &str, - expand: fn(TokenStream) -> TokenStream) { - let derive = SyntaxExtension::CustomDerive(Box::new(CustomDerive::new(expand))); - self.0.push((intern(trait_name), derive)); + expand: fn(TokenStream) -> TokenStream, + attributes: &[&'static str]) { + let attrs = attributes.iter().cloned().map(Symbol::intern).collect(); + let derive = SyntaxExtension::CustomDerive( + Box::new(CustomDerive::new(expand, attrs)) + ); + self.0.push((Symbol::intern(trait_name), Rc::new(derive))); } } @@ -644,10 +621,10 @@ impl<'a> CrateLoader<'a> { pub fn find_plugin_registrar(&mut self, span: Span, name: &str) -> Option<(PathBuf, Svh, DefIndex)> { let ekrate = self.read_extension_crate(span, &ExternCrateInfo { - name: name.to_string(), - ident: name.to_string(), + name: Symbol::intern(name), + ident: Symbol::intern(name), id: ast::DUMMY_NODE_ID, - should_link: false, + dep_kind: DepKind::UnexportedMacrosOnly, }); if ekrate.target_only { @@ -678,18 +655,28 @@ impl<'a> CrateLoader<'a> { } } - fn register_statically_included_foreign_items(&mut self) { + fn get_foreign_items_of_kind(&self, kind: cstore::NativeLibraryKind) -> Vec { + let mut items = vec![]; let libs = self.cstore.get_used_libraries(); - for (lib, list) in self.foreign_item_map.iter() { - let is_static = libs.borrow().iter().any(|&(ref name, kind)| { - lib == name && kind == cstore::NativeStatic - }); - if is_static { - for id in list { - self.cstore.add_statically_included_foreign_item(*id); - } + for lib in libs.borrow().iter() { + if relevant_lib(self.sess, lib) && lib.kind == kind { + items.extend(&lib.foreign_items); } } + items + } + + fn register_statically_included_foreign_items(&mut self) { + for id in self.get_foreign_items_of_kind(cstore::NativeStatic) { + self.cstore.add_statically_included_foreign_item(id); + } + } + + fn register_dllimport_foreign_items(&mut self) { + let mut dllimports = self.cstore.dllimport_foreign_items.borrow_mut(); + for id in self.get_foreign_items_of_kind(cstore::NativeUnknown) { + dllimports.insert(id); + } } fn inject_panic_runtime(&mut self, krate: &ast::Crate) { @@ -721,7 +708,7 @@ impl<'a> CrateLoader<'a> { // #![panic_runtime] crate. self.inject_dependency_if(cnum, "a panic runtime", &|data| data.needs_panic_runtime()); - runtime_found = runtime_found || data.explicitly_linked.get(); + runtime_found = runtime_found || data.dep_kind.get() == DepKind::Explicit; } }); @@ -745,14 +732,14 @@ impl<'a> CrateLoader<'a> { // in terms of everyone has a compatible panic runtime format, that's // performed later as part of the `dependency_format` module. 
let name = match desired_strategy { - PanicStrategy::Unwind => "panic_unwind", - PanicStrategy::Abort => "panic_abort", + PanicStrategy::Unwind => Symbol::intern("panic_unwind"), + PanicStrategy::Abort => Symbol::intern("panic_abort"), }; info!("panic runtime not found -- loading {}", name); - let (cnum, data, _) = self.resolve_crate(&None, name, name, None, - syntax_pos::DUMMY_SP, - PathKind::Crate, false); + let dep_kind = DepKind::Implicit; + let (cnum, data) = + self.resolve_crate(&None, name, name, None, DUMMY_SP, PathKind::Crate, dep_kind); // Sanity check the loaded crate to ensure it is indeed a panic runtime // and the panic strategy is indeed what we thought it was. @@ -786,7 +773,7 @@ impl<'a> CrateLoader<'a> { self.inject_dependency_if(cnum, "an allocator", &|data| data.needs_allocator()); found_required_allocator = found_required_allocator || - data.explicitly_linked.get(); + data.dep_kind.get() == DepKind::Explicit; } }); if !needs_allocator || found_required_allocator { return } @@ -807,7 +794,8 @@ impl<'a> CrateLoader<'a> { config::CrateTypeProcMacro | config::CrateTypeCdylib | config::CrateTypeStaticlib => need_lib_alloc = true, - config::CrateTypeRlib => {} + config::CrateTypeRlib | + config::CrateTypeMetadata => {} } } if !need_lib_alloc && !need_exe_alloc { return } @@ -828,13 +816,13 @@ impl<'a> CrateLoader<'a> { // * Staticlibs and Rust dylibs use system malloc // * Rust dylibs used as dependencies to rust use jemalloc let name = if need_lib_alloc && !self.sess.opts.cg.prefer_dynamic { - &self.sess.target.target.options.lib_allocation_crate + Symbol::intern(&self.sess.target.target.options.lib_allocation_crate) } else { - &self.sess.target.target.options.exe_allocation_crate + Symbol::intern(&self.sess.target.target.options.exe_allocation_crate) }; - let (cnum, data, _) = self.resolve_crate(&None, name, name, None, - syntax_pos::DUMMY_SP, - PathKind::Crate, false); + let dep_kind = DepKind::Implicit; + let (cnum, data) = + self.resolve_crate(&None, name, name, None, DUMMY_SP, PathKind::Crate, dep_kind); // Sanity check the crate we loaded to ensure that it is indeed an // allocator. 
@@ -892,13 +880,14 @@ impl<'a> CrateLoader<'a> { impl<'a> CrateLoader<'a> { pub fn preprocess(&mut self, krate: &ast::Crate) { for attr in krate.attrs.iter().filter(|m| m.name() == "link_args") { - if let Some(ref linkarg) = attr.value_str() { - self.cstore.add_used_link_args(&linkarg); + if let Some(linkarg) = attr.value_str() { + self.cstore.add_used_link_args(&linkarg.as_str()); } } } - fn process_foreign_mod(&mut self, i: &ast::Item, fm: &ast::ForeignMod) { + fn process_foreign_mod(&mut self, i: &ast::Item, fm: &ast::ForeignMod, + definitions: &Definitions) { if fm.abi == Abi::Rust || fm.abi == Abi::RustIntrinsic || fm.abi == Abi::PlatformIntrinsic { return; } @@ -906,7 +895,7 @@ impl<'a> CrateLoader<'a> { // First, add all of the custom #[link_args] attributes for m in i.attrs.iter().filter(|a| a.check_name("link_args")) { if let Some(linkarg) = m.value_str() { - self.cstore.add_used_link_args(&linkarg); + self.cstore.add_used_link_args(&linkarg.as_str()); } } @@ -918,7 +907,7 @@ impl<'a> CrateLoader<'a> { }; let kind = items.iter().find(|k| { k.check_name("kind") - }).and_then(|a| a.value_str()); + }).and_then(|a| a.value_str()).map(Symbol::as_str); let kind = match kind.as_ref().map(|s| &s[..]) { Some("static") => cstore::NativeStatic, Some("dylib") => cstore::NativeUnknown, @@ -940,21 +929,25 @@ impl<'a> CrateLoader<'a> { struct_span_err!(self.sess, m.span, E0459, "#[link(...)] specified without `name = \"foo\"`") .span_label(m.span, &format!("missing `name` argument")).emit(); - InternedString::new("foo") + Symbol::intern("foo") } }; - register_native_lib(self.sess, self.cstore, Some(m.span), n.to_string(), kind); - } - - // Finally, process the #[linked_from = "..."] attribute - for m in i.attrs.iter().filter(|a| a.check_name("linked_from")) { - let lib_name = match m.value_str() { - Some(name) => name, - None => continue, + let cfg = items.iter().find(|k| { + k.check_name("cfg") + }).and_then(|a| a.meta_item_list()); + let cfg = cfg.map(|list| { + list[0].meta_item().unwrap().clone() + }); + let foreign_items = fm.items.iter() + .map(|it| definitions.opt_def_index(it.id).unwrap()) + .collect(); + let lib = NativeLibrary { + name: n, + kind: kind, + cfg: cfg, + foreign_items: foreign_items, }; - let list = self.foreign_item_map.entry(lib_name.to_string()) - .or_insert(Vec::new()); - list.extend(fm.items.iter().map(|it| it.id)); + register_native_lib(self.sess, self.cstore, Some(m.span), lib); } } } @@ -968,62 +961,76 @@ impl<'a> middle::cstore::CrateLoader for CrateLoader<'a> { dump_crates(&self.cstore); } - for &(ref name, kind) in &self.sess.opts.libs { - register_native_lib(self.sess, self.cstore, None, name.clone(), kind); - } - self.register_statically_included_foreign_items(); - } - - fn process_item(&mut self, item: &ast::Item, definitions: &Definitions, load_macros: bool) - -> Option { - match item.node { - ast::ItemKind::ExternCrate(_) => {} - ast::ItemKind::ForeignMod(ref fm) => { - self.process_foreign_mod(item, fm); - return None; - } - _ => return None, - } - - let info = self.extract_crate_info(item).unwrap(); - let loaded_macros = if load_macros { - let ekrate = self.read_extension_crate(item.span, &info); - let loaded_macros = self.read_macros(item, &ekrate); - - // If this is a proc-macro crate or `#[no_link]` crate, it is only used at compile time, - // so we return here to avoid registering the crate. 
-            if loaded_macros.is_proc_macros() || !info.should_link {
-                return Some(loaded_macros);
-            }
-
-            // Register crate now to avoid double-reading metadata
-            if let PMDSource::Owned(lib) = ekrate.metadata {
-                if ekrate.target_only || config::host_triple() == self.sess.opts.target_triple {
-                    let ExternCrateInfo { ref ident, ref name, .. } = info;
-                    self.register_crate(&None, ident, name, item.span, lib, true);
+        // Process libs passed on the command line
+        // First, check for errors
+        let mut renames = FxHashSet();
+        for &(ref name, ref new_name, _) in &self.sess.opts.libs {
+            if let &Some(ref new_name) = new_name {
+                if new_name.is_empty() {
+                    self.sess.err(
+                        &format!("an empty renaming target was specified for library `{}`", name));
+                } else if !self.cstore.get_used_libraries().borrow().iter()
+                                      .any(|lib| lib.name == name as &str) {
+                    self.sess.err(&format!("renaming of the library `{}` was specified, \
+                                            however this crate contains no #[link(...)] \
+                                            attributes referencing this library.", name));
+                } else if renames.contains(name) {
+                    self.sess.err(&format!("multiple renamings were specified for library `{}`.",
+                                           name));
+                } else {
+                    renames.insert(name);
                 }
             }
-
-            Some(loaded_macros)
-        } else {
-            if !info.should_link {
-                return None;
+        }
+        // Update kind and, optionally, the name of all native libraries
+        // (there may be more than one) with the specified name.
+        for &(ref name, ref new_name, kind) in &self.sess.opts.libs {
+            let mut found = false;
+            for lib in self.cstore.get_used_libraries().borrow_mut().iter_mut() {
+                if lib.name == name as &str {
+                    lib.kind = kind;
+                    if let &Some(ref new_name) = new_name {
+                        lib.name = Symbol::intern(new_name);
+                    }
+                    found = true;
+                }
             }
-            None
-        };
+            if !found {
+                // Add if not found
+                let new_name = new_name.as_ref().map(|s| &**s); // &Option<String> -> Option<&str>
+                let lib = NativeLibrary {
+                    name: Symbol::intern(new_name.unwrap_or(name)),
+                    kind: kind,
+                    cfg: None,
+                    foreign_items: Vec::new(),
+                };
+                register_native_lib(self.sess, self.cstore, None, lib);
+            }
+        }
+        self.register_statically_included_foreign_items();
+        self.register_dllimport_foreign_items();
+    }
-        let (cnum, ..) = self.resolve_crate(
-            &None, &info.ident, &info.name, None, item.span, PathKind::Crate, true,
-        );
+    fn process_item(&mut self, item: &ast::Item, definitions: &Definitions) {
+        match item.node {
+            ast::ItemKind::ForeignMod(ref fm) => {
+                self.process_foreign_mod(item, fm, definitions)
+            },
+            ast::ItemKind::ExternCrate(_) => {
+                let info = self.extract_crate_info(item).unwrap();
+                let (cnum, ..)
= self.resolve_crate( + &None, info.ident, info.name, None, item.span, PathKind::Crate, info.dep_kind, + ); - let def_id = definitions.opt_local_def_id(item.id).unwrap(); - let len = definitions.def_path(def_id.index).data.len(); + let def_id = definitions.opt_local_def_id(item.id).unwrap(); + let len = definitions.def_path(def_id.index).data.len(); - let extern_crate = - ExternCrate { def_id: def_id, span: item.span, direct: true, path_len: len }; - self.update_extern_crate(cnum, extern_crate, &mut FnvHashSet()); - self.cstore.add_extern_mod_stmt_cnum(info.id, cnum); - - loaded_macros + let extern_crate = + ExternCrate { def_id: def_id, span: item.span, direct: true, path_len: len }; + self.update_extern_crate(cnum, extern_crate, &mut FxHashSet()); + self.cstore.add_extern_mod_stmt_cnum(info.id, cnum); + } + _ => {} + } } } diff --git a/src/librustc_metadata/cstore.rs b/src/librustc_metadata/cstore.rs index 58c70f959b..7700ebde18 100644 --- a/src/librustc_metadata/cstore.rs +++ b/src/librustc_metadata/cstore.rs @@ -15,24 +15,25 @@ use locator; use schema; use rustc::dep_graph::DepGraph; -use rustc::hir::def_id::{CRATE_DEF_INDEX, CrateNum, DefIndex, DefId}; +use rustc::hir::def_id::{CRATE_DEF_INDEX, LOCAL_CRATE, CrateNum, DefIndex, DefId}; use rustc::hir::map::DefKey; use rustc::hir::svh::Svh; -use rustc::middle::cstore::ExternCrate; +use rustc::middle::cstore::{DepKind, ExternCrate}; use rustc_back::PanicStrategy; use rustc_data_structures::indexed_vec::IndexVec; -use rustc::util::nodemap::{FnvHashMap, NodeMap, NodeSet, DefIdMap}; +use rustc::util::nodemap::{FxHashMap, FxHashSet, NodeMap, DefIdMap}; use std::cell::{RefCell, Cell}; use std::rc::Rc; -use std::path::PathBuf; use flate::Bytes; use syntax::{ast, attr}; +use syntax::ext::base::SyntaxExtension; +use syntax::symbol::Symbol; use syntax_pos; -pub use rustc::middle::cstore::{NativeLibraryKind, LinkagePreference}; +pub use rustc::middle::cstore::{NativeLibrary, NativeLibraryKind, LinkagePreference}; pub use rustc::middle::cstore::{NativeStatic, NativeFramework, NativeUnknown}; -pub use rustc::middle::cstore::{CrateSource, LinkMeta}; +pub use rustc::middle::cstore::{CrateSource, LinkMeta, LibSource}; // A map from external crate numbers (as decoded from some crate file) to // local crate numbers (as generated during this session). Each external @@ -43,6 +44,7 @@ pub type CrateNumMap = IndexVec; pub enum MetadataBlob { Inflated(Bytes), Archive(locator::ArchiveMetadata), + Raw(Vec), } /// Holds information about a syntax_pos::FileMap imported from another crate. @@ -57,7 +59,7 @@ pub struct ImportedFileMap { } pub struct CrateMetadata { - pub name: String, + pub name: Symbol, /// Information about the extern crate that caused this crate to /// be loaded. If this is `None`, then the crate was injected @@ -76,13 +78,14 @@ pub struct CrateMetadata { /// hashmap, which gives the reverse mapping. This allows us to /// quickly retrace a `DefPath`, which is needed for incremental /// compilation support. - pub key_map: FnvHashMap, + pub key_map: FxHashMap, - /// Flag if this crate is required by an rlib version of this crate, or in - /// other words whether it was explicitly linked to. An example of a crate - /// where this is false is when an allocator crate is injected into the - /// dependency list, and therefore isn't actually needed to link an rlib. 
-    pub explicitly_linked: Cell<bool>,
+    pub dep_kind: Cell<DepKind>,
+    pub source: CrateSource,
+
+    pub proc_macros: Option<Vec<(ast::Name, Rc<SyntaxExtension>)>>,
+    // Foreign items imported from a dylib (Windows only)
+    pub dllimport_foreign_items: FxHashSet<DefIndex>,
 }

 pub struct CachedInlinedItem {
@@ -94,13 +97,13 @@ pub struct CStore {
     pub dep_graph: DepGraph,
-    metas: RefCell<FnvHashMap<CrateNum, Rc<CrateMetadata>>>,
+    metas: RefCell<FxHashMap<CrateNum, Rc<CrateMetadata>>>,
     /// Map from NodeId's of local extern crate statements to crate numbers
     extern_mod_crate_map: RefCell<NodeMap<CrateNum>>,
-    used_crate_sources: RefCell<Vec<CrateSource>>,
-    used_libraries: RefCell<Vec<(String, NativeLibraryKind)>>,
+    used_libraries: RefCell<Vec<NativeLibrary>>,
     used_link_args: RefCell<Vec<String>>,
-    statically_included_foreign_items: RefCell<NodeSet>,
+    statically_included_foreign_items: RefCell<FxHashSet<DefIndex>>,
+    pub dllimport_foreign_items: RefCell<FxHashSet<DefIndex>>,
     pub inlined_item_cache: RefCell<DefIdMap<Option<CachedInlinedItem>>>,
     pub defid_for_inlined_node: RefCell<NodeMap<DefId>>,
     pub visible_parent_map: RefCell<DefIdMap<DefId>>,
@@ -110,15 +113,15 @@ impl CStore {
     pub fn new(dep_graph: &DepGraph) -> CStore {
         CStore {
             dep_graph: dep_graph.clone(),
-            metas: RefCell::new(FnvHashMap()),
-            extern_mod_crate_map: RefCell::new(FnvHashMap()),
-            used_crate_sources: RefCell::new(Vec::new()),
+            metas: RefCell::new(FxHashMap()),
+            extern_mod_crate_map: RefCell::new(FxHashMap()),
             used_libraries: RefCell::new(Vec::new()),
             used_link_args: RefCell::new(Vec::new()),
-            statically_included_foreign_items: RefCell::new(NodeSet()),
-            visible_parent_map: RefCell::new(FnvHashMap()),
-            inlined_item_cache: RefCell::new(FnvHashMap()),
-            defid_for_inlined_node: RefCell::new(FnvHashMap()),
+            statically_included_foreign_items: RefCell::new(FxHashSet()),
+            dllimport_foreign_items: RefCell::new(FxHashSet()),
+            visible_parent_map: RefCell::new(FxHashMap()),
+            inlined_item_cache: RefCell::new(FxHashMap()),
+            defid_for_inlined_node: RefCell::new(FxHashMap()),
         }
     }
@@ -146,38 +149,9 @@ impl CStore {
         }
     }

-    /// Like `iter_crate_data`, but passes source paths (if available) as well.
-    pub fn iter_crate_data_origins<I>(&self, mut i: I)
-        where I: FnMut(CrateNum, &CrateMetadata, Option<CrateSource>)
-    {
-        for (&k, v) in self.metas.borrow().iter() {
-            let origin = self.opt_used_crate_source(k);
-            origin.as_ref().map(|cs| {
-                assert!(k == cs.cnum);
-            });
-            i(k, &v, origin);
-        }
-    }
-
-    pub fn add_used_crate_source(&self, src: CrateSource) {
-        let mut used_crate_sources = self.used_crate_sources.borrow_mut();
-        if !used_crate_sources.contains(&src) {
-            used_crate_sources.push(src);
-        }
-    }
-
-    pub fn opt_used_crate_source(&self, cnum: CrateNum) -> Option<CrateSource> {
-        self.used_crate_sources
-            .borrow_mut()
-            .iter()
-            .find(|source| source.cnum == cnum)
-            .cloned()
-    }
-
     pub fn reset(&self) {
         self.metas.borrow_mut().clear();
         self.extern_mod_crate_map.borrow_mut().clear();
-        self.used_crate_sources.borrow_mut().clear();
         self.used_libraries.borrow_mut().clear();
         self.used_link_args.borrow_mut().clear();
         self.statically_included_foreign_items.borrow_mut().clear();
@@ -216,22 +190,33 @@ impl CStore {
     // positions.
pub fn do_get_used_crates(&self, prefer: LinkagePreference) - -> Vec<(CrateNum, Option)> { + -> Vec<(CrateNum, LibSource)> { let mut ordering = Vec::new(); for (&num, _) in self.metas.borrow().iter() { self.push_dependencies_in_postorder(&mut ordering, num); } info!("topological ordering: {:?}", ordering); ordering.reverse(); - let mut libs = self.used_crate_sources + let mut libs = self.metas .borrow() .iter() - .map(|src| { - (src.cnum, - match prefer { - LinkagePreference::RequireDynamic => src.dylib.clone().map(|p| p.0), - LinkagePreference::RequireStatic => src.rlib.clone().map(|p| p.0), - }) + .filter_map(|(&cnum, data)| { + if data.dep_kind.get().macros_only() { return None; } + let path = match prefer { + LinkagePreference::RequireDynamic => data.source.dylib.clone().map(|p| p.0), + LinkagePreference::RequireStatic => data.source.rlib.clone().map(|p| p.0), + }; + let path = match path { + Some(p) => LibSource::Some(p), + None => { + if data.source.rmeta.is_some() { + LibSource::MetadataOnly + } else { + LibSource::None + } + } + }; + Some((cnum, path)) }) .collect::>(); libs.sort_by(|&(a, _), &(b, _)| { @@ -242,12 +227,12 @@ impl CStore { libs } - pub fn add_used_library(&self, lib: String, kind: NativeLibraryKind) { - assert!(!lib.is_empty()); - self.used_libraries.borrow_mut().push((lib, kind)); + pub fn add_used_library(&self, lib: NativeLibrary) { + assert!(!lib.name.as_str().is_empty()); + self.used_libraries.borrow_mut().push(lib); } - pub fn get_used_libraries<'a>(&'a self) -> &'a RefCell> { + pub fn get_used_libraries(&self) -> &RefCell> { &self.used_libraries } @@ -265,12 +250,13 @@ impl CStore { self.extern_mod_crate_map.borrow_mut().insert(emod_id, cnum); } - pub fn add_statically_included_foreign_item(&self, id: ast::NodeId) { + pub fn add_statically_included_foreign_item(&self, id: DefIndex) { self.statically_included_foreign_items.borrow_mut().insert(id); } - pub fn do_is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool { - self.statically_included_foreign_items.borrow().contains(&id) + pub fn do_is_statically_included_foreign_item(&self, def_id: DefId) -> bool { + assert!(def_id.krate == LOCAL_CRATE); + self.statically_included_foreign_items.borrow().contains(&def_id.index) } pub fn do_extern_mod_stmt_cnum(&self, emod_id: ast::NodeId) -> Option { @@ -279,14 +265,14 @@ impl CStore { } impl CrateMetadata { - pub fn name(&self) -> &str { - &self.root.name + pub fn name(&self) -> Symbol { + self.root.name } pub fn hash(&self) -> Svh { self.root.hash } - pub fn disambiguator(&self) -> &str { - &self.root.disambiguator + pub fn disambiguator(&self) -> Symbol { + self.root.disambiguator } pub fn is_staged_api(&self) -> bool { diff --git a/src/librustc_metadata/cstore_impl.rs b/src/librustc_metadata/cstore_impl.rs index a618c98ff7..79daab6bbc 100644 --- a/src/librustc_metadata/cstore_impl.rs +++ b/src/librustc_metadata/cstore_impl.rs @@ -13,12 +13,13 @@ use encoder; use locator; use schema; -use rustc::middle::cstore::{InlinedItem, CrateStore, CrateSource, ExternCrate}; -use rustc::middle::cstore::{NativeLibraryKind, LinkMeta, LinkagePreference}; +use rustc::middle::cstore::{InlinedItem, CrateStore, CrateSource, LibSource, DepKind, ExternCrate}; +use rustc::middle::cstore::{NativeLibrary, LinkMeta, LinkagePreference, LoadedMacro}; use rustc::hir::def::{self, Def}; use rustc::middle::lang_items; +use rustc::session::Session; use rustc::ty::{self, Ty, TyCtxt}; -use rustc::hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX}; +use 
rustc::hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX, LOCAL_CRATE}; use rustc::dep_graph::DepNode; use rustc::hir::map as hir_map; @@ -27,10 +28,11 @@ use rustc::mir::Mir; use rustc::util::nodemap::{NodeSet, DefIdMap}; use rustc_back::PanicStrategy; -use std::path::PathBuf; use syntax::ast; use syntax::attr; -use syntax::parse::token; +use syntax::parse::new_parser_from_source_str; +use syntax::symbol::Symbol; +use syntax_pos::{mk_sp, Span}; use rustc::hir::svh::Svh; use rustc_back::target::Target; use rustc::hir; @@ -41,6 +43,11 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(def.krate).get_def(def.index) } + fn def_span(&self, sess: &Session, def: DefId) -> Span { + self.dep_graph.read(DepNode::MetaData(def)); + self.get_crate_data(def.krate).get_span(def.index, sess) + } + fn stability(&self, def: DefId) -> Option { self.dep_graph.read(DepNode::MetaData(def)); self.get_crate_data(def.krate).get_stability(def.index) @@ -108,13 +115,13 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(def_id.krate).get_item_attrs(def_id.index) } - fn trait_def<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) -> ty::TraitDef<'tcx> + fn trait_def<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) -> ty::TraitDef { self.dep_graph.read(DepNode::MetaData(def)); self.get_crate_data(def.krate).get_trait_def(def.index, tcx) } - fn adt_def<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) -> ty::AdtDefMaster<'tcx> + fn adt_def<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) -> &'tcx ty::AdtDef { self.dep_graph.read(DepNode::MetaData(def)); self.get_crate_data(def.krate).get_adt_def(def.index, tcx) @@ -144,7 +151,7 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { result } - fn impl_or_trait_items(&self, def_id: DefId) -> Vec { + fn associated_item_def_ids(&self, def_id: DefId) -> Vec { self.dep_graph.read(DepNode::MetaData(def_id)); let mut result = vec![]; self.get_crate_data(def_id.krate) @@ -182,11 +189,10 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(def_id.krate).get_trait_of_item(def_id.index) } - fn impl_or_trait_item<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) - -> Option> + fn associated_item<'a>(&self, def: DefId) -> Option { self.dep_graph.read(DepNode::MetaData(def)); - self.get_crate_data(def.krate).get_impl_or_trait_item(def.index, tcx) + self.get_crate_data(def.krate).get_associated_item(def.index) } fn is_const_fn(&self, did: DefId) -> bool @@ -210,9 +216,17 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(did.krate).is_foreign_item(did.index) } - fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool + fn is_statically_included_foreign_item(&self, def_id: DefId) -> bool { - self.do_is_statically_included_foreign_item(id) + self.do_is_statically_included_foreign_item(def_id) + } + + fn is_dllimport_foreign_item(&self, def_id: DefId) -> bool { + if def_id.krate == LOCAL_CRATE { + self.dllimport_foreign_items.borrow().contains(&def_id.index) + } else { + self.get_crate_data(def_id.krate).is_dllimport_foreign_item(def_id.index) + } } fn dylib_dependency_formats(&self, cnum: CrateNum) @@ -221,6 +235,17 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(cnum).get_dylib_dependency_formats() } + fn dep_kind(&self, cnum: CrateNum) -> DepKind + { + self.get_crate_data(cnum).dep_kind.get() + } + + fn export_macros(&self, cnum: CrateNum) { + if self.get_crate_data(cnum).dep_kind.get() == DepKind::UnexportedMacrosOnly { + 
self.get_crate_data(cnum).dep_kind.set(DepKind::MacrosOnly) + } + } + fn lang_items(&self, cnum: CrateNum) -> Vec<(DefIndex, usize)> { self.get_crate_data(cnum).get_lang_items() @@ -237,11 +262,6 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(cnum).is_staged_api() } - fn is_explicitly_linked(&self, cnum: CrateNum) -> bool - { - self.get_crate_data(cnum).explicitly_linked.get() - } - fn is_allocator(&self, cnum: CrateNum) -> bool { self.get_crate_data(cnum).is_allocator() @@ -260,14 +280,14 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(cnum).panic_strategy() } - fn crate_name(&self, cnum: CrateNum) -> token::InternedString + fn crate_name(&self, cnum: CrateNum) -> Symbol { - token::intern_and_get_ident(&self.get_crate_data(cnum).name[..]) + self.get_crate_data(cnum).name } - fn original_crate_name(&self, cnum: CrateNum) -> token::InternedString + fn original_crate_name(&self, cnum: CrateNum) -> Symbol { - token::intern_and_get_ident(&self.get_crate_data(cnum).name()) + self.get_crate_data(cnum).name() } fn extern_crate(&self, cnum: CrateNum) -> Option @@ -280,9 +300,9 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_hash(cnum) } - fn crate_disambiguator(&self, cnum: CrateNum) -> token::InternedString + fn crate_disambiguator(&self, cnum: CrateNum) -> Symbol { - token::intern_and_get_ident(&self.get_crate_data(cnum).disambiguator()) + self.get_crate_data(cnum).disambiguator() } fn plugin_registrar_fn(&self, cnum: CrateNum) -> Option @@ -293,14 +313,22 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { }) } - fn native_libraries(&self, cnum: CrateNum) -> Vec<(NativeLibraryKind, String)> + fn derive_registrar_fn(&self, cnum: CrateNum) -> Option + { + self.get_crate_data(cnum).root.macro_derive_registrar.map(|index| DefId { + krate: cnum, + index: index + }) + } + + fn native_libraries(&self, cnum: CrateNum) -> Vec { self.get_crate_data(cnum).get_native_libraries() } - fn reachable_ids(&self, cnum: CrateNum) -> Vec + fn exported_symbols(&self, cnum: CrateNum) -> Vec { - self.get_crate_data(cnum).get_reachable_ids() + self.get_crate_data(cnum).get_exported_symbols() } fn is_no_builtins(&self, cnum: CrateNum) -> bool { @@ -351,6 +379,51 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { result } + fn load_macro(&self, id: DefId, sess: &Session) -> LoadedMacro { + let data = self.get_crate_data(id.krate); + if let Some(ref proc_macros) = data.proc_macros { + return LoadedMacro::ProcMacro(proc_macros[id.index.as_usize() - 1].1.clone()); + } + + let (name, def) = data.get_macro(id.index); + let source_name = format!("<{} macros>", name); + + // NB: Don't use parse_tts_from_source_str because it parses with quote_depth > 0. 
+ let mut parser = new_parser_from_source_str(&sess.parse_sess, source_name, def.body); + + let lo = parser.span.lo; + let body = match parser.parse_all_token_trees() { + Ok(body) => body, + Err(mut err) => { + err.emit(); + sess.abort_if_errors(); + unreachable!(); + } + }; + let local_span = mk_sp(lo, parser.prev_span.hi); + + // Mark the attrs as used + let attrs = data.get_item_attrs(id.index); + for attr in &attrs { + attr::mark_used(attr); + } + + let name = data.def_key(id.index).disambiguated_data.data + .get_opt_name().expect("no name in load_macro"); + sess.imported_macro_spans.borrow_mut() + .insert(local_span, (name.to_string(), data.get_span(id.index, sess))); + + LoadedMacro::MacroRules(ast::MacroDef { + ident: ast::Ident::with_empty_ctxt(name), + id: ast::DUMMY_NODE_ID, + span: local_span, + imported_from: None, // FIXME + allow_internal_unstable: attr::contains_name(&attrs, "allow_internal_unstable"), + attrs: attrs, + body: body, + }) + } + fn maybe_get_item_ast<'a>(&'tcx self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) @@ -393,12 +466,10 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { let find_inlined_item_root = |inlined_item_id| { let mut node = inlined_item_id; - let mut path = Vec::with_capacity(10); // If we can't find the inline root after a thousand hops, we can // be pretty sure there's something wrong with the HIR map. for _ in 0 .. 1000 { - path.push(node); let parent_node = tcx.map.get_parent_node(node); if parent_node == node { return node; @@ -414,27 +485,9 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { .borrow_mut() .insert(def_id, None); } - Some(&InlinedItem::Item(d, ref item)) => { - assert_eq!(d, def_id); - let inlined_root_node_id = find_inlined_item_root(item.id); - cache_inlined_item(def_id, item.id, inlined_root_node_id); - } - Some(&InlinedItem::TraitItem(_, ref trait_item)) => { - let inlined_root_node_id = find_inlined_item_root(trait_item.id); - cache_inlined_item(def_id, trait_item.id, inlined_root_node_id); - - // Associated consts already have to be evaluated in `typeck`, so - // the logic to do that already exists in `middle`. In order to - // reuse that code, it needs to be able to look up the traits for - // inlined items. - let ty_trait_item = tcx.impl_or_trait_item(def_id).clone(); - let trait_item_def_id = tcx.map.local_def_id(trait_item.id); - tcx.impl_or_trait_items.borrow_mut() - .insert(trait_item_def_id, ty_trait_item); - } - Some(&InlinedItem::ImplItem(_, ref impl_item)) => { - let inlined_root_node_id = find_inlined_item_root(impl_item.id); - cache_inlined_item(def_id, impl_item.id, inlined_root_node_id); + Some(&InlinedItem { ref body, .. 
}) => { + let inlined_root_node_id = find_inlined_item_root(body.id); + cache_inlined_item(def_id, inlined_root_node_id, inlined_root_node_id); } } @@ -473,6 +526,11 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { self.get_crate_data(def.krate).is_item_mir_available(def.index) } + fn can_have_local_instance<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId) -> bool { + self.dep_graph.read(DepNode::MetaData(def)); + def.is_local() || self.get_crate_data(def.krate).can_have_local_instance(tcx, def.index) + } + fn crates(&self) -> Vec { let mut result = vec![]; @@ -480,7 +538,7 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { result } - fn used_libraries(&self) -> Vec<(String, NativeLibraryKind)> + fn used_libraries(&self) -> Vec { self.get_used_libraries().borrow().clone() } @@ -500,14 +558,14 @@ impl<'tcx> CrateStore<'tcx> for cstore::CStore { locator::meta_section_name(target) } - fn used_crates(&self, prefer: LinkagePreference) -> Vec<(CrateNum, Option)> + fn used_crates(&self, prefer: LinkagePreference) -> Vec<(CrateNum, LibSource)> { self.do_get_used_crates(prefer) } fn used_crate_source(&self, cnum: CrateNum) -> CrateSource { - self.opt_used_crate_source(cnum).unwrap() + self.get_crate_data(cnum).source.clone() } fn extern_mod_stmt_cnum(&self, emod_id: ast::NodeId) -> Option diff --git a/src/librustc_metadata/decoder.rs b/src/librustc_metadata/decoder.rs index ccd497860d..af325cefa6 100644 --- a/src/librustc_metadata/decoder.rs +++ b/src/librustc_metadata/decoder.rs @@ -11,20 +11,21 @@ // Decoding metadata from a single crate's metadata use astencode::decode_inlined_item; -use cstore::{self, CrateMetadata, MetadataBlob, NativeLibraryKind}; +use cstore::{self, CrateMetadata, MetadataBlob, NativeLibrary}; use index::Index; use schema::*; use rustc::hir::map as hir_map; use rustc::hir::map::{DefKey, DefPathData}; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::FxHashMap; use rustc::hir; use rustc::hir::intravisit::IdRange; use rustc::middle::cstore::{InlinedItem, LinkagePreference}; use rustc::hir::def::{self, Def, CtorKind}; -use rustc::hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE}; +use rustc::hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX, LOCAL_CRATE}; use rustc::middle::lang_items; +use rustc::session::Session; use rustc::ty::{self, Ty, TyCtxt}; use rustc::ty::subst::Substs; @@ -36,7 +37,6 @@ use std::borrow::Cow; use std::cell::Ref; use std::io; use std::mem; -use std::rc::Rc; use std::str; use std::u32; @@ -44,12 +44,13 @@ use rustc_serialize::{Decodable, Decoder, SpecializedDecoder, opaque}; use syntax::attr; use syntax::ast::{self, NodeId}; use syntax::codemap; -use syntax_pos::{self, Span, BytePos, Pos}; +use syntax_pos::{self, Span, BytePos, Pos, DUMMY_SP}; pub struct DecodeContext<'a, 'tcx: 'a> { opaque: opaque::Decoder<'a>, - tcx: Option>, cdata: Option<&'a CrateMetadata>, + sess: Option<&'a Session>, + tcx: Option>, from_id_range: IdRange, to_id_range: IdRange, @@ -62,22 +63,21 @@ pub struct DecodeContext<'a, 'tcx: 'a> { /// Abstract over the various ways one can create metadata decoders. 
pub trait Metadata<'a, 'tcx>: Copy { fn raw_bytes(self) -> &'a [u8]; - fn cdata(self) -> Option<&'a CrateMetadata> { - None - } - fn tcx(self) -> Option> { - None - } + fn cdata(self) -> Option<&'a CrateMetadata> { None } + fn sess(self) -> Option<&'a Session> { None } + fn tcx(self) -> Option> { None } fn decoder(self, pos: usize) -> DecodeContext<'a, 'tcx> { let id_range = IdRange { min: NodeId::from_u32(u32::MIN), max: NodeId::from_u32(u32::MAX), }; + let tcx = self.tcx(); DecodeContext { opaque: opaque::Decoder::new(self.raw_bytes(), pos), cdata: self.cdata(), - tcx: self.tcx(), + sess: self.sess().or(tcx.map(|tcx| tcx.sess)), + tcx: tcx, from_id_range: id_range, to_id_range: id_range, last_filemap_index: 0, @@ -89,8 +89,9 @@ pub trait Metadata<'a, 'tcx>: Copy { impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a MetadataBlob { fn raw_bytes(self) -> &'a [u8] { match *self { - MetadataBlob::Inflated(ref vec) => &vec[..], + MetadataBlob::Inflated(ref vec) => vec, MetadataBlob::Archive(ref ar) => ar.as_slice(), + MetadataBlob::Raw(ref vec) => vec, } } } @@ -104,6 +105,18 @@ impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a CrateMetadata { } } +impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, &'a Session) { + fn raw_bytes(self) -> &'a [u8] { + self.0.raw_bytes() + } + fn cdata(self) -> Option<&'a CrateMetadata> { + Some(self.0) + } + fn sess(self) -> Option<&'a Session> { + Some(&self.1) + } +} + impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, TyCtxt<'a, 'tcx, 'tcx>) { fn raw_bytes(self) -> &'a [u8] { self.0.raw_bytes() @@ -280,8 +293,8 @@ impl<'a, 'tcx> SpecializedDecoder for DecodeContext<'a, 'tcx> { let lo = BytePos::decode(self)?; let hi = BytePos::decode(self)?; - let tcx = if let Some(tcx) = self.tcx { - tcx + let sess = if let Some(sess) = self.sess { + sess } else { return Ok(syntax_pos::mk_sp(lo, hi)); }; @@ -299,7 +312,7 @@ impl<'a, 'tcx> SpecializedDecoder for DecodeContext<'a, 'tcx> { (lo, hi) }; - let imported_filemaps = self.cdata().imported_filemaps(&tcx.sess.codemap()); + let imported_filemaps = self.cdata().imported_filemaps(&sess.codemap()); let filemap = { // Optimize for the case that most spans within a translated item // originate from the same filemap. @@ -409,18 +422,31 @@ impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::BareFnTy<'tcx>> for DecodeContext<'a } } -impl<'a, 'tcx> SpecializedDecoder> for DecodeContext<'a, 'tcx> { - fn specialized_decode(&mut self) -> Result, Self::Error> { +impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::AdtDef> for DecodeContext<'a, 'tcx> { + fn specialized_decode(&mut self) -> Result<&'tcx ty::AdtDef, Self::Error> { let def_id = DefId::decode(self)?; Ok(self.tcx().lookup_adt_def(def_id)) } } +impl<'a, 'tcx> SpecializedDecoder<&'tcx ty::Slice>> + for DecodeContext<'a, 'tcx> { + fn specialized_decode(&mut self) + -> Result<&'tcx ty::Slice>, Self::Error> { + Ok(self.tcx().mk_existential_predicates((0..self.read_usize()?) + .map(|_| Decodable::decode(self)))?) + } +} + impl<'a, 'tcx> MetadataBlob { pub fn is_compatible(&self) -> bool { self.raw_bytes().starts_with(METADATA_HEADER) } + pub fn get_rustc_version(&self) -> String { + Lazy::with_position(METADATA_HEADER.len() + 4).decode(self) + } + pub fn get_root(&self) -> CrateRoot { let slice = self.raw_bytes(); let offset = METADATA_HEADER.len(); @@ -432,7 +458,7 @@ impl<'a, 'tcx> MetadataBlob { /// Go through each item in the metadata and create a map from that /// item's def-key to the item's DefIndex. 
- pub fn load_key_map(&self, index: LazySeq) -> FnvHashMap { + pub fn load_key_map(&self, index: LazySeq) -> FxHashMap { index.iter_enumerated(self.raw_bytes()) .map(|(index, item)| (item.decode(self).def_key.decode(self), index)) .collect() @@ -469,6 +495,7 @@ impl<'tcx> EntryKind<'tcx> { EntryKind::Variant(_) => Def::Variant(did), EntryKind::Trait(_) => Def::Trait(did), EntryKind::Enum => Def::Enum(did), + EntryKind::MacroDef(_) => Def::Macro(did), EntryKind::ForeignMod | EntryKind::Impl(_) | @@ -477,10 +504,23 @@ impl<'tcx> EntryKind<'tcx> { EntryKind::Closure(_) => return None, }) } + fn is_const_fn(&self, meta: &CrateMetadata) -> bool { + let constness = match *self { + EntryKind::Method(data) => data.decode(meta).fn_data.constness, + EntryKind::Fn(data) => data.decode(meta).constness, + _ => hir::Constness::NotConst, + }; + constness == hir::Constness::Const + } } impl<'a, 'tcx> CrateMetadata { + fn is_proc_macro(&self, id: DefIndex) -> bool { + self.proc_macros.is_some() && id != CRATE_DEF_INDEX + } + fn maybe_entry(&self, item_id: DefIndex) -> Option>> { + assert!(!self.is_proc_macro(item_id)); self.root.index.lookup(self.blob.raw_bytes(), item_id) } @@ -513,29 +553,38 @@ impl<'a, 'tcx> CrateMetadata { } pub fn get_def(&self, index: DefIndex) -> Option { - self.entry(index).kind.to_def(self.local_def_id(index)) + match self.is_proc_macro(index) { + true => Some(Def::Macro(self.local_def_id(index))), + false => self.entry(index).kind.to_def(self.local_def_id(index)), + } + } + + pub fn get_span(&self, index: DefIndex, sess: &Session) -> Span { + match self.is_proc_macro(index) { + true => DUMMY_SP, + false => self.entry(index).span.decode((self, sess)), + } } pub fn get_trait_def(&self, item_id: DefIndex, tcx: TyCtxt<'a, 'tcx, 'tcx>) - -> ty::TraitDef<'tcx> { + -> ty::TraitDef { let data = match self.entry(item_id).kind { EntryKind::Trait(data) => data.decode(self), _ => bug!(), }; - ty::TraitDef::new(data.unsafety, + ty::TraitDef::new(self.local_def_id(item_id), + data.unsafety, data.paren_sugar, - tcx.lookup_generics(self.local_def_id(item_id)), - data.trait_ref.decode((self, tcx)), self.def_path(item_id).unwrap().deterministic_hash(tcx)) } fn get_variant(&self, item: &Entry<'tcx>, index: DefIndex) - -> (ty::VariantDefData<'tcx, 'tcx>, Option) { + -> (ty::VariantDef, Option) { let data = match item.kind { EntryKind::Variant(data) | EntryKind::Struct(data) | @@ -543,28 +592,26 @@ impl<'a, 'tcx> CrateMetadata { _ => bug!(), }; - let fields = item.children - .decode(self) - .map(|index| { + (ty::VariantDef { + did: self.local_def_id(data.struct_ctor.unwrap_or(index)), + name: self.item_name(item), + fields: item.children.decode(self).map(|index| { let f = self.entry(index); - ty::FieldDefData::new(self.local_def_id(index), self.item_name(&f), f.visibility) - }) - .collect(); - - (ty::VariantDefData { - did: self.local_def_id(data.struct_ctor.unwrap_or(index)), - name: self.item_name(item), - fields: fields, - disr_val: ConstInt::Infer(data.disr), - ctor_kind: data.ctor_kind, - }, - data.struct_ctor) + ty::FieldDef { + did: self.local_def_id(index), + name: self.item_name(&f), + vis: f.visibility + } + }).collect(), + disr_val: ConstInt::Infer(data.disr), + ctor_kind: data.ctor_kind, + }, data.struct_ctor) } pub fn get_adt_def(&self, item_id: DefIndex, tcx: TyCtxt<'a, 'tcx, 'tcx>) - -> ty::AdtDefMaster<'tcx> { + -> &'tcx ty::AdtDef { let item = self.entry(item_id); let did = self.local_def_id(item_id); let mut ctor_index = None; @@ -589,26 +636,10 @@ impl<'a, 'tcx> CrateMetadata { _ 
=> bug!("get_adt_def called on a non-ADT {:?}", did), }; - let adt = tcx.intern_adt_def(did, kind, variants); + let adt = tcx.alloc_adt_def(did, kind, variants); if let Some(ctor_index) = ctor_index { // Make adt definition available through constructor id as well. - tcx.insert_adt_def(self.local_def_id(ctor_index), adt); - } - - // this needs to be done *after* the variant is interned, - // to support recursive structures - for variant in &adt.variants { - for field in &variant.fields { - debug!("evaluating the type of {:?}::{:?}", - variant.name, - field.name); - let ty = self.get_type(field.did.index, tcx); - field.fulfill_ty(ty); - debug!("evaluating the type of {:?}::{:?}: {:?}", - variant.name, - field.name, - ty); - } + tcx.adt_defs.borrow_mut().insert(self.local_def_id(ctor_index), adt); } adt @@ -643,15 +674,24 @@ impl<'a, 'tcx> CrateMetadata { } pub fn get_stability(&self, id: DefIndex) -> Option { - self.entry(id).stability.map(|stab| stab.decode(self)) + match self.is_proc_macro(id) { + true => None, + false => self.entry(id).stability.map(|stab| stab.decode(self)), + } } pub fn get_deprecation(&self, id: DefIndex) -> Option { - self.entry(id).deprecation.map(|depr| depr.decode(self)) + match self.is_proc_macro(id) { + true => None, + false => self.entry(id).deprecation.map(|depr| depr.decode(self)), + } } pub fn get_visibility(&self, id: DefIndex) -> ty::Visibility { - self.entry(id).visibility + match self.is_proc_macro(id) { + true => ty::Visibility::Public, + false => self.entry(id).visibility, + } } fn get_impl_data(&self, id: DefIndex) -> ImplData<'tcx> { @@ -691,6 +731,16 @@ impl<'a, 'tcx> CrateMetadata { pub fn each_child_of_item(&self, id: DefIndex, mut callback: F) where F: FnMut(def::Export) { + if let Some(ref proc_macros) = self.proc_macros { + if id == CRATE_DEF_INDEX { + for (id, &(name, _)) in proc_macros.iter().enumerate() { + let def = Def::Macro(DefId { krate: self.cnum, index: DefIndex::new(id + 1) }); + callback(def::Export { name: name, def: def }); + } + } + return + } + // Find the item. let item = match self.maybe_entry(id) { None => return, @@ -698,10 +748,21 @@ impl<'a, 'tcx> CrateMetadata { }; // Iterate over all children. + let macros_only = self.dep_kind.get().macros_only(); for child_index in item.children.decode(self) { + if macros_only { + continue + } + // Get the item. if let Some(child) = self.maybe_entry(child_index) { let child = child.decode(self); + match child.kind { + EntryKind::MacroDef(..) => {} + _ if macros_only => continue, + _ => {} + } + // Hand off the item to the callback. match child.kind { // FIXME(eddyb) Don't encode these in children. @@ -760,6 +821,11 @@ impl<'a, 'tcx> CrateMetadata { if let EntryKind::Mod(data) = item.kind { for exp in data.decode(self).reexports.decode(self) { + match exp.def { + Def::Macro(..) 
=> {} + _ if macros_only => continue, + _ => {} + } callback(exp); } } @@ -770,6 +836,7 @@ impl<'a, 'tcx> CrateMetadata { id: DefIndex) -> Option<&'tcx InlinedItem> { debug!("Looking up item: {:?}", id); + if self.is_proc_macro(id) { return None; } let item_doc = self.entry(id); let item_did = self.local_def_id(id); let parent_def_id = self.local_def_id(self.def_key(id).parent.unwrap()); @@ -782,20 +849,44 @@ impl<'a, 'tcx> CrateMetadata { } pub fn is_item_mir_available(&self, id: DefIndex) -> bool { + !self.is_proc_macro(id) && self.maybe_entry(id).and_then(|item| item.decode(self).mir).is_some() } + pub fn can_have_local_instance(&self, + tcx: TyCtxt<'a, 'tcx, 'tcx>, + id: DefIndex) -> bool { + self.maybe_entry(id).map_or(false, |item| { + let item = item.decode(self); + // if we don't have a MIR, then this item was never meant to be locally instantiated + // or we have a bug in the metadata serialization + item.mir.is_some() && ( + // items with generics always can have local instances if monomorphized + item.generics.map_or(false, |generics| { + let generics = generics.decode((self, tcx)); + generics.parent_types != 0 || !generics.types.is_empty() + }) || + match item.kind { + EntryKind::Closure(_) => true, + _ => false, + } || + item.kind.is_const_fn(self) || + attr::requests_inline(&self.get_attributes(&item)) + ) + }) + } + pub fn maybe_get_item_mir(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, id: DefIndex) -> Option> { - self.entry(id).mir.map(|mir| mir.decode((self, tcx))) + match self.is_proc_macro(id) { + true => None, + false => self.entry(id).mir.map(|mir| mir.decode((self, tcx))), + } } - pub fn get_impl_or_trait_item(&self, - id: DefIndex, - tcx: TyCtxt<'a, 'tcx, 'tcx>) - -> Option> { + pub fn get_associated_item(&self, id: DefIndex) -> Option { let item = self.entry(id); let parent_and_name = || { let def_key = item.def_key.decode(self); @@ -806,52 +897,40 @@ impl<'a, 'tcx> CrateMetadata { Some(match item.kind { EntryKind::AssociatedConst(container) => { let (parent, name) = parent_and_name(); - ty::ConstTraitItem(Rc::new(ty::AssociatedConst { + ty::AssociatedItem { name: name, - ty: item.ty.unwrap().decode((self, tcx)), + kind: ty::AssociatedKind::Const, vis: item.visibility, defaultness: container.defaultness(), def_id: self.local_def_id(id), container: container.with_def_id(parent), - has_value: container.has_body(), - })) + method_has_self_argument: false + } } EntryKind::Method(data) => { let (parent, name) = parent_and_name(); - let ity = item.ty.unwrap().decode((self, tcx)); - let fty = match ity.sty { - ty::TyFnDef(.., fty) => fty, - _ => { - bug!("the type {:?} of the method {:?} is not a function?", - ity, - name) - } - }; - let data = data.decode(self); - ty::MethodTraitItem(Rc::new(ty::Method { + ty::AssociatedItem { name: name, - generics: tcx.lookup_generics(self.local_def_id(id)), - predicates: item.predicates.unwrap().decode((self, tcx)), - fty: fty, - explicit_self: data.explicit_self.decode((self, tcx)), + kind: ty::AssociatedKind::Method, vis: item.visibility, defaultness: data.container.defaultness(), - has_body: data.container.has_body(), def_id: self.local_def_id(id), container: data.container.with_def_id(parent), - })) + method_has_self_argument: data.has_self + } } EntryKind::AssociatedType(container) => { let (parent, name) = parent_and_name(); - ty::TypeTraitItem(Rc::new(ty::AssociatedType { + ty::AssociatedItem { name: name, - ty: item.ty.map(|ty| ty.decode((self, tcx))), + kind: ty::AssociatedKind::Type, vis: item.visibility, defaultness: 
container.defaultness(), def_id: self.local_def_id(id), container: container.with_def_id(parent), - })) + method_has_self_argument: false + } } _ => return None, }) @@ -880,6 +959,9 @@ impl<'a, 'tcx> CrateMetadata { } pub fn get_item_attrs(&self, node_id: DefIndex) -> Vec { + if self.is_proc_macro(node_id) { + return Vec::new(); + } // The attributes for a tuple struct are attached to the definition, not the ctor; // we assume that someone passing in a tuple struct ctor is actually wanting to // look at the definition @@ -904,7 +986,7 @@ impl<'a, 'tcx> CrateMetadata { .decode(self) .map(|mut attr| { // Need new unique IDs: old thread-local IDs won't map to new threads. - attr.node.id = attr::mk_attr_id(); + attr.id = attr::mk_attr_id(); attr }) .collect() @@ -939,6 +1021,7 @@ impl<'a, 'tcx> CrateMetadata { let filter = match filter.map(|def_id| self.reverse_translate_def_id(def_id)) { Some(Some(def_id)) => Some((def_id.krate.as_u32(), def_id.index)), Some(None) => return, + None if self.proc_macros.is_some() => return, None => None, }; @@ -966,7 +1049,7 @@ impl<'a, 'tcx> CrateMetadata { } - pub fn get_native_libraries(&self) -> Vec<(NativeLibraryKind, String)> { + pub fn get_native_libraries(&self) -> Vec { self.root.native_libraries.decode(self).collect() } @@ -996,17 +1079,20 @@ impl<'a, 'tcx> CrateMetadata { arg_names.decode(self).collect() } - pub fn get_reachable_ids(&self) -> Vec { - self.root.reachable_ids.decode(self).map(|index| self.local_def_id(index)).collect() + pub fn get_exported_symbols(&self) -> Vec { + self.root.exported_symbols.decode(self).map(|index| self.local_def_id(index)).collect() + } + + pub fn get_macro(&self, id: DefIndex) -> (ast::Name, MacroDef) { + let entry = self.entry(id); + match entry.kind { + EntryKind::MacroDef(macro_def) => (self.item_name(&entry), macro_def.decode(self)), + _ => bug!(), + } } pub fn is_const_fn(&self, id: DefIndex) -> bool { - let constness = match self.entry(id).kind { - EntryKind::Method(data) => data.decode(self).fn_data.constness, - EntryKind::Fn(data) => data.decode(self).constness, - _ => hir::Constness::NotConst, - }; - constness == hir::Constness::Const + self.entry(id).kind.is_const_fn(self) } pub fn is_foreign_item(&self, id: DefIndex) -> bool { @@ -1018,6 +1104,10 @@ impl<'a, 'tcx> CrateMetadata { } } + pub fn is_dllimport_foreign_item(&self, id: DefIndex) -> bool { + self.dllimport_foreign_items.contains(&id) + } + pub fn is_defaulted_trait(&self, trait_id: DefIndex) -> bool { match self.entry(trait_id).kind { EntryKind::Trait(data) => data.decode(self).has_default_impl, @@ -1051,7 +1141,18 @@ impl<'a, 'tcx> CrateMetadata { pub fn def_key(&self, id: DefIndex) -> hir_map::DefKey { debug!("def_key: id={:?}", id); - self.entry(id).def_key.decode(self) + if self.is_proc_macro(id) { + let name = self.proc_macros.as_ref().unwrap()[id.as_usize() - 1].0; + hir_map::DefKey { + parent: Some(CRATE_DEF_INDEX), + disambiguated_data: hir_map::DisambiguatedDefPathData { + data: hir_map::DefPathData::MacroDef(name.as_str()), + disambiguator: 0, + }, + } + } else { + self.entry(id).def_key.decode(self) + } } // Returns the path leading to the thing with this `id`. 
Note that @@ -1059,7 +1160,7 @@ impl<'a, 'tcx> CrateMetadata { // returns `None` pub fn def_path(&self, id: DefIndex) -> Option { debug!("def_path(id={:?})", id); - if self.maybe_entry(id).is_some() { + if self.is_proc_macro(id) || self.maybe_entry(id).is_some() { Some(hir_map::DefPath::make(self.cnum, id, |parent| self.def_key(parent))) } else { None diff --git a/src/librustc_metadata/diagnostics.rs b/src/librustc_metadata/diagnostics.rs index b2f4760727..6cf1a9e8a3 100644 --- a/src/librustc_metadata/diagnostics.rs +++ b/src/librustc_metadata/diagnostics.rs @@ -56,9 +56,11 @@ An unknown "kind" was specified for a link attribute. Erroneous code example: ``` Please specify a valid "kind" value, from one of the following: + * static * dylib * framework + "##, E0459: r##" diff --git a/src/librustc_metadata/encoder.rs b/src/librustc_metadata/encoder.rs index fdb117ef81..443f3fbaa6 100644 --- a/src/librustc_metadata/encoder.rs +++ b/src/librustc_metadata/encoder.rs @@ -13,7 +13,7 @@ use index::Index; use schema::*; use rustc::middle::cstore::{InlinedItemRef, LinkMeta}; -use rustc::middle::cstore::{LinkagePreference, NativeLibraryKind}; +use rustc::middle::cstore::{LinkagePreference, NativeLibrary}; use rustc::hir::def; use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefIndex, DefId}; use rustc::middle::dependency_format::Linkage; @@ -23,7 +23,7 @@ use rustc::traits::specialization_graph; use rustc::ty::{self, Ty, TyCtxt}; use rustc::session::config::{self, CrateTypeProcMacro}; -use rustc::util::nodemap::{FnvHashMap, NodeSet}; +use rustc::util::nodemap::{FxHashMap, NodeSet}; use rustc_serialize::{Encodable, Encoder, SpecializedEncoder, opaque}; use std::hash::Hash; @@ -34,11 +34,12 @@ use std::rc::Rc; use std::u32; use syntax::ast::{self, CRATE_NODE_ID}; use syntax::attr; -use syntax; +use syntax::symbol::Symbol; use syntax_pos; use rustc::hir::{self, PatKind}; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; +use rustc::hir::intravisit::{Visitor, NestedVisitorMap}; use rustc::hir::intravisit; use super::index_builder::{FromId, IndexBuilder, Untracked}; @@ -49,11 +50,11 @@ pub struct EncodeContext<'a, 'tcx: 'a> { reexports: &'a def::ExportMap, link_meta: &'a LinkMeta, cstore: &'a cstore::CStore, - reachable: &'a NodeSet, + exported_symbols: &'a NodeSet, lazy_state: LazyState, - type_shorthands: FnvHashMap, usize>, - predicate_shorthands: FnvHashMap, usize>, + type_shorthands: FxHashMap, usize>, + predicate_shorthands: FxHashMap, usize>, } macro_rules! encoder_methods { @@ -200,7 +201,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { variant: &U, map: M) -> Result<(), ::Error> - where M: for<'b> Fn(&'b mut Self) -> &'b mut FnvHashMap, + where M: for<'b> Fn(&'b mut Self) -> &'b mut FxHashMap, T: Clone + Eq + Hash, U: Encodable { @@ -246,7 +247,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { fn encode_item_type(&mut self, def_id: DefId) -> Lazy> { let tcx = self.tcx; - self.lazy(&tcx.lookup_item_type(def_id).ty) + self.lazy(&tcx.item_type(def_id)) } /// Encode data for the given variant of the given ADT. 
The @@ -274,6 +275,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: EntryKind::Variant(self.lazy(&data)), visibility: enum_vis.simplify(), + span: self.lazy(&tcx.def_span(def_id)), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&tcx.get_attrs(def_id)), children: self.lazy_seq(variant.fields.iter().map(|f| { @@ -312,6 +314,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: EntryKind::Mod(self.lazy(&data)), visibility: vis.simplify(), + span: self.lazy(&md.inner), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(attrs), children: self.lazy_seq(md.item_ids.iter().map(|item_id| { @@ -392,6 +395,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: EntryKind::Field, visibility: field.vis.simplify(), + span: self.lazy(&tcx.def_span(def_id)), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&variant_data.fields()[field_index].attrs), children: LazySeq::empty(), @@ -425,6 +429,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: EntryKind::Struct(self.lazy(&data)), visibility: struct_vis.simplify(), + span: self.lazy(&tcx.def_span(def_id)), def_key: self.encode_def_key(def_id), attributes: LazySeq::empty(), children: LazySeq::empty(), @@ -444,12 +449,12 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { fn encode_generics(&mut self, def_id: DefId) -> Lazy> { let tcx = self.tcx; - self.lazy(tcx.lookup_generics(def_id)) + self.lazy(tcx.item_generics(def_id)) } fn encode_predicates(&mut self, def_id: DefId) -> Lazy> { let tcx = self.tcx; - self.lazy(&tcx.lookup_predicates(def_id)) + self.lazy(&tcx.item_predicates(def_id)) } fn encode_info_for_trait_item(&mut self, def_id: DefId) -> Entry<'tcx> { @@ -457,19 +462,20 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { let node_id = tcx.map.as_local_node_id(def_id).unwrap(); let ast_item = tcx.map.expect_trait_item(node_id); - let trait_item = tcx.impl_or_trait_item(def_id); + let trait_item = tcx.associated_item(def_id); - let container = |has_body| if has_body { - AssociatedContainer::TraitWithDefault - } else { - AssociatedContainer::TraitRequired + let container = match trait_item.defaultness { + hir::Defaultness::Default { has_value: true } => + AssociatedContainer::TraitWithDefault, + hir::Defaultness::Default { has_value: false } => + AssociatedContainer::TraitRequired, + hir::Defaultness::Final => + span_bug!(ast_item.span, "traits cannot have final items"), }; - let kind = match trait_item { - ty::ConstTraitItem(ref associated_const) => { - EntryKind::AssociatedConst(container(associated_const.has_value)) - } - ty::MethodTraitItem(ref method_ty) => { + let kind = match trait_item.kind { + ty::AssociatedKind::Const => EntryKind::AssociatedConst(container), + ty::AssociatedKind::Method => { let fn_data = if let hir::MethodTraitItem(ref sig, _) = ast_item.node { FnData { constness: hir::Constness::NotConst, @@ -478,30 +484,36 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { } else { bug!() }; - let data = MethodData { + EntryKind::Method(self.lazy(&MethodData { fn_data: fn_data, - container: container(method_ty.has_body), - explicit_self: self.lazy(&method_ty.explicit_self), - }; - EntryKind::Method(self.lazy(&data)) + container: container, + has_self: trait_item.method_has_self_argument, + })) } - ty::TypeTraitItem(_) => EntryKind::AssociatedType(container(false)), + ty::AssociatedKind::Type => EntryKind::AssociatedType(container), }; Entry { kind: kind, - visibility: trait_item.vis().simplify(), + visibility: trait_item.vis.simplify(), + span: 
self.lazy(&ast_item.span), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&ast_item.attrs), children: LazySeq::empty(), stability: self.encode_stability(def_id), deprecation: self.encode_deprecation(def_id), - ty: match trait_item { - ty::ConstTraitItem(_) | - ty::MethodTraitItem(_) => Some(self.encode_item_type(def_id)), - ty::TypeTraitItem(ref associated_type) => { - associated_type.ty.map(|ty| self.lazy(&ty)) + ty: match trait_item.kind { + ty::AssociatedKind::Const | + ty::AssociatedKind::Method => { + Some(self.encode_item_type(def_id)) + } + ty::AssociatedKind::Type => { + if trait_item.defaultness.has_value() { + Some(self.encode_item_type(def_id)) + } else { + None + } } }, inherent_impls: LazySeq::empty(), @@ -509,9 +521,13 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { generics: Some(self.encode_generics(def_id)), predicates: Some(self.encode_predicates(def_id)), - ast: if let ty::ConstTraitItem(_) = trait_item { - let trait_def_id = trait_item.container().id(); - Some(self.encode_inlined_item(InlinedItemRef::TraitItem(trait_def_id, ast_item))) + ast: if let hir::ConstTraitItem(_, Some(_)) = ast_item.node { + // We only save the HIR for associated consts with bodies + // (InlinedItemRef::from_trait_item panics otherwise) + let trait_def_id = trait_item.container.id(); + Some(self.encode_inlined_item( + InlinedItemRef::from_trait_item(trait_def_id, ast_item, tcx) + )) } else { None }, @@ -520,19 +536,23 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { } fn encode_info_for_impl_item(&mut self, def_id: DefId) -> Entry<'tcx> { + let tcx = self.tcx; + let node_id = self.tcx.map.as_local_node_id(def_id).unwrap(); let ast_item = self.tcx.map.expect_impl_item(node_id); - let impl_item = self.tcx.impl_or_trait_item(def_id); - let impl_def_id = impl_item.container().id(); + let impl_item = self.tcx.associated_item(def_id); + let impl_def_id = impl_item.container.id(); - let container = match ast_item.defaultness { - hir::Defaultness::Default => AssociatedContainer::ImplDefault, + let container = match impl_item.defaultness { + hir::Defaultness::Default { has_value: true } => AssociatedContainer::ImplDefault, hir::Defaultness::Final => AssociatedContainer::ImplFinal, + hir::Defaultness::Default { has_value: false } => + span_bug!(ast_item.span, "impl items always have values (currently)"), }; - let kind = match impl_item { - ty::ConstTraitItem(_) => EntryKind::AssociatedConst(container), - ty::MethodTraitItem(ref method_ty) => { + let kind = match impl_item.kind { + ty::AssociatedKind::Const => EntryKind::AssociatedConst(container), + ty::AssociatedKind::Method => { let fn_data = if let hir::ImplItemKind::Method(ref sig, _) = ast_item.node { FnData { constness: sig.constness, @@ -541,51 +561,48 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { } else { bug!() }; - let data = MethodData { + EntryKind::Method(self.lazy(&MethodData { fn_data: fn_data, container: container, - explicit_self: self.lazy(&method_ty.explicit_self), - }; - EntryKind::Method(self.lazy(&data)) + has_self: impl_item.method_has_self_argument, + })) } - ty::TypeTraitItem(_) => EntryKind::AssociatedType(container), + ty::AssociatedKind::Type => EntryKind::AssociatedType(container) }; - let (ast, mir) = if let ty::ConstTraitItem(_) = impl_item { + let (ast, mir) = if impl_item.kind == ty::AssociatedKind::Const { (true, true) } else if let hir::ImplItemKind::Method(ref sig, _) = ast_item.node { - let generics = self.tcx.lookup_generics(def_id); + let generics = self.tcx.item_generics(def_id); let types = 
generics.parent_types as usize + generics.types.len(); let needs_inline = types > 0 || attr::requests_inline(&ast_item.attrs); let is_const_fn = sig.constness == hir::Constness::Const; - (is_const_fn, needs_inline || is_const_fn) + let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir; + (is_const_fn, needs_inline || is_const_fn || always_encode_mir) } else { (false, false) }; Entry { kind: kind, - visibility: impl_item.vis().simplify(), + visibility: impl_item.vis.simplify(), + span: self.lazy(&ast_item.span), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&ast_item.attrs), children: LazySeq::empty(), stability: self.encode_stability(def_id), deprecation: self.encode_deprecation(def_id), - ty: match impl_item { - ty::ConstTraitItem(_) | - ty::MethodTraitItem(_) => Some(self.encode_item_type(def_id)), - ty::TypeTraitItem(ref associated_type) => { - associated_type.ty.map(|ty| self.lazy(&ty)) - } - }, + ty: Some(self.encode_item_type(def_id)), inherent_impls: LazySeq::empty(), variances: LazySeq::empty(), generics: Some(self.encode_generics(def_id)), predicates: Some(self.encode_predicates(def_id)), ast: if ast { - Some(self.encode_inlined_item(InlinedItemRef::ImplItem(impl_def_id, ast_item))) + Some(self.encode_inlined_item( + InlinedItemRef::from_impl_item(impl_def_id, ast_item, tcx) + )) } else { None }, @@ -595,10 +612,10 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { fn encode_fn_arg_names(&mut self, decl: &hir::FnDecl) -> LazySeq { self.lazy_seq(decl.inputs.iter().map(|arg| { - if let PatKind::Binding(_, ref path1, _) = arg.pat.node { + if let PatKind::Binding(_, _, ref path1, _) = arg.pat.node { path1.node } else { - syntax::parse::token::intern("") + Symbol::intern("") } })) } @@ -628,7 +645,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { self.tcx.lookup_deprecation(def_id).map(|depr| self.lazy(&depr)) } - fn encode_info_for_item(&mut self, (def_id, item): (DefId, &hir::Item)) -> Entry<'tcx> { + fn encode_info_for_item(&mut self, (def_id, item): (DefId, &'tcx hir::Item)) -> Entry<'tcx> { let tcx = self.tcx; debug!("encoding info for item at {}", @@ -720,19 +737,19 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { unsafety: trait_def.unsafety, paren_sugar: trait_def.paren_sugar, has_default_impl: tcx.trait_has_default_impl(def_id), - trait_ref: self.lazy(&trait_def.trait_ref), - super_predicates: self.lazy(&tcx.lookup_super_predicates(def_id)), + super_predicates: self.lazy(&tcx.item_super_predicates(def_id)), }; EntryKind::Trait(self.lazy(&data)) } hir::ItemExternCrate(_) | - hir::ItemUse(_) => bug!("cannot encode info for item {:?}", item), + hir::ItemUse(..) => bug!("cannot encode info for item {:?}", item), }; Entry { kind: kind, visibility: item.vis.simplify(), + span: self.lazy(&item.span), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&item.attrs), children: match item.node { @@ -758,7 +775,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { } hir::ItemImpl(..) | hir::ItemTrait(..) => { - self.lazy_seq(tcx.impl_or_trait_items(def_id).iter().map(|&def_id| { + self.lazy_seq(tcx.associated_item_def_ids(def_id).iter().map(|&def_id| { assert!(def_id.is_local()); def_id.index })) @@ -815,16 +832,20 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { ast: match item.node { hir::ItemConst(..) | hir::ItemFn(_, _, hir::Constness::Const, ..) 
=> { - Some(self.encode_inlined_item(InlinedItemRef::Item(def_id, item))) + Some(self.encode_inlined_item( + InlinedItemRef::from_item(def_id, item, tcx) + )) } _ => None, }, mir: match item.node { + hir::ItemStatic(..) | hir::ItemConst(..) => self.encode_mir(def_id), hir::ItemFn(_, _, constness, _, ref generics, _) => { let tps_len = generics.ty_params.len(); let needs_inline = tps_len > 0 || attr::requests_inline(&item.attrs); - if needs_inline || constness == hir::Constness::Const { + let always_encode_mir = self.tcx.sess.opts.debugging_opts.always_encode_mir; + if needs_inline || constness == hir::Constness::Const || always_encode_mir { self.encode_mir(def_id) } else { None @@ -834,6 +855,31 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { }, } } + + /// Serialize the text of exported macros + fn encode_info_for_macro_def(&mut self, macro_def: &hir::MacroDef) -> Entry<'tcx> { + let def_id = self.tcx.map.local_def_id(macro_def.id); + Entry { + kind: EntryKind::MacroDef(self.lazy(&MacroDef { + body: ::syntax::print::pprust::tts_to_string(¯o_def.body) + })), + visibility: ty::Visibility::Public, + span: self.lazy(¯o_def.span), + def_key: self.encode_def_key(def_id), + + attributes: self.encode_attributes(¯o_def.attrs), + children: LazySeq::empty(), + stability: None, + deprecation: None, + ty: None, + inherent_impls: LazySeq::empty(), + variances: LazySeq::empty(), + generics: None, + predicates: None, + ast: None, + mir: None, + } + } } impl<'a, 'b, 'tcx> IndexBuilder<'a, 'b, 'tcx> { @@ -880,14 +926,14 @@ impl<'a, 'b, 'tcx> IndexBuilder<'a, 'b, 'tcx> { self.encode_fields(def_id); } hir::ItemImpl(..) => { - for &trait_item_def_id in &self.tcx.impl_or_trait_items(def_id)[..] { + for &trait_item_def_id in &self.tcx.associated_item_def_ids(def_id)[..] { self.record(trait_item_def_id, EncodeContext::encode_info_for_impl_item, trait_item_def_id); } } hir::ItemTrait(..) => { - for &item_def_id in &self.tcx.impl_or_trait_items(def_id)[..] { + for &item_def_id in &self.tcx.associated_item_def_ids(def_id)[..] { self.record(item_def_id, EncodeContext::encode_info_for_trait_item, item_def_id); @@ -920,6 +966,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: kind, visibility: nitem.vis.simplify(), + span: self.lazy(&nitem.span), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&nitem.attrs), children: LazySeq::empty(), @@ -943,6 +990,9 @@ struct EncodeVisitor<'a, 'b: 'a, 'tcx: 'b> { } impl<'a, 'b, 'tcx> Visitor<'tcx> for EncodeVisitor<'a, 'b, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.index.tcx.map) + } fn visit_expr(&mut self, ex: &'tcx hir::Expr) { intravisit::walk_expr(self, ex); self.index.encode_info_for_expr(ex); @@ -952,7 +1002,7 @@ impl<'a, 'b, 'tcx> Visitor<'tcx> for EncodeVisitor<'a, 'b, 'tcx> { let def_id = self.index.tcx.map.local_def_id(item.id); match item.node { hir::ItemExternCrate(_) | - hir::ItemUse(_) => (), // ignore these + hir::ItemUse(..) 
=> (), // ignore these _ => self.index.record(def_id, EncodeContext::encode_info_for_item, (def_id, item)), } self.index.encode_addl_info_for_item(item); @@ -968,6 +1018,10 @@ impl<'a, 'b, 'tcx> Visitor<'tcx> for EncodeVisitor<'a, 'b, 'tcx> { intravisit::walk_ty(self, ty); self.index.encode_info_for_ty(ty); } + fn visit_macro_def(&mut self, macro_def: &'tcx hir::MacroDef) { + let def_id = self.index.tcx.map.local_def_id(macro_def.id); + self.index.record(def_id, EncodeContext::encode_info_for_macro_def, macro_def); + } } impl<'a, 'b, 'tcx> IndexBuilder<'a, 'b, 'tcx> { @@ -991,9 +1045,11 @@ impl<'a, 'b, 'tcx> IndexBuilder<'a, 'b, 'tcx> { impl<'a, 'tcx> EncodeContext<'a, 'tcx> { fn encode_info_for_anon_ty(&mut self, def_id: DefId) -> Entry<'tcx> { + let tcx = self.tcx; Entry { kind: EntryKind::Type, visibility: ty::Visibility::Public, + span: self.lazy(&tcx.def_span(def_id)), def_key: self.encode_def_key(def_id), attributes: LazySeq::empty(), children: LazySeq::empty(), @@ -1022,16 +1078,17 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { Entry { kind: EntryKind::Closure(self.lazy(&data)), visibility: ty::Visibility::Public, + span: self.lazy(&tcx.def_span(def_id)), def_key: self.encode_def_key(def_id), attributes: self.encode_attributes(&tcx.get_attrs(def_id)), children: LazySeq::empty(), stability: None, deprecation: None, - ty: None, + ty: Some(self.encode_item_type(def_id)), inherent_impls: LazySeq::empty(), variances: LazySeq::empty(), - generics: None, + generics: Some(self.encode_generics(def_id)), predicates: None, ast: None, @@ -1046,7 +1103,10 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { EncodeContext::encode_info_for_mod, FromId(CRATE_NODE_ID, (&krate.module, &krate.attrs, &hir::Public))); let mut visitor = EncodeVisitor { index: index }; - krate.visit_all_items(&mut visitor); + krate.visit_all_item_likes(&mut visitor.as_deep_visitor()); + for macro_def in &krate.exported_macros { + visitor.visit_macro_def(macro_def); + } visitor.index.into_items() } @@ -1082,9 +1142,9 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { let deps = get_ordered_deps(self.cstore); self.lazy_seq(deps.iter().map(|&(_, ref dep)| { CrateDep { - name: syntax::parse::token::intern(dep.name()), + name: dep.name(), hash: dep.hash(), - explicitly_linked: dep.explicitly_linked.get(), + kind: dep.dep_kind.get(), } })) } @@ -1103,14 +1163,9 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { self.lazy_seq_ref(&tcx.lang_items.missing)) } - fn encode_native_libraries(&mut self) -> LazySeq<(NativeLibraryKind, String)> { + fn encode_native_libraries(&mut self) -> LazySeq { let used_libraries = self.tcx.sess.cstore.used_libraries(); - self.lazy_seq(used_libraries.into_iter().filter_map(|(lib, kind)| { - match kind { - cstore::NativeStatic => None, // these libraries are not propagated - cstore::NativeFramework | cstore::NativeUnknown => Some((kind, lib)), - } - })) + self.lazy_seq(used_libraries) } fn encode_codemap(&mut self) -> LazySeq { @@ -1118,35 +1173,20 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { let all_filemaps = codemap.files.borrow(); self.lazy_seq_ref(all_filemaps.iter() .filter(|filemap| { - // No need to export empty filemaps, as they can't contain spans - // that need translation. - // Also no need to re-export imported filemaps, as any downstream + // No need to re-export imported filemaps, as any downstream // crate will import them from their original source. 
- !filemap.lines.borrow().is_empty() && !filemap.is_imported() + !filemap.is_imported() }) .map(|filemap| &**filemap)) } - - /// Serialize the text of the exported macros - fn encode_macro_defs(&mut self) -> LazySeq { - let tcx = self.tcx; - self.lazy_seq(tcx.map.krate().exported_macros.iter().map(|def| { - MacroDef { - name: def.name, - attrs: def.attrs.to_vec(), - span: def.span, - body: ::syntax::print::pprust::tts_to_string(&def.body), - } - })) - } } struct ImplVisitor<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, - impls: FnvHashMap>, + impls: FxHashMap>, } -impl<'a, 'tcx, 'v> Visitor<'v> for ImplVisitor<'a, 'tcx> { +impl<'a, 'tcx, 'v> ItemLikeVisitor<'v> for ImplVisitor<'a, 'tcx> { fn visit_item(&mut self, item: &hir::Item) { if let hir::ItemImpl(..) = item.node { let impl_id = self.tcx.map.local_def_id(item.id); @@ -1158,6 +1198,10 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ImplVisitor<'a, 'tcx> { } } } + + fn visit_impl_item(&mut self, _impl_item: &'v hir::ImplItem) { + // handled in `visit_item` above + } } impl<'a, 'tcx> EncodeContext<'a, 'tcx> { @@ -1165,9 +1209,9 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { fn encode_impls(&mut self) -> LazySeq { let mut visitor = ImplVisitor { tcx: self.tcx, - impls: FnvHashMap(), + impls: FxHashMap(), }; - self.tcx.map.krate().visit_all_items(&mut visitor); + self.tcx.map.krate().visit_all_item_likes(&mut visitor); let all_impls: Vec<_> = visitor.impls .into_iter() @@ -1182,16 +1226,16 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { self.lazy_seq(all_impls) } - // Encodes all reachable symbols in this crate into the metadata. + // Encodes all symbols exported from this crate into the metadata. // // This pass is seeded off the reachability list calculated in the // middle::reachable module but filters out items that either don't have a // symbol associated with them (they weren't translated) or if they're an FFI // definition (as that's not defined in this crate). - fn encode_reachable(&mut self) -> LazySeq { - let reachable = self.reachable; + fn encode_exported_symbols(&mut self) -> LazySeq { + let exported_symbols = self.exported_symbols; let tcx = self.tcx; - self.lazy_seq(reachable.iter().map(|&id| tcx.map.local_def_id(id).index)) + self.lazy_seq(exported_symbols.iter().map(|&id| tcx.map.local_def_id(id).index)) } fn encode_dylib_dependency_formats(&mut self) -> LazySeq> { @@ -1232,20 +1276,15 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { let codemap = self.encode_codemap(); let codemap_bytes = self.position() - i; - // Encode macro definitions - i = self.position(); - let macro_defs = self.encode_macro_defs(); - let macro_defs_bytes = self.position() - i; - // Encode the def IDs of impls, for coherence checking. i = self.position(); let impls = self.encode_impls(); let impl_bytes = self.position() - i; - // Encode reachability info. + // Encode exported symbols info. i = self.position(); - let reachable_ids = self.encode_reachable(); - let reachable_bytes = self.position() - i; + let exported_symbols = self.encode_exported_symbols(); + let exported_symbols_bytes = self.position() - i; // Encode and index the items. 
i = self.position(); @@ -1260,11 +1299,10 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { let link_meta = self.link_meta; let is_proc_macro = tcx.sess.crate_types.borrow().contains(&CrateTypeProcMacro); let root = self.lazy(&CrateRoot { - rustc_version: rustc_version(), - name: link_meta.crate_name.clone(), + name: link_meta.crate_name, triple: tcx.sess.opts.target_triple.clone(), hash: link_meta.crate_hash, - disambiguator: tcx.sess.local_crate_disambiguator().to_string(), + disambiguator: tcx.sess.local_crate_disambiguator(), panic_strategy: tcx.sess.panic_strategy(), plugin_registrar_fn: tcx.sess .plugin_registrar_fn @@ -1283,9 +1321,8 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { lang_items_missing: lang_items_missing, native_libraries: native_libraries, codemap: codemap, - macro_defs: macro_defs, impls: impls, - reachable_ids: reachable_ids, + exported_symbols: exported_symbols, index: index, }); @@ -1304,9 +1341,8 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> { println!(" lang item bytes: {}", lang_item_bytes); println!(" native bytes: {}", native_lib_bytes); println!(" codemap bytes: {}", codemap_bytes); - println!(" macro def bytes: {}", macro_defs_bytes); println!(" impl bytes: {}", impl_bytes); - println!(" reachable bytes: {}", reachable_bytes); + println!(" exp. symbols bytes: {}", exported_symbols_bytes); println!(" item bytes: {}", item_bytes); println!(" index bytes: {}", index_bytes); println!(" zero bytes: {}", zero_bytes); @@ -1344,7 +1380,7 @@ pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, cstore: &cstore::CStore, reexports: &def::ExportMap, link_meta: &LinkMeta, - reachable: &NodeSet) + exported_symbols: &NodeSet) -> Vec { let mut cursor = Cursor::new(vec![]); cursor.write_all(METADATA_HEADER).unwrap(); @@ -1352,18 +1388,26 @@ pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, // Will be filed with the root position after encoding everything. cursor.write_all(&[0, 0, 0, 0]).unwrap(); - let root = EncodeContext { + let root = { + let mut ecx = EncodeContext { opaque: opaque::Encoder::new(&mut cursor), tcx: tcx, reexports: reexports, link_meta: link_meta, cstore: cstore, - reachable: reachable, + exported_symbols: exported_symbols, lazy_state: LazyState::NoNode, type_shorthands: Default::default(), predicate_shorthands: Default::default(), - } - .encode_crate_root(); + }; + + // Encode the rustc version string in a predictable location. + rustc_version().encode(&mut ecx).unwrap(); + + // Encode all the entries and extra information in the crate, + // culminating in the `CrateRoot` which points to all of it. + ecx.encode_crate_root() + }; let mut result = cursor.into_inner(); // Encode the root position. 
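
> **Editor's note (not part of the patch):** the encoder change above moves the rustc version string out of `CrateRoot` and writes it in a predictable location right after the metadata header and the reserved root-position word, so a loader can reject metadata from a mismatched compiler before decoding the `CrateRoot` (this is what the `locator.rs` change below relies on via `get_rustc_version()`). A minimal sketch of reading that fixed-layout prefix, assuming the layout documented in `schema.rs` (12-byte `METADATA_HEADER`, then a 32-bit big-endian `CrateRoot` position, then the opaque-encoded version string); the names `METADATA_HEADER_LEN` and `crate_root_position` are hypothetical helpers for illustration only:

```rust
// Hypothetical illustration, not code from this patch.
// Layout assumed from schema.rs: 12-byte METADATA_HEADER, then a 32-bit
// big-endian CrateRoot position, then the opaque-encoded rustc version string.
const METADATA_HEADER_LEN: usize = 12; // [0,0,0,0, b'r', b'u', b's', b't', 0, 0, 0, METADATA_VERSION]

/// Returns the absolute offset of the `CrateRoot` within the blob, if the
/// blob is at least long enough to contain the fixed prefix.
fn crate_root_position(blob: &[u8]) -> Option<usize> {
    if blob.len() < METADATA_HEADER_LEN + 4 {
        return None;
    }
    let b = &blob[METADATA_HEADER_LEN..METADATA_HEADER_LEN + 4];
    // Big-endian, as documented for the header in schema.rs.
    let pos = ((b[0] as u32) << 24) | ((b[1] as u32) << 16) | ((b[2] as u32) << 8) | (b[3] as u32);
    Some(pos as usize)
    // The rustc version string follows at METADATA_HEADER_LEN + 4; it is
    // written with the opaque encoder, so it is length-prefixed rather than
    // a raw UTF-8 run, and is decoded by `get_rustc_version()` in the loader.
}
```
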
diff --git a/src/librustc_metadata/index.rs b/src/librustc_metadata/index.rs index 53e6988c75..5b52b26884 100644 --- a/src/librustc_metadata/index.rs +++ b/src/librustc_metadata/index.rs @@ -70,7 +70,7 @@ impl<'tcx> LazySeq { index, words.len()); - let position = u32::from_le(words[index]); + let position = u32::from_le(words[index].get()); if position == u32::MAX { debug!("Index::lookup: position=u32::MAX"); None @@ -84,7 +84,7 @@ impl<'tcx> LazySeq { bytes: &'a [u8]) -> impl Iterator>)> + 'a { let words = &bytes_to_words(&bytes[self.position..])[..self.len]; - words.iter().enumerate().filter_map(|(index, &position)| { + words.iter().map(|word| word.get()).enumerate().filter_map(|(index, position)| { if position == u32::MAX { None } else { @@ -95,8 +95,16 @@ impl<'tcx> LazySeq { } } -fn bytes_to_words(b: &[u8]) -> &[u32] { - unsafe { slice::from_raw_parts(b.as_ptr() as *const u32, b.len() / 4) } +#[repr(packed)] +#[derive(Copy, Clone)] +struct Unaligned(T); + +impl Unaligned { + fn get(self) -> T { self.0 } +} + +fn bytes_to_words(b: &[u8]) -> &[Unaligned] { + unsafe { slice::from_raw_parts(b.as_ptr() as *const Unaligned, b.len() / 4) } } fn words_to_bytes(w: &[u32]) -> &[u8] { diff --git a/src/librustc_metadata/index_builder.rs b/src/librustc_metadata/index_builder.rs index 9938e20d18..1a74a92545 100644 --- a/src/librustc_metadata/index_builder.rs +++ b/src/librustc_metadata/index_builder.rs @@ -195,6 +195,7 @@ read_hir!(hir::Item); read_hir!(hir::ImplItem); read_hir!(hir::TraitItem); read_hir!(hir::ForeignItem); +read_hir!(hir::MacroDef); /// Leaks access to a value of type T without any tracking. This is /// suitable for ambiguous types like `usize`, which *could* represent diff --git a/src/librustc_metadata/lib.rs b/src/librustc_metadata/lib.rs index 2fd40181d7..25fa201a69 100644 --- a/src/librustc_metadata/lib.rs +++ b/src/librustc_metadata/lib.rs @@ -20,10 +20,7 @@ #![feature(box_patterns)] #![feature(conservative_impl_trait)] #![feature(core_intrinsics)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(proc_macro_internals)] -#![feature(proc_macro_lib)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(quote)] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] diff --git a/src/librustc_metadata/locator.rs b/src/librustc_metadata/locator.rs index 0461d7ec06..de465ea92f 100644 --- a/src/librustc_metadata/locator.rs +++ b/src/librustc_metadata/locator.rs @@ -53,6 +53,13 @@ //! is a platform-defined dynamic library. Each library has a metadata somewhere //! inside of it. //! +//! A third kind of dependency is an rmeta file. These are metadata files and do +//! not contain any code, etc. To a first approximation, these are treated in the +//! same way as rlibs. Where there is both an rlib and an rmeta file, the rlib +//! gets priority (even if the rmeta file is newer). An rmeta file is only +//! useful for checking a downstream crate, attempting to link one will cause an +//! error. +//! //! When translating a crate name to a crate on the filesystem, we all of a //! sudden need to take into account both rlibs and dylibs! Linkage later on may //! use either one of these files, as each has their pros/cons. 
The job of crate @@ -217,23 +224,24 @@ use creader::Library; use schema::{METADATA_HEADER, rustc_version}; use rustc::hir::svh::Svh; -use rustc::session::Session; +use rustc::session::{config, Session}; use rustc::session::filesearch::{FileSearch, FileMatches, FileDoesntMatch}; use rustc::session::search_paths::PathKind; use rustc::util::common; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::FxHashMap; use rustc_llvm as llvm; use rustc_llvm::{False, ObjectFile, mk_section_iter}; use rustc_llvm::archive_ro::ArchiveRO; use errors::DiagnosticBuilder; +use syntax::symbol::Symbol; use syntax_pos::Span; use rustc_back::target::Target; use std::cmp; use std::fmt; -use std::fs; -use std::io; +use std::fs::{self, File}; +use std::io::{self, Read}; use std::path::{Path, PathBuf}; use std::ptr; use std::slice; @@ -249,8 +257,8 @@ pub struct CrateMismatch { pub struct Context<'a> { pub sess: &'a Session, pub span: Span, - pub ident: &'a str, - pub crate_name: &'a str, + pub ident: Symbol, + pub crate_name: Symbol, pub hash: Option<&'a Svh>, // points to either self.sess.target.target or self.sess.host, must match triple pub target: &'a Target, @@ -261,7 +269,9 @@ pub struct Context<'a> { pub rejected_via_triple: Vec, pub rejected_via_kind: Vec, pub rejected_via_version: Vec, + pub rejected_via_filename: Vec, pub should_match_name: bool, + pub is_proc_macro: Option, } pub struct ArchiveMetadata { @@ -274,6 +284,7 @@ pub struct CratePaths { pub ident: String, pub dylib: Option, pub rlib: Option, + pub rmeta: Option, } pub const METADATA_FILENAME: &'static str = "rust.metadata.bin"; @@ -281,6 +292,7 @@ pub const METADATA_FILENAME: &'static str = "rust.metadata.bin"; #[derive(Copy, Clone, PartialEq)] enum CrateFlavor { Rlib, + Rmeta, Dylib, } @@ -288,6 +300,7 @@ impl fmt::Display for CrateFlavor { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str(match *self { CrateFlavor::Rlib => "rlib", + CrateFlavor::Rmeta => "rmeta", CrateFlavor::Dylib => "dylib", }) } @@ -295,12 +308,7 @@ impl fmt::Display for CrateFlavor { impl CratePaths { fn paths(&self) -> Vec { - match (&self.dylib, &self.rlib) { - (&None, &None) => vec![], - (&Some(ref p), &None) | - (&None, &Some(ref p)) => vec![p.clone()], - (&Some(ref p1), &Some(ref p2)) => vec![p1.clone(), p2.clone()], - } + self.dylib.iter().chain(self.rlib.iter()).chain(self.rmeta.iter()).cloned().collect() } } @@ -354,6 +362,11 @@ impl<'a> Context<'a> { "can't find crate for `{}`{}", self.ident, add); + + if (self.ident == "std" || self.ident == "core") + && self.triple != config::host_triple() { + err.note(&format!("the `{}` target may not be installed", self.triple)); + } err.span_label(self.span, &format!("can't find crate")); err }; @@ -405,6 +418,18 @@ impl<'a> Context<'a> { got)); } } + if !self.rejected_via_filename.is_empty() { + let dylibname = self.dylibname(); + let mismatches = self.rejected_via_filename.iter(); + for &CrateMismatch { ref path, .. } in mismatches { + err.note(&format!("extern location for {} is of an unknown type: {}", + self.crate_name, + path.display())) + .help(&format!("file name should be lib*.rlib or {}*.{}", + dylibname.0, + dylibname.1)); + } + } err.emit(); self.sess.abort_if_errors(); @@ -416,7 +441,7 @@ impl<'a> Context<'a> { // must be loaded via -L plus some filtering. 
if self.hash.is_none() { self.should_match_name = false; - if let Some(s) = self.sess.opts.externs.get(self.crate_name) { + if let Some(s) = self.sess.opts.externs.get(&self.crate_name.as_str()) { return self.find_commandline_library(s.iter()); } self.should_match_name = true; @@ -430,7 +455,7 @@ impl<'a> Context<'a> { let rlib_prefix = format!("lib{}", self.crate_name); let staticlib_prefix = format!("{}{}", staticpair.0, self.crate_name); - let mut candidates = FnvHashMap(); + let mut candidates = FxHashMap(); let mut staticlibs = vec![]; // First, find all possible candidate rlibs and dylibs purely based on @@ -451,32 +476,35 @@ impl<'a> Context<'a> { None => return FileDoesntMatch, Some(file) => file, }; - let (hash, rlib) = if file.starts_with(&rlib_prefix[..]) && file.ends_with(".rlib") { - (&file[(rlib_prefix.len())..(file.len() - ".rlib".len())], true) - } else if file.starts_with(&dylib_prefix) && - file.ends_with(&dypair.1) { - (&file[(dylib_prefix.len())..(file.len() - dypair.1.len())], false) - } else { - if file.starts_with(&staticlib_prefix[..]) && file.ends_with(&staticpair.1) { - staticlibs.push(CrateMismatch { - path: path.to_path_buf(), - got: "static".to_string(), - }); - } - return FileDoesntMatch; - }; + let (hash, found_kind) = + if file.starts_with(&rlib_prefix[..]) && file.ends_with(".rlib") { + (&file[(rlib_prefix.len())..(file.len() - ".rlib".len())], CrateFlavor::Rlib) + } else if file.starts_with(&rlib_prefix[..]) && file.ends_with(".rmeta") { + (&file[(rlib_prefix.len())..(file.len() - ".rmeta".len())], CrateFlavor::Rmeta) + } else if file.starts_with(&dylib_prefix) && + file.ends_with(&dypair.1) { + (&file[(dylib_prefix.len())..(file.len() - dypair.1.len())], CrateFlavor::Dylib) + } else { + if file.starts_with(&staticlib_prefix[..]) && file.ends_with(&staticpair.1) { + staticlibs.push(CrateMismatch { + path: path.to_path_buf(), + got: "static".to_string(), + }); + } + return FileDoesntMatch; + }; info!("lib candidate: {}", path.display()); let hash_str = hash.to_string(); let slot = candidates.entry(hash_str) - .or_insert_with(|| (FnvHashMap(), FnvHashMap())); - let (ref mut rlibs, ref mut dylibs) = *slot; + .or_insert_with(|| (FxHashMap(), FxHashMap(), FxHashMap())); + let (ref mut rlibs, ref mut rmetas, ref mut dylibs) = *slot; fs::canonicalize(path) .map(|p| { - if rlib { - rlibs.insert(p, kind); - } else { - dylibs.insert(p, kind); + match found_kind { + CrateFlavor::Rlib => { rlibs.insert(p, kind); } + CrateFlavor::Rmeta => { rmetas.insert(p, kind); } + CrateFlavor::Dylib => { dylibs.insert(p, kind); } } FileMatches }) @@ -492,16 +520,18 @@ impl<'a> Context<'a> { // A Library candidate is created if the metadata for the set of // libraries corresponds to the crate id and hash criteria that this // search is being performed for. 
- let mut libraries = FnvHashMap(); - for (_hash, (rlibs, dylibs)) in candidates { + let mut libraries = FxHashMap(); + for (_hash, (rlibs, rmetas, dylibs)) in candidates { let mut slot = None; let rlib = self.extract_one(rlibs, CrateFlavor::Rlib, &mut slot); + let rmeta = self.extract_one(rmetas, CrateFlavor::Rmeta, &mut slot); let dylib = self.extract_one(dylibs, CrateFlavor::Dylib, &mut slot); if let Some((h, m)) = slot { libraries.insert(h, Library { dylib: dylib, rlib: rlib, + rmeta: rmeta, metadata: m, }); } @@ -527,7 +557,7 @@ impl<'a> Context<'a> { if let Some((ref p, _)) = lib.rlib { err.note(&format!("path: {}", p.display())); } - note_crate_name(&mut err, &lib.metadata.get_root().name); + note_crate_name(&mut err, &lib.metadata.get_root().name.as_str()); } err.emit(); None @@ -544,7 +574,7 @@ impl<'a> Context<'a> { // be read, it is assumed that the file isn't a valid rust library (no // errors are emitted). fn extract_one(&mut self, - m: FnvHashMap, + m: FxHashMap, flavor: CrateFlavor, slot: &mut Option<(Svh, MetadataBlob)>) -> Option<(PathBuf, PathKind)> { @@ -622,19 +652,26 @@ impl<'a> Context<'a> { } fn crate_matches(&mut self, metadata: &MetadataBlob, libpath: &Path) -> Option { - let root = metadata.get_root(); let rustc_version = rustc_version(); - if root.rustc_version != rustc_version { + let found_version = metadata.get_rustc_version(); + if found_version != rustc_version { info!("Rejecting via version: expected {} got {}", rustc_version, - root.rustc_version); + found_version); self.rejected_via_version.push(CrateMismatch { path: libpath.to_path_buf(), - got: root.rustc_version, + got: found_version, }); return None; } + let root = metadata.get_root(); + if let Some(is_proc_macro) = self.is_proc_macro { + if root.macro_derive_registrar.is_some() != is_proc_macro { + return None; + } + } + if self.should_match_name { if self.crate_name != root.name { info!("Rejecting via crate name"); @@ -690,8 +727,9 @@ impl<'a> Context<'a> { // rlibs/dylibs. let sess = self.sess; let dylibname = self.dylibname(); - let mut rlibs = FnvHashMap(); - let mut dylibs = FnvHashMap(); + let mut rlibs = FxHashMap(); + let mut rmetas = FxHashMap(); + let mut dylibs = FxHashMap(); { let locs = locs.map(|l| PathBuf::from(l)).filter(|loc| { if !loc.exists() { @@ -709,7 +747,8 @@ impl<'a> Context<'a> { return false; } }; - if file.starts_with("lib") && file.ends_with(".rlib") { + if file.starts_with("lib") && + (file.ends_with(".rlib") || file.ends_with(".rmeta")) { return true; } else { let (ref prefix, ref suffix) = dylibname; @@ -717,13 +756,12 @@ impl<'a> Context<'a> { return true; } } - sess.struct_err(&format!("extern location for {} is of an unknown type: {}", - self.crate_name, - loc.display())) - .help(&format!("file name should be lib*.rlib or {}*.{}", - dylibname.0, - dylibname.1)) - .emit(); + + self.rejected_via_filename.push(CrateMismatch { + path: loc.clone(), + got: String::new(), + }); + false }); @@ -732,6 +770,8 @@ impl<'a> Context<'a> { for loc in locs { if loc.file_name().unwrap().to_str().unwrap().ends_with(".rlib") { rlibs.insert(fs::canonicalize(&loc).unwrap(), PathKind::ExternFlag); + } else if loc.file_name().unwrap().to_str().unwrap().ends_with(".rmeta") { + rmetas.insert(fs::canonicalize(&loc).unwrap(), PathKind::ExternFlag); } else { dylibs.insert(fs::canonicalize(&loc).unwrap(), PathKind::ExternFlag); } @@ -741,9 +781,10 @@ impl<'a> Context<'a> { // Extract the rlib/dylib pair. 
let mut slot = None; let rlib = self.extract_one(rlibs, CrateFlavor::Rlib, &mut slot); + let rmeta = self.extract_one(rmetas, CrateFlavor::Rmeta, &mut slot); let dylib = self.extract_one(dylibs, CrateFlavor::Dylib, &mut slot); - if rlib.is_none() && dylib.is_none() { + if rlib.is_none() && rmeta.is_none() && dylib.is_none() { return None; } match slot { @@ -751,6 +792,7 @@ impl<'a> Context<'a> { Some(Library { dylib: dylib, rlib: rlib, + rmeta: rmeta, metadata: metadata, }) } @@ -838,6 +880,15 @@ fn get_metadata_section_imp(target: &Target, Ok(blob) } }; + } else if flavor == CrateFlavor::Rmeta { + let mut file = File::open(filename).map_err(|_| + format!("could not open file: '{}'", filename.display()))?; + let mut buf = vec![]; + file.read_to_end(&mut buf).map_err(|_| + format!("failed to read rlib metadata: '{}'", filename.display()))?; + let blob = MetadataBlob::Raw(buf); + verify_decompressed_encoding_version(&blob, filename)?; + return Ok(blob); } unsafe { let buf = common::path2cstr(filename); @@ -921,6 +972,8 @@ pub fn list_file_metadata(target: &Target, path: &Path, out: &mut io::Write) -> let filename = path.file_name().unwrap().to_str().unwrap(); let flavor = if filename.ends_with(".rlib") { CrateFlavor::Rlib + } else if filename.ends_with(".rmeta") { + CrateFlavor::Rmeta } else { CrateFlavor::Dylib }; diff --git a/src/librustc_metadata/schema.rs b/src/librustc_metadata/schema.rs index 3d1bd77d8b..f92051cbf1 100644 --- a/src/librustc_metadata/schema.rs +++ b/src/librustc_metadata/schema.rs @@ -14,7 +14,7 @@ use index; use rustc::hir; use rustc::hir::def::{self, CtorKind}; use rustc::hir::def_id::{DefIndex, DefId}; -use rustc::middle::cstore::{LinkagePreference, NativeLibraryKind}; +use rustc::middle::cstore::{DepKind, LinkagePreference, NativeLibrary}; use rustc::middle::lang_items; use rustc::mir; use rustc::ty::{self, Ty}; @@ -22,6 +22,7 @@ use rustc_back::PanicStrategy; use rustc_serialize as serialize; use syntax::{ast, attr}; +use syntax::symbol::Symbol; use syntax_pos::{self, Span}; use std::marker::PhantomData; @@ -33,15 +34,17 @@ pub fn rustc_version() -> String { /// Metadata encoding version. /// NB: increment this if you change the format of metadata such that -/// the rustc version can't be found to compare with `RUSTC_VERSION`. -pub const METADATA_VERSION: u8 = 3; +/// the rustc version can't be found to compare with `rustc_version()`. +pub const METADATA_VERSION: u8 = 4; /// Metadata header which includes `METADATA_VERSION`. /// To get older versions of rustc to ignore this metadata, /// there are 4 zero bytes at the start, which are treated /// as a length of 0 by old compilers. /// -/// This header is followed by the position of the `CrateRoot`. +/// This header is followed by the position of the `CrateRoot`, +/// which is encoded as a 32-bit big-endian unsigned integer, +/// and further followed by the rustc version string. 
pub const METADATA_HEADER: &'static [u8; 12] = &[0, 0, 0, 0, b'r', b'u', b's', b't', 0, 0, 0, METADATA_VERSION]; @@ -162,11 +165,10 @@ pub enum LazyState { #[derive(RustcEncodable, RustcDecodable)] pub struct CrateRoot { - pub rustc_version: String, - pub name: String, + pub name: Symbol, pub triple: String, pub hash: hir::svh::Svh, - pub disambiguator: String, + pub disambiguator: Symbol, pub panic_strategy: PanicStrategy, pub plugin_registrar_fn: Option, pub macro_derive_registrar: Option, @@ -175,11 +177,10 @@ pub struct CrateRoot { pub dylib_dependency_formats: LazySeq>, pub lang_items: LazySeq<(DefIndex, usize)>, pub lang_items_missing: LazySeq, - pub native_libraries: LazySeq<(NativeLibraryKind, String)>, + pub native_libraries: LazySeq, pub codemap: LazySeq, - pub macro_defs: LazySeq, pub impls: LazySeq, - pub reachable_ids: LazySeq, + pub exported_symbols: LazySeq, pub index: LazySeq, } @@ -187,7 +188,7 @@ pub struct CrateRoot { pub struct CrateDep { pub name: ast::Name, pub hash: hir::svh::Svh, - pub explicitly_linked: bool, + pub kind: DepKind, } #[derive(RustcEncodable, RustcDecodable)] @@ -196,18 +197,11 @@ pub struct TraitImpls { pub impls: LazySeq, } -#[derive(RustcEncodable, RustcDecodable)] -pub struct MacroDef { - pub name: ast::Name, - pub attrs: Vec, - pub span: Span, - pub body: String, -} - #[derive(RustcEncodable, RustcDecodable)] pub struct Entry<'tcx> { pub kind: EntryKind<'tcx>, pub visibility: ty::Visibility, + pub span: Lazy, pub def_key: Lazy, pub attributes: LazySeq, pub children: LazySeq, @@ -241,11 +235,12 @@ pub enum EntryKind<'tcx> { Fn(Lazy), ForeignFn(Lazy), Mod(Lazy), + MacroDef(Lazy), Closure(Lazy>), Trait(Lazy>), Impl(Lazy>), DefaultImpl(Lazy>), - Method(Lazy>), + Method(Lazy), AssociatedType(AssociatedContainer), AssociatedConst(AssociatedContainer), } @@ -255,6 +250,11 @@ pub struct ModData { pub reexports: LazySeq, } +#[derive(RustcEncodable, RustcDecodable)] +pub struct MacroDef { + pub body: String, +} + #[derive(RustcEncodable, RustcDecodable)] pub struct FnData { pub constness: hir::Constness, @@ -276,7 +276,6 @@ pub struct TraitData<'tcx> { pub unsafety: hir::Unsafety, pub paren_sugar: bool, pub has_default_impl: bool, - pub trait_ref: Lazy>, pub super_predicates: Lazy>, } @@ -300,7 +299,7 @@ pub enum AssociatedContainer { } impl AssociatedContainer { - pub fn with_def_id(&self, def_id: DefId) -> ty::ImplOrTraitItemContainer { + pub fn with_def_id(&self, def_id: DefId) -> ty::AssociatedItemContainer { match *self { AssociatedContainer::TraitRequired | AssociatedContainer::TraitWithDefault => ty::TraitContainer(def_id), @@ -310,21 +309,16 @@ impl AssociatedContainer { } } - pub fn has_body(&self) -> bool { - match *self { - AssociatedContainer::TraitRequired => false, - - AssociatedContainer::TraitWithDefault | - AssociatedContainer::ImplDefault | - AssociatedContainer::ImplFinal => true, - } - } - pub fn defaultness(&self) -> hir::Defaultness { match *self { - AssociatedContainer::TraitRequired | + AssociatedContainer::TraitRequired => hir::Defaultness::Default { + has_value: false, + }, + AssociatedContainer::TraitWithDefault | - AssociatedContainer::ImplDefault => hir::Defaultness::Default, + AssociatedContainer::ImplDefault => hir::Defaultness::Default { + has_value: true, + }, AssociatedContainer::ImplFinal => hir::Defaultness::Final, } @@ -332,10 +326,10 @@ impl AssociatedContainer { } #[derive(RustcEncodable, RustcDecodable)] -pub struct MethodData<'tcx> { +pub struct MethodData { pub fn_data: FnData, pub container: AssociatedContainer, - 
pub explicit_self: Lazy>, + pub has_self: bool, } #[derive(RustcEncodable, RustcDecodable)] diff --git a/src/librustc_mir/build/block.rs b/src/librustc_mir/build/block.rs index b53f8c4da8..2c7b47c766 100644 --- a/src/librustc_mir/build/block.rs +++ b/src/librustc_mir/build/block.rs @@ -54,7 +54,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { let tcx = this.hir.tcx(); // Enter the remainder scope, i.e. the bindings' destruction scope. - this.push_scope(remainder_scope, block); + this.push_scope(remainder_scope); let_extent_stack.push(remainder_scope); // Declare the bindings, which may create a visibility scope. diff --git a/src/librustc_mir/build/cfg.rs b/src/librustc_mir/build/cfg.rs index 9f612175e5..71e97e4bfe 100644 --- a/src/librustc_mir/build/cfg.rs +++ b/src/librustc_mir/build/cfg.rs @@ -40,11 +40,6 @@ impl<'tcx> CFG<'tcx> { self.block_data_mut(block).statements.push(statement); } - pub fn current_location(&mut self, block: BasicBlock) -> Location { - let index = self.block_data(block).statements.len(); - Location { block: block, statement_index: index } - } - pub fn push_assign(&mut self, block: BasicBlock, source_info: SourceInfo, diff --git a/src/librustc_mir/build/expr/as_rvalue.rs b/src/librustc_mir/build/expr/as_rvalue.rs index 490f675c3d..b75e52fd4b 100644 --- a/src/librustc_mir/build/expr/as_rvalue.rs +++ b/src/librustc_mir/build/expr/as_rvalue.rs @@ -13,7 +13,7 @@ use std; use rustc_const_math::{ConstMathErr, Op}; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::indexed_vec::Idx; use build::{BlockAnd, BlockAndExtension, Builder}; @@ -190,7 +190,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { // first process the set of fields that were provided // (evaluating them in order given by user) - let fields_map: FnvHashMap<_, _> = + let fields_map: FxHashMap<_, _> = fields.into_iter() .map(|f| (f.name, unpack!(block = this.as_operand(block, f.expr)))) .collect(); diff --git a/src/librustc_mir/build/expr/into.rs b/src/librustc_mir/build/expr/into.rs index 5fa0844222..ffd9525933 100644 --- a/src/librustc_mir/build/expr/into.rs +++ b/src/librustc_mir/build/expr/into.rs @@ -169,48 +169,46 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { this.cfg.terminate(block, source_info, TerminatorKind::Goto { target: loop_block }); - let might_break = this.in_loop_scope(loop_block, exit_block, move |this| { - // conduct the test, if necessary - let body_block; - if let Some(cond_expr) = opt_cond_expr { - // This loop has a condition, ergo its exit_block is reachable. 
- this.find_loop_scope(expr_span, None).might_break = true; + this.in_loop_scope( + loop_block, exit_block, destination.clone(), + move |this| { + // conduct the test, if necessary + let body_block; + if let Some(cond_expr) = opt_cond_expr { + let loop_block_end; + let cond = unpack!( + loop_block_end = this.as_operand(loop_block, cond_expr)); + body_block = this.cfg.start_new_block(); + this.cfg.terminate(loop_block_end, source_info, + TerminatorKind::If { + cond: cond, + targets: (body_block, exit_block) + }); - let loop_block_end; - let cond = unpack!(loop_block_end = this.as_operand(loop_block, cond_expr)); - body_block = this.cfg.start_new_block(); - this.cfg.terminate(loop_block_end, source_info, - TerminatorKind::If { - cond: cond, - targets: (body_block, exit_block) - }); - } else { - body_block = loop_block; + // if the test is false, there's no `break` to assign `destination`, so + // we have to do it; this overwrites any `break`-assigned value but it's + // always `()` anyway + this.cfg.push_assign_unit(exit_block, source_info, destination); + } else { + body_block = loop_block; + } + + // The “return” value of the loop body must always be an unit. We therefore + // introduce a unit temporary as the destination for the loop body. + let tmp = this.get_unit_temp(); + // Execute the body, branching back to the test. + let body_block_end = unpack!(this.into(&tmp, body_block, body)); + this.cfg.terminate(body_block_end, source_info, + TerminatorKind::Goto { target: loop_block }); } - - // The “return” value of the loop body must always be an unit, but we cannot - // reuse that as a “return” value of the whole loop expressions, because some - // loops are diverging (e.g. `loop {}`). Thus, we introduce a unit temporary as - // the destination for the loop body and assign the loop’s own “return” value - // immediately after the iteration is finished. - let tmp = this.get_unit_temp(); - // Execute the body, branching back to the test. - let body_block_end = unpack!(this.into(&tmp, body_block, body)); - this.cfg.terminate(body_block_end, source_info, - TerminatorKind::Goto { target: loop_block }); - }); - // If the loop may reach its exit_block, we assign an empty tuple to the - // destination to keep the MIR well-formed. - if might_break { - this.cfg.push_assign_unit(exit_block, source_info, destination); - } + ); exit_block.unit() } ExprKind::Call { ty, fun, args } => { let diverges = match ty.sty { ty::TyFnDef(_, _, ref f) | ty::TyFnPtr(ref f) => { // FIXME(canndrew): This is_never should probably be an is_uninhabited - f.sig.0.output.is_never() + f.sig.skip_binder().output().is_never() } _ => false }; diff --git a/src/librustc_mir/build/expr/stmt.rs b/src/librustc_mir/build/expr/stmt.rs index 4a1926e7c5..f04d630379 100644 --- a/src/librustc_mir/build/expr/stmt.rs +++ b/src/librustc_mir/build/expr/stmt.rs @@ -11,9 +11,7 @@ use build::{BlockAnd, BlockAndExtension, Builder}; use build::scope::LoopScope; use hair::*; -use rustc::middle::region::CodeExtent; use rustc::mir::*; -use syntax_pos::Span; impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { @@ -79,14 +77,28 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { block.unit() } ExprKind::Continue { label } => { - this.break_or_continue(expr_span, label, block, - |loop_scope| loop_scope.continue_block) + let LoopScope { continue_block, extent, .. 
} = + *this.find_loop_scope(expr_span, label); + this.exit_scope(expr_span, extent, block, continue_block); + this.cfg.start_new_block().unit() } - ExprKind::Break { label } => { - this.break_or_continue(expr_span, label, block, |loop_scope| { - loop_scope.might_break = true; - loop_scope.break_block - }) + ExprKind::Break { label, value } => { + let (break_block, extent, destination) = { + let LoopScope { + break_block, + extent, + ref break_destination, + .. + } = *this.find_loop_scope(expr_span, label); + (break_block, extent, break_destination.clone()) + }; + if let Some(value) = value { + unpack!(block = this.into(&destination, block, value)) + } else { + this.cfg.push_assign_unit(block, source_info, &destination) + } + this.exit_scope(expr_span, extent, block, break_block); + this.cfg.start_new_block().unit() } ExprKind::Return { value } => { block = match value { @@ -115,20 +127,4 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { } } - fn break_or_continue(&mut self, - span: Span, - label: Option, - block: BasicBlock, - exit_selector: F) - -> BlockAnd<()> - where F: FnOnce(&mut LoopScope) -> BasicBlock - { - let (exit_block, extent) = { - let loop_scope = self.find_loop_scope(span, label); - (exit_selector(loop_scope), loop_scope.extent) - }; - self.exit_scope(span, extent, block, exit_block); - self.cfg.start_new_block().unit() - } - } diff --git a/src/librustc_mir/build/matches/mod.rs b/src/librustc_mir/build/matches/mod.rs index 727e634ef9..e06d940de7 100644 --- a/src/librustc_mir/build/matches/mod.rs +++ b/src/librustc_mir/build/matches/mod.rs @@ -14,7 +14,7 @@ //! details. use build::{BlockAnd, BlockAndExtension, Builder}; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::bitvec::BitVector; use rustc::middle::const_val::ConstVal; use rustc::ty::{AdtDef, Ty}; @@ -301,7 +301,7 @@ pub struct MatchPair<'pat, 'tcx:'pat> { enum TestKind<'tcx> { // test the branches of enum Switch { - adt_def: AdtDef<'tcx>, + adt_def: &'tcx AdtDef, variants: BitVector, }, @@ -309,7 +309,7 @@ enum TestKind<'tcx> { SwitchInt { switch_ty: Ty<'tcx>, options: Vec, - indices: FnvHashMap, + indices: FxHashMap, }, // test for equality diff --git a/src/librustc_mir/build/matches/test.rs b/src/librustc_mir/build/matches/test.rs index 5984b0f789..cb449037ae 100644 --- a/src/librustc_mir/build/matches/test.rs +++ b/src/librustc_mir/build/matches/test.rs @@ -18,7 +18,7 @@ use build::Builder; use build::matches::{Candidate, MatchPair, Test, TestKind}; use hair::*; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::bitvec::BitVector; use rustc::middle::const_val::ConstVal; use rustc::ty::{self, Ty}; @@ -54,7 +54,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { // these maps are empty to start; cases are // added below in add_cases_to_switch options: vec![], - indices: FnvHashMap(), + indices: FxHashMap(), } } } @@ -110,7 +110,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { candidate: &Candidate<'pat, 'tcx>, switch_ty: Ty<'tcx>, options: &mut Vec, - indices: &mut FnvHashMap) + indices: &mut FxHashMap) -> bool { let match_pair = match candidate.match_pairs.iter().find(|mp| mp.lvalue == *test_lvalue) { @@ -615,7 +615,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { fn candidate_after_variant_switch<'pat>(&mut self, match_pair_index: usize, - adt_def: ty::AdtDef<'tcx>, + adt_def: &'tcx ty::AdtDef, variant_index: usize, subpatterns: &'pat [FieldPattern<'tcx>], candidate: 
&Candidate<'pat, 'tcx>) diff --git a/src/librustc_mir/build/mod.rs b/src/librustc_mir/build/mod.rs index b37dd8dd0a..0e4dbb0477 100644 --- a/src/librustc_mir/build/mod.rs +++ b/src/librustc_mir/build/mod.rs @@ -18,7 +18,7 @@ use rustc::util::nodemap::NodeMap; use rustc::hir; use syntax::abi::Abi; use syntax::ast; -use syntax::parse::token::keywords; +use syntax::symbol::keywords; use syntax_pos::Span; use rustc_data_structures::indexed_vec::{IndexVec, Idx}; @@ -36,16 +36,9 @@ pub struct Builder<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { /// see the `scope` module for more details scopes: Vec>, - /// for each scope, a span of blocks that defines it; - /// we track these for use in region and borrow checking, - /// but these are liable to get out of date once optimization - /// begins. They are also hopefully temporary, and will be - /// no longer needed when we adopt graph-based regions. - scope_auxiliary: IndexVec, - /// the current set of loops; see the `scope` module for more /// details - loop_scopes: Vec, + loop_scopes: Vec>, /// the vector of all scopes that we have created thus far; /// we track this for debuginfo later @@ -82,30 +75,6 @@ impl Idx for ScopeId { } } -/// For each scope, we track the extent (from the HIR) and a -/// single-entry-multiple-exit subgraph that contains all the -/// statements/terminators within it. -/// -/// This information is separated out from the main `ScopeData` -/// because it is short-lived. First, the extent contains node-ids, -/// so it cannot be saved and re-loaded. Second, any optimization will mess up -/// the dominator/postdominator information. -/// -/// The intention is basically to use this information to do -/// regionck/borrowck and then throw it away once we are done. -pub struct ScopeAuxiliary { - /// extent of this scope from the MIR. - pub extent: CodeExtent, - - /// "entry point": dominator of all nodes in the scope - pub dom: Location, - - /// "exit points": mutual postdominators of all nodes in the scope - pub postdoms: Vec, -} - -pub type ScopeAuxiliaryVec = IndexVec; - /////////////////////////////////////////////////////////////////////////// /// The `BlockAnd` "monad" packages up the new basic block along with a /// produced value (sometimes just unit, of course). The `unpack!` @@ -155,9 +124,10 @@ macro_rules! 
unpack { pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, fn_id: ast::NodeId, arguments: A, + abi: Abi, return_ty: Ty<'gcx>, - ast_block: &'gcx hir::Block) - -> (Mir<'tcx>, ScopeAuxiliaryVec) + ast_body: &'gcx hir::Expr) + -> Mir<'tcx> where A: Iterator, Option<&'gcx hir::Pat>)> { let arguments: Vec<_> = arguments.collect(); @@ -166,7 +136,7 @@ pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, let span = tcx.map.span(fn_id); let mut builder = Builder::new(hir, span, arguments.len(), return_ty); - let body_id = ast_block.id; + let body_id = ast_body.id; let call_site_extent = tcx.region_maps.lookup_code_extent( CodeExtentData::CallSiteScope { fn_id: fn_id, body_id: body_id }); @@ -176,7 +146,7 @@ pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, let mut block = START_BLOCK; unpack!(block = builder.in_scope(call_site_extent, block, |builder| { unpack!(block = builder.in_scope(arg_extent, block, |builder| { - builder.args_and_body(block, return_ty, &arguments, arg_extent, ast_block) + builder.args_and_body(block, &arguments, arg_extent, ast_body) })); // Attribute epilogue to function's closing brace let fn_end = Span { lo: span.hi, ..span }; @@ -191,12 +161,9 @@ pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, assert_eq!(block, builder.return_block()); let mut spread_arg = None; - match tcx.tables().node_id_to_type(fn_id).sty { - ty::TyFnDef(_, _, f) if f.abi == Abi::RustCall => { - // RustCall pseudo-ABI untuples the last argument. - spread_arg = Some(Local::new(arguments.len())); - } - _ => {} + if abi == Abi::RustCall { + // RustCall pseudo-ABI untuples the last argument. + spread_arg = Some(Local::new(arguments.len())); } // Gather the upvars of a closure, if any. @@ -215,7 +182,7 @@ pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, by_ref: by_ref }; if let Some(hir::map::NodeLocal(pat)) = tcx.map.find(var_id) { - if let hir::PatKind::Binding(_, ref ident, _) = pat.node { + if let hir::PatKind::Binding(_, _, ref ident, _) = pat.node { decl.debug_name = ident.node; } } @@ -223,15 +190,15 @@ pub fn construct_fn<'a, 'gcx, 'tcx, A>(hir: Cx<'a, 'gcx, 'tcx>, }).collect() }); - let (mut mir, aux) = builder.finish(upvar_decls, return_ty); + let mut mir = builder.finish(upvar_decls, return_ty); mir.spread_arg = spread_arg; - (mir, aux) + mir } pub fn construct_const<'a, 'gcx, 'tcx>(hir: Cx<'a, 'gcx, 'tcx>, item_id: ast::NodeId, ast_expr: &'tcx hir::Expr) - -> (Mir<'tcx>, ScopeAuxiliaryVec) { + -> Mir<'tcx> { let tcx = hir.tcx(); let ty = tcx.tables().expr_ty_adjusted(ast_expr); let span = tcx.map.span(item_id); @@ -271,7 +238,6 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { scopes: vec![], visibility_scopes: IndexVec::new(), visibility_scope: ARGUMENT_VISIBILITY_SCOPE, - scope_auxiliary: IndexVec::new(), loop_scopes: vec![], local_decls: IndexVec::from_elem_n(LocalDecl::new_return_pointer(return_ty), 1), var_indices: NodeMap(), @@ -290,30 +256,29 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { fn finish(self, upvar_decls: Vec, return_ty: Ty<'tcx>) - -> (Mir<'tcx>, ScopeAuxiliaryVec) { + -> Mir<'tcx> { for (index, block) in self.cfg.basic_blocks.iter().enumerate() { if block.terminator.is_none() { span_bug!(self.fn_span, "no terminator on block {:?}", index); } } - (Mir::new(self.cfg.basic_blocks, - self.visibility_scopes, - IndexVec::new(), - return_ty, - self.local_decls, - self.arg_count, - upvar_decls, - self.fn_span - ), self.scope_auxiliary) + Mir::new(self.cfg.basic_blocks, + self.visibility_scopes, + 
IndexVec::new(), + return_ty, + self.local_decls, + self.arg_count, + upvar_decls, + self.fn_span + ) } fn args_and_body(&mut self, mut block: BasicBlock, - return_ty: Ty<'tcx>, arguments: &[(Ty<'gcx>, Option<&'gcx hir::Pat>)], argument_extent: CodeExtent, - ast_block: &'gcx hir::Block) + ast_body: &'gcx hir::Expr) -> BlockAnd<()> { // Allocate locals for the function arguments @@ -321,7 +286,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { // If this is a simple binding pattern, give the local a nice name for debuginfo. let mut name = None; if let Some(pat) = pattern { - if let hir::PatKind::Binding(_, ref ident, _) = pat.node { + if let hir::PatKind::Binding(_, _, ref ident, _) = pat.node { name = Some(ident.node); } } @@ -342,12 +307,12 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { if let Some(pattern) = pattern { let pattern = Pattern::from_hir(self.hir.tcx(), pattern); - scope = self.declare_bindings(scope, ast_block.span, &pattern); + scope = self.declare_bindings(scope, ast_body.span, &pattern); unpack!(block = self.lvalue_into_pattern(block, pattern, &lvalue)); } // Make sure we drop (parts of) the argument even when not matched on. - self.schedule_drop(pattern.as_ref().map_or(ast_block.span, |pat| pat.span), + self.schedule_drop(pattern.as_ref().map_or(ast_body.span, |pat| pat.span), argument_extent, &lvalue, ty); } @@ -357,13 +322,8 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { self.visibility_scope = visibility_scope; } - // FIXME(#32959): temporary hack for the issue at hand - let return_is_unit = return_ty.is_nil(); - // start the first basic block and translate the body - unpack!(block = self.ast_block(&Lvalue::Local(RETURN_POINTER), - return_is_unit, block, ast_block)); - - block.unit() + let body = self.hir.mirror(ast_body); + self.into(&Lvalue::Local(RETURN_POINTER), block, body) } fn get_unit_temp(&mut self) -> Lvalue<'tcx> { diff --git a/src/librustc_mir/build/scope.rs b/src/librustc_mir/build/scope.rs index af8170a1b8..c02a1822d7 100644 --- a/src/librustc_mir/build/scope.rs +++ b/src/librustc_mir/build/scope.rs @@ -86,7 +86,7 @@ should go to. */ -use build::{BlockAnd, BlockAndExtension, Builder, CFG, ScopeAuxiliary, ScopeId}; +use build::{BlockAnd, BlockAndExtension, Builder, CFG}; use rustc::middle::region::{CodeExtent, CodeExtentData}; use rustc::middle::lang_items; use rustc::ty::subst::{Kind, Subst}; @@ -94,17 +94,13 @@ use rustc::ty::{Ty, TyCtxt}; use rustc::mir::*; use syntax_pos::Span; use rustc_data_structures::indexed_vec::Idx; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; pub struct Scope<'tcx> { - /// the scope-id within the scope_auxiliary - id: ScopeId, - /// The visibility scope this scope was created in. visibility_scope: VisibilityScope, - /// the extent of this scope within source code; also stored in - /// `ScopeAuxiliary`, but kept here for convenience + /// the extent of this scope within source code. extent: CodeExtent, /// Whether there's anything to do for the cleanup path, that is, @@ -140,7 +136,7 @@ pub struct Scope<'tcx> { free: Option>, /// The cache for drop chain on “normal” exit into a particular BasicBlock. 
- cached_exits: FnvHashMap<(BasicBlock, CodeExtent), BasicBlock>, + cached_exits: FxHashMap<(BasicBlock, CodeExtent), BasicBlock>, } struct DropData<'tcx> { @@ -181,7 +177,7 @@ struct FreeData<'tcx> { } #[derive(Clone, Debug)] -pub struct LoopScope { +pub struct LoopScope<'tcx> { /// Extent of the loop pub extent: CodeExtent, /// Where the body of the loop begins @@ -189,8 +185,9 @@ pub struct LoopScope { /// Block to branch into when the loop terminates (either by being `break`-en out from, or by /// having its condition to become false) pub break_block: BasicBlock, - /// Indicates the reachability of the break_block for this loop - pub might_break: bool + /// The destination of the loop expression itself (i.e. where to put the result of a `break` + /// expression) + pub break_destination: Lvalue<'tcx>, } impl<'tcx> Scope<'tcx> { @@ -250,10 +247,10 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { /// /// Returns the might_break attribute of the LoopScope used. pub fn in_loop_scope(&mut self, - loop_block: BasicBlock, - break_block: BasicBlock, - f: F) - -> bool + loop_block: BasicBlock, + break_block: BasicBlock, + break_destination: Lvalue<'tcx>, + f: F) where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) { let extent = self.extent_of_innermost_scope(); @@ -261,13 +258,12 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { extent: extent.clone(), continue_block: loop_block, break_block: break_block, - might_break: false + break_destination: break_destination, }; self.loop_scopes.push(loop_scope); f(self); let loop_scope = self.loop_scopes.pop().unwrap(); assert!(loop_scope.extent == extent); - loop_scope.might_break } /// Convenience wrapper that pushes a scope and then executes `f` @@ -276,7 +272,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd { debug!("in_scope(extent={:?}, block={:?})", extent, block); - self.push_scope(extent, block); + self.push_scope(extent); let rv = unpack!(block = f(self)); unpack!(block = self.pop_scope(extent, block)); debug!("in_scope: exiting extent={:?} block={:?}", extent, block); @@ -287,23 +283,16 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { /// scope and call `pop_scope` afterwards. Note that these two /// calls must be paired; using `in_scope` as a convenience /// wrapper maybe preferable. 
- pub fn push_scope(&mut self, extent: CodeExtent, entry: BasicBlock) { + pub fn push_scope(&mut self, extent: CodeExtent) { debug!("push_scope({:?})", extent); - let id = ScopeId::new(self.scope_auxiliary.len()); let vis_scope = self.visibility_scope; self.scopes.push(Scope { - id: id, visibility_scope: vis_scope, extent: extent, needs_cleanup: false, drops: vec![], free: None, - cached_exits: FnvHashMap() - }); - self.scope_auxiliary.push(ScopeAuxiliary { - extent: extent, - dom: self.cfg.current_location(entry), - postdoms: vec![] + cached_exits: FxHashMap() }); } @@ -325,9 +314,6 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { &self.scopes, block, self.arg_count)); - self.scope_auxiliary[scope.id] - .postdoms - .push(self.cfg.current_location(block)); block.unit() } @@ -375,9 +361,6 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { self.cfg.terminate(block, scope.source_info(span), free); block = next; } - self.scope_auxiliary[scope.id] - .postdoms - .push(self.cfg.current_location(block)); } } let scope = &self.scopes[len - scope_count]; @@ -403,7 +386,7 @@ impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> { pub fn find_loop_scope(&mut self, span: Span, label: Option) - -> &mut LoopScope { + -> &mut LoopScope<'tcx> { let loop_scopes = &mut self.loop_scopes; match label { None => { @@ -800,13 +783,12 @@ fn build_free<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, data: &FreeData<'tcx>, target: BasicBlock) -> TerminatorKind<'tcx> { - let free_func = tcx.lang_items.require(lang_items::BoxFreeFnLangItem) - .unwrap_or_else(|e| tcx.sess.fatal(&e)); + let free_func = tcx.require_lang_item(lang_items::BoxFreeFnLangItem); let substs = tcx.intern_substs(&[Kind::from(data.item_ty)]); TerminatorKind::Call { func: Operand::Constant(Constant { span: data.span, - ty: tcx.lookup_item_type(free_func).ty.subst(tcx, substs), + ty: tcx.item_type(free_func).subst(tcx, substs), literal: Literal::Item { def_id: free_func, substs: substs diff --git a/src/librustc_mir/hair/cx/block.rs b/src/librustc_mir/hair/cx/block.rs index cb69de2cb3..b355c8f2c4 100644 --- a/src/librustc_mir/hair/cx/block.rs +++ b/src/librustc_mir/hair/cx/block.rs @@ -26,7 +26,7 @@ impl<'tcx> Mirror<'tcx> for &'tcx hir::Block { extent: cx.tcx.region_maps.node_extent(self.id), span: self.span, stmts: stmts, - expr: self.expr.to_ref() + expr: self.expr.to_ref(), } } } @@ -34,39 +34,44 @@ impl<'tcx> Mirror<'tcx> for &'tcx hir::Block { fn mirror_stmts<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, block_id: ast::NodeId, stmts: &'tcx [hir::Stmt]) - -> Vec> -{ + -> Vec> { let mut result = vec![]; for (index, stmt) in stmts.iter().enumerate() { match stmt.node { - hir::StmtExpr(ref expr, id) | hir::StmtSemi(ref expr, id) => + hir::StmtExpr(ref expr, id) | + hir::StmtSemi(ref expr, id) => { result.push(StmtRef::Mirror(Box::new(Stmt { span: stmt.span, kind: StmtKind::Expr { scope: cx.tcx.region_maps.node_extent(id), - expr: expr.to_ref() + expr: expr.to_ref(), + }, + }))) + } + hir::StmtDecl(ref decl, id) => { + match decl.node { + hir::DeclItem(..) => { + // ignore for purposes of the MIR } - }))), - hir::StmtDecl(ref decl, id) => match decl.node { - hir::DeclItem(..) 
=> { /* ignore for purposes of the MIR */ } - hir::DeclLocal(ref local) => { - let remainder_extent = CodeExtentData::Remainder(BlockRemainder { - block: block_id, - first_statement_index: index as u32, - }); - let remainder_extent = - cx.tcx.region_maps.lookup_code_extent(remainder_extent); + hir::DeclLocal(ref local) => { + let remainder_extent = CodeExtentData::Remainder(BlockRemainder { + block: block_id, + first_statement_index: index as u32, + }); + let remainder_extent = + cx.tcx.region_maps.lookup_code_extent(remainder_extent); - let pattern = Pattern::from_hir(cx.tcx, &local.pat); - result.push(StmtRef::Mirror(Box::new(Stmt { - span: stmt.span, - kind: StmtKind::Let { - remainder_scope: remainder_extent, - init_scope: cx.tcx.region_maps.node_extent(id), - pattern: pattern, - initializer: local.init.to_ref(), - }, - }))); + let pattern = Pattern::from_hir(cx.tcx, &local.pat); + result.push(StmtRef::Mirror(Box::new(Stmt { + span: stmt.span, + kind: StmtKind::Let { + remainder_scope: remainder_extent, + init_scope: cx.tcx.region_maps.node_extent(id), + pattern: pattern, + initializer: local.init.to_ref(), + }, + }))); + } } } } diff --git a/src/librustc_mir/hair/cx/expr.rs b/src/librustc_mir/hair/cx/expr.rs index ba0d3b49a6..d579cdb042 100644 --- a/src/librustc_mir/hair/cx/expr.rs +++ b/src/librustc_mir/hair/cx/expr.rs @@ -18,10 +18,8 @@ use rustc::hir::map; use rustc::hir::def::{Def, CtorKind}; use rustc::middle::const_val::ConstVal; use rustc_const_eval as const_eval; -use rustc::middle::region::CodeExtent; use rustc::ty::{self, AdtKind, VariantDef, Ty}; use rustc::ty::cast::CastKind as TyCastKind; -use rustc::mir::*; use rustc::hir; use syntax::ptr::P; @@ -38,7 +36,8 @@ impl<'tcx> Mirror<'tcx> for &'tcx hir::Expr { let adj = cx.tcx.tables().adjustments.get(&self.id).cloned(); debug!("make_mirror: unadjusted-expr={:?} applying adjustments={:?}", - expr, adj); + expr, + adj); // Now apply adjustments, if any. match adj.map(|adj| (adj.kind, adj.target)) { @@ -80,41 +79,44 @@ impl<'tcx> Mirror<'tcx> for &'tcx hir::Expr { for i in 0..autoderefs { let i = i as u32; let adjusted_ty = - expr.ty.adjust_for_autoderef( - cx.tcx, - self.id, - self.span, - i, - |mc| cx.tcx.tables().method_map.get(&mc).map(|m| m.ty)); - debug!("make_mirror: autoderef #{}, adjusted_ty={:?}", i, adjusted_ty); + expr.ty.adjust_for_autoderef(cx.tcx, self.id, self.span, i, |mc| { + cx.tcx.tables().method_map.get(&mc).map(|m| m.ty) + }); + debug!("make_mirror: autoderef #{}, adjusted_ty={:?}", + i, + adjusted_ty); let method_key = ty::MethodCall::autoderef(self.id, i); - let meth_ty = - cx.tcx.tables().method_map.get(&method_key).map(|m| m.ty); + let meth_ty = cx.tcx.tables().method_map.get(&method_key).map(|m| m.ty); let kind = if let Some(meth_ty) = meth_ty { debug!("make_mirror: overloaded autoderef (meth_ty={:?})", meth_ty); let ref_ty = cx.tcx.no_late_bound_regions(&meth_ty.fn_ret()); let (region, mutbl) = match ref_ty { - Some(&ty::TyS { - sty: ty::TyRef(region, mt), .. - }) => (region, mt.mutbl), - _ => span_bug!(expr.span, "autoderef returned bad type") + Some(&ty::TyS { sty: ty::TyRef(region, mt), .. 
}) => (region, mt.mutbl), + _ => span_bug!(expr.span, "autoderef returned bad type"), }; expr = Expr { temp_lifetime: temp_lifetime, - ty: cx.tcx.mk_ref( - region, ty::TypeAndMut { ty: expr.ty, mutbl: mutbl }), + ty: cx.tcx.mk_ref(region, + ty::TypeAndMut { + ty: expr.ty, + mutbl: mutbl, + }), span: expr.span, kind: ExprKind::Borrow { region: region, borrow_kind: to_borrow_kind(mutbl), - arg: expr.to_ref() - } + arg: expr.to_ref(), + }, }; - overloaded_lvalue(cx, self, method_key, - PassArgs::ByRef, expr.to_ref(), vec![]) + overloaded_lvalue(cx, + self, + method_key, + PassArgs::ByRef, + expr.to_ref(), + vec![]) } else { debug!("make_mirror: built-in autoderef"); ExprKind::Deref { arg: expr.to_ref() } @@ -150,7 +152,11 @@ impl<'tcx> Mirror<'tcx> for &'tcx hir::Expr { let region = cx.tcx.mk_region(region); expr = Expr { temp_lifetime: temp_lifetime, - ty: cx.tcx.mk_ref(region, ty::TypeAndMut { ty: expr.ty, mutbl: m }), + ty: cx.tcx.mk_ref(region, + ty::TypeAndMut { + ty: expr.ty, + mutbl: m, + }), span: self.span, kind: ExprKind::Borrow { region: region, @@ -242,57 +248,62 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, let sig = match method.ty.sty { ty::TyFnDef(.., fn_ty) => &fn_ty.sig, - _ => span_bug!(expr.span, "type of method is not an fn") + _ => span_bug!(expr.span, "type of method is not an fn"), }; - let sig = cx.tcx.no_late_bound_regions(sig).unwrap_or_else(|| { - span_bug!(expr.span, "method call has late-bound regions") - }); + let sig = cx.tcx + .no_late_bound_regions(sig) + .unwrap_or_else(|| span_bug!(expr.span, "method call has late-bound regions")); - assert_eq!(sig.inputs.len(), 2); + assert_eq!(sig.inputs().len(), 2); let tupled_args = Expr { - ty: sig.inputs[1], + ty: sig.inputs()[1], temp_lifetime: temp_lifetime, span: expr.span, - kind: ExprKind::Tuple { - fields: args.iter().map(ToRef::to_ref).collect() - } + kind: ExprKind::Tuple { fields: args.iter().map(ToRef::to_ref).collect() }, }; ExprKind::Call { ty: method.ty, fun: method.to_ref(), - args: vec![fun.to_ref(), tupled_args.to_ref()] + args: vec![fun.to_ref(), tupled_args.to_ref()], } } else { - let adt_data = if let hir::ExprPath(..) = fun.node { + let adt_data = if let hir::ExprPath(hir::QPath::Resolved(_, ref path)) = fun.node { // Tuple-like ADTs are represented as ExprCall. We convert them here. 
- expr_ty.ty_adt_def().and_then(|adt_def|{ - match cx.tcx.expect_def(fun.id) { + expr_ty.ty_adt_def().and_then(|adt_def| { + match path.def { Def::VariantCtor(variant_id, CtorKind::Fn) => { Some((adt_def, adt_def.variant_index_with_id(variant_id))) - }, - Def::StructCtor(_, CtorKind::Fn) => { - Some((adt_def, 0)) - }, - _ => None + } + Def::StructCtor(_, CtorKind::Fn) => Some((adt_def, 0)), + _ => None, } }) - } else { None }; + } else { + None + }; if let Some((adt_def, index)) = adt_data { - let substs = cx.tcx.tables().node_id_item_substs(fun.id) + let substs = cx.tcx + .tables() + .node_id_item_substs(fun.id) .unwrap_or_else(|| cx.tcx.intern_substs(&[])); - let field_refs = args.iter().enumerate().map(|(idx, e)| FieldExprRef { - name: Field::new(idx), - expr: e.to_ref() - }).collect(); + let field_refs = args.iter() + .enumerate() + .map(|(idx, e)| { + FieldExprRef { + name: Field::new(idx), + expr: e.to_ref(), + } + }) + .collect(); ExprKind::Adt { adt_def: adt_def, substs: substs, variant_index: index, fields: field_refs, - base: None + base: None, } } else { ExprKind::Call { @@ -316,9 +327,7 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } } - hir::ExprBlock(ref blk) => { - ExprKind::Block { body: &blk } - } + hir::ExprBlock(ref blk) => ExprKind::Block { body: &blk }, hir::ExprAssign(ref lhs, ref rhs) => { ExprKind::Assign { @@ -334,8 +343,12 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } else { PassArgs::ByRef }; - overloaded_operator(cx, expr, ty::MethodCall::expr(expr.id), - pass_args, lhs.to_ref(), vec![rhs]) + overloaded_operator(cx, + expr, + ty::MethodCall::expr(expr.id), + pass_args, + lhs.to_ref(), + vec![rhs]) } else { ExprKind::AssignOp { op: bin_op(op.node), @@ -345,9 +358,7 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } } - hir::ExprLit(..) => ExprKind::Literal { - literal: cx.const_eval_literal(expr) - }, + hir::ExprLit(..) 
=> ExprKind::Literal { literal: cx.const_eval_literal(expr) }, hir::ExprBinary(op, ref lhs, ref rhs) => { if cx.tcx.tables().is_method_call(expr.id) { @@ -356,8 +367,12 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } else { PassArgs::ByRef }; - overloaded_operator(cx, expr, ty::MethodCall::expr(expr.id), - pass_args, lhs.to_ref(), vec![rhs]) + overloaded_operator(cx, + expr, + ty::MethodCall::expr(expr.id), + pass_args, + lhs.to_ref(), + vec![rhs]) } else { // FIXME overflow match (op.node, cx.constness) { @@ -407,8 +422,12 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, hir::ExprIndex(ref lhs, ref index) => { if cx.tcx.tables().is_method_call(expr.id) { - overloaded_lvalue(cx, expr, ty::MethodCall::expr(expr.id), - PassArgs::ByValue, lhs.to_ref(), vec![index]) + overloaded_lvalue(cx, + expr, + ty::MethodCall::expr(expr.id), + PassArgs::ByValue, + lhs.to_ref(), + vec![index]) } else { ExprKind::Index { lhs: lhs.to_ref(), @@ -419,8 +438,12 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, hir::ExprUnary(hir::UnOp::UnDeref, ref arg) => { if cx.tcx.tables().is_method_call(expr.id) { - overloaded_lvalue(cx, expr, ty::MethodCall::expr(expr.id), - PassArgs::ByValue, arg.to_ref(), vec![]) + overloaded_lvalue(cx, + expr, + ty::MethodCall::expr(expr.id), + PassArgs::ByValue, + arg.to_ref(), + vec![]) } else { ExprKind::Deref { arg: arg.to_ref() } } @@ -428,8 +451,12 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, hir::ExprUnary(hir::UnOp::UnNot, ref arg) => { if cx.tcx.tables().is_method_call(expr.id) { - overloaded_operator(cx, expr, ty::MethodCall::expr(expr.id), - PassArgs::ByValue, arg.to_ref(), vec![]) + overloaded_operator(cx, + expr, + ty::MethodCall::expr(expr.id), + PassArgs::ByValue, + arg.to_ref(), + vec![]) } else { ExprKind::Unary { op: UnOp::Not, @@ -440,14 +467,16 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, hir::ExprUnary(hir::UnOp::UnNeg, ref arg) => { if cx.tcx.tables().is_method_call(expr.id) { - overloaded_operator(cx, expr, ty::MethodCall::expr(expr.id), - PassArgs::ByValue, arg.to_ref(), vec![]) + overloaded_operator(cx, + expr, + ty::MethodCall::expr(expr.id), + PassArgs::ByValue, + arg.to_ref(), + vec![]) } else { // FIXME runtime-overflow if let hir::ExprLit(_) = arg.node { - ExprKind::Literal { - literal: cx.const_eval_literal(expr), - } + ExprKind::Literal { literal: cx.const_eval_literal(expr) } } else { ExprKind::Unary { op: UnOp::Neg, @@ -457,54 +486,56 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } } - hir::ExprStruct(_, ref fields, ref base) => { + hir::ExprStruct(ref qpath, ref fields, ref base) => { match expr_ty.sty { - ty::TyAdt(adt, substs) => match adt.adt_kind() { - AdtKind::Struct | AdtKind::Union => { - let field_refs = field_refs(&adt.variants[0], fields); - ExprKind::Adt { - adt_def: adt, - variant_index: 0, - substs: substs, - fields: field_refs, - base: base.as_ref().map(|base| { - FruInfo { - base: base.to_ref(), - field_types: - cx.tcx.tables().fru_field_types[&expr.id].clone() - } - }) + ty::TyAdt(adt, substs) => { + match adt.adt_kind() { + AdtKind::Struct | AdtKind::Union => { + let field_refs = field_refs(&adt.variants[0], fields); + ExprKind::Adt { + adt_def: adt, + variant_index: 0, + substs: substs, + fields: field_refs, + base: base.as_ref().map(|base| { + FruInfo { + base: base.to_ref(), + field_types: cx.tcx.tables().fru_field_types[&expr.id] + .clone(), + } + }), + } } - } - 
AdtKind::Enum => { - match cx.tcx.expect_def(expr.id) { - Def::Variant(variant_id) => { - assert!(base.is_none()); + AdtKind::Enum => { + let def = match *qpath { + hir::QPath::Resolved(_, ref path) => path.def, + hir::QPath::TypeRelative(..) => Def::Err, + }; + match def { + Def::Variant(variant_id) => { + assert!(base.is_none()); - let index = adt.variant_index_with_id(variant_id); - let field_refs = field_refs(&adt.variants[index], fields); - ExprKind::Adt { - adt_def: adt, - variant_index: index, - substs: substs, - fields: field_refs, - base: None + let index = adt.variant_index_with_id(variant_id); + let field_refs = field_refs(&adt.variants[index], fields); + ExprKind::Adt { + adt_def: adt, + variant_index: index, + substs: substs, + fields: field_refs, + base: None, + } + } + _ => { + span_bug!(expr.span, "unexpected def: {:?}", def); } - } - ref def => { - span_bug!( - expr.span, - "unexpected def: {:?}", - def); } } } - }, + } _ => { - span_bug!( - expr.span, - "unexpected type for struct literal: {:?}", - expr_ty); + span_bug!(expr.span, + "unexpected type for struct literal: {:?}", + expr_ty); } } } @@ -514,15 +545,13 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, let (def_id, substs) = match closure_ty.sty { ty::TyClosure(def_id, substs) => (def_id, substs), _ => { - span_bug!(expr.span, - "closure expr w/o closure type: {:?}", - closure_ty); + span_bug!(expr.span, "closure expr w/o closure type: {:?}", closure_ty); } }; let upvars = cx.tcx.with_freevars(expr.id, |freevars| { freevars.iter() - .enumerate() - .map(|(i, fv)| capture_freevar(cx, expr, fv, substs.upvar_tys[i])) + .zip(substs.upvar_tys(def_id, cx.tcx)) + .map(|(fv, ty)| capture_freevar(cx, expr, fv, ty)) .collect() }); ExprKind::Closure { @@ -532,68 +561,90 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } } - hir::ExprPath(..) 
=> { - convert_path_expr(cx, expr) + hir::ExprPath(ref qpath) => { + let def = cx.tcx.tables().qpath_def(qpath, expr.id); + convert_path_expr(cx, expr, def) } hir::ExprInlineAsm(ref asm, ref outputs, ref inputs) => { ExprKind::InlineAsm { asm: asm, outputs: outputs.to_ref(), - inputs: inputs.to_ref() + inputs: inputs.to_ref(), } } // Now comes the rote stuff: - - hir::ExprRepeat(ref v, ref c) => ExprKind::Repeat { - value: v.to_ref(), - count: TypedConstVal { - ty: cx.tcx.tables().expr_ty(c), - span: c.span, - value: match const_eval::eval_const_expr(cx.tcx.global_tcx(), c) { - ConstVal::Integral(ConstInt::Usize(u)) => u, - other => bug!("constant evaluation of repeat count yielded {:?}", other), + hir::ExprRepeat(ref v, ref c) => { + ExprKind::Repeat { + value: v.to_ref(), + count: TypedConstVal { + ty: cx.tcx.tables().expr_ty(c), + span: c.span, + value: match const_eval::eval_const_expr(cx.tcx.global_tcx(), c) { + ConstVal::Integral(ConstInt::Usize(u)) => u, + other => bug!("constant evaluation of repeat count yielded {:?}", other), + }, }, } - }, - hir::ExprRet(ref v) => - ExprKind::Return { value: v.to_ref() }, - hir::ExprBreak(label) => - ExprKind::Break { label: label.map(|_| loop_label(cx, expr)) }, - hir::ExprAgain(label) => - ExprKind::Continue { label: label.map(|_| loop_label(cx, expr)) }, - hir::ExprMatch(ref discr, ref arms, _) => - ExprKind::Match { discriminant: discr.to_ref(), - arms: arms.iter().map(|a| convert_arm(cx, a)).collect() }, - hir::ExprIf(ref cond, ref then, ref otherwise) => - ExprKind::If { condition: cond.to_ref(), - then: block::to_expr_ref(cx, then), - otherwise: otherwise.to_ref() }, - hir::ExprWhile(ref cond, ref body, _) => - ExprKind::Loop { condition: Some(cond.to_ref()), - body: block::to_expr_ref(cx, body) }, - hir::ExprLoop(ref body, _) => - ExprKind::Loop { condition: None, - body: block::to_expr_ref(cx, body) }, + } + hir::ExprRet(ref v) => ExprKind::Return { value: v.to_ref() }, + hir::ExprBreak(label, ref value) => { + ExprKind::Break { + label: label.map(|label| cx.tcx.region_maps.node_extent(label.loop_id)), + value: value.to_ref(), + } + } + hir::ExprAgain(label) => { + ExprKind::Continue { + label: label.map(|label| cx.tcx.region_maps.node_extent(label.loop_id)), + } + } + hir::ExprMatch(ref discr, ref arms, _) => { + ExprKind::Match { + discriminant: discr.to_ref(), + arms: arms.iter().map(|a| convert_arm(cx, a)).collect(), + } + } + hir::ExprIf(ref cond, ref then, ref otherwise) => { + ExprKind::If { + condition: cond.to_ref(), + then: block::to_expr_ref(cx, then), + otherwise: otherwise.to_ref(), + } + } + hir::ExprWhile(ref cond, ref body, _) => { + ExprKind::Loop { + condition: Some(cond.to_ref()), + body: block::to_expr_ref(cx, body), + } + } + hir::ExprLoop(ref body, _, _) => { + ExprKind::Loop { + condition: None, + body: block::to_expr_ref(cx, body), + } + } hir::ExprField(ref source, name) => { let index = match cx.tcx.tables().expr_ty_adjusted(source).sty { - ty::TyAdt(adt_def, _) => - adt_def.variants[0].index_of_field_named(name.node), - ref ty => - span_bug!(expr.span, "field of non-ADT: {:?}", ty), + ty::TyAdt(adt_def, _) => adt_def.variants[0].index_of_field_named(name.node), + ref ty => span_bug!(expr.span, "field of non-ADT: {:?}", ty), }; - let index = index.unwrap_or_else(|| { - span_bug!( - expr.span, - "no index found for field `{}`", - name.node) - }); - ExprKind::Field { lhs: source.to_ref(), name: Field::new(index) } + let index = + index.unwrap_or_else(|| { + span_bug!(expr.span, "no index found for field `{}`", 
name.node) + }); + ExprKind::Field { + lhs: source.to_ref(), + name: Field::new(index), + } + } + hir::ExprTupField(ref source, index) => { + ExprKind::Field { + lhs: source.to_ref(), + name: Field::new(index.node as usize), + } } - hir::ExprTupField(ref source, index) => - ExprKind::Field { lhs: source.to_ref(), - name: Field::new(index.node as usize) }, hir::ExprCast(ref source, _) => { // Check to see if this cast is a "coercion cast", where the cast is actually done // using a coercion (or is a no-op). @@ -604,17 +655,15 @@ fn make_mirror_unadjusted<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, ExprKind::Cast { source: source.to_ref() } } } - hir::ExprType(ref source, _) => - return source.make_mirror(cx), - hir::ExprBox(ref value) => + hir::ExprType(ref source, _) => return source.make_mirror(cx), + hir::ExprBox(ref value) => { ExprKind::Box { value: value.to_ref(), - value_extents: cx.tcx.region_maps.node_extent(value.id) - }, - hir::ExprArray(ref fields) => - ExprKind::Vec { fields: fields.to_ref() }, - hir::ExprTup(ref fields) => - ExprKind::Tuple { fields: fields.to_ref() }, + value_extents: cx.tcx.region_maps.node_extent(value.id), + } + } + hir::ExprArray(ref fields) => ExprKind::Vec { fields: fields.to_ref() }, + hir::ExprTup(ref fields) => ExprKind::Tuple { fields: fields.to_ref() }, }; Expr { @@ -651,8 +700,7 @@ fn to_borrow_kind(m: hir::Mutability) -> BorrowKind { } } -fn convert_arm<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, - arm: &'tcx hir::Arm) -> Arm<'tcx> { +fn convert_arm<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, arm: &'tcx hir::Arm) -> Arm<'tcx> { Arm { patterns: arm.pats.iter().map(|p| Pattern::from_hir(cx.tcx, p)).collect(), guard: arm.guard.to_ref(), @@ -661,44 +709,51 @@ fn convert_arm<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, } fn convert_path_expr<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, - expr: &'tcx hir::Expr) + expr: &'tcx hir::Expr, + def: Def) -> ExprKind<'tcx> { - let substs = cx.tcx.tables().node_id_item_substs(expr.id) + let substs = cx.tcx + .tables() + .node_id_item_substs(expr.id) .unwrap_or_else(|| cx.tcx.intern_substs(&[])); - let def = cx.tcx.expect_def(expr.id); let def_id = match def { // A regular function, constructor function or a constant. - Def::Fn(def_id) | Def::Method(def_id) | + Def::Fn(def_id) | + Def::Method(def_id) | Def::StructCtor(def_id, CtorKind::Fn) | Def::VariantCtor(def_id, CtorKind::Fn) | - Def::Const(def_id) | Def::AssociatedConst(def_id) => def_id, + Def::Const(def_id) | + Def::AssociatedConst(def_id) => def_id, Def::StructCtor(def_id, CtorKind::Const) | Def::VariantCtor(def_id, CtorKind::Const) => { match cx.tcx.tables().node_id_to_type(expr.id).sty { // A unit struct/variant which is used as a value. // We return a completely different ExprKind here to account for this special case. - ty::TyAdt(adt_def, substs) => return ExprKind::Adt { - adt_def: adt_def, - variant_index: adt_def.variant_index_with_id(def_id), - substs: substs, - fields: vec![], - base: None, - }, - ref sty => bug!("unexpected sty: {:?}", sty) + ty::TyAdt(adt_def, substs) => { + return ExprKind::Adt { + adt_def: adt_def, + variant_index: adt_def.variant_index_with_id(def_id), + substs: substs, + fields: vec![], + base: None, + } + } + ref sty => bug!("unexpected sty: {:?}", sty), } } - Def::Static(node_id, _) => return ExprKind::StaticRef { - id: node_id, - }, + Def::Static(node_id, _) => return ExprKind::StaticRef { id: node_id }, Def::Local(..) | Def::Upvar(..) 
=> return convert_var(cx, expr, def), _ => span_bug!(expr.span, "def `{:?}` not yet implemented", def), }; ExprKind::Literal { - literal: Literal::Item { def_id: def_id, substs: substs } + literal: Literal::Item { + def_id: def_id, + substs: substs, + }, } } @@ -711,20 +766,21 @@ fn convert_var<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, match def { Def::Local(def_id) => { let node_id = cx.tcx.map.as_local_node_id(def_id).unwrap(); - ExprKind::VarRef { - id: node_id, - } + ExprKind::VarRef { id: node_id } } Def::Upvar(def_id, index, closure_expr_id) => { let id_var = cx.tcx.map.as_local_node_id(def_id).unwrap(); - debug!("convert_var(upvar({:?}, {:?}, {:?}))", id_var, index, closure_expr_id); + debug!("convert_var(upvar({:?}, {:?}, {:?}))", + id_var, + index, + closure_expr_id); let var_ty = cx.tcx.tables().node_id_to_type(id_var); let body_id = match cx.tcx.map.find(closure_expr_id) { Some(map::NodeExpr(expr)) => { match expr.node { - hir::ExprClosure(.., ref body, _) => body.id, + hir::ExprClosure(.., body_id, _) => body_id.node_id(), _ => { span_bug!(expr.span, "closure expr is not a closure expr"); } @@ -749,41 +805,45 @@ fn convert_var<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, let self_expr = match cx.tcx.closure_kind(cx.tcx.map.local_def_id(closure_expr_id)) { ty::ClosureKind::Fn => { - let ref_closure_ty = - cx.tcx.mk_ref(region, - ty::TypeAndMut { ty: closure_ty, - mutbl: hir::MutImmutable }); + let ref_closure_ty = cx.tcx.mk_ref(region, + ty::TypeAndMut { + ty: closure_ty, + mutbl: hir::MutImmutable, + }); Expr { ty: closure_ty, temp_lifetime: temp_lifetime, span: expr.span, kind: ExprKind::Deref { arg: Expr { - ty: ref_closure_ty, - temp_lifetime: temp_lifetime, - span: expr.span, - kind: ExprKind::SelfRef - }.to_ref() - } + ty: ref_closure_ty, + temp_lifetime: temp_lifetime, + span: expr.span, + kind: ExprKind::SelfRef, + } + .to_ref(), + }, } } ty::ClosureKind::FnMut => { - let ref_closure_ty = - cx.tcx.mk_ref(region, - ty::TypeAndMut { ty: closure_ty, - mutbl: hir::MutMutable }); + let ref_closure_ty = cx.tcx.mk_ref(region, + ty::TypeAndMut { + ty: closure_ty, + mutbl: hir::MutMutable, + }); Expr { ty: closure_ty, temp_lifetime: temp_lifetime, span: expr.span, kind: ExprKind::Deref { arg: Expr { - ty: ref_closure_ty, - temp_lifetime: temp_lifetime, - span: expr.span, - kind: ExprKind::SelfRef - }.to_ref() - } + ty: ref_closure_ty, + temp_lifetime: temp_lifetime, + span: expr.span, + kind: ExprKind::SelfRef, + } + .to_ref(), + }, } } ty::ClosureKind::FnOnce => { @@ -811,10 +871,7 @@ fn convert_var<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, let upvar_capture = match cx.tcx.tables().upvar_capture(upvar_id) { Some(c) => c, None => { - span_bug!( - expr.span, - "no upvar_capture for {:?}", - upvar_id); + span_bug!(expr.span, "no upvar_capture for {:?}", upvar_id); } }; match upvar_capture { @@ -822,15 +879,16 @@ fn convert_var<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, ty::UpvarCapture::ByRef(borrow) => { ExprKind::Deref { arg: Expr { - temp_lifetime: temp_lifetime, - ty: cx.tcx.mk_ref(borrow.region, - ty::TypeAndMut { - ty: var_ty, - mutbl: borrow.kind.to_mutbl_lossy() - }), - span: expr.span, - kind: field_kind, - }.to_ref() + temp_lifetime: temp_lifetime, + ty: cx.tcx.mk_ref(borrow.region, + ty::TypeAndMut { + ty: var_ty, + mutbl: borrow.kind.to_mutbl_lossy(), + }), + span: expr.span, + kind: field_kind, + } + .to_ref(), } } } @@ -882,30 +940,31 @@ fn overloaded_operator<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, // the arguments, unfortunately, do not, so if this is a 
ByRef // operator, we have to gin up the autorefs (but by value is easy) match pass_args { - PassArgs::ByValue => { - argrefs.extend(args.iter().map(|arg| arg.to_ref())) - } + PassArgs::ByValue => argrefs.extend(args.iter().map(|arg| arg.to_ref())), PassArgs::ByRef => { let region = cx.tcx.node_scope_region(expr.id); let temp_lifetime = cx.tcx.region_maps.temporary_scope(expr.id); - argrefs.extend( - args.iter() - .map(|arg| { - let arg_ty = cx.tcx.tables().expr_ty_adjusted(arg); - let adjusted_ty = - cx.tcx.mk_ref(region, - ty::TypeAndMut { ty: arg_ty, - mutbl: hir::MutImmutable }); - Expr { + argrefs.extend(args.iter() + .map(|arg| { + let arg_ty = cx.tcx.tables().expr_ty_adjusted(arg); + let adjusted_ty = cx.tcx.mk_ref(region, + ty::TypeAndMut { + ty: arg_ty, + mutbl: hir::MutImmutable, + }); + Expr { temp_lifetime: temp_lifetime, ty: adjusted_ty, span: expr.span, - kind: ExprKind::Borrow { region: region, - borrow_kind: BorrowKind::Shared, - arg: arg.to_ref() } - }.to_ref() - })) + kind: ExprKind::Borrow { + region: region, + borrow_kind: BorrowKind::Shared, + arg: arg.to_ref(), + }, + } + .to_ref() + })) } } @@ -969,9 +1028,7 @@ fn capture_freevar<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, kind: convert_var(cx, closure_expr, freevar.def), }; match upvar_capture { - ty::UpvarCapture::ByValue => { - captured_var.to_ref() - } + ty::UpvarCapture::ByValue => captured_var.to_ref(), ty::UpvarCapture::ByRef(upvar_borrow) => { let borrow_kind = match upvar_borrow.kind { ty::BorrowKind::ImmBorrow => BorrowKind::Shared, @@ -979,34 +1036,30 @@ fn capture_freevar<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, ty::BorrowKind::MutBorrow => BorrowKind::Mut, }; Expr { - temp_lifetime: temp_lifetime, - ty: freevar_ty, - span: closure_expr.span, - kind: ExprKind::Borrow { region: upvar_borrow.region, - borrow_kind: borrow_kind, - arg: captured_var.to_ref() } - }.to_ref() + temp_lifetime: temp_lifetime, + ty: freevar_ty, + span: closure_expr.span, + kind: ExprKind::Borrow { + region: upvar_borrow.region, + borrow_kind: borrow_kind, + arg: captured_var.to_ref(), + }, + } + .to_ref() } } } -fn loop_label<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>, - expr: &'tcx hir::Expr) -> CodeExtent { - match cx.tcx.expect_def(expr.id) { - Def::Label(loop_id) => cx.tcx.region_maps.node_extent(loop_id), - d => span_bug!(expr.span, "loop scope resolved to {:?}", d), - } -} - /// Converts a list of named fields (i.e. for struct-like struct/enum ADTs) into FieldExprRef. -fn field_refs<'tcx>(variant: VariantDef<'tcx>, +fn field_refs<'tcx>(variant: &'tcx VariantDef, fields: &'tcx [hir::Field]) - -> Vec> -{ + -> Vec> { fields.iter() - .map(|field| FieldExprRef { - name: Field::new(variant.index_of_field_named(field.name.node).unwrap()), - expr: field.expr.to_ref(), - }) - .collect() + .map(|field| { + FieldExprRef { + name: Field::new(variant.index_of_field_named(field.name.node).unwrap()), + expr: field.expr.to_ref(), + } + }) + .collect() } diff --git a/src/librustc_mir/hair/cx/mod.rs b/src/librustc_mir/hair/cx/mod.rs index 678db1e544..7d111fccd0 100644 --- a/src/librustc_mir/hair/cx/mod.rs +++ b/src/librustc_mir/hair/cx/mod.rs @@ -8,15 +8,13 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -/*! - * This module contains the code to convert from the wacky tcx data - * structures into the hair. The `builder` is generally ignorant of - * the tcx etc, and instead goes through the `Cx` for most of its - * work. - */ +//! 
This module contains the code to convert from the wacky tcx data +//! structures into the hair. The `builder` is generally ignorant of +//! the tcx etc, and instead goes through the `Cx` for most of its +//! work. +//! use hair::*; -use rustc::mir::*; use rustc::mir::transform::MirSource; use rustc::middle::const_val::ConstVal; @@ -24,41 +22,34 @@ use rustc_const_eval as const_eval; use rustc_data_structures::indexed_vec::Idx; use rustc::dep_graph::DepNode; use rustc::hir::def_id::DefId; -use rustc::hir::intravisit::FnKind; use rustc::hir::map::blocks::FnLikeNode; use rustc::infer::InferCtxt; use rustc::ty::subst::Subst; use rustc::ty::{self, Ty, TyCtxt}; -use syntax::parse::token; +use syntax::symbol::{Symbol, InternedString}; use rustc::hir; use rustc_const_math::{ConstInt, ConstUsize}; #[derive(Copy, Clone)] -pub struct Cx<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { +pub struct Cx<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> { tcx: TyCtxt<'a, 'gcx, 'tcx>, infcx: &'a InferCtxt<'a, 'gcx, 'tcx>, constness: hir::Constness, /// True if this constant/function needs overflow checks. - check_overflow: bool + check_overflow: bool, } impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { - pub fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>, - src: MirSource) - -> Cx<'a, 'gcx, 'tcx> { + pub fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>, src: MirSource) -> Cx<'a, 'gcx, 'tcx> { let constness = match src { MirSource::Const(_) | MirSource::Static(..) => hir::Constness::Const, MirSource::Fn(id) => { let fn_like = FnLikeNode::from_node(infcx.tcx.map.get(id)); - match fn_like.map(|f| f.kind()) { - Some(FnKind::ItemFn(_, _, _, c, ..)) => c, - Some(FnKind::Method(_, m, ..)) => m.constness, - _ => hir::Constness::NotConst - } + fn_like.map_or(hir::Constness::NotConst, |f| f.constness()) } - MirSource::Promoted(..) => bug!() + MirSource::Promoted(..) => bug!(), }; let src_node_id = src.item_id(); @@ -76,13 +67,16 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { // Some functions always have overflow checks enabled, // however, they may not get codegen'd, depending on // the settings for the crate they are translated in. - let mut check_overflow = attrs.iter().any(|item| { - item.check_name("rustc_inherit_overflow_checks") - }); + let mut check_overflow = attrs.iter() + .any(|item| item.check_name("rustc_inherit_overflow_checks")); // Respect -Z force-overflow-checks=on and -C debug-assertions. - check_overflow |= infcx.tcx.sess.opts.debugging_opts.force_overflow_checks - .unwrap_or(infcx.tcx.sess.opts.debug_assertions); + check_overflow |= infcx.tcx + .sess + .opts + .debugging_opts + .force_overflow_checks + .unwrap_or(infcx.tcx.sess.opts.debug_assertions); // Constants and const fn's always need overflow checks. 
check_overflow |= constness == hir::Constness::Const; @@ -91,7 +85,7 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { tcx: infcx.tcx, infcx: infcx, constness: constness, - check_overflow: check_overflow + check_overflow: check_overflow, } } } @@ -108,7 +102,7 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { pub fn usize_literal(&mut self, value: u64) -> Literal<'tcx> { match ConstUsize::new(value, self.tcx.sess.target.uint_type) { - Ok(val) => Literal::Value { value: ConstVal::Integral(ConstInt::Usize(val))}, + Ok(val) => Literal::Value { value: ConstVal::Integral(ConstInt::Usize(val)) }, Err(_) => bug!("usize literal out of range for target"), } } @@ -121,7 +115,7 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { self.tcx.mk_nil() } - pub fn str_literal(&mut self, value: token::InternedString) -> Literal<'tcx> { + pub fn str_literal(&mut self, value: InternedString) -> Literal<'tcx> { Literal::Value { value: ConstVal::Str(value) } } @@ -134,9 +128,7 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { } pub fn const_eval_literal(&mut self, e: &hir::Expr) -> Literal<'tcx> { - Literal::Value { - value: const_eval::eval_const_expr(self.tcx.global_tcx(), e) - } + Literal::Value { value: const_eval::eval_const_expr(self.tcx.global_tcx(), e) } } pub fn trait_method(&mut self, @@ -145,33 +137,28 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { self_ty: Ty<'tcx>, params: &[Ty<'tcx>]) -> (Ty<'tcx>, Literal<'tcx>) { - let method_name = token::intern(method_name); + let method_name = Symbol::intern(method_name); let substs = self.tcx.mk_substs_trait(self_ty, params); - for trait_item in self.tcx.trait_items(trait_def_id).iter() { - match *trait_item { - ty::ImplOrTraitItem::MethodTraitItem(ref method) => { - if method.name == method_name { - let method_ty = self.tcx.lookup_item_type(method.def_id); - let method_ty = method_ty.ty.subst(self.tcx, substs); - return (method_ty, Literal::Item { - def_id: method.def_id, + for item in self.tcx.associated_items(trait_def_id) { + if item.kind == ty::AssociatedKind::Method && item.name == method_name { + let method_ty = self.tcx.item_type(item.def_id); + let method_ty = method_ty.subst(self.tcx, substs); + return (method_ty, + Literal::Item { + def_id: item.def_id, substs: substs, }); - } - } - ty::ImplOrTraitItem::ConstTraitItem(..) | - ty::ImplOrTraitItem::TypeTraitItem(..) 
=> {} } } bug!("found no method `{}` in `{:?}`", method_name, trait_def_id); } - pub fn num_variants(&mut self, adt_def: ty::AdtDef) -> usize { + pub fn num_variants(&mut self, adt_def: &ty::AdtDef) -> usize { adt_def.variants.len() } - pub fn all_fields(&mut self, adt_def: ty::AdtDef, variant_index: usize) -> Vec { + pub fn all_fields(&mut self, adt_def: &ty::AdtDef, variant_index: usize) -> Vec { (0..adt_def.variants[variant_index].fields.len()) .map(Field::new) .collect() @@ -180,7 +167,8 @@ impl<'a, 'gcx, 'tcx> Cx<'a, 'gcx, 'tcx> { pub fn needs_drop(&mut self, ty: Ty<'tcx>) -> bool { let ty = self.tcx.lift_to_global(&ty).unwrap_or_else(|| { bug!("MIR: Cx::needs_drop({}) got \ - type with inference types/regions", ty); + type with inference types/regions", + ty); }); self.tcx.type_needs_drop_given_env(ty, &self.infcx.parameter_environment) } diff --git a/src/librustc_mir/hair/cx/to_ref.rs b/src/librustc_mir/hair/cx/to_ref.rs index 63dbde4743..6930a959d6 100644 --- a/src/librustc_mir/hair/cx/to_ref.rs +++ b/src/librustc_mir/hair/cx/to_ref.rs @@ -18,7 +18,7 @@ pub trait ToRef { fn to_ref(self) -> Self::Output; } -impl<'a,'tcx:'a> ToRef for &'tcx hir::Expr { +impl<'a, 'tcx: 'a> ToRef for &'tcx hir::Expr { type Output = ExprRef<'tcx>; fn to_ref(self) -> ExprRef<'tcx> { @@ -26,7 +26,7 @@ impl<'a,'tcx:'a> ToRef for &'tcx hir::Expr { } } -impl<'a,'tcx:'a> ToRef for &'tcx P { +impl<'a, 'tcx: 'a> ToRef for &'tcx P { type Output = ExprRef<'tcx>; fn to_ref(self) -> ExprRef<'tcx> { @@ -34,7 +34,7 @@ impl<'a,'tcx:'a> ToRef for &'tcx P { } } -impl<'a,'tcx:'a> ToRef for Expr<'tcx> { +impl<'a, 'tcx: 'a> ToRef for Expr<'tcx> { type Output = ExprRef<'tcx>; fn to_ref(self) -> ExprRef<'tcx> { @@ -42,8 +42,8 @@ impl<'a,'tcx:'a> ToRef for Expr<'tcx> { } } -impl<'a,'tcx:'a,T,U> ToRef for &'tcx Option - where &'tcx T: ToRef +impl<'a, 'tcx: 'a, T, U> ToRef for &'tcx Option + where &'tcx T: ToRef { type Output = Option; @@ -52,8 +52,8 @@ impl<'a,'tcx:'a,T,U> ToRef for &'tcx Option } } -impl<'a,'tcx:'a,T,U> ToRef for &'tcx Vec - where &'tcx T: ToRef +impl<'a, 'tcx: 'a, T, U> ToRef for &'tcx Vec + where &'tcx T: ToRef { type Output = Vec; @@ -62,8 +62,8 @@ impl<'a,'tcx:'a,T,U> ToRef for &'tcx Vec } } -impl<'a,'tcx:'a,T,U> ToRef for &'tcx P<[T]> - where &'tcx T: ToRef +impl<'a, 'tcx: 'a, T, U> ToRef for &'tcx P<[T]> + where &'tcx T: ToRef { type Output = Vec; diff --git a/src/librustc_mir/hair/mod.rs b/src/librustc_mir/hair/mod.rs index e211334e54..22c07f1903 100644 --- a/src/librustc_mir/hair/mod.rs +++ b/src/librustc_mir/hair/mod.rs @@ -202,6 +202,7 @@ pub enum ExprKind<'tcx> { }, Break { label: Option, + value: Option>, }, Continue { label: Option, @@ -220,7 +221,7 @@ pub enum ExprKind<'tcx> { fields: Vec>, }, Adt { - adt_def: AdtDef<'tcx>, + adt_def: &'tcx AdtDef, variant_index: usize, substs: &'tcx Substs<'tcx>, fields: Vec>, diff --git a/src/librustc_mir/lib.rs b/src/librustc_mir/lib.rs index aa56daf888..617bd81d96 100644 --- a/src/librustc_mir/lib.rs +++ b/src/librustc_mir/lib.rs @@ -22,11 +22,10 @@ Rust MIR: a lowered representation of Rust. Also: an experiment! 
#![feature(associated_consts)] #![feature(box_patterns)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] +#![cfg_attr(stage0, feature(item_like_imports))] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #[macro_use] extern crate log; extern crate graphviz as dot; diff --git a/src/librustc_mir/mir_map.rs b/src/librustc_mir/mir_map.rs index 0ffc59fe6b..e2a516edbc 100644 --- a/src/librustc_mir/mir_map.rs +++ b/src/librustc_mir/mir_map.rs @@ -30,16 +30,17 @@ use rustc::traits::Reveal; use rustc::ty::{self, Ty, TyCtxt}; use rustc::ty::subst::Substs; use rustc::hir; -use rustc::hir::intravisit::{self, FnKind, Visitor}; +use rustc::hir::intravisit::{self, FnKind, Visitor, NestedVisitorMap}; +use syntax::abi::Abi; use syntax::ast; use syntax_pos::Span; use std::mem; pub fn build_mir_for_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { - tcx.visit_all_items_in_krate(DepNode::Mir, &mut BuildMir { + tcx.visit_all_item_likes_in_krate(DepNode::Mir, &mut BuildMir { tcx: tcx - }); + }.as_deep_visitor()); } /// A pass to lift all the types and substitutions in a Mir @@ -102,11 +103,11 @@ impl<'a, 'gcx, 'tcx> BuildMir<'a, 'gcx> { impl<'a, 'gcx, 'tcx> CxBuilder<'a, 'gcx, 'tcx> { fn build(&'tcx mut self, f: F) - where F: for<'b> FnOnce(Cx<'b, 'gcx, 'tcx>) -> (Mir<'tcx>, build::ScopeAuxiliaryVec) + where F: for<'b> FnOnce(Cx<'b, 'gcx, 'tcx>) -> Mir<'tcx> { let (src, def_id) = (self.src, self.def_id); self.infcx.enter(|infcx| { - let (mut mir, scope_auxiliary) = f(Cx::new(&infcx, src)); + let mut mir = f(Cx::new(&infcx, src)); // Convert the Mir to global types. let tcx = infcx.tcx.global_tcx(); @@ -119,7 +120,7 @@ impl<'a, 'gcx, 'tcx> CxBuilder<'a, 'gcx, 'tcx> { mem::transmute::>(mir) }; - pretty::dump_mir(tcx, "mir_map", &0, src, &mir, Some(&scope_auxiliary)); + pretty::dump_mir(tcx, "mir_map", &0, src, &mir); let mir = tcx.alloc_mir(mir); assert!(tcx.mir_map.borrow_mut().insert(def_id, mir).is_none()); @@ -143,6 +144,10 @@ impl<'a, 'gcx> BuildMir<'a, 'gcx> { } impl<'a, 'tcx> Visitor<'tcx> for BuildMir<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + // Const and static items. fn visit_item(&mut self, item: &'tcx hir::Item) { match item.node { @@ -209,7 +214,7 @@ impl<'a, 'tcx> Visitor<'tcx> for BuildMir<'a, 'tcx> { fn visit_fn(&mut self, fk: FnKind<'tcx>, decl: &'tcx hir::FnDecl, - body: &'tcx hir::Block, + body_id: hir::ExprId, span: Span, id: ast::NodeId) { // fetch the fully liberated fn signature (that is, all bound @@ -221,10 +226,11 @@ impl<'a, 'tcx> Visitor<'tcx> for BuildMir<'a, 'tcx> { } }; - let implicit_argument = if let FnKind::Closure(..) = fk { - Some((closure_self_ty(self.tcx, id, body.id), None)) + let (abi, implicit_argument) = if let FnKind::Closure(..) 
= fk { + (Abi::Rust, Some((closure_self_ty(self.tcx, id, body_id.node_id()), None))) } else { - None + let def_id = self.tcx.map.local_def_id(id); + (self.tcx.item_type(def_id).fn_abi(), None) }; let explicit_arguments = @@ -232,15 +238,17 @@ impl<'a, 'tcx> Visitor<'tcx> for BuildMir<'a, 'tcx> { .iter() .enumerate() .map(|(index, arg)| { - (fn_sig.inputs[index], Some(&*arg.pat)) + (fn_sig.inputs()[index], Some(&*arg.pat)) }); + let body = self.tcx.map.expr(body_id); + let arguments = implicit_argument.into_iter().chain(explicit_arguments); self.cx(MirSource::Fn(id)).build(|cx| { - build::construct_fn(cx, id, arguments, fn_sig.output, body) + build::construct_fn(cx, id, arguments, abi, fn_sig.output(), body) }); - intravisit::walk_fn(self, fk, decl, body, span, id); + intravisit::walk_fn(self, fk, decl, body_id, span, id); } } diff --git a/src/librustc_mir/pretty.rs b/src/librustc_mir/pretty.rs index d2fc8aeaa2..e7188d5369 100644 --- a/src/librustc_mir/pretty.rs +++ b/src/librustc_mir/pretty.rs @@ -8,13 +8,12 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use build::{ScopeAuxiliaryVec, ScopeId}; use rustc::hir; use rustc::hir::def_id::DefId; use rustc::mir::*; use rustc::mir::transform::MirSource; use rustc::ty::TyCtxt; -use rustc_data_structures::fnv::FnvHashMap; +use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::indexed_vec::{Idx}; use std::fmt::Display; use std::fs; @@ -43,8 +42,7 @@ pub fn dump_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, pass_name: &str, disambiguator: &Display, src: MirSource, - mir: &Mir<'tcx>, - auxiliary: Option<&ScopeAuxiliaryVec>) { + mir: &Mir<'tcx>) { let filters = match tcx.sess.opts.debugging_opts.dump_mir { None => return, Some(ref filters) => filters, @@ -81,7 +79,7 @@ pub fn dump_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, writeln!(file, "// pass_name = {}", pass_name)?; writeln!(file, "// disambiguator = {}", disambiguator)?; writeln!(file, "")?; - write_mir_fn(tcx, src, mir, &mut file, auxiliary)?; + write_mir_fn(tcx, src, mir, &mut file)?; Ok(()) }); } @@ -106,52 +104,24 @@ pub fn write_mir_pretty<'a, 'b, 'tcx, I>(tcx: TyCtxt<'b, 'tcx, 'tcx>, let id = tcx.map.as_local_node_id(def_id).unwrap(); let src = MirSource::from_node(tcx, id); - write_mir_fn(tcx, src, mir, w, None)?; + write_mir_fn(tcx, src, mir, w)?; for (i, mir) in mir.promoted.iter_enumerated() { writeln!(w, "")?; - write_mir_fn(tcx, MirSource::Promoted(id, i), mir, w, None)?; + write_mir_fn(tcx, MirSource::Promoted(id, i), mir, w)?; } } Ok(()) } -enum Annotation { - EnterScope(ScopeId), - ExitScope(ScopeId), -} - -fn scope_entry_exit_annotations(auxiliary: Option<&ScopeAuxiliaryVec>) - -> FnvHashMap> -{ - // compute scope/entry exit annotations - let mut annotations = FnvHashMap(); - if let Some(auxiliary) = auxiliary { - for (scope_id, auxiliary) in auxiliary.iter_enumerated() { - annotations.entry(auxiliary.dom) - .or_insert(vec![]) - .push(Annotation::EnterScope(scope_id)); - - for &loc in &auxiliary.postdoms { - annotations.entry(loc) - .or_insert(vec![]) - .push(Annotation::ExitScope(scope_id)); - } - } - } - return annotations; -} - pub fn write_mir_fn<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, src: MirSource, mir: &Mir<'tcx>, - w: &mut Write, - auxiliary: Option<&ScopeAuxiliaryVec>) + w: &mut Write) -> io::Result<()> { - let annotations = scope_entry_exit_annotations(auxiliary); write_mir_intro(tcx, src, mir, w)?; for block in mir.basic_blocks().indices() { - write_basic_block(tcx, block, mir, w, &annotations)?; + 
write_basic_block(tcx, block, mir, w)?; if block.index() + 1 != mir.basic_blocks().len() { writeln!(w, "")?; } @@ -165,8 +135,7 @@ pub fn write_mir_fn<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, fn write_basic_block(tcx: TyCtxt, block: BasicBlock, mir: &Mir, - w: &mut Write, - annotations: &FnvHashMap>) + w: &mut Write) -> io::Result<()> { let data = &mir[block]; @@ -176,19 +145,6 @@ fn write_basic_block(tcx: TyCtxt, // List of statements in the middle. let mut current_location = Location { block: block, statement_index: 0 }; for statement in &data.statements { - if let Some(ref annotations) = annotations.get(¤t_location) { - for annotation in annotations.iter() { - match *annotation { - Annotation::EnterScope(id) => - writeln!(w, "{0}{0}// Enter Scope({1})", - INDENT, id.index())?, - Annotation::ExitScope(id) => - writeln!(w, "{0}{0}// Exit Scope({1})", - INDENT, id.index())?, - } - } - } - let indented_mir = format!("{0}{0}{1:?};", INDENT, statement); writeln!(w, "{0:1$} // {2}", indented_mir, @@ -217,7 +173,7 @@ fn comment(tcx: TyCtxt, SourceInfo { span, scope }: SourceInfo) -> String { /// Returns the total number of variables printed. fn write_scope_tree(tcx: TyCtxt, mir: &Mir, - scope_tree: &FnvHashMap>, + scope_tree: &FxHashMap>, w: &mut Write, parent: VisibilityScope, depth: usize) @@ -283,7 +239,7 @@ fn write_mir_intro<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, writeln!(w, " {{")?; // construct a scope tree and write it out - let mut scope_tree: FnvHashMap> = FnvHashMap(); + let mut scope_tree: FxHashMap> = FxHashMap(); for (index, scope_data) in mir.visibility_scopes.iter().enumerate() { if let Some(parent) = scope_data.parent_scope { scope_tree.entry(parent) diff --git a/src/librustc_mir/transform/copy_prop.rs b/src/librustc_mir/transform/copy_prop.rs index 8c8c42a1c7..3cbf8573ba 100644 --- a/src/librustc_mir/transform/copy_prop.rs +++ b/src/librustc_mir/transform/copy_prop.rs @@ -30,7 +30,7 @@ //! future. use def_use::DefUseAnalysis; -use rustc::mir::{Constant, Local, Location, Lvalue, Mir, Operand, Rvalue, StatementKind}; +use rustc::mir::{Constant, Local, LocalKind, Location, Lvalue, Mir, Operand, Rvalue, StatementKind}; use rustc::mir::transform::{MirPass, MirSource, Pass}; use rustc::mir::visit::MutVisitor; use rustc::ty::TyCtxt; @@ -65,11 +65,10 @@ impl<'tcx> MirPass<'tcx> for CopyPropagation { } } - // We only run when the MIR optimization level is at least 1. This avoids messing up debug - // info. - match tcx.sess.opts.debugging_opts.mir_opt_level { - Some(0) | None => return, - _ => {} + // We only run when the MIR optimization level is > 1. + // This avoids a slow pass, and messing up debug info. + if tcx.sess.opts.debugging_opts.mir_opt_level <= 1 { + return; } loop { @@ -123,7 +122,7 @@ impl<'tcx> MirPass<'tcx> for CopyPropagation { local == dest_local => { let maybe_action = match *operand { Operand::Consume(ref src_lvalue) => { - Action::local_copy(&def_use_analysis, src_lvalue) + Action::local_copy(&mir, &def_use_analysis, src_lvalue) } Operand::Constant(ref src_constant) => { Action::constant(src_constant) @@ -160,7 +159,7 @@ enum Action<'tcx> { } impl<'tcx> Action<'tcx> { - fn local_copy(def_use_analysis: &DefUseAnalysis, src_lvalue: &Lvalue<'tcx>) + fn local_copy(mir: &Mir<'tcx>, def_use_analysis: &DefUseAnalysis, src_lvalue: &Lvalue<'tcx>) -> Option> { // The source must be a local. 
let src_local = if let Lvalue::Local(local) = *src_lvalue { @@ -196,7 +195,9 @@ impl<'tcx> Action<'tcx> { // SRC = X; // USE(SRC); let src_def_count = src_use_info.def_count_not_including_drop(); - if src_def_count != 1 { + // allow function arguments to be propagated + if src_def_count > 1 || + (src_def_count == 0 && mir.local_kind(src_local) != LocalKind::Arg) { debug!(" Can't copy-propagate local: {} defs of src", src_use_info.def_count_not_including_drop()); return None diff --git a/src/librustc_mir/transform/deaggregator.rs b/src/librustc_mir/transform/deaggregator.rs index fcdeae6d6c..771f05f7bc 100644 --- a/src/librustc_mir/transform/deaggregator.rs +++ b/src/librustc_mir/transform/deaggregator.rs @@ -23,84 +23,80 @@ impl<'tcx> MirPass<'tcx> for Deaggregator { let node_id = source.item_id(); let node_path = tcx.item_path_str(tcx.map.local_def_id(node_id)); debug!("running on: {:?}", node_path); - // we only run when mir_opt_level > 1 - match tcx.sess.opts.debugging_opts.mir_opt_level { - Some(0) | - Some(1) | - None => { return; }, - _ => {} - }; + // we only run when mir_opt_level > 2 + if tcx.sess.opts.debugging_opts.mir_opt_level <= 2 { + return; + } // Do not trigger on constants. Could be revised in future if let MirSource::Fn(_) = source {} else { return; } // In fact, we might not want to trigger in other cases. // Ex: when we could use SROA. See issue #35259 - let mut curr: usize = 0; for bb in mir.basic_blocks_mut() { - let idx = match get_aggregate_statement_index(curr, &bb.statements) { - Some(idx) => idx, - None => continue, - }; - // do the replacement - debug!("removing statement {:?}", idx); - let src_info = bb.statements[idx].source_info; - let suffix_stmts = bb.statements.split_off(idx+1); - let orig_stmt = bb.statements.pop().unwrap(); - let (lhs, rhs) = match orig_stmt.kind { - StatementKind::Assign(ref lhs, ref rhs) => (lhs, rhs), - _ => span_bug!(src_info.span, "expected assign, not {:?}", orig_stmt), - }; - let (agg_kind, operands) = match rhs { - &Rvalue::Aggregate(ref agg_kind, ref operands) => (agg_kind, operands), - _ => span_bug!(src_info.span, "expected aggregate, not {:?}", rhs), - }; - let (adt_def, variant, substs) = match agg_kind { - &AggregateKind::Adt(adt_def, variant, substs, None) => (adt_def, variant, substs), - _ => span_bug!(src_info.span, "expected struct, not {:?}", rhs), - }; - let n = bb.statements.len(); - bb.statements.reserve(n + operands.len() + suffix_stmts.len()); - for (i, op) in operands.iter().enumerate() { - let ref variant_def = adt_def.variants[variant]; - let ty = variant_def.fields[i].ty(tcx, substs); - let rhs = Rvalue::Use(op.clone()); + let mut curr: usize = 0; + while let Some(idx) = get_aggregate_statement_index(curr, &bb.statements) { + // do the replacement + debug!("removing statement {:?}", idx); + let src_info = bb.statements[idx].source_info; + let suffix_stmts = bb.statements.split_off(idx+1); + let orig_stmt = bb.statements.pop().unwrap(); + let (lhs, rhs) = match orig_stmt.kind { + StatementKind::Assign(ref lhs, ref rhs) => (lhs, rhs), + _ => span_bug!(src_info.span, "expected assign, not {:?}", orig_stmt), + }; + let (agg_kind, operands) = match rhs { + &Rvalue::Aggregate(ref agg_kind, ref operands) => (agg_kind, operands), + _ => span_bug!(src_info.span, "expected aggregate, not {:?}", rhs), + }; + let (adt_def, variant, substs) = match agg_kind { + &AggregateKind::Adt(adt_def, variant, substs, None) + => (adt_def, variant, substs), + _ => span_bug!(src_info.span, "expected struct, not {:?}", rhs), + }; + let 
n = bb.statements.len(); + bb.statements.reserve(n + operands.len() + suffix_stmts.len()); + for (i, op) in operands.iter().enumerate() { + let ref variant_def = adt_def.variants[variant]; + let ty = variant_def.fields[i].ty(tcx, substs); + let rhs = Rvalue::Use(op.clone()); - let lhs_cast = if adt_def.variants.len() > 1 { - Lvalue::Projection(Box::new(LvalueProjection { - base: lhs.clone(), - elem: ProjectionElem::Downcast(adt_def, variant), - })) - } else { - lhs.clone() + let lhs_cast = if adt_def.variants.len() > 1 { + Lvalue::Projection(Box::new(LvalueProjection { + base: lhs.clone(), + elem: ProjectionElem::Downcast(adt_def, variant), + })) + } else { + lhs.clone() + }; + + let lhs_proj = Lvalue::Projection(Box::new(LvalueProjection { + base: lhs_cast, + elem: ProjectionElem::Field(Field::new(i), ty), + })); + let new_statement = Statement { + source_info: src_info, + kind: StatementKind::Assign(lhs_proj, rhs), + }; + debug!("inserting: {:?} @ {:?}", new_statement, idx + i); + bb.statements.push(new_statement); + } + + // if the aggregate was an enum, we need to set the discriminant + if adt_def.variants.len() > 1 { + let set_discriminant = Statement { + kind: StatementKind::SetDiscriminant { + lvalue: lhs.clone(), + variant_index: variant, + }, + source_info: src_info, + }; + bb.statements.push(set_discriminant); }; - let lhs_proj = Lvalue::Projection(Box::new(LvalueProjection { - base: lhs_cast, - elem: ProjectionElem::Field(Field::new(i), ty), - })); - let new_statement = Statement { - source_info: src_info, - kind: StatementKind::Assign(lhs_proj, rhs), - }; - debug!("inserting: {:?} @ {:?}", new_statement, idx + i); - bb.statements.push(new_statement); + curr = bb.statements.len(); + bb.statements.extend(suffix_stmts); } - - // if the aggregate was an enum, we need to set the discriminant - if adt_def.variants.len() > 1 { - let set_discriminant = Statement { - kind: StatementKind::SetDiscriminant { - lvalue: lhs.clone(), - variant_index: variant, - }, - source_info: src_info, - }; - bb.statements.push(set_discriminant); - }; - - curr = bb.statements.len(); - bb.statements.extend(suffix_stmts); } } } diff --git a/src/librustc_mir/transform/dump_mir.rs b/src/librustc_mir/transform/dump_mir.rs index b8fd9fb12a..035f33de91 100644 --- a/src/librustc_mir/transform/dump_mir.rs +++ b/src/librustc_mir/transform/dump_mir.rs @@ -64,8 +64,7 @@ impl<'tcx> MirPassHook<'tcx> for DumpMir { is_after: is_after }, src, - mir, - None + mir ); } } diff --git a/src/librustc_mir/transform/instcombine.rs b/src/librustc_mir/transform/instcombine.rs index a01724d6d0..3f6abb31fe 100644 --- a/src/librustc_mir/transform/instcombine.rs +++ b/src/librustc_mir/transform/instcombine.rs @@ -14,7 +14,7 @@ use rustc::mir::{Location, Lvalue, Mir, Operand, ProjectionElem, Rvalue, Local}; use rustc::mir::transform::{MirPass, MirSource, Pass}; use rustc::mir::visit::{MutVisitor, Visitor}; use rustc::ty::TyCtxt; -use rustc::util::nodemap::FnvHashSet; +use rustc::util::nodemap::FxHashSet; use rustc_data_structures::indexed_vec::Idx; use std::mem; @@ -38,7 +38,7 @@ impl<'tcx> MirPass<'tcx> for InstCombine { _: MirSource, mir: &mut Mir<'tcx>) { // We only run when optimizing MIR (at any level). 
- if tcx.sess.opts.debugging_opts.mir_opt_level == Some(0) { + if tcx.sess.opts.debugging_opts.mir_opt_level == 0 { return } @@ -107,5 +107,5 @@ impl<'b, 'a, 'tcx> Visitor<'tcx> for OptimizationFinder<'b, 'a, 'tcx> { #[derive(Default)] struct OptimizationList { - and_stars: FnvHashSet, + and_stars: FxHashSet, } diff --git a/src/librustc_mir/transform/promote_consts.rs b/src/librustc_mir/transform/promote_consts.rs index 41698574e0..b64ca58bb3 100644 --- a/src/librustc_mir/transform/promote_consts.rs +++ b/src/librustc_mir/transform/promote_consts.rs @@ -66,6 +66,7 @@ impl TempState { /// A "root candidate" for promotion, which will become the /// returned value in a promoted MIR, unless it's a subset /// of a larger candidate. +#[derive(Debug)] pub enum Candidate { /// Borrow of a constant temporary. Ref(Location), @@ -190,15 +191,12 @@ impl<'a, 'tcx> Promoter<'a, 'tcx> { /// promoted MIR, recursing through temps. fn promote_temp(&mut self, temp: Local) -> Local { let old_keep_original = self.keep_original; - let (bb, stmt_idx) = match self.temps[temp] { - TempState::Defined { - location: Location { block, statement_index }, - uses - } if uses > 0 => { + let loc = match self.temps[temp] { + TempState::Defined { location, uses } if uses > 0 => { if uses > 1 { self.keep_original = true; } - (block, statement_index) + location } state => { span_bug!(self.promoted.span, "{:?} not promotable: {:?}", @@ -209,91 +207,82 @@ impl<'a, 'tcx> Promoter<'a, 'tcx> { self.temps[temp] = TempState::PromotedOut; } - let no_stmts = self.source[bb].statements.len(); + let no_stmts = self.source[loc.block].statements.len(); + let new_temp = self.promoted.local_decls.push( + LocalDecl::new_temp(self.source.local_decls[temp].ty)); + + debug!("promote({:?} @ {:?}/{:?}, {:?})", + temp, loc, no_stmts, self.keep_original); // First, take the Rvalue or Call out of the source MIR, // or duplicate it, depending on keep_original. - let (mut rvalue, mut call) = (None, None); - let source_info = if stmt_idx < no_stmts { - let statement = &mut self.source[bb].statements[stmt_idx]; - let rhs = match statement.kind { - StatementKind::Assign(_, ref mut rhs) => rhs, - _ => { - span_bug!(statement.source_info.span, "{:?} is not an assignment", - statement); + if loc.statement_index < no_stmts { + let (mut rvalue, source_info) = { + let statement = &mut self.source[loc.block].statements[loc.statement_index]; + let rhs = match statement.kind { + StatementKind::Assign(_, ref mut rhs) => rhs, + _ => { + span_bug!(statement.source_info.span, "{:?} is not an assignment", + statement); + } + }; + + (if self.keep_original { + rhs.clone() + } else { + let unit = Rvalue::Aggregate(AggregateKind::Tuple, vec![]); + mem::replace(rhs, unit) + }, statement.source_info) + }; + + self.visit_rvalue(&mut rvalue, loc); + self.assign(new_temp, rvalue, source_info.span); + } else { + let terminator = if self.keep_original { + self.source[loc.block].terminator().clone() + } else { + let terminator = self.source[loc.block].terminator_mut(); + let target = match terminator.kind { + TerminatorKind::Call { destination: Some((_, target)), .. 
} => target, + ref kind => { + span_bug!(terminator.source_info.span, "{:?} not promotable", kind); + } + }; + Terminator { + source_info: terminator.source_info, + kind: mem::replace(&mut terminator.kind, TerminatorKind::Goto { + target: target + }) } }; - if self.keep_original { - rvalue = Some(rhs.clone()); - } else { - let unit = Rvalue::Aggregate(AggregateKind::Tuple, vec![]); - rvalue = Some(mem::replace(rhs, unit)); - } - statement.source_info - } else if self.keep_original { - let terminator = self.source[bb].terminator().clone(); - call = Some(terminator.kind); - terminator.source_info - } else { - let terminator = self.source[bb].terminator_mut(); - let target = match terminator.kind { - TerminatorKind::Call { - destination: ref mut dest @ Some(_), - ref mut cleanup, .. - } => { - // No cleanup necessary. - cleanup.take(); - // We'll put a new destination in later. - dest.take().unwrap().1 + match terminator.kind { + TerminatorKind::Call { mut func, mut args, .. } => { + self.visit_operand(&mut func, loc); + for arg in &mut args { + self.visit_operand(arg, loc); + } + + let last = self.promoted.basic_blocks().last().unwrap(); + let new_target = self.new_block(); + + *self.promoted[last].terminator_mut() = Terminator { + kind: TerminatorKind::Call { + func: func, + args: args, + cleanup: None, + destination: Some((Lvalue::Local(new_temp), new_target)) + }, + ..terminator + }; } ref kind => { span_bug!(terminator.source_info.span, "{:?} not promotable", kind); } }; - call = Some(mem::replace(&mut terminator.kind, TerminatorKind::Goto { - target: target - })); - terminator.source_info }; - // Then, recurse for components in the Rvalue or Call. - if stmt_idx < no_stmts { - self.visit_rvalue(rvalue.as_mut().unwrap(), Location { - block: bb, - statement_index: stmt_idx - }); - } else { - self.visit_terminator_kind(bb, call.as_mut().unwrap(), Location { - block: bb, - statement_index: no_stmts - }); - } - - let new_temp = self.promoted.local_decls.push( - LocalDecl::new_temp(self.source.local_decls[temp].ty)); - - // Inject the Rvalue or Call into the promoted MIR. - if stmt_idx < no_stmts { - self.assign(new_temp, rvalue.unwrap(), source_info.span); - } else { - let last = self.promoted.basic_blocks().last().unwrap(); - let new_target = self.new_block(); - let mut call = call.unwrap(); - match call { - TerminatorKind::Call { ref mut destination, ..} => { - *destination = Some((Lvalue::Local(new_temp), new_target)); - } - _ => bug!() - } - let terminator = self.promoted[last].terminator_mut(); - terminator.source_info.span = source_info.span; - terminator.kind = call; - } - - // Restore the old duplication state. self.keep_original = old_keep_original; - new_temp } @@ -355,6 +344,7 @@ pub fn promote_candidates<'a, 'tcx>(mir: &mut Mir<'tcx>, mut temps: IndexVec, candidates: Vec) { // Visit candidates in reverse, in case they're nested. 
+ debug!("promote_candidates({:?})", candidates); for candidate in candidates.into_iter().rev() { let (span, ty) = match candidate { Candidate::Ref(Location { block: bb, statement_index: stmt_idx }) => { diff --git a/src/librustc_mir/transform/qualify_consts.rs b/src/librustc_mir/transform/qualify_consts.rs index b33a7060e3..d144651fb7 100644 --- a/src/librustc_mir/transform/qualify_consts.rs +++ b/src/librustc_mir/transform/qualify_consts.rs @@ -19,7 +19,6 @@ use rustc_data_structures::indexed_vec::{IndexVec, Idx}; use rustc::hir; use rustc::hir::map as hir_map; use rustc::hir::def_id::DefId; -use rustc::hir::intravisit::FnKind; use rustc::hir::map::blocks::FnLikeNode; use rustc::traits::{self, Reveal}; use rustc::ty::{self, TyCtxt, Ty}; @@ -29,6 +28,7 @@ use rustc::mir::traversal::ReversePostorder; use rustc::mir::transform::{Pass, MirPass, MirSource}; use rustc::mir::visit::{LvalueContext, Visitor}; use rustc::util::nodemap::DefIdMap; +use rustc::middle::lang_items; use syntax::abi::Abi; use syntax::feature_gate::UnstableFeatures; use syntax_pos::Span; @@ -116,15 +116,10 @@ impl fmt::Display for Mode { pub fn is_const_fn(tcx: TyCtxt, def_id: DefId) -> bool { if let Some(node_id) = tcx.map.as_local_node_id(def_id) { - let fn_like = FnLikeNode::from_node(tcx.map.get(node_id)); - match fn_like.map(|f| f.kind()) { - Some(FnKind::ItemFn(_, _, _, c, ..)) => { - c == hir::Constness::Const - } - Some(FnKind::Method(_, m, ..)) => { - m.constness == hir::Constness::Const - } - _ => false + if let Some(fn_like) = FnLikeNode::from_node(tcx.map.get(node_id)) { + fn_like.constness() == hir::Constness::Const + } else { + false } } else { tcx.sess.cstore.is_const_fn(def_id) @@ -277,8 +272,12 @@ impl<'a, 'tcx> Qualifier<'a, 'tcx, 'tcx> { .and_then(|impl_node_id| self.tcx.map.find(impl_node_id)) .map(|node| { if let hir_map::NodeItem(item) = node { - if let hir::ItemImpl(_, _, _, _, _, ref methods) = item.node { - span = methods.first().map(|method| method.span); + if let hir::ItemImpl(.., ref impl_item_refs) = item.node { + span = impl_item_refs.first() + .map(|iiref| { + self.tcx.map.impl_item(iiref.id) + .span + }); } } }); @@ -994,9 +993,9 @@ impl<'tcx> MirPass<'tcx> for QualifyAndPromoteConstants { Entry::Vacant(entry) => { // Guard against `const` recursion. 
entry.insert(Qualif::RECURSIVE); + Mode::Const } } - Mode::Const } MirSource::Static(_, hir::MutImmutable) => Mode::Static, MirSource::Static(_, hir::MutMutable) => Mode::StaticMut, @@ -1042,7 +1041,9 @@ impl<'tcx> MirPass<'tcx> for QualifyAndPromoteConstants { tcx.infer_ctxt(None, None, Reveal::NotSpecializable).enter(|infcx| { let cause = traits::ObligationCause::new(mir.span, id, traits::SharedStatic); let mut fulfillment_cx = traits::FulfillmentContext::new(); - fulfillment_cx.register_builtin_bound(&infcx, ty, ty::BoundSync, cause); + fulfillment_cx.register_bound(&infcx, ty, + tcx.require_lang_item(lang_items::SyncTraitLangItem), + cause); if let Err(err) = fulfillment_cx.select_all_or_error(&infcx) { infcx.report_fulfillment_errors(&err); } diff --git a/src/librustc_mir/transform/type_check.rs b/src/librustc_mir/transform/type_check.rs index 9d3afe541c..4c86331a52 100644 --- a/src/librustc_mir/transform/type_check.rs +++ b/src/librustc_mir/transform/type_check.rs @@ -127,7 +127,7 @@ impl<'a, 'b, 'gcx, 'tcx> TypeVerifier<'a, 'b, 'gcx, 'tcx> { match *lvalue { Lvalue::Local(index) => LvalueTy::Ty { ty: self.mir.local_decls[index].ty }, Lvalue::Static(def_id) => - LvalueTy::Ty { ty: self.tcx().lookup_item_type(def_id).ty }, + LvalueTy::Ty { ty: self.tcx().item_type(def_id) }, Lvalue::Projection(ref proj) => { let base_ty = self.sanitize_lvalue(&proj.base, location); if let LvalueTy::Ty { ty } = base_ty { @@ -274,9 +274,15 @@ impl<'a, 'b, 'gcx, 'tcx> TypeVerifier<'a, 'b, 'gcx, 'tcx> { ty::TyAdt(adt_def, substs) if adt_def.is_univariant() => { (&adt_def.variants[0], substs) } - ty::TyTuple(tys) | ty::TyClosure(_, ty::ClosureSubsts { - upvar_tys: tys, .. - }) => { + ty::TyClosure(def_id, substs) => { + return match substs.upvar_tys(def_id, tcx).nth(field.index()) { + Some(ty) => Ok(ty), + None => Err(FieldAccessError::OutOfRange { + field_count: substs.upvar_tys(def_id, tcx).count() + }) + } + } + ty::TyTuple(tys) => { return match tys.get(field.index()) { Some(&ty) => Ok(ty), None => Err(FieldAccessError::OutOfRange { @@ -300,32 +306,43 @@ impl<'a, 'b, 'gcx, 'tcx> TypeVerifier<'a, 'b, 'gcx, 'tcx> { pub struct TypeChecker<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { infcx: &'a InferCtxt<'a, 'gcx, 'tcx>, fulfillment_cx: traits::FulfillmentContext<'tcx>, - last_span: Span + last_span: Span, + body_id: ast::NodeId, } impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { - fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>) -> Self { + fn new(infcx: &'a InferCtxt<'a, 'gcx, 'tcx>, body_id: ast::NodeId) -> Self { TypeChecker { infcx: infcx, fulfillment_cx: traits::FulfillmentContext::new(), - last_span: DUMMY_SP + last_span: DUMMY_SP, + body_id: body_id, } } - fn sub_types(&self, span: Span, sup: Ty<'tcx>, sub: Ty<'tcx>) - -> infer::UnitResult<'tcx> - { - self.infcx.sub_types(false, infer::TypeOrigin::Misc(span), sup, sub) - // FIXME(#32730) propagate obligations - .map(|InferOk { obligations, .. 
}| assert!(obligations.is_empty())) + fn misc(&self, span: Span) -> traits::ObligationCause<'tcx> { + traits::ObligationCause::misc(span, self.body_id) } - fn eq_types(&self, span: Span, a: Ty<'tcx>, b: Ty<'tcx>) + pub fn register_infer_ok_obligations(&mut self, infer_ok: InferOk<'tcx, T>) -> T { + for obligation in infer_ok.obligations { + self.fulfillment_cx.register_predicate_obligation(self.infcx, obligation); + } + infer_ok.value + } + + fn sub_types(&mut self, sup: Ty<'tcx>, sub: Ty<'tcx>) + -> infer::UnitResult<'tcx> + { + self.infcx.sub_types(false, &self.misc(self.last_span), sup, sub) + .map(|ok| self.register_infer_ok_obligations(ok)) + } + + fn eq_types(&mut self, span: Span, a: Ty<'tcx>, b: Ty<'tcx>) -> infer::UnitResult<'tcx> { - self.infcx.eq_types(false, infer::TypeOrigin::Misc(span), a, b) - // FIXME(#32730) propagate obligations - .map(|InferOk { obligations, .. }| assert!(obligations.is_empty())) + self.infcx.eq_types(false, &self.misc(span), a, b) + .map(|ok| self.register_infer_ok_obligations(ok)) } fn tcx(&self) -> TyCtxt<'a, 'gcx, 'tcx> { @@ -340,7 +357,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { let lv_ty = lv.ty(mir, tcx).to_ty(tcx); let rv_ty = rv.ty(mir, tcx); if let Some(rv_ty) = rv_ty { - if let Err(terr) = self.sub_types(self.last_span, rv_ty, lv_ty) { + if let Err(terr) = self.sub_types(rv_ty, lv_ty) { span_mirbug!(self, stmt, "bad assignment ({:?} = {:?}): {:?}", lv_ty, rv_ty, terr); } @@ -401,7 +418,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { } => { let lv_ty = location.ty(mir, tcx).to_ty(tcx); let rv_ty = value.ty(mir, tcx); - if let Err(terr) = self.sub_types(self.last_span, rv_ty, lv_ty) { + if let Err(terr) = self.sub_types(rv_ty, lv_ty) { span_mirbug!(self, term, "bad DropAndReplace ({:?} = {:?}): {:?}", lv_ty, rv_ty, terr); } @@ -418,7 +435,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { } TerminatorKind::SwitchInt { ref discr, switch_ty, .. 
} => { let discr_ty = discr.ty(mir, tcx).to_ty(tcx); - if let Err(terr) = self.sub_types(self.last_span, discr_ty, switch_ty) { + if let Err(terr) = self.sub_types(discr_ty, switch_ty) { span_mirbug!(self, term, "bad SwitchInt ({:?} on {:?}): {:?}", switch_ty, discr_ty, terr); } @@ -480,7 +497,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { } } - fn check_call_dest(&self, + fn check_call_dest(&mut self, mir: &Mir<'tcx>, term: &Terminator<'tcx>, sig: &ty::FnSig<'tcx>, @@ -489,35 +506,35 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { match *destination { Some((ref dest, _)) => { let dest_ty = dest.ty(mir, tcx).to_ty(tcx); - if let Err(terr) = self.sub_types(self.last_span, sig.output, dest_ty) { + if let Err(terr) = self.sub_types(sig.output(), dest_ty) { span_mirbug!(self, term, "call dest mismatch ({:?} <- {:?}): {:?}", - dest_ty, sig.output, terr); + dest_ty, sig.output(), terr); } }, None => { // FIXME(canndrew): This is_never should probably be an is_uninhabited - if !sig.output.is_never() { + if !sig.output().is_never() { span_mirbug!(self, term, "call to converging function {:?} w/o dest", sig); } }, } } - fn check_call_inputs(&self, + fn check_call_inputs(&mut self, mir: &Mir<'tcx>, term: &Terminator<'tcx>, sig: &ty::FnSig<'tcx>, args: &[Operand<'tcx>]) { debug!("check_call_inputs({:?}, {:?})", sig, args); - if args.len() < sig.inputs.len() || - (args.len() > sig.inputs.len() && !sig.variadic) { + if args.len() < sig.inputs().len() || + (args.len() > sig.inputs().len() && !sig.variadic) { span_mirbug!(self, term, "call to {:?} with wrong # of args", sig); } - for (n, (fn_arg, op_arg)) in sig.inputs.iter().zip(args).enumerate() { + for (n, (fn_arg, op_arg)) in sig.inputs().iter().zip(args).enumerate() { let op_arg_ty = op_arg.ty(mir, self.tcx()); - if let Err(terr) = self.sub_types(self.last_span, op_arg_ty, fn_arg) { + if let Err(terr) = self.sub_types(op_arg_ty, fn_arg) { span_mirbug!(self, term, "bad arg #{:?} ({:?} <- {:?}): {:?}", n, fn_arg, op_arg_ty, terr); } @@ -535,7 +552,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { } } - fn check_box_free_inputs(&self, + fn check_box_free_inputs(&mut self, mir: &Mir<'tcx>, term: &Terminator<'tcx>, sig: &ty::FnSig<'tcx>, @@ -545,12 +562,12 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { // box_free takes a Box as a pointer. Allow for that. 
- if sig.inputs.len() != 1 { + if sig.inputs().len() != 1 { span_mirbug!(self, term, "box_free should take 1 argument"); return; } - let pointee_ty = match sig.inputs[0].sty { + let pointee_ty = match sig.inputs()[0].sty { ty::TyRawPtr(mt) => mt.ty, _ => { span_mirbug!(self, term, "box_free should take a raw ptr"); @@ -572,7 +589,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> { } }; - if let Err(terr) = self.sub_types(self.last_span, arg_ty, pointee_ty) { + if let Err(terr) = self.sub_types(arg_ty, pointee_ty) { span_mirbug!(self, term, "bad box_free arg ({:?} <- {:?}): {:?}", pointee_ty, arg_ty, terr); } @@ -709,7 +726,7 @@ impl<'tcx> MirPass<'tcx> for TypeckMir { } let param_env = ty::ParameterEnvironment::for_item(tcx, src.item_id()); tcx.infer_ctxt(None, Some(param_env), Reveal::NotSpecializable).enter(|infcx| { - let mut checker = TypeChecker::new(&infcx); + let mut checker = TypeChecker::new(&infcx, src.item_id()); { let mut verifier = TypeVerifier::new(&mut checker, mir); verifier.visit_mir(mir); diff --git a/src/librustc_passes/ast_validation.rs b/src/librustc_passes/ast_validation.rs index 828efbf373..2d0f086475 100644 --- a/src/librustc_passes/ast_validation.rs +++ b/src/librustc_passes/ast_validation.rs @@ -21,7 +21,8 @@ use rustc::session::Session; use syntax::ast::*; use syntax::attr; use syntax::codemap::Spanned; -use syntax::parse::token::{self, keywords}; +use syntax::parse::token; +use syntax::symbol::keywords; use syntax::visit::{self, Visitor}; use syntax_pos::Span; use errors; @@ -39,7 +40,7 @@ impl<'a> AstValidator<'a> { if label.name == keywords::StaticLifetime.name() { self.err_handler().span_err(span, &format!("invalid label name `{}`", label.name)); } - if label.name.as_str() == "'_" { + if label.name == "'_" { self.session.add_lint(lint::builtin::LIFETIME_UNDERSCORE, id, span, @@ -85,11 +86,24 @@ impl<'a> AstValidator<'a> { _ => {} } } + + fn no_questions_in_bounds(&self, bounds: &TyParamBounds, where_: &str, is_trait: bool) { + for bound in bounds { + if let TraitTyParamBound(ref poly, TraitBoundModifier::Maybe) = *bound { + let mut err = self.err_handler().struct_span_err(poly.span, + &format!("`?Trait` is not permitted in {}", where_)); + if is_trait { + err.note(&format!("traits are `?{}` by default", poly.trait_ref.path)); + } + err.emit(); + } + } + } } -impl<'a> Visitor for AstValidator<'a> { - fn visit_lifetime(&mut self, lt: &Lifetime) { - if lt.name.as_str() == "'_" { +impl<'a> Visitor<'a> for AstValidator<'a> { + fn visit_lifetime(&mut self, lt: &'a Lifetime) { + if lt.name == "'_" { self.session.add_lint(lint::builtin::LIFETIME_UNDERSCORE, lt.id, lt.span, @@ -99,13 +113,13 @@ impl<'a> Visitor for AstValidator<'a> { visit::walk_lifetime(self, lt) } - fn visit_expr(&mut self, expr: &Expr) { + fn visit_expr(&mut self, expr: &'a Expr) { match expr.node { ExprKind::While(.., Some(ident)) | ExprKind::Loop(_, Some(ident)) | ExprKind::WhileLet(.., Some(ident)) | ExprKind::ForLoop(.., Some(ident)) | - ExprKind::Break(Some(ident)) | + ExprKind::Break(Some(ident), _) | ExprKind::Continue(Some(ident)) => { self.check_label(ident.node, ident.span, expr.id); } @@ -115,7 +129,7 @@ impl<'a> Visitor for AstValidator<'a> { visit::walk_expr(self, expr) } - fn visit_ty(&mut self, ty: &Ty) { + fn visit_ty(&mut self, ty: &'a Ty) { match ty.node { TyKind::BareFn(ref bfty) => { self.check_decl_no_pat(&bfty.decl, |span, _| { @@ -129,13 +143,17 @@ impl<'a> Visitor for AstValidator<'a> { err.emit(); }); } + TyKind::ObjectSum(_, ref bounds) | + TyKind::PolyTraitRef(ref 
bounds) => { + self.no_questions_in_bounds(bounds, "trait object types", false); + } _ => {} } visit::walk_ty(self, ty) } - fn visit_path(&mut self, path: &Path, id: NodeId) { + fn visit_path(&mut self, path: &'a Path, id: NodeId) { if path.global && path.segments.len() > 0 { let ident = path.segments[0].identifier; if token::Ident(ident).is_path_segment_keyword() { @@ -149,7 +167,7 @@ impl<'a> Visitor for AstValidator<'a> { visit::walk_path(self, path) } - fn visit_item(&mut self, item: &Item) { + fn visit_item(&mut self, item: &'a Item) { match item.node { ItemKind::Use(ref view_path) => { let path = view_path.node.path(); @@ -188,7 +206,8 @@ impl<'a> Visitor for AstValidator<'a> { } } } - ItemKind::Trait(.., ref trait_items) => { + ItemKind::Trait(.., ref bounds, ref trait_items) => { + self.no_questions_in_bounds(bounds, "supertraits", true); for trait_item in trait_items { if let TraitItemKind::Method(ref sig, ref block) = trait_item.node { self.check_trait_fn_not_const(sig.constness); @@ -206,6 +225,13 @@ impl<'a> Visitor for AstValidator<'a> { ItemKind::Mod(_) => { // Ensure that `path` attributes on modules are recorded as used (c.f. #35584). attr::first_attr_value_str_by_name(&item.attrs, "path"); + if let Some(attr) = + item.attrs.iter().find(|attr| attr.name() == "warn_directory_ownership") { + let lint = lint::builtin::LEGACY_DIRECTORY_OWNERSHIP; + let msg = "cannot declare a new module at this location"; + self.session.add_lint(lint, item.id, item.span, msg.to_string()); + attr::mark_used(attr); + } } ItemKind::Union(ref vdata, _) => { if !vdata.is_struct() { @@ -223,7 +249,7 @@ impl<'a> Visitor for AstValidator<'a> { visit::walk_item(self, item) } - fn visit_foreign_item(&mut self, fi: &ForeignItem) { + fn visit_foreign_item(&mut self, fi: &'a ForeignItem) { match fi.node { ForeignItemKind::Fn(ref decl, _) => { self.check_decl_no_pat(decl, |span, is_recent| { @@ -246,7 +272,7 @@ impl<'a> Visitor for AstValidator<'a> { visit::walk_foreign_item(self, fi) } - fn visit_vis(&mut self, vis: &Visibility) { + fn visit_vis(&mut self, vis: &'a Visibility) { match *vis { Visibility::Restricted { ref path, .. 
} => { if !path.segments.iter().all(|segment| segment.parameters.is_empty()) { diff --git a/src/librustc_passes/consts.rs b/src/librustc_passes/consts.rs index f23539e88f..86f56d0035 100644 --- a/src/librustc_passes/consts.rs +++ b/src/librustc_passes/consts.rs @@ -27,7 +27,7 @@ use rustc::dep_graph::DepNode; use rustc::ty::cast::CastKind; use rustc_const_eval::{ConstEvalErr, lookup_const_fn_by_id, compare_lit_exprs}; -use rustc_const_eval::{eval_const_expr_partial, lookup_const_by_id}; +use rustc_const_eval::{ConstFnNode, eval_const_expr_partial, lookup_const_by_id}; use rustc_const_eval::ErrKind::{IndexOpFeatureGated, UnimplementedConstVal, MiscCatchAll, Math}; use rustc_const_eval::ErrKind::{ErroneousReferencedConstant, MiscBinaryOp, NonConstPath}; use rustc_const_eval::ErrKind::UnresolvedPath; @@ -48,7 +48,7 @@ use rustc::lint::builtin::CONST_ERR; use rustc::hir::{self, PatKind}; use syntax::ast; use syntax_pos::Span; -use rustc::hir::intravisit::{self, FnKind, Visitor}; +use rustc::hir::intravisit::{self, FnKind, Visitor, NestedVisitorMap}; use std::collections::hash_map::Entry; use std::cmp::Ordering; @@ -100,7 +100,7 @@ impl<'a, 'gcx> CheckCrateVisitor<'a, 'gcx> { .enter(|infcx| f(&mut euv::ExprUseVisitor::new(self, &infcx))) } - fn global_expr(&mut self, mode: Mode, expr: &hir::Expr) -> ConstQualif { + fn global_expr(&mut self, mode: Mode, expr: &'gcx hir::Expr) -> ConstQualif { assert!(mode != Mode::Var); match self.tcx.const_qualif_map.borrow_mut().entry(expr.id) { Entry::Occupied(entry) => return *entry.get(), @@ -132,9 +132,9 @@ impl<'a, 'gcx> CheckCrateVisitor<'a, 'gcx> { } fn fn_like(&mut self, - fk: FnKind, - fd: &hir::FnDecl, - b: &hir::Block, + fk: FnKind<'gcx>, + fd: &'gcx hir::FnDecl, + b: hir::ExprId, s: Span, fn_id: ast::NodeId) -> ConstQualif { @@ -160,7 +160,8 @@ impl<'a, 'gcx> CheckCrateVisitor<'a, 'gcx> { }; let qualif = self.with_mode(mode, |this| { - this.with_euv(Some(fn_id), |euv| euv.walk_fn(fd, b)); + let body = this.tcx.map.expr(b); + this.with_euv(Some(fn_id), |euv| euv.walk_fn(fd, body)); intravisit::walk_fn(this, fk, fd, b, s, fn_id); this.qualif }); @@ -179,21 +180,39 @@ impl<'a, 'gcx> CheckCrateVisitor<'a, 'gcx> { /// Returns true if the call is to a const fn or method. 
fn handle_const_fn_call(&mut self, _expr: &hir::Expr, def_id: DefId, ret_ty: Ty<'gcx>) -> bool { - if let Some(fn_like) = lookup_const_fn_by_id(self.tcx, def_id) { - let qualif = self.fn_like(fn_like.kind(), - fn_like.decl(), - fn_like.body(), - fn_like.span(), - fn_like.id()); - self.add_qualif(qualif); + match lookup_const_fn_by_id(self.tcx, def_id) { + Some(ConstFnNode::Local(fn_like)) => { + let qualif = self.fn_like(fn_like.kind(), + fn_like.decl(), + fn_like.body(), + fn_like.span(), + fn_like.id()); - if ret_ty.type_contents(self.tcx).interior_unsafe() { - self.add_qualif(ConstQualif::MUTABLE_MEM); - } + self.add_qualif(qualif); - true - } else { - false + if ret_ty.type_contents(self.tcx).interior_unsafe() { + self.add_qualif(ConstQualif::MUTABLE_MEM); + } + + true + }, + Some(ConstFnNode::Inlined(ii)) => { + let node_id = ii.body.id; + + let qualif = match self.tcx.const_qualif_map.borrow_mut().entry(node_id) { + Entry::Occupied(entry) => *entry.get(), + _ => bug!("const qualif entry missing for inlined item") + }; + + self.add_qualif(qualif); + + if ret_ty.type_contents(self.tcx).interior_unsafe() { + self.add_qualif(ConstQualif::MUTABLE_MEM); + } + + true + }, + None => false } } @@ -213,8 +232,12 @@ impl<'a, 'gcx> CheckCrateVisitor<'a, 'gcx> { } } -impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { - fn visit_item(&mut self, i: &hir::Item) { +impl<'a, 'tcx> Visitor<'tcx> for CheckCrateVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + + fn visit_item(&mut self, i: &'tcx hir::Item) { debug!("visit_item(item={})", self.tcx.map.node_to_string(i.id)); assert_eq!(self.mode, Mode::Var); match i.node { @@ -240,7 +263,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { } } - fn visit_trait_item(&mut self, t: &'v hir::TraitItem) { + fn visit_trait_item(&mut self, t: &'tcx hir::TraitItem) { match t.node { hir::ConstTraitItem(_, ref default) => { if let Some(ref expr) = *default { @@ -253,7 +276,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { } } - fn visit_impl_item(&mut self, i: &'v hir::ImplItem) { + fn visit_impl_item(&mut self, i: &'tcx hir::ImplItem) { match i.node { hir::ImplItemKind::Const(_, ref expr) => { self.global_expr(Mode::Const, &expr); @@ -263,15 +286,15 @@ impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { } fn visit_fn(&mut self, - fk: FnKind<'v>, - fd: &'v hir::FnDecl, - b: &'v hir::Block, + fk: FnKind<'tcx>, + fd: &'tcx hir::FnDecl, + b: hir::ExprId, s: Span, fn_id: ast::NodeId) { self.fn_like(fk, fd, b, s, fn_id); } - fn visit_pat(&mut self, p: &hir::Pat) { + fn visit_pat(&mut self, p: &'tcx hir::Pat) { match p.node { PatKind::Lit(ref lit) => { self.global_expr(Mode::Const, &lit); @@ -296,7 +319,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { } } - fn visit_block(&mut self, block: &hir::Block) { + fn visit_block(&mut self, block: &'tcx hir::Block) { // Check all statements in the block for stmt in &block.stmts { match stmt.node { @@ -315,7 +338,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for CheckCrateVisitor<'a, 'tcx> { intravisit::walk_block(self, block); } - fn visit_expr(&mut self, ex: &hir::Expr) { + fn visit_expr(&mut self, ex: &'tcx hir::Expr) { let mut outer = self.qualif; self.qualif = ConstQualif::empty(); @@ -487,8 +510,9 @@ fn check_expr<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, e: &hir::Expr, node _ => {} } } - hir::ExprPath(..) 
=> { - match v.tcx.expect_def(e.id) { + hir::ExprPath(ref qpath) => { + let def = v.tcx.tables().qpath_def(qpath, e.id); + match def { Def::VariantCtor(_, CtorKind::Const) => { // Size is determined by the whole enum, may be non-zero. v.add_qualif(ConstQualif::NON_ZERO_SIZED); @@ -531,18 +555,23 @@ fn check_expr<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, e: &hir::Expr, node }; } // The callee is an arbitrary expression, it doesn't necessarily have a definition. - let is_const = match v.tcx.expect_def_or_none(callee.id) { - Some(Def::StructCtor(_, CtorKind::Fn)) | - Some(Def::VariantCtor(_, CtorKind::Fn)) => { + let def = if let hir::ExprPath(ref qpath) = callee.node { + v.tcx.tables().qpath_def(qpath, callee.id) + } else { + Def::Err + }; + let is_const = match def { + Def::StructCtor(_, CtorKind::Fn) | + Def::VariantCtor(_, CtorKind::Fn) => { // `NON_ZERO_SIZED` is about the call result, not about the ctor itself. v.add_qualif(ConstQualif::NON_ZERO_SIZED); true } - Some(Def::Fn(did)) => { + Def::Fn(did) => { v.handle_const_fn_call(e, did, node_ty) } - Some(Def::Method(did)) => { - match v.tcx.impl_or_trait_item(did).container() { + Def::Method(did) => { + match v.tcx.associated_item(did).container { ty::ImplContainer(_) => { v.handle_const_fn_call(e, did, node_ty) } @@ -557,7 +586,7 @@ fn check_expr<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, e: &hir::Expr, node } hir::ExprMethodCall(..) => { let method = v.tcx.tables().method_map[&method_call]; - let is_const = match v.tcx.impl_or_trait_item(method.def_id).container() { + let is_const = match v.tcx.associated_item(method.def_id).container { ty::ImplContainer(_) => v.handle_const_fn_call(e, method.def_id, node_ty), ty::TraitContainer(_) => false }; @@ -610,7 +639,7 @@ fn check_expr<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, e: &hir::Expr, node hir::ExprLoop(..) | // More control flow (also not very meaningful). - hir::ExprBreak(_) | + hir::ExprBreak(..) | hir::ExprAgain(_) | hir::ExprRet(_) | @@ -644,13 +673,13 @@ fn check_adjustments<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, e: &hir::Exp } pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { - tcx.visit_all_items_in_krate(DepNode::CheckConst, - &mut CheckCrateVisitor { - tcx: tcx, - mode: Mode::Var, - qualif: ConstQualif::NOT_CONST, - rvalue_borrows: NodeMap(), - }); + tcx.visit_all_item_likes_in_krate(DepNode::CheckConst, + &mut CheckCrateVisitor { + tcx: tcx, + mode: Mode::Var, + qualif: ConstQualif::NOT_CONST, + rvalue_borrows: NodeMap(), + }.as_deep_visitor()); tcx.sess.abort_if_errors(); } diff --git a/src/librustc_passes/diagnostics.rs b/src/librustc_passes/diagnostics.rs index 89b8aa8141..b2ef1abd2a 100644 --- a/src/librustc_passes/diagnostics.rs +++ b/src/librustc_passes/diagnostics.rs @@ -228,4 +228,5 @@ pub impl Foo for Bar { register_diagnostics! { E0472, // asm! 
is unsupported on this target E0561, // patterns aren't allowed in function pointer types + E0571, // `break` with a value in a non-`loop`-loop } diff --git a/src/librustc_passes/hir_stats.rs b/src/librustc_passes/hir_stats.rs index 1858671589..f7e026866e 100644 --- a/src/librustc_passes/hir_stats.rs +++ b/src/librustc_passes/hir_stats.rs @@ -15,7 +15,7 @@ use rustc::hir; use rustc::hir::intravisit as hir_visit; use rustc::util::common::to_readable_str; -use rustc::util::nodemap::{FnvHashMap, FnvHashSet}; +use rustc::util::nodemap::{FxHashMap, FxHashSet}; use syntax::ast::{self, NodeId, AttrId}; use syntax::visit as ast_visit; use syntax_pos::Span; @@ -34,25 +34,25 @@ struct NodeData { struct StatCollector<'k> { krate: Option<&'k hir::Crate>, - data: FnvHashMap<&'static str, NodeData>, - seen: FnvHashSet, + data: FxHashMap<&'static str, NodeData>, + seen: FxHashSet, } pub fn print_hir_stats(krate: &hir::Crate) { let mut collector = StatCollector { krate: Some(krate), - data: FnvHashMap(), - seen: FnvHashSet(), + data: FxHashMap(), + seen: FxHashSet(), }; hir_visit::walk_crate(&mut collector, krate); collector.print("HIR STATS"); } -pub fn print_ast_stats(krate: &ast::Crate, title: &str) { +pub fn print_ast_stats<'v>(krate: &'v ast::Crate, title: &str) { let mut collector = StatCollector { krate: None, - data: FnvHashMap(), - seen: FnvHashSet(), + data: FxHashMap(), + seen: FxHashSet(), }; ast_visit::walk_crate(&mut collector, krate); collector.print(title); @@ -106,12 +106,20 @@ impl<'k> StatCollector<'k> { } impl<'v> hir_visit::Visitor<'v> for StatCollector<'v> { + fn nested_visit_map<'this>(&'this mut self) -> hir_visit::NestedVisitorMap<'this, 'v> { + panic!("visit_nested_xxx must be manually implemented in this visitor") + } fn visit_nested_item(&mut self, id: hir::ItemId) { let nested_item = self.krate.unwrap().item(id.id); self.visit_item(nested_item) } + fn visit_nested_impl_item(&mut self, impl_item_id: hir::ImplItemId) { + let nested_impl_item = self.krate.unwrap().impl_item(impl_item_id); + self.visit_impl_item(nested_impl_item) + } + fn visit_item(&mut self, i: &'v hir::Item) { self.record("Item", Id::Node(i.id), i); hir_visit::walk_item(self, i) @@ -164,7 +172,7 @@ impl<'v> hir_visit::Visitor<'v> for StatCollector<'v> { fn visit_fn(&mut self, fk: hir_visit::FnKind<'v>, fd: &'v hir::FnDecl, - b: &'v hir::Block, + b: hir::ExprId, s: Span, id: NodeId) { self.record("FnDecl", Id::None, fd); @@ -210,29 +218,26 @@ impl<'v> hir_visit::Visitor<'v> for StatCollector<'v> { self.record("LifetimeDef", Id::None, lifetime); hir_visit::walk_lifetime_def(self, lifetime) } + fn visit_qpath(&mut self, qpath: &'v hir::QPath, id: NodeId, span: Span) { + self.record("QPath", Id::None, qpath); + hir_visit::walk_qpath(self, qpath, id, span) + } fn visit_path(&mut self, path: &'v hir::Path, _id: NodeId) { self.record("Path", Id::None, path); hir_visit::walk_path(self, path) } - fn visit_path_list_item(&mut self, - prefix: &'v hir::Path, - item: &'v hir::PathListItem) { - self.record("PathListItem", Id::Node(item.node.id), item); - hir_visit::walk_path_list_item(self, prefix, item) - } fn visit_path_segment(&mut self, path_span: Span, path_segment: &'v hir::PathSegment) { self.record("PathSegment", Id::None, path_segment); hir_visit::walk_path_segment(self, path_span, path_segment) } - fn visit_assoc_type_binding(&mut self, type_binding: &'v hir::TypeBinding) { self.record("TypeBinding", Id::Node(type_binding.id), type_binding); hir_visit::walk_assoc_type_binding(self, type_binding) } fn 
visit_attribute(&mut self, attr: &'v ast::Attribute) { - self.record("Attribute", Id::Attr(attr.node.id), attr); + self.record("Attribute", Id::Attr(attr.id), attr); } fn visit_macro_def(&mut self, macro_def: &'v hir::MacroDef) { self.record("MacroDef", Id::Node(macro_def.id), macro_def); @@ -240,134 +245,133 @@ impl<'v> hir_visit::Visitor<'v> for StatCollector<'v> { } } -impl<'v> ast_visit::Visitor for StatCollector<'v> { +impl<'v> ast_visit::Visitor<'v> for StatCollector<'v> { - fn visit_mod(&mut self, m: &ast::Mod, _s: Span, _n: NodeId) { + fn visit_mod(&mut self, m: &'v ast::Mod, _s: Span, _n: NodeId) { self.record("Mod", Id::None, m); ast_visit::walk_mod(self, m) } - fn visit_foreign_item(&mut self, i: &ast::ForeignItem) { + fn visit_foreign_item(&mut self, i: &'v ast::ForeignItem) { self.record("ForeignItem", Id::None, i); ast_visit::walk_foreign_item(self, i) } - fn visit_item(&mut self, i: &ast::Item) { + fn visit_item(&mut self, i: &'v ast::Item) { self.record("Item", Id::None, i); ast_visit::walk_item(self, i) } - fn visit_local(&mut self, l: &ast::Local) { + fn visit_local(&mut self, l: &'v ast::Local) { self.record("Local", Id::None, l); ast_visit::walk_local(self, l) } - fn visit_block(&mut self, b: &ast::Block) { + fn visit_block(&mut self, b: &'v ast::Block) { self.record("Block", Id::None, b); ast_visit::walk_block(self, b) } - fn visit_stmt(&mut self, s: &ast::Stmt) { + fn visit_stmt(&mut self, s: &'v ast::Stmt) { self.record("Stmt", Id::None, s); ast_visit::walk_stmt(self, s) } - fn visit_arm(&mut self, a: &ast::Arm) { + fn visit_arm(&mut self, a: &'v ast::Arm) { self.record("Arm", Id::None, a); ast_visit::walk_arm(self, a) } - fn visit_pat(&mut self, p: &ast::Pat) { + fn visit_pat(&mut self, p: &'v ast::Pat) { self.record("Pat", Id::None, p); ast_visit::walk_pat(self, p) } - fn visit_expr(&mut self, ex: &ast::Expr) { + fn visit_expr(&mut self, ex: &'v ast::Expr) { self.record("Expr", Id::None, ex); ast_visit::walk_expr(self, ex) } - fn visit_ty(&mut self, t: &ast::Ty) { + fn visit_ty(&mut self, t: &'v ast::Ty) { self.record("Ty", Id::None, t); ast_visit::walk_ty(self, t) } fn visit_fn(&mut self, - fk: ast_visit::FnKind, - fd: &ast::FnDecl, - b: &ast::Block, + fk: ast_visit::FnKind<'v>, + fd: &'v ast::FnDecl, s: Span, _: NodeId) { self.record("FnDecl", Id::None, fd); - ast_visit::walk_fn(self, fk, fd, b, s) + ast_visit::walk_fn(self, fk, fd, s) } - fn visit_trait_item(&mut self, ti: &ast::TraitItem) { + fn visit_trait_item(&mut self, ti: &'v ast::TraitItem) { self.record("TraitItem", Id::None, ti); ast_visit::walk_trait_item(self, ti) } - fn visit_impl_item(&mut self, ii: &ast::ImplItem) { + fn visit_impl_item(&mut self, ii: &'v ast::ImplItem) { self.record("ImplItem", Id::None, ii); ast_visit::walk_impl_item(self, ii) } - fn visit_ty_param_bound(&mut self, bounds: &ast::TyParamBound) { + fn visit_ty_param_bound(&mut self, bounds: &'v ast::TyParamBound) { self.record("TyParamBound", Id::None, bounds); ast_visit::walk_ty_param_bound(self, bounds) } - fn visit_struct_field(&mut self, s: &ast::StructField) { + fn visit_struct_field(&mut self, s: &'v ast::StructField) { self.record("StructField", Id::None, s); ast_visit::walk_struct_field(self, s) } fn visit_variant(&mut self, - v: &ast::Variant, - g: &ast::Generics, + v: &'v ast::Variant, + g: &'v ast::Generics, item_id: NodeId) { self.record("Variant", Id::None, v); ast_visit::walk_variant(self, v, g, item_id) } - fn visit_lifetime(&mut self, lifetime: &ast::Lifetime) { + fn visit_lifetime(&mut self, lifetime: &'v 
ast::Lifetime) { self.record("Lifetime", Id::None, lifetime); ast_visit::walk_lifetime(self, lifetime) } - fn visit_lifetime_def(&mut self, lifetime: &ast::LifetimeDef) { + fn visit_lifetime_def(&mut self, lifetime: &'v ast::LifetimeDef) { self.record("LifetimeDef", Id::None, lifetime); ast_visit::walk_lifetime_def(self, lifetime) } - fn visit_mac(&mut self, mac: &ast::Mac) { + fn visit_mac(&mut self, mac: &'v ast::Mac) { self.record("Mac", Id::None, mac); } fn visit_path_list_item(&mut self, - prefix: &ast::Path, - item: &ast::PathListItem) { + prefix: &'v ast::Path, + item: &'v ast::PathListItem) { self.record("PathListItem", Id::None, item); ast_visit::walk_path_list_item(self, prefix, item) } fn visit_path_segment(&mut self, path_span: Span, - path_segment: &ast::PathSegment) { + path_segment: &'v ast::PathSegment) { self.record("PathSegment", Id::None, path_segment); ast_visit::walk_path_segment(self, path_span, path_segment) } - fn visit_assoc_type_binding(&mut self, type_binding: &ast::TypeBinding) { + fn visit_assoc_type_binding(&mut self, type_binding: &'v ast::TypeBinding) { self.record("TypeBinding", Id::None, type_binding); ast_visit::walk_assoc_type_binding(self, type_binding) } - fn visit_attribute(&mut self, attr: &ast::Attribute) { + fn visit_attribute(&mut self, attr: &'v ast::Attribute) { self.record("Attribute", Id::None, attr); } - fn visit_macro_def(&mut self, macro_def: &ast::MacroDef) { + fn visit_macro_def(&mut self, macro_def: &'v ast::MacroDef) { self.record("MacroDef", Id::None, macro_def); ast_visit::walk_macro_def(self, macro_def) } diff --git a/src/librustc_passes/lib.rs b/src/librustc_passes/lib.rs index 039a76d25c..670ef426c2 100644 --- a/src/librustc_passes/lib.rs +++ b/src/librustc_passes/lib.rs @@ -23,7 +23,6 @@ html_root_url = "https://doc.rust-lang.org/nightly/")] #![cfg_attr(not(stage0), deny(warnings))] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(rustc_diagnostic_macros)] #![feature(staged_api)] #![feature(rustc_private)] @@ -47,6 +46,7 @@ pub mod ast_validation; pub mod consts; pub mod hir_stats; pub mod loops; +pub mod mir_stats; pub mod no_asm; pub mod rvalues; pub mod static_recursion; diff --git a/src/librustc_passes/loops.rs b/src/librustc_passes/loops.rs index e942707acd..10f464a990 100644 --- a/src/librustc_passes/loops.rs +++ b/src/librustc_passes/loops.rs @@ -13,59 +13,121 @@ use rustc::session::Session; use rustc::dep_graph::DepNode; use rustc::hir::map::Map; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir; +use syntax::ast; use syntax_pos::Span; +#[derive(Clone, Copy, PartialEq)] +enum LoopKind { + Loop(hir::LoopSource), + WhileLoop, +} + +impl LoopKind { + fn name(self) -> &'static str { + match self { + LoopKind::Loop(hir::LoopSource::Loop) => "loop", + LoopKind::Loop(hir::LoopSource::WhileLet) => "while let", + LoopKind::Loop(hir::LoopSource::ForLoop) => "for", + LoopKind::WhileLoop => "while", + } + } +} + #[derive(Clone, Copy, PartialEq)] enum Context { Normal, - Loop, + Loop(LoopKind), Closure, } #[derive(Copy, Clone)] -struct CheckLoopVisitor<'a> { +struct CheckLoopVisitor<'a, 'ast: 'a> { sess: &'a Session, + hir_map: &'a Map<'ast>, cx: Context, } pub fn check_crate(sess: &Session, map: &Map) { let _task = map.dep_graph.in_task(DepNode::CheckLoops); let krate = map.krate(); - krate.visit_all_items(&mut CheckLoopVisitor { + krate.visit_all_item_likes(&mut CheckLoopVisitor { sess: sess, + hir_map: map, cx: Normal, - }); + 
}.as_deep_visitor()); } -impl<'a, 'v> Visitor<'v> for CheckLoopVisitor<'a> { - fn visit_item(&mut self, i: &hir::Item) { +impl<'a, 'ast> Visitor<'ast> for CheckLoopVisitor<'a, 'ast> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + NestedVisitorMap::OnlyBodies(&self.hir_map) + } + + fn visit_item(&mut self, i: &'ast hir::Item) { self.with_context(Normal, |v| intravisit::walk_item(v, i)); } - fn visit_expr(&mut self, e: &hir::Expr) { + fn visit_impl_item(&mut self, i: &'ast hir::ImplItem) { + self.with_context(Normal, |v| intravisit::walk_impl_item(v, i)); + } + + fn visit_expr(&mut self, e: &'ast hir::Expr) { match e.node { hir::ExprWhile(ref e, ref b, _) => { - self.visit_expr(&e); - self.with_context(Loop, |v| v.visit_block(&b)); + self.with_context(Loop(LoopKind::WhileLoop), |v| { + v.visit_expr(&e); + v.visit_block(&b); + }); } - hir::ExprLoop(ref b, _) => { - self.with_context(Loop, |v| v.visit_block(&b)); + hir::ExprLoop(ref b, _, source) => { + self.with_context(Loop(LoopKind::Loop(source)), |v| v.visit_block(&b)); } - hir::ExprClosure(.., ref b, _) => { - self.with_context(Closure, |v| v.visit_block(&b)); + hir::ExprClosure(.., b, _) => { + self.with_context(Closure, |v| v.visit_body(b)); + } + hir::ExprBreak(label, ref opt_expr) => { + if opt_expr.is_some() { + let loop_kind = if let Some(label) = label { + if label.loop_id == ast::DUMMY_NODE_ID { + None + } else { + Some(match self.hir_map.expect_expr(label.loop_id).node { + hir::ExprWhile(..) => LoopKind::WhileLoop, + hir::ExprLoop(_, _, source) => LoopKind::Loop(source), + ref r => span_bug!(e.span, + "break label resolved to a non-loop: {:?}", r), + }) + } + } else if let Loop(kind) = self.cx { + Some(kind) + } else { + // `break` outside a loop - caught below + None + }; + match loop_kind { + None | Some(LoopKind::Loop(hir::LoopSource::Loop)) => (), + Some(kind) => { + struct_span_err!(self.sess, e.span, E0571, + "`break` with value from a `{}` loop", + kind.name()) + .span_label(e.span, + &format!("can only break with a value inside `loop`")) + .emit(); + } + } + } + self.require_loop("break", e.span); } - hir::ExprBreak(_) => self.require_loop("break", e.span), hir::ExprAgain(_) => self.require_loop("continue", e.span), _ => intravisit::walk_expr(self, e), } } } -impl<'a> CheckLoopVisitor<'a> { +impl<'a, 'ast> CheckLoopVisitor<'a, 'ast> { fn with_context(&mut self, cx: Context, f: F) - where F: FnOnce(&mut CheckLoopVisitor<'a>) + where F: FnOnce(&mut CheckLoopVisitor<'a, 'ast>) { let old_cx = self.cx; self.cx = cx; @@ -75,7 +137,7 @@ impl<'a> CheckLoopVisitor<'a> { fn require_loop(&self, name: &str, span: Span) { match self.cx { - Loop => {} + Loop(_) => {} Closure => { struct_span_err!(self.sess, span, E0267, "`{}` inside of a closure", name) .span_label(span, &format!("cannot break inside of a closure")) diff --git a/src/librustc_passes/mir_stats.rs b/src/librustc_passes/mir_stats.rs new file mode 100644 index 0000000000..cec1c20519 --- /dev/null +++ b/src/librustc_passes/mir_stats.rs @@ -0,0 +1,319 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// The visitors in this module collect sizes and counts of the most important +// pieces of MIR. 
The resulting numbers are good approximations but not +// completely accurate (some things might be counted twice, others missed). + +use rustc_const_math::{ConstUsize}; +use rustc::middle::const_val::{ConstVal}; +use rustc::mir::{AggregateKind, AssertMessage, BasicBlock, BasicBlockData}; +use rustc::mir::{Constant, Literal, Location, LocalDecl}; +use rustc::mir::{Lvalue, LvalueElem, LvalueProjection}; +use rustc::mir::{Mir, Operand, ProjectionElem}; +use rustc::mir::{Rvalue, SourceInfo, Statement, StatementKind}; +use rustc::mir::{Terminator, TerminatorKind, TypedConstVal, VisibilityScope, VisibilityScopeData}; +use rustc::mir::visit as mir_visit; +use rustc::mir::visit::Visitor; +use rustc::ty::{ClosureSubsts, TyCtxt}; +use rustc::util::common::to_readable_str; +use rustc::util::nodemap::{FxHashMap}; + +struct NodeData { + count: usize, + size: usize, +} + +struct StatCollector<'a, 'tcx: 'a> { + _tcx: TyCtxt<'a, 'tcx, 'tcx>, + data: FxHashMap<&'static str, NodeData>, +} + +pub fn print_mir_stats<'tcx, 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, title: &str) { + let mut collector = StatCollector { + _tcx: tcx, + data: FxHashMap(), + }; + // For debugging instrumentation like this, we don't need to worry + // about maintaining the dep graph. + let _ignore = tcx.dep_graph.in_ignore(); + let mir_map = tcx.mir_map.borrow(); + for def_id in mir_map.keys() { + let mir = mir_map.get(&def_id).unwrap(); + collector.visit_mir(&mir.borrow()); + } + collector.print(title); +} + +impl<'a, 'tcx> StatCollector<'a, 'tcx> { + + fn record_with_size(&mut self, label: &'static str, node_size: usize) { + let entry = self.data.entry(label).or_insert(NodeData { + count: 0, + size: 0, + }); + + entry.count += 1; + entry.size = node_size; + } + + fn record(&mut self, label: &'static str, node: &T) { + self.record_with_size(label, ::std::mem::size_of_val(node)); + } + + fn print(&self, title: &str) { + let mut stats: Vec<_> = self.data.iter().collect(); + + stats.sort_by_key(|&(_, ref d)| d.count * d.size); + + println!("\n{}\n", title); + + println!("{:<32}{:>18}{:>14}{:>14}", + "Name", "Accumulated Size", "Count", "Item Size"); + println!("------------------------------------------------------------------------------"); + + for (label, data) in stats { + println!("{:<32}{:>18}{:>14}{:>14}", + label, + to_readable_str(data.count * data.size), + to_readable_str(data.count), + to_readable_str(data.size)); + } + println!("------------------------------------------------------------------------------"); + } +} + +impl<'a, 'tcx> mir_visit::Visitor<'tcx> for StatCollector<'a, 'tcx> { + fn visit_mir(&mut self, mir: &Mir<'tcx>) { + self.record("Mir", mir); + + // since the `super_mir` method does not traverse the MIR of + // promoted rvalues, (but we still want to gather statistics + // on the structures represented there) we manually traverse + // the promoted rvalues here. 
+ for promoted_mir in &mir.promoted { + self.visit_mir(promoted_mir); + } + + self.super_mir(mir); + } + + fn visit_basic_block_data(&mut self, + block: BasicBlock, + data: &BasicBlockData<'tcx>) { + self.record("BasicBlockData", data); + self.super_basic_block_data(block, data); + } + + fn visit_visibility_scope_data(&mut self, + scope_data: &VisibilityScopeData) { + self.record("VisibilityScopeData", scope_data); + self.super_visibility_scope_data(scope_data); + } + + fn visit_statement(&mut self, + block: BasicBlock, + statement: &Statement<'tcx>, + location: Location) { + self.record("Statement", statement); + self.record(match statement.kind { + StatementKind::Assign(..) => "StatementKind::Assign", + StatementKind::SetDiscriminant { .. } => "StatementKind::SetDiscriminant", + StatementKind::StorageLive(..) => "StatementKind::StorageLive", + StatementKind::StorageDead(..) => "StatementKind::StorageDead", + StatementKind::Nop => "StatementKind::Nop", + }, &statement.kind); + self.super_statement(block, statement, location); + } + + fn visit_terminator(&mut self, + block: BasicBlock, + terminator: &Terminator<'tcx>, + location: Location) { + self.record("Terminator", terminator); + self.super_terminator(block, terminator, location); + } + + fn visit_terminator_kind(&mut self, + block: BasicBlock, + kind: &TerminatorKind<'tcx>, + location: Location) { + self.record("TerminatorKind", kind); + self.record(match *kind { + TerminatorKind::Goto { .. } => "TerminatorKind::Goto", + TerminatorKind::If { .. } => "TerminatorKind::If", + TerminatorKind::Switch { .. } => "TerminatorKind::Switch", + TerminatorKind::SwitchInt { .. } => "TerminatorKind::SwitchInt", + TerminatorKind::Resume => "TerminatorKind::Resume", + TerminatorKind::Return => "TerminatorKind::Return", + TerminatorKind::Unreachable => "TerminatorKind::Unreachable", + TerminatorKind::Drop { .. } => "TerminatorKind::Drop", + TerminatorKind::DropAndReplace { .. } => "TerminatorKind::DropAndReplace", + TerminatorKind::Call { .. } => "TerminatorKind::Call", + TerminatorKind::Assert { .. } => "TerminatorKind::Assert", + }, kind); + self.super_terminator_kind(block, kind, location); + } + + fn visit_assert_message(&mut self, + msg: &AssertMessage<'tcx>, + location: Location) { + self.record("AssertMessage", msg); + self.record(match *msg { + AssertMessage::BoundsCheck { .. } => "AssertMessage::BoundsCheck", + AssertMessage::Math(..) => "AssertMessage::Math", + }, msg); + self.super_assert_message(msg, location); + } + + fn visit_rvalue(&mut self, + rvalue: &Rvalue<'tcx>, + location: Location) { + self.record("Rvalue", rvalue); + let rvalue_kind = match *rvalue { + Rvalue::Use(..) => "Rvalue::Use", + Rvalue::Repeat(..) => "Rvalue::Repeat", + Rvalue::Ref(..) => "Rvalue::Ref", + Rvalue::Len(..) => "Rvalue::Len", + Rvalue::Cast(..) => "Rvalue::Cast", + Rvalue::BinaryOp(..) => "Rvalue::BinaryOp", + Rvalue::CheckedBinaryOp(..) => "Rvalue::CheckedBinaryOp", + Rvalue::UnaryOp(..) => "Rvalue::UnaryOp", + Rvalue::Box(..) => "Rvalue::Box", + Rvalue::Aggregate(ref kind, ref _operands) => { + // AggregateKind is not distinguished by visit API, so + // record it. (`super_rvalue` handles `_operands`.) + self.record(match *kind { + AggregateKind::Array => "AggregateKind::Array", + AggregateKind::Tuple => "AggregateKind::Tuple", + AggregateKind::Adt(..) => "AggregateKind::Adt", + AggregateKind::Closure(..) => "AggregateKind::Closure", + }, kind); + + "Rvalue::Aggregate" + } + Rvalue::InlineAsm { .. 
} => "Rvalue::InlineAsm", + }; + self.record(rvalue_kind, rvalue); + self.super_rvalue(rvalue, location); + } + + fn visit_operand(&mut self, + operand: &Operand<'tcx>, + location: Location) { + self.record("Operand", operand); + self.record(match *operand { + Operand::Consume(..) => "Operand::Consume", + Operand::Constant(..) => "Operand::Constant", + }, operand); + self.super_operand(operand, location); + } + + fn visit_lvalue(&mut self, + lvalue: &Lvalue<'tcx>, + context: mir_visit::LvalueContext<'tcx>, + location: Location) { + self.record("Lvalue", lvalue); + self.record(match *lvalue { + Lvalue::Local(..) => "Lvalue::Local", + Lvalue::Static(..) => "Lvalue::Static", + Lvalue::Projection(..) => "Lvalue::Projection", + }, lvalue); + self.super_lvalue(lvalue, context, location); + } + + fn visit_projection(&mut self, + lvalue: &LvalueProjection<'tcx>, + context: mir_visit::LvalueContext<'tcx>, + location: Location) { + self.record("LvalueProjection", lvalue); + self.super_projection(lvalue, context, location); + } + + fn visit_projection_elem(&mut self, + lvalue: &LvalueElem<'tcx>, + context: mir_visit::LvalueContext<'tcx>, + location: Location) { + self.record("LvalueElem", lvalue); + self.record(match *lvalue { + ProjectionElem::Deref => "LvalueElem::Deref", + ProjectionElem::Subslice { .. } => "LvalueElem::Subslice", + ProjectionElem::Field(..) => "LvalueElem::Field", + ProjectionElem::Index(..) => "LvalueElem::Index", + ProjectionElem::ConstantIndex { .. } => "LvalueElem::ConstantIndex", + ProjectionElem::Downcast(..) => "LvalueElem::Downcast", + }, lvalue); + self.super_projection_elem(lvalue, context, location); + } + + fn visit_constant(&mut self, + constant: &Constant<'tcx>, + location: Location) { + self.record("Constant", constant); + self.super_constant(constant, location); + } + + fn visit_literal(&mut self, + literal: &Literal<'tcx>, + location: Location) { + self.record("Literal", literal); + self.record(match *literal { + Literal::Item { .. } => "Literal::Item", + Literal::Value { .. } => "Literal::Value", + Literal::Promoted { .. 
} => "Literal::Promoted", + }, literal); + self.super_literal(literal, location); + } + + fn visit_source_info(&mut self, + source_info: &SourceInfo) { + self.record("SourceInfo", source_info); + self.super_source_info(source_info); + } + + fn visit_closure_substs(&mut self, + substs: &ClosureSubsts<'tcx>) { + self.record("ClosureSubsts", substs); + self.super_closure_substs(substs); + } + + fn visit_const_val(&mut self, + const_val: &ConstVal, + _: Location) { + self.record("ConstVal", const_val); + self.super_const_val(const_val); + } + + fn visit_const_usize(&mut self, + const_usize: &ConstUsize, + _: Location) { + self.record("ConstUsize", const_usize); + self.super_const_usize(const_usize); + } + + fn visit_typed_const_val(&mut self, + val: &TypedConstVal<'tcx>, + location: Location) { + self.record("TypedConstVal", val); + self.super_typed_const_val(val, location); + } + + fn visit_local_decl(&mut self, + local_decl: &LocalDecl<'tcx>) { + self.record("LocalDecl", local_decl); + self.super_local_decl(local_decl); + } + + fn visit_visibility_scope(&mut self, + scope: &VisibilityScope) { + self.record("VisiblityScope", scope); + self.super_visibility_scope(scope); + } +} diff --git a/src/librustc_passes/no_asm.rs b/src/librustc_passes/no_asm.rs index af3065d64e..4dbf57a99b 100644 --- a/src/librustc_passes/no_asm.rs +++ b/src/librustc_passes/no_asm.rs @@ -31,8 +31,8 @@ struct CheckNoAsm<'a> { sess: &'a Session, } -impl<'a> Visitor for CheckNoAsm<'a> { - fn visit_expr(&mut self, e: &ast::Expr) { +impl<'a> Visitor<'a> for CheckNoAsm<'a> { + fn visit_expr(&mut self, e: &'a ast::Expr) { match e.node { ast::ExprKind::InlineAsm(_) => { span_err!(self.sess, diff --git a/src/librustc_passes/rvalues.rs b/src/librustc_passes/rvalues.rs index c3ef5a72a2..ddb5af1e80 100644 --- a/src/librustc_passes/rvalues.rs +++ b/src/librustc_passes/rvalues.rs @@ -18,24 +18,28 @@ use rustc::ty::{self, TyCtxt, ParameterEnvironment}; use rustc::traits::Reveal; use rustc::hir; -use rustc::hir::intravisit; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use syntax::ast; use syntax_pos::Span; pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { let mut rvcx = RvalueContext { tcx: tcx }; - tcx.visit_all_items_in_krate(DepNode::RvalueCheck, &mut rvcx); + tcx.visit_all_item_likes_in_krate(DepNode::RvalueCheck, &mut rvcx.as_deep_visitor()); } struct RvalueContext<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, } -impl<'a, 'tcx, 'v> intravisit::Visitor<'v> for RvalueContext<'a, 'tcx> { +impl<'a, 'tcx> Visitor<'tcx> for RvalueContext<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + fn visit_fn(&mut self, - fk: intravisit::FnKind<'v>, - fd: &'v hir::FnDecl, - b: &'v hir::Block, + fk: intravisit::FnKind<'tcx>, + fd: &'tcx hir::FnDecl, + b: hir::ExprId, s: Span, fn_id: ast::NodeId) { // FIXME (@jroesch) change this to be an inference context @@ -46,8 +50,9 @@ impl<'a, 'tcx, 'v> intravisit::Visitor<'v> for RvalueContext<'a, 'tcx> { tcx: infcx.tcx, param_env: ¶m_env }; + let body = infcx.tcx.map.expr(b); let mut euv = euv::ExprUseVisitor::new(&mut delegate, &infcx); - euv.walk_fn(fd, b); + euv.walk_fn(fd, body); }); intravisit::walk_fn(self, fk, fd, b, s, fn_id) } diff --git a/src/librustc_passes/static_recursion.rs b/src/librustc_passes/static_recursion.rs index 0e0f8a8456..b5be4aa5e6 100644 --- a/src/librustc_passes/static_recursion.rs +++ b/src/librustc_passes/static_recursion.rs @@ -14,29 +14,31 @@ use 
rustc::dep_graph::DepNode; use rustc::hir::map as ast_map; use rustc::session::{CompileResult, Session}; -use rustc::hir::def::{Def, CtorKind, DefMap}; -use rustc::util::nodemap::NodeMap; +use rustc::hir::def::{Def, CtorKind}; +use rustc::util::nodemap::{NodeMap, NodeSet}; use syntax::ast; use syntax::feature_gate::{GateIssue, emit_feature_err}; use syntax_pos::Span; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir; -use std::cell::RefCell; - struct CheckCrateVisitor<'a, 'ast: 'a> { sess: &'a Session, - def_map: &'a DefMap, ast_map: &'a ast_map::Map<'ast>, // `discriminant_map` is a cache that associates the `NodeId`s of local // variant definitions with the discriminant expression that applies to // each one. If the variant uses the default values (starting from `0`), // then `None` is stored. - discriminant_map: RefCell>>, + discriminant_map: NodeMap>, + detected_recursive_ids: NodeSet, } impl<'a, 'ast: 'a> Visitor<'ast> for CheckCrateVisitor<'a, 'ast> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + NestedVisitorMap::None + } + fn visit_item(&mut self, it: &'ast hir::Item) { match it.node { hir::ItemStatic(..) | @@ -87,49 +89,49 @@ impl<'a, 'ast: 'a> Visitor<'ast> for CheckCrateVisitor<'a, 'ast> { } } -pub fn check_crate<'ast>(sess: &Session, - def_map: &DefMap, - ast_map: &ast_map::Map<'ast>) - -> CompileResult { +pub fn check_crate<'ast>(sess: &Session, ast_map: &ast_map::Map<'ast>) -> CompileResult { let _task = ast_map.dep_graph.in_task(DepNode::CheckStaticRecursion); let mut visitor = CheckCrateVisitor { sess: sess, - def_map: def_map, ast_map: ast_map, - discriminant_map: RefCell::new(NodeMap()), + discriminant_map: NodeMap(), + detected_recursive_ids: NodeSet(), }; sess.track_errors(|| { - ast_map.krate().visit_all_items(&mut visitor); + // FIXME(#37712) could use ItemLikeVisitor if trait items were item-like + ast_map.krate().visit_all_item_likes(&mut visitor.as_deep_visitor()); }) } -struct CheckItemRecursionVisitor<'a, 'ast: 'a> { - root_span: &'a Span, - sess: &'a Session, - ast_map: &'a ast_map::Map<'ast>, - def_map: &'a DefMap, - discriminant_map: &'a RefCell>>, +struct CheckItemRecursionVisitor<'a, 'b: 'a, 'ast: 'b> { + root_span: &'b Span, + sess: &'b Session, + ast_map: &'b ast_map::Map<'ast>, + discriminant_map: &'a mut NodeMap>, idstack: Vec, + detected_recursive_ids: &'a mut NodeSet, } -impl<'a, 'ast: 'a> CheckItemRecursionVisitor<'a, 'ast> { - fn new(v: &'a CheckCrateVisitor<'a, 'ast>, - span: &'a Span) - -> CheckItemRecursionVisitor<'a, 'ast> { +impl<'a, 'b: 'a, 'ast: 'b> CheckItemRecursionVisitor<'a, 'b, 'ast> { + fn new(v: &'a mut CheckCrateVisitor<'b, 'ast>, span: &'b Span) -> Self { CheckItemRecursionVisitor { root_span: span, sess: v.sess, ast_map: v.ast_map, - def_map: v.def_map, - discriminant_map: &v.discriminant_map, + discriminant_map: &mut v.discriminant_map, idstack: Vec::new(), + detected_recursive_ids: &mut v.detected_recursive_ids, } } fn with_item_id_pushed(&mut self, id: ast::NodeId, f: F, span: Span) where F: Fn(&mut Self) { if self.idstack.iter().any(|&x| x == id) { + if self.detected_recursive_ids.contains(&id) { + return; + } + self.detected_recursive_ids.insert(id); let any_static = self.idstack.iter().any(|&x| { if let ast_map::NodeItem(item) = self.ast_map.get(x) { if let hir::ItemStatic(..) 
= item.node { @@ -168,15 +170,14 @@ impl<'a, 'ast: 'a> CheckItemRecursionVisitor<'a, 'ast> { // So for every variant, we need to track whether there is an expression // somewhere in the enum definition that controls its discriminant. We do // this by starting from the end and searching backward. - fn populate_enum_discriminants(&self, enum_definition: &'ast hir::EnumDef) { + fn populate_enum_discriminants(&mut self, enum_definition: &'ast hir::EnumDef) { // Get the map, and return if we already processed this enum or if it // has no variants. - let mut discriminant_map = self.discriminant_map.borrow_mut(); match enum_definition.variants.first() { None => { return; } - Some(variant) if discriminant_map.contains_key(&variant.node.data.id()) => { + Some(variant) if self.discriminant_map.contains_key(&variant.node.data.id()) => { return; } _ => {} @@ -190,7 +191,7 @@ impl<'a, 'ast: 'a> CheckItemRecursionVisitor<'a, 'ast> { // is affected by that expression. if let Some(ref expr) = variant.node.disr_expr { for id in &variant_stack { - discriminant_map.insert(*id, Some(expr)); + self.discriminant_map.insert(*id, Some(expr)); } variant_stack.clear() } @@ -198,12 +199,15 @@ impl<'a, 'ast: 'a> CheckItemRecursionVisitor<'a, 'ast> { // If we are at the top, that always starts at 0, so any variant on the // stack has a default value and does not need to be checked. for id in &variant_stack { - discriminant_map.insert(*id, None); + self.discriminant_map.insert(*id, None); } } } -impl<'a, 'ast: 'a> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'ast> { +impl<'a, 'b: 'a, 'ast: 'b> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'b, 'ast> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> { + NestedVisitorMap::OnlyBodies(&self.ast_map) + } fn visit_item(&mut self, it: &'ast hir::Item) { self.with_item_id_pushed(it.id, |v| intravisit::walk_item(v, it), it.span); } @@ -223,7 +227,7 @@ impl<'a, 'ast: 'a> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'ast> { _: ast::NodeId) { let variant_id = variant.node.data.id(); let maybe_expr; - if let Some(get_expr) = self.discriminant_map.borrow().get(&variant_id) { + if let Some(get_expr) = self.discriminant_map.get(&variant_id) { // This is necessary because we need to let the `discriminant_map` // borrow fall out of scope, so that we can reborrow farther down. maybe_expr = (*get_expr).clone(); @@ -247,51 +251,46 @@ impl<'a, 'ast: 'a> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'ast> { self.with_item_id_pushed(ii.id, |v| intravisit::walk_impl_item(v, ii), ii.span); } - fn visit_expr(&mut self, e: &'ast hir::Expr) { - match e.node { - hir::ExprPath(..) 
=> { - match self.def_map.get(&e.id).map(|d| d.base_def) { - Some(Def::Static(def_id, _)) | - Some(Def::AssociatedConst(def_id)) | - Some(Def::Const(def_id)) => { - if let Some(node_id) = self.ast_map.as_local_node_id(def_id) { - match self.ast_map.get(node_id) { - ast_map::NodeItem(item) => self.visit_item(item), - ast_map::NodeTraitItem(item) => self.visit_trait_item(item), - ast_map::NodeImplItem(item) => self.visit_impl_item(item), - ast_map::NodeForeignItem(_) => {} - _ => { - span_bug!(e.span, - "expected item, found {}", - self.ast_map.node_to_string(node_id)); - } - } + fn visit_path(&mut self, path: &'ast hir::Path, _: ast::NodeId) { + match path.def { + Def::Static(def_id, _) | + Def::AssociatedConst(def_id) | + Def::Const(def_id) => { + if let Some(node_id) = self.ast_map.as_local_node_id(def_id) { + match self.ast_map.get(node_id) { + ast_map::NodeItem(item) => self.visit_item(item), + ast_map::NodeTraitItem(item) => self.visit_trait_item(item), + ast_map::NodeImplItem(item) => self.visit_impl_item(item), + ast_map::NodeForeignItem(_) => {} + _ => { + span_bug!(path.span, + "expected item, found {}", + self.ast_map.node_to_string(node_id)); } } - // For variants, we only want to check expressions that - // affect the specific variant used, but we need to check - // the whole enum definition to see what expression that - // might be (if any). - Some(Def::VariantCtor(variant_id, CtorKind::Const)) => { - if let Some(variant_id) = self.ast_map.as_local_node_id(variant_id) { - let variant = self.ast_map.expect_variant(variant_id); - let enum_id = self.ast_map.get_parent(variant_id); - let enum_item = self.ast_map.expect_item(enum_id); - if let hir::ItemEnum(ref enum_def, ref generics) = enum_item.node { - self.populate_enum_discriminants(enum_def); - self.visit_variant(variant, generics, enum_id); - } else { - span_bug!(e.span, - "`check_static_recursion` found \ - non-enum in Def::VariantCtor"); - } - } + } + } + // For variants, we only want to check expressions that + // affect the specific variant used, but we need to check + // the whole enum definition to see what expression that + // might be (if any). + Def::VariantCtor(variant_id, CtorKind::Const) => { + if let Some(variant_id) = self.ast_map.as_local_node_id(variant_id) { + let variant = self.ast_map.expect_variant(variant_id); + let enum_id = self.ast_map.get_parent(variant_id); + let enum_item = self.ast_map.expect_item(enum_id); + if let hir::ItemEnum(ref enum_def, ref generics) = enum_item.node { + self.populate_enum_discriminants(enum_def); + self.visit_variant(variant, generics, enum_id); + } else { + span_bug!(path.span, + "`check_static_recursion` found \ + non-enum in Def::VariantCtor"); } - _ => (), } } _ => (), } - intravisit::walk_expr(self, e); + intravisit::walk_path(self, path); } } diff --git a/src/librustc_plugin/build.rs b/src/librustc_plugin/build.rs index ff3038c3d1..75046f6aeb 100644 --- a/src/librustc_plugin/build.rs +++ b/src/librustc_plugin/build.rs @@ -16,14 +16,14 @@ use errors; use syntax_pos::Span; use rustc::dep_graph::DepNode; use rustc::hir::map::Map; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir; struct RegistrarFinder { registrars: Vec<(ast::NodeId, Span)> , } -impl<'v> Visitor<'v> for RegistrarFinder { +impl<'v> ItemLikeVisitor<'v> for RegistrarFinder { fn visit_item(&mut self, item: &hir::Item) { if let hir::ItemFn(..) 
= item.node { if attr::contains_name(&item.attrs, @@ -32,6 +32,9 @@ impl<'v> Visitor<'v> for RegistrarFinder { } } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } /// Find the function marked with `#[plugin_registrar]`, if any. @@ -42,7 +45,7 @@ pub fn find_plugin_registrar(diagnostic: &errors::Handler, let krate = hir_map.krate(); let mut finder = RegistrarFinder { registrars: Vec::new() }; - krate.visit_all_items(&mut finder); + krate.visit_all_item_likes(&mut finder); match finder.registrars.len() { 0 => None, diff --git a/src/librustc_plugin/load.rs b/src/librustc_plugin/load.rs index 4438241999..1bfc445fca 100644 --- a/src/librustc_plugin/load.rs +++ b/src/librustc_plugin/load.rs @@ -69,9 +69,9 @@ pub fn load_plugins(sess: &Session, for plugin in plugins { // plugins must have a name and can't be key = value match plugin.name() { - Some(ref name) if !plugin.is_value_str() => { + Some(name) if !plugin.is_value_str() => { let args = plugin.meta_item_list().map(ToOwned::to_owned); - loader.load_plugin(plugin.span, name, args.unwrap_or_default()); + loader.load_plugin(plugin.span, &name.as_str(), args.unwrap_or_default()); }, _ => call_malformed_plugin_attribute(sess, attr.span), } diff --git a/src/librustc_plugin/registry.rs b/src/librustc_plugin/registry.rs index 88e248e2ef..fe2f9713d1 100644 --- a/src/librustc_plugin/registry.rs +++ b/src/librustc_plugin/registry.rs @@ -17,7 +17,7 @@ use rustc::mir::transform::MirMapPass; use syntax::ext::base::{SyntaxExtension, NamedSyntaxExtension, NormalTT, IdentTT}; use syntax::ext::base::MacroExpanderFn; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax::ast; use syntax::feature_gate::AttributeType; use syntax_pos::Span; @@ -101,7 +101,7 @@ impl<'a> Registry<'a> { /// /// This is the most general hook into `libsyntax`'s expansion behavior. pub fn register_syntax_extension(&mut self, name: ast::Name, extension: SyntaxExtension) { - if name.as_str() == "macro_rules" { + if name == "macro_rules" { panic!("user-defined macros may not be named `macro_rules`"); } self.syntax_exts.push((name, match extension { @@ -121,7 +121,7 @@ impl<'a> Registry<'a> { /// It builds for you a `NormalTT` that calls `expander`, /// and also takes care of interning the macro's name. 
pub fn register_macro(&mut self, name: &str, expander: MacroExpanderFn) { - self.register_syntax_extension(token::intern(name), + self.register_syntax_extension(Symbol::intern(name), NormalTT(Box::new(expander), None, false)); } diff --git a/src/librustc_privacy/lib.rs b/src/librustc_privacy/lib.rs index cbe2cd2628..145b9176f6 100644 --- a/src/librustc_privacy/lib.rs +++ b/src/librustc_privacy/lib.rs @@ -17,7 +17,6 @@ html_root_url = "https://doc.rust-lang.org/nightly/")] #![cfg_attr(not(stage0), deny(warnings))] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(staged_api)] @@ -30,11 +29,13 @@ use rustc::dep_graph::DepNode; use rustc::hir::{self, PatKind}; use rustc::hir::def::{self, Def, CtorKind}; use rustc::hir::def_id::DefId; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; +use rustc::hir::itemlikevisit::DeepVisitor; use rustc::hir::pat_util::EnumerateAndAdjustIterator; use rustc::lint; use rustc::middle::privacy::{AccessLevel, AccessLevels}; -use rustc::ty::{self, TyCtxt}; +use rustc::ty::{self, TyCtxt, Ty, TypeFoldable}; +use rustc::ty::fold::TypeVisitor; use rustc::util::nodemap::NodeSet; use syntax::ast; use syntax_pos::Span; @@ -61,36 +62,33 @@ struct EmbargoVisitor<'a, 'tcx: 'a> { } struct ReachEverythingInTheInterfaceVisitor<'b, 'a: 'b, 'tcx: 'a> { + item_def_id: DefId, ev: &'b mut EmbargoVisitor<'a, 'tcx>, } impl<'a, 'tcx> EmbargoVisitor<'a, 'tcx> { - fn ty_level(&self, ty: &hir::Ty) -> Option { - if let hir::TyPath(..) = ty.node { - match self.tcx.expect_def(ty.id) { - Def::PrimTy(..) | Def::SelfTy(..) | Def::TyParam(..) => { - Some(AccessLevel::Public) - } - def => { - if let Some(node_id) = self.tcx.map.as_local_node_id(def.def_id()) { - self.get(node_id) - } else { - Some(AccessLevel::Public) - } - } - } + fn item_ty_level(&self, item_def_id: DefId) -> Option { + let ty_def_id = match self.tcx.item_type(item_def_id).sty { + ty::TyAdt(adt, _) => adt.did, + ty::TyDynamic(ref obj, ..) 
if obj.principal().is_some() => + obj.principal().unwrap().def_id(), + ty::TyProjection(ref proj) => proj.trait_ref.def_id, + _ => return Some(AccessLevel::Public) + }; + if let Some(node_id) = self.tcx.map.as_local_node_id(ty_def_id) { + self.get(node_id) } else { Some(AccessLevel::Public) } } - fn trait_level(&self, trait_ref: &hir::TraitRef) -> Option { - let did = self.tcx.expect_def(trait_ref.ref_id).def_id(); - if let Some(node_id) = self.tcx.map.as_local_node_id(did) { - self.get(node_id) - } else { - Some(AccessLevel::Public) + fn impl_trait_level(&self, impl_def_id: DefId) -> Option { + if let Some(trait_ref) = self.tcx.impl_trait_ref(impl_def_id) { + if let Some(node_id) = self.tcx.map.as_local_node_id(trait_ref.def_id) { + return self.get(node_id); + } } + Some(AccessLevel::Public) } fn get(&self, id: ast::NodeId) -> Option { @@ -110,30 +108,32 @@ impl<'a, 'tcx> EmbargoVisitor<'a, 'tcx> { } } - fn reach<'b>(&'b mut self) -> ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { - ReachEverythingInTheInterfaceVisitor { ev: self } + fn reach<'b>(&'b mut self, item_id: ast::NodeId) + -> ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { + ReachEverythingInTheInterfaceVisitor { + item_def_id: self.tcx.map.local_def_id(item_id), + ev: self, + } } } -impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { +impl<'a, 'tcx> Visitor<'tcx> for EmbargoVisitor<'a, 'tcx> { /// We want to visit items in the context of their containing /// module and so forth, so supply a crate for doing a deep walk. - fn visit_nested_item(&mut self, item: hir::ItemId) { - let tcx = self.tcx; - self.visit_item(tcx.map.expect_item(item.id)) + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::All(&self.tcx.map) } - fn visit_item(&mut self, item: &hir::Item) { + fn visit_item(&mut self, item: &'tcx hir::Item) { let inherited_item_level = match item.node { // Impls inherit level from their types and traits - hir::ItemImpl(.., None, ref ty, _) => { - self.ty_level(&ty) + hir::ItemImpl(..) => { + let def_id = self.tcx.map.local_def_id(item.id); + cmp::min(self.item_ty_level(def_id), self.impl_trait_level(def_id)) } - hir::ItemImpl(.., Some(ref trait_ref), ref ty, _) => { - cmp::min(self.ty_level(&ty), self.trait_level(trait_ref)) - } - hir::ItemDefaultImpl(_, ref trait_ref) => { - self.trait_level(trait_ref) + hir::ItemDefaultImpl(..) => { + let def_id = self.tcx.map.local_def_id(item.id); + self.impl_trait_level(def_id) } // Foreign mods inherit level from parents hir::ItemForeignMod(..) => { @@ -158,15 +158,17 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { } } } - hir::ItemImpl(.., None, _, ref impl_items) => { - for impl_item in impl_items { + hir::ItemImpl(.., None, _, ref impl_item_refs) => { + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); if impl_item.vis == hir::Public { self.update(impl_item.id, item_level); } } } - hir::ItemImpl(.., Some(_), _, ref impl_items) => { - for impl_item in impl_items { + hir::ItemImpl(.., Some(_), _, ref impl_item_refs) => { + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); self.update(impl_item.id, item_level); } } @@ -203,22 +205,54 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { hir::ItemMod(..) => {} // Reexports are handled in visit_mod hir::ItemUse(..) => {} + // The interface is empty + hir::ItemDefaultImpl(..) => {} // Visit everything - hir::ItemConst(..) | hir::ItemStatic(..) 
| hir::ItemFn(..) | - hir::ItemTrait(..) | hir::ItemTy(..) | hir::ItemImpl(.., Some(..), _, _) => { + hir::ItemConst(..) | hir::ItemStatic(..) | + hir::ItemFn(..) | hir::ItemTy(..) => { if item_level.is_some() { - self.reach().visit_item(item); + self.reach(item.id).generics().predicates().item_type(); } } - // Visit everything, but enum variants have their own levels - hir::ItemEnum(ref def, ref generics) => { + hir::ItemTrait(.., ref trait_items) => { if item_level.is_some() { - self.reach().visit_generics(generics); + self.reach(item.id).generics().predicates(); + + for trait_item in trait_items { + let mut reach = self.reach(trait_item.id); + reach.generics().predicates(); + + if let hir::TypeTraitItem(_, None) = trait_item.node { + // No type to visit. + } else { + reach.item_type(); + } + } + } + } + // Visit everything except for private impl items + hir::ItemImpl(.., ref trait_ref, _, ref impl_items) => { + if item_level.is_some() { + self.reach(item.id).generics().predicates().impl_trait_ref(); + + for impl_item in impl_items { + let id = impl_item.id.node_id; + if trait_ref.is_some() || self.get(id).is_some() { + self.reach(id).generics().predicates().item_type(); + } + } + } + } + + // Visit everything, but enum variants have their own levels + hir::ItemEnum(ref def, _) => { + if item_level.is_some() { + self.reach(item.id).generics().predicates(); } for variant in &def.variants { if self.get(variant.node.data.id()).is_some() { for field in variant.node.data.fields() { - self.reach().visit_struct_field(field); + self.reach(field.id).item_type(); } // Corner case: if the variant is reachable, but its // enum is not, make the enum reachable as well. @@ -230,31 +264,18 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { hir::ItemForeignMod(ref foreign_mod) => { for foreign_item in &foreign_mod.items { if self.get(foreign_item.id).is_some() { - self.reach().visit_foreign_item(foreign_item); + self.reach(foreign_item.id).generics().predicates().item_type(); } } } // Visit everything except for private fields - hir::ItemStruct(ref struct_def, ref generics) | - hir::ItemUnion(ref struct_def, ref generics) => { + hir::ItemStruct(ref struct_def, _) | + hir::ItemUnion(ref struct_def, _) => { if item_level.is_some() { - self.reach().visit_generics(generics); + self.reach(item.id).generics().predicates(); for field in struct_def.fields() { if self.get(field.id).is_some() { - self.reach().visit_struct_field(field); - } - } - } - } - // The interface is empty - hir::ItemDefaultImpl(..) 
=> {} - // Visit everything except for private impl items - hir::ItemImpl(.., ref generics, None, _, ref impl_items) => { - if item_level.is_some() { - self.reach().visit_generics(generics); - for impl_item in impl_items { - if self.get(impl_item.id).is_some() { - self.reach().visit_impl_item(impl_item); + self.reach(field.id).item_type(); } } } @@ -269,7 +290,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { self.prev_level = orig_level; } - fn visit_block(&mut self, b: &'v hir::Block) { + fn visit_block(&mut self, b: &'tcx hir::Block) { let orig_level = replace(&mut self.prev_level, None); // Blocks can have public items, for example impls, but they always @@ -280,7 +301,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { self.prev_level = orig_level; } - fn visit_mod(&mut self, m: &hir::Mod, _sp: Span, id: ast::NodeId) { + fn visit_mod(&mut self, m: &'tcx hir::Mod, _sp: Span, id: ast::NodeId) { // This code is here instead of in visit_item so that the // crate module gets processed as well. if self.prev_level.is_some() { @@ -296,76 +317,72 @@ impl<'a, 'tcx, 'v> Visitor<'v> for EmbargoVisitor<'a, 'tcx> { intravisit::walk_mod(self, m, id); } - fn visit_macro_def(&mut self, md: &'v hir::MacroDef) { + fn visit_macro_def(&mut self, md: &'tcx hir::MacroDef) { self.update(md.id, Some(AccessLevel::Public)); } -} -impl<'b, 'a, 'tcx: 'a> ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { - // Make the type hidden under a type alias reachable - fn reach_aliased_type(&mut self, item: &hir::Item, path: &hir::Path) { - if let hir::ItemTy(ref ty, ref generics) = item.node { - // See `fn is_public_type_alias` for details - self.visit_ty(ty); - let provided_params = path.segments.last().unwrap().parameters.types().len(); - for ty_param in &generics.ty_params[provided_params..] { - if let Some(ref default_ty) = ty_param.default { - self.visit_ty(default_ty); - } - } - } - } -} - -impl<'b, 'a, 'tcx: 'a, 'v> Visitor<'v> for ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { - fn visit_ty(&mut self, ty: &hir::Ty) { - if let hir::TyPath(_, ref path) = ty.node { - let def = self.ev.tcx.expect_def(ty.id); - match def { - Def::Struct(def_id) | Def::Union(def_id) | Def::Enum(def_id) | - Def::TyAlias(def_id) | Def::Trait(def_id) | Def::AssociatedTy(def_id) => { - if let Some(mut node_id) = self.ev.tcx.map.as_local_node_id(def_id) { - // Check the trait for associated types. - if let hir::map::NodeTraitItem(_) = self.ev.tcx.map.get(node_id) { - node_id = self.ev.tcx.map.get_parent(node_id); - } - - let item = self.ev.tcx.map.expect_item(node_id); - if let Def::TyAlias(..) = def { - // Type aliases are substituted. Associated type aliases are not - // substituted yet, but ideally they should be. - if self.ev.get(item.id).is_none() { - self.reach_aliased_type(item, path); - } - } else { - self.ev.update(item.id, Some(AccessLevel::Reachable)); - } - } - } - - _ => {} + fn visit_ty(&mut self, ty: &'tcx hir::Ty) { + if let hir::TyImplTrait(..) = ty.node { + if self.get(ty.id).is_some() { + // Reach the (potentially private) type and the API being exposed. 
+ self.reach(ty.id).item_type().predicates(); } } intravisit::walk_ty(self, ty); } +} - fn visit_trait_ref(&mut self, trait_ref: &hir::TraitRef) { - let def_id = self.ev.tcx.expect_def(trait_ref.ref_id).def_id(); - if let Some(node_id) = self.ev.tcx.map.as_local_node_id(def_id) { +impl<'b, 'a, 'tcx> ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { + fn generics(&mut self) -> &mut Self { + self.ev.tcx.item_generics(self.item_def_id).visit_with(self); + self + } + + fn predicates(&mut self) -> &mut Self { + self.ev.tcx.item_predicates(self.item_def_id).visit_with(self); + self + } + + fn item_type(&mut self) -> &mut Self { + self.ev.tcx.item_type(self.item_def_id).visit_with(self); + self + } + + fn impl_trait_ref(&mut self) -> &mut Self { + self.ev.tcx.impl_trait_ref(self.item_def_id).visit_with(self); + self + } +} + +impl<'b, 'a, 'tcx> TypeVisitor<'tcx> for ReachEverythingInTheInterfaceVisitor<'b, 'a, 'tcx> { + fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool { + let ty_def_id = match ty.sty { + ty::TyAdt(adt, _) => Some(adt.did), + ty::TyDynamic(ref obj, ..) => obj.principal().map(|p| p.def_id()), + ty::TyProjection(ref proj) => Some(proj.trait_ref.def_id), + ty::TyFnDef(def_id, ..) | + ty::TyAnon(def_id, _) => Some(def_id), + _ => None + }; + + if let Some(def_id) = ty_def_id { + if let Some(node_id) = self.ev.tcx.map.as_local_node_id(def_id) { + self.ev.update(node_id, Some(AccessLevel::Reachable)); + } + } + + ty.super_visit_with(self) + } + + fn visit_trait_ref(&mut self, trait_ref: ty::TraitRef<'tcx>) -> bool { + if let Some(node_id) = self.ev.tcx.map.as_local_node_id(trait_ref.def_id) { let item = self.ev.tcx.map.expect_item(node_id); self.ev.update(item.id, Some(AccessLevel::Reachable)); } - intravisit::walk_trait_ref(self, trait_ref); + trait_ref.super_visit_with(self) } - - // Don't recurse into function bodies - fn visit_block(&mut self, _: &hir::Block) {} - // Don't recurse into expressions in array sizes or const initializers - fn visit_expr(&mut self, _: &hir::Expr) {} - // Don't recurse into patterns in function arguments - fn visit_pat(&mut self, _: &hir::Pat) {} } //////////////////////////////////////////////////////////////////////////////// @@ -388,7 +405,7 @@ impl<'a, 'tcx> PrivacyVisitor<'a, 'tcx> { } // Checks that a field is in scope. - fn check_field(&mut self, span: Span, def: ty::AdtDef<'tcx>, field: ty::FieldDef<'tcx>) { + fn check_field(&mut self, span: Span, def: &'tcx ty::AdtDef, field: &'tcx ty::FieldDef) { if !def.is_enum() && !field.vis.is_accessible_from(self.curitem, &self.tcx.map) { struct_span_err!(self.tcx.sess, span, E0451, "field `{}` of {} `{}` is private", field.name, def.variant_descr(), self.tcx.item_path_str(def.did)) @@ -399,7 +416,7 @@ impl<'a, 'tcx> PrivacyVisitor<'a, 'tcx> { // Checks that a method is in scope. fn check_method(&mut self, span: Span, method_def_id: DefId) { - match self.tcx.impl_or_trait_item(method_def_id).container() { + match self.tcx.associated_item(method_def_id).container { // Trait methods are always all public. The only controlling factor // is whether the trait itself is accessible or not. ty::TraitContainer(trait_def_id) if !self.item_is_accessible(trait_def_id) => { @@ -412,30 +429,30 @@ impl<'a, 'tcx> PrivacyVisitor<'a, 'tcx> { } } -impl<'a, 'tcx, 'v> Visitor<'v> for PrivacyVisitor<'a, 'tcx> { +impl<'a, 'tcx> Visitor<'tcx> for PrivacyVisitor<'a, 'tcx> { /// We want to visit items in the context of their containing /// module and so forth, so supply a crate for doing a deep walk. 
- fn visit_nested_item(&mut self, item: hir::ItemId) { - let tcx = self.tcx; - self.visit_item(tcx.map.expect_item(item.id)) + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::All(&self.tcx.map) } - fn visit_item(&mut self, item: &hir::Item) { + fn visit_item(&mut self, item: &'tcx hir::Item) { let orig_curitem = replace(&mut self.curitem, item.id); intravisit::walk_item(self, item); self.curitem = orig_curitem; } - fn visit_expr(&mut self, expr: &hir::Expr) { + fn visit_expr(&mut self, expr: &'tcx hir::Expr) { match expr.node { hir::ExprMethodCall(..) => { let method_call = ty::MethodCall::expr(expr.id); let method = self.tcx.tables().method_map[&method_call]; self.check_method(expr.span, method.def_id); } - hir::ExprStruct(_, ref expr_fields, _) => { + hir::ExprStruct(ref qpath, ref expr_fields, _) => { + let def = self.tcx.tables().qpath_def(qpath, expr.id); let adt = self.tcx.tables().expr_ty(expr).ty_adt_def().unwrap(); - let variant = adt.variant_of_def(self.tcx.expect_def(expr.id)); + let variant = adt.variant_of_def(def); // RFC 736: ensure all unmentioned fields are visible. // Rather than computing the set of unmentioned fields // (i.e. `all_fields - fields`), just check them all, @@ -453,9 +470,9 @@ impl<'a, 'tcx, 'v> Visitor<'v> for PrivacyVisitor<'a, 'tcx> { } } } - hir::ExprPath(..) => { - if let def @ Def::StructCtor(_, CtorKind::Fn) = self.tcx.expect_def(expr.id) { - let adt_def = self.tcx.expect_variant_def(def); + hir::ExprPath(hir::QPath::Resolved(_, ref path)) => { + if let Def::StructCtor(_, CtorKind::Fn) = path.def { + let adt_def = self.tcx.expect_variant_def(path.def); let private_indexes = adt_def.fields.iter().enumerate().filter(|&(_, field)| { !field.vis.is_accessible_from(self.curitem, &self.tcx.map) }).map(|(i, _)| i).collect::>(); @@ -486,7 +503,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for PrivacyVisitor<'a, 'tcx> { intravisit::walk_expr(self, expr); } - fn visit_pat(&mut self, pattern: &hir::Pat) { + fn visit_pat(&mut self, pattern: &'tcx hir::Pat) { // Foreign functions do not have their patterns mapped in the def_map, // and there's nothing really relevant there anyway, so don't bother // checking privacy. If you can name the type then you can pass it to an @@ -494,9 +511,10 @@ impl<'a, 'tcx, 'v> Visitor<'v> for PrivacyVisitor<'a, 'tcx> { if self.in_foreign { return } match pattern.node { - PatKind::Struct(_, ref fields, _) => { + PatKind::Struct(ref qpath, ref fields, _) => { + let def = self.tcx.tables().qpath_def(qpath, pattern.id); let adt = self.tcx.tables().pat_ty(pattern).ty_adt_def().unwrap(); - let variant = adt.variant_of_def(self.tcx.expect_def(pattern.id)); + let variant = adt.variant_of_def(def); for field in fields { self.check_field(field.span, adt, variant.field_named(field.node.name)); } @@ -522,7 +540,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for PrivacyVisitor<'a, 'tcx> { intravisit::walk_pat(self, pattern); } - fn visit_foreign_item(&mut self, fi: &hir::ForeignItem) { + fn visit_foreign_item(&mut self, fi: &'tcx hir::ForeignItem) { self.in_foreign = true; intravisit::walk_foreign_item(self, fi); self.in_foreign = false; @@ -556,8 +574,8 @@ struct ObsoleteCheckTypeForPrivatenessVisitor<'a, 'b: 'a, 'tcx: 'b> { } impl<'a, 'tcx> ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> { - fn path_is_private_type(&self, path_id: ast::NodeId) -> bool { - let did = match self.tcx.expect_def(path_id) { + fn path_is_private_type(&self, path: &hir::Path) -> bool { + let did = match path.def { Def::PrimTy(..) 
| Def::SelfTy(..) => return false, def => def.def_id(), }; @@ -585,7 +603,7 @@ impl<'a, 'tcx> ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> { fn check_ty_param_bound(&mut self, ty_param_bound: &hir::TyParamBound) { if let hir::TraitTyParamBound(ref trait_ref, _) = *ty_param_bound { - if self.path_is_private_type(trait_ref.trait_ref.ref_id) { + if self.path_is_private_type(&trait_ref.trait_ref.path) { self.old_error_set.insert(trait_ref.trait_ref.ref_id); } } @@ -597,14 +615,21 @@ impl<'a, 'tcx> ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> { } impl<'a, 'b, 'tcx, 'v> Visitor<'v> for ObsoleteCheckTypeForPrivatenessVisitor<'a, 'b, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'v> { + NestedVisitorMap::None + } + fn visit_ty(&mut self, ty: &hir::Ty) { - if let hir::TyPath(..) = ty.node { - if self.inner.path_is_private_type(ty.id) { + if let hir::TyPath(hir::QPath::Resolved(_, ref path)) = ty.node { + if self.inner.path_is_private_type(path) { self.contains_private = true; // found what we're looking for so let's stop // working. return - } else if self.at_outer_type { + } + } + if let hir::TyPath(_) = ty.node { + if self.at_outer_type { self.outer_type_is_public_path = true; } } @@ -616,15 +641,14 @@ impl<'a, 'b, 'tcx, 'v> Visitor<'v> for ObsoleteCheckTypeForPrivatenessVisitor<'a fn visit_expr(&mut self, _: &hir::Expr) {} } -impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> { +impl<'a, 'tcx> Visitor<'tcx> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> { /// We want to visit items in the context of their containing /// module and so forth, so supply a crate for doing a deep walk. - fn visit_nested_item(&mut self, item: hir::ItemId) { - let tcx = self.tcx; - self.visit_item(tcx.map.expect_item(item.id)) + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::All(&self.tcx.map) } - fn visit_item(&mut self, item: &hir::Item) { + fn visit_item(&mut self, item: &'tcx hir::Item) { match item.node { // contents of a private mod can be reexported, so we need // to check internals. @@ -649,7 +673,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> // (i.e. we could just return here to not check them at // all, or some worse estimation of whether an impl is // publicly visible). - hir::ItemImpl(.., ref g, ref trait_ref, ref self_, ref impl_items) => { + hir::ItemImpl(.., ref g, ref trait_ref, ref self_, ref impl_item_refs) => { // `impl [... for] Private` is never visible. let self_contains_private; // impl [... for] Public<...>, but not `impl [... for] @@ -675,7 +699,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> let not_private_trait = trait_ref.as_ref().map_or(true, // no trait counts as public trait |tr| { - let did = self.tcx.expect_def(tr.ref_id).def_id(); + let did = tr.path.def.def_id(); if let Some(node_id) = self.tcx.map.as_local_node_id(did) { self.trait_is_public(node_id) @@ -694,16 +718,17 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> // are private (because `T` won't be visible externally). let trait_or_some_public_method = trait_ref.is_some() || - impl_items.iter() - .any(|impl_item| { - match impl_item.node { - hir::ImplItemKind::Const(..) | - hir::ImplItemKind::Method(..) 
=> { - self.access_levels.is_reachable(impl_item.id) - } - hir::ImplItemKind::Type(_) => false, - } - }); + impl_item_refs.iter() + .any(|impl_item_ref| { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); + match impl_item.node { + hir::ImplItemKind::Const(..) | + hir::ImplItemKind::Method(..) => { + self.access_levels.is_reachable(impl_item.id) + } + hir::ImplItemKind::Type(_) => false, + } + }); if !self_contains_private && not_private_trait && @@ -713,12 +738,13 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> match *trait_ref { None => { - for impl_item in impl_items { + for impl_item_ref in impl_item_refs { // This is where we choose whether to walk down // further into the impl to check its items. We // should only walk into public items so that we // don't erroneously report errors for private // types in private items. + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); match impl_item.node { hir::ImplItemKind::Const(..) | hir::ImplItemKind::Method(..) @@ -750,7 +776,8 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> intravisit::walk_path(self, &tr.path); // Those in 3. are warned with this call. - for impl_item in impl_items { + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); if let hir::ImplItemKind::Type(ref ty) = impl_item.node { self.visit_ty(ty); } @@ -761,7 +788,8 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> // impl Public { ... }. Any public static // methods will be visible as `Public::foo`. let mut found_pub_static = false; - for impl_item in impl_items { + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); match impl_item.node { hir::ImplItemKind::Const(..) => { if self.item_is_public(&impl_item.id, &impl_item.vis) { @@ -805,7 +833,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> intravisit::walk_item(self, item); } - fn visit_generics(&mut self, generics: &hir::Generics) { + fn visit_generics(&mut self, generics: &'tcx hir::Generics) { for ty_param in generics.ty_params.iter() { for bound in ty_param.bounds.iter() { self.check_ty_param_bound(bound) @@ -826,22 +854,25 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> } } - fn visit_foreign_item(&mut self, item: &hir::ForeignItem) { + fn visit_foreign_item(&mut self, item: &'tcx hir::ForeignItem) { if self.access_levels.is_reachable(item.id) { intravisit::walk_foreign_item(self, item) } } - fn visit_ty(&mut self, t: &hir::Ty) { - if let hir::TyPath(..) 
= t.node { - if self.path_is_private_type(t.id) { + fn visit_ty(&mut self, t: &'tcx hir::Ty) { + if let hir::TyPath(hir::QPath::Resolved(_, ref path)) = t.node { + if self.path_is_private_type(path) { self.old_error_set.insert(t.id); } } intravisit::walk_ty(self, t) } - fn visit_variant(&mut self, v: &hir::Variant, g: &hir::Generics, item_id: ast::NodeId) { + fn visit_variant(&mut self, + v: &'tcx hir::Variant, + g: &'tcx hir::Generics, + item_id: ast::NodeId) { if self.access_levels.is_reachable(v.node.data.id()) { self.in_variant = true; intravisit::walk_variant(self, v, g, item_id); @@ -849,7 +880,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> } } - fn visit_struct_field(&mut self, s: &hir::StructField) { + fn visit_struct_field(&mut self, s: &'tcx hir::StructField) { if s.vis == hir::Public || self.in_variant { intravisit::walk_struct_field(self, s); } @@ -859,8 +890,8 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> // expression/block context can't possibly contain exported things. // (Making them no-ops stops us from traversing the whole AST without // having to be super careful about our `walk_...` calls above.) - fn visit_block(&mut self, _: &hir::Block) {} - fn visit_expr(&mut self, _: &hir::Expr) {} + fn visit_block(&mut self, _: &'tcx hir::Block) {} + fn visit_expr(&mut self, _: &'tcx hir::Expr) {} } /////////////////////////////////////////////////////////////////////////////// @@ -872,117 +903,96 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ObsoleteVisiblePrivateTypesVisitor<'a, 'tcx> struct SearchInterfaceForPrivateItemsVisitor<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, + item_def_id: DefId, + span: Span, /// The visitor checks that each component type is at least this visible required_visibility: ty::Visibility, /// The visibility of the least visible component that has been visited min_visibility: ty::Visibility, - old_error_set: &'a NodeSet, + has_old_errors: bool, } impl<'a, 'tcx: 'a> SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> { - fn new(tcx: TyCtxt<'a, 'tcx, 'tcx>, old_error_set: &'a NodeSet) -> Self { - SearchInterfaceForPrivateItemsVisitor { - tcx: tcx, - min_visibility: ty::Visibility::Public, - required_visibility: ty::Visibility::PrivateExternal, - old_error_set: old_error_set, - } + fn generics(&mut self) -> &mut Self { + self.tcx.item_generics(self.item_def_id).visit_with(self); + self + } + + fn predicates(&mut self) -> &mut Self { + self.tcx.item_predicates(self.item_def_id).visit_with(self); + self + } + + fn item_type(&mut self) -> &mut Self { + self.tcx.item_type(self.item_def_id).visit_with(self); + self + } + + fn impl_trait_ref(&mut self) -> &mut Self { + self.tcx.impl_trait_ref(self.item_def_id).visit_with(self); + self } } -impl<'a, 'tcx: 'a> SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> { - // Return the visibility of the type alias's least visible component type when substituted - fn substituted_alias_visibility(&self, item: &hir::Item, path: &hir::Path) - -> Option { - // Type alias is considered public if the aliased type is - // public, even if the type alias itself is private. So, something - // like `type A = u8; pub fn f() -> A {...}` doesn't cause an error. 
- if let hir::ItemTy(ref ty, ref generics) = item.node { - let mut check = SearchInterfaceForPrivateItemsVisitor::new(self.tcx, - self.old_error_set); - check.visit_ty(ty); - // If a private type alias with default type parameters is used in public - // interface we must ensure, that the defaults are public if they are actually used. - // ``` - // type Alias = T; - // pub fn f() -> Alias {...} // `Private` is implicitly used here, so it must be public - // ``` - let provided_params = path.segments.last().unwrap().parameters.types().len(); - for ty_param in &generics.ty_params[provided_params..] { - if let Some(ref default_ty) = ty_param.default { - check.visit_ty(default_ty); - } - } - Some(check.min_visibility) - } else { - None - } - } -} - -impl<'a, 'tcx: 'a, 'v> Visitor<'v> for SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> { - fn visit_ty(&mut self, ty: &hir::Ty) { - if let hir::TyPath(_, ref path) = ty.node { - match self.tcx.expect_def(ty.id) { - Def::PrimTy(..) | Def::SelfTy(..) | Def::TyParam(..) => { - // Public - } - Def::AssociatedTy(..) - if self.required_visibility == ty::Visibility::PrivateExternal => { +impl<'a, 'tcx: 'a> TypeVisitor<'tcx> for SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> { + fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool { + let ty_def_id = match ty.sty { + ty::TyAdt(adt, _) => Some(adt.did), + ty::TyDynamic(ref obj, ..) => obj.principal().map(|p| p.def_id()), + ty::TyProjection(ref proj) => { + if self.required_visibility == ty::Visibility::PrivateExternal { // Conservatively approximate the whole type alias as public without // recursing into its components when determining impl publicity. // For example, `impl ::Alias {...}` may be a public impl // even if both `Type` and `Trait` are private. // Ideally, associated types should be substituted in the same way as // free type aliases, but this isn't done yet. - return + return false; } - Def::Struct(def_id) | Def::Union(def_id) | Def::Enum(def_id) | - Def::TyAlias(def_id) | Def::Trait(def_id) | Def::AssociatedTy(def_id) => { - // Non-local means public (private items can't leave their crate, modulo bugs) - if let Some(mut node_id) = self.tcx.map.as_local_node_id(def_id) { - // Check the trait for associated types. 
- if let hir::map::NodeTraitItem(_) = self.tcx.map.get(node_id) { - node_id = self.tcx.map.get_parent(node_id); - } - let item = self.tcx.map.expect_item(node_id); - let vis = match self.substituted_alias_visibility(item, path) { - Some(vis) => vis, - None => ty::Visibility::from_hir(&item.vis, node_id, self.tcx), - }; + Some(proj.trait_ref.def_id) + } + _ => None + }; - if !vis.is_at_least(self.min_visibility, &self.tcx.map) { - self.min_visibility = vis; - } - if !vis.is_at_least(self.required_visibility, &self.tcx.map) { - if self.tcx.sess.features.borrow().pub_restricted || - self.old_error_set.contains(&ty.id) { - let mut err = struct_span_err!(self.tcx.sess, ty.span, E0446, - "private type in public interface"); - err.span_label(ty.span, &format!("can't leak private type")); - err.emit(); - } else { - self.tcx.sess.add_lint(lint::builtin::PRIVATE_IN_PUBLIC, - node_id, - ty.span, - format!("private type in public \ - interface (error E0446)")); - } - } + if let Some(def_id) = ty_def_id { + // Non-local means public (private items can't leave their crate, modulo bugs) + if let Some(node_id) = self.tcx.map.as_local_node_id(def_id) { + let item = self.tcx.map.expect_item(node_id); + let vis = ty::Visibility::from_hir(&item.vis, node_id, self.tcx); + + if !vis.is_at_least(self.min_visibility, &self.tcx.map) { + self.min_visibility = vis; + } + if !vis.is_at_least(self.required_visibility, &self.tcx.map) { + if self.tcx.sess.features.borrow().pub_restricted || self.has_old_errors { + let mut err = struct_span_err!(self.tcx.sess, self.span, E0446, + "private type `{}` in public interface", ty); + err.span_label(self.span, &format!("can't leak private type")); + err.emit(); + } else { + self.tcx.sess.add_lint(lint::builtin::PRIVATE_IN_PUBLIC, + node_id, + self.span, + format!("private type `{}` in public \ + interface (error E0446)", ty)); } } - _ => {} } } - intravisit::walk_ty(self, ty); + if let ty::TyProjection(ref proj) = ty.sty { + // Avoid calling `visit_trait_ref` below on the trait, + // as we have already checked the trait itself above. 
+ proj.trait_ref.super_visit_with(self) + } else { + ty.super_visit_with(self) + } } - fn visit_trait_ref(&mut self, trait_ref: &hir::TraitRef) { + fn visit_trait_ref(&mut self, trait_ref: ty::TraitRef<'tcx>) -> bool { // Non-local means public (private items can't leave their crate, modulo bugs) - let def_id = self.tcx.expect_def(trait_ref.ref_id).def_id(); - if let Some(node_id) = self.tcx.map.as_local_node_id(def_id) { + if let Some(node_id) = self.tcx.map.as_local_node_id(trait_ref.def_id) { let item = self.tcx.map.expect_item(node_id); let vis = ty::Visibility::from_hir(&item.vis, node_id, self.tcx); @@ -990,63 +1000,81 @@ impl<'a, 'tcx: 'a, 'v> Visitor<'v> for SearchInterfaceForPrivateItemsVisitor<'a, self.min_visibility = vis; } if !vis.is_at_least(self.required_visibility, &self.tcx.map) { - if self.tcx.sess.features.borrow().pub_restricted || - self.old_error_set.contains(&trait_ref.ref_id) { - struct_span_err!(self.tcx.sess, trait_ref.path.span, E0445, - "private trait in public interface") - .span_label(trait_ref.path.span, &format!( + if self.tcx.sess.features.borrow().pub_restricted || self.has_old_errors { + struct_span_err!(self.tcx.sess, self.span, E0445, + "private trait `{}` in public interface", trait_ref) + .span_label(self.span, &format!( "private trait can't be public")) .emit(); } else { self.tcx.sess.add_lint(lint::builtin::PRIVATE_IN_PUBLIC, node_id, - trait_ref.path.span, - "private trait in public interface (error E0445)" - .to_string()); + self.span, + format!("private trait `{}` in public \ + interface (error E0445)", trait_ref)); } } } - intravisit::walk_trait_ref(self, trait_ref); + trait_ref.super_visit_with(self) } - - // Don't recurse into function bodies - fn visit_block(&mut self, _: &hir::Block) {} - // Don't recurse into expressions in array sizes or const initializers - fn visit_expr(&mut self, _: &hir::Expr) {} - // Don't recurse into patterns in function arguments - fn visit_pat(&mut self, _: &hir::Pat) {} } struct PrivateItemsInPublicInterfacesVisitor<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, old_error_set: &'a NodeSet, + inner_visibility: ty::Visibility, } impl<'a, 'tcx> PrivateItemsInPublicInterfacesVisitor<'a, 'tcx> { - // A type is considered public if it doesn't contain any private components - fn ty_visibility(&self, ty: &hir::Ty) -> ty::Visibility { - let mut check = SearchInterfaceForPrivateItemsVisitor::new(self.tcx, self.old_error_set); - check.visit_ty(ty); - check.min_visibility - } + fn check(&self, item_id: ast::NodeId, required_visibility: ty::Visibility) + -> SearchInterfaceForPrivateItemsVisitor<'a, 'tcx> { + let mut has_old_errors = false; - // A trait reference is considered public if it doesn't contain any private components - fn trait_ref_visibility(&self, trait_ref: &hir::TraitRef) -> ty::Visibility { - let mut check = SearchInterfaceForPrivateItemsVisitor::new(self.tcx, self.old_error_set); - check.visit_trait_ref(trait_ref); - check.min_visibility + // Slow path taken only if there any errors in the crate. + for &id in self.old_error_set { + // Walk up the nodes until we find `item_id` (or we hit a root). 
+ let mut id = id; + loop { + if id == item_id { + has_old_errors = true; + break; + } + let parent = self.tcx.map.get_parent_node(id); + if parent == id { + break; + } + id = parent; + } + + if has_old_errors { + break; + } + } + + SearchInterfaceForPrivateItemsVisitor { + tcx: self.tcx, + item_def_id: self.tcx.map.local_def_id(item_id), + span: self.tcx.map.span(item_id), + min_visibility: ty::Visibility::Public, + required_visibility: required_visibility, + has_old_errors: has_old_errors, + } } } -impl<'a, 'tcx, 'v> Visitor<'v> for PrivateItemsInPublicInterfacesVisitor<'a, 'tcx> { - fn visit_item(&mut self, item: &hir::Item) { +impl<'a, 'tcx> Visitor<'tcx> for PrivateItemsInPublicInterfacesVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + + fn visit_item(&mut self, item: &'tcx hir::Item) { + let tcx = self.tcx; let min = |vis1: ty::Visibility, vis2| { - if vis1.is_at_least(vis2, &self.tcx.map) { vis2 } else { vis1 } + if vis1.is_at_least(vis2, &tcx.map) { vis2 } else { vis1 } }; - let mut check = SearchInterfaceForPrivateItemsVisitor::new(self.tcx, self.old_error_set); - let item_visibility = ty::Visibility::from_hir(&item.vis, item.id, self.tcx); + let item_visibility = ty::Visibility::from_hir(&item.vis, item.id, tcx); match item.node { // Crates are always public @@ -1057,58 +1085,113 @@ impl<'a, 'tcx, 'v> Visitor<'v> for PrivateItemsInPublicInterfacesVisitor<'a, 'tc hir::ItemUse(..) => {} // Subitems of these items have inherited publicity hir::ItemConst(..) | hir::ItemStatic(..) | hir::ItemFn(..) | - hir::ItemEnum(..) | hir::ItemTrait(..) | hir::ItemTy(..) => { - check.required_visibility = item_visibility; - check.visit_item(item); + hir::ItemTy(..) => { + self.check(item.id, item_visibility).generics().predicates().item_type(); + + // Recurse for e.g. `impl Trait` (see `visit_ty`). + self.inner_visibility = item_visibility; + intravisit::walk_item(self, item); + } + hir::ItemTrait(.., ref trait_items) => { + self.check(item.id, item_visibility).generics().predicates(); + + for trait_item in trait_items { + let mut check = self.check(trait_item.id, item_visibility); + check.generics().predicates(); + + if let hir::TypeTraitItem(_, None) = trait_item.node { + // No type to visit. 
+ } else { + check.item_type(); + } + } + } + hir::ItemEnum(ref def, _) => { + self.check(item.id, item_visibility).generics().predicates(); + + for variant in &def.variants { + for field in variant.node.data.fields() { + self.check(field.id, item_visibility).item_type(); + } + } } // Subitems of foreign modules have their own publicity hir::ItemForeignMod(ref foreign_mod) => { for foreign_item in &foreign_mod.items { - check.required_visibility = - ty::Visibility::from_hir(&foreign_item.vis, item.id, self.tcx); - check.visit_foreign_item(foreign_item); + let vis = ty::Visibility::from_hir(&foreign_item.vis, item.id, tcx); + self.check(foreign_item.id, vis).generics().predicates().item_type(); } } // Subitems of structs and unions have their own publicity - hir::ItemStruct(ref struct_def, ref generics) | - hir::ItemUnion(ref struct_def, ref generics) => { - check.required_visibility = item_visibility; - check.visit_generics(generics); + hir::ItemStruct(ref struct_def, _) | + hir::ItemUnion(ref struct_def, _) => { + self.check(item.id, item_visibility).generics().predicates(); for field in struct_def.fields() { - let field_visibility = ty::Visibility::from_hir(&field.vis, item.id, self.tcx); - check.required_visibility = min(item_visibility, field_visibility); - check.visit_struct_field(field); + let field_visibility = ty::Visibility::from_hir(&field.vis, item.id, tcx); + self.check(field.id, min(item_visibility, field_visibility)).item_type(); } } // The interface is empty hir::ItemDefaultImpl(..) => {} // An inherent impl is public when its type is public // Subitems of inherent impls have their own publicity - hir::ItemImpl(.., ref generics, None, ref ty, ref impl_items) => { - let ty_vis = self.ty_visibility(ty); - check.required_visibility = ty_vis; - check.visit_generics(generics); + hir::ItemImpl(.., None, _, ref impl_item_refs) => { + let ty_vis = self.check(item.id, ty::Visibility::PrivateExternal) + .item_type().min_visibility; + self.check(item.id, ty_vis).generics().predicates(); - for impl_item in impl_items { + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); let impl_item_vis = - ty::Visibility::from_hir(&impl_item.vis, item.id, self.tcx); - check.required_visibility = min(impl_item_vis, ty_vis); - check.visit_impl_item(impl_item); + ty::Visibility::from_hir(&impl_item.vis, item.id, tcx); + self.check(impl_item.id, min(impl_item_vis, ty_vis)) + .generics().predicates().item_type(); + + // Recurse for e.g. `impl Trait` (see `visit_ty`). + self.inner_visibility = impl_item_vis; + intravisit::walk_impl_item(self, impl_item); } } // A trait impl is public when both its type and its trait are public // Subitems of trait impls have inherited publicity - hir::ItemImpl(.., ref generics, Some(ref trait_ref), ref ty, ref impl_items) => { - let vis = min(self.ty_visibility(ty), self.trait_ref_visibility(trait_ref)); - check.required_visibility = vis; - check.visit_generics(generics); - for impl_item in impl_items { - check.visit_impl_item(impl_item); + hir::ItemImpl(.., Some(_), _, ref impl_item_refs) => { + let vis = self.check(item.id, ty::Visibility::PrivateExternal) + .item_type().impl_trait_ref().min_visibility; + self.check(item.id, vis).generics().predicates(); + for impl_item_ref in impl_item_refs { + let impl_item = self.tcx.map.impl_item(impl_item_ref.id); + self.check(impl_item.id, vis).generics().predicates().item_type(); + + // Recurse for e.g. `impl Trait` (see `visit_ty`). 
+ self.inner_visibility = vis; + intravisit::walk_impl_item(self, impl_item); } } } } + + fn visit_impl_item(&mut self, _impl_item: &'tcx hir::ImplItem) { + // handled in `visit_item` above + } + + fn visit_ty(&mut self, ty: &'tcx hir::Ty) { + if let hir::TyImplTrait(..) = ty.node { + // Check the traits being exposed, as they're separate, + // e.g. `impl Iterator` has two predicates, + // `X: Iterator` and `::Item == T`, + // where `X` is the `impl Iterator` itself, + // stored in `item_predicates`, not in the `Ty` itself. + self.check(ty.id, self.inner_visibility).predicates(); + } + + intravisit::walk_ty(self, ty); + } + + // Don't recurse into expressions in array sizes or const initializers + fn visit_expr(&mut self, _: &'tcx hir::Expr) {} + // Don't recurse into patterns in function arguments + fn visit_pat(&mut self, _: &'tcx hir::Pat) {} } pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, @@ -1160,8 +1243,9 @@ pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let mut visitor = PrivateItemsInPublicInterfacesVisitor { tcx: tcx, old_error_set: &visitor.old_error_set, + inner_visibility: ty::Visibility::Public, }; - krate.visit_all_items(&mut visitor); + krate.visit_all_item_likes(&mut DeepVisitor::new(&mut visitor)); } visitor.access_levels diff --git a/src/librustc_resolve/build_reduced_graph.rs b/src/librustc_resolve/build_reduced_graph.rs index d90fe769ca..45f5da5f11 100644 --- a/src/librustc_resolve/build_reduced_graph.rs +++ b/src/librustc_resolve/build_reduced_graph.rs @@ -14,48 +14,55 @@ //! any imports resolved. use macros::{InvocationData, LegacyScope}; -use resolve_imports::ImportDirectiveSubclass::{self, GlobImport}; -use {Module, ModuleS, ModuleKind}; -use Namespace::{self, TypeNS, ValueNS}; -use {NameBinding, NameBindingKind, ToNameBinding}; -use Resolver; +use resolve_imports::ImportDirective; +use resolve_imports::ImportDirectiveSubclass::{self, GlobImport, SingleImport}; +use {Resolver, Module, ModuleS, ModuleKind, NameBinding, NameBindingKind, ToNameBinding}; +use Namespace::{self, TypeNS, ValueNS, MacroNS}; use {resolve_error, resolve_struct_error, ResolutionError}; -use rustc::middle::cstore::LoadedMacros; +use rustc::middle::cstore::LoadedMacro; use rustc::hir::def::*; -use rustc::hir::def_id::{CRATE_DEF_INDEX, DefId}; +use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefId}; use rustc::ty; -use rustc::util::nodemap::FnvHashMap; use std::cell::Cell; use std::rc::Rc; use syntax::ast::Name; use syntax::attr; -use syntax::parse::token; use syntax::ast::{self, Block, ForeignItem, ForeignItemKind, Item, ItemKind}; use syntax::ast::{Mutability, StmtKind, TraitItem, TraitItemKind}; use syntax::ast::{Variant, ViewPathGlob, ViewPathList, ViewPathSimple}; -use syntax::ext::base::{SyntaxExtension, Resolver as SyntaxResolver}; +use syntax::ext::base::SyntaxExtension; +use syntax::ext::base::Determinacy::Undetermined; use syntax::ext::expand::mark_tts; use syntax::ext::hygiene::Mark; -use syntax::feature_gate::{self, emit_feature_err}; use syntax::ext::tt::macro_rules; -use syntax::parse::token::keywords; +use syntax::symbol::keywords; use syntax::visit::{self, Visitor}; use syntax_pos::{Span, DUMMY_SP}; -impl<'a> ToNameBinding<'a> for (Module<'a>, Span, ty::Visibility) { +impl<'a> ToNameBinding<'a> for (Module<'a>, ty::Visibility, Span, Mark) { fn to_name_binding(self) -> NameBinding<'a> { - NameBinding { kind: NameBindingKind::Module(self.0), span: self.1, vis: self.2 } + NameBinding { + kind: NameBindingKind::Module(self.0), + vis: self.1, + span: self.2, 
+ expansion: self.3, + } } } -impl<'a> ToNameBinding<'a> for (Def, Span, ty::Visibility) { +impl<'a> ToNameBinding<'a> for (Def, ty::Visibility, Span, Mark) { fn to_name_binding(self) -> NameBinding<'a> { - NameBinding { kind: NameBindingKind::Def(self.0), span: self.1, vis: self.2 } + NameBinding { + kind: NameBindingKind::Def(self.0), + vis: self.1, + span: self.2, + expansion: self.3, + } } } @@ -64,7 +71,6 @@ struct LegacyMacroImports { import_all: Option, imports: Vec<(Name, Span)>, reexports: Vec<(Name, Span)>, - no_link: bool, } impl<'b> Resolver<'b> { @@ -129,9 +135,9 @@ impl<'b> Resolver<'b> { let is_prelude = attr::contains_name(&item.attrs, "prelude_import"); match view_path.node { - ViewPathSimple(binding, ref full_path) => { + ViewPathSimple(mut binding, ref full_path) => { let mut source = full_path.segments.last().unwrap().identifier; - let source_name = source.name.as_str(); + let source_name = source.name; if source_name == "mod" || source_name == "self" { resolve_error(self, view_path.span, @@ -143,6 +149,9 @@ impl<'b> Resolver<'b> { ModuleKind::Block(..) => unreachable!(), }; source.name = crate_name; + if binding.name == "$crate" { + binding.name = crate_name; + } self.session.struct_span_warn(item.span, "`$crate` may not be imported") .note("`use $crate;` was erroneously allowed and \ @@ -150,9 +159,14 @@ impl<'b> Resolver<'b> { .emit(); } - let subclass = ImportDirectiveSubclass::single(binding.name, source.name); - let span = view_path.span; - self.add_import_directive(module_path, subclass, span, item.id, vis); + let subclass = SingleImport { + target: binding.name, + source: source.name, + result: self.per_ns(|_, _| Cell::new(Err(Undetermined))), + }; + self.add_import_directive( + module_path, subclass, view_path.span, item.id, vis, expansion, + ); } ViewPathList(_, ref source_items) => { // Make sure there's at most one `mod` import in the list. @@ -198,9 +212,15 @@ impl<'b> Resolver<'b> { (module_path.to_vec(), name, rename) } }; - let subclass = ImportDirectiveSubclass::single(rename, name); - let (span, id) = (source_item.span, source_item.node.id); - self.add_import_directive(module_path, subclass, span, id, vis); + let subclass = SingleImport { + target: rename, + source: name, + result: self.per_ns(|_, _| Cell::new(Err(Undetermined))), + }; + let id = source_item.node.id; + self.add_import_directive( + module_path, subclass, source_item.span, id, vis, expansion, + ); } } ViewPathGlob(_) => { @@ -208,60 +228,35 @@ impl<'b> Resolver<'b> { is_prelude: is_prelude, max_vis: Cell::new(ty::Visibility::PrivateExternal), }; - let span = view_path.span; - self.add_import_directive(module_path, subclass, span, item.id, vis); + self.add_import_directive( + module_path, subclass, view_path.span, item.id, vis, expansion, + ); } } } ItemKind::ExternCrate(_) => { - let legacy_imports = self.legacy_macro_imports(&item.attrs); - // `#[macro_use]` and `#[macro_reexport]` are only allowed at the crate root. 
- if self.current_module.parent.is_some() && { - legacy_imports.import_all.is_some() || !legacy_imports.imports.is_empty() || - !legacy_imports.reexports.is_empty() - } { - if self.current_module.parent.is_some() { - span_err!(self.session, item.span, E0468, - "an `extern crate` loading macros must be at the crate root"); - } - } - - let loaded_macros = if legacy_imports != LegacyMacroImports::default() { - self.crate_loader.process_item(item, &self.definitions, true) - } else { - self.crate_loader.process_item(item, &self.definitions, false) - }; + self.crate_loader.process_item(item, &self.definitions); // n.b. we don't need to look at the path option here, because cstore already did - let crate_id = self.session.cstore.extern_mod_stmt_cnum(item.id); - let module = if let Some(crate_id) = crate_id { - let def_id = DefId { - krate: crate_id, - index: CRATE_DEF_INDEX, - }; - let module = self.arenas.alloc_module(ModuleS { - extern_crate_id: Some(item.id), - populated: Cell::new(false), - ..ModuleS::new(Some(parent), ModuleKind::Def(Def::Mod(def_id), name)) - }); - self.define(parent, name, TypeNS, (module, sp, vis)); - self.populate_module_if_necessary(module); - module - } else { - // Define an empty module - let def = Def::Mod(self.definitions.local_def_id(item.id)); - let module = ModuleS::new(Some(parent), ModuleKind::Def(def, name)); - let module = self.arenas.alloc_module(module); - self.define(parent, name, TypeNS, (module, sp, vis)); - module - }; - - if let Some(loaded_macros) = loaded_macros { - self.import_extern_crate_macros( - item, module, loaded_macros, legacy_imports, expansion == Mark::root(), - ); - } + let crate_id = self.session.cstore.extern_mod_stmt_cnum(item.id).unwrap(); + let module = self.get_extern_crate_root(crate_id); + let binding = (module, ty::Visibility::Public, sp, expansion).to_name_binding(); + let binding = self.arenas.alloc_name_binding(binding); + let directive = self.arenas.alloc_import_directive(ImportDirective { + id: item.id, + parent: parent, + imported_module: Cell::new(Some(module)), + subclass: ImportDirectiveSubclass::ExternCrate, + span: item.span, + module_path: Vec::new(), + vis: Cell::new(vis), + expansion: expansion, + }); + let imported_binding = self.import(binding, directive); + self.define(parent, name, TypeNS, imported_binding); + self.populate_module_if_necessary(module); + self.process_legacy_macro_imports(item, module, expansion); } ItemKind::Mod(..) if item.ident == keywords::Invalid.ident() => {} // Crate root @@ -275,45 +270,43 @@ impl<'b> Resolver<'b> { normal_ancestor_id: Some(item.id), ..ModuleS::new(Some(parent), ModuleKind::Def(def, name)) }); - self.define(parent, name, TypeNS, (module, sp, vis)); + self.define(parent, name, TypeNS, (module, vis, sp, expansion)); self.module_map.insert(item.id, module); // Descend into the module. self.current_module = module; } - ItemKind::ForeignMod(..) => { - self.crate_loader.process_item(item, &self.definitions, false); - } + ItemKind::ForeignMod(..) => self.crate_loader.process_item(item, &self.definitions), // These items live in the value namespace. ItemKind::Static(_, m, _) => { let mutbl = m == Mutability::Mutable; let def = Def::Static(self.definitions.local_def_id(item.id), mutbl); - self.define(parent, name, ValueNS, (def, sp, vis)); + self.define(parent, name, ValueNS, (def, vis, sp, expansion)); } ItemKind::Const(..) 
=> { let def = Def::Const(self.definitions.local_def_id(item.id)); - self.define(parent, name, ValueNS, (def, sp, vis)); + self.define(parent, name, ValueNS, (def, vis, sp, expansion)); } ItemKind::Fn(..) => { let def = Def::Fn(self.definitions.local_def_id(item.id)); - self.define(parent, name, ValueNS, (def, sp, vis)); + self.define(parent, name, ValueNS, (def, vis, sp, expansion)); } // These items live in the type namespace. ItemKind::Ty(..) => { let def = Def::TyAlias(self.definitions.local_def_id(item.id)); - self.define(parent, name, TypeNS, (def, sp, vis)); + self.define(parent, name, TypeNS, (def, vis, sp, expansion)); } ItemKind::Enum(ref enum_definition, _) => { let def = Def::Enum(self.definitions.local_def_id(item.id)); let module = self.new_module(parent, ModuleKind::Def(def, name), true); - self.define(parent, name, TypeNS, (module, sp, vis)); + self.define(parent, name, TypeNS, (module, vis, sp, expansion)); for variant in &(*enum_definition).variants { - self.build_reduced_graph_for_variant(variant, module, vis); + self.build_reduced_graph_for_variant(variant, module, vis, expansion); } } @@ -321,14 +314,14 @@ impl<'b> Resolver<'b> { ItemKind::Struct(ref struct_def, _) => { // Define a name in the type namespace. let def = Def::Struct(self.definitions.local_def_id(item.id)); - self.define(parent, name, TypeNS, (def, sp, vis)); + self.define(parent, name, TypeNS, (def, vis, sp, expansion)); // If this is a tuple or unit struct, define a name // in the value namespace as well. if !struct_def.is_struct() { let ctor_def = Def::StructCtor(self.definitions.local_def_id(struct_def.id()), CtorKind::from_ast(struct_def)); - self.define(parent, name, ValueNS, (ctor_def, sp, vis)); + self.define(parent, name, ValueNS, (ctor_def, vis, sp, expansion)); } // Record field names for error reporting. @@ -342,7 +335,7 @@ impl<'b> Resolver<'b> { ItemKind::Union(ref vdata, _) => { let def = Def::Union(self.definitions.local_def_id(item.id)); - self.define(parent, name, TypeNS, (def, sp, vis)); + self.define(parent, name, TypeNS, (def, vis, sp, expansion)); // Record field names for error reporting. let field_names = vdata.fields().iter().filter_map(|field| { @@ -361,7 +354,7 @@ impl<'b> Resolver<'b> { // Add all the items within to a new module. let module = self.new_module(parent, ModuleKind::Def(Def::Trait(def_id), name), true); - self.define(parent, name, TypeNS, (module, sp, vis)); + self.define(parent, name, TypeNS, (module, vis, sp, expansion)); self.current_module = module; } ItemKind::Mac(_) => panic!("unexpanded macro in resolve!"), @@ -373,37 +366,38 @@ impl<'b> Resolver<'b> { fn build_reduced_graph_for_variant(&mut self, variant: &Variant, parent: Module<'b>, - vis: ty::Visibility) { + vis: ty::Visibility, + expansion: Mark) { let name = variant.node.name.name; let def_id = self.definitions.local_def_id(variant.node.data.id()); // Define a name in the type namespace. let def = Def::Variant(def_id); - self.define(parent, name, TypeNS, (def, variant.span, vis)); + self.define(parent, name, TypeNS, (def, vis, variant.span, expansion)); // Define a constructor name in the value namespace. // Braced variants, unlike structs, generate unusable names in // value namespace, they are reserved for possible future use. 
let ctor_kind = CtorKind::from_ast(&variant.node.data); let ctor_def = Def::VariantCtor(def_id, ctor_kind); - self.define(parent, name, ValueNS, (ctor_def, variant.span, vis)); + self.define(parent, name, ValueNS, (ctor_def, vis, variant.span, expansion)); } /// Constructs the reduced graph for one foreign item. - fn build_reduced_graph_for_foreign_item(&mut self, foreign_item: &ForeignItem) { + fn build_reduced_graph_for_foreign_item(&mut self, item: &ForeignItem, expansion: Mark) { let parent = self.current_module; - let name = foreign_item.ident.name; + let name = item.ident.name; - let def = match foreign_item.node { + let def = match item.node { ForeignItemKind::Fn(..) => { - Def::Fn(self.definitions.local_def_id(foreign_item.id)) + Def::Fn(self.definitions.local_def_id(item.id)) } ForeignItemKind::Static(_, m) => { - Def::Static(self.definitions.local_def_id(foreign_item.id), m) + Def::Static(self.definitions.local_def_id(item.id), m) } }; - let vis = self.resolve_visibility(&foreign_item.vis); - self.define(parent, name, ValueNS, (def, foreign_item.span, vis)); + let vis = self.resolve_visibility(&item.vis); + self.define(parent, name, ValueNS, (def, vis, item.span, expansion)); } fn build_reduced_graph_for_block(&mut self, block: &Block) { @@ -422,41 +416,40 @@ impl<'b> Resolver<'b> { } /// Builds the reduced graph for a single item in an external crate. - fn build_reduced_graph_for_external_crate_def(&mut self, parent: Module<'b>, - child: Export) { + fn build_reduced_graph_for_external_crate_def(&mut self, parent: Module<'b>, child: Export) { let name = child.name; let def = child.def; let def_id = def.def_id(); - let vis = if parent.is_trait() { - ty::Visibility::Public - } else { - self.session.cstore.visibility(def_id) + let vis = match def { + Def::Macro(..) => ty::Visibility::Public, + _ if parent.is_trait() => ty::Visibility::Public, + _ => self.session.cstore.visibility(def_id), }; match def { Def::Mod(..) | Def::Enum(..) => { let module = self.new_module(parent, ModuleKind::Def(def, name), false); - self.define(parent, name, TypeNS, (module, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (module, vis, DUMMY_SP, Mark::root())); } Def::Variant(..) => { - self.define(parent, name, TypeNS, (def, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (def, vis, DUMMY_SP, Mark::root())); } Def::VariantCtor(..) => { - self.define(parent, name, ValueNS, (def, DUMMY_SP, vis)); + self.define(parent, name, ValueNS, (def, vis, DUMMY_SP, Mark::root())); } Def::Fn(..) | Def::Static(..) | Def::Const(..) | Def::AssociatedConst(..) | Def::Method(..) => { - self.define(parent, name, ValueNS, (def, DUMMY_SP, vis)); + self.define(parent, name, ValueNS, (def, vis, DUMMY_SP, Mark::root())); } Def::Trait(..) => { let module = self.new_module(parent, ModuleKind::Def(def, name), false); - self.define(parent, name, TypeNS, (module, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (module, vis, DUMMY_SP, Mark::root())); // If this is a trait, add all the trait item names to the trait info. - let trait_item_def_ids = self.session.cstore.impl_or_trait_items(def_id); + let trait_item_def_ids = self.session.cstore.associated_item_def_ids(def_id); for trait_item_def_id in trait_item_def_ids { let trait_item_name = self.session.cstore.def_key(trait_item_def_id) .disambiguated_data.data.get_opt_name() @@ -465,25 +458,28 @@ impl<'b> Resolver<'b> { } } Def::TyAlias(..) | Def::AssociatedTy(..) 
=> { - self.define(parent, name, TypeNS, (def, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (def, vis, DUMMY_SP, Mark::root())); } Def::Struct(..) => { - self.define(parent, name, TypeNS, (def, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (def, vis, DUMMY_SP, Mark::root())); // Record field names for error reporting. let field_names = self.session.cstore.struct_field_names(def_id); self.insert_field_names(def_id, field_names); } Def::StructCtor(..) => { - self.define(parent, name, ValueNS, (def, DUMMY_SP, vis)); + self.define(parent, name, ValueNS, (def, vis, DUMMY_SP, Mark::root())); } Def::Union(..) => { - self.define(parent, name, TypeNS, (def, DUMMY_SP, vis)); + self.define(parent, name, TypeNS, (def, vis, DUMMY_SP, Mark::root())); // Record field names for error reporting. let field_names = self.session.cstore.struct_field_names(def_id); self.insert_field_names(def_id, field_names); } + Def::Macro(..) => { + self.define(parent, name, MacroNS, (def, vis, DUMMY_SP, Mark::root())); + } Def::Local(..) | Def::PrimTy(..) | Def::TyParam(..) | @@ -496,6 +492,48 @@ impl<'b> Resolver<'b> { } } + fn get_extern_crate_root(&mut self, cnum: CrateNum) -> Module<'b> { + let def_id = DefId { krate: cnum, index: CRATE_DEF_INDEX }; + let name = self.session.cstore.crate_name(cnum); + let macros_only = self.session.cstore.dep_kind(cnum).macros_only(); + let arenas = self.arenas; + *self.extern_crate_roots.entry((cnum, macros_only)).or_insert_with(|| { + arenas.alloc_module(ModuleS { + populated: Cell::new(false), + ..ModuleS::new(None, ModuleKind::Def(Def::Mod(def_id), name)) + }) + }) + } + + pub fn get_macro(&mut self, def: Def) -> Rc { + let def_id = match def { + Def::Macro(def_id) => def_id, + _ => panic!("Expected Def::Macro(..)"), + }; + if let Some(ext) = self.macro_map.get(&def_id) { + return ext.clone(); + } + + let mut macro_rules = match self.session.cstore.load_macro(def_id, &self.session) { + LoadedMacro::MacroRules(macro_rules) => macro_rules, + LoadedMacro::ProcMacro(ext) => return ext, + }; + + let mark = Mark::fresh(); + let invocation = self.arenas.alloc_invocation_data(InvocationData { + module: Cell::new(self.get_extern_crate_root(def_id.krate)), + def_index: CRATE_DEF_INDEX, + const_integer: false, + legacy_scope: Cell::new(LegacyScope::Empty), + expansion: Cell::new(LegacyScope::Empty), + }); + self.invocations.insert(mark, invocation); + macro_rules.body = mark_tts(¯o_rules.body, mark); + let ext = Rc::new(macro_rules::compile(&self.session.parse_sess, ¯o_rules)); + self.macro_map.insert(def_id, ext.clone()); + ext + } + /// Ensures that the reduced graph rooted at the given external module /// is built, building it if it is not. pub fn populate_module_if_necessary(&mut self, module: Module<'b>) { @@ -506,90 +544,62 @@ impl<'b> Resolver<'b> { module.populated.set(true) } - fn import_extern_crate_macros(&mut self, - extern_crate: &Item, - module: Module<'b>, - loaded_macros: LoadedMacros, - legacy_imports: LegacyMacroImports, - allow_shadowing: bool) { - let import_macro = |this: &mut Self, name, ext: Rc<_>, span| { - this.used_crates.insert(module.def_id().unwrap().krate); - if let SyntaxExtension::NormalTT(..) 
= *ext { - this.macro_names.insert(name); - } - if this.builtin_macros.insert(name, ext).is_some() && !allow_shadowing { - let msg = format!("`{}` is already in scope", name); - let note = - "macro-expanded `#[macro_use]`s may not shadow existing macros (see RFC 1560)"; - this.session.struct_span_err(span, &msg).note(note).emit(); - } - }; + fn legacy_import_macro(&mut self, + name: Name, + binding: &'b NameBinding<'b>, + span: Span, + allow_shadowing: bool) { + self.used_crates.insert(binding.def().def_id().krate); + self.macro_names.insert(name); + if self.builtin_macros.insert(name, binding).is_some() && !allow_shadowing { + let msg = format!("`{}` is already in scope", name); + let note = + "macro-expanded `#[macro_use]`s may not shadow existing macros (see RFC 1560)"; + self.session.struct_span_err(span, &msg).note(note).emit(); + } + } - match loaded_macros { - LoadedMacros::MacroRules(macros) => { - let mark = Mark::fresh(); - if !macros.is_empty() { - let invocation = self.arenas.alloc_invocation_data(InvocationData { - module: Cell::new(module), - def_index: CRATE_DEF_INDEX, - const_integer: false, - legacy_scope: Cell::new(LegacyScope::Empty), - expansion: Cell::new(LegacyScope::Empty), - }); - self.invocations.insert(mark, invocation); - } + fn process_legacy_macro_imports(&mut self, item: &Item, module: Module<'b>, expansion: Mark) { + let allow_shadowing = expansion == Mark::root(); + let legacy_imports = self.legacy_macro_imports(&item.attrs); + let cnum = module.def_id().unwrap().krate; - let mut macros: FnvHashMap<_, _> = macros.into_iter().map(|mut def| { - def.body = mark_tts(&def.body, mark); - let ext = macro_rules::compile(&self.session.parse_sess, &def); - (def.ident.name, (def, Rc::new(ext))) - }).collect(); + // `#[macro_use]` and `#[macro_reexport]` are only allowed at the crate root. 
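The shadowing rule enforced by `legacy_import_macro` above can be modelled in isolation. The sketch below is illustrative only: a plain `HashMap` stands in for the resolver's `builtin_macros` table, and the names and bindings are hypothetical.

```rust
use std::collections::HashMap;

// Simplified model of the legacy `#[macro_use]` rule: the new binding always
// takes effect, but a macro-expanded import (allow_shadowing == false) may not
// silently replace an existing name.
fn legacy_import(scope: &mut HashMap<String, String>, name: &str, binding: &str,
                 allow_shadowing: bool) -> Result<(), String> {
    if scope.insert(name.to_string(), binding.to_string()).is_some() && !allow_shadowing {
        return Err(format!("`{}` is already in scope", name));
    }
    Ok(())
}

fn main() {
    let mut scope = HashMap::new();
    // First import succeeds.
    assert!(legacy_import(&mut scope, "vec", "crate_a::vec", true).is_ok());
    // A second, macro-expanded import of the same name is rejected.
    assert!(legacy_import(&mut scope, "vec", "crate_b::vec", false).is_err());
}
```

As in the resolver code above, the insertion itself still happens; only the disallowed shadowing is reported as an error.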
+ if self.current_module.parent.is_some() && legacy_imports != LegacyMacroImports::default() { + span_err!(self.session, item.span, E0468, + "an `extern crate` loading macros must be at the crate root"); + } else if !self.use_extern_macros && + self.session.cstore.dep_kind(cnum).macros_only() && + legacy_imports == LegacyMacroImports::default() { + let msg = "custom derive crates and `#[no_link]` crates have no effect without \ + `#[macro_use]`"; + self.session.span_warn(item.span, msg); + self.used_crates.insert(cnum); // Avoid the normal unused extern crate warning + } - if let Some(span) = legacy_imports.import_all { - for (&name, &(_, ref ext)) in macros.iter() { - import_macro(self, name, ext.clone(), span); - } + if let Some(span) = legacy_imports.import_all { + module.for_each_child(|name, ns, binding| if ns == MacroNS { + self.legacy_import_macro(name, binding, span, allow_shadowing); + }); + } else { + for (name, span) in legacy_imports.imports { + let result = self.resolve_name_in_module(module, name, MacroNS, false, None); + if let Ok(binding) = result { + self.legacy_import_macro(name, binding, span, allow_shadowing); } else { - for (name, span) in legacy_imports.imports { - if let Some(&(_, ref ext)) = macros.get(&name) { - import_macro(self, name, ext.clone(), span); - } else { - span_err!(self.session, span, E0469, "imported macro not found"); - } - } - } - for (name, span) in legacy_imports.reexports { - if let Some((mut def, _)) = macros.remove(&name) { - def.id = self.next_node_id(); - self.exported_macros.push(def); - } else { - span_err!(self.session, span, E0470, "reexported macro not found"); - } + span_err!(self.session, span, E0469, "imported macro not found"); } } - - LoadedMacros::ProcMacros(macros) => { - if !self.session.features.borrow().proc_macro { - let sess = &self.session.parse_sess; - let issue = feature_gate::GateIssue::Language; - let msg = - "loading custom derive macro crates is experimentally supported"; - emit_feature_err(sess, "proc_macro", extern_crate.span, issue, msg); - } - if !legacy_imports.imports.is_empty() { - let msg = "`proc-macro` crates cannot be selectively imported from, \ - must use `#[macro_use]`"; - self.session.span_err(extern_crate.span, msg); - } - if !legacy_imports.reexports.is_empty() { - let msg = "`proc-macro` crates cannot be reexported from"; - self.session.span_err(extern_crate.span, msg); - } - if let Some(span) = legacy_imports.import_all { - for (name, ext) in macros { - import_macro(self, name, Rc::new(ext), span); - } - } + } + for (name, span) in legacy_imports.reexports { + let krate = module.def_id().unwrap().krate; + self.used_crates.insert(krate); + self.session.cstore.export_macros(krate); + let result = self.resolve_name_in_module(module, name, MacroNS, false, None); + if let Ok(binding) = result { + self.macro_exports.push(Export { name: name, def: binding.def() }); + } else { + span_err!(self.session, span, E0470, "reexported macro not found"); } } } @@ -600,7 +610,7 @@ impl<'b> Resolver<'b> { if attr.check_name("macro_escape") { let msg = "macro_escape is a deprecated synonym for macro_use"; let mut err = self.session.struct_span_warn(attr.span, msg); - if let ast::AttrStyle::Inner = attr.node.style { + if let ast::AttrStyle::Inner = attr.style { err.help("consider an outer attribute, #[macro_use] mod ...").emit(); } else { err.emit(); @@ -625,7 +635,7 @@ impl<'b> Resolver<'b> { match attr.meta_item_list() { Some(names) => for attr in names { if let Some(word) = attr.word() { - 
imports.imports.push((token::intern(&word.name()), attr.span())); + imports.imports.push((word.name(), attr.span())); } else { span_err!(self.session, attr.span(), E0466, "bad macro import"); } @@ -639,7 +649,7 @@ impl<'b> Resolver<'b> { if let Some(names) = attr.meta_item_list() { for attr in names { if let Some(word) = attr.word() { - imports.reexports.push((token::intern(&word.name()), attr.span())); + imports.reexports.push((word.name(), attr.span())); } else { bad_macro_reexport(self, attr.span()); } @@ -647,8 +657,6 @@ impl<'b> Resolver<'b> { } else { bad_macro_reexport(self, attr.span()); } - } else if attr.check_name("no_link") { - imports.no_link = true; } } imports @@ -663,7 +671,9 @@ pub struct BuildReducedGraphVisitor<'a, 'b: 'a> { impl<'a, 'b> BuildReducedGraphVisitor<'a, 'b> { fn visit_invoc(&mut self, id: ast::NodeId) -> &'b InvocationData<'b> { - let invocation = self.resolver.invocations[&Mark::from_placeholder_id(id)]; + let mark = Mark::from_placeholder_id(id); + self.resolver.current_module.unresolved_invocations.borrow_mut().insert(mark); + let invocation = self.resolver.invocations[&mark]; invocation.module.set(self.resolver.current_module); invocation.legacy_scope.set(self.legacy_scope); invocation @@ -672,7 +682,7 @@ impl<'a, 'b> BuildReducedGraphVisitor<'a, 'b> { macro_rules! method { ($visit:ident: $ty:ty, $invoc:path, $walk:ident) => { - fn $visit(&mut self, node: &$ty) { + fn $visit(&mut self, node: &'a $ty) { if let $invoc(..) = node.node { self.visit_invoc(node.id); } else { @@ -682,13 +692,13 @@ macro_rules! method { } } -impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> { +impl<'a, 'b> Visitor<'a> for BuildReducedGraphVisitor<'a, 'b> { method!(visit_impl_item: ast::ImplItem, ast::ImplItemKind::Macro, walk_impl_item); method!(visit_expr: ast::Expr, ast::ExprKind::Mac, walk_expr); method!(visit_pat: ast::Pat, ast::PatKind::Mac, walk_pat); method!(visit_ty: ast::Ty, ast::TyKind::Mac, walk_ty); - fn visit_item(&mut self, item: &Item) { + fn visit_item(&mut self, item: &'a Item) { let macro_use = match item.node { ItemKind::Mac(..) if item.id == ast::DUMMY_NODE_ID => return, // Scope placeholder ItemKind::Mac(..) => { @@ -707,7 +717,7 @@ impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> { } } - fn visit_stmt(&mut self, stmt: &ast::Stmt) { + fn visit_stmt(&mut self, stmt: &'a ast::Stmt) { if let ast::StmtKind::Mac(..) 
= stmt.node { self.legacy_scope = LegacyScope::Expansion(self.visit_invoc(stmt.id)); } else { @@ -715,12 +725,12 @@ impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> { } } - fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) { - self.resolver.build_reduced_graph_for_foreign_item(foreign_item); + fn visit_foreign_item(&mut self, foreign_item: &'a ForeignItem) { + self.resolver.build_reduced_graph_for_foreign_item(foreign_item, self.expansion); visit::walk_foreign_item(self, foreign_item); } - fn visit_block(&mut self, block: &Block) { + fn visit_block(&mut self, block: &'a Block) { let (parent, legacy_scope) = (self.resolver.current_module, self.legacy_scope); self.resolver.build_reduced_graph_for_block(block); visit::walk_block(self, block); @@ -728,7 +738,7 @@ impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> { self.legacy_scope = legacy_scope; } - fn visit_trait_item(&mut self, item: &TraitItem) { + fn visit_trait_item(&mut self, item: &'a TraitItem) { let parent = self.resolver.current_module; let def_id = parent.def_id().unwrap(); @@ -753,7 +763,7 @@ impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> { self.resolver.trait_item_map.insert((item.ident.name, def_id), is_static_method); let vis = ty::Visibility::Public; - self.resolver.define(parent, item.ident.name, ns, (def, item.span, vis)); + self.resolver.define(parent, item.ident.name, ns, (def, vis, item.span, self.expansion)); self.resolver.current_module = parent.parent.unwrap(); // nearest normal ancestor visit::walk_trait_item(self, item); diff --git a/src/librustc_resolve/check_unused.rs b/src/librustc_resolve/check_unused.rs index e1ea40809d..41391c65a1 100644 --- a/src/librustc_resolve/check_unused.rs +++ b/src/librustc_resolve/check_unused.rs @@ -22,16 +22,18 @@ use std::ops::{Deref, DerefMut}; use Resolver; -use Namespace::{TypeNS, ValueNS}; use rustc::lint; +use rustc::util::nodemap::NodeMap; use syntax::ast::{self, ViewPathGlob, ViewPathList, ViewPathSimple}; use syntax::visit::{self, Visitor}; -use syntax_pos::{Span, DUMMY_SP}; +use syntax_pos::{Span, MultiSpan, DUMMY_SP}; struct UnusedImportCheckVisitor<'a, 'b: 'a> { resolver: &'a mut Resolver<'b>, + /// All the (so far) unused imports, grouped path list + unused_imports: NodeMap>, } // Deref and DerefMut impls allow treating UnusedImportCheckVisitor as Resolver. @@ -52,29 +54,28 @@ impl<'a, 'b> DerefMut for UnusedImportCheckVisitor<'a, 'b> { impl<'a, 'b> UnusedImportCheckVisitor<'a, 'b> { // We have information about whether `use` (import) directives are actually // used now. If an import is not used at all, we signal a lint error. - fn check_import(&mut self, id: ast::NodeId, span: Span) { - if !self.used_imports.contains(&(id, TypeNS)) && - !self.used_imports.contains(&(id, ValueNS)) { + fn check_import(&mut self, item_id: ast::NodeId, id: ast::NodeId, span: Span) { + let mut used = false; + self.per_ns(|this, ns| used |= this.used_imports.contains(&(id, ns))); + if !used { if self.maybe_unused_trait_imports.contains(&id) { // Check later. return; } - let msg = if let Ok(snippet) = self.session.codemap().span_to_snippet(span) { - format!("unused import: `{}`", snippet) - } else { - "unused import".to_string() - }; - self.session.add_lint(lint::builtin::UNUSED_IMPORTS, id, span, msg); + self.unused_imports.entry(item_id).or_insert_with(NodeMap).insert(id, span); } else { // This trait import is definitely used, in a way other than // method resolution. 
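A rough standalone model of how the spans recorded by `check_import` above end up grouped into one diagnostic per `use` item: the types here are simplified stand-ins (a `BTreeMap` keyed by an item id, snippet strings instead of spans), not the resolver's own.

```rust
use std::collections::BTreeMap;

// Group unused-import snippets per `use` item and render one message each,
// pluralizing and joining the backticked snippets as the lint does.
fn render_unused(unused: &BTreeMap<u32, Vec<&str>>) -> Vec<String> {
    unused.values()
        .map(|snippets| {
            let plural = if snippets.len() > 1 { "s" } else { "" };
            format!("unused import{}: {}", plural,
                    snippets.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", "))
        })
        .collect()
}

fn main() {
    let mut unused = BTreeMap::new();
    unused.insert(1, vec!["foo::bar"]);
    unused.insert(2, vec!["std::fmt", "std::io"]);
    assert_eq!(render_unused(&unused),
               vec!["unused import: `foo::bar`".to_string(),
                    "unused imports: `std::fmt`, `std::io`".to_string()]);
}
```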
self.maybe_unused_trait_imports.remove(&id); + if let Some(i) = self.unused_imports.get_mut(&item_id) { + i.remove(&id); + } } } } -impl<'a, 'b> Visitor for UnusedImportCheckVisitor<'a, 'b> { - fn visit_item(&mut self, item: &ast::Item) { +impl<'a, 'b> Visitor<'a> for UnusedImportCheckVisitor<'a, 'b> { + fn visit_item(&mut self, item: &'a ast::Item) { visit::walk_item(self, item); // Ignore is_public import statements because there's no way to be sure // whether they're used or not. Also ignore imports with a dummy span @@ -98,16 +99,22 @@ impl<'a, 'b> Visitor for UnusedImportCheckVisitor<'a, 'b> { ast::ItemKind::Use(ref p) => { match p.node { ViewPathSimple(..) => { - self.check_import(item.id, p.span) + self.check_import(item.id, item.id, p.span) } ViewPathList(_, ref list) => { + if list.len() == 0 { + self.unused_imports + .entry(item.id) + .or_insert_with(NodeMap) + .insert(item.id, item.span); + } for i in list { - self.check_import(i.node.id, i.span); + self.check_import(item.id, i.node.id, i.span); } } ViewPathGlob(_) => { - self.check_import(item.id, p.span) + self.check_import(item.id, item.id, p.span); } } } @@ -117,6 +124,35 @@ impl<'a, 'b> Visitor for UnusedImportCheckVisitor<'a, 'b> { } pub fn check_crate(resolver: &mut Resolver, krate: &ast::Crate) { - let mut visitor = UnusedImportCheckVisitor { resolver: resolver }; + let mut visitor = UnusedImportCheckVisitor { + resolver: resolver, + unused_imports: NodeMap(), + }; visit::walk_crate(&mut visitor, krate); + + for (id, spans) in &visitor.unused_imports { + let len = spans.len(); + let mut spans = spans.values().map(|s| *s).collect::>(); + spans.sort(); + let ms = MultiSpan::from_spans(spans.clone()); + let mut span_snippets = spans.iter() + .filter_map(|s| { + match visitor.session.codemap().span_to_snippet(*s) { + Ok(s) => Some(format!("`{}`", s)), + _ => None, + } + }).collect::>(); + span_snippets.sort(); + let msg = format!("unused import{}{}", + if len > 1 { "s" } else { "" }, + if span_snippets.len() > 0 { + format!(": {}", span_snippets.join(", ")) + } else { + String::new() + }); + visitor.session.add_lint(lint::builtin::UNUSED_IMPORTS, + *id, + ms, + msg); + } } diff --git a/src/librustc_resolve/diagnostics.rs b/src/librustc_resolve/diagnostics.rs index 5eb269030a..d54f4e7b20 100644 --- a/src/librustc_resolve/diagnostics.rs +++ b/src/librustc_resolve/diagnostics.rs @@ -59,7 +59,7 @@ items under a new local name. 
An example of this error: -```compile_fail +```ignore use foo::baz; use bar::*; // error, do `use foo::baz as quux` instead on the previous line diff --git a/src/librustc_resolve/lib.rs b/src/librustc_resolve/lib.rs index fc632298c0..4d25609bfd 100644 --- a/src/librustc_resolve/lib.rs +++ b/src/librustc_resolve/lib.rs @@ -18,8 +18,6 @@ #![cfg_attr(not(stage0), deny(warnings))] #![feature(associated_consts)] -#![feature(borrow_state)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(staged_api)] @@ -35,12 +33,9 @@ extern crate arena; extern crate rustc; use self::Namespace::*; -use self::ResolveResult::*; use self::FallbackSuggestion::*; use self::TypeParameters::*; use self::RibKind::*; -use self::UseLexicalScopeFlag::*; -use self::ModulePrefixResult::*; use rustc::hir::map::{Definitions, DefCollector}; use rustc::hir::{self, PrimTy, TyBool, TyChar, TyFloat, TyInt, TyUint, TyStr}; @@ -51,13 +46,14 @@ use rustc::hir::def::*; use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefId}; use rustc::ty; use rustc::hir::{Freevar, FreevarMap, TraitCandidate, TraitMap, GlobMap}; -use rustc::util::nodemap::{NodeMap, NodeSet, FnvHashMap, FnvHashSet}; +use rustc::util::nodemap::{NodeMap, NodeSet, FxHashMap, FxHashSet}; use syntax::ext::hygiene::{Mark, SyntaxContext}; use syntax::ast::{self, FloatTy}; use syntax::ast::{CRATE_NODE_ID, Name, NodeId, Ident, SpannedIdent, IntTy, UintTy}; use syntax::ext::base::SyntaxExtension; -use syntax::parse::token::{self, keywords}; +use syntax::ext::base::Determinacy::{Determined, Undetermined}; +use syntax::symbol::{Symbol, keywords}; use syntax::util::lev_distance::find_best_match_for_name; use syntax::visit::{self, FnKind, Visitor}; @@ -68,7 +64,7 @@ use syntax::ast::{Item, ItemKind, ImplItem, ImplItemKind}; use syntax::ast::{Local, Mutability, Pat, PatKind, Path}; use syntax::ast::{PathSegment, PathParameters, QSelf, TraitItemKind, TraitRef, Ty, TyKind}; -use syntax_pos::{Span, DUMMY_SP}; +use syntax_pos::{Span, DUMMY_SP, MultiSpan}; use errors::DiagnosticBuilder; use std::cell::{Cell, RefCell}; @@ -76,7 +72,7 @@ use std::fmt; use std::mem::replace; use std::rc::Rc; -use resolve_imports::{ImportDirective, NameResolution}; +use resolve_imports::{ImportDirective, ImportDirectiveSubclass, NameResolution, ImportResolver}; use macros::{InvocationData, LegacyBinding, LegacyScope}; // NB: This module needs to be declared first so diagnostics are @@ -90,7 +86,7 @@ mod resolve_imports; enum SuggestionType { Macro(String), - Function(token::InternedString), + Function(Symbol), NotFound, } @@ -108,7 +104,7 @@ enum ResolutionError<'a> { /// error E0403: the name is already used for a type parameter in this type parameter list NameAlreadyUsedInTypeParameterList(Name, &'a Span), /// error E0404: is not a trait - IsNotATrait(&'a str), + IsNotATrait(&'a str, &'a str), /// error E0405: use of undeclared trait name UndeclaredTraitName(&'a str, SuggestedCandidates), /// error E0407: method is not a member of trait @@ -191,10 +187,6 @@ fn resolve_struct_error<'b, 'a: 'b, 'c>(resolver: &'b Resolver<'a>, span: syntax_pos::Span, resolution_error: ResolutionError<'c>) -> DiagnosticBuilder<'a> { - if !resolver.emit_errors { - return resolver.session.diagnostic().struct_dummy(); - } - match resolution_error { ResolutionError::TypeParametersFromOuterFunction => { let mut err = struct_span_err!(resolver.session, @@ -223,13 +215,13 @@ fn resolve_struct_error<'b, 'a: 'b, 'c>(resolver: &'b Resolver<'a>, err } - 
ResolutionError::IsNotATrait(name) => { + ResolutionError::IsNotATrait(name, kind_name) => { let mut err = struct_span_err!(resolver.session, span, E0404, "`{}` is not a trait", name); - err.span_label(span, &format!("not a trait")); + err.span_label(span, &format!("expected trait, found {}", kind_name)); err } ResolutionError::UndeclaredTraitName(name, candidates) => { @@ -498,7 +490,7 @@ struct BindingInfo { } // Map from the name in a pattern to its binding mode. -type BindingMap = FnvHashMap; +type BindingMap = FxHashMap; #[derive(Copy, Clone, PartialEq, Eq, Debug)] enum PatternSource { @@ -533,40 +525,68 @@ impl PatternSource { pub enum Namespace { TypeNS, ValueNS, + MacroNS, } -impl<'a> Visitor for Resolver<'a> { - fn visit_item(&mut self, item: &Item) { +#[derive(Clone, Default, Debug)] +pub struct PerNS { + value_ns: T, + type_ns: T, + macro_ns: Option, +} + +impl ::std::ops::Index for PerNS { + type Output = T; + fn index(&self, ns: Namespace) -> &T { + match ns { + ValueNS => &self.value_ns, + TypeNS => &self.type_ns, + MacroNS => self.macro_ns.as_ref().unwrap(), + } + } +} + +impl ::std::ops::IndexMut for PerNS { + fn index_mut(&mut self, ns: Namespace) -> &mut T { + match ns { + ValueNS => &mut self.value_ns, + TypeNS => &mut self.type_ns, + MacroNS => self.macro_ns.as_mut().unwrap(), + } + } +} + +impl<'a, 'tcx> Visitor<'tcx> for Resolver<'a> { + fn visit_item(&mut self, item: &'tcx Item) { self.resolve_item(item); } - fn visit_arm(&mut self, arm: &Arm) { + fn visit_arm(&mut self, arm: &'tcx Arm) { self.resolve_arm(arm); } - fn visit_block(&mut self, block: &Block) { + fn visit_block(&mut self, block: &'tcx Block) { self.resolve_block(block); } - fn visit_expr(&mut self, expr: &Expr) { + fn visit_expr(&mut self, expr: &'tcx Expr) { self.resolve_expr(expr, None); } - fn visit_local(&mut self, local: &Local) { + fn visit_local(&mut self, local: &'tcx Local) { self.resolve_local(local); } - fn visit_ty(&mut self, ty: &Ty) { + fn visit_ty(&mut self, ty: &'tcx Ty) { self.resolve_type(ty); } - fn visit_poly_trait_ref(&mut self, tref: &ast::PolyTraitRef, m: &ast::TraitBoundModifier) { - match self.resolve_trait_reference(tref.trait_ref.ref_id, &tref.trait_ref.path, 0) { - Ok(def) => self.record_def(tref.trait_ref.ref_id, def), - Err(_) => { - // error already reported - self.record_def(tref.trait_ref.ref_id, err_path_resolution()) - } - } + fn visit_poly_trait_ref(&mut self, + tref: &'tcx ast::PolyTraitRef, + m: &'tcx ast::TraitBoundModifier) { + let ast::Path { ref segments, span, global } = tref.trait_ref.path; + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); + let def = self.resolve_trait_reference(&path, global, None, span); + self.record_def(tref.trait_ref.ref_id, def); visit::walk_poly_trait_ref(self, tref, m); } fn visit_variant(&mut self, - variant: &ast::Variant, - generics: &Generics, + variant: &'tcx ast::Variant, + generics: &'tcx Generics, item_id: ast::NodeId) { if let Some(ref dis_expr) = variant.node.disr_expr { // resolve the discriminator expr as a constant @@ -582,7 +602,7 @@ impl<'a> Visitor for Resolver<'a> { item_id, variant.span); } - fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) { + fn visit_foreign_item(&mut self, foreign_item: &'tcx ForeignItem) { let type_parameters = match foreign_item.node { ForeignItemKind::Fn(_, ref generics) => { HasTypeParameters(generics, ItemRibKind) @@ -594,9 +614,8 @@ impl<'a> Visitor for Resolver<'a> { }); } fn visit_fn(&mut self, - function_kind: FnKind, - declaration: &FnDecl, - block: 
&Block, + function_kind: FnKind<'tcx>, + declaration: &'tcx FnDecl, _: Span, node_id: NodeId) { let rib_kind = match function_kind { @@ -604,42 +623,50 @@ impl<'a> Visitor for Resolver<'a> { self.visit_generics(generics); ItemRibKind } - FnKind::Method(_, sig, _) => { + FnKind::Method(_, sig, _, _) => { self.visit_generics(&sig.generics); MethodRibKind(!sig.decl.has_self()) } - FnKind::Closure => ClosureRibKind(node_id), + FnKind::Closure(_) => ClosureRibKind(node_id), }; - self.resolve_function(rib_kind, declaration, block); + + // Create a value rib for the function. + self.ribs[ValueNS].push(Rib::new(rib_kind)); + + // Create a label rib for the function. + self.label_ribs.push(Rib::new(rib_kind)); + + // Add each argument to the rib. + let mut bindings_list = FxHashMap(); + for argument in &declaration.inputs { + self.resolve_pattern(&argument.pat, PatternSource::FnParam, &mut bindings_list); + + self.visit_ty(&argument.ty); + + debug!("(resolving function) recorded argument"); + } + visit::walk_fn_ret_ty(self, &declaration.output); + + // Resolve the function body. + match function_kind { + FnKind::ItemFn(.., body) | + FnKind::Method(.., body) => { + self.visit_block(body); + } + FnKind::Closure(body) => { + self.visit_expr(body); + } + }; + + debug!("(resolving function) leaving function"); + + self.label_ribs.pop(); + self.ribs[ValueNS].pop(); } } pub type ErrorMessage = Option<(Span, String)>; -#[derive(Clone, PartialEq, Eq)] -pub enum ResolveResult { - Failed(ErrorMessage), // Failed to resolve the name, optional helpful error message. - Indeterminate, // Couldn't determine due to unresolved globs. - Success(T), // Successfully resolved the import. -} - -impl ResolveResult { - fn and_then ResolveResult>(self, f: F) -> ResolveResult { - match self { - Failed(msg) => Failed(msg), - Indeterminate => Indeterminate, - Success(t) => f(t), - } - } - - fn success(self) -> Option { - match self { - Success(t) => Some(t), - _ => None, - } - } -} - enum FallbackSuggestion { NoSuggestion, Field, @@ -689,61 +716,35 @@ enum RibKind<'a> { MacroDefinition(Mark), } -#[derive(Copy, Clone)] -enum UseLexicalScopeFlag { - DontUseLexicalScope, - UseLexicalScope, -} - -enum ModulePrefixResult<'a> { - NoPrefixFound, - PrefixFound(Module<'a>, usize), -} - /// One local scope. #[derive(Debug)] struct Rib<'a> { - bindings: FnvHashMap, + bindings: FxHashMap, kind: RibKind<'a>, } impl<'a> Rib<'a> { fn new(kind: RibKind<'a>) -> Rib<'a> { Rib { - bindings: FnvHashMap(), + bindings: FxHashMap(), kind: kind, } } } /// A definition along with the index of the rib it was found on +#[derive(Copy, Clone)] struct LocalDef { ribs: Option<(Namespace, usize)>, def: Def, } -impl LocalDef { - fn from_def(def: Def) -> Self { - LocalDef { - ribs: None, - def: def, - } - } -} - enum LexicalScopeBinding<'a> { Item(&'a NameBinding<'a>), - LocalDef(LocalDef), + Def(Def), } impl<'a> LexicalScopeBinding<'a> { - fn local_def(self) -> LocalDef { - match self { - LexicalScopeBinding::LocalDef(local_def) => local_def, - LexicalScopeBinding::Item(binding) => LocalDef::from_def(binding.def()), - } - } - fn item(self) -> Option<&'a NameBinding<'a>> { match self { LexicalScopeBinding::Item(binding) => Some(binding), @@ -752,6 +753,21 @@ impl<'a> LexicalScopeBinding<'a> { } } +#[derive(Copy, Clone, PartialEq)] +enum PathScope { + Global, + Lexical, + Import, +} + +#[derive(Clone)] +enum PathResult<'a> { + Module(Module<'a>), + NonModule(PathResolution), + Indeterminate, + Failed(String, bool /* is the error from the last segment? 
*/), +} + enum ModuleKind { Block(NodeId), Def(Def, Name), @@ -765,11 +781,12 @@ pub struct ModuleS<'a> { // The node id of the closest normal module (`mod`) ancestor (including this module). normal_ancestor_id: Option, - // If the module is an extern crate, `def` is root of the external crate and `extern_crate_id` - // is the NodeId of the local `extern crate` item (otherwise, `extern_crate_id` is None). - extern_crate_id: Option, + resolutions: RefCell>>>, + legacy_macro_resolutions: RefCell>, + macro_resolutions: RefCell, PathScope, Span)>>, - resolutions: RefCell>>>, + // Macro invocations that can expand into items in this module. + unresolved_invocations: RefCell>, no_implicit_prelude: bool, @@ -793,8 +810,10 @@ impl<'a> ModuleS<'a> { parent: parent, kind: kind, normal_ancestor_id: None, - extern_crate_id: None, - resolutions: RefCell::new(FnvHashMap()), + resolutions: RefCell::new(FxHashMap()), + legacy_macro_resolutions: RefCell::new(Vec::new()), + macro_resolutions: RefCell::new(Vec::new()), + unresolved_invocations: RefCell::new(FxHashSet()), no_implicit_prelude: false, glob_importers: RefCell::new(Vec::new()), globs: RefCell::new((Vec::new())), @@ -850,6 +869,7 @@ impl<'a> fmt::Debug for ModuleS<'a> { #[derive(Clone, Debug)] pub struct NameBinding<'a> { kind: NameBindingKind<'a>, + expansion: Mark, span: Span, vis: ty::Visibility, } @@ -876,6 +896,7 @@ enum NameBindingKind<'a> { Ambiguity { b1: &'a NameBinding<'a>, b2: &'a NameBinding<'a>, + legacy: bool, } } @@ -884,18 +905,19 @@ struct PrivacyError<'a>(Span, Name, &'a NameBinding<'a>); struct AmbiguityError<'a> { span: Span, name: Name, + lexical: bool, b1: &'a NameBinding<'a>, b2: &'a NameBinding<'a>, + legacy: bool, } impl<'a> NameBinding<'a> { - fn module(&self) -> Result, bool /* true if an error has already been reported */> { + fn module(&self) -> Option> { match self.kind { - NameBindingKind::Module(module) => Ok(module), + NameBindingKind::Module(module) => Some(module), NameBindingKind::Import { binding, .. } => binding.module(), - NameBindingKind::Def(Def::Err) => Err(true), - NameBindingKind::Def(_) => Err(false), - NameBindingKind::Ambiguity { .. } => Err(false), + NameBindingKind::Ambiguity { legacy: true, b1, .. } => b1.module(), + _ => None, } } @@ -904,10 +926,19 @@ impl<'a> NameBinding<'a> { NameBindingKind::Def(def) => def, NameBindingKind::Module(module) => module.def().unwrap(), NameBindingKind::Import { binding, .. } => binding.def(), + NameBindingKind::Ambiguity { legacy: true, b1, .. } => b1.def(), NameBindingKind::Ambiguity { .. } => Def::Err, } } + fn get_macro(&self, resolver: &mut Resolver<'a>) -> Rc { + match self.kind { + NameBindingKind::Import { binding, .. } => binding.get_macro(resolver), + NameBindingKind::Ambiguity { b1, .. } => b1.get_macro(resolver), + _ => resolver.get_macro(self.def()), + } + } + // We sometimes need to treat variants as `pub` for backwards compatibility fn pseudo_vis(&self) -> ty::Visibility { if self.is_variant() { ty::Visibility::Public } else { self.vis } @@ -922,7 +953,14 @@ impl<'a> NameBinding<'a> { } fn is_extern_crate(&self) -> bool { - self.module().ok().and_then(|module| module.extern_crate_id).is_some() + match self.kind { + NameBindingKind::Import { + directive: &ImportDirective { + subclass: ImportDirectiveSubclass::ExternCrate, .. + }, .. + } => true, + _ => false, + } } fn is_import(&self) -> bool { @@ -935,7 +973,7 @@ impl<'a> NameBinding<'a> { fn is_glob_import(&self) -> bool { match self.kind { NameBindingKind::Import { directive, .. 
} => directive.is_glob(), - NameBindingKind::Ambiguity { .. } => true, + NameBindingKind::Ambiguity { b1, .. } => b1.is_glob_import(), _ => false, } } @@ -950,12 +988,12 @@ impl<'a> NameBinding<'a> { /// Interns the names of the primitive types. struct PrimitiveTypeTable { - primitive_types: FnvHashMap, + primitive_types: FxHashMap, } impl PrimitiveTypeTable { fn new() -> PrimitiveTypeTable { - let mut table = PrimitiveTypeTable { primitive_types: FnvHashMap() }; + let mut table = PrimitiveTypeTable { primitive_types: FxHashMap() }; table.intern("bool", TyBool); table.intern("char", TyChar); @@ -977,7 +1015,7 @@ impl PrimitiveTypeTable { } fn intern(&mut self, string: &str, primitive_type: PrimTy) { - self.primitive_types.insert(token::intern(string), primitive_type); + self.primitive_types.insert(Symbol::intern(string), primitive_type); } } @@ -989,17 +1027,17 @@ pub struct Resolver<'a> { // Maps the node id of a statement to the expansions of the `macro_rules!`s // immediately above the statement (if appropriate). - macros_at_scope: FnvHashMap>, + macros_at_scope: FxHashMap>, graph_root: Module<'a>, prelude: Option>, - trait_item_map: FnvHashMap<(Name, DefId), bool /* is static method? */>, + trait_item_map: FxHashMap<(Name, DefId), bool /* is static method? */>, // Names of fields of an item `DefId` accessible with dot syntax. // Used for hints during error reporting. - field_names: FnvHashMap>, + field_names: FxHashMap>, // All imports known to succeed or fail. determined_imports: Vec<&'a ImportDirective<'a>>, @@ -1010,12 +1048,9 @@ pub struct Resolver<'a> { // The module that represents the current item scope. current_module: Module<'a>, - // The current set of local scopes, for values. + // The current set of local scopes for types and values. // FIXME #4948: Reuse ribs to avoid allocation. - value_ribs: Vec>, - - // The current set of local scopes, for types. - type_ribs: Vec>, + ribs: PerNS>>, // The current set of local scopes, for labels. label_ribs: Vec>, @@ -1029,7 +1064,7 @@ pub struct Resolver<'a> { // The idents for the primitive types. primitive_type_table: PrimitiveTypeTable, - pub def_map: DefMap, + def_map: DefMap, pub freevars: FreevarMap, freevars_seen: NodeMap>, pub export_map: ExportMap, @@ -1050,19 +1085,15 @@ pub struct Resolver<'a> { // There will be an anonymous module created around `g` with the ID of the // entry block for `f`. module_map: NodeMap>, - - // Whether or not to print error messages. Can be set to true - // when getting additional info for error message suggestions, - // so as to avoid printing duplicate errors - emit_errors: bool, + extern_crate_roots: FxHashMap<(CrateNum, bool /* MacrosOnly? */), Module<'a>>, pub make_glob_map: bool, // Maps imports to the names of items actually imported (this actually maps // all imports, but only glob imports are actually interesting). 
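The `ribs` field above relies on the `PerNS` type introduced earlier: one value per namespace, selected by indexing with the namespace. A minimal self-contained sketch of that pattern, with simplified, illustrative types (only two namespaces, plain `Vec`s for the rib stacks):

```rust
use std::ops::{Index, IndexMut};

#[derive(Copy, Clone)]
enum Ns { Type, Value }

// One slot per namespace, addressable as `per_ns[ns]`.
#[derive(Default)]
struct PerNs<T> { type_ns: T, value_ns: T }

impl<T> Index<Ns> for PerNs<T> {
    type Output = T;
    fn index(&self, ns: Ns) -> &T {
        match ns { Ns::Type => &self.type_ns, Ns::Value => &self.value_ns }
    }
}

impl<T> IndexMut<Ns> for PerNs<T> {
    fn index_mut(&mut self, ns: Ns) -> &mut T {
        match ns { Ns::Type => &mut self.type_ns, Ns::Value => &mut self.value_ns }
    }
}

fn main() {
    // e.g. one stack of lexical scopes ("ribs") per namespace.
    let mut ribs: PerNs<Vec<&str>> = PerNs::default();
    ribs[Ns::Value].push("function body scope");
    ribs[Ns::Type].push("generic parameters scope");
    assert_eq!(ribs[Ns::Value].len(), 1);
}
```

This is what lets the later code write `self.ribs[ValueNS]` and `self.ribs[TypeNS]` instead of keeping two separately named fields in sync.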
pub glob_map: GlobMap, - used_imports: FnvHashSet<(NodeId, Namespace)>, - used_crates: FnvHashSet, + used_imports: FxHashSet<(NodeId, Namespace)>, + used_crates: FxHashSet, pub maybe_unused_trait_imports: NodeSet, privacy_errors: Vec>, @@ -1071,16 +1102,21 @@ pub struct Resolver<'a> { arenas: &'a ResolverArenas<'a>, dummy_binding: &'a NameBinding<'a>, - new_import_semantics: bool, // true if `#![feature(item_like_imports)]` + use_extern_macros: bool, // true if `#![feature(use_extern_macros)]` pub exported_macros: Vec, crate_loader: &'a mut CrateLoader, - macro_names: FnvHashSet, - builtin_macros: FnvHashMap>, - lexical_macro_resolutions: Vec<(Name, LegacyScope<'a>)>, + macro_names: FxHashSet, + builtin_macros: FxHashMap>, + lexical_macro_resolutions: Vec<(Name, &'a Cell>)>, + macro_map: FxHashMap>, + macro_exports: Vec, // Maps the `Mark` of an expansion to its containing module or block. - invocations: FnvHashMap>, + invocations: FxHashMap>, + + // Avoid duplicated errors for "name already defined". + name_already_seen: FxHashMap, } pub struct ResolverArenas<'a> { @@ -1136,24 +1172,23 @@ impl<'a> ty::NodeIdTree for Resolver<'a> { } impl<'a> hir::lowering::Resolver for Resolver<'a> { - fn resolve_generated_global_path(&mut self, path: &hir::Path, is_value: bool) -> Def { + fn resolve_hir_path(&mut self, path: &mut hir::Path, is_value: bool) { let namespace = if is_value { ValueNS } else { TypeNS }; - match self.resolve_crate_relative_path(path.span, &path.segments, namespace) { - Ok(binding) => binding.def(), - Err(true) => Def::Err, - Err(false) => { - let path_name = &format!("{}", path); - let error = - ResolutionError::UnresolvedName { - path: path_name, - message: "", - context: UnresolvedNameContext::Other, - is_static_method: false, - is_field: false, - def: Def::Err, - }; - resolve_error(self, path.span, error); - Def::Err + let hir::Path { ref segments, span, global, ref mut def } = *path; + let path: Vec<_> = segments.iter().map(|seg| Ident::with_empty_ctxt(seg.name)).collect(); + let scope = if global { PathScope::Global } else { PathScope::Lexical }; + match self.resolve_path(&path, scope, Some(namespace), Some(span)) { + PathResult::Module(module) => *def = module.def().unwrap(), + PathResult::NonModule(path_res) if path_res.depth == 0 => *def = path_res.base_def, + PathResult::NonModule(..) 
=> match self.resolve_path(&path, scope, None, Some(span)) { + PathResult::Failed(msg, _) => { + resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); + } + _ => {} + }, + PathResult::Indeterminate => unreachable!(), + PathResult::Failed(msg, _) => { + resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); } } } @@ -1162,31 +1197,11 @@ impl<'a> hir::lowering::Resolver for Resolver<'a> { self.def_map.get(&id).cloned() } - fn record_resolution(&mut self, id: NodeId, def: Def) { - self.def_map.insert(id, PathResolution::new(def)); - } - fn definitions(&mut self) -> &mut Definitions { &mut self.definitions } } -trait Named { - fn ident(&self) -> Ident; -} - -impl Named for ast::PathSegment { - fn ident(&self) -> Ident { - self.identifier - } -} - -impl Named for hir::PathSegment { - fn ident(&self) -> Ident { - Ident::with_empty_ctxt(self.name) - } -} - impl<'a> Resolver<'a> { pub fn new(session: &'a Session, krate: &Crate, @@ -1206,7 +1221,7 @@ impl<'a> Resolver<'a> { let mut definitions = Definitions::new(); DefCollector::new(&mut definitions).collect_root(); - let mut invocations = FnvHashMap(); + let mut invocations = FxHashMap(); invocations.insert(Mark::root(), arenas.alloc_invocation_data(InvocationData::root(graph_root))); @@ -1214,22 +1229,25 @@ impl<'a> Resolver<'a> { session: session, definitions: definitions, - macros_at_scope: FnvHashMap(), + macros_at_scope: FxHashMap(), // The outermost module has def ID 0; this is not reflected in the // AST. graph_root: graph_root, prelude: None, - trait_item_map: FnvHashMap(), - field_names: FnvHashMap(), + trait_item_map: FxHashMap(), + field_names: FxHashMap(), determined_imports: Vec::new(), indeterminate_imports: Vec::new(), current_module: graph_root, - value_ribs: vec![Rib::new(ModuleRibKind(graph_root))], - type_ribs: vec![Rib::new(ModuleRibKind(graph_root))], + ribs: PerNS { + value_ns: vec![Rib::new(ModuleRibKind(graph_root))], + type_ns: vec![Rib::new(ModuleRibKind(graph_root))], + macro_ns: None, + }, label_ribs: Vec::new(), current_trait_ref: None, @@ -1243,13 +1261,13 @@ impl<'a> Resolver<'a> { export_map: NodeMap(), trait_map: NodeMap(), module_map: module_map, + extern_crate_roots: FxHashMap(), - emit_errors: true, make_glob_map: make_glob_map == MakeGlobMap::Yes, glob_map: NodeMap(), - used_imports: FnvHashSet(), - used_crates: FnvHashSet(), + used_imports: FxHashSet(), + used_crates: FxHashSet(), maybe_unused_trait_imports: NodeSet(), privacy_errors: Vec::new(), @@ -1259,17 +1277,21 @@ impl<'a> Resolver<'a> { arenas: arenas, dummy_binding: arenas.alloc_name_binding(NameBinding { kind: NameBindingKind::Def(Def::Err), + expansion: Mark::root(), span: DUMMY_SP, vis: ty::Visibility::Public, }), - new_import_semantics: session.features.borrow().item_like_imports, + use_extern_macros: session.features.borrow().use_extern_macros, exported_macros: Vec::new(), crate_loader: crate_loader, - macro_names: FnvHashSet(), - builtin_macros: FnvHashMap(), + macro_names: FxHashSet(), + builtin_macros: FxHashMap(), lexical_macro_resolutions: Vec::new(), + macro_map: FxHashMap(), + macro_exports: Vec::new(), invocations: invocations, + name_already_seen: FxHashMap(), } } @@ -1285,16 +1307,22 @@ impl<'a> Resolver<'a> { } } + fn per_ns T>(&mut self, mut f: F) -> PerNS { + PerNS { + type_ns: f(self, TypeNS), + value_ns: f(self, ValueNS), + macro_ns: match self.use_extern_macros { + true => Some(f(self, MacroNS)), + false => None, + }, + } + } + /// Entry point to crate resolution. 
pub fn resolve_crate(&mut self, krate: &Crate) { - // Collect `DefId`s for exported macro defs. - for def in &krate.exported_macros { - DefCollector::new(&mut self.definitions).with_parent(CRATE_DEF_INDEX, |collector| { - collector.visit_macro_def(def) - }) - } - + ImportResolver { resolver: self }.finalize_imports(); self.current_module = self.graph_root; + self.finalize_current_module_macro_resolutions(); visit::walk_crate(self, krate); check_unused::check_crate(self, krate); @@ -1310,14 +1338,10 @@ impl<'a> Resolver<'a> { }) } - fn get_ribs<'b>(&'b mut self, ns: Namespace) -> &'b mut Vec> { - match ns { ValueNS => &mut self.value_ribs, TypeNS => &mut self.type_ribs } - } - fn record_use(&mut self, name: Name, ns: Namespace, binding: &'a NameBinding<'a>, span: Span) -> bool /* true if an error was reported */ { // track extern crates for unused_extern_crate lint - if let Some(DefId { krate, .. }) = binding.module().ok().and_then(ModuleS::def_id) { + if let Some(DefId { krate, .. }) = binding.module().and_then(ModuleS::def_id) { self.used_crates.insert(krate); } @@ -1329,10 +1353,14 @@ impl<'a> Resolver<'a> { self.record_use(name, ns, binding, span) } NameBindingKind::Import { .. } => false, - NameBindingKind::Ambiguity { b1, b2 } => { - let ambiguity_error = AmbiguityError { span: span, name: name, b1: b1, b2: b2 }; - self.ambiguity_errors.push(ambiguity_error); - true + NameBindingKind::Ambiguity { b1, b2, legacy } => { + self.ambiguity_errors.push(AmbiguityError { + span: span, name: name, lexical: false, b1: b1, b2: b2, legacy: legacy, + }); + if legacy { + self.record_use(name, ns, b1, span); + } + !legacy } _ => false } @@ -1340,168 +1368,10 @@ impl<'a> Resolver<'a> { fn add_to_glob_map(&mut self, id: NodeId, name: Name) { if self.make_glob_map { - self.glob_map.entry(id).or_insert_with(FnvHashSet).insert(name); + self.glob_map.entry(id).or_insert_with(FxHashSet).insert(name); } } - fn expect_module(&mut self, name: Name, binding: &'a NameBinding<'a>, span: Option) - -> ResolveResult> { - match binding.module() { - Ok(module) => Success(module), - Err(true) => Failed(None), - Err(false) => { - let msg = format!("Not a module `{}`", name); - Failed(span.map(|span| (span, msg))) - } - } - } - - /// Resolves the given module path from the given root `search_module`. - fn resolve_module_path_from_root(&mut self, - mut search_module: Module<'a>, - module_path: &[Ident], - index: usize, - span: Option) - -> ResolveResult> { - fn search_parent_externals<'a>(this: &mut Resolver<'a>, needle: Name, module: Module<'a>) - -> Option> { - match this.resolve_name_in_module(module, needle, TypeNS, false, None) { - Success(binding) if binding.is_extern_crate() => Some(module), - _ => if let (&ModuleKind::Def(..), Some(parent)) = (&module.kind, module.parent) { - search_parent_externals(this, needle, parent) - } else { - None - }, - } - } - - let mut index = index; - let module_path_len = module_path.len(); - - // Resolve the module part of the path. This does not involve looking - // upward though scope chains; we simply resolve names directly in - // modules as we go. - while index < module_path_len { - let name = module_path[index].name; - match self.resolve_name_in_module(search_module, name, TypeNS, false, span) { - Failed(_) => { - let segment_name = name.as_str(); - let module_name = module_to_string(search_module); - let msg = if "???" 
== &module_name { - let current_module = self.current_module; - match search_parent_externals(self, name, current_module) { - Some(module) => { - let path_str = names_to_string(module_path); - let target_mod_str = module_to_string(&module); - let current_mod_str = module_to_string(current_module); - - let prefix = if target_mod_str == current_mod_str { - "self::".to_string() - } else { - format!("{}::", target_mod_str) - }; - - format!("Did you mean `{}{}`?", prefix, path_str) - } - None => format!("Maybe a missing `extern crate {};`?", segment_name), - } - } else { - format!("Could not find `{}` in `{}`", segment_name, module_name) - }; - - return Failed(span.map(|span| (span, msg))); - } - Indeterminate => { - debug!("(resolving module path for import) module resolution is \ - indeterminate: {}", - name); - return Indeterminate; - } - Success(binding) => { - // Check to see whether there are type bindings, and, if - // so, whether there is a module within. - match self.expect_module(name, binding, span) { - Success(module) => search_module = module, - result @ _ => return result, - } - } - } - - index += 1; - } - - return Success(search_module); - } - - /// Attempts to resolve the module part of an import directive or path - /// rooted at the given module. - fn resolve_module_path(&mut self, - module_path: &[Ident], - use_lexical_scope: UseLexicalScopeFlag, - span: Option) - -> ResolveResult> { - if module_path.len() == 0 { - return Success(self.graph_root) // Use the crate root - } - - debug!("(resolving module path for import) processing `{}` rooted at `{}`", - names_to_string(module_path), - module_to_string(self.current_module)); - - // Resolve the module prefix, if any. - let module_prefix_result = self.resolve_module_prefix(module_path, span); - - let search_module; - let start_index; - match module_prefix_result { - Failed(err) => return Failed(err), - Indeterminate => { - debug!("(resolving module path for import) indeterminate; bailing"); - return Indeterminate; - } - Success(NoPrefixFound) => { - // There was no prefix, so we're considering the first element - // of the path. How we handle this depends on whether we were - // instructed to use lexical scope or not. - match use_lexical_scope { - DontUseLexicalScope => { - // This is a crate-relative path. We will start the - // resolution process at index zero. - search_module = self.graph_root; - start_index = 0; - } - UseLexicalScope => { - // This is not a crate-relative path. We resolve the - // first component of the path in the current lexical - // scope and then proceed to resolve below that. - let ident = module_path[0]; - let lexical_binding = - self.resolve_ident_in_lexical_scope(ident, TypeNS, span); - if let Some(binding) = lexical_binding.and_then(LexicalScopeBinding::item) { - match self.expect_module(ident.name, binding, span) { - Success(containing_module) => { - search_module = containing_module; - start_index = 1; - } - result @ _ => return result, - } - } else { - let msg = - format!("Use of undeclared type or module `{}`", ident.name); - return Failed(span.map(|span| (span, msg))); - } - } - } - } - Success(PrefixFound(ref containing_module, index)) => { - search_module = containing_module; - start_index = index; - } - } - - self.resolve_module_path_from_root(search_module, module_path, start_index, span) - } - /// This resolves the identifier `ident` in the namespace `ns` in the current lexical scope. 
/// More specifically, we proceed up the hierarchy of scopes and return the binding for /// `ident` in the first scope that defines it (or None if no scopes define it). @@ -1529,19 +1399,20 @@ impl<'a> Resolver<'a> { } // Walk backwards up the ribs in scope. - for i in (0 .. self.get_ribs(ns).len()).rev() { - if let Some(def) = self.get_ribs(ns)[i].bindings.get(&ident).cloned() { + for i in (0 .. self.ribs[ns].len()).rev() { + if let Some(def) = self.ribs[ns][i].bindings.get(&ident).cloned() { // The ident resolves to a type parameter or local variable. - return Some(LexicalScopeBinding::LocalDef(LocalDef { - ribs: Some((ns, i)), - def: def, + return Some(LexicalScopeBinding::Def(if let Some(span) = record_used { + self.adjust_local_def(LocalDef { ribs: Some((ns, i)), def: def }, span) + } else { + def })); } - if let ModuleRibKind(module) = self.get_ribs(ns)[i].kind { + if let ModuleRibKind(module) = self.ribs[ns][i].kind { let name = ident.name; - let item = self.resolve_name_in_module(module, name, ns, true, record_used); - if let Success(binding) = item { + let item = self.resolve_name_in_module(module, name, ns, false, record_used); + if let Ok(binding) = item { // The ident resolves to an item. return Some(LexicalScopeBinding::Item(binding)); } @@ -1549,14 +1420,14 @@ impl<'a> Resolver<'a> { if let ModuleKind::Block(..) = module.kind { // We can see through blocks } else if !module.no_implicit_prelude { return self.prelude.and_then(|prelude| { - self.resolve_name_in_module(prelude, name, ns, false, None).success() + self.resolve_name_in_module(prelude, name, ns, false, None).ok() }).map(LexicalScopeBinding::Item) } else { return None; } } - if let MacroDefinition(mac) = self.get_ribs(ns)[i].kind { + if let MacroDefinition(mac) = self.ribs[ns][i].kind { // If an invocation of this macro created `ident`, give up on `ident` // and switch to `ident`'s source from the macro definition. let (source_ctxt, source_macro) = ident.ctxt.source(); @@ -1569,45 +1440,6 @@ impl<'a> Resolver<'a> { None } - /// Resolves a "module prefix". A module prefix is one or both of (a) `self::`; - /// (b) some chain of `super::`. - /// grammar: (SELF MOD_SEP ) ? (SUPER MOD_SEP) * - fn resolve_module_prefix(&mut self, module_path: &[Ident], span: Option) - -> ResolveResult> { - if &*module_path[0].name.as_str() == "$crate" { - return Success(PrefixFound(self.resolve_crate_var(module_path[0].ctxt), 1)); - } - - // Start at the current module if we see `self` or `super`, or at the - // top of the crate otherwise. - let mut i = match &*module_path[0].name.as_str() { - "self" => 1, - "super" => 0, - _ => return Success(NoPrefixFound), - }; - - let mut containing_module = - self.module_map[&self.current_module.normal_ancestor_id.unwrap()]; - - // Now loop through all the `super`s we find. 
- while i < module_path.len() && "super" == module_path[i].name.as_str() { - debug!("(resolving module prefix) resolving `super` at {}", - module_to_string(&containing_module)); - if let Some(parent) = containing_module.parent { - containing_module = self.module_map[&parent.normal_ancestor_id.unwrap()]; - i += 1; - } else { - let msg = "There are too many initial `super`s.".into(); - return Failed(span.map(|span| (span, msg))); - } - } - - debug!("(resolving module prefix) finished resolving prefix at {}", - module_to_string(&containing_module)); - - return Success(PrefixFound(containing_module, i)); - } - fn resolve_crate_var(&mut self, mut crate_var_ctxt: SyntaxContext) -> Module<'a> { while crate_var_ctxt.source().0 != SyntaxContext::empty() { crate_var_ctxt = crate_var_ctxt.source().0; @@ -1641,14 +1473,15 @@ impl<'a> Resolver<'a> { if let Some(module) = module { // Move down in the graph. let orig_module = replace(&mut self.current_module, module); - self.value_ribs.push(Rib::new(ModuleRibKind(module))); - self.type_ribs.push(Rib::new(ModuleRibKind(module))); + self.ribs[ValueNS].push(Rib::new(ModuleRibKind(module))); + self.ribs[TypeNS].push(Rib::new(ModuleRibKind(module))); + self.finalize_current_module_macro_resolutions(); f(self); self.current_module = orig_module; - self.value_ribs.pop(); - self.type_ribs.pop(); + self.ribs[ValueNS].pop(); + self.ribs[TypeNS].pop(); } else { f(self); } @@ -1699,7 +1532,7 @@ impl<'a> Resolver<'a> { } ItemKind::DefaultImpl(_, ref trait_ref) => { - self.with_optional_trait_ref(Some(trait_ref), |_, _| {}); + self.with_optional_trait_ref(Some(trait_ref), |_, _| {}, None); } ItemKind::Impl(.., ref generics, ref opt_trait_ref, ref self_type, ref impl_items) => self.resolve_implementation(generics, @@ -1765,24 +1598,32 @@ impl<'a> Resolver<'a> { ItemKind::Use(ref view_path) => { match view_path.node { ast::ViewPathList(ref prefix, ref items) => { + let path: Vec<_> = + prefix.segments.iter().map(|seg| seg.identifier).collect(); // Resolve prefix of an import with empty braces (issue #28388) if items.is_empty() && !prefix.segments.is_empty() { - match self.resolve_crate_relative_path(prefix.span, - &prefix.segments, - TypeNS) { - Ok(binding) => { - let def = binding.def(); - self.record_def(item.id, PathResolution::new(def)); - } - Err(true) => self.record_def(item.id, err_path_resolution()), - Err(false) => { - resolve_error(self, - prefix.span, - ResolutionError::FailedToResolve( - &path_names_to_string(prefix, 0))); - self.record_def(item.id, err_path_resolution()); + let (scope, span) = (PathScope::Import, prefix.span); + // FIXME(#38012) This should be a module path, not anything in TypeNS. 
+ let result = + self.resolve_path(&path, scope, Some(TypeNS), Some(span)); + let (def, msg) = match result { + PathResult::Module(module) => (module.def().unwrap(), None), + PathResult::NonModule(res) if res.depth == 0 => + (res.base_def, None), + PathResult::NonModule(_) => { + // Resolve a module path for better errors + match self.resolve_path(&path, scope, None, Some(span)) { + PathResult::Failed(msg, _) => (Def::Err, Some(msg)), + _ => unreachable!(), + } } + PathResult::Indeterminate => unreachable!(), + PathResult::Failed(msg, _) => (Def::Err, Some(msg)), + }; + if let Some(msg) = msg { + resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); } + self.record_def(item.id, PathResolution::new(def)); } } _ => {} @@ -1803,7 +1644,7 @@ impl<'a> Resolver<'a> { match type_parameters { HasTypeParameters(generics, rib_kind) => { let mut function_type_rib = Rib::new(rib_kind); - let mut seen_bindings = FnvHashMap(); + let mut seen_bindings = FxHashMap(); for type_parameter in &generics.ty_params { let name = type_parameter.ident.name; debug!("with_type_parameter_rib: {}", type_parameter.id); @@ -1823,7 +1664,7 @@ impl<'a> Resolver<'a> { function_type_rib.bindings.insert(Ident::with_empty_ctxt(name), def); self.record_def(type_parameter.id, PathResolution::new(def)); } - self.type_ribs.push(function_type_rib); + self.ribs[TypeNS].push(function_type_rib); } NoTypeParameters => { @@ -1834,7 +1675,7 @@ impl<'a> Resolver<'a> { f(self); if let HasTypeParameters(..) = type_parameters { - self.type_ribs.pop(); + self.ribs[TypeNS].pop(); } } @@ -1849,93 +1690,61 @@ impl<'a> Resolver<'a> { fn with_constant_rib(&mut self, f: F) where F: FnOnce(&mut Resolver) { - self.value_ribs.push(Rib::new(ConstantItemRibKind)); - self.type_ribs.push(Rib::new(ConstantItemRibKind)); + self.ribs[ValueNS].push(Rib::new(ConstantItemRibKind)); + self.ribs[TypeNS].push(Rib::new(ConstantItemRibKind)); f(self); - self.type_ribs.pop(); - self.value_ribs.pop(); - } - - fn resolve_function(&mut self, - rib_kind: RibKind<'a>, - declaration: &FnDecl, - block: &Block) { - // Create a value rib for the function. - self.value_ribs.push(Rib::new(rib_kind)); - - // Create a label rib for the function. - self.label_ribs.push(Rib::new(rib_kind)); - - // Add each argument to the rib. - let mut bindings_list = FnvHashMap(); - for argument in &declaration.inputs { - self.resolve_pattern(&argument.pat, PatternSource::FnParam, &mut bindings_list); - - self.visit_ty(&argument.ty); - - debug!("(resolving function) recorded argument"); - } - visit::walk_fn_ret_ty(self, &declaration.output); - - // Resolve the function body. - self.visit_block(block); - - debug!("(resolving function) leaving function"); - - self.label_ribs.pop(); - self.value_ribs.pop(); + self.ribs[TypeNS].pop(); + self.ribs[ValueNS].pop(); } fn resolve_trait_reference(&mut self, - id: NodeId, - trait_path: &Path, - path_depth: usize) - -> Result { - self.resolve_path(id, trait_path, path_depth, TypeNS).and_then(|path_res| { - match path_res.base_def { - Def::Trait(_) => { - debug!("(resolving trait) found trait def: {:?}", path_res); - return Ok(path_res); - } - Def::Err => return Err(true), - _ => {} + path: &[Ident], + global: bool, + generics: Option<&Generics>, + span: Span) + -> PathResolution { + let scope = if global { PathScope::Global } else { PathScope::Lexical }; + let def = match self.resolve_path(path, scope, None, Some(span)) { + PathResult::Module(module) => Some(module.def().unwrap()), + PathResult::NonModule(..) 
=> return err_path_resolution(), + PathResult::Failed(msg, false) => { + resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); + return err_path_resolution(); + } + _ => match self.resolve_path(path, scope, Some(TypeNS), None) { + PathResult::NonModule(path_resolution) => Some(path_resolution.base_def), + _ => None, + }, + }; + + if let Some(def) = def { + if let Def::Trait(_) = def { + return PathResolution::new(def); } - let mut err = resolve_struct_error(self, trait_path.span, { - ResolutionError::IsNotATrait(&path_names_to_string(trait_path, path_depth)) + let mut err = resolve_struct_error(self, span, { + ResolutionError::IsNotATrait(&names_to_string(path), def.kind_name()) }); + if let Some(generics) = generics { + if let Some(span) = generics.span_for_name(&names_to_string(path)) { + err.span_label(span, &"type parameter defined here"); + } + } // If it's a typedef, give a note - if let Def::TyAlias(..) = path_res.base_def { + if let Def::TyAlias(..) = def { err.note(&format!("type aliases cannot be used for traits")); } err.emit(); - Err(true) - }).map_err(|error_reported| { - if error_reported { return } - + } else { // find possible candidates - let trait_name = trait_path.segments.last().unwrap().identifier.name; - let candidates = - self.lookup_candidates( - trait_name, - TypeNS, - |def| match def { - Def::Trait(_) => true, - _ => false, - }, - ); + let is_trait = |def| match def { Def::Trait(_) => true, _ => false }; + let candidates = self.lookup_candidates(path.last().unwrap().name, TypeNS, is_trait); - // create error object - let name = &path_names_to_string(trait_path, path_depth); - let error = - ResolutionError::UndeclaredTraitName( - name, - candidates, - ); - - resolve_error(self, trait_path.span, error); - }) + let path = names_to_string(path); + resolve_error(self, span, ResolutionError::UndeclaredTraitName(&path, candidates)); + } + err_path_resolution() } fn with_current_self_type(&mut self, self_type: &Ty, f: F) -> T @@ -1948,21 +1757,24 @@ impl<'a> Resolver<'a> { result } - fn with_optional_trait_ref(&mut self, opt_trait_ref: Option<&TraitRef>, f: F) -> T + fn with_optional_trait_ref(&mut self, + opt_trait_ref: Option<&TraitRef>, + f: F, + generics: Option<&Generics>) + -> T where F: FnOnce(&mut Resolver, Option) -> T { let mut new_val = None; let mut new_id = None; if let Some(trait_ref) = opt_trait_ref { - if let Ok(path_res) = self.resolve_trait_reference(trait_ref.ref_id, - &trait_ref.path, - 0) { - assert!(path_res.depth == 0); - self.record_def(trait_ref.ref_id, path_res); + let ast::Path { ref segments, span, global } = trait_ref.path; + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); + let path_res = self.resolve_trait_reference(&path, global, generics, span); + assert!(path_res.depth == 0); + self.record_def(trait_ref.ref_id, path_res); + if path_res.base_def != Def::Err { new_val = Some((path_res.base_def.def_id(), trait_ref.clone())); new_id = Some(path_res.base_def.def_id()); - } else { - self.record_def(trait_ref.ref_id, err_path_resolution()); } visit::walk_trait_ref(self, trait_ref); } @@ -1979,9 +1791,9 @@ impl<'a> Resolver<'a> { // plain insert (no renaming, types are not currently hygienic....) 
self_type_rib.bindings.insert(keywords::SelfType.ident(), self_def); - self.type_ribs.push(self_type_rib); + self.ribs[TypeNS].push(self_type_rib); f(self); - self.type_ribs.pop(); + self.ribs[TypeNS].pop(); } fn resolve_implementation(&mut self, @@ -2044,7 +1856,7 @@ impl<'a> Resolver<'a> { } }); }); - }); + }, Some(&generics)); }); } @@ -2069,7 +1881,7 @@ impl<'a> Resolver<'a> { walk_list!(self, visit_expr, &local.init); // Resolve the pattern. - self.resolve_pattern(&local.pat, PatternSource::Let, &mut FnvHashMap()); + self.resolve_pattern(&local.pat, PatternSource::Let, &mut FxHashMap()); } // build a map from pattern identifiers to binding-info's. @@ -2077,7 +1889,7 @@ impl<'a> Resolver<'a> { // that expands into an or-pattern where one 'x' was from the // user and one 'x' came from the macro. fn binding_mode_map(&mut self, pat: &Pat) -> BindingMap { - let mut binding_map = FnvHashMap(); + let mut binding_map = FxHashMap(); pat.walk(&mut |pat| { if let PatKind::Ident(binding_mode, ident, ref sub_pat) = pat.node { @@ -2135,9 +1947,9 @@ impl<'a> Resolver<'a> { } fn resolve_arm(&mut self, arm: &Arm) { - self.value_ribs.push(Rib::new(NormalRibKind)); + self.ribs[ValueNS].push(Rib::new(NormalRibKind)); - let mut bindings_list = FnvHashMap(); + let mut bindings_list = FxHashMap(); for pattern in &arm.pats { self.resolve_pattern(&pattern, PatternSource::Match, &mut bindings_list); } @@ -2149,7 +1961,7 @@ impl<'a> Resolver<'a> { walk_list!(self, visit_expr, &arm.guard); self.visit_expr(&arm.body); - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); } fn resolve_block(&mut self, block: &Block) { @@ -2161,11 +1973,12 @@ impl<'a> Resolver<'a> { let mut num_macro_definition_ribs = 0; if let Some(anonymous_module) = anonymous_module { debug!("(resolving block) found anonymous module, moving down"); - self.value_ribs.push(Rib::new(ModuleRibKind(anonymous_module))); - self.type_ribs.push(Rib::new(ModuleRibKind(anonymous_module))); + self.ribs[ValueNS].push(Rib::new(ModuleRibKind(anonymous_module))); + self.ribs[TypeNS].push(Rib::new(ModuleRibKind(anonymous_module))); self.current_module = anonymous_module; + self.finalize_current_module_macro_resolutions(); } else { - self.value_ribs.push(Rib::new(NormalRibKind)); + self.ribs[ValueNS].push(Rib::new(NormalRibKind)); } // Descend into the block. @@ -2173,7 +1986,7 @@ impl<'a> Resolver<'a> { if let Some(marks) = self.macros_at_scope.remove(&stmt.id) { num_macro_definition_ribs += marks.len() as u32; for mark in marks { - self.value_ribs.push(Rib::new(MacroDefinition(mark))); + self.ribs[ValueNS].push(Rib::new(MacroDefinition(mark))); self.label_ribs.push(Rib::new(MacroDefinition(mark))); } } @@ -2184,90 +1997,63 @@ impl<'a> Resolver<'a> { // Move back up. self.current_module = orig_module; for _ in 0 .. num_macro_definition_ribs { - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); self.label_ribs.pop(); } - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); if let Some(_) = anonymous_module { - self.type_ribs.pop(); + self.ribs[TypeNS].pop(); } debug!("(resolving block) leaving block"); } fn resolve_type(&mut self, ty: &Ty) { - match ty.node { - TyKind::Path(ref maybe_qself, ref path) => { - // This is a path in the type namespace. Walk through scopes - // looking for it. - if let Some(def) = self.resolve_possibly_assoc_item(ty.id, maybe_qself.as_ref(), - path, TypeNS) { - match def.base_def { - Def::Mod(..) 
if def.depth == 0 => { - self.session.span_err(path.span, "expected type, found module"); - self.record_def(ty.id, err_path_resolution()); - } - _ => { - // Write the result into the def map. - debug!("(resolving type) writing resolution for `{}` (id {}) = {:?}", - path_names_to_string(path, 0), ty.id, def); - self.record_def(ty.id, def); - } + if let TyKind::Path(ref maybe_qself, ref path) = ty.node { + // This is a path in the type namespace. Walk through scopes looking for it. + if let Some(def) = + self.resolve_possibly_assoc_item(ty.id, maybe_qself.as_ref(), path, TypeNS) { + match def.base_def { + Def::Mod(..) if def.depth == 0 => { + self.session.span_err(path.span, "expected type, found module"); + self.record_def(ty.id, err_path_resolution()); } + _ => { + // Write the result into the def map. + debug!("(resolving type) writing resolution for `{}` (id {}) = {:?}", + path_names_to_string(path, 0), ty.id, def); + self.record_def(ty.id, def); + } + } + } else { + self.record_def(ty.id, err_path_resolution()); + // Keep reporting some errors even if they're ignored above. + let kind = if maybe_qself.is_some() { "associated type" } else { "type name" }; + let is_invalid_self_type_name = { + path.segments.len() > 0 && + maybe_qself.is_none() && + path.segments[0].identifier.name == keywords::SelfType.name() + }; + + if is_invalid_self_type_name { + resolve_error(self, ty.span, ResolutionError::SelfUsedOutsideImplOrTrait); } else { - self.record_def(ty.id, err_path_resolution()); - - // Keep reporting some errors even if they're ignored above. - if let Err(true) = self.resolve_path(ty.id, path, 0, TypeNS) { - // `resolve_path` already reported the error - } else { - let kind = if maybe_qself.is_some() { - "associated type" - } else { - "type name" - }; - - let is_invalid_self_type_name = path.segments.len() > 0 && - maybe_qself.is_none() && - path.segments[0].identifier.name == - keywords::SelfType.name(); - if is_invalid_self_type_name { - resolve_error(self, - ty.span, - ResolutionError::SelfUsedOutsideImplOrTrait); - } else { - let segment = path.segments.last(); - let segment = segment.expect("missing name in path"); - let type_name = segment.identifier.name; - - let candidates = - self.lookup_candidates( - type_name, - TypeNS, - |def| match def { - Def::Trait(_) | - Def::Enum(_) | - Def::Struct(_) | - Def::Union(_) | - Def::TyAlias(_) => true, - _ => false, - }, - ); - - // create error object - let name = &path_names_to_string(path, 0); - let error = - ResolutionError::UseOfUndeclared( - kind, - name, - candidates, - ); - - resolve_error(self, ty.span, error); + let type_name = path.segments.last().unwrap().identifier.name; + let candidates = self.lookup_candidates(type_name, TypeNS, |def| { + match def { + Def::Trait(_) | + Def::Enum(_) | + Def::Struct(_) | + Def::Union(_) | + Def::TyAlias(_) => true, + _ => false, } - } + }); + + let name = &path_names_to_string(path, 0); + let error = ResolutionError::UseOfUndeclared(kind, name, candidates); + resolve_error(self, ty.span, error); } } - _ => {} } // Resolve embedded types. visit::walk_ty(self, ty); @@ -2278,7 +2064,7 @@ impl<'a> Resolver<'a> { pat_id: NodeId, outer_pat_id: NodeId, pat_src: PatternSource, - bindings: &mut FnvHashMap) + bindings: &mut FxHashMap) -> PathResolution { // Add the binding to the local ribs, if it // doesn't already exist in the bindings map. (We @@ -2308,7 +2094,7 @@ impl<'a> Resolver<'a> { Some(..) 
if pat_src == PatternSource::Match => { // `Variant1(a) | Variant2(a)`, ok // Reuse definition from the first `a`. - def = self.value_ribs.last_mut().unwrap().bindings[&ident.node]; + def = self.ribs[ValueNS].last_mut().unwrap().bindings[&ident.node]; } Some(..) => { span_bug!(ident.span, "two bindings with the same name from \ @@ -2318,7 +2104,7 @@ impl<'a> Resolver<'a> { // A completely fresh binding, add to the lists if it's valid. if ident.node.name != keywords::Invalid.name() { bindings.insert(ident.node, outer_pat_id); - self.value_ribs.last_mut().unwrap().bindings.insert(ident.node, def); + self.ribs[ValueNS].last_mut().unwrap().bindings.insert(ident.node, def); } } } @@ -2361,13 +2147,8 @@ impl<'a> Resolver<'a> { resolution } } else { - if let Err(false) = self.resolve_path(pat_id, path, 0, namespace) { - resolve_error( - self, - path.span, - ResolutionError::PatPathUnresolved(expected_what, path) - ); - } + let error = ResolutionError::PatPathUnresolved(expected_what, path); + resolve_error(self, path.span, error); err_path_resolution() }; @@ -2391,7 +2172,7 @@ impl<'a> Resolver<'a> { pat_src: PatternSource, // Maps idents to the node ID for the // outermost pattern that binds them. - bindings: &mut FnvHashMap) { + bindings: &mut FxHashMap) { // Visit all direct subpatterns of this pattern. let outer_pat_id = pat.id; pat.walk(&mut |pat| { @@ -2479,79 +2260,32 @@ impl<'a> Resolver<'a> { id: NodeId, maybe_qself: Option<&QSelf>, path: &Path, - namespace: Namespace) + ns: Namespace) -> Option { - let max_assoc_types; + let ast::Path { ref segments, global, span } = *path; + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); + let scope = if global { PathScope::Global } else { PathScope::Lexical }; - match maybe_qself { - Some(qself) => { - if qself.position == 0 { - // FIXME: Create some fake resolution that can't possibly be a type. - return Some(PathResolution { - base_def: Def::Mod(self.definitions.local_def_id(ast::CRATE_NODE_ID)), - depth: path.segments.len(), - }); - } - max_assoc_types = path.segments.len() - qself.position; - // Make sure the trait is valid. - let _ = self.resolve_trait_reference(id, path, max_assoc_types); - } - None => { - max_assoc_types = path.segments.len(); + if let Some(qself) = maybe_qself { + if qself.position == 0 { + // FIXME: Create some fake resolution that can't possibly be a type. + return Some(PathResolution { + base_def: Def::Mod(self.definitions.local_def_id(ast::CRATE_NODE_ID)), + depth: path.len(), + }); } + // Make sure the trait is valid. + self.resolve_trait_reference(&path[..qself.position], global, None, span); } - let mut resolution = self.with_no_errors(|this| { - this.resolve_path(id, path, 0, namespace).ok() - }); - for depth in 1..max_assoc_types { - if resolution.is_some() { - break; + let result = match self.resolve_path(&path, scope, Some(ns), Some(span)) { + PathResult::NonModule(path_res) => match path_res.base_def { + Def::Trait(..) if maybe_qself.is_some() => return None, + _ => path_res, + }, + PathResult::Module(module) if !module.is_normal() => { + PathResolution::new(module.def().unwrap()) } - self.with_no_errors(|this| { - let partial_resolution = this.resolve_path(id, path, depth, TypeNS).ok(); - if let Some(Def::Mod(..)) = partial_resolution.map(|r| r.base_def) { - // Modules cannot have associated items - } else { - resolution = partial_resolution; - } - }); - } - resolution - } - - /// Skips `path_depth` trailing segments, which is also reflected in the - /// returned value. 
See `hir::def::PathResolution` for more info. - fn resolve_path(&mut self, id: NodeId, path: &Path, path_depth: usize, namespace: Namespace) - -> Result { - debug!("resolve_path(id={:?} path={:?}, path_depth={:?})", id, path, path_depth); - - let span = path.span; - let segments = &path.segments[..path.segments.len() - path_depth]; - - let mk_res = |def| PathResolution { base_def: def, depth: path_depth }; - - if path.global { - let binding = self.resolve_crate_relative_path(span, segments, namespace); - return binding.map(|binding| mk_res(binding.def())); - } - - // Try to find a path to an item in a module. - let last_ident = segments.last().unwrap().identifier; - // Resolve a single identifier with fallback to primitive types - let resolve_identifier_with_fallback = |this: &mut Self, record_used| { - let def = this.resolve_identifier(last_ident, namespace, record_used); - match def { - None | Some(LocalDef{def: Def::Mod(..), ..}) if namespace == TypeNS => - this.primitive_type_table - .primitive_types - .get(&last_ident.name) - .map_or(def, |prim_ty| Some(LocalDef::from_def(Def::PrimTy(*prim_ty)))), - _ => def - } - }; - - if segments.len() == 1 { // In `a(::assoc_item)*` `a` cannot be a module. If `a` does resolve to a module we // don't report an error right away, but try to fallback to a primitive type. // So, we are still able to successfully resolve something like @@ -2564,47 +2298,148 @@ impl<'a> Resolver<'a> { // // Such behavior is required for backward compatibility. // The same fallback is used when `a` resolves to nothing. - let def = resolve_identifier_with_fallback(self, Some(span)).ok_or(false); - return def.and_then(|def| self.adjust_local_def(def, span).ok_or(true)).map(mk_res); - } - - let unqualified_def = resolve_identifier_with_fallback(self, None); - let qualified_binding = self.resolve_module_relative_path(span, segments, namespace); - match (qualified_binding, unqualified_def) { - (Ok(binding), Some(ref ud)) if binding.def() == ud.def && - segments[0].identifier.name.as_str() != "$crate" => { - self.session - .add_lint(lint::builtin::UNUSED_QUALIFICATIONS, - id, - span, - "unnecessary qualification".to_string()); + PathResult::Module(..) | PathResult::Failed(..) 
+ if scope == PathScope::Lexical && (ns == TypeNS || path.len() > 1) &&
+ self.primitive_type_table.primitive_types.contains_key(&path[0].name) => {
+ PathResolution {
+ base_def: Def::PrimTy(self.primitive_type_table.primitive_types[&path[0].name]),
+ depth: segments.len() - 1,
+ }
 }
- _ => {}
+ PathResult::Module(module) => PathResolution::new(module.def().unwrap()),
+ PathResult::Failed(msg, false) => {
+ resolve_error(self, span, ResolutionError::FailedToResolve(&msg));
+ err_path_resolution()
+ }
+ _ => return None,
+ };
+
+ if path.len() == 1 || result.base_def == Def::Err || global {
+ return Some(result);
 }
- qualified_binding.map(|binding| mk_res(binding.def()))
+ let unqualified_result = {
+ match self.resolve_path(&[*path.last().unwrap()], PathScope::Lexical, Some(ns), None) {
+ PathResult::NonModule(path_res) => path_res.base_def,
+ PathResult::Module(module) => module.def().unwrap(),
+ _ => return Some(result),
+ }
+ };
+ if result.base_def == unqualified_result && path[0].name != "$crate" {
+ let lint = lint::builtin::UNUSED_QUALIFICATIONS;
+ self.session.add_lint(lint, id, span, "unnecessary qualification".to_string());
+ }
+
+ Some(result)
 }
- // Resolve a single identifier
- fn resolve_identifier(&mut self,
- identifier: Ident,
- namespace: Namespace,
- record_used: Option)
- -> Option {
- if identifier.name == keywords::Invalid.name() {
- return None;
+ fn resolve_path(&mut self,
+ path: &[Ident],
+ scope: PathScope,
+ opt_ns: Option, // `None` indicates a module path
+ record_used: Option)
+ -> PathResult<'a> {
+ let (mut module, allow_self) = match scope {
+ PathScope::Lexical => (None, true),
+ PathScope::Import => (Some(self.graph_root), true),
+ PathScope::Global => (Some(self.graph_root), false),
+ };
+ let mut allow_super = allow_self;
+
+ for (i, &ident) in path.iter().enumerate() {
+ let is_last = i == path.len() - 1;
+ let ns = if is_last { opt_ns.unwrap_or(TypeNS) } else { TypeNS };
+
+ if i == 0 && allow_self && ns == TypeNS && ident.name == keywords::SelfValue.name() {
+ module = Some(self.module_map[&self.current_module.normal_ancestor_id.unwrap()]);
+ continue
+ } else if i == 0 && allow_self && ns == TypeNS && ident.name == "$crate" {
+ module = Some(self.resolve_crate_var(ident.ctxt));
+ continue
+ } else if allow_super && ns == TypeNS && ident.name == keywords::Super.name() {
+ let current_module = if i == 0 { self.current_module } else { module.unwrap() };
+ let self_module = self.module_map[&current_module.normal_ancestor_id.unwrap()];
+ if let Some(parent) = self_module.parent {
+ module = Some(self.module_map[&parent.normal_ancestor_id.unwrap()]);
+ continue
+ } else {
+ let msg = "There are too many initial `super`s.".to_string();
+ return PathResult::Failed(msg, false);
+ }
+ }
+ allow_super = false;
+
+ let binding = if let Some(module) = module {
+ self.resolve_name_in_module(module, ident.name, ns, false, record_used)
+ } else if opt_ns == Some(MacroNS) {
+ self.resolve_lexical_macro_path_segment(ident.name, ns, record_used)
+ } else {
+ match self.resolve_ident_in_lexical_scope(ident, ns, record_used) {
+ Some(LexicalScopeBinding::Item(binding)) => Ok(binding),
+ Some(LexicalScopeBinding::Def(def))
+ if opt_ns == Some(TypeNS) || opt_ns == Some(ValueNS) => {
+ return PathResult::NonModule(PathResolution {
+ base_def: def,
+ depth: path.len() - 1,
+ });
+ }
+ _ => Err(if record_used.is_some() { Determined } else { Undetermined }),
+ }
+ };
+
+ match binding {
+ Ok(binding) => {
+ if let Some(next_module) = binding.module() {
+ module =
Some(next_module); + } else if binding.def() == Def::Err { + return PathResult::NonModule(err_path_resolution()); + } else if opt_ns.is_some() && !(opt_ns == Some(MacroNS) && !is_last) { + return PathResult::NonModule(PathResolution { + base_def: binding.def(), + depth: path.len() - i - 1, + }); + } else { + return PathResult::Failed(format!("Not a module `{}`", ident), is_last); + } + } + Err(Undetermined) => return PathResult::Indeterminate, + Err(Determined) => { + if let Some(module) = module { + if opt_ns.is_some() && !module.is_normal() { + return PathResult::NonModule(PathResolution { + base_def: module.def().unwrap(), + depth: path.len() - i, + }); + } + } + let msg = if module.and_then(ModuleS::def) == self.graph_root.def() { + let is_mod = |def| match def { Def::Mod(..) => true, _ => false }; + let mut candidates = + self.lookup_candidates(ident.name, TypeNS, is_mod).candidates; + candidates.sort_by_key(|path| (path.segments.len(), path.to_string())); + if let Some(candidate) = candidates.get(0) { + format!("Did you mean `{}`?", candidate) + } else { + format!("Maybe a missing `extern crate {};`?", ident) + } + } else if i == 0 { + format!("Use of undeclared type or module `{}`", ident) + } else { + format!("Could not find `{}` in `{}`", ident, path[i - 1]) + }; + return PathResult::Failed(msg, is_last); + } + } } - self.resolve_ident_in_lexical_scope(identifier, namespace, record_used) - .map(LexicalScopeBinding::local_def) + PathResult::Module(module.unwrap()) } // Resolve a local definition, potentially adjusting for closures. - fn adjust_local_def(&mut self, local_def: LocalDef, span: Span) -> Option { + fn adjust_local_def(&mut self, local_def: LocalDef, span: Span) -> Def { let ribs = match local_def.ribs { - Some((TypeNS, i)) => &self.type_ribs[i + 1..], - Some((ValueNS, i)) => &self.value_ribs[i + 1..], - _ => &[] as &[_], + Some((ns, i)) => &self.ribs[ns][i + 1..], + None => &[] as &[_], }; let mut def = local_def.def; match def { @@ -2647,14 +2482,14 @@ impl<'a> Resolver<'a> { resolve_error(self, span, ResolutionError::CannotCaptureDynamicEnvironmentInFnItem); - return None; + return Def::Err; } ConstantItemRibKind => { // Still doesn't deal with upvars resolve_error(self, span, ResolutionError::AttemptToUseNonConstantValueInConstant); - return None; + return Def::Err; } } } @@ -2673,90 +2508,19 @@ impl<'a> Resolver<'a> { resolve_error(self, span, ResolutionError::TypeParametersFromOuterFunction); - return None; + return Def::Err; } ConstantItemRibKind => { // see #9186 resolve_error(self, span, ResolutionError::OuterTypeParameterContext); - return None; + return Def::Err; } } } } _ => {} } - return Some(def); - } - - // resolve a "module-relative" path, e.g. 
a::b::c - fn resolve_module_relative_path(&mut self, - span: Span, - segments: &[ast::PathSegment], - namespace: Namespace) - -> Result<&'a NameBinding<'a>, - bool /* true if an error was reported */> { - let module_path = - segments.split_last().unwrap().1.iter().map(|ps| ps.identifier).collect::>(); - - let containing_module; - match self.resolve_module_path(&module_path, UseLexicalScope, Some(span)) { - Failed(err) => { - if let Some((span, msg)) = err { - resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); - } - return Err(true); - } - Indeterminate => return Err(false), - Success(resulting_module) => { - containing_module = resulting_module; - } - } - - let name = segments.last().unwrap().identifier.name; - let result = - self.resolve_name_in_module(containing_module, name, namespace, false, Some(span)); - result.success().ok_or(false) - } - - /// Invariant: This must be called only during main resolution, not during - /// import resolution. - fn resolve_crate_relative_path(&mut self, span: Span, segments: &[T], namespace: Namespace) - -> Result<&'a NameBinding<'a>, - bool /* true if an error was reported */> - where T: Named, - { - let module_path = segments.split_last().unwrap().1.iter().map(T::ident).collect::>(); - let root_module = self.graph_root; - - let containing_module; - match self.resolve_module_path_from_root(root_module, &module_path, 0, Some(span)) { - Failed(err) => { - if let Some((span, msg)) = err { - resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); - } - return Err(true); - } - - Indeterminate => return Err(false), - - Success(resulting_module) => { - containing_module = resulting_module; - } - } - - let name = segments.last().unwrap().ident().name; - let result = - self.resolve_name_in_module(containing_module, name, namespace, false, Some(span)); - result.success().ok_or(false) - } - - fn with_no_errors(&mut self, f: F) -> T - where F: FnOnce(&mut Resolver) -> T - { - self.emit_errors = false; - let rs = f(self); - self.emit_errors = true; - rs + return def; } // Calls `f` with a `Resolver` whose current lexical scope is `module`'s lexical scope, @@ -2766,8 +2530,8 @@ impl<'a> Resolver<'a> { where F: FnOnce(&mut Resolver<'a>) -> T, { self.with_empty_ribs(|this| { - this.value_ribs.push(Rib::new(ModuleRibKind(module))); - this.type_ribs.push(Rib::new(ModuleRibKind(module))); + this.ribs[ValueNS].push(Rib::new(ModuleRibKind(module))); + this.ribs[TypeNS].push(Rib::new(ModuleRibKind(module))); f(this) }) } @@ -2775,13 +2539,11 @@ impl<'a> Resolver<'a> { fn with_empty_ribs(&mut self, f: F) -> T where F: FnOnce(&mut Resolver<'a>) -> T, { - let value_ribs = replace(&mut self.value_ribs, Vec::new()); - let type_ribs = replace(&mut self.type_ribs, Vec::new()); + let ribs = replace(&mut self.ribs, PerNS::>::default()); let label_ribs = replace(&mut self.label_ribs, Vec::new()); let result = f(self); - self.value_ribs = value_ribs; - self.type_ribs = type_ribs; + self.ribs = ribs; self.label_ribs = label_ribs; result } @@ -2829,17 +2591,17 @@ impl<'a> Resolver<'a> { } fn find_best_match(&mut self, name: &str) -> SuggestionType { - if let Some(macro_name) = self.macro_names.iter().find(|n| n.as_str() == name) { + if let Some(macro_name) = self.macro_names.iter().find(|&n| n == &name) { return SuggestionType::Macro(format!("{}!", macro_name)); } - let names = self.value_ribs + let names = self.ribs[ValueNS] .iter() .rev() .flat_map(|rib| rib.bindings.keys().map(|ident| &ident.name)); if let Some(found) = find_best_match_for_name(names, name, 
None) { - if name != found { + if found != name { return SuggestionType::Function(found); } } SuggestionType::NotFound @@ -2884,11 +2646,7 @@ impl<'a> Resolver<'a> { let msg = format!("did you mean to write: `{} {{ /* fields */ }}`?", path_name); - if self.emit_errors { - err.help(&msg); - } else { - err.span_help(expr.span, &msg); - } + err.help(&msg); err.emit(); self.record_def(expr.id, err_path_resolution()); } else { @@ -2909,13 +2667,17 @@ impl<'a> Resolver<'a> { } else { // Be helpful if the name refers to a struct let path_name = path_names_to_string(path, 0); - let type_res = self.with_no_errors(|this| { - this.resolve_path(expr.id, path, 0, TypeNS) - }); + let ast::Path { ref segments, global, .. } = *path; + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); + let scope = if global { PathScope::Global } else { PathScope::Lexical }; + let type_res = match self.resolve_path(&path, scope, Some(TypeNS), None) { + PathResult::NonModule(type_res) => Some(type_res), + _ => None, + }; self.record_def(expr.id, err_path_resolution()); - if let Ok(Def::Struct(..)) = type_res.map(|r| r.base_def) { + if let Some(Def::Struct(..)) = type_res.map(|r| r.base_def) { let error_variant = ResolutionError::StructVariantUsedAsFunction(&path_name); let mut err = resolve_struct_error(self, expr.span, error_variant); @@ -2923,96 +2685,76 @@ impl<'a> Resolver<'a> { let msg = format!("did you mean to write: `{} {{ /* fields */ }}`?", path_name); - if self.emit_errors { - err.help(&msg); - } else { - err.span_help(expr.span, &msg); - } + err.help(&msg); err.emit(); } else { // Keep reporting some errors even if they're ignored above. - if let Err(true) = self.resolve_path(expr.id, path, 0, ValueNS) { - // `resolve_path` already reported the error - } else { - let mut method_scope = false; - let mut is_static = false; - self.value_ribs.iter().rev().all(|rib| { - method_scope = match rib.kind { - MethodRibKind(is_static_) => { - is_static = is_static_; - true - } - ItemRibKind | ConstantItemRibKind => false, - _ => return true, // Keep advancing - }; - false // Stop advancing - }); - - if method_scope && - &path_name[..] 
== keywords::SelfValue.name().as_str() { - resolve_error(self, - expr.span, - ResolutionError::SelfNotAvailableInStaticMethod); - } else { - let last_name = path.segments.last().unwrap().identifier.name; - let (mut msg, is_field) = - match self.find_fallback_in_self_type(last_name) { - NoSuggestion => { - // limit search to 5 to reduce the number - // of stupid suggestions - (match self.find_best_match(&path_name) { - SuggestionType::Macro(s) => { - format!("the macro `{}`", s) - } - SuggestionType::Function(s) => format!("`{}`", s), - SuggestionType::NotFound => "".to_string(), - }, false) - } - Field => { - (if is_static && method_scope { - "".to_string() - } else { - format!("`self.{}`", path_name) - }, true) - } - TraitItem => (format!("to call `self.{}`", path_name), false), - TraitMethod(path_str) => - (format!("to call `{}::{}`", path_str, path_name), false), - }; - - let mut context = UnresolvedNameContext::Other; - let mut def = Def::Err; - if !msg.is_empty() { - msg = format!("did you mean {}?", msg); - } else { - // we display a help message if this is a module - let name_path: Vec<_> = - path.segments.iter().map(|seg| seg.identifier).collect(); - - match self.resolve_module_path(&name_path[..], - UseLexicalScope, - Some(expr.span)) { - Success(e) => { - if let Some(def_type) = e.def() { - def = def_type; - } - context = UnresolvedNameContext::PathIsMod(parent); - }, - _ => {}, - }; + let mut method_scope = false; + let mut is_static = false; + self.ribs[ValueNS].iter().rev().all(|rib| { + method_scope = match rib.kind { + MethodRibKind(is_static_) => { + is_static = is_static_; + true } + ItemRibKind | ConstantItemRibKind => false, + _ => return true, // Keep advancing + }; + false // Stop advancing + }); - resolve_error(self, - expr.span, - ResolutionError::UnresolvedName { - path: &path_name, - message: &msg, - context: context, - is_static_method: method_scope && is_static, - is_field: is_field, - def: def, - }); + if method_scope && keywords::SelfValue.name() == &*path_name { + let error = ResolutionError::SelfNotAvailableInStaticMethod; + resolve_error(self, expr.span, error); + } else { + let fallback = + self.find_fallback_in_self_type(path.last().unwrap().name); + let (mut msg, is_field) = match fallback { + NoSuggestion => { + // limit search to 5 to reduce the number + // of stupid suggestions + (match self.find_best_match(&path_name) { + SuggestionType::Macro(s) => { + format!("the macro `{}`", s) + } + SuggestionType::Function(s) => format!("`{}`", s), + SuggestionType::NotFound => "".to_string(), + }, false) + } + Field => { + (if is_static && method_scope { + "".to_string() + } else { + format!("`self.{}`", path_name) + }, true) + } + TraitItem => (format!("to call `self.{}`", path_name), false), + TraitMethod(path_str) => + (format!("to call `{}::{}`", path_str, path_name), false), + }; + + let mut context = UnresolvedNameContext::Other; + let mut def = Def::Err; + if !msg.is_empty() { + msg = format!("did you mean {}?", msg); + } else { + // we display a help message if this is a module + if let PathResult::Module(module) = + self.resolve_path(&path, scope, None, None) { + def = module.def().unwrap(); + context = UnresolvedNameContext::PathIsMod(parent); + } } + + let error = ResolutionError::UnresolvedName { + path: &path_name, + message: &msg, + context: context, + is_static_method: method_scope && is_static, + is_field: is_field, + def: def, + }; + resolve_error(self, expr.span, error); } } } @@ -3026,31 +2768,34 @@ impl<'a> Resolver<'a> { 
visit::walk_expr(self, expr); } - ExprKind::Break(Some(label)) | ExprKind::Continue(Some(label)) => { + ExprKind::Break(Some(label), _) | ExprKind::Continue(Some(label)) => { match self.search_label(label.node) { None => { self.record_def(expr.id, err_path_resolution()); resolve_error(self, label.span, - ResolutionError::UndeclaredLabel(&label.node.name.as_str())) + ResolutionError::UndeclaredLabel(&label.node.name.as_str())); } Some(def @ Def::Label(_)) => { // Since this def is a label, it is never read. - self.record_def(expr.id, PathResolution::new(def)) + self.record_def(expr.id, PathResolution::new(def)); } Some(_) => { - span_bug!(expr.span, "label wasn't mapped to a label def!") + span_bug!(expr.span, "label wasn't mapped to a label def!"); } } + + // visit `break` argument if any + visit::walk_expr(self, expr); } ExprKind::IfLet(ref pattern, ref subexpression, ref if_block, ref optional_else) => { self.visit_expr(subexpression); - self.value_ribs.push(Rib::new(NormalRibKind)); - self.resolve_pattern(pattern, PatternSource::IfLet, &mut FnvHashMap()); + self.ribs[ValueNS].push(Rib::new(NormalRibKind)); + self.resolve_pattern(pattern, PatternSource::IfLet, &mut FxHashMap()); self.visit_block(if_block); - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); optional_else.as_ref().map(|expr| self.visit_expr(expr)); } @@ -3064,22 +2809,22 @@ impl<'a> Resolver<'a> { ExprKind::WhileLet(ref pattern, ref subexpression, ref block, label) => { self.visit_expr(subexpression); - self.value_ribs.push(Rib::new(NormalRibKind)); - self.resolve_pattern(pattern, PatternSource::WhileLet, &mut FnvHashMap()); + self.ribs[ValueNS].push(Rib::new(NormalRibKind)); + self.resolve_pattern(pattern, PatternSource::WhileLet, &mut FxHashMap()); self.resolve_labeled_block(label, expr.id, block); - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); } ExprKind::ForLoop(ref pattern, ref subexpression, ref block, label) => { self.visit_expr(subexpression); - self.value_ribs.push(Rib::new(NormalRibKind)); - self.resolve_pattern(pattern, PatternSource::For, &mut FnvHashMap()); + self.ribs[ValueNS].push(Rib::new(NormalRibKind)); + self.resolve_pattern(pattern, PatternSource::For, &mut FxHashMap()); self.resolve_labeled_block(label, expr.id, block); - self.value_ribs.pop(); + self.ribs[ValueNS].pop(); } ExprKind::Field(ref subexpression, _) => { @@ -3208,6 +2953,7 @@ impl<'a> Resolver<'a> { let mut lookup_results = Vec::new(); let mut worklist = Vec::new(); + let mut seen_modules = FxHashSet(); worklist.push((self.graph_root, Vec::new(), false)); while let Some((in_module, @@ -3218,7 +2964,7 @@ impl<'a> Resolver<'a> { in_module.for_each_child(|name, ns, name_binding| { // avoid imports entirely - if name_binding.is_import() { return; } + if name_binding.is_import() && !name_binding.is_extern_crate() { return; } // collect results based on the filter function if name == lookup_name && ns == namespace { @@ -3235,7 +2981,7 @@ impl<'a> Resolver<'a> { segms.push(segment); let path = Path { span: span, - global: true, + global: false, segments: segms, }; // the entity is accessible in the following cases: @@ -3252,28 +2998,18 @@ impl<'a> Resolver<'a> { } // collect submodules to explore - if let Ok(module) = name_binding.module() { + if let Some(module) = name_binding.module() { // form the path - let path_segments = match module.kind { - _ if module.parent.is_none() => path_segments.clone(), - ModuleKind::Def(_, name) => { - let mut paths = path_segments.clone(); - let ident = Ident::with_empty_ctxt(name); - let params = 
PathParameters::none(); - let segm = PathSegment { - identifier: ident, - parameters: params, - }; - paths.push(segm); - paths - } - _ => bug!(), - }; + let mut path_segments = path_segments.clone(); + path_segments.push(PathSegment { + identifier: Ident::with_empty_ctxt(name), + parameters: PathParameters::none(), + }); if !in_module_is_extern || name_binding.vis == ty::Visibility::Public { // add the module to the lookup let is_extern = in_module_is_extern || name_binding.is_extern_crate(); - if !worklist.iter().any(|&(m, ..)| m.def() == module.def()) { + if seen_modules.insert(module.def_id().unwrap()) { worklist.push((module, path_segments, is_extern)); } } @@ -3295,34 +3031,32 @@ impl<'a> Resolver<'a> { } fn resolve_visibility(&mut self, vis: &ast::Visibility) -> ty::Visibility { - let (path, id) = match *vis { + let (segments, span, id) = match *vis { ast::Visibility::Public => return ty::Visibility::Public, ast::Visibility::Crate(_) => return ty::Visibility::Restricted(ast::CRATE_NODE_ID), - ast::Visibility::Restricted { ref path, id } => (path, id), + ast::Visibility::Restricted { ref path, id } => (&path.segments, path.span, id), ast::Visibility::Inherited => { return ty::Visibility::Restricted(self.current_module.normal_ancestor_id.unwrap()); } }; - let segments: Vec<_> = path.segments.iter().map(|seg| seg.identifier).collect(); + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); let mut path_resolution = err_path_resolution(); - let vis = match self.resolve_module_path(&segments, DontUseLexicalScope, Some(path.span)) { - Success(module) => { + let vis = match self.resolve_path(&path, PathScope::Import, None, Some(span)) { + PathResult::Module(module) => { path_resolution = PathResolution::new(module.def().unwrap()); ty::Visibility::Restricted(module.normal_ancestor_id.unwrap()) } - Indeterminate => unreachable!(), - Failed(err) => { - if let Some((span, msg)) = err { - self.session.span_err(span, &format!("failed to resolve module path. {}", msg)); - } + PathResult::Failed(msg, _) => { + self.session.span_err(span, &format!("failed to resolve module path. 
{}", msg)); ty::Visibility::Public } + _ => ty::Visibility::Public, }; self.def_map.insert(id, path_resolution); if !self.is_accessible(vis) { let msg = format!("visibilities can only be restricted to ancestor modules"); - self.session.span_err(path.span, &msg); + self.session.span_err(span, &msg); } vis } @@ -3337,24 +3071,51 @@ impl<'a> Resolver<'a> { fn report_errors(&mut self) { self.report_shadowing_errors(); - let mut reported_spans = FnvHashSet(); + let mut reported_spans = FxHashSet(); - for &AmbiguityError { span, name, b1, b2 } in &self.ambiguity_errors { + for &AmbiguityError { span, name, b1, b2, lexical, legacy } in &self.ambiguity_errors { if !reported_spans.insert(span) { continue } - let msg1 = format!("`{}` could resolve to the name imported here", name); - let msg2 = format!("`{}` could also resolve to the name imported here", name); - self.session.struct_span_err(span, &format!("`{}` is ambiguous", name)) - .span_note(b1.span, &msg1) - .span_note(b2.span, &msg2) - .note(&format!("Consider adding an explicit import of `{}` to disambiguate", name)) - .emit(); + let participle = |binding: &NameBinding| { + if binding.is_import() { "imported" } else { "defined" } + }; + let msg1 = format!("`{}` could resolve to the name {} here", name, participle(b1)); + let msg2 = format!("`{}` could also resolve to the name {} here", name, participle(b2)); + let note = if !lexical && b1.is_glob_import() { + format!("consider adding an explicit import of `{}` to disambiguate", name) + } else if let Def::Macro(..) = b1.def() { + format!("macro-expanded {} do not shadow", + if b1.is_import() { "macro imports" } else { "macros" }) + } else { + format!("macro-expanded {} do not shadow when used in a macro invocation path", + if b1.is_import() { "imports" } else { "items" }) + }; + if legacy { + let id = match b2.kind { + NameBindingKind::Import { directive, .. } => directive.id, + _ => unreachable!(), + }; + let mut span = MultiSpan::from_span(span); + span.push_span_label(b1.span, msg1); + span.push_span_label(b2.span, msg2); + let msg = format!("`{}` is ambiguous", name); + self.session.add_lint(lint::builtin::LEGACY_IMPORTS, id, span, msg); + } else { + self.session.struct_span_err(span, &format!("`{}` is ambiguous", name)) + .span_note(b1.span, &msg1) + .span_note(b2.span, &msg2) + .note(¬e) + .emit(); + } } for &PrivacyError(span, name, binding) in &self.privacy_errors { if !reported_spans.insert(span) { continue } if binding.is_extern_crate() { // Warn when using an inaccessible extern crate. - let node_id = binding.module().unwrap().extern_crate_id.unwrap(); + let node_id = match binding.kind { + NameBindingKind::Import { directive, .. 
} => directive.id, + _ => unreachable!(), + }; let msg = format!("extern crate `{}` is private", name); self.session.add_lint(lint::builtin::INACCESSIBLE_EXTERN_CRATE, node_id, span, msg); } else { @@ -3366,12 +3127,12 @@ impl<'a> Resolver<'a> { fn report_shadowing_errors(&mut self) { for (name, scope) in replace(&mut self.lexical_macro_resolutions, Vec::new()) { - self.resolve_macro_name(scope, name); + self.resolve_legacy_scope(scope, name, true); } - let mut reported_errors = FnvHashSet(); + let mut reported_errors = FxHashSet(); for binding in replace(&mut self.disallowed_shadowing, Vec::new()) { - if self.resolve_macro_name(binding.parent, binding.name).is_some() && + if self.resolve_legacy_scope(&binding.parent, binding.name, false).is_some() && reported_errors.insert((binding.name, binding.span)) { let msg = format!("`{}` is already in scope", binding.name); self.session.struct_span_err(binding.span, &msg) @@ -3382,7 +3143,7 @@ impl<'a> Resolver<'a> { } } - fn report_conflict(&self, + fn report_conflict(&mut self, parent: Module, name: Name, ns: Namespace, @@ -3400,18 +3161,26 @@ impl<'a> Resolver<'a> { _ => "enum", }; - let (participle, noun) = match old_binding.is_import() || old_binding.is_extern_crate() { + let (participle, noun) = match old_binding.is_import() { true => ("imported", "import"), false => ("defined", "definition"), }; let span = binding.span; + + if let Some(s) = self.name_already_seen.get(&name) { + if s == &span { + return; + } + } + let msg = { let kind = match (ns, old_binding.module()) { (ValueNS, _) => "a value", - (TypeNS, Ok(module)) if module.extern_crate_id.is_some() => "an extern crate", - (TypeNS, Ok(module)) if module.is_normal() => "a module", - (TypeNS, Ok(module)) if module.is_trait() => "a trait", + (MacroNS, _) => "a macro", + (TypeNS, _) if old_binding.is_extern_crate() => "an extern crate", + (TypeNS, Some(module)) if module.is_normal() => "a module", + (TypeNS, Some(module)) if module.is_trait() => "a trait", (TypeNS, _) => "a type", }; format!("{} named `{}` has already been {} in this {}", @@ -3424,7 +3193,7 @@ impl<'a> Resolver<'a> { e.span_label(span, &format!("`{}` was already imported", name)); e }, - (true, _) | (_, true) if binding.is_import() || old_binding.is_import() => { + (true, _) | (_, true) if binding.is_import() && old_binding.is_import() => { let mut e = struct_span_err!(self.session, span, E0254, "{}", msg); e.span_label(span, &"already imported"); e @@ -3457,6 +3226,7 @@ impl<'a> Resolver<'a> { err.span_label(old_binding.span, &format!("previous {} of `{}` here", noun, name)); } err.emit(); + self.name_already_seen.insert(name, span); } } @@ -3548,7 +3318,7 @@ fn module_to_string(module: Module) -> String { } } else { // danger, shouldn't be ident? - names.push(token::str_to_ident("")); + names.push(Ident::from_str("")); collect_mod(names, module.parent.unwrap()); } } diff --git a/src/librustc_resolve/macros.rs b/src/librustc_resolve/macros.rs index e3078a42f6..59452d3040 100644 --- a/src/librustc_resolve/macros.rs +++ b/src/librustc_resolve/macros.rs @@ -8,22 +8,30 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use {Module, Resolver}; +use {AmbiguityError, Resolver, ResolutionError, resolve_error}; +use {Module, ModuleKind, NameBinding, NameBindingKind, PathScope, PathResult}; +use Namespace::{self, MacroNS}; use build_reduced_graph::BuildReducedGraphVisitor; -use rustc::hir::def_id::{CRATE_DEF_INDEX, DefIndex}; +use resolve_imports::ImportResolver; +use rustc::hir::def_id::{DefId, BUILTIN_MACROS_CRATE, CRATE_DEF_INDEX, DefIndex}; +use rustc::hir::def::{Def, Export}; use rustc::hir::map::{self, DefCollector}; +use rustc::ty; use std::cell::Cell; use std::rc::Rc; -use syntax::ast; +use syntax::ast::{self, Name}; use syntax::errors::DiagnosticBuilder; use syntax::ext::base::{self, Determinacy, MultiModifier, MultiDecorator}; use syntax::ext::base::{NormalTT, SyntaxExtension}; use syntax::ext::expand::Expansion; use syntax::ext::hygiene::Mark; use syntax::ext::tt::macro_rules; -use syntax::parse::token::intern; +use syntax::feature_gate::{emit_feature_err, GateIssue}; +use syntax::fold::{self, Folder}; +use syntax::ptr::P; use syntax::util::lev_distance::find_best_match_for_name; -use syntax_pos::Span; +use syntax::visit::Visitor; +use syntax_pos::{Span, DUMMY_SP}; #[derive(Clone)] pub struct InvocationData<'a> { @@ -59,26 +67,18 @@ pub enum LegacyScope<'a> { Binding(&'a LegacyBinding<'a>), } -impl<'a> LegacyScope<'a> { - fn simplify_expansion(mut invoc: &'a InvocationData<'a>) -> Self { - while let LegacyScope::Invocation(_) = invoc.expansion.get() { - match invoc.legacy_scope.get() { - LegacyScope::Expansion(new_invoc) => invoc = new_invoc, - LegacyScope::Binding(_) => break, - scope @ _ => return scope, - } - } - LegacyScope::Expansion(invoc) - } -} - pub struct LegacyBinding<'a> { - pub parent: LegacyScope<'a>, + pub parent: Cell>, pub name: ast::Name, ext: Rc, pub span: Span, } +pub enum MacroBinding<'a> { + Legacy(&'a LegacyBinding<'a>), + Modern(&'a NameBinding<'a>), +} + impl<'a> base::Resolver for Resolver<'a> { fn next_node_id(&mut self) -> ast::NodeId { self.session.next_node_id() @@ -97,28 +97,59 @@ impl<'a> base::Resolver for Resolver<'a> { mark } + fn eliminate_crate_var(&mut self, item: P) -> P { + struct EliminateCrateVar<'b, 'a: 'b>(&'b mut Resolver<'a>); + + impl<'a, 'b> Folder for EliminateCrateVar<'a, 'b> { + fn fold_path(&mut self, mut path: ast::Path) -> ast::Path { + let ident = path.segments[0].identifier; + if ident.name == "$crate" { + path.global = true; + let module = self.0.resolve_crate_var(ident.ctxt); + if module.is_local() { + path.segments.remove(0); + } else { + path.segments[0].identifier = match module.kind { + ModuleKind::Def(_, name) => ast::Ident::with_empty_ctxt(name), + _ => unreachable!(), + }; + } + } + path + } + + fn fold_mac(&mut self, mac: ast::Mac) -> ast::Mac { + fold::noop_fold_mac(mac, self) + } + } + + EliminateCrateVar(self).fold_item(item).expect_one("") + } + fn visit_expansion(&mut self, mark: Mark, expansion: &Expansion) { let invocation = self.invocations[&mark]; self.collect_def_ids(invocation, expansion); self.current_module = invocation.module.get(); + self.current_module.unresolved_invocations.borrow_mut().remove(&mark); let mut visitor = BuildReducedGraphVisitor { resolver: self, legacy_scope: LegacyScope::Invocation(invocation), expansion: mark, }; expansion.visit_with(&mut visitor); + self.current_module.unresolved_invocations.borrow_mut().remove(&mark); invocation.expansion.set(visitor.legacy_scope); } fn add_macro(&mut self, scope: Mark, mut def: ast::MacroDef, export: bool) { - if &def.ident.name.as_str() == "macro_rules" { + if 
def.ident.name == "macro_rules" { self.session.span_err(def.span, "user-defined macros may not be named `macro_rules`"); } let invocation = self.invocations[&scope]; let binding = self.arenas.alloc_legacy_binding(LegacyBinding { - parent: invocation.legacy_scope.get(), + parent: Cell::new(invocation.legacy_scope.get()), name: def.ident.name, ext: Rc::new(macro_rules::compile(&self.session.parse_sess, &def)), span: def.span, @@ -128,6 +159,13 @@ impl<'a> base::Resolver for Resolver<'a> { if export { def.id = self.next_node_id(); + DefCollector::new(&mut self.definitions).with_parent(CRATE_DEF_INDEX, |collector| { + collector.visit_macro_def(&def) + }); + self.macro_exports.push(Export { + name: def.ident.name, + def: Def::Macro(self.definitions.local_def_id(def.id)), + }); self.exported_macros.push(def); } } @@ -136,18 +174,32 @@ impl<'a> base::Resolver for Resolver<'a> { if let NormalTT(..) = *ext { self.macro_names.insert(ident.name); } - self.builtin_macros.insert(ident.name, ext); + let def_id = DefId { + krate: BUILTIN_MACROS_CRATE, + index: DefIndex::new(self.macro_map.len()), + }; + self.macro_map.insert(def_id, ext); + let binding = self.arenas.alloc_name_binding(NameBinding { + kind: NameBindingKind::Def(Def::Macro(def_id)), + span: DUMMY_SP, + vis: ty::Visibility::PrivateExternal, + expansion: Mark::root(), + }); + self.builtin_macros.insert(ident.name, binding); } fn add_expansions_at_stmt(&mut self, id: ast::NodeId, macros: Vec) { self.macros_at_scope.insert(id, macros); } + fn resolve_imports(&mut self) { + ImportResolver { resolver: self }.resolve_imports() + } + fn find_attr_invoc(&mut self, attrs: &mut Vec) -> Option { for i in 0..attrs.len() { - let name = intern(&attrs[i].name()); - match self.builtin_macros.get(&name) { - Some(ext) => match **ext { + match self.builtin_macros.get(&attrs[i].name()).cloned() { + Some(binding) => match *binding.get_macro(self) { MultiModifier(..) | MultiDecorator(..) | SyntaxExtension::AttrProcMacro(..) 
=> { return Some(attrs.remove(i)) } @@ -161,72 +213,201 @@ impl<'a> base::Resolver for Resolver<'a> { fn resolve_macro(&mut self, scope: Mark, path: &ast::Path, force: bool) -> Result, Determinacy> { - if path.segments.len() > 1 || path.global || !path.segments[0].parameters.is_empty() { - self.session.span_err(path.span, "expected macro name without module separators"); + let ast::Path { ref segments, global, span } = *path; + if segments.iter().any(|segment| !segment.parameters.is_empty()) { + let kind = + if segments.last().unwrap().parameters.is_empty() { "module" } else { "macro" }; + let msg = format!("type parameters are not allowed on {}s", kind); + self.session.span_err(path.span, &msg); return Err(Determinacy::Determined); } - let name = path.segments[0].identifier.name; + let path_scope = if global { PathScope::Global } else { PathScope::Lexical }; + let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect(); let invocation = self.invocations[&scope]; - if let LegacyScope::Expansion(parent) = invocation.legacy_scope.get() { - invocation.legacy_scope.set(LegacyScope::simplify_expansion(parent)); - } - self.resolve_macro_name(invocation.legacy_scope.get(), name).ok_or_else(|| { - if force { - let msg = format!("macro undefined: '{}!'", name); - let mut err = self.session.struct_span_err(path.span, &msg); - self.suggest_macro_name(&name.as_str(), &mut err); - err.emit(); - Determinacy::Determined - } else { - Determinacy::Undetermined + self.current_module = invocation.module.get(); + + if path.len() > 1 || global { + if !self.use_extern_macros { + let msg = "non-ident macro paths are experimental"; + let feature = "use_extern_macros"; + emit_feature_err(&self.session.parse_sess, feature, span, GateIssue::Language, msg); + return Err(Determinacy::Determined); } - }) + + let ext = match self.resolve_path(&path, path_scope, Some(MacroNS), None) { + PathResult::NonModule(path_res) => Ok(self.get_macro(path_res.base_def)), + PathResult::Module(..) => unreachable!(), + PathResult::Indeterminate if !force => return Err(Determinacy::Undetermined), + _ => Err(Determinacy::Determined), + }; + self.current_module.macro_resolutions.borrow_mut() + .push((path.into_boxed_slice(), path_scope, span)); + return ext; + } + + let name = path[0].name; + let result = match self.resolve_legacy_scope(&invocation.legacy_scope, name, false) { + Some(MacroBinding::Legacy(binding)) => Ok(binding.ext.clone()), + Some(MacroBinding::Modern(binding)) => Ok(binding.get_macro(self)), + None => match self.resolve_lexical_macro_path_segment(name, MacroNS, None) { + Ok(binding) => Ok(binding.get_macro(self)), + Err(Determinacy::Undetermined) if !force => return Err(Determinacy::Undetermined), + _ => { + let msg = format!("macro undefined: '{}!'", name); + let mut err = self.session.struct_span_err(span, &msg); + self.suggest_macro_name(&name.as_str(), &mut err); + err.emit(); + return Err(Determinacy::Determined); + }, + }, + }; + + if self.use_extern_macros { + self.current_module.legacy_macro_resolutions.borrow_mut().push((scope, name, span)); + } + result } } impl<'a> Resolver<'a> { - pub fn resolve_macro_name(&mut self, mut scope: LegacyScope<'a>, name: ast::Name) - -> Option> { + // Resolve the initial segment of a non-global macro path (e.g. 
`foo` in `foo::bar!();`) + pub fn resolve_lexical_macro_path_segment(&mut self, + name: Name, + ns: Namespace, + record_used: Option) + -> Result<&'a NameBinding<'a>, Determinacy> { + let mut module = self.current_module; + let mut potential_expanded_shadower: Option<&NameBinding> = None; + loop { + // Since expanded macros may not shadow the lexical scope (enforced below), + // we can ignore unresolved invocations (indicated by the penultimate argument). + match self.resolve_name_in_module(module, name, ns, true, record_used) { + Ok(binding) => { + let span = match record_used { + Some(span) => span, + None => return Ok(binding), + }; + match potential_expanded_shadower { + Some(shadower) if shadower.def() != binding.def() => { + self.ambiguity_errors.push(AmbiguityError { + span: span, name: name, b1: shadower, b2: binding, lexical: true, + legacy: false, + }); + return Ok(shadower); + } + _ if binding.expansion == Mark::root() => return Ok(binding), + _ => potential_expanded_shadower = Some(binding), + } + }, + Err(Determinacy::Undetermined) => return Err(Determinacy::Undetermined), + Err(Determinacy::Determined) => {} + } + + match module.kind { + ModuleKind::Block(..) => module = module.parent.unwrap(), + ModuleKind::Def(..) => return match potential_expanded_shadower { + Some(binding) => Ok(binding), + None if record_used.is_some() => Err(Determinacy::Determined), + None => Err(Determinacy::Undetermined), + }, + } + } + } + + pub fn resolve_legacy_scope(&mut self, + mut scope: &'a Cell>, + name: Name, + record_used: bool) + -> Option> { let mut possible_time_travel = None; let mut relative_depth: u32 = 0; + let mut binding = None; loop { - scope = match scope { + match scope.get() { LegacyScope::Empty => break, LegacyScope::Expansion(invocation) => { - if let LegacyScope::Empty = invocation.expansion.get() { - if possible_time_travel.is_none() { - possible_time_travel = Some(scope); + match invocation.expansion.get() { + LegacyScope::Invocation(_) => scope.set(invocation.legacy_scope.get()), + LegacyScope::Empty => { + if possible_time_travel.is_none() { + possible_time_travel = Some(scope); + } + scope = &invocation.legacy_scope; + } + _ => { + relative_depth += 1; + scope = &invocation.expansion; } - invocation.legacy_scope.get() - } else { - relative_depth += 1; - invocation.expansion.get() } } LegacyScope::Invocation(invocation) => { relative_depth = relative_depth.saturating_sub(1); - invocation.legacy_scope.get() + scope = &invocation.legacy_scope; } - LegacyScope::Binding(binding) => { - if binding.name == name { - if let Some(scope) = possible_time_travel { - // Check for disallowed shadowing later - self.lexical_macro_resolutions.push((name, scope)); - } else if relative_depth > 0 { - self.disallowed_shadowing.push(binding); + LegacyScope::Binding(potential_binding) => { + if potential_binding.name == name { + if (!self.use_extern_macros || record_used) && relative_depth > 0 { + self.disallowed_shadowing.push(potential_binding); } - return Some(binding.ext.clone()); + binding = Some(potential_binding); + break } - binding.parent + scope = &potential_binding.parent; } }; } - if let Some(scope) = possible_time_travel { - self.lexical_macro_resolutions.push((name, scope)); + let binding = match binding { + Some(binding) => MacroBinding::Legacy(binding), + None => match self.builtin_macros.get(&name).cloned() { + Some(binding) => MacroBinding::Modern(binding), + None => return None, + }, + }; + + if !self.use_extern_macros { + if let Some(scope) = possible_time_travel { + // 
Check for disallowed shadowing later + self.lexical_macro_resolutions.push((name, scope)); + } + } + + Some(binding) + } + + pub fn finalize_current_module_macro_resolutions(&mut self) { + let module = self.current_module; + for &(ref path, scope, span) in module.macro_resolutions.borrow().iter() { + match self.resolve_path(path, scope, Some(MacroNS), Some(span)) { + PathResult::NonModule(_) => {}, + PathResult::Failed(msg, _) => { + resolve_error(self, span, ResolutionError::FailedToResolve(&msg)); + } + _ => unreachable!(), + } + } + + for &(mark, name, span) in module.legacy_macro_resolutions.borrow().iter() { + let legacy_scope = &self.invocations[&mark].legacy_scope; + let legacy_resolution = self.resolve_legacy_scope(legacy_scope, name, true); + let resolution = self.resolve_lexical_macro_path_segment(name, MacroNS, Some(span)); + let (legacy_resolution, resolution) = match (legacy_resolution, resolution) { + (Some(legacy_resolution), Ok(resolution)) => (legacy_resolution, resolution), + _ => continue, + }; + let (legacy_span, participle) = match legacy_resolution { + MacroBinding::Modern(binding) if binding.def() == resolution.def() => continue, + MacroBinding::Modern(binding) => (binding.span, "imported"), + MacroBinding::Legacy(binding) => (binding.span, "defined"), + }; + let msg1 = format!("`{}` could resolve to the macro {} here", name, participle); + let msg2 = format!("`{}` could also resolve to the macro imported here", name); + self.session.struct_span_err(span, &format!("`{}` is ambiguous", name)) + .span_note(legacy_span, &msg1) + .span_note(resolution.span, &msg2) + .emit(); } - self.builtin_macros.get(&name).cloned() } fn suggest_macro_name(&mut self, name: &str, err: &mut DiagnosticBuilder<'a>) { diff --git a/src/librustc_resolve/resolve_imports.rs b/src/librustc_resolve/resolve_imports.rs index 2b3945bd0d..5ea79ece37 100644 --- a/src/librustc_resolve/resolve_imports.rs +++ b/src/librustc_resolve/resolve_imports.rs @@ -10,32 +10,27 @@ use self::ImportDirectiveSubclass::*; -use Module; -use Namespace::{self, TypeNS, ValueNS}; -use {NameBinding, NameBindingKind, PrivacyError, ToNameBinding}; -use ResolveResult; -use ResolveResult::*; +use {AmbiguityError, Module, PerNS}; +use Namespace::{self, TypeNS, MacroNS}; +use {NameBinding, NameBindingKind, PathResult, PathScope, PrivacyError, ToNameBinding}; use Resolver; -use UseLexicalScopeFlag::DontUseLexicalScope; use {names_to_string, module_to_string}; use {resolve_error, ResolutionError}; use rustc::ty; use rustc::lint::builtin::PRIVATE_IN_PUBLIC; use rustc::hir::def::*; +use rustc::util::nodemap::FxHashSet; use syntax::ast::{Ident, NodeId, Name}; use syntax::ext::base::Determinacy::{self, Determined, Undetermined}; +use syntax::ext::hygiene::Mark; +use syntax::symbol::keywords; use syntax::util::lev_distance::find_best_match_for_name; use syntax_pos::Span; use std::cell::{Cell, RefCell}; - -impl<'a> Resolver<'a> { - pub fn resolve_imports(&mut self) { - ImportResolver { resolver: self }.resolve_imports(); - } -} +use std::mem; /// Contains data for specific types of import directives. #[derive(Clone, Debug)] @@ -43,37 +38,27 @@ pub enum ImportDirectiveSubclass<'a> { SingleImport { target: Name, source: Name, - value_result: Cell, Determinacy>>, - type_result: Cell, Determinacy>>, + result: PerNS, Determinacy>>>, }, GlobImport { is_prelude: bool, max_vis: Cell, // The visibility of the greatest reexport. // n.b. `max_vis` is only used in `finalize_import` to check for reexport errors. 
}, -} - -impl<'a> ImportDirectiveSubclass<'a> { - pub fn single(target: Name, source: Name) -> Self { - SingleImport { - target: target, - source: source, - type_result: Cell::new(Err(Undetermined)), - value_result: Cell::new(Err(Undetermined)), - } - } + ExternCrate, } /// One import directive. #[derive(Debug,Clone)] pub struct ImportDirective<'a> { pub id: NodeId, - parent: Module<'a>, - module_path: Vec, - imported_module: Cell>>, // the resolution of `module_path` - subclass: ImportDirectiveSubclass<'a>, - span: Span, - vis: Cell, + pub parent: Module<'a>, + pub module_path: Vec, + pub imported_module: Cell>>, // the resolution of `module_path` + pub subclass: ImportDirectiveSubclass<'a>, + pub span: Span, + pub vis: Cell, + pub expansion: Mark, } impl<'a> ImportDirective<'a> { @@ -89,7 +74,7 @@ pub struct NameResolution<'a> { single_imports: SingleImports<'a>, /// The least shadowable known binding for this name, or None if there are no known bindings. pub binding: Option<&'a NameBinding<'a>>, - duplicate_globs: Vec<&'a NameBinding<'a>>, + shadows_glob: Option<&'a NameBinding<'a>>, } #[derive(Clone, Debug)] @@ -155,99 +140,99 @@ impl<'a> Resolver<'a> { module: Module<'a>, name: Name, ns: Namespace, - allow_private_imports: bool, + ignore_unresolved_invocations: bool, record_used: Option) - -> ResolveResult<&'a NameBinding<'a>> { + -> Result<&'a NameBinding<'a>, Determinacy> { self.populate_module_if_necessary(module); - let resolution = self.resolution(module, name, ns); - let resolution = match resolution.borrow_state() { - ::std::cell::BorrowState::Unused => resolution.borrow_mut(), - _ => return Failed(None), // This happens when there is a cycle of imports - }; - - let new_import_semantics = self.new_import_semantics; - let is_disallowed_private_import = |binding: &NameBinding| { - !new_import_semantics && !allow_private_imports && // disallowed - binding.vis != ty::Visibility::Public && binding.is_import() // non-`pub` import - }; + let resolution = self.resolution(module, name, ns) + .try_borrow_mut() + .map_err(|_| Determined)?; // This happens when there is a cycle of imports if let Some(span) = record_used { if let Some(binding) = resolution.binding { - if is_disallowed_private_import(binding) { - return Failed(None); + if let Some(shadowed_glob) = resolution.shadows_glob { + // If we ignore unresolved invocations, we must forbid + // expanded shadowing to avoid time travel. + if ignore_unresolved_invocations && + binding.expansion != Mark::root() && + ns != MacroNS && // In MacroNS, `try_define` always forbids this shadowing + binding.def() != shadowed_glob.def() { + self.ambiguity_errors.push(AmbiguityError { + span: span, name: name, lexical: false, b1: binding, b2: shadowed_glob, + legacy: false, + }); + } } if self.record_use(name, ns, binding, span) { - return Success(self.dummy_binding); + return Ok(self.dummy_binding); } if !self.is_accessible(binding.vis) { self.privacy_errors.push(PrivacyError(span, name, binding)); } } - return resolution.binding.map(Success).unwrap_or(Failed(None)); + return resolution.binding.ok_or(Determined); } - // If the resolution doesn't depend on glob definability, check privacy and return. - if let Some(result) = self.try_result(&resolution, ns) { - return result.and_then(|binding| { - if self.is_accessible(binding.vis) && !is_disallowed_private_import(binding) || - binding.is_extern_crate() { // c.f. 
issue #37020 - Success(binding) - } else { - Failed(None) + let check_usable = |this: &mut Self, binding: &'a NameBinding<'a>| { + // `extern crate` are always usable for backwards compatability, see issue #37020. + let usable = this.is_accessible(binding.vis) || binding.is_extern_crate(); + if usable { Ok(binding) } else { Err(Determined) } + }; + + // Items and single imports are not shadowable. + if let Some(binding) = resolution.binding { + if !binding.is_glob_import() { + return check_usable(self, binding); + } + } + + // Check if a single import can still define the name. + match resolution.single_imports { + SingleImports::AtLeastOne => return Err(Undetermined), + SingleImports::MaybeOne(directive) if self.is_accessible(directive.vis.get()) => { + let module = match directive.imported_module.get() { + Some(module) => module, + None => return Err(Undetermined), + }; + let name = match directive.subclass { + SingleImport { source, .. } => source, + _ => unreachable!(), + }; + match self.resolve_name_in_module(module, name, ns, false, None) { + Err(Determined) => {} + _ => return Err(Undetermined), } - }); + } + SingleImports::MaybeOne(_) | SingleImports::None => {}, + } + + let no_unresolved_invocations = + ignore_unresolved_invocations || module.unresolved_invocations.borrow().is_empty(); + match resolution.binding { + // In `MacroNS`, expanded bindings do not shadow (enforced in `try_define`). + Some(binding) if no_unresolved_invocations || ns == MacroNS => + return check_usable(self, binding), + None if no_unresolved_invocations => {} + _ => return Err(Undetermined), } // Check if the globs are determined for directive in module.globs.borrow().iter() { if self.is_accessible(directive.vis.get()) { if let Some(module) = directive.imported_module.get() { - let result = self.resolve_name_in_module(module, name, ns, true, None); - if let Indeterminate = result { - return Indeterminate; + let result = self.resolve_name_in_module(module, name, ns, false, None); + if let Err(Undetermined) = result { + return Err(Undetermined); } } else { - return Indeterminate; + return Err(Undetermined); } } } - Failed(None) - } - - // Returns Some(the resolution of the name), or None if the resolution depends - // on whether more globs can define the name. - fn try_result(&mut self, resolution: &NameResolution<'a>, ns: Namespace) - -> Option>> { - match resolution.binding { - Some(binding) if !binding.is_glob_import() => - return Some(Success(binding)), // Items and single imports are not shadowable. - _ => {} - }; - - // Check if a single import can still define the name. - match resolution.single_imports { - SingleImports::AtLeastOne => return Some(Indeterminate), - SingleImports::MaybeOne(directive) if self.is_accessible(directive.vis.get()) => { - let module = match directive.imported_module.get() { - Some(module) => module, - None => return Some(Indeterminate), - }; - let name = match directive.subclass { - SingleImport { source, .. } => source, - GlobImport { .. } => unreachable!(), - }; - match self.resolve_name_in_module(module, name, ns, true, None) { - Failed(_) => {} - _ => return Some(Indeterminate), - } - } - SingleImports::MaybeOne(_) | SingleImports::None => {}, - } - - resolution.binding.map(Success) + Err(Determined) } // Add an import directive to the current module. 
@@ -256,7 +241,8 @@ impl<'a> Resolver<'a> { subclass: ImportDirectiveSubclass<'a>, span: Span, id: NodeId, - vis: ty::Visibility) { + vis: ty::Visibility, + expansion: Mark) { let current_module = self.current_module; let directive = self.arenas.alloc_import_directive(ImportDirective { parent: current_module, @@ -266,27 +252,29 @@ impl<'a> Resolver<'a> { span: span, id: id, vis: Cell::new(vis), + expansion: expansion, }); self.indeterminate_imports.push(directive); match directive.subclass { SingleImport { target, .. } => { - for &ns in &[ValueNS, TypeNS] { - let mut resolution = self.resolution(current_module, target, ns).borrow_mut(); + self.per_ns(|this, ns| { + let mut resolution = this.resolution(current_module, target, ns).borrow_mut(); resolution.single_imports.add_directive(directive); - } + }); } // We don't add prelude imports to the globs since they only affect lexical scopes, // which are not relevant to import resolution. GlobImport { is_prelude: true, .. } => {} GlobImport { .. } => self.current_module.globs.borrow_mut().push(directive), + _ => unreachable!(), } } // Given a binding and an import directive that resolves to it, // return the corresponding binding defined by the import directive. - fn import(&mut self, binding: &'a NameBinding<'a>, directive: &'a ImportDirective<'a>) - -> NameBinding<'a> { + pub fn import(&mut self, binding: &'a NameBinding<'a>, directive: &'a ImportDirective<'a>) + -> NameBinding<'a> { let vis = if binding.pseudo_vis().is_at_least(directive.vis.get(), self) || !directive.is_glob() && binding.is_extern_crate() { // c.f. `PRIVATE_IN_PUBLIC` directive.vis.get() @@ -308,6 +296,7 @@ impl<'a> Resolver<'a> { }, span: directive.span, vis: vis, + expansion: directive.expansion, } } @@ -320,28 +309,23 @@ impl<'a> Resolver<'a> { self.update_resolution(module, name, ns, |this, resolution| { if let Some(old_binding) = resolution.binding { if binding.is_glob_import() { - if !this.new_import_semantics || !old_binding.is_glob_import() { - resolution.duplicate_globs.push(binding); + if !old_binding.is_glob_import() && + !(ns == MacroNS && old_binding.expansion != Mark::root()) { + resolution.shadows_glob = Some(binding); } else if binding.def() != old_binding.def() { - resolution.binding = Some(this.arenas.alloc_name_binding(NameBinding { - kind: NameBindingKind::Ambiguity { - b1: old_binding, - b2: binding, - }, - vis: if old_binding.vis.is_at_least(binding.vis, this) { - old_binding.vis - } else { - binding.vis - }, - span: old_binding.span, - })); + resolution.binding = Some(this.ambiguity(old_binding, binding)); } else if !old_binding.vis.is_at_least(binding.vis, this) { // We are glob-importing the same item but with greater visibility. 
resolution.binding = Some(binding); } } else if old_binding.is_glob_import() { - resolution.duplicate_globs.push(old_binding); - resolution.binding = Some(binding); + if ns == MacroNS && binding.expansion != Mark::root() && + binding.def() != old_binding.def() { + resolution.binding = Some(this.ambiguity(binding, old_binding)); + } else { + resolution.binding = Some(binding); + resolution.shadows_glob = Some(old_binding); + } } else { return Err(old_binding); } @@ -353,6 +337,16 @@ impl<'a> Resolver<'a> { }) } + pub fn ambiguity(&mut self, b1: &'a NameBinding<'a>, b2: &'a NameBinding<'a>) + -> &'a NameBinding<'a> { + self.arenas.alloc_name_binding(NameBinding { + kind: NameBindingKind::Ambiguity { b1: b1, b2: b2, legacy: false }, + vis: if b1.vis.is_at_least(b2.vis, self) { b1.vis } else { b2.vis }, + span: b1.span, + expansion: Mark::root(), + }) + } + // Use `f` to mutate the resolution of the name in the module. // If the resolution becomes a success, define it in the module's glob importers. fn update_resolution(&mut self, module: Module<'a>, name: Name, ns: Namespace, f: F) -> T @@ -367,7 +361,7 @@ impl<'a> Resolver<'a> { let t = f(self, resolution); match resolution.binding() { - _ if !self.new_import_semantics && old_binding.is_some() => return t, + _ if old_binding.is_some() => return t, None => return t, Some(binding) => match old_binding { Some(old_binding) if old_binding as *const _ == binding as *const _ => return t, @@ -378,10 +372,7 @@ impl<'a> Resolver<'a> { // Define `binding` in `module`s glob importers. for directive in module.glob_importers.borrow_mut().iter() { - if match self.new_import_semantics { - true => self.is_accessible_from(binding.vis, directive.parent), - false => binding.vis == ty::Visibility::Public, - } { + if self.is_accessible_from(binding.vis, directive.parent) { let imported_binding = self.import(binding, directive); let _ = self.try_define(directive.parent, name, ns, imported_binding); } @@ -389,10 +380,22 @@ impl<'a> Resolver<'a> { t } + + // Define a "dummy" resolution containing a Def::Err as a placeholder for a + // failed resolution + fn import_dummy_binding(&mut self, directive: &'a ImportDirective<'a>) { + if let SingleImport { target, .. } = directive.subclass { + let dummy_binding = self.dummy_binding; + let dummy_binding = self.import(dummy_binding, directive); + self.per_ns(|this, ns| { + let _ = this.try_define(directive.parent, target, ns, dummy_binding.clone()); + }); + } + } } -struct ImportResolver<'a, 'b: 'a> { - resolver: &'a mut Resolver<'b>, +pub struct ImportResolver<'a, 'b: 'a> { + pub resolver: &'a mut Resolver<'b>, } impl<'a, 'b: 'a> ::std::ops::Deref for ImportResolver<'a, 'b> { @@ -425,28 +428,20 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { /// Resolves all imports for the crate. This method performs the fixed- /// point iteration. 
- fn resolve_imports(&mut self) { - let mut i = 0; + pub fn resolve_imports(&mut self) { let mut prev_num_indeterminates = self.indeterminate_imports.len() + 1; - while self.indeterminate_imports.len() < prev_num_indeterminates { prev_num_indeterminates = self.indeterminate_imports.len(); - debug!("(resolving imports) iteration {}, {} imports left", i, prev_num_indeterminates); - - let mut imports = Vec::new(); - ::std::mem::swap(&mut imports, &mut self.indeterminate_imports); - - for import in imports { + for import in mem::replace(&mut self.indeterminate_imports, Vec::new()) { match self.resolve_import(&import) { - Failed(_) => self.determined_imports.push(import), - Indeterminate => self.indeterminate_imports.push(import), - Success(()) => self.determined_imports.push(import), + true => self.determined_imports.push(import), + false => self.indeterminate_imports.push(import), } } - - i += 1; } + } + pub fn finalize_imports(&mut self) { for module in self.arenas.local_modules().iter() { self.finalize_resolutions_in(module); } @@ -454,19 +449,15 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { let mut errors = false; for i in 0 .. self.determined_imports.len() { let import = self.determined_imports[i]; - if let Failed(err) = self.finalize_import(import) { + if let Some(err) = self.finalize_import(import) { errors = true; - let (span, help) = match err { - Some((span, msg)) => (span, msg), - None => continue, - }; // If the error is a single failed import then create a "fake" import // resolution for it so that later resolve stages won't complain. self.import_dummy_binding(import); let path = import_path_to_string(&import.module_path, &import.subclass); - let error = ResolutionError::UnresolvedImport(Some((&path, &help))); - resolve_error(self.resolver, span, error); + let error = ResolutionError::UnresolvedImport(Some((&path, &err))); + resolve_error(self.resolver, import.span, error); } } @@ -480,23 +471,9 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { } } - // Define a "dummy" resolution containing a Def::Err as a placeholder for a - // failed resolution - fn import_dummy_binding(&mut self, directive: &'b ImportDirective<'b>) { - if let SingleImport { target, .. } = directive.subclass { - let dummy_binding = self.dummy_binding; - let dummy_binding = self.import(dummy_binding, directive); - let _ = self.try_define(directive.parent, target, ValueNS, dummy_binding.clone()); - let _ = self.try_define(directive.parent, target, TypeNS, dummy_binding); - } - } - - /// Attempts to resolve the given import. The return value indicates - /// failure if we're certain the name does not exist, indeterminate if we - /// don't know whether the name exists at the moment due to other - /// currently-unresolved imports, or success if we know the name exists. + /// Attempts to resolve the given import, returning true if its resolution is determined. /// If successful, the resolved bindings are written into the module. - fn resolve_import(&mut self, directive: &'b ImportDirective<'b>) -> ResolveResult<()> { + fn resolve_import(&mut self, directive: &'b ImportDirective<'b>) -> bool { debug!("(resolving import for module) resolving import `{}::...` in `{}`", names_to_string(&directive.module_path), module_to_string(self.current_module)); @@ -510,102 +487,90 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { // For better failure detection, pretend that the import will not define any names // while resolving its module path. 
directive.vis.set(ty::Visibility::PrivateExternal); - let result = - self.resolve_module_path(&directive.module_path, DontUseLexicalScope, None); + let result = self.resolve_path(&directive.module_path, PathScope::Import, None, None); directive.vis.set(vis); match result { - Success(module) => module, - Indeterminate => return Indeterminate, - Failed(err) => return Failed(err), + PathResult::Module(module) => module, + PathResult::Indeterminate => return false, + _ => return true, } }; directive.imported_module.set(Some(module)); - let (source, target, value_result, type_result) = match directive.subclass { - SingleImport { source, target, ref value_result, ref type_result } => - (source, target, value_result, type_result), + let (source, target, result) = match directive.subclass { + SingleImport { source, target, ref result } => (source, target, result), GlobImport { .. } => { self.resolve_glob_import(directive); - return Success(()); + return true; } + _ => unreachable!(), }; let mut indeterminate = false; - for &(ns, result) in &[(ValueNS, value_result), (TypeNS, type_result)] { - if let Err(Undetermined) = result.get() { - result.set({ - match self.resolve_name_in_module(module, source, ns, false, None) { - Success(binding) => Ok(binding), - Indeterminate => Err(Undetermined), - Failed(_) => Err(Determined), - } - }); + self.per_ns(|this, ns| { + if let Err(Undetermined) = result[ns].get() { + result[ns].set(this.resolve_name_in_module(module, source, ns, false, None)); } else { - continue + return }; - match result.get() { + match result[ns].get() { Err(Undetermined) => indeterminate = true, Err(Determined) => { - self.update_resolution(directive.parent, target, ns, |_, resolution| { + this.update_resolution(directive.parent, target, ns, |_, resolution| { resolution.single_imports.directive_failed() }); } Ok(binding) if !binding.is_importable() => { let msg = format!("`{}` is not directly importable", target); - struct_span_err!(self.session, directive.span, E0253, "{}", &msg) + struct_span_err!(this.session, directive.span, E0253, "{}", &msg) .span_label(directive.span, &format!("cannot be imported directly")) .emit(); // Do not import this illegal binding. Import a dummy binding and pretend // everything is fine - self.import_dummy_binding(directive); - return Success(()); + this.import_dummy_binding(directive); } Ok(binding) => { - let imported_binding = self.import(binding, directive); - let conflict = self.try_define(directive.parent, target, ns, imported_binding); + let imported_binding = this.import(binding, directive); + let conflict = this.try_define(directive.parent, target, ns, imported_binding); if let Err(old_binding) = conflict { - let binding = &self.import(binding, directive); - self.report_conflict(directive.parent, target, ns, binding, old_binding); + let binding = &this.import(binding, directive); + this.report_conflict(directive.parent, target, ns, binding, old_binding); } } } - } + }); - if indeterminate { Indeterminate } else { Success(()) } + !indeterminate } - fn finalize_import(&mut self, directive: &'b ImportDirective<'b>) -> ResolveResult<()> { + // If appropriate, returns an error to report. + fn finalize_import(&mut self, directive: &'b ImportDirective<'b>) -> Option { self.current_module = directive.parent; let ImportDirective { ref module_path, span, .. 
} = *directive; - let module_result = self.resolve_module_path(&module_path, DontUseLexicalScope, Some(span)); + let module_result = self.resolve_path(&module_path, PathScope::Import, None, Some(span)); let module = match module_result { - Success(module) => module, - Indeterminate => return Indeterminate, - Failed(err) => { - let self_module = self.module_map[&self.current_module.normal_ancestor_id.unwrap()]; - - let resolve_from_self_result = self.resolve_module_path_from_root( - &self_module, &module_path, 0, Some(span)); - - return if let Success(_) = resolve_from_self_result { - let msg = format!("Did you mean `self::{}`?", &names_to_string(module_path)); - Failed(Some((span, msg))) + PathResult::Module(module) => module, + PathResult::Failed(msg, _) => { + let mut path = vec![keywords::SelfValue.ident()]; + path.extend(module_path); + let result = self.resolve_path(&path, PathScope::Import, None, None); + return if let PathResult::Module(..) = result { + Some(format!("Did you mean `self::{}`?", &names_to_string(module_path))) } else { - Failed(err) + Some(msg) }; }, + _ => return None, }; - let (name, value_result, type_result) = match directive.subclass { - SingleImport { source, ref value_result, ref type_result, .. } => - (source, value_result.get(), type_result.get()), + let (name, result) = match directive.subclass { + SingleImport { source, ref result, .. } => (source, result), GlobImport { .. } if module.def_id() == directive.parent.def_id() => { // Importing a module into itself is not allowed. - let msg = "Cannot glob-import a module into itself.".into(); - return Failed(Some((directive.span, msg))); + return Some("Cannot glob-import a module into itself.".to_string()); } GlobImport { is_prelude, ref max_vis } => { if !is_prelude && @@ -614,25 +579,32 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { let msg = "A non-empty glob must import something with the glob's visibility"; self.session.span_err(directive.span, msg); } - return Success(()); + return None; } + _ => unreachable!(), }; - for &(ns, result) in &[(ValueNS, value_result), (TypeNS, type_result)] { - if let Ok(binding) = result { - if self.record_use(name, ns, binding, directive.span) { - self.resolution(module, name, ns).borrow_mut().binding = - Some(self.dummy_binding); + let mut all_ns_err = true; + self.per_ns(|this, ns| { + if let Ok(binding) = result[ns].get() { + all_ns_err = false; + if this.record_use(name, ns, binding, directive.span) { + this.resolution(module, name, ns).borrow_mut().binding = + Some(this.dummy_binding); } } - } + }); - if value_result.is_err() && type_result.is_err() { - let (value_result, type_result); - value_result = self.resolve_name_in_module(module, name, ValueNS, false, Some(span)); - type_result = self.resolve_name_in_module(module, name, TypeNS, false, Some(span)); + if all_ns_err { + let mut all_ns_failed = true; + self.per_ns(|this, ns| { + match this.resolve_name_in_module(module, name, ns, false, Some(span)) { + Ok(_) => all_ns_failed = false, + _ => {} + } + }); - return if let (Failed(_), Failed(_)) = (value_result, type_result) { + return if all_ns_failed { let resolutions = module.resolutions.borrow(); let names = resolutions.iter().filter_map(|(&(ref n, _), resolution)| { if *n == name { return None; } // Never suggest the same name @@ -652,75 +624,60 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { } else { format!("no `{}` in `{}`{}", name, module_str, lev_suggestion) }; - Failed(Some((directive.span, msg))) + Some(msg) } else { // `resolve_name_in_module` reported a privacy 
error. self.import_dummy_binding(directive); - Success(()) + None } } - let session = self.session; - let reexport_error = || { - let msg = format!("`{}` is private, and cannot be reexported", name); - let note_msg = - format!("consider marking `{}` as `pub` in the imported module", name); - struct_span_err!(session, directive.span, E0364, "{}", &msg) - .span_note(directive.span, ¬e_msg) - .emit(); - }; - - let extern_crate_lint = || { - let msg = format!("extern crate `{}` is private, and cannot be reexported \ - (error E0364), consider declaring with `pub`", - name); - session.add_lint(PRIVATE_IN_PUBLIC, directive.id, directive.span, msg); - }; - - match (value_result, type_result) { - // All namespaces must be re-exported with extra visibility for an error to occur. - (Ok(value_binding), Ok(type_binding)) => { + let mut reexport_error = None; + let mut any_successful_reexport = false; + self.per_ns(|this, ns| { + if let Ok(binding) = result[ns].get() { let vis = directive.vis.get(); - if !value_binding.pseudo_vis().is_at_least(vis, self) && - !type_binding.pseudo_vis().is_at_least(vis, self) { - reexport_error(); - } else if type_binding.is_extern_crate() && - !type_binding.vis.is_at_least(vis, self) { - extern_crate_lint(); - } - } - - (Ok(binding), _) if !binding.pseudo_vis().is_at_least(directive.vis.get(), self) => { - reexport_error(); - } - - (_, Ok(binding)) if !binding.pseudo_vis().is_at_least(directive.vis.get(), self) => { - if binding.is_extern_crate() { - extern_crate_lint(); + if !binding.pseudo_vis().is_at_least(vis, this) { + reexport_error = Some((ns, binding)); } else { - struct_span_err!(self.session, directive.span, E0365, - "`{}` is private, and cannot be reexported", name) - .span_label(directive.span, &format!("reexport of private `{}`", name)) - .note(&format!("consider declaring type or module `{}` with `pub`", name)) - .emit(); + any_successful_reexport = true; } } + }); - _ => {} + // All namespaces must be re-exported with extra visibility for an error to occur. + if !any_successful_reexport { + let (ns, binding) = reexport_error.unwrap(); + if ns == TypeNS && binding.is_extern_crate() { + let msg = format!("extern crate `{}` is private, and cannot be reexported \ + (error E0364), consider declaring with `pub`", + name); + self.session.add_lint(PRIVATE_IN_PUBLIC, directive.id, directive.span, msg); + } else if ns == TypeNS { + struct_span_err!(self.session, directive.span, E0365, + "`{}` is private, and cannot be reexported", name) + .span_label(directive.span, &format!("reexport of private `{}`", name)) + .note(&format!("consider declaring type or module `{}` with `pub`", name)) + .emit(); + } else { + let msg = format!("`{}` is private, and cannot be reexported", name); + let note_msg = + format!("consider marking `{}` as `pub` in the imported module", name); + struct_span_err!(self.session, directive.span, E0364, "{}", &msg) + .span_note(directive.span, ¬e_msg) + .emit(); + } } // Record what this import resolves to for later uses in documentation, // this may resolve to either a value or a type, but for documentation // purposes it's good enough to just favor one over the other. 
- let def = match type_result.ok().map(NameBinding::def) { - Some(def) => def, - None => value_result.ok().map(NameBinding::def).unwrap(), - }; - let path_resolution = PathResolution::new(def); - self.def_map.insert(directive.id, path_resolution); + self.per_ns(|this, ns| if let Some(binding) = result[ns].get().ok() { + this.def_map.entry(directive.id).or_insert(PathResolution::new(binding.def())); + }); debug!("(resolving single import) successfully resolved import"); - return Success(()); + None } fn resolve_glob_import(&mut self, directive: &'b ImportDirective<'b>) { @@ -746,8 +703,7 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { resolution.borrow().binding().map(|binding| (*name, binding)) }).collect::>(); for ((name, ns), binding) in bindings { - if binding.pseudo_vis() == ty::Visibility::Public || - self.new_import_semantics && self.is_accessible(binding.vis) { + if binding.pseudo_vis() == ty::Visibility::Public || self.is_accessible(binding.vis) { let imported_binding = self.import(binding, directive); let _ = self.try_define(directive.parent, name, ns, imported_binding); } @@ -767,43 +723,61 @@ impl<'a, 'b:'a> ImportResolver<'a, 'b> { *module.globs.borrow_mut() = Vec::new(); let mut reexports = Vec::new(); + if module as *const _ == self.graph_root as *const _ { + let mut exported_macro_names = FxHashSet(); + for export in mem::replace(&mut self.macro_exports, Vec::new()).into_iter().rev() { + if exported_macro_names.insert(export.name) { + reexports.push(export); + } + } + } + for (&(name, ns), resolution) in module.resolutions.borrow().iter() { - let resolution = resolution.borrow(); + let resolution = &mut *resolution.borrow_mut(); let binding = match resolution.binding { Some(binding) => binding, None => continue, }; - // Report conflicts - if !self.new_import_semantics { - for duplicate_glob in resolution.duplicate_globs.iter() { - // FIXME #31337: We currently allow items to shadow glob-imported re-exports. - if !binding.is_import() { - if let NameBindingKind::Import { binding, .. } = duplicate_glob.kind { - if binding.is_import() { continue } - } - } - - self.report_conflict(module, name, ns, duplicate_glob, binding); - } - } - if binding.vis == ty::Visibility::Public && (binding.is_import() || binding.is_extern_crate()) { let def = binding.def(); if def != Def::Err { + if !def.def_id().is_local() { + self.session.cstore.export_macros(def.def_id().krate); + } reexports.push(Export { name: name, def: def }); } } - if let NameBindingKind::Import { binding: orig_binding, directive, .. } = binding.kind { - if ns == TypeNS && orig_binding.is_variant() && - !orig_binding.vis.is_at_least(binding.vis, self) { - let msg = format!("variant `{}` is private, and cannot be reexported \ - (error E0364), consider declaring its enum as `pub`", - name); - self.session.add_lint(PRIVATE_IN_PUBLIC, directive.id, binding.span, msg); + match binding.kind { + NameBindingKind::Import { binding: orig_binding, directive, .. } => { + if ns == TypeNS && orig_binding.is_variant() && + !orig_binding.vis.is_at_least(binding.vis, self) { + let msg = format!("variant `{}` is private, and cannot be reexported \ + (error E0364), consider declaring its enum as `pub`", + name); + self.session.add_lint(PRIVATE_IN_PUBLIC, directive.id, binding.span, msg); + } } + NameBindingKind::Ambiguity { b1, b2, .. } + if b1.is_glob_import() && b2.is_glob_import() => { + let (orig_b1, orig_b2) = match (&b1.kind, &b2.kind) { + (&NameBindingKind::Import { binding: b1, .. }, + &NameBindingKind::Import { binding: b2, .. 
}) => (b1, b2), + _ => continue, + }; + let (b1, b2) = match (orig_b1.vis, orig_b2.vis) { + (ty::Visibility::Public, ty::Visibility::Public) => continue, + (ty::Visibility::Public, _) => (b1, b2), + (_, ty::Visibility::Public) => (b2, b1), + _ => continue, + }; + resolution.binding = Some(self.arenas.alloc_name_binding(NameBinding { + kind: NameBindingKind::Ambiguity { b1: b1, b2: b2, legacy: true }, ..*b1 + })); + } + _ => {} } } @@ -831,5 +805,6 @@ fn import_directive_subclass_to_string(subclass: &ImportDirectiveSubclass) -> St match *subclass { SingleImport { source, .. } => source.to_string(), GlobImport { .. } => "*".to_string(), + ExternCrate => "".to_string(), } } diff --git a/src/librustc_save_analysis/dump_visitor.rs b/src/librustc_save_analysis/dump_visitor.rs index db4788c3ce..afa78a05a6 100644 --- a/src/librustc_save_analysis/dump_visitor.rs +++ b/src/librustc_save_analysis/dump_visitor.rs @@ -32,14 +32,15 @@ use rustc::hir::def::Def; use rustc::hir::def_id::{CrateNum, DefId, LOCAL_CRATE}; use rustc::hir::map::{Node, NodeItem}; use rustc::session::Session; -use rustc::ty::{self, TyCtxt, ImplOrTraitItem, ImplOrTraitItemContainer}; +use rustc::ty::{self, TyCtxt, AssociatedItemContainer}; use std::collections::HashSet; use std::collections::hash_map::DefaultHasher; use std::hash::*; use syntax::ast::{self, NodeId, PatKind, Attribute, CRATE_NODE_ID}; -use syntax::parse::token::{self, keywords}; +use syntax::parse::token; +use syntax::symbol::keywords; use syntax::visit::{self, Visitor}; use syntax::print::pprust::{path_to_string, ty_to_string, bounds_to_string, generics_to_string}; use syntax::ptr::P; @@ -67,7 +68,6 @@ pub struct DumpVisitor<'l, 'tcx: 'l, 'll, D: 'll> { save_ctxt: SaveContext<'l, 'tcx>, sess: &'l Session, tcx: TyCtxt<'l, 'tcx, 'tcx>, - analysis: &'l ty::CrateAnalysis<'l>, dumper: &'ll mut D, span: SpanUtils<'l>, @@ -83,17 +83,14 @@ pub struct DumpVisitor<'l, 'tcx: 'l, 'll, D: 'll> { } impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { - pub fn new(tcx: TyCtxt<'l, 'tcx, 'tcx>, - save_ctxt: SaveContext<'l, 'tcx>, - analysis: &'l ty::CrateAnalysis<'l>, + pub fn new(save_ctxt: SaveContext<'l, 'tcx>, dumper: &'ll mut D) -> DumpVisitor<'l, 'tcx, 'll, D> { - let span_utils = SpanUtils::new(&tcx.sess); + let span_utils = SpanUtils::new(&save_ctxt.tcx.sess); DumpVisitor { - sess: &tcx.sess, - tcx: tcx, + sess: &save_ctxt.tcx.sess, + tcx: save_ctxt.tcx, save_ctxt: save_ctxt, - analysis: analysis, dumper: dumper, span: span_utils.clone(), cur_scope: CRATE_NODE_ID, @@ -273,12 +270,10 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn lookup_def_id(&self, ref_id: NodeId) -> Option { - self.tcx.expect_def_or_none(ref_id).and_then(|def| { - match def { - Def::PrimTy(..) | Def::SelfTy(..) => None, - def => Some(def.def_id()), - } - }) + match self.save_ctxt.get_path_def(ref_id) { + Def::PrimTy(..) | Def::SelfTy(..) | Def::Err => None, + def => Some(def.def_id()), + } } fn process_def_kind(&mut self, @@ -291,7 +286,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { return; } - let def = self.tcx.expect_def(ref_id); + let def = self.save_ctxt.get_path_def(ref_id); match def { Def::Mod(_) => { self.dumper.mod_ref(ModRefData { @@ -341,6 +336,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { Def::AssociatedTy(..) | Def::AssociatedConst(..) 
| Def::PrimTy(_) | + Def::Macro(_) | Def::Err => { span_bug!(span, "process_def_kind for unexpected item: {:?}", @@ -349,14 +345,17 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } - fn process_formals(&mut self, formals: &Vec, qualname: &str) { + fn process_formals(&mut self, formals: &'l [ast::Arg], qualname: &str) { for arg in formals { self.visit_pat(&arg.pat); let mut collector = PathCollector::new(); collector.visit_pat(&arg.pat); let span_utils = self.span.clone(); for &(id, ref p, ..) in &collector.collected_paths { - let typ = self.tcx.tables().node_types.get(&id).unwrap().to_string(); + let typ = match self.tcx.tables().node_types.get(&id) { + Some(s) => s.to_string(), + None => continue, + }; // get the span only for the name of the variable (I hope the path is only ever a // variable name, but who knows?) let sub_span = span_utils.span_for_last_ident(p.span); @@ -380,12 +379,12 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_method(&mut self, - sig: &ast::MethodSig, - body: Option<&ast::Block>, + sig: &'l ast::MethodSig, + body: Option<&'l ast::Block>, id: ast::NodeId, name: ast::Name, vis: Visibility, - attrs: &[Attribute], + attrs: &'l [Attribute], span: Span) { debug!("process_method: {}:{}", id, name); @@ -402,19 +401,19 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { // with the right name. if !self.span.filter_generated(Some(method_data.span), span) { let container = - self.tcx.impl_or_trait_item(self.tcx.map.local_def_id(id)).container(); + self.tcx.associated_item(self.tcx.map.local_def_id(id)).container; let mut trait_id; let mut decl_id = None; match container { - ImplOrTraitItemContainer::ImplContainer(id) => { + AssociatedItemContainer::ImplContainer(id) => { trait_id = self.tcx.trait_id_of_impl(id); match trait_id { Some(id) => { - for item in &**self.tcx.trait_items(id) { - if let &ImplOrTraitItem::MethodTraitItem(ref m) = item { - if m.name == name { - decl_id = Some(m.def_id); + for item in self.tcx.associated_items(id) { + if item.kind == ty::AssociatedKind::Method { + if item.name == name { + decl_id = Some(item.def_id); break; } } @@ -429,7 +428,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } } - ImplOrTraitItemContainer::TraitContainer(id) => { + AssociatedItemContainer::TraitContainer(id) => { trait_id = Some(id); } } @@ -466,7 +465,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } - fn process_trait_ref(&mut self, trait_ref: &ast::TraitRef) { + fn process_trait_ref(&mut self, trait_ref: &'l ast::TraitRef) { let trait_ref_data = self.save_ctxt.get_trait_ref_data(trait_ref, self.cur_scope); if let Some(trait_ref_data) = trait_ref_data { if !self.span.filter_generated(Some(trait_ref_data.span), trait_ref.path.span) { @@ -489,7 +488,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { // Dump generic params bindings, then visit_generics fn process_generic_params(&mut self, - generics: &ast::Generics, + generics: &'l ast::Generics, full_span: Span, prefix: &str, id: NodeId) { @@ -523,10 +522,10 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_fn(&mut self, - item: &ast::Item, - decl: &ast::FnDecl, - ty_params: &ast::Generics, - body: &ast::Block) { + item: &'l ast::Item, + decl: &'l ast::FnDecl, + ty_params: &'l ast::Generics, + body: &'l ast::Block) { if let Some(fn_data) = self.save_ctxt.get_item_data(item) { down_cast_data!(fn_data, 
FunctionData, item.span); if !self.span.filter_generated(Some(fn_data.span), item.span) { @@ -548,7 +547,10 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { self.nest(item.id, |v| v.visit_block(&body)); } - fn process_static_or_const_item(&mut self, item: &ast::Item, typ: &ast::Ty, expr: &ast::Expr) { + fn process_static_or_const_item(&mut self, + item: &'l ast::Item, + typ: &'l ast::Ty, + expr: &'l ast::Expr) { if let Some(var_data) = self.save_ctxt.get_item_data(item) { down_cast_data!(var_data, VariableData, item.span); if !self.span.filter_generated(Some(var_data.span), item.span) { @@ -563,11 +565,11 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { id: ast::NodeId, name: ast::Name, span: Span, - typ: &ast::Ty, - expr: &ast::Expr, + typ: &'l ast::Ty, + expr: &'l ast::Expr, parent_id: DefId, vis: Visibility, - attrs: &[Attribute]) { + attrs: &'l [Attribute]) { let qualname = format!("::{}", self.tcx.node_path_str(id)); let sub_span = self.span.sub_span_after_keyword(span, keywords::Const); @@ -595,9 +597,9 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { // FIXME tuple structs should generate tuple-specific data. fn process_struct(&mut self, - item: &ast::Item, - def: &ast::VariantData, - ty_params: &ast::Generics) { + item: &'l ast::Item, + def: &'l ast::VariantData, + ty_params: &'l ast::Generics) { let name = item.ident.to_string(); let qualname = format!("::{}", self.tcx.node_path_str(item.id)); @@ -642,9 +644,9 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_enum(&mut self, - item: &ast::Item, - enum_definition: &ast::EnumDef, - ty_params: &ast::Generics) { + item: &'l ast::Item, + enum_definition: &'l ast::EnumDef, + ty_params: &'l ast::Generics) { let enum_data = self.save_ctxt.get_item_data(item); let enum_data = match enum_data { None => return, @@ -722,11 +724,11 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_impl(&mut self, - item: &ast::Item, - type_parameters: &ast::Generics, - trait_ref: &Option, - typ: &ast::Ty, - impl_items: &[ast::ImplItem]) { + item: &'l ast::Item, + type_parameters: &'l ast::Generics, + trait_ref: &'l Option, + typ: &'l ast::Ty, + impl_items: &'l [ast::ImplItem]) { let mut has_self_ref = false; if let Some(impl_data) = self.save_ctxt.get_item_data(item) { down_cast_data!(impl_data, ImplData, item.span); @@ -765,10 +767,10 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_trait(&mut self, - item: &ast::Item, - generics: &ast::Generics, - trait_refs: &ast::TyParamBounds, - methods: &[ast::TraitItem]) { + item: &'l ast::Item, + generics: &'l ast::Generics, + trait_refs: &'l ast::TyParamBounds, + methods: &'l [ast::TraitItem]) { let name = item.ident.to_string(); let qualname = format!("::{}", self.tcx.node_path_str(item.id)); let mut val = name.clone(); @@ -914,13 +916,11 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } // Modules or types in the path prefix. - match self.tcx.expect_def(id) { + match self.save_ctxt.get_path_def(id) { Def::Method(did) => { - let ti = self.tcx.impl_or_trait_item(did); - if let ty::MethodTraitItem(m) = ti { - if m.explicit_self == ty::ExplicitSelfCategory::Static { - self.write_sub_path_trait_truncated(path); - } + let ti = self.tcx.associated_item(did); + if ti.kind == ty::AssociatedKind::Method && ti.method_has_self_argument { + self.write_sub_path_trait_truncated(path); } } Def::Fn(..) 
| @@ -941,11 +941,11 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } fn process_struct_lit(&mut self, - ex: &ast::Expr, - path: &ast::Path, - fields: &Vec, - variant: ty::VariantDef, - base: &Option>) { + ex: &'l ast::Expr, + path: &'l ast::Path, + fields: &'l [ast::Field], + variant: &'l ty::VariantDef, + base: &'l Option>) { self.write_sub_paths_truncated(path, false); if let Some(struct_lit_data) = self.save_ctxt.get_expr_data(ex) { @@ -972,7 +972,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { walk_list!(self, visit_expr, base); } - fn process_method_call(&mut self, ex: &ast::Expr, args: &Vec>) { + fn process_method_call(&mut self, ex: &'l ast::Expr, args: &'l [P]) { if let Some(mcd) = self.save_ctxt.get_expr_data(ex) { down_cast_data!(mcd, MethodCallData, ex.span); if !self.span.filter_generated(Some(mcd.span), ex.span) { @@ -984,12 +984,18 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { walk_list!(self, visit_expr, args); } - fn process_pat(&mut self, p: &ast::Pat) { + fn process_pat(&mut self, p: &'l ast::Pat) { match p.node { PatKind::Struct(ref path, ref fields, _) => { visit::walk_path(self, path); - let adt = self.tcx.tables().node_id_to_type(p.id).ty_adt_def().unwrap(); - let variant = adt.variant_of_def(self.tcx.expect_def(p.id)); + let adt = match self.tcx.tables().node_id_to_type_opt(p.id) { + Some(ty) => ty.ty_adt_def().unwrap(), + None => { + visit::walk_pat(self, p); + return; + } + }; + let variant = adt.variant_of_def(self.save_ctxt.get_path_def(p.id)); for &Spanned { node: ref field, span } in fields { let sub_span = self.span.span_for_first_ident(span); @@ -1011,7 +1017,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } - fn process_var_decl(&mut self, p: &ast::Pat, value: String) { + fn process_var_decl(&mut self, p: &'l ast::Pat, value: String) { // The local could declare multiple new vars, we must walk the // pattern and collect them all. let mut collector = PathCollector::new(); @@ -1102,7 +1108,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } - fn process_trait_item(&mut self, trait_item: &ast::TraitItem, trait_id: DefId) { + fn process_trait_item(&mut self, trait_item: &'l ast::TraitItem, trait_id: DefId) { self.process_macro_use(trait_item.span, trait_item.id); match trait_item.node { ast::TraitItemKind::Const(ref ty, Some(ref expr)) => { @@ -1130,7 +1136,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } - fn process_impl_item(&mut self, impl_item: &ast::ImplItem, impl_id: DefId) { + fn process_impl_item(&mut self, impl_item: &'l ast::ImplItem, impl_id: DefId) { self.process_macro_use(impl_item.span, impl_item.id); match impl_item.node { ast::ImplItemKind::Const(ref ty, ref expr) => { @@ -1158,8 +1164,8 @@ impl<'l, 'tcx: 'l, 'll, D: Dump + 'll> DumpVisitor<'l, 'tcx, 'll, D> { } } -impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> { - fn visit_item(&mut self, item: &ast::Item) { +impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor<'l> for DumpVisitor<'l, 'tcx, 'll, D> { + fn visit_item(&mut self, item: &'l ast::Item) { use syntax::ast::ItemKind::*; self.process_macro_use(item.span, item.id); match item.node { @@ -1200,7 +1206,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> ast::ViewPathGlob(ref path) => { // Make a comma-separated list of names of imported modules. 
let mut names = vec![]; - let glob_map = &self.analysis.glob_map; + let glob_map = &self.save_ctxt.analysis.glob_map; let glob_map = glob_map.as_ref().unwrap(); if glob_map.contains_key(&item.id) { for n in glob_map.get(&item.id).unwrap() { @@ -1303,7 +1309,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } } - fn visit_generics(&mut self, generics: &ast::Generics) { + fn visit_generics(&mut self, generics: &'l ast::Generics) { for param in generics.ty_params.iter() { for bound in param.bounds.iter() { if let ast::TraitTyParamBound(ref trait_ref, _) = *bound { @@ -1316,20 +1322,22 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } } - fn visit_ty(&mut self, t: &ast::Ty) { + fn visit_ty(&mut self, t: &'l ast::Ty) { self.process_macro_use(t.span, t.id); match t.node { ast::TyKind::Path(_, ref path) => { + if self.span.filter_generated(None, t.span) { + return; + } + if let Some(id) = self.lookup_def_id(t.id) { let sub_span = self.span.sub_span_for_type_name(t.span); - if !self.span.filter_generated(sub_span, t.span) { - self.dumper.type_ref(TypeRefData { - span: sub_span.expect("No span found for type ref"), - ref_id: Some(id), - scope: self.cur_scope, - qualname: String::new() - }.lower(self.tcx)); - } + self.dumper.type_ref(TypeRefData { + span: sub_span.expect("No span found for type ref"), + ref_id: Some(id), + scope: self.cur_scope, + qualname: String::new() + }.lower(self.tcx)); } self.write_sub_paths_truncated(path, false); @@ -1340,7 +1348,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } } - fn visit_expr(&mut self, ex: &ast::Expr) { + fn visit_expr(&mut self, ex: &'l ast::Expr) { self.process_macro_use(ex.span, ex.id); match ex.node { ast::ExprKind::Call(ref _f, ref _args) => { @@ -1354,8 +1362,14 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } ast::ExprKind::Struct(ref path, ref fields, ref base) => { let hir_expr = self.save_ctxt.tcx.map.expect_expr(ex.id); - let adt = self.tcx.tables().expr_ty(&hir_expr).ty_adt_def().unwrap(); - let def = self.tcx.expect_def(hir_expr.id); + let adt = match self.tcx.tables().expr_ty_opt(&hir_expr) { + Some(ty) => ty.ty_adt_def().unwrap(), + None => { + visit::walk_expr(self, ex); + return; + } + }; + let def = self.save_ctxt.get_path_def(hir_expr.id); self.process_struct_lit(ex, path, fields, adt.variant_of_def(def), base) } ast::ExprKind::MethodCall(.., ref args) => self.process_method_call(ex, args), @@ -1380,7 +1394,13 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> return; } }; - let ty = &self.tcx.tables().expr_ty_adjusted(&hir_node).sty; + let ty = match self.tcx.tables().expr_ty_adjusted_opt(&hir_node) { + Some(ty) => &ty.sty, + None => { + visit::walk_expr(self, ex); + return; + } + }; match *ty { ty::TyAdt(def, _) => { let sub_span = self.span.sub_span_after_token(ex.span, token::Dot); @@ -1414,7 +1434,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } // walk the body - self.nest(ex.id, |v| v.visit_block(&body)); + self.nest(ex.id, |v| v.visit_expr(body)); } ast::ExprKind::ForLoop(ref pattern, ref subexpression, ref block, _) | ast::ExprKind::WhileLet(ref pattern, ref subexpression, ref block, _) => { @@ -1436,17 +1456,17 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> } } - fn visit_mac(&mut self, mac: &ast::Mac) { + fn visit_mac(&mut self, mac: &'l ast::Mac) { // These shouldn't exist in 
the AST at this point, log a span bug. span_bug!(mac.span, "macro invocation should have been expanded out of AST"); } - fn visit_pat(&mut self, p: &ast::Pat) { + fn visit_pat(&mut self, p: &'l ast::Pat) { self.process_macro_use(p.span, p.id); self.process_pat(p); } - fn visit_arm(&mut self, arm: &ast::Arm) { + fn visit_arm(&mut self, arm: &'l ast::Arm) { let mut collector = PathCollector::new(); for pattern in &arm.pats { // collect paths from the arm's patterns @@ -1459,7 +1479,7 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> // process collected paths for &(id, ref p, immut, ref_kind) in &collector.collected_paths { - match self.tcx.expect_def(id) { + match self.save_ctxt.get_path_def(id) { Def::Local(def_id) => { let id = self.tcx.map.as_local_node_id(def_id).unwrap(); let mut value = if immut == ast::Mutability::Immutable { @@ -1509,12 +1529,12 @@ impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> self.visit_expr(&arm.body); } - fn visit_stmt(&mut self, s: &ast::Stmt) { + fn visit_stmt(&mut self, s: &'l ast::Stmt) { self.process_macro_use(s.span, s.id); visit::walk_stmt(self, s) } - fn visit_local(&mut self, l: &ast::Local) { + fn visit_local(&mut self, l: &'l ast::Local) { self.process_macro_use(l.span, l.id); let value = l.init.as_ref().map(|i| self.span.snippet(i.span)).unwrap_or(String::new()); self.process_var_decl(&l.pat, value); diff --git a/src/librustc_save_analysis/json_dumper.rs b/src/librustc_save_analysis/json_dumper.rs index eb613c3afd..f97272ad54 100644 --- a/src/librustc_save_analysis/json_dumper.rs +++ b/src/librustc_save_analysis/json_dumper.rs @@ -62,7 +62,6 @@ impl<'b, W: Write + 'b> Dump for JsonDumper<'b, W> { impl_fn!(function, FunctionData, defs); impl_fn!(method, MethodData, defs); impl_fn!(macro_data, MacroData, defs); - impl_fn!(mod_data, ModData, defs); impl_fn!(typedef, TypeDefData, defs); impl_fn!(variable, VariableData, defs); @@ -75,6 +74,43 @@ impl<'b, W: Write + 'b> Dump for JsonDumper<'b, W> { impl_fn!(macro_use, MacroUseData, macro_refs); + fn mod_data(&mut self, data: ModData) { + let id: Id = From::from(data.id); + let mut def = Def { + kind: DefKind::Mod, + id: id, + span: data.span, + name: data.name, + qualname: data.qualname, + value: data.filename, + children: data.items.into_iter().map(|id| From::from(id)).collect(), + decl_id: None, + docs: data.docs, + }; + if def.span.file_name != def.value { + // If the module is an out-of-line defintion, then we'll make the + // defintion the first character in the module's file and turn the + // the declaration into a reference to it. + let rf = Ref { + kind: RefKind::Mod, + span: def.span, + ref_id: id, + }; + self.result.refs.push(rf); + def.span = SpanData { + file_name: def.value.clone(), + byte_start: 0, + byte_end: 0, + line_start: 1, + line_end: 1, + column_start: 1, + column_end: 1, + } + } + + self.result.defs.push(def); + } + // FIXME store this instead of throwing it away. fn impl_data(&mut self, _data: ImplData) {} fn inheritance(&mut self, _data: InheritanceData) {} @@ -111,7 +147,7 @@ impl Analysis { // DefId::index is a newtype and so the JSON serialisation is ugly. Therefore // we use our own Id which is the same, but without the newtype. 
-#[derive(Debug, RustcEncodable)] +#[derive(Clone, Copy, Debug, RustcEncodable)] struct Id { krate: u32, index: u32, @@ -337,21 +373,7 @@ impl From for Def { } } } -impl From for Def { - fn from(data:ModData) -> Def { - Def { - kind: DefKind::Mod, - id: From::from(data.id), - span: data.span, - name: data.name, - qualname: data.qualname, - value: data.filename, - children: data.items.into_iter().map(|id| From::from(id)).collect(), - decl_id: None, - docs: data.docs, - } - } -} + impl From for Def { fn from(data: TypeDefData) -> Def { Def { diff --git a/src/librustc_save_analysis/lib.rs b/src/librustc_save_analysis/lib.rs index 9103b90d7d..862345fd46 100644 --- a/src/librustc_save_analysis/lib.rs +++ b/src/librustc_save_analysis/lib.rs @@ -18,7 +18,6 @@ #![cfg_attr(not(stage0), deny(warnings))] #![feature(custom_attribute)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![allow(unused_attributes)] #![feature(rustc_private)] #![feature(staged_api)] @@ -42,19 +41,20 @@ pub mod external_data; pub mod span_utils; use rustc::hir; -use rustc::hir::map::{Node, NodeItem}; use rustc::hir::def::Def; +use rustc::hir::map::Node; use rustc::hir::def_id::DefId; use rustc::session::config::CrateType::CrateTypeExecutable; use rustc::ty::{self, TyCtxt}; use std::env; -use std::fs::{self, File}; +use std::fs::File; use std::path::{Path, PathBuf}; use syntax::ast::{self, NodeId, PatKind, Attribute, CRATE_NODE_ID}; use syntax::parse::lexer::comments::strip_doc_comment_decoration; -use syntax::parse::token::{self, keywords, InternedString}; +use syntax::parse::token; +use syntax::symbol::{Symbol, keywords}; use syntax::visit::{self, Visitor}; use syntax::print::pprust::{ty_to_string, arg_to_string}; use syntax::codemap::MacroAttribute; @@ -84,6 +84,7 @@ pub mod recorder { pub struct SaveContext<'l, 'tcx: 'l> { tcx: TyCtxt<'l, 'tcx, 'tcx>, + analysis: &'l ty::CrateAnalysis<'tcx>, span_utils: SpanUtils<'tcx>, } @@ -92,16 +93,20 @@ macro_rules! option_try( ); impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { - pub fn new(tcx: TyCtxt<'l, 'tcx, 'tcx>) -> SaveContext<'l, 'tcx> { + pub fn new(tcx: TyCtxt<'l, 'tcx, 'tcx>, + analysis: &'l ty::CrateAnalysis<'tcx>) + -> SaveContext<'l, 'tcx> { let span_utils = SpanUtils::new(&tcx.sess); - SaveContext::from_span_utils(tcx, span_utils) + SaveContext::from_span_utils(tcx, analysis, span_utils) } pub fn from_span_utils(tcx: TyCtxt<'l, 'tcx, 'tcx>, + analysis: &'l ty::CrateAnalysis<'tcx>, span_utils: SpanUtils<'tcx>) -> SaveContext<'l, 'tcx> { SaveContext { tcx: tcx, + analysis: analysis, span_utils: span_utils, } } @@ -119,7 +124,7 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { } }; result.push(CrateData { - name: (&self.tcx.sess.cstore.crate_name(n)[..]).to_owned(), + name: self.tcx.sess.cstore.crate_name(n).to_string(), number: n.as_u32(), span: span, }); @@ -245,8 +250,8 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { match typ.node { // Common case impl for a struct or something basic. 
ast::TyKind::Path(None, ref path) => { + filter!(self.span_utils, None, path.span, None); sub_span = self.span_utils.sub_span_for_type_name(path.span); - filter!(self.span_utils, sub_span, path.span, None); type_data = self.lookup_ref_id(typ.id).map(|id| { TypeRefData { span: sub_span.unwrap(), @@ -286,7 +291,8 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { scope: NodeId) -> Option { if let Some(ident) = field.ident { let qualname = format!("::{}::{}", self.tcx.node_path_str(scope), ident); - let typ = self.tcx.tables().node_types.get(&field.id).unwrap().to_string(); + let def_id = self.tcx.map.local_def_id(field.id); + let typ = self.tcx.item_type(def_id).to_string(); let sub_span = self.span_utils.sub_span_before_token(field.span, token::Colon); filter!(self.span_utils, sub_span, field.span, None); Some(VariableData { @@ -313,22 +319,29 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { name: ast::Name, span: Span) -> Option { // The qualname for a method is the trait name or name of the struct in an impl in // which the method is declared in, followed by the method's name. - let (qualname, parent_scope, vis, docs) = + let (qualname, parent_scope, decl_id, vis, docs) = match self.tcx.impl_of_method(self.tcx.map.local_def_id(id)) { Some(impl_id) => match self.tcx.map.get_if_local(impl_id) { - Some(NodeItem(item)) => { + Some(Node::NodeItem(item)) => { match item.node { hir::ItemImpl(.., ref ty, _) => { let mut result = String::from("<"); result.push_str(&rustc::hir::print::ty_to_string(&ty)); let trait_id = self.tcx.trait_id_of_impl(impl_id); + let mut decl_id = None; if let Some(def_id) = trait_id { result.push_str(" as "); result.push_str(&self.tcx.item_path_str(def_id)); + self.tcx.associated_items(def_id) + .find(|item| item.name == name) + .map(|item| decl_id = Some(item.def_id)); } result.push_str(">"); - (result, trait_id, From::from(&item.vis), docs_for_attrs(&item.attrs)) + + (result, trait_id, decl_id, + From::from(&item.vis), + docs_for_attrs(&item.attrs)) } _ => { span_bug!(span, @@ -349,9 +362,9 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { None => match self.tcx.trait_of_item(self.tcx.map.local_def_id(id)) { Some(def_id) => { match self.tcx.map.get_if_local(def_id) { - Some(NodeItem(item)) => { + Some(Node::NodeItem(item)) => { (format!("::{}", self.tcx.item_path_str(def_id)), - Some(def_id), + Some(def_id), None, From::from(&item.vis), docs_for_attrs(&item.attrs)) } @@ -373,15 +386,6 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { let qualname = format!("{}::{}", qualname, name); - let def_id = self.tcx.map.local_def_id(id); - let decl_id = self.tcx.trait_item_of_item(def_id).and_then(|new_def_id| { - if new_def_id != def_id { - Some(new_def_id) - } else { - None - } - }); - let sub_span = self.span_utils.sub_span_after_keyword(span, keywords::Fn); filter!(self.span_utils, sub_span, span, None); Some(FunctionData { @@ -473,7 +477,7 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { ast::ExprKind::MethodCall(..) 
=> { let method_call = ty::MethodCall::expr(expr.id); let method_id = self.tcx.tables().method_map[&method_call].def_id; - let (def_id, decl_id) = match self.tcx.impl_or_trait_item(method_id).container() { + let (def_id, decl_id) = match self.tcx.associated_item(method_id).container { ty::ImplContainer(_) => (Some(method_id), None), ty::TraitContainer(_) => (None, Some(method_id)), }; @@ -497,8 +501,51 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { } } + pub fn get_path_def(&self, id: NodeId) -> Def { + match self.tcx.map.get(id) { + Node::NodeTraitRef(tr) => tr.path.def, + + Node::NodeItem(&hir::Item { node: hir::ItemUse(ref path, _), .. }) | + Node::NodeVisibility(&hir::Visibility::Restricted { ref path, .. }) => path.def, + + Node::NodeExpr(&hir::Expr { node: hir::ExprPath(ref qpath), .. }) | + Node::NodeExpr(&hir::Expr { node: hir::ExprStruct(ref qpath, ..), .. }) | + Node::NodePat(&hir::Pat { node: hir::PatKind::Path(ref qpath), .. }) | + Node::NodePat(&hir::Pat { node: hir::PatKind::Struct(ref qpath, ..), .. }) | + Node::NodePat(&hir::Pat { node: hir::PatKind::TupleStruct(ref qpath, ..), .. }) => { + self.tcx.tables().qpath_def(qpath, id) + } + + Node::NodeLocal(&hir::Pat { node: hir::PatKind::Binding(_, def_id, ..), .. }) => { + Def::Local(def_id) + } + + Node::NodeTy(&hir::Ty { node: hir::TyPath(ref qpath), .. }) => { + match *qpath { + hir::QPath::Resolved(_, ref path) => path.def, + hir::QPath::TypeRelative(..) => { + if let Some(ty) = self.analysis.hir_ty_to_ty.get(&id) { + if let ty::TyProjection(proj) = ty.sty { + for item in self.tcx.associated_items(proj.trait_ref.def_id) { + if item.kind == ty::AssociatedKind::Type { + if item.name == proj.item_name { + return Def::AssociatedTy(item.def_id); + } + } + } + } + } + Def::Err + } + } + } + + _ => Def::Err + } + } + pub fn get_path_data(&self, id: NodeId, path: &ast::Path) -> Option { - let def = self.tcx.expect_def(id); + let def = self.get_path_def(id); let sub_span = self.span_utils.span_for_last_ident(path.span); filter!(self.span_utils, sub_span, path.span, None); match def { @@ -535,21 +582,10 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { let sub_span = self.span_utils.sub_span_for_meth_name(path.span); filter!(self.span_utils, sub_span, path.span, None); let def_id = if decl_id.is_local() { - let ti = self.tcx.impl_or_trait_item(decl_id); - match ti.container() { - ty::TraitContainer(def_id) => { - self.tcx - .trait_items(def_id) - .iter() - .find(|mr| mr.name() == ti.name() && self.trait_method_has_body(mr)) - .map(|mr| mr.def_id()) - } - ty::ImplContainer(def_id) => { - Some(*self.tcx.impl_or_trait_items(def_id).iter().find(|&&mr| { - self.tcx.impl_or_trait_item(mr).name() == ti.name() - }).unwrap()) - } - } + let ti = self.tcx.associated_item(decl_id); + self.tcx.associated_items(ti.container.id()) + .find(|item| item.name == ti.name && item.defaultness.has_value()) + .map(|item| item.def_id) } else { None }; @@ -578,27 +614,14 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { Def::PrimTy(..) | Def::SelfTy(..) | Def::Label(..) | + Def::Macro(..) 
| Def::Err => None, } } - fn trait_method_has_body(&self, mr: &ty::ImplOrTraitItem) -> bool { - let def_id = mr.def_id(); - if let Some(node_id) = self.tcx.map.as_local_node_id(def_id) { - let trait_item = self.tcx.map.expect_trait_item(node_id); - if let hir::TraitItem_::MethodTraitItem(_, Some(_)) = trait_item.node { - true - } else { - false - } - } else { - false - } - } - pub fn get_field_ref_data(&self, field_ref: &ast::Field, - variant: ty::VariantDef, + variant: &ty::VariantDef, parent: NodeId) -> Option { let f = variant.field_named(field_ref.ident.node.name); @@ -666,8 +689,8 @@ impl<'l, 'tcx: 'l> SaveContext<'l, 'tcx> { } fn lookup_ref_id(&self, ref_id: NodeId) -> Option { - match self.tcx.expect_def(ref_id) { - Def::PrimTy(_) | Def::SelfTy(..) => None, + match self.get_path_def(ref_id) { + Def::PrimTy(_) | Def::SelfTy(..) | Def::Err => None, def => Some(def.def_id()), } } @@ -718,7 +741,7 @@ impl PathCollector { } } -impl Visitor for PathCollector { +impl<'a> Visitor<'a> for PathCollector { fn visit_pat(&mut self, p: &ast::Pat) { match p.node { PatKind::Struct(ref path, ..) => { @@ -753,13 +776,17 @@ impl Visitor for PathCollector { } fn docs_for_attrs(attrs: &[Attribute]) -> String { - let doc = InternedString::new("doc"); + let doc = Symbol::intern("doc"); let mut result = String::new(); for attr in attrs { if attr.name() == doc { - if let Some(ref val) = attr.value_str() { - result.push_str(&strip_doc_comment_decoration(val)); + if let Some(val) = attr.value_str() { + if attr.is_sugared_doc { + result.push_str(&strip_doc_comment_decoration(&val.as_str())); + } else { + result.push_str(&val.as_str()); + } result.push('\n'); } } @@ -786,7 +813,7 @@ impl Format { pub fn process_crate<'l, 'tcx>(tcx: TyCtxt<'l, 'tcx, 'tcx>, krate: &ast::Crate, - analysis: &'l ty::CrateAnalysis<'l>, + analysis: &'l ty::CrateAnalysis<'tcx>, cratename: &str, odir: Option<&Path>, format: Format) { @@ -805,7 +832,7 @@ pub fn process_crate<'l, 'tcx>(tcx: TyCtxt<'l, 'tcx, 'tcx>, }, }; - if let Err(e) = fs::create_dir_all(&root_path) { + if let Err(e) = rustc::util::fs::create_dir_racy(&root_path) { tcx.sess.err(&format!("Could not create directory {}: {}", root_path.display(), e)); @@ -834,12 +861,12 @@ pub fn process_crate<'l, 'tcx>(tcx: TyCtxt<'l, 'tcx, 'tcx>, root_path.pop(); let output = &mut output_file; - let save_ctxt = SaveContext::new(tcx); + let save_ctxt = SaveContext::new(tcx, analysis); macro_rules! dump { ($new_dumper: expr) => {{ let mut dumper = $new_dumper; - let mut visitor = DumpVisitor::new(tcx, save_ctxt, analysis, &mut dumper); + let mut visitor = DumpVisitor::new(save_ctxt, &mut dumper); visitor.dump_crate_info(cratename, krate); visit::walk_crate(&mut visitor, krate); diff --git a/src/librustc_save_analysis/span_utils.rs b/src/librustc_save_analysis/span_utils.rs index 031b9a6a5a..e06aefd865 100644 --- a/src/librustc_save_analysis/span_utils.rs +++ b/src/librustc_save_analysis/span_utils.rs @@ -18,7 +18,8 @@ use std::path::Path; use syntax::ast; use syntax::parse::lexer::{self, Reader, StringReader}; -use syntax::parse::token::{self, keywords, Token}; +use syntax::parse::token::{self, Token}; +use syntax::symbol::keywords; use syntax_pos::*; #[derive(Clone)] @@ -177,25 +178,44 @@ impl<'a> SpanUtils<'a> { } // Return the span for the last ident before a `<` and outside any - // brackets, or the last span. + // angle brackets, or the last span. 
pub fn sub_span_for_type_name(&self, span: Span) -> Option { let mut toks = self.retokenise_span(span); let mut prev = toks.real_token(); let mut result = None; + + // We keep track of the following two counts - the depth of nesting of + // angle brackets, and the depth of nesting of square brackets. For the + // angle bracket count, we only count tokens which occur outside of any + // square brackets (i.e. bracket_count == 0). The intutition here is + // that we want to count angle brackets in the type, but not any which + // could be in expression context (because these could mean 'less than', + // etc.). + let mut angle_count = 0; let mut bracket_count = 0; loop { let next = toks.real_token(); - if (next.tok == token::Lt || next.tok == token::Colon) && bracket_count == 0 && + if (next.tok == token::Lt || next.tok == token::Colon) && + angle_count == 0 && + bracket_count == 0 && prev.tok.is_ident() { result = Some(prev.sp); } + if bracket_count == 0 { + angle_count += match prev.tok { + token::Lt => 1, + token::Gt => -1, + token::BinOp(token::Shl) => 2, + token::BinOp(token::Shr) => -2, + _ => 0, + }; + } + bracket_count += match prev.tok { - token::Lt => 1, - token::Gt => -1, - token::BinOp(token::Shl) => 2, - token::BinOp(token::Shr) => -2, + token::OpenDelim(token::Bracket) => 1, + token::CloseDelim(token::Bracket) => -1, _ => 0, }; @@ -204,7 +224,7 @@ impl<'a> SpanUtils<'a> { } prev = next; } - if bracket_count != 0 { + if angle_count != 0 || bracket_count != 0 { let loc = self.sess.codemap().lookup_char_pos(span.lo); span_bug!(span, "Mis-counted brackets when breaking path? Parsing '{}' \ @@ -213,7 +233,7 @@ impl<'a> SpanUtils<'a> { loc.file.name, loc.line); } - if result.is_none() && prev.tok.is_ident() && bracket_count == 0 { + if result.is_none() && prev.tok.is_ident() && angle_count == 0 { return self.make_sub_span(span, Some(prev.sp)); } self.make_sub_span(span, result) @@ -222,19 +242,20 @@ impl<'a> SpanUtils<'a> { // Reparse span and return an owned vector of sub spans of the first limit // identifier tokens in the given nesting level. // example with Foo, Bar> - // Nesting = 0: all idents outside of brackets: [Foo] - // Nesting = 1: idents within one level of brackets: [Bar, Bar] + // Nesting = 0: all idents outside of angle brackets: [Foo] + // Nesting = 1: idents within one level of angle brackets: [Bar, Bar] pub fn spans_with_brackets(&self, span: Span, nesting: isize, limit: isize) -> Vec { let mut result: Vec = vec![]; let mut toks = self.retokenise_span(span); // We keep track of how many brackets we're nested in + let mut angle_count: isize = 0; let mut bracket_count: isize = 0; let mut found_ufcs_sep = false; loop { let ts = toks.real_token(); if ts.tok == token::Eof { - if bracket_count != 0 { + if angle_count != 0 || bracket_count != 0 { if generated_code(span) { return vec![]; } @@ -252,6 +273,14 @@ impl<'a> SpanUtils<'a> { return result; } bracket_count += match ts.tok { + token::OpenDelim(token::Bracket) => 1, + token::CloseDelim(token::Bracket) => -1, + _ => 0, + }; + if bracket_count > 0 { + continue; + } + angle_count += match ts.tok { token::Lt => 1, token::Gt => -1, token::BinOp(token::Shl) => 2, @@ -269,11 +298,11 @@ impl<'a> SpanUtils<'a> { // path, trying to pull out the non-nested idents (e.g., avoiding 'a // in `>::C`). So we end up with a span for `B>::C` from // the start of the first ident to the end of the path. 
- if !found_ufcs_sep && bracket_count == -1 { + if !found_ufcs_sep && angle_count == -1 { found_ufcs_sep = true; - bracket_count += 1; + angle_count += 1; } - if ts.tok.is_ident() && bracket_count == nesting { + if ts.tok.is_ident() && angle_count == nesting { result.push(self.make_sub_span(span, Some(ts.sp)).unwrap()); } } diff --git a/src/librustc_trans/Cargo.toml b/src/librustc_trans/Cargo.toml index 38f9e7ab0c..796a80d080 100644 --- a/src/librustc_trans/Cargo.toml +++ b/src/librustc_trans/Cargo.toml @@ -16,6 +16,7 @@ graphviz = { path = "../libgraphviz" } log = { path = "../liblog" } rustc = { path = "../librustc" } rustc_back = { path = "../librustc_back" } +rustc_bitflags = { path = "../librustc_bitflags" } rustc_const_eval = { path = "../librustc_const_eval" } rustc_const_math = { path = "../librustc_const_math" } rustc_data_structures = { path = "../librustc_data_structures" } diff --git a/src/librustc_trans/abi.rs b/src/librustc_trans/abi.rs index 0a5b013c79..0ac853e99e 100644 --- a/src/librustc_trans/abi.rs +++ b/src/librustc_trans/abi.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use llvm::{self, ValueRef, Integer, Pointer, Float, Double, Struct, Array, Vector}; +use llvm::{self, ValueRef, Integer, Pointer, Float, Double, Struct, Array, Vector, AttributePlace}; use base; use build::AllocaFcx; use common::{type_is_fat_ptr, BlockAndBuilder, C_uint}; @@ -24,6 +24,7 @@ use cabi_s390x; use cabi_mips; use cabi_mips64; use cabi_asmjs; +use cabi_msp430; use machine::{llalign_of_min, llsize_of, llsize_of_alloc}; use type_::Type; use type_of; @@ -49,6 +50,93 @@ enum ArgKind { Ignore, } +// Hack to disable non_upper_case_globals only for the bitflags! and not for the rest +// of this module +pub use self::attr_impl::ArgAttribute; + +#[allow(non_upper_case_globals)] +mod attr_impl { + // The subset of llvm::Attribute needed for arguments, packed into a bitfield. + bitflags! { + #[derive(Default, Debug)] + flags ArgAttribute : u8 { + const ByVal = 1 << 0, + const NoAlias = 1 << 1, + const NoCapture = 1 << 2, + const NonNull = 1 << 3, + const ReadOnly = 1 << 4, + const SExt = 1 << 5, + const StructRet = 1 << 6, + const ZExt = 1 << 7, + } + } +} + +macro_rules! for_each_kind { + ($flags: ident, $f: ident, $($kind: ident),+) => ({ + $(if $flags.contains(ArgAttribute::$kind) { $f(llvm::Attribute::$kind) })+ + }) +} + +impl ArgAttribute { + fn for_each_kind(&self, mut f: F) where F: FnMut(llvm::Attribute) { + for_each_kind!(self, f, + ByVal, NoAlias, NoCapture, NonNull, ReadOnly, SExt, StructRet, ZExt) + } +} + +/// A compact representation of LLVM attributes (at least those relevant for this module) +/// that can be manipulated without interacting with LLVM's Attribute machinery. 
+#[derive(Copy, Clone, Debug, Default)] +pub struct ArgAttributes { + regular: ArgAttribute, + dereferenceable_bytes: u64, +} + +impl ArgAttributes { + pub fn set(&mut self, attr: ArgAttribute) -> &mut Self { + self.regular = self.regular | attr; + self + } + + pub fn unset(&mut self, attr: ArgAttribute) -> &mut Self { + self.regular = self.regular - attr; + self + } + + pub fn set_dereferenceable(&mut self, bytes: u64) -> &mut Self { + self.dereferenceable_bytes = bytes; + self + } + + pub fn unset_dereferenceable(&mut self) -> &mut Self { + self.dereferenceable_bytes = 0; + self + } + + pub fn apply_llfn(&self, idx: AttributePlace, llfn: ValueRef) { + unsafe { + self.regular.for_each_kind(|attr| attr.apply_llfn(idx, llfn)); + if self.dereferenceable_bytes != 0 { + llvm::LLVMRustAddDereferenceableAttr(llfn, + idx.as_uint(), + self.dereferenceable_bytes); + } + } + } + + pub fn apply_callsite(&self, idx: AttributePlace, callsite: ValueRef) { + unsafe { + self.regular.for_each_kind(|attr| attr.apply_callsite(idx, callsite)); + if self.dereferenceable_bytes != 0 { + llvm::LLVMRustAddDereferenceableCallSiteAttr(callsite, + idx.as_uint(), + self.dereferenceable_bytes); + } + } + } +} + /// Information about how a specific C type /// should be passed to or returned from a function /// @@ -80,7 +168,7 @@ pub struct ArgType { /// Dummy argument, which is emitted before the real argument pub pad: Option, /// LLVM attributes of argument - pub attrs: llvm::Attributes + pub attrs: ArgAttributes } impl ArgType { @@ -92,7 +180,7 @@ impl ArgType { signedness: None, cast: None, pad: None, - attrs: llvm::Attributes::default() + attrs: ArgAttributes::default() } } @@ -100,15 +188,15 @@ impl ArgType { assert_eq!(self.kind, ArgKind::Direct); // Wipe old attributes, likely not valid through indirection. - self.attrs = llvm::Attributes::default(); + self.attrs = ArgAttributes::default(); let llarg_sz = llsize_of_alloc(ccx, self.ty); // For non-immediate arguments the callee gets its own copy of // the value on the stack, so there are no aliases. It's also // program-invisible so can't possibly capture - self.attrs.set(llvm::Attribute::NoAlias) - .set(llvm::Attribute::NoCapture) + self.attrs.set(ArgAttribute::NoAlias) + .set(ArgAttribute::NoCapture) .set_dereferenceable(llarg_sz); self.kind = ArgKind::Indirect; @@ -124,9 +212,9 @@ impl ArgType { if let Some(signed) = self.signedness { if self.ty.int_width() < bits { self.attrs.set(if signed { - llvm::Attribute::SExt + ArgAttribute::SExt } else { - llvm::Attribute::ZExt + ArgAttribute::ZExt }); } } @@ -273,19 +361,19 @@ impl FnType { C => llvm::CCallConv, Win64 => llvm::X86_64_Win64, SysV64 => llvm::X86_64_SysV, + Aapcs => llvm::ArmAapcsCallConv, // These API constants ought to be more specific... Cdecl => llvm::CCallConv, - Aapcs => llvm::CCallConv, }; - let mut inputs = &sig.inputs[..]; + let mut inputs = sig.inputs(); let extra_args = if abi == RustCall { assert!(!sig.variadic && extra_args.is_empty()); - match inputs[inputs.len() - 1].sty { + match sig.inputs().last().unwrap().sty { ty::TyTuple(ref tupled_arguments) => { - inputs = &inputs[..inputs.len() - 1]; + inputs = &sig.inputs()[0..sig.inputs().len() - 1]; &tupled_arguments[..] 
} _ => { @@ -314,7 +402,7 @@ impl FnType { if ty.is_bool() { let llty = Type::i1(ccx); let mut arg = ArgType::new(llty, llty); - arg.attrs.set(llvm::Attribute::ZExt); + arg.attrs.set(ArgAttribute::ZExt); arg } else { let mut arg = ArgType::new(type_of::type_of(ccx, ty), @@ -340,7 +428,7 @@ impl FnType { } }; - let ret_ty = sig.output; + let ret_ty = sig.output(); let mut ret = arg_of(ret_ty, true); if !type_is_fat_ptr(ccx.tcx(), ret_ty) { @@ -349,7 +437,7 @@ impl FnType { if let ty::TyBox(_) = ret_ty.sty { // `Box` pointer return values never alias because ownership // is transferred - ret.attrs.set(llvm::Attribute::NoAlias); + ret.attrs.set(ArgAttribute::NoAlias); } // We can also mark the return value as `dereferenceable` in certain cases @@ -371,7 +459,7 @@ impl FnType { let rust_ptr_attrs = |ty: Ty<'tcx>, arg: &mut ArgType| match ty.sty { // `Box` pointer parameters never alias because ownership is transferred ty::TyBox(inner) => { - arg.attrs.set(llvm::Attribute::NoAlias); + arg.attrs.set(ArgAttribute::NoAlias); Some(inner) } @@ -386,18 +474,18 @@ impl FnType { let interior_unsafe = mt.ty.type_contents(ccx.tcx()).interior_unsafe(); if mt.mutbl != hir::MutMutable && !interior_unsafe { - arg.attrs.set(llvm::Attribute::NoAlias); + arg.attrs.set(ArgAttribute::NoAlias); } if mt.mutbl == hir::MutImmutable && !interior_unsafe { - arg.attrs.set(llvm::Attribute::ReadOnly); + arg.attrs.set(ArgAttribute::ReadOnly); } // When a reference in an argument has no named lifetime, it's // impossible for that reference to escape this function // (returned or stored beyond the call by a closure). if let ReLateBound(_, BrAnon(_)) = *b { - arg.attrs.set(llvm::Attribute::NoCapture); + arg.attrs.set(ArgAttribute::NoCapture); } Some(mt.ty) @@ -417,9 +505,9 @@ impl FnType { let mut info = ArgType::new(original_tys[1], sizing_tys[1]); if let Some(inner) = rust_ptr_attrs(ty, &mut data) { - data.attrs.set(llvm::Attribute::NonNull); + data.attrs.set(ArgAttribute::NonNull); if ccx.tcx().struct_tail(inner).is_trait() { - info.attrs.set(llvm::Attribute::NonNull); + info.attrs.set(ArgAttribute::NonNull); } } args.push(data); @@ -481,7 +569,7 @@ impl FnType { }; // Fat pointers are returned by-value. 
if !self.ret.is_ignore() { - if !type_is_fat_ptr(ccx.tcx(), sig.output) { + if !type_is_fat_ptr(ccx.tcx(), sig.output()) { fixup(&mut self.ret); } } @@ -490,7 +578,7 @@ impl FnType { fixup(arg); } if self.ret.is_indirect() { - self.ret.attrs.set(llvm::Attribute::StructRet); + self.ret.attrs.set(ArgAttribute::StructRet); } return; } @@ -520,11 +608,12 @@ impl FnType { "s390x" => cabi_s390x::compute_abi_info(ccx, self), "asmjs" => cabi_asmjs::compute_abi_info(ccx, self), "wasm32" => cabi_asmjs::compute_abi_info(ccx, self), + "msp430" => cabi_msp430::compute_abi_info(ccx, self), a => ccx.sess().fatal(&format!("unrecognized arch \"{}\" in target specification", a)) } if self.ret.is_indirect() { - self.ret.attrs.set(llvm::Attribute::StructRet); + self.ret.attrs.set(ArgAttribute::StructRet); } } diff --git a/src/librustc_trans/adt.rs b/src/librustc_trans/adt.rs index 0f987933df..9c82e25077 100644 --- a/src/librustc_trans/adt.rs +++ b/src/librustc_trans/adt.rs @@ -108,9 +108,9 @@ fn compute_fields<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>, }).collect::>() }, ty::TyTuple(fields) => fields.to_vec(), - ty::TyClosure(_, substs) => { + ty::TyClosure(def_id, substs) => { if variant_index > 0 { bug!("{} is a closure, which only has one variant", t);} - substs.upvar_tys.to_vec() + substs.upvar_tys(def_id, cx.tcx()).collect() }, _ => bug!("{} is not a type that can have fields.", t) } @@ -151,14 +151,14 @@ pub fn finish_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, | layout::UntaggedUnion { .. } | layout::RawNullablePointer { .. } => { } layout::Univariant { ..} | layout::StructWrappedNullablePointer { .. } => { - let (nonnull_variant, packed) = match *l { - layout::Univariant { ref variant, .. } => (0, variant.packed), + let (nonnull_variant_index, nonnull_variant, packed) = match *l { + layout::Univariant { ref variant, .. } => (0, variant, variant.packed), layout::StructWrappedNullablePointer { nndiscr, ref nonnull, .. } => - (nndiscr, nonnull.packed), + (nndiscr, nonnull, nonnull.packed), _ => unreachable!() }; - let fields = compute_fields(cx, t, nonnull_variant as usize, true); - llty.set_struct_body(&struct_llfields(cx, &fields, false, false), + let fields = compute_fields(cx, t, nonnull_variant_index as usize, true); + llty.set_struct_body(&struct_llfields(cx, &fields, nonnull_variant, false, false), packed) }, _ => bug!("This function cannot handle {} with layout {:#?}", t, l) @@ -188,7 +188,7 @@ fn generic_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let fields = compute_fields(cx, t, nndiscr as usize, false); match name { None => { - Type::struct_(cx, &struct_llfields(cx, &fields, sizing, dst), + Type::struct_(cx, &struct_llfields(cx, &fields, nonnull, sizing, dst), nonnull.packed) } Some(name) => { @@ -203,7 +203,7 @@ fn generic_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let fields = compute_fields(cx, t, 0, true); match name { None => { - let fields = struct_llfields(cx, &fields, sizing, dst); + let fields = struct_llfields(cx, &fields, &variant, sizing, dst); Type::struct_(cx, &fields, variant.packed) } Some(name) => { @@ -245,10 +245,9 @@ fn generic_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, // So we start with the discriminant, pad it up to the alignment with // more of its own type, then use alignment-sized ints to get the rest // of the size. - // - // FIXME #10604: this breaks when vector types are present. 
let size = size.bytes(); let align = align.abi(); + assert!(align <= std::u32::MAX as u64); let discr_ty = Type::from_integer(cx, discr); let discr_size = discr.size().bytes(); let padded_discr_size = roundup(discr_size, align as u32); @@ -292,12 +291,14 @@ fn union_fill(cx: &CrateContext, size: u64, align: u64) -> Type { fn struct_llfields<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, fields: &Vec>, + variant: &layout::Struct, sizing: bool, dst: bool) -> Vec { + let fields = variant.field_index_by_increasing_offset().map(|i| fields[i as usize]); if sizing { - fields.iter().filter(|&ty| !dst || type_is_sized(cx.tcx(), *ty)) - .map(|&ty| type_of::sizing_type_of(cx, ty)).collect() + fields.filter(|ty| !dst || type_is_sized(cx.tcx(), *ty)) + .map(|ty| type_of::sizing_type_of(cx, ty)).collect() } else { - fields.iter().map(|&ty| type_of::in_memory_type_of(cx, ty)).collect() + fields.map(|ty| type_of::in_memory_type_of(cx, ty)).collect() } } @@ -565,16 +566,16 @@ pub fn trans_field_ptr_builder<'blk, 'tcx>(bcx: &BlockAndBuilder<'blk, 'tcx>, fn struct_field_ptr<'blk, 'tcx>(bcx: &BlockAndBuilder<'blk, 'tcx>, st: &layout::Struct, fields: &Vec>, val: MaybeSizedValue, ix: usize, needs_cast: bool) -> ValueRef { - let ccx = bcx.ccx(); let fty = fields[ix]; + let ccx = bcx.ccx(); let ll_fty = type_of::in_memory_type_of(bcx.ccx(), fty); if bcx.is_unreachable() { return C_undef(ll_fty.ptr_to()); } let ptr_val = if needs_cast { - let fields = fields.iter().map(|&ty| { - type_of::in_memory_type_of(ccx, ty) + let fields = st.field_index_by_increasing_offset().map(|i| { + type_of::in_memory_type_of(ccx, fields[i]) }).collect::>(); let real_ty = Type::struct_(ccx, &fields[..], st.packed); bcx.pointercast(val.value, real_ty.ptr_to()) @@ -586,15 +587,15 @@ fn struct_field_ptr<'blk, 'tcx>(bcx: &BlockAndBuilder<'blk, 'tcx>, // * First field - Always aligned properly // * Packed struct - There is no alignment padding // * Field is sized - pointer is properly aligned already - if ix == 0 || st.packed || type_is_sized(bcx.tcx(), fty) { - return bcx.struct_gep(ptr_val, ix); + if st.offsets[ix] == layout::Size::from_bytes(0) || st.packed || type_is_sized(bcx.tcx(), fty) { + return bcx.struct_gep(ptr_val, st.memory_index[ix] as usize); } // If the type of the last field is [T] or str, then we don't need to do // any adjusments match fty.sty { ty::TySlice(..) | ty::TyStr => { - return bcx.struct_gep(ptr_val, ix); + return bcx.struct_gep(ptr_val, st.memory_index[ix] as usize); } _ => () } @@ -756,8 +757,12 @@ fn build_const_struct<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, // offset of current value let mut offset = 0; let mut cfields = Vec::new(); - let offsets = st.offsets.iter().map(|i| i.bytes()); - for (&val, target_offset) in vals.iter().zip(offsets) { + cfields.reserve(st.offsets.len()*2); + + let parts = st.field_index_by_increasing_offset().map(|i| { + (&vals[i], st.offsets[i].bytes()) + }); + for (&val, target_offset) in parts { if offset < target_offset { cfields.push(padding(ccx, target_offset - offset)); offset = target_offset; @@ -808,14 +813,11 @@ pub fn const_get_field<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>, let l = ccx.layout_of(t); match *l { layout::CEnum { .. } => bug!("element access in C-like enum const"), - layout::Univariant { .. } | layout::Vector { .. } => const_struct_field(val, ix), + layout::Univariant { ref variant, .. } => { + const_struct_field(val, variant.memory_index[ix] as usize) + } + layout::Vector { .. } => const_struct_field(val, ix), layout::UntaggedUnion { .. 
} => const_struct_field(val, 0), - layout::General { .. } => const_struct_field(val, ix + 1), - layout::RawNullablePointer { .. } => { - assert_eq!(ix, 0); - val - }, - layout::StructWrappedNullablePointer{ .. } => const_struct_field(val, ix), _ => bug!("{} does not have fields.", t) } } diff --git a/src/librustc_trans/asm.rs b/src/librustc_trans/asm.rs index 8c704cc329..665e12cbe8 100644 --- a/src/librustc_trans/asm.rs +++ b/src/librustc_trans/asm.rs @@ -88,7 +88,7 @@ pub fn trans_inline_asm<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, AsmDialect::Intel => llvm::AsmDialect::Intel, }; - let asm = CString::new(ia.asm.as_bytes()).unwrap(); + let asm = CString::new(ia.asm.as_str().as_bytes()).unwrap(); let constraint_cstr = CString::new(all_constraints).unwrap(); let r = InlineAsmCall(bcx, asm.as_ptr(), diff --git a/src/librustc_trans/assert_module_sources.rs b/src/librustc_trans/assert_module_sources.rs index 264ed4cd12..898e65ce39 100644 --- a/src/librustc_trans/assert_module_sources.rs +++ b/src/librustc_trans/assert_module_sources.rs @@ -29,7 +29,6 @@ use rustc::ty::TyCtxt; use syntax::ast; -use syntax::parse::token::InternedString; use {ModuleSource, ModuleTranslation}; @@ -77,7 +76,7 @@ impl<'a, 'tcx> AssertModuleSource<'a, 'tcx> { } let mname = self.field(attr, MODULE); - let mtrans = self.modules.iter().find(|mtrans| &mtrans.name[..] == &mname[..]); + let mtrans = self.modules.iter().find(|mtrans| *mtrans.name == *mname.as_str()); let mtrans = match mtrans { Some(m) => m, None => { @@ -113,7 +112,7 @@ impl<'a, 'tcx> AssertModuleSource<'a, 'tcx> { } } - fn field(&self, attr: &ast::Attribute, name: &str) -> InternedString { + fn field(&self, attr: &ast::Attribute, name: &str) -> ast::Name { for item in attr.meta_item_list().unwrap_or(&[]) { if item.check_name(name) { if let Some(value) = item.value_str() { @@ -137,7 +136,7 @@ impl<'a, 'tcx> AssertModuleSource<'a, 'tcx> { let config = &self.tcx.sess.parse_sess.config; let value = self.field(attr, CFG); debug!("check_config(config={:?}, value={:?})", config, value); - if config.iter().any(|c| c.check_name(&value[..])) { + if config.iter().any(|&(name, _)| name == value) { debug!("check_config: matched"); return true; } diff --git a/src/librustc_trans/attributes.rs b/src/librustc_trans/attributes.rs index 62eac35e0a..efdd1b736f 100644 --- a/src/librustc_trans/attributes.rs +++ b/src/librustc_trans/attributes.rs @@ -9,6 +9,8 @@ // except according to those terms. //! Set and unset common attributes on LLVM values. +use std::ffi::{CStr, CString}; + use llvm::{self, Attribute, ValueRef}; use llvm::AttributePlace::Function; pub use syntax::attr::InlineAttr; @@ -24,10 +26,9 @@ pub fn inline(val: ValueRef, inline: InlineAttr) { Always => Attribute::AlwaysInline.apply_llfn(Function, val), Never => Attribute::NoInline.apply_llfn(Function, val), None => { - let attr = Attribute::InlineHint | - Attribute::AlwaysInline | - Attribute::NoInline; - attr.unapply_llfn(Function, val) + Attribute::InlineHint.unapply_llfn(Function, val); + Attribute::AlwaysInline.unapply_llfn(Function, val); + Attribute::NoInline.unapply_llfn(Function, val); }, }; } @@ -62,10 +63,8 @@ pub fn set_frame_pointer_elimination(ccx: &CrateContext, llfn: ValueRef) { // parameter. 
if ccx.sess().must_not_eliminate_frame_pointers() { llvm::AddFunctionAttrStringValue( - llfn, - llvm::AttributePlace::Function, - "no-frame-pointer-elim\0", - "true\0") + llfn, llvm::AttributePlace::Function, + cstr("no-frame-pointer-elim\0"), cstr("true\0")); } } @@ -76,9 +75,17 @@ pub fn from_fn_attrs(ccx: &CrateContext, attrs: &[ast::Attribute], llfn: ValueRe inline(llfn, find_inline_attr(Some(ccx.sess().diagnostic()), attrs)); set_frame_pointer_elimination(ccx, llfn); - + let mut target_features = vec![]; for attr in attrs { - if attr.check_name("cold") { + if attr.check_name("target_feature") { + if let Some(val) = attr.value_str() { + for feat in val.as_str().split(",").map(|f| f.trim()) { + if !feat.is_empty() && !feat.contains('\0') { + target_features.push(feat.to_string()); + } + } + } + } else if attr.check_name("cold") { Attribute::Cold.apply_llfn(Function, llfn); } else if attr.check_name("naked") { naked(llfn, true); @@ -89,4 +96,14 @@ pub fn from_fn_attrs(ccx: &CrateContext, attrs: &[ast::Attribute], llfn: ValueRe unwind(llfn, true); } } + if !target_features.is_empty() { + let val = CString::new(target_features.join(",")).unwrap(); + llvm::AddFunctionAttrStringValue( + llfn, llvm::AttributePlace::Function, + cstr("target-features\0"), &val); + } +} + +fn cstr(s: &'static str) -> &CStr { + CStr::from_bytes_with_nul(s.as_bytes()).expect("null-terminated string") } diff --git a/src/librustc_trans/back/archive.rs b/src/librustc_trans/back/archive.rs index e063209799..11ab6dcaa8 100644 --- a/src/librustc_trans/back/archive.rs +++ b/src/librustc_trans/back/archive.rs @@ -145,8 +145,11 @@ impl<'a> ArchiveBuilder<'a> { /// /// This ignores adding the bytecode from the rlib, and if LTO is enabled /// then the object file also isn't added. - pub fn add_rlib(&mut self, rlib: &Path, name: &str, lto: bool) - -> io::Result<()> { + pub fn add_rlib(&mut self, + rlib: &Path, + name: &str, + lto: bool, + skip_objects: bool) -> io::Result<()> { // Ignoring obj file starting with the crate name // as simple comparison is not enough - there // might be also an extra name suffix @@ -159,9 +162,23 @@ impl<'a> ArchiveBuilder<'a> { self.config.sess.cstore.metadata_filename().to_owned(); self.add_archive(rlib, move |fname: &str| { - let skip_obj = lto && fname.starts_with(&obj_start) - && fname.ends_with(".o"); - skip_obj || fname.ends_with(bc_ext) || fname == metadata_filename + if fname.ends_with(bc_ext) || fname == metadata_filename { + return true + } + + // Don't include Rust objects if LTO is enabled + if lto && fname.starts_with(&obj_start) && fname.ends_with(".o") { + return true + } + + // Otherwise if this is *not* a rust object and we're skipping + // objects then skip this file + if skip_objects && (!fname.starts_with(&obj_start) || !fname.ends_with(".o")) { + return true + } + + // ok, don't skip this + return false }) } @@ -214,7 +231,7 @@ impl<'a> ArchiveBuilder<'a> { } fn llvm_archive_kind(&self) -> Result { - let kind = &self.config.sess.target.target.options.archive_format[..]; + let kind = &*self.config.sess.target.target.options.archive_format; kind.parse().map_err(|_| kind) } diff --git a/src/librustc_trans/back/link.rs b/src/librustc_trans/back/link.rs index ad8e0c1ee5..648dc4c24c 100644 --- a/src/librustc_trans/back/link.rs +++ b/src/librustc_trans/back/link.rs @@ -19,7 +19,7 @@ use session::config::{OutputFilenames, Input, OutputType}; use session::filesearch; use session::search_paths::PathKind; use session::Session; -use middle::cstore::{self, LinkMeta}; +use 
middle::cstore::{self, LinkMeta, NativeLibrary, LibSource}; use middle::cstore::{LinkagePreference, NativeLibraryKind}; use middle::dependency_format::Linkage; use CrateTranslation; @@ -43,6 +43,8 @@ use std::process::Command; use std::str; use flate; use syntax::ast; +use syntax::attr; +use syntax::symbol::Symbol; use syntax_pos::Span; // RLIB LLVM-BYTECODE OBJECT LAYOUT @@ -92,8 +94,8 @@ pub fn find_crate_name(sess: Option<&Session>, if let Some(sess) = sess { if let Some(ref s) = sess.opts.crate_name { - if let Some((attr, ref name)) = attr_crate_name { - if *s != &name[..] { + if let Some((attr, name)) = attr_crate_name { + if name != &**s { let msg = format!("--crate-name and #[crate_name] are \ required to match, but `{}` != `{}`", s, name); @@ -122,14 +124,13 @@ pub fn find_crate_name(sess: Option<&Session>, } "rust_out".to_string() - } pub fn build_link_meta(incremental_hashes_map: &IncrementalHashesMap, name: &str) -> LinkMeta { let r = LinkMeta { - crate_name: name.to_owned(), + crate_name: Symbol::intern(name), crate_hash: Svh::new(incremental_hashes_map[&DepNode::Krate].to_smaller_hash()), }; info!("{:?}", r); @@ -262,6 +263,9 @@ pub fn filename_for_input(sess: &Session, config::CrateTypeRlib => { outputs.out_directory.join(&format!("lib{}.rlib", libname)) } + config::CrateTypeMetadata => { + outputs.out_directory.join(&format!("lib{}.rmeta", libname)) + } config::CrateTypeCdylib | config::CrateTypeProcMacro | config::CrateTypeDylib => { @@ -297,7 +301,7 @@ pub fn each_linked_rlib(sess: &Session, .or_else(|| fmts.get(&config::CrateTypeCdylib)) .or_else(|| fmts.get(&config::CrateTypeProcMacro)); let fmts = fmts.unwrap_or_else(|| { - bug!("could not find formats for rlibs") + bug!("could not find formats for rlibs"); }); for (cnum, path) in crates { match fmts[cnum.as_usize() - 1] { @@ -306,8 +310,12 @@ pub fn each_linked_rlib(sess: &Session, } let name = sess.cstore.crate_name(cnum).clone(); let path = match path { - Some(p) => p, - None => { + LibSource::Some(p) => p, + LibSource::MetadataOnly => { + sess.fatal(&format!("could not find rlib for: `{}`, found rmeta (metadata) file", + name)); + } + LibSource::None => { sess.fatal(&format!("could not find rlib for: `{}`", name)); } }; @@ -351,6 +359,9 @@ fn link_binary_output(sess: &Session, config::CrateTypeStaticlib => { link_staticlib(sess, &objects, &out_filename, tmpdir.path()); } + config::CrateTypeMetadata => { + emit_metadata(sess, trans, &out_filename); + } _ => { link_natively(sess, crate_type, &objects, &out_filename, trans, outputs, tmpdir.path()); @@ -389,6 +400,13 @@ fn archive_config<'a>(sess: &'a Session, } } +fn emit_metadata<'a>(sess: &'a Session, trans: &CrateTranslation, out_filename: &Path) { + let result = fs::File::create(out_filename).and_then(|mut f| f.write_all(&trans.metadata)); + if let Err(e) = result { + sess.fatal(&format!("failed to write {}: {}", out_filename.display(), e)); + } +} + // Create an 'rlib' // // An rlib in its current incarnation is essentially a renamed .a file. The @@ -402,16 +420,34 @@ fn link_rlib<'a>(sess: &'a Session, tmpdir: &Path) -> ArchiveBuilder<'a> { info!("preparing rlib from {:?} to {:?}", objects, out_filename); let mut ab = ArchiveBuilder::new(archive_config(sess, out_filename, None)); + for obj in objects { ab.add_file(obj); } - for (l, kind) in sess.cstore.used_libraries() { - match kind { - NativeLibraryKind::NativeStatic => ab.add_native_library(&l), + // Note that in this loop we are ignoring the value of `lib.cfg`. 
That is, + // we may not be configured to actually include a static library if we're + // adding it here. That's because later when we consume this rlib we'll + // decide whether we actually needed the static library or not. + // + // To do this "correctly" we'd need to keep track of which libraries added + // which object files to the archive. We don't do that here, however. The + // #[link(cfg(..))] feature is unstable, though, and only intended to get + // liblibc working. In that sense the check below just indicates that if + // there are any libraries we want to omit object files for at link time we + // just exclude all custom object files. + // + // Eventually if we want to stabilize or flesh out the #[link(cfg(..))] + // feature then we'll need to figure out how to record what objects were + // loaded from the libraries found here and then encode that into the + // metadata of the rlib we're generating somehow. + for lib in sess.cstore.used_libraries() { + match lib.kind { + NativeLibraryKind::NativeStatic => {} NativeLibraryKind::NativeFramework | - NativeLibraryKind::NativeUnknown => {} + NativeLibraryKind::NativeUnknown => continue, } + ab.add_native_library(&lib.name.as_str()); } // After adding all files to the archive, we need to update the @@ -446,15 +482,7 @@ fn link_rlib<'a>(sess: &'a Session, // here so concurrent builds in the same directory don't try to use // the same filename for metadata (stomping over one another) let metadata = tmpdir.join(sess.cstore.metadata_filename()); - match fs::File::create(&metadata).and_then(|mut f| { - f.write_all(&trans.metadata) - }) { - Ok(..) => {} - Err(e) => { - sess.fatal(&format!("failed to write {}: {}", - metadata.display(), e)); - } - } + emit_metadata(sess, trans, &metadata); ab.add_file(&metadata); // For LTO purposes, the bytecode of this library is also inserted @@ -578,10 +606,28 @@ fn link_staticlib(sess: &Session, objects: &[PathBuf], out_filename: &Path, each_linked_rlib(sess, &mut |cnum, path| { let name = sess.cstore.crate_name(cnum); - ab.add_rlib(path, &name, sess.lto()).unwrap(); - let native_libs = sess.cstore.native_libraries(cnum); - all_native_libs.extend(native_libs); + + // Here when we include the rlib into our staticlib we need to make a + // decision whether to include the extra object files along the way. + // These extra object files come from statically included native + // libraries, but they may be cfg'd away with #[link(cfg(..))]. + // + // This unstable feature, though, only needs liblibc to work. The only + // use case there is where musl is statically included in liblibc.rlib, + // so if we don't want the included version we just need to skip it. As + // a result the logic here is that if *any* linked library is cfg'd away + // we just skip all object files. + // + // Clearly this is not sufficient for a general purpose feature, and + // we'd want to read from the library's metadata to determine which + // object files come from where and selectively skip them. 
+ let skip_object_files = native_libs.iter().any(|lib| { + lib.kind == NativeLibraryKind::NativeStatic && !relevant_lib(sess, lib) + }); + ab.add_rlib(path, &name.as_str(), sess.lto(), skip_object_files).unwrap(); + + all_native_libs.extend(sess.cstore.native_libraries(cnum)); }); ab.update_symbols(); @@ -594,13 +640,14 @@ fn link_staticlib(sess: &Session, objects: &[PathBuf], out_filename: &Path, platforms, and so may need to be preserved"); } - for &(kind, ref lib) in &all_native_libs { - let name = match kind { - NativeLibraryKind::NativeStatic => "static library", + for lib in all_native_libs.iter().filter(|l| relevant_lib(sess, l)) { + let name = match lib.kind { NativeLibraryKind::NativeUnknown => "library", NativeLibraryKind::NativeFramework => "framework", + // These are included, no need to print them + NativeLibraryKind::NativeStatic => continue, }; - sess.note_without_error(&format!("{}: {}", name, *lib)); + sess.note_without_error(&format!("{}: {}", name, lib.name)); } } @@ -876,14 +923,12 @@ fn add_local_native_libraries(cmd: &mut Linker, sess: &Session) { } }); - let libs = sess.cstore.used_libraries(); - - let staticlibs = libs.iter().filter_map(|&(ref l, kind)| { - if kind == NativeLibraryKind::NativeStatic {Some(l)} else {None} - }); - let others = libs.iter().filter(|&&(_, kind)| { - kind != NativeLibraryKind::NativeStatic + let pair = sess.cstore.used_libraries().into_iter().filter(|l| { + relevant_lib(sess, l) + }).partition(|lib| { + lib.kind == NativeLibraryKind::NativeStatic }); + let (staticlibs, others): (Vec<_>, Vec<_>) = pair; // Some platforms take hints about whether a library is static or dynamic. // For those that support this, we ensure we pass the option if the library @@ -899,15 +944,15 @@ fn add_local_native_libraries(cmd: &mut Linker, sess: &Session) { // don't otherwise explicitly reference them. This can occur for // libraries which are just providing bindings, libraries with generic // functions, etc. - cmd.link_whole_staticlib(l, &search_path); + cmd.link_whole_staticlib(&l.name.as_str(), &search_path); } cmd.hint_dynamic(); - for &(ref l, kind) in others { - match kind { - NativeLibraryKind::NativeUnknown => cmd.link_dylib(l), - NativeLibraryKind::NativeFramework => cmd.link_framework(l), + for lib in others { + match lib.kind { + NativeLibraryKind::NativeUnknown => cmd.link_dylib(&lib.name.as_str()), + NativeLibraryKind::NativeFramework => cmd.link_framework(&lib.name.as_str()), NativeLibraryKind::NativeStatic => bug!(), } } @@ -1017,7 +1062,16 @@ fn add_upstream_rust_crates(cmd: &mut Linker, cnum: CrateNum) { let src = sess.cstore.used_crate_source(cnum); let cratepath = &src.rlib.unwrap().0; - if !sess.lto() && crate_type != config::CrateTypeDylib { + + // See the comment above in `link_staticlib` and `link_rlib` for why if + // there's a static library that's not relevant we skip all object + // files. 
+ let native_libs = sess.cstore.native_libraries(cnum); + let skip_native = native_libs.iter().any(|lib| { + lib.kind == NativeLibraryKind::NativeStatic && !relevant_lib(sess, lib) + }); + + if !sess.lto() && crate_type != config::CrateTypeDylib && !skip_native { cmd.link_rlib(&fix_windows_verbatim_for_gcc(cratepath)); return } @@ -1029,33 +1083,42 @@ fn add_upstream_rust_crates(cmd: &mut Linker, time(sess.time_passes(), &format!("altering {}.rlib", name), || { let cfg = archive_config(sess, &dst, Some(cratepath)); let mut archive = ArchiveBuilder::new(cfg); - archive.remove_file(sess.cstore.metadata_filename()); archive.update_symbols(); let mut any_objects = false; for f in archive.src_files() { - if f.ends_with("bytecode.deflate") { + if f.ends_with("bytecode.deflate") || + f == sess.cstore.metadata_filename() { archive.remove_file(&f); continue } + let canonical = f.replace("-", "_"); let canonical_name = name.replace("-", "_"); + let is_rust_object = + canonical.starts_with(&canonical_name) && { + let num = &f[name.len()..f.len() - 2]; + num.len() > 0 && num[1..].parse::().is_ok() + }; + + // If we've been requested to skip all native object files + // (those not generated by the rust compiler) then we can skip + // this file. See above for why we may want to do this. + let skip_because_cfg_say_so = skip_native && !is_rust_object; + // If we're performing LTO and this is a rust-generated object // file, then we don't need the object file as it's part of the // LTO module. Note that `#![no_builtins]` is excluded from LTO, // though, so we let that object file slide. - if sess.lto() && - !sess.cstore.is_no_builtins(cnum) && - canonical.starts_with(&canonical_name) && - canonical.ends_with(".o") { - let num = &f[name.len()..f.len() - 2]; - if num.len() > 0 && num[1..].parse::().is_ok() { - archive.remove_file(&f); - continue - } + let skip_because_lto = sess.lto() && is_rust_object && + !sess.cstore.is_no_builtins(cnum); + + if skip_because_cfg_say_so || skip_because_lto { + archive.remove_file(&f); + } else { + any_objects = true; } - any_objects = true; } if !any_objects { @@ -1127,15 +1190,26 @@ fn add_upstream_native_libraries(cmd: &mut Linker, sess: &Session) { // the paths. 
let crates = sess.cstore.used_crates(LinkagePreference::RequireStatic); for (cnum, _) in crates { - let libs = sess.cstore.native_libraries(cnum); - for &(kind, ref lib) in &libs { - match kind { - NativeLibraryKind::NativeUnknown => cmd.link_dylib(lib), - NativeLibraryKind::NativeFramework => cmd.link_framework(lib), - NativeLibraryKind::NativeStatic => { - bug!("statics shouldn't be propagated"); - } + for lib in sess.cstore.native_libraries(cnum) { + if !relevant_lib(sess, &lib) { + continue + } + match lib.kind { + NativeLibraryKind::NativeUnknown => cmd.link_dylib(&lib.name.as_str()), + NativeLibraryKind::NativeFramework => cmd.link_framework(&lib.name.as_str()), + + // ignore statically included native libraries here as we've + // already included them when we included the rust library + // previously + NativeLibraryKind::NativeStatic => {} } } } } + +fn relevant_lib(sess: &Session, lib: &NativeLibrary) -> bool { + match lib.cfg { + Some(ref cfg) => attr::cfg_matches(cfg, &sess.parse_sess, None), + None => true, + } +} diff --git a/src/librustc_trans/back/linker.rs b/src/librustc_trans/back/linker.rs index 860903d259..a147b59894 100644 --- a/src/librustc_trans/back/linker.rs +++ b/src/librustc_trans/back/linker.rs @@ -17,11 +17,11 @@ use std::path::{Path, PathBuf}; use std::process::Command; use context::SharedCrateContext; -use monomorphize::Instance; use back::archive; +use back::symbol_export::{self, ExportedSymbols}; use middle::dependency_format::Linkage; -use rustc::hir::def_id::CrateNum; +use rustc::hir::def_id::{LOCAL_CRATE, CrateNum}; use session::Session; use session::config::CrateType; use session::config; @@ -34,10 +34,10 @@ pub struct LinkerInfo { impl<'a, 'tcx> LinkerInfo { pub fn new(scx: &SharedCrateContext<'a, 'tcx>, - reachable: &[String]) -> LinkerInfo { + exports: &ExportedSymbols) -> LinkerInfo { LinkerInfo { exports: scx.sess().crate_types.borrow().iter().map(|&c| { - (c, exported_symbols(scx, reachable, c)) + (c, exported_symbols(scx, exports, c)) }).collect(), } } @@ -207,7 +207,12 @@ impl<'a> Linker for GnuLinker<'a> { if self.sess.target.target.options.is_like_osx { self.cmd.args(&["-dynamiclib", "-Wl,-dylib"]); - if self.sess.opts.cg.rpath { + // Note that the `osx_rpath_install_name` option here is a hack + // purely to support rustbuild right now, we should get a more + // principled solution at some point to force the compiler to pass + // the right `-Wl,-install_name` with an `@rpath` in it. 
+ if self.sess.opts.cg.rpath || + self.sess.opts.debugging_opts.osx_rpath_install_name { let mut v = OsString::from("-Wl,-install_name,@rpath/"); v.push(out_filename.file_name().unwrap()); self.cmd.arg(&v); @@ -253,11 +258,28 @@ impl<'a> Linker for GnuLinker<'a> { let mut arg = OsString::new(); let path = tmpdir.join("list"); - if self.sess.target.target.options.is_like_solaris { + debug!("EXPORTED SYMBOLS:"); + + if self.sess.target.target.options.is_like_osx { + // Write a plain, newline-separated list of symbols + let res = (|| -> io::Result<()> { + let mut f = BufWriter::new(File::create(&path)?); + for sym in self.info.exports[&crate_type].iter() { + debug!(" _{}", sym); + writeln!(f, "_{}", sym)?; + } + Ok(()) + })(); + if let Err(e) = res { + self.sess.fatal(&format!("failed to write lib.def file: {}", e)); + } + } else { + // Write an LD version script let res = (|| -> io::Result<()> { let mut f = BufWriter::new(File::create(&path)?); writeln!(f, "{{\n global:")?; for sym in self.info.exports[&crate_type].iter() { + debug!(" {};", sym); writeln!(f, " {};", sym)?; } writeln!(f, "\n local:\n *;\n}};")?; @@ -266,33 +288,17 @@ impl<'a> Linker for GnuLinker<'a> { if let Err(e) = res { self.sess.fatal(&format!("failed to write version script: {}", e)); } - - arg.push("-Wl,-M,"); - arg.push(&path); - } else { - let prefix = if self.sess.target.target.options.is_like_osx { - "_" - } else { - "" - }; - let res = (|| -> io::Result<()> { - let mut f = BufWriter::new(File::create(&path)?); - for sym in self.info.exports[&crate_type].iter() { - writeln!(f, "{}{}", prefix, sym)?; - } - Ok(()) - })(); - if let Err(e) = res { - self.sess.fatal(&format!("failed to write lib.def file: {}", e)); - } - if self.sess.target.target.options.is_like_osx { - arg.push("-Wl,-exported_symbols_list,"); - } else { - arg.push("-Wl,--retain-symbols-file="); - } - arg.push(&path); } + if self.sess.target.target.options.is_like_osx { + arg.push("-Wl,-exported_symbols_list,"); + } else if self.sess.target.target.options.is_like_solaris { + arg.push("-Wl,-M,"); + } else { + arg.push("-Wl,--version-script="); + } + + arg.push(&path); self.cmd.arg(arg); } @@ -320,7 +326,16 @@ impl<'a> Linker for MsvcLinker<'a> { } fn gc_sections(&mut self, _keep_metadata: bool) { - self.cmd.arg("/OPT:REF,ICF"); + // MSVC's ICF (Identical COMDAT Folding) link optimization is + // slow for Rust and thus we disable it by default when not in + // optimization build. + if self.sess.opts.optimize != config::OptLevel::No { + self.cmd.arg("/OPT:REF,ICF"); + } else { + // It is necessary to specify NOICF here, because /OPT:REF + // implies ICF by default. + self.cmd.arg("/OPT:REF,NOICF"); + } } fn link_dylib(&mut self, lib: &str) { @@ -473,43 +488,29 @@ impl<'a> Linker for MsvcLinker<'a> { } fn exported_symbols(scx: &SharedCrateContext, - reachable: &[String], + exported_symbols: &ExportedSymbols, crate_type: CrateType) -> Vec { - // See explanation in GnuLinker::export_symbols, for - // why we don't ever need dylib symbols on non-MSVC. 
- if crate_type == CrateType::CrateTypeDylib || - crate_type == CrateType::CrateTypeProcMacro { - if !scx.sess().target.target.options.is_like_msvc { - return vec![]; - } - } + let export_threshold = symbol_export::crate_export_threshold(crate_type); - let mut symbols = reachable.to_vec(); + let mut symbols = Vec::new(); + exported_symbols.for_each_exported_symbol(LOCAL_CRATE, export_threshold, |name, _| { + symbols.push(name.to_owned()); + }); - // If we're producing anything other than a dylib then the `reachable` array - // above is the exhaustive set of symbols we should be exporting. - // - // For dylibs, however, we need to take a look at how all upstream crates - // are linked into this dynamic library. For all statically linked - // libraries we take all their reachable symbols and emit them as well. - if crate_type != CrateType::CrateTypeDylib { - return symbols - } - - let cstore = &scx.sess().cstore; let formats = scx.sess().dependency_formats.borrow(); let deps = formats[&crate_type].iter(); - symbols.extend(deps.enumerate().filter_map(|(i, f)| { - if *f == Linkage::Static { - Some(CrateNum::new(i + 1)) - } else { - None + + for (index, dep_format) in deps.enumerate() { + let cnum = CrateNum::new(index + 1); + // For each dependency that we are linking to statically ... + if *dep_format == Linkage::Static { + // ... we add its symbol list to our export list. + exported_symbols.for_each_exported_symbol(cnum, export_threshold, |name, _| { + symbols.push(name.to_owned()); + }) } - }).flat_map(|cnum| { - cstore.reachable_ids(cnum) - }).map(|did| -> String { - Instance::mono(scx, did).symbol_name(scx) - })); + } + symbols } diff --git a/src/librustc_trans/back/lto.rs b/src/librustc_trans/back/lto.rs index 522864c6ec..f137bfff03 100644 --- a/src/librustc_trans/back/lto.rs +++ b/src/librustc_trans/back/lto.rs @@ -8,14 +8,16 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use super::link; -use super::write; +use back::link; +use back::write; +use back::symbol_export::{self, ExportedSymbols}; use rustc::session::{self, config}; use llvm; use llvm::archive_ro::ArchiveRO; use llvm::{ModuleRef, TargetMachineRef, True, False}; use rustc::util::common::time; use rustc::util::common::path2cstr; +use rustc::hir::def_id::LOCAL_CRATE; use back::write::{ModuleConfig, with_llvm_pmb}; use libc; @@ -24,8 +26,23 @@ use flate; use std::ffi::CString; use std::path::Path; -pub fn run(sess: &session::Session, llmod: ModuleRef, - tm: TargetMachineRef, reachable: &[String], +pub fn crate_type_allows_lto(crate_type: config::CrateType) -> bool { + match crate_type { + config::CrateTypeExecutable | + config::CrateTypeStaticlib | + config::CrateTypeCdylib => true, + + config::CrateTypeDylib | + config::CrateTypeRlib | + config::CrateTypeMetadata | + config::CrateTypeProcMacro => false, + } +} + +pub fn run(sess: &session::Session, + llmod: ModuleRef, + tm: TargetMachineRef, + exported_symbols: &ExportedSymbols, config: &ModuleConfig, temp_no_opt_bc_filename: &Path) { if sess.opts.cg.prefer_dynamic { @@ -38,17 +55,31 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, // Make sure we actually can run LTO for crate_type in sess.crate_types.borrow().iter() { - match *crate_type { - config::CrateTypeExecutable | - config::CrateTypeCdylib | - config::CrateTypeStaticlib => {} - _ => { - sess.fatal("lto can only be run for executables and \ + if !crate_type_allows_lto(*crate_type) { + sess.fatal("lto can only be run for executables, cdylibs and \ static library outputs"); - } } } + let export_threshold = + symbol_export::crates_export_threshold(&sess.crate_types.borrow()[..]); + + let symbol_filter = &|&(ref name, level): &(String, _)| { + if symbol_export::is_below_threshold(level, export_threshold) { + let mut bytes = Vec::with_capacity(name.len() + 1); + bytes.extend(name.bytes()); + Some(CString::new(bytes).unwrap()) + } else { + None + } + }; + + let mut symbol_white_list: Vec = exported_symbols + .exported_symbols(LOCAL_CRATE) + .iter() + .filter_map(symbol_filter) + .collect(); + // For each of our upstream dependencies, find the corresponding rlib and // load the bitcode from the archive. Then merge it into the current LLVM // module that we've got. 
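
As a rough illustration of the white-listing step built up above (a standalone sketch with simplified types and names, not the compiler's actual code): symbols whose export level passes the chosen threshold are collected as NUL-terminated strings, so that the LTO restriction pass can internalize everything that is not on the list.

```rust
// Minimal sketch of the symbol white-list filtering idea, assuming a plain
// (name, level) list instead of the real ExportedSymbols machinery.
use std::ffi::CString;

#[derive(Clone, Copy, PartialEq, Debug)]
enum SymbolExportLevel {
    C,    // exported from every crate type, including cdylibs
    Rust, // only exported from Rust dylibs
}

fn is_below_threshold(level: SymbolExportLevel, threshold: SymbolExportLevel) -> bool {
    // A `Rust` threshold keeps everything; a `C` threshold keeps only C symbols.
    threshold == SymbolExportLevel::Rust || level == SymbolExportLevel::C
}

fn symbol_white_list(
    exported: &[(String, SymbolExportLevel)],
    threshold: SymbolExportLevel,
) -> Vec<CString> {
    exported
        .iter()
        .filter(|&&(_, level)| is_below_threshold(level, threshold))
        .map(|&(ref name, _)| CString::new(name.as_str()).expect("no interior NUL"))
        .collect()
}

fn main() {
    let exported = vec![
        ("main".to_string(), SymbolExportLevel::C),
        ("_ZN3foo3barE".to_string(), SymbolExportLevel::Rust),
    ];
    // With a `C` threshold (e.g. when building a cdylib), only `main` survives.
    let kept = symbol_white_list(&exported, SymbolExportLevel::C);
    println!("{:?}", kept);
}
```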
@@ -58,6 +89,11 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, return; } + symbol_white_list.extend( + exported_symbols.exported_symbols(cnum) + .iter() + .filter_map(symbol_filter)); + let archive = ArchiveRO::open(&path).expect("wanted an rlib"); let bytecodes = archive.iter().filter_map(|child| { child.ok().and_then(|c| c.name().map(|name| (name, c))) @@ -118,11 +154,10 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, } }); - // Internalize everything but the reachable symbols of the current module - let cstrs: Vec = reachable.iter().map(|s| { - CString::new(s.clone()).unwrap() - }).collect(); - let arr: Vec<*const libc::c_char> = cstrs.iter().map(|c| c.as_ptr()).collect(); + // Internalize everything but the exported symbols of the current module + let arr: Vec<*const libc::c_char> = symbol_white_list.iter() + .map(|c| c.as_ptr()) + .collect(); let ptr = arr.as_ptr(); unsafe { llvm::LLVMRustRunRestrictionPass(llmod, diff --git a/src/librustc_trans/back/msvc/registry.rs b/src/librustc_trans/back/msvc/registry.rs index 44b161a757..8242f53896 100644 --- a/src/librustc_trans/back/msvc/registry.rs +++ b/src/librustc_trans/back/msvc/registry.rs @@ -12,7 +12,7 @@ use std::io; use std::ffi::{OsString, OsStr}; use std::os::windows::prelude::*; use std::ptr; -use libc::{c_void, c_long}; +use libc::c_long; pub type DWORD = u32; type LPCWSTR = *const u16; @@ -38,8 +38,6 @@ pub enum __HKEY__ {} pub type HKEY = *mut __HKEY__; pub type PHKEY = *mut HKEY; pub type REGSAM = DWORD; -pub type LPWSTR = *mut u16; -pub type PFILETIME = *mut c_void; #[link(name = "advapi32")] extern "system" { diff --git a/src/librustc_trans/back/rpath.rs b/src/librustc_trans/back/rpath.rs index 8758cdcf9d..ccaa0d4e1b 100644 --- a/src/librustc_trans/back/rpath.rs +++ b/src/librustc_trans/back/rpath.rs @@ -14,9 +14,10 @@ use std::path::{Path, PathBuf}; use std::fs; use rustc::hir::def_id::CrateNum; +use rustc::middle::cstore::LibSource; pub struct RPathConfig<'a> { - pub used_crates: Vec<(CrateNum, Option)>, + pub used_crates: Vec<(CrateNum, LibSource)>, pub out_filename: PathBuf, pub is_like_osx: bool, pub has_rpath: bool, @@ -35,7 +36,7 @@ pub fn get_rpath_flags(config: &mut RPathConfig) -> Vec { debug!("preparing the RPATH!"); let libs = config.used_crates.clone(); - let libs = libs.into_iter().filter_map(|(_, l)| l).collect::>(); + let libs = libs.into_iter().filter_map(|(_, l)| l.option()).collect::>(); let rpaths = get_rpaths(config, &libs[..]); flags.extend_from_slice(&rpaths_to_flags(&rpaths[..])); diff --git a/src/librustc_trans/back/symbol_export.rs b/src/librustc_trans/back/symbol_export.rs new file mode 100644 index 0000000000..eef464eb7f --- /dev/null +++ b/src/librustc_trans/back/symbol_export.rs @@ -0,0 +1,196 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use context::SharedCrateContext; +use monomorphize::Instance; +use symbol_map::SymbolMap; +use util::nodemap::FxHashMap; +use rustc::hir::def_id::{DefId, CrateNum, LOCAL_CRATE}; +use rustc::session::config; +use syntax::attr; +use trans_item::TransItem; + +/// The SymbolExportLevel of a symbols specifies from which kinds of crates +/// the symbol will be exported. 
`C` symbols will be exported from any +/// kind of crate, including cdylibs which export very few things. +/// `Rust` will only be exported if the crate produced is a Rust +/// dylib. +#[derive(Eq, PartialEq, Debug, Copy, Clone)] +pub enum SymbolExportLevel { + C, + Rust, +} + +/// The set of symbols exported from each crate in the crate graph. +pub struct ExportedSymbols { + exports: FxHashMap>, +} + +impl ExportedSymbols { + + pub fn empty() -> ExportedSymbols { + ExportedSymbols { + exports: FxHashMap(), + } + } + + pub fn compute_from<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, + symbol_map: &SymbolMap<'tcx>) + -> ExportedSymbols { + let mut local_crate: Vec<_> = scx + .exported_symbols() + .iter() + .map(|&node_id| { + scx.tcx().map.local_def_id(node_id) + }) + .map(|def_id| { + let name = symbol_for_def_id(scx, def_id, symbol_map); + let export_level = export_level(scx, def_id); + debug!("EXPORTED SYMBOL (local): {} ({:?})", name, export_level); + (name, export_level) + }) + .collect(); + + if scx.sess().entry_fn.borrow().is_some() { + local_crate.push(("main".to_string(), SymbolExportLevel::C)); + } + + if let Some(id) = scx.sess().derive_registrar_fn.get() { + let svh = &scx.link_meta().crate_hash; + let def_id = scx.tcx().map.local_def_id(id); + let idx = def_id.index; + let registrar = scx.sess().generate_derive_registrar_symbol(svh, idx); + local_crate.push((registrar, SymbolExportLevel::C)); + } + + if scx.sess().crate_types.borrow().contains(&config::CrateTypeDylib) { + local_crate.push((scx.metadata_symbol_name(), + SymbolExportLevel::Rust)); + } + + let mut exports = FxHashMap(); + exports.insert(LOCAL_CRATE, local_crate); + + for cnum in scx.sess().cstore.crates() { + debug_assert!(cnum != LOCAL_CRATE); + + if scx.sess().cstore.plugin_registrar_fn(cnum).is_some() || + scx.sess().cstore.derive_registrar_fn(cnum).is_some() { + continue; + } + + let crate_exports = scx + .sess() + .cstore + .exported_symbols(cnum) + .iter() + .map(|&def_id| { + let name = Instance::mono(scx, def_id).symbol_name(scx); + let export_level = export_level(scx, def_id); + debug!("EXPORTED SYMBOL (re-export): {} ({:?})", name, export_level); + (name, export_level) + }) + .collect(); + + exports.insert(cnum, crate_exports); + } + + return ExportedSymbols { + exports: exports + }; + + fn export_level(scx: &SharedCrateContext, + sym_def_id: DefId) + -> SymbolExportLevel { + let attrs = scx.tcx().get_attrs(sym_def_id); + if attr::contains_extern_indicator(scx.sess().diagnostic(), &attrs) { + SymbolExportLevel::C + } else { + SymbolExportLevel::Rust + } + } + } + + pub fn exported_symbols(&self, + cnum: CrateNum) + -> &[(String, SymbolExportLevel)] { + match self.exports.get(&cnum) { + Some(exports) => &exports[..], + None => &[] + } + } + + pub fn for_each_exported_symbol(&self, + cnum: CrateNum, + export_threshold: SymbolExportLevel, + mut f: F) + where F: FnMut(&str, SymbolExportLevel) + { + for &(ref name, export_level) in self.exported_symbols(cnum) { + if is_below_threshold(export_level, export_threshold) { + f(&name[..], export_level) + } + } + } +} + +pub fn crate_export_threshold(crate_type: config::CrateType) + -> SymbolExportLevel { + match crate_type { + config::CrateTypeExecutable | + config::CrateTypeStaticlib | + config::CrateTypeProcMacro | + config::CrateTypeCdylib => SymbolExportLevel::C, + config::CrateTypeRlib | + config::CrateTypeMetadata | + config::CrateTypeDylib => SymbolExportLevel::Rust, + } +} + +pub fn crates_export_threshold(crate_types: &[config::CrateType]) + -> 
SymbolExportLevel { + if crate_types.iter().any(|&crate_type| { + crate_export_threshold(crate_type) == SymbolExportLevel::Rust + }) { + SymbolExportLevel::Rust + } else { + SymbolExportLevel::C + } +} + +pub fn is_below_threshold(level: SymbolExportLevel, + threshold: SymbolExportLevel) + -> bool { + if threshold == SymbolExportLevel::Rust { + // We export everything from Rust dylibs + true + } else { + level == SymbolExportLevel::C + } +} + +fn symbol_for_def_id<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, + def_id: DefId, + symbol_map: &SymbolMap<'tcx>) + -> String { + // Just try to look things up in the symbol map. If nothing's there, we + // recompute. + if let Some(node_id) = scx.tcx().map.as_local_node_id(def_id) { + if let Some(sym) = symbol_map.get(TransItem::Static(node_id)) { + return sym.to_owned(); + } + } + + let instance = Instance::mono(scx, def_id); + + symbol_map.get(TransItem::Fn(instance)) + .map(str::to_owned) + .unwrap_or_else(|| instance.symbol_name(scx)) +} diff --git a/src/librustc_trans/back/symbol_names.rs b/src/librustc_trans/back/symbol_names.rs index bf2a5d76c1..938848054f 100644 --- a/src/librustc_trans/back/symbol_names.rs +++ b/src/librustc_trans/back/symbol_names.rs @@ -99,8 +99,6 @@ use common::SharedCrateContext; use monomorphize::Instance; -use rustc_data_structures::fmt_wrap::FmtWrap; -use rustc_data_structures::blake2b::Blake2bHasher; use rustc::middle::weak_lang_items; use rustc::hir::def_id::LOCAL_CRATE; @@ -113,7 +111,7 @@ use rustc::hir::map::definitions::{DefPath, DefPathData}; use rustc::util::common::record_time; use syntax::attr; -use syntax::parse::token::{self, InternedString}; +use syntax::symbol::{Symbol, InternedString}; fn get_symbol_hash<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, @@ -135,7 +133,7 @@ fn get_symbol_hash<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, let tcx = scx.tcx(); - let mut hasher = ty::util::TypeIdHasher::new(tcx, Blake2bHasher::new(8, &[])); + let mut hasher = ty::util::TypeIdHasher::::new(tcx); record_time(&tcx.sess.perf_stats.symbol_hash_time, || { // the main symbol name is not necessarily unique; hash in the @@ -158,9 +156,7 @@ fn get_symbol_hash<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, }); // 64 bits should be enough to avoid collisions. - let mut hasher = hasher.into_inner(); - let hash_bytes = hasher.finalize(); - format!("h{:x}", FmtWrap(hash_bytes)) + format!("h{:016x}", hasher.finish()) } impl<'a, 'tcx> Instance<'tcx> { @@ -231,7 +227,7 @@ impl<'a, 'tcx> Instance<'tcx> { match key.disambiguated_data.data { DefPathData::TypeNs(_) | DefPathData::ValueNs(_) => { - instance_ty = scx.tcx().lookup_item_type(ty_def_id); + instance_ty = scx.tcx().item_type(ty_def_id); break; } _ => { @@ -248,7 +244,7 @@ impl<'a, 'tcx> Instance<'tcx> { // Erase regions because they may not be deterministic when hashed // and should not matter anyhow. 
- let instance_ty = scx.tcx().erase_regions(&instance_ty.ty); + let instance_ty = scx.tcx().erase_regions(&instance_ty); let hash = get_symbol_hash(scx, &def_path, instance_ty, Some(substs)); @@ -275,7 +271,7 @@ impl ItemPathBuffer for SymbolPathBuffer { } fn push(&mut self, text: &str) { - self.names.push(token::intern(text).as_str()); + self.names.push(Symbol::intern(text).as_str()); } } @@ -288,7 +284,7 @@ pub fn exported_name_from_type_and_prefix<'a, 'tcx>(scx: &SharedCrateContext<'a, krate: LOCAL_CRATE, }; let hash = get_symbol_hash(scx, &empty_def_path, t, None); - let path = [token::intern_and_get_ident(prefix)]; + let path = [Symbol::intern(prefix).as_str()]; mangle(path.iter().cloned(), &hash) } diff --git a/src/librustc_trans/back/write.rs b/src/librustc_trans/back/write.rs index 9012914dee..ffab0bde7a 100644 --- a/src/librustc_trans/back/write.rs +++ b/src/librustc_trans/back/write.rs @@ -10,6 +10,7 @@ use back::lto; use back::link::{get_linker, remove}; +use back::symbol_export::ExportedSymbols; use rustc_incremental::{save_trans_partition, in_incr_comp_dir}; use session::config::{OutputFilenames, OutputTypes, Passes, SomePasses, AllPasses}; use session::Session; @@ -26,7 +27,7 @@ use errors::emitter::Emitter; use syntax_pos::MultiSpan; use context::{is_pie_binary, get_reloc_model}; -use std::ffi::{CStr, CString}; +use std::ffi::CString; use std::fs; use std::path::{Path, PathBuf}; use std::str; @@ -147,7 +148,16 @@ impl Emitter for SharedEmitter { // arise as some of intrinsics are converted into function calls // and nobody provides implementations those functions fn target_feature(sess: &Session) -> String { - format!("{},{}", sess.target.target.options.features, sess.opts.cg.target_feature) + let rustc_features = [ + "crt-static", + ]; + let requested_features = sess.opts.cg.target_feature.split(','); + let llvm_features = requested_features.filter(|f| { + !rustc_features.iter().any(|s| f.contains(s)) + }); + format!("{},{}", + sess.target.target.options.features, + llvm_features.collect::>().join(",")) } fn get_llvm_opt_level(optimize: config::OptLevel) -> llvm::CodeGenOptLevel { @@ -319,7 +329,7 @@ impl ModuleConfig { struct CodegenContext<'a> { // Extra resources used for LTO: (sess, reachable). This will be `None` // when running in a worker thread. - lto_ctxt: Option<(&'a Session, &'a [String])>, + lto_ctxt: Option<(&'a Session, &'a ExportedSymbols)>, // Handler to use for diagnostics produced during codegen. handler: &'a Handler, // LLVM passes added by plugins. 
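
The `target_feature` change above strips features that only rustc itself understands before the feature string is handed to LLVM. A minimal standalone sketch of that filtering (hypothetical function name; only `crt-static` is assumed to be a rustc-only feature) might look like:

```rust
// Sketch: drop rustc-only features from -C target-feature, then join the
// remainder with the target's default feature string for LLVM.
fn llvm_target_feature(target_defaults: &str, requested: &str) -> String {
    // Features rustc consumes itself and must not forward to LLVM.
    const RUSTC_ONLY: &'static [&'static str] = &["crt-static"];

    let forwarded: Vec<&str> = requested
        .split(',')
        .filter(|f| !f.is_empty() && !RUSTC_ONLY.iter().any(|s| f.contains(s)))
        .collect();

    format!("{},{}", target_defaults, forwarded.join(","))
}

fn main() {
    // "+crt-static" stays with rustc; "+sse2" is forwarded to LLVM.
    assert_eq!(
        llvm_target_feature("+neon", "+crt-static,+sse2"),
        "+neon,+sse2"
    );
    println!("ok");
}
```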
@@ -334,9 +344,11 @@ struct CodegenContext<'a> { } impl<'a> CodegenContext<'a> { - fn new_with_session(sess: &'a Session, reachable: &'a [String]) -> CodegenContext<'a> { + fn new_with_session(sess: &'a Session, + exported_symbols: &'a ExportedSymbols) + -> CodegenContext<'a> { CodegenContext { - lto_ctxt: Some((sess, reachable)), + lto_ctxt: Some((sess, exported_symbols)), handler: sess.diagnostic(), plugin_passes: sess.plugin_llvm_passes.borrow().clone(), remark: sess.opts.cg.remark.clone(), @@ -394,21 +406,18 @@ unsafe extern "C" fn diagnostic_handler(info: DiagnosticInfoRef, user: *mut c_vo } llvm::diagnostic::Optimization(opt) => { - let pass_name = str::from_utf8(CStr::from_ptr(opt.pass_name).to_bytes()) - .ok() - .expect("got a non-UTF8 pass name from LLVM"); let enabled = match cgcx.remark { AllPasses => true, - SomePasses(ref v) => v.iter().any(|s| *s == pass_name), + SomePasses(ref v) => v.iter().any(|s| *s == opt.pass_name), }; if enabled { let loc = llvm::debug_loc_to_string(llcx, opt.debug_loc); cgcx.handler.note_without_error(&format!("optimization {} for {} at {}: {}", opt.kind.describe(), - pass_name, + opt.pass_name, if loc.is_empty() { "[unknown]" } else { &*loc }, - llvm::twine_to_string(opt.message))); + opt.message)); } } @@ -510,14 +519,14 @@ unsafe fn optimize_and_codegen(cgcx: &CodegenContext, llvm::LLVMDisposePassManager(mpm); match cgcx.lto_ctxt { - Some((sess, reachable)) if sess.lto() => { + Some((sess, exported_symbols)) if sess.lto() => { time(sess.time_passes(), "all lto passes", || { let temp_no_opt_bc_filename = output_names.temp_path_ext("no-opt.lto.bc", module_name); lto::run(sess, llmod, tm, - reachable, + exported_symbols, &config, &temp_no_opt_bc_filename); }); @@ -747,7 +756,7 @@ pub fn run_passes(sess: &Session, // potentially create hundreds of them). let num_workers = work_items.len() - 1; if num_workers == 1 { - run_work_singlethreaded(sess, &trans.reachable, work_items); + run_work_singlethreaded(sess, &trans.exported_symbols, work_items); } else { run_work_multithreaded(sess, work_items, num_workers); } @@ -991,9 +1000,9 @@ fn execute_work_item(cgcx: &CodegenContext, } fn run_work_singlethreaded(sess: &Session, - reachable: &[String], + exported_symbols: &ExportedSymbols, work_items: Vec) { - let cgcx = CodegenContext::new_with_session(sess, reachable); + let cgcx = CodegenContext::new_with_session(sess, exported_symbols); // Since we're running single-threaded, we can pass the session to // the proc, allowing `optimize_and_codegen` to perform LTO. 
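
For reference, the export-threshold rules introduced in `back/symbol_export.rs` and used throughout the linker and LTO changes above boil down to the following. This is a self-contained sketch using stand-in enums rather than the compiler's `config::CrateType`; the `main` function is only a usage illustration.

```rust
// "C" symbols are exported from every crate type, while "Rust" symbols are
// only exported when one of the requested crate types is a Rust rlib/dylib.
#[derive(Clone, Copy, PartialEq, Debug)]
enum CrateType {
    Executable,
    Staticlib,
    Cdylib,
    ProcMacro,
    Rlib,
    Metadata,
    Dylib,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum SymbolExportLevel {
    C,
    Rust,
}

fn crate_export_threshold(crate_type: CrateType) -> SymbolExportLevel {
    match crate_type {
        CrateType::Executable
        | CrateType::Staticlib
        | CrateType::ProcMacro
        | CrateType::Cdylib => SymbolExportLevel::C,
        CrateType::Rlib | CrateType::Metadata | CrateType::Dylib => SymbolExportLevel::Rust,
    }
}

// The combined threshold for a build is the most permissive one: if any
// requested crate type wants Rust-level symbols, export them all.
fn crates_export_threshold(crate_types: &[CrateType]) -> SymbolExportLevel {
    if crate_types
        .iter()
        .any(|&ct| crate_export_threshold(ct) == SymbolExportLevel::Rust)
    {
        SymbolExportLevel::Rust
    } else {
        SymbolExportLevel::C
    }
}

fn main() {
    // A cdylib alone only needs C-level symbols...
    assert_eq!(crates_export_threshold(&[CrateType::Cdylib]), SymbolExportLevel::C);
    // ...but building a Rust dylib alongside it raises the threshold to Rust.
    assert_eq!(
        crates_export_threshold(&[CrateType::Cdylib, CrateType::Dylib]),
        SymbolExportLevel::Rust
    );
    println!("ok");
}
```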
diff --git a/src/librustc_trans/base.rs b/src/librustc_trans/base.rs index bd15035b8a..c7f21427a0 100644 --- a/src/librustc_trans/base.rs +++ b/src/librustc_trans/base.rs @@ -33,10 +33,10 @@ use super::ModuleTranslation; use assert_module_sources; use back::link; use back::linker::LinkerInfo; +use back::symbol_export::{self, ExportedSymbols}; use llvm::{Linkage, ValueRef, Vector, get_param}; use llvm; -use rustc::hir::def::Def; -use rustc::hir::def_id::DefId; +use rustc::hir::def_id::{DefId, LOCAL_CRATE}; use middle::lang_items::{LangItem, ExchangeMallocFnLangItem, StartFnLangItem}; use rustc::ty::subst::Substs; use rustc::traits; @@ -47,7 +47,7 @@ use rustc::hir::map as hir_map; use rustc::util::common::time; use session::config::{self, NoDebugInfo}; use rustc_incremental::IncrementalHashesMap; -use session::Session; +use session::{self, DataTypeKind, Session}; use abi::{self, Abi, FnType}; use adt; use attributes; @@ -74,17 +74,16 @@ use monomorphize::{self, Instance}; use partitioning::{self, PartitioningStrategy, CodegenUnit}; use symbol_map::SymbolMap; use symbol_names_test; -use trans_item::TransItem; +use trans_item::{TransItem, DefPathBasedNames}; use type_::Type; use type_of; use value::Value; use Disr; -use util::nodemap::{NodeSet, FnvHashMap, FnvHashSet}; +use util::nodemap::{NodeSet, FxHashMap, FxHashSet}; use arena::TypedArena; use libc::c_uint; use std::ffi::{CStr, CString}; -use std::borrow::Cow; use std::cell::{Cell, RefCell}; use std::ptr; use std::rc::Rc; @@ -93,6 +92,7 @@ use std::i32; use syntax_pos::{Span, DUMMY_SP}; use syntax::attr; use rustc::hir; +use rustc::ty::layout::{self, Layout}; use syntax::ast; thread_local! { @@ -294,16 +294,14 @@ pub fn unsized_info<'ccx, 'tcx>(ccx: &CrateContext<'ccx, 'tcx>, let (source, target) = ccx.tcx().struct_lockstep_tails(source, target); match (&source.sty, &target.sty) { (&ty::TyArray(_, len), &ty::TySlice(_)) => C_uint(ccx, len), - (&ty::TyTrait(_), &ty::TyTrait(_)) => { + (&ty::TyDynamic(..), &ty::TyDynamic(..)) => { // For now, upcasts are limited to changes in marker // traits, and hence never actually require an actual // change to the vtable. old_info.expect("unsized_info: missing old info for trait upcast") } - (_, &ty::TyTrait(ref data)) => { - let trait_ref = data.principal.with_self_ty(ccx.tcx(), source); - let trait_ref = ccx.tcx().erase_regions(&trait_ref); - consts::ptrcast(meth::get_vtable(ccx, trait_ref), + (_, &ty::TyDynamic(ref data, ..)) => { + consts::ptrcast(meth::get_vtable(ccx, source, data.principal()), Type::vtable_ptr(ccx)) } _ => bug!("unsized_info: invalid unsizing {:?} -> {:?}", @@ -829,7 +827,9 @@ pub fn alloca(cx: Block, ty: Type, name: &str) -> ValueRef { } } DebugLoc::None.apply(cx.fcx); - Alloca(cx, ty, name) + let result = Alloca(cx, ty, name); + debug!("alloca({:?}) = {:?}", name, result); + result } impl<'blk, 'tcx> FunctionContext<'blk, 'tcx> { @@ -1003,34 +1003,49 @@ impl<'blk, 'tcx> FunctionContext<'blk, 'tcx> { } } -/// Builds an LLVM function out of a source function. -/// -/// If the function closes over its environment a closure will be returned. 
-pub fn trans_closure<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, - llfndecl: ValueRef, - instance: Instance<'tcx>, - sig: &ty::FnSig<'tcx>, - abi: Abi) { - ccx.stats().n_closures.set(ccx.stats().n_closures.get() + 1); - - let _icx = push_ctxt("trans_closure"); - if !ccx.sess().no_landing_pads() { - attributes::emit_uwtable(llfndecl, true); - } +pub fn trans_instance<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, instance: Instance<'tcx>) { + let _s = if ccx.sess().trans_stats() { + let mut instance_name = String::new(); + DefPathBasedNames::new(ccx.tcx(), true, true) + .push_def_path(instance.def, &mut instance_name); + Some(StatRecorder::new(ccx, instance_name)) + } else { + None + }; // this is an info! to allow collecting monomorphization statistics // and to allow finding the last function before LLVM aborts from // release builds. - info!("trans_closure(..., {})", instance); + info!("trans_instance({})", instance); - let fn_ty = FnType::new(ccx, abi, sig, &[]); + let _icx = push_ctxt("trans_instance"); + + let fn_ty = ccx.tcx().item_type(instance.def); + let fn_ty = ccx.tcx().erase_regions(&fn_ty); + let fn_ty = monomorphize::apply_param_substs(ccx.shared(), instance.substs, &fn_ty); + + let ty::BareFnTy { abi, ref sig, .. } = *common::ty_fn_ty(ccx, fn_ty); + let sig = ccx.tcx().erase_late_bound_regions_and_normalize(sig); + + let lldecl = match ccx.instances().borrow().get(&instance) { + Some(&val) => val, + None => bug!("Instance `{:?}` not already declared", instance) + }; + + ccx.stats().n_closures.set(ccx.stats().n_closures.get() + 1); + + if !ccx.sess().no_landing_pads() { + attributes::emit_uwtable(lldecl, true); + } + + let fn_ty = FnType::new(ccx, abi, &sig, &[]); let (arena, fcx): (TypedArena<_>, FunctionContext); arena = TypedArena::new(); fcx = FunctionContext::new(ccx, - llfndecl, + lldecl, fn_ty, - Some((instance, sig, abi)), + Some((instance, &sig, abi)), &arena); if fcx.mir.is_none() { @@ -1040,26 +1055,6 @@ pub fn trans_closure<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, mir::trans_mir(&fcx); } -pub fn trans_instance<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, instance: Instance<'tcx>) { - let _s = StatRecorder::new(ccx, ccx.tcx().item_path_str(instance.def)); - debug!("trans_instance(instance={:?})", instance); - let _icx = push_ctxt("trans_instance"); - - let fn_ty = ccx.tcx().lookup_item_type(instance.def).ty; - let fn_ty = ccx.tcx().erase_regions(&fn_ty); - let fn_ty = monomorphize::apply_param_substs(ccx.shared(), instance.substs, &fn_ty); - - let sig = ccx.tcx().erase_late_bound_regions_and_normalize(fn_ty.fn_sig()); - let abi = fn_ty.fn_abi(); - - let lldecl = match ccx.instances().borrow().get(&instance) { - Some(&val) => val, - None => bug!("Instance `{:?}` not already declared", instance) - }; - - trans_closure(ccx, lldecl, instance, &sig, abi); -} - pub fn trans_ctor_shim<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, def_id: DefId, substs: &'tcx Substs<'tcx>, @@ -1068,7 +1063,7 @@ pub fn trans_ctor_shim<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, attributes::inline(llfndecl, attributes::InlineAttr::Hint); attributes::set_frame_pointer_elimination(ccx, llfndecl); - let ctor_ty = ccx.tcx().lookup_item_type(def_id).ty; + let ctor_ty = ccx.tcx().item_type(def_id); let ctor_ty = monomorphize::apply_param_substs(ccx.shared(), substs, &ctor_ty); let sig = ccx.tcx().erase_late_bound_regions_and_normalize(&ctor_ty.fn_sig()); @@ -1084,8 +1079,8 @@ pub fn trans_ctor_shim<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, let dest_val = adt::MaybeSizedValue::sized(dest); // Can return unsized value let mut 
llarg_idx = fcx.fn_ty.ret.is_indirect() as usize; let mut arg_idx = 0; - for (i, arg_ty) in sig.inputs.into_iter().enumerate() { - let lldestptr = adt::trans_field_ptr(bcx, sig.output, dest_val, Disr::from(disr), i); + for (i, arg_ty) in sig.inputs().iter().enumerate() { + let lldestptr = adt::trans_field_ptr(bcx, sig.output(), dest_val, Disr::from(disr), i); let arg = &fcx.fn_ty.args[arg_idx]; arg_idx += 1; let b = &bcx.build(); @@ -1098,7 +1093,7 @@ pub fn trans_ctor_shim<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, arg.store_fn_arg(b, &mut llarg_idx, lldestptr); } } - adt::trans_set_discr(bcx, sig.output, dest, disr); + adt::trans_set_discr(bcx, sig.output(), dest, disr); } fcx.finish(bcx, DebugLoc::None); @@ -1133,11 +1128,11 @@ pub fn set_link_section(ccx: &CrateContext, llval: ValueRef, attrs: &[ast::Attribute]) { if let Some(sect) = attr::first_attr_value_str_by_name(attrs, "link_section") { - if contains_null(§) { + if contains_null(§.as_str()) { ccx.sess().fatal(&format!("Illegal null byte in link_section value: `{}`", §)); } unsafe { - let buf = CString::new(sect.as_bytes()).unwrap(); + let buf = CString::new(sect.as_str().as_bytes()).unwrap(); llvm::LLVMSetSection(llval, buf.as_ptr()); } } @@ -1249,7 +1244,7 @@ fn contains_null(s: &str) -> bool { } fn write_metadata(cx: &SharedCrateContext, - reachable_ids: &NodeSet) -> Vec { + exported_symbols: &NodeSet) -> Vec { use flate; #[derive(PartialEq, Eq, PartialOrd, Ord)] @@ -1265,7 +1260,8 @@ fn write_metadata(cx: &SharedCrateContext, config::CrateTypeStaticlib | config::CrateTypeCdylib => MetadataKind::None, - config::CrateTypeRlib => MetadataKind::Uncompressed, + config::CrateTypeRlib | + config::CrateTypeMetadata => MetadataKind::Uncompressed, config::CrateTypeDylib | config::CrateTypeProcMacro => MetadataKind::Compressed, @@ -1280,7 +1276,7 @@ fn write_metadata(cx: &SharedCrateContext, let metadata = cstore.encode_metadata(cx.tcx(), cx.export_map(), cx.link_meta(), - reachable_ids); + exported_symbols); if kind == MetadataKind::Uncompressed { return metadata; } @@ -1318,51 +1314,67 @@ fn write_metadata(cx: &SharedCrateContext, fn internalize_symbols<'a, 'tcx>(sess: &Session, ccxs: &CrateContextList<'a, 'tcx>, symbol_map: &SymbolMap<'tcx>, - reachable: &FnvHashSet<&str>) { + exported_symbols: &ExportedSymbols) { + let export_threshold = + symbol_export::crates_export_threshold(&sess.crate_types.borrow()[..]); + + let exported_symbols = exported_symbols + .exported_symbols(LOCAL_CRATE) + .iter() + .filter(|&&(_, export_level)| { + symbol_export::is_below_threshold(export_level, export_threshold) + }) + .map(|&(ref name, _)| &name[..]) + .collect::>(); + let scx = ccxs.shared(); let tcx = scx.tcx(); - // In incr. comp. mode, we can't necessarily see all refs since we - // don't generate LLVM IR for reused modules, so skip this - // step. Later we should get smarter. - if sess.opts.debugging_opts.incremental.is_some() { - return; - } + let incr_comp = sess.opts.debugging_opts.incremental.is_some(); // 'unsafe' because we are holding on to CStr's from the LLVM module within // this block. unsafe { - let mut referenced_somewhere = FnvHashSet(); + let mut referenced_somewhere = FxHashSet(); // Collect all symbols that need to stay externally visible because they - // are referenced via a declaration in some other codegen unit. 
- for ccx in ccxs.iter_need_trans() { - for val in iter_globals(ccx.llmod()).chain(iter_functions(ccx.llmod())) { - let linkage = llvm::LLVMRustGetLinkage(val); - // We only care about external declarations (not definitions) - // and available_externally definitions. - let is_available_externally = linkage == llvm::Linkage::AvailableExternallyLinkage; - let is_decl = llvm::LLVMIsDeclaration(val) != 0; + // are referenced via a declaration in some other codegen unit. In + // incremental compilation, we don't need to collect. See below for more + // information. + if !incr_comp { + for ccx in ccxs.iter_need_trans() { + for val in iter_globals(ccx.llmod()).chain(iter_functions(ccx.llmod())) { + let linkage = llvm::LLVMRustGetLinkage(val); + // We only care about external declarations (not definitions) + // and available_externally definitions. + let is_available_externally = + linkage == llvm::Linkage::AvailableExternallyLinkage; + let is_decl = llvm::LLVMIsDeclaration(val) == llvm::True; - if is_decl || is_available_externally { - let symbol_name = CStr::from_ptr(llvm::LLVMGetValueName(val)); - referenced_somewhere.insert(symbol_name); + if is_decl || is_available_externally { + let symbol_name = CStr::from_ptr(llvm::LLVMGetValueName(val)); + referenced_somewhere.insert(symbol_name); + } } } } // Also collect all symbols for which we cannot adjust linkage, because - // it is fixed by some directive in the source code (e.g. #[no_mangle]). - let linkage_fixed_explicitly: FnvHashSet<_> = scx - .translation_items() - .borrow() - .iter() - .cloned() - .filter(|trans_item|{ - trans_item.explicit_linkage(tcx).is_some() - }) - .map(|trans_item| symbol_map.get_or_compute(scx, trans_item)) - .collect(); + // it is fixed by some directive in the source code. + let (locally_defined_symbols, linkage_fixed_explicitly) = { + let mut locally_defined_symbols = FxHashSet(); + let mut linkage_fixed_explicitly = FxHashSet(); + + for trans_item in scx.translation_items().borrow().iter() { + let symbol_name = symbol_map.get_or_compute(scx, *trans_item); + if trans_item.explicit_linkage(tcx).is_some() { + linkage_fixed_explicitly.insert(symbol_name.clone()); + } + locally_defined_symbols.insert(symbol_name); + } + + (locally_defined_symbols, linkage_fixed_explicitly) + }; // Examine each external definition. If the definition is not used in // any other compilation unit, and is not reachable from other crates, @@ -1374,23 +1386,46 @@ fn internalize_symbols<'a, 'tcx>(sess: &Session, let is_externally_visible = (linkage == llvm::Linkage::ExternalLinkage) || (linkage == llvm::Linkage::LinkOnceODRLinkage) || (linkage == llvm::Linkage::WeakODRLinkage); - let is_definition = llvm::LLVMIsDeclaration(val) == 0; - // If this is a definition (as opposed to just a declaration) - // and externally visible, check if we can internalize it - if is_definition && is_externally_visible { - let name_cstr = CStr::from_ptr(llvm::LLVMGetValueName(val)); - let name_str = name_cstr.to_str().unwrap(); - let name_cow = Cow::Borrowed(name_str); + if !is_externally_visible { + // This symbol is not visible outside of its codegen unit, + // so there is nothing to do for it. 
+ continue; + } - let is_referenced_somewhere = referenced_somewhere.contains(&name_cstr); - let is_reachable = reachable.contains(&name_str); - let has_fixed_linkage = linkage_fixed_explicitly.contains(&name_cow); + let name_cstr = CStr::from_ptr(llvm::LLVMGetValueName(val)); + let name_str = name_cstr.to_str().unwrap(); - if !is_referenced_somewhere && !is_reachable && !has_fixed_linkage { - llvm::LLVMRustSetLinkage(val, llvm::Linkage::InternalLinkage); - llvm::LLVMSetDLLStorageClass(val, - llvm::DLLStorageClass::Default); + if exported_symbols.contains(&name_str) { + // This symbol is explicitly exported, so we can't + // mark it as internal or hidden. + continue; + } + + let is_declaration = llvm::LLVMIsDeclaration(val) == llvm::True; + + if is_declaration { + if locally_defined_symbols.contains(name_str) { + // Only mark declarations from the current crate as hidden. + // Otherwise we would mark things as hidden that are + // imported from other crates or native libraries. + llvm::LLVMRustSetVisibility(val, llvm::Visibility::Hidden); + } + } else { + let has_fixed_linkage = linkage_fixed_explicitly.contains(name_str); + + if !has_fixed_linkage { + // In incremental compilation mode, we can't be sure that + // we saw all references because we don't know what's in + // cached compilation units, so we always assume that the + // given item has been referenced. + if incr_comp || referenced_somewhere.contains(&name_cstr) { + llvm::LLVMRustSetVisibility(val, llvm::Visibility::Hidden); + } else { + llvm::LLVMRustSetLinkage(val, llvm::Linkage::InternalLinkage); + } + + llvm::LLVMSetDLLStorageClass(val, llvm::DLLStorageClass::Default); llvm::UnsetComdat(val); } } @@ -1486,7 +1521,7 @@ fn iter_functions(llmod: llvm::ModuleRef) -> ValueIter { /// /// This list is later used by linkers to determine the set of symbols needed to /// be exposed from a dynamic library and it's also encoded into the metadata. -pub fn filter_reachable_ids(tcx: TyCtxt, reachable: NodeSet) -> NodeSet { +pub fn find_exported_symbols(tcx: TyCtxt, reachable: NodeSet) -> NodeSet { reachable.into_iter().filter(|&id| { // Next, we want to ignore some FFI functions that are not exposed from // this crate. Reachable FFI functions can be lumped into two @@ -1503,7 +1538,8 @@ pub fn filter_reachable_ids(tcx: TyCtxt, reachable: NodeSet) -> NodeSet { // let it through if it's included statically. match tcx.map.get(id) { hir_map::NodeForeignItem(..) => { - tcx.sess.cstore.is_statically_included_foreign_item(id) + let def_id = tcx.map.local_def_id(id); + tcx.sess.cstore.is_statically_included_foreign_item(def_id) } // Only consider nodes that actually have exported symbols. @@ -1514,7 +1550,7 @@ pub fn filter_reachable_ids(tcx: TyCtxt, reachable: NodeSet) -> NodeSet { hir_map::NodeImplItem(&hir::ImplItem { node: hir::ImplItemKind::Method(..), .. }) => { let def_id = tcx.map.local_def_id(id); - let generics = tcx.lookup_generics(def_id); + let generics = tcx.item_generics(def_id); let attributes = tcx.get_attrs(def_id); (generics.parent_types == 0 && generics.types.is_empty()) && // Functions marked with #[inline] are only ever translated @@ -1540,7 +1576,7 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let krate = tcx.map.krate(); let ty::CrateAnalysis { export_map, reachable, name, .. 
} = analysis; - let reachable = filter_reachable_ids(tcx, reachable); + let exported_symbols = find_exported_symbols(tcx, reachable); let check_overflow = if let Some(v) = tcx.sess.opts.debugging_opts.force_overflow_checks { v @@ -1548,16 +1584,16 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, tcx.sess.opts.debug_assertions }; - let link_meta = link::build_link_meta(incremental_hashes_map, name); + let link_meta = link::build_link_meta(incremental_hashes_map, &name); let shared_ccx = SharedCrateContext::new(tcx, export_map, link_meta.clone(), - reachable, + exported_symbols, check_overflow); // Translate the metadata. let metadata = time(tcx.sess.time_passes(), "write metadata", || { - write_metadata(&shared_ccx, shared_ccx.reachable()) + write_metadata(&shared_ccx, shared_ccx.exported_symbols()) }); let metadata_module = ModuleTranslation { @@ -1576,7 +1612,7 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let symbol_map = Rc::new(symbol_map); - let previous_work_products = trans_reuse_previous_work_products(tcx, + let previous_work_products = trans_reuse_previous_work_products(&shared_ccx, &codegen_units, &symbol_map); @@ -1596,7 +1632,9 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, ModuleTranslation { name: String::from(ccx.codegen_unit().name()), - symbol_name_hash: ccx.codegen_unit().compute_symbol_name_hash(tcx, &symbol_map), + symbol_name_hash: ccx.codegen_unit() + .compute_symbol_name_hash(&shared_ccx, + &symbol_map), source: source, } }) @@ -1605,14 +1643,15 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, assert_module_sources::assert_module_sources(tcx, &modules); // Skip crate items and just output metadata in -Z no-trans mode. - if tcx.sess.opts.debugging_opts.no_trans { - let linker_info = LinkerInfo::new(&shared_ccx, &[]); + if tcx.sess.opts.debugging_opts.no_trans || + tcx.sess.crate_types.borrow().iter().all(|ct| ct == &config::CrateTypeMetadata) { + let linker_info = LinkerInfo::new(&shared_ccx, &ExportedSymbols::empty()); return CrateTranslation { modules: modules, metadata_module: metadata_module, link: link_meta, metadata: metadata, - reachable: vec![], + exported_symbols: ExportedSymbols::empty(), no_builtins: no_builtins, linker_info: linker_info, windows_subsystem: None, @@ -1692,64 +1731,29 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } let sess = shared_ccx.sess(); - let mut reachable_symbols = shared_ccx.reachable().iter().map(|&id| { - let def_id = shared_ccx.tcx().map.local_def_id(id); - symbol_for_def_id(def_id, &shared_ccx, &symbol_map) - }).collect::>(); - if sess.entry_fn.borrow().is_some() { - reachable_symbols.push("main".to_string()); - } - - if sess.crate_types.borrow().contains(&config::CrateTypeDylib) { - reachable_symbols.push(shared_ccx.metadata_symbol_name()); - } - - // For the purposes of LTO or when creating a cdylib, we add to the - // reachable set all of the upstream reachable extern fns. These functions - // are all part of the public ABI of the final product, so we need to - // preserve them. - // - // Note that this happens even if LTO isn't requested or we're not creating - // a cdylib. In those cases, though, we're not even reading the - // `reachable_symbols` list later on so it should be ok. 
- for cnum in sess.cstore.crates() { - let syms = sess.cstore.reachable_ids(cnum); - reachable_symbols.extend(syms.into_iter().filter(|&def_id| { - let applicable = match sess.cstore.describe_def(def_id) { - Some(Def::Static(..)) => true, - Some(Def::Fn(_)) => { - shared_ccx.tcx().lookup_generics(def_id).types.is_empty() - } - _ => false - }; - - if applicable { - let attrs = shared_ccx.tcx().get_attrs(def_id); - attr::contains_extern_indicator(sess.diagnostic(), &attrs) - } else { - false - } - }).map(|did| { - symbol_for_def_id(did, &shared_ccx, &symbol_map) - })); - } + let exported_symbols = ExportedSymbols::compute_from(&shared_ccx, + &symbol_map); + // Now that we have all symbols that are exported from the CGUs of this + // crate, we can run the `internalize_symbols` pass. time(shared_ccx.sess().time_passes(), "internalize symbols", || { internalize_symbols(sess, &crate_context_list, &symbol_map, - &reachable_symbols.iter() - .map(|s| &s[..]) - .collect()) + &exported_symbols); }); + if tcx.sess.opts.debugging_opts.print_type_sizes { + gather_type_sizes(tcx); + } + if sess.target.target.options.is_like_msvc && sess.crate_types.borrow().iter().any(|ct| *ct == config::CrateTypeRlib) { create_imps(&crate_context_list); } - let linker_info = LinkerInfo::new(&shared_ccx, &reachable_symbols); + let linker_info = LinkerInfo::new(&shared_ccx, &exported_symbols); let subsystem = attr::first_attr_value_str_by_name(&krate.attrs, "windows_subsystem"); @@ -1767,16 +1771,203 @@ pub fn trans_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, metadata_module: metadata_module, link: link_meta, metadata: metadata, - reachable: reachable_symbols, + exported_symbols: exported_symbols, no_builtins: no_builtins, linker_info: linker_info, windows_subsystem: windows_subsystem, } } +fn gather_type_sizes<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { + let layout_cache = tcx.layout_cache.borrow(); + for (ty, layout) in layout_cache.iter() { + + // (delay format until we actually need it) + let record = |kind, opt_discr_size, variants| { + let type_desc = format!("{:?}", ty); + let overall_size = layout.size(&tcx.data_layout); + let align = layout.align(&tcx.data_layout); + tcx.sess.code_stats.borrow_mut().record_type_size(kind, + type_desc, + align, + overall_size, + opt_discr_size, + variants); + }; + + let (adt_def, substs) = match ty.sty { + ty::TyAdt(ref adt_def, substs) => { + debug!("print-type-size t: `{:?}` process adt", ty); + (adt_def, substs) + } + + ty::TyClosure(..) 
=> { + debug!("print-type-size t: `{:?}` record closure", ty); + record(DataTypeKind::Closure, None, vec![]); + continue; + } + + _ => { + debug!("print-type-size t: `{:?}` skip non-nominal", ty); + continue; + } + }; + + let adt_kind = adt_def.adt_kind(); + + let build_field_info = |(field_name, field_ty): (ast::Name, Ty), offset: &layout::Size| { + match layout_cache.get(&field_ty) { + None => bug!("no layout found for field {} type: `{:?}`", field_name, field_ty), + Some(field_layout) => { + session::FieldInfo { + name: field_name.to_string(), + offset: offset.bytes(), + size: field_layout.size(&tcx.data_layout).bytes(), + align: field_layout.align(&tcx.data_layout).abi(), + } + } + } + }; + + let build_primitive_info = |name: ast::Name, value: &layout::Primitive| { + session::VariantInfo { + name: Some(name.to_string()), + kind: session::SizeKind::Exact, + align: value.align(&tcx.data_layout).abi(), + size: value.size(&tcx.data_layout).bytes(), + fields: vec![], + } + }; + + enum Fields<'a> { + WithDiscrim(&'a layout::Struct), + NoDiscrim(&'a layout::Struct), + } + + let build_variant_info = |n: Option, flds: &[(ast::Name, Ty)], layout: Fields| { + let (s, field_offsets) = match layout { + Fields::WithDiscrim(s) => (s, &s.offsets[1..]), + Fields::NoDiscrim(s) => (s, &s.offsets[0..]), + }; + let field_info: Vec<_> = flds.iter() + .zip(field_offsets.iter()) + .map(|(&field_name_ty, offset)| build_field_info(field_name_ty, offset)) + .collect(); + + session::VariantInfo { + name: n.map(|n|n.to_string()), + kind: if s.sized { + session::SizeKind::Exact + } else { + session::SizeKind::Min + }, + align: s.align.abi(), + size: s.min_size.bytes(), + fields: field_info, + } + }; + + match **layout { + Layout::StructWrappedNullablePointer { nonnull: ref variant_layout, + nndiscr, + discrfield: _, + discrfield_source: _ } => { + debug!("print-type-size t: `{:?}` adt struct-wrapped nullable nndiscr {} is {:?}", + ty, nndiscr, variant_layout); + let variant_def = &adt_def.variants[nndiscr as usize]; + let fields: Vec<_> = variant_def.fields.iter() + .map(|field_def| (field_def.name, field_def.ty(tcx, substs))) + .collect(); + record(adt_kind.into(), + None, + vec![build_variant_info(Some(variant_def.name), + &fields, + Fields::NoDiscrim(variant_layout))]); + } + Layout::RawNullablePointer { nndiscr, value } => { + debug!("print-type-size t: `{:?}` adt raw nullable nndiscr {} is {:?}", + ty, nndiscr, value); + let variant_def = &adt_def.variants[nndiscr as usize]; + record(adt_kind.into(), None, + vec![build_primitive_info(variant_def.name, &value)]); + } + Layout::Univariant { variant: ref variant_layout, non_zero: _ } => { + let variant_names = || { + adt_def.variants.iter().map(|v|format!("{}", v.name)).collect::>() + }; + debug!("print-type-size t: `{:?}` adt univariant {:?} variants: {:?}", + ty, variant_layout, variant_names()); + assert!(adt_def.variants.len() <= 1, + "univariant with variants {:?}", variant_names()); + if adt_def.variants.len() == 1 { + let variant_def = &adt_def.variants[0]; + let fields: Vec<_> = variant_def.fields.iter() + .map(|field_def| (field_def.name, field_def.ty(tcx, substs))) + .collect(); + record(adt_kind.into(), + None, + vec![build_variant_info(Some(variant_def.name), + &fields, + Fields::NoDiscrim(variant_layout))]); + } else { + // (This case arises for *empty* enums; so give it + // zero variants.) + record(adt_kind.into(), None, vec![]); + } + } + + Layout::General { ref variants, discr, .. 
} => { + debug!("print-type-size t: `{:?}` adt general variants def {} layouts {} {:?}", + ty, adt_def.variants.len(), variants.len(), variants); + let variant_infos: Vec<_> = adt_def.variants.iter() + .zip(variants.iter()) + .map(|(variant_def, variant_layout)| { + let fields: Vec<_> = variant_def.fields.iter() + .map(|field_def| (field_def.name, field_def.ty(tcx, substs))) + .collect(); + build_variant_info(Some(variant_def.name), + &fields, + Fields::WithDiscrim(variant_layout)) + }) + .collect(); + record(adt_kind.into(), Some(discr.size()), variant_infos); + } + + Layout::UntaggedUnion { ref variants } => { + debug!("print-type-size t: `{:?}` adt union variants {:?}", + ty, variants); + // layout does not currently store info about each + // variant... + record(adt_kind.into(), None, Vec::new()); + } + + Layout::CEnum { discr, .. } => { + debug!("print-type-size t: `{:?}` adt c-like enum", ty); + let variant_infos: Vec<_> = adt_def.variants.iter() + .map(|variant_def| { + build_primitive_info(variant_def.name, + &layout::Primitive::Int(discr)) + }) + .collect(); + record(adt_kind.into(), Some(discr.size()), variant_infos); + } + + // other cases provide little interesting (i.e. adjustable + // via representation tweaks) size info beyond total size. + Layout::Scalar { .. } | + Layout::Vector { .. } | + Layout::Array { .. } | + Layout::FatPointer { .. } => { + debug!("print-type-size t: `{:?}` adt other", ty); + record(adt_kind.into(), None, Vec::new()) + } + } + } +} + /// For each CGU, identify if we can reuse an existing object file (or /// maybe other context). -fn trans_reuse_previous_work_products(tcx: TyCtxt, +fn trans_reuse_previous_work_products(scx: &SharedCrateContext, codegen_units: &[CodegenUnit], symbol_map: &SymbolMap) -> Vec> { @@ -1786,15 +1977,20 @@ fn trans_reuse_previous_work_products(tcx: TyCtxt, .map(|cgu| { let id = cgu.work_product_id(); - let hash = cgu.compute_symbol_name_hash(tcx, symbol_map); + let hash = cgu.compute_symbol_name_hash(scx, symbol_map); debug!("trans_reuse_previous_work_products: id={:?} hash={}", id, hash); - if let Some(work_product) = tcx.dep_graph.previous_work_product(&id) { + if let Some(work_product) = scx.dep_graph().previous_work_product(&id) { if work_product.input_hash == hash { debug!("trans_reuse_previous_work_products: reusing {:?}", work_product); return Some(work_product); } else { + if scx.sess().opts.debugging_opts.incremental_info { + println!("incremental: CGU `{}` invalidated because of \ + changed partitioning hash.", + cgu.name()); + } debug!("trans_reuse_previous_work_products: \ not reusing {:?} because hash changed to {:?}", work_product, hash); @@ -1862,7 +2058,7 @@ fn collect_and_partition_translation_items<'a, 'tcx>(scx: &SharedCrateContext<'a } if scx.sess().opts.debugging_opts.print_trans_items.is_some() { - let mut item_to_cgus = FnvHashMap(); + let mut item_to_cgus = FxHashMap(); for cgu in &codegen_units { for (&trans_item, &linkage) in cgu.items() { @@ -1916,22 +2112,3 @@ fn collect_and_partition_translation_items<'a, 'tcx>(scx: &SharedCrateContext<'a (codegen_units, symbol_map) } - -fn symbol_for_def_id<'a, 'tcx>(def_id: DefId, - scx: &SharedCrateContext<'a, 'tcx>, - symbol_map: &SymbolMap<'tcx>) - -> String { - // Just try to look things up in the symbol map. If nothing's there, we - // recompute. 
- if let Some(node_id) = scx.tcx().map.as_local_node_id(def_id) { - if let Some(sym) = symbol_map.get(TransItem::Static(node_id)) { - return sym.to_owned(); - } - } - - let instance = Instance::mono(scx, def_id); - - symbol_map.get(TransItem::Fn(instance)) - .map(str::to_owned) - .unwrap_or_else(|| instance.symbol_name(scx)) -} diff --git a/src/librustc_trans/builder.rs b/src/librustc_trans/builder.rs index 8556e95903..0480bb82a9 100644 --- a/src/librustc_trans/builder.rs +++ b/src/librustc_trans/builder.rs @@ -19,7 +19,7 @@ use common::*; use machine::llalign_of_pref; use type_::Type; use value::Value; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; use libc::{c_uint, c_char}; use std::borrow::Cow; @@ -62,7 +62,7 @@ impl<'a, 'tcx> Builder<'a, 'tcx> { // Build version of path with cycles removed. // Pass 1: scan table mapping str -> rightmost pos. - let mut mm = FnvHashMap(); + let mut mm = FxHashMap(); let len = v.len(); let mut i = 0; while i < len { diff --git a/src/librustc_trans/cabi_asmjs.rs b/src/librustc_trans/cabi_asmjs.rs index 3cbc378ab0..f410627400 100644 --- a/src/librustc_trans/cabi_asmjs.rs +++ b/src/librustc_trans/cabi_asmjs.rs @@ -10,8 +10,8 @@ #![allow(non_upper_case_globals)] -use llvm::{Struct, Array, Attribute}; -use abi::{FnType, ArgType}; +use llvm::{Struct, Array}; +use abi::{FnType, ArgType, ArgAttribute}; use context::CrateContext; // Data layout: e-p:32:32-i64:64-v128:32:128-n32-S128 @@ -39,7 +39,7 @@ fn classify_ret_ty(ccx: &CrateContext, ret: &mut ArgType) { fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType) { if arg.ty.is_aggregate() { arg.make_indirect(ccx); - arg.attrs.set(Attribute::ByVal); + arg.attrs.set(ArgAttribute::ByVal); } } diff --git a/src/librustc_trans/cabi_msp430.rs b/src/librustc_trans/cabi_msp430.rs new file mode 100644 index 0000000000..aa90bb7ab7 --- /dev/null +++ b/src/librustc_trans/cabi_msp430.rs @@ -0,0 +1,59 @@ +// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Reference: MSP430 Embedded Application Binary Interface +// http://www.ti.com/lit/an/slaa534/slaa534.pdf + +#![allow(non_upper_case_globals)] + +use llvm::Struct; + +use abi::{self, ArgType, FnType}; +use context::CrateContext; +use type_::Type; + +fn ty_size(ty: Type) -> usize { + abi::ty_size(ty, 2) +} + +// 3.5 Structures or Unions Passed and Returned by Reference +// +// "Structures (including classes) and unions larger than 32 bits are passed and +// returned by reference. To pass a structure or union by reference, the caller +// places its address in the appropriate location: either in a register or on +// the stack, according to its position in the argument list. 
(..)" +fn classify_ret_ty(ccx: &CrateContext, ret: &mut ArgType) { + if ret.ty.kind() == Struct && ty_size(ret.ty) > 32 { + ret.make_indirect(ccx); + } else { + ret.extend_integer_width_to(16); + } +} + +fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType) { + if arg.ty.kind() == Struct && ty_size(arg.ty) > 32 { + arg.make_indirect(ccx); + } else { + arg.extend_integer_width_to(16); + } +} + +pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) { + if !fty.ret.is_ignore() { + classify_ret_ty(ccx, &mut fty.ret); + } + + for arg in &mut fty.args { + if arg.is_ignore() { + continue; + } + classify_arg_ty(ccx, arg); + } +} diff --git a/src/librustc_trans/cabi_x86.rs b/src/librustc_trans/cabi_x86.rs index b52231fa6b..ce85234f20 100644 --- a/src/librustc_trans/cabi_x86.rs +++ b/src/librustc_trans/cabi_x86.rs @@ -9,7 +9,7 @@ // except according to those terms. use llvm::*; -use abi::FnType; +use abi::{ArgAttribute, FnType}; use type_::Type; use super::common::*; use super::machine::*; @@ -25,7 +25,8 @@ pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) { // http://www.angelcode.com/dev/callconv/callconv.html // Clang's ABI handling is in lib/CodeGen/TargetInfo.cpp let t = &ccx.sess().target.target; - if t.options.is_like_osx || t.options.is_like_windows { + if t.options.is_like_osx || t.options.is_like_windows + || t.options.is_like_openbsd { match llsize_of_alloc(ccx, fty.ret.ty) { 1 => fty.ret.cast = Some(Type::i8(ccx)), 2 => fty.ret.cast = Some(Type::i16(ccx)), @@ -45,7 +46,7 @@ pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) { if arg.is_ignore() { continue; } if arg.ty.kind() == Struct { arg.make_indirect(ccx); - arg.attrs.set(Attribute::ByVal); + arg.attrs.set(ArgAttribute::ByVal); } else { arg.extend_integer_width_to(32); } diff --git a/src/librustc_trans/cabi_x86_64.rs b/src/librustc_trans/cabi_x86_64.rs index 33990148c8..7f2fdbf000 100644 --- a/src/librustc_trans/cabi_x86_64.rs +++ b/src/librustc_trans/cabi_x86_64.rs @@ -15,8 +15,8 @@ use self::RegClass::*; use llvm::{Integer, Pointer, Float, Double}; -use llvm::{Struct, Array, Attribute, Vector}; -use abi::{self, ArgType, FnType}; +use llvm::{Struct, Array, Vector}; +use abi::{self, ArgType, ArgAttribute, FnType}; use context::CrateContext; use type_::Type; @@ -334,7 +334,7 @@ pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) { fn x86_64_ty(ccx: &CrateContext, arg: &mut ArgType, is_mem_cls: F, - ind_attr: Option) + ind_attr: Option) where F: FnOnce(&[RegClass]) -> bool { if !arg.ty.is_reg_ty() { @@ -384,7 +384,7 @@ pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) { sse_regs -= needed_sse; } in_mem - }, Some(Attribute::ByVal)); + }, Some(ArgAttribute::ByVal)); // An integer, pointer, double or float parameter // thus the above closure passed to `x86_64_ty` won't diff --git a/src/librustc_trans/callee.rs b/src/librustc_trans/callee.rs index ffb13a833a..d7e9f1372e 100644 --- a/src/librustc_trans/callee.rs +++ b/src/librustc_trans/callee.rs @@ -26,11 +26,11 @@ use attributes; use base; use base::*; use build::*; -use closure; use common::{self, Block, Result, CrateContext, FunctionContext, SharedCrateContext}; use consts; use debuginfo::DebugLoc; use declare; +use value::Value; use meth; use monomorphize::{self, Instance}; use trans_item::TransItem; @@ -38,6 +38,7 @@ use type_of; use Disr; use rustc::ty::{self, Ty, TypeFoldable}; use rustc::hir; +use std::iter; use syntax_pos::DUMMY_SP; @@ -147,11 +148,12 @@ impl<'tcx> Callee<'tcx> { // after passing through fulfill_obligation 
let trait_closure_kind = tcx.lang_items.fn_trait_kind(trait_id).unwrap(); let instance = Instance::new(def_id, substs); - let llfn = closure::trans_closure_method(ccx, - vtable_closure.closure_def_id, - vtable_closure.substs, - instance, - trait_closure_kind); + let llfn = trans_closure_method( + ccx, + vtable_closure.closure_def_id, + vtable_closure.substs, + instance, + trait_closure_kind); let method_ty = def_ty(ccx.shared(), def_id, substs); Callee::ptr(llfn, method_ty) @@ -246,10 +248,182 @@ fn def_ty<'a, 'tcx>(shared: &SharedCrateContext<'a, 'tcx>, def_id: DefId, substs: &'tcx Substs<'tcx>) -> Ty<'tcx> { - let ty = shared.tcx().lookup_item_type(def_id).ty; + let ty = shared.tcx().item_type(def_id); monomorphize::apply_param_substs(shared, substs, &ty) } + +fn trans_closure_method<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>, + def_id: DefId, + substs: ty::ClosureSubsts<'tcx>, + method_instance: Instance<'tcx>, + trait_closure_kind: ty::ClosureKind) + -> ValueRef +{ + // If this is a closure, redirect to it. + let (llfn, _) = get_fn(ccx, def_id, substs.substs); + + // If the closure is a Fn closure, but a FnOnce is needed (etc), + // then adapt the self type + let llfn_closure_kind = ccx.tcx().closure_kind(def_id); + + let _icx = push_ctxt("trans_closure_adapter_shim"); + + debug!("trans_closure_adapter_shim(llfn_closure_kind={:?}, \ + trait_closure_kind={:?}, llfn={:?})", + llfn_closure_kind, trait_closure_kind, Value(llfn)); + + match (llfn_closure_kind, trait_closure_kind) { + (ty::ClosureKind::Fn, ty::ClosureKind::Fn) | + (ty::ClosureKind::FnMut, ty::ClosureKind::FnMut) | + (ty::ClosureKind::FnOnce, ty::ClosureKind::FnOnce) => { + // No adapter needed. + llfn + } + (ty::ClosureKind::Fn, ty::ClosureKind::FnMut) => { + // The closure fn `llfn` is a `fn(&self, ...)`. We want a + // `fn(&mut self, ...)`. In fact, at trans time, these are + // basically the same thing, so we can just return llfn. + llfn + } + (ty::ClosureKind::Fn, ty::ClosureKind::FnOnce) | + (ty::ClosureKind::FnMut, ty::ClosureKind::FnOnce) => { + // The closure fn `llfn` is a `fn(&self, ...)` or `fn(&mut + // self, ...)`. We want a `fn(self, ...)`. We can produce + // this by doing something like: + // + // fn call_once(self, ...) { call_mut(&self, ...) } + // fn call_once(mut self, ...) { call_mut(&mut self, ...) } + // + // These are both the same at trans time. + trans_fn_once_adapter_shim(ccx, def_id, substs, method_instance, llfn) + } + _ => { + bug!("trans_closure_adapter_shim: cannot convert {:?} to {:?}", + llfn_closure_kind, + trait_closure_kind); + } + } +} + +fn trans_fn_once_adapter_shim<'a, 'tcx>( + ccx: &'a CrateContext<'a, 'tcx>, + def_id: DefId, + substs: ty::ClosureSubsts<'tcx>, + method_instance: Instance<'tcx>, + llreffn: ValueRef) + -> ValueRef +{ + if let Some(&llfn) = ccx.instances().borrow().get(&method_instance) { + return llfn; + } + + debug!("trans_fn_once_adapter_shim(def_id={:?}, substs={:?}, llreffn={:?})", + def_id, substs, Value(llreffn)); + + let tcx = ccx.tcx(); + + // Find a version of the closure type. Substitute static for the + // region since it doesn't really matter. + let closure_ty = tcx.mk_closure_from_closure_substs(def_id, substs); + let ref_closure_ty = tcx.mk_imm_ref(tcx.mk_region(ty::ReErased), closure_ty); + + // Make a version with the type of by-ref closure. 
+ let ty::ClosureTy { unsafety, abi, mut sig } = tcx.closure_type(def_id, substs); + sig.0 = tcx.mk_fn_sig( + iter::once(ref_closure_ty).chain(sig.0.inputs().iter().cloned()), + sig.0.output(), + sig.0.variadic + ); + let llref_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { + unsafety: unsafety, + abi: abi, + sig: sig.clone() + })); + debug!("trans_fn_once_adapter_shim: llref_fn_ty={:?}", + llref_fn_ty); + + + // Make a version of the closure type with the same arguments, but + // with argument #0 being by value. + assert_eq!(abi, Abi::RustCall); + sig.0 = tcx.mk_fn_sig( + iter::once(closure_ty).chain(sig.0.inputs().iter().skip(1).cloned()), + sig.0.output(), + sig.0.variadic + ); + + let sig = tcx.erase_late_bound_regions_and_normalize(&sig); + let fn_ty = FnType::new(ccx, abi, &sig, &[]); + + let llonce_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { + unsafety: unsafety, + abi: abi, + sig: ty::Binder(sig) + })); + + // Create the by-value helper. + let function_name = method_instance.symbol_name(ccx.shared()); + let lloncefn = declare::define_internal_fn(ccx, &function_name, llonce_fn_ty); + attributes::set_frame_pointer_elimination(ccx, lloncefn); + + let (block_arena, fcx): (TypedArena<_>, FunctionContext); + block_arena = TypedArena::new(); + fcx = FunctionContext::new(ccx, lloncefn, fn_ty, None, &block_arena); + let mut bcx = fcx.init(false); + + + // the first argument (`self`) will be the (by value) closure env. + + let mut llargs = get_params(fcx.llfn); + let mut self_idx = fcx.fn_ty.ret.is_indirect() as usize; + let env_arg = &fcx.fn_ty.args[0]; + let llenv = if env_arg.is_indirect() { + llargs[self_idx] + } else { + let scratch = alloc_ty(bcx, closure_ty, "self"); + let mut llarg_idx = self_idx; + env_arg.store_fn_arg(&bcx.build(), &mut llarg_idx, scratch); + scratch + }; + + debug!("trans_fn_once_adapter_shim: env={:?}", Value(llenv)); + // Adjust llargs such that llargs[self_idx..] has the call arguments. + // For zero-sized closures that means sneaking in a new argument. + if env_arg.is_ignore() { + if self_idx > 0 { + self_idx -= 1; + llargs[self_idx] = llenv; + } else { + llargs.insert(0, llenv); + } + } else { + llargs[self_idx] = llenv; + } + + let dest = fcx.llretslotptr.get(); + + let callee = Callee { + data: Fn(llreffn), + ty: llref_fn_ty + }; + + // Call the by-ref closure body with `self` in a cleanup scope, + // to drop `self` when the body returns, or in case it unwinds. + let self_scope = fcx.push_custom_cleanup_scope(); + fcx.schedule_drop_mem(self_scope, llenv, closure_ty); + + bcx = callee.call(bcx, DebugLoc::None, &llargs[self_idx..], dest).bcx; + + fcx.pop_and_trans_custom_cleanup_scope(bcx, self_scope); + + fcx.finish(bcx, DebugLoc::None); + + ccx.instances().borrow_mut().insert(method_instance, lloncefn); + + lloncefn +} + /// Translates an adapter that implements the `Fn` trait for a fn /// pointer. 
This is basically the equivalent of something like: /// @@ -326,13 +500,12 @@ fn trans_fn_pointer_shim<'a, 'tcx>( } }; let sig = tcx.erase_late_bound_regions_and_normalize(sig); - let tuple_input_ty = tcx.intern_tup(&sig.inputs[..]); - let sig = ty::FnSig { - inputs: vec![bare_fn_ty_maybe_ref, - tuple_input_ty], - output: sig.output, - variadic: false - }; + let tuple_input_ty = tcx.intern_tup(sig.inputs()); + let sig = tcx.mk_fn_sig( + [bare_fn_ty_maybe_ref, tuple_input_ty].iter().cloned(), + sig.output(), + false + ); let fn_ty = FnType::new(ccx, Abi::RustCall, &sig, &[]); let tuple_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, @@ -400,7 +573,7 @@ fn get_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, let substs = tcx.normalize_associated_type(&substs); let instance = Instance::new(def_id, substs); - let item_ty = ccx.tcx().lookup_item_type(def_id).ty; + let item_ty = ccx.tcx().item_type(def_id); let fn_ty = monomorphize::apply_param_substs(ccx.shared(), substs, &item_ty); if let Some(&llfn) = ccx.instances().borrow().get(&instance) { @@ -435,13 +608,8 @@ fn get_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, // reference. It also occurs when testing libcore and in some // other weird situations. Annoying. - let fn_ptr_ty = match fn_ty.sty { - ty::TyFnDef(.., fty) => { - // Create a fn pointer with the substituted signature. - tcx.mk_fn_ptr(fty) - } - _ => bug!("expected fn item type, found {}", fn_ty) - }; + // Create a fn pointer with the substituted signature. + let fn_ptr_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(common::ty_fn_ty(ccx, fn_ty).into_owned())); let llptrty = type_of::type_of(ccx, fn_ptr_ty); let llfn = if let Some(llfn) = declare::get_declared_value(ccx, &sym) { @@ -469,7 +637,11 @@ fn get_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, llvm::LLVMRustSetLinkage(llfn, llvm::Linkage::ExternalLinkage); } } - + if ccx.use_dll_storage_attrs() && ccx.sess().cstore.is_dllimport_foreign_item(def_id) { + unsafe { + llvm::LLVMSetDLLStorageClass(llfn, llvm::DLLStorageClass::DllImport); + } + } llfn }; diff --git a/src/librustc_trans/closure.rs b/src/librustc_trans/closure.rs deleted file mode 100644 index a1d645fb99..0000000000 --- a/src/librustc_trans/closure.rs +++ /dev/null @@ -1,319 +0,0 @@ -// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -use arena::TypedArena; -use llvm::{self, ValueRef, get_params}; -use rustc::hir::def_id::DefId; -use abi::{Abi, FnType}; -use attributes; -use base::*; -use callee::{self, Callee}; -use common::*; -use debuginfo::{DebugLoc}; -use declare; -use monomorphize::{Instance}; -use value::Value; -use rustc::ty::{self, Ty, TyCtxt}; - -use rustc::hir; - -fn get_self_type<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - closure_id: DefId, - fn_ty: Ty<'tcx>) - -> Ty<'tcx> { - match tcx.closure_kind(closure_id) { - ty::ClosureKind::Fn => { - tcx.mk_imm_ref(tcx.mk_region(ty::ReErased), fn_ty) - } - ty::ClosureKind::FnMut => { - tcx.mk_mut_ref(tcx.mk_region(ty::ReErased), fn_ty) - } - ty::ClosureKind::FnOnce => fn_ty, - } -} - -/// Returns the LLVM function declaration for a closure, creating it if -/// necessary. If the ID does not correspond to a closure ID, returns None. 
-fn get_or_create_closure_declaration<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, - closure_id: DefId, - substs: ty::ClosureSubsts<'tcx>) - -> ValueRef { - // Normalize type so differences in regions and typedefs don't cause - // duplicate declarations - let tcx = ccx.tcx(); - let substs = tcx.erase_regions(&substs); - let instance = Instance::new(closure_id, substs.func_substs); - - if let Some(&llfn) = ccx.instances().borrow().get(&instance) { - debug!("get_or_create_closure_declaration(): found closure {:?}: {:?}", - instance, Value(llfn)); - return llfn; - } - - let symbol = instance.symbol_name(ccx.shared()); - - // Compute the rust-call form of the closure call method. - let sig = &tcx.closure_type(closure_id, substs).sig; - let sig = tcx.erase_late_bound_regions_and_normalize(sig); - let closure_type = tcx.mk_closure_from_closure_substs(closure_id, substs); - let function_type = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { - unsafety: hir::Unsafety::Normal, - abi: Abi::RustCall, - sig: ty::Binder(ty::FnSig { - inputs: Some(get_self_type(tcx, closure_id, closure_type)) - .into_iter().chain(sig.inputs).collect(), - output: sig.output, - variadic: false - }) - })); - let llfn = declare::declare_fn(ccx, &symbol, function_type); - - attributes::set_frame_pointer_elimination(ccx, llfn); - - debug!("get_or_create_declaration_if_closure(): inserting new \ - closure {:?}: {:?}", - instance, Value(llfn)); - - // NOTE: We do *not* store llfn in the ccx.instances() map here, - // that is only done, when the closures body is translated. - - llfn -} - -pub fn trans_closure_body_via_mir<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, - closure_def_id: DefId, - closure_substs: ty::ClosureSubsts<'tcx>) { - // (*) Note that in the case of inlined functions, the `closure_def_id` will be the - // defid of the closure in its original crate, whereas `id` will be the id of the local - // inlined copy. - debug!("trans_closure_body_via_mir(closure_def_id={:?}, closure_substs={:?})", - closure_def_id, closure_substs); - - let tcx = ccx.tcx(); - let _icx = push_ctxt("closure::trans_closure_expr"); - - let param_substs = closure_substs.func_substs; - let instance = Instance::new(closure_def_id, param_substs); - - // If we have not done so yet, translate this closure's body - if !ccx.instances().borrow().contains_key(&instance) { - let llfn = get_or_create_closure_declaration(ccx, closure_def_id, closure_substs); - - unsafe { - if ccx.sess().target.target.options.allows_weak_linkage { - llvm::LLVMRustSetLinkage(llfn, llvm::Linkage::WeakODRLinkage); - llvm::SetUniqueComdat(ccx.llmod(), llfn); - } else { - llvm::LLVMRustSetLinkage(llfn, llvm::Linkage::InternalLinkage); - } - } - - // set an inline hint for all closures - attributes::inline(llfn, attributes::InlineAttr::Hint); - - // Get the type of this closure. Use the current `param_substs` as - // the closure substitutions. This makes sense because the closure - // takes the same set of type arguments as the enclosing fn, and - // this function (`trans_closure`) is invoked at the point - // of the closure expression. 
- - let sig = &tcx.closure_type(closure_def_id, closure_substs).sig; - let sig = tcx.erase_late_bound_regions_and_normalize(sig); - - let closure_type = tcx.mk_closure_from_closure_substs(closure_def_id, - closure_substs); - let sig = ty::FnSig { - inputs: Some(get_self_type(tcx, closure_def_id, closure_type)) - .into_iter().chain(sig.inputs).collect(), - output: sig.output, - variadic: false - }; - - trans_closure(ccx, - llfn, - Instance::new(closure_def_id, param_substs), - &sig, - Abi::RustCall); - - ccx.instances().borrow_mut().insert(instance, llfn); - } -} - -pub fn trans_closure_method<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>, - closure_def_id: DefId, - substs: ty::ClosureSubsts<'tcx>, - method_instance: Instance<'tcx>, - trait_closure_kind: ty::ClosureKind) - -> ValueRef -{ - // If this is a closure, redirect to it. - let llfn = get_or_create_closure_declaration(ccx, closure_def_id, substs); - - // If weak linkage is not allowed, we have to make sure that a local, - // private copy of the closure is available in this codegen unit - if !ccx.sess().target.target.options.allows_weak_linkage && - !ccx.sess().opts.single_codegen_unit() { - - trans_closure_body_via_mir(ccx, closure_def_id, substs); - } - - // If the closure is a Fn closure, but a FnOnce is needed (etc), - // then adapt the self type - let llfn_closure_kind = ccx.tcx().closure_kind(closure_def_id); - - let _icx = push_ctxt("trans_closure_adapter_shim"); - - debug!("trans_closure_adapter_shim(llfn_closure_kind={:?}, \ - trait_closure_kind={:?}, llfn={:?})", - llfn_closure_kind, trait_closure_kind, Value(llfn)); - - match (llfn_closure_kind, trait_closure_kind) { - (ty::ClosureKind::Fn, ty::ClosureKind::Fn) | - (ty::ClosureKind::FnMut, ty::ClosureKind::FnMut) | - (ty::ClosureKind::FnOnce, ty::ClosureKind::FnOnce) => { - // No adapter needed. - llfn - } - (ty::ClosureKind::Fn, ty::ClosureKind::FnMut) => { - // The closure fn `llfn` is a `fn(&self, ...)`. We want a - // `fn(&mut self, ...)`. In fact, at trans time, these are - // basically the same thing, so we can just return llfn. - llfn - } - (ty::ClosureKind::Fn, ty::ClosureKind::FnOnce) | - (ty::ClosureKind::FnMut, ty::ClosureKind::FnOnce) => { - // The closure fn `llfn` is a `fn(&self, ...)` or `fn(&mut - // self, ...)`. We want a `fn(self, ...)`. We can produce - // this by doing something like: - // - // fn call_once(self, ...) { call_mut(&self, ...) } - // fn call_once(mut self, ...) { call_mut(&mut self, ...) } - // - // These are both the same at trans time. - trans_fn_once_adapter_shim(ccx, closure_def_id, substs, method_instance, llfn) - } - _ => { - bug!("trans_closure_adapter_shim: cannot convert {:?} to {:?}", - llfn_closure_kind, - trait_closure_kind); - } - } -} - -fn trans_fn_once_adapter_shim<'a, 'tcx>( - ccx: &'a CrateContext<'a, 'tcx>, - closure_def_id: DefId, - substs: ty::ClosureSubsts<'tcx>, - method_instance: Instance<'tcx>, - llreffn: ValueRef) - -> ValueRef -{ - if let Some(&llfn) = ccx.instances().borrow().get(&method_instance) { - return llfn; - } - - debug!("trans_fn_once_adapter_shim(closure_def_id={:?}, substs={:?}, llreffn={:?})", - closure_def_id, substs, Value(llreffn)); - - let tcx = ccx.tcx(); - - // Find a version of the closure type. Substitute static for the - // region since it doesn't really matter. - let closure_ty = tcx.mk_closure_from_closure_substs(closure_def_id, substs); - let ref_closure_ty = tcx.mk_imm_ref(tcx.mk_region(ty::ReErased), closure_ty); - - // Make a version with the type of by-ref closure. 
- let ty::ClosureTy { unsafety, abi, mut sig } = - tcx.closure_type(closure_def_id, substs); - sig.0.inputs.insert(0, ref_closure_ty); // sig has no self type as of yet - let llref_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { - unsafety: unsafety, - abi: abi, - sig: sig.clone() - })); - debug!("trans_fn_once_adapter_shim: llref_fn_ty={:?}", - llref_fn_ty); - - - // Make a version of the closure type with the same arguments, but - // with argument #0 being by value. - assert_eq!(abi, Abi::RustCall); - sig.0.inputs[0] = closure_ty; - - let sig = tcx.erase_late_bound_regions_and_normalize(&sig); - let fn_ty = FnType::new(ccx, abi, &sig, &[]); - - let llonce_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { - unsafety: unsafety, - abi: abi, - sig: ty::Binder(sig) - })); - - // Create the by-value helper. - let function_name = method_instance.symbol_name(ccx.shared()); - let lloncefn = declare::define_internal_fn(ccx, &function_name, llonce_fn_ty); - attributes::set_frame_pointer_elimination(ccx, lloncefn); - - let (block_arena, fcx): (TypedArena<_>, FunctionContext); - block_arena = TypedArena::new(); - fcx = FunctionContext::new(ccx, lloncefn, fn_ty, None, &block_arena); - let mut bcx = fcx.init(false); - - - // the first argument (`self`) will be the (by value) closure env. - - let mut llargs = get_params(fcx.llfn); - let mut self_idx = fcx.fn_ty.ret.is_indirect() as usize; - let env_arg = &fcx.fn_ty.args[0]; - let llenv = if env_arg.is_indirect() { - llargs[self_idx] - } else { - let scratch = alloc_ty(bcx, closure_ty, "self"); - let mut llarg_idx = self_idx; - env_arg.store_fn_arg(&bcx.build(), &mut llarg_idx, scratch); - scratch - }; - - debug!("trans_fn_once_adapter_shim: env={:?}", Value(llenv)); - // Adjust llargs such that llargs[self_idx..] has the call arguments. - // For zero-sized closures that means sneaking in a new argument. - if env_arg.is_ignore() { - if self_idx > 0 { - self_idx -= 1; - llargs[self_idx] = llenv; - } else { - llargs.insert(0, llenv); - } - } else { - llargs[self_idx] = llenv; - } - - let dest = fcx.llretslotptr.get(); - - let callee = Callee { - data: callee::Fn(llreffn), - ty: llref_fn_ty - }; - - // Call the by-ref closure body with `self` in a cleanup scope, - // to drop `self` when the body returns, or in case it unwinds. - let self_scope = fcx.push_custom_cleanup_scope(); - fcx.schedule_drop_mem(self_scope, llenv, closure_ty); - - bcx = callee.call(bcx, DebugLoc::None, &llargs[self_idx..], dest).bcx; - - fcx.pop_and_trans_custom_cleanup_scope(bcx, self_scope); - - fcx.finish(bcx, DebugLoc::None); - - ccx.instances().borrow_mut().insert(method_instance, lloncefn); - - lloncefn -} diff --git a/src/librustc_trans/collector.rs b/src/librustc_trans/collector.rs index a439d415ed..3af3ada66b 100644 --- a/src/librustc_trans/collector.rs +++ b/src/librustc_trans/collector.rs @@ -189,7 +189,7 @@ //! regardless of whether it is actually needed or not. 
use rustc::hir; -use rustc::hir::intravisit as hir_visit; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir::map as hir_map; use rustc::hir::def_id::DefId; @@ -211,9 +211,9 @@ use context::SharedCrateContext; use common::{fulfill_obligation, type_is_sized}; use glue::{self, DropGlueKind}; use monomorphize::{self, Instance}; -use util::nodemap::{FnvHashSet, FnvHashMap, DefIdMap}; +use util::nodemap::{FxHashSet, FxHashMap, DefIdMap}; -use trans_item::{TransItem, type_to_string, def_id_to_string}; +use trans_item::{TransItem, DefPathBasedNames}; #[derive(PartialEq, Eq, Hash, Clone, Copy, Debug)] pub enum TransItemCollectionMode { @@ -228,7 +228,7 @@ pub struct InliningMap<'tcx> { // that are potentially inlined by LLVM into the source. // The two numbers in the tuple are the start (inclusive) and // end index (exclusive) within the `targets` vecs. - index: FnvHashMap, (usize, usize)>, + index: FxHashMap, (usize, usize)>, targets: Vec>, } @@ -236,7 +236,7 @@ impl<'tcx> InliningMap<'tcx> { fn new() -> InliningMap<'tcx> { InliningMap { - index: FnvHashMap(), + index: FxHashMap(), targets: Vec::new(), } } @@ -269,7 +269,7 @@ impl<'tcx> InliningMap<'tcx> { pub fn collect_crate_translation_items<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, mode: TransItemCollectionMode) - -> (FnvHashSet>, + -> (FxHashSet>, InliningMap<'tcx>) { // We are not tracking dependencies of this pass as it has to be re-executed // every time no matter what. @@ -277,7 +277,7 @@ pub fn collect_crate_translation_items<'a, 'tcx>(scx: &SharedCrateContext<'a, 't let roots = collect_roots(scx, mode); debug!("Building translation item graph, beginning at roots"); - let mut visited = FnvHashSet(); + let mut visited = FxHashSet(); let mut recursion_depths = DefIdMap(); let mut inlining_map = InliningMap::new(); @@ -306,10 +306,9 @@ fn collect_roots<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, scx: scx, mode: mode, output: &mut roots, - enclosing_item: None, }; - scx.tcx().map.krate().visit_all_items(&mut visitor); + scx.tcx().map.krate().visit_all_item_likes(&mut visitor); } roots @@ -318,7 +317,7 @@ fn collect_roots<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, // Collect all monomorphized translation items reachable from `starting_point` fn collect_items_rec<'a, 'tcx: 'a>(scx: &SharedCrateContext<'a, 'tcx>, starting_point: TransItem<'tcx>, - visited: &mut FnvHashSet>, + visited: &mut FxHashSet>, recursion_depths: &mut DefIdMap, inlining_map: &mut InliningMap<'tcx>) { if !visited.insert(starting_point.clone()) { @@ -337,7 +336,7 @@ fn collect_items_rec<'a, 'tcx: 'a>(scx: &SharedCrateContext<'a, 'tcx>, } TransItem::Static(node_id) => { let def_id = scx.tcx().map.local_def_id(node_id); - let ty = scx.tcx().lookup_item_type(def_id).ty; + let ty = scx.tcx().item_type(def_id); let ty = glue::get_drop_glue_type(scx.tcx(), ty); neighbors.push(TransItem::DropGlue(DropGlueKind::Ty(ty))); @@ -362,6 +361,7 @@ fn collect_items_rec<'a, 'tcx: 'a>(scx: &SharedCrateContext<'a, 'tcx>, recursion_depth_reset = Some(check_recursion_limit(scx.tcx(), instance, recursion_depths)); + check_type_length_limit(scx.tcx(), instance); // Scan the MIR in order to find function calls, closures, and // drop-glue @@ -433,6 +433,40 @@ fn check_recursion_limit<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, (instance.def, recursion_depth) } +fn check_type_length_limit<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, + instance: Instance<'tcx>) +{ + let type_length = instance.substs.types().flat_map(|ty| ty.walk()).count(); + debug!(" => type length={}", type_length); + + 
// Rust code can easily create exponentially-long types using only a + // polynomial recursion depth. Even with the default recursion + // depth, you can easily get cases that take >2^60 steps to run, + // which means that rustc basically hangs. + // + // Bail out in these cases to avoid that bad user experience. + let type_length_limit = tcx.sess.type_length_limit.get(); + if type_length > type_length_limit { + // The instance name is already known to be too long for rustc. Use + // `{:.64}` to avoid blasting the user's terminal with thousands of + // lines of type-name. + let instance_name = instance.to_string(); + let msg = format!("reached the type-length limit while instantiating `{:.64}...`", + instance_name); + let mut diag = if let Some(node_id) = tcx.map.as_local_node_id(instance.def) { + tcx.sess.struct_span_fatal(tcx.map.span(node_id), &msg) + } else { + tcx.sess.struct_fatal(&msg) + }; + + diag.note(&format!( + "consider adding a `#![type_length_limit=\"{}\"]` attribute to your crate", + type_length_limit*2)); + diag.emit(); + tcx.sess.abort_if_errors(); + } +} + struct MirNeighborCollector<'a, 'tcx: 'a> { scx: &'a SharedCrateContext<'a, 'tcx>, mir: &'a mir::Mir<'tcx>, @@ -446,24 +480,6 @@ impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> { debug!("visiting rvalue {:?}", *rvalue); match *rvalue { - mir::Rvalue::Aggregate(mir::AggregateKind::Closure(def_id, - ref substs), _) => { - let mir = self.scx.tcx().item_mir(def_id); - - let concrete_substs = monomorphize::apply_param_substs(self.scx, - self.param_substs, - &substs.func_substs); - let concrete_substs = self.scx.tcx().erase_regions(&concrete_substs); - - let visitor = MirNeighborCollector { - scx: self.scx, - mir: &mir, - output: self.output, - param_substs: concrete_substs - }; - - visit_mir_and_promoted(visitor, &mir); - } // When doing an cast from a regular pointer to a fat pointer, we // have to instantiate all methods of the trait being cast to, so we // can build the appropriate vtable. @@ -618,7 +634,7 @@ impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> { fn can_result_in_trans_item<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> bool { - match tcx.lookup_item_type(def_id).ty.sty { + match tcx.item_type(def_id).sty { ty::TyFnDef(def_id, _, f) => { // Some constructors also have type TyFnDef but they are // always instantiated inline and don't result in @@ -682,7 +698,7 @@ impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> { -> bool { (bare_fn_ty.abi == Abi::RustIntrinsic || bare_fn_ty.abi == Abi::PlatformIntrinsic) && - tcx.item_name(def_id).as_str() == "drop_in_place" + tcx.item_name(def_id) == "drop_in_place" } } } @@ -690,10 +706,7 @@ impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> { fn can_have_local_instance<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> bool { - // Take a look if we have the definition available. If not, we - // will not emit code for this item in the local crate, and thus - // don't create a translation item for it. - def_id.is_local() || tcx.sess.cstore.is_item_mir_available(def_id) + tcx.sess.cstore.can_have_local_instance(tcx, def_id) } fn find_drop_glue_neighbors<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, @@ -782,14 +795,15 @@ fn find_drop_glue_neighbors<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, ty::TyFnDef(..) | ty::TyFnPtr(_) | ty::TyNever | - ty::TyTrait(_) => { + ty::TyDynamic(..) 
=> { /* nothing to do */ } ty::TyAdt(adt_def, substs) => { for field in adt_def.all_fields() { + let field_type = scx.tcx().item_type(field.did); let field_type = monomorphize::apply_param_substs(scx, substs, - &field.unsubst_ty()); + &field_type); let field_type = glue::get_drop_glue_type(scx.tcx(), field_type); if glue::type_needs_drop(scx.tcx(), field_type) { @@ -797,8 +811,8 @@ fn find_drop_glue_neighbors<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, } } } - ty::TyClosure(_, substs) => { - for upvar_ty in substs.upvar_tys { + ty::TyClosure(def_id, substs) => { + for upvar_ty in substs.upvar_tys(def_id, scx.tcx()) { let upvar_ty = glue::get_drop_glue_type(scx.tcx(), upvar_ty); if glue::type_needs_drop(scx.tcx(), upvar_ty) { output.push(TransItem::DropGlue(DropGlueKind::Ty(upvar_ty))); @@ -844,17 +858,12 @@ fn do_static_dispatch<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, param_substs); if let Some(trait_def_id) = scx.tcx().trait_of_item(fn_def_id) { - match scx.tcx().impl_or_trait_item(fn_def_id) { - ty::MethodTraitItem(ref method) => { - debug!(" => trait method, attempting to find impl"); - do_static_trait_method_dispatch(scx, - method, - trait_def_id, - fn_substs, - param_substs) - } - _ => bug!() - } + debug!(" => trait method, attempting to find impl"); + do_static_trait_method_dispatch(scx, + &scx.tcx().associated_item(fn_def_id), + trait_def_id, + fn_substs, + param_substs) } else { debug!(" => regular function"); // The function is not part of an impl or trait, no dispatching @@ -866,7 +875,7 @@ fn do_static_dispatch<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, // Given a trait-method and substitution information, find out the actual // implementation of the trait method. fn do_static_trait_method_dispatch<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, - trait_method: &ty::Method, + trait_method: &ty::AssociatedItem, trait_id: DefId, callee_substs: &'tcx Substs<'tcx>, param_substs: &'tcx Substs<'tcx>) @@ -893,10 +902,12 @@ fn do_static_trait_method_dispatch<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>, traits::VtableImpl(impl_data) => { Some(traits::find_method(tcx, trait_method.name, rcvr_substs, &impl_data)) } - // If we have a closure or a function pointer, we will also encounter - // the concrete closure/function somewhere else (during closure or fn - // pointer construction). That's where we track those things. - traits::VtableClosure(..) | + traits::VtableClosure(closure_data) => { + Some((closure_data.closure_def_id, closure_data.substs.substs)) + } + // Trait object and function pointer shims are always + // instantiated in-place, and as they are just an ABI-adjusting + // indirect call they do not have any dependencies. traits::VtableFnPointer(..) | traits::VtableObject(..) 
=> { None @@ -1025,18 +1036,20 @@ fn create_trans_items_for_vtable_methods<'a, 'tcx>(scx: &SharedCrateContext<'a, output: &mut Vec>) { assert!(!trait_ty.needs_subst() && !impl_ty.needs_subst()); - if let ty::TyTrait(ref trait_ty) = trait_ty.sty { - let poly_trait_ref = trait_ty.principal.with_self_ty(scx.tcx(), impl_ty); - let param_substs = scx.tcx().intern_substs(&[]); - - // Walk all methods of the trait, including those of its supertraits - let methods = traits::get_vtable_methods(scx.tcx(), poly_trait_ref); - let methods = methods.filter_map(|method| method) - .filter_map(|(def_id, substs)| do_static_dispatch(scx, def_id, substs, param_substs)) - .filter(|&(def_id, _)| can_have_local_instance(scx.tcx(), def_id)) - .map(|(def_id, substs)| create_fn_trans_item(scx, def_id, substs, param_substs)); - output.extend(methods); + if let ty::TyDynamic(ref trait_ty, ..) = trait_ty.sty { + if let Some(principal) = trait_ty.principal() { + let poly_trait_ref = principal.with_self_ty(scx.tcx(), impl_ty); + let param_substs = scx.tcx().intern_substs(&[]); + // Walk all methods of the trait, including those of its supertraits + let methods = traits::get_vtable_methods(scx.tcx(), poly_trait_ref); + let methods = methods.filter_map(|method| method) + .filter_map(|(def_id, substs)| do_static_dispatch(scx, def_id, substs, + param_substs)) + .filter(|&(def_id, _)| can_have_local_instance(scx.tcx(), def_id)) + .map(|(def_id, substs)| create_fn_trans_item(scx, def_id, substs, param_substs)); + output.extend(methods); + } // Also add the destructor let dg_type = glue::get_drop_glue_type(scx.tcx(), impl_ty); output.push(TransItem::DropGlue(DropGlueKind::Ty(dg_type))); @@ -1051,14 +1064,10 @@ struct RootCollector<'b, 'a: 'b, 'tcx: 'a + 'b> { scx: &'b SharedCrateContext<'a, 'tcx>, mode: TransItemCollectionMode, output: &'b mut Vec>, - enclosing_item: Option<&'tcx hir::Item>, } -impl<'b, 'a, 'v> hir_visit::Visitor<'v> for RootCollector<'b, 'a, 'v> { +impl<'b, 'a, 'v> ItemLikeVisitor<'v> for RootCollector<'b, 'a, 'v> { fn visit_item(&mut self, item: &'v hir::Item) { - let old_enclosing_item = self.enclosing_item; - self.enclosing_item = Some(item); - match item.node { hir::ItemExternCrate(..) | hir::ItemUse(..) 
| @@ -1082,13 +1091,12 @@ impl<'b, 'a, 'v> hir_visit::Visitor<'v> for RootCollector<'b, 'a, 'v> { hir::ItemStruct(_, ref generics) | hir::ItemUnion(_, ref generics) => { if !generics.is_parameterized() { - let ty = self.scx.tcx().tables().node_types[&item.id]; - if self.mode == TransItemCollectionMode::Eager { + let def_id = self.scx.tcx().map.local_def_id(item.id); debug!("RootCollector: ADT drop-glue for {}", - def_id_to_string(self.scx.tcx(), - self.scx.tcx().map.local_def_id(item.id))); + def_id_to_string(self.scx.tcx(), def_id)); + let ty = self.scx.tcx().item_type(def_id); let ty = glue::get_drop_glue_type(self.scx.tcx(), ty); self.output.push(TransItem::DropGlue(DropGlueKind::Ty(ty))); } @@ -1116,9 +1124,6 @@ impl<'b, 'a, 'v> hir_visit::Visitor<'v> for RootCollector<'b, 'a, 'v> { } } } - - hir_visit::walk_item(self, item); - self.enclosing_item = old_enclosing_item; } fn visit_impl_item(&mut self, ii: &'v hir::ImplItem) { @@ -1153,8 +1158,6 @@ impl<'b, 'a, 'v> hir_visit::Visitor<'v> for RootCollector<'b, 'a, 'v> { } _ => { /* Nothing to do here */ } } - - hir_visit::walk_impl_item(self, ii) } } @@ -1167,7 +1170,7 @@ fn create_trans_items_for_default_impls<'a, 'tcx>(scx: &SharedCrateContext<'a, ' _, ref generics, .., - ref items) => { + ref impl_item_refs) => { if generics.is_type_parameterized() { return } @@ -1179,15 +1182,16 @@ fn create_trans_items_for_default_impls<'a, 'tcx>(scx: &SharedCrateContext<'a, ' if let Some(trait_ref) = tcx.impl_trait_ref(impl_def_id) { let callee_substs = tcx.erase_regions(&trait_ref.substs); - let overridden_methods: FnvHashSet<_> = items.iter() - .map(|item| item.name) - .collect(); + let overridden_methods: FxHashSet<_> = + impl_item_refs.iter() + .map(|iiref| iiref.name) + .collect(); for method in tcx.provided_trait_methods(trait_ref.def_id) { if overridden_methods.contains(&method.name) { continue; } - if !method.generics.types.is_empty() { + if !tcx.item_generics(method.def_id).types.is_empty() { continue; } @@ -1206,7 +1210,7 @@ fn create_trans_items_for_default_impls<'a, 'tcx>(scx: &SharedCrateContext<'a, ' callee_substs, &impl_data); - let predicates = tcx.lookup_predicates(def_id).predicates + let predicates = tcx.item_predicates(def_id).predicates .subst(tcx, substs); if !traits::normalize_and_test_predicates(tcx, predicates) { continue; @@ -1256,3 +1260,21 @@ fn visit_mir_and_promoted<'tcx, V: MirVisitor<'tcx>>(mut visitor: V, mir: &mir:: visitor.visit_mir(promoted); } } + +fn def_id_to_string<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, + def_id: DefId) + -> String { + let mut output = String::new(); + let printer = DefPathBasedNames::new(tcx, false, false); + printer.push_def_path(def_id, &mut output); + output +} + +fn type_to_string<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, + ty: ty::Ty<'tcx>) + -> String { + let mut output = String::new(); + let printer = DefPathBasedNames::new(tcx, false, false); + printer.push_type_name(ty, &mut output); + output +} diff --git a/src/librustc_trans/common.rs b/src/librustc_trans/common.rs index 464b261b08..b1d61cea39 100644 --- a/src/librustc_trans/common.rs +++ b/src/librustc_trans/common.rs @@ -18,6 +18,7 @@ use llvm::{ValueRef, BasicBlockRef, BuilderRef, ContextRef, TypeKind}; use llvm::{True, False, Bool, OperandBundleDef}; use rustc::hir::def::Def; use rustc::hir::def_id::DefId; +use rustc::hir::map::DefPathData; use rustc::infer::TransNormalize; use rustc::mir::Mir; use rustc::util::common::MemoizationMap; @@ -44,13 +45,14 @@ use rustc::hir; use arena::TypedArena; use libc::{c_uint, c_char}; +use 
std::borrow::Cow; +use std::iter; use std::ops::Deref; use std::ffi::CString; use std::cell::{Cell, RefCell, Ref}; use syntax::ast; -use syntax::parse::token::InternedString; -use syntax::parse::token; +use syntax::symbol::{Symbol, InternedString}; use syntax_pos::{DUMMY_SP, Span}; pub use context::{CrateContext, SharedCrateContext}; @@ -109,7 +111,16 @@ pub fn type_pair_fields<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) Some([monomorphize::field_ty(ccx.tcx(), substs, &fields[0]), monomorphize::field_ty(ccx.tcx(), substs, &fields[1])]) } - ty::TyClosure(_, ty::ClosureSubsts { upvar_tys: tys, .. }) | + ty::TyClosure(def_id, substs) => { + let mut tys = substs.upvar_tys(def_id, ccx.tcx()); + tys.next().and_then(|first_ty| tys.next().and_then(|second_ty| { + if tys.next().is_some() { + None + } else { + Some([first_ty, second_ty]) + } + })) + } ty::TyTuple(tys) => { if tys.len() != 2 { return None; @@ -213,7 +224,7 @@ impl<'a, 'tcx> VariantInfo<'tcx> { VariantInfo { discr: Disr(0), fields: v.iter().enumerate().map(|(i, &t)| { - Field(token::intern(&i.to_string()), t) + Field(Symbol::intern(&i.to_string()), t) }).collect() } } @@ -407,11 +418,11 @@ impl<'a, 'tcx> FunctionContext<'a, 'tcx> { let ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Unsafe, abi: Abi::C, - sig: ty::Binder(ty::FnSig { - inputs: vec![tcx.mk_mut_ptr(tcx.types.u8)], - output: tcx.types.never, - variadic: false - }), + sig: ty::Binder(tcx.mk_fn_sig( + iter::once(tcx.mk_mut_ptr(tcx.types.u8)), + tcx.types.never, + false + )), })); let unwresume = ccx.eh_unwind_resume(); @@ -815,7 +826,7 @@ pub fn C_cstr(cx: &CrateContext, s: InternedString, null_terminated: bool) -> Va pub fn C_str_slice(cx: &CrateContext, s: InternedString) -> ValueRef { let len = s.len(); let cs = consts::ptrcast(C_cstr(cx, s, false), Type::i8p(cx)); - C_named_struct(cx.tn().find_type("str_slice").unwrap(), &[cs, C_uint(cx, len)]) + C_named_struct(cx.str_slice_type(), &[cs, C_uint(cx, len)]) } pub fn C_struct(cx: &CrateContext, elts: &[ValueRef], packed: bool) -> ValueRef { @@ -1060,3 +1071,37 @@ pub fn shift_mask_val<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, _ => bug!("shift_mask_val: expected Integer or Vector, found {:?}", kind), } } + +pub fn ty_fn_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, + ty: Ty<'tcx>) + -> Cow<'tcx, ty::BareFnTy<'tcx>> +{ + match ty.sty { + ty::TyFnDef(_, _, fty) => Cow::Borrowed(fty), + // Shims currently have type TyFnPtr. Not sure this should remain. 
+ ty::TyFnPtr(fty) => Cow::Borrowed(fty), + ty::TyClosure(def_id, substs) => { + let tcx = ccx.tcx(); + let ty::ClosureTy { unsafety, abi, sig } = tcx.closure_type(def_id, substs); + + let env_region = ty::ReLateBound(ty::DebruijnIndex::new(1), ty::BrEnv); + let env_ty = match tcx.closure_kind(def_id) { + ty::ClosureKind::Fn => tcx.mk_imm_ref(tcx.mk_region(env_region), ty), + ty::ClosureKind::FnMut => tcx.mk_mut_ref(tcx.mk_region(env_region), ty), + ty::ClosureKind::FnOnce => ty, + }; + + let sig = sig.map_bound(|sig| tcx.mk_fn_sig( + iter::once(env_ty).chain(sig.inputs().iter().cloned()), + sig.output(), + sig.variadic + )); + Cow::Owned(ty::BareFnTy { unsafety: unsafety, abi: abi, sig: sig }) + } + _ => bug!("unexpected type {:?} to ty_fn_sig", ty) + } +} + +pub fn is_closure(tcx: TyCtxt, def_id: DefId) -> bool { + tcx.def_key(def_id).disambiguated_data.data == DefPathData::ClosureExpr +} diff --git a/src/librustc_trans/consts.rs b/src/librustc_trans/consts.rs index 0dc10aa775..730a4025a5 100644 --- a/src/librustc_trans/consts.rs +++ b/src/librustc_trans/consts.rs @@ -84,7 +84,7 @@ pub fn get_static(ccx: &CrateContext, def_id: DefId) -> ValueRef { return g; } - let ty = ccx.tcx().lookup_item_type(def_id).ty; + let ty = ccx.tcx().item_type(def_id); let g = if let Some(id) = ccx.tcx().map.as_local_node_id(def_id) { let llty = type_of::type_of(ccx, ty); @@ -123,7 +123,7 @@ pub fn get_static(ccx: &CrateContext, def_id: DefId) -> ValueRef { // extern "C" fn() from being non-null, so we can't just declare a // static and call it a day. Some linkages (like weak) will make it such // that the static actually has a null value. - let linkage = match base::llvm_linkage_by_name(&name) { + let linkage = match base::llvm_linkage_by_name(&name.as_str()) { Some(linkage) => linkage, None => { ccx.sess().span_fatal(span, "invalid linkage specified"); @@ -191,7 +191,12 @@ pub fn get_static(ccx: &CrateContext, def_id: DefId) -> ValueRef { llvm::set_thread_local(g, true); } } - if ccx.use_dll_storage_attrs() { + if ccx.use_dll_storage_attrs() && !ccx.sess().cstore.is_foreign_item(def_id) { + // This item is external but not foreign, i.e. it originates from an external Rust + // crate. Since we don't know whether this crate will be linked dynamically or + // statically in the final application, we always mark such symbols as 'dllimport'. + // If final linkage happens to be static, we rely on compiler-emitted __imp_ stubs to + // make things work. unsafe { llvm::LLVMSetDLLStorageClass(g, llvm::DLLStorageClass::DllImport); } @@ -199,6 +204,12 @@ pub fn get_static(ccx: &CrateContext, def_id: DefId) -> ValueRef { g }; + if ccx.use_dll_storage_attrs() && ccx.sess().cstore.is_dllimport_foreign_item(def_id) { + // For foreign (native) libs we know the exact storage type to use. 
+ unsafe { + llvm::LLVMSetDLLStorageClass(g, llvm::DLLStorageClass::DllImport); + } + } ccx.instances().borrow_mut().insert(instance, g); ccx.statics().borrow_mut().insert(g, def_id); g @@ -226,7 +237,7 @@ pub fn trans_static(ccx: &CrateContext, v }; - let ty = ccx.tcx().lookup_item_type(def_id).ty; + let ty = ccx.tcx().item_type(def_id); let llty = type_of::type_of(ccx, ty); let g = if val_llty == llty { g diff --git a/src/librustc_trans/context.rs b/src/librustc_trans/context.rs index fc75b1018e..6435b20eea 100644 --- a/src/librustc_trans/context.rs +++ b/src/librustc_trans/context.rs @@ -10,7 +10,8 @@ use llvm; use llvm::{ContextRef, ModuleRef, ValueRef, BuilderRef}; -use rustc::dep_graph::{DepNode, DepTrackingMap, DepTrackingMapConfig, WorkProduct}; +use rustc::dep_graph::{DepGraph, DepNode, DepTrackingMap, DepTrackingMapConfig, + WorkProduct}; use middle::cstore::LinkMeta; use rustc::hir::def::ExportMap; use rustc::hir::def_id::DefId; @@ -25,14 +26,15 @@ use monomorphize::Instance; use partitioning::CodegenUnit; use trans_item::TransItem; -use type_::{Type, TypeNames}; +use type_::Type; +use rustc_data_structures::base_n; use rustc::ty::subst::Substs; use rustc::ty::{self, Ty, TyCtxt}; use session::config::NoDebugInfo; use session::Session; use session::config; use symbol_map::SymbolMap; -use util::nodemap::{NodeSet, DefIdMap, FnvHashMap, FnvHashSet}; +use util::nodemap::{NodeSet, DefIdMap, FxHashMap, FxHashSet}; use std::ffi::{CStr, CString}; use std::cell::{Cell, RefCell}; @@ -41,7 +43,7 @@ use std::ptr; use std::rc::Rc; use std::str; use syntax::ast; -use syntax::parse::token::InternedString; +use syntax::symbol::InternedString; use abi::FnType; pub struct Stats { @@ -52,7 +54,7 @@ pub struct Stats { pub n_inlines: Cell, pub n_closures: Cell, pub n_llvm_insns: Cell, - pub llvm_insns: RefCell>, + pub llvm_insns: RefCell>, // (ident, llvm-instructions) pub fn_stats: RefCell >, } @@ -66,7 +68,7 @@ pub struct SharedCrateContext<'a, 'tcx: 'a> { metadata_llcx: ContextRef, export_map: ExportMap, - reachable: NodeSet, + exported_symbols: NodeSet, link_meta: LinkMeta, tcx: TyCtxt<'a, 'tcx, 'tcx>, stats: Stats, @@ -74,7 +76,7 @@ pub struct SharedCrateContext<'a, 'tcx: 'a> { use_dll_storage_attrs: bool, - translation_items: RefCell>>, + translation_items: RefCell>>, trait_cache: RefCell>>, project_cache: RefCell>>, } @@ -87,17 +89,17 @@ pub struct LocalCrateContext<'tcx> { llmod: ModuleRef, llcx: ContextRef, previous_work_product: Option, - tn: TypeNames, // FIXME: This seems to be largely unused. codegen_unit: CodegenUnit<'tcx>, - needs_unwind_cleanup_cache: RefCell, bool>>, - fn_pointer_shims: RefCell, ValueRef>>, - drop_glues: RefCell, (ValueRef, FnType)>>, + needs_unwind_cleanup_cache: RefCell, bool>>, + fn_pointer_shims: RefCell, ValueRef>>, + drop_glues: RefCell, (ValueRef, FnType)>>, /// Cache instances of monomorphic and polymorphic items - instances: RefCell, ValueRef>>, + instances: RefCell, ValueRef>>, /// Cache generated vtables - vtables: RefCell, ValueRef>>, + vtables: RefCell, + Option>), ValueRef>>, /// Cache of constant strings, - const_cstr_cache: RefCell>, + const_cstr_cache: RefCell>, /// Reverse-direction for const ptrs cast from globals. /// Key is a ValueRef holding a *T, @@ -107,24 +109,24 @@ pub struct LocalCrateContext<'tcx> { /// when we ptrcast, and we have to ptrcast during translation /// of a [T] const because we form a slice, a (*T,usize) pair, not /// a pointer to an LLVM array type. Similar for trait objects. 
- const_unsized: RefCell>, + const_unsized: RefCell>, /// Cache of emitted const globals (value -> global) - const_globals: RefCell>, + const_globals: RefCell>, /// Cache of emitted const values - const_values: RefCell), ValueRef>>, + const_values: RefCell), ValueRef>>, /// Cache of external const values extern_const_values: RefCell>, /// Mapping from static definitions to their DefId's. - statics: RefCell>, + statics: RefCell>, - impl_method_cache: RefCell>, + impl_method_cache: RefCell>, /// Cache of closure wrappers for bare fn's. - closure_bare_wrapper_cache: RefCell>, + closure_bare_wrapper_cache: RefCell>, /// List of globals for static variables which need to be passed to the /// LLVM function ReplaceAllUsesWith (RAUW) when translation is complete. @@ -132,15 +134,16 @@ pub struct LocalCrateContext<'tcx> { /// to constants.) statics_to_rauw: RefCell>, - lltypes: RefCell, Type>>, - llsizingtypes: RefCell, Type>>, - type_hashcodes: RefCell, String>>, + lltypes: RefCell, Type>>, + llsizingtypes: RefCell, Type>>, + type_hashcodes: RefCell, String>>, int_type: Type, opaque_vec_type: Type, + str_slice_type: Type, builder: BuilderRef_res, /// Holds the LLVM values for closure IDs. - closure_vals: RefCell, ValueRef>>, + closure_vals: RefCell, ValueRef>>, dbg_cx: Option>, @@ -148,7 +151,7 @@ pub struct LocalCrateContext<'tcx> { eh_unwind_resume: Cell>, rust_try_fn: Cell>, - intrinsics: RefCell>, + intrinsics: RefCell>, /// Number of LLVM instructions translated into this `LocalCrateContext`. /// This is used to perform some basic load-balancing to keep all LLVM @@ -435,7 +438,7 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { pub fn new(tcx: TyCtxt<'b, 'tcx, 'tcx>, export_map: ExportMap, link_meta: LinkMeta, - reachable: NodeSet, + exported_symbols: NodeSet, check_overflow: bool) -> SharedCrateContext<'b, 'tcx> { let (metadata_llcx, metadata_llmod) = unsafe { @@ -452,7 +455,7 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { // they're not available to be linked against. This poses a few problems // for the compiler, some of which are somewhat fundamental, but we use // the `use_dll_storage_attrs` variable below to attach the `dllexport` - // attribute to all LLVM functions that are reachable (e.g. they're + // attribute to all LLVM functions that are exported e.g. they're // already tagged with external linkage). 
This is suboptimal for a few // reasons: // @@ -491,7 +494,7 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { metadata_llmod: metadata_llmod, metadata_llcx: metadata_llcx, export_map: export_map, - reachable: reachable, + exported_symbols: exported_symbols, link_meta: link_meta, tcx: tcx, stats: Stats { @@ -502,12 +505,12 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { n_inlines: Cell::new(0), n_closures: Cell::new(0), n_llvm_insns: Cell::new(0), - llvm_insns: RefCell::new(FnvHashMap()), + llvm_insns: RefCell::new(FxHashMap()), fn_stats: RefCell::new(Vec::new()), }, check_overflow: check_overflow, use_dll_storage_attrs: use_dll_storage_attrs, - translation_items: RefCell::new(FnvHashSet()), + translation_items: RefCell::new(FxHashSet()), trait_cache: RefCell::new(DepTrackingMap::new(tcx.dep_graph.clone())), project_cache: RefCell::new(DepTrackingMap::new(tcx.dep_graph.clone())), } @@ -525,8 +528,8 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { &self.export_map } - pub fn reachable<'a>(&'a self) -> &'a NodeSet { - &self.reachable + pub fn exported_symbols<'a>(&'a self) -> &'a NodeSet { + &self.exported_symbols } pub fn trait_cache(&self) -> &RefCell>> { @@ -549,6 +552,10 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { &self.tcx.sess } + pub fn dep_graph<'a>(&'a self) -> &'a DepGraph { + &self.tcx.dep_graph + } + pub fn stats<'a>(&'a self) -> &'a Stats { &self.stats } @@ -557,7 +564,7 @@ impl<'b, 'tcx> SharedCrateContext<'b, 'tcx> { self.use_dll_storage_attrs } - pub fn translation_items(&self) -> &RefCell>> { + pub fn translation_items(&self) -> &RefCell>> { &self.translation_items } @@ -611,33 +618,33 @@ impl<'tcx> LocalCrateContext<'tcx> { llcx: llcx, previous_work_product: previous_work_product, codegen_unit: codegen_unit, - tn: TypeNames::new(), - needs_unwind_cleanup_cache: RefCell::new(FnvHashMap()), - fn_pointer_shims: RefCell::new(FnvHashMap()), - drop_glues: RefCell::new(FnvHashMap()), - instances: RefCell::new(FnvHashMap()), - vtables: RefCell::new(FnvHashMap()), - const_cstr_cache: RefCell::new(FnvHashMap()), - const_unsized: RefCell::new(FnvHashMap()), - const_globals: RefCell::new(FnvHashMap()), - const_values: RefCell::new(FnvHashMap()), + needs_unwind_cleanup_cache: RefCell::new(FxHashMap()), + fn_pointer_shims: RefCell::new(FxHashMap()), + drop_glues: RefCell::new(FxHashMap()), + instances: RefCell::new(FxHashMap()), + vtables: RefCell::new(FxHashMap()), + const_cstr_cache: RefCell::new(FxHashMap()), + const_unsized: RefCell::new(FxHashMap()), + const_globals: RefCell::new(FxHashMap()), + const_values: RefCell::new(FxHashMap()), extern_const_values: RefCell::new(DefIdMap()), - statics: RefCell::new(FnvHashMap()), - impl_method_cache: RefCell::new(FnvHashMap()), - closure_bare_wrapper_cache: RefCell::new(FnvHashMap()), + statics: RefCell::new(FxHashMap()), + impl_method_cache: RefCell::new(FxHashMap()), + closure_bare_wrapper_cache: RefCell::new(FxHashMap()), statics_to_rauw: RefCell::new(Vec::new()), - lltypes: RefCell::new(FnvHashMap()), - llsizingtypes: RefCell::new(FnvHashMap()), - type_hashcodes: RefCell::new(FnvHashMap()), + lltypes: RefCell::new(FxHashMap()), + llsizingtypes: RefCell::new(FxHashMap()), + type_hashcodes: RefCell::new(FxHashMap()), int_type: Type::from_ref(ptr::null_mut()), opaque_vec_type: Type::from_ref(ptr::null_mut()), + str_slice_type: Type::from_ref(ptr::null_mut()), builder: BuilderRef_res(llvm::LLVMCreateBuilderInContext(llcx)), - closure_vals: RefCell::new(FnvHashMap()), + closure_vals: RefCell::new(FxHashMap()), dbg_cx: dbg_cx, 
eh_personality: Cell::new(None), eh_unwind_resume: Cell::new(None), rust_try_fn: Cell::new(None), - intrinsics: RefCell::new(FnvHashMap()), + intrinsics: RefCell::new(FxHashMap()), n_llvm_insns: Cell::new(0), type_of_depth: Cell::new(0), symbol_map: symbol_map, @@ -662,7 +669,7 @@ impl<'tcx> LocalCrateContext<'tcx> { local_ccx.int_type = int_type; local_ccx.opaque_vec_type = opaque_vec_type; - local_ccx.tn.associate_type("str_slice", &str_slice_ty); + local_ccx.str_slice_type = str_slice_ty; if shared.tcx.sess.count_llvm_insns() { base::init_insn_ctxt() @@ -700,22 +707,6 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { &self.local_ccxs[self.index] } - /// Get a (possibly) different `CrateContext` from the same - /// `SharedCrateContext`. - pub fn rotate(&'b self) -> CrateContext<'b, 'tcx> { - let (_, index) = - self.local_ccxs - .iter() - .zip(0..self.local_ccxs.len()) - .min_by_key(|&(local_ccx, _idx)| local_ccx.n_llvm_insns.get()) - .unwrap(); - CrateContext { - shared: self.shared, - index: index, - local_ccxs: &self.local_ccxs[..], - } - } - /// Either iterate over only `self`, or iterate over all `CrateContext`s in /// the `SharedCrateContext`. The iterator produces `(ccx, is_origin)` /// pairs, where `is_origin` is `true` if `ccx` is `self` and `false` @@ -778,32 +769,28 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { unsafe { llvm::LLVMRustGetModuleDataLayout(self.llmod()) } } - pub fn tn<'a>(&'a self) -> &'a TypeNames { - &self.local().tn - } - pub fn export_map<'a>(&'a self) -> &'a ExportMap { &self.shared.export_map } - pub fn reachable<'a>(&'a self) -> &'a NodeSet { - &self.shared.reachable + pub fn exported_symbols<'a>(&'a self) -> &'a NodeSet { + &self.shared.exported_symbols } pub fn link_meta<'a>(&'a self) -> &'a LinkMeta { &self.shared.link_meta } - pub fn needs_unwind_cleanup_cache(&self) -> &RefCell, bool>> { + pub fn needs_unwind_cleanup_cache(&self) -> &RefCell, bool>> { &self.local().needs_unwind_cleanup_cache } - pub fn fn_pointer_shims(&self) -> &RefCell, ValueRef>> { + pub fn fn_pointer_shims(&self) -> &RefCell, ValueRef>> { &self.local().fn_pointer_shims } pub fn drop_glues<'a>(&'a self) - -> &'a RefCell, (ValueRef, FnType)>> { + -> &'a RefCell, (ValueRef, FnType)>> { &self.local().drop_glues } @@ -815,28 +802,30 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { self.sess().cstore.defid_for_inlined_node(node_id) } - pub fn instances<'a>(&'a self) -> &'a RefCell, ValueRef>> { + pub fn instances<'a>(&'a self) -> &'a RefCell, ValueRef>> { &self.local().instances } - pub fn vtables<'a>(&'a self) -> &'a RefCell, ValueRef>> { + pub fn vtables<'a>(&'a self) + -> &'a RefCell, + Option>), ValueRef>> { &self.local().vtables } - pub fn const_cstr_cache<'a>(&'a self) -> &'a RefCell> { + pub fn const_cstr_cache<'a>(&'a self) -> &'a RefCell> { &self.local().const_cstr_cache } - pub fn const_unsized<'a>(&'a self) -> &'a RefCell> { + pub fn const_unsized<'a>(&'a self) -> &'a RefCell> { &self.local().const_unsized } - pub fn const_globals<'a>(&'a self) -> &'a RefCell> { + pub fn const_globals<'a>(&'a self) -> &'a RefCell> { &self.local().const_globals } - pub fn const_values<'a>(&'a self) -> &'a RefCell), - ValueRef>> { + pub fn const_values<'a>(&'a self) -> &'a RefCell), + ValueRef>> { &self.local().const_values } @@ -844,16 +833,16 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { &self.local().extern_const_values } - pub fn statics<'a>(&'a self) -> &'a RefCell> { + pub fn statics<'a>(&'a self) -> &'a RefCell> { &self.local().statics } pub fn impl_method_cache<'a>(&'a self) - -> &'a RefCell> { + -> &'a 
RefCell> { &self.local().impl_method_cache } - pub fn closure_bare_wrapper_cache<'a>(&'a self) -> &'a RefCell> { + pub fn closure_bare_wrapper_cache<'a>(&'a self) -> &'a RefCell> { &self.local().closure_bare_wrapper_cache } @@ -861,15 +850,15 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { &self.local().statics_to_rauw } - pub fn lltypes<'a>(&'a self) -> &'a RefCell, Type>> { + pub fn lltypes<'a>(&'a self) -> &'a RefCell, Type>> { &self.local().lltypes } - pub fn llsizingtypes<'a>(&'a self) -> &'a RefCell, Type>> { + pub fn llsizingtypes<'a>(&'a self) -> &'a RefCell, Type>> { &self.local().llsizingtypes } - pub fn type_hashcodes<'a>(&'a self) -> &'a RefCell, String>> { + pub fn type_hashcodes<'a>(&'a self) -> &'a RefCell, String>> { &self.local().type_hashcodes } @@ -885,7 +874,11 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { self.local().opaque_vec_type } - pub fn closure_vals<'a>(&'a self) -> &'a RefCell, ValueRef>> { + pub fn str_slice_type(&self) -> Type { + self.local().str_slice_type + } + + pub fn closure_vals<'a>(&'a self) -> &'a RefCell, ValueRef>> { &self.local().closure_vals } @@ -905,7 +898,7 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { &self.local().rust_try_fn } - fn intrinsics<'a>(&'a self) -> &'a RefCell> { + fn intrinsics<'a>(&'a self) -> &'a RefCell> { &self.local().intrinsics } @@ -958,7 +951,7 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { &*self.local().symbol_map } - pub fn translation_items(&self) -> &RefCell>> { + pub fn translation_items(&self) -> &RefCell>> { &self.shared.translation_items } @@ -975,7 +968,11 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { self.local().local_gen_sym_counter.set(idx + 1); // Include a '.' character, so there can be no accidental conflicts with // user defined names - format!("{}.{}", prefix, idx) + let mut name = String::with_capacity(prefix.len() + 6); + name.push_str(prefix); + name.push_str("."); + base_n::push_str(idx as u64, base_n::ALPHANUMERIC_ONLY, &mut name); + name } } diff --git a/src/librustc_trans/debuginfo/metadata.rs b/src/librustc_trans/debuginfo/metadata.rs index 4bb34850e0..511c9d3c13 100644 --- a/src/librustc_trans/debuginfo/metadata.rs +++ b/src/librustc_trans/debuginfo/metadata.rs @@ -22,7 +22,8 @@ use context::SharedCrateContext; use session::Session; use llvm::{self, ValueRef}; -use llvm::debuginfo::{DIType, DIFile, DIScope, DIDescriptor, DICompositeType, DILexicalBlock}; +use llvm::debuginfo::{DIType, DIFile, DIScope, DIDescriptor, + DICompositeType, DILexicalBlock, DIFlags}; use rustc::hir::def::CtorKind; use rustc::hir::def_id::DefId; @@ -30,24 +31,21 @@ use rustc::ty::fold::TypeVisitor; use rustc::ty::subst::Substs; use rustc::ty::util::TypeIdHasher; use rustc::hir; -use rustc_data_structures::blake2b::Blake2bHasher; +use rustc_data_structures::ToHex; use {type_of, machine, monomorphize}; use common::CrateContext; use type_::Type; use rustc::ty::{self, AdtKind, Ty, layout}; use session::config; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; use util::common::path2cstr; use libc::{c_uint, c_longlong}; use std::ffi::CString; -use std::fmt::Write; use std::path::Path; use std::ptr; -use std::rc::Rc; -use syntax::util::interner::Interner; use syntax::ast; -use syntax::parse::token; +use syntax::symbol::{Interner, InternedString}; use syntax_pos::{self, Span}; @@ -71,8 +69,6 @@ pub const UNKNOWN_COLUMN_NUMBER: c_uint = 0; // ptr::null() doesn't work :( pub const NO_SCOPE_METADATA: DIScope = (0 as DIScope); -const FLAGS_NONE: c_uint = 0; - #[derive(Copy, Debug, Hash, Eq, PartialEq, Clone)] pub struct 
UniqueTypeId(ast::Name); @@ -84,20 +80,20 @@ pub struct TypeMap<'tcx> { // The UniqueTypeIds created so far unique_id_interner: Interner, // A map from UniqueTypeId to debuginfo metadata for that type. This is a 1:1 mapping. - unique_id_to_metadata: FnvHashMap, + unique_id_to_metadata: FxHashMap, // A map from types to debuginfo metadata. This is a N:1 mapping. - type_to_metadata: FnvHashMap, DIType>, + type_to_metadata: FxHashMap, DIType>, // A map from types to UniqueTypeId. This is a N:1 mapping. - type_to_unique_id: FnvHashMap, UniqueTypeId> + type_to_unique_id: FxHashMap, UniqueTypeId> } impl<'tcx> TypeMap<'tcx> { pub fn new() -> TypeMap<'tcx> { TypeMap { unique_id_interner: Interner::new(), - type_to_metadata: FnvHashMap(), - unique_id_to_metadata: FnvHashMap(), - type_to_unique_id: FnvHashMap(), + type_to_metadata: FxHashMap(), + unique_id_to_metadata: FxHashMap(), + type_to_unique_id: FxHashMap(), } } @@ -117,9 +113,8 @@ impl<'tcx> TypeMap<'tcx> { unique_type_id: UniqueTypeId, metadata: DIType) { if self.unique_id_to_metadata.insert(unique_type_id, metadata).is_some() { - let unique_type_id_str = self.get_unique_type_id_as_string(unique_type_id); bug!("Type metadata for unique id '{}' is already in the TypeMap!", - &unique_type_id_str[..]); + self.get_unique_type_id_as_string(unique_type_id)); } } @@ -133,7 +128,7 @@ impl<'tcx> TypeMap<'tcx> { // Get the string representation of a UniqueTypeId. This method will fail if // the id is unknown. - fn get_unique_type_id_as_string(&self, unique_type_id: UniqueTypeId) -> Rc { + fn get_unique_type_id_as_string(&self, unique_type_id: UniqueTypeId) -> &str { let UniqueTypeId(interner_key) = unique_type_id; self.unique_id_interner.get(interner_key) } @@ -151,21 +146,11 @@ impl<'tcx> TypeMap<'tcx> { // The hasher we are using to generate the UniqueTypeId. We want // something that provides more than the 64 bits of the DefaultHasher. 
- const TYPE_ID_HASH_LENGTH: usize = 20; - let mut type_id_hasher = TypeIdHasher::new(cx.tcx(), - Blake2bHasher::new(TYPE_ID_HASH_LENGTH, &[])); + let mut type_id_hasher = TypeIdHasher::<[u8; 20]>::new(cx.tcx()); type_id_hasher.visit_ty(type_); - let mut hash_state = type_id_hasher.into_inner(); - let hash: &[u8] = hash_state.finalize(); - debug_assert!(hash.len() == TYPE_ID_HASH_LENGTH); - - let mut unique_type_id = String::with_capacity(TYPE_ID_HASH_LENGTH * 2); - - for byte in hash.into_iter() { - write!(&mut unique_type_id, "{:x}", byte).unwrap(); - } + let unique_type_id = type_id_hasher.finish().to_hex(); let key = self.unique_id_interner.intern(&unique_type_id); self.type_to_unique_id.insert(type_, UniqueTypeId(key)); @@ -182,7 +167,7 @@ impl<'tcx> TypeMap<'tcx> { -> UniqueTypeId { let enum_type_id = self.get_unique_type_id_of_type(cx, enum_type); let enum_variant_type_id = format!("{}::{}", - &self.get_unique_type_id_as_string(enum_type_id), + self.get_unique_type_id_as_string(enum_type_id), variant_name); let interner_key = self.unique_id_interner.intern(&enum_variant_type_id); UniqueTypeId(interner_key) @@ -350,14 +335,14 @@ fn vec_slice_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, llvm_type: member_llvm_types[0], type_metadata: element_type_metadata, offset: ComputedMemberOffset, - flags: FLAGS_NONE + flags: DIFlags::FlagZero, }, MemberDescription { name: "length".to_string(), llvm_type: member_llvm_types[1], type_metadata: type_metadata(cx, cx.tcx().types.usize, span), offset: ComputedMemberOffset, - flags: FLAGS_NONE + flags: DIFlags::FlagZero, }, ]; @@ -394,16 +379,16 @@ fn subroutine_type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, { let signature = cx.tcx().erase_late_bound_regions(signature); - let mut signature_metadata: Vec = Vec::with_capacity(signature.inputs.len() + 1); + let mut signature_metadata: Vec = Vec::with_capacity(signature.inputs().len() + 1); // return type - signature_metadata.push(match signature.output.sty { + signature_metadata.push(match signature.output().sty { ty::TyTuple(ref tys) if tys.is_empty() => ptr::null_mut(), - _ => type_metadata(cx, signature.output, span) + _ => type_metadata(cx, signature.output(), span) }); // regular arguments - for &argument_type in &signature.inputs { + for &argument_type in signature.inputs() { signature_metadata.push(type_metadata(cx, argument_type, span)); } @@ -434,8 +419,13 @@ fn trait_pointer_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, // type is assigned the correct name, size, namespace, and source location. // But it does not describe the trait's methods. - let def_id = match trait_type.sty { - ty::TyTrait(ref data) => data.principal.def_id(), + let containing_scope = match trait_type.sty { + ty::TyDynamic(ref data, ..) 
=> if let Some(principal) = data.principal() { + let def_id = principal.def_id(); + get_namespace_and_span_for_item(cx, def_id).0 + } else { + NO_SCOPE_METADATA + }, _ => { bug!("debuginfo: Unexpected trait-object type in \ trait_pointer_metadata(): {:?}", @@ -447,8 +437,6 @@ fn trait_pointer_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let trait_type_name = compute_debuginfo_type_name(cx, trait_object_type, false); - let (containing_scope, _) = get_namespace_and_span_for_item(cx, def_id); - let trait_llvm_type = type_of::type_of(cx, trait_object_type); let file_metadata = unknown_file_metadata(cx); @@ -523,7 +511,7 @@ pub fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty::TyStr => { fixed_vec_metadata(cx, unique_type_id, cx.tcx().types.i8, None, usage_site_span) } - ty::TyTrait(..) => { + ty::TyDynamic(..) => { MetadataCreationResult::new( trait_pointer_metadata(cx, t, None, unique_type_id), false) @@ -538,7 +526,7 @@ pub fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty::TyStr => { vec_slice_metadata(cx, t, cx.tcx().types.u8, unique_type_id, usage_site_span) } - ty::TyTrait(..) => { + ty::TyDynamic(..) => { MetadataCreationResult::new( trait_pointer_metadata(cx, ty, Some(t), unique_type_id), false) @@ -574,10 +562,11 @@ pub fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, MetadataCreationResult::new(pointer_type_metadata(cx, t, fn_metadata), false) } - ty::TyClosure(_, ref substs) => { + ty::TyClosure(def_id, substs) => { + let upvar_tys : Vec<_> = substs.upvar_tys(def_id, cx.tcx()).collect(); prepare_tuple_metadata(cx, t, - &substs.upvar_tys, + &upvar_tys, unique_type_id, usage_site_span).finalize(cx) } @@ -622,14 +611,12 @@ pub fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let metadata_for_uid = match type_map.find_metadata_for_unique_id(unique_type_id) { Some(metadata) => metadata, None => { - let unique_type_id_str = - type_map.get_unique_type_id_as_string(unique_type_id); span_bug!(usage_site_span, "Expected type metadata for unique \ type id '{}' to already be in \ the debuginfo::TypeMap but it \ was not. (Ty = {})", - &unique_type_id_str[..], + type_map.get_unique_type_id_as_string(unique_type_id), t); } }; @@ -637,14 +624,12 @@ pub fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, match type_map.find_metadata_for_type(t) { Some(metadata) => { if metadata != metadata_for_uid { - let unique_type_id_str = - type_map.get_unique_type_id_as_string(unique_type_id); span_bug!(usage_site_span, "Mismatch between Ty and \ UniqueTypeId maps in \ debuginfo::TypeMap. \ UniqueTypeId={}, Ty={}", - &unique_type_id_str[..], + type_map.get_unique_type_id_as_string(unique_type_id), t); } } @@ -808,7 +793,7 @@ pub fn compile_unit_metadata(scc: &SharedCrateContext, }; fn fallback_path(scc: &SharedCrateContext) -> CString { - CString::new(scc.link_meta().crate_name.clone()).unwrap() + CString::new(scc.link_meta().crate_name.to_string()).unwrap() } } @@ -841,7 +826,7 @@ struct MemberDescription { llvm_type: Type, type_metadata: DIType, offset: MemberOffset, - flags: c_uint + flags: DIFlags, } // A factory for MemberDescriptions. 
It produces a list of member descriptions @@ -885,25 +870,28 @@ impl<'tcx> MemberDescriptionFactory<'tcx> { // Creates MemberDescriptions for the fields of a struct struct StructMemberDescriptionFactory<'tcx> { - variant: ty::VariantDef<'tcx>, + ty: Ty<'tcx>, + variant: &'tcx ty::VariantDef, substs: &'tcx Substs<'tcx>, - is_simd: bool, span: Span, } impl<'tcx> StructMemberDescriptionFactory<'tcx> { fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>) -> Vec { - let field_size = if self.is_simd { - let fty = monomorphize::field_ty(cx.tcx(), - self.substs, - &self.variant.fields[0]); - Some(machine::llsize_of_alloc( - cx, - type_of::type_of(cx, fty) - ) as usize) - } else { - None + let layout = cx.layout_of(self.ty); + + let tmp; + let offsets = match *layout { + layout::Univariant { ref variant, .. } => &variant.offsets, + layout::Vector { element, count } => { + let element_size = element.size(&cx.tcx().data_layout).bytes(); + tmp = (0..count). + map(|i| layout::Size::from_bytes(i*element_size)) + .collect::>(); + &tmp + } + _ => bug!("{} is not a struct", self.ty) }; self.variant.fields.iter().enumerate().map(|(i, f)| { @@ -914,18 +902,14 @@ impl<'tcx> StructMemberDescriptionFactory<'tcx> { }; let fty = monomorphize::field_ty(cx.tcx(), self.substs, f); - let offset = if self.is_simd { - FixedMemberOffset { bytes: i * field_size.unwrap() } - } else { - ComputedMemberOffset - }; + let offset = FixedMemberOffset { bytes: offsets[i].bytes() as usize}; MemberDescription { name: name, llvm_type: type_of::type_of(cx, fty), type_metadata: type_metadata(cx, fty, self.span), offset: offset, - flags: FLAGS_NONE, + flags: DIFlags::FlagZero, } }).collect() } @@ -960,9 +944,9 @@ fn prepare_struct_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, struct_metadata_stub, struct_llvm_type, StructMDF(StructMemberDescriptionFactory { + ty: struct_type, variant: variant, substs: substs, - is_simd: struct_type.is_simd(), span: span, }) ) @@ -974,6 +958,7 @@ fn prepare_struct_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, // Creates MemberDescriptions for the fields of a tuple struct TupleMemberDescriptionFactory<'tcx> { + ty: Ty<'tcx>, component_types: Vec>, span: Span, } @@ -981,6 +966,13 @@ struct TupleMemberDescriptionFactory<'tcx> { impl<'tcx> TupleMemberDescriptionFactory<'tcx> { fn create_member_descriptions<'a>(&self, cx: &CrateContext<'a, 'tcx>) -> Vec { + let layout = cx.layout_of(self.ty); + let offsets = if let layout::Univariant { ref variant, .. 
} = *layout { + &variant.offsets + } else { + bug!("{} is not a tuple", self.ty); + }; + self.component_types .iter() .enumerate() @@ -989,8 +981,8 @@ impl<'tcx> TupleMemberDescriptionFactory<'tcx> { name: format!("__{}", i), llvm_type: type_of::type_of(cx, component_type), type_metadata: type_metadata(cx, component_type, self.span), - offset: ComputedMemberOffset, - flags: FLAGS_NONE, + offset: FixedMemberOffset { bytes: offsets[i].bytes() as usize }, + flags: DIFlags::FlagZero, } }).collect() } @@ -1016,6 +1008,7 @@ fn prepare_tuple_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, NO_SCOPE_METADATA), tuple_llvm_type, TupleMDF(TupleMemberDescriptionFactory { + ty: tuple_type, component_types: component_types.to_vec(), span: span, }) @@ -1027,7 +1020,7 @@ fn prepare_tuple_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, //=----------------------------------------------------------------------------- struct UnionMemberDescriptionFactory<'tcx> { - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, substs: &'tcx Substs<'tcx>, span: Span, } @@ -1042,7 +1035,7 @@ impl<'tcx> UnionMemberDescriptionFactory<'tcx> { llvm_type: type_of::type_of(cx, fty), type_metadata: type_metadata(cx, fty, self.span), offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE, + flags: DIFlags::FlagZero, } }).collect() } @@ -1140,7 +1133,7 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { llvm_type: variant_llvm_type, type_metadata: variant_type_metadata, offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE + flags: DIFlags::FlagZero } }).collect() }, @@ -1174,7 +1167,7 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { llvm_type: variant_llvm_type, type_metadata: variant_type_metadata, offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE + flags: DIFlags::FlagZero } ] } @@ -1211,7 +1204,7 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { llvm_type: non_null_llvm_type, type_metadata: non_null_type_metadata, offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE + flags: DIFlags::FlagZero }; let unique_type_id = debug_context(cx).type_map @@ -1248,13 +1241,13 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { llvm_type: artificial_struct_llvm_type, type_metadata: artificial_struct_metadata, offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE + flags: DIFlags::FlagZero } ] }, layout::StructWrappedNullablePointer { nonnull: ref struct_def, nndiscr, - ref discrfield, ..} => { + ref discrfield_source, ..} => { // Create a description of the non-null variant let (variant_type_metadata, variant_llvm_type, member_description_factory) = describe_enum_variant(cx, @@ -1277,12 +1270,12 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { // member's name. let null_variant_index = (1 - nndiscr) as usize; let null_variant_name = adt.variants[null_variant_index].name; - let discrfield = discrfield.iter() + let discrfield_source = discrfield_source.iter() .skip(1) .map(|x| x.to_string()) .collect::>().join("$"); let union_member_name = format!("RUST$ENCODED$ENUM${}${}", - discrfield, + discrfield_source, null_variant_name); // Create the (singleton) list of descriptions of union members. @@ -1292,7 +1285,7 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { llvm_type: variant_llvm_type, type_metadata: variant_type_metadata, offset: FixedMemberOffset { bytes: 0 }, - flags: FLAGS_NONE + flags: DIFlags::FlagZero } ] }, @@ -1304,6 +1297,8 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> { // Creates MemberDescriptions for the fields of a single enum variant. 
struct VariantMemberDescriptionFactory<'tcx> { + // Cloned from the layout::Struct describing the variant. + offsets: &'tcx [layout::Size], args: Vec<(String, Ty<'tcx>)>, discriminant_type_metadata: Option, span: Span, @@ -1320,8 +1315,8 @@ impl<'tcx> VariantMemberDescriptionFactory<'tcx> { Some(metadata) if i == 0 => metadata, _ => type_metadata(cx, ty, self.span) }, - offset: ComputedMemberOffset, - flags: FLAGS_NONE + offset: FixedMemberOffset { bytes: self.offsets[i].bytes() as usize }, + flags: DIFlags::FlagZero } }).collect() } @@ -1340,8 +1335,8 @@ enum EnumDiscriminantInfo { // full RecursiveTypeDescription. fn describe_enum_variant<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, enum_type: Ty<'tcx>, - struct_def: &layout::Struct, - variant: ty::VariantDef<'tcx>, + struct_def: &'tcx layout::Struct, + variant: &'tcx ty::VariantDef, discriminant_info: EnumDiscriminantInfo, containing_scope: DIScope, span: Span) @@ -1360,7 +1355,7 @@ fn describe_enum_variant<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ref l @ _ => bug!("This should be unreachable. Type is {:#?} layout is {:#?}", enum_type, l) }; - let mut field_tys = variant.fields.iter().map(|f: ty::FieldDef<'tcx>| { + let mut field_tys = variant.fields.iter().map(|f| { monomorphize::field_ty(cx.tcx(), &substs, f) }).collect::>(); @@ -1424,6 +1419,7 @@ fn describe_enum_variant<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let member_description_factory = VariantMDF(VariantMemberDescriptionFactory { + offsets: &struct_def.offsets[..], args: args, discriminant_type_metadata: match discriminant_info { RegularDiscriminant(discriminant_type_metadata) => { @@ -1525,13 +1521,10 @@ fn prepare_enum_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let enum_llvm_type = type_of::type_of(cx, enum_type); let (enum_type_size, enum_type_align) = size_and_align_of(cx, enum_llvm_type); - let unique_type_id_str = debug_context(cx) - .type_map - .borrow() - .get_unique_type_id_as_string(unique_type_id); - let enum_name = CString::new(enum_name).unwrap(); - let unique_type_id_str = CString::new(unique_type_id_str.as_bytes()).unwrap(); + let unique_type_id_str = CString::new( + debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id).as_bytes() + ).unwrap(); let enum_metadata = unsafe { llvm::LLVMRustDIBuilderCreateUnionType( DIB(cx), @@ -1541,7 +1534,7 @@ fn prepare_enum_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, UNKNOWN_LINE_NUMBER, bytes_to_bits(enum_type_size), bytes_to_bits(enum_type_align), - 0, // Flags + DIFlags::FlagZero, ptr::null_mut(), 0, // RuntimeLang unique_type_id_str.as_ptr()) @@ -1565,7 +1558,7 @@ fn prepare_enum_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, fn get_enum_discriminant_name(cx: &CrateContext, def_id: DefId) - -> token::InternedString { + -> InternedString { cx.tcx().item_name(def_id).as_str() } } @@ -1668,11 +1661,10 @@ fn create_struct_stub(cx: &CrateContext, -> DICompositeType { let (struct_size, struct_align) = size_and_align_of(cx, struct_llvm_type); - let unique_type_id_str = debug_context(cx).type_map - .borrow() - .get_unique_type_id_as_string(unique_type_id); let name = CString::new(struct_type_name).unwrap(); - let unique_type_id = CString::new(unique_type_id_str.as_bytes()).unwrap(); + let unique_type_id = CString::new( + debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id).as_bytes() + ).unwrap(); let metadata_stub = unsafe { // LLVMRustDIBuilderCreateStructType() wants an empty array. 
A null // pointer will lead to hard to trace and debug LLVM assertions @@ -1687,7 +1679,7 @@ fn create_struct_stub(cx: &CrateContext, UNKNOWN_LINE_NUMBER, bytes_to_bits(struct_size), bytes_to_bits(struct_align), - 0, + DIFlags::FlagZero, ptr::null_mut(), empty_array, 0, @@ -1706,11 +1698,10 @@ fn create_union_stub(cx: &CrateContext, -> DICompositeType { let (union_size, union_align) = size_and_align_of(cx, union_llvm_type); - let unique_type_id_str = debug_context(cx).type_map - .borrow() - .get_unique_type_id_as_string(unique_type_id); let name = CString::new(union_type_name).unwrap(); - let unique_type_id = CString::new(unique_type_id_str.as_bytes()).unwrap(); + let unique_type_id = CString::new( + debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id).as_bytes() + ).unwrap(); let metadata_stub = unsafe { // LLVMRustDIBuilderCreateUnionType() wants an empty array. A null // pointer will lead to hard to trace and debug LLVM assertions @@ -1725,7 +1716,7 @@ fn create_union_stub(cx: &CrateContext, UNKNOWN_LINE_NUMBER, bytes_to_bits(union_size), bytes_to_bits(union_align), - 0, // Flags + DIFlags::FlagZero, empty_array, 0, // RuntimeLang unique_type_id.as_ptr()) @@ -1765,13 +1756,17 @@ pub fn create_global_var_metadata(cx: &CrateContext, }; let is_local_to_unit = is_node_local_to_unit(cx, node_id); - let variable_type = tcx.erase_regions(&tcx.tables().node_id_to_type(node_id)); + let variable_type = tcx.erase_regions(&tcx.item_type(node_def_id)); let type_metadata = type_metadata(cx, variable_type, span); let var_name = tcx.item_name(node_def_id).to_string(); let linkage_name = mangled_name_of_item(cx, node_def_id, ""); let var_name = CString::new(var_name).unwrap(); let linkage_name = CString::new(linkage_name).unwrap(); + + let ty = cx.tcx().item_type(node_def_id); + let global_align = type_of::align_of(cx, ty); + unsafe { llvm::LLVMRustDIBuilderCreateStaticVariable(DIB(cx), var_scope, @@ -1782,7 +1777,9 @@ pub fn create_global_var_metadata(cx: &CrateContext, type_metadata, is_local_to_unit, global, - ptr::null_mut()); + ptr::null_mut(), + global_align as u64, + ); } } diff --git a/src/librustc_trans/debuginfo/mod.rs b/src/librustc_trans/debuginfo/mod.rs index 3bc5f4f3db..4e511c0584 100644 --- a/src/librustc_trans/debuginfo/mod.rs +++ b/src/librustc_trans/debuginfo/mod.rs @@ -22,10 +22,8 @@ use self::source_loc::InternalDebugLocation::{self, UnknownLocation}; use llvm; use llvm::{ModuleRef, ContextRef, ValueRef}; -use llvm::debuginfo::{DIFile, DIType, DIScope, DIBuilderRef, DISubprogram, DIArray, - FlagPrototyped}; +use llvm::debuginfo::{DIFile, DIType, DIScope, DIBuilderRef, DISubprogram, DIArray, DIFlags}; use rustc::hir::def_id::DefId; -use rustc::hir::map::DefPathData; use rustc::ty::subst::Substs; use abi::Abi; @@ -34,7 +32,7 @@ use monomorphize::{self, Instance}; use rustc::ty::{self, Ty}; use rustc::mir; use session::config::{self, FullDebugInfo, LimitedDebugInfo, NoDebugInfo}; -use util::nodemap::{DefIdMap, FnvHashMap, FnvHashSet}; +use util::nodemap::{DefIdMap, FxHashMap, FxHashSet}; use libc::c_uint; use std::cell::{Cell, RefCell}; @@ -68,15 +66,15 @@ pub struct CrateDebugContext<'tcx> { llcontext: ContextRef, builder: DIBuilderRef, current_debug_location: Cell, - created_files: RefCell>, - created_enum_disr_types: RefCell>, + created_files: RefCell>, + created_enum_disr_types: RefCell>, type_map: RefCell>, namespace_map: RefCell>, // This collection is used to assert that composite types (structs, enums, // ...) 
have their members only set once: - composite_types_completed: RefCell>, + composite_types_completed: RefCell>, } impl<'tcx> CrateDebugContext<'tcx> { @@ -89,11 +87,11 @@ impl<'tcx> CrateDebugContext<'tcx> { llcontext: llcontext, builder: builder, current_debug_location: Cell::new(InternalDebugLocation::UnknownLocation), - created_files: RefCell::new(FnvHashMap()), - created_enum_disr_types: RefCell::new(FnvHashMap()), + created_files: RefCell::new(FxHashMap()), + created_enum_disr_types: RefCell::new(FxHashMap()), type_map: RefCell::new(TypeMap::new()), namespace_map: RefCell::new(DefIdMap()), - composite_types_completed: RefCell::new(FnvHashSet()), + composite_types_completed: RefCell::new(FxHashSet()), }; } } @@ -248,21 +246,19 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, }; // Find the enclosing function, in case this is a closure. - let mut fn_def_id = instance.def; - let mut def_key = cx.tcx().def_key(fn_def_id); + let def_key = cx.tcx().def_key(instance.def); let mut name = def_key.disambiguated_data.data.to_string(); let name_len = name.len(); - while def_key.disambiguated_data.data == DefPathData::ClosureExpr { - fn_def_id.index = def_key.parent.expect("closure without a parent?"); - def_key = cx.tcx().def_key(fn_def_id); - } + + let fn_def_id = cx.tcx().closure_base_def_id(instance.def); // Get_template_parameters() will append a `<...>` clause to the function // name if necessary. - let generics = cx.tcx().lookup_generics(fn_def_id); + let generics = cx.tcx().item_generics(fn_def_id); + let substs = instance.substs.truncate_to(cx.tcx(), generics); let template_parameters = get_template_parameters(cx, &generics, - instance.substs, + substs, file_metadata, &mut name); @@ -289,7 +285,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, is_local_to_unit, true, scope_line as c_uint, - FlagPrototyped as c_uint, + DIFlags::FlagPrototyped, cx.sess().opts.optimize != config::OptLevel::No, llfn, template_parameters, @@ -312,18 +308,18 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, return create_DIArray(DIB(cx), &[]); } - let mut signature = Vec::with_capacity(sig.inputs.len() + 1); + let mut signature = Vec::with_capacity(sig.inputs().len() + 1); // Return type -- llvm::DIBuilder wants this at index 0 - signature.push(match sig.output.sty { + signature.push(match sig.output().sty { ty::TyTuple(ref tys) if tys.is_empty() => ptr::null_mut(), - _ => type_metadata(cx, sig.output, syntax_pos::DUMMY_SP) + _ => type_metadata(cx, sig.output(), syntax_pos::DUMMY_SP) }); let inputs = if abi == Abi::RustCall { - &sig.inputs[..sig.inputs.len()-1] + &sig.inputs()[..sig.inputs().len() - 1] } else { - &sig.inputs[..] 
+ sig.inputs() }; // Arguments types @@ -331,8 +327,8 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, signature.push(type_metadata(cx, argument_type, syntax_pos::DUMMY_SP)); } - if abi == Abi::RustCall && !sig.inputs.is_empty() { - if let ty::TyTuple(args) = sig.inputs[sig.inputs.len() - 1].sty { + if abi == Abi::RustCall && !sig.inputs().is_empty() { + if let ty::TyTuple(args) = sig.inputs()[sig.inputs().len() - 1].sty { for &argument_type in args { signature.push(type_metadata(cx, argument_type, syntax_pos::DUMMY_SP)); } @@ -397,7 +393,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, generics: &ty::Generics<'tcx>) -> Vec { let mut names = generics.parent.map_or(vec![], |def_id| { - get_type_parameter_names(cx, cx.tcx().lookup_generics(def_id)) + get_type_parameter_names(cx, cx.tcx().item_generics(def_id)) }); names.extend(generics.types.iter().map(|param| param.name)); names @@ -412,7 +408,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let self_type = cx.tcx().impl_of_method(instance.def).and_then(|impl_def_id| { // If the method does *not* belong to a trait, proceed if cx.tcx().trait_id_of_impl(impl_def_id).is_none() { - let impl_self_ty = cx.tcx().lookup_item_type(impl_def_id).ty; + let impl_self_ty = cx.tcx().item_type(impl_def_id); let impl_self_ty = cx.tcx().erase_regions(&impl_self_ty); let impl_self_ty = monomorphize::apply_param_substs(cx.shared(), instance.substs, @@ -466,6 +462,7 @@ pub fn declare_local<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, LocalVariable | CapturedVariable => (0, DW_TAG_auto_variable) }; + let align = ::type_of::align_of(cx, variable_type); let name = CString::new(variable_name.as_str().as_bytes()).unwrap(); match (variable_access, &[][..]) { @@ -481,8 +478,10 @@ pub fn declare_local<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, loc.line as c_uint, type_metadata, cx.sess().opts.optimize != config::OptLevel::No, - 0, - argument_index) + DIFlags::FlagZero, + argument_index, + align as u64, + ) }; source_loc::set_debug_location(cx, None, InternalDebugLocation::new(scope_metadata, loc.line, loc.col.to_usize())); diff --git a/src/librustc_trans/debuginfo/namespace.rs b/src/librustc_trans/debuginfo/namespace.rs index 5953ec4aae..521dd7530b 100644 --- a/src/librustc_trans/debuginfo/namespace.rs +++ b/src/librustc_trans/debuginfo/namespace.rs @@ -35,7 +35,7 @@ pub fn mangled_name_of_item(ccx: &CrateContext, def_id: DefId, extra: &str) -> S } let name = match def_key.disambiguated_data.data { - DefPathData::CrateRoot => ccx.tcx().crate_name(def_id.krate), + DefPathData::CrateRoot => ccx.tcx().crate_name(def_id.krate).as_str(), data => data.as_interned_str() }; @@ -64,12 +64,12 @@ pub fn item_namespace(ccx: &CrateContext, def_id: DefId) -> DIScope { }); let namespace_name = match def_key.disambiguated_data.data { - DefPathData::CrateRoot => ccx.tcx().crate_name(def_id.krate), + DefPathData::CrateRoot => ccx.tcx().crate_name(def_id.krate).as_str(), data => data.as_interned_str() }; let namespace_name = CString::new(namespace_name.as_bytes()).unwrap(); - let span = ccx.tcx().map.def_id_span(def_id, DUMMY_SP); + let span = ccx.tcx().def_span(def_id); let (file, line) = if span != DUMMY_SP { let loc = span_start(ccx, span); (file_metadata(ccx, &loc.file.name, &loc.file.abs_path), loc.line as c_uint) diff --git a/src/librustc_trans/debuginfo/type_names.rs b/src/librustc_trans/debuginfo/type_names.rs index 956402edc1..788ce32937 100644 --- a/src/librustc_trans/debuginfo/type_names.rs +++ 
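The signature-building code above keeps special-casing the "rust-call" ABI: the final formal argument is a tuple whose element types get spliced into the debuginfo signature. A simplified sketch of that untupling, using plain strings instead of `Ty<'tcx>`:

```rust
// Toy model of the rust-call untupling: drop the synthetic trailing tuple
// argument and append its element types instead.
fn signature_parts(inputs: &[&str], is_rust_call: bool, last_tuple: &[&str]) -> Vec<String> {
    let plain = if is_rust_call {
        &inputs[..inputs.len() - 1] // drop the synthetic tuple argument
    } else {
        inputs
    };
    let mut out: Vec<String> = plain.iter().map(|s| s.to_string()).collect();
    if is_rust_call && !inputs.is_empty() {
        out.extend(last_tuple.iter().map(|s| s.to_string())); // splice its elements back in
    }
    out
}

fn main() {
    let sig = signature_parts(&["&self", "(i32, i32)"], true, &["i32", "i32"]);
    assert_eq!(sig, ["&self", "i32", "i32"]);
}
```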
b/src/librustc_trans/debuginfo/type_names.rs @@ -93,11 +93,13 @@ pub fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, push_debuginfo_type_name(cx, inner_type, true, output); output.push(']'); }, - ty::TyTrait(ref trait_data) => { - let principal = cx.tcx().erase_late_bound_regions_and_normalize( - &trait_data.principal); - push_item_name(cx, principal.def_id, false, output); - push_type_params(cx, principal.substs, output); + ty::TyDynamic(ref trait_data, ..) => { + if let Some(principal) = trait_data.principal() { + let principal = cx.tcx().erase_late_bound_regions_and_normalize( + &principal); + push_item_name(cx, principal.def_id, false, output); + push_type_params(cx, principal.substs, output); + } }, ty::TyFnDef(.., &ty::BareFnTy{ unsafety, abi, ref sig } ) | ty::TyFnPtr(&ty::BareFnTy{ unsafety, abi, ref sig } ) => { @@ -114,8 +116,8 @@ pub fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, output.push_str("fn("); let sig = cx.tcx().erase_late_bound_regions_and_normalize(sig); - if !sig.inputs.is_empty() { - for ¶meter_type in &sig.inputs { + if !sig.inputs().is_empty() { + for ¶meter_type in sig.inputs() { push_debuginfo_type_name(cx, parameter_type, true, output); output.push_str(", "); } @@ -124,7 +126,7 @@ pub fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } if sig.variadic { - if !sig.inputs.is_empty() { + if !sig.inputs().is_empty() { output.push_str(", ..."); } else { output.push_str("..."); @@ -133,9 +135,9 @@ pub fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, output.push(')'); - if !sig.output.is_nil() { + if !sig.output().is_nil() { output.push_str(" -> "); - push_debuginfo_type_name(cx, sig.output, true, output); + push_debuginfo_type_name(cx, sig.output(), true, output); } }, ty::TyClosure(..) => { @@ -156,7 +158,7 @@ pub fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, qualified: bool, output: &mut String) { if qualified { - output.push_str(&cx.tcx().crate_name(def_id.krate)); + output.push_str(&cx.tcx().crate_name(def_id.krate).as_str()); for path_element in cx.tcx().def_path(def_id).data { output.push_str("::"); output.push_str(&path_element.data.as_interned_str()); diff --git a/src/librustc_trans/debuginfo/utils.rs b/src/librustc_trans/debuginfo/utils.rs index 3cdac485fe..3ee2497009 100644 --- a/src/librustc_trans/debuginfo/utils.rs +++ b/src/librustc_trans/debuginfo/utils.rs @@ -34,7 +34,7 @@ pub fn is_node_local_to_unit(cx: &CrateContext, node_id: ast::NodeId) -> bool // visible). It might better to use the `exported_items` set from // `driver::CrateAnalysis` in the future, but (atm) this set is not // available in the translation pass. - !cx.reachable().contains(&node_id) + !cx.exported_symbols().contains(&node_id) } #[allow(non_snake_case)] @@ -79,7 +79,7 @@ pub fn get_namespace_and_span_for_item(cx: &CrateContext, def_id: DefId) }); // Try to get some span information, if we have an inlined item. - let definition_span = cx.tcx().map.def_id_span(def_id, syntax_pos::DUMMY_SP); + let definition_span = cx.tcx().def_span(def_id); (containing_scope, definition_span) } diff --git a/src/librustc_trans/declare.rs b/src/librustc_trans/declare.rs index 1ec5ca4a56..9bf023fc18 100644 --- a/src/librustc_trans/declare.rs +++ b/src/librustc_trans/declare.rs @@ -19,14 +19,17 @@ //! interested in defining the ValueRef they return. //! * Use define_* family of methods when you might be defining the ValueRef. //! * When in doubt, define. 
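The `type_names.rs` hunks keep the same pretty-printing rules for function types while switching to the `sig.inputs()`/`sig.output()` accessors. As a standalone illustration of those formatting rules (comma separation, the variadic `...`, and printing the return type only when it is not unit); the string-based signature is purely for the example:

```rust
// Illustrative reimplementation of the fn-type name formatting logic.
fn push_fn_type_name(inputs: &[&str], variadic: bool, output_ty: &str, out: &mut String) {
    out.push_str("fn(");
    if !inputs.is_empty() {
        out.push_str(&inputs.join(", "));
    }
    if variadic {
        if !inputs.is_empty() {
            out.push_str(", ...");
        } else {
            out.push_str("...");
        }
    }
    out.push(')');
    if output_ty != "()" {
        out.push_str(" -> ");
        out.push_str(output_ty);
    }
}

fn main() {
    let mut s = String::new();
    push_fn_type_name(&["i32", "u8"], true, "()", &mut s);
    assert_eq!(s, "fn(i32, u8, ...)");
}
```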
+ use llvm::{self, ValueRef}; use llvm::AttributePlace::Function; use rustc::ty; use abi::{Abi, FnType}; use attributes; use context::CrateContext; +use common; use type_::Type; use value::Value; +use syntax::attr; use std::ffi::CString; @@ -69,6 +72,16 @@ fn declare_raw_fn(ccx: &CrateContext, name: &str, callconv: llvm::CallConv, ty: llvm::Attribute::NoRedZone.apply_llfn(Function, llfn); } + // If we're compiling the compiler-builtins crate, e.g. the equivalent of + // compiler-rt, then we want to implicitly compile everything with hidden + // visibility as we're going to link this object all over the place but + // don't want the symbols to get exported. + if attr::contains_name(ccx.tcx().map.krate_attrs(), "compiler_builtins") { + unsafe { + llvm::LLVMRustSetVisibility(llfn, llvm::Visibility::Hidden); + } + } + match ccx.tcx().sess.opts.cg.opt_level.as_ref().map(String::as_ref) { Some("s") => { llvm::Attribute::OptimizeForSize.apply_llfn(Function, llfn); @@ -103,15 +116,15 @@ pub fn declare_cfn(ccx: &CrateContext, name: &str, fn_type: Type) -> ValueRef { pub fn declare_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, name: &str, fn_type: ty::Ty<'tcx>) -> ValueRef { debug!("declare_rust_fn(name={:?}, fn_type={:?})", name, fn_type); - let abi = fn_type.fn_abi(); - let sig = ccx.tcx().erase_late_bound_regions_and_normalize(fn_type.fn_sig()); + let ty::BareFnTy { abi, ref sig, .. } = *common::ty_fn_ty(ccx, fn_type); + let sig = ccx.tcx().erase_late_bound_regions_and_normalize(sig); debug!("declare_rust_fn (after region erasure) sig={:?}", sig); let fty = FnType::new(ccx, abi, &sig, &[]); let llfn = declare_raw_fn(ccx, name, fty.cconv, fty.llvm_type(ccx)); // FIXME(canndrew): This is_never should really be an is_uninhabited - if sig.output.is_never() { + if sig.output().is_never() { llvm::Attribute::NoReturn.apply_llfn(Function, llfn); } diff --git a/src/librustc_trans/glue.rs b/src/librustc_trans/glue.rs index 648dd9f3e3..90bc29c39e 100644 --- a/src/librustc_trans/glue.rs +++ b/src/librustc_trans/glue.rs @@ -394,7 +394,7 @@ pub fn size_and_align_of_dst<'blk, 'tcx>(bcx: &BlockAndBuilder<'blk, 'tcx>, (size, align) } - ty::TyTrait(..) => { + ty::TyDynamic(..) => { // info points to the vtable and the second entry in the vtable is the // dynamic size of the object. let info = bcx.pointercast(info, Type::int(bcx.ccx()).ptr_to()); @@ -463,7 +463,7 @@ fn make_drop_glue<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, trans_exchange_free_ty(bcx, llbox, content_ty, DebugLoc::None) } } - ty::TyTrait(..) => { + ty::TyDynamic(..) => { // No support in vtable for distinguishing destroying with // versus without calling Drop::drop. Assert caller is // okay with always calling the Drop impl, if any. 
@@ -504,7 +504,7 @@ fn drop_structural_ty<'blk, 'tcx>(cx: Block<'blk, 'tcx>, fn iter_variant<'blk, 'tcx>(cx: Block<'blk, 'tcx>, t: Ty<'tcx>, av: adt::MaybeSizedValue, - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, substs: &Substs<'tcx>) -> Block<'blk, 'tcx> { let _icx = push_ctxt("iter_variant"); @@ -531,8 +531,8 @@ fn drop_structural_ty<'blk, 'tcx>(cx: Block<'blk, 'tcx>, let mut cx = cx; match t.sty { - ty::TyClosure(_, ref substs) => { - for (i, upvar_ty) in substs.upvar_tys.iter().enumerate() { + ty::TyClosure(def_id, substs) => { + for (i, upvar_ty) in substs.upvar_tys(def_id, cx.tcx()).enumerate() { let llupvar = adt::trans_field_ptr(cx, t, value, Disr(0), i); cx = drop_ty(cx, llupvar, upvar_ty, DebugLoc::None); } diff --git a/src/librustc_trans/intrinsic.rs b/src/librustc_trans/intrinsic.rs index b1b09d3ca2..577ffbad13 100644 --- a/src/librustc_trans/intrinsic.rs +++ b/src/librustc_trans/intrinsic.rs @@ -30,12 +30,13 @@ use rustc::ty::{self, Ty}; use Disr; use rustc::hir; use syntax::ast; -use syntax::parse::token; +use syntax::symbol::Symbol; use rustc::session::Session; use syntax_pos::{Span, DUMMY_SP}; use std::cmp::Ordering; +use std::iter; fn get_simple_intrinsic(ccx: &CrateContext, name: &str) -> Option { let llvm_name = match name { @@ -105,9 +106,9 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, }; let sig = tcx.erase_late_bound_regions_and_normalize(&fty.sig); - let arg_tys = sig.inputs; - let ret_ty = sig.output; - let name = tcx.item_name(def_id).as_str(); + let arg_tys = sig.inputs(); + let ret_ty = sig.output(); + let name = &*tcx.item_name(def_id).as_str(); let span = match call_debug_location { DebugLoc::ScopeAt(_, span) => span, @@ -123,15 +124,15 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, Call(bcx, llfn, &[], call_debug_location); Unreachable(bcx); return Result::new(bcx, C_undef(Type::nil(ccx).ptr_to())); - } else if &name[..] 
== "unreachable" { + } else if name == "unreachable" { Unreachable(bcx); return Result::new(bcx, C_nil(ccx)); } let llret_ty = type_of::type_of(ccx, ret_ty); - let simple = get_simple_intrinsic(ccx, &name); - let llval = match (simple, &name[..]) { + let simple = get_simple_intrinsic(ccx, name); + let llval = match (simple, name) { (Some(llfn), _) => { Call(bcx, llfn, &llargs, call_debug_location) } @@ -208,7 +209,7 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, } (_, "type_name") => { let tp_ty = substs.type_at(0); - let ty_name = token::intern_and_get_ident(&tp_ty.to_string()); + let ty_name = Symbol::intern(&tp_ty.to_string()).as_str(); C_str_slice(ccx, ty_name) } (_, "type_id") => { @@ -340,7 +341,7 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, let sty = &arg_tys[0].sty; match int_type_width_signed(sty, ccx) { Some((width, signed)) => - match &*name { + match name { "ctlz" => count_zeros_intrinsic(bcx, &format!("llvm.ctlz.i{}", width), llargs[0], call_debug_location), "cttz" => count_zeros_intrinsic(bcx, &format!("llvm.cttz.i{}", width), @@ -394,7 +395,7 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, let sty = &arg_tys[0].sty; match float_type_width(sty) { Some(_width) => - match &*name { + match name { "fadd_fast" => FAddFast(bcx, llargs[0], llargs[1], call_debug_location), "fsub_fast" => FSubFast(bcx, llargs[0], llargs[1], call_debug_location), "fmul_fast" => FMulFast(bcx, llargs[0], llargs[1], call_debug_location), @@ -674,7 +675,7 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, // again to find them and extract the arguments intr.inputs.iter() .zip(llargs) - .zip(&arg_tys) + .zip(arg_tys) .flat_map(|((t, llarg), ty)| modify_as_needed(bcx, t, ty, *llarg)) .collect() }; @@ -1012,11 +1013,7 @@ fn gen_fn<'a, 'tcx>(fcx: &FunctionContext<'a, 'tcx>, trans: &mut for<'b> FnMut(Block<'b, 'tcx>)) -> ValueRef { let ccx = fcx.ccx; - let sig = ty::FnSig { - inputs: inputs, - output: output, - variadic: false, - }; + let sig = ccx.tcx().mk_fn_sig(inputs.into_iter(), output, false); let fn_ty = FnType::new(ccx, Abi::Rust, &sig, &[]); let rust_fn_ty = ccx.tcx().mk_fn_ptr(ccx.tcx().mk_bare_fn(ty::BareFnTy { @@ -1051,11 +1048,7 @@ fn get_rust_try_fn<'a, 'tcx>(fcx: &FunctionContext<'a, 'tcx>, let fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Unsafe, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: vec![i8p], - output: tcx.mk_nil(), - variadic: false, - }), + sig: ty::Binder(tcx.mk_fn_sig(iter::once(i8p), tcx.mk_nil(), false)), })); let output = tcx.types.i32; let rust_try = gen_fn(fcx, "__rust_try", vec![fn_ty, i8p, i8p], output, trans); @@ -1108,7 +1101,7 @@ fn generic_simd_intrinsic<'blk, 'tcx, 'a> let tcx = bcx.tcx(); let sig = tcx.erase_late_bound_regions_and_normalize(callee_ty.fn_sig()); - let arg_tys = sig.inputs; + let arg_tys = sig.inputs(); // every intrinsic takes a SIMD vector as its first argument require_simd!(arg_tys[0], "input"); diff --git a/src/librustc_trans/lib.rs b/src/librustc_trans/lib.rs index 8ef7f04d4e..d842827b6f 100644 --- a/src/librustc_trans/lib.rs +++ b/src/librustc_trans/lib.rs @@ -23,12 +23,11 @@ html_root_url = "https://doc.rust-lang.org/nightly/")] #![cfg_attr(not(stage0), deny(warnings))] +#![feature(associated_consts)] #![feature(box_patterns)] #![feature(box_syntax)] -#![feature(cell_extras)] #![feature(const_fn)] #![feature(custom_attribute)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] 
#![allow(unused_attributes)] #![feature(libc)] #![feature(quote)] @@ -37,7 +36,6 @@ #![feature(slice_patterns)] #![feature(staged_api)] #![feature(unicode)] -#![cfg_attr(stage0, feature(question_mark))] use rustc::dep_graph::WorkProduct; @@ -55,6 +53,9 @@ extern crate rustc_platform_intrinsics as intrinsics; extern crate serialize; extern crate rustc_const_math; extern crate rustc_const_eval; +#[macro_use] +#[no_link] +extern crate rustc_bitflags; #[macro_use] extern crate log; #[macro_use] extern crate syntax; @@ -76,6 +77,7 @@ pub mod back { pub mod linker; pub mod link; pub mod lto; + pub mod symbol_export; pub mod symbol_names; pub mod write; pub mod msvc; @@ -101,6 +103,7 @@ mod cabi_arm; mod cabi_asmjs; mod cabi_mips; mod cabi_mips64; +mod cabi_msp430; mod cabi_powerpc; mod cabi_powerpc64; mod cabi_s390x; @@ -109,7 +112,6 @@ mod cabi_x86_64; mod cabi_x86_win64; mod callee; mod cleanup; -mod closure; mod collector; mod common; mod consts; @@ -167,7 +169,7 @@ pub struct CrateTranslation { pub metadata_module: ModuleTranslation, pub link: middle::cstore::LinkMeta, pub metadata: Vec, - pub reachable: Vec, + pub exported_symbols: back::symbol_export::ExportedSymbols, pub no_builtins: bool, pub windows_subsystem: Option, pub linker_info: back::linker::LinkerInfo diff --git a/src/librustc_trans/meth.rs b/src/librustc_trans/meth.rs index 1e687f5ff6..aa9b900fa4 100644 --- a/src/librustc_trans/meth.rs +++ b/src/librustc_trans/meth.rs @@ -110,42 +110,48 @@ pub fn trans_object_shim<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>, /// making an object `Foo` from a value of type `Foo`, then /// `trait_ref` would map `T:Trait`. pub fn get_vtable<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, - trait_ref: ty::PolyTraitRef<'tcx>) + ty: ty::Ty<'tcx>, + trait_ref: Option>) -> ValueRef { let tcx = ccx.tcx(); let _icx = push_ctxt("meth::get_vtable"); - debug!("get_vtable(trait_ref={:?})", trait_ref); + debug!("get_vtable(ty={:?}, trait_ref={:?})", ty, trait_ref); // Check the cache. - if let Some(&val) = ccx.vtables().borrow().get(&trait_ref) { + if let Some(&val) = ccx.vtables().borrow().get(&(ty, trait_ref)) { return val; } // Not in the cache. Build it. let nullptr = C_null(Type::nil(ccx).ptr_to()); - let methods = traits::get_vtable_methods(tcx, trait_ref).map(|opt_mth| { - opt_mth.map_or(nullptr, |(def_id, substs)| { - Callee::def(ccx, def_id, substs).reify(ccx) - }) - }); - let size_ty = sizing_type_of(ccx, trait_ref.self_ty()); + let size_ty = sizing_type_of(ccx, ty); let size = machine::llsize_of_alloc(ccx, size_ty); - let align = align_of(ccx, trait_ref.self_ty()); + let align = align_of(ccx, ty); - let components: Vec<_> = [ + let mut components: Vec<_> = [ // Generate a destructor for the vtable. 
- glue::get_drop_glue(ccx, trait_ref.self_ty()), + glue::get_drop_glue(ccx, ty), C_uint(ccx, size), C_uint(ccx, align) - ].iter().cloned().chain(methods).collect(); + ].iter().cloned().collect(); + + if let Some(trait_ref) = trait_ref { + let trait_ref = trait_ref.with_self_ty(tcx, ty); + let methods = traits::get_vtable_methods(tcx, trait_ref).map(|opt_mth| { + opt_mth.map_or(nullptr, |(def_id, substs)| { + Callee::def(ccx, def_id, substs).reify(ccx) + }) + }); + components.extend(methods); + } let vtable_const = C_struct(ccx, &components, false); let align = machine::llalign_of_pref(ccx, val_ty(vtable_const)); let vtable = consts::addr_of(ccx, vtable_const, align, "vtable"); - ccx.vtables().borrow_mut().insert(trait_ref, vtable); + ccx.vtables().borrow_mut().insert((ty, trait_ref), vtable); vtable } diff --git a/src/librustc_trans/mir/block.rs b/src/librustc_trans/mir/block.rs index 8bf27b4bab..bba6574109 100644 --- a/src/librustc_trans/mir/block.rs +++ b/src/librustc_trans/mir/block.rs @@ -11,7 +11,7 @@ use llvm::{self, ValueRef}; use rustc_const_eval::{ErrKind, ConstEvalErr, note_const_eval_err}; use rustc::middle::lang_items; -use rustc::ty; +use rustc::ty::{self, layout}; use rustc::mir; use abi::{Abi, FnType, ArgType}; use adt; @@ -29,8 +29,8 @@ use type_of; use glue; use type_::Type; -use rustc_data_structures::fnv::FnvHashMap; -use syntax::parse::token; +use rustc_data_structures::fx::FxHashMap; +use syntax::symbol::Symbol; use super::{MirContext, LocalRef}; use super::analyze::CleanupKind; @@ -116,6 +116,9 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { if let Some(cleanup_pad) = cleanup_pad { bcx.cleanup_ret(cleanup_pad, None); } else { + let llpersonality = bcx.fcx().eh_personality(); + bcx.set_personality_fn(llpersonality); + let ps = self.get_personality_slot(&bcx); let lp = bcx.load(ps); bcx.with_block(|bcx| { @@ -144,7 +147,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { adt::trans_get_discr(bcx, ty, discr_lvalue.llval, None, true) ); - let mut bb_hist = FnvHashMap(); + let mut bb_hist = FxHashMap(); for target in targets { *bb_hist.entry(target).or_insert(0) += 1; } @@ -219,7 +222,11 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { load } else { let op = self.trans_consume(&bcx, &mir::Lvalue::Local(mir::RETURN_POINTER)); - op.pack_if_pair(&bcx).immediate() + if let Ref(llval) = op.val { + bcx.with_block(|bcx| base::load_ty(&bcx, llval, op.ty)) + } else { + op.pack_if_pair(&bcx).immediate() + } }; bcx.ret(llval); } @@ -321,7 +328,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { // Get the location information. 
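`get_vtable` above is generalised so that the cache is keyed by `(ty, trait_ref)` and method slots are only appended when the object type actually has a principal trait. A small model of the resulting vtable layout, with placeholder integers standing in for the LLVM values:

```rust
// Minimal model of the vtable construction: the first three slots are always
// drop glue, size and align; method entries are appended only when a principal
// trait exists.
fn build_vtable(size: u64, align: u64, methods: Option<Vec<u64>>) -> Vec<u64> {
    let drop_glue = 0xdead_beef; // stand-in for the drop-glue function pointer
    let mut components = vec![drop_glue, size, align];
    if let Some(methods) = methods {
        components.extend(methods);
    }
    components
}

fn main() {
    // Objects with no principal trait (e.g. only auto traits) get just the header.
    assert_eq!(build_vtable(8, 8, None).len(), 3);
    assert_eq!(build_vtable(8, 8, Some(vec![1, 2])).len(), 5);
}
```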
let loc = bcx.sess().codemap().lookup_char_pos(span.lo); - let filename = token::intern_and_get_ident(&loc.file.name); + let filename = Symbol::intern(&loc.file.name).as_str(); let filename = C_str_slice(bcx.ccx(), filename); let line = C_u32(bcx.ccx(), loc.line as u32); @@ -351,7 +358,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { const_err) } mir::AssertMessage::Math(ref err) => { - let msg_str = token::intern_and_get_ident(err.description()); + let msg_str = Symbol::intern(err.description()).as_str(); let msg_str = C_str_slice(bcx.ccx(), msg_str); let msg_file_line = C_struct(bcx.ccx(), &[msg_str, filename, line], @@ -450,7 +457,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { return; } - let extra_args = &args[sig.inputs.len()..]; + let extra_args = &args[sig.inputs().len()..]; let extra_args = extra_args.iter().map(|op_arg| { let op_ty = op_arg.ty(&self.mir, bcx.tcx()); bcx.monomorphize(&op_ty) @@ -543,7 +550,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { // Make a fake operand for store_return let op = OperandRef { val: Ref(dst), - ty: sig.output, + ty: sig.output(), }; self.store_return(&bcx, ret_dest, fn_ty.ret, op); } @@ -581,7 +588,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { debug_loc.apply_to_bcx(ret_bcx); let op = OperandRef { val: Immediate(invokeret), - ty: sig.output, + ty: sig.output(), }; self.store_return(&ret_bcx, ret_dest, fn_ty.ret, op); }); @@ -592,7 +599,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { if let Some((_, target)) = *destination { let op = OperandRef { val: Immediate(llret), - ty: sig.output, + ty: sig.output(), }; self.store_return(&bcx, ret_dest, fn_ty.ret, op); funclet_br(self, bcx, target); @@ -719,8 +726,14 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { } Immediate(llval) => { + let l = bcx.ccx().layout_of(tuple.ty); + let v = if let layout::Univariant { ref variant, .. } = *l { + variant + } else { + bug!("Not a tuple."); + }; for (n, &ty) in arg_types.iter().enumerate() { - let mut elem = bcx.extract_value(llval, n); + let mut elem = bcx.extract_value(llval, v.memory_index[n] as usize); // Truncate bools to i1, if needed if ty.is_bool() && common::val_ty(elem) != Type::i1(bcx.ccx()) { elem = bcx.trunc(elem, Type::i1(bcx.ccx())); diff --git a/src/librustc_trans/mir/constant.rs b/src/librustc_trans/mir/constant.rs index 3d0d889760..bca81fa364 100644 --- a/src/librustc_trans/mir/constant.rs +++ b/src/librustc_trans/mir/constant.rs @@ -248,13 +248,8 @@ impl<'a, 'tcx> MirConstContext<'a, 'tcx> { let vtable = common::fulfill_obligation(ccx.shared(), DUMMY_SP, trait_ref); if let traits::VtableImpl(vtable_impl) = vtable { let name = ccx.tcx().item_name(instance.def); - let ac = ccx.tcx().impl_or_trait_items(vtable_impl.impl_def_id) - .iter().filter_map(|&def_id| { - match ccx.tcx().impl_or_trait_item(def_id) { - ty::ConstTraitItem(ac) => Some(ac), - _ => None - } - }).find(|ic| ic.name == name); + let ac = ccx.tcx().associated_items(vtable_impl.impl_def_id) + .find(|item| item.kind == ty::AssociatedKind::Const && item.name == name); if let Some(ac) = ac { instance = Instance::new(ac.def_id, vtable_impl.substs); } @@ -558,14 +553,6 @@ impl<'a, 'tcx> MirConstContext<'a, 'tcx> { } failure?; - // FIXME Shouldn't need to manually trigger closure instantiations. 
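The `Immediate(llval)` arm above now consults the layout's `memory_index`, so the n-th source field of the spread tuple is extracted from the slot it actually occupies in memory. A toy illustration of that index remapping:

```rust
// The n-th source field lives at memory_index[n] in the aggregate.
fn extract_fields(llval: &[i32], memory_index: &[usize]) -> Vec<i32> {
    (0..memory_index.len()).map(|n| llval[memory_index[n]]).collect()
}

fn main() {
    // Suppose the layout stored the fields in the order (2, 0, 1).
    let in_memory = [30, 10, 20]; // values as they sit in the aggregate
    let memory_index = [1, 2, 0]; // source field n -> memory slot
    assert_eq!(extract_fields(&in_memory, &memory_index), vec![10, 20, 30]);
}
```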
- if let mir::AggregateKind::Closure(def_id, substs) = *kind { - use closure; - closure::trans_closure_body_via_mir(self.ccx, - def_id, - self.monomorphize(&substs)); - } - match *kind { mir::AggregateKind::Array => { self.const_array(dest_ty, &fields) diff --git a/src/librustc_trans/mir/mod.rs b/src/librustc_trans/mir/mod.rs index d2adf88c91..94dc9a5fdb 100644 --- a/src/librustc_trans/mir/mod.rs +++ b/src/librustc_trans/mir/mod.rs @@ -10,18 +10,17 @@ use libc::c_uint; use llvm::{self, ValueRef}; -use rustc::ty; +use rustc::ty::{self, layout}; use rustc::mir; use rustc::mir::tcx::LvalueTy; use session::config::FullDebugInfo; use base; use common::{self, Block, BlockAndBuilder, CrateContext, FunctionContext, C_null}; use debuginfo::{self, declare_local, DebugLoc, VariableAccess, VariableKind, FunctionDebugContext}; -use machine; use type_of; use syntax_pos::{DUMMY_SP, NO_EXPANSION, COMMAND_LINE_EXPN, BytePos}; -use syntax::parse::token::keywords; +use syntax::symbol::keywords; use std::cell::Ref; use std::iter; @@ -470,8 +469,8 @@ fn arg_local_refs<'bcx, 'tcx>(bcx: &BlockAndBuilder<'bcx, 'tcx>, } else { (arg_ty, false) }; - let upvar_tys = if let ty::TyClosure(_, ref substs) = closure_ty.sty { - &substs.upvar_tys[..] + let upvar_tys = if let ty::TyClosure(def_id, substs) = closure_ty.sty { + substs.upvar_tys(def_id, tcx) } else { bug!("upvar_decls with non-closure arg0 type `{}`", closure_ty); }; @@ -494,10 +493,15 @@ fn arg_local_refs<'bcx, 'tcx>(bcx: &BlockAndBuilder<'bcx, 'tcx>, llval }; - let llclosurety = type_of::type_of(bcx.ccx(), closure_ty); + let layout = bcx.ccx().layout_of(closure_ty); + let offsets = match *layout { + layout::Univariant { ref variant, .. } => &variant.offsets[..], + _ => bug!("Closures are only supposed to be Univariant") + }; + for (i, (decl, ty)) in mir.upvar_decls.iter().zip(upvar_tys).enumerate() { - let byte_offset_of_var_in_env = - machine::llelement_offset(bcx.ccx(), llclosurety, i); + let byte_offset_of_var_in_env = offsets[i].bytes(); + let ops = unsafe { [llvm::LLVMRustDIBuilderCreateOpDeref(), diff --git a/src/librustc_trans/mir/operand.rs b/src/librustc_trans/mir/operand.rs index 62eda56e2e..83e1d03c68 100644 --- a/src/librustc_trans/mir/operand.rs +++ b/src/librustc_trans/mir/operand.rs @@ -246,7 +246,7 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { lldest: ValueRef, operand: OperandRef<'tcx>) { - debug!("store_operand: operand={:?}", operand); + debug!("store_operand: operand={:?} lldest={:?}", operand, lldest); bcx.with_block(|bcx| self.store_operand_direct(bcx, lldest, operand)) } diff --git a/src/librustc_trans/mir/rvalue.rs b/src/librustc_trans/mir/rvalue.rs index f25877b1de..2ee49db477 100644 --- a/src/librustc_trans/mir/rvalue.rs +++ b/src/librustc_trans/mir/rvalue.rs @@ -133,15 +133,13 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { } }, _ => { - // FIXME Shouldn't need to manually trigger closure instantiations. - if let mir::AggregateKind::Closure(def_id, substs) = *kind { - use closure; - - closure::trans_closure_body_via_mir(bcx.ccx(), - def_id, - bcx.monomorphize(&substs)); - } - + // If this is a tuple or closure, we need to translate GEP indices. + let layout = bcx.ccx().layout_of(dest.ty.to_ty(bcx.tcx())); + let translation = if let Layout::Univariant { ref variant, .. } = *layout { + Some(&variant.memory_index) + } else { + None + }; for (i, operand) in operands.iter().enumerate() { let op = self.trans_operand(&bcx, operand); // Do not generate stores and GEPis for zero-sized fields. 
@@ -149,6 +147,11 @@ impl<'bcx, 'tcx> MirContext<'bcx, 'tcx> { // Note: perhaps this should be StructGep, but // note that in some cases the values here will // not be structs but arrays. + let i = if let Some(ref t) = translation { + t[i] as usize + } else { + i + }; let dest = bcx.gepi(dest.llval, &[0, i]); self.store_operand(&bcx, dest, op); } @@ -729,11 +732,13 @@ fn get_overflow_intrinsic(oop: OverflowOp, bcx: &BlockAndBuilder, ty: Ty) -> Val let new_sty = match ty.sty { TyInt(Is) => match &tcx.sess.target.target.target_pointer_width[..] { + "16" => TyInt(I16), "32" => TyInt(I32), "64" => TyInt(I64), _ => panic!("unsupported target word size") }, TyUint(Us) => match &tcx.sess.target.target.target_pointer_width[..] { + "16" => TyUint(U16), "32" => TyUint(U32), "64" => TyUint(U64), _ => panic!("unsupported target word size") diff --git a/src/librustc_trans/monomorphize.rs b/src/librustc_trans/monomorphize.rs index 270ce79620..8f05cc793e 100644 --- a/src/librustc_trans/monomorphize.rs +++ b/src/librustc_trans/monomorphize.rs @@ -60,7 +60,7 @@ pub fn apply_param_substs<'a, 'tcx, T>(scx: &SharedCrateContext<'a, 'tcx>, /// Returns the normalized type of a struct field pub fn field_ty<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, param_substs: &Substs<'tcx>, - f: ty::FieldDef<'tcx>) + f: &'tcx ty::FieldDef) -> Ty<'tcx> { tcx.normalize_associated_type(&f.ty(tcx, param_substs)) diff --git a/src/librustc_trans/partitioning.rs b/src/librustc_trans/partitioning.rs index 625b43c7d1..d93bbec7ef 100644 --- a/src/librustc_trans/partitioning.rs +++ b/src/librustc_trans/partitioning.rs @@ -126,15 +126,15 @@ use rustc::hir::map::DefPathData; use rustc::session::config::NUMBERED_CODEGEN_UNIT_MARKER; use rustc::ty::TyCtxt; use rustc::ty::item_path::characteristic_def_id_of_type; +use rustc_incremental::IchHasher; use std::cmp::Ordering; -use std::hash::{Hash, Hasher}; +use std::hash::Hash; use std::sync::Arc; -use std::collections::hash_map::DefaultHasher; use symbol_map::SymbolMap; use syntax::ast::NodeId; -use syntax::parse::token::{self, InternedString}; +use syntax::symbol::{Symbol, InternedString}; use trans_item::TransItem; -use util::nodemap::{FnvHashMap, FnvHashSet}; +use util::nodemap::{FxHashMap, FxHashSet}; pub enum PartitioningStrategy { /// Generate one codegen unit per source-level module. @@ -151,12 +151,12 @@ pub struct CodegenUnit<'tcx> { /// as well as the crate name and disambiguator. 
name: InternedString, - items: FnvHashMap, llvm::Linkage>, + items: FxHashMap, llvm::Linkage>, } impl<'tcx> CodegenUnit<'tcx> { pub fn new(name: InternedString, - items: FnvHashMap, llvm::Linkage>) + items: FxHashMap, llvm::Linkage>) -> Self { CodegenUnit { name: name, @@ -165,7 +165,7 @@ impl<'tcx> CodegenUnit<'tcx> { } pub fn empty(name: InternedString) -> Self { - Self::new(name, FnvHashMap()) + Self::new(name, FxHashMap()) } pub fn contains_item(&self, item: &TransItem<'tcx>) -> bool { @@ -176,7 +176,7 @@ impl<'tcx> CodegenUnit<'tcx> { &self.name } - pub fn items(&self) -> &FnvHashMap, llvm::Linkage> { + pub fn items(&self) -> &FxHashMap, llvm::Linkage> { &self.items } @@ -188,14 +188,30 @@ impl<'tcx> CodegenUnit<'tcx> { DepNode::WorkProduct(self.work_product_id()) } - pub fn compute_symbol_name_hash(&self, tcx: TyCtxt, symbol_map: &SymbolMap) -> u64 { - let mut state = DefaultHasher::new(); - let all_items = self.items_in_deterministic_order(tcx, symbol_map); + pub fn compute_symbol_name_hash(&self, + scx: &SharedCrateContext, + symbol_map: &SymbolMap) -> u64 { + let mut state = IchHasher::new(); + let exported_symbols = scx.exported_symbols(); + let all_items = self.items_in_deterministic_order(scx.tcx(), symbol_map); for (item, _) in all_items { let symbol_name = symbol_map.get(item).unwrap(); + symbol_name.len().hash(&mut state); symbol_name.hash(&mut state); + let exported = match item { + TransItem::Fn(ref instance) => { + let node_id = scx.tcx().map.as_local_node_id(instance.def); + node_id.map(|node_id| exported_symbols.contains(&node_id)) + .unwrap_or(false) + } + TransItem::Static(node_id) => { + exported_symbols.contains(&node_id) + } + TransItem::DropGlue(..) => false, + }; + exported.hash(&mut state); } - state.finish() + state.finish().to_smaller_hash() } pub fn items_in_deterministic_order(&self, @@ -272,7 +288,7 @@ pub fn partition<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, // If the partitioning should produce a fixed count of codegen units, merge // until that count is reached. 
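`compute_symbol_name_hash` above switches to the incremental-compilation hasher and additionally feeds in each symbol's length and whether the item is exported. A self-contained sketch of that hashing discipline; `DefaultHasher` stands in for `IchHasher`, and the explicit length prefix is what keeps concatenated names unambiguous:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a list of (symbol name, is-exported) pairs into one digest.
fn hash_symbols(symbols: &[(&str, bool)]) -> u64 {
    let mut state = DefaultHasher::new();
    for &(name, exported) in symbols {
        name.len().hash(&mut state); // length prefix
        name.hash(&mut state);
        exported.hash(&mut state);
    }
    state.finish()
}

fn main() {
    let a = hash_symbols(&[("ab", true), ("c", false)]);
    let b = hash_symbols(&[("a", true), ("bc", false)]);
    assert_ne!(a, b); // different splits produce different digests
}
```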
if let PartitioningStrategy::FixedUnitCount(count) = strategy { - merge_codegen_units(&mut initial_partitioning, count, &tcx.crate_name[..]); + merge_codegen_units(&mut initial_partitioning, count, &tcx.crate_name.as_str()); debug_dump(scx, "POST MERGING:", initial_partitioning.codegen_units.iter()); } @@ -297,7 +313,7 @@ pub fn partition<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, struct PreInliningPartitioning<'tcx> { codegen_units: Vec>, - roots: FnvHashSet>, + roots: FxHashSet>, } struct PostInliningPartitioning<'tcx>(Vec>); @@ -308,8 +324,8 @@ fn place_root_translation_items<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, where I: Iterator> { let tcx = scx.tcx(); - let mut roots = FnvHashSet(); - let mut codegen_units = FnvHashMap(); + let mut roots = FxHashSet(); + let mut codegen_units = FxHashMap(); for trans_item in trans_items { let is_root = !trans_item.is_instantiated_only_on_demand(tcx); @@ -320,7 +336,7 @@ fn place_root_translation_items<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, let codegen_unit_name = match characteristic_def_id { Some(def_id) => compute_codegen_unit_name(tcx, def_id, is_volatile), - None => InternedString::new(FALLBACK_CODEGEN_UNIT), + None => Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str(), }; let make_codegen_unit = || { @@ -365,7 +381,7 @@ fn place_root_translation_items<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, // always ensure we have at least one CGU; otherwise, if we have a // crate with just types (for example), we could wind up with no CGU if codegen_units.is_empty() { - let codegen_unit_name = InternedString::new(FALLBACK_CODEGEN_UNIT); + let codegen_unit_name = Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str(); codegen_units.entry(codegen_unit_name.clone()) .or_insert_with(|| CodegenUnit::empty(codegen_unit_name.clone())); } @@ -419,7 +435,7 @@ fn place_inlined_translation_items<'tcx>(initial_partitioning: PreInliningPartit for codegen_unit in &initial_partitioning.codegen_units[..] 
{ // Collect all items that need to be available in this codegen unit - let mut reachable = FnvHashSet(); + let mut reachable = FxHashSet(); for root in codegen_unit.items.keys() { follow_inlining(*root, inlining_map, &mut reachable); } @@ -465,7 +481,7 @@ fn place_inlined_translation_items<'tcx>(initial_partitioning: PreInliningPartit fn follow_inlining<'tcx>(trans_item: TransItem<'tcx>, inlining_map: &InliningMap<'tcx>, - visited: &mut FnvHashSet>) { + visited: &mut FxHashSet>) { if !visited.insert(trans_item) { return; } @@ -495,7 +511,7 @@ fn characteristic_def_id_of_trans_item<'a, 'tcx>(scx: &SharedCrateContext<'a, 't if let Some(impl_def_id) = tcx.impl_of_method(instance.def) { // This is a method within an inherent impl, find out what the // self-type is: - let impl_self_ty = tcx.lookup_item_type(impl_def_id).ty; + let impl_self_ty = tcx.item_type(impl_def_id); let impl_self_ty = tcx.erase_regions(&impl_self_ty); let impl_self_ty = monomorphize::apply_param_substs(scx, instance.substs, @@ -523,7 +539,7 @@ fn compute_codegen_unit_name<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, let mut mod_path = String::with_capacity(64); let def_path = tcx.def_path(def_id); - mod_path.push_str(&tcx.crate_name(def_path.krate)); + mod_path.push_str(&tcx.crate_name(def_path.krate).as_str()); for part in tcx.def_path(def_id) .data @@ -542,14 +558,11 @@ fn compute_codegen_unit_name<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, mod_path.push_str(".volatile"); } - return token::intern_and_get_ident(&mod_path[..]); + return Symbol::intern(&mod_path[..]).as_str(); } fn numbered_codegen_unit_name(crate_name: &str, index: usize) -> InternedString { - token::intern_and_get_ident(&format!("{}{}{}", - crate_name, - NUMBERED_CODEGEN_UNIT_MARKER, - index)[..]) + Symbol::intern(&format!("{}{}{}", crate_name, NUMBERED_CODEGEN_UNIT_MARKER, index)).as_str() } fn debug_dump<'a, 'b, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>, diff --git a/src/librustc_trans/symbol_map.rs b/src/librustc_trans/symbol_map.rs index 3faaa085dc..c3e0ac1fee 100644 --- a/src/librustc_trans/symbol_map.rs +++ b/src/librustc_trans/symbol_map.rs @@ -14,7 +14,7 @@ use rustc::ty::TyCtxt; use std::borrow::Cow; use syntax::codemap::Span; use trans_item::TransItem; -use util::nodemap::FnvHashMap; +use util::nodemap::FxHashMap; // In the SymbolMap we collect the symbol names of all translation items of // the current crate. This map exists as a performance optimization. Symbol @@ -22,7 +22,7 @@ use util::nodemap::FnvHashMap; // Thus they could also always be recomputed if needed. pub struct SymbolMap<'tcx> { - index: FnvHashMap, (usize, usize)>, + index: FxHashMap, (usize, usize)>, arena: String, } @@ -78,7 +78,7 @@ impl<'tcx> SymbolMap<'tcx> { } let mut symbol_map = SymbolMap { - index: FnvHashMap(), + index: FxHashMap(), arena: String::with_capacity(1024), }; diff --git a/src/librustc_trans/symbol_names_test.rs b/src/librustc_trans/symbol_names_test.rs index 25c30151ad..9ed5a5d148 100644 --- a/src/librustc_trans/symbol_names_test.rs +++ b/src/librustc_trans/symbol_names_test.rs @@ -15,7 +15,7 @@ //! paths etc in all kinds of annoying scenarios. 
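`place_inlined_translation_items` above collects, per codegen unit, everything reachable through the inlining map, using an `FxHashSet` as the visited set. A simplified, std-only version of that traversal:

```rust
use std::collections::{HashMap, HashSet};

// Collect the transitive set of items reachable from `item`, using a visited
// set to terminate on cycles.
fn follow_inlining(item: u32, inlining_map: &HashMap<u32, Vec<u32>>, visited: &mut HashSet<u32>) {
    if !visited.insert(item) {
        return; // already reachable
    }
    if let Some(callees) = inlining_map.get(&item) {
        for &callee in callees {
            follow_inlining(callee, inlining_map, visited);
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(1, vec![2, 3]);
    map.insert(2, vec![1]); // cycle back to the root
    let mut reachable = HashSet::new();
    follow_inlining(1, &map, &mut reachable);
    assert_eq!(reachable.len(), 3);
}
```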
use rustc::hir; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use syntax::ast; use common::SharedCrateContext; @@ -35,7 +35,8 @@ pub fn report_symbol_names(scx: &SharedCrateContext) { let _ignore = tcx.dep_graph.in_ignore(); let mut visitor = SymbolNamesTest { scx: scx }; - tcx.map.krate().visit_all_items(&mut visitor); + // FIXME(#37712) could use ItemLikeVisitor if trait items were item-like + tcx.map.krate().visit_all_item_likes(&mut visitor.as_deep_visitor()); } struct SymbolNamesTest<'a, 'tcx:'a> { @@ -66,6 +67,10 @@ impl<'a, 'tcx> SymbolNamesTest<'a, 'tcx> { } impl<'a, 'tcx> Visitor<'tcx> for SymbolNamesTest<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::None + } + fn visit_item(&mut self, item: &'tcx hir::Item) { self.process_attrs(item.id); intravisit::walk_item(self, item); diff --git a/src/librustc_trans/trans_item.rs b/src/librustc_trans/trans_item.rs index 8930387c04..214eaeb817 100644 --- a/src/librustc_trans/trans_item.rs +++ b/src/librustc_trans/trans_item.rs @@ -18,6 +18,7 @@ use attributes; use base; use consts; use context::{CrateContext, SharedCrateContext}; +use common; use declare; use glue::DropGlueKind; use llvm; @@ -34,6 +35,8 @@ use type_of; use glue; use abi::{Abi, FnType}; use back::symbol_names; +use std::fmt::Write; +use std::iter; #[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)] pub enum TransItem<'tcx> { @@ -131,7 +134,7 @@ impl<'a, 'tcx> TransItem<'tcx> { linkage: llvm::Linkage, symbol_name: &str) { let def_id = ccx.tcx().map.local_def_id(node_id); - let ty = ccx.tcx().lookup_item_type(def_id).ty; + let ty = ccx.tcx().item_type(def_id); let llty = type_of::type_of(ccx, ty); let g = declare::define_global(ccx, symbol_name, llty).unwrap_or_else(|| { @@ -153,7 +156,7 @@ impl<'a, 'tcx> TransItem<'tcx> { assert!(!instance.substs.needs_infer() && !instance.substs.has_param_types()); - let item_ty = ccx.tcx().lookup_item_type(instance.def).ty; + let item_ty = ccx.tcx().item_type(instance.def); let item_ty = ccx.tcx().erase_regions(&item_ty); let mono_ty = monomorphize::apply_param_substs(ccx.shared(), instance.substs, &item_ty); @@ -166,6 +169,11 @@ impl<'a, 'tcx> TransItem<'tcx> { llvm::SetUniqueComdat(ccx.llmod(), lldecl); } + if let ty::TyClosure(..) = mono_ty.sty { + // set an inline hint for all closures + attributes::inline(lldecl, attributes::InlineAttr::Hint); + } + attributes::from_fn_attrs(ccx, &attrs, lldecl); ccx.instances().borrow_mut().insert(instance, lldecl); @@ -179,11 +187,7 @@ impl<'a, 'tcx> TransItem<'tcx> { assert_eq!(dg.ty(), glue::get_drop_glue_type(tcx, dg.ty())); let t = dg.ty(); - let sig = ty::FnSig { - inputs: vec![tcx.mk_mut_ptr(tcx.types.i8)], - output: tcx.mk_nil(), - variadic: false, - }; + let sig = tcx.mk_fn_sig(iter::once(tcx.mk_mut_ptr(tcx.types.i8)), tcx.mk_nil(), false); // Create a FnType for fn(*mut i8) and substitute the real type in // later - that prevents FnType from splitting fat pointers up. @@ -239,6 +243,7 @@ impl<'a, 'tcx> TransItem<'tcx> { TransItem::Fn(ref instance) => { !instance.def.is_local() || instance.substs.types().next().is_some() || + common::is_closure(tcx, instance.def) || attr::requests_inline(&tcx.get_attrs(instance.def)[..]) } TransItem::DropGlue(..) 
=> true, @@ -277,7 +282,7 @@ impl<'a, 'tcx> TransItem<'tcx> { let attributes = tcx.get_attrs(def_id); if let Some(name) = attr::first_attr_value_str_by_name(&attributes, "linkage") { - if let Some(linkage) = base::llvm_linkage_by_name(&name) { + if let Some(linkage) = base::llvm_linkage_by_name(&name.as_str()) { Some(linkage) } else { let span = tcx.map.span_if_local(def_id); @@ -302,7 +307,8 @@ impl<'a, 'tcx> TransItem<'tcx> { DropGlueKind::Ty(_) => s.push_str("drop-glue "), DropGlueKind::TyContents(_) => s.push_str("drop-glue-contents "), }; - push_unique_type_name(tcx, dg.ty(), &mut s); + let printer = DefPathBasedNames::new(tcx, false, false); + printer.push_type_name(dg.ty(), &mut s); s } TransItem::Fn(instance) => { @@ -321,7 +327,8 @@ impl<'a, 'tcx> TransItem<'tcx> { -> String { let mut result = String::with_capacity(32); result.push_str(prefix); - push_instance_as_string(tcx, instance, &mut result); + let printer = DefPathBasedNames::new(tcx, false, false); + printer.push_instance_as_string(instance, &mut result); result } } @@ -362,207 +369,220 @@ impl<'a, 'tcx> TransItem<'tcx> { /// Same as `unique_type_name()` but with the result pushed onto the given /// `output` parameter. -pub fn push_unique_type_name<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - t: Ty<'tcx>, - output: &mut String) { - match t.sty { - ty::TyBool => output.push_str("bool"), - ty::TyChar => output.push_str("char"), - ty::TyStr => output.push_str("str"), - ty::TyNever => output.push_str("!"), - ty::TyInt(ast::IntTy::Is) => output.push_str("isize"), - ty::TyInt(ast::IntTy::I8) => output.push_str("i8"), - ty::TyInt(ast::IntTy::I16) => output.push_str("i16"), - ty::TyInt(ast::IntTy::I32) => output.push_str("i32"), - ty::TyInt(ast::IntTy::I64) => output.push_str("i64"), - ty::TyUint(ast::UintTy::Us) => output.push_str("usize"), - ty::TyUint(ast::UintTy::U8) => output.push_str("u8"), - ty::TyUint(ast::UintTy::U16) => output.push_str("u16"), - ty::TyUint(ast::UintTy::U32) => output.push_str("u32"), - ty::TyUint(ast::UintTy::U64) => output.push_str("u64"), - ty::TyFloat(ast::FloatTy::F32) => output.push_str("f32"), - ty::TyFloat(ast::FloatTy::F64) => output.push_str("f64"), - ty::TyAdt(adt_def, substs) => { - push_item_name(tcx, adt_def.did, output); - push_type_params(tcx, substs, &[], output); - }, - ty::TyTuple(component_types) => { - output.push('('); - for &component_type in component_types { - push_unique_type_name(tcx, component_type, output); - output.push_str(", "); - } - if !component_types.is_empty() { - output.pop(); - output.pop(); - } - output.push(')'); - }, - ty::TyBox(inner_type) => { - output.push_str("Box<"); - push_unique_type_name(tcx, inner_type, output); - output.push('>'); - }, - ty::TyRawPtr(ty::TypeAndMut { ty: inner_type, mutbl } ) => { - output.push('*'); - match mutbl { - hir::MutImmutable => output.push_str("const "), - hir::MutMutable => output.push_str("mut "), - } +pub struct DefPathBasedNames<'a, 'tcx: 'a> { + tcx: TyCtxt<'a, 'tcx, 'tcx>, + omit_disambiguators: bool, + omit_local_crate_name: bool, +} - push_unique_type_name(tcx, inner_type, output); - }, - ty::TyRef(_, ty::TypeAndMut { ty: inner_type, mutbl }) => { - output.push('&'); - if mutbl == hir::MutMutable { - output.push_str("mut "); - } +impl<'a, 'tcx> DefPathBasedNames<'a, 'tcx> { + pub fn new(tcx: TyCtxt<'a, 'tcx, 'tcx>, + omit_disambiguators: bool, + omit_local_crate_name: bool) + -> Self { + DefPathBasedNames { + tcx: tcx, + omit_disambiguators: omit_disambiguators, + omit_local_crate_name: omit_local_crate_name, + } + } - 
push_unique_type_name(tcx, inner_type, output); - }, - ty::TyArray(inner_type, len) => { - output.push('['); - push_unique_type_name(tcx, inner_type, output); - output.push_str(&format!("; {}", len)); - output.push(']'); - }, - ty::TySlice(inner_type) => { - output.push('['); - push_unique_type_name(tcx, inner_type, output); - output.push(']'); - }, - ty::TyTrait(ref trait_data) => { - push_item_name(tcx, trait_data.principal.def_id(), output); - push_type_params(tcx, - trait_data.principal.skip_binder().substs, - &trait_data.projection_bounds, - output); - }, - ty::TyFnDef(.., &ty::BareFnTy{ unsafety, abi, ref sig } ) | - ty::TyFnPtr(&ty::BareFnTy{ unsafety, abi, ref sig } ) => { - if unsafety == hir::Unsafety::Unsafe { - output.push_str("unsafe "); - } - - if abi != ::abi::Abi::Rust { - output.push_str("extern \""); - output.push_str(abi.name()); - output.push_str("\" "); - } - - output.push_str("fn("); - - let sig = tcx.erase_late_bound_regions_and_normalize(sig); - if !sig.inputs.is_empty() { - for ¶meter_type in &sig.inputs { - push_unique_type_name(tcx, parameter_type, output); + pub fn push_type_name(&self, t: Ty<'tcx>, output: &mut String) { + match t.sty { + ty::TyBool => output.push_str("bool"), + ty::TyChar => output.push_str("char"), + ty::TyStr => output.push_str("str"), + ty::TyNever => output.push_str("!"), + ty::TyInt(ast::IntTy::Is) => output.push_str("isize"), + ty::TyInt(ast::IntTy::I8) => output.push_str("i8"), + ty::TyInt(ast::IntTy::I16) => output.push_str("i16"), + ty::TyInt(ast::IntTy::I32) => output.push_str("i32"), + ty::TyInt(ast::IntTy::I64) => output.push_str("i64"), + ty::TyUint(ast::UintTy::Us) => output.push_str("usize"), + ty::TyUint(ast::UintTy::U8) => output.push_str("u8"), + ty::TyUint(ast::UintTy::U16) => output.push_str("u16"), + ty::TyUint(ast::UintTy::U32) => output.push_str("u32"), + ty::TyUint(ast::UintTy::U64) => output.push_str("u64"), + ty::TyFloat(ast::FloatTy::F32) => output.push_str("f32"), + ty::TyFloat(ast::FloatTy::F64) => output.push_str("f64"), + ty::TyAdt(adt_def, substs) => { + self.push_def_path(adt_def.did, output); + self.push_type_params(substs, iter::empty(), output); + }, + ty::TyTuple(component_types) => { + output.push('('); + for &component_type in component_types { + self.push_type_name(component_type, output); output.push_str(", "); } - output.pop(); - output.pop(); - } - - if sig.variadic { - if !sig.inputs.is_empty() { - output.push_str(", ..."); - } else { - output.push_str("..."); + if !component_types.is_empty() { + output.pop(); + output.pop(); + } + output.push(')'); + }, + ty::TyBox(inner_type) => { + output.push_str("Box<"); + self.push_type_name(inner_type, output); + output.push('>'); + }, + ty::TyRawPtr(ty::TypeAndMut { ty: inner_type, mutbl } ) => { + output.push('*'); + match mutbl { + hir::MutImmutable => output.push_str("const "), + hir::MutMutable => output.push_str("mut "), } - } - output.push(')'); + self.push_type_name(inner_type, output); + }, + ty::TyRef(_, ty::TypeAndMut { ty: inner_type, mutbl }) => { + output.push('&'); + if mutbl == hir::MutMutable { + output.push_str("mut "); + } - if !sig.output.is_nil() { - output.push_str(" -> "); - push_unique_type_name(tcx, sig.output, output); + self.push_type_name(inner_type, output); + }, + ty::TyArray(inner_type, len) => { + output.push('['); + self.push_type_name(inner_type, output); + write!(output, "; {}", len).unwrap(); + output.push(']'); + }, + ty::TySlice(inner_type) => { + output.push('['); + self.push_type_name(inner_type, output); + 
output.push(']'); + }, + ty::TyDynamic(ref trait_data, ..) => { + if let Some(principal) = trait_data.principal() { + self.push_def_path(principal.def_id(), output); + self.push_type_params(principal.skip_binder().substs, + trait_data.projection_bounds(), + output); + } + }, + ty::TyFnDef(.., &ty::BareFnTy{ unsafety, abi, ref sig } ) | + ty::TyFnPtr(&ty::BareFnTy{ unsafety, abi, ref sig } ) => { + if unsafety == hir::Unsafety::Unsafe { + output.push_str("unsafe "); + } + + if abi != ::abi::Abi::Rust { + output.push_str("extern \""); + output.push_str(abi.name()); + output.push_str("\" "); + } + + output.push_str("fn("); + + let sig = self.tcx.erase_late_bound_regions_and_normalize(sig); + + if !sig.inputs().is_empty() { + for ¶meter_type in sig.inputs() { + self.push_type_name(parameter_type, output); + output.push_str(", "); + } + output.pop(); + output.pop(); + } + + if sig.variadic { + if !sig.inputs().is_empty() { + output.push_str(", ..."); + } else { + output.push_str("..."); + } + } + + output.push(')'); + + if !sig.output().is_nil() { + output.push_str(" -> "); + self.push_type_name(sig.output(), output); + } + }, + ty::TyClosure(def_id, ref closure_substs) => { + self.push_def_path(def_id, output); + let generics = self.tcx.item_generics(self.tcx.closure_base_def_id(def_id)); + let substs = closure_substs.substs.truncate_to(self.tcx, generics); + self.push_type_params(substs, iter::empty(), output); + } + ty::TyError | + ty::TyInfer(_) | + ty::TyProjection(..) | + ty::TyParam(_) | + ty::TyAnon(..) => { + bug!("DefPathBasedNames: Trying to create type name for \ + unexpected type: {:?}", t); } - }, - ty::TyClosure(def_id, ref closure_substs) => { - push_item_name(tcx, def_id, output); - output.push_str("{"); - output.push_str(&format!("{}:{}", def_id.krate, def_id.index.as_usize())); - output.push_str("}"); - push_type_params(tcx, closure_substs.func_substs, &[], output); - } - ty::TyError | - ty::TyInfer(_) | - ty::TyProjection(..) | - ty::TyParam(_) | - ty::TyAnon(..) 
=> { - bug!("debuginfo: Trying to create type name for \ - unexpected type: {:?}", t); } } -} -fn push_item_name(tcx: TyCtxt, - def_id: DefId, - output: &mut String) { - let def_path = tcx.def_path(def_id); + pub fn push_def_path(&self, + def_id: DefId, + output: &mut String) { + let def_path = self.tcx.def_path(def_id); - // some_crate:: - output.push_str(&tcx.crate_name(def_path.krate)); - output.push_str("::"); + // some_crate:: + if !(self.omit_local_crate_name && def_id.is_local()) { + output.push_str(&self.tcx.crate_name(def_path.krate).as_str()); + output.push_str("::"); + } - // foo::bar::ItemName:: - for part in tcx.def_path(def_id).data { - output.push_str(&format!("{}[{}]::", - part.data.as_interned_str(), - part.disambiguator)); + // foo::bar::ItemName:: + for part in self.tcx.def_path(def_id).data { + if self.omit_disambiguators { + write!(output, "{}::", part.data.as_interned_str()).unwrap(); + } else { + write!(output, "{}[{}]::", + part.data.as_interned_str(), + part.disambiguator).unwrap(); + } + } + + // remove final "::" + output.pop(); + output.pop(); } - // remove final "::" - output.pop(); - output.pop(); -} + fn push_type_params(&self, + substs: &Substs<'tcx>, + projections: I, + output: &mut String) + where I: Iterator> + { + let mut projections = projections.peekable(); + if substs.types().next().is_none() && projections.peek().is_none() { + return; + } -fn push_type_params<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - substs: &Substs<'tcx>, - projections: &[ty::PolyExistentialProjection<'tcx>], - output: &mut String) { - if substs.types().next().is_none() && projections.is_empty() { - return; + output.push('<'); + + for type_parameter in substs.types() { + self.push_type_name(type_parameter, output); + output.push_str(", "); + } + + for projection in projections { + let projection = projection.skip_binder(); + let name = &projection.item_name.as_str(); + output.push_str(name); + output.push_str("="); + self.push_type_name(projection.ty, output); + output.push_str(", "); + } + + output.pop(); + output.pop(); + + output.push('>'); } - output.push('<'); - - for type_parameter in substs.types() { - push_unique_type_name(tcx, type_parameter, output); - output.push_str(", "); + pub fn push_instance_as_string(&self, + instance: Instance<'tcx>, + output: &mut String) { + self.push_def_path(instance.def, output); + self.push_type_params(instance.substs, iter::empty(), output); } - - for projection in projections { - let projection = projection.skip_binder(); - let name = &projection.item_name.as_str(); - output.push_str(name); - output.push_str("="); - push_unique_type_name(tcx, projection.ty, output); - output.push_str(", "); - } - - output.pop(); - output.pop(); - - output.push('>'); -} - -fn push_instance_as_string<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - instance: Instance<'tcx>, - output: &mut String) { - push_item_name(tcx, instance.def, output); - push_type_params(tcx, instance.substs, &[], output); -} - -pub fn def_id_to_string(tcx: TyCtxt, def_id: DefId) -> String { - let mut output = String::new(); - push_item_name(tcx, def_id, &mut output); - output -} - -pub fn type_to_string<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - ty: Ty<'tcx>) - -> String { - let mut output = String::new(); - push_unique_type_name(tcx, ty, &mut output); - output } diff --git a/src/librustc_trans/type_.rs b/src/librustc_trans/type_.rs index 03a71827b4..2b2776acab 100644 --- a/src/librustc_trans/type_.rs +++ b/src/librustc_trans/type_.rs @@ -15,7 +15,6 @@ use llvm::{TypeRef, Bool, False, True, 
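The new `DefPathBasedNames` printer above replaces the free functions with one object whose two flags (`omit_disambiguators`, `omit_local_crate_name`) control how much of the definition path is emitted. A self-contained sketch of the same idea over plain strings rather than `DefId`s and `DefPath` data:

```rust
// One printer object; its flags decide how much of the path is emitted.
struct PathPrinter {
    omit_disambiguators: bool,
    omit_local_crate_name: bool,
}

impl PathPrinter {
    fn push_def_path(&self, krate: &str, is_local: bool, parts: &[(&str, u32)], out: &mut String) {
        // some_crate::
        if !(self.omit_local_crate_name && is_local) {
            out.push_str(krate);
            out.push_str("::");
        }
        // foo::bar::ItemName::
        for &(name, disambiguator) in parts {
            if self.omit_disambiguators {
                out.push_str(name);
            } else {
                out.push_str(&format!("{}[{}]", name, disambiguator));
            }
            out.push_str("::");
        }
        // remove the trailing "::"
        out.pop();
        out.pop();
    }
}

fn main() {
    let mut s = String::new();
    PathPrinter { omit_disambiguators: true, omit_local_crate_name: true }
        .push_def_path("my_crate", true, &[("foo", 0), ("Bar", 0)], &mut s);
    assert_eq!(s, "foo::Bar");
}
```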
TypeKind}; use llvm::{Float, Double, X86_FP80, PPC_FP128, FP128}; use context::CrateContext; -use util::nodemap::FnvHashMap; use syntax::ast; use rustc::ty::layout; @@ -24,7 +23,6 @@ use std::ffi::CString; use std::fmt; use std::mem; use std::ptr; -use std::cell::RefCell; use libc::c_uint; @@ -321,26 +319,3 @@ impl Type { } } } - -/* Memory-managed object interface to type handles. */ - -pub struct TypeNames { - named_types: RefCell>, -} - -impl TypeNames { - pub fn new() -> TypeNames { - TypeNames { - named_types: RefCell::new(FnvHashMap()) - } - } - - pub fn associate_type(&self, s: &str, t: &Type) { - assert!(self.named_types.borrow_mut().insert(s.to_string(), - t.to_ref()).is_none()); - } - - pub fn find_type(&self, s: &str) -> Option { - self.named_types.borrow().get(s).map(|x| Type::from_ref(*x)) - } -} diff --git a/src/librustc_trans/type_of.rs b/src/librustc_trans/type_of.rs index 132b0a910b..22c405fe25 100644 --- a/src/librustc_trans/type_of.rs +++ b/src/librustc_trans/type_of.rs @@ -10,14 +10,12 @@ #![allow(non_camel_case_types)] -use rustc::hir::def_id::DefId; use abi::FnType; use adt; use common::*; use machine; use rustc::ty::{self, Ty, TypeFoldable}; -use rustc::ty::subst::Substs; - +use trans_item::DefPathBasedNames; use type_::Type; use syntax::ast; @@ -97,7 +95,7 @@ pub fn sizing_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Typ ty::TyAnon(..) | ty::TyError => { bug!("fictitious type {:?} in sizing_type_of()", t) } - ty::TySlice(_) | ty::TyTrait(..) | ty::TyStr => bug!() + ty::TySlice(_) | ty::TyDynamic(..) | ty::TyStr => bug!() }; debug!("--> mapped t={:?} to llsizingty={:?}", t, llsizingty); @@ -150,7 +148,7 @@ fn unsized_info_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> Type ty::TyStr | ty::TyArray(..) | ty::TySlice(_) => { Type::uint_from_ty(ccx, ast::UintTy::Us) } - ty::TyTrait(_) => Type::vtable_ptr(ccx), + ty::TyDynamic(..) => Type::vtable_ptr(ccx), _ => bug!("Unexpected tail in unsized_info_ty: {:?} for ty={:?}", unsized_part, ty) } @@ -238,7 +236,7 @@ pub fn in_memory_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> if let ty::TyStr = ty.sty { // This means we get a nicer name in the output (str is always // unsized). - cx.tn().find_type("str_slice").unwrap() + cx.str_slice_type() } else { let ptr_ty = in_memory_type_of(cx, ty).ptr_to(); let info_ty = unsized_info_ty(cx, ty); @@ -260,7 +258,7 @@ pub fn in_memory_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> // fat pointers is of the right type (e.g. for array accesses), even // when taking the address of an unsized field in a struct. ty::TySlice(ty) => in_memory_type_of(cx, ty), - ty::TyStr | ty::TyTrait(..) => Type::i8(cx), + ty::TyStr | ty::TyDynamic(..) => Type::i8(cx), ty::TyFnDef(..) => Type::nil(cx), ty::TyFnPtr(f) => { @@ -282,12 +280,12 @@ pub fn in_memory_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> let n = t.simd_size(cx.tcx()) as u64; Type::vector(&llet, n) } - ty::TyAdt(def, substs) => { + ty::TyAdt(..) => { // Only create the named struct, but don't fill it in. We // fill it in *after* placing it into the type cache. This // avoids creating more than one copy of the enum when one // of the enum's variants refers to the enum itself. 
- let name = llvm_type_name(cx, def.did, substs); + let name = llvm_type_name(cx, t); adt::incomplete_type_of(cx, t, &name[..]) } @@ -319,21 +317,9 @@ pub fn align_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) layout.align(&cx.tcx().data_layout).abi() as machine::llalign } -fn llvm_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, - did: DefId, - substs: &Substs<'tcx>) - -> String { - let base = cx.tcx().item_path_str(did); - let strings: Vec = substs.types().map(|t| t.to_string()).collect(); - let tstr = if strings.is_empty() { - base - } else { - format!("{}<{}>", base, strings.join(", ")) - }; - - if did.is_local() { - tstr - } else { - format!("{}.{}", did.krate, tstr) - } +fn llvm_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> String { + let mut name = String::with_capacity(32); + let printer = DefPathBasedNames::new(cx.tcx(), true, true); + printer.push_type_name(ty, &mut name); + name } diff --git a/src/librustc_typeck/Cargo.toml b/src/librustc_typeck/Cargo.toml index 720423371a..f08d26373e 100644 --- a/src/librustc_typeck/Cargo.toml +++ b/src/librustc_typeck/Cargo.toml @@ -18,6 +18,7 @@ rustc = { path = "../librustc" } rustc_back = { path = "../librustc_back" } rustc_const_eval = { path = "../librustc_const_eval" } rustc_const_math = { path = "../librustc_const_math" } +rustc_data_structures = { path = "../librustc_data_structures" } rustc_platform_intrinsics = { path = "../librustc_platform_intrinsics" } syntax_pos = { path = "../libsyntax_pos" } rustc_errors = { path = "../librustc_errors" } diff --git a/src/librustc_typeck/astconv.rs b/src/librustc_typeck/astconv.rs index c93f1c6c8e..71270963f8 100644 --- a/src/librustc_typeck/astconv.rs +++ b/src/librustc_typeck/astconv.rs @@ -16,12 +16,12 @@ //! somewhat differently during the collect and check phases, //! particularly with respect to looking up the types of top-level //! items. In the collect phase, the crate context is used as the -//! `AstConv` instance; in this phase, the `get_item_type_scheme()` -//! function triggers a recursive call to `type_scheme_of_item()` +//! `AstConv` instance; in this phase, the `get_item_type()` +//! function triggers a recursive call to `type_of_item()` //! (note that `ast_ty_to_ty()` will detect recursive types and report //! an error). In the check phase, when the FnCtxt is used as the -//! `AstConv`, `get_item_type_scheme()` just looks up the item type in -//! `tcx.tcache` (using `ty::lookup_item_type`). +//! `AstConv`, `get_item_type()` just looks up the item type in +//! `tcx.types` (using `TyCtxt::item_type`). //! //! The `RegionScope` trait controls what happens when the user does //! not specify a region in some location where a region is required @@ -49,8 +49,9 @@ //! an rptr (`&r.T`) use the region `r` that appears in the rptr. 
use rustc_const_eval::eval_length; +use rustc_data_structures::accumulate_vec::AccumulateVec; use hir::{self, SelfKind}; -use hir::def::{Def, PathResolution}; +use hir::def::Def; use hir::def_id::DefId; use hir::print as pprust; use middle::resolve_lifetime as rl; @@ -66,12 +67,13 @@ use rscope::{self, UnelidableRscope, RegionScope, ElidableRscope, ElisionFailureInfo, ElidedLifetime}; use rscope::{AnonTypeScope, MaybeWithAnonTypes}; use util::common::{ErrorReported, FN_OUTPUT_NAME}; -use util::nodemap::{NodeMap, FnvHashSet}; +use util::nodemap::{NodeMap, FxHashSet}; use std::cell::RefCell; +use std::iter; use syntax::{abi, ast}; use syntax::feature_gate::{GateIssue, emit_feature_err}; -use syntax::parse::token::{self, keywords}; +use syntax::symbol::{Symbol, keywords}; use syntax_pos::{Span, Pos}; use errors::DiagnosticBuilder; @@ -85,16 +87,13 @@ pub trait AstConv<'gcx, 'tcx> { fn get_generics(&self, span: Span, id: DefId) -> Result<&'tcx ty::Generics<'tcx>, ErrorReported>; - /// Identify the type scheme for an item with a type, like a type - /// alias, fn, or struct. This allows you to figure out the set of - /// type parameters defined on the item. - fn get_item_type_scheme(&self, span: Span, id: DefId) - -> Result, ErrorReported>; + /// Identify the type for an item, like a type alias, fn, or struct. + fn get_item_type(&self, span: Span, id: DefId) -> Result, ErrorReported>; /// Returns the `TraitDef` for a given trait. This allows you to /// figure out the set of type parameters defined on the trait. fn get_trait_def(&self, span: Span, id: DefId) - -> Result<&'tcx ty::TraitDef<'tcx>, ErrorReported>; + -> Result<&'tcx ty::TraitDef, ErrorReported>; /// Ensure that the super-predicates for the trait with the given /// id are available and also for the transitive set of @@ -107,11 +106,6 @@ pub trait AstConv<'gcx, 'tcx> { fn get_type_parameter_bounds(&self, span: Span, def_id: ast::NodeId) -> Result>, ErrorReported>; - /// Returns true if the trait with id `trait_def_id` defines an - /// associated type with the name `name`. - fn trait_defines_associated_type_named(&self, trait_def_id: DefId, name: ast::Name) - -> bool; - /// Return an (optional) substitution to convert bound type parameters that /// are in scope into free ones. This function should only return Some /// within a fn body. @@ -157,14 +151,6 @@ pub trait AstConv<'gcx, 'tcx> { fn set_tainted_by_errors(&self); } -#[derive(PartialEq, Eq)] -pub enum PathParamMode { - // Any path in a type context. - Explicit, - // The `module::Type` in `module::Type::method` in an expression. 
- Optional -} - struct ConvertedBinding<'tcx> { item_name: ast::Name, ty: Ty<'tcx>, @@ -349,7 +335,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { pub fn ast_path_substs_for_ty(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, def_id: DefId, item_segment: &hir::PathSegment) -> &'tcx Substs<'tcx> @@ -375,7 +360,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let (substs, assoc_bindings) = self.create_substs_for_ast_path(rscope, span, - param_mode, def_id, &item_segment.parameters, None); @@ -393,7 +377,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn create_substs_for_ast_path(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, def_id: DefId, parameters: &hir::PathParameters, self_ty: Option>) @@ -405,15 +388,11 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { parameters={:?})", def_id, self_ty, parameters); - let (lifetimes, num_types_provided) = match *parameters { + let (lifetimes, num_types_provided, infer_types) = match *parameters { hir::AngleBracketedParameters(ref data) => { - if param_mode == PathParamMode::Optional && data.types.is_empty() { - (&data.lifetimes[..], None) - } else { - (&data.lifetimes[..], Some(data.types.len())) - } + (&data.lifetimes[..], data.types.len(), data.infer_types) } - hir::ParenthesizedParameters(_) => (&[][..], Some(1)) + hir::ParenthesizedParameters(_) => (&[][..], 1, false) }; // If the type is parameterized by this region, then replace this @@ -451,9 +430,9 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { assert_eq!(decl_generics.has_self, self_ty.is_some()); // Check the number of type parameters supplied by the user. - if let Some(num_provided) = num_types_provided { - let ty_param_defs = &decl_generics.types[self_ty.is_some() as usize..]; - check_type_argument_count(tcx, span, num_provided, ty_param_defs); + let ty_param_defs = &decl_generics.types[self_ty.is_some() as usize..]; + if !infer_types || num_types_provided > ty_param_defs.len() { + check_type_argument_count(tcx, span, num_types_provided, ty_param_defs); } let is_object = self_ty.map_or(false, |ty| ty.sty == TRAIT_OBJECT_DUMMY_SELF); @@ -482,7 +461,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { } let i = i - self_ty.is_some() as usize - decl_generics.regions.len(); - if num_types_provided.map_or(false, |n| i < n) { + if i < num_types_provided { // A provided type parameter. match *parameters { hir::AngleBracketedParameters(ref data) => { @@ -496,7 +475,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { ty } } - } else if num_types_provided.is_none() { + } else if infer_types { // No type parameters were provided, we can infer all. let ty_var = if !default_needs_object_self(def) { self.ty_infer_for_def(def, substs, span) @@ -559,23 +538,26 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { /// Returns the appropriate lifetime to use for any output lifetimes /// (if one exists) and a vector of the (pattern, number of lifetimes) /// corresponding to each input type/pattern. 
- fn find_implied_output_region(&self, + fn find_implied_output_region(&self, input_tys: &[Ty<'tcx>], - input_pats: F) -> ElidedLifetime - where F: FnOnce() -> Vec + input_pats: I) -> ElidedLifetime + where I: Iterator { let tcx = self.tcx(); - let mut lifetimes_for_params = Vec::new(); + let mut lifetimes_for_params = Vec::with_capacity(input_tys.len()); let mut possible_implied_output_region = None; + let mut lifetimes = 0; for input_type in input_tys.iter() { - let mut regions = FnvHashSet(); + let mut regions = FxHashSet(); let have_bound_regions = tcx.collect_regions(input_type, &mut regions); debug!("find_implied_output_regions: collected {:?} from {:?} \ have_bound_regions={:?}", ®ions, input_type, have_bound_regions); - if regions.len() == 1 { + lifetimes += regions.len(); + + if lifetimes == 1 && regions.len() == 1 { // there's a chance that the unique lifetime of this // iteration will be the appropriate lifetime for output // parameters, so lets store it. @@ -592,12 +574,12 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { }); } - if lifetimes_for_params.iter().map(|e| e.lifetime_count).sum::() == 1 { + if lifetimes == 1 { Ok(*possible_implied_output_region.unwrap()) } else { // Fill in the expensive `name` fields now that we know they're // needed. - for (info, input_pat) in lifetimes_for_params.iter_mut().zip(input_pats()) { + for (info, input_pat) in lifetimes_for_params.iter_mut().zip(input_pats) { info.name = input_pat; } Err(Some(lifetimes_for_params)) @@ -636,8 +618,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let inputs = self.tcx().mk_type_list(data.inputs.iter().map(|a_t| { self.ast_ty_arg_to_ty(&binding_rscope, None, region_substs, a_t) })); - let inputs_len = inputs.len(); - let input_params = || vec![String::new(); inputs_len]; + let input_params = iter::repeat(String::new()).take(inputs.len()); let implied_output_region = self.find_implied_output_region(&inputs, input_params); let (output, output_span) = match data.output { @@ -653,7 +634,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { }; let output_binding = ConvertedBinding { - item_name: token::intern(FN_OUTPUT_NAME), + item_name: Symbol::intern(FN_OUTPUT_NAME), ty: output, span: output_span }; @@ -672,7 +653,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let trait_def_id = self.trait_def_id(trait_ref); self.ast_path_to_poly_trait_ref(rscope, trait_ref.path.span, - PathParamMode::Explicit, trait_def_id, self_ty, trait_ref.ref_id, @@ -695,7 +675,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let trait_def_id = self.trait_def_id(trait_ref); self.ast_path_to_mono_trait_ref(rscope, trait_ref.path.span, - PathParamMode::Explicit, trait_def_id, self_ty, trait_ref.path.segments.last().unwrap()) @@ -703,7 +682,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn trait_def_id(&self, trait_ref: &hir::TraitRef) -> DefId { let path = &trait_ref.path; - match self.tcx().expect_def(trait_ref.ref_id) { + match path.def { Def::Trait(trait_def_id) => trait_def_id, Def::Err => { self.tcx().sess.fatal("cannot continue compilation due to previous error"); @@ -718,7 +697,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn ast_path_to_poly_trait_ref(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, trait_def_id: DefId, self_ty: Ty<'tcx>, path_id: ast::NodeId, @@ -737,7 +715,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let (substs, assoc_bindings) = self.create_substs_for_ast_trait_ref(shifted_rscope, span, - param_mode, trait_def_id, self_ty, 
trait_segment); @@ -760,7 +737,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn ast_path_to_mono_trait_ref(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, trait_def_id: DefId, self_ty: Ty<'tcx>, trait_segment: &hir::PathSegment) @@ -769,7 +745,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let (substs, assoc_bindings) = self.create_substs_for_ast_trait_ref(rscope, span, - param_mode, trait_def_id, self_ty, trait_segment); @@ -780,7 +755,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn create_substs_for_ast_trait_ref(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, trait_def_id: DefId, self_ty: Ty<'tcx>, trait_segment: &hir::PathSegment) @@ -825,12 +799,21 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { self.create_substs_for_ast_path(rscope, span, - param_mode, trait_def_id, &trait_segment.parameters, Some(self_ty)) } + fn trait_defines_associated_type_named(&self, + trait_def_id: DefId, + assoc_name: ast::Name) + -> bool + { + self.tcx().associated_items(trait_def_id).any(|item| { + item.kind == ty::AssociatedKind::Type && item.name == assoc_name + }) + } + fn ast_type_binding_to_poly_projection_predicate( &self, path_id: ast::NodeId, @@ -903,10 +886,9 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // those that do. self.ensure_super_predicates(binding.span, trait_ref.def_id())?; - let candidates: Vec = + let candidates = traits::supertraits(tcx, trait_ref.clone()) - .filter(|r| self.trait_defines_associated_type_named(r.def_id(), binding.item_name)) - .collect(); + .filter(|r| self.trait_defines_associated_type_named(r.def_id(), binding.item_name)); let candidate = self.one_bound_for_assoc_type(candidates, &trait_ref.to_string(), @@ -927,14 +909,13 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn ast_path_to_ty(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, did: DefId, item_segment: &hir::PathSegment) -> Ty<'tcx> { let tcx = self.tcx(); - let decl_ty = match self.get_item_type_scheme(span, did) { - Ok(type_scheme) => type_scheme.ty, + let decl_ty = match self.get_item_type(span, did) { + Ok(ty) => ty, Err(ErrorReported) => { return tcx.types.err; } @@ -942,7 +923,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let substs = self.ast_path_substs_for_ty(rscope, span, - param_mode, did, item_segment); @@ -975,26 +955,21 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let tcx = self.tcx(); match ty.node { - hir::TyPath(None, ref path) => { - let resolution = tcx.expect_resolution(ty.id); - match resolution.base_def { - Def::Trait(trait_def_id) if resolution.depth == 0 => { - self.trait_path_to_object_type(rscope, - path.span, - PathParamMode::Explicit, - trait_def_id, - ty.id, - path.segments.last().unwrap(), - span, - partition_bounds(tcx, span, bounds)) - } - _ => { - struct_span_err!(tcx.sess, ty.span, E0172, - "expected a reference to a trait") - .span_label(ty.span, &format!("expected a trait")) - .emit(); - tcx.types.err - } + hir::TyPath(hir::QPath::Resolved(None, ref path)) => { + if let Def::Trait(trait_def_id) = path.def { + self.trait_path_to_object_type(rscope, + path.span, + trait_def_id, + ty.id, + path.segments.last().unwrap(), + span, + partition_bounds(bounds)) + } else { + struct_span_err!(tcx.sess, ty.span, E0172, + "expected a reference to a trait") + .span_label(ty.span, &format!("expected a trait")) + .emit(); + tcx.types.err } } _ => { @@ -1053,7 +1028,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn 
trait_path_to_object_type(&self, rscope: &RegionScope, path_span: Span, - param_mode: PathParamMode, trait_def_id: DefId, trait_path_ref_id: ast::NodeId, trait_segment: &hir::PathSegment, @@ -1066,24 +1040,24 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let dummy_self = tcx.mk_ty(TRAIT_OBJECT_DUMMY_SELF); let principal = self.ast_path_to_poly_trait_ref(rscope, path_span, - param_mode, trait_def_id, dummy_self, trait_path_ref_id, trait_segment, &mut projection_bounds); - let PartitionedBounds { builtin_bounds, - trait_bounds, + let PartitionedBounds { trait_bounds, region_bounds } = partitioned_bounds; + let (auto_traits, trait_bounds) = split_auto_traits(tcx, trait_bounds); + if !trait_bounds.is_empty() { let b = &trait_bounds[0]; let span = b.trait_ref.path.span; struct_span_err!(self.tcx().sess, span, E0225, - "only the builtin traits can be used as closure or object bounds") - .span_label(span, &format!("non-builtin trait used as bounds")) + "only Send/Sync traits can be used as additional traits in a trait object") + .span_label(span, &format!("non-Send/Sync additional trait")) .emit(); } @@ -1100,30 +1074,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { ty: b.ty } }) - }).collect(); - - let region_bound = - self.compute_object_lifetime_bound(span, - ®ion_bounds, - existential_principal, - builtin_bounds); - - let region_bound = match region_bound { - Some(r) => r, - None => { - tcx.mk_region(match rscope.object_lifetime_default(span) { - Some(r) => r, - None => { - span_err!(self.tcx().sess, span, E0228, - "the lifetime bound for this object type cannot be deduced \ - from context; please supply an explicit bound"); - ty::ReStatic - } - }) - } - }; - - debug!("region_bound: {:?}", region_bound); + }); // ensure the super predicates and stop if we encountered an error if self.ensure_super_predicates(span, principal.def_id()).is_err() { @@ -1142,22 +1093,11 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { return tcx.types.err; } - let mut associated_types = FnvHashSet::default(); + let mut associated_types = FxHashSet::default(); for tr in traits::supertraits(tcx, principal) { - if let Some(trait_id) = tcx.map.as_local_node_id(tr.def_id()) { - use collect::trait_associated_type_names; - - associated_types.extend(trait_associated_type_names(tcx, trait_id) - .map(|name| (tr.def_id(), name))) - } else { - let trait_items = tcx.impl_or_trait_items(tr.def_id()); - associated_types.extend(trait_items.iter().filter_map(|&def_id| { - match tcx.impl_or_trait_item(def_id) { - ty::TypeTraitItem(ref item) => Some(item.name), - _ => None - } - }).map(|name| (tr.def_id(), name))); - } + associated_types.extend(tcx.associated_items(tr.def_id()) + .filter(|item| item.kind == ty::AssociatedKind::Type) + .map(|item| (tr.def_id(), item.name))); } for projection_bound in &projection_bounds { @@ -1176,12 +1116,37 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { .emit(); } - let ty = tcx.mk_trait(ty::TraitObject { - principal: existential_principal, - region_bound: region_bound, - builtin_bounds: builtin_bounds, - projection_bounds: existential_projections - }); + let mut v = + iter::once(ty::ExistentialPredicate::Trait(*existential_principal.skip_binder())) + .chain(auto_traits.into_iter().map(ty::ExistentialPredicate::AutoTrait)) + .chain(existential_projections + .map(|x| ty::ExistentialPredicate::Projection(*x.skip_binder()))) + .collect::>(); + v.sort_by(|a, b| a.cmp(tcx, b)); + let existential_predicates = ty::Binder(tcx.mk_existential_predicates(v.into_iter())); + + let 
region_bound = self.compute_object_lifetime_bound(span, + ®ion_bounds, + existential_predicates); + + let region_bound = match region_bound { + Some(r) => r, + None => { + tcx.mk_region(match rscope.object_lifetime_default(span) { + Some(r) => r, + None => { + span_err!(self.tcx().sess, span, E0228, + "the lifetime bound for this object type cannot be deduced \ + from context; please supply an explicit bound"); + ty::ReStatic + } + }) + } + }; + + debug!("region_bound: {:?}", region_bound); + + let ty = tcx.mk_dynamic(existential_predicates, region_bound); debug!("trait_object_type: {:?}", ty); ty } @@ -1227,10 +1192,9 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // Check that there is exactly one way to find an associated type with the // correct name. - let suitable_bounds: Vec<_> = + let suitable_bounds = traits::transitive_bounds(tcx, &bounds) - .filter(|b| self.trait_defines_associated_type_named(b.def_id(), assoc_name)) - .collect(); + .filter(|b| self.trait_defines_associated_type_named(b.def_id(), assoc_name)); self.one_bound_for_assoc_type(suitable_bounds, &ty_param_name.as_str(), @@ -1241,36 +1205,29 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // Checks that bounds contains exactly one element and reports appropriate // errors otherwise. - fn one_bound_for_assoc_type(&self, - bounds: Vec>, + fn one_bound_for_assoc_type(&self, + mut bounds: I, ty_param_name: &str, assoc_name: &str, span: Span) -> Result, ErrorReported> + where I: Iterator> { - if bounds.is_empty() { - struct_span_err!(self.tcx().sess, span, E0220, - "associated type `{}` not found for `{}`", - assoc_name, - ty_param_name) - .span_label(span, &format!("associated type `{}` not found", assoc_name)) - .emit(); - return Err(ErrorReported); - } - - if bounds.len() > 1 { - let spans = bounds.iter().map(|b| { - self.tcx().impl_or_trait_items(b.def_id()).iter() - .find(|&&def_id| { - match self.tcx().impl_or_trait_item(def_id) { - ty::TypeTraitItem(ref item) => item.name.as_str() == assoc_name, - _ => false - } - }) - .and_then(|&def_id| self.tcx().map.as_local_node_id(def_id)) - .and_then(|node_id| self.tcx().map.opt_span(node_id)) - }); + let bound = match bounds.next() { + Some(bound) => bound, + None => { + struct_span_err!(self.tcx().sess, span, E0220, + "associated type `{}` not found for `{}`", + assoc_name, + ty_param_name) + .span_label(span, &format!("associated type `{}` not found", assoc_name)) + .emit(); + return Err(ErrorReported); + } + }; + if let Some(bound2) = bounds.next() { + let bounds = iter::once(bound).chain(iter::once(bound2)).chain(bounds); let mut err = struct_span_err!( self.tcx().sess, span, E0221, "ambiguous associated type `{}` in bounds of `{}`", @@ -1278,22 +1235,27 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { ty_param_name); err.span_label(span, &format!("ambiguous associated type `{}`", assoc_name)); - for span_and_bound in spans.zip(&bounds) { - if let Some(span) = span_and_bound.0 { + for bound in bounds { + let bound_span = self.tcx().associated_items(bound.def_id()).find(|item| { + item.kind == ty::AssociatedKind::Type && item.name == assoc_name + }) + .and_then(|item| self.tcx().map.span_if_local(item.def_id)); + + if let Some(span) = bound_span { err.span_label(span, &format!("ambiguous `{}` from `{}`", assoc_name, - span_and_bound.1)); + bound)); } else { span_note!(&mut err, span, "associated type `{}` could derive from `{}`", ty_param_name, - span_and_bound.1); + bound); } } err.emit(); } - Ok(bounds[0].clone()) + return Ok(bound); } // Create a 
type from a path to an associated type. @@ -1302,12 +1264,13 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // the whole path. // Will fail except for T::A and Self::A; i.e., if ty/ty_path_def are not a type // parameter or Self. - fn associated_path_def_to_ty(&self, - span: Span, - ty: Ty<'tcx>, - ty_path_def: Def, - item_segment: &hir::PathSegment) - -> (Ty<'tcx>, Def) + pub fn associated_path_def_to_ty(&self, + ref_id: ast::NodeId, + span: Span, + ty: Ty<'tcx>, + ty_path_def: Def, + item_segment: &hir::PathSegment) + -> (Ty<'tcx>, Def) { let tcx = self.tcx(); let assoc_name = item_segment.name; @@ -1333,11 +1296,10 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { return (tcx.types.err, Def::Err); } - let candidates: Vec = + let candidates = traits::supertraits(tcx, ty::Binder(trait_ref)) .filter(|r| self.trait_defines_associated_type_named(r.def_id(), - assoc_name)) - .collect(); + assoc_name)); match self.one_bound_for_assoc_type(candidates, "Self", @@ -1383,31 +1345,15 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let trait_did = bound.0.def_id; let ty = self.projected_ty_from_poly_trait_ref(span, bound, assoc_name); - let item_did = if let Some(trait_id) = tcx.map.as_local_node_id(trait_did) { - // `ty::trait_items` used below requires information generated - // by type collection, which may be in progress at this point. - match tcx.map.expect_item(trait_id).node { - hir::ItemTrait(.., ref trait_items) => { - let item = trait_items.iter() - .find(|i| i.name == assoc_name) - .expect("missing associated type"); - tcx.map.local_def_id(item.id) - } - _ => bug!() - } - } else { - let trait_items = tcx.trait_items(trait_did); - let item = trait_items.iter().find(|i| i.name() == assoc_name); - item.expect("missing associated type").def_id() - }; - - (ty, Def::AssociatedTy(item_did)) + let item = tcx.associated_items(trait_did).find(|i| i.name == assoc_name); + let def_id = item.expect("missing associated type").def_id; + tcx.check_stability(def_id, ref_id, span); + (ty, Def::AssociatedTy(def_id)) } fn qpath_to_ty(&self, rscope: &RegionScope, span: Span, - param_mode: PathParamMode, opt_self_ty: Option>, trait_def_id: DefId, trait_segment: &hir::PathSegment, @@ -1433,7 +1379,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let trait_ref = self.ast_path_to_mono_trait_ref(rscope, span, - param_mode, trait_def_id, self_ty, trait_segment); @@ -1472,60 +1417,54 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { } } - // Check the base def in a PathResolution and convert it to a Ty. If there are - // associated types in the PathResolution, these will need to be separately - // resolved. - fn base_def_to_ty(&self, - rscope: &RegionScope, - span: Span, - param_mode: PathParamMode, - def: Def, - opt_self_ty: Option>, - base_path_ref_id: ast::NodeId, - base_segments: &[hir::PathSegment], - permit_variants: bool) - -> Ty<'tcx> { + // Check a type Path and convert it to a Ty. + pub fn def_to_ty(&self, + rscope: &RegionScope, + opt_self_ty: Option>, + path: &hir::Path, + path_id: ast::NodeId, + permit_variants: bool) + -> Ty<'tcx> { let tcx = self.tcx(); - debug!("base_def_to_ty(def={:?}, opt_self_ty={:?}, base_segments={:?})", - def, opt_self_ty, base_segments); + debug!("base_def_to_ty(def={:?}, opt_self_ty={:?}, path_segments={:?})", + path.def, opt_self_ty, path.segments); - match def { + let span = path.span; + match path.def { Def::Trait(trait_def_id) => { // N.B. 
this case overlaps somewhat with // TyObjectSum, see that fn for details - tcx.prohibit_type_params(base_segments.split_last().unwrap().1); + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(path.segments.split_last().unwrap().1); self.trait_path_to_object_type(rscope, span, - param_mode, trait_def_id, - base_path_ref_id, - base_segments.last().unwrap(), + path_id, + path.segments.last().unwrap(), span, - partition_bounds(tcx, span, &[])) + partition_bounds(&[])) } Def::Enum(did) | Def::TyAlias(did) | Def::Struct(did) | Def::Union(did) => { - tcx.prohibit_type_params(base_segments.split_last().unwrap().1); - self.ast_path_to_ty(rscope, - span, - param_mode, - did, - base_segments.last().unwrap()) + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(path.segments.split_last().unwrap().1); + self.ast_path_to_ty(rscope, span, did, path.segments.last().unwrap()) } Def::Variant(did) if permit_variants => { // Convert "variant type" as if it were a real type. // The resulting `Ty` is type of the variant's enum for now. - tcx.prohibit_type_params(base_segments.split_last().unwrap().1); + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(path.segments.split_last().unwrap().1); self.ast_path_to_ty(rscope, span, - param_mode, tcx.parent_def_id(did).unwrap(), - base_segments.last().unwrap()) + path.segments.last().unwrap()) } Def::TyParam(did) => { - tcx.prohibit_type_params(base_segments); + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(&path.segments); let node_id = tcx.map.as_local_node_id(did).unwrap(); let param = tcx.ty_param_defs.borrow().get(&node_id) @@ -1547,9 +1486,9 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { Def::SelfTy(_, Some(def_id)) => { // Self in impl (we know the concrete type). - tcx.prohibit_type_params(base_segments); - let impl_id = tcx.map.as_local_node_id(def_id).unwrap(); - let ty = tcx.tables().node_id_to_type(impl_id); + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(&path.segments); + let ty = tcx.item_type(def_id); if let Some(free_substs) = self.get_free_substs() { ty.subst(tcx, free_substs) } else { @@ -1558,35 +1497,23 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { } Def::SelfTy(Some(_), None) => { // Self in trait. - tcx.prohibit_type_params(base_segments); + assert_eq!(opt_self_ty, None); + tcx.prohibit_type_params(&path.segments); tcx.mk_self_type() } Def::AssociatedTy(def_id) => { - tcx.prohibit_type_params(&base_segments[..base_segments.len()-2]); + tcx.prohibit_type_params(&path.segments[..path.segments.len()-2]); let trait_did = tcx.parent_def_id(def_id).unwrap(); self.qpath_to_ty(rscope, span, - param_mode, opt_self_ty, trait_did, - &base_segments[base_segments.len()-2], - base_segments.last().unwrap()) - } - Def::Mod(..) => { - // Used as sentinel by callers to indicate the `::A::B::C` form. - // FIXME(#22519) This part of the resolution logic should be - // avoided entirely for that form, once we stop needed a Def - // for `associated_path_def_to_ty`. - // Fixing this will also let use resolve ::Foo the same way we - // resolve Self::Foo, at the moment we can't resolve the former because - // we don't have the trait information around, which is just sad. 
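The removed comment above concerns resolving type-relative paths such as `Self::Foo`. For orientation, a small sketch of the surface feature being handled here (trait and function names are illustrative):

```rust
trait Trait { type Output; fn get(&self) -> Self::Output; }

// `T::Output` and `Self::Output` resolve through a bound in scope; the fully
// qualified form `<T as Trait>::Output` names the trait explicitly.
fn call<T: Trait>(x: &T) -> <T as Trait>::Output {
    x.get()
}
```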
- - assert!(base_segments.is_empty()); - - opt_self_ty.expect("missing T in ::a::b::c") + &path.segments[path.segments.len()-2], + path.segments.last().unwrap()) } Def::PrimTy(prim_ty) => { - tcx.prim_ty_to_ty(base_segments, prim_ty) + assert_eq!(opt_self_ty, None); + tcx.prim_ty_to_ty(&path.segments, prim_ty) } Def::Err => { self.set_tainted_by_errors(); @@ -1595,7 +1522,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { _ => { struct_span_err!(tcx.sess, span, E0248, "found value `{}` used as a type", - tcx.item_path_str(def.def_id())) + tcx.item_path_str(path.def.def_id())) .span_label(span, &format!("value used as a type")) .emit(); return self.tcx().types.err; @@ -1603,52 +1530,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { } } - // Resolve possibly associated type path into a type and final definition. - // Note that both base_segments and assoc_segments may be empty, although not at same time. - pub fn finish_resolving_def_to_ty(&self, - rscope: &RegionScope, - span: Span, - param_mode: PathParamMode, - base_def: Def, - opt_self_ty: Option>, - base_path_ref_id: ast::NodeId, - base_segments: &[hir::PathSegment], - assoc_segments: &[hir::PathSegment], - permit_variants: bool) - -> (Ty<'tcx>, Def) { - // Convert the base type. - debug!("finish_resolving_def_to_ty(base_def={:?}, \ - base_segments={:?}, \ - assoc_segments={:?})", - base_def, - base_segments, - assoc_segments); - let base_ty = self.base_def_to_ty(rscope, - span, - param_mode, - base_def, - opt_self_ty, - base_path_ref_id, - base_segments, - permit_variants); - debug!("finish_resolving_def_to_ty: base_def_to_ty returned {:?}", base_ty); - - // If any associated type segments remain, attempt to resolve them. - let (mut ty, mut def) = (base_ty, base_def); - for segment in assoc_segments { - debug!("finish_resolving_def_to_ty: segment={:?}", segment); - // This is pretty bad (it will fail except for T::A and Self::A). - let (new_ty, new_def) = self.associated_path_def_to_ty(span, ty, def, segment); - ty = new_ty; - def = new_def; - - if def == Def::Err { - break; - } - } - (ty, def) - } - /// Parses the programmer's textual representation of a type into our /// internal notion of a type. pub fn ast_ty_to_ty(&self, rscope: &RegionScope, ast_ty: &hir::Ty) -> Ty<'tcx> { @@ -1694,13 +1575,12 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { hir::TyBareFn(ref bf) => { require_c_abi_if_variadic(tcx, &bf.decl, bf.abi, ast_ty.span); let anon_scope = rscope.anon_type_scope(); - let (bare_fn_ty, _) = - self.ty_of_method_or_bare_fn(bf.unsafety, - bf.abi, - None, - &bf.decl, - anon_scope, - anon_scope); + let bare_fn_ty = self.ty_of_method_or_bare_fn(bf.unsafety, + bf.abi, + None, + &bf.decl, + anon_scope, + anon_scope); // Find any late-bound regions declared in return type that do // not appear in the arguments. These are not wellformed. @@ -1717,7 +1597,8 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // checking for here would be considered early bound // anyway.) 
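The comment above refers to late-bound regions that appear only in a function type's return type. A minimal sketch of the rejected and accepted forms, assuming the usual `for<'a>` higher-ranked syntax:

```rust
// A late-bound lifetime that appears only in the return type has nothing to
// constrain it and is rejected; it must also appear in an input type.
// type Bad = for<'a> fn() -> &'a u32;           // error: unconstrained lifetime
type Fine = for<'a> fn(&'a u32) -> &'a u32;      // ok: `'a` appears in an argument
```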
let inputs = bare_fn_ty.sig.inputs(); - let late_bound_in_args = tcx.collect_constrained_late_bound_regions(&inputs); + let late_bound_in_args = tcx.collect_constrained_late_bound_regions( + &inputs.map_bound(|i| i.to_owned())); let output = bare_fn_ty.sig.output(); let late_bound_in_ret = tcx.collect_referenced_late_bound_regions(&output); for br in late_bound_in_ret.difference(&late_bound_in_args) { @@ -1772,29 +1653,23 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { tcx.types.err } } - hir::TyPath(ref maybe_qself, ref path) => { + hir::TyPath(hir::QPath::Resolved(ref maybe_qself, ref path)) => { debug!("ast_ty_to_ty: maybe_qself={:?} path={:?}", maybe_qself, path); - let path_res = tcx.expect_resolution(ast_ty.id); - let base_ty_end = path.segments.len() - path_res.depth; let opt_self_ty = maybe_qself.as_ref().map(|qself| { - self.ast_ty_to_ty(rscope, &qself.ty) + self.ast_ty_to_ty(rscope, qself) }); - let (ty, def) = self.finish_resolving_def_to_ty(rscope, - ast_ty.span, - PathParamMode::Explicit, - path_res.base_def, - opt_self_ty, - ast_ty.id, - &path.segments[..base_ty_end], - &path.segments[base_ty_end..], - false); + self.def_to_ty(rscope, opt_self_ty, path, ast_ty.id, false) + } + hir::TyPath(hir::QPath::TypeRelative(ref qself, ref segment)) => { + debug!("ast_ty_to_ty: qself={:?} segment={:?}", qself, segment); + let ty = self.ast_ty_to_ty(rscope, qself); - // Write back the new resolution. - if path_res.depth != 0 { - tcx.def_map.borrow_mut().insert(ast_ty.id, PathResolution::new(def)); - } - - ty + let def = if let hir::TyPath(hir::QPath::Resolved(_, ref path)) = qself.node { + path.def + } else { + Def::Err + }; + self.associated_path_def_to_ty(ast_ty.id, ast_ty.span, ty, def, segment).0 } hir::TyArray(ref ty, ref e) => { if let Ok(length) = eval_length(tcx.global_tcx(), &e, "array length") { @@ -1842,7 +1717,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { sig: &hir::MethodSig, untransformed_self_ty: Ty<'tcx>, anon_scope: Option) - -> (&'tcx ty::BareFnTy<'tcx>, ty::ExplicitSelfCategory<'tcx>) { + -> &'tcx ty::BareFnTy<'tcx> { self.ty_of_method_or_bare_fn(sig.unsafety, sig.abi, Some(untransformed_self_ty), @@ -1857,7 +1732,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { decl: &hir::FnDecl, anon_scope: Option) -> &'tcx ty::BareFnTy<'tcx> { - self.ty_of_method_or_bare_fn(unsafety, abi, None, decl, None, anon_scope).0 + self.ty_of_method_or_bare_fn(unsafety, abi, None, decl, None, anon_scope) } fn ty_of_method_or_bare_fn(&self, @@ -1867,7 +1742,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { decl: &hir::FnDecl, arg_anon_scope: Option, ret_anon_scope: Option) - -> (&'tcx ty::BareFnTy<'tcx>, ty::ExplicitSelfCategory<'tcx>) + -> &'tcx ty::BareFnTy<'tcx> { debug!("ty_of_method_or_bare_fn"); @@ -1880,13 +1755,13 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // lifetime elision, we can determine it in two ways. First (determined // here), if self is by-reference, then the implied output region is the // region of the self parameter. 
- let (self_ty, explicit_self_category) = match (opt_untransformed_self_ty, decl.get_self()) { + let (self_ty, explicit_self) = match (opt_untransformed_self_ty, decl.get_self()) { (Some(untransformed_self_ty), Some(explicit_self)) => { let self_type = self.determine_self_type(&rb, untransformed_self_ty, &explicit_self); - (Some(self_type.0), self_type.1) + (Some(self_type), Some(ExplicitSelf::determine(untransformed_self_ty, self_type))) } - _ => (None, ty::ExplicitSelfCategory::Static), + _ => (None, None), }; // HACK(eddyb) replace the fake self type in the AST with the actual type. @@ -1901,17 +1776,13 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // Second, if there was exactly one lifetime (either a substitution or a // reference) in the arguments, then any anonymous regions in the output // have that lifetime. - let implied_output_region = match explicit_self_category { - ty::ExplicitSelfCategory::ByReference(region, _) => Ok(*region), + let implied_output_region = match explicit_self { + Some(ExplicitSelf::ByReference(region, _)) => Ok(*region), _ => { - // `pat_to_string` is expensive and - // `find_implied_output_region` only needs its result when - // there's an error. So we wrap it in a closure to avoid - // calling it until necessary. - let arg_pats = || { - arg_params.iter().map(|a| pprust::pat_to_string(&a.pat)).collect() - }; - self.find_implied_output_region(&arg_tys, arg_pats) + self.find_implied_output_region(&arg_tys, + arg_params.iter() + .map(|a| pprust::pat_to_string(&a.pat))) + } }; @@ -1923,108 +1794,40 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { hir::DefaultReturn(..) => self.tcx().mk_nil(), }; - let input_tys = self_ty.into_iter().chain(arg_tys).collect(); - - debug!("ty_of_method_or_bare_fn: input_tys={:?}", input_tys); debug!("ty_of_method_or_bare_fn: output_ty={:?}", output_ty); - (self.tcx().mk_bare_fn(ty::BareFnTy { + self.tcx().mk_bare_fn(ty::BareFnTy { unsafety: unsafety, abi: abi, - sig: ty::Binder(ty::FnSig { - inputs: input_tys, - output: output_ty, - variadic: decl.variadic - }), - }), explicit_self_category) + sig: ty::Binder(self.tcx().mk_fn_sig( + self_ty.into_iter().chain(arg_tys), + output_ty, + decl.variadic + )), + }) } fn determine_self_type<'a>(&self, rscope: &RegionScope, untransformed_self_ty: Ty<'tcx>, explicit_self: &hir::ExplicitSelf) - -> (Ty<'tcx>, ty::ExplicitSelfCategory<'tcx>) + -> Ty<'tcx> { - return match explicit_self.node { - SelfKind::Value(..) => { - (untransformed_self_ty, ty::ExplicitSelfCategory::ByValue) - } + match explicit_self.node { + SelfKind::Value(..) => untransformed_self_ty, SelfKind::Region(ref lifetime, mutability) => { let region = self.opt_ast_region_to_region( rscope, explicit_self.span, lifetime); - (self.tcx().mk_ref(region, + self.tcx().mk_ref(region, ty::TypeAndMut { ty: untransformed_self_ty, mutbl: mutability - }), - ty::ExplicitSelfCategory::ByReference(region, mutability)) - } - SelfKind::Explicit(ref ast_type, _) => { - let explicit_type = self.ast_ty_to_ty(rscope, &ast_type); - - // We wish to (for now) categorize an explicit self - // declaration like `self: SomeType` into either `self`, - // `&self`, `&mut self`, or `Box`. We do this here - // by some simple pattern matching. A more precise check - // is done later in `check_method_self_type()`. 
- // - // Examples: - // - // ``` - // impl Foo for &T { - // // Legal declarations: - // fn method1(self: &&T); // ExplicitSelfCategory::ByReference - // fn method2(self: &T); // ExplicitSelfCategory::ByValue - // fn method3(self: Box<&T>); // ExplicitSelfCategory::ByBox - // - // // Invalid cases will be caught later by `check_method_self_type`: - // fn method_err1(self: &mut T); // ExplicitSelfCategory::ByReference - // } - // ``` - // - // To do the check we just count the number of "modifiers" - // on each type and compare them. If they are the same or - // the impl has more, we call it "by value". Otherwise, we - // look at the outermost modifier on the method decl and - // call it by-ref, by-box as appropriate. For method1, for - // example, the impl type has one modifier, but the method - // type has two, so we end up with - // ExplicitSelfCategory::ByReference. - - let impl_modifiers = count_modifiers(untransformed_self_ty); - let method_modifiers = count_modifiers(explicit_type); - - debug!("determine_explicit_self_category(self_info.untransformed_self_ty={:?} \ - explicit_type={:?} \ - modifiers=({},{})", - untransformed_self_ty, - explicit_type, - impl_modifiers, - method_modifiers); - - let category = if impl_modifiers >= method_modifiers { - ty::ExplicitSelfCategory::ByValue - } else { - match explicit_type.sty { - ty::TyRef(r, mt) => ty::ExplicitSelfCategory::ByReference(r, mt.mutbl), - ty::TyBox(_) => ty::ExplicitSelfCategory::ByBox, - _ => ty::ExplicitSelfCategory::ByValue, - } - }; - - (explicit_type, category) - } - }; - - fn count_modifiers(ty: Ty) -> usize { - match ty.sty { - ty::TyRef(_, mt) => count_modifiers(mt.ty) + 1, - ty::TyBox(t) => count_modifiers(t) + 1, - _ => 0, + }) } + SelfKind::Explicit(ref ast_type, _) => self.ast_ty_to_ty(rscope, &ast_type) } } @@ -2042,20 +1845,20 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { // that function type let rb = rscope::BindingRscope::new(); - let input_tys: Vec<_> = decl.inputs.iter().enumerate().map(|(i, a)| { + let input_tys = decl.inputs.iter().enumerate().map(|(i, a)| { let expected_arg_ty = expected_sig.as_ref().and_then(|e| { // no guarantee that the correct number of expected args // were supplied - if i < e.inputs.len() { - Some(e.inputs[i]) + if i < e.inputs().len() { + Some(e.inputs()[i]) } else { None } }); self.ty_of_arg(&rb, a, expected_arg_ty) - }).collect(); + }); - let expected_ret_ty = expected_sig.map(|e| e.output); + let expected_ret_ty = expected_sig.as_ref().map(|e| e.output()); let is_infer = match decl.output { hir::Return(ref output) if output.node == hir::TyInfer => true, @@ -2072,15 +1875,12 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { hir::DefaultReturn(..) 
=> bug!(), }; - debug!("ty_of_closure: input_tys={:?}", input_tys); debug!("ty_of_closure: output_ty={:?}", output_ty); ty::ClosureTy { unsafety: unsafety, abi: abi, - sig: ty::Binder(ty::FnSig {inputs: input_tys, - output: output_ty, - variadic: decl.variadic}), + sig: ty::Binder(self.tcx().mk_fn_sig(input_tys, output_ty, decl.variadic)), } } @@ -2090,7 +1890,7 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { ast_bounds: &[hir::TyParamBound]) -> Ty<'tcx> { - let mut partitioned_bounds = partition_bounds(self.tcx(), span, &ast_bounds[..]); + let mut partitioned_bounds = partition_bounds(ast_bounds); let trait_bound = if !partitioned_bounds.trait_bounds.is_empty() { partitioned_bounds.trait_bounds.remove(0) @@ -2104,7 +1904,6 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { let trait_def_id = self.trait_def_id(trait_ref); self.trait_path_to_object_type(rscope, trait_ref.path.span, - PathParamMode::Explicit, trait_def_id, trait_ref.ref_id, trait_ref.path.segments.last().unwrap(), @@ -2120,38 +1919,36 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { fn compute_object_lifetime_bound(&self, span: Span, explicit_region_bounds: &[&hir::Lifetime], - principal_trait_ref: ty::PolyExistentialTraitRef<'tcx>, - builtin_bounds: ty::BuiltinBounds) + existential_predicates: ty::Binder<&'tcx ty::Slice>>) -> Option<&'tcx ty::Region> // if None, use the default { let tcx = self.tcx(); debug!("compute_opt_region_bound(explicit_region_bounds={:?}, \ - principal_trait_ref={:?}, builtin_bounds={:?})", + existential_predicates={:?})", explicit_region_bounds, - principal_trait_ref, - builtin_bounds); + existential_predicates); if explicit_region_bounds.len() > 1 { span_err!(tcx.sess, explicit_region_bounds[1].span, E0226, "only a single explicit lifetime bound is permitted"); } - if !explicit_region_bounds.is_empty() { + if let Some(&r) = explicit_region_bounds.get(0) { // Explicitly specified region bound. Use that. - let r = explicit_region_bounds[0]; return Some(ast_region_to_region(tcx, r)); } - if let Err(ErrorReported) = - self.ensure_super_predicates(span, principal_trait_ref.def_id()) { - return Some(tcx.mk_region(ty::ReStatic)); + if let Some(principal) = existential_predicates.principal() { + if let Err(ErrorReported) = self.ensure_super_predicates(span, principal.def_id()) { + return Some(tcx.mk_region(ty::ReStatic)); + } } // No explicit region bound specified. Therefore, examine trait // bounds and see if we can derive region bounds from those. let derived_region_bounds = - object_region_bounds(tcx, principal_trait_ref, builtin_bounds); + object_region_bounds(tcx, existential_predicates); // If there are no derived region bounds, then report back that we // can find no region bound. The caller will use the default. @@ -2178,46 +1975,62 @@ impl<'o, 'gcx: 'tcx, 'tcx> AstConv<'gcx, 'tcx>+'o { } pub struct PartitionedBounds<'a> { - pub builtin_bounds: ty::BuiltinBounds, pub trait_bounds: Vec<&'a hir::PolyTraitRef>, pub region_bounds: Vec<&'a hir::Lifetime>, } -/// Divides a list of bounds from the AST into three groups: builtin bounds (Copy, Sized etc), -/// general trait bounds, and region bounds. -pub fn partition_bounds<'a, 'b, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, - _span: Span, - ast_bounds: &'b [hir::TyParamBound]) - -> PartitionedBounds<'b> +/// Divides a list of general trait bounds into two groups: builtin bounds (Sync/Send) and the +/// remaining general trait bounds. 
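The doc comment above describes splitting the Send/Sync auto traits off from a trait object's other bounds. A user-level sketch of the rule, written with present-day `dyn` syntax (the alias names are illustrative):

```rust
use std::fmt::Debug;

// Only auto traits such as `Send` and `Sync` may follow the principal trait of
// a trait object; a second non-auto trait is rejected (E0225).
type Obj = Box<dyn Debug + Send + Sync>;
// type Rejected = Box<dyn Debug + std::fmt::Display>;   // error[E0225]
```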
+fn split_auto_traits<'a, 'b, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, + trait_bounds: Vec<&'b hir::PolyTraitRef>) + -> (Vec, Vec<&'b hir::PolyTraitRef>) +{ + let (auto_traits, trait_bounds): (Vec<_>, _) = trait_bounds.into_iter().partition(|bound| { + match bound.trait_ref.path.def { + Def::Trait(trait_did) => { + // Checks whether `trait_did` refers to one of the builtin + // traits, like `Send`, and adds it to `auto_traits` if so. + if Some(trait_did) == tcx.lang_items.send_trait() || + Some(trait_did) == tcx.lang_items.sync_trait() { + let segments = &bound.trait_ref.path.segments; + let parameters = &segments[segments.len() - 1].parameters; + if !parameters.types().is_empty() { + check_type_argument_count(tcx, bound.trait_ref.path.span, + parameters.types().len(), &[]); + } + if !parameters.lifetimes().is_empty() { + report_lifetime_number_error(tcx, bound.trait_ref.path.span, + parameters.lifetimes().len(), 0); + } + true + } else { + false + } + } + _ => false + } + }); + + let auto_traits = auto_traits.into_iter().map(|tr| { + if let Def::Trait(trait_did) = tr.trait_ref.path.def { + trait_did + } else { + unreachable!() + } + }).collect::>(); + + (auto_traits, trait_bounds) +} + +/// Divides a list of bounds from the AST into two groups: general trait bounds and region bounds +pub fn partition_bounds<'a, 'b, 'gcx, 'tcx>(ast_bounds: &'b [hir::TyParamBound]) + -> PartitionedBounds<'b> { - let mut builtin_bounds = ty::BuiltinBounds::empty(); let mut region_bounds = Vec::new(); let mut trait_bounds = Vec::new(); for ast_bound in ast_bounds { match *ast_bound { hir::TraitTyParamBound(ref b, hir::TraitBoundModifier::None) => { - match tcx.expect_def(b.trait_ref.ref_id) { - Def::Trait(trait_did) => { - if tcx.try_add_builtin_trait(trait_did, - &mut builtin_bounds) { - let segments = &b.trait_ref.path.segments; - let parameters = &segments[segments.len() - 1].parameters; - if !parameters.types().is_empty() { - check_type_argument_count(tcx, b.trait_ref.path.span, - parameters.types().len(), &[]); - } - if !parameters.lifetimes().is_empty() { - report_lifetime_number_error(tcx, b.trait_ref.path.span, - parameters.lifetimes().len(), 0); - } - continue; // success - } - } - _ => { - // Not a trait? that's an error, but it'll get - // reported later. 
- } - } trait_bounds.push(b); } hir::TraitTyParamBound(_, hir::TraitBoundModifier::Maybe) => {} @@ -2228,7 +2041,6 @@ pub fn partition_bounds<'a, 'b, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, } PartitionedBounds { - builtin_bounds: builtin_bounds, trait_bounds: trait_bounds, region_bounds: region_bounds, } @@ -2245,27 +2057,32 @@ fn check_type_argument_count(tcx: TyCtxt, span: Span, supplied: usize, "expected" }; let arguments_plural = if required == 1 { "" } else { "s" }; - struct_span_err!(tcx.sess, span, E0243, "wrong number of type arguments") - .span_label( - span, - &format!("{} {} type argument{}, found {}", - expected, required, arguments_plural, supplied) - ) + + struct_span_err!(tcx.sess, span, E0243, + "wrong number of type arguments: {} {}, found {}", + expected, required, supplied) + .span_label(span, + &format!("{} {} type argument{}", + expected, + required, + arguments_plural)) .emit(); } else if supplied > accepted { - let expected = if required == 0 { - "expected no".to_string() - } else if required < accepted { + let expected = if required < accepted { format!("expected at most {}", accepted) } else { format!("expected {}", accepted) }; let arguments_plural = if accepted == 1 { "" } else { "s" }; - struct_span_err!(tcx.sess, span, E0244, "wrong number of type arguments") + struct_span_err!(tcx.sess, span, E0244, + "wrong number of type arguments: {}, found {}", + expected, supplied) .span_label( span, - &format!("{} type argument{}, found {}", expected, arguments_plural, supplied) + &format!("{} type argument{}", + if accepted == 0 { "expected no" } else { &expected }, + arguments_plural) ) .emit(); } @@ -2298,7 +2115,7 @@ fn report_lifetime_number_error(tcx: TyCtxt, span: Span, number: usize, expected #[derive(PartialEq, Eq, Clone, Debug)] pub struct Bounds<'tcx> { pub region_bounds: Vec<&'tcx ty::Region>, - pub builtin_bounds: ty::BuiltinBounds, + pub implicitly_sized: bool, pub trait_bounds: Vec>, pub projection_bounds: Vec>, } @@ -2309,10 +2126,14 @@ impl<'a, 'gcx, 'tcx> Bounds<'tcx> { { let mut vec = Vec::new(); - for builtin_bound in &self.builtin_bounds { - match tcx.trait_ref_for_builtin_bound(builtin_bound, param_ty) { - Ok(trait_ref) => { vec.push(trait_ref.to_predicate()); } - Err(ErrorReported) => { } + // If it could be sized, and is, add the sized predicate + if self.implicitly_sized { + if let Some(sized) = tcx.lang_items.sized_trait() { + let trait_ref = ty::TraitRef { + def_id: sized, + substs: tcx.mk_substs_trait(param_ty, &[]) + }; + vec.push(trait_ref.to_predicate()); } } @@ -2334,3 +2155,64 @@ impl<'a, 'gcx, 'tcx> Bounds<'tcx> { vec } } + +pub enum ExplicitSelf<'tcx> { + ByValue, + ByReference(&'tcx ty::Region, hir::Mutability), + ByBox +} + +impl<'tcx> ExplicitSelf<'tcx> { + /// We wish to (for now) categorize an explicit self + /// declaration like `self: SomeType` into either `self`, + /// `&self`, `&mut self`, or `Box`. We do this here + /// by some simple pattern matching. A more precise check + /// is done later in `check_method_self_type()`. 
+ /// + /// Examples: + /// + /// ``` + /// impl Foo for &T { + /// // Legal declarations: + /// fn method1(self: &&T); // ExplicitSelf::ByReference + /// fn method2(self: &T); // ExplicitSelf::ByValue + /// fn method3(self: Box<&T>); // ExplicitSelf::ByBox + /// + /// // Invalid cases will be caught later by `check_method_self_type`: + /// fn method_err1(self: &mut T); // ExplicitSelf::ByReference + /// } + /// ``` + /// + /// To do the check we just count the number of "modifiers" + /// on each type and compare them. If they are the same or + /// the impl has more, we call it "by value". Otherwise, we + /// look at the outermost modifier on the method decl and + /// call it by-ref, by-box as appropriate. For method1, for + /// example, the impl type has one modifier, but the method + /// type has two, so we end up with + /// ExplicitSelf::ByReference. + pub fn determine(untransformed_self_ty: Ty<'tcx>, + self_arg_ty: Ty<'tcx>) + -> ExplicitSelf<'tcx> { + fn count_modifiers(ty: Ty) -> usize { + match ty.sty { + ty::TyRef(_, mt) => count_modifiers(mt.ty) + 1, + ty::TyBox(t) => count_modifiers(t) + 1, + _ => 0, + } + } + + let impl_modifiers = count_modifiers(untransformed_self_ty); + let method_modifiers = count_modifiers(self_arg_ty); + + if impl_modifiers >= method_modifiers { + ExplicitSelf::ByValue + } else { + match self_arg_ty.sty { + ty::TyRef(r, mt) => ExplicitSelf::ByReference(r, mt.mutbl), + ty::TyBox(_) => ExplicitSelf::ByBox, + _ => ExplicitSelf::ByValue, + } + } + } +} diff --git a/src/librustc_typeck/check/_match.rs b/src/librustc_typeck/check/_match.rs index c842514227..624201eaab 100644 --- a/src/librustc_typeck/check/_match.rs +++ b/src/librustc_typeck/check/_match.rs @@ -11,10 +11,12 @@ use rustc::hir::{self, PatKind}; use rustc::hir::def::{Def, CtorKind}; use rustc::hir::pat_util::EnumerateAndAdjustIterator; -use rustc::infer::{self, InferOk, TypeOrigin}; +use rustc::infer; +use rustc::infer::type_variable::TypeVariableOrigin; +use rustc::traits::ObligationCauseCode; use rustc::ty::{self, Ty, TypeFoldable, LvaluePreference}; -use check::{FnCtxt, Expectation}; -use util::nodemap::FnvHashMap; +use check::{FnCtxt, Expectation, Diverges}; +use util::nodemap::FxHashMap; use std::collections::hash_map::Entry::{Occupied, Vacant}; use std::cmp; @@ -102,7 +104,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.demand_eqtype(pat.span, expected, rhs_ty); common_type } - PatKind::Binding(bm, _, ref sub) => { + PatKind::Binding(bm, def_id, _, ref sub) => { let typ = self.local_ty(pat.span, pat.id); match bm { hir::BindByRef(mutbl) => { @@ -129,16 +131,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // if there are multiple arms, make sure they all agree on // what the type of the binding `x` ought to be - match tcx.expect_def(pat.id) { - Def::Err => {} - Def::Local(def_id) => { - let var_id = tcx.map.as_local_node_id(def_id).unwrap(); - if var_id != pat.id { - let vt = self.local_ty(pat.span, var_id); - self.demand_eqtype(pat.span, vt, typ); - } - } - d => bug!("bad def for pattern binding `{:?}`", d) + let var_id = tcx.map.as_local_node_id(def_id).unwrap(); + if var_id != pat.id { + let vt = self.local_ty(pat.span, var_id); + self.demand_eqtype(pat.span, vt, typ); } if let Some(ref p) = *sub { @@ -147,15 +143,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { typ } - PatKind::TupleStruct(ref path, ref subpats, ddpos) => { - self.check_pat_tuple_struct(pat, path, &subpats, ddpos, expected) + PatKind::TupleStruct(ref qpath, ref subpats, ddpos) => { + 
self.check_pat_tuple_struct(pat, qpath, &subpats, ddpos, expected) } - PatKind::Path(ref opt_qself, ref path) => { - let opt_qself_ty = opt_qself.as_ref().map(|qself| self.to_ty(&qself.ty)); - self.check_pat_path(pat, opt_qself_ty, path, expected) + PatKind::Path(ref qpath) => { + self.check_pat_path(pat, qpath, expected) } - PatKind::Struct(ref path, ref fields, etc) => { - self.check_pat_struct(pat, path, fields, etc, expected) + PatKind::Struct(ref qpath, ref fields, etc) => { + self.check_pat_struct(pat, qpath, fields, etc, expected) } PatKind::Tuple(ref elements, ddpos) => { let mut expected_len = elements.len(); @@ -168,7 +163,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } let max_len = cmp::max(expected_len, elements.len()); - let element_tys_iter = (0..max_len).map(|_| self.next_ty_var()); + let element_tys_iter = (0..max_len).map(|_| self.next_ty_var( + // FIXME: MiscVariable for now, obtaining the span and name information + // from all tuple elements isn't trivial. + TypeVariableOrigin::TypeInference(pat.span))); let element_tys = tcx.mk_type_list(element_tys_iter); let pat_ty = tcx.mk_ty(ty::TyTuple(element_tys)); self.demand_eqtype(pat.span, expected, pat_ty); @@ -178,7 +176,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pat_ty } PatKind::Box(ref inner) => { - let inner_ty = self.next_ty_var(); + let inner_ty = self.next_ty_var(TypeVariableOrigin::TypeInference(inner.span)); let uniq_ty = tcx.mk_box(inner_ty); if self.check_dereferencable(pat.span, expected, &inner) { @@ -209,7 +207,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (expected, mt.ty) } _ => { - let inner_ty = self.next_ty_var(); + let inner_ty = self.next_ty_var( + TypeVariableOrigin::TypeInference(inner.span)); let mt = ty::TypeAndMut { ty: inner_ty, mutbl: mutbl }; let region = self.next_region_var(infer::PatternRegion(pat.span)); let rptr_ty = tcx.mk_ref(region, mt); @@ -346,7 +345,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn check_dereferencable(&self, span: Span, expected: Ty<'tcx>, inner: &hir::Pat) -> bool { if let PatKind::Binding(..) = inner.node { if let Some(mt) = self.shallow_resolve(expected).builtin_deref(true, ty::NoPreference) { - if let ty::TyTrait(..) = mt.ty.sty { + if let ty::TyDynamic(..) = mt.ty.sty { // This is "x = SomeTrait" being reduced from // "let &x = &SomeTrait" or "let box x = Box", an error. let type_str = self.ty_to_string(expected); @@ -360,9 +359,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } true } -} -impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn check_match(&self, expr: &'gcx hir::Expr, discrim: &'gcx hir::Expr, @@ -375,7 +372,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // want to use the *precise* type of the discriminant, *not* some // supertype, as the "discriminant type" (issue #23116). let contains_ref_bindings = arms.iter() - .filter_map(|a| tcx.arm_contains_ref_binding(a)) + .filter_map(|a| a.contains_ref_binding()) .max_by_key(|m| match *m { hir::MutMutable => 1, hir::MutImmutable => 0, @@ -387,17 +384,23 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // ...but otherwise we want to use any supertype of the // discriminant. This is sort of a workaround, see note (*) in // `check_pat` for some details. 
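The comment above (note (*) and issue #23116) is about using the precise discriminant type when arms contain `ref` bindings. A small illustrative sketch of the user-visible effect:

```rust
// With a `ref` binding, the scrutinee is checked at its precise type, so the
// borrow below is `&u8` rather than a reference to some inferred supertype.
fn main() {
    let byte = 5u8;
    match byte {
        ref x => { let _: &u8 = x; }
    }
}
```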
- discrim_ty = self.next_ty_var(); + discrim_ty = self.next_ty_var(TypeVariableOrigin::TypeInference(discrim.span)); self.check_expr_has_type(discrim, discrim_ty); }; + let discrim_diverges = self.diverges.get(); + self.diverges.set(Diverges::Maybe); // Typecheck the patterns first, so that we get types for all the // bindings. - for arm in arms { + let all_arm_pats_diverge: Vec<_> = arms.iter().map(|arm| { + let mut all_pats_diverge = Diverges::WarnedAlways; for p in &arm.pats { + self.diverges.set(Diverges::Maybe); self.check_pat(&p, discrim_ty); + all_pats_diverge &= self.diverges.get(); } - } + all_pats_diverge + }).collect(); // Now typecheck the blocks. // @@ -409,7 +412,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // of execution reach it, we will panic, so bottom is an appropriate // type in that case) let expected = expected.adjust_for_branches(self); - let mut result_ty = self.next_diverging_ty_var(); + let mut result_ty = self.next_diverging_ty_var( + TypeVariableOrigin::DivergingBlockExpr(expr.span)); + let mut all_arms_diverge = Diverges::WarnedAlways; let coerce_first = match expected { // We don't coerce to `()` so that if the match expression is a // statement it's branches can have any consistent type. That allows @@ -422,11 +427,15 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { _ => result_ty }; - for (i, arm) in arms.iter().enumerate() { + for (i, (arm, pats_diverge)) in arms.iter().zip(all_arm_pats_diverge).enumerate() { if let Some(ref e) = arm.guard { + self.diverges.set(pats_diverge); self.check_expr_has_type(e, tcx.types.bool); } + + self.diverges.set(pats_diverge); let arm_ty = self.check_expr_with_expectation(&arm.body, expected); + all_arms_diverge &= self.diverges.get(); if result_ty.references_error() || arm_ty.references_error() { result_ty = tcx.types.err; @@ -441,17 +450,19 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { _ => false }; - let origin = if is_if_let_fallback { - TypeOrigin::IfExpressionWithNoElse(expr.span) + let cause = if is_if_let_fallback { + self.cause(expr.span, ObligationCauseCode::IfExpressionWithNoElse) } else { - TypeOrigin::MatchExpressionArm(expr.span, arm.body.span, match_src) + self.cause(expr.span, ObligationCauseCode::MatchExpressionArm { + arm_span: arm.body.span, + source: match_src + }) }; let result = if is_if_let_fallback { - self.eq_types(true, origin, arm_ty, result_ty) - .map(|InferOk { obligations, .. }| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + self.eq_types(true, &cause, arm_ty, result_ty) + .map(|infer_ok| { + self.register_infer_ok_obligations(infer_ok); arm_ty }) } else if i == 0 { @@ -459,7 +470,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.try_coerce(&arm.body, arm_ty, coerce_first) } else { let prev_arms = || arms[..i].iter().map(|arm| &*arm.body); - self.try_find_coercion_lub(origin, prev_arms, result_ty, &arm.body, arm_ty) + self.try_find_coercion_lub(&cause, prev_arms, result_ty, &arm.body, arm_ty) }; result_ty = match result { @@ -470,26 +481,27 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } else { (result_ty, arm_ty) }; - self.report_mismatched_types(origin, expected, found, e); + self.report_mismatched_types(&cause, expected, found, e); self.tcx.types.err } }; } + // We won't diverge unless the discriminant or all arms diverge. 
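The new comment above states when a `match` counts as diverging. A hedged surface-level example of that rule (the function is illustrative):

```rust
// A `match` diverges only if the scrutinee or every arm diverges; a single
// panicking arm does not make the whole expression `!`.
fn classify(n: i32) -> &'static str {
    match n {
        i if i < 0 => panic!("negative"),  // this arm diverges,
        0 => "zero",                       // but the match as a whole does not
        _ => "positive",
    }
}
```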
+ self.diverges.set(discrim_diverges | all_arms_diverge); + result_ty } -} -impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_pat_struct(&self, pat: &'gcx hir::Pat, - path: &hir::Path, + qpath: &hir::QPath, fields: &'gcx [Spanned], etc: bool, expected: Ty<'tcx>) -> Ty<'tcx> { // Resolve the path and check the definition for errors. - let (variant, pat_ty) = if let Some(variant_ty) = self.check_struct_path(path, pat.id) { + let (variant, pat_ty) = if let Some(variant_ty) = self.check_struct_path(qpath, pat.id) { variant_ty } else { for field in fields { @@ -502,26 +514,24 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.demand_eqtype(pat.span, expected, pat_ty); // Type check subpatterns. - self.check_struct_pat_fields(pat_ty, pat.span, variant, fields, etc); + self.check_struct_pat_fields(pat_ty, pat.id, pat.span, variant, fields, etc); pat_ty } fn check_pat_path(&self, pat: &hir::Pat, - opt_self_ty: Option>, - path: &hir::Path, + qpath: &hir::QPath, expected: Ty<'tcx>) -> Ty<'tcx> { let tcx = self.tcx; let report_unexpected_def = |def: Def| { span_err!(tcx.sess, pat.span, E0533, "expected unit struct/variant or constant, found {} `{}`", - def.kind_name(), path); + def.kind_name(), qpath); }; // Resolve the path and check the definition for errors. - let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(opt_self_ty, path, - pat.id, pat.span); + let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(qpath, pat.id, pat.span); match def { Def::Err => { self.set_tainted_by_errors(); @@ -545,7 +555,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_pat_tuple_struct(&self, pat: &hir::Pat, - path: &hir::Path, + qpath: &hir::QPath, subpats: &'gcx [P], ddpos: Option, expected: Ty<'tcx>) -> Ty<'tcx> @@ -558,14 +568,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { }; let report_unexpected_def = |def: Def| { let msg = format!("expected tuple struct/variant, found {} `{}`", - def.kind_name(), path); + def.kind_name(), qpath); struct_span_err!(tcx.sess, pat.span, E0164, "{}", msg) .span_label(pat.span, &format!("not a tuple variant or struct")).emit(); on_error(); }; // Resolve the path and check the definition for errors. - let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(None, path, pat.id, pat.span); + let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(qpath, pat.id, pat.span); let variant = match def { Def::Err => { self.set_tainted_by_errors(); @@ -599,6 +609,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { for (i, subpat) in subpats.iter().enumerate_and_adjust(variant.fields.len(), ddpos) { let field_ty = self.field_ty(subpat.span, &variant.fields[i], substs); self.check_pat(&subpat, field_ty); + + self.tcx.check_stability(variant.fields[i].did, pat.id, subpat.span); } } else { let subpats_ending = if subpats.len() == 1 { "" } else { "s" }; @@ -618,8 +630,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_struct_pat_fields(&self, adt_ty: Ty<'tcx>, + pat_id: ast::NodeId, span: Span, - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, fields: &'gcx [Spanned], etc: bool) { let tcx = self.tcx; @@ -633,10 +646,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let field_map = variant.fields .iter() .map(|field| (field.name, field)) - .collect::>(); + .collect::>(); // Keep track of which fields have already appeared in the pattern. - let mut used_fields = FnvHashMap(); + let mut used_fields = FxHashMap(); // Typecheck each field. 
for &Spanned { node: ref field, span } in fields { @@ -655,7 +668,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { Vacant(vacant) => { vacant.insert(span); field_map.get(&field.name) - .map(|f| self.field_ty(span, f, substs)) + .map(|f| { + self.tcx.check_stability(f.did, pat_id, span); + + self.field_ty(span, f, substs) + }) .unwrap_or_else(|| { struct_span_err!(tcx.sess, span, E0026, "{} `{}` does not have a field named `{}`", diff --git a/src/librustc_typeck/check/autoderef.rs b/src/librustc_typeck/check/autoderef.rs index 900c22a817..e72dba858c 100644 --- a/src/librustc_typeck/check/autoderef.rs +++ b/src/librustc_typeck/check/autoderef.rs @@ -20,7 +20,7 @@ use rustc::ty::{LvaluePreference, NoPreference, PreferMutLvalue}; use rustc::hir; use syntax_pos::Span; -use syntax::parse::token; +use syntax::symbol::Symbol; #[derive(Copy, Clone, Debug)] enum AutoderefKind { @@ -120,7 +120,7 @@ impl<'a, 'gcx, 'tcx> Autoderef<'a, 'gcx, 'tcx> { let normalized = traits::normalize_projection_type(&mut selcx, ty::ProjectionTy { trait_ref: trait_ref, - item_name: token::intern("Target"), + item_name: Symbol::intern("Target"), }, cause, 0); @@ -198,7 +198,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (PreferMutLvalue, Some(trait_did)) => { self.lookup_method_in_trait(span, base_expr, - token::intern("deref_mut"), + Symbol::intern("deref_mut"), trait_did, base_ty, None) @@ -211,7 +211,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (None, Some(trait_did)) => { self.lookup_method_in_trait(span, base_expr, - token::intern("deref"), + Symbol::intern("deref"), trait_did, base_ty, None) diff --git a/src/librustc_typeck/check/callee.rs b/src/librustc_typeck/check/callee.rs index 3cf64fa439..4fba29def2 100644 --- a/src/librustc_typeck/check/callee.rs +++ b/src/librustc_typeck/check/callee.rs @@ -13,10 +13,10 @@ use super::{DeferredCallResolution, Expectation, FnCtxt, TupleArgumentsFlag}; use CrateCtxt; use hir::def::Def; use hir::def_id::{DefId, LOCAL_CRATE}; +use hir::print; use rustc::{infer, traits}; use rustc::ty::{self, LvaluePreference, Ty}; -use syntax::parse::token; -use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; use rustc::hir; @@ -45,7 +45,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn check_call(&self, call_expr: &'gcx hir::Expr, callee_expr: &'gcx hir::Expr, - arg_exprs: &'gcx [P], + arg_exprs: &'gcx [hir::Expr], expected: Expectation<'tcx>) -> Ty<'tcx> { let original_callee_ty = self.check_expr(callee_expr); @@ -159,9 +159,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { -> Option> { // Try the options that are least restrictive on the caller first. for &(opt_trait_def_id, method_name) in - &[(self.tcx.lang_items.fn_trait(), token::intern("call")), - (self.tcx.lang_items.fn_mut_trait(), token::intern("call_mut")), - (self.tcx.lang_items.fn_once_trait(), token::intern("call_once"))] { + &[(self.tcx.lang_items.fn_trait(), Symbol::intern("call")), + (self.tcx.lang_items.fn_mut_trait(), Symbol::intern("call_mut")), + (self.tcx.lang_items.fn_once_trait(), Symbol::intern("call_once"))] { let trait_def_id = match opt_trait_def_id { Some(def_id) => def_id, None => continue, @@ -188,29 +188,46 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn confirm_builtin_call(&self, call_expr: &hir::Expr, callee_ty: Ty<'tcx>, - arg_exprs: &'gcx [P], + arg_exprs: &'gcx [hir::Expr], expected: Expectation<'tcx>) -> Ty<'tcx> { let error_fn_sig; - let fn_sig = match callee_ty.sty { - ty::TyFnDef(.., &ty::BareFnTy { ref sig, .. }) | - ty::TyFnPtr(&ty::BareFnTy { ref sig, .. 
}) => sig, - _ => { - let mut err = self.type_error_struct(call_expr.span, - |actual| { - format!("expected function, found `{}`", - actual) - }, - callee_ty); + let (fn_sig, def_span) = match callee_ty.sty { + ty::TyFnDef(def_id, .., &ty::BareFnTy {ref sig, ..}) => { + (sig, self.tcx.map.span_if_local(def_id)) + } + ty::TyFnPtr(&ty::BareFnTy {ref sig, ..}) => (sig, None), + ref t => { + let mut unit_variant = None; + if let &ty::TyAdt(adt_def, ..) = t { + if adt_def.is_enum() { + if let hir::ExprCall(ref expr, _) = call_expr.node { + unit_variant = Some(print::expr_to_string(expr)) + } + } + } + let mut err = if let Some(path) = unit_variant { + let mut err = self.type_error_struct(call_expr.span, |_| { + format!("`{}` is being called, but it is not a function", path) + }, callee_ty); + err.help(&format!("did you mean to write `{}`?", path)); + err + } else { + self.type_error_struct(call_expr.span, |actual| { + format!("expected function, found `{}`", actual) + }, callee_ty) + }; if let hir::ExprCall(ref expr, _) = call_expr.node { - let tcx = self.tcx; - if let Some(pr) = tcx.def_map.borrow().get(&expr.id) { - if pr.depth == 0 && pr.base_def != Def::Err { - if let Some(span) = tcx.map.span_if_local(pr.base_def.def_id()) { - err.span_note(span, "defined here"); - } + let def = if let hir::ExprPath(ref qpath) = expr.node { + self.tables.borrow().qpath_def(qpath, expr.id) + } else { + Def::Err + }; + if def != Def::Err { + if let Some(span) = self.tcx.map.span_if_local(def.def_id()) { + err.span_note(span, "defined here"); } } } @@ -220,13 +237,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // This is the "default" function signature, used in case of error. // In that case, we check each argument against "error" in order to // set up all the node type bindings. 
- error_fn_sig = ty::Binder(ty::FnSig { - inputs: self.err_args(arg_exprs.len()), - output: self.tcx.types.err, - variadic: false, - }); + error_fn_sig = ty::Binder(self.tcx.mk_fn_sig( + self.err_args(arg_exprs.len()).into_iter(), + self.tcx.types.err, + false, + )); - &error_fn_sig + (&error_fn_sig, None) } }; @@ -244,21 +261,22 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let expected_arg_tys = self.expected_types_for_fn_args(call_expr.span, expected, - fn_sig.output, - &fn_sig.inputs); + fn_sig.output(), + fn_sig.inputs()); self.check_argument_types(call_expr.span, - &fn_sig.inputs, + fn_sig.inputs(), &expected_arg_tys[..], arg_exprs, fn_sig.variadic, - TupleArgumentsFlag::DontTupleArguments); + TupleArgumentsFlag::DontTupleArguments, + def_span); - fn_sig.output + fn_sig.output() } fn confirm_deferred_closure_call(&self, call_expr: &hir::Expr, - arg_exprs: &'gcx [P], + arg_exprs: &'gcx [hir::Expr], expected: Expectation<'tcx>, fn_sig: ty::FnSig<'tcx>) -> Ty<'tcx> { @@ -269,23 +287,24 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let expected_arg_tys = self.expected_types_for_fn_args(call_expr.span, expected, - fn_sig.output.clone(), - &fn_sig.inputs); + fn_sig.output().clone(), + fn_sig.inputs()); self.check_argument_types(call_expr.span, - &fn_sig.inputs, + fn_sig.inputs(), &expected_arg_tys, arg_exprs, fn_sig.variadic, - TupleArgumentsFlag::TupleArguments); + TupleArgumentsFlag::TupleArguments, + None); - fn_sig.output + fn_sig.output() } fn confirm_overloaded_call(&self, call_expr: &hir::Expr, callee_expr: &'gcx hir::Expr, - arg_exprs: &'gcx [P], + arg_exprs: &'gcx [hir::Expr], expected: Expectation<'tcx>, method_callee: ty::MethodCallee<'tcx>) -> Ty<'tcx> { @@ -346,12 +365,12 @@ impl<'gcx, 'tcx> DeferredCallResolution<'gcx, 'tcx> for CallResolution<'gcx, 'tc debug!("attempt_resolution: method_callee={:?}", method_callee); - for (&method_arg_ty, &self_arg_ty) in - method_sig.inputs[1..].iter().zip(&self.fn_sig.inputs) { - fcx.demand_eqtype(self.call_expr.span, self_arg_ty, method_arg_ty); + for (method_arg_ty, self_arg_ty) in + method_sig.inputs().iter().skip(1).zip(self.fn_sig.inputs()) { + fcx.demand_eqtype(self.call_expr.span, &self_arg_ty, &method_arg_ty); } - fcx.demand_eqtype(self.call_expr.span, method_sig.output, self.fn_sig.output); + fcx.demand_eqtype(self.call_expr.span, method_sig.output(), self.fn_sig.output()); fcx.write_overloaded_call_method_map(self.call_expr, method_callee); } diff --git a/src/librustc_typeck/check/cast.rs b/src/librustc_typeck/check/cast.rs index c456b9358b..f2c8ef46a7 100644 --- a/src/librustc_typeck/check/cast.rs +++ b/src/librustc_typeck/check/cast.rs @@ -46,6 +46,7 @@ use rustc::hir; use rustc::traits; use rustc::ty::{self, Ty, TypeFoldable}; use rustc::ty::cast::{CastKind, CastTy}; +use rustc::middle::lang_items; use syntax::ast; use syntax_pos::Span; use util::common::ErrorReported; @@ -64,7 +65,7 @@ pub struct CastCheck<'tcx> { /// fat pointers if their unsize-infos have the same kind. #[derive(Copy, Clone, PartialEq, Eq)] enum UnsizeKind<'tcx> { - Vtable(DefId), + Vtable(Option), Length, /// The unsize info of this projection OfProjection(&'tcx ty::ProjectionTy<'tcx>), @@ -78,7 +79,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn unsize_kind(&self, t: Ty<'tcx>) -> Option> { match t.sty { ty::TySlice(_) | ty::TyStr => Some(UnsizeKind::Length), - ty::TyTrait(ref tty) => Some(UnsizeKind::Vtable(tty.principal.def_id())), + ty::TyDynamic(ref tty, ..) 
=> + Some(UnsizeKind::Vtable(tty.principal().map(|p| p.def_id()))), ty::TyAdt(def, substs) if def.is_struct() => { // FIXME(arielb1): do some kind of normalization match def.struct_variant().fields.last() { @@ -102,10 +104,10 @@ enum CastError { /// Cast of thin to fat raw ptr (eg. `*const () as *const [u8]`) SizedUnsizedCast, IllegalCast, + NeedDeref, NeedViaPtr, NeedViaThinPtr, NeedViaInt, - NeedViaUsize, NonScalar, } @@ -129,7 +131,7 @@ impl<'a, 'gcx, 'tcx> CastCheck<'tcx> { // cases now. We do a more thorough check at the end, once // inference is more completely known. match cast_ty.sty { - ty::TyTrait(..) | ty::TySlice(..) => { + ty::TyDynamic(..) | ty::TySlice(..) => { check.report_cast_to_unsized_type(fcx); Err(ErrorReported) } @@ -139,26 +141,58 @@ impl<'a, 'gcx, 'tcx> CastCheck<'tcx> { fn report_cast_error(&self, fcx: &FnCtxt<'a, 'gcx, 'tcx>, e: CastError) { match e { - CastError::NeedViaPtr | - CastError::NeedViaThinPtr | - CastError::NeedViaInt | - CastError::NeedViaUsize => { - fcx.type_error_struct(self.span, + CastError::NeedDeref => { + let cast_ty = fcx.ty_to_string(self.cast_ty); + let mut err = fcx.type_error_struct(self.cast_span, |actual| { format!("casting `{}` as `{}` is invalid", actual, - fcx.ty_to_string(self.cast_ty)) + cast_ty) }, - self.expr_ty) - .help(&format!("cast through {} first", - match e { - CastError::NeedViaPtr => "a raw pointer", - CastError::NeedViaThinPtr => "a thin pointer", - CastError::NeedViaInt => "an integer", - CastError::NeedViaUsize => "a usize", - _ => bug!(), - })) - .emit(); + self.expr_ty); + err.span_label(self.expr.span, + &format!("cannot cast `{}` as `{}`", + fcx.ty_to_string(self.expr_ty), + cast_ty)); + if let Ok(snippet) = fcx.sess().codemap().span_to_snippet(self.expr.span) { + err.span_label(self.expr.span, + &format!("did you mean `*{}`?", snippet)); + } + err.emit(); + } + CastError::NeedViaThinPtr | + CastError::NeedViaPtr => { + let mut err = fcx.type_error_struct(self.span, + |actual| { + format!("casting `{}` as `{}` is invalid", + actual, + fcx.ty_to_string(self.cast_ty)) + }, + self.expr_ty); + if self.cast_ty.is_uint() { + err.help(&format!("cast through {} first", + match e { + CastError::NeedViaPtr => "a raw pointer", + CastError::NeedViaThinPtr => "a thin pointer", + _ => bug!(), + })); + } + err.emit(); + } + CastError::NeedViaInt => { + fcx.type_error_struct(self.span, + |actual| { + format!("casting `{}` as `{}` is invalid", + actual, + fcx.ty_to_string(self.cast_ty)) + }, + self.expr_ty) + .help(&format!("cast through {} first", + match e { + CastError::NeedViaInt => "an integer", + _ => bug!(), + })) + .emit(); } CastError::CastToBool => { struct_span_err!(fcx.tcx.sess, self.span, E0054, "cannot cast as `bool`") @@ -366,21 +400,43 @@ impl<'a, 'gcx, 'tcx> CastCheck<'tcx> { (Int(Bool), Float) | (Int(CEnum), Float) | (Int(Char), Float) => Err(CastError::NeedViaInt), + (Int(Bool), Ptr(_)) | (Int(CEnum), Ptr(_)) | - (Int(Char), Ptr(_)) => Err(CastError::NeedViaUsize), + (Int(Char), Ptr(_)) | + (Ptr(_), Float) | + (FnPtr, Float) | + (Float, Ptr(_)) => Err(CastError::IllegalCast), // ptr -> * (Ptr(m_e), Ptr(m_c)) => self.check_ptr_ptr_cast(fcx, m_e, m_c), // ptr-ptr-cast (Ptr(m_expr), Int(_)) => self.check_ptr_addr_cast(fcx, m_expr), // ptr-addr-cast - (Ptr(_), Float) | (FnPtr, Float) => Err(CastError::NeedViaUsize), (FnPtr, Int(_)) => Ok(CastKind::FnPtrAddrCast), - (RPtr(_), Int(_)) | - (RPtr(_), Float) => Err(CastError::NeedViaPtr), + (RPtr(p), Int(_)) | + (RPtr(p), Float) => { + match p.ty.sty { + 
ty::TypeVariants::TyInt(_) | + ty::TypeVariants::TyUint(_) | + ty::TypeVariants::TyFloat(_) => { + Err(CastError::NeedDeref) + } + ty::TypeVariants::TyInfer(t) => { + match t { + ty::InferTy::IntVar(_) | + ty::InferTy::FloatVar(_) | + ty::InferTy::FreshIntTy(_) | + ty::InferTy::FreshFloatTy(_) => { + Err(CastError::NeedDeref) + } + _ => Err(CastError::NeedViaPtr), + } + } + _ => Err(CastError::NeedViaPtr), + } + } // * -> ptr (Int(_), Ptr(mt)) => self.check_addr_ptr_cast(fcx, mt), // addr-ptr-cast (FnPtr, Ptr(mt)) => self.check_fptr_ptr_cast(fcx, mt), - (Float, Ptr(_)) => Err(CastError::NeedViaUsize), (RPtr(rmt), Ptr(mt)) => self.check_ref_cast(fcx, rmt, mt), // array-ptr-cast // prim -> prim @@ -391,7 +447,6 @@ impl<'a, 'gcx, 'tcx> CastCheck<'tcx> { (Int(_), Int(_)) | (Int(_), Float) | (Float, Int(_)) | (Float, Float) => { Ok(CastKind::NumericCast) } - } } @@ -490,6 +545,7 @@ impl<'a, 'gcx, 'tcx> CastCheck<'tcx> { impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn type_is_known_to_be_sized(&self, ty: Ty<'tcx>, span: Span) -> bool { - traits::type_known_to_meet_builtin_bound(self, ty, ty::BoundSized, span) + let lang_item = self.tcx.require_lang_item(lang_items::SizedTraitLangItem); + traits::type_known_to_meet_bound(self, ty, lang_item, span) } } diff --git a/src/librustc_typeck/check/closure.rs b/src/librustc_typeck/check/closure.rs index d478f1092b..1d81ed7d35 100644 --- a/src/librustc_typeck/check/closure.rs +++ b/src/librustc_typeck/check/closure.rs @@ -13,8 +13,10 @@ use super::{check_fn, Expectation, FnCtxt}; use astconv::AstConv; +use rustc::infer::type_variable::TypeVariableOrigin; use rustc::ty::{self, ToPolyTraitRef, Ty}; use std::cmp; +use std::iter; use syntax::abi::Abi; use rustc::hir; @@ -23,7 +25,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { expr: &hir::Expr, _capture: hir::CaptureClause, decl: &'gcx hir::FnDecl, - body: &'gcx hir::Block, + body_id: hir::ExprId, expected: Expectation<'tcx>) -> Ty<'tcx> { debug!("check_expr_closure(expr={:?},expected={:?})", @@ -37,6 +39,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { Some(ty) => self.deduce_expectations_from_expected_type(ty), None => (None, None), }; + let body = self.tcx.map.expr(body_id); self.check_closure(expr, expected_kind, decl, body, expected_sig) } @@ -44,15 +47,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { expr: &hir::Expr, opt_kind: Option, decl: &'gcx hir::FnDecl, - body: &'gcx hir::Block, + body: &'gcx hir::Expr, expected_sig: Option>) -> Ty<'tcx> { - let expr_def_id = self.tcx.map.local_def_id(expr.id); - debug!("check_closure opt_kind={:?} expected_sig={:?}", opt_kind, expected_sig); + let expr_def_id = self.tcx.map.local_def_id(expr.id); let mut fn_ty = AstConv::ty_of_closure(self, hir::Unsafety::Normal, decl, @@ -62,16 +64,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Create type variables (for now) to represent the transformed // types of upvars. These will be unified during the upvar // inference phase (`upvar.rs`). 
- let num_upvars = self.tcx.with_freevars(expr.id, |fv| fv.len()); - let upvar_tys = self.next_ty_vars(num_upvars); - - debug!("check_closure: expr.id={:?} upvar_tys={:?}", - expr.id, - upvar_tys); - let closure_type = self.tcx.mk_closure(expr_def_id, - self.parameter_environment.free_substs, - &upvar_tys); + self.parameter_environment.free_substs.extend_to(self.tcx, expr_def_id, + |_, _| span_bug!(expr.span, "closure has region param"), + |_, _| self.infcx.next_ty_var(TypeVariableOrigin::TransformedUpvar(expr.span)) + ) + ); + + debug!("check_closure: expr.id={:?} closure_type={:?}", expr.id, closure_type); let fn_sig = self.tcx .liberate_late_bound_regions(self.tcx.region_maps.call_site_extent(expr.id, body.id), @@ -88,7 +88,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Tuple up the arguments and insert the resulting function type into // the `closures` table. - fn_ty.sig.0.inputs = vec![self.tcx.intern_tup(&fn_ty.sig.0.inputs[..])]; + fn_ty.sig.0 = self.tcx.mk_fn_sig( + iter::once(self.tcx.intern_tup(fn_ty.sig.skip_binder().inputs())), + fn_ty.sig.skip_binder().output(), + fn_ty.sig.variadic() + ); debug!("closure for {:?} --> sig={:?} opt_kind={:?}", expr_def_id, @@ -114,15 +118,15 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { expected_ty); match expected_ty.sty { - ty::TyTrait(ref object_type) => { - let sig = object_type.projection_bounds - .iter() + ty::TyDynamic(ref object_type, ..) => { + let sig = object_type.projection_bounds() .filter_map(|pb| { let pb = pb.with_self_ty(self.tcx, self.tcx.types.err); self.deduce_sig_from_projection(&pb) }) .next(); - let kind = self.tcx.lang_items.fn_trait_kind(object_type.principal.def_id()); + let kind = object_type.principal() + .and_then(|p| self.tcx.lang_items.fn_trait_kind(p.def_id())); (sig, kind) } ty::TyInfer(ty::TyVar(vid)) => self.deduce_expectations_from_obligations(vid), @@ -214,23 +218,17 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { arg_param_ty); let input_tys = match arg_param_ty.sty { - ty::TyTuple(tys) => tys.to_vec(), + ty::TyTuple(tys) => tys.into_iter(), _ => { return None; } }; - debug!("deduce_sig_from_projection: input_tys {:?}", input_tys); let ret_param_ty = projection.0.ty; let ret_param_ty = self.resolve_type_vars_if_possible(&ret_param_ty); - debug!("deduce_sig_from_projection: ret_param_ty {:?}", - ret_param_ty); + debug!("deduce_sig_from_projection: ret_param_ty {:?}", ret_param_ty); - let fn_sig = ty::FnSig { - inputs: input_tys, - output: ret_param_ty, - variadic: false, - }; + let fn_sig = self.tcx.mk_fn_sig(input_tys.cloned(), ret_param_ty, false); debug!("deduce_sig_from_projection: fn_sig {:?}", fn_sig); Some(fn_sig) diff --git a/src/librustc_typeck/check/coercion.rs b/src/librustc_typeck/check/coercion.rs index 16493412d6..718c273785 100644 --- a/src/librustc_typeck/check/coercion.rs +++ b/src/librustc_typeck/check/coercion.rs @@ -63,8 +63,8 @@ use check::FnCtxt; use rustc::hir; -use rustc::infer::{Coercion, InferOk, TypeOrigin, TypeTrace}; -use rustc::traits::{self, ObligationCause}; +use rustc::infer::{Coercion, InferOk, TypeTrace}; +use rustc::traits::{self, ObligationCause, ObligationCauseCode}; use rustc::ty::adjustment::{Adjustment, Adjust, AutoBorrow}; use rustc::ty::{self, LvaluePreference, TypeAndMut, Ty}; use rustc::ty::fold::TypeFoldable; @@ -78,7 +78,7 @@ use std::ops::Deref; struct Coerce<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> { fcx: &'a FnCtxt<'a, 'gcx, 'tcx>, - origin: TypeOrigin, + cause: ObligationCause<'tcx>, use_lub: bool, unsizing_obligations: RefCell>>, } @@ -104,10 +104,10 @@ 
fn coerce_mutbls<'tcx>(from_mutbl: hir::Mutability, } impl<'f, 'gcx, 'tcx> Coerce<'f, 'gcx, 'tcx> { - fn new(fcx: &'f FnCtxt<'f, 'gcx, 'tcx>, origin: TypeOrigin) -> Self { + fn new(fcx: &'f FnCtxt<'f, 'gcx, 'tcx>, cause: ObligationCause<'tcx>) -> Self { Coerce { fcx: fcx, - origin: origin, + cause: cause, use_lub: false, unsizing_obligations: RefCell::new(vec![]), } @@ -115,19 +115,14 @@ impl<'f, 'gcx, 'tcx> Coerce<'f, 'gcx, 'tcx> { fn unify(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> { self.commit_if_ok(|_| { - let trace = TypeTrace::types(self.origin, false, a, b); + let trace = TypeTrace::types(&self.cause, false, a, b); if self.use_lub { self.lub(false, trace, &a, &b) - .map(|InferOk { value, obligations }| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); - value - }) + .map(|ok| self.register_infer_ok_obligations(ok)) } else { self.sub(false, trace, &a, &b) .map(|InferOk { value, obligations }| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + self.fcx.register_predicates(obligations); value }) } @@ -238,7 +233,7 @@ impl<'f, 'gcx, 'tcx> Coerce<'f, 'gcx, 'tcx> { _ => return self.unify_and_identity(a, b), }; - let span = self.origin.span(); + let span = self.cause.span; let mut first_error = None; let mut r_borrow_var = None; @@ -430,7 +425,7 @@ impl<'f, 'gcx, 'tcx> Coerce<'f, 'gcx, 'tcx> { (&ty::TyRef(_, mt_a), &ty::TyRef(_, mt_b)) => { coerce_mutbls(mt_a.mutbl, mt_b.mutbl)?; - let coercion = Coercion(self.origin.span()); + let coercion = Coercion(self.cause.span); let r_borrow = self.next_region_var(coercion); (mt_a.ty, Some(AutoBorrow::Ref(r_borrow, mt_b.mutbl))) } @@ -449,7 +444,7 @@ impl<'f, 'gcx, 'tcx> Coerce<'f, 'gcx, 'tcx> { let mut leftover_predicates = vec![]; // Create an obligation for `Source: CoerceUnsized`. - let cause = ObligationCause::misc(self.origin.span(), self.body_id); + let cause = ObligationCause::misc(self.cause.span, self.body_id); queue.push_back(self.tcx .predicate_for_trait_def(cause, coerce_unsized_did, 0, source, &[target])); @@ -635,7 +630,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let source = self.resolve_type_vars_with_obligations(expr_ty); debug!("coercion::try({:?}: {:?} -> {:?})", expr, source, target); - let mut coerce = Coerce::new(self, TypeOrigin::ExprAssignable(expr.span)); + let cause = self.cause(expr.span, ObligationCauseCode::ExprAssignable); + let mut coerce = Coerce::new(self, cause); self.commit_if_ok(|_| { let adjustment = apply(&mut coerce, &|| Some(expr), source, target)?; if !adjustment.is_identity() { @@ -655,7 +651,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { /// tries to unify the types, potentially inserting coercions on any of the /// provided expressions and returns their LUB (aka "common supertype"). pub fn try_find_coercion_lub<'b, E, I>(&self, - origin: TypeOrigin, + cause: &ObligationCause<'tcx>, exprs: E, prev_ty: Ty<'tcx>, new: &'b hir::Expr, @@ -669,7 +665,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let new_ty = self.resolve_type_vars_with_obligations(new_ty); debug!("coercion::try_find_lub({:?}, {:?})", prev_ty, new_ty); - let trace = TypeTrace::types(origin, true, prev_ty, new_ty); + let trace = TypeTrace::types(cause, true, prev_ty, new_ty); // Special-case that coercion alone cannot handle: // Two function item types of differing IDs or Substs. 
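The change that repeats through these coercion hunks replaces the old `.map(|InferOk { value, obligations }| { assert!(obligations.is_empty()); value })` dance (the FIXME #32730 workaround) with `.map(|ok| self.register_infer_ok_obligations(ok))`, so pending obligations are queued with the fulfillment machinery instead of being required to be empty. A minimal standalone sketch of that pattern follows; `InferOk`, `Obligation` and `FulfillmentCtxt` here are simplified stand-ins, not rustc's actual types.

```rust
// Toy stand-ins for the inference machinery; the real types live in
// rustc's `infer` and `traits` modules and carry far more information.
#[derive(Debug)]
struct Obligation(&'static str);

struct InferOk<T> {
    value: T,
    obligations: Vec<Obligation>,
}

#[derive(Default)]
struct FulfillmentCtxt {
    pending: Vec<Obligation>,
}

impl FulfillmentCtxt {
    // Spiritual counterpart of `register_infer_ok_obligations`: keep the
    // unified value and queue the side conditions, instead of asserting
    // that unification produced none.
    fn register_infer_ok<T>(&mut self, ok: InferOk<T>) -> T {
        self.pending.extend(ok.obligations);
        ok.value
    }
}

fn main() {
    let mut fcx = FulfillmentCtxt::default();
    // Pretend a LUB computation succeeded but left one predicate to prove.
    let ok = InferOk {
        value: "&'a str",
        obligations: vec![Obligation("'static: 'a")],
    };
    let lub_ty = fcx.register_infer_ok(ok);
    println!("lub = {}, queued obligations: {:?}", lub_ty, fcx.pending);
}
```

The design point is that unification may now legitimately return side conditions; callers keep the unified value and defer those conditions to a later fulfillment pass rather than forbidding them.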
@@ -677,21 +673,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
             (&ty::TyFnDef(a_def_id, a_substs, a_fty),
              &ty::TyFnDef(b_def_id, b_substs, b_fty)) => {
                 // The signature must always match.
                 let fty = self.lub(true, trace.clone(), &a_fty, &b_fty)
-                    .map(|InferOk { value, obligations }| {
-                        // FIXME(#32730) propagate obligations
-                        assert!(obligations.is_empty());
-                        value
-                    })?;
+                    .map(|ok| self.register_infer_ok_obligations(ok))?;
                 if a_def_id == b_def_id {
                     // Same function, maybe the parameters match.
                     let substs = self.commit_if_ok(|_| {
                         self.lub(true, trace.clone(), &a_substs, &b_substs)
-                            .map(|InferOk { value, obligations }| {
-                                // FIXME(#32730) propagate obligations
-                                assert!(obligations.is_empty());
-                                value
-                            })
+                            .map(|ok| self.register_infer_ok_obligations(ok))
                     });
 
                     if let Ok(substs) = substs {
@@ -715,7 +703,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
             _ => {}
         }
 
-        let mut coerce = Coerce::new(self, origin);
+        let mut coerce = Coerce::new(self, cause.clone());
        coerce.use_lub = true;
 
         // First try to coerce the new expression to the type of the previous ones,
@@ -760,11 +748,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
             if !noop {
                 return self.commit_if_ok(|_| {
                     self.lub(true, trace.clone(), &prev_ty, &new_ty)
-                        .map(|InferOk { value, obligations }| {
-                            // FIXME(#32730) propagate obligations
-                            assert!(obligations.is_empty());
-                            value
-                        })
+                        .map(|ok| self.register_infer_ok_obligations(ok))
                 });
             }
         }
@@ -777,11 +761,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
         } else {
             self.commit_if_ok(|_| {
                 self.lub(true, trace, &prev_ty, &new_ty)
-                    .map(|InferOk { value, obligations }| {
-                        // FIXME(#32730) propagate obligations
-                        assert!(obligations.is_empty());
-                        value
-                    })
+                    .map(|ok| self.register_infer_ok_obligations(ok))
             })
         }
     }
diff --git a/src/librustc_typeck/check/compare_method.rs b/src/librustc_typeck/check/compare_method.rs
index 2cb719675a..478de16731 100644
--- a/src/librustc_typeck/check/compare_method.rs
+++ b/src/librustc_typeck/check/compare_method.rs
@@ -8,10 +8,11 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-use rustc::infer::{self, InferOk, TypeOrigin};
+use rustc::hir;
+use rustc::infer::{self, InferOk};
 use rustc::middle::free_region::FreeRegionMap;
 use rustc::ty;
-use rustc::traits::{self, Reveal};
+use rustc::traits::{self, ObligationCause, ObligationCauseCode, Reveal};
 use rustc::ty::error::{ExpectedFound, TypeError};
 use rustc::ty::subst::{Subst, Substs};
 use rustc::hir::{ImplItemKind, TraitItem_, Ty_};
@@ -23,6 +24,7 @@ use syntax_pos::Span;
 use CrateCtxt;
 use super::assoc;
 use super::{Inherited, FnCtxt};
+use astconv::ExplicitSelf;
 
 /// Checks that a method from an impl conforms to the signature of
 /// the same method as declared in the trait.
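The doc comment above introduces the check that the next hunks rework around `ty::AssociatedItem`. As a rough sketch of what "conforms to the signature declared in the trait" amounts to, here is a self-contained toy version under heavy simplification: substitute the trait's `Self` with the impl's self type, then require parameter count, parameter types, and return type to line up. The `Ty` and `FnSig` types below are illustrative only, not rustc's.

```rust
// Toy model of the check: not rustc's types, just the shape of the comparison.
#[derive(Debug, PartialEq, Clone)]
enum Ty { I32, Str, Param(&'static str) }

#[derive(Debug)]
struct FnSig { inputs: Vec<Ty>, output: Ty }

// Substitute trait-level `Self` with the impl's concrete self type, then
// demand that the two signatures agree.
fn compare_method(trait_sig: &FnSig, impl_sig: &FnSig, self_ty: &Ty) -> Result<(), String> {
    let subst = |t: &Ty| if let Ty::Param("Self") = t { self_ty.clone() } else { t.clone() };
    let expected: Vec<Ty> = trait_sig.inputs.iter().map(&subst).collect();
    if expected.len() != impl_sig.inputs.len() {
        return Err(format!("expected {} parameters, found {}",
                           expected.len(), impl_sig.inputs.len()));
    }
    for (i, (e, f)) in expected.iter().zip(&impl_sig.inputs).enumerate() {
        if e != f {
            return Err(format!("parameter {} has type {:?}, but the trait wants {:?}", i, f, e));
        }
    }
    if subst(&trait_sig.output) != impl_sig.output {
        return Err("return type differs from the trait".into());
    }
    Ok(())
}

fn main() {
    // trait Frob { fn frob(self, s: &str) -> i32; } implemented for i32.
    let trait_sig = FnSig { inputs: vec![Ty::Param("Self"), Ty::Str], output: Ty::I32 };
    let impl_sig = FnSig { inputs: vec![Ty::I32, Ty::Str], output: Ty::I32 };
    assert!(compare_method(&trait_sig, &impl_sig, &Ty::I32).is_ok());
}
```

The real check also instantiates generics and late-bound regions and goes through subtyping rather than equality, which is what the `sub_types(false, &cause, impl_fty, trait_fty)` call further down performs.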
@@ -36,11 +38,11 @@ use super::{Inherited, FnCtxt}; /// - impl_trait_ref: the TraitRef corresponding to the trait implementation pub fn compare_impl_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, impl_m_span: Span, impl_m_body_id: ast::NodeId, - trait_m: &ty::Method<'tcx>, - impl_trait_ref: &ty::TraitRef<'tcx>, + trait_m: &ty::AssociatedItem, + impl_trait_ref: ty::TraitRef<'tcx>, trait_item_span: Option, old_broken_mode: bool) { debug!("compare_impl_method(impl_trait_ref={:?})", @@ -49,7 +51,8 @@ pub fn compare_impl_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, if let Err(ErrorReported) = compare_self_type(ccx, impl_m, impl_m_span, - trait_m) { + trait_m, + impl_trait_ref) { return; } @@ -81,16 +84,27 @@ pub fn compare_impl_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, } fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, impl_m_span: Span, impl_m_body_id: ast::NodeId, - trait_m: &ty::Method<'tcx>, - impl_trait_ref: &ty::TraitRef<'tcx>, + trait_m: &ty::AssociatedItem, + impl_trait_ref: ty::TraitRef<'tcx>, old_broken_mode: bool) -> Result<(), ErrorReported> { let tcx = ccx.tcx; - let trait_to_impl_substs = &impl_trait_ref.substs; + let trait_to_impl_substs = impl_trait_ref.substs; + + let cause = ObligationCause { + span: impl_m_span, + body_id: impl_m_body_id, + code: ObligationCauseCode::CompareImplMethodObligation { + item_name: impl_m.name, + impl_item_def_id: impl_m.def_id, + trait_item_def_id: trait_m.def_id, + lint_id: if !old_broken_mode { Some(impl_m_body_id) } else { None }, + }, + }; // This code is best explained by example. Consider a trait: // @@ -165,18 +179,23 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // Create mapping from trait to skolemized. let trait_to_skol_substs = impl_to_skol_substs.rebase_onto(tcx, - impl_m.container_id(), + impl_m.container.id(), trait_to_impl_substs.subst(tcx, impl_to_skol_substs)); debug!("compare_impl_method: trait_to_skol_substs={:?}", trait_to_skol_substs); + let impl_m_generics = tcx.item_generics(impl_m.def_id); + let trait_m_generics = tcx.item_generics(trait_m.def_id); + let impl_m_predicates = tcx.item_predicates(impl_m.def_id); + let trait_m_predicates = tcx.item_predicates(trait_m.def_id); + // Check region bounds. check_region_bounds_on_impl_method(ccx, impl_m_span, impl_m, - &trait_m.generics, - &impl_m.generics, + &trait_m_generics, + &impl_m_generics, trait_to_skol_substs, impl_to_skol_substs)?; @@ -185,7 +204,7 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // environment. We can't just use `impl_env.caller_bounds`, // however, because we want to replace all late-bound regions with // region variables. - let impl_predicates = tcx.lookup_predicates(impl_m.predicates.parent.unwrap()); + let impl_predicates = tcx.item_predicates(impl_m_predicates.parent.unwrap()); let mut hybrid_preds = impl_predicates.instantiate(tcx, impl_to_skol_substs); debug!("compare_impl_method: impl_bounds={:?}", hybrid_preds); @@ -198,7 +217,7 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // We then register the obligations from the impl_m and check to see // if all constraints hold. 
hybrid_preds.predicates - .extend(trait_m.predicates.instantiate_own(tcx, trait_to_skol_substs).predicates); + .extend(trait_m_predicates.instantiate_own(tcx, trait_to_skol_substs).predicates); // Construct trait parameter environment and then shift it into the skolemized viewpoint. // The key step here is to update the caller_bounds's predicates to be @@ -219,7 +238,7 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let mut selcx = traits::SelectionContext::new(&infcx); - let impl_m_own_bounds = impl_m.predicates.instantiate_own(tcx, impl_to_skol_substs); + let impl_m_own_bounds = impl_m_predicates.instantiate_own(tcx, impl_to_skol_substs); let (impl_m_own_bounds, _) = infcx.replace_late_bound_regions_with_fresh_var(impl_m_span, infer::HigherRankedType, &ty::Binder(impl_m_own_bounds.predicates)); @@ -227,20 +246,9 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let traits::Normalized { value: predicate, .. } = traits::normalize(&mut selcx, normalize_cause.clone(), &predicate); - let cause = traits::ObligationCause { - span: impl_m_span, - body_id: impl_m_body_id, - code: traits::ObligationCauseCode::CompareImplMethodObligation { - item_name: impl_m.name, - impl_item_def_id: impl_m.def_id, - trait_item_def_id: trait_m.def_id, - lint_id: if !old_broken_mode { Some(impl_m_body_id) } else { None }, - }, - }; - fulfillment_cx.borrow_mut().register_predicate_obligation( &infcx, - traits::Obligation::new(cause, predicate)); + traits::Obligation::new(cause.clone(), predicate)); } // We now need to check that the signature of the impl method is @@ -258,12 +266,20 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // Compute skolemized form of impl and trait method tys. let tcx = infcx.tcx; - let origin = TypeOrigin::MethodCompatCheck(impl_m_span); + + let m_fty = |method: &ty::AssociatedItem| { + match tcx.item_type(method.def_id).sty { + ty::TyFnDef(_, _, f) => f, + _ => bug!() + } + }; + let impl_m_fty = m_fty(impl_m); + let trait_m_fty = m_fty(trait_m); let (impl_sig, _) = infcx.replace_late_bound_regions_with_fresh_var(impl_m_span, infer::HigherRankedType, - &impl_m.fty.sig); + &impl_m_fty.sig); let impl_sig = impl_sig.subst(tcx, impl_to_skol_substs); let impl_sig = @@ -273,15 +289,15 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, impl_m_body_id, &impl_sig); let impl_fty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { - unsafety: impl_m.fty.unsafety, - abi: impl_m.fty.abi, + unsafety: impl_m_fty.unsafety, + abi: impl_m_fty.abi, sig: ty::Binder(impl_sig.clone()), })); debug!("compare_impl_method: impl_fty={:?}", impl_fty); let trait_sig = tcx.liberate_late_bound_regions( infcx.parameter_environment.free_id_outlive, - &trait_m.fty.sig); + &trait_m_fty.sig); let trait_sig = trait_sig.subst(tcx, trait_to_skol_substs); let trait_sig = @@ -291,14 +307,14 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, impl_m_body_id, &trait_sig); let trait_fty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy { - unsafety: trait_m.fty.unsafety, - abi: trait_m.fty.abi, + unsafety: trait_m_fty.unsafety, + abi: trait_m_fty.abi, sig: ty::Binder(trait_sig.clone()), })); debug!("compare_impl_method: trait_fty={:?}", trait_fty); - let sub_result = infcx.sub_types(false, origin, impl_fty, trait_fty) + let sub_result = infcx.sub_types(false, &cause, impl_fty, trait_fty) .map(|InferOk { obligations, .. 
}| { // FIXME(#32730) propagate obligations assert!(obligations.is_empty()); @@ -311,22 +327,25 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let (impl_err_span, trait_err_span) = extract_spans_for_error_reporting(&infcx, &terr, - origin, + &cause, impl_m, impl_sig, trait_m, trait_sig); - let origin = TypeOrigin::MethodCompatCheck(impl_err_span); + let cause = ObligationCause { + span: impl_err_span, + ..cause.clone() + }; let mut diag = struct_span_err!(tcx.sess, - origin.span(), + cause.span, E0053, "method `{}` has an incompatible type for trait", trait_m.name); infcx.note_type_err(&mut diag, - origin, + &cause, trait_err_span.map(|sp| (sp, format!("type in trait"))), Some(infer::ValuePairs::Types(ExpectedFound { expected: trait_fty, @@ -357,7 +376,7 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, &infcx.parameter_environment.caller_bounds); infcx.resolve_regions_and_report_errors(&free_regions, impl_m_body_id); } else { - let fcx = FnCtxt::new(&inh, tcx.types.err, impl_m_body_id); + let fcx = FnCtxt::new(&inh, Some(tcx.types.err), impl_m_body_id); fcx.regionck_item(impl_m_body_id, impl_m_span, &[]); } @@ -367,7 +386,7 @@ fn compare_predicate_entailment<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, fn check_region_bounds_on_impl_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, span: Span, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, trait_generics: &ty::Generics<'tcx>, impl_generics: &ty::Generics<'tcx>, trait_to_skol_substs: &Substs<'tcx>, @@ -412,10 +431,10 @@ fn check_region_bounds_on_impl_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, fn extract_spans_for_error_reporting<'a, 'gcx, 'tcx>(infcx: &infer::InferCtxt<'a, 'gcx, 'tcx>, terr: &TypeError, - origin: TypeOrigin, - impl_m: &ty::Method, + cause: &ObligationCause<'tcx>, + impl_m: &ty::AssociatedItem, impl_sig: ty::FnSig<'tcx>, - trait_m: &ty::Method, + trait_m: &ty::AssociatedItem, trait_sig: ty::FnSig<'tcx>) -> (Span, Option) { let tcx = infcx.tcx; @@ -461,9 +480,9 @@ fn extract_spans_for_error_reporting<'a, 'gcx, 'tcx>(infcx: &infer::InferCtxt<'a } } }) - .unwrap_or((origin.span(), tcx.map.span_if_local(trait_m.def_id))) + .unwrap_or((cause.span, tcx.map.span_if_local(trait_m.def_id))) } else { - (origin.span(), tcx.map.span_if_local(trait_m.def_id)) + (cause.span, tcx.map.span_if_local(trait_m.def_id)) } } TypeError::Sorts(ExpectedFound { .. 
}) => { @@ -476,38 +495,40 @@ fn extract_spans_for_error_reporting<'a, 'gcx, 'tcx>(infcx: &infer::InferCtxt<'a _ => bug!("{:?} is not a MethodTraitItem", trait_m), }; - let impl_iter = impl_sig.inputs.iter(); - let trait_iter = trait_sig.inputs.iter(); + let impl_iter = impl_sig.inputs().iter(); + let trait_iter = trait_sig.inputs().iter(); impl_iter.zip(trait_iter) .zip(impl_m_iter) .zip(trait_m_iter) .filter_map(|(((impl_arg_ty, trait_arg_ty), impl_arg), trait_arg)| { - match infcx.sub_types(true, origin, trait_arg_ty, impl_arg_ty) { + match infcx.sub_types(true, &cause, trait_arg_ty, impl_arg_ty) { Ok(_) => None, Err(_) => Some((impl_arg.ty.span, Some(trait_arg.ty.span))), } }) .next() .unwrap_or_else(|| { - if infcx.sub_types(false, origin, impl_sig.output, trait_sig.output) + if infcx.sub_types(false, &cause, impl_sig.output(), + trait_sig.output()) .is_err() { (impl_m_output.span(), Some(trait_m_output.span())) } else { - (origin.span(), tcx.map.span_if_local(trait_m.def_id)) + (cause.span, tcx.map.span_if_local(trait_m.def_id)) } }) } else { - (origin.span(), tcx.map.span_if_local(trait_m.def_id)) + (cause.span, tcx.map.span_if_local(trait_m.def_id)) } } - _ => (origin.span(), tcx.map.span_if_local(trait_m.def_id)), + _ => (cause.span, tcx.map.span_if_local(trait_m.def_id)), } } fn compare_self_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, impl_m_span: Span, - trait_m: &ty::Method<'tcx>) + trait_m: &ty::AssociatedItem, + impl_trait_ref: ty::TraitRef<'tcx>) -> Result<(), ErrorReported> { let tcx = ccx.tcx; @@ -518,58 +539,75 @@ fn compare_self_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // that the error messages you get out of this code are a bit more // inscrutable, particularly for cases where one method has no // self. 
- match (&trait_m.explicit_self, &impl_m.explicit_self) { - (&ty::ExplicitSelfCategory::Static, &ty::ExplicitSelfCategory::Static) => {} - (&ty::ExplicitSelfCategory::Static, _) => { + + let self_string = |method: &ty::AssociatedItem| { + let untransformed_self_ty = match method.container { + ty::ImplContainer(_) => impl_trait_ref.self_ty(), + ty::TraitContainer(_) => tcx.mk_self_type() + }; + let method_ty = tcx.item_type(method.def_id); + let self_arg_ty = *method_ty.fn_sig().input(0).skip_binder(); + match ExplicitSelf::determine(untransformed_self_ty, self_arg_ty) { + ExplicitSelf::ByValue => "self".to_string(), + ExplicitSelf::ByReference(_, hir::MutImmutable) => "&self".to_string(), + ExplicitSelf::ByReference(_, hir::MutMutable) => "&mut self".to_string(), + _ => format!("self: {}", self_arg_ty) + } + }; + + match (trait_m.method_has_self_argument, impl_m.method_has_self_argument) { + (false, false) | (true, true) => {} + + (false, true) => { + let self_descr = self_string(impl_m); let mut err = struct_span_err!(tcx.sess, impl_m_span, E0185, "method `{}` has a `{}` declaration in the impl, but \ not in the trait", trait_m.name, - impl_m.explicit_self); - err.span_label(impl_m_span, - &format!("`{}` used in impl", impl_m.explicit_self)); + self_descr); + err.span_label(impl_m_span, &format!("`{}` used in impl", self_descr)); if let Some(span) = tcx.map.span_if_local(trait_m.def_id) { - err.span_label(span, - &format!("trait declared without `{}`", impl_m.explicit_self)); + err.span_label(span, &format!("trait declared without `{}`", self_descr)); } err.emit(); return Err(ErrorReported); } - (_, &ty::ExplicitSelfCategory::Static) => { + + (true, false) => { + let self_descr = self_string(trait_m); let mut err = struct_span_err!(tcx.sess, impl_m_span, E0186, "method `{}` has a `{}` declaration in the trait, but \ not in the impl", trait_m.name, - trait_m.explicit_self); + self_descr); err.span_label(impl_m_span, - &format!("expected `{}` in impl", trait_m.explicit_self)); + &format!("expected `{}` in impl", self_descr)); if let Some(span) = tcx.map.span_if_local(trait_m.def_id) { - err.span_label(span, &format!("`{}` used in trait", trait_m.explicit_self)); + err.span_label(span, &format!("`{}` used in trait", self_descr)); } err.emit(); return Err(ErrorReported); } - _ => { - // Let the type checker catch other errors below - } } Ok(()) } fn compare_number_of_generics<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, impl_m_span: Span, - trait_m: &ty::Method<'tcx>, + trait_m: &ty::AssociatedItem, trait_item_span: Option) -> Result<(), ErrorReported> { let tcx = ccx.tcx; - let num_impl_m_type_params = impl_m.generics.types.len(); - let num_trait_m_type_params = trait_m.generics.types.len(); + let impl_m_generics = tcx.item_generics(impl_m.def_id); + let trait_m_generics = tcx.item_generics(trait_m.def_id); + let num_impl_m_type_params = impl_m_generics.types.len(); + let num_trait_m_type_params = trait_m_generics.types.len(); if num_impl_m_type_params != num_trait_m_type_params { let impl_m_node_id = tcx.map.as_local_node_id(impl_m.def_id).unwrap(); let span = match tcx.map.expect_impl_item(impl_m_node_id).node { @@ -630,15 +668,23 @@ fn compare_number_of_generics<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, } fn compare_number_of_method_arguments<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_m: &ty::Method<'tcx>, + impl_m: &ty::AssociatedItem, impl_m_span: Span, - trait_m: &ty::Method<'tcx>, + trait_m: &ty::AssociatedItem, trait_item_span: Option) 
-> Result<(), ErrorReported> { let tcx = ccx.tcx; - if impl_m.fty.sig.0.inputs.len() != trait_m.fty.sig.0.inputs.len() { - let trait_number_args = trait_m.fty.sig.0.inputs.len(); - let impl_number_args = impl_m.fty.sig.0.inputs.len(); + let m_fty = |method: &ty::AssociatedItem| { + match tcx.item_type(method.def_id).sty { + ty::TyFnDef(_, _, f) => f, + _ => bug!() + } + }; + let impl_m_fty = m_fty(impl_m); + let trait_m_fty = m_fty(trait_m); + let trait_number_args = trait_m_fty.sig.inputs().skip_binder().len(); + let impl_number_args = impl_m_fty.sig.inputs().skip_binder().len(); + if trait_number_args != impl_number_args { let trait_m_node_id = tcx.map.as_local_node_id(trait_m.def_id); let trait_span = if let Some(trait_id) = trait_m_node_id { match tcx.map.expect_trait_item(trait_id).node { @@ -708,10 +754,10 @@ fn compare_number_of_method_arguments<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, } pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - impl_c: &ty::AssociatedConst<'tcx>, + impl_c: &ty::AssociatedItem, impl_c_span: Span, - trait_c: &ty::AssociatedConst<'tcx>, - impl_trait_ref: &ty::TraitRef<'tcx>) { + trait_c: &ty::AssociatedItem, + impl_trait_ref: ty::TraitRef<'tcx>) { debug!("compare_const_impl(impl_trait_ref={:?})", impl_trait_ref); let tcx = ccx.tcx; @@ -723,7 +769,7 @@ pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // because we shouldn't really have to deal with lifetimes or // predicates. In fact some of this should probably be put into // shared functions because of DRY violations... - let trait_to_impl_substs = &impl_trait_ref.substs; + let trait_to_impl_substs = impl_trait_ref.substs; // Create a parameter environment that represents the implementation's // method. @@ -742,9 +788,9 @@ pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, trait_to_skol_substs); // Compute skolemized form of impl and trait const tys. - let impl_ty = impl_c.ty.subst(tcx, impl_to_skol_substs); - let trait_ty = trait_c.ty.subst(tcx, trait_to_skol_substs); - let mut origin = TypeOrigin::Misc(impl_c_span); + let impl_ty = tcx.item_type(impl_c.def_id).subst(tcx, impl_to_skol_substs); + let trait_ty = tcx.item_type(trait_c.def_id).subst(tcx, trait_to_skol_substs); + let mut cause = ObligationCause::misc(impl_c_span, impl_c_node_id); let err = infcx.commit_if_ok(|_| { // There is no "body" here, so just pass dummy id. @@ -764,11 +810,12 @@ pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, debug!("compare_const_impl: trait_ty={:?}", trait_ty); - infcx.sub_types(false, origin, impl_ty, trait_ty) - .map(|InferOk { obligations, .. 
}| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()) - }) + infcx.sub_types(false, &cause, impl_ty, trait_ty) + .map(|InferOk { obligations, value: () }| { + for obligation in obligations { + fulfillment_cx.register_predicate_obligation(&infcx, obligation); + } + }) }); if let Err(terr) = err { @@ -778,12 +825,12 @@ pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // Locate the Span containing just the type of the offending impl match tcx.map.expect_impl_item(impl_c_node_id).node { - ImplItemKind::Const(ref ty, _) => origin = TypeOrigin::Misc(ty.span), + ImplItemKind::Const(ref ty, _) => cause.span = ty.span, _ => bug!("{:?} is not a impl const", impl_c), } let mut diag = struct_span_err!(tcx.sess, - origin.span(), + cause.span, E0326, "implemented const `{}` has an incompatible type for \ trait", @@ -797,7 +844,7 @@ pub fn compare_const_impl<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, }; infcx.note_type_err(&mut diag, - origin, + &cause, Some((trait_c_span, format!("type in trait"))), Some(infer::ValuePairs::Types(ExpectedFound { expected: trait_ty, diff --git a/src/librustc_typeck/check/demand.rs b/src/librustc_typeck/check/demand.rs index d622bc7f75..ef1c08bdab 100644 --- a/src/librustc_typeck/check/demand.rs +++ b/src/librustc_typeck/check/demand.rs @@ -11,7 +11,8 @@ use check::FnCtxt; use rustc::ty::Ty; -use rustc::infer::{InferOk, TypeOrigin}; +use rustc::infer::{InferOk}; +use rustc::traits::ObligationCause; use syntax_pos::Span; use rustc::hir; @@ -20,34 +21,32 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Requires that the two types unify, and prints an error message if // they don't. pub fn demand_suptype(&self, sp: Span, expected: Ty<'tcx>, actual: Ty<'tcx>) { - let origin = TypeOrigin::Misc(sp); - match self.sub_types(false, origin, actual, expected) { - Ok(InferOk { obligations, .. }) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + let cause = self.misc(sp); + match self.sub_types(false, &cause, actual, expected) { + Ok(InferOk { obligations, value: () }) => { + self.register_predicates(obligations); }, Err(e) => { - self.report_mismatched_types(origin, expected, actual, e); + self.report_mismatched_types(&cause, expected, actual, e); } } } pub fn demand_eqtype(&self, sp: Span, expected: Ty<'tcx>, actual: Ty<'tcx>) { - self.demand_eqtype_with_origin(TypeOrigin::Misc(sp), expected, actual); + self.demand_eqtype_with_origin(&self.misc(sp), expected, actual); } pub fn demand_eqtype_with_origin(&self, - origin: TypeOrigin, + cause: &ObligationCause<'tcx>, expected: Ty<'tcx>, actual: Ty<'tcx>) { - match self.eq_types(false, origin, actual, expected) { - Ok(InferOk { obligations, .. 
}) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + match self.eq_types(false, cause, actual, expected) { + Ok(InferOk { obligations, value: () }) => { + self.register_predicates(obligations); }, Err(e) => { - self.report_mismatched_types(origin, expected, actual, e); + self.report_mismatched_types(cause, expected, actual, e); } } } @@ -56,9 +55,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn demand_coerce(&self, expr: &hir::Expr, checked_ty: Ty<'tcx>, expected: Ty<'tcx>) { let expected = self.resolve_type_vars_with_obligations(expected); if let Err(e) = self.try_coerce(expr, checked_ty, expected) { - let origin = TypeOrigin::Misc(expr.span); + let cause = self.misc(expr.span); let expr_ty = self.resolve_type_vars_with_obligations(checked_ty); - self.report_mismatched_types(origin, expected, expr_ty, e); + self.report_mismatched_types(&cause, expected, expr_ty, e); } } } diff --git a/src/librustc_typeck/check/dropck.rs b/src/librustc_typeck/check/dropck.rs index e72bcb3079..e13c4ea314 100644 --- a/src/librustc_typeck/check/dropck.rs +++ b/src/librustc_typeck/check/dropck.rs @@ -17,11 +17,11 @@ use rustc::infer::{self, InferOk}; use middle::region; use rustc::ty::subst::{Subst, Substs}; use rustc::ty::{self, AdtKind, Ty, TyCtxt}; -use rustc::traits::{self, Reveal}; -use util::nodemap::FnvHashSet; +use rustc::traits::{self, ObligationCause, Reveal}; +use util::nodemap::FxHashSet; use syntax::ast; -use syntax_pos::{self, Span}; +use syntax_pos::Span; /// check_drop_impl confirms that the Drop implementation identfied by /// `drop_impl_did` is not any more specialized than the type it is @@ -41,8 +41,8 @@ use syntax_pos::{self, Span}; /// cannot do `struct S; impl Drop for S { ... }`). /// pub fn check_drop_impl(ccx: &CrateCtxt, drop_impl_did: DefId) -> Result<(), ()> { - let dtor_self_type = ccx.tcx.lookup_item_type(drop_impl_did).ty; - let dtor_predicates = ccx.tcx.lookup_predicates(drop_impl_did); + let dtor_self_type = ccx.tcx.item_type(drop_impl_did); + let dtor_predicates = ccx.tcx.item_predicates(drop_impl_did); match dtor_self_type.sty { ty::TyAdt(adt_def, self_to_impl_substs) => { ensure_drop_params_and_item_params_correspond(ccx, @@ -59,7 +59,7 @@ pub fn check_drop_impl(ccx: &CrateCtxt, drop_impl_did: DefId) -> Result<(), ()> _ => { // Destructors only work on nominal types. This was // already checked by coherence, so we can panic here. - let span = ccx.tcx.map.def_id_span(drop_impl_did, syntax_pos::DUMMY_SP); + let span = ccx.tcx.def_span(drop_impl_did); span_bug!(span, "should have been rejected by coherence check: {}", dtor_self_type); @@ -85,16 +85,16 @@ fn ensure_drop_params_and_item_params_correspond<'a, 'tcx>( let tcx = infcx.tcx; let mut fulfillment_cx = traits::FulfillmentContext::new(); - let named_type = tcx.lookup_item_type(self_type_did).ty; + let named_type = tcx.item_type(self_type_did); let named_type = named_type.subst(tcx, &infcx.parameter_environment.free_substs); - let drop_impl_span = tcx.map.def_id_span(drop_impl_did, syntax_pos::DUMMY_SP); + let drop_impl_span = tcx.def_span(drop_impl_did); let fresh_impl_substs = infcx.fresh_substs_for_item(drop_impl_span, drop_impl_did); let fresh_impl_self_ty = drop_impl_ty.subst(tcx, fresh_impl_substs); - match infcx.eq_types(true, infer::TypeOrigin::Misc(drop_impl_span), - named_type, fresh_impl_self_ty) { + let cause = &ObligationCause::misc(drop_impl_span, drop_impl_node_id); + match infcx.eq_types(true, cause, named_type, fresh_impl_self_ty) { Ok(InferOk { obligations, .. 
}) => { // FIXME(#32730) propagate obligations assert!(obligations.is_empty()); @@ -173,11 +173,11 @@ fn ensure_drop_predicates_are_implied_by_item_defn<'a, 'tcx>( let self_type_node_id = tcx.map.as_local_node_id(self_type_did).unwrap(); - let drop_impl_span = tcx.map.def_id_span(drop_impl_did, syntax_pos::DUMMY_SP); + let drop_impl_span = tcx.def_span(drop_impl_did); // We can assume the predicates attached to struct/enum definition // hold. - let generic_assumptions = tcx.lookup_predicates(self_type_did); + let generic_assumptions = tcx.item_predicates(self_type_did); let assumptions_in_impl_context = generic_assumptions.instantiate(tcx, &self_to_impl_substs); let assumptions_in_impl_context = assumptions_in_impl_context.predicates; @@ -289,7 +289,7 @@ pub fn check_safety_of_destructor_if_necessary<'a, 'gcx, 'tcx>( rcx: rcx, span: span, parent_scope: parent_scope, - breadcrumbs: FnvHashSet() + breadcrumbs: FxHashSet() }, TypeContext::Root, typ, @@ -347,7 +347,7 @@ enum TypeContext { struct DropckContext<'a, 'b: 'a, 'gcx: 'b+'tcx, 'tcx: 'b> { rcx: &'a mut RegionCtxt<'b, 'gcx, 'tcx>, /// types that have already been traversed - breadcrumbs: FnvHashSet>, + breadcrumbs: FxHashSet>, /// span for error reporting span: Span, /// the scope reachable dtorck types must outlive @@ -482,8 +482,14 @@ fn iterate_over_potentially_unsafe_regions_in_type<'a, 'b, 'gcx, 'tcx>( Ok(()) } - ty::TyTuple(tys) | - ty::TyClosure(_, ty::ClosureSubsts { upvar_tys: tys, .. }) => { + ty::TyClosure(def_id, substs) => { + for ty in substs.upvar_tys(def_id, tcx) { + iterate_over_potentially_unsafe_regions_in_type(cx, context, ty, depth+1)? + } + Ok(()) + } + + ty::TyTuple(tys) => { for ty in tys { iterate_over_potentially_unsafe_regions_in_type(cx, context, ty, depth+1)? } @@ -509,7 +515,7 @@ fn iterate_over_potentially_unsafe_regions_in_type<'a, 'b, 'gcx, 'tcx>( } // these are always dtorck - ty::TyTrait(..) | ty::TyProjection(_) | ty::TyAnon(..) => bug!(), + ty::TyDynamic(..) | ty::TyProjection(_) | ty::TyAnon(..) => bug!(), } } @@ -553,12 +559,12 @@ fn has_dtor_of_interest<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, // attributes attached to the impl's generics. let dtor_method = adt_def.destructor() .expect("dtorck type without destructor impossible"); - let method = tcx.impl_or_trait_item(dtor_method); - let impl_id: DefId = method.container().id(); - let revised_ty = revise_self_ty(tcx, adt_def, impl_id, substs); + let method = tcx.associated_item(dtor_method); + let impl_def_id = method.container.id(); + let revised_ty = revise_self_ty(tcx, adt_def, impl_def_id, substs); return DropckKind::RevisedSelf(revised_ty); } - ty::TyTrait(..) | ty::TyProjection(..) | ty::TyAnon(..) => { + ty::TyDynamic(..) | ty::TyProjection(..) | ty::TyAnon(..) => { debug!("ty: {:?} isn't known, and therefore is a dropck type", ty); return DropckKind::BorrowedDataMustStrictlyOutliveSelf; }, @@ -570,30 +576,30 @@ fn has_dtor_of_interest<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, // Constructs new Ty just like the type defined by `adt_def` coupled // with `substs`, except each type and lifetime parameter marked as -// `#[may_dangle]` in the Drop impl (identified by `impl_id`) is +// `#[may_dangle]` in the Drop impl (identified by `impl_def_id`) is // respectively mapped to `()` or `'static`. // // For example: If the `adt_def` maps to: // // enum Foo<'a, X, Y> { ... } // -// and the `impl_id` maps to: +// and the `impl_def_id` maps to: // // impl<#[may_dangle] 'a, X, #[may_dangle] Y> Drop for Foo<'a, X, Y> { ... 
} // // then revises input: `Foo<'r,i64,&'r i64>` to: `Foo<'static,i64,()>` fn revise_self_ty<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, - adt_def: ty::AdtDef<'tcx>, - impl_id: DefId, + adt_def: &'tcx ty::AdtDef, + impl_def_id: DefId, substs: &Substs<'tcx>) -> Ty<'tcx> { // Get generics for `impl Drop` to query for `#[may_dangle]` attr. - let impl_bindings = tcx.lookup_generics(impl_id); + let impl_bindings = tcx.item_generics(impl_def_id); // Get Substs attached to Self on `impl Drop`; process in parallel // with `substs`, replacing dangling entries as appropriate. let self_substs = { - let impl_self_ty: Ty<'tcx> = tcx.lookup_item_type(impl_id).ty; + let impl_self_ty: Ty<'tcx> = tcx.item_type(impl_def_id); if let ty::TyAdt(self_adt_def, self_substs) = impl_self_ty.sty { assert_eq!(adt_def, self_adt_def); self_substs @@ -648,5 +654,5 @@ fn revise_self_ty<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, t }); - return tcx.mk_adt(adt_def, &substs); + tcx.mk_adt(adt_def, &substs) } diff --git a/src/librustc_typeck/check/intrinsic.rs b/src/librustc_typeck/check/intrinsic.rs index 7d2547ec17..183a2a48ff 100644 --- a/src/librustc_typeck/check/intrinsic.rs +++ b/src/librustc_typeck/check/intrinsic.rs @@ -12,20 +12,21 @@ //! intrinsics that the compiler exposes. use intrinsics; -use rustc::infer::TypeOrigin; +use rustc::traits::{ObligationCause, ObligationCauseCode}; use rustc::ty::subst::Substs; -use rustc::ty::FnSig; use rustc::ty::{self, Ty}; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::FxHashMap; use {CrateCtxt, require_same_types}; use syntax::abi::Abi; use syntax::ast; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax_pos::Span; use rustc::hir; +use std::iter; + fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::ForeignItem, n_tps: usize, @@ -34,7 +35,6 @@ fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, output: Ty<'tcx>) { let tcx = ccx.tcx; let def_id = tcx.map.local_def_id(it.id); - let i_ty = tcx.lookup_item_type(def_id); let substs = Substs::for_item(tcx, def_id, |_, _| tcx.mk_region(ty::ReErased), @@ -43,13 +43,9 @@ fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let fty = tcx.mk_fn_def(def_id, substs, tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Unsafe, abi: abi, - sig: ty::Binder(FnSig { - inputs: inputs, - output: output, - variadic: false, - }), + sig: ty::Binder(tcx.mk_fn_sig(inputs.into_iter(), output, false)), })); - let i_n_tps = i_ty.generics.types.len(); + let i_n_tps = tcx.item_generics(def_id).types.len(); if i_n_tps != n_tps { let span = match it.node { hir::ForeignItemFn(_, ref generics) => generics.span, @@ -64,8 +60,10 @@ fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, .emit(); } else { require_same_types(ccx, - TypeOrigin::IntrinsicType(it.span), - i_ty.ty, + &ObligationCause::new(it.span, + it.id, + ObligationCauseCode::IntrinsicType), + tcx.item_type(def_id), fty); } } @@ -74,7 +72,7 @@ fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, /// and in libcore/intrinsics.rs pub fn check_intrinsic_type(ccx: &CrateCtxt, it: &hir::ForeignItem) { fn param<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, n: u32) -> Ty<'tcx> { - let name = token::intern(&format!("P{}", n)); + let name = Symbol::intern(&format!("P{}", n)); ccx.tcx.mk_param(n, name) } @@ -298,11 +296,7 @@ pub fn check_intrinsic_type(ccx: &CrateCtxt, it: &hir::ForeignItem) { let fn_ty = tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(FnSig { - inputs: 
vec![mut_u8], - output: tcx.mk_nil(), - variadic: false, - }), + sig: ty::Binder(tcx.mk_fn_sig(iter::once(mut_u8), tcx.mk_nil(), false)), }); (0, vec![tcx.mk_fn_ptr(fn_ty), mut_u8, mut_u8], tcx.types.i32) } @@ -325,13 +319,13 @@ pub fn check_intrinsic_type(ccx: &CrateCtxt, it: &hir::ForeignItem) { pub fn check_platform_intrinsic_type(ccx: &CrateCtxt, it: &hir::ForeignItem) { let param = |n| { - let name = token::intern(&format!("P{}", n)); + let name = Symbol::intern(&format!("P{}", n)); ccx.tcx.mk_param(n, name) }; let tcx = ccx.tcx; - let i_ty = tcx.lookup_item_type(tcx.map.local_def_id(it.id)); - let i_n_tps = i_ty.generics.types.len(); + let def_id = tcx.map.local_def_id(it.id); + let i_n_tps = tcx.item_generics(def_id).types.len(); let name = it.name.as_str(); let (n_tps, inputs, output) = match &*name { @@ -372,24 +366,25 @@ pub fn check_platform_intrinsic_type(ccx: &CrateCtxt, return } - let mut structural_to_nomimal = FnvHashMap(); + let mut structural_to_nomimal = FxHashMap(); - let sig = tcx.no_late_bound_regions(i_ty.ty.fn_sig()).unwrap(); - if intr.inputs.len() != sig.inputs.len() { + let sig = tcx.item_type(def_id).fn_sig(); + let sig = tcx.no_late_bound_regions(sig).unwrap(); + if intr.inputs.len() != sig.inputs().len() { span_err!(tcx.sess, it.span, E0444, "platform-specific intrinsic has invalid number of \ arguments: found {}, expected {}", - sig.inputs.len(), intr.inputs.len()); + sig.inputs().len(), intr.inputs.len()); return } - let input_pairs = intr.inputs.iter().zip(&sig.inputs); + let input_pairs = intr.inputs.iter().zip(sig.inputs()); for (i, (expected_arg, arg)) in input_pairs.enumerate() { match_intrinsic_type_to_type(ccx, &format!("argument {}", i + 1), it.span, &mut structural_to_nomimal, expected_arg, arg); } match_intrinsic_type_to_type(ccx, "return value", it.span, &mut structural_to_nomimal, - &intr.output, sig.output); + &intr.output, sig.output()); return } None => { @@ -412,7 +407,7 @@ fn match_intrinsic_type_to_type<'tcx, 'a>( ccx: &CrateCtxt<'a, 'tcx>, position: &str, span: Span, - structural_to_nominal: &mut FnvHashMap<&'a intrinsics::Type, ty::Ty<'tcx>>, + structural_to_nominal: &mut FxHashMap<&'a intrinsics::Type, ty::Ty<'tcx>>, expected: &'a intrinsics::Type, t: ty::Ty<'tcx>) { use intrinsics::Type::*; diff --git a/src/librustc_typeck/check/method/confirm.rs b/src/librustc_typeck/check/method/confirm.rs index f88bb355d1..ff9eaa012b 100644 --- a/src/librustc_typeck/check/method/confirm.rs +++ b/src/librustc_typeck/check/method/confirm.rs @@ -17,7 +17,7 @@ use rustc::traits; use rustc::ty::{self, LvaluePreference, NoPreference, PreferMutLvalue, Ty}; use rustc::ty::adjustment::{Adjustment, Adjust, AutoBorrow}; use rustc::ty::fold::TypeFoldable; -use rustc::infer::{self, InferOk, TypeOrigin}; +use rustc::infer::{self, InferOk}; use syntax_pos::Span; use rustc::hir; @@ -37,16 +37,6 @@ impl<'a, 'gcx, 'tcx> Deref for ConfirmContext<'a, 'gcx, 'tcx> { } } -struct InstantiatedMethodSig<'tcx> { - /// Function signature of the method being invoked. The 0th - /// argument is the receiver. - method_sig: ty::FnSig<'tcx>, - - /// Generic bounds on the method's parameters which must be added - /// as pending obligations. - method_predicates: ty::InstantiatedPredicates<'tcx>, -} - impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn confirm_method(&self, span: Span, @@ -98,31 +88,18 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { debug!("all_substs={:?}", all_substs); // Create the final signature for the method, replacing late-bound regions. 
- let InstantiatedMethodSig { method_sig, method_predicates } = - self.instantiate_method_sig(&pick, all_substs); - let method_self_ty = method_sig.inputs[0]; + let (method_ty, method_predicates) = self.instantiate_method_sig(&pick, all_substs); // Unify the (adjusted) self type with what the method expects. - self.unify_receivers(self_ty, method_self_ty); - - // Create the method type - let def_id = pick.item.def_id(); - let method_ty = pick.item.as_opt_method().unwrap(); - let fty = self.tcx.mk_fn_def(def_id, - all_substs, - self.tcx.mk_bare_fn(ty::BareFnTy { - sig: ty::Binder(method_sig), - unsafety: method_ty.fty.unsafety, - abi: method_ty.fty.abi.clone(), - })); + self.unify_receivers(self_ty, method_ty.fn_sig().input(0).skip_binder()); // Add any trait/regions obligations specified on the method's type parameters. - self.add_obligations(fty, all_substs, &method_predicates); + self.add_obligations(method_ty, all_substs, &method_predicates); // Create the final `MethodCallee`. let callee = ty::MethodCallee { - def_id: def_id, - ty: fty, + def_id: pick.item.def_id, + ty: method_ty, substs: all_substs, }; @@ -193,7 +170,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { -> &'tcx Substs<'tcx> { match pick.kind { probe::InherentImplPick => { - let impl_def_id = pick.item.container().id(); + let impl_def_id = pick.item.container.id(); assert!(self.tcx.impl_trait_ref(impl_def_id).is_none(), "impl {:?} is not an inherent impl", impl_def_id); @@ -201,7 +178,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { } probe::ObjectPick => { - let trait_def_id = pick.item.container().id(); + let trait_def_id = pick.item.container.id(); self.extract_existential_trait_ref(self_ty, |this, object_ty, principal| { // The object data has no entry for the Self // Type. For the purposes of this method call, we @@ -244,7 +221,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { } probe::TraitPick => { - let trait_def_id = pick.item.container().id(); + let trait_def_id = pick.item.container.id(); // Make a trait reference `$0 : Trait<$1...$n>` // consisting entirely of type variables. Later on in @@ -278,7 +255,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { .autoderef(self.span, self_ty) .filter_map(|(ty, _)| { match ty.sty { - ty::TyTrait(ref data) => Some(closure(self, ty, data.principal)), + ty::TyDynamic(ref data, ..) => data.principal().map(|p| closure(self, ty, p)), _ => None, } }) @@ -299,8 +276,8 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { // If they were not explicitly supplied, just construct fresh // variables. let num_supplied_types = supplied_method_types.len(); - let method = pick.item.as_opt_method().unwrap(); - let num_method_types = method.generics.types.len(); + let method_generics = self.tcx.item_generics(pick.item.def_id); + let num_method_types = method_generics.types.len(); if num_supplied_types > 0 && num_supplied_types != num_method_types { if num_method_types == 0 { @@ -332,18 +309,15 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { // parameters from the type and those from the method. 
// // FIXME -- permit users to manually specify lifetimes - let supplied_start = substs.params().len() + method.generics.regions.len(); - Substs::for_item(self.tcx, - method.def_id, - |def, _| { + let supplied_start = substs.params().len() + method_generics.regions.len(); + Substs::for_item(self.tcx, pick.item.def_id, |def, _| { let i = def.index as usize; if i < substs.params().len() { substs.region_at(i) } else { self.region_var_for_def(self.span, def) } - }, - |def, cur_substs| { + }, |def, cur_substs| { let i = def.index as usize; if i < substs.params().len() { substs.type_at(i) @@ -356,10 +330,9 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { } fn unify_receivers(&mut self, self_ty: Ty<'tcx>, method_self_ty: Ty<'tcx>) { - match self.sub_types(false, TypeOrigin::Misc(self.span), self_ty, method_self_ty) { - Ok(InferOk { obligations, .. }) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + match self.sub_types(false, &self.misc(self.span), self_ty, method_self_ty) { + Ok(InferOk { obligations, value: () }) => { + self.register_predicates(obligations); } Err(_) => { span_bug!(self.span, @@ -376,7 +349,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { fn instantiate_method_sig(&mut self, pick: &probe::Pick<'tcx>, all_substs: &'tcx Substs<'tcx>) - -> InstantiatedMethodSig<'tcx> { + -> (Ty<'tcx>, ty::InstantiatedPredicates<'tcx>) { debug!("instantiate_method_sig(pick={:?}, all_substs={:?})", pick, all_substs); @@ -384,36 +357,40 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { // Instantiate the bounds on the method with the // type/early-bound-regions substitutions performed. There can // be no late-bound regions appearing here. - let method_predicates = pick.item - .as_opt_method() - .unwrap() - .predicates - .instantiate(self.tcx, all_substs); - let method_predicates = self.normalize_associated_types_in(self.span, &method_predicates); + let def_id = pick.item.def_id; + let method_predicates = self.tcx.item_predicates(def_id) + .instantiate(self.tcx, all_substs); + let method_predicates = self.normalize_associated_types_in(self.span, + &method_predicates); debug!("method_predicates after subst = {:?}", method_predicates); + let fty = match self.tcx.item_type(def_id).sty { + ty::TyFnDef(_, _, f) => f, + _ => bug!() + }; + // Instantiate late-bound regions and substitute the trait // parameters into the method type to get the actual method type. // // NB: Instantiate late-bound regions first so that // `instantiate_type_scheme` can normalize associated types that // may reference those regions. 
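The hunk above fills in any method type parameters the caller did not supply with fresh inference variables. As a standalone illustration in ordinary Rust (not compiler code, only an analogy for the surface behaviour being type-checked here), supplying the parameters explicitly versus leaving them to inference looks like this:

```rust
fn main() {
    let xs = vec![1, 2, 3];

    // Type parameters supplied explicitly: the turbofish plays the role of
    // `supplied_method_types`, so nothing is left for inference.
    let a = xs.iter().cloned().collect::<Vec<i32>>();

    // Type parameters not supplied: the checker creates a fresh type
    // variable for the collection type, which is later constrained by the
    // annotation on the binding.
    let b: Vec<i32> = xs.iter().cloned().collect();

    assert_eq!(a, b);
}
```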
- let method_sig = self.replace_late_bound_regions_with_fresh_var(&pick.item - .as_opt_method() - .unwrap() - .fty - .sig); + let method_sig = self.replace_late_bound_regions_with_fresh_var(&fty.sig); debug!("late-bound lifetimes from method instantiated, method_sig={:?}", method_sig); let method_sig = self.instantiate_type_scheme(self.span, all_substs, &method_sig); debug!("type scheme substituted, method_sig={:?}", method_sig); - InstantiatedMethodSig { - method_sig: method_sig, - method_predicates: method_predicates, - } + let method_ty = self.tcx.mk_fn_def(def_id, all_substs, + self.tcx.mk_bare_fn(ty::BareFnTy { + sig: ty::Binder(method_sig), + unsafety: fty.unsafety, + abi: fty.abi, + })); + + (method_ty, method_predicates) } fn add_obligations(&mut self, @@ -587,7 +564,7 @@ impl<'a, 'gcx, 'tcx> ConfirmContext<'a, 'gcx, 'tcx> { fn enforce_illegal_method_limitations(&self, pick: &probe::Pick) { // Disallow calls to the method `drop` defined in the `Drop` trait. - match pick.item.container() { + match pick.item.container { ty::TraitContainer(trait_def_id) => { callee::check_legal_trait_for_method_call(self.ccx, self.span, trait_def_id) } diff --git a/src/librustc_typeck/check/method/mod.rs b/src/librustc_typeck/check/method/mod.rs index 2df562f9ad..b29eab780e 100644 --- a/src/librustc_typeck/check/method/mod.rs +++ b/src/librustc_typeck/check/method/mod.rs @@ -136,6 +136,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.tcx.used_trait_imports.borrow_mut().insert(import_id); } + self.tcx.check_stability(pick.item.def_id, call_expr.id, span); + Ok(self.confirm_method(span, self_expr, call_expr, @@ -190,13 +192,6 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { m_name, trait_def_id); - let trait_def = self.tcx.lookup_trait_def(trait_def_id); - - if let Some(ref input_types) = opt_input_types { - assert_eq!(trait_def.generics.types.len() - 1, input_types.len()); - } - assert!(trait_def.generics.regions.is_empty()); - // Construct a trait-reference `self_ty : Trait` let substs = Substs::for_item(self.tcx, trait_def_id, @@ -228,14 +223,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Trait must have a method named `m_name` and it should not have // type parameters or early-bound regions. let tcx = self.tcx; - let method_item = self.impl_or_trait_item(trait_def_id, m_name).unwrap(); - let method_ty = method_item.as_opt_method().unwrap(); - assert_eq!(method_ty.generics.types.len(), 0); - assert_eq!(method_ty.generics.regions.len(), 0); + let method_item = self.associated_item(trait_def_id, m_name).unwrap(); + let def_id = method_item.def_id; + let generics = tcx.item_generics(def_id); + assert_eq!(generics.types.len(), 0); + assert_eq!(generics.regions.len(), 0); - debug!("lookup_in_trait_adjusted: method_item={:?} method_ty={:?}", - method_item, - method_ty); + debug!("lookup_in_trait_adjusted: method_item={:?}", method_item); // Instantiate late-bound regions and substitute the trait // parameters into the method type to get the actual method type. @@ -243,22 +237,25 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // NB: Instantiate late-bound regions first so that // `instantiate_type_scheme` can normalize associated types that // may reference those regions. 
- let fn_sig = - self.replace_late_bound_regions_with_fresh_var(span, infer::FnCall, &method_ty.fty.sig) - .0; + let original_method_ty = tcx.item_type(def_id); + let fty = match original_method_ty.sty { + ty::TyFnDef(_, _, f) => f, + _ => bug!() + }; + let fn_sig = self.replace_late_bound_regions_with_fresh_var(span, + infer::FnCall, + &fty.sig).0; let fn_sig = self.instantiate_type_scheme(span, trait_ref.substs, &fn_sig); - let transformed_self_ty = fn_sig.inputs[0]; - let def_id = method_item.def_id(); - let fty = tcx.mk_fn_def(def_id, - trait_ref.substs, - tcx.mk_bare_fn(ty::BareFnTy { - sig: ty::Binder(fn_sig), - unsafety: method_ty.fty.unsafety, - abi: method_ty.fty.abi.clone(), - })); + let transformed_self_ty = fn_sig.inputs()[0]; + let method_ty = tcx.mk_fn_def(def_id, trait_ref.substs, + tcx.mk_bare_fn(ty::BareFnTy { + sig: ty::Binder(fn_sig), + unsafety: fty.unsafety, + abi: fty.abi + })); - debug!("lookup_in_trait_adjusted: matched method fty={:?} obligation={:?}", - fty, + debug!("lookup_in_trait_adjusted: matched method method_ty={:?} obligation={:?}", + method_ty, obligation); // Register obligations for the parameters. This will include the @@ -269,13 +266,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // // Note that as the method comes from a trait, it should not have // any late-bound regions appearing in its bounds. - let method_bounds = self.instantiate_bounds(span, trait_ref.substs, &method_ty.predicates); + let method_bounds = self.instantiate_bounds(span, def_id, trait_ref.substs); assert!(!method_bounds.has_escaping_regions()); self.add_obligations_for_parameters(traits::ObligationCause::misc(span, self.body_id), &method_bounds); // Also register an obligation for the method type being well-formed. - self.register_wf_obligation(fty, span, traits::MiscObligation); + self.register_wf_obligation(method_ty, span, traits::MiscObligation); // FIXME(#18653) -- Try to resolve obligations, giving us more // typing information, which can sometimes be needed to avoid @@ -283,61 +280,39 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.select_obligations_where_possible(); // Insert any adjustments needed (always an autoref of some mutability). - match self_expr { - None => {} + if let Some(self_expr) = self_expr { + debug!("lookup_in_trait_adjusted: inserting adjustment if needed \ + (self-id={}, autoderefs={}, unsize={}, fty={:?})", + self_expr.id, autoderefs, unsize, original_method_ty); - Some(self_expr) => { - debug!("lookup_in_trait_adjusted: inserting adjustment if needed \ - (self-id={}, autoderefs={}, unsize={}, explicit_self={:?})", - self_expr.id, - autoderefs, - unsize, - method_ty.explicit_self); + let original_sig = original_method_ty.fn_sig(); + let autoref = match (&original_sig.input(0).skip_binder().sty, + &transformed_self_ty.sty) { + (&ty::TyRef(..), &ty::TyRef(region, ty::TypeAndMut { mutbl, ty: _ })) => { + // Trait method is fn(&self) or fn(&mut self), need an + // autoref. Pull the region etc out of the type of first argument. + Some(AutoBorrow::Ref(region, mutbl)) + } + _ => { + // Trait method is fn(self), no transformation needed. + assert!(!unsize); + None + } + }; - let autoref = match method_ty.explicit_self { - ty::ExplicitSelfCategory::ByValue => { - // Trait method is fn(self), no transformation needed. - assert!(!unsize); - None - } - - ty::ExplicitSelfCategory::ByReference(..) => { - // Trait method is fn(&self) or fn(&mut self), need an - // autoref. Pull the region etc out of the type of first argument. 
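The rewritten match above records an `AutoBorrow::Ref` adjustment when the trait method takes `&self` or `&mut self`. A standalone sketch in plain Rust (not compiler internals; all names here are illustrative) of the implicit borrow that this adjustment represents:

```rust
trait Describe {
    fn describe(&self) -> String;
}

struct Point { x: i32, y: i32 }

impl Describe for Point {
    fn describe(&self) -> String {
        format!("({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    // `p.describe()` is elaborated to `Describe::describe(&p)`: the `&p`
    // autoref is the adjustment recorded for a `&self` method called on a
    // by-value receiver.
    assert_eq!(p.describe(), Describe::describe(&p));
}
```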
- match transformed_self_ty.sty { - ty::TyRef(region, ty::TypeAndMut { mutbl, ty: _ }) => { - Some(AutoBorrow::Ref(region, mutbl)) - } - - _ => { - span_bug!(span, - "trait method is &self but first arg is: {}", - transformed_self_ty); - } - } - } - - _ => { - span_bug!(span, - "unexpected explicit self type in operator method: {:?}", - method_ty.explicit_self); - } - }; - - self.write_adjustment(self_expr.id, Adjustment { - kind: Adjust::DerefRef { - autoderefs: autoderefs, - autoref: autoref, - unsize: unsize - }, - target: transformed_self_ty - }); - } + self.write_adjustment(self_expr.id, Adjustment { + kind: Adjust::DerefRef { + autoderefs: autoderefs, + autoref: autoref, + unsize: unsize + }, + target: transformed_self_ty + }); } let callee = ty::MethodCallee { def_id: def_id, - ty: fty, + ty: method_ty, substs: trait_ref.substs, }; @@ -360,9 +335,12 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } let def = pick.item.def(); + + self.tcx.check_stability(def.def_id(), expr_id, span); + if let probe::InherentImplPick = pick.kind { - if !pick.item.vis().is_accessible_from(self.body_id, &self.tcx.map) { - let msg = format!("{} `{}` is private", def.kind_name(), &method_name.as_str()); + if !pick.item.vis.is_accessible_from(self.body_id, &self.tcx.map) { + let msg = format!("{} `{}` is private", def.kind_name(), method_name); self.tcx.sess.span_err(span, &msg); } } @@ -371,14 +349,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { /// Find item with name `item_name` defined in impl/trait `def_id` /// and return it, or `None`, if no such item was defined there. - pub fn impl_or_trait_item(&self, - def_id: DefId, - item_name: ast::Name) - -> Option> { - self.tcx - .impl_or_trait_items(def_id) - .iter() - .map(|&did| self.tcx.impl_or_trait_item(did)) - .find(|m| m.name() == item_name) + pub fn associated_item(&self, def_id: DefId, item_name: ast::Name) + -> Option { + self.tcx.associated_items(def_id).find(|item| item.name == item_name) } } diff --git a/src/librustc_typeck/check/method/probe.rs b/src/librustc_typeck/check/method/probe.rs index 43837de2f3..0f28be90ab 100644 --- a/src/librustc_typeck/check/method/probe.rs +++ b/src/librustc_typeck/check/method/probe.rs @@ -16,13 +16,14 @@ use super::suggest; use check::FnCtxt; use hir::def_id::DefId; use hir::def::Def; +use rustc::infer::InferOk; use rustc::ty::subst::{Subst, Substs}; -use rustc::traits; +use rustc::traits::{self, ObligationCause}; use rustc::ty::{self, Ty, ToPolyTraitRef, TraitRef, TypeFoldable}; -use rustc::infer::{InferOk, TypeOrigin}; -use rustc::util::nodemap::FnvHashSet; +use rustc::infer::type_variable::TypeVariableOrigin; +use rustc::util::nodemap::FxHashSet; use syntax::ast; -use syntax_pos::{Span, DUMMY_SP}; +use syntax_pos::Span; use rustc::hir; use std::mem; use std::ops::Deref; @@ -40,7 +41,7 @@ struct ProbeContext<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> { opt_simplified_steps: Option>, inherent_candidates: Vec>, extension_candidates: Vec>, - impl_dups: FnvHashSet, + impl_dups: FxHashSet, import_id: Option, /// Collects near misses when the candidate functions are missing a `self` keyword and is only @@ -72,7 +73,7 @@ struct CandidateStep<'tcx> { #[derive(Debug)] struct Candidate<'tcx> { xform_self_ty: Ty<'tcx>, - item: ty::ImplOrTraitItem<'tcx>, + item: ty::AssociatedItem, kind: CandidateKind<'tcx>, import_id: Option, } @@ -95,7 +96,7 @@ enum CandidateKind<'tcx> { #[derive(Debug)] pub struct Pick<'tcx> { - pub item: ty::ImplOrTraitItem<'tcx>, + pub item: ty::AssociatedItem, pub kind: PickKind<'tcx>, pub import_id: 
Option, @@ -263,7 +264,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { item_name: item_name, inherent_candidates: Vec::new(), extension_candidates: Vec::new(), - impl_dups: FnvHashSet(), + impl_dups: FxHashSet(), import_id: None, steps: Rc::new(steps), opt_simplified_steps: opt_simplified_steps, @@ -295,9 +296,11 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { debug!("assemble_probe: self_ty={:?}", self_ty); match self_ty.sty { - ty::TyTrait(box ref data) => { - self.assemble_inherent_candidates_from_object(self_ty, data.principal); - self.assemble_inherent_impl_candidates_for_type(data.principal.def_id()); + ty::TyDynamic(ref data, ..) => { + if let Some(p) = data.principal() { + self.assemble_inherent_candidates_from_object(self_ty, p); + self.assemble_inherent_impl_candidates_for_type(p.def_id()); + } } ty::TyAdt(def, _) => { self.assemble_inherent_impl_candidates_for_type(def.did); @@ -384,8 +387,6 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_inherent_impl_for_primitive(&mut self, lang_def_id: Option) { if let Some(impl_def_id) = lang_def_id { - self.tcx.populate_implementations_for_primitive_if_necessary(impl_def_id); - self.assemble_inherent_impl_probe(impl_def_id); } } @@ -409,7 +410,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { debug!("assemble_inherent_impl_probe {:?}", impl_def_id); - let item = match self.impl_or_trait_item(impl_def_id) { + let item = match self.associated_item(impl_def_id) { Some(m) => m, None => { return; @@ -421,7 +422,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { return self.record_static_candidate(ImplSource(impl_def_id)); } - if !item.vis().is_accessible_from(self.body_id, &self.tcx.map) { + if !item.vis.is_accessible_from(self.body_id, &self.tcx.map) { self.private_candidate = Some(item.def()); return; } @@ -512,17 +513,6 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { let xform_self_ty = this.xform_self_ty(&item, trait_ref.self_ty(), trait_ref.substs); - if let Some(ref m) = item.as_opt_method() { - debug!("found match: trait_ref={:?} substs={:?} m={:?}", - trait_ref, - trait_ref.substs, - m); - assert_eq!(m.generics.parent_types as usize, - trait_ref.substs.types().count()); - assert_eq!(m.generics.parent_regions as usize, - trait_ref.substs.regions().count()); - } - // Because this trait derives from a where-clause, it // should not contain any inference variables or other // artifacts. 
This means it is safe to put into the @@ -544,13 +534,13 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn elaborate_bounds(&mut self, bounds: &[ty::PolyTraitRef<'tcx>], mut mk_cand: F) where F: for<'b> FnMut(&mut ProbeContext<'b, 'gcx, 'tcx>, ty::PolyTraitRef<'tcx>, - ty::ImplOrTraitItem<'tcx>) + ty::AssociatedItem) { debug!("elaborate_bounds(bounds={:?})", bounds); let tcx = self.tcx; for bound_trait_ref in traits::transitive_bounds(tcx, bounds) { - let item = match self.impl_or_trait_item(bound_trait_ref.def_id()) { + let item = match self.associated_item(bound_trait_ref.def_id()) { Some(v) => v, None => { continue; @@ -568,7 +558,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_extension_candidates_for_traits_in_scope(&mut self, expr_id: ast::NodeId) -> Result<(), MethodError<'tcx>> { - let mut duplicates = FnvHashSet(); + let mut duplicates = FxHashSet(); let opt_applicable_traits = self.tcx.trait_map.get(&expr_id); if let Some(applicable_traits) = opt_applicable_traits { for trait_candidate in applicable_traits { @@ -585,7 +575,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { } fn assemble_extension_candidates_for_all_traits(&mut self) -> Result<(), MethodError<'tcx>> { - let mut duplicates = FnvHashSet(); + let mut duplicates = FxHashSet(); for trait_info in suggest::all_traits(self.ccx) { if duplicates.insert(trait_info.def_id) { self.assemble_extension_candidates_for_trait(trait_info.def_id)?; @@ -601,9 +591,8 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { trait_def_id); // Check whether `trait_def_id` defines a method with suitable name: - let trait_items = self.tcx.trait_items(trait_def_id); - let maybe_item = trait_items.iter() - .find(|item| item.name() == self.item_name); + let maybe_item = self.tcx.associated_items(trait_def_id) + .find(|item| item.name == self.item_name); let item = match maybe_item { Some(i) => i, None => { @@ -612,7 +601,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { }; // Check whether `trait_def_id` defines a method with suitable name: - if !self.has_applicable_self(item) { + if !self.has_applicable_self(&item) { debug!("method has inapplicable self"); self.record_static_candidate(TraitSource(trait_def_id)); return Ok(()); @@ -631,7 +620,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_extension_candidates_for_trait_impls(&mut self, trait_def_id: DefId, - item: ty::ImplOrTraitItem<'tcx>) { + item: ty::AssociatedItem) { let trait_def = self.tcx.lookup_trait_def(trait_def_id); // FIXME(arielb1): can we use for_each_relevant_impl here? @@ -686,9 +675,9 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { } }; - let impl_type = self.tcx.lookup_item_type(impl_def_id); + let impl_type = self.tcx.item_type(impl_def_id); let impl_simplified_type = - match ty::fast_reject::simplify_type(self.tcx, impl_type.ty, false) { + match ty::fast_reject::simplify_type(self.tcx, impl_type, false) { Some(simplified_type) => simplified_type, None => { return true; @@ -700,7 +689,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_closure_candidates(&mut self, trait_def_id: DefId, - item: ty::ImplOrTraitItem<'tcx>) + item: ty::AssociatedItem) -> Result<(), MethodError<'tcx>> { // Check if this is one of the Fn,FnMut,FnOnce traits. 
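For context on the special case introduced just above ("Check if this is one of the Fn, FnMut, FnOnce traits"): a call through a closure is sugar for one of the `Fn` family's trait methods, so the probe has to consider candidates from those traits explicitly. A minimal standalone sketch in ordinary Rust, not compiler code:

```rust
fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    // Conceptually, `f(x)` resolves to `Fn::call(&f, (x,))` once the
    // candidate from the `Fn` trait has been selected.
    f(x)
}

fn main() {
    let double = |n| n * 2;
    assert_eq!(apply(double, 21), 42);
}
```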
let tcx = self.tcx; @@ -765,7 +754,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_projection_candidates(&mut self, trait_def_id: DefId, - item: ty::ImplOrTraitItem<'tcx>) { + item: ty::AssociatedItem) { debug!("assemble_projection_candidates(\ trait_def_id={:?}, \ item={:?})", @@ -785,7 +774,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { def_id, substs); - let trait_predicates = self.tcx.lookup_predicates(def_id); + let trait_predicates = self.tcx.item_predicates(def_id); let bounds = trait_predicates.instantiate(self.tcx, substs); let predicates = bounds.predicates; debug!("assemble_projection_candidates: predicates={:?}", @@ -820,7 +809,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { fn assemble_where_clause_candidates(&mut self, trait_def_id: DefId, - item: ty::ImplOrTraitItem<'tcx>) { + item: ty::AssociatedItem) { debug!("assemble_where_clause_candidates(trait_def_id={:?})", trait_def_id); @@ -865,7 +854,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { self.assemble_extension_candidates_for_all_traits()?; let out_of_scope_traits = match self.pick_core() { - Some(Ok(p)) => vec![p.item.container().id()], + Some(Ok(p)) => vec![p.item.container.id()], Some(Err(MethodError::Ambiguity(v))) => { v.into_iter() .map(|source| { @@ -1046,10 +1035,10 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { self.probe(|_| { // First check that the self type can be related. match self.sub_types(false, - TypeOrigin::Misc(DUMMY_SP), + &ObligationCause::dummy(), self_ty, probe.xform_self_ty) { - Ok(InferOk { obligations, .. }) => { + Ok(InferOk { obligations, value: () }) => { // FIXME(#32730) propagate obligations assert!(obligations.is_empty()) } @@ -1065,7 +1054,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { // don't have enough information to fully evaluate). let (impl_def_id, substs, ref_obligations) = match probe.kind { InherentImplCandidate(ref substs, ref ref_obligations) => { - (probe.item.container().id(), substs, ref_obligations) + (probe.item.container.id(), substs, ref_obligations) } ExtensionImplCandidate(impl_def_id, ref substs, ref ref_obligations) => { @@ -1084,7 +1073,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { let cause = traits::ObligationCause::misc(self.span, self.body_id); // Check whether the impl imposes obligations we have to worry about. - let impl_bounds = self.tcx.lookup_predicates(impl_def_id); + let impl_bounds = self.tcx.item_predicates(impl_def_id); let impl_bounds = impl_bounds.instantiate(self.tcx, substs); let traits::Normalized { value: impl_bounds, obligations: norm_obligations } = traits::normalize(selcx, cause.clone(), &impl_bounds); @@ -1128,12 +1117,12 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { /// use, so it's ok to just commit to "using the method from the trait Foo". fn collapse_candidates_to_trait_pick(&self, probes: &[&Candidate<'tcx>]) -> Option> { // Do all probes correspond to the same trait? 
- let container = probes[0].item.container(); + let container = probes[0].item.container; match container { ty::TraitContainer(_) => {} ty::ImplContainer(_) => return None, } - if probes[1..].iter().any(|p| p.item.container() != container) { + if probes[1..].iter().any(|p| p.item.container != container) { return None; } @@ -1150,19 +1139,18 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { /////////////////////////////////////////////////////////////////////////// // MISCELLANY - fn has_applicable_self(&self, item: &ty::ImplOrTraitItem) -> bool { - // "fast track" -- check for usage of sugar - match *item { - ty::ImplOrTraitItem::MethodTraitItem(ref method) => { - match method.explicit_self { - ty::ExplicitSelfCategory::Static => self.mode == Mode::Path, - ty::ExplicitSelfCategory::ByValue | - ty::ExplicitSelfCategory::ByReference(..) | - ty::ExplicitSelfCategory::ByBox => true, - } - } - ty::ImplOrTraitItem::ConstTraitItem(..) => self.mode == Mode::Path, - _ => false, + fn has_applicable_self(&self, item: &ty::AssociatedItem) -> bool { + // "Fast track" -- check for usage of sugar when in method call + // mode. + // + // In Path mode (i.e., resolving a value like `T::next`), consider any + // associated value (i.e., methods, constants) but not types. + match self.mode { + Mode::MethodCall => item.method_has_self_argument, + Mode::Path => match item.kind { + ty::AssociatedKind::Type => false, + ty::AssociatedKind::Method | ty::AssociatedKind::Const => true + }, } // FIXME -- check for types that deref to `Self`, // like `Rc` and so on. @@ -1177,24 +1165,26 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { } fn xform_self_ty(&self, - item: &ty::ImplOrTraitItem<'tcx>, + item: &ty::AssociatedItem, impl_ty: Ty<'tcx>, substs: &Substs<'tcx>) -> Ty<'tcx> { - match item.as_opt_method() { - Some(ref method) => self.xform_method_self_ty(method, impl_ty, substs), - None => impl_ty, + if item.kind == ty::AssociatedKind::Method && self.mode == Mode::MethodCall { + self.xform_method_self_ty(item.def_id, impl_ty, substs) + } else { + impl_ty } } fn xform_method_self_ty(&self, - method: &Rc>, + method: DefId, impl_ty: Ty<'tcx>, substs: &Substs<'tcx>) -> Ty<'tcx> { + let self_ty = self.tcx.item_type(method).fn_sig().input(0); debug!("xform_self_ty(impl_ty={:?}, self_ty={:?}, substs={:?})", impl_ty, - method.fty.sig.0.inputs.get(0), + self_ty, substs); assert!(!substs.has_escaping_regions()); @@ -1204,26 +1194,18 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { // are given do not include type/lifetime parameters for the // method yet. So create fresh variables here for those too, // if there are any. - assert_eq!(substs.types().count(), - method.generics.parent_types as usize); - assert_eq!(substs.regions().count(), - method.generics.parent_regions as usize); - - if self.mode == Mode::Path { - return impl_ty; - } + let generics = self.tcx.item_generics(method); + assert_eq!(substs.types().count(), generics.parent_types as usize); + assert_eq!(substs.regions().count(), generics.parent_regions as usize); // Erase any late-bound regions from the method and substitute // in the values from the substitution. 
- let xform_self_ty = method.fty.sig.input(0); - let xform_self_ty = self.erase_late_bound_regions(&xform_self_ty); + let xform_self_ty = self.erase_late_bound_regions(&self_ty); - if method.generics.types.is_empty() && method.generics.regions.is_empty() { + if generics.types.is_empty() && generics.regions.is_empty() { xform_self_ty.subst(self.tcx, substs) } else { - let substs = Substs::for_item(self.tcx, - method.def_id, - |def, _| { + let substs = Substs::for_item(self.tcx, method, |def, _| { let i = def.index as usize; if i < substs.params().len() { substs.region_at(i) @@ -1232,8 +1214,7 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { // `impl_self_ty()` for an explanation. self.tcx.mk_region(ty::ReErased) } - }, - |def, cur_substs| { + }, |def, cur_substs| { let i = def.index as usize; if i < substs.params().len() { substs.type_at(i) @@ -1247,12 +1228,14 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { /// Get the type of an impl and generate substitutions with placeholders. fn impl_ty_and_substs(&self, impl_def_id: DefId) -> (Ty<'tcx>, &'tcx Substs<'tcx>) { - let impl_ty = self.tcx.lookup_item_type(impl_def_id).ty; + let impl_ty = self.tcx.item_type(impl_def_id); let substs = Substs::for_item(self.tcx, impl_def_id, |_, _| self.tcx.mk_region(ty::ReErased), - |_, _| self.next_ty_var()); + |_, _| self.next_ty_var( + TypeVariableOrigin::SubstitutionPlaceholder( + self.tcx.def_span(impl_def_id)))); (impl_ty, substs) } @@ -1283,8 +1266,8 @@ impl<'a, 'gcx, 'tcx> ProbeContext<'a, 'gcx, 'tcx> { /// Find item with name `item_name` defined in impl/trait `def_id` /// and return it, or `None`, if no such item was defined there. - fn impl_or_trait_item(&self, def_id: DefId) -> Option> { - self.fcx.impl_or_trait_item(def_id, self.item_name) + fn associated_item(&self, def_id: DefId) -> Option { + self.fcx.associated_item(def_id, self.item_name) } } @@ -1317,11 +1300,11 @@ impl<'tcx> Candidate<'tcx> { fn to_source(&self) -> CandidateSource { match self.kind { - InherentImplCandidate(..) => ImplSource(self.item.container().id()), + InherentImplCandidate(..) => ImplSource(self.item.container.id()), ExtensionImplCandidate(def_id, ..) 
=> ImplSource(def_id), ObjectCandidate | TraitCandidate | - WhereClauseCandidate(_) => TraitSource(self.item.container().id()), + WhereClauseCandidate(_) => TraitSource(self.item.container.id()), } } } diff --git a/src/librustc_typeck/check/method/suggest.rs b/src/librustc_typeck/check/method/suggest.rs index 32bf839a4e..86bfede87b 100644 --- a/src/librustc_typeck/check/method/suggest.rs +++ b/src/librustc_typeck/check/method/suggest.rs @@ -20,7 +20,7 @@ use hir::def::Def; use hir::def_id::{CRATE_DEF_INDEX, DefId}; use middle::lang_items::FnOnceTraitLangItem; use rustc::traits::{Obligation, SelectionContext}; -use util::nodemap::FnvHashSet; +use util::nodemap::FxHashSet; use syntax::ast; use errors::DiagnosticBuilder; @@ -28,7 +28,7 @@ use syntax_pos::Span; use rustc::hir::print as pprust; use rustc::hir; -use rustc::hir::Expr_; +use rustc::infer::type_variable::TypeVariableOrigin; use std::cell; use std::cmp::Ordering; @@ -54,7 +54,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.autoderef(span, ty).any(|(ty, _)| { self.probe(|_| { - let fn_once_substs = tcx.mk_substs_trait(ty, &[self.next_ty_var()]); + let fn_once_substs = tcx.mk_substs_trait(ty, + &[self.next_ty_var(TypeVariableOrigin::MiscVariable(span))]); let trait_ref = ty::TraitRef::new(fn_once, fn_once_substs); let poly_trait_ref = trait_ref.to_poly_trait_ref(); let obligation = @@ -89,20 +90,17 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { CandidateSource::ImplSource(impl_did) => { // Provide the best span we can. Use the item, if local to crate, else // the impl, if local to crate (item may be defaulted), else nothing. - let item = self.impl_or_trait_item(impl_did, item_name) + let item = self.associated_item(impl_did, item_name) .or_else(|| { - self.impl_or_trait_item(self.tcx - .impl_trait_ref(impl_did) - .unwrap() - .def_id, + self.associated_item( + self.tcx.impl_trait_ref(impl_did).unwrap().def_id, - item_name) - }) - .unwrap(); - let note_span = self.tcx - .map - .span_if_local(item.def_id()) - .or_else(|| self.tcx.map.span_if_local(impl_did)); + item_name + ) + }).unwrap(); + let note_span = self.tcx.map.span_if_local(item.def_id).or_else(|| { + self.tcx.map.span_if_local(impl_did) + }); let impl_ty = self.impl_self_ty(span, impl_did).ty; @@ -127,8 +125,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } } CandidateSource::TraitSource(trait_did) => { - let item = self.impl_or_trait_item(trait_did, item_name).unwrap(); - let item_span = self.tcx.map.def_id_span(item.def_id(), span); + let item = self.associated_item(trait_did, item_name).unwrap(); + let item_span = self.tcx.def_span(item.def_id); span_note!(err, item_span, "candidate #{} is defined in the trait `{}`", @@ -213,7 +211,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if let Some(expr) = rcvr_expr { if let Ok(expr_string) = tcx.sess.codemap().span_to_snippet(expr.span) { report_function!(expr.span, expr_string); - } else if let Expr_::ExprPath(_, path) = expr.node.clone() { + } else if let hir::ExprPath(hir::QPath::Resolved(_, ref path)) = expr.node { if let Some(segment) = path.segments.last() { report_function!(expr.span, segment.name); } @@ -311,7 +309,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let limit = if candidates.len() == 5 { 5 } else { 4 }; for (i, trait_did) in candidates.iter().take(limit).enumerate() { - err.help(&format!("candidate #{}: `use {}`", + err.help(&format!("candidate #{}: `use {};`", i + 1, self.tcx.item_path_str(*trait_did))); } @@ -334,8 +332,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // this isn't 
perfect (that is, there are cases when // implementing a trait would be legal but is rejected // here). - (type_is_local || info.def_id.is_local()) && - self.impl_or_trait_item(info.def_id, item_name).is_some() + (type_is_local || info.def_id.is_local()) + && self.associated_item(info.def_id, item_name).is_some() }) .collect::>(); @@ -383,7 +381,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { match ty.sty { ty::TyAdt(def, _) => def.did.is_local(), - ty::TyTrait(ref tr) => tr.principal.def_id().is_local(), + ty::TyDynamic(ref tr, ..) => tr.principal() + .map_or(false, |p| p.def_id().is_local()), ty::TyParam(_) => true, @@ -442,7 +441,7 @@ impl Ord for TraitInfo { /// Retrieve all traits in this crate and any dependent crates. pub fn all_traits<'a>(ccx: &'a CrateCtxt) -> AllTraits<'a> { if ccx.all_traits.borrow().is_none() { - use rustc::hir::intravisit; + use rustc::hir::itemlikevisit; let mut traits = vec![]; @@ -453,7 +452,7 @@ pub fn all_traits<'a>(ccx: &'a CrateCtxt) -> AllTraits<'a> { map: &'a hir_map::Map<'tcx>, traits: &'a mut AllTraitsVec, } - impl<'v, 'a, 'tcx> intravisit::Visitor<'v> for Visitor<'a, 'tcx> { + impl<'v, 'a, 'tcx> itemlikevisit::ItemLikeVisitor<'v> for Visitor<'a, 'tcx> { fn visit_item(&mut self, i: &'v hir::Item) { match i.node { hir::ItemTrait(..) => { @@ -463,17 +462,20 @@ pub fn all_traits<'a>(ccx: &'a CrateCtxt) -> AllTraits<'a> { _ => {} } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } - ccx.tcx.map.krate().visit_all_items(&mut Visitor { + ccx.tcx.map.krate().visit_all_item_likes(&mut Visitor { map: &ccx.tcx.map, traits: &mut traits, }); // Cross-crate: - let mut external_mods = FnvHashSet(); + let mut external_mods = FxHashSet(); fn handle_external_def(ccx: &CrateCtxt, traits: &mut AllTraitsVec, - external_mods: &mut FnvHashSet, + external_mods: &mut FxHashSet, def: Def) { let def_id = def.def_id(); match def { diff --git a/src/librustc_typeck/check/mod.rs b/src/librustc_typeck/check/mod.rs index d8314bd6c2..fb7a7e894f 100644 --- a/src/librustc_typeck/check/mod.rs +++ b/src/librustc_typeck/check/mod.rs @@ -80,15 +80,15 @@ pub use self::Expectation::*; pub use self::compare_method::{compare_impl_method, compare_const_impl}; use self::TupleArgumentsFlag::*; -use astconv::{AstConv, ast_region_to_region, PathParamMode}; +use astconv::{AstConv, ast_region_to_region}; use dep_graph::DepNode; use fmt_macros::{Parser, Piece, Position}; -use hir::def::{Def, CtorKind, PathResolution}; +use hir::def::{Def, CtorKind}; use hir::def_id::{DefId, LOCAL_CRATE}; -use hir::pat_util; -use rustc::infer::{self, InferCtxt, InferOk, TypeOrigin, TypeTrace, type_variable}; +use rustc::infer::{self, InferCtxt, InferOk, RegionVariableOrigin, TypeTrace}; +use rustc::infer::type_variable::{self, TypeVariableOrigin}; use rustc::ty::subst::{Kind, Subst, Substs}; -use rustc::traits::{self, Reveal}; +use rustc::traits::{self, ObligationCause, ObligationCauseCode, Reveal}; use rustc::ty::{ParamTy, ParameterEnvironment}; use rustc::ty::{LvaluePreference, NoPreference, PreferMutLvalue}; use rustc::ty::{self, ToPolyTraitRef, Ty, TyCtxt, Visibility}; @@ -102,25 +102,28 @@ use session::{Session, CompileResult}; use CrateCtxt; use TypeAndSubsts; use lint; -use util::common::{block_query, ErrorReported, indenter, loop_query}; -use util::nodemap::{DefIdMap, FnvHashMap, FnvHashSet, NodeMap}; +use util::common::{ErrorReported, indenter}; +use util::nodemap::{DefIdMap, FxHashMap, FxHashSet, NodeMap}; use std::cell::{Cell, Ref, RefCell}; +use std::cmp; use std::mem::replace; 
-use std::ops::Deref; +use std::ops::{self, Deref}; use syntax::abi::Abi; use syntax::ast; use syntax::attr; -use syntax::codemap::{self, Spanned}; +use syntax::codemap::{self, original_sp, Spanned}; use syntax::feature_gate::{GateIssue, emit_feature_err}; -use syntax::parse::token::{self, InternedString, keywords}; use syntax::ptr::P; +use syntax::symbol::{Symbol, InternedString, keywords}; use syntax::util::lev_distance::find_best_match_for_name; -use syntax_pos::{self, Span}; +use syntax_pos::{self, BytePos, Span, DUMMY_SP}; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir::{self, PatKind}; use rustc::hir::print as pprust; +use rustc::middle::lang_items; use rustc_back::slice; use rustc_const_eval::eval_length; @@ -266,7 +269,7 @@ impl<'a, 'gcx, 'tcx> Expectation<'tcx> { /// for examples of where this comes up,. fn rvalue_hint(fcx: &FnCtxt<'a, 'gcx, 'tcx>, ty: Ty<'tcx>) -> Expectation<'tcx> { match fcx.tcx.struct_tail(ty).sty { - ty::TySlice(_) | ty::TyStr | ty::TyTrait(..) => { + ty::TySlice(_) | ty::TyStr | ty::TyDynamic(..) => { ExpectRvalueLikeUnsized(ty) } _ => ExpectHasType(ty) @@ -339,7 +342,7 @@ impl UnsafetyState { (unsafety, blk.id, self.unsafe_push_count.checked_sub(1).unwrap()), hir::UnsafeBlock(..) => (hir::Unsafety::Unsafe, blk.id, self.unsafe_push_count), - hir::DefaultBlock | hir::PushUnstableBlock | hir:: PopUnstableBlock => + hir::DefaultBlock => (unsafety, self.def, self.unsafe_push_count), }; UnsafetyState{ def: def, @@ -351,6 +354,87 @@ impl UnsafetyState { } } +/// Whether a node ever exits normally or not. +/// Tracked semi-automatically (through type variables +/// marked as diverging), with some manual adjustments +/// for control-flow primitives (approximating a CFG). +#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)] +enum Diverges { + /// Potentially unknown, some cases converge, + /// others require a CFG to determine them. + Maybe, + + /// Definitely known to diverge and therefore + /// not reach the next sibling or its parent. + Always, + + /// Same as `Always` but with a reachability + /// warning already emitted + WarnedAlways +} + +// Convenience impls for combinig `Diverges`. 
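The doc comments above describe `Diverges` as an ordered value, and the convenience impls that follow implement `&` as `min` and `|` as `max` over that ordering. A standalone sketch (same shape, outside the compiler) that spells out the intended semantics with assertions:

```rust
use std::cmp;
use std::ops;

#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Diverges { Maybe, Always, WarnedAlways }

impl ops::BitAnd for Diverges {
    type Output = Self;
    // `&` combines alternatives (e.g. the two arms of an `if`): the result
    // diverges only as much as the *least* diverging branch.
    fn bitand(self, other: Self) -> Self { cmp::min(self, other) }
}

impl ops::BitOr for Diverges {
    type Output = Self;
    // `|` combines sequenced nodes: one always-diverging statement makes
    // the whole sequence diverge.
    fn bitor(self, other: Self) -> Self { cmp::max(self, other) }
}

fn main() {
    assert_eq!(Diverges::Always & Diverges::Maybe, Diverges::Maybe);
    assert_eq!(Diverges::Maybe | Diverges::Always, Diverges::Always);
    let _ = Diverges::WarnedAlways; // highest point of the ordering
}
```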
+ +impl ops::BitAnd for Diverges { + type Output = Self; + fn bitand(self, other: Self) -> Self { + cmp::min(self, other) + } +} + +impl ops::BitOr for Diverges { + type Output = Self; + fn bitor(self, other: Self) -> Self { + cmp::max(self, other) + } +} + +impl ops::BitAndAssign for Diverges { + fn bitand_assign(&mut self, other: Self) { + *self = *self & other; + } +} + +impl ops::BitOrAssign for Diverges { + fn bitor_assign(&mut self, other: Self) { + *self = *self | other; + } +} + +impl Diverges { + fn always(self) -> bool { + self >= Diverges::Always + } +} + +#[derive(Clone)] +pub struct LoopCtxt<'gcx, 'tcx> { + unified: Ty<'tcx>, + coerce_to: Ty<'tcx>, + break_exprs: Vec<&'gcx hir::Expr>, + may_break: bool, +} + +#[derive(Clone)] +pub struct EnclosingLoops<'gcx, 'tcx> { + stack: Vec>, + by_id: NodeMap, +} + +impl<'gcx, 'tcx> EnclosingLoops<'gcx, 'tcx> { + fn find_loop(&mut self, id: Option) -> Option<&mut LoopCtxt<'gcx, 'tcx>> { + if let Some(id) = id { + if let Some(ix) = self.by_id.get(&id).cloned() { + Some(&mut self.stack[ix]) + } else { + None + } + } else { + self.stack.last_mut() + } + } +} + #[derive(Clone)] pub struct FnCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { ast_ty_to_ty_cache: RefCell>>, @@ -367,10 +451,18 @@ pub struct FnCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { // expects the types within the function to be consistent. err_count_on_creation: usize, - ret_ty: Ty<'tcx>, + ret_ty: Option>, ps: RefCell, + /// Whether the last checked node can ever exit. + diverges: Cell, + + /// Whether any child nodes have any type errors. + has_errors: Cell, + + enclosing_loops: RefCell>, + inh: &'a Inherited<'a, 'gcx, 'tcx>, } @@ -447,6 +539,10 @@ struct CheckItemTypesVisitor<'a, 'tcx: 'a> { ccx: &'a CrateCtxt<'a, 'tcx> } struct CheckItemBodiesVisitor<'a, 'tcx: 'a> { ccx: &'a CrateCtxt<'a, 'tcx> } impl<'a, 'tcx> Visitor<'tcx> for CheckItemTypesVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.ccx.tcx.map) + } + fn visit_item(&mut self, i: &'tcx hir::Item) { check_item_type(self.ccx, i); intravisit::walk_item(self, i); @@ -464,30 +560,35 @@ impl<'a, 'tcx> Visitor<'tcx> for CheckItemTypesVisitor<'a, 'tcx> { } } -impl<'a, 'tcx> Visitor<'tcx> for CheckItemBodiesVisitor<'a, 'tcx> { +impl<'a, 'tcx> ItemLikeVisitor<'tcx> for CheckItemBodiesVisitor<'a, 'tcx> { fn visit_item(&mut self, i: &'tcx hir::Item) { check_item_body(self.ccx, i); } + + fn visit_impl_item(&mut self, _item: &'tcx hir::ImplItem) { + // done as part of `visit_item` above + } } pub fn check_wf_new(ccx: &CrateCtxt) -> CompileResult { ccx.tcx.sess.track_errors(|| { let mut visit = wfcheck::CheckTypeWellFormedVisitor::new(ccx); - ccx.tcx.visit_all_items_in_krate(DepNode::WfCheck, &mut visit); + ccx.tcx.visit_all_item_likes_in_krate(DepNode::WfCheck, &mut visit.as_deep_visitor()); }) } pub fn check_item_types(ccx: &CrateCtxt) -> CompileResult { ccx.tcx.sess.track_errors(|| { let mut visit = CheckItemTypesVisitor { ccx: ccx }; - ccx.tcx.visit_all_items_in_krate(DepNode::TypeckItemType, &mut visit); + ccx.tcx.visit_all_item_likes_in_krate(DepNode::TypeckItemType, + &mut visit.as_deep_visitor()); }) } pub fn check_item_bodies(ccx: &CrateCtxt) -> CompileResult { ccx.tcx.sess.track_errors(|| { let mut visit = CheckItemBodiesVisitor { ccx: ccx }; - ccx.tcx.visit_all_items_in_krate(DepNode::TypeckItemBody, &mut visit); + ccx.tcx.visit_all_item_likes_in_krate(DepNode::TypeckItemBody, &mut visit); // Process deferred obligations, now that all functions // bodies 
have been fully inferred. @@ -534,10 +635,12 @@ pub fn check_drop_impls(ccx: &CrateCtxt) -> CompileResult { fn check_bare_fn<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, decl: &'tcx hir::FnDecl, - body: &'tcx hir::Block, + body_id: hir::ExprId, fn_id: ast::NodeId, span: Span) { - let raw_fty = ccx.tcx.lookup_item_type(ccx.tcx.map.local_def_id(fn_id)).ty; + let body = ccx.tcx.map.expr(body_id); + + let raw_fty = ccx.tcx.item_type(ccx.tcx.map.local_def_id(fn_id)); let fn_ty = match raw_fty.sty { ty::TyFnDef(.., f) => f, _ => span_bug!(body.span, "check_bare_fn: function type expected") @@ -547,23 +650,23 @@ fn check_bare_fn<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, ccx.inherited(fn_id).enter(|inh| { // Compute the fty from point of view of inside fn. - let fn_scope = inh.tcx.region_maps.call_site_extent(fn_id, body.id); + let fn_scope = inh.tcx.region_maps.call_site_extent(fn_id, body_id.node_id()); let fn_sig = fn_ty.sig.subst(inh.tcx, &inh.parameter_environment.free_substs); let fn_sig = inh.tcx.liberate_late_bound_regions(fn_scope, &fn_sig); let fn_sig = - inh.normalize_associated_types_in(body.span, body.id, &fn_sig); + inh.normalize_associated_types_in(body.span, body_id.node_id(), &fn_sig); let fcx = check_fn(&inh, fn_ty.unsafety, fn_id, &fn_sig, decl, fn_id, body); fcx.select_all_obligations_and_apply_defaults(); - fcx.closure_analyze_fn(body); + fcx.closure_analyze(body); fcx.select_obligations_where_possible(); fcx.check_casts(); fcx.select_all_obligations_or_error(); // Casts can introduce new obligations. - fcx.regionck_fn(fn_id, decl, body); + fcx.regionck_fn(fn_id, decl, body_id); fcx.resolve_type_vars_in_fn(decl, body, fn_id); }); } @@ -580,11 +683,11 @@ struct GatherLocalsVisitor<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { } impl<'a, 'gcx, 'tcx> GatherLocalsVisitor<'a, 'gcx, 'tcx> { - fn assign(&mut self, _span: Span, nid: ast::NodeId, ty_opt: Option>) -> Ty<'tcx> { + fn assign(&mut self, span: Span, nid: ast::NodeId, ty_opt: Option>) -> Ty<'tcx> { match ty_opt { None => { // infer the variable's type - let var_ty = self.fcx.next_ty_var(); + let var_ty = self.fcx.next_ty_var(TypeVariableOrigin::TypeInference(span)); self.fcx.locals.borrow_mut().insert(nid, var_ty); var_ty } @@ -598,6 +701,10 @@ impl<'a, 'gcx, 'tcx> GatherLocalsVisitor<'a, 'gcx, 'tcx> { } impl<'a, 'gcx, 'tcx> Visitor<'gcx> for GatherLocalsVisitor<'a, 'gcx, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'gcx> { + NestedVisitorMap::None + } + // Add explicitly-declared locals. fn visit_local(&mut self, local: &'gcx hir::Local) { let o_ty = match local.ty { @@ -614,7 +721,7 @@ impl<'a, 'gcx, 'tcx> Visitor<'gcx> for GatherLocalsVisitor<'a, 'gcx, 'tcx> { // Add pattern bindings. fn visit_pat(&mut self, p: &'gcx hir::Pat) { - if let PatKind::Binding(_, ref path1, _) = p.node { + if let PatKind::Binding(_, _, ref path1, _) = p.node { let var_ty = self.assign(p.span, p.id, None); self.fcx.require_type_is_sized(var_ty, p.span, @@ -654,7 +761,7 @@ impl<'a, 'gcx, 'tcx> Visitor<'gcx> for GatherLocalsVisitor<'a, 'gcx, 'tcx> { // Don't descend into the bodies of nested closures fn visit_fn(&mut self, _: intravisit::FnKind<'gcx>, _: &'gcx hir::FnDecl, - _: &'gcx hir::Block, _: Span, _: ast::NodeId) { } + _: hir::ExprId, _: Span, _: ast::NodeId) { } } /// Helper used by check_bare_fn and check_expr_fn. 
Does the grungy work of checking a function @@ -669,7 +776,7 @@ fn check_fn<'a, 'gcx, 'tcx>(inherited: &'a Inherited<'a, 'gcx, 'tcx>, fn_sig: &ty::FnSig<'tcx>, decl: &'gcx hir::FnDecl, fn_id: ast::NodeId, - body: &'gcx hir::Block) + body: &'gcx hir::Expr) -> FnCtxt<'a, 'gcx, 'tcx> { let mut fn_sig = fn_sig.clone(); @@ -678,18 +785,20 @@ fn check_fn<'a, 'gcx, 'tcx>(inherited: &'a Inherited<'a, 'gcx, 'tcx>, // Create the function context. This is either derived from scratch or, // in the case of function expressions, based on the outer context. - let mut fcx = FnCtxt::new(inherited, fn_sig.output, body.id); + let mut fcx = FnCtxt::new(inherited, None, body.id); + let ret_ty = fn_sig.output(); *fcx.ps.borrow_mut() = UnsafetyState::function(unsafety, unsafety_id); - fcx.require_type_is_sized(fcx.ret_ty, decl.output.span(), traits::ReturnType); - fcx.ret_ty = fcx.instantiate_anon_types(&fcx.ret_ty); - fn_sig.output = fcx.ret_ty; + fcx.require_type_is_sized(ret_ty, decl.output.span(), traits::ReturnType); + fcx.ret_ty = fcx.instantiate_anon_types(&Some(ret_ty)); + fn_sig = fcx.tcx.mk_fn_sig(fn_sig.inputs().iter().cloned(), &fcx.ret_ty.unwrap(), + fn_sig.variadic); { let mut visit = GatherLocalsVisitor { fcx: &fcx, }; // Add formal parameters. - for (arg_ty, input) in fn_sig.inputs.iter().zip(&decl.inputs) { + for (arg_ty, input) in fn_sig.inputs().iter().zip(&decl.inputs) { // The type of the argument must be well-formed. // // NB -- this is now checked in wfcheck, but that @@ -699,7 +808,7 @@ fn check_fn<'a, 'gcx, 'tcx>(inherited: &'a Inherited<'a, 'gcx, 'tcx>, fcx.register_old_wf_obligation(arg_ty, input.ty.span, traits::MiscObligation); // Create type variables for each argument. - pat_util::pat_bindings(&input.pat, |_bm, pat_id, sp, _path| { + input.pat.each_binding(|_bm, pat_id, sp, _path| { let var_ty = visit.assign(sp, pat_id, None); fcx.require_type_is_sized(var_ty, sp, traits::VariableType(pat_id)); }); @@ -709,32 +818,27 @@ fn check_fn<'a, 'gcx, 'tcx>(inherited: &'a Inherited<'a, 'gcx, 'tcx>, fcx.write_ty(input.id, arg_ty); } - visit.visit_block(body); + visit.visit_expr(body); } inherited.tables.borrow_mut().liberated_fn_sigs.insert(fn_id, fn_sig); - // FIXME(aburka) do we need this special case? and should it be is_uninhabited? - let expected = if fcx.ret_ty.is_never() { - NoExpectation - } else { - ExpectHasType(fcx.ret_ty) - }; - fcx.check_block_with_expected(body, expected); + fcx.check_expr_coercable_to_type(body, fcx.ret_ty.unwrap()); fcx } fn check_struct(ccx: &CrateCtxt, id: ast::NodeId, span: Span) { - check_representable(ccx.tcx, span, id); + let def_id = ccx.tcx.map.local_def_id(id); + check_representable(ccx.tcx, span, def_id); - if ccx.tcx.lookup_simd(ccx.tcx.map.local_def_id(id)) { - check_simd(ccx.tcx, span, id); + if ccx.tcx.lookup_simd(def_id) { + check_simd(ccx.tcx, span, def_id); } } fn check_union(ccx: &CrateCtxt, id: ast::NodeId, span: Span) { - check_representable(ccx.tcx, span, id); + check_representable(ccx.tcx, span, ccx.tcx.map.local_def_id(id)); } pub fn check_item_type<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { @@ -753,15 +857,15 @@ pub fn check_item_type<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { it.id); } hir::ItemFn(..) 
=> {} // entirely within check_item_body - hir::ItemImpl(.., ref impl_items) => { + hir::ItemImpl(.., ref impl_item_refs) => { debug!("ItemImpl {} with id {}", it.name, it.id); let impl_def_id = ccx.tcx.map.local_def_id(it.id); if let Some(impl_trait_ref) = ccx.tcx.impl_trait_ref(impl_def_id) { check_impl_items_against_trait(ccx, it.span, impl_def_id, - &impl_trait_ref, - impl_items); + impl_trait_ref, + impl_item_refs); let trait_def_id = impl_trait_ref.def_id; check_on_unimplemented(ccx, trait_def_id, it); } @@ -777,7 +881,8 @@ pub fn check_item_type<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { check_union(ccx, it.id, it.span); } hir::ItemTy(_, ref generics) => { - let pty_ty = ccx.tcx.tables().node_id_to_type(it.id); + let def_id = ccx.tcx.map.local_def_id(it.id); + let pty_ty = ccx.tcx.item_type(def_id); check_bounds_are_used(ccx, generics, pty_ty); } hir::ItemForeignMod(ref m) => { @@ -793,8 +898,8 @@ pub fn check_item_type<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { } } else { for item in &m.items { - let pty = ccx.tcx.lookup_item_type(ccx.tcx.map.local_def_id(item.id)); - if !pty.generics.types.is_empty() { + let generics = ccx.tcx.item_generics(ccx.tcx.map.local_def_id(item.id)); + if !generics.types.is_empty() { let mut err = struct_span_err!(ccx.tcx.sess, item.span, E0044, "foreign items may not have type parameters"); span_help!(&mut err, item.span, @@ -819,19 +924,20 @@ pub fn check_item_body<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { ccx.tcx.item_path_str(ccx.tcx.map.local_def_id(it.id))); let _indenter = indenter(); match it.node { - hir::ItemFn(ref decl, .., ref body) => { - check_bare_fn(ccx, &decl, &body, it.id, it.span); + hir::ItemFn(ref decl, .., body_id) => { + check_bare_fn(ccx, &decl, body_id, it.id, it.span); } - hir::ItemImpl(.., ref impl_items) => { + hir::ItemImpl(.., ref impl_item_refs) => { debug!("ItemImpl {} with id {}", it.name, it.id); - for impl_item in impl_items { + for impl_item_ref in impl_item_refs { + let impl_item = ccx.tcx.map.impl_item(impl_item_ref.id); match impl_item.node { hir::ImplItemKind::Const(_, ref expr) => { check_const(ccx, &expr, impl_item.id) } - hir::ImplItemKind::Method(ref sig, ref body) => { - check_bare_fn(ccx, &sig.decl, body, impl_item.id, impl_item.span); + hir::ImplItemKind::Method(ref sig, body_id) => { + check_bare_fn(ccx, &sig.decl, body_id, impl_item.id, impl_item.span); } hir::ImplItemKind::Type(_) => { // Nothing to do here. 
@@ -845,8 +951,8 @@ pub fn check_item_body<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { hir::ConstTraitItem(_, Some(ref expr)) => { check_const(ccx, &expr, trait_item.id) } - hir::MethodTraitItem(ref sig, Some(ref body)) => { - check_bare_fn(ccx, &sig.decl, body, trait_item.id, trait_item.span); + hir::MethodTraitItem(ref sig, Some(body_id)) => { + check_bare_fn(ccx, &sig.decl, body_id, trait_item.id, trait_item.span); } hir::MethodTraitItem(_, None) | hir::ConstTraitItem(_, None) | @@ -863,11 +969,12 @@ pub fn check_item_body<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx hir::Item) { fn check_on_unimplemented<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, def_id: DefId, item: &hir::Item) { - let generics = ccx.tcx.lookup_generics(def_id); + let generics = ccx.tcx.item_generics(def_id); if let Some(ref attr) = item.attrs.iter().find(|a| { a.check_name("rustc_on_unimplemented") }) { - if let Some(ref istring) = attr.value_str() { + if let Some(istring) = attr.value_str() { + let istring = istring.as_str(); let parser = Parser::new(&istring); let types = &generics.types; for token in parser { @@ -878,7 +985,7 @@ fn check_on_unimplemented<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, Position::ArgumentNamed(s) if s == "Self" => (), // So is `{A}` if A is a type parameter Position::ArgumentNamed(s) => match types.iter().find(|t| { - t.name.as_str() == s + t.name == s }) { Some(_) => (), None => { @@ -936,27 +1043,19 @@ fn report_forbidden_specialization<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } fn check_specialization_validity<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - trait_def: &ty::TraitDef<'tcx>, + trait_def: &ty::TraitDef, impl_id: DefId, impl_item: &hir::ImplItem) { let ancestors = trait_def.ancestors(impl_id); - let parent = match impl_item.node { - hir::ImplItemKind::Const(..) => { - ancestors.const_defs(tcx, impl_item.name).skip(1).next() - .map(|node_item| node_item.map(|parent| parent.defaultness)) - } - hir::ImplItemKind::Method(..) => { - ancestors.fn_defs(tcx, impl_item.name).skip(1).next() - .map(|node_item| node_item.map(|parent| parent.defaultness)) - - } - hir::ImplItemKind::Type(_) => { - ancestors.type_defs(tcx, impl_item.name).skip(1).next() - .map(|node_item| node_item.map(|parent| parent.defaultness)) - } + let kind = match impl_item.node { + hir::ImplItemKind::Const(..) => ty::AssociatedKind::Const, + hir::ImplItemKind::Method(..) => ty::AssociatedKind::Method, + hir::ImplItemKind::Type(_) => ty::AssociatedKind::Type }; + let parent = ancestors.defs(tcx, impl_item.name, kind).skip(1).next() + .map(|node_item| node_item.map(|parent| parent.defaultness)); if let Some(parent) = parent { if parent.item.is_final() { @@ -969,8 +1068,8 @@ fn check_specialization_validity<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, fn check_impl_items_against_trait<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, impl_span: Span, impl_id: DefId, - impl_trait_ref: &ty::TraitRef<'tcx>, - impl_items: &[hir::ImplItem]) { + impl_trait_ref: ty::TraitRef<'tcx>, + impl_item_refs: &[hir::ImplItemRef]) { // If the trait reference itself is erroneous (so the compilation is going // to fail), skip checking the items here -- the `impl_item` table in `tcx` // isn't populated for such impls. 
@@ -979,106 +1078,92 @@ fn check_impl_items_against_trait<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // Locate trait definition and items let tcx = ccx.tcx; let trait_def = tcx.lookup_trait_def(impl_trait_ref.def_id); - let trait_items = tcx.trait_items(impl_trait_ref.def_id); let mut overridden_associated_type = None; + let impl_items = || impl_item_refs.iter().map(|iiref| ccx.tcx.map.impl_item(iiref.id)); + // Check existing impl methods to see if they are both present in trait // and compatible with trait signature - for impl_item in impl_items { - let ty_impl_item = tcx.impl_or_trait_item(tcx.map.local_def_id(impl_item.id)); - let ty_trait_item = trait_items.iter() - .find(|ac| ac.name() == ty_impl_item.name()); + for impl_item in impl_items() { + let ty_impl_item = tcx.associated_item(tcx.map.local_def_id(impl_item.id)); + let ty_trait_item = tcx.associated_items(impl_trait_ref.def_id) + .find(|ac| ac.name == ty_impl_item.name); // Check that impl definition matches trait definition if let Some(ty_trait_item) = ty_trait_item { match impl_item.node { hir::ImplItemKind::Const(..) => { - let impl_const = match ty_impl_item { - ty::ConstTraitItem(ref cti) => cti, - _ => span_bug!(impl_item.span, "non-const impl-item for const") - }; - // Find associated const definition. - if let &ty::ConstTraitItem(ref trait_const) = ty_trait_item { + if ty_trait_item.kind == ty::AssociatedKind::Const { compare_const_impl(ccx, - &impl_const, + &ty_impl_item, impl_item.span, - trait_const, - &impl_trait_ref); + &ty_trait_item, + impl_trait_ref); } else { let mut err = struct_span_err!(tcx.sess, impl_item.span, E0323, "item `{}` is an associated const, \ - which doesn't match its trait `{:?}`", - impl_const.name, + which doesn't match its trait `{}`", + ty_impl_item.name, impl_trait_ref); err.span_label(impl_item.span, &format!("does not match trait")); // We can only get the spans from local trait definition // Same for E0324 and E0325 - if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id()) { + if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id) { err.span_label(trait_span, &format!("item in trait")); } err.emit() } } - hir::ImplItemKind::Method(_, ref body) => { - let impl_method = match ty_impl_item { - ty::MethodTraitItem(ref mti) => mti, - _ => span_bug!(impl_item.span, "non-method impl-item for method") - }; - - let trait_span = tcx.map.span_if_local(ty_trait_item.def_id()); - if let &ty::MethodTraitItem(ref trait_method) = ty_trait_item { + hir::ImplItemKind::Method(_, body_id) => { + let trait_span = tcx.map.span_if_local(ty_trait_item.def_id); + if ty_trait_item.kind == ty::AssociatedKind::Method { let err_count = tcx.sess.err_count(); compare_impl_method(ccx, - &impl_method, + &ty_impl_item, impl_item.span, - body.id, - &trait_method, - &impl_trait_ref, + body_id.node_id(), + &ty_trait_item, + impl_trait_ref, trait_span, true); // start with old-broken-mode if err_count == tcx.sess.err_count() { // old broken mode did not report an error. Try with the new mode. 
compare_impl_method(ccx, - &impl_method, + &ty_impl_item, impl_item.span, - body.id, - &trait_method, - &impl_trait_ref, + body_id.node_id(), + &ty_trait_item, + impl_trait_ref, trait_span, false); // use the new mode } } else { let mut err = struct_span_err!(tcx.sess, impl_item.span, E0324, "item `{}` is an associated method, \ - which doesn't match its trait `{:?}`", - impl_method.name, + which doesn't match its trait `{}`", + ty_impl_item.name, impl_trait_ref); err.span_label(impl_item.span, &format!("does not match trait")); - if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id()) { + if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id) { err.span_label(trait_span, &format!("item in trait")); } err.emit() } } hir::ImplItemKind::Type(_) => { - let impl_type = match ty_impl_item { - ty::TypeTraitItem(ref tti) => tti, - _ => span_bug!(impl_item.span, "non-type impl-item for type") - }; - - if let &ty::TypeTraitItem(ref at) = ty_trait_item { - if let Some(_) = at.ty { + if ty_trait_item.kind == ty::AssociatedKind::Type { + if ty_trait_item.defaultness.has_value() { overridden_associated_type = Some(impl_item); } } else { let mut err = struct_span_err!(tcx.sess, impl_item.span, E0325, "item `{}` is an associated type, \ - which doesn't match its trait `{:?}`", - impl_type.name, + which doesn't match its trait `{}`", + ty_impl_item.name, impl_trait_ref); err.span_label(impl_item.span, &format!("does not match trait")); - if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id()) { + if let Some(trait_span) = tcx.map.span_if_local(ty_trait_item.def_id) { err.span_label(trait_span, &format!("item in trait")); } err.emit() @@ -1091,64 +1176,57 @@ fn check_impl_items_against_trait<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, } // Check for missing items from trait - let provided_methods = tcx.provided_trait_methods(impl_trait_ref.def_id); let mut missing_items = Vec::new(); let mut invalidated_items = Vec::new(); let associated_type_overridden = overridden_associated_type.is_some(); - for trait_item in trait_items.iter() { - let is_implemented; - let is_provided; - - match *trait_item { - ty::ConstTraitItem(ref associated_const) => { - is_provided = associated_const.has_value; - is_implemented = impl_items.iter().any(|ii| { - match ii.node { - hir::ImplItemKind::Const(..) 
=> { - ii.name == associated_const.name - } - _ => false, - } - }); - } - ty::MethodTraitItem(ref trait_method) => { - is_provided = provided_methods.iter().any(|m| m.name == trait_method.name); - is_implemented = trait_def.ancestors(impl_id) - .fn_defs(tcx, trait_method.name) - .next() - .map(|node_item| !node_item.node.is_from_trait()) - .unwrap_or(false); - } - ty::TypeTraitItem(ref trait_assoc_ty) => { - is_provided = trait_assoc_ty.ty.is_some(); - is_implemented = trait_def.ancestors(impl_id) - .type_defs(tcx, trait_assoc_ty.name) - .next() - .map(|node_item| !node_item.node.is_from_trait()) - .unwrap_or(false); - } - } + for trait_item in tcx.associated_items(impl_trait_ref.def_id) { + let is_implemented = trait_def.ancestors(impl_id) + .defs(tcx, trait_item.name, trait_item.kind) + .next() + .map(|node_item| !node_item.node.is_from_trait()) + .unwrap_or(false); if !is_implemented { - if !is_provided { - missing_items.push(trait_item.name()); + if !trait_item.defaultness.has_value() { + missing_items.push(trait_item); } else if associated_type_overridden { - invalidated_items.push(trait_item.name()); + invalidated_items.push(trait_item.name); } } } + let signature = |item: &ty::AssociatedItem| { + match item.kind { + ty::AssociatedKind::Method => { + format!("{}", tcx.item_type(item.def_id).fn_sig().0) + } + ty::AssociatedKind::Type => format!("type {};", item.name.to_string()), + ty::AssociatedKind::Const => { + format!("const {}: {:?};", item.name.to_string(), tcx.item_type(item.def_id)) + } + } + }; + if !missing_items.is_empty() { - struct_span_err!(tcx.sess, impl_span, E0046, + let mut err = struct_span_err!(tcx.sess, impl_span, E0046, "not all trait items implemented, missing: `{}`", missing_items.iter() - .map(|name| name.to_string()) - .collect::>().join("`, `")) - .span_label(impl_span, &format!("missing `{}` in implementation", + .map(|trait_item| trait_item.name.to_string()) + .collect::>().join("`, `")); + err.span_label(impl_span, &format!("missing `{}` in implementation", missing_items.iter() - .map(|name| name.to_string()) - .collect::>().join("`, `")) - ).emit(); + .map(|trait_item| trait_item.name.to_string()) + .collect::>().join("`, `"))); + for trait_item in missing_items { + if let Some(span) = tcx.map.span_if_local(trait_item.def_id) { + err.span_label(span, &format!("`{}` from trait", trait_item.name)); + } else { + err.note(&format!("`{}` from trait: `{}`", + trait_item.name, + signature(&trait_item))); + } + } + err.emit(); } if !invalidated_items.is_empty() { @@ -1169,7 +1247,7 @@ fn check_const_with_type<'a, 'tcx>(ccx: &'a CrateCtxt<'a, 'tcx>, expected_type: Ty<'tcx>, id: ast::NodeId) { ccx.inherited(id).enter(|inh| { - let fcx = FnCtxt::new(&inh, expected_type, expr.id); + let fcx = FnCtxt::new(&inh, None, expr.id); fcx.require_type_is_sized(expected_type, expr.span, traits::ConstSized); // Gather locals in statics (because of block expressions). 
@@ -1181,7 +1259,7 @@ fn check_const_with_type<'a, 'tcx>(ccx: &'a CrateCtxt<'a, 'tcx>, fcx.check_expr_coercable_to_type(expr, expected_type); fcx.select_all_obligations_and_apply_defaults(); - fcx.closure_analyze_const(expr); + fcx.closure_analyze(expr); fcx.select_obligations_where_possible(); fcx.check_casts(); fcx.select_all_obligations_or_error(); @@ -1194,7 +1272,7 @@ fn check_const_with_type<'a, 'tcx>(ccx: &'a CrateCtxt<'a, 'tcx>, fn check_const<'a, 'tcx>(ccx: &CrateCtxt<'a,'tcx>, expr: &'tcx hir::Expr, id: ast::NodeId) { - let decl_ty = ccx.tcx.lookup_item_type(ccx.tcx.map.local_def_id(id)).ty; + let decl_ty = ccx.tcx.item_type(ccx.tcx.map.local_def_id(id)); check_const_with_type(ccx, expr, decl_ty, id); } @@ -1203,9 +1281,9 @@ fn check_const<'a, 'tcx>(ccx: &CrateCtxt<'a,'tcx>, /// pointer, which would mean their size is unbounded. fn check_representable<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, sp: Span, - item_id: ast::NodeId) + item_def_id: DefId) -> bool { - let rty = tcx.tables().node_id_to_type(item_id); + let rty = tcx.item_type(item_def_id); // Check that it is possible to represent this type. This call identifies // (1) types that contain themselves and (2) types that contain a different @@ -1214,7 +1292,6 @@ fn check_representable<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, // caught by case 1. match rty.is_representable(tcx, sp) { Representability::SelfRecursive => { - let item_def_id = tcx.map.local_def_id(item_id); tcx.recursive_type_with_infinite_size_error(item_def_id).emit(); return false } @@ -1223,8 +1300,8 @@ fn check_representable<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, return true } -pub fn check_simd<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, sp: Span, id: ast::NodeId) { - let t = tcx.tables().node_id_to_type(id); +pub fn check_simd<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, sp: Span, def_id: DefId) { + let t = tcx.item_type(def_id); match t.sty { ty::TyAdt(def, substs) if def.is_struct() => { let fields = &def.struct_variant().fields; @@ -1304,7 +1381,7 @@ pub fn check_enum_variants<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, disr_vals.push(current_disr_val); } - check_representable(ccx.tcx, sp, id); + check_representable(ccx.tcx, sp, def_id); } impl<'a, 'gcx, 'tcx> AstConv<'gcx, 'tcx> for FnCtxt<'a, 'gcx, 'tcx> { @@ -1317,17 +1394,16 @@ impl<'a, 'gcx, 'tcx> AstConv<'gcx, 'tcx> for FnCtxt<'a, 'gcx, 'tcx> { fn get_generics(&self, _: Span, id: DefId) -> Result<&'tcx ty::Generics<'tcx>, ErrorReported> { - Ok(self.tcx().lookup_generics(id)) + Ok(self.tcx().item_generics(id)) } - fn get_item_type_scheme(&self, _: Span, id: DefId) - -> Result, ErrorReported> + fn get_item_type(&self, _: Span, id: DefId) -> Result, ErrorReported> { - Ok(self.tcx().lookup_item_type(id)) + Ok(self.tcx().item_type(id)) } fn get_trait_def(&self, _: Span, id: DefId) - -> Result<&'tcx ty::TraitDef<'tcx>, ErrorReported> + -> Result<&'tcx ty::TraitDef, ErrorReported> { Ok(self.tcx().lookup_trait_def(id)) } @@ -1368,21 +1444,8 @@ impl<'a, 'gcx, 'tcx> AstConv<'gcx, 'tcx> for FnCtxt<'a, 'gcx, 'tcx> { Ok(r) } - fn trait_defines_associated_type_named(&self, - trait_def_id: DefId, - assoc_name: ast::Name) - -> bool - { - self.tcx().impl_or_trait_items(trait_def_id).iter().any(|&def_id| { - match self.tcx().impl_or_trait_item(def_id) { - ty::TypeTraitItem(ref item) => item.name == assoc_name, - _ => false - } - }) - } - - fn ty_infer(&self, _span: Span) -> Ty<'tcx> { - self.next_ty_var() + fn ty_infer(&self, span: Span) -> Ty<'tcx> { + self.next_ty_var(TypeVariableOrigin::TypeInference(span)) } fn ty_infer_for_def(&self, @@ 
-1469,7 +1532,7 @@ enum TupleArgumentsFlag { impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn new(inh: &'a Inherited<'a, 'gcx, 'tcx>, - rty: Ty<'tcx>, + rty: Option>, body_id: ast::NodeId) -> FnCtxt<'a, 'gcx, 'tcx> { FnCtxt { @@ -1480,6 +1543,12 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { ret_ty: rty, ps: RefCell::new(UnsafetyState::function(hir::Unsafety::Normal, ast::CRATE_NODE_ID)), + diverges: Cell::new(Diverges::Maybe), + has_errors: Cell::new(false), + enclosing_loops: RefCell::new(EnclosingLoops { + stack: Vec::new(), + by_id: NodeMap(), + }), inh: inh, } } @@ -1496,6 +1565,29 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.tcx.sess.err_count() - self.err_count_on_creation } + /// Produce warning on the given node, if the current point in the + /// function is unreachable, and there hasn't been another warning. + fn warn_if_unreachable(&self, id: ast::NodeId, span: Span, kind: &str) { + if self.diverges.get() == Diverges::Always { + self.diverges.set(Diverges::WarnedAlways); + + self.tcx.sess.add_lint(lint::builtin::UNREACHABLE_CODE, + id, span, + format!("unreachable {}", kind)); + } + } + + pub fn cause(&self, + span: Span, + code: ObligationCauseCode<'tcx>) + -> ObligationCause<'tcx> { + ObligationCause::new(span, self.body_id, code) + } + + pub fn misc(&self, span: Span) -> ObligationCause<'tcx> { + self.cause(span, ObligationCauseCode::MiscObligation) + } + /// Resolves type variables in `ty` if possible. Unlike the infcx /// version (resolve_type_vars_if_possible), this version will /// also select obligations if it seems useful, in an effort @@ -1566,6 +1658,15 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { debug!("write_ty({}, {:?}) in fcx {}", node_id, ty, self.tag()); self.tables.borrow_mut().node_types.insert(node_id, ty); + + if ty.references_error() { + self.has_errors.set(true); + } + + // FIXME(canndrew): This is_never should probably be an is_uninhabited + if ty.is_never() || self.type_var_diverges(ty) { + self.diverges.set(self.diverges.get() | Diverges::Always); + } } pub fn write_substs(&self, node_id: ast::NodeId, substs: ty::ItemSubsts<'tcx>) { @@ -1626,12 +1727,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { /// As `instantiate_type_scheme`, but for the bounds found in a /// generic type scheme. - fn instantiate_bounds(&self, - span: Span, - substs: &Substs<'tcx>, - bounds: &ty::GenericPredicates<'tcx>) - -> ty::InstantiatedPredicates<'tcx> - { + fn instantiate_bounds(&self, span: Span, def_id: DefId, substs: &Substs<'tcx>) + -> ty::InstantiatedPredicates<'tcx> { + let bounds = self.tcx.item_predicates(def_id); let result = bounds.instantiate(self.tcx, substs); let result = self.normalize_associated_types_in(span, &result.predicates); debug!("instantiate_bounds(bounds={:?}, substs={:?}) = {:?}", @@ -1653,13 +1751,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if let Some(ty_var) = self.anon_types.borrow().get(&def_id) { return ty_var; } - let ty_var = self.next_ty_var(); + let span = self.tcx.def_span(def_id); + let ty_var = self.next_ty_var(TypeVariableOrigin::TypeInference(span)); self.anon_types.borrow_mut().insert(def_id, ty_var); - let item_predicates = self.tcx.lookup_predicates(def_id); + let item_predicates = self.tcx.item_predicates(def_id); let bounds = item_predicates.instantiate(self.tcx, substs); - let span = self.tcx.map.def_id_span(def_id, codemap::DUMMY_SP); for predicate in bounds.predicates { // Change the predicate to refer to the type variable, // which will be the concrete type, instead of the TyAnon. 
@@ -1720,11 +1818,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { ty: Ty<'tcx>, span: Span, code: traits::ObligationCauseCode<'tcx>, - bound: ty::BuiltinBound) + def_id: DefId) { - self.register_builtin_bound( + self.register_bound( ty, - bound, + def_id, traits::ObligationCause::new(span, self.body_id, code)); } @@ -1733,16 +1831,17 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { span: Span, code: traits::ObligationCauseCode<'tcx>) { - self.require_type_meets(ty, span, code, ty::BoundSized); + let lang_item = self.tcx.require_lang_item(lang_items::SizedTraitLangItem); + self.require_type_meets(ty, span, code, lang_item); } - pub fn register_builtin_bound(&self, + pub fn register_bound(&self, ty: Ty<'tcx>, - builtin_bound: ty::BuiltinBound, + def_id: DefId, cause: traits::ObligationCause<'tcx>) { self.fulfillment_cx.borrow_mut() - .register_builtin_bound(self, ty, builtin_bound, cause); + .register_bound(self, ty, def_id, cause); } pub fn register_predicate(&self, @@ -1755,6 +1854,19 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { .register_predicate_obligation(self, obligation); } + pub fn register_predicates(&self, + obligations: Vec>) + { + for obligation in obligations { + self.register_predicate(obligation); + } + } + + pub fn register_infer_ok_obligations(&self, infer_ok: InferOk<'tcx, T>) -> T { + self.register_predicates(infer_ok.obligations); + infer_ok.value + } + pub fn to_ty(&self, ast_t: &hir::Ty) -> Ty<'tcx> { let t = AstConv::ast_ty_to_ty(self, self, ast_t); self.register_wf_obligation(t, ast_t.span, traits::MiscObligation); @@ -1877,7 +1989,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Indifferent to privacy flags pub fn field_ty(&self, span: Span, - field: ty::FieldDef<'tcx>, + field: &'tcx ty::FieldDef, substs: &Substs<'tcx>) -> Ty<'tcx> { @@ -1975,13 +2087,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // We must collect the defaults *before* we do any unification. Because we have // directly attached defaults to the type variables any unification that occurs // will erase defaults causing conflicting defaults to be completely ignored. - let default_map: FnvHashMap<_, _> = + let default_map: FxHashMap<_, _> = unsolved_variables .iter() .filter_map(|t| self.default(t).map(|d| (t, d))) .collect(); - let mut unbound_tyvars = FnvHashSet(); + let mut unbound_tyvars = FxHashSet(); debug!("select_all_obligations_and_apply_defaults: defaults={:?}", default_map); @@ -2065,15 +2177,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if let Some(default) = default_map.get(ty) { let default = default.clone(); match self.eq_types(false, - TypeOrigin::Misc(default.origin_span), - ty, default.ty) { - Ok(InferOk { obligations, .. }) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()) - }, - Err(_) => { - conflicts.push((*ty, default)); - } + &self.misc(default.origin_span), + ty, + default.ty) { + Ok(ok) => self.register_infer_ok_obligations(ok), + Err(_) => conflicts.push((*ty, default)), } } } @@ -2096,7 +2204,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let conflicting_default = self.find_conflicting_default(&unbound_tyvars, &default_map, conflict) .unwrap_or(type_variable::Default { - ty: self.next_ty_var(), + ty: self.next_ty_var( + TypeVariableOrigin::MiscVariable(syntax_pos::DUMMY_SP)), origin_span: syntax_pos::DUMMY_SP, // what do I put here? 
def_id: self.tcx.map.local_def_id(ast::CRATE_NODE_ID) @@ -2115,6 +2224,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.report_conflicting_default_types( first_default.origin_span, + self.body_id, first_default, second_default) } @@ -2129,8 +2239,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // table then apply defaults until we find a conflict. That default must be the one // that caused conflict earlier. fn find_conflicting_default(&self, - unbound_vars: &FnvHashSet>, - default_map: &FnvHashMap<&Ty<'tcx>, type_variable::Default<'tcx>>, + unbound_vars: &FxHashSet>, + default_map: &FxHashMap<&Ty<'tcx>, type_variable::Default<'tcx>>, conflict: Ty<'tcx>) -> Option> { use rustc::ty::error::UnconstrainedNumeric::Neither; @@ -2163,10 +2273,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if let Some(default) = default_map.get(ty) { let default = default.clone(); match self.eq_types(false, - TypeOrigin::Misc(default.origin_span), - ty, default.ty) { - // FIXME(#32730) propagate obligations - Ok(InferOk { obligations, .. }) => assert!(obligations.is_empty()), + &self.misc(default.origin_span), + ty, + default.ty) { + Ok(ok) => self.register_infer_ok_obligations(ok), Err(_) => { result = Some(default); } @@ -2289,7 +2399,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { unsize, index_ty); - let input_ty = self.next_ty_var(); + let input_ty = self.next_ty_var(TypeVariableOrigin::AutoDeref(base_expr.span)); // First, try built-in indexing. match (adjusted_ty.builtin_index(), &index_ty.sty) { @@ -2308,7 +2418,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (PreferMutLvalue, Some(trait_did)) => { self.lookup_method_in_trait_adjusted(expr.span, Some(&base_expr), - token::intern("index_mut"), + Symbol::intern("index_mut"), trait_did, autoderefs, unsize, @@ -2323,7 +2433,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (None, Some(trait_did)) => { self.lookup_method_in_trait_adjusted(expr.span, Some(&base_expr), - token::intern("index"), + Symbol::intern("index"), trait_did, autoderefs, unsize, @@ -2347,7 +2457,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { sp: Span, method_fn_ty: Ty<'tcx>, callee_expr: &'gcx hir::Expr, - args_no_rcvr: &'gcx [P], + args_no_rcvr: &'gcx [hir::Expr], tuple_arguments: TupleArgumentsFlag, expected: Expectation<'tcx>) -> Ty<'tcx> { @@ -2360,18 +2470,22 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { }; self.check_argument_types(sp, &err_inputs[..], &[], args_no_rcvr, - false, tuple_arguments); + false, tuple_arguments, None); self.tcx.types.err } else { match method_fn_ty.sty { - ty::TyFnDef(.., ref fty) => { + ty::TyFnDef(def_id, .., ref fty) => { // HACK(eddyb) ignore self in the definition (see above). - let expected_arg_tys = self.expected_types_for_fn_args(sp, expected, - fty.sig.0.output, - &fty.sig.0.inputs[1..]); - self.check_argument_types(sp, &fty.sig.0.inputs[1..], &expected_arg_tys[..], - args_no_rcvr, fty.sig.0.variadic, tuple_arguments); - fty.sig.0.output + let expected_arg_tys = self.expected_types_for_fn_args( + sp, + expected, + fty.sig.0.output(), + &fty.sig.0.inputs()[1..] 
+ ); + self.check_argument_types(sp, &fty.sig.0.inputs()[1..], &expected_arg_tys[..], + args_no_rcvr, fty.sig.0.variadic, tuple_arguments, + self.tcx.map.span_if_local(def_id)); + fty.sig.0.output() } _ => { span_bug!(callee_expr.span, "method without bare fn type"); @@ -2386,9 +2500,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { sp: Span, fn_inputs: &[Ty<'tcx>], expected_arg_tys: &[Ty<'tcx>], - args: &'gcx [P], + args: &'gcx [hir::Expr], variadic: bool, - tuple_arguments: TupleArgumentsFlag) { + tuple_arguments: TupleArgumentsFlag, + def_span: Option) { let tcx = self.tcx; // Grab the argument types, supplying fresh type variables @@ -2423,9 +2538,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { sp }; - fn parameter_count_error<'tcx>(sess: &Session, sp: Span, fn_inputs: &[Ty<'tcx>], - expected_count: usize, arg_count: usize, error_code: &str, - variadic: bool) { + fn parameter_count_error<'tcx>(sess: &Session, sp: Span, expected_count: usize, + arg_count: usize, error_code: &str, variadic: bool, + def_span: Option) { let mut err = sess.struct_span_err_with_code(sp, &format!("this function takes {}{} parameter{} but {} parameter{} supplied", if variadic {"at least "} else {""}, @@ -2435,18 +2550,12 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if arg_count == 1 {" was"} else {"s were"}), error_code); - let input_types = fn_inputs.iter().map(|i| format!("{:?}", i)).collect::>(); - if input_types.len() > 1 { - err.note("the following parameter types were expected:"); - err.note(&input_types.join(", ")); - } else if input_types.len() > 0 { - err.note(&format!("the following parameter type was expected: {}", - input_types[0])); - } else { - err.span_label(sp, &format!("expected {}{} parameter{}", - if variadic {"at least "} else {""}, - expected_count, - if expected_count == 1 {""} else {"s"})); + err.span_label(sp, &format!("expected {}{} parameter{}", + if variadic {"at least "} else {""}, + expected_count, + if expected_count == 1 {""} else {"s"})); + if let Some(def_s) = def_span { + err.span_label(def_s, &format!("defined here")); } err.emit(); } @@ -2455,8 +2564,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let tuple_type = self.structurally_resolved_type(sp, fn_inputs[0]); match tuple_type.sty { ty::TyTuple(arg_types) if arg_types.len() != args.len() => { - parameter_count_error(tcx.sess, sp_args, fn_inputs, arg_types.len(), args.len(), - "E0057", false); + parameter_count_error(tcx.sess, sp_args, arg_types.len(), args.len(), + "E0057", false, def_span); expected_arg_tys = &[]; self.err_args(args.len()) } @@ -2484,14 +2593,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if supplied_arg_count >= expected_arg_count { fn_inputs.to_vec() } else { - parameter_count_error(tcx.sess, sp_args, fn_inputs, expected_arg_count, - supplied_arg_count, "E0060", true); + parameter_count_error(tcx.sess, sp_args, expected_arg_count, + supplied_arg_count, "E0060", true, def_span); expected_arg_tys = &[]; self.err_args(supplied_arg_count) } } else { - parameter_count_error(tcx.sess, sp_args, fn_inputs, expected_arg_count, - supplied_arg_count, "E0061", false); + parameter_count_error(tcx.sess, sp_args, expected_arg_count, + supplied_arg_count, "E0061", false, def_span); expected_arg_tys = &[]; self.err_args(supplied_arg_count) }; @@ -2501,21 +2610,16 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Check the arguments. // We do this in a pretty awful way: first we typecheck any arguments - // that are not anonymous functions, then we typecheck the anonymous - // functions. 
This is so that we have more information about the types - // of arguments when we typecheck the functions. This isn't really the - // right way to do this. - let xs = [false, true]; - let mut any_diverges = false; // has any of the arguments diverged? - let mut warned = false; // have we already warned about unreachable code? - for check_blocks in &xs { - let check_blocks = *check_blocks; - debug!("check_blocks={}", check_blocks); + // that are not closures, then we typecheck the closures. This is so + // that we have more information about the types of arguments when we + // typecheck the functions. This isn't really the right way to do this. + for &check_closures in &[false, true] { + debug!("check_closures={}", check_closures); // More awful hacks: before we check argument types, try to do // an "opportunistic" vtable resolution of any trait bounds on // the call. This helps coercions. - if check_blocks { + if check_closures { self.select_obligations_where_possible(); } @@ -2530,61 +2634,43 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { supplied_arg_count }; for (i, arg) in args.iter().take(t).enumerate() { - if any_diverges && !warned { - self.tcx - .sess - .add_lint(lint::builtin::UNREACHABLE_CODE, - arg.id, - arg.span, - "unreachable expression".to_string()); - warned = true; + // Warn only for the first loop (the "no closures" one). + // Closure arguments themselves can't be diverging, but + // a previous argument can, e.g. `foo(panic!(), || {})`. + if !check_closures { + self.warn_if_unreachable(arg.id, arg.span, "expression"); } - let is_block = match arg.node { + + let is_closure = match arg.node { hir::ExprClosure(..) => true, _ => false }; - if is_block == check_blocks { - debug!("checking the argument"); - let formal_ty = formal_tys[i]; - - // The special-cased logic below has three functions: - // 1. Provide as good of an expected type as possible. - let expected = expected_arg_tys.get(i).map(|&ty| { - Expectation::rvalue_hint(self, ty) - }); - - let checked_ty = self.check_expr_with_expectation(&arg, - expected.unwrap_or(ExpectHasType(formal_ty))); - // 2. Coerce to the most detailed type that could be coerced - // to, which is `expected_ty` if `rvalue_hint` returns an - // `ExpectHasType(expected_ty)`, or the `formal_ty` otherwise. - let coerce_ty = expected.and_then(|e| e.only_has_type(self)); - self.demand_coerce(&arg, checked_ty, coerce_ty.unwrap_or(formal_ty)); - - // 3. Relate the expected type and the formal one, - // if the expected type was used for the coercion. - coerce_ty.map(|ty| self.demand_suptype(arg.span, formal_ty, ty)); + if is_closure != check_closures { + continue; } - if let Some(&arg_ty) = self.tables.borrow().node_types.get(&arg.id) { - // FIXME(canndrew): This is_never should probably be an is_uninhabited - any_diverges = any_diverges || - self.type_var_diverges(arg_ty) || - arg_ty.is_never(); - } - } - if any_diverges && !warned { - let parent = self.tcx.map.get_parent_node(args[0].id); - self.tcx - .sess - .add_lint(lint::builtin::UNREACHABLE_CODE, - parent, - sp, - "unreachable call".to_string()); - warned = true; - } + debug!("checking the argument"); + let formal_ty = formal_tys[i]; + // The special-cased logic below has three functions: + // 1. Provide as good of an expected type as possible. + let expected = expected_arg_tys.get(i).map(|&ty| { + Expectation::rvalue_hint(self, ty) + }); + + let checked_ty = self.check_expr_with_expectation(&arg, + expected.unwrap_or(ExpectHasType(formal_ty))); + // 2. 
Coerce to the most detailed type that could be coerced + // to, which is `expected_ty` if `rvalue_hint` returns an + // `ExpectHasType(expected_ty)`, or the `formal_ty` otherwise. + let coerce_ty = expected.and_then(|e| e.only_has_type(self)); + self.demand_coerce(&arg, checked_ty, coerce_ty.unwrap_or(formal_ty)); + + // 3. Relate the expected type and the formal one, + // if the expected type was used for the coercion. + coerce_ty.map(|ty| self.demand_suptype(arg.span, formal_ty, ty)); + } } // We also need to make sure we at least write the ty of the other @@ -2734,11 +2820,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { span: Span, // (potential) receiver for this impl did: DefId) -> TypeAndSubsts<'tcx> { - let ity = self.tcx.lookup_item_type(did); + let ity = self.tcx.item_type(did); debug!("impl_self_ty: ity={:?}", ity); let substs = self.fresh_substs_for_item(span, did); - let substd_ty = self.instantiate_type_scheme(span, &substs, &ity.ty); + let substd_ty = self.instantiate_type_scheme(span, &substs, &ity); TypeAndSubsts { substs: substs, ty: substd_ty } } @@ -2752,18 +2838,17 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { formal_args: &[Ty<'tcx>]) -> Vec> { let expected_args = expected_ret.only_has_type(self).and_then(|ret_ty| { - self.commit_regions_if_ok(|| { + self.fudge_regions_if_ok(&RegionVariableOrigin::Coercion(call_span), || { // Attempt to apply a subtyping relationship between the formal // return type (likely containing type variables if the function // is polymorphic) and the expected return type. // No argument expectations are produced if unification fails. - let origin = TypeOrigin::Misc(call_span); - let ures = self.sub_types(false, origin, formal_ret, ret_ty); + let origin = self.misc(call_span); + let ures = self.sub_types(false, &origin, formal_ret, ret_ty); // FIXME(#15760) can't use try! here, FromError doesn't default // to identity so the resulting type is not constrained. match ures { - // FIXME(#32730) propagate obligations - Ok(InferOk { obligations, .. }) => assert!(obligations.is_empty()), + Ok(ok) => self.register_infer_ok_obligations(ok), Err(e) => return Err(e), } @@ -2784,7 +2869,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_method_call(&self, expr: &'gcx hir::Expr, method_name: Spanned, - args: &'gcx [P], + args: &'gcx [hir::Expr], tps: &[P], expected: Expectation<'tcx>, lvalue_pref: LvaluePreference) -> Ty<'tcx> { @@ -2835,20 +2920,27 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { sp: Span, expected: Expectation<'tcx>) -> Ty<'tcx> { let cond_ty = self.check_expr_has_type(cond_expr, self.tcx.types.bool); + let cond_diverges = self.diverges.get(); + self.diverges.set(Diverges::Maybe); let expected = expected.adjust_for_branches(self); let then_ty = self.check_block_with_expected(then_blk, expected); + let then_diverges = self.diverges.get(); + self.diverges.set(Diverges::Maybe); let unit = self.tcx.mk_nil(); - let (origin, expected, found, result) = + let (cause, expected_ty, found_ty, result); if let Some(else_expr) = opt_else_expr { let else_ty = self.check_expr_with_expectation(else_expr, expected); - let origin = TypeOrigin::IfExpression(sp); + let else_diverges = self.diverges.get(); + cause = self.cause(sp, ObligationCauseCode::IfExpression); // Only try to coerce-unify if we have a then expression // to assign coercions to, otherwise it's () or diverging. 
- let result = if let Some(ref then) = then_blk.expr { - let res = self.try_find_coercion_lub(origin, || Some(&**then), + expected_ty = then_ty; + found_ty = else_ty; + result = if let Some(ref then) = then_blk.expr { + let res = self.try_find_coercion_lub(&cause, || Some(&**then), then_ty, else_expr, else_ty); // In case we did perform an adjustment, we have to update @@ -2863,26 +2955,27 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { res } else { self.commit_if_ok(|_| { - let trace = TypeTrace::types(origin, true, then_ty, else_ty); + let trace = TypeTrace::types(&cause, true, then_ty, else_ty); self.lub(true, trace, &then_ty, &else_ty) - .map(|InferOk { value, obligations }| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); - value - }) + .map(|ok| self.register_infer_ok_obligations(ok)) }) }; - (origin, then_ty, else_ty, result) + + // We won't diverge unless both branches do (or the condition does). + self.diverges.set(cond_diverges | then_diverges & else_diverges); } else { - let origin = TypeOrigin::IfExpressionWithNoElse(sp); - (origin, unit, then_ty, - self.eq_types(true, origin, unit, then_ty) - .map(|InferOk { obligations, .. }| { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); - unit - })) - }; + // If the condition is false we can't diverge. + self.diverges.set(cond_diverges); + + cause = self.cause(sp, ObligationCauseCode::IfExpressionWithNoElse); + expected_ty = unit; + found_ty = then_ty; + result = self.eq_types(true, &cause, unit, then_ty) + .map(|ok| { + self.register_infer_ok_obligations(ok); + unit + }); + } match result { Ok(ty) => { @@ -2893,7 +2986,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } } Err(e) => { - self.report_mismatched_types(origin, expected, found, e); + self.report_mismatched_types(&cause, expected_ty, found_ty, e); self.tcx.types.err } } @@ -2919,6 +3012,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if field.vis.is_accessible_from(self.body_id, &self.tcx().map) { autoderef.finalize(lvalue_pref, Some(base)); self.write_autoderef_adjustment(base.id, autoderefs, base_t); + + self.tcx.check_stability(field.did, expr.id, expr.span); + return field_ty; } private_candidate = Some((base_def.did, field_ty)); @@ -2979,10 +3075,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } // Return an hint about the closest match in field names - fn suggest_field_name(variant: ty::VariantDef<'tcx>, + fn suggest_field_name(variant: &'tcx ty::VariantDef, field: &Spanned, skip : Vec) - -> Option { + -> Option { let name = field.node.as_str(); let names = variant.fields.iter().filter_map(|field| { // ignore already set fields and private fields from non-local crates @@ -3021,6 +3117,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let field_ty = self.field_ty(expr.span, field, substs); private_candidate = Some((base_def.did, field_ty)); if field.vis.is_accessible_from(self.body_id, &self.tcx().map) { + self.tcx.check_stability(field.did, expr.id, expr.span); Some(field_ty) } else { None @@ -3071,7 +3168,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn report_unknown_field(&self, ty: Ty<'tcx>, - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, field: &hir::Field, skip_fields: &[hir::Field], kind_name: &str) { @@ -3081,7 +3178,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { ty::TyAdt(adt, ..) 
if adt.is_enum() => { struct_span_err!(self.tcx.sess, field.name.span, E0559, "{} `{}::{}` has no field named `{}`", - kind_name, actual, variant.name.as_str(), field.name.node) + kind_name, actual, variant.name, field.name.node) } _ => { struct_span_err!(self.tcx.sess, field.name.span, E0560, @@ -3101,7 +3198,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { match ty.sty { ty::TyAdt(adt, ..) if adt.is_enum() => { err.span_label(field.name.span, &format!("`{}::{}` does not have this field", - ty, variant.name.as_str())); + ty, variant.name)); } _ => { err.span_label(field.name.span, &format!("`{}` does not have this field", ty)); @@ -3113,22 +3210,23 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_expr_struct_fields(&self, adt_ty: Ty<'tcx>, + expr_id: ast::NodeId, span: Span, - variant: ty::VariantDef<'tcx>, + variant: &'tcx ty::VariantDef, ast_fields: &'gcx [hir::Field], check_completeness: bool) { let tcx = self.tcx; - let (substs, kind_name) = match adt_ty.sty { - ty::TyAdt(adt, substs) => (substs, adt.variant_descr()), + let (substs, adt_kind, kind_name) = match adt_ty.sty { + ty::TyAdt(adt, substs) => (substs, adt.adt_kind(), adt.variant_descr()), _ => span_bug!(span, "non-ADT passed to check_expr_struct_fields") }; - let mut remaining_fields = FnvHashMap(); + let mut remaining_fields = FxHashMap(); for field in &variant.fields { remaining_fields.insert(field.name, field); } - let mut seen_fields = FnvHashMap(); + let mut seen_fields = FxHashMap(); let mut error_happened = false; @@ -3140,6 +3238,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { expected_field_type = self.field_ty(field.span, v_field, substs); seen_fields.insert(field.name.node, field.span); + + // we don't look at stability attributes on + // struct-like enums (yet...), but it's definitely not + // a bug to have construct one. + if adt_kind != ty::AdtKind::Enum { + tcx.check_stability(v_field.did, expr_id, field.span); + } } else { error_happened = true; expected_field_type = tcx.types.err; @@ -3221,10 +3326,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } pub fn check_struct_path(&self, - path: &hir::Path, + qpath: &hir::QPath, node_id: ast::NodeId) - -> Option<(ty::VariantDef<'tcx>, Ty<'tcx>)> { - let (def, ty) = self.finish_resolving_struct_path(path, node_id); + -> Option<(&'tcx ty::VariantDef, Ty<'tcx>)> { + let path_span = match *qpath { + hir::QPath::Resolved(_, ref path) => path.span, + hir::QPath::TypeRelative(ref qself, _) => qself.span + }; + let (def, ty) = self.finish_resolving_struct_path(qpath, path_span, node_id); let variant = match def { Def::Err => { self.set_tainted_by_errors(); @@ -3244,7 +3353,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { Def::AssociatedTy(..) | Def::SelfTy(..) if !self.tcx.sess.features.borrow().more_struct_aliases => { emit_feature_err(&self.tcx.sess.parse_sess, - "more_struct_aliases", path.span, GateIssue::Language, + "more_struct_aliases", path_span, GateIssue::Language, "`Self` and associated types in struct \ expressions and patterns are unstable"); } @@ -3261,26 +3370,18 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { }; if let Some((variant, did, substs)) = variant { - if variant.ctor_kind == CtorKind::Fn && - !self.tcx.sess.features.borrow().relaxed_adts { - emit_feature_err(&self.tcx.sess.parse_sess, - "relaxed_adts", path.span, GateIssue::Language, - "tuple structs and variants in struct patterns are unstable"); - } - // Check bounds on type arguments used in the path. 
- let type_predicates = self.tcx.lookup_predicates(did); - let bounds = self.instantiate_bounds(path.span, substs, &type_predicates); - let cause = traits::ObligationCause::new(path.span, self.body_id, + let bounds = self.instantiate_bounds(path_span, did, substs); + let cause = traits::ObligationCause::new(path_span, self.body_id, traits::ItemObligation(did)); self.add_obligations_for_parameters(cause, &bounds); Some((variant, ty)) } else { - struct_span_err!(self.tcx.sess, path.span, E0071, + struct_span_err!(self.tcx.sess, path_span, E0071, "expected struct, variant or union type, found {}", ty.sort_string(self.tcx)) - .span_label(path.span, &format!("not a struct")) + .span_label(path_span, &format!("not a struct")) .emit(); None } @@ -3288,19 +3389,25 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { fn check_expr_struct(&self, expr: &hir::Expr, - path: &hir::Path, + qpath: &hir::QPath, fields: &'gcx [hir::Field], base_expr: &'gcx Option>) -> Ty<'tcx> { // Find the relevant variant - let (variant, struct_ty) = if let Some(variant_ty) = self.check_struct_path(path, expr.id) { + let (variant, struct_ty) = + if let Some(variant_ty) = self.check_struct_path(qpath, expr.id) { variant_ty } else { self.check_struct_fields_on_error(fields, base_expr); return self.tcx.types.err; }; - self.check_expr_struct_fields(struct_ty, path.span, variant, fields, + let path_span = match *qpath { + hir::QPath::Resolved(_, ref path) => path.span, + hir::QPath::TypeRelative(ref qself, _) => qself.span + }; + + self.check_expr_struct_fields(struct_ty, expr.id, path_span, variant, fields, base_expr.is_none()); if let &Some(ref base_expr) = base_expr { self.check_expr_has_type(base_expr, struct_ty); @@ -3342,10 +3449,36 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { lvalue_pref: LvaluePreference) -> Ty<'tcx> { debug!(">> typechecking: expr={:?} expected={:?}", expr, expected); + + // Warn for expressions after diverging siblings. + self.warn_if_unreachable(expr.id, expr.span, "expression"); + + // Hide the outer diverging and has_errors flags. + let old_diverges = self.diverges.get(); + let old_has_errors = self.has_errors.get(); + self.diverges.set(Diverges::Maybe); + self.has_errors.set(false); + let ty = self.check_expr_kind(expr, expected, lvalue_pref); + // Warn for non-block expressions with diverging children. + match expr.node { + hir::ExprBlock(_) | + hir::ExprLoop(..) | hir::ExprWhile(..) | + hir::ExprIf(..) | hir::ExprMatch(..) => {} + + _ => self.warn_if_unreachable(expr.id, expr.span, "expression") + } + + // Record the type, which applies it effects. + // We need to do this after the warning above, so that + // we don't warn for the diverging expression itself. self.write_ty(expr.id, ty); + // Combine the diverging and has_error flags. + self.diverges.set(self.diverges.get() | old_diverges); + self.has_errors.set(self.has_errors.get() | old_has_errors); + debug!("type of expr({}) {} is...", expr.id, pprust::expr_to_string(expr)); debug!("... 
{:?}, expected is {:?}", @@ -3354,8 +3487,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Add adjustments to !-expressions if ty.is_never() { - if let Some(hir::map::NodeExpr(_)) = self.tcx.map.find(expr.id) { - let adj_ty = self.next_diverging_ty_var(); + if let Some(hir::map::NodeExpr(node_expr)) = self.tcx.map.find(expr.id) { + let adj_ty = self.next_diverging_ty_var( + TypeVariableOrigin::AdjustmentType(node_expr.span)); self.write_adjustment(expr.id, adjustment::Adjustment { kind: adjustment::Adjust::NeverToAny, target: adj_ty @@ -3492,9 +3626,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { tcx.mk_ref(region, tm) } } - hir::ExprPath(ref opt_qself, ref path) => { - let opt_self_ty = opt_qself.as_ref().map(|qself| self.to_ty(&qself.ty)); - let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(opt_self_ty, path, + hir::ExprPath(ref qpath) => { + let (def, opt_ty, segments) = self.resolve_ty_and_def_ufcs(qpath, expr.id, expr.span); let ty = if def != Def::Err { self.instantiate_value_path(segments, opt_ty, def, expr.span, id) @@ -3520,23 +3653,79 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } tcx.mk_nil() } - hir::ExprBreak(_) => { tcx.types.never } + hir::ExprBreak(label, ref expr_opt) => { + let loop_id = label.map(|l| l.loop_id); + let coerce_to = { + let mut enclosing_loops = self.enclosing_loops.borrow_mut(); + enclosing_loops.find_loop(loop_id).map(|ctxt| ctxt.coerce_to) + }; + if let Some(coerce_to) = coerce_to { + let e_ty; + let cause; + if let Some(ref e) = *expr_opt { + // Recurse without `enclosing_loops` borrowed. + e_ty = self.check_expr_with_hint(e, coerce_to); + cause = self.misc(e.span); + // Notably, the recursive call may alter coerce_to - must not keep using it! + } else { + // `break` without argument acts like `break ()`. + e_ty = tcx.mk_nil(); + cause = self.misc(expr.span); + } + let mut enclosing_loops = self.enclosing_loops.borrow_mut(); + let ctxt = enclosing_loops.find_loop(loop_id).unwrap(); + + let result = if let Some(ref e) = *expr_opt { + // Special-case the first element, as it has no "previous expressions". + let result = if !ctxt.may_break { + self.try_coerce(e, e_ty, ctxt.coerce_to) + } else { + self.try_find_coercion_lub(&cause, || ctxt.break_exprs.iter().cloned(), + ctxt.unified, e, e_ty) + }; + + ctxt.break_exprs.push(e); + result + } else { + self.eq_types(true, &cause, e_ty, ctxt.unified) + .map(|InferOk { obligations, .. }| { + // FIXME(#32730) propagate obligations + assert!(obligations.is_empty()); + e_ty + }) + }; + match result { + Ok(ty) => ctxt.unified = ty, + Err(err) => { + self.report_mismatched_types(&cause, ctxt.unified, e_ty, err); + } + } + + ctxt.may_break = true; + } + // Otherwise, we failed to find the enclosing loop; this can only happen if the + // `break` was not inside a loop at all, which is caught by the loop-checking pass. + tcx.types.never + } hir::ExprAgain(_) => { tcx.types.never } hir::ExprRet(ref expr_opt) => { - if let Some(ref e) = *expr_opt { - self.check_expr_coercable_to_type(&e, self.ret_ty); + if self.ret_ty.is_none() { + struct_span_err!(self.tcx.sess, expr.span, E0572, + "return statement outside of function body").emit(); + } else if let Some(ref e) = *expr_opt { + self.check_expr_coercable_to_type(&e, self.ret_ty.unwrap()); } else { - let eq_result = self.eq_types(false, - TypeOrigin::Misc(expr.span), - self.ret_ty, - tcx.mk_nil()) - // FIXME(#32730) propagate obligations - .map(|InferOk { obligations, .. 
}| assert!(obligations.is_empty())); - if eq_result.is_err() { - struct_span_err!(tcx.sess, expr.span, E0069, - "`return;` in a function whose return type is not `()`") - .span_label(expr.span, &format!("return type is not ()")) - .emit(); + match self.eq_types(false, + &self.misc(expr.span), + self.ret_ty.unwrap(), + tcx.mk_nil()) { + Ok(ok) => self.register_infer_ok_obligations(ok), + Err(_) => { + struct_span_err!(tcx.sess, expr.span, E0069, + "`return;` in a function whose return type is not `()`") + .span_label(expr.span, &format!("return type is not ()")) + .emit(); + } } } tcx.types.never @@ -3570,38 +3759,66 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { expr.span, expected) } hir::ExprWhile(ref cond, ref body, _) => { - let cond_ty = self.check_expr_has_type(&cond, tcx.types.bool); - self.check_block_no_value(&body); - let body_ty = self.node_ty(body.id); - if cond_ty.references_error() || body_ty.references_error() { + let unified = self.tcx.mk_nil(); + let coerce_to = unified; + let ctxt = LoopCtxt { + unified: unified, + coerce_to: coerce_to, + break_exprs: vec![], + may_break: true, + }; + self.with_loop_ctxt(expr.id, ctxt, || { + self.check_expr_has_type(&cond, tcx.types.bool); + let cond_diverging = self.diverges.get(); + self.check_block_no_value(&body); + + // We may never reach the body so it diverging means nothing. + self.diverges.set(cond_diverging); + }); + + if self.has_errors.get() { tcx.types.err - } - else { + } else { tcx.mk_nil() } } - hir::ExprLoop(ref body, _) => { - self.check_block_no_value(&body); - if !may_break(tcx, expr.id, &body) { - tcx.types.never + hir::ExprLoop(ref body, _, _) => { + let unified = self.next_ty_var(TypeVariableOrigin::TypeInference(body.span)); + let coerce_to = expected.only_has_type(self).unwrap_or(unified); + let ctxt = LoopCtxt { + unified: unified, + coerce_to: coerce_to, + break_exprs: vec![], + may_break: false, + }; + + let ctxt = self.with_loop_ctxt(expr.id, ctxt, || { + self.check_block_no_value(&body); + }); + if ctxt.may_break { + // No way to know whether it's diverging because + // of a `break` or an outer `break` or `return. 
+ self.diverges.set(Diverges::Maybe); + + ctxt.unified } else { - tcx.mk_nil() + tcx.types.never } } hir::ExprMatch(ref discrim, ref arms, match_src) => { self.check_match(expr, &discrim, arms, expected, match_src) } - hir::ExprClosure(capture, ref decl, ref body, _) => { - self.check_expr_closure(expr, capture, &decl, &body, expected) + hir::ExprClosure(capture, ref decl, body_id, _) => { + self.check_expr_closure(expr, capture, &decl, body_id, expected) } hir::ExprBlock(ref b) => { self.check_block_with_expected(&b, expected) } hir::ExprCall(ref callee, ref args) => { - self.check_call(expr, &callee, &args[..], expected) + self.check_call(expr, &callee, args, expected) } hir::ExprMethodCall(name, ref tps, ref args) => { - self.check_method_call(expr, name, &args[..], &tps[..], expected, lvalue_pref) + self.check_method_call(expr, name, args, &tps[..], expected, lvalue_pref) } hir::ExprCast(ref e, ref t) => { if let hir::TyArray(_, ref count_expr) = t.node { @@ -3645,25 +3862,25 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } }); - let mut unified = self.next_ty_var(); + let mut unified = self.next_ty_var(TypeVariableOrigin::TypeInference(expr.span)); let coerce_to = uty.unwrap_or(unified); for (i, e) in args.iter().enumerate() { let e_ty = self.check_expr_with_hint(e, coerce_to); - let origin = TypeOrigin::Misc(e.span); + let cause = self.misc(e.span); // Special-case the first element, as it has no "previous expressions". let result = if i == 0 { self.try_coerce(e, e_ty, coerce_to) } else { - let prev_elems = || args[..i].iter().map(|e| &**e); - self.try_find_coercion_lub(origin, prev_elems, unified, e, e_ty) + let prev_elems = || args[..i].iter().map(|e| &*e); + self.try_find_coercion_lub(&cause, prev_elems, unified, e, e_ty) }; match result { Ok(ty) => unified = ty, Err(e) => { - self.report_mismatched_types(origin, unified, e_ty, e); + self.report_mismatched_types(&cause, unified, e_ty, e); } } } @@ -3690,7 +3907,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { (uty, uty) } None => { - let t: Ty = self.next_ty_var(); + let t: Ty = self.next_ty_var(TypeVariableOrigin::MiscVariable(element.span)); let element_ty = self.check_expr_has_type(&element, t); (element_ty, t) } @@ -3699,7 +3916,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { if count > 1 { // For [foo, ..n] where n > 1, `foo` must have // Copy type: - self.require_type_meets(t, expr.span, traits::RepeatVec, ty::BoundCopy); + let lang_item = self.tcx.require_lang_item(lang_items::CopyTraitLangItem); + self.require_type_meets(t, expr.span, traits::RepeatVec, lang_item); } if element_ty.references_error() { @@ -3736,8 +3954,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { tuple } } - hir::ExprStruct(ref path, ref fields, ref base_expr) => { - self.check_expr_struct(expr, path, fields, base_expr) + hir::ExprStruct(ref qpath, ref fields, ref base_expr) => { + self.check_expr_struct(expr, qpath, fields, base_expr) } hir::ExprField(ref base, ref field) => { self.check_field(expr, lvalue_pref, &base, field) @@ -3803,83 +4021,81 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } // Finish resolving a path in a struct expression or pattern `S::A { .. }` if necessary. - // The newly resolved definition is written into `def_map`. + // The newly resolved definition is written into `type_relative_path_defs`. 
fn finish_resolving_struct_path(&self, - path: &hir::Path, + qpath: &hir::QPath, + path_span: Span, node_id: ast::NodeId) -> (Def, Ty<'tcx>) { - let path_res = self.tcx.expect_resolution(node_id); - let base_ty_end = path.segments.len() - path_res.depth; - let (ty, def) = AstConv::finish_resolving_def_to_ty(self, self, path.span, - PathParamMode::Optional, - path_res.base_def, - None, - node_id, - &path.segments[..base_ty_end], - &path.segments[base_ty_end..], - true); - // Write back the new resolution. - if path_res.depth != 0 { - self.tcx.def_map.borrow_mut().insert(node_id, PathResolution::new(def)); + match *qpath { + hir::QPath::Resolved(ref maybe_qself, ref path) => { + let opt_self_ty = maybe_qself.as_ref().map(|qself| self.to_ty(qself)); + let ty = AstConv::def_to_ty(self, self, opt_self_ty, path, node_id, true); + (path.def, ty) + } + hir::QPath::TypeRelative(ref qself, ref segment) => { + let ty = self.to_ty(qself); + + let def = if let hir::TyPath(hir::QPath::Resolved(_, ref path)) = qself.node { + path.def + } else { + Def::Err + }; + let (ty, def) = AstConv::associated_path_def_to_ty(self, node_id, path_span, + ty, def, segment); + + // Write back the new resolution. + self.tables.borrow_mut().type_relative_path_defs.insert(node_id, def); + + (def, ty) + } } - (def, ty) } // Resolve associated value path into a base type and associated constant or method definition. - // The newly resolved definition is written into `def_map`. + // The newly resolved definition is written into `type_relative_path_defs`. pub fn resolve_ty_and_def_ufcs<'b>(&self, - opt_self_ty: Option>, - path: &'b hir::Path, + qpath: &'b hir::QPath, node_id: ast::NodeId, span: Span) -> (Def, Option>, &'b [hir::PathSegment]) { - let path_res = self.tcx.expect_resolution(node_id); - if path_res.depth == 0 { - // If fully resolved already, we don't have to do anything. - (path_res.base_def, opt_self_ty, &path.segments) - } else { - // Try to resolve everything except for the last segment as a type. - let ty_segments = path.segments.split_last().unwrap().1; - let base_ty_end = path.segments.len() - path_res.depth; - let (ty, _def) = AstConv::finish_resolving_def_to_ty(self, self, span, - PathParamMode::Optional, - path_res.base_def, - opt_self_ty, - node_id, - &ty_segments[..base_ty_end], - &ty_segments[base_ty_end..], - false); - - // Resolve an associated constant or method on the previously resolved type. - let item_segment = path.segments.last().unwrap(); - let item_name = item_segment.name; - let def = match self.resolve_ufcs(span, item_name, ty, node_id) { - Ok(def) => def, - Err(error) => { - let def = match error { - method::MethodError::PrivateMatch(def) => def, - _ => Def::Err, - }; - if item_name != keywords::Invalid.name() { - self.report_method_error(span, ty, item_name, None, error); - } - def + let (ty, item_segment) = match *qpath { + hir::QPath::Resolved(ref opt_qself, ref path) => { + return (path.def, + opt_qself.as_ref().map(|qself| self.to_ty(qself)), + &path.segments[..]); + } + hir::QPath::TypeRelative(ref qself, ref segment) => { + (self.to_ty(qself), segment) + } + }; + let item_name = item_segment.name; + let def = match self.resolve_ufcs(span, item_name, ty, node_id) { + Ok(def) => def, + Err(error) => { + let def = match error { + method::MethodError::PrivateMatch(def) => def, + _ => Def::Err, + }; + if item_name != keywords::Invalid.name() { + self.report_method_error(span, ty, item_name, None, error); } - }; + def + } + }; - // Write back the new resolution. 
- self.tcx.def_map.borrow_mut().insert(node_id, PathResolution::new(def)); - (def, Some(ty), slice::ref_slice(item_segment)) - } + // Write back the new resolution. + self.tables.borrow_mut().type_relative_path_defs.insert(node_id, def); + (def, Some(ty), slice::ref_slice(&**item_segment)) } pub fn check_decl_initializer(&self, local: &'gcx hir::Local, init: &'gcx hir::Expr) -> Ty<'tcx> { - let ref_bindings = self.tcx.pat_contains_ref_binding(&local.pat); + let ref_bindings = local.pat.contains_ref_binding(); let local_ty = self.local_ty(init.span, local.id); if let Some(m) = ref_bindings { @@ -3918,55 +4134,70 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { } pub fn check_stmt(&self, stmt: &'gcx hir::Stmt) { - let node_id; - let mut saw_bot = false; - let mut saw_err = false; + // Don't do all the complex logic below for DeclItem. match stmt.node { - hir::StmtDecl(ref decl, id) => { - node_id = id; - match decl.node { - hir::DeclLocal(ref l) => { - self.check_decl_local(&l); - let l_t = self.node_ty(l.id); - saw_bot = saw_bot || self.type_var_diverges(l_t); - saw_err = saw_err || l_t.references_error(); - } - hir::DeclItem(_) => {/* ignore for now */ } + hir::StmtDecl(ref decl, id) => { + match decl.node { + hir::DeclLocal(_) => {} + hir::DeclItem(_) => { + self.write_nil(id); + return; + } + } } - } - hir::StmtExpr(ref expr, id) => { - node_id = id; - // Check with expected type of () - let ty = self.check_expr_has_type(&expr, self.tcx.mk_nil()); - saw_bot = saw_bot || self.type_var_diverges(ty); - saw_err = saw_err || ty.references_error(); - } - hir::StmtSemi(ref expr, id) => { - node_id = id; - let ty = self.check_expr(&expr); - saw_bot |= self.type_var_diverges(ty); - saw_err |= ty.references_error(); - } + hir::StmtExpr(..) | hir::StmtSemi(..) => {} } - if saw_bot { - self.write_ty(node_id, self.next_diverging_ty_var()); - } - else if saw_err { + + self.warn_if_unreachable(stmt.node.id(), stmt.span, "statement"); + + // Hide the outer diverging and has_errors flags. + let old_diverges = self.diverges.get(); + let old_has_errors = self.has_errors.get(); + self.diverges.set(Diverges::Maybe); + self.has_errors.set(false); + + let (node_id, span) = match stmt.node { + hir::StmtDecl(ref decl, id) => { + let span = match decl.node { + hir::DeclLocal(ref l) => { + self.check_decl_local(&l); + l.span + } + hir::DeclItem(_) => {/* ignore for now */ + DUMMY_SP + } + }; + (id, span) + } + hir::StmtExpr(ref expr, id) => { + // Check with expected type of () + self.check_expr_has_type(&expr, self.tcx.mk_nil()); + (id, expr.span) + } + hir::StmtSemi(ref expr, id) => { + self.check_expr(&expr); + (id, expr.span) + } + }; + + if self.has_errors.get() { self.write_error(node_id); - } - else { + } else if self.diverges.get().always() { + self.write_ty(node_id, self.next_diverging_ty_var( + TypeVariableOrigin::DivergingStmt(span))); + } else { self.write_nil(node_id); } + + // Combine the diverging and has_error flags. 
+ self.diverges.set(self.diverges.get() | old_diverges); + self.has_errors.set(self.has_errors.get() | old_has_errors); } pub fn check_block_no_value(&self, blk: &'gcx hir::Block) { - let blkty = self.check_block_with_expected(blk, ExpectHasType(self.tcx.mk_nil())); - if blkty.references_error() { - self.write_error(blk.id); - } else { - let nilty = self.tcx.mk_nil(); - self.demand_suptype(blk.span, nilty, blkty); - } + let unit = self.tcx.mk_nil(); + let ty = self.check_block_with_expected(blk, ExpectHasType(unit)); + self.demand_suptype(blk.span, unit, ty); } fn check_block_with_expected(&self, @@ -3978,72 +4209,81 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { replace(&mut *fcx_ps, unsafety_state) }; - let mut warned = false; - let mut any_diverges = false; - let mut any_err = false; for s in &blk.stmts { self.check_stmt(s); - let s_id = s.node.id(); - let s_ty = self.node_ty(s_id); - if any_diverges && !warned && match s.node { - hir::StmtDecl(ref decl, _) => { - match decl.node { - hir::DeclLocal(_) => true, - _ => false, - } - } - hir::StmtExpr(..) | hir::StmtSemi(..) => true, - } { - self.tcx - .sess - .add_lint(lint::builtin::UNREACHABLE_CODE, - s_id, - s.span, - "unreachable statement".to_string()); - warned = true; - } - // FIXME(canndrew): This is_never should probably be an is_uninhabited - any_diverges = any_diverges || - self.type_var_diverges(s_ty) || - s_ty.is_never(); - any_err = any_err || s_ty.references_error(); } - let ty = match blk.expr { - None => if any_err { - self.tcx.types.err - } else if any_diverges { - self.next_diverging_ty_var() - } else { - self.tcx.mk_nil() - }, - Some(ref e) => { - if any_diverges && !warned { - self.tcx - .sess - .add_lint(lint::builtin::UNREACHABLE_CODE, - e.id, - e.span, - "unreachable expression".to_string()); - } - let ety = match expected { - ExpectHasType(ety) => { - self.check_expr_coercable_to_type(&e, ety); - ety - } - _ => { - self.check_expr_with_expectation(&e, expected) - } - }; - if any_err { - self.tcx.types.err - } else if any_diverges { - self.next_diverging_ty_var() - } else { - ety + let mut ty = match blk.expr { + Some(ref e) => self.check_expr_with_expectation(e, expected), + None => self.tcx.mk_nil() + }; + + if self.diverges.get().always() { + if let ExpectHasType(ety) = expected { + // Avoid forcing a type (only `!` for now) in unreachable code. + // FIXME(aburka) do we need this special case? and should it be is_uninhabited? + if !ety.is_never() { + if let Some(ref e) = blk.expr { + // Coerce the tail expression to the right type. + self.demand_coerce(e, ty, ety); + } } } - }; + + ty = self.next_diverging_ty_var(TypeVariableOrigin::DivergingBlockExpr(blk.span)); + } else if let ExpectHasType(ety) = expected { + if let Some(ref e) = blk.expr { + // Coerce the tail expression to the right type. + self.demand_coerce(e, ty, ety); + } else { + // We're not diverging and there's an expected type, which, + // in case it's not `()`, could result in an error higher-up. + // We have a chance to error here early and be more helpful. + let cause = self.misc(blk.span); + let trace = TypeTrace::types(&cause, false, ty, ety); + match self.sub_types(false, &cause, ty, ety) { + Ok(InferOk { obligations, .. }) => { + // FIXME(#32730) propagate obligations + assert!(obligations.is_empty()); + }, + Err(err) => { + let mut err = self.report_and_explain_type_error(trace, &err); + + // Be helpful when the user wrote `{... expr;}` and + // taking the `;` off is enough to fix the error. 
+ let mut extra_semi = None; + if let Some(stmt) = blk.stmts.last() { + if let hir::StmtSemi(ref e, _) = stmt.node { + if self.can_sub_types(self.node_ty(e.id), ety).is_ok() { + extra_semi = Some(stmt); + } + } + } + if let Some(last_stmt) = extra_semi { + let original_span = original_sp(self.tcx.sess.codemap(), + last_stmt.span, blk.span); + let span_semi = Span { + lo: original_span.hi - BytePos(1), + hi: original_span.hi, + expn_id: original_span.expn_id + }; + err.span_help(span_semi, "consider removing this semicolon:"); + } + + err.emit(); + } + } + } + + // We already applied the type (and potentially errored), + // use the expected type to avoid further errors out. + ty = ety; + } + + if self.has_errors.get() || ty.references_error() { + ty = self.tcx.types.err + } + self.write_ty(blk.id, ty); *self.ps.borrow_mut() = prev; @@ -4119,11 +4359,11 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { Def::VariantCtor(def_id, ..) => { // Everything but the final segment should have no // parameters at all. - let mut generics = self.tcx.lookup_generics(def_id); + let mut generics = self.tcx.item_generics(def_id); if let Some(def_id) = generics.parent { // Variant and struct constructors use the // generics of their parent type definition. - generics = self.tcx.lookup_generics(def_id); + generics = self.tcx.item_generics(def_id); } type_segment = Some((segments.last().unwrap(), generics)); } @@ -4133,13 +4373,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { Def::Const(def_id) | Def::Static(def_id, _) => { fn_segment = Some((segments.last().unwrap(), - self.tcx.lookup_generics(def_id))); + self.tcx.item_generics(def_id))); } // Case 3. Reference to a method or associated const. Def::Method(def_id) | Def::AssociatedConst(def_id) => { - let container = self.tcx.impl_or_trait_item(def_id).container(); + let container = self.tcx.associated_item(def_id).container; match container { ty::TraitContainer(trait_did) => { callee::check_legal_trait_for_method_call(self.ccx, span, trait_did) @@ -4147,9 +4387,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { ty::ImplContainer(_) => {} } - let generics = self.tcx.lookup_generics(def_id); + let generics = self.tcx.item_generics(def_id); if segments.len() >= 2 { - let parent_generics = self.tcx.lookup_generics(generics.parent.unwrap()); + let parent_generics = self.tcx.item_generics(generics.parent.unwrap()); type_segment = Some((&segments[segments.len() - 2], parent_generics)); } else { // `::assoc` will end up here, and so can `T::assoc`. @@ -4165,11 +4405,6 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { _ => bug!("unexpected definition: {:?}", def), } - // In `>::method`, `A` and `B` are mandatory, but - // `opt_self_ty` can also be Some for `Foo::method`, where Foo's - // type parameters are not mandatory. - let require_type_space = opt_self_ty.is_some() && ufcs_associated.is_none(); - debug!("type_segment={:?} fn_segment={:?}", type_segment, fn_segment); // Now that we have categorized what space the parameters for each @@ -4200,8 +4435,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // variables. If the user provided some types, we may still need // to add defaults. If the user provided *too many* types, that's // a problem. 
- self.check_path_parameter_count(span, !require_type_space, &mut type_segment); - self.check_path_parameter_count(span, true, &mut fn_segment); + self.check_path_parameter_count(span, &mut type_segment); + self.check_path_parameter_count(span, &mut fn_segment); let (fn_start, has_self) = match (type_segment, fn_segment) { (_, Some((_, generics))) => { @@ -4236,7 +4471,6 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { }, |def, substs| { let mut i = def.index as usize; - let can_omit = i >= fn_start || !require_type_space; let segment = if i < fn_start { // Handle Self first, so we can adjust the index to match the AST. if has_self && i == 0 { @@ -4250,10 +4484,12 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { i -= fn_start; fn_segment }; - let types = match segment.map(|(s, _)| &s.parameters) { - Some(&hir::AngleBracketedParameters(ref data)) => &data.types[..], + let (types, infer_types) = match segment.map(|(s, _)| &s.parameters) { + Some(&hir::AngleBracketedParameters(ref data)) => { + (&data.types[..], data.infer_types) + } Some(&hir::ParenthesizedParameters(_)) => bug!(), - None => &[] + None => (&[][..], true) }; // Skip over the lifetimes in the same segment. @@ -4261,11 +4497,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { i -= generics.regions.len(); } - let omitted = can_omit && types.is_empty(); if let Some(ast_ty) = types.get(i) { // A provided type parameter. self.to_ty(ast_ty) - } else if let (false, Some(default)) = (omitted, def.default) { + } else if let (false, Some(default)) = (infer_types, def.default) { // No type parameter provided, but a default exists. default.subst_spanned(self.tcx, substs, Some(span)) } else { @@ -4279,35 +4514,31 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // The things we are substituting into the type should not contain // escaping late-bound regions, and nor should the base type scheme. - let scheme = self.tcx.lookup_item_type(def.def_id()); - let type_predicates = self.tcx.lookup_predicates(def.def_id()); + let ty = self.tcx.item_type(def.def_id()); assert!(!substs.has_escaping_regions()); - assert!(!scheme.ty.has_escaping_regions()); + assert!(!ty.has_escaping_regions()); // Add all the obligations that are required, substituting and // normalized appropriately. - let bounds = self.instantiate_bounds(span, &substs, &type_predicates); + let bounds = self.instantiate_bounds(span, def.def_id(), &substs); self.add_obligations_for_parameters( traits::ObligationCause::new(span, self.body_id, traits::ItemObligation(def.def_id())), &bounds); // Substitute the values for the type parameters into the type of // the referenced item. - let ty_substituted = self.instantiate_type_scheme(span, &substs, &scheme.ty); + let ty_substituted = self.instantiate_type_scheme(span, &substs, &ty); if let Some((ty::ImplContainer(impl_def_id), self_ty)) = ufcs_associated { // In the case of `Foo::method` and `>::method`, if `method` // is inherent, there is no `Self` parameter, instead, the impl needs // type parameters, which we can infer by unifying the provided `Self` // with the substituted impl type. - let impl_scheme = self.tcx.lookup_item_type(impl_def_id); + let ty = self.tcx.item_type(impl_def_id); - let impl_ty = self.instantiate_type_scheme(span, &substs, &impl_scheme.ty); - match self.sub_types(false, TypeOrigin::Misc(span), self_ty, impl_ty) { - Ok(InferOk { obligations, .. 
}) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); - } + let impl_ty = self.instantiate_type_scheme(span, &substs, &ty); + match self.sub_types(false, &self.misc(span), self_ty, impl_ty) { + Ok(ok) => self.register_infer_ok_obligations(ok), Err(_) => { span_bug!(span, "instantiate_value_path: (UFCS) {:?} was a subtype of {:?} but now is not?", @@ -4329,16 +4560,17 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { /// Report errors if the provided parameters are too few or too many. fn check_path_parameter_count(&self, span: Span, - can_omit: bool, segment: &mut Option<(&hir::PathSegment, &ty::Generics)>) { - let (lifetimes, types, bindings) = match segment.map(|(s, _)| &s.parameters) { - Some(&hir::AngleBracketedParameters(ref data)) => { - (&data.lifetimes[..], &data.types[..], &data.bindings[..]) + let (lifetimes, types, infer_types, bindings) = { + match segment.map(|(s, _)| &s.parameters) { + Some(&hir::AngleBracketedParameters(ref data)) => { + (&data.lifetimes[..], &data.types[..], data.infer_types, &data.bindings[..]) + } + Some(&hir::ParenthesizedParameters(_)) => { + span_bug!(span, "parenthesized parameters cannot appear in ExprPath"); + } + None => (&[][..], &[][..], true, &[][..]) } - Some(&hir::ParenthesizedParameters(_)) => { - span_bug!(span, "parenthesized parameters cannot appear in ExprPath"); - } - None => (&[][..], &[][..], &[][..]) }; let count = |n| { @@ -4348,20 +4580,28 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // Check provided lifetime parameters. let lifetime_defs = segment.map_or(&[][..], |(_, generics)| &generics.regions); if lifetimes.len() > lifetime_defs.len() { - let span = lifetimes[lifetime_defs.len()].span; - span_err!(self.tcx.sess, span, E0088, - "too many lifetime parameters provided: \ - expected {}, found {}", - count(lifetime_defs.len()), - count(lifetimes.len())); + struct_span_err!(self.tcx.sess, span, E0088, + "too many lifetime parameters provided: \ + expected {}, found {}", + count(lifetime_defs.len()), + count(lifetimes.len())) + .span_label(span, &format!("unexpected lifetime parameter{}", + match lifetimes.len() { 1 => "", _ => "s" })) + .emit(); } else if lifetimes.len() > 0 && lifetimes.len() < lifetime_defs.len() { - span_err!(self.tcx.sess, span, E0090, - "too few lifetime parameters provided: \ - expected {}, found {}", - count(lifetime_defs.len()), - count(lifetimes.len())); + struct_span_err!(self.tcx.sess, span, E0090, + "too few lifetime parameters provided: \ + expected {}, found {}", + count(lifetime_defs.len()), + count(lifetimes.len())) + .span_label(span, &format!("too few lifetime parameters")) + .emit(); } + // The case where there is not enough lifetime parameters is not checked, + // because this is not possible - a function never takes lifetime parameters. + // See discussion for Pull Request 36208. + // Check provided type parameters. let type_defs = segment.map_or(&[][..], |(_, generics)| { if generics.parent.is_none() { @@ -4386,7 +4626,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // type parameters, we force instantiate_value_path to // use inference variables instead of the provided types. 
*segment = None; - } else if !(can_omit && types.len() == 0) && types.len() < required_len { + } else if !infer_types && types.len() < required_len { let adjust = |len| if len > 1 { "parameters" } else { "parameter" }; let required_param_str = adjust(required_len); let actual_param_str = adjust(types.len()); @@ -4442,27 +4682,24 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { self.tcx.types.err }) } -} -// Returns true if b contains a break that can exit from b -pub fn may_break(tcx: TyCtxt, id: ast::NodeId, b: &hir::Block) -> bool { - // First: is there an unlabeled break immediately - // inside the loop? - (loop_query(&b, |e| { - match *e { - hir::ExprBreak(None) => true, - _ => false + fn with_loop_ctxt(&self, id: ast::NodeId, ctxt: LoopCtxt<'gcx, 'tcx>, f: F) + -> LoopCtxt<'gcx, 'tcx> { + let index; + { + let mut enclosing_loops = self.enclosing_loops.borrow_mut(); + index = enclosing_loops.stack.len(); + enclosing_loops.by_id.insert(id, index); + enclosing_loops.stack.push(ctxt); } - })) || - // Second: is there a labeled break with label - // nested anywhere inside the loop? - (block_query(b, |e| { - if let hir::ExprBreak(Some(_)) = e.node { - tcx.expect_def(e.id) == Def::Label(id) - } else { - false + f(); + { + let mut enclosing_loops = self.enclosing_loops.borrow_mut(); + debug_assert!(enclosing_loops.stack.len() == index + 1); + enclosing_loops.by_id.remove(&id).expect("missing loop context"); + (enclosing_loops.stack.pop().expect("missing loop context")) } - })) + } } pub fn check_bounds_are_used<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, diff --git a/src/librustc_typeck/check/op.rs b/src/librustc_typeck/check/op.rs index 411bd7e7b5..d1a9b8ef85 100644 --- a/src/librustc_typeck/check/op.rs +++ b/src/librustc_typeck/check/op.rs @@ -13,8 +13,9 @@ use super::FnCtxt; use hir::def_id::DefId; use rustc::ty::{Ty, TypeFoldable, PreferMutLvalue}; +use rustc::infer::type_variable::TypeVariableOrigin; use syntax::ast; -use syntax::parse::token; +use syntax::symbol::Symbol; use rustc::hir; impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { @@ -75,8 +76,13 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { match BinOpCategory::from(op) { BinOpCategory::Shortcircuit => { // && and || are a simple case. + let lhs_diverges = self.diverges.get(); self.demand_suptype(lhs_expr.span, tcx.mk_bool(), lhs_ty); self.check_expr_coercable_to_type(rhs_expr, tcx.mk_bool()); + + // Depending on the LHS' value, the RHS can never execute. + self.diverges.set(lhs_diverges); + tcx.mk_bool() } _ => { @@ -174,10 +180,10 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { // using this variable as the expected type, which sometimes lets // us do better coercions than we would be able to do otherwise, // particularly for things like `String + &String`. 
- let rhs_ty_var = self.next_ty_var(); + let rhs_ty_var = self.next_ty_var(TypeVariableOrigin::MiscVariable(rhs_expr.span)); let return_ty = match self.lookup_op_method(expr, lhs_ty, vec![rhs_ty_var], - token::intern(name), trait_def_id, + Symbol::intern(name), trait_def_id, lhs_expr) { Ok(return_ty) => return_ty, Err(()) => { @@ -243,9 +249,8 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { -> Ty<'tcx> { assert!(op.is_by_value()); - match self.lookup_op_method(ex, operand_ty, vec![], - token::intern(mname), trait_did, - operand_expr) { + let mname = Symbol::intern(mname); + match self.lookup_op_method(ex, operand_ty, vec![], mname, trait_did, operand_expr) { Ok(t) => t, Err(()) => { self.type_error_message(ex.span, |actual| { diff --git a/src/librustc_typeck/check/regionck.rs b/src/librustc_typeck/check/regionck.rs index 6f6538254c..eb08e70d4c 100644 --- a/src/librustc_typeck/check/regionck.rs +++ b/src/librustc_typeck/check/regionck.rs @@ -91,8 +91,7 @@ use middle::region::{self, CodeExtent}; use rustc::ty::subst::Substs; use rustc::traits; use rustc::ty::{self, Ty, MethodCall, TypeFoldable}; -use rustc::infer::{self, GenericKind, InferOk, SubregionOrigin, TypeOrigin, VerifyBound}; -use hir::pat_util; +use rustc::infer::{self, GenericKind, SubregionOrigin, VerifyBound}; use rustc::ty::adjustment; use rustc::ty::wf::ImpliedBound; @@ -100,7 +99,7 @@ use std::mem; use std::ops::Deref; use syntax::ast; use syntax_pos::Span; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir::{self, PatKind}; use self::SubjectNode::Subject; @@ -114,7 +113,7 @@ macro_rules! ignore_err { // PUBLIC ENTRY POINTS impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { - pub fn regionck_expr(&self, e: &hir::Expr) { + pub fn regionck_expr(&self, e: &'gcx hir::Expr) { let mut rcx = RegionCtxt::new(self, RepeatingScope(e.id), e.id, Subject(e.id)); if self.err_count_since_creation() == 0 { // regionck assumes typeck succeeded @@ -142,13 +141,14 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { pub fn regionck_fn(&self, fn_id: ast::NodeId, decl: &hir::FnDecl, - blk: &hir::Block) { + body_id: hir::ExprId) { debug!("regionck_fn(id={})", fn_id); - let mut rcx = RegionCtxt::new(self, RepeatingScope(blk.id), blk.id, Subject(fn_id)); + let node_id = body_id.node_id(); + let mut rcx = RegionCtxt::new(self, RepeatingScope(node_id), node_id, Subject(fn_id)); if self.err_count_since_creation() == 0 { // regionck assumes typeck succeeded - rcx.visit_fn_body(fn_id, decl, blk, self.tcx.map.span(fn_id)); + rcx.visit_fn_body(fn_id, decl, body_id, self.tcx.map.span(fn_id)); } rcx.free_region_map.relate_free_regions_from_predicates( @@ -268,14 +268,14 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { fn visit_fn_body(&mut self, id: ast::NodeId, // the id of the fn itself fn_decl: &hir::FnDecl, - body: &hir::Block, + body_id: hir::ExprId, span: Span) { // When we enter a function, we can derive debug!("visit_fn_body(id={})", id); let call_site = self.tcx.region_maps.lookup_code_extent( - region::CodeExtentData::CallSiteScope { fn_id: id, body_id: body.id }); + region::CodeExtentData::CallSiteScope { fn_id: id, body_id: body_id.node_id() }); let old_call_site_scope = self.set_call_site_scope(Some(call_site)); let fn_sig = { @@ -296,24 +296,22 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { // // FIXME(#27579) return types should not be implied bounds let fn_sig_tys: Vec<_> = - fn_sig.inputs.iter() - .cloned() - .chain(Some(fn_sig.output)) - .collect(); + 
fn_sig.inputs().iter().cloned().chain(Some(fn_sig.output())).collect(); - let old_body_id = self.set_body_id(body.id); - self.relate_free_regions(&fn_sig_tys[..], body.id, span); - self.link_fn_args(self.tcx.region_maps.node_extent(body.id), + let old_body_id = self.set_body_id(body_id.node_id()); + self.relate_free_regions(&fn_sig_tys[..], body_id.node_id(), span); + self.link_fn_args(self.tcx.region_maps.node_extent(body_id.node_id()), &fn_decl.inputs[..]); - self.visit_block(body); - self.visit_region_obligations(body.id); + let body = self.tcx.map.expr(body_id); + self.visit_expr(body); + self.visit_region_obligations(body_id.node_id()); let call_site_scope = self.call_site_scope.unwrap(); debug!("visit_fn_body body.id {} call_site_scope: {:?}", body.id, call_site_scope); let call_site_region = self.tcx.mk_region(ty::ReScope(call_site_scope)); self.type_of_node_must_outlive(infer::CallReturn(span), - body.id, + body_id.node_id(), call_site_region); self.region_bound_pairs.truncate(old_region_bounds_pairs_len); @@ -434,7 +432,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { fn constrain_bindings_in_pat(&mut self, pat: &hir::Pat) { let tcx = self.tcx; debug!("regionck::visit_pat(pat={:?})", pat); - pat_util::pat_bindings(pat, |_, id, span, _| { + pat.each_binding(|_, id, span, _| { // If we have a variable that contains region'd data, that // data will be accessible from anywhere that the variable is // accessed. We must be wary of loops like this: @@ -470,7 +468,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { } } -impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { +impl<'a, 'gcx, 'tcx> Visitor<'gcx> for RegionCtxt<'a, 'gcx, 'tcx> { // (..) FIXME(#3238) should use visit_pat, not visit_arm/visit_local, // However, right now we run into an issue whereby some free // regions are not properly related if they appear within the @@ -479,14 +477,18 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { // hierarchy, and in particular the relationships between free // regions, until regionck, as described in #3238. - fn visit_fn(&mut self, _fk: intravisit::FnKind<'v>, fd: &'v hir::FnDecl, - b: &'v hir::Block, span: Span, id: ast::NodeId) { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'gcx> { + NestedVisitorMap::OnlyBodies(&self.tcx.map) + } + + fn visit_fn(&mut self, _fk: intravisit::FnKind<'gcx>, fd: &'gcx hir::FnDecl, + b: hir::ExprId, span: Span, id: ast::NodeId) { self.visit_fn_body(id, fd, b, span) } //visit_pat: visit_pat, // (..) see above - fn visit_arm(&mut self, arm: &hir::Arm) { + fn visit_arm(&mut self, arm: &'gcx hir::Arm) { // see above for p in &arm.pats { self.constrain_bindings_in_pat(p); @@ -494,14 +496,14 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { intravisit::walk_arm(self, arm); } - fn visit_local(&mut self, l: &hir::Local) { + fn visit_local(&mut self, l: &'gcx hir::Local) { // see above self.constrain_bindings_in_pat(&l.pat); self.link_local(l); intravisit::walk_local(self, l); } - fn visit_expr(&mut self, expr: &hir::Expr) { + fn visit_expr(&mut self, expr: &'gcx hir::Expr) { debug!("regionck::visit_expr(e={:?}, repeating_scope={})", expr, self.repeating_scope); @@ -603,7 +605,7 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { debug!("regionck::visit_expr(e={:?}, repeating_scope={}) - visiting subexprs", expr, self.repeating_scope); match expr.node { - hir::ExprPath(..) 
=> { + hir::ExprPath(_) => { self.fcx.opt_node_ty_substs(expr.id, |item_substs| { let origin = infer::ParameterOrigin::Path; self.substs_wf_in_scope(origin, &item_substs.substs, expr.span, expr_region); @@ -613,11 +615,11 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { hir::ExprCall(ref callee, ref args) => { if has_method_map { self.constrain_call(expr, Some(&callee), - args.iter().map(|e| &**e), false); + args.iter().map(|e| &*e), false); } else { self.constrain_callee(callee.id, expr, &callee); self.constrain_call(expr, None, - args.iter().map(|e| &**e), false); + args.iter().map(|e| &*e), false); } intravisit::walk_expr(self, expr); @@ -625,7 +627,7 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { hir::ExprMethodCall(.., ref args) => { self.constrain_call(expr, Some(&args[0]), - args[1..].iter().map(|e| &**e), false); + args[1..].iter().map(|e| &*e), false); intravisit::walk_expr(self, expr); } @@ -738,11 +740,11 @@ impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for RegionCtxt<'a, 'gcx, 'tcx> { intravisit::walk_expr(self, expr); } - hir::ExprClosure(.., ref body, _) => { - self.check_expr_fn_block(expr, &body); + hir::ExprClosure(.., body_id, _) => { + self.check_expr_fn_block(expr, body_id); } - hir::ExprLoop(ref body, _) => { + hir::ExprLoop(ref body, _, _) => { let repeating_scope = self.set_repeating_scope(body.id); intravisit::walk_expr(self, expr); self.set_repeating_scope(repeating_scope); @@ -807,11 +809,10 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { } /*From:*/ (_, - /*To: */ &ty::TyTrait(ref obj)) => { + /*To: */ &ty::TyDynamic(.., r)) => { // When T is existentially quantified as a trait // `Foo+'to`, it must outlive the region bound `'to`. - self.type_must_outlive(infer::RelateObjectBound(cast_expr.span), - from_ty, obj.region_bound); + self.type_must_outlive(infer::RelateObjectBound(cast_expr.span), from_ty, r); } /*From:*/ (&ty::TyBox(from_referent_ty), @@ -824,9 +825,9 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { } fn check_expr_fn_block(&mut self, - expr: &hir::Expr, - body: &hir::Block) { - let repeating_scope = self.set_repeating_scope(body.id); + expr: &'gcx hir::Expr, + body_id: hir::ExprId) { + let repeating_scope = self.set_repeating_scope(body_id.node_id()); intravisit::walk_expr(self, expr); self.set_repeating_scope(repeating_scope); } @@ -936,7 +937,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { let fn_sig = method.ty.fn_sig(); let fn_sig = // late-bound regions should have been instantiated self.tcx.no_late_bound_regions(fn_sig).unwrap(); - let self_ty = fn_sig.inputs[0]; + let self_ty = fn_sig.inputs()[0]; let (m, r) = match self_ty.sty { ty::TyRef(r, ref m) => (m.mutbl, r), _ => { @@ -963,8 +964,8 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { self.type_must_outlive(infer::CallRcvr(deref_expr.span), self_ty, r_deref_expr); self.type_must_outlive(infer::CallReturn(deref_expr.span), - fn_sig.output, r_deref_expr); - fn_sig.output + fn_sig.output(), r_deref_expr); + fn_sig.output() } None => derefd_ty }; @@ -1149,7 +1150,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { autoderefs: usize, autoref: &adjustment::AutoBorrow<'tcx>) { - debug!("link_autoref(autoref={:?})", autoref); + debug!("link_autoref(autoderefs={}, autoref={:?})", autoderefs, autoref); let mc = mc::MemCategorizationContext::new(self); let expr_cmt = ignore_err!(mc.cat_expr_autoderefd(expr, autoderefs)); debug!("expr_cmt={:?}", expr_cmt); @@ -1729,7 +1730,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { // ``` // // 
we can thus deduce that `>::SomeType : 'a`. - let trait_predicates = self.tcx.lookup_predicates(projection_ty.trait_ref.def_id); + let trait_predicates = self.tcx.item_predicates(projection_ty.trait_ref.def_id); assert_eq!(trait_predicates.parent, None); let predicates = trait_predicates.predicates.as_slice().to_vec(); traits::elaborate_predicates(self.tcx, predicates) @@ -1762,10 +1763,10 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> { outlives); // check whether this predicate applies to our current projection - match self.eq_types(false, TypeOrigin::Misc(span), ty, outlives.0) { - Ok(InferOk { obligations, .. }) => { - // FIXME(#32730) propagate obligations - assert!(obligations.is_empty()); + let cause = self.fcx.misc(span); + match self.eq_types(false, &cause, ty, outlives.0) { + Ok(ok) => { + self.register_infer_ok_obligations(ok); Ok(outlives.1) } Err(_) => { Err(()) } diff --git a/src/librustc_typeck/check/upvar.rs b/src/librustc_typeck/check/upvar.rs index aa221c33b5..63d20416bd 100644 --- a/src/librustc_typeck/check/upvar.rs +++ b/src/librustc_typeck/check/upvar.rs @@ -50,25 +50,14 @@ use rustc::infer::UpvarRegion; use syntax::ast; use syntax_pos::Span; use rustc::hir; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::util::nodemap::NodeMap; /////////////////////////////////////////////////////////////////////////// // PUBLIC ENTRY POINTS impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { - pub fn closure_analyze_fn(&self, body: &hir::Block) { - let mut seed = SeedBorrowKind::new(self); - seed.visit_block(body); - - let mut adjust = AdjustBorrowKind::new(self, seed.temp_closure_kinds); - adjust.visit_block(body); - - // it's our job to process these. - assert!(self.deferred_call_resolutions.borrow().is_empty()); - } - - pub fn closure_analyze_const(&self, body: &hir::Expr) { + pub fn closure_analyze(&self, body: &'gcx hir::Expr) { let mut seed = SeedBorrowKind::new(self); seed.visit_expr(body); @@ -88,11 +77,15 @@ struct SeedBorrowKind<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { temp_closure_kinds: NodeMap, } -impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for SeedBorrowKind<'a, 'gcx, 'tcx> { - fn visit_expr(&mut self, expr: &hir::Expr) { +impl<'a, 'gcx, 'tcx> Visitor<'gcx> for SeedBorrowKind<'a, 'gcx, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'gcx> { + NestedVisitorMap::OnlyBodies(&self.fcx.tcx.map) + } + + fn visit_expr(&mut self, expr: &'gcx hir::Expr) { match expr.node { - hir::ExprClosure(cc, _, ref body, _) => { - self.check_closure(expr, cc, &body); + hir::ExprClosure(cc, _, body_id, _) => { + self.check_closure(expr, cc, body_id); } _ => { } @@ -110,7 +103,7 @@ impl<'a, 'gcx, 'tcx> SeedBorrowKind<'a, 'gcx, 'tcx> { fn check_closure(&mut self, expr: &hir::Expr, capture_clause: hir::CaptureClause, - _body: &hir::Block) + _body_id: hir::ExprId) { let closure_def_id = self.fcx.tcx.map.local_def_id(expr.id); if !self.fcx.tables.borrow().closure_kinds.contains_key(&closure_def_id) { @@ -164,14 +157,15 @@ impl<'a, 'gcx, 'tcx> AdjustBorrowKind<'a, 'gcx, 'tcx> { id: ast::NodeId, span: Span, decl: &hir::FnDecl, - body: &hir::Block) { + body_id: hir::ExprId) { /*! * Analysis starting point. 
*/ - debug!("analyze_closure(id={:?}, body.id={:?})", id, body.id); + debug!("analyze_closure(id={:?}, body.id={:?})", id, body_id); { + let body = self.fcx.tcx.map.expr(body_id); let mut euv = euv::ExprUseVisitor::with_options(self, self.fcx, @@ -194,8 +188,8 @@ impl<'a, 'gcx, 'tcx> AdjustBorrowKind<'a, 'gcx, 'tcx> { // inference algorithm will reject it). // Extract the type variables UV0...UVn. - let closure_substs = match self.fcx.node_ty(id).sty { - ty::TyClosure(_, ref substs) => substs, + let (def_id, closure_substs) = match self.fcx.node_ty(id).sty { + ty::TyClosure(def_id, substs) => (def_id, substs), ref t => { span_bug!( span, @@ -208,7 +202,9 @@ impl<'a, 'gcx, 'tcx> AdjustBorrowKind<'a, 'gcx, 'tcx> { let final_upvar_tys = self.final_upvar_tys(id); debug!("analyze_closure: id={:?} closure_substs={:?} final_upvar_tys={:?}", id, closure_substs, final_upvar_tys); - for (&upvar_ty, final_upvar_ty) in closure_substs.upvar_tys.iter().zip(final_upvar_tys) { + for (upvar_ty, final_upvar_ty) in + closure_substs.upvar_tys(def_id, self.fcx.tcx).zip(final_upvar_tys) + { self.fcx.demand_eqtype(span, final_upvar_ty, upvar_ty); } @@ -493,11 +489,15 @@ impl<'a, 'gcx, 'tcx> AdjustBorrowKind<'a, 'gcx, 'tcx> { } } -impl<'a, 'gcx, 'tcx, 'v> Visitor<'v> for AdjustBorrowKind<'a, 'gcx, 'tcx> { +impl<'a, 'gcx, 'tcx> Visitor<'gcx> for AdjustBorrowKind<'a, 'gcx, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'gcx> { + NestedVisitorMap::OnlyBodies(&self.fcx.tcx.map) + } + fn visit_fn(&mut self, - fn_kind: intravisit::FnKind<'v>, - decl: &'v hir::FnDecl, - body: &'v hir::Block, + fn_kind: intravisit::FnKind<'gcx>, + decl: &'gcx hir::FnDecl, + body: hir::ExprId, span: Span, id: ast::NodeId) { diff --git a/src/librustc_typeck/check/wfcheck.rs b/src/librustc_typeck/check/wfcheck.rs index be1f2e3567..ffdb56753f 100644 --- a/src/librustc_typeck/check/wfcheck.rs +++ b/src/librustc_typeck/check/wfcheck.rs @@ -8,26 +8,28 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +use astconv::ExplicitSelf; use check::FnCtxt; use constrained_type_params::{identify_constrained_type_params, Parameter}; use CrateCtxt; + use hir::def_id::DefId; use middle::region::{CodeExtent}; -use rustc::infer::TypeOrigin; -use rustc::traits; +use rustc::traits::{self, ObligationCauseCode}; use rustc::ty::{self, Ty, TyCtxt}; -use rustc::util::nodemap::{FnvHashSet, FnvHashMap}; +use rustc::util::nodemap::{FxHashSet, FxHashMap}; +use rustc::middle::lang_items; use syntax::ast; use syntax_pos::Span; use errors::DiagnosticBuilder; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir; pub struct CheckTypeWellFormedVisitor<'ccx, 'tcx:'ccx> { ccx: &'ccx CrateCtxt<'ccx, 'tcx>, - code: traits::ObligationCauseCode<'tcx>, + code: ObligationCauseCode<'tcx>, } /// Helper type of a temporary returned by .for_item(...). @@ -35,7 +37,7 @@ pub struct CheckTypeWellFormedVisitor<'ccx, 'tcx:'ccx> { /// F: for<'b, 'tcx> where 'gcx: 'tcx FnOnce(FnCtxt<'b, 'gcx, 'tcx>). 
struct CheckWfFcxBuilder<'a, 'gcx: 'a+'tcx, 'tcx: 'a> { inherited: super::InheritedBuilder<'a, 'gcx, 'tcx>, - code: traits::ObligationCauseCode<'gcx>, + code: ObligationCauseCode<'gcx>, id: ast::NodeId, span: Span } @@ -49,7 +51,7 @@ impl<'a, 'gcx, 'tcx> CheckWfFcxBuilder<'a, 'gcx, 'tcx> { let id = self.id; let span = self.span; self.inherited.enter(|inh| { - let fcx = FnCtxt::new(&inh, inh.ccx.tcx.types.never, id); + let fcx = FnCtxt::new(&inh, Some(inh.ccx.tcx.types.never), id); let wf_tys = f(&fcx, &mut CheckTypeWellFormedVisitor { ccx: fcx.ccx, code: code @@ -65,7 +67,7 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { -> CheckTypeWellFormedVisitor<'ccx, 'gcx> { CheckTypeWellFormedVisitor { ccx: ccx, - code: traits::ObligationCauseCode::MiscObligation + code: ObligationCauseCode::MiscObligation } } @@ -116,18 +118,12 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { // FIXME(#27579) what amount of WF checking do we need for neg impls? let trait_ref = ccx.tcx.impl_trait_ref(ccx.tcx.map.local_def_id(item.id)).unwrap(); - ccx.tcx.populate_implementations_for_trait_if_necessary(trait_ref.def_id); - match ccx.tcx.lang_items.to_builtin_kind(trait_ref.def_id) { - Some(ty::BoundSend) | Some(ty::BoundSync) => {} - Some(_) | None => { - if !ccx.tcx.trait_has_default_impl(trait_ref.def_id) { - error_192(ccx, item.span); - } - } + if !ccx.tcx.trait_has_default_impl(trait_ref.def_id) { + error_192(ccx, item.span); } } - hir::ItemFn(.., ref body) => { - self.check_item_fn(item, body); + hir::ItemFn(.., body_id) => { + self.check_item_fn(item, body_id); } hir::ItemStatic(..) => { self.check_item_type(item); @@ -156,8 +152,8 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { self.check_variances_for_type_defn(item, ast_generics); } - hir::ItemTrait(.., ref items) => { - self.check_trait(item, items); + hir::ItemTrait(..) 
=> { + self.check_trait(item); } _ => {} } @@ -172,32 +168,39 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { let free_substs = &fcx.parameter_environment.free_substs; let free_id_outlive = fcx.parameter_environment.free_id_outlive; - let item = fcx.tcx.impl_or_trait_item(fcx.tcx.map.local_def_id(item_id)); + let item = fcx.tcx.associated_item(fcx.tcx.map.local_def_id(item_id)); - let (mut implied_bounds, self_ty) = match item.container() { + let (mut implied_bounds, self_ty) = match item.container { ty::TraitContainer(_) => (vec![], fcx.tcx.mk_self_type()), ty::ImplContainer(def_id) => (fcx.impl_implied_bounds(def_id, span), - fcx.tcx.lookup_item_type(def_id).ty) + fcx.tcx.item_type(def_id)) }; - match item { - ty::ConstTraitItem(assoc_const) => { - let ty = fcx.instantiate_type_scheme(span, free_substs, &assoc_const.ty); + match item.kind { + ty::AssociatedKind::Const => { + let ty = fcx.tcx.item_type(item.def_id); + let ty = fcx.instantiate_type_scheme(span, free_substs, &ty); fcx.register_wf_obligation(ty, span, code.clone()); } - ty::MethodTraitItem(method) => { - reject_shadowing_type_parameters(fcx.tcx, span, &method.generics); - let method_ty = fcx.instantiate_type_scheme(span, free_substs, &method.fty); - let predicates = fcx.instantiate_bounds(span, free_substs, &method.predicates); - this.check_fn_or_method(fcx, span, &method_ty, &predicates, + ty::AssociatedKind::Method => { + reject_shadowing_type_parameters(fcx.tcx, item.def_id); + let method_ty = fcx.tcx.item_type(item.def_id); + let method_ty = fcx.instantiate_type_scheme(span, free_substs, &method_ty); + let predicates = fcx.instantiate_bounds(span, item.def_id, free_substs); + let fty = match method_ty.sty { + ty::TyFnDef(_, _, f) => f, + _ => bug!() + }; + this.check_fn_or_method(fcx, span, fty, &predicates, free_id_outlive, &mut implied_bounds); let sig_if_method = sig_if_method.expect("bad signature for method"); - this.check_method_receiver(fcx, sig_if_method, &method, + this.check_method_receiver(fcx, sig_if_method, &item, free_id_outlive, self_ty); } - ty::TypeTraitItem(assoc_type) => { - if let Some(ref ty) = assoc_type.ty { - let ty = fcx.instantiate_type_scheme(span, free_substs, ty); + ty::AssociatedKind::Type => { + if item.defaultness.has_value() { + let ty = fcx.tcx.item_type(item.def_id); + let ty = fcx.instantiate_type_scheme(span, free_substs, &ty); fcx.register_wf_obligation(ty, span, code.clone()); } } @@ -233,9 +236,9 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { // For DST, all intermediate types must be sized. 
let unsized_len = if all_sized || variant.fields.is_empty() { 0 } else { 1 }; for field in &variant.fields[..variant.fields.len() - unsized_len] { - fcx.register_builtin_bound( + fcx.register_bound( field.ty, - ty::BoundSized, + fcx.tcx.require_lang_item(lang_items::SizedTraitLangItem), traits::ObligationCause::new(field.span, fcx.body_id, traits::FieldSized)); @@ -248,19 +251,15 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } let free_substs = &fcx.parameter_environment.free_substs; - let predicates = fcx.tcx.lookup_predicates(fcx.tcx.map.local_def_id(item.id)); - let predicates = fcx.instantiate_bounds(item.span, free_substs, &predicates); + let def_id = fcx.tcx.map.local_def_id(item.id); + let predicates = fcx.instantiate_bounds(item.span, def_id, free_substs); this.check_where_clauses(fcx, item.span, &predicates); vec![] // no implied bounds in a struct def'n }); } - fn check_auto_trait(&mut self, - trait_def_id: DefId, - items: &[hir::TraitItem], - span: Span) - { + fn check_auto_trait(&mut self, trait_def_id: DefId, span: Span) { // We want to ensure: // // 1) that there are no items contained within @@ -271,7 +270,7 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { // // 3) that the trait definition does not have any type parameters - let predicates = self.tcx().lookup_predicates(trait_def_id); + let predicates = self.tcx().item_predicates(trait_def_id); // We must exclude the Self : Trait predicate contained by all // traits. @@ -286,12 +285,7 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } }); - let trait_def = self.tcx().lookup_trait_def(trait_def_id); - - let has_ty_params = - trait_def.generics - .types - .len() > 1; + let has_ty_params = self.tcx().item_generics(trait_def_id).types.len() > 1; // We use an if-else here, since the generics will also trigger // an extraneous error message when we find predicates like @@ -302,7 +296,7 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { // extraneous predicates created by things like // an associated type inside the trait. 
let mut err = None; - if !items.is_empty() { + if !self.tcx().associated_item_def_ids(trait_def_id).is_empty() { error_380(self.ccx, span); } else if has_ty_params { err = Some(struct_span_err!(self.tcx().sess, span, E0567, @@ -326,20 +320,16 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } } - fn check_trait(&mut self, - item: &hir::Item, - items: &[hir::TraitItem]) - { + fn check_trait(&mut self, item: &hir::Item) { let trait_def_id = self.tcx().map.local_def_id(item.id); if self.tcx().trait_has_default_impl(trait_def_id) { - self.check_auto_trait(trait_def_id, items, item.span); + self.check_auto_trait(trait_def_id, item.span); } self.for_item(item).with_fcx(|fcx, this| { let free_substs = &fcx.parameter_environment.free_substs; - let predicates = fcx.tcx.lookup_predicates(trait_def_id); - let predicates = fcx.instantiate_bounds(item.span, free_substs, &predicates); + let predicates = fcx.instantiate_bounds(item.span, trait_def_id, free_substs); this.check_where_clauses(fcx, item.span, &predicates); vec![] }); @@ -347,12 +337,13 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { fn check_item_fn(&mut self, item: &hir::Item, - body: &hir::Block) + body_id: hir::ExprId) { self.for_item(item).with_fcx(|fcx, this| { let free_substs = &fcx.parameter_environment.free_substs; - let type_scheme = fcx.tcx.lookup_item_type(fcx.tcx.map.local_def_id(item.id)); - let item_ty = fcx.instantiate_type_scheme(item.span, free_substs, &type_scheme.ty); + let def_id = fcx.tcx.map.local_def_id(item.id); + let ty = fcx.tcx.item_type(def_id); + let item_ty = fcx.instantiate_type_scheme(item.span, free_substs, &ty); let bare_fn_ty = match item_ty.sty { ty::TyFnDef(.., ref bare_fn_ty) => bare_fn_ty, _ => { @@ -360,11 +351,10 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } }; - let predicates = fcx.tcx.lookup_predicates(fcx.tcx.map.local_def_id(item.id)); - let predicates = fcx.instantiate_bounds(item.span, free_substs, &predicates); + let predicates = fcx.instantiate_bounds(item.span, def_id, free_substs); let mut implied_bounds = vec![]; - let free_id_outlive = fcx.tcx.region_maps.call_site_extent(item.id, body.id); + let free_id_outlive = fcx.tcx.region_maps.call_site_extent(item.id, body_id.node_id()); this.check_fn_or_method(fcx, item.span, bare_fn_ty, &predicates, free_id_outlive, &mut implied_bounds); implied_bounds @@ -377,11 +367,11 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { debug!("check_item_type: {:?}", item); self.for_item(item).with_fcx(|fcx, this| { - let type_scheme = fcx.tcx.lookup_item_type(fcx.tcx.map.local_def_id(item.id)); + let ty = fcx.tcx.item_type(fcx.tcx.map.local_def_id(item.id)); let item_ty = fcx.instantiate_type_scheme(item.span, &fcx.parameter_environment .free_substs, - &type_scheme.ty); + &ty); fcx.register_wf_obligation(item_ty, item.span, this.code.clone()); @@ -416,17 +406,16 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } } None => { - let self_ty = fcx.tcx.tables().node_id_to_type(item.id); + let self_ty = fcx.tcx.item_type(item_def_id); let self_ty = fcx.instantiate_type_scheme(item.span, free_substs, &self_ty); fcx.register_wf_obligation(self_ty, ast_self_ty.span, this.code.clone()); } } - let predicates = fcx.tcx.lookup_predicates(item_def_id); - let predicates = fcx.instantiate_bounds(item.span, free_substs, &predicates); + let predicates = fcx.instantiate_bounds(item.span, item_def_id, free_substs); this.check_where_clauses(fcx, item.span, &predicates); - 
fcx.impl_implied_bounds(fcx.tcx.map.local_def_id(item.id), item.span) + fcx.impl_implied_bounds(item_def_id, item.span) }); } @@ -460,15 +449,15 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { let fty = fcx.instantiate_type_scheme(span, free_substs, &fty); let sig = fcx.tcx.liberate_late_bound_regions(free_id_outlive, &fty.sig); - for &input_ty in &sig.inputs { - fcx.register_wf_obligation(input_ty, span, self.code.clone()); + for input_ty in sig.inputs() { + fcx.register_wf_obligation(&input_ty, span, self.code.clone()); } - implied_bounds.extend(sig.inputs); + implied_bounds.extend(sig.inputs()); - fcx.register_wf_obligation(sig.output, span, self.code.clone()); + fcx.register_wf_obligation(sig.output(), span, self.code.clone()); // FIXME(#25759) return types should not be implied bounds - implied_bounds.push(sig.output); + implied_bounds.push(sig.output()); self.check_where_clauses(fcx, span, predicates); } @@ -476,60 +465,64 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { fn check_method_receiver<'fcx, 'tcx>(&mut self, fcx: &FnCtxt<'fcx, 'gcx, 'tcx>, method_sig: &hir::MethodSig, - method: &ty::Method<'tcx>, + method: &ty::AssociatedItem, free_id_outlive: CodeExtent, self_ty: ty::Ty<'tcx>) { // check that the type of the method's receiver matches the // method's first parameter. - debug!("check_method_receiver({:?},cat={:?},self_ty={:?})", - method.name, method.explicit_self, self_ty); + debug!("check_method_receiver({:?}, self_ty={:?})", + method, self_ty); - let rcvr_ty = match method.explicit_self { - ty::ExplicitSelfCategory::Static => return, - ty::ExplicitSelfCategory::ByValue => self_ty, - ty::ExplicitSelfCategory::ByReference(region, mutability) => { - fcx.tcx.mk_ref(region, ty::TypeAndMut { - ty: self_ty, - mutbl: mutability - }) - } - ty::ExplicitSelfCategory::ByBox => fcx.tcx.mk_box(self_ty) - }; + if !method.method_has_self_argument { + return; + } let span = method_sig.decl.inputs[0].pat.span; let free_substs = &fcx.parameter_environment.free_substs; - let fty = fcx.instantiate_type_scheme(span, free_substs, &method.fty); - let sig = fcx.tcx.liberate_late_bound_regions(free_id_outlive, &fty.sig); + let method_ty = fcx.tcx.item_type(method.def_id); + let fty = fcx.instantiate_type_scheme(span, free_substs, &method_ty); + let sig = fcx.tcx.liberate_late_bound_regions(free_id_outlive, &fty.fn_sig()); debug!("check_method_receiver: sig={:?}", sig); + let self_arg_ty = sig.inputs()[0]; + let rcvr_ty = match ExplicitSelf::determine(self_ty, self_arg_ty) { + ExplicitSelf::ByValue => self_ty, + ExplicitSelf::ByReference(region, mutbl) => { + fcx.tcx.mk_ref(region, ty::TypeAndMut { + ty: self_ty, + mutbl: mutbl + }) + } + ExplicitSelf::ByBox => fcx.tcx.mk_box(self_ty) + }; let rcvr_ty = fcx.instantiate_type_scheme(span, free_substs, &rcvr_ty); let rcvr_ty = fcx.tcx.liberate_late_bound_regions(free_id_outlive, &ty::Binder(rcvr_ty)); debug!("check_method_receiver: receiver ty = {:?}", rcvr_ty); - let origin = TypeOrigin::MethodReceiver(span); - fcx.demand_eqtype_with_origin(origin, rcvr_ty, sig.inputs[0]); + let cause = fcx.cause(span, ObligationCauseCode::MethodReceiver); + fcx.demand_eqtype_with_origin(&cause, rcvr_ty, self_arg_ty); } fn check_variances_for_type_defn(&self, item: &hir::Item, ast_generics: &hir::Generics) { - let ty = self.tcx().tables().node_id_to_type(item.id); + let item_def_id = self.tcx().map.local_def_id(item.id); + let ty = self.tcx().item_type(item_def_id); if self.tcx().has_error_field(ty) { return; } - let item_def_id = 
self.tcx().map.local_def_id(item.id); - let ty_predicates = self.tcx().lookup_predicates(item_def_id); + let ty_predicates = self.tcx().item_predicates(item_def_id); assert_eq!(ty_predicates.parent, None); let variances = self.tcx().item_variances(item_def_id); - let mut constrained_parameters: FnvHashSet<_> = + let mut constrained_parameters: FxHashSet<_> = variances.iter().enumerate() .filter(|&(_, &variance)| variance != ty::Bivariant) .map(|(index, _)| Parameter(index as u32)) @@ -578,33 +571,33 @@ impl<'ccx, 'gcx> CheckTypeWellFormedVisitor<'ccx, 'gcx> { } } -fn reject_shadowing_type_parameters(tcx: TyCtxt, span: Span, generics: &ty::Generics) { - let parent = tcx.lookup_generics(generics.parent.unwrap()); - let impl_params: FnvHashMap<_, _> = parent.types - .iter() - .map(|tp| (tp.name, tp.def_id)) - .collect(); +fn reject_shadowing_type_parameters(tcx: TyCtxt, def_id: DefId) { + let generics = tcx.item_generics(def_id); + let parent = tcx.item_generics(generics.parent.unwrap()); + let impl_params: FxHashMap<_, _> = parent.types + .iter() + .map(|tp| (tp.name, tp.def_id)) + .collect(); for method_param in &generics.types { if impl_params.contains_key(&method_param.name) { // Tighten up the span to focus on only the shadowing type - let shadow_node_id = tcx.map.as_local_node_id(method_param.def_id).unwrap(); - let type_span = match tcx.map.opt_span(shadow_node_id) { - Some(osp) => osp, - None => span - }; + let type_span = tcx.def_span(method_param.def_id); // The expectation here is that the original trait declaration is // local so it should be okay to just unwrap everything. - let trait_def_id = impl_params.get(&method_param.name).unwrap(); - let trait_node_id = tcx.map.as_local_node_id(*trait_def_id).unwrap(); - let trait_decl_span = tcx.map.opt_span(trait_node_id).unwrap(); + let trait_def_id = impl_params[&method_param.name]; + let trait_decl_span = tcx.def_span(trait_def_id); error_194(tcx, type_span, trait_decl_span, method_param.name); } } } impl<'ccx, 'tcx, 'v> Visitor<'v> for CheckTypeWellFormedVisitor<'ccx, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'v> { + NestedVisitorMap::None + } + fn visit_item(&mut self, i: &hir::Item) { debug!("visit_item: {:?}", i); self.check_item_well_formed(i); @@ -649,7 +642,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { let fields = struct_def.fields().iter() .map(|field| { - let field_ty = self.tcx.tables().node_id_to_type(field.id); + let field_ty = self.tcx.item_type(self.tcx.map.local_def_id(field.id)); let field_ty = self.instantiate_type_scheme(field.span, &self.parameter_environment .free_substs, @@ -678,7 +671,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { None => { // Inherent impl: take implied bounds from the self type. 
- let self_ty = self.tcx.lookup_item_type(impl_def_id).ty; + let self_ty = self.tcx.item_type(impl_def_id); let self_ty = self.instantiate_type_scheme(span, free_substs, &self_ty); vec![self_ty] } diff --git a/src/librustc_typeck/check/writeback.rs b/src/librustc_typeck/check/writeback.rs index 5ef3e86996..56de75995f 100644 --- a/src/librustc_typeck/check/writeback.rs +++ b/src/librustc_typeck/check/writeback.rs @@ -20,23 +20,21 @@ use rustc::ty::adjustment; use rustc::ty::fold::{TypeFolder,TypeFoldable}; use rustc::infer::{InferCtxt, FixupError}; use rustc::util::nodemap::DefIdMap; -use write_substs_to_tcx; -use write_ty_to_tcx; use std::cell::Cell; use syntax::ast; -use syntax_pos::{DUMMY_SP, Span}; +use syntax_pos::Span; use rustc::hir::print::pat_to_string; -use rustc::hir::intravisit::{self, Visitor}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir::{self, PatKind}; /////////////////////////////////////////////////////////////////////////// // Entry point functions impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { - pub fn resolve_type_vars_in_expr(&self, e: &hir::Expr, item_id: ast::NodeId) { + pub fn resolve_type_vars_in_expr(&self, e: &'gcx hir::Expr, item_id: ast::NodeId) { assert_eq!(self.writeback_errors.get(), false); let mut wbcx = WritebackCx::new(self); wbcx.visit_expr(e); @@ -45,15 +43,16 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { wbcx.visit_liberated_fn_sigs(); wbcx.visit_fru_field_types(); wbcx.visit_deferred_obligations(item_id); + wbcx.visit_type_nodes(); } pub fn resolve_type_vars_in_fn(&self, - decl: &hir::FnDecl, - blk: &hir::Block, + decl: &'gcx hir::FnDecl, + body: &'gcx hir::Expr, item_id: ast::NodeId) { assert_eq!(self.writeback_errors.get(), false); let mut wbcx = WritebackCx::new(self); - wbcx.visit_block(blk); + wbcx.visit_expr(body); for arg in &decl.inputs { wbcx.visit_node_id(ResolvingPattern(arg.pat.span), arg.id); wbcx.visit_pat(&arg.pat); @@ -67,8 +66,9 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> { wbcx.visit_closures(); wbcx.visit_liberated_fn_sigs(); wbcx.visit_fru_field_types(); - wbcx.visit_anon_types(item_id); + wbcx.visit_anon_types(); wbcx.visit_deferred_obligations(item_id); + wbcx.visit_type_nodes(); } } @@ -133,6 +133,12 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> { self.fcx.tcx } + fn write_ty_to_tcx(&self, node_id: ast::NodeId, ty: Ty<'gcx>) { + debug!("write_ty_to_tcx({}, {:?})", node_id, ty); + assert!(!ty.needs_infer()); + self.tcx().tables.borrow_mut().node_types.insert(node_id, ty); + } + // Hacky hack: During type-checking, we treat *all* operators // as potentially overloaded. But then, during writeback, if // we observe that something like `a+b` is (known to be) @@ -180,8 +186,12 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> { // below. In general, a function is made into a `visitor` if it must // traffic in node-ids or update tables in the type context etc. 
-impl<'cx, 'gcx, 'tcx, 'v> Visitor<'v> for WritebackCx<'cx, 'gcx, 'tcx> { - fn visit_stmt(&mut self, s: &hir::Stmt) { +impl<'cx, 'gcx, 'tcx> Visitor<'gcx> for WritebackCx<'cx, 'gcx, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'gcx> { + NestedVisitorMap::OnlyBodies(&self.fcx.tcx.map) + } + + fn visit_stmt(&mut self, s: &'gcx hir::Stmt) { if self.fcx.writeback_errors.get() { return; } @@ -190,7 +200,7 @@ impl<'cx, 'gcx, 'tcx, 'v> Visitor<'v> for WritebackCx<'cx, 'gcx, 'tcx> { intravisit::walk_stmt(self, s); } - fn visit_expr(&mut self, e: &hir::Expr) { + fn visit_expr(&mut self, e: &'gcx hir::Expr) { if self.fcx.writeback_errors.get() { return; } @@ -210,7 +220,7 @@ impl<'cx, 'gcx, 'tcx, 'v> Visitor<'v> for WritebackCx<'cx, 'gcx, 'tcx> { intravisit::walk_expr(self, e); } - fn visit_block(&mut self, b: &hir::Block) { + fn visit_block(&mut self, b: &'gcx hir::Block) { if self.fcx.writeback_errors.get() { return; } @@ -219,7 +229,7 @@ impl<'cx, 'gcx, 'tcx, 'v> Visitor<'v> for WritebackCx<'cx, 'gcx, 'tcx> { intravisit::walk_block(self, b); } - fn visit_pat(&mut self, p: &hir::Pat) { + fn visit_pat(&mut self, p: &'gcx hir::Pat) { if self.fcx.writeback_errors.get() { return; } @@ -234,22 +244,22 @@ impl<'cx, 'gcx, 'tcx, 'v> Visitor<'v> for WritebackCx<'cx, 'gcx, 'tcx> { intravisit::walk_pat(self, p); } - fn visit_local(&mut self, l: &hir::Local) { + fn visit_local(&mut self, l: &'gcx hir::Local) { if self.fcx.writeback_errors.get() { return; } let var_ty = self.fcx.local_ty(l.span, l.id); let var_ty = self.resolve(&var_ty, ResolvingLocal(l.span)); - write_ty_to_tcx(self.fcx.ccx, l.id, var_ty); + self.write_ty_to_tcx(l.id, var_ty); intravisit::walk_local(self, l); } - fn visit_ty(&mut self, t: &hir::Ty) { + fn visit_ty(&mut self, t: &'gcx hir::Ty) { match t.node { hir::TyArray(ref ty, ref count_expr) => { self.visit_ty(&ty); - write_ty_to_tcx(self.fcx.ccx, count_expr.id, self.tcx().types.usize); + self.write_ty_to_tcx(count_expr.id, self.tcx().types.usize); } hir::TyBareFn(ref function_declaration) => { intravisit::walk_fn_decl_nopat(self, &function_declaration.decl); @@ -302,13 +312,11 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> { } } - fn visit_anon_types(&self, item_id: ast::NodeId) { + fn visit_anon_types(&self) { if self.fcx.writeback_errors.get() { return } - let item_def_id = self.fcx.tcx.map.local_def_id(item_id); - let gcx = self.tcx().global_tcx(); for (&def_id, &concrete_ty) in self.fcx.anon_types.borrow().iter() { let reason = ResolvingAnonTy(def_id); @@ -349,27 +357,33 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> { } }); - gcx.register_item_type(def_id, ty::TypeScheme { - ty: outside_ty, - generics: gcx.lookup_generics(item_def_id) - }); + gcx.item_types.borrow_mut().insert(def_id, outside_ty); } } fn visit_node_id(&self, reason: ResolveReason, id: ast::NodeId) { + // Export associated path extensions. 
+ if let Some(def) = self.fcx.tables.borrow_mut().type_relative_path_defs.remove(&id) { + self.tcx().tables.borrow_mut().type_relative_path_defs.insert(id, def); + } + // Resolve any borrowings for the node with id `id` self.visit_adjustments(reason, id); // Resolve the type of the node with id `id` let n_ty = self.fcx.node_ty(id); let n_ty = self.resolve(&n_ty, reason); - write_ty_to_tcx(self.fcx.ccx, id, n_ty); + self.write_ty_to_tcx(id, n_ty); debug!("Node {} has type {:?}", id, n_ty); // Resolve any substitutions self.fcx.opt_node_ty_substs(id, |item_substs| { - write_substs_to_tcx(self.fcx.ccx, id, - self.resolve(item_substs, reason)); + let item_substs = self.resolve(item_substs, reason); + if !item_substs.is_noop() { + debug!("write_substs_to_tcx({}, {:?})", id, item_substs); + assert!(!item_substs.substs.needs_infer()); + self.tcx().tables.borrow_mut().item_substs.insert(id, item_substs); + } }); } @@ -475,6 +489,13 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> { } } + fn visit_type_nodes(&self) { + for (&id, ty) in self.fcx.ast_ty_to_ty_cache.borrow().iter() { + let ty = self.resolve(ty, ResolvingTyNode(id)); + self.fcx.ccx.ast_ty_to_ty_cache.borrow_mut().insert(id, ty); + } + } + fn resolve(&self, x: &T, reason: ResolveReason) -> T::Lifted where T: TypeFoldable<'tcx> + ty::Lift<'gcx> { @@ -502,6 +523,7 @@ enum ResolveReason { ResolvingFieldTypes(ast::NodeId), ResolvingAnonTy(DefId), ResolvingDeferredObligation(Span), + ResolvingTyNode(ast::NodeId), } impl<'a, 'gcx, 'tcx> ResolveReason { @@ -513,15 +535,14 @@ impl<'a, 'gcx, 'tcx> ResolveReason { ResolvingUpvar(upvar_id) => { tcx.expr_span(upvar_id.closure_expr_id) } - ResolvingFnSig(id) => { - tcx.map.span(id) - } - ResolvingFieldTypes(id) => { + ResolvingFnSig(id) | + ResolvingFieldTypes(id) | + ResolvingTyNode(id) => { tcx.map.span(id) } ResolvingClosure(did) | ResolvingAnonTy(did) => { - tcx.map.def_id_span(did, DUMMY_SP) + tcx.def_span(did) } ResolvingDeferredObligation(span) => span } @@ -598,7 +619,8 @@ impl<'cx, 'gcx, 'tcx> Resolver<'cx, 'gcx, 'tcx> { ResolvingFnSig(_) | ResolvingFieldTypes(_) | - ResolvingDeferredObligation(_) => { + ResolvingDeferredObligation(_) | + ResolvingTyNode(_) => { // any failures here should also fail when // resolving the patterns, closure types, or // something else. diff --git a/src/librustc_typeck/check_unused.rs b/src/librustc_typeck/check_unused.rs index 7e41a672bf..0034a85f8e 100644 --- a/src/librustc_typeck/check_unused.rs +++ b/src/librustc_typeck/check_unused.rs @@ -16,7 +16,7 @@ use syntax::ast; use syntax_pos::{Span, DUMMY_SP}; use rustc::hir; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; struct UnusedTraitImportVisitor<'a, 'tcx: 'a> { tcx: TyCtxt<'a, 'tcx, 'tcx>, @@ -40,28 +40,22 @@ impl<'a, 'tcx> UnusedTraitImportVisitor<'a, 'tcx> { } } -impl<'a, 'tcx, 'v> Visitor<'v> for UnusedTraitImportVisitor<'a, 'tcx> { +impl<'a, 'tcx, 'v> ItemLikeVisitor<'v> for UnusedTraitImportVisitor<'a, 'tcx> { fn visit_item(&mut self, item: &hir::Item) { if item.vis == hir::Public || item.span == DUMMY_SP { return; } - if let hir::ItemUse(ref path) = item.node { - match path.node { - hir::ViewPathSimple(..) | hir::ViewPathGlob(..) 
=> { - self.check_import(item.id, path.span); - } - hir::ViewPathList(_, ref path_list) => { - for path_item in path_list { - self.check_import(path_item.node.id, path_item.span); - } - } - } + if let hir::ItemUse(ref path, _) = item.node { + self.check_import(item.id, path.span); } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { let _task = tcx.dep_graph.in_task(DepNode::UnusedTraitCheck); let mut visitor = UnusedTraitImportVisitor { tcx: tcx }; - tcx.map.krate().visit_all_items(&mut visitor); + tcx.map.krate().visit_all_item_likes(&mut visitor); } diff --git a/src/librustc_typeck/coherence/mod.rs b/src/librustc_typeck/coherence/mod.rs index 4a4dea5b51..f575d4d8ba 100644 --- a/src/librustc_typeck/coherence/mod.rs +++ b/src/librustc_typeck/coherence/mod.rs @@ -19,27 +19,25 @@ use hir::def_id::DefId; use middle::lang_items::UnsizeTraitLangItem; use rustc::ty::subst::Subst; use rustc::ty::{self, TyCtxt, TypeFoldable}; -use rustc::traits::{self, Reveal}; +use rustc::traits::{self, ObligationCause, Reveal}; use rustc::ty::ParameterEnvironment; use rustc::ty::{Ty, TyBool, TyChar, TyError}; use rustc::ty::{TyParam, TyRawPtr}; -use rustc::ty::{TyRef, TyAdt, TyTrait, TyNever, TyTuple}; +use rustc::ty::{TyRef, TyAdt, TyDynamic, TyNever, TyTuple}; use rustc::ty::{TyStr, TyArray, TySlice, TyFloat, TyInfer, TyInt}; use rustc::ty::{TyUint, TyClosure, TyBox, TyFnDef, TyFnPtr}; use rustc::ty::{TyProjection, TyAnon}; use rustc::ty::util::CopyImplementationError; use middle::free_region::FreeRegionMap; use CrateCtxt; -use rustc::infer::{self, InferCtxt, TypeOrigin}; +use rustc::infer::{self, InferCtxt}; use syntax_pos::Span; use rustc::dep_graph::DepNode; use rustc::hir::map as hir_map; -use rustc::hir::intravisit; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir::{Item, ItemImpl}; use rustc::hir; -use std::rc::Rc; - mod orphan; mod overlap; mod unsafety; @@ -53,12 +51,15 @@ struct CoherenceCheckVisitor<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> { cc: &'a CoherenceChecker<'a, 'gcx, 'tcx>, } -impl<'a, 'gcx, 'tcx, 'v> intravisit::Visitor<'v> for CoherenceCheckVisitor<'a, 'gcx, 'tcx> { +impl<'a, 'gcx, 'tcx, 'v> ItemLikeVisitor<'v> for CoherenceCheckVisitor<'a, 'gcx, 'tcx> { fn visit_item(&mut self, item: &Item) { if let ItemImpl(..) = item.node { self.cc.check_implementation(item) } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { @@ -67,7 +68,7 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { match ty.sty { TyAdt(def, _) => Some(def.did), - TyTrait(ref t) => Some(t.principal.def_id()), + TyDynamic(ref t, ..) => t.principal().map(|p| p.def_id()), TyBox(_) => self.inference_context.tcx.lang_items.owned_box(), @@ -89,8 +90,9 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { // Check implementations and traits. This populates the tables // containing the inherent methods and extension methods. It also // builds up the trait inheritance table. - self.crate_context.tcx.visit_all_items_in_krate(DepNode::CoherenceCheckImpl, - &mut CoherenceCheckVisitor { cc: self }); + self.crate_context.tcx.visit_all_item_likes_in_krate( + DepNode::CoherenceCheckImpl, + &mut CoherenceCheckVisitor { cc: self }); // Populate the table of destructors. 
It might seem a bit strange to // do this here, but it's actually the most convenient place, since @@ -108,13 +110,11 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { fn check_implementation(&self, item: &Item) { let tcx = self.crate_context.tcx; let impl_did = tcx.map.local_def_id(item.id); - let self_type = tcx.lookup_item_type(impl_did); + let self_type = tcx.item_type(impl_did); // If there are no traits, then this implementation must have a // base type. - let impl_items = self.create_impl_from_item(item); - if let Some(trait_ref) = self.crate_context.tcx.impl_trait_ref(impl_did) { debug!("(checking implementation) adding impl for trait '{:?}', item '{}'", trait_ref, @@ -133,19 +133,17 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { } else { // Skip inherent impls where the self type is an error // type. This occurs with e.g. resolve failures (#30589). - if self_type.ty.references_error() { + if self_type.references_error() { return; } // Add the implementation to the mapping from implementation to base // type def ID, if there is a base type for this implementation and // the implementation does not have any associated traits. - if let Some(base_def_id) = self.get_base_type_def_id(item.span, self_type.ty) { + if let Some(base_def_id) = self.get_base_type_def_id(item.span, self_type) { self.add_inherent_impl(base_def_id, impl_did); } } - - tcx.impl_or_trait_item_def_ids.borrow_mut().insert(impl_did, Rc::new(impl_items)); } fn add_inherent_impl(&self, base_def_id: DefId, impl_def_id: DefId) { @@ -161,20 +159,6 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { trait_def.record_local_impl(self.crate_context.tcx, impl_def_id, impl_trait_ref); } - // Converts an implementation in the AST to a vector of items. - fn create_impl_from_item(&self, item: &Item) -> Vec { - match item.node { - ItemImpl(.., ref impl_items) => { - impl_items.iter() - .map(|impl_item| self.crate_context.tcx.map.local_def_id(impl_item.id)) - .collect() - } - _ => { - span_bug!(item.span, "can't convert a non-impl to an impl"); - } - } - } - // Destructors // @@ -187,18 +171,16 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { tcx.populate_implementations_for_trait_if_necessary(drop_trait); let drop_trait = tcx.lookup_trait_def(drop_trait); - let impl_items = tcx.impl_or_trait_item_def_ids.borrow(); - drop_trait.for_each_impl(tcx, |impl_did| { - let items = impl_items.get(&impl_did).unwrap(); + let items = tcx.associated_item_def_ids(impl_did); if items.is_empty() { // We'll error out later. For now, just don't ICE. 
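Several hunks above and below (check_unused.rs, coherence/mod.rs, and the orphan, overlap and unsafety checkers) swap `intravisit::Visitor` for `ItemLikeVisitor` and drive it with `visit_all_item_likes` instead of `visit_all_items`, while deep walks such as `WritebackCx` keep `intravisit::Visitor` with a `NestedVisitorMap`. The sketch below is a self-contained toy, with made-up `Item`/`ImplItem`/`Crate` types rather than the real rustc ones, that only illustrates the shape of the item-like pattern: a flat walk that hands every item-like thing to the visitor, so even checkers that ignore impl items must supply an empty `visit_impl_item`.

```rust
// Toy model of the flat, item-only walk behind the new `ItemLikeVisitor`
// impls. These are made-up types, not the rustc ones.

struct Item { name: &'static str }
struct ImplItem { name: &'static str }

struct Crate {
    items: Vec<Item>,
    impl_items: Vec<ImplItem>,
}

trait ItemLikeVisitor {
    fn visit_item(&mut self, item: &Item);
    fn visit_impl_item(&mut self, impl_item: &ImplItem);
}

impl Crate {
    // Analogue of `krate.visit_all_item_likes(&mut v)`: every item-like thing
    // is handed to the visitor directly; nothing recurses into nested items.
    fn visit_all_item_likes<V: ItemLikeVisitor>(&self, visitor: &mut V) {
        for item in &self.items {
            visitor.visit_item(item);
        }
        for impl_item in &self.impl_items {
            visitor.visit_impl_item(impl_item);
        }
    }
}

struct UnusedImportChecker {
    seen: Vec<&'static str>,
}

impl ItemLikeVisitor for UnusedImportChecker {
    fn visit_item(&mut self, item: &Item) {
        // this checker only cares about top-level items
        self.seen.push(item.name);
    }

    // Checkers that ignore impl items still have to provide the method,
    // which is why the patch adds empty `visit_impl_item` bodies everywhere.
    fn visit_impl_item(&mut self, _impl_item: &ImplItem) {}
}

fn main() {
    let krate = Crate {
        items: vec![Item { name: "use std::io;" }],
        impl_items: vec![ImplItem { name: "fn len" }],
    };

    let mut checker = UnusedImportChecker { seen: Vec::new() };
    krate.visit_all_item_likes(&mut checker);
    assert_eq!(checker.seen, ["use std::io;"]);
}
```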
return; } let method_def_id = items[0]; - let self_type = tcx.lookup_item_type(impl_did); - match self_type.ty.sty { + let self_type = tcx.item_type(impl_did); + match self_type.sty { ty::TyAdt(type_def, _) => { type_def.set_destructor(method_def_id); } @@ -254,13 +236,13 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { return; }; - let self_type = tcx.lookup_item_type(impl_did); + let self_type = tcx.item_type(impl_did); debug!("check_implementations_of_copy: self_type={:?} (bound)", self_type); let span = tcx.map.span(impl_node_id); let param_env = ParameterEnvironment::for_item(tcx, impl_node_id); - let self_type = self_type.ty.subst(tcx, ¶m_env.free_substs); + let self_type = self_type.subst(tcx, ¶m_env.free_substs); assert!(!self_type.has_escaping_regions()); debug!("check_implementations_of_copy: self_type={:?} (free)", @@ -348,7 +330,7 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { return; }; - let source = tcx.lookup_item_type(impl_did).ty; + let source = tcx.item_type(impl_did); let trait_ref = self.crate_context.tcx.impl_trait_ref(impl_did).unwrap(); let target = trait_ref.substs.type_at(1); debug!("check_implementations_of_coerce_unsized: {:?} -> {:?} (bound)", @@ -366,12 +348,12 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { target); tcx.infer_ctxt(None, Some(param_env), Reveal::ExactMatch).enter(|infcx| { - let origin = TypeOrigin::Misc(span); + let cause = ObligationCause::misc(span, impl_node_id); let check_mutbl = |mt_a: ty::TypeAndMut<'gcx>, mt_b: ty::TypeAndMut<'gcx>, mk_ptr: &Fn(Ty<'gcx>) -> Ty<'gcx>| { if (mt_a.mutbl, mt_b.mutbl) == (hir::MutImmutable, hir::MutMutable) { - infcx.report_mismatched_types(origin, + infcx.report_mismatched_types(&cause, mk_ptr(mt_b.ty), target, ty::error::TypeError::Mutability); @@ -413,13 +395,13 @@ impl<'a, 'gcx, 'tcx> CoherenceChecker<'a, 'gcx, 'tcx> { .filter_map(|(i, f)| { let (a, b) = (f.ty(tcx, substs_a), f.ty(tcx, substs_b)); - if f.unsubst_ty().is_phantom_data() { + if tcx.item_type(f.did).is_phantom_data() { // Ignore PhantomData fields return None; } // Ignore fields that aren't significantly changed - if let Ok(ok) = infcx.sub_types(false, origin, b, a) { + if let Ok(ok) = infcx.sub_types(false, &cause, b, a) { if ok.obligations.is_empty() { return None; } diff --git a/src/librustc_typeck/coherence/orphan.rs b/src/librustc_typeck/coherence/orphan.rs index bff794364c..2e8206ec95 100644 --- a/src/librustc_typeck/coherence/orphan.rs +++ b/src/librustc_typeck/coherence/orphan.rs @@ -17,12 +17,12 @@ use rustc::ty::{self, TyCtxt}; use syntax::ast; use syntax_pos::Span; use rustc::dep_graph::DepNode; -use rustc::hir::intravisit; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir; pub fn check<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { let mut orphan = OrphanChecker { tcx: tcx }; - tcx.visit_all_items_in_krate(DepNode::CoherenceOrphanCheck, &mut orphan); + tcx.visit_all_item_likes_in_krate(DepNode::CoherenceOrphanCheck, &mut orphan); } struct OrphanChecker<'cx, 'tcx: 'cx> { @@ -81,13 +81,13 @@ impl<'cx, 'tcx> OrphanChecker<'cx, 'tcx> { // defined in this crate. debug!("coherence2::orphan check: inherent impl {}", self.tcx.map.node_to_string(item.id)); - let self_ty = self.tcx.lookup_item_type(def_id).ty; + let self_ty = self.tcx.item_type(def_id); match self_ty.sty { ty::TyAdt(def, _) => { self.check_def_id(item, def.did); } - ty::TyTrait(ref data) => { - self.check_def_id(item, data.principal.def_id()); + ty::TyDynamic(ref data, ..) 
if data.principal().is_some() => { + self.check_def_id(item, data.principal().unwrap().def_id()); } ty::TyBox(..) => { match self.tcx.lang_items.require_owned_box() { @@ -380,8 +380,11 @@ impl<'cx, 'tcx> OrphanChecker<'cx, 'tcx> { } } -impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for OrphanChecker<'cx, 'tcx> { +impl<'cx, 'tcx, 'v> ItemLikeVisitor<'v> for OrphanChecker<'cx, 'tcx> { fn visit_item(&mut self, item: &hir::Item) { self.check_item(item); } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } diff --git a/src/librustc_typeck/coherence/overlap.rs b/src/librustc_typeck/coherence/overlap.rs index 1bf140c21a..815811675a 100644 --- a/src/librustc_typeck/coherence/overlap.rs +++ b/src/librustc_typeck/coherence/overlap.rs @@ -14,11 +14,11 @@ use hir::def_id::DefId; use rustc::traits::{self, Reveal}; -use rustc::ty::{self, TyCtxt}; +use rustc::ty::{self, TyCtxt, TypeFoldable}; use syntax::ast; use rustc::dep_graph::DepNode; use rustc::hir; -use rustc::hir::intravisit; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use util::nodemap::DefIdMap; use lint; @@ -30,7 +30,7 @@ pub fn check<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { // this secondary walk specifically checks for some other cases, // like defaulted traits, for which additional overlap rules exist - tcx.visit_all_items_in_krate(DepNode::CoherenceOverlapCheckSpecial, &mut overlap); + tcx.visit_all_item_likes_in_krate(DepNode::CoherenceOverlapCheckSpecial, &mut overlap); } struct OverlapChecker<'cx, 'tcx: 'cx> { @@ -48,25 +48,23 @@ impl<'cx, 'tcx> OverlapChecker<'cx, 'tcx> { Value, } - fn name_and_namespace<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, - def_id: DefId) - -> (ast::Name, Namespace) { - let item = tcx.impl_or_trait_item(def_id); - (item.name(), - match item { - ty::TypeTraitItem(..) => Namespace::Type, - ty::ConstTraitItem(..) => Namespace::Value, - ty::MethodTraitItem(..) => Namespace::Value, - }) - } + let name_and_namespace = |def_id| { + let item = self.tcx.associated_item(def_id); + (item.name, match item.kind { + ty::AssociatedKind::Type => Namespace::Type, + ty::AssociatedKind::Const | + ty::AssociatedKind::Method => Namespace::Value, + }) + }; - let impl_items = self.tcx.impl_or_trait_item_def_ids.borrow(); + let impl_items1 = self.tcx.associated_item_def_ids(impl1); + let impl_items2 = self.tcx.associated_item_def_ids(impl2); - for &item1 in &impl_items[&impl1][..] { - let (name, namespace) = name_and_namespace(self.tcx, item1); + for &item1 in &impl_items1[..] { + let (name, namespace) = name_and_namespace(item1); - for &item2 in &impl_items[&impl2][..] { - if (name, namespace) == name_and_namespace(self.tcx, item2) { + for &item2 in &impl_items2[..] { + if (name, namespace) == name_and_namespace(item2) { let msg = format!("duplicate definitions with name `{}`", name); let node_id = self.tcx.map.as_local_node_id(item1).unwrap(); self.tcx.sess.add_lint(lint::builtin::OVERLAPPING_INHERENT_IMPLS, @@ -99,7 +97,7 @@ impl<'cx, 'tcx> OverlapChecker<'cx, 'tcx> { } } -impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for OverlapChecker<'cx, 'tcx> { +impl<'cx, 'tcx, 'v> ItemLikeVisitor<'v> for OverlapChecker<'cx, 'tcx> { fn visit_item(&mut self, item: &'v hir::Item) { match item.node { hir::ItemEnum(..) 
| @@ -136,6 +134,12 @@ impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for OverlapChecker<'cx, 'tcx> { let trait_ref = self.tcx.impl_trait_ref(impl_def_id).unwrap(); let trait_def_id = trait_ref.def_id; + if trait_ref.references_error() { + debug!("coherence: skipping impl {:?} with error {:?}", + impl_def_id, trait_ref); + return + } + let _task = self.tcx.dep_graph.in_task(DepNode::CoherenceOverlapCheck(trait_def_id)); @@ -174,18 +178,17 @@ impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for OverlapChecker<'cx, 'tcx> { } // check for overlap with the automatic `impl Trait for Trait` - if let ty::TyTrait(ref data) = trait_ref.self_ty().sty { + if let ty::TyDynamic(ref data, ..) = trait_ref.self_ty().sty { // This is something like impl Trait1 for Trait2. Illegal // if Trait1 is a supertrait of Trait2 or Trait2 is not object safe. - if !self.tcx.is_object_safe(data.principal.def_id()) { - // This is an error, but it will be - // reported by wfcheck. Ignore it - // here. This is tested by - // `coherence-impl-trait-for-trait-object-safe.rs`. + if data.principal().map_or(true, |p| !self.tcx.is_object_safe(p.def_id())) { + // This is an error, but it will be reported by wfcheck. Ignore it here. + // This is tested by `coherence-impl-trait-for-trait-object-safe.rs`. } else { let mut supertrait_def_ids = - traits::supertrait_def_ids(self.tcx, data.principal.def_id()); + traits::supertrait_def_ids(self.tcx, + data.principal().unwrap().def_id()); if supertrait_def_ids.any(|d| d == trait_def_id) { span_err!(self.tcx.sess, item.span, @@ -201,4 +204,7 @@ impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for OverlapChecker<'cx, 'tcx> { _ => {} } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } diff --git a/src/librustc_typeck/coherence/unsafety.rs b/src/librustc_typeck/coherence/unsafety.rs index cca6c88430..6d5de8f250 100644 --- a/src/librustc_typeck/coherence/unsafety.rs +++ b/src/librustc_typeck/coherence/unsafety.rs @@ -12,12 +12,12 @@ //! crate or pertains to a type defined in this crate. use rustc::ty::TyCtxt; -use rustc::hir::intravisit; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use rustc::hir::{self, Unsafety}; pub fn check<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) { - let mut orphan = UnsafetyChecker { tcx: tcx }; - tcx.map.krate().visit_all_items(&mut orphan); + let mut unsafety = UnsafetyChecker { tcx: tcx }; + tcx.map.krate().visit_all_item_likes(&mut unsafety); } struct UnsafetyChecker<'cx, 'tcx: 'cx> { @@ -94,7 +94,7 @@ impl<'cx, 'tcx, 'v> UnsafetyChecker<'cx, 'tcx> { } } -impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for UnsafetyChecker<'cx, 'tcx> { +impl<'cx, 'tcx, 'v> ItemLikeVisitor<'v> for UnsafetyChecker<'cx, 'tcx> { fn visit_item(&mut self, item: &'v hir::Item) { match item.node { hir::ItemDefaultImpl(unsafety, _) => { @@ -106,4 +106,7 @@ impl<'cx, 'tcx, 'v> intravisit::Visitor<'v> for UnsafetyChecker<'cx, 'tcx> { _ => {} } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } diff --git a/src/librustc_typeck/collect.rs b/src/librustc_typeck/collect.rs index 0e0f5cb1a7..fba77d1717 100644 --- a/src/librustc_typeck/collect.rs +++ b/src/librustc_typeck/collect.rs @@ -13,7 +13,7 @@ # Collect phase The collect phase of type check has the job of visiting all items, -determining their type, and writing that type into the `tcx.tcache` +determining their type, and writing that type into the `tcx.types` table. 
Despite its name, this table does not really operate as a *cache*, at least not for the types of items defined within the current crate: we assume that after the collect phase, the types of @@ -22,8 +22,7 @@ all local items will be present in the table. Unlike most of the types that are present in Rust, the types computed for each item are in fact type schemes. This means that they are generic types that may have type parameters. TypeSchemes are -represented by an instance of `ty::TypeScheme`. This combines the -core type along with a list of the bounds for each parameter. Type +represented by a pair of `Generics` and `Ty`. Type parameters themselves are represented as `ty_param()` instances. The phasing of type conversion is somewhat complicated. There is no @@ -51,8 +50,8 @@ There are some shortcomings in this design: - Before walking the set of supertraits for a given trait, you must call `ensure_super_predicates` on that trait def-id. Otherwise, - `lookup_super_predicates` will result in ICEs. -- Because the type scheme includes defaults, cycles through type + `item_super_predicates` will result in ICEs. +- Because the item generics include defaults, cycles through type parameter defaults are illegal even if those defaults are never employed. This is not necessarily a bug. @@ -66,26 +65,25 @@ use middle::const_val::ConstVal; use rustc_const_eval::EvalHint::UncheckedExprHint; use rustc_const_eval::{eval_const_expr_partial, report_const_eval_err}; use rustc::ty::subst::Substs; -use rustc::ty::{ToPredicate, ImplContainer, ImplOrTraitItemContainer, TraitContainer}; -use rustc::ty::{self, AdtKind, ToPolyTraitRef, Ty, TyCtxt, TypeScheme}; +use rustc::ty::{ToPredicate, ImplContainer, AssociatedItemContainer, TraitContainer}; +use rustc::ty::{self, AdtKind, ToPolyTraitRef, Ty, TyCtxt}; use rustc::ty::util::IntTypeExt; use rscope::*; use rustc::dep_graph::DepNode; use util::common::{ErrorReported, MemoizationMap}; -use util::nodemap::{NodeMap, FnvHashMap, FnvHashSet}; -use {CrateCtxt, write_ty_to_tcx}; +use util::nodemap::{NodeMap, FxHashMap, FxHashSet}; +use CrateCtxt; use rustc_const_math::ConstInt; use std::cell::RefCell; -use std::collections::hash_map::Entry::{Occupied, Vacant}; -use std::rc::Rc; use syntax::{abi, ast, attr}; -use syntax::parse::token::keywords; +use syntax::symbol::{Symbol, keywords}; use syntax_pos::Span; -use rustc::hir::{self, intravisit, map as hir_map, print as pprust}; +use rustc::hir::{self, map as hir_map, print as pprust}; +use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; use rustc::hir::def::{Def, CtorKind}; use rustc::hir::def_id::DefId; @@ -94,7 +92,7 @@ use rustc::hir::def_id::DefId; pub fn collect_item_types(ccx: &CrateCtxt) { let mut visitor = CollectItemTypesVisitor { ccx: ccx }; - ccx.tcx.visit_all_items_in_krate(DepNode::CollectItem, &mut visitor); + ccx.tcx.visit_all_item_likes_in_krate(DepNode::CollectItem, &mut visitor.as_deep_visitor()); } /////////////////////////////////////////////////////////////////////////// @@ -130,9 +128,87 @@ struct CollectItemTypesVisitor<'a, 'tcx: 'a> { ccx: &'a CrateCtxt<'a, 'tcx> } -impl<'a, 'tcx, 'v> intravisit::Visitor<'v> for CollectItemTypesVisitor<'a, 'tcx> { - fn visit_item(&mut self, item: &hir::Item) { - convert_item(self.ccx, item); +impl<'a, 'tcx> CollectItemTypesVisitor<'a, 'tcx> { + /// Collect item types is structured into two tasks. The outer + /// task, `CollectItem`, walks the entire content of an item-like + /// thing, including its body. 
It also spawns an inner task, + /// `CollectItemSig`, which walks only the signature. This inner + /// task is the one that writes the item-type into the various + /// maps. This setup ensures that the item body is never + /// accessible to the task that computes its signature, so that + /// changes to the body don't affect the signature. + /// + /// Consider an example function `foo` that also has a closure in its body: + /// + /// ``` + /// fn foo() { + /// ... + /// let bar = || ...; // we'll label this closure as "bar" below + /// } + /// ``` + /// + /// This results in a dep-graph like so. I've labeled the edges to + /// document where they arise. + /// + /// ``` + /// [HirBody(foo)] -2--> [CollectItem(foo)] -4-> [ItemSignature(bar)] + /// ^ ^ + /// 1 3 + /// [Hir(foo)] -----------+-6-> [CollectItemSig(foo)] -5-> [ItemSignature(foo)] + /// ``` + /// + /// 1. This is added by the `visit_all_item_likes_in_krate`. + /// 2. This is added when we fetch the item body. + /// 3. This is added because `CollectItem` launches `CollectItemSig`. + /// - it is arguably false; if we refactor the `with_task` system; + /// we could get probably rid of it, but it is also harmless enough. + /// 4. This is added by the code in `visit_expr` when we write to `item_types`. + /// 5. This is added by the code in `convert_item` when we write to `item_types`; + /// note that this write occurs inside the `CollectItemSig` task. + /// 6. Added by explicit `read` below + fn with_collect_item_sig(&self, id: ast::NodeId, op: OP) + where OP: FnOnce() + { + let def_id = self.ccx.tcx.map.local_def_id(id); + self.ccx.tcx.dep_graph.with_task(DepNode::CollectItemSig(def_id), || { + self.ccx.tcx.map.read(id); + op(); + }); + } +} + +impl<'a, 'tcx> Visitor<'tcx> for CollectItemTypesVisitor<'a, 'tcx> { + fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> { + NestedVisitorMap::OnlyBodies(&self.ccx.tcx.map) + } + + fn visit_item(&mut self, item: &'tcx hir::Item) { + self.with_collect_item_sig(item.id, || convert_item(self.ccx, item)); + intravisit::walk_item(self, item); + } + + fn visit_expr(&mut self, expr: &'tcx hir::Expr) { + if let hir::ExprClosure(..) = expr.node { + let def_id = self.ccx.tcx.map.local_def_id(expr.id); + generics_of_def_id(self.ccx, def_id); + type_of_def_id(self.ccx, def_id); + } + intravisit::walk_expr(self, expr); + } + + fn visit_ty(&mut self, ty: &'tcx hir::Ty) { + if let hir::TyImplTrait(..) = ty.node { + let def_id = self.ccx.tcx.map.local_def_id(ty.id); + generics_of_def_id(self.ccx, def_id); + } + intravisit::walk_ty(self, ty); + } + + fn visit_impl_item(&mut self, impl_item: &'tcx hir::ImplItem) { + self.with_collect_item_sig(impl_item.id, || { + convert_impl_item(self.ccx, impl_item) + }); + intravisit::walk_impl_item(self, impl_item); } } @@ -253,20 +329,21 @@ impl<'a,'tcx> CrateCtxt<'a,'tcx> { } /// Loads the trait def for a given trait, returning ErrorReported if a cycle arises. 
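The comment above describes splitting collection into an outer `CollectItem` task and an inner `CollectItemSig` task so that edits to an item's body never become inputs of its signature. The stand-alone miniature below uses a hypothetical `DepGraph` with explicit `reads`, not the rustc dep-tracking API (where reads are recorded implicitly as the task runs); it is only meant to make that separation concrete.

```rust
// Whatever a task reads while it runs becomes an incoming edge of that task.
// Keeping the signature work in its own task means a body edit never shows
// up among the signature task's inputs.

use std::cell::RefCell;
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Node {
    Hir(&'static str),
    HirBody(&'static str),
    CollectItem(&'static str),
    CollectItemSig(&'static str),
}

#[derive(Default)]
struct DepGraph {
    edges: RefCell<HashMap<Node, Vec<Node>>>,
}

impl DepGraph {
    // Record the given reads as inputs of `task`, then run the task body.
    fn with_task<R>(&self, task: Node, reads: &[Node], op: impl FnOnce() -> R) -> R {
        self.edges.borrow_mut().entry(task).or_default().extend(reads);
        op()
    }
}

fn main() {
    let graph = DepGraph::default();

    // Outer task: walks the whole item, including its body.
    graph.with_task(Node::CollectItem("foo"), &[Node::HirBody("foo")], || {
        // Inner task: only signature-level HIR is read here.
        graph.with_task(Node::CollectItemSig("foo"), &[Node::Hir("foo")], || {
            // ... write foo's type into the item-type table ...
        });
    });

    let edges = graph.edges.borrow();
    // The signature task depends on the item's HIR but not on its body, so a
    // body change cannot dirty the recorded signature.
    assert_eq!(edges[&Node::CollectItemSig("foo")], vec![Node::Hir("foo")]);
}
```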
- fn get_trait_def(&self, trait_id: DefId) - -> &'tcx ty::TraitDef<'tcx> + fn get_trait_def(&self, def_id: DefId) + -> &'tcx ty::TraitDef { let tcx = self.tcx; - if let Some(trait_id) = tcx.map.as_local_node_id(trait_id) { + if let Some(trait_id) = tcx.map.as_local_node_id(def_id) { let item = match tcx.map.get(trait_id) { hir_map::NodeItem(item) => item, _ => bug!("get_trait_def({:?}): not an item", trait_id) }; + generics_of_def_id(self, def_id); trait_def_of_item(self, &item) } else { - tcx.lookup_trait_def(trait_id) + tcx.lookup_trait_def(def_id) } } @@ -309,16 +386,14 @@ impl<'a, 'tcx> AstConv<'tcx, 'tcx> for ItemCtxt<'a, 'tcx> { }) } - fn get_item_type_scheme(&self, span: Span, id: DefId) - -> Result, ErrorReported> - { + fn get_item_type(&self, span: Span, id: DefId) -> Result, ErrorReported> { self.ccx.cycle_check(span, AstConvRequest::GetItemTypeScheme(id), || { - Ok(type_scheme_of_def_id(self.ccx, id)) + Ok(type_of_def_id(self.ccx, id)) }) } fn get_trait_def(&self, span: Span, id: DefId) - -> Result<&'tcx ty::TraitDef<'tcx>, ErrorReported> + -> Result<&'tcx ty::TraitDef, ErrorReported> { self.ccx.cycle_check(span, AstConvRequest::GetTraitDef(id), || { Ok(self.ccx.get_trait_def(id)) @@ -351,24 +426,6 @@ impl<'a, 'tcx> AstConv<'tcx, 'tcx> for ItemCtxt<'a, 'tcx> { }) } - fn trait_defines_associated_type_named(&self, - trait_def_id: DefId, - assoc_name: ast::Name) - -> bool - { - if let Some(trait_id) = self.tcx().map.as_local_node_id(trait_def_id) { - trait_associated_type_names(self.tcx(), trait_id) - .any(|name| name == assoc_name) - } else { - self.tcx().impl_or_trait_items(trait_def_id).iter().any(|&def_id| { - match self.tcx().impl_or_trait_item(def_id) { - ty::TypeTraitItem(ref item) => item.name == assoc_name, - _ => false - } - }) - } - } - fn get_free_substs(&self) -> Option<&Substs<'tcx>> { None } @@ -466,7 +523,7 @@ impl<'tcx> GetTypeParameterBounds<'tcx> for ty::GenericPredicates<'tcx> { let def = astconv.tcx().type_parameter_def(node_id); let mut results = self.parent.map_or(vec![], |def_id| { - let parent = astconv.tcx().lookup_predicates(def_id); + let parent = astconv.tcx().item_predicates(def_id); parent.get_type_parameter_bounds(astconv, span, node_id) }); @@ -543,11 +600,10 @@ fn is_param<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, param_id: ast::NodeId) -> bool { - if let hir::TyPath(None, _) = ast_ty.node { - let path_res = tcx.expect_resolution(ast_ty.id); - match path_res.base_def { + if let hir::TyPath(hir::QPath::Resolved(None, ref path)) = ast_ty.node { + match path.def { Def::SelfTy(Some(def_id), None) | - Def::TyParam(def_id) if path_res.depth == 0 => { + Def::TyParam(def_id) => { def_id == tcx.map.local_def_id(param_id) } _ => false @@ -557,14 +613,25 @@ fn is_param<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, } } +fn convert_field<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, + struct_generics: &'tcx ty::Generics<'tcx>, + struct_predicates: &ty::GenericPredicates<'tcx>, + field: &hir::StructField, + ty_f: &'tcx ty::FieldDef) +{ + let tt = ccx.icx(struct_predicates).to_ty(&ExplicitRscope, &field.ty); + ccx.tcx.item_types.borrow_mut().insert(ty_f.did, tt); + + let def_id = ccx.tcx.map.local_def_id(field.id); + ccx.tcx.item_types.borrow_mut().insert(def_id, tt); + ccx.tcx.generics.borrow_mut().insert(def_id, struct_generics); + ccx.tcx.predicates.borrow_mut().insert(def_id, struct_predicates.clone()); +} + fn convert_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - container: ImplOrTraitItemContainer, - name: ast::Name, + container: AssociatedItemContainer, id: ast::NodeId, - 
vis: &hir::Visibility, sig: &hir::MethodSig, - defaultness: hir::Defaultness, - has_body: bool, untransformed_rcvr_ty: Ty<'tcx>, rcvr_ty_predicates: &ty::GenericPredicates<'tcx>) { let def_id = ccx.tcx.map.local_def_id(id); @@ -573,120 +640,49 @@ fn convert_method<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let ty_generic_predicates = ty_generic_predicates(ccx, &sig.generics, ty_generics.parent, vec![], false); - let (fty, explicit_self_category) = { - let anon_scope = match container { - ImplContainer(_) => Some(AnonTypeScope::new(def_id)), - TraitContainer(_) => None - }; - AstConv::ty_of_method(&ccx.icx(&(rcvr_ty_predicates, &sig.generics)), - sig, untransformed_rcvr_ty, anon_scope) - }; - - let ty_method = ty::Method { - name: name, - generics: ty_generics, - predicates: ty_generic_predicates, - fty: fty, - explicit_self: explicit_self_category, - vis: ty::Visibility::from_hir(vis, id, ccx.tcx), - defaultness: defaultness, - has_body: has_body, - def_id: def_id, - container: container, + let anon_scope = match container { + ImplContainer(_) => Some(AnonTypeScope::new(def_id)), + TraitContainer(_) => None }; + let fty = AstConv::ty_of_method(&ccx.icx(&(rcvr_ty_predicates, &sig.generics)), + sig, untransformed_rcvr_ty, anon_scope); let substs = mk_item_substs(&ccx.icx(&(rcvr_ty_predicates, &sig.generics)), ccx.tcx.map.span(id), def_id); - let fty = ccx.tcx.mk_fn_def(def_id, substs, ty_method.fty); - debug!("method {} (id {}) has type {:?}", - name, id, fty); - ccx.tcx.tcache.borrow_mut().insert(def_id, fty); - write_ty_to_tcx(ccx, id, fty); - ccx.tcx.predicates.borrow_mut().insert(def_id, ty_method.predicates.clone()); - - debug!("writing method type: def_id={:?} mty={:?}", - def_id, ty_method); - - ccx.tcx.impl_or_trait_items.borrow_mut().insert(def_id, - ty::MethodTraitItem(Rc::new(ty_method))); -} - -fn convert_field<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - struct_generics: &'tcx ty::Generics<'tcx>, - struct_predicates: &ty::GenericPredicates<'tcx>, - field: &hir::StructField, - ty_f: ty::FieldDefMaster<'tcx>) -{ - let tt = ccx.icx(struct_predicates).to_ty(&ExplicitRscope, &field.ty); - ty_f.fulfill_ty(tt); - write_ty_to_tcx(ccx, field.id, tt); - - /* add the field to the tcache */ - ccx.tcx.register_item_type(ccx.tcx.map.local_def_id(field.id), - ty::TypeScheme { - generics: struct_generics, - ty: tt - }); - ccx.tcx.predicates.borrow_mut().insert(ccx.tcx.map.local_def_id(field.id), - struct_predicates.clone()); + let fty = ccx.tcx.mk_fn_def(def_id, substs, fty); + ccx.tcx.item_types.borrow_mut().insert(def_id, fty); + ccx.tcx.predicates.borrow_mut().insert(def_id, ty_generic_predicates); } fn convert_associated_const<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - container: ImplOrTraitItemContainer, - name: ast::Name, + container: AssociatedItemContainer, id: ast::NodeId, - vis: &hir::Visibility, - defaultness: hir::Defaultness, - ty: ty::Ty<'tcx>, - has_value: bool) + ty: ty::Ty<'tcx>) { let predicates = ty::GenericPredicates { parent: Some(container.id()), predicates: vec![] }; - ccx.tcx.predicates.borrow_mut().insert(ccx.tcx.map.local_def_id(id), - predicates); - - write_ty_to_tcx(ccx, id, ty); - - let associated_const = Rc::new(ty::AssociatedConst { - name: name, - vis: ty::Visibility::from_hir(vis, id, ccx.tcx), - defaultness: defaultness, - def_id: ccx.tcx.map.local_def_id(id), - container: container, - ty: ty, - has_value: has_value - }); - ccx.tcx.impl_or_trait_items.borrow_mut() - .insert(ccx.tcx.map.local_def_id(id), ty::ConstTraitItem(associated_const)); + let def_id = 
ccx.tcx.map.local_def_id(id); + ccx.tcx.predicates.borrow_mut().insert(def_id, predicates); + ccx.tcx.item_types.borrow_mut().insert(def_id, ty); } fn convert_associated_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - container: ImplOrTraitItemContainer, - name: ast::Name, + container: AssociatedItemContainer, id: ast::NodeId, - vis: &hir::Visibility, - defaultness: hir::Defaultness, ty: Option>) { let predicates = ty::GenericPredicates { parent: Some(container.id()), predicates: vec![] }; - ccx.tcx.predicates.borrow_mut().insert(ccx.tcx.map.local_def_id(id), - predicates); + let def_id = ccx.tcx.map.local_def_id(id); + ccx.tcx.predicates.borrow_mut().insert(def_id, predicates); - let associated_type = Rc::new(ty::AssociatedType { - name: name, - vis: ty::Visibility::from_hir(vis, id, ccx.tcx), - defaultness: defaultness, - ty: ty, - def_id: ccx.tcx.map.local_def_id(id), - container: container - }); - ccx.tcx.impl_or_trait_items.borrow_mut() - .insert(ccx.tcx.map.local_def_id(id), ty::TypeTraitItem(associated_type)); + if let Some(ty) = ty { + ccx.tcx.item_types.borrow_mut().insert(def_id, ty); + } } fn ensure_no_ty_param_bounds(ccx: &CrateCtxt, @@ -721,9 +717,10 @@ fn ensure_no_ty_param_bounds(ccx: &CrateCtxt, fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { let tcx = ccx.tcx; debug!("convert: item {} with id {}", it.name, it.id); + let def_id = ccx.tcx.map.local_def_id(it.id); match it.node { // These don't define types. - hir::ItemExternCrate(_) | hir::ItemUse(_) | hir::ItemMod(_) => { + hir::ItemExternCrate(_) | hir::ItemUse(..) | hir::ItemMod(_) => { } hir::ItemForeignMod(ref foreign_mod) => { for item in &foreign_mod.items { @@ -731,12 +728,13 @@ fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { } } hir::ItemEnum(ref enum_definition, _) => { - let def_id = ccx.tcx.map.local_def_id(it.id); - let scheme = type_scheme_of_def_id(ccx, def_id); + let ty = type_of_def_id(ccx, def_id); + let generics = generics_of_def_id(ccx, def_id); let predicates = predicates_of_item(ccx, it); convert_enum_variant_types(ccx, - tcx.lookup_adt_def_master(ccx.tcx.map.local_def_id(it.id)), - scheme, + tcx.lookup_adt_def(ccx.tcx.map.local_def_id(it.id)), + ty, + generics, predicates, &enum_definition.variants); }, @@ -756,22 +754,18 @@ fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { ref generics, ref opt_trait_ref, ref selfty, - ref impl_items) => { + _) => { // Create generics from the generics specified in the impl head. debug!("convert: ast_generics={:?}", generics); - let def_id = ccx.tcx.map.local_def_id(it.id); - let ty_generics = generics_of_def_id(ccx, def_id); + generics_of_def_id(ccx, def_id); let mut ty_predicates = ty_generic_predicates(ccx, generics, None, vec![], false); debug!("convert: impl_bounds={:?}", ty_predicates); let selfty = ccx.icx(&ty_predicates).to_ty(&ExplicitRscope, &selfty); - write_ty_to_tcx(ccx, it.id, selfty); + tcx.item_types.borrow_mut().insert(def_id, selfty); - tcx.register_item_type(def_id, - TypeScheme { generics: ty_generics, - ty: selfty }); let trait_ref = opt_trait_ref.as_ref().map(|ast_trait_ref| { AstConv::instantiate_mono_trait_ref(&ccx.icx(&ty_predicates), &ExplicitRscope, @@ -780,98 +774,24 @@ fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { }); tcx.impl_trait_refs.borrow_mut().insert(def_id, trait_ref); - enforce_impl_params_are_constrained(ccx, generics, &mut ty_predicates, def_id); + // Subtle: before we store the predicates into the tcx, we + // sort them so that predicates like `T: Foo` come + // before uses of `U`. 
This avoids false ambiguity errors + // in trait checking. See `setup_constraining_predicates` + // for details. + ctp::setup_constraining_predicates(&mut ty_predicates.predicates, + trait_ref, + &mut ctp::parameters_for_impl(selfty, trait_ref)); + tcx.predicates.borrow_mut().insert(def_id, ty_predicates.clone()); - - - // Convert all the associated consts. - // Also, check if there are any duplicate associated items - let mut seen_type_items = FnvHashMap(); - let mut seen_value_items = FnvHashMap(); - - for impl_item in impl_items { - let seen_items = match impl_item.node { - hir::ImplItemKind::Type(_) => &mut seen_type_items, - _ => &mut seen_value_items, - }; - match seen_items.entry(impl_item.name) { - Occupied(entry) => { - let mut err = struct_span_err!(tcx.sess, impl_item.span, E0201, - "duplicate definitions with name `{}`:", - impl_item.name); - err.span_label(*entry.get(), - &format!("previous definition of `{}` here", - impl_item.name)); - err.span_label(impl_item.span, &format!("duplicate definition")); - err.emit(); - } - Vacant(entry) => { - entry.insert(impl_item.span); - } - } - - if let hir::ImplItemKind::Const(ref ty, _) = impl_item.node { - let const_def_id = ccx.tcx.map.local_def_id(impl_item.id); - let ty_generics = generics_of_def_id(ccx, const_def_id); - let ty = ccx.icx(&ty_predicates) - .to_ty(&ExplicitRscope, &ty); - tcx.register_item_type(const_def_id, - TypeScheme { - generics: ty_generics, - ty: ty, - }); - // Trait-associated constants are always public. - let public = &hir::Public; - let visibility = if opt_trait_ref.is_some() { public } else { &impl_item.vis }; - convert_associated_const(ccx, ImplContainer(def_id), - impl_item.name, impl_item.id, - visibility, - impl_item.defaultness, - ty, true /* has_value */); - } - } - - // Convert all the associated types. - for impl_item in impl_items { - if let hir::ImplItemKind::Type(ref ty) = impl_item.node { - let type_def_id = ccx.tcx.map.local_def_id(impl_item.id); - generics_of_def_id(ccx, type_def_id); - - if opt_trait_ref.is_none() { - span_err!(tcx.sess, impl_item.span, E0202, - "associated types are not allowed in inherent impls"); - } - - let typ = ccx.icx(&ty_predicates).to_ty(&ExplicitRscope, ty); - - convert_associated_type(ccx, ImplContainer(def_id), - impl_item.name, impl_item.id, &impl_item.vis, - impl_item.defaultness, Some(typ)); - } - } - - for impl_item in impl_items { - if let hir::ImplItemKind::Method(ref sig, _) = impl_item.node { - // Trait methods are always public. - let public = &hir::Public; - let method_vis = if opt_trait_ref.is_some() { public } else { &impl_item.vis }; - - convert_method(ccx, ImplContainer(def_id), - impl_item.name, impl_item.id, method_vis, - sig, impl_item.defaultness, true, selfty, - &ty_predicates); - } - } - - enforce_impl_lifetimes_are_constrained(ccx, generics, def_id, impl_items); }, hir::ItemTrait(.., ref trait_items) => { - let trait_def = trait_def_of_item(ccx, it); - let def_id = trait_def.trait_ref.def_id; + generics_of_def_id(ccx, def_id); + trait_def_of_item(ccx, it); let _: Result<(), ErrorReported> = // any error is already reported, can ignore ccx.ensure_super_predicates(it.span, def_id); convert_trait_predicates(ccx, it); - let trait_predicates = tcx.lookup_predicates(def_id); + let trait_predicates = tcx.item_predicates(def_id); debug!("convert: trait_bounds={:?}", trait_predicates); @@ -880,24 +800,13 @@ fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { // Convert all the associated constants. 
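The loop that follows converts the trait's associated constants. As a reminder of what is being converted, associated constants in surface Rust look like this (a small, unrelated example, with and without a default value, which is roughly the distinction the removed `has_value` flag recorded):

```rust
trait Shape {
    // No default: every impl must provide a value.
    const SIDES: u32;
    // Default value: impls may rely on it or override it.
    const NAME: &'static str = "shape";
}

struct Triangle;

impl Shape for Triangle {
    const SIDES: u32 = 3;
}

fn main() {
    assert_eq!(Triangle::SIDES, 3);
    assert_eq!(Triangle::NAME, "shape");
}
```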
for trait_item in trait_items { - if let hir::ConstTraitItem(ref ty, ref default) = trait_item.node { + if let hir::ConstTraitItem(ref ty, _) = trait_item.node { let const_def_id = ccx.tcx.map.local_def_id(trait_item.id); - let ty_generics = generics_of_def_id(ccx, const_def_id); + generics_of_def_id(ccx, const_def_id); let ty = ccx.icx(&trait_predicates) .to_ty(&ExplicitRscope, ty); - tcx.register_item_type(const_def_id, - TypeScheme { - generics: ty_generics, - ty: ty, - }); - convert_associated_const(ccx, - container, - trait_item.name, - trait_item.id, - &hir::Public, - hir::Defaultness::Default, - ty, - default.is_some()) + tcx.item_types.borrow_mut().insert(const_def_id, ty); + convert_associated_const(ccx, container, trait_item.id, ty) } } @@ -911,113 +820,129 @@ fn convert_item(ccx: &CrateCtxt, it: &hir::Item) { |ty| ccx.icx(&trait_predicates).to_ty(&ExplicitRscope, &ty) }); - convert_associated_type(ccx, - container, - trait_item.name, - trait_item.id, - &hir::Public, - hir::Defaultness::Default, - typ); + convert_associated_type(ccx, container, trait_item.id, typ); } } // Convert all the methods for trait_item in trait_items { - if let hir::MethodTraitItem(ref sig, ref body) = trait_item.node { + if let hir::MethodTraitItem(ref sig, _) = trait_item.node { convert_method(ccx, container, - trait_item.name, trait_item.id, - &hir::Inherited, sig, - hir::Defaultness::Default, - body.is_some(), tcx.mk_self_type(), &trait_predicates); - } } - - // Add an entry mapping - let trait_item_def_ids = Rc::new(trait_items.iter().map(|trait_item| { - ccx.tcx.map.local_def_id(trait_item.id) - }).collect()); - tcx.impl_or_trait_item_def_ids.borrow_mut().insert(ccx.tcx.map.local_def_id(it.id), - trait_item_def_ids); }, hir::ItemStruct(ref struct_def, _) | hir::ItemUnion(ref struct_def, _) => { - let def_id = ccx.tcx.map.local_def_id(it.id); - let scheme = type_scheme_of_def_id(ccx, def_id); + let ty = type_of_def_id(ccx, def_id); + let generics = generics_of_def_id(ccx, def_id); let predicates = predicates_of_item(ccx, it); - let variant = tcx.lookup_adt_def_master(def_id).struct_variant(); + let variant = tcx.lookup_adt_def(def_id).struct_variant(); for (f, ty_f) in struct_def.fields().iter().zip(variant.fields.iter()) { - convert_field(ccx, &scheme.generics, &predicates, f, ty_f) + convert_field(ccx, generics, &predicates, f, ty_f) } if !struct_def.is_struct() { - convert_variant_ctor(ccx, struct_def.id(), variant, scheme, predicates); + convert_variant_ctor(ccx, struct_def.id(), variant, ty, predicates); } }, hir::ItemTy(_, ref generics) => { ensure_no_ty_param_bounds(ccx, it.span, generics, "type"); - let def_id = ccx.tcx.map.local_def_id(it.id); - type_scheme_of_def_id(ccx, def_id); + type_of_def_id(ccx, def_id); + generics_of_def_id(ccx, def_id); predicates_of_item(ccx, it); }, _ => { - let def_id = ccx.tcx.map.local_def_id(it.id); - type_scheme_of_def_id(ccx, def_id); + type_of_def_id(ccx, def_id); + generics_of_def_id(ccx, def_id); predicates_of_item(ccx, it); }, } } +fn convert_impl_item(ccx: &CrateCtxt, impl_item: &hir::ImplItem) { + let tcx = ccx.tcx; + + // we can lookup details about the impl because items are visited + // before impl-items + let impl_def_id = tcx.map.get_parent_did(impl_item.id); + let impl_predicates = tcx.item_predicates(impl_def_id); + let impl_trait_ref = tcx.impl_trait_ref(impl_def_id); + let impl_self_ty = tcx.item_type(impl_def_id); + + match impl_item.node { + hir::ImplItemKind::Const(ref ty, _) => { + let const_def_id = ccx.tcx.map.local_def_id(impl_item.id); 
+ generics_of_def_id(ccx, const_def_id); + let ty = ccx.icx(&impl_predicates) + .to_ty(&ExplicitRscope, &ty); + tcx.item_types.borrow_mut().insert(const_def_id, ty); + convert_associated_const(ccx, ImplContainer(impl_def_id), + impl_item.id, ty); + } + + hir::ImplItemKind::Type(ref ty) => { + let type_def_id = ccx.tcx.map.local_def_id(impl_item.id); + generics_of_def_id(ccx, type_def_id); + + if impl_trait_ref.is_none() { + span_err!(tcx.sess, impl_item.span, E0202, + "associated types are not allowed in inherent impls"); + } + + let typ = ccx.icx(&impl_predicates).to_ty(&ExplicitRscope, ty); + + convert_associated_type(ccx, ImplContainer(impl_def_id), impl_item.id, Some(typ)); + } + + hir::ImplItemKind::Method(ref sig, _) => { + convert_method(ccx, ImplContainer(impl_def_id), + impl_item.id, sig, impl_self_ty, + &impl_predicates); + } + } +} + fn convert_variant_ctor<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, ctor_id: ast::NodeId, - variant: ty::VariantDef<'tcx>, - scheme: ty::TypeScheme<'tcx>, + variant: &'tcx ty::VariantDef, + ty: Ty<'tcx>, predicates: ty::GenericPredicates<'tcx>) { let tcx = ccx.tcx; let def_id = tcx.map.local_def_id(ctor_id); generics_of_def_id(ccx, def_id); let ctor_ty = match variant.ctor_kind { - CtorKind::Fictive | CtorKind::Const => scheme.ty, + CtorKind::Fictive | CtorKind::Const => ty, CtorKind::Fn => { - let inputs: Vec<_> = - variant.fields - .iter() - .map(|field| field.unsubst_ty()) - .collect(); - let substs = mk_item_substs(&ccx.icx(&predicates), - ccx.tcx.map.span(ctor_id), def_id); + let inputs = variant.fields.iter().map(|field| tcx.item_type(field.did)); + let substs = mk_item_substs(&ccx.icx(&predicates), ccx.tcx.map.span(ctor_id), def_id); tcx.mk_fn_def(def_id, substs, tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: abi::Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: inputs, - output: scheme.ty, - variadic: false - }) + sig: ty::Binder(ccx.tcx.mk_fn_sig(inputs, ty, false)) })) } }; - write_ty_to_tcx(ccx, ctor_id, ctor_ty); - tcx.tcache.borrow_mut().insert(def_id, ctor_ty); + tcx.item_types.borrow_mut().insert(def_id, ctor_ty); tcx.predicates.borrow_mut().insert(tcx.map.local_def_id(ctor_id), predicates); } fn convert_enum_variant_types<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - def: ty::AdtDefMaster<'tcx>, - scheme: ty::TypeScheme<'tcx>, + def: &'tcx ty::AdtDef, + ty: Ty<'tcx>, + generics: &'tcx ty::Generics<'tcx>, predicates: ty::GenericPredicates<'tcx>, variants: &[hir::Variant]) { // fill the field types for (variant, ty_variant) in variants.iter().zip(def.variants.iter()) { for (f, ty_f) in variant.node.data.fields().iter().zip(ty_variant.fields.iter()) { - convert_field(ccx, &scheme.generics, &predicates, f, ty_f) + convert_field(ccx, generics, &predicates, f, ty_f) } // Convert the ctor, if any. 
This also registers the variant as @@ -1026,7 +951,7 @@ fn convert_enum_variant_types<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, ccx, variant.node.data.id(), ty_variant, - scheme.clone(), + ty, predicates.clone() ); } @@ -1037,8 +962,8 @@ fn convert_struct_variant<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, name: ast::Name, disr_val: ty::Disr, def: &hir::VariantData) - -> ty::VariantDefData<'tcx, 'tcx> { - let mut seen_fields: FnvHashMap = FnvHashMap(); + -> ty::VariantDef { + let mut seen_fields: FxHashMap = FxHashMap(); let node_id = ccx.tcx.map.as_local_node_id(did).unwrap(); let fields = def.fields().iter().map(|f| { let fid = ccx.tcx.map.local_def_id(f.id); @@ -1054,10 +979,13 @@ fn convert_struct_variant<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, seen_fields.insert(f.name, f.span); } - ty::FieldDefData::new(fid, f.name, - ty::Visibility::from_hir(&f.vis, node_id, ccx.tcx)) + ty::FieldDef { + did: fid, + name: f.name, + vis: ty::Visibility::from_hir(&f.vis, node_id, ccx.tcx) + } }).collect(); - ty::VariantDefData { + ty::VariantDef { did: did, name: name, disr_val: disr_val, @@ -1069,29 +997,34 @@ fn convert_struct_variant<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, fn convert_struct_def<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item, def: &hir::VariantData) - -> ty::AdtDefMaster<'tcx> + -> &'tcx ty::AdtDef { let did = ccx.tcx.map.local_def_id(it.id); // Use separate constructor id for unit/tuple structs and reuse did for braced structs. let ctor_id = if !def.is_struct() { Some(ccx.tcx.map.local_def_id(def.id())) } else { None }; let variants = vec![convert_struct_variant(ccx, ctor_id.unwrap_or(did), it.name, ConstInt::Infer(0), def)]; - let adt = ccx.tcx.intern_adt_def(did, AdtKind::Struct, variants); + let adt = ccx.tcx.alloc_adt_def(did, AdtKind::Struct, variants); if let Some(ctor_id) = ctor_id { // Make adt definition available through constructor id as well. 
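`convert_variant_ctor` above gives tuple-struct and tuple-variant constructors a function type of their own, and the struct converter also registers the ADT under the constructor's def-id. The surface-level reason, shown in this small stand-alone example, is that such a constructor really is usable as an ordinary function value:

```rust
#[derive(Debug, PartialEq)]
struct Point(i32, i32);

#[derive(Debug, PartialEq)]
struct Meters(u32);

fn main() {
    // A tuple-struct constructor is a function item of its own; here it is
    // coerced to a plain function pointer of type `fn(i32, i32) -> Point`,
    // mirroring the `fn` type the `CtorKind::Fn` arm above constructs.
    let make: fn(i32, i32) -> Point = Point;
    assert_eq!(make(1, 2), Point(1, 2));

    // Because it is a function, the constructor can also be passed directly
    // to higher-order code.
    let distances: Vec<Meters> = (1..4).map(Meters).collect();
    assert_eq!(distances, [Meters(1), Meters(2), Meters(3)]);
}
```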
- ccx.tcx.insert_adt_def(ctor_id, adt); + ccx.tcx.adt_defs.borrow_mut().insert(ctor_id, adt); } + + ccx.tcx.adt_defs.borrow_mut().insert(did, adt); adt } fn convert_union_def<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item, def: &hir::VariantData) - -> ty::AdtDefMaster<'tcx> + -> &'tcx ty::AdtDef { let did = ccx.tcx.map.local_def_id(it.id); let variants = vec![convert_struct_variant(ccx, did, it.name, ConstInt::Infer(0), def)]; - ccx.tcx.intern_adt_def(did, AdtKind::Union, variants) + + let adt = ccx.tcx.alloc_adt_def(did, AdtKind::Union, variants); + ccx.tcx.adt_defs.borrow_mut().insert(did, adt); + adt } fn evaluate_disr_expr(ccx: &CrateCtxt, repr_ty: attr::IntType, e: &hir::Expr) @@ -1145,7 +1078,7 @@ fn convert_union_def<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, fn convert_enum_def<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item, def: &hir::EnumDef) - -> ty::AdtDefMaster<'tcx> + -> &'tcx ty::AdtDef { let tcx = ccx.tcx; let did = tcx.map.local_def_id(it.id); @@ -1173,7 +1106,10 @@ fn convert_enum_def<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let did = tcx.map.local_def_id(v.node.data.id()); convert_struct_variant(ccx, did, v.node.name, disr, &v.node.data) }).collect(); - tcx.intern_adt_def(tcx.map.local_def_id(it.id), AdtKind::Enum, variants) + + let adt = tcx.alloc_adt_def(did, AdtKind::Enum, variants); + tcx.adt_defs.borrow_mut().insert(did, adt); + adt } /// Ensures that the super-predicates of the trait with def-id @@ -1218,10 +1154,15 @@ fn ensure_super_predicates_step(ccx: &CrateCtxt, // In-scope when converting the superbounds for `Trait` are // that `Self:Trait` as well as any bounds that appear on the // generic types: - let trait_def = trait_def_of_item(ccx, item); + generics_of_def_id(ccx, trait_def_id); + trait_def_of_item(ccx, item); + let trait_ref = ty::TraitRef { + def_id: trait_def_id, + substs: Substs::identity_for_item(tcx, trait_def_id) + }; let self_predicate = ty::GenericPredicates { parent: None, - predicates: vec![trait_def.trait_ref.to_predicate()] + predicates: vec![trait_ref.to_predicate()] }; let scope = &(generics, &self_predicate); @@ -1266,76 +1207,41 @@ fn ensure_super_predicates_step(ccx: &CrateCtxt, def_ids } -fn trait_def_of_item<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - it: &hir::Item) - -> &'tcx ty::TraitDef<'tcx> -{ +fn trait_def_of_item<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) -> &'tcx ty::TraitDef { let def_id = ccx.tcx.map.local_def_id(it.id); let tcx = ccx.tcx; - if let Some(def) = tcx.trait_defs.borrow().get(&def_id) { - return def.clone(); - } + tcx.trait_defs.memoize(def_id, || { + let unsafety = match it.node { + hir::ItemTrait(unsafety, ..) 
=> unsafety, + _ => span_bug!(it.span, "trait_def_of_item invoked on non-trait"), + }; - let (unsafety, generics) = match it.node { - hir::ItemTrait(unsafety, ref generics, _, _) => { - (unsafety, generics) + let paren_sugar = tcx.has_attr(def_id, "rustc_paren_sugar"); + if paren_sugar && !ccx.tcx.sess.features.borrow().unboxed_closures { + let mut err = ccx.tcx.sess.struct_span_err( + it.span, + "the `#[rustc_paren_sugar]` attribute is a temporary means of controlling \ + which traits can use parenthetical notation"); + help!(&mut err, + "add `#![feature(unboxed_closures)]` to \ + the crate attributes to use it"); + err.emit(); } - _ => span_bug!(it.span, "trait_def_of_item invoked on non-trait"), - }; - let paren_sugar = tcx.has_attr(def_id, "rustc_paren_sugar"); - if paren_sugar && !ccx.tcx.sess.features.borrow().unboxed_closures { - let mut err = ccx.tcx.sess.struct_span_err( - it.span, - "the `#[rustc_paren_sugar]` attribute is a temporary means of controlling \ - which traits can use parenthetical notation"); - help!(&mut err, - "add `#![feature(unboxed_closures)]` to \ - the crate attributes to use it"); - err.emit(); - } - - let ty_generics = generics_of_def_id(ccx, def_id); - let substs = mk_item_substs(&ccx.icx(generics), it.span, def_id); - - let def_path_hash = tcx.def_path(def_id).deterministic_hash(tcx); - - let trait_ref = ty::TraitRef::new(def_id, substs); - let trait_def = ty::TraitDef::new(unsafety, paren_sugar, ty_generics, trait_ref, - def_path_hash); - - tcx.intern_trait_def(trait_def) -} - -pub fn trait_associated_type_names<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, - trait_node_id: ast::NodeId) - -> impl Iterator + 'a -{ - let item = match tcx.map.get(trait_node_id) { - hir_map::NodeItem(item) => item, - _ => bug!("trait_node_id {} is not an item", trait_node_id) - }; - - let trait_items = match item.node { - hir::ItemTrait(.., ref trait_items) => trait_items, - _ => bug!("trait_node_id {} is not a trait", trait_node_id) - }; - - trait_items.iter().filter_map(|trait_item| { - match trait_item.node { - hir::TypeTraitItem(..) => Some(trait_item.name), - _ => None, - } + let def_path_hash = tcx.def_path(def_id).deterministic_hash(tcx); + tcx.alloc_trait_def(ty::TraitDef::new(def_id, unsafety, paren_sugar, def_path_hash)) }) } fn convert_trait_predicates<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) { let tcx = ccx.tcx; - let trait_def = trait_def_of_item(ccx, it); let def_id = ccx.tcx.map.local_def_id(it.id); + generics_of_def_id(ccx, def_id); + trait_def_of_item(ccx, it); + let (generics, items) = match it.node { hir::ItemTrait(_, ref generics, _, ref items) => (generics, items), ref s => { @@ -1346,7 +1252,7 @@ fn convert_trait_predicates<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) } }; - let super_predicates = ccx.tcx.lookup_super_predicates(def_id); + let super_predicates = ccx.tcx.item_super_predicates(def_id); // `ty_generic_predicates` below will consider the bounds on the type // parameters (including `Self`) and the explicit where-clauses, @@ -1357,7 +1263,11 @@ fn convert_trait_predicates<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) // Add in a predicate that `Self:Trait` (where `Trait` is the // current trait). This is needed for builtin bounds. 
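To illustrate the predicate the comment above is describing: inside a trait definition, `Self` is always assumed to implement the trait (and its supertraits), which is what lets a default method hand `self` to code that demands those bounds. A minimal stand-alone example, unrelated to the compiler sources:

```rust
use std::fmt::Debug;

// A free function with its own bounds; the default method below can call it
// with `self` only because `Self: Greet` (and, via the supertrait,
// `Self: Debug`) is assumed inside the trait.
fn log<T: Greet + Debug + ?Sized>(value: &T) {
    println!("{:?} says {:?}", value, value.greeting());
}

trait Greet: Debug {
    fn greeting(&self) -> String;

    fn announce(&self) {
        log(self);
    }
}

#[derive(Debug)]
struct Guest;

impl Greet for Guest {
    fn greeting(&self) -> String {
        "hello".to_string()
    }
}

fn main() {
    Guest.announce();
}
```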
- let self_predicate = trait_def.trait_ref.to_poly_trait_ref().to_predicate(); + let trait_ref = ty::TraitRef { + def_id: def_id, + substs: Substs::identity_for_item(tcx, def_id) + }; + let self_predicate = trait_ref.to_poly_trait_ref().to_predicate(); base_predicates.push(self_predicate); // add in the explicit where-clauses @@ -1367,7 +1277,7 @@ fn convert_trait_predicates<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) let assoc_predicates = predicates_for_associated_types(ccx, generics, &trait_predicates, - trait_def.trait_ref, + trait_ref, items); trait_predicates.predicates.extend(assoc_predicates); @@ -1413,7 +1323,7 @@ fn generics_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let node_id = if let Some(id) = tcx.map.as_local_node_id(def_id) { id } else { - return tcx.lookup_generics(def_id); + return tcx.item_generics(def_id); }; tcx.generics.memoize(def_id, || { use rustc::hir::map::*; @@ -1428,6 +1338,21 @@ fn generics_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let parent_id = tcx.map.get_parent(node_id); Some(tcx.map.local_def_id(parent_id)) } + NodeExpr(&hir::Expr { node: hir::ExprClosure(..), .. }) => { + Some(tcx.closure_base_def_id(def_id)) + } + NodeTy(&hir::Ty { node: hir::TyImplTrait(..), .. }) => { + let mut parent_id = node_id; + loop { + match tcx.map.get(parent_id) { + NodeItem(_) | NodeImplItem(_) | NodeTraitItem(_) => break, + _ => { + parent_id = tcx.map.get_parent_node(parent_id); + } + } + } + Some(tcx.map.local_def_id(parent_id)) + } _ => None }; @@ -1507,13 +1432,11 @@ fn generics_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let mut own_start = has_self as u32; let (parent_regions, parent_types) = parent_def_id.map_or((0, 0), |def_id| { let generics = generics_of_def_id(ccx, def_id); - assert_eq!(generics.parent, None); - assert_eq!(generics.parent_regions, 0); - assert_eq!(generics.parent_types, 0); assert_eq!(has_self, false); parent_has_self = generics.has_self; own_start = generics.count() as u32; - (generics.regions.len() as u32, generics.types.len() as u32) + (generics.parent_regions + generics.regions.len() as u32, + generics.parent_types + generics.types.len() as u32) }); let early_lifetimes = early_bound_lifetimes_from_generics(ccx, ast_generics); @@ -1535,7 +1458,24 @@ fn generics_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let i = type_start + i as u32; get_or_create_type_parameter_def(ccx, ast_generics, i, p, allow_defaults) }); - let types: Vec<_> = opt_self.into_iter().chain(types).collect(); + let mut types: Vec<_> = opt_self.into_iter().chain(types).collect(); + + // provide junk type parameter defs - the only place that + // cares about anything but the length is instantiation, + // and we don't do that for closures. + if let NodeExpr(&hir::Expr { node: hir::ExprClosure(..), .. }) = node { + tcx.with_freevars(node_id, |fv| { + types.extend(fv.iter().enumerate().map(|(i, _)| ty::TypeParameterDef { + index: type_start + i as u32, + name: Symbol::intern(""), + def_id: def_id, + default_def_id: parent_def_id.unwrap(), + default: None, + object_lifetime_default: ty::ObjectLifetimeDefault::BaseDefault, + pure_wrt_drop: false, + })); + }); + } // Debugging aid. 
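The new `NodeExpr`/`NodeTy` arms above make a closure (or an `impl Trait` type) take the generics of its enclosing item as its parent, plus one synthetic type parameter per captured variable. At the language level this mirrors the fact that a closure has no generic parameter list of its own and simply reuses the ones in scope, as in this small, self-contained example:

```rust
// `pair_with` has one type parameter, `T`; the closure it returns declares
// no generics of its own and just closes over `value: T` from the enclosing
// function, which is the relationship the parent-def-id lookup above encodes.
fn pair_with<T: Clone>(value: T) -> impl Fn(T) -> (T, T) {
    move |other| (value.clone(), other)
}

fn main() {
    let pair = pair_with(10_i32);
    assert_eq!(pair(32), (10, 32));
}
```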
if tcx.has_attr(def_id, "rustc_object_lifetime_default") { @@ -1566,12 +1506,15 @@ fn type_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, let node_id = if let Some(id) = ccx.tcx.map.as_local_node_id(def_id) { id } else { - return ccx.tcx.lookup_item_type(def_id).ty; + return ccx.tcx.item_type(def_id); }; - ccx.tcx.tcache.memoize(def_id, || { + ccx.tcx.item_types.memoize(def_id, || { use rustc::hir::map::*; use rustc::hir::*; + // Alway bring in generics, as computing the type needs them. + generics_of_def_id(ccx, def_id); + let ty = match ccx.tcx.map.get(node_id) { NodeItem(item) => { match item.node { @@ -1630,29 +1573,25 @@ fn type_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, } } } + NodeExpr(&hir::Expr { node: hir::ExprClosure(..), .. }) => { + ccx.tcx.mk_closure(def_id, Substs::for_item( + ccx.tcx, def_id, + |def, _| { + let region = def.to_early_bound_region_data(); + ccx.tcx.mk_region(ty::ReEarlyBound(region)) + }, + |def, _| ccx.tcx.mk_param_from_def(def) + )) + } x => { bug!("unexpected sort of node in type_of_def_id(): {:?}", x); } }; - write_ty_to_tcx(ccx, node_id, ty); ty }) } -fn type_scheme_of_def_id<'a, 'tcx>(ccx: &CrateCtxt<'a,'tcx>, - def_id: DefId) - -> ty::TypeScheme<'tcx> { - if def_id.is_local() { - ty::TypeScheme { - generics: generics_of_def_id(ccx, def_id), - ty: type_of_def_id(ccx, def_id) - } - } else { - ccx.tcx.lookup_item_type(def_id) - } -} - fn predicates_of_item<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, it: &hir::Item) -> ty::GenericPredicates<'tcx> { @@ -1684,7 +1623,8 @@ fn convert_foreign_item<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, // moral failing, but at the moment it seems like the only // convenient way to extract the ABI. - ndm let def_id = ccx.tcx.map.local_def_id(it.id); - type_scheme_of_def_id(ccx, def_id); + type_of_def_id(ccx, def_id); + generics_of_def_id(ccx, def_id); let no_generics = hir::Generics::empty(); let generics = match it.node { @@ -1697,11 +1637,10 @@ fn convert_foreign_item<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, assert!(prev_predicates.is_none()); } -// Add the Sized bound, unless the type parameter is marked as `?Sized`. -fn add_unsized_bound<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, - bounds: &mut ty::BuiltinBounds, - ast_bounds: &[hir::TyParamBound], - span: Span) +// Is it marked with ?Sized +fn is_unsized<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, + ast_bounds: &[hir::TyParamBound], + span: Span) -> bool { let tcx = astconv.tcx(); @@ -1725,22 +1664,22 @@ fn add_unsized_bound<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, Some(ref tpb) => { // FIXME(#8559) currently requires the unbound to be built-in. if let Ok(kind_id) = kind_id { - let trait_def = tcx.expect_def(tpb.ref_id); - if trait_def != Def::Trait(kind_id) { + if tpb.path.def != Def::Trait(kind_id) { tcx.sess.span_warn(span, "default bound relaxed for a type parameter, but \ this does nothing because the given bound is not \ a default. Only `?Sized` is supported"); - tcx.try_add_builtin_trait(kind_id, bounds); } } } _ if kind_id.is_ok() => { - tcx.try_add_builtin_trait(kind_id.unwrap(), bounds); + return false; } // No lang item for Sized, so we can't add it as a bound. 
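The `is_unsized` helper introduced above no longer pushes a builtin `Sized` bound; it just reports whether the parameter carries the `?Sized` relaxation, and the `compute_bounds` change further below stores the result as `implicitly_sized`. The language rule being detected is the ordinary one, shown in this unrelated snippet: type parameters are `Sized` by default and `?Sized` is the only accepted relaxation.

```rust
use std::fmt::Display;

// Type parameters get an implicit `Sized` bound; writing `?Sized` is the one
// relaxation the compiler accepts, and it is what `is_unsized` above scans
// the bound list for.
fn show<T: Display + ?Sized>(value: &T) {
    println!("{}", value);
}

fn main() {
    // `str` is a dynamically sized type, so this call needs the `?Sized`
    // relaxation on `T`.
    let message: &str = "dynamically sized";
    show(message);

    // Sized types are still accepted as before.
    show(&42_u32);
}
```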
None => {} } + + true } /// Returns the early-bound lifetimes declared in this generics @@ -1778,7 +1717,7 @@ fn ty_generic_predicates<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, let ref base_predicates = match parent { Some(def_id) => { assert_eq!(super_predicates, vec![]); - tcx.lookup_predicates(def_id) + tcx.item_predicates(def_id) } None => { ty::GenericPredicates { @@ -1952,9 +1891,9 @@ fn compute_object_lifetime_default<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, { let inline_bounds = from_bounds(ccx, param_bounds); let where_bounds = from_predicates(ccx, param_id, &where_clause.predicates); - let all_bounds: FnvHashSet<_> = inline_bounds.into_iter() - .chain(where_bounds) - .collect(); + let all_bounds: FxHashSet<_> = inline_bounds.into_iter() + .chain(where_bounds) + .collect(); return if all_bounds.len() > 1 { ty::ObjectLifetimeDefault::Ambiguous } else if all_bounds.len() == 0 { @@ -2022,14 +1961,9 @@ pub fn compute_bounds<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, { let tcx = astconv.tcx(); let PartitionedBounds { - mut builtin_bounds, trait_bounds, region_bounds - } = partition_bounds(tcx, span, &ast_bounds); - - if let SizedByDefault::Yes = sized_by_default { - add_unsized_bound(astconv, &mut builtin_bounds, ast_bounds, span); - } + } = partition_bounds(&ast_bounds); let mut projection_bounds = vec![]; @@ -2047,9 +1981,15 @@ pub fn compute_bounds<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, trait_bounds.sort_by(|a,b| a.def_id().cmp(&b.def_id())); + let implicitly_sized = if let SizedByDefault::Yes = sized_by_default { + !is_unsized(astconv, ast_bounds, span) + } else { + false + }; + Bounds { region_bounds: region_bounds, - builtin_bounds: builtin_bounds, + implicitly_sized: implicitly_sized, trait_bounds: trait_bounds, projection_bounds: projection_bounds, } @@ -2136,9 +2076,7 @@ fn compute_type_of_foreign_fn_decl<'a, 'tcx>( ccx.tcx.mk_fn_def(def_id, substs, ccx.tcx.mk_bare_fn(ty::BareFnTy { abi: abi, unsafety: hir::Unsafety::Unsafe, - sig: ty::Binder(ty::FnSig {inputs: input_tys, - output: output, - variadic: decl.variadic}), + sig: ty::Binder(ccx.tcx.mk_fn_sig(input_tys.into_iter(), output, decl.variadic)), })) } @@ -2154,113 +2092,5 @@ pub fn mk_item_substs<'gcx: 'tcx, 'tcx>(astconv: &AstConv<'gcx, 'tcx>, bug!("ErrorReported returned, but no errors reports?") } - Substs::for_item(tcx, def_id, - |def, _| tcx.mk_region(def.to_early_bound_region()), - |def, _| tcx.mk_param_from_def(def)) -} - -/// Checks that all the type parameters on an impl -fn enforce_impl_params_are_constrained<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - generics: &hir::Generics, - impl_predicates: &mut ty::GenericPredicates<'tcx>, - impl_def_id: DefId) -{ - let impl_scheme = ccx.tcx.lookup_item_type(impl_def_id); - let impl_trait_ref = ccx.tcx.impl_trait_ref(impl_def_id); - - // The trait reference is an input, so find all type parameters - // reachable from there, to start (if this is an inherent impl, - // then just examine the self type). 
- let mut input_parameters: FnvHashSet<_> = - ctp::parameters_for(&impl_scheme.ty, false).into_iter().collect(); - if let Some(ref trait_ref) = impl_trait_ref { - input_parameters.extend(ctp::parameters_for(trait_ref, false)); - } - - ctp::setup_constraining_predicates(&mut impl_predicates.predicates, - impl_trait_ref, - &mut input_parameters); - - let ty_generics = generics_of_def_id(ccx, impl_def_id); - for (ty_param, param) in ty_generics.types.iter().zip(&generics.ty_params) { - let param_ty = ty::ParamTy::for_def(ty_param); - if !input_parameters.contains(&ctp::Parameter::from(param_ty)) { - report_unused_parameter(ccx, param.span, "type", ¶m_ty.to_string()); - } - } -} - -fn enforce_impl_lifetimes_are_constrained<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - ast_generics: &hir::Generics, - impl_def_id: DefId, - impl_items: &[hir::ImplItem]) -{ - // Every lifetime used in an associated type must be constrained. - let impl_scheme = ccx.tcx.lookup_item_type(impl_def_id); - let impl_predicates = ccx.tcx.lookup_predicates(impl_def_id); - let impl_trait_ref = ccx.tcx.impl_trait_ref(impl_def_id); - - let mut input_parameters: FnvHashSet<_> = - ctp::parameters_for(&impl_scheme.ty, false).into_iter().collect(); - if let Some(ref trait_ref) = impl_trait_ref { - input_parameters.extend(ctp::parameters_for(trait_ref, false)); - } - ctp::identify_constrained_type_params( - &impl_predicates.predicates.as_slice(), impl_trait_ref, &mut input_parameters); - - let lifetimes_in_associated_types: FnvHashSet<_> = impl_items.iter() - .map(|item| ccx.tcx.impl_or_trait_item(ccx.tcx.map.local_def_id(item.id))) - .filter_map(|item| match item { - ty::TypeTraitItem(ref assoc_ty) => assoc_ty.ty, - ty::ConstTraitItem(..) | ty::MethodTraitItem(..) => None - }) - .flat_map(|ty| ctp::parameters_for(&ty, true)) - .collect(); - - for (ty_lifetime, lifetime) in impl_scheme.generics.regions.iter() - .zip(&ast_generics.lifetimes) - { - let param = ctp::Parameter::from(ty_lifetime.to_early_bound_region_data()); - - if - lifetimes_in_associated_types.contains(¶m) && // (*) - !input_parameters.contains(¶m) - { - report_unused_parameter(ccx, lifetime.lifetime.span, - "lifetime", &lifetime.lifetime.name.to_string()); - } - } - - // (*) This is a horrible concession to reality. I think it'd be - // better to just ban unconstrianed lifetimes outright, but in - // practice people do non-hygenic macros like: - // - // ``` - // macro_rules! __impl_slice_eq1 { - // ($Lhs: ty, $Rhs: ty, $Bound: ident) => { - // impl<'a, 'b, A: $Bound, B> PartialEq<$Rhs> for $Lhs where A: PartialEq { - // .... - // } - // } - // } - // ``` - // - // In a concession to backwards compatbility, we continue to - // permit those, so long as the lifetimes aren't used in - // associated types. I believe this is sound, because lifetimes - // used elsewhere are not projected back out. 
-} - -fn report_unused_parameter(ccx: &CrateCtxt, - span: Span, - kind: &str, - name: &str) -{ - struct_span_err!( - ccx.tcx.sess, span, E0207, - "the {} parameter `{}` is not constrained by the \ - impl trait, self type, or predicates", - kind, name) - .span_label(span, &format!("unconstrained {} parameter", kind)) - .emit(); + Substs::identity_for_item(tcx, def_id) } diff --git a/src/librustc_typeck/constrained_type_params.rs b/src/librustc_typeck/constrained_type_params.rs index 39f9e4316b..22be449127 100644 --- a/src/librustc_typeck/constrained_type_params.rs +++ b/src/librustc_typeck/constrained_type_params.rs @@ -10,7 +10,7 @@ use rustc::ty::{self, Ty}; use rustc::ty::fold::{TypeFoldable, TypeVisitor}; -use rustc::util::nodemap::FnvHashSet; +use rustc::util::nodemap::FxHashSet; #[derive(Clone, PartialEq, Eq, Hash, Debug)] pub struct Parameter(pub u32); @@ -23,6 +23,18 @@ impl From for Parameter { fn from(param: ty::EarlyBoundRegion) -> Self { Parameter(param.index) } } +/// Return the set of parameters constrained by the impl header. +pub fn parameters_for_impl<'tcx>(impl_self_ty: Ty<'tcx>, + impl_trait_ref: Option>) + -> FxHashSet +{ + let vec = match impl_trait_ref { + Some(tr) => parameters_for(&tr, false), + None => parameters_for(&impl_self_ty, false), + }; + vec.into_iter().collect() +} + /// If `include_projections` is false, returns the list of parameters that are /// constrained by `t` - i.e. the value of each parameter in the list is /// uniquely determined by `t` (see RFC 447). If it is true, return the list @@ -76,7 +88,7 @@ impl<'tcx> TypeVisitor<'tcx> for ParameterCollector { pub fn identify_constrained_type_params<'tcx>(predicates: &[ty::Predicate<'tcx>], impl_trait_ref: Option>, - input_parameters: &mut FnvHashSet) + input_parameters: &mut FxHashSet) { let mut predicates = predicates.to_owned(); setup_constraining_predicates(&mut predicates, impl_trait_ref, input_parameters); @@ -125,7 +137,7 @@ pub fn identify_constrained_type_params<'tcx>(predicates: &[ty::Predicate<'tcx>] /// think of any. pub fn setup_constraining_predicates<'tcx>(predicates: &mut [ty::Predicate<'tcx>], impl_trait_ref: Option>, - input_parameters: &mut FnvHashSet) + input_parameters: &mut FxHashSet) { // The canonical way of doing the needed topological sort // would be a DFS, but getting the graph and its ownership diff --git a/src/librustc_typeck/diagnostics.rs b/src/librustc_typeck/diagnostics.rs index be012d8976..71507063ff 100644 --- a/src/librustc_typeck/diagnostics.rs +++ b/src/librustc_typeck/diagnostics.rs @@ -1355,7 +1355,7 @@ extern "rust-intrinsic" { } ``` -Please check that you provided the right number of lifetime parameters +Please check that you provided the right number of type parameters and verify with the function declaration in the Rust source code. Example: @@ -1377,6 +1377,7 @@ let x = |_| {}; // error: cannot determine a type for this expression ``` You have two possibilities to solve this situation: + * Give an explicit definition of the expression * Infer the expression @@ -2777,8 +2778,8 @@ fn main() { } ``` -Builtin traits are an exception to this rule: it's possible to have bounds of -one non-builtin type, plus any number of builtin types. For example, the +Send and Sync are an exception to this rule: it's possible to have bounds of +one non-builtin trait, plus either or both of Send and Sync. For example, the following compiles correctly: ``` @@ -4163,6 +4164,33 @@ target / ABI combination is currently unsupported by llvm. 
 If necessary, you can circumvent this check using custom target specifications.
 "##,
 
+E0572: r##"
+A return statement was found outside of a function body.
+
+Erroneous code example:
+
+```compile_fail,E0572
+const FOO: u32 = return 0; // error: return statement outside of function body
+
+fn main() {}
+```
+
+To fix this issue, just remove the return keyword or move the expression into a
+function. Example:
+
+```
+const FOO: u32 = 0;
+
+fn some_fn() -> u32 {
+    return FOO;
+}
+
+fn main() {
+    some_fn();
+}
+```
+"##,
+
 }
 
 register_diagnostics! {
diff --git a/src/librustc_typeck/impl_wf_check.rs b/src/librustc_typeck/impl_wf_check.rs
new file mode 100644
index 0000000000..9f5b73d9b3
--- /dev/null
+++ b/src/librustc_typeck/impl_wf_check.rs
@@ -0,0 +1,203 @@
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! This pass enforces various "well-formedness constraints" on impls.
+//! Logically, it is part of wfcheck -- but we do it early so that we
+//! can stop compilation afterwards, since part of the trait matching
+//! infrastructure gets very grumpy if these conditions don't hold. In
+//! particular, if there are type parameters that are not part of the
+//! impl, then coherence will report strange inference ambiguity
+//! errors; if impls have duplicate items, we get misleading
+//! specialization errors. These things can (and probably should) be
+//! fixed, but for the moment it's easier to do these checks early.
+
+use constrained_type_params as ctp;
+use rustc::dep_graph::DepNode;
+use rustc::hir;
+use rustc::hir::itemlikevisit::ItemLikeVisitor;
+use rustc::hir::def_id::DefId;
+use rustc::ty;
+use rustc::util::nodemap::{FxHashMap, FxHashSet};
+use std::collections::hash_map::Entry::{Occupied, Vacant};
+
+use syntax_pos::Span;
+
+use CrateCtxt;
+
+/// Checks that all the type/lifetime parameters on an impl also
+/// appear in the trait ref or self-type (or are constrained by a
+/// where-clause). These rules are needed to ensure that, given a
+/// trait ref like `<T as Trait<U>>`, we can derive the values of all
+/// parameters on the impl (which is needed to make specialization
+/// possible).
+///
+/// However, in the case of lifetimes, we only enforce these rules if
+/// the lifetime parameter is used in an associated type. This is a
+/// concession to backwards compatibility; see comment at the end of
+/// the fn for details.
+///
+/// Example:
+///
+/// ```
+/// impl<T> Trait<Foo> for Bar { ... }
+///      ^ T does not appear in `Foo` or `Bar`, error!
+///
+/// impl<T> Trait<Foo<T>> for Bar { ... }
+///      ^ T appears in `Foo<T>`, ok.
+///
+/// impl<T> Trait<Foo> for Bar where Bar: Iterator<Item=T> { ... }
+///      ^ T is bound to `<Bar as Iterator>::Item`, ok.
+///
+/// impl<'a> Trait<Foo> for Bar { }
+///      ^ 'a is unused, but for back-compat we allow it
+///
+/// impl<'a> Trait<Foo> for Bar { type X = &'a i32; }
+///      ^ 'a is unused and appears in assoc type, error
+/// ```
+pub fn impl_wf_check<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>) {
+    // We will tag this as part of the WF check -- logically, it is,
+    // but it's one that we must perform earlier than the rest of
+    // WfCheck.
+    ccx.tcx.visit_all_item_likes_in_krate(DepNode::WfCheck, &mut ImplWfCheck { ccx: ccx });
+}
+
+struct ImplWfCheck<'a, 'tcx: 'a> {
+    ccx: &'a CrateCtxt<'a, 'tcx>,
+}
+
+impl<'a, 'tcx> ItemLikeVisitor<'tcx> for ImplWfCheck<'a, 'tcx> {
+    fn visit_item(&mut self, item: &'tcx hir::Item) {
+        match item.node {
+            hir::ItemImpl(.., ref generics, _, _, ref impl_item_refs) => {
+                let impl_def_id = self.ccx.tcx.map.local_def_id(item.id);
+                enforce_impl_params_are_constrained(self.ccx,
+                                                    generics,
+                                                    impl_def_id,
+                                                    impl_item_refs);
+                enforce_impl_items_are_distinct(self.ccx, impl_item_refs);
+            }
+            _ => { }
+        }
+    }
+
+    fn visit_impl_item(&mut self, _impl_item: &'tcx hir::ImplItem) { }
+}
+
+fn enforce_impl_params_are_constrained<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>,
+                                                 impl_hir_generics: &hir::Generics,
+                                                 impl_def_id: DefId,
+                                                 impl_item_refs: &[hir::ImplItemRef])
+{
+    // Every lifetime used in an associated type must be constrained.
+    let impl_self_ty = ccx.tcx.item_type(impl_def_id);
+    let impl_generics = ccx.tcx.item_generics(impl_def_id);
+    let impl_predicates = ccx.tcx.item_predicates(impl_def_id);
+    let impl_trait_ref = ccx.tcx.impl_trait_ref(impl_def_id);
+
+    let mut input_parameters = ctp::parameters_for_impl(impl_self_ty, impl_trait_ref);
+    ctp::identify_constrained_type_params(
+        &impl_predicates.predicates.as_slice(), impl_trait_ref, &mut input_parameters);
+
+    // Disallow ANY unconstrained type parameters.
+    for (ty_param, param) in impl_generics.types.iter().zip(&impl_hir_generics.ty_params) {
+        let param_ty = ty::ParamTy::for_def(ty_param);
+        if !input_parameters.contains(&ctp::Parameter::from(param_ty)) {
+            report_unused_parameter(ccx, param.span, "type", &param_ty.to_string());
+        }
+    }
+
+    // Disallow unconstrained lifetimes, but only if they appear in assoc types.
+    let lifetimes_in_associated_types: FxHashSet<_> = impl_item_refs.iter()
+        .map(|item_ref| ccx.tcx.map.local_def_id(item_ref.id.node_id))
+        .filter(|&def_id| {
+            let item = ccx.tcx.associated_item(def_id);
+            item.kind == ty::AssociatedKind::Type && item.defaultness.has_value()
+        })
+        .flat_map(|def_id| {
+            ctp::parameters_for(&ccx.tcx.item_type(def_id), true)
+        }).collect();
+    for (ty_lifetime, lifetime) in impl_generics.regions.iter()
+        .zip(&impl_hir_generics.lifetimes)
+    {
+        let param = ctp::Parameter::from(ty_lifetime.to_early_bound_region_data());
+
+        if
+            lifetimes_in_associated_types.contains(&param) && // (*)
+            !input_parameters.contains(&param)
+        {
+            report_unused_parameter(ccx, lifetime.lifetime.span,
+                                    "lifetime", &lifetime.lifetime.name.to_string());
+        }
+    }
+
+    // (*) This is a horrible concession to reality. I think it'd be
+    // better to just ban unconstrained lifetimes outright, but in
+    // practice people do non-hygienic macros like:
+    //
+    // ```
+    // macro_rules! __impl_slice_eq1 {
+    //     ($Lhs: ty, $Rhs: ty, $Bound: ident) => {
+    //         impl<'a, 'b, A: $Bound, B> PartialEq<$Rhs> for $Lhs where A: PartialEq<B> {
+    //             ....
+    //         }
+    //     }
+    // }
+    // ```
+    //
+    // In a concession to backwards compatibility, we continue to
+    // permit those, so long as the lifetimes aren't used in
+    // associated types. I believe this is sound, because lifetimes
+    // used elsewhere are not projected back out.
+} + +fn report_unused_parameter(ccx: &CrateCtxt, + span: Span, + kind: &str, + name: &str) +{ + struct_span_err!( + ccx.tcx.sess, span, E0207, + "the {} parameter `{}` is not constrained by the \ + impl trait, self type, or predicates", + kind, name) + .span_label(span, &format!("unconstrained {} parameter", kind)) + .emit(); +} + +/// Enforce that we do not have two items in an impl with the same name. +fn enforce_impl_items_are_distinct<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, + impl_item_refs: &[hir::ImplItemRef]) +{ + let tcx = ccx.tcx; + let mut seen_type_items = FxHashMap(); + let mut seen_value_items = FxHashMap(); + for impl_item_ref in impl_item_refs { + let impl_item = tcx.map.impl_item(impl_item_ref.id); + let seen_items = match impl_item.node { + hir::ImplItemKind::Type(_) => &mut seen_type_items, + _ => &mut seen_value_items, + }; + match seen_items.entry(impl_item.name) { + Occupied(entry) => { + let mut err = struct_span_err!(tcx.sess, impl_item.span, E0201, + "duplicate definitions with name `{}`:", + impl_item.name); + err.span_label(*entry.get(), + &format!("previous definition of `{}` here", + impl_item.name)); + err.span_label(impl_item.span, &format!("duplicate definition")); + err.emit(); + } + Vacant(entry) => { + entry.insert(impl_item.span); + } + } + } +} diff --git a/src/librustc_typeck/lib.rs b/src/librustc_typeck/lib.rs index d7573c7a7b..50d4c3cd0c 100644 --- a/src/librustc_typeck/lib.rs +++ b/src/librustc_typeck/lib.rs @@ -44,7 +44,7 @@ independently: into the `ty` representation - collect: computes the types of each top-level item and enters them into - the `cx.tcache` table for later use + the `tcx.types` table for later use - coherence: enforces coherence rules, builds some tables @@ -77,12 +77,10 @@ This API is completely unstable and subject to change. 
#![feature(box_patterns)] #![feature(box_syntax)] #![feature(conservative_impl_trait)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(quote)] #![feature(rustc_diagnostic_macros)] #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #[macro_use] extern crate log; #[macro_use] extern crate syntax; @@ -95,6 +93,7 @@ extern crate rustc_platform_intrinsics as intrinsics; extern crate rustc_back; extern crate rustc_const_math; extern crate rustc_const_eval; +extern crate rustc_data_structures; extern crate rustc_errors as errors; pub use rustc::dep_graph; @@ -106,17 +105,18 @@ pub use rustc::util; use dep_graph::DepNode; use hir::map as hir_map; -use rustc::infer::{InferOk, TypeOrigin}; +use rustc::infer::InferOk; use rustc::ty::subst::Substs; -use rustc::ty::{self, Ty, TyCtxt, TypeFoldable}; -use rustc::traits::{self, Reveal}; -use session::{config, CompileResult}; +use rustc::ty::{self, Ty, TyCtxt}; +use rustc::traits::{self, ObligationCause, ObligationCauseCode, Reveal}; +use session::config; use util::common::time; use syntax::ast; use syntax::abi::Abi; use syntax_pos::Span; +use std::iter; use std::cell::RefCell; use util::nodemap::NodeMap; @@ -130,6 +130,7 @@ mod rscope; mod astconv; pub mod collect; mod constrained_type_params; +mod impl_wf_check; pub mod coherence; pub mod variance; @@ -159,27 +160,6 @@ pub struct CrateCtxt<'a, 'tcx: 'a> { pub deferred_obligations: RefCell>>>, } -// Functions that write types into the node type table -fn write_ty_to_tcx<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, node_id: ast::NodeId, ty: Ty<'tcx>) { - debug!("write_ty_to_tcx({}, {:?})", node_id, ty); - assert!(!ty.needs_infer()); - ccx.tcx.node_type_insert(node_id, ty); -} - -fn write_substs_to_tcx<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - node_id: ast::NodeId, - item_substs: ty::ItemSubsts<'tcx>) { - if !item_substs.is_noop() { - debug!("write_substs_to_tcx({}, {:?})", - node_id, - item_substs); - - assert!(!item_substs.substs.needs_infer()); - - ccx.tcx.tables.borrow_mut().item_substs.insert(node_id, item_substs); - } -} - fn require_c_abi_if_variadic(tcx: TyCtxt, decl: &hir::FnDecl, abi: Abi, @@ -193,19 +173,19 @@ fn require_c_abi_if_variadic(tcx: TyCtxt, } fn require_same_types<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, - origin: TypeOrigin, + cause: &ObligationCause<'tcx>, expected: Ty<'tcx>, actual: Ty<'tcx>) -> bool { ccx.tcx.infer_ctxt(None, None, Reveal::NotSpecializable).enter(|infcx| { - match infcx.eq_types(false, origin.clone(), expected, actual) { + match infcx.eq_types(false, &cause, expected, actual) { Ok(InferOk { obligations, .. }) => { // FIXME(#32730) propagate obligations assert!(obligations.is_empty()); true } Err(err) => { - infcx.report_mismatched_types(origin, expected, actual, err); + infcx.report_mismatched_types(cause, expected, actual, err); false } } @@ -216,7 +196,8 @@ fn check_main_fn_ty(ccx: &CrateCtxt, main_id: ast::NodeId, main_span: Span) { let tcx = ccx.tcx; - let main_t = tcx.tables().node_id_to_type(main_id); + let main_def_id = tcx.map.local_def_id(main_id); + let main_t = tcx.item_type(main_def_id); match main_t.sty { ty::TyFnDef(..) 
=> { match tcx.map.find(main_id) { @@ -237,22 +218,17 @@ fn check_main_fn_ty(ccx: &CrateCtxt, } _ => () } - let main_def_id = tcx.map.local_def_id(main_id); let substs = tcx.intern_substs(&[]); let se_ty = tcx.mk_fn_def(main_def_id, substs, tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: Vec::new(), - output: tcx.mk_nil(), - variadic: false - }) + sig: ty::Binder(tcx.mk_fn_sig(iter::empty(), tcx.mk_nil(), false)) })); require_same_types( ccx, - TypeOrigin::MainFunctionType(main_span), + &ObligationCause::new(main_span, main_id, ObligationCauseCode::MainFunctionType), se_ty, main_t); } @@ -268,7 +244,8 @@ fn check_start_fn_ty(ccx: &CrateCtxt, start_id: ast::NodeId, start_span: Span) { let tcx = ccx.tcx; - let start_t = tcx.tables().node_id_to_type(start_id); + let start_def_id = ccx.tcx.map.local_def_id(start_id); + let start_t = tcx.item_type(start_def_id); match start_t.sty { ty::TyFnDef(..) => { match tcx.map.find(start_id) { @@ -289,25 +266,24 @@ fn check_start_fn_ty(ccx: &CrateCtxt, _ => () } - let start_def_id = ccx.tcx.map.local_def_id(start_id); let substs = tcx.intern_substs(&[]); let se_ty = tcx.mk_fn_def(start_def_id, substs, tcx.mk_bare_fn(ty::BareFnTy { unsafety: hir::Unsafety::Normal, abi: Abi::Rust, - sig: ty::Binder(ty::FnSig { - inputs: vec![ + sig: ty::Binder(tcx.mk_fn_sig( + [ tcx.types.isize, tcx.mk_imm_ptr(tcx.mk_imm_ptr(tcx.types.u8)) - ], - output: tcx.types.isize, - variadic: false, - }), + ].iter().cloned(), + tcx.types.isize, + false, + )), })); require_same_types( ccx, - TypeOrigin::StartFunctionType(start_span), + &ObligationCause::new(start_span, start_id, ObligationCauseCode::StartFunctionType), se_ty, start_t); } @@ -333,7 +309,7 @@ fn check_for_entry_fn(ccx: &CrateCtxt) { } pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) - -> CompileResult { + -> Result>, usize> { let time_passes = tcx.sess.time_passes(); let ccx = CrateCtxt { ast_ty_to_ty_cache: RefCell::new(NodeMap()), @@ -354,6 +330,11 @@ pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) time(time_passes, "variance inference", || variance::infer_variance(tcx)); + tcx.sess.track_errors(|| { + time(time_passes, "impl wf inference", || + impl_wf_check::impl_wf_check(&ccx)); + })?; + tcx.sess.track_errors(|| { time(time_passes, "coherence checking", || coherence::check_coherence(&ccx)); @@ -372,7 +353,7 @@ pub fn check_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>) let err_count = tcx.sess.err_count(); if err_count == 0 { - Ok(()) + Ok(ccx.ast_ty_to_ty_cache.into_inner()) } else { Err(err_count) } diff --git a/src/librustc_typeck/variance/constraints.rs b/src/librustc_typeck/variance/constraints.rs index c9e93a1a46..39f996ee62 100644 --- a/src/librustc_typeck/variance/constraints.rs +++ b/src/librustc_typeck/variance/constraints.rs @@ -22,7 +22,7 @@ use rustc::ty::maps::ItemVariances; use rustc::hir::map as hir_map; use syntax::ast; use rustc::hir; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use super::terms::*; use super::terms::VarianceTerm::*; @@ -65,13 +65,13 @@ pub fn add_constraints_from_crate<'a, 'tcx>(terms_cx: TermsContext<'a, 'tcx>) }; // See README.md for a discussion on dep-graph management. 
- tcx.visit_all_items_in_krate(|def_id| ItemVariances::to_dep_node(&def_id), - &mut constraint_cx); + tcx.visit_all_item_likes_in_krate(|def_id| ItemVariances::to_dep_node(&def_id), + &mut constraint_cx); constraint_cx } -impl<'a, 'tcx, 'v> Visitor<'v> for ConstraintContext<'a, 'tcx> { +impl<'a, 'tcx, 'v> ItemLikeVisitor<'v> for ConstraintContext<'a, 'tcx> { fn visit_item(&mut self, item: &hir::Item) { let tcx = self.terms_cx.tcx; let did = tcx.map.local_def_id(item.id); @@ -82,29 +82,33 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ConstraintContext<'a, 'tcx> { hir::ItemEnum(..) | hir::ItemStruct(..) | hir::ItemUnion(..) => { - let scheme = tcx.lookup_item_type(did); + let generics = tcx.item_generics(did); // Not entirely obvious: constraints on structs/enums do not // affect the variance of their type parameters. See discussion // in comment at top of module. // - // self.add_constraints_from_generics(&scheme.generics); + // self.add_constraints_from_generics(generics); for field in tcx.lookup_adt_def(did).all_fields() { - self.add_constraints_from_ty(&scheme.generics, - field.unsubst_ty(), + self.add_constraints_from_ty(generics, + tcx.item_type(field.did), self.covariant); } } hir::ItemTrait(..) => { - let trait_def = tcx.lookup_trait_def(did); - self.add_constraints_from_trait_ref(&trait_def.generics, - trait_def.trait_ref, + let generics = tcx.item_generics(did); + let trait_ref = ty::TraitRef { + def_id: did, + substs: Substs::identity_for_item(tcx, did) + }; + self.add_constraints_from_trait_ref(generics, + trait_ref, self.invariant); } hir::ItemExternCrate(_) | - hir::ItemUse(_) | + hir::ItemUse(..) | hir::ItemStatic(..) | hir::ItemConst(..) | hir::ItemFn(..) | @@ -115,6 +119,9 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ConstraintContext<'a, 'tcx> { hir::ItemDefaultImpl(..) => {} } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } /// Is `param_id` a lifetime according to `map`? @@ -276,7 +283,7 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { trait_ref, variance); - let trait_def = self.tcx().lookup_trait_def(trait_ref.def_id); + let trait_generics = self.tcx().item_generics(trait_ref.def_id); // This edge is actually implied by the call to // `lookup_trait_def`, but I'm trying to be future-proof. See @@ -285,8 +292,8 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { self.add_constraints_from_substs(generics, trait_ref.def_id, - &trait_def.generics.types, - &trait_def.generics.regions, + &trait_generics.types, + &trait_generics.regions, trait_ref.substs, variance); } @@ -336,7 +343,7 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { } ty::TyAdt(def, substs) => { - let item_type = self.tcx().lookup_item_type(def.did); + let adt_generics = self.tcx().item_generics(def.did); // This edge is actually implied by the call to // `lookup_trait_def`, but I'm trying to be future-proof. See @@ -345,15 +352,15 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { self.add_constraints_from_substs(generics, def.did, - &item_type.generics.types, - &item_type.generics.regions, + &adt_generics.types, + &adt_generics.regions, substs, variance); } ty::TyProjection(ref data) => { let trait_ref = &data.trait_ref; - let trait_def = self.tcx().lookup_trait_def(trait_ref.def_id); + let trait_generics = self.tcx().item_generics(trait_ref.def_id); // This edge is actually implied by the call to // `lookup_trait_def`, but I'm trying to be future-proof. 
See @@ -362,21 +369,23 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { self.add_constraints_from_substs(generics, trait_ref.def_id, - &trait_def.generics.types, - &trait_def.generics.regions, + &trait_generics.types, + &trait_generics.regions, trait_ref.substs, variance); } - ty::TyTrait(ref data) => { + ty::TyDynamic(ref data, r) => { // The type `Foo` is contravariant w/r/t `'a`: let contra = self.contravariant(variance); - self.add_constraints_from_region(generics, data.region_bound, contra); + self.add_constraints_from_region(generics, r, contra); - let poly_trait_ref = data.principal.with_self_ty(self.tcx(), self.tcx().types.err); - self.add_constraints_from_trait_ref(generics, poly_trait_ref.0, variance); + if let Some(p) = data.principal() { + let poly_trait_ref = p.with_self_ty(self.tcx(), self.tcx().types.err); + self.add_constraints_from_trait_ref(generics, poly_trait_ref.0, variance); + } - for projection in &data.projection_bounds { + for projection in data.projection_bounds() { self.add_constraints_from_ty(generics, projection.0.ty, self.invariant); } } @@ -458,10 +467,10 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { sig: &ty::PolyFnSig<'tcx>, variance: VarianceTermPtr<'a>) { let contra = self.contravariant(variance); - for &input in &sig.0.inputs { + for &input in sig.0.inputs() { self.add_constraints_from_ty(generics, input, contra); } - self.add_constraints_from_ty(generics, sig.0.output, variance); + self.add_constraints_from_ty(generics, sig.0.output(), variance); } /// Adds constraints appropriate for a region appearing in a diff --git a/src/librustc_typeck/variance/terms.rs b/src/librustc_typeck/variance/terms.rs index f6732f36e3..851cfcd872 100644 --- a/src/librustc_typeck/variance/terms.rs +++ b/src/librustc_typeck/variance/terms.rs @@ -27,7 +27,7 @@ use std::fmt; use std::rc::Rc; use syntax::ast; use rustc::hir; -use rustc::hir::intravisit::Visitor; +use rustc::hir::itemlikevisit::ItemLikeVisitor; use util::nodemap::NodeMap; use self::VarianceTerm::*; @@ -109,7 +109,7 @@ pub fn determine_parameters_to_be_inferred<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx> }; // See README.md for a discussion on dep-graph management. - tcx.visit_all_items_in_krate(|def_id| ItemVariances::to_dep_node(&def_id), &mut terms_cx); + tcx.visit_all_item_likes_in_krate(|def_id| ItemVariances::to_dep_node(&def_id), &mut terms_cx); terms_cx } @@ -227,7 +227,7 @@ impl<'a, 'tcx> TermsContext<'a, 'tcx> { } } -impl<'a, 'tcx, 'v> Visitor<'v> for TermsContext<'a, 'tcx> { +impl<'a, 'tcx, 'v> ItemLikeVisitor<'v> for TermsContext<'a, 'tcx> { fn visit_item(&mut self, item: &hir::Item) { debug!("add_inferreds for item {}", self.tcx.map.node_to_string(item.id)); @@ -246,7 +246,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for TermsContext<'a, 'tcx> { } hir::ItemExternCrate(_) | - hir::ItemUse(_) | + hir::ItemUse(..) | hir::ItemDefaultImpl(..) | hir::ItemImpl(..) | hir::ItemStatic(..) | @@ -257,4 +257,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for TermsContext<'a, 'tcx> { hir::ItemTy(..) 
=> {} } } + + fn visit_impl_item(&mut self, _impl_item: &hir::ImplItem) { + } } diff --git a/src/librustdoc/clean/inline.rs b/src/librustdoc/clean/inline.rs index 31497b6bd3..94e9fdbfc3 100644 --- a/src/librustdoc/clean/inline.rs +++ b/src/librustdoc/clean/inline.rs @@ -18,8 +18,8 @@ use rustc::hir; use rustc::hir::def::{Def, CtorKind}; use rustc::hir::def_id::DefId; use rustc::hir::print as pprust; -use rustc::ty::{self, TyCtxt}; -use rustc::util::nodemap::FnvHashSet; +use rustc::ty; +use rustc::util::nodemap::FxHashSet; use rustc_const_eval::lookup_const_by_id; @@ -29,31 +29,24 @@ use clean::{self, GetDefId}; use super::Clean; -/// Attempt to inline the definition of a local node id into this AST. +/// Attempt to inline a definition into this AST. /// -/// This function will fetch the definition of the id specified, and if it is -/// from another crate it will attempt to inline the documentation from the -/// other crate into this crate. +/// This function will fetch the definition specified, and if it is +/// from another crate it will attempt to inline the documentation +/// from the other crate into this crate. /// /// This is primarily used for `pub use` statements which are, in general, /// implementation details. Inlining the documentation should help provide a /// better experience when reading the documentation in this use case. /// -/// The returned value is `None` if the `id` could not be inlined, and `Some` -/// of a vector of items if it was successfully expanded. -pub fn try_inline(cx: &DocContext, id: ast::NodeId, into: Option) +/// The returned value is `None` if the definition could not be inlined, +/// and `Some` of a vector of items if it was successfully expanded. +pub fn try_inline(cx: &DocContext, def: Def, into: Option) -> Option> { - let tcx = match cx.tcx_opt() { - Some(tcx) => tcx, - None => return None, - }; - let def = match tcx.expect_def_or_none(id) { - Some(def) => def, - None => return None, - }; + if def == Def::Err { return None } let did = def.def_id(); if did.is_local() { return None } - try_inline_def(cx, tcx, def).map(|vec| { + try_inline_def(cx, def).map(|vec| { vec.into_iter().map(|mut item| { match into { Some(into) if item.name.is_some() => { @@ -66,39 +59,38 @@ pub fn try_inline(cx: &DocContext, id: ast::NodeId, into: Option) }) } -fn try_inline_def<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - def: Def) -> Option> { +fn try_inline_def(cx: &DocContext, def: Def) -> Option> { + let tcx = cx.tcx; let mut ret = Vec::new(); - let did = def.def_id(); let inner = match def { Def::Trait(did) => { record_extern_fqn(cx, did, clean::TypeKind::Trait); - ret.extend(build_impls(cx, tcx, did)); - clean::TraitItem(build_external_trait(cx, tcx, did)) + ret.extend(build_impls(cx, did)); + clean::TraitItem(build_external_trait(cx, did)) } Def::Fn(did) => { record_extern_fqn(cx, did, clean::TypeKind::Function); - clean::FunctionItem(build_external_function(cx, tcx, did)) + clean::FunctionItem(build_external_function(cx, did)) } Def::Struct(did) => { record_extern_fqn(cx, did, clean::TypeKind::Struct); - ret.extend(build_impls(cx, tcx, did)); - clean::StructItem(build_struct(cx, tcx, did)) + ret.extend(build_impls(cx, did)); + clean::StructItem(build_struct(cx, did)) } Def::Union(did) => { record_extern_fqn(cx, did, clean::TypeKind::Union); - ret.extend(build_impls(cx, tcx, did)); - clean::UnionItem(build_union(cx, tcx, did)) + ret.extend(build_impls(cx, did)); + clean::UnionItem(build_union(cx, did)) } Def::TyAlias(did) => { record_extern_fqn(cx, did, 
clean::TypeKind::Typedef); - ret.extend(build_impls(cx, tcx, did)); - clean::TypedefItem(build_type_alias(cx, tcx, did), false) + ret.extend(build_impls(cx, did)); + clean::TypedefItem(build_type_alias(cx, did), false) } Def::Enum(did) => { record_extern_fqn(cx, did, clean::TypeKind::Enum); - ret.extend(build_impls(cx, tcx, did)); - clean::EnumItem(build_enum(cx, tcx, did)) + ret.extend(build_impls(cx, did)); + clean::EnumItem(build_enum(cx, did)) } // Assume that the enum type is reexported next to the variant, and // variants don't show up in documentation specially. @@ -108,23 +100,24 @@ fn try_inline_def<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, Def::StructCtor(..) => return Some(Vec::new()), Def::Mod(did) => { record_extern_fqn(cx, did, clean::TypeKind::Module); - clean::ModuleItem(build_module(cx, tcx, did)) + clean::ModuleItem(build_module(cx, did)) } Def::Static(did, mtbl) => { record_extern_fqn(cx, did, clean::TypeKind::Static); - clean::StaticItem(build_static(cx, tcx, did, mtbl)) + clean::StaticItem(build_static(cx, did, mtbl)) } Def::Const(did) => { record_extern_fqn(cx, did, clean::TypeKind::Const); - clean::ConstantItem(build_const(cx, tcx, did)) + clean::ConstantItem(build_const(cx, did)) } _ => return None, }; + let did = def.def_id(); cx.renderinfo.borrow_mut().inlined.insert(did); ret.push(clean::Item { - source: clean::Span::empty(), + source: tcx.def_span(did).clean(cx), name: Some(tcx.item_name(did).to_string()), - attrs: load_attrs(cx, tcx, did), + attrs: load_attrs(cx, did), inner: inner, visibility: Some(clean::Public), stability: tcx.lookup_stability(did).clean(cx), @@ -134,9 +127,8 @@ fn try_inline_def<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, Some(ret) } -pub fn load_attrs<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> Vec { - tcx.get_attrs(did).iter().map(|a| a.clean(cx)).collect() +pub fn load_attrs(cx: &DocContext, did: DefId) -> clean::Attributes { + cx.tcx.get_attrs(did).clean(cx) } /// Record an external fully qualified name in the external_paths cache. @@ -144,79 +136,70 @@ pub fn load_attrs<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, /// These names are used later on by HTML rendering to generate things like /// source links back to the original item. 
pub fn record_extern_fqn(cx: &DocContext, did: DefId, kind: clean::TypeKind) { - if let Some(tcx) = cx.tcx_opt() { - let crate_name = tcx.sess.cstore.crate_name(did.krate).to_string(); - let relative = tcx.def_path(did).data.into_iter().filter_map(|elem| { - // extern blocks have an empty name - let s = elem.data.to_string(); - if !s.is_empty() { - Some(s) - } else { - None - } - }); - let fqn = once(crate_name).chain(relative).collect(); - cx.renderinfo.borrow_mut().external_paths.insert(did, (fqn, kind)); - } + let crate_name = cx.tcx.sess.cstore.crate_name(did.krate).to_string(); + let relative = cx.tcx.def_path(did).data.into_iter().filter_map(|elem| { + // extern blocks have an empty name + let s = elem.data.to_string(); + if !s.is_empty() { + Some(s) + } else { + None + } + }); + let fqn = once(crate_name).chain(relative).collect(); + cx.renderinfo.borrow_mut().external_paths.insert(did, (fqn, kind)); } -pub fn build_external_trait<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Trait { - let def = tcx.lookup_trait_def(did); - let trait_items = tcx.trait_items(did).clean(cx); - let predicates = tcx.lookup_predicates(did); - let generics = (def.generics, &predicates).clean(cx); +pub fn build_external_trait(cx: &DocContext, did: DefId) -> clean::Trait { + let trait_items = cx.tcx.associated_items(did).map(|item| item.clean(cx)).collect(); + let predicates = cx.tcx.item_predicates(did); + let generics = (cx.tcx.item_generics(did), &predicates).clean(cx); let generics = filter_non_trait_generics(did, generics); let (generics, supertrait_bounds) = separate_supertrait_bounds(generics); clean::Trait { - unsafety: def.unsafety, + unsafety: cx.tcx.lookup_trait_def(did).unsafety, generics: generics, items: trait_items, bounds: supertrait_bounds, } } -fn build_external_function<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Function { - let t = tcx.lookup_item_type(did); - let (decl, style, abi) = match t.ty.sty { +fn build_external_function(cx: &DocContext, did: DefId) -> clean::Function { + let ty = cx.tcx.item_type(did); + let (decl, style, abi) = match ty.sty { ty::TyFnDef(.., ref f) => ((did, &f.sig).clean(cx), f.unsafety, f.abi), _ => panic!("bad function"), }; - let constness = if tcx.sess.cstore.is_const_fn(did) { + let constness = if cx.tcx.sess.cstore.is_const_fn(did) { hir::Constness::Const } else { hir::Constness::NotConst }; - let predicates = tcx.lookup_predicates(did); + let predicates = cx.tcx.item_predicates(did); clean::Function { decl: decl, - generics: (t.generics, &predicates).clean(cx), + generics: (cx.tcx.item_generics(did), &predicates).clean(cx), unsafety: style, constness: constness, abi: abi, } } -fn build_enum<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Enum { - let t = tcx.lookup_item_type(did); - let predicates = tcx.lookup_predicates(did); +fn build_enum(cx: &DocContext, did: DefId) -> clean::Enum { + let predicates = cx.tcx.item_predicates(did); clean::Enum { - generics: (t.generics, &predicates).clean(cx), + generics: (cx.tcx.item_generics(did), &predicates).clean(cx), variants_stripped: false, - variants: tcx.lookup_adt_def(did).variants.clean(cx), + variants: cx.tcx.lookup_adt_def(did).variants.clean(cx), } } -fn build_struct<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Struct { - let t = tcx.lookup_item_type(did); - let predicates = tcx.lookup_predicates(did); - let variant = tcx.lookup_adt_def(did).struct_variant(); +fn 
build_struct(cx: &DocContext, did: DefId) -> clean::Struct { + let predicates = cx.tcx.item_predicates(did); + let variant = cx.tcx.lookup_adt_def(did).struct_variant(); clean::Struct { struct_type: match variant.ctor_kind { @@ -224,46 +207,41 @@ fn build_struct<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, CtorKind::Fn => doctree::Tuple, CtorKind::Const => doctree::Unit, }, - generics: (t.generics, &predicates).clean(cx), + generics: (cx.tcx.item_generics(did), &predicates).clean(cx), fields: variant.fields.clean(cx), fields_stripped: false, } } -fn build_union<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Union { - let t = tcx.lookup_item_type(did); - let predicates = tcx.lookup_predicates(did); - let variant = tcx.lookup_adt_def(did).struct_variant(); +fn build_union(cx: &DocContext, did: DefId) -> clean::Union { + let predicates = cx.tcx.item_predicates(did); + let variant = cx.tcx.lookup_adt_def(did).struct_variant(); clean::Union { struct_type: doctree::Plain, - generics: (t.generics, &predicates).clean(cx), + generics: (cx.tcx.item_generics(did), &predicates).clean(cx), fields: variant.fields.clean(cx), fields_stripped: false, } } -fn build_type_alias<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Typedef { - let t = tcx.lookup_item_type(did); - let predicates = tcx.lookup_predicates(did); +fn build_type_alias(cx: &DocContext, did: DefId) -> clean::Typedef { + let predicates = cx.tcx.item_predicates(did); clean::Typedef { - type_: t.ty.clean(cx), - generics: (t.generics, &predicates).clean(cx), + type_: cx.tcx.item_type(did).clean(cx), + generics: (cx.tcx.item_generics(did), &predicates).clean(cx), } } -pub fn build_impls<'a, 'tcx>(cx: &DocContext, - tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> Vec { +pub fn build_impls(cx: &DocContext, did: DefId) -> Vec { + let tcx = cx.tcx; tcx.populate_inherent_implementations_for_type_if_necessary(did); let mut impls = Vec::new(); if let Some(i) = tcx.inherent_impls.borrow().get(&did) { for &did in i.iter() { - build_impl(cx, tcx, did, &mut impls); + build_impl(cx, did, &mut impls); } } // If this is the first time we've inlined something from another crate, then @@ -281,7 +259,7 @@ pub fn build_impls<'a, 'tcx>(cx: &DocContext, cx.populated_all_crate_impls.set(true); for did in tcx.sess.cstore.implementations_of_trait(None) { - build_impl(cx, tcx, did, &mut impls); + build_impl(cx, did, &mut impls); } // Also try to inline primitive impls from other crates. @@ -307,23 +285,20 @@ pub fn build_impls<'a, 'tcx>(cx: &DocContext, for def_id in primitive_impls.iter().filter_map(|&def_id| def_id) { if !def_id.is_local() { - tcx.populate_implementations_for_primitive_if_necessary(def_id); - build_impl(cx, tcx, def_id, &mut impls); + build_impl(cx, def_id, &mut impls); } } impls } -pub fn build_impl<'a, 'tcx>(cx: &DocContext, - tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId, - ret: &mut Vec) { +pub fn build_impl(cx: &DocContext, did: DefId, ret: &mut Vec) { if !cx.renderinfo.borrow_mut().inlined.insert(did) { return } - let attrs = load_attrs(cx, tcx, did); + let attrs = load_attrs(cx, did); + let tcx = cx.tcx; let associated_trait = tcx.impl_trait_ref(did); // Only inline impl if the implemented trait is @@ -345,7 +320,7 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, clean::RegionBound(..) 
=> unreachable!(), }, }), - source: clean::Span::empty(), + source: tcx.def_span(did).clean(cx), name: None, attrs: attrs, visibility: Some(clean::Inherited), @@ -355,8 +330,7 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, }); } - let ty = tcx.lookup_item_type(did); - let for_ = ty.ty.clean(cx); + let for_ = tcx.item_type(did).clean(cx); // Only inline impl if the implementing type is // reachable in rustdoc generated documentation @@ -366,44 +340,40 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, } } - let predicates = tcx.lookup_predicates(did); - let trait_items = tcx.sess.cstore.impl_or_trait_items(did) - .iter() - .filter_map(|&did| { - match tcx.impl_or_trait_item(did) { - ty::ConstTraitItem(ref assoc_const) => { - let did = assoc_const.def_id; - let type_scheme = tcx.lookup_item_type(did); - let default = if assoc_const.has_value { + let predicates = tcx.item_predicates(did); + let trait_items = tcx.associated_items(did).filter_map(|item| { + match item.kind { + ty::AssociatedKind::Const => { + let default = if item.defaultness.has_value() { Some(pprust::expr_to_string( - lookup_const_by_id(tcx, did, None).unwrap().0)) + lookup_const_by_id(tcx, item.def_id, None).unwrap().0)) } else { None }; Some(clean::Item { - name: Some(assoc_const.name.clean(cx)), + name: Some(item.name.clean(cx)), inner: clean::AssociatedConstItem( - type_scheme.ty.clean(cx), + tcx.item_type(item.def_id).clean(cx), default, ), - source: clean::Span::empty(), - attrs: vec![], + source: tcx.def_span(item.def_id).clean(cx), + attrs: clean::Attributes::default(), visibility: None, - stability: tcx.lookup_stability(did).clean(cx), - deprecation: tcx.lookup_deprecation(did).clean(cx), - def_id: did + stability: tcx.lookup_stability(item.def_id).clean(cx), + deprecation: tcx.lookup_deprecation(item.def_id).clean(cx), + def_id: item.def_id }) } - ty::MethodTraitItem(method) => { - if method.vis != ty::Visibility::Public && associated_trait.is_none() { + ty::AssociatedKind::Method => { + if item.vis != ty::Visibility::Public && associated_trait.is_none() { return None } - let mut item = method.clean(cx); - item.inner = match item.inner.clone() { + let mut cleaned = item.clean(cx); + cleaned.inner = match cleaned.inner.clone() { clean::TyMethodItem(clean::TyMethod { unsafety, decl, generics, abi }) => { - let constness = if tcx.sess.cstore.is_const_fn(did) { + let constness = if tcx.sess.cstore.is_const_fn(item.def_id) { hir::Constness::Const } else { hir::Constness::NotConst @@ -417,14 +387,13 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, abi: abi }) } - _ => panic!("not a tymethod"), + ref r => panic!("not a tymethod: {:?}", r), }; - Some(item) + Some(cleaned) } - ty::TypeTraitItem(ref assoc_ty) => { - let did = assoc_ty.def_id; + ty::AssociatedKind::Type => { let typedef = clean::Typedef { - type_: assoc_ty.ty.unwrap().clean(cx), + type_: tcx.item_type(item.def_id).clean(cx), generics: clean::Generics { lifetimes: vec![], type_params: vec![], @@ -432,14 +401,14 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, } }; Some(clean::Item { - name: Some(assoc_ty.name.clean(cx)), + name: Some(item.name.clean(cx)), inner: clean::TypedefItem(typedef, true), - source: clean::Span::empty(), - attrs: vec![], + source: tcx.def_span(item.def_id).clean(cx), + attrs: clean::Attributes::default(), visibility: None, - stability: tcx.lookup_stability(did).clean(cx), - deprecation: tcx.lookup_deprecation(did).clean(cx), - def_id: did + stability: tcx.lookup_stability(item.def_id).clean(cx), + deprecation: 
tcx.lookup_deprecation(item.def_id).clean(cx), + def_id: item.def_id }) } } @@ -451,16 +420,16 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, clean::RegionBound(..) => unreachable!(), } }); - if trait_.def_id() == cx.deref_trait_did.get() { + if trait_.def_id() == tcx.lang_items.deref_trait() { super::build_deref_target_impls(cx, &trait_items, ret); } let provided = trait_.def_id().map(|did| { - cx.tcx().provided_trait_methods(did) - .into_iter() - .map(|meth| meth.name.to_string()) - .collect() - }).unwrap_or(FnvHashSet()); + tcx.provided_trait_methods(did) + .into_iter() + .map(|meth| meth.name.to_string()) + .collect() + }).unwrap_or(FxHashSet()); ret.push(clean::Item { inner: clean::ImplItem(clean::Impl { @@ -468,11 +437,11 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, provided_trait_methods: provided, trait_: trait_, for_: for_, - generics: (ty.generics, &predicates).clean(cx), + generics: (tcx.item_generics(did), &predicates).clean(cx), items: trait_items, polarity: Some(polarity.clean(cx)), }), - source: clean::Span::empty(), + source: tcx.def_span(did).clean(cx), name: None, attrs: attrs, visibility: Some(clean::Inherited), @@ -482,26 +451,24 @@ pub fn build_impl<'a, 'tcx>(cx: &DocContext, }); } -fn build_module<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Module { +fn build_module(cx: &DocContext, did: DefId) -> clean::Module { let mut items = Vec::new(); - fill_in(cx, tcx, did, &mut items); + fill_in(cx, did, &mut items); return clean::Module { items: items, is_crate: false, }; - fn fill_in<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId, items: &mut Vec) { + fn fill_in(cx: &DocContext, did: DefId, items: &mut Vec) { // If we're reexporting a reexport it may actually reexport something in // two namespaces, so the target may be listed twice. Make sure we only // visit each node at most once. 
- let mut visited = FnvHashSet(); - for item in tcx.sess.cstore.item_children(did) { + let mut visited = FxHashSet(); + for item in cx.tcx.sess.cstore.item_children(did) { let def_id = item.def.def_id(); - if tcx.sess.cstore.visibility(def_id) == ty::Visibility::Public { + if cx.tcx.sess.cstore.visibility(def_id) == ty::Visibility::Public { if !visited.insert(def_id) { continue } - if let Some(i) = try_inline_def(cx, tcx, item.def) { + if let Some(i) = try_inline_def(cx, item.def) { items.extend(i) } } @@ -509,9 +476,8 @@ fn build_module<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, } } -fn build_const<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId) -> clean::Constant { - let (expr, ty) = lookup_const_by_id(tcx, did, None).unwrap_or_else(|| { +fn build_const(cx: &DocContext, did: DefId) -> clean::Constant { + let (expr, ty) = lookup_const_by_id(cx.tcx, did, None).unwrap_or_else(|| { panic!("expected lookup_const_by_id to succeed for {:?}", did); }); debug!("converting constant expr {:?} to snippet", expr); @@ -519,16 +485,14 @@ fn build_const<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, debug!("got snippet {}", sn); clean::Constant { - type_: ty.map(|t| t.clean(cx)).unwrap_or_else(|| tcx.lookup_item_type(did).ty.clean(cx)), + type_: ty.map(|t| t.clean(cx)).unwrap_or_else(|| cx.tcx.item_type(did).clean(cx)), expr: sn } } -fn build_static<'a, 'tcx>(cx: &DocContext, tcx: TyCtxt<'a, 'tcx, 'tcx>, - did: DefId, - mutable: bool) -> clean::Static { +fn build_static(cx: &DocContext, did: DefId, mutable: bool) -> clean::Static { clean::Static { - type_: tcx.lookup_item_type(did).ty.clean(cx), + type_: cx.tcx.item_type(did).clean(cx), mutability: if mutable {clean::Mutable} else {clean::Immutable}, expr: "\n\n\n".to_string(), // trigger the "[definition]" links } diff --git a/src/librustdoc/clean/mod.rs b/src/librustdoc/clean/mod.rs index 265b66b01e..28ca92f5db 100644 --- a/src/librustdoc/clean/mod.rs +++ b/src/librustdoc/clean/mod.rs @@ -14,7 +14,6 @@ pub use self::Type::*; pub use self::Mutability::*; pub use self::ItemEnum::*; -pub use self::Attribute::*; pub use self::TyParamBound::*; pub use self::SelfTy::*; pub use self::FunctionRetTy::*; @@ -24,29 +23,28 @@ use syntax::abi::Abi; use syntax::ast; use syntax::attr; use syntax::codemap::Spanned; -use syntax::parse::token::keywords; use syntax::ptr::P; -use syntax::print::pprust as syntax_pprust; +use syntax::symbol::keywords; use syntax_pos::{self, DUMMY_SP, Pos}; -use rustc_trans::back::link; use rustc::middle::privacy::AccessLevels; use rustc::middle::resolve_lifetime::DefRegion::*; +use rustc::middle::lang_items; use rustc::hir::def::{Def, CtorKind}; -use rustc::hir::def_id::{self, DefId, DefIndex, CRATE_DEF_INDEX}; +use rustc::hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX, LOCAL_CRATE}; use rustc::hir::print as pprust; use rustc::ty::subst::Substs; use rustc::ty::{self, AdtKind}; use rustc::middle::stability; -use rustc::util::nodemap::{FnvHashMap, FnvHashSet}; +use rustc::util::nodemap::{FxHashMap, FxHashSet}; use rustc::hir; use std::path::PathBuf; use std::rc::Rc; +use std::slice; use std::sync::Arc; use std::u32; -use std::env::current_dir; use std::mem; use core::DocContext; @@ -59,11 +57,11 @@ mod simplify; // extract the stability index for a node from tcx, if possible fn get_stability(cx: &DocContext, def_id: DefId) -> Option { - cx.tcx_opt().and_then(|tcx| tcx.lookup_stability(def_id)).clean(cx) + cx.tcx.lookup_stability(def_id).clean(cx) } fn get_deprecation(cx: &DocContext, def_id: DefId) 
-> Option { - cx.tcx_opt().and_then(|tcx| tcx.lookup_deprecation(def_id)).clean(cx) + cx.tcx.lookup_deprecation(def_id).clean(cx) } pub trait Clean { @@ -111,46 +109,85 @@ pub struct Crate { pub name: String, pub src: PathBuf, pub module: Option, - pub externs: Vec<(def_id::CrateNum, ExternalCrate)>, - pub primitives: Vec, + pub externs: Vec<(CrateNum, ExternalCrate)>, + pub primitives: Vec<(DefId, PrimitiveType, Attributes)>, pub access_levels: Arc>, // These are later on moved into `CACHEKEY`, leaving the map empty. // Only here so that they can be filtered through the rustdoc passes. - pub external_traits: FnvHashMap, + pub external_traits: FxHashMap, } -struct CrateNum(def_id::CrateNum); - impl<'a, 'tcx> Clean for visit_ast::RustdocVisitor<'a, 'tcx> { fn clean(&self, cx: &DocContext) -> Crate { - use rustc::session::config::Input; use ::visit_lib::LibEmbargoVisitor; - if let Some(t) = cx.tcx_opt() { - cx.deref_trait_did.set(t.lang_items.deref_trait()); - cx.renderinfo.borrow_mut().deref_trait_did = cx.deref_trait_did.get(); - cx.deref_mut_trait_did.set(t.lang_items.deref_mut_trait()); - cx.renderinfo.borrow_mut().deref_mut_trait_did = cx.deref_mut_trait_did.get(); + { + let mut r = cx.renderinfo.borrow_mut(); + r.deref_trait_did = cx.tcx.lang_items.deref_trait(); + r.deref_mut_trait_did = cx.tcx.lang_items.deref_mut_trait(); } let mut externs = Vec::new(); for cnum in cx.sess().cstore.crates() { - externs.push((cnum, CrateNum(cnum).clean(cx))); - if cx.tcx_opt().is_some() { - // Analyze doc-reachability for extern items - LibEmbargoVisitor::new(cx).visit_lib(cnum); - } + externs.push((cnum, cnum.clean(cx))); + // Analyze doc-reachability for extern items + LibEmbargoVisitor::new(cx).visit_lib(cnum); } externs.sort_by(|&(a, _), &(b, _)| a.cmp(&b)); - // Figure out the name of this crate - let input = &cx.input; - let name = link::find_crate_name(None, &self.attrs, input); - // Clean the crate, translating the entire libsyntax AST to one that is // understood by rustdoc. let mut module = self.module.clean(cx); + let ExternalCrate { name, src, primitives, .. } = LOCAL_CRATE.clean(cx); + { + let m = match module.inner { + ModuleItem(ref mut m) => m, + _ => unreachable!(), + }; + m.items.extend(primitives.iter().map(|&(def_id, prim, ref attrs)| { + Item { + source: Span::empty(), + name: Some(prim.to_url_str().to_string()), + attrs: attrs.clone(), + visibility: Some(Public), + stability: None, + deprecation: None, + def_id: def_id, + inner: PrimitiveItem(prim), + } + })); + } + + let mut access_levels = cx.access_levels.borrow_mut(); + let mut external_traits = cx.external_traits.borrow_mut(); + + Crate { + name: name, + src: src, + module: Some(module), + externs: externs, + primitives: primitives, + access_levels: Arc::new(mem::replace(&mut access_levels, Default::default())), + external_traits: mem::replace(&mut external_traits, Default::default()), + } + } +} + +#[derive(Clone, RustcEncodable, RustcDecodable, Debug)] +pub struct ExternalCrate { + pub name: String, + pub src: PathBuf, + pub attrs: Attributes, + pub primitives: Vec<(DefId, PrimitiveType, Attributes)>, +} + +impl Clean for CrateNum { + fn clean(&self, cx: &DocContext) -> ExternalCrate { + let root = DefId { krate: *self, index: CRATE_DEF_INDEX }; + let krate_span = cx.tcx.def_span(root); + let krate_src = cx.sess().codemap().span_to_filename(krate_span); + // Collect all inner modules which are tagged as implementations of // primitives. 
// @@ -168,82 +205,50 @@ impl<'a, 'tcx> Clean for visit_ast::RustdocVisitor<'a, 'tcx> { // Also note that this does not attempt to deal with modules tagged // duplicately for the same primitive. This is handled later on when // rendering by delegating everything to a hash map. - let mut primitives = Vec::new(); - { - let m = match module.inner { - ModuleItem(ref mut m) => m, - _ => unreachable!(), - }; - let mut tmp = Vec::new(); - for child in &mut m.items { - if !child.is_mod() { - continue; + let as_primitive = |def: Def| { + if let Def::Mod(def_id) = def { + let attrs = cx.tcx.get_attrs(def_id).clean(cx); + let mut prim = None; + for attr in attrs.lists("doc") { + if let Some(v) = attr.value_str() { + if attr.check_name("primitive") { + prim = PrimitiveType::from_str(&v.as_str()); + if prim.is_some() { + break; + } + } + } } - let prim = match PrimitiveType::find(&child.attrs) { - Some(prim) => prim, - None => continue, - }; - primitives.push(prim); - tmp.push(Item { - source: Span::empty(), - name: Some(prim.to_url_str().to_string()), - attrs: child.attrs.clone(), - visibility: Some(Public), - stability: None, - deprecation: None, - def_id: DefId::local(prim.to_def_index()), - inner: PrimitiveItem(prim), - }); + return prim.map(|p| (def_id, p, attrs)); } - m.items.extend(tmp); - } - - let src = match cx.input { - Input::File(ref path) => { - if path.is_absolute() { - path.clone() - } else { - current_dir().unwrap().join(path) + None + }; + let primitives = if root.is_local() { + cx.tcx.map.krate().module.item_ids.iter().filter_map(|&id| { + let item = cx.tcx.map.expect_item(id.id); + match item.node { + hir::ItemMod(_) => { + as_primitive(Def::Mod(cx.tcx.map.local_def_id(id.id))) + } + hir::ItemUse(ref path, hir::UseKind::Single) + if item.vis == hir::Visibility::Public => { + as_primitive(path.def).map(|(_, prim, attrs)| { + // Pretend the primitive is local. + (cx.tcx.map.local_def_id(id.id), prim, attrs) + }) + } + _ => None } - }, - Input::Str { ref name, .. } => PathBuf::from(name.clone()), + }).collect() + } else { + cx.tcx.sess.cstore.item_children(root).iter().map(|item| item.def) + .filter_map(as_primitive).collect() }; - let mut access_levels = cx.access_levels.borrow_mut(); - let mut external_traits = cx.external_traits.borrow_mut(); - - Crate { - name: name.to_string(), - src: src, - module: Some(module), - externs: externs, - primitives: primitives, - access_levels: Arc::new(mem::replace(&mut access_levels, Default::default())), - external_traits: mem::replace(&mut external_traits, Default::default()), - } - } -} - -#[derive(Clone, RustcEncodable, RustcDecodable, Debug)] -pub struct ExternalCrate { - pub name: String, - pub attrs: Vec, - pub primitives: Vec, -} - -impl Clean for CrateNum { - fn clean(&self, cx: &DocContext) -> ExternalCrate { - let mut primitives = Vec::new(); - let root = DefId { krate: self.0, index: CRATE_DEF_INDEX }; - cx.tcx_opt().map(|tcx| { - for item in tcx.sess.cstore.item_children(root) { - let attrs = inline::load_attrs(cx, tcx, item.def.def_id()); - PrimitiveType::find(&attrs).map(|prim| primitives.push(prim)); - } - }); ExternalCrate { - name: (&cx.sess().cstore.crate_name(self.0)[..]).to_owned(), - attrs: cx.sess().cstore.item_attrs(root).clean(cx), + name: cx.tcx.crate_name(*self).to_string(), + src: PathBuf::from(krate_src), + attrs: cx.tcx.get_attrs(root).clean(cx), primitives: primitives, } } @@ -258,7 +263,7 @@ pub struct Item { pub source: Span, /// Not everything has a name. 
E.g., impls pub name: Option, - pub attrs: Vec, + pub attrs: Attributes, pub inner: ItemEnum, pub visibility: Option, pub def_id: DefId, @@ -270,7 +275,7 @@ impl Item { /// Finds the `doc` attribute as a NameValue and returns the corresponding /// value found. pub fn doc_value<'a>(&'a self) -> Option<&'a str> { - self.attrs.value("doc") + self.attrs.doc_value() } pub fn is_crate(&self) -> bool { match self.inner { @@ -450,7 +455,7 @@ impl Clean for doctree::Module { visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), inner: ModuleItem(Module { is_crate: self.is_crate, items: items @@ -459,86 +464,104 @@ impl Clean for doctree::Module { } } -pub trait Attributes { - fn has_word(&self, &str) -> bool; - fn value<'a>(&'a self, &str) -> Option<&'a str>; - fn list<'a>(&'a self, &str) -> &'a [Attribute]; +pub struct ListAttributesIter<'a> { + attrs: slice::Iter<'a, ast::Attribute>, + current_list: slice::Iter<'a, ast::NestedMetaItem>, + name: &'a str } -impl Attributes for [Attribute] { - /// Returns whether the attribute list contains a specific `Word` - fn has_word(&self, word: &str) -> bool { - for attr in self { - if let Word(ref w) = *attr { - if word == *w { - return true; - } - } - } - false - } +impl<'a> Iterator for ListAttributesIter<'a> { + type Item = &'a ast::NestedMetaItem; - /// Finds an attribute as NameValue and returns the corresponding value found. - fn value<'a>(&'a self, name: &str) -> Option<&'a str> { - for attr in self { - if let NameValue(ref x, ref v) = *attr { - if name == *x { - return Some(v); + fn next(&mut self) -> Option { + if let Some(nested) = self.current_list.next() { + return Some(nested); + } + + for attr in &mut self.attrs { + if let Some(ref list) = attr.meta_item_list() { + if attr.check_name(self.name) { + self.current_list = list.iter(); + if let Some(nested) = self.current_list.next() { + return Some(nested); + } } } } + None } +} +pub trait AttributesExt { /// Finds an attribute as List and returns the list of attributes nested inside. - fn list<'a>(&'a self, name: &str) -> &'a [Attribute] { - for attr in self { - if let List(ref x, ref list) = *attr { - if name == *x { - return &list[..]; + fn lists<'a>(&'a self, &'a str) -> ListAttributesIter<'a>; +} + +impl AttributesExt for [ast::Attribute] { + fn lists<'a>(&'a self, name: &'a str) -> ListAttributesIter<'a> { + ListAttributesIter { + attrs: self.iter(), + current_list: [].iter(), + name: name + } + } +} + +pub trait NestedAttributesExt { + /// Returns whether the attribute list contains a specific `Word` + fn has_word(self, &str) -> bool; +} + +impl<'a, I: IntoIterator> NestedAttributesExt for I { + fn has_word(self, word: &str) -> bool { + self.into_iter().any(|attr| attr.is_word() && attr.check_name(word)) + } +} + +#[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Debug, Default)] +pub struct Attributes { + pub doc_strings: Vec, + pub other_attrs: Vec +} + +impl Attributes { + pub fn from_ast(attrs: &[ast::Attribute]) -> Attributes { + let mut doc_strings = vec![]; + let other_attrs = attrs.iter().filter_map(|attr| { + attr.with_desugared_doc(|attr| { + if let Some(value) = attr.value_str() { + if attr.check_name("doc") { + doc_strings.push(value.to_string()); + return None; + } } - } - } - &[] - } -} -/// This is a flattened version of the AST's Attribute + MetaItem. 
-#[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Debug)] -pub enum Attribute { - Word(String), - List(String, Vec), - NameValue(String, String), - Literal(String), -} - -impl Clean for ast::NestedMetaItem { - fn clean(&self, cx: &DocContext) -> Attribute { - if let Some(mi) = self.meta_item() { - mi.clean(cx) - } else { // must be a literal - let lit = self.literal().unwrap(); - Literal(syntax_pprust::lit_to_string(lit)) + Some(attr.clone()) + }) + }).collect(); + Attributes { + doc_strings: doc_strings, + other_attrs: other_attrs } } -} -impl Clean for ast::MetaItem { - fn clean(&self, cx: &DocContext) -> Attribute { - if self.is_word() { - Word(self.name().to_string()) - } else if let Some(v) = self.value_str() { - NameValue(self.name().to_string(), v.to_string()) - } else { // must be a list - let l = self.meta_item_list().unwrap(); - List(self.name().to_string(), l.clean(cx)) - } + /// Finds the `doc` attribute as a NameValue and returns the corresponding + /// value found. + pub fn doc_value<'a>(&'a self) -> Option<&'a str> { + self.doc_strings.first().map(|s| &s[..]) } } -impl Clean for ast::Attribute { - fn clean(&self, cx: &DocContext) -> Attribute { - self.with_desugared_doc(|a| a.meta().clean(cx)) +impl AttributesExt for Attributes { + fn lists<'a>(&'a self, name: &'a str) -> ListAttributesIter<'a> { + self.other_attrs.lists(name) + } +} + +impl Clean for [ast::Attribute] { + fn clean(&self, _cx: &DocContext) -> Attributes { + Attributes::from_ast(self) } } @@ -554,7 +577,7 @@ impl Clean for hir::TyParam { fn clean(&self, cx: &DocContext) -> TyParam { TyParam { name: self.name.clean(cx), - did: cx.map.local_def_id(self.id), + did: cx.tcx.map.local_def_id(self.id), bounds: self.bounds.clean(cx), default: self.default.clean(cx), } @@ -581,21 +604,27 @@ pub enum TyParamBound { impl TyParamBound { fn maybe_sized(cx: &DocContext) -> TyParamBound { - use rustc::hir::TraitBoundModifier as TBM; - let mut sized_bound = ty::BoundSized.clean(cx); - if let TyParamBound::TraitBound(_, ref mut tbm) = sized_bound { - *tbm = TBM::Maybe - }; - sized_bound + let did = cx.tcx.require_lang_item(lang_items::SizedTraitLangItem); + let empty = cx.tcx.intern_substs(&[]); + let path = external_path(cx, &cx.tcx.item_name(did).as_str(), + Some(did), false, vec![], empty); + inline::record_extern_fqn(cx, did, TypeKind::Trait); + TraitBound(PolyTrait { + trait_: ResolvedPath { + path: path, + typarams: None, + did: did, + is_generic: false, + }, + lifetimes: vec![] + }, hir::TraitBoundModifier::Maybe) } fn is_sized_bound(&self, cx: &DocContext) -> bool { use rustc::hir::TraitBoundModifier as TBM; - if let Some(tcx) = cx.tcx_opt() { - if let TyParamBound::TraitBound(PolyTrait { ref trait_, .. }, TBM::None) = *self { - if trait_.def_id() == tcx.lang_items.sized_trait() { - return true; - } + if let TyParamBound::TraitBound(PolyTrait { ref trait_, .. 
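The new `Attributes` type above replaces the old flattened `Attribute` enum: `from_ast` peels sugared doc comments (attributes of the form `doc = "..."`) off into `doc_strings` and keeps everything else in `other_attrs`, and `doc_value` returns the first doc string. A standalone sketch of that split over a simplified attribute type (not rustc's real `ast::Attribute`):

```rust
#[derive(Clone)]
struct SimpleAttr {
    name: String,
    value: Option<String>,
}

struct SimpleAttributes {
    doc_strings: Vec<String>,
    other_attrs: Vec<SimpleAttr>,
}

impl SimpleAttributes {
    /// Split doc values out from all other attributes, like `Attributes::from_ast`.
    fn from_attrs(attrs: &[SimpleAttr]) -> SimpleAttributes {
        let mut doc_strings = Vec::new();
        let other_attrs = attrs.iter().filter_map(|attr| {
            if attr.name == "doc" {
                if let Some(ref value) = attr.value {
                    doc_strings.push(value.clone());
                    return None;
                }
            }
            Some(attr.clone())
        }).collect();
        SimpleAttributes {
            doc_strings: doc_strings,
            other_attrs: other_attrs,
        }
    }

    /// The first doc string, which is what `Item::doc_value` surfaces.
    fn doc_value(&self) -> Option<&str> {
        self.doc_strings.first().map(|s| &s[..])
    }
}

fn main() {
    let attrs = [
        SimpleAttr { name: "doc".to_string(), value: Some("First line of docs.".to_string()) },
        SimpleAttr { name: "inline".to_string(), value: None },
    ];
    let cleaned = SimpleAttributes::from_attrs(&attrs);
    assert_eq!(cleaned.doc_value(), Some("First line of docs."));
    assert_eq!(cleaned.other_attrs.len(), 1);
}
```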
}, TBM::None) = *self { + if trait_.def_id() == cx.tcx.lang_items.sized_trait() { + return true; } } false @@ -616,9 +645,9 @@ fn external_path_params(cx: &DocContext, trait_did: Option, has_self: boo let lifetimes = substs.regions().filter_map(|v| v.clean(cx)).collect(); let types = substs.types().skip(has_self as usize).collect::>(); - match (trait_did, cx.tcx_opt()) { + match trait_did { // Attempt to sugar an external path like Fn<(A, B,), C> to Fn(A, B) -> C - (Some(did), Some(ref tcx)) if tcx.lang_items.fn_trait_kind(did).is_some() => { + Some(did) if cx.tcx.lang_items.fn_trait_kind(did).is_some() => { assert_eq!(types.len(), 1); let inputs = match types[0].sty { ty::TyTuple(ref tys) => tys.iter().map(|t| t.clean(cx)).collect(), @@ -641,7 +670,7 @@ fn external_path_params(cx: &DocContext, trait_did: Option, has_self: boo output: output } }, - (..) => { + _ => { PathParameters::AngleBracketed { lifetimes: lifetimes, types: types.clean(cx), @@ -657,6 +686,7 @@ fn external_path(cx: &DocContext, name: &str, trait_did: Option, has_self bindings: Vec, substs: &Substs) -> Path { Path { global: false, + def: Def::Err, segments: vec![PathSegment { name: name.to_string(), params: external_path_params(cx, trait_did, has_self, bindings, substs) @@ -664,48 +694,10 @@ fn external_path(cx: &DocContext, name: &str, trait_did: Option, has_self } } -impl Clean for ty::BuiltinBound { - fn clean(&self, cx: &DocContext) -> TyParamBound { - let tcx = match cx.tcx_opt() { - Some(tcx) => tcx, - None => return RegionBound(Lifetime::statik()) - }; - let empty = tcx.intern_substs(&[]); - let (did, path) = match *self { - ty::BoundSend => - (tcx.lang_items.send_trait().unwrap(), - external_path(cx, "Send", None, false, vec![], empty)), - ty::BoundSized => - (tcx.lang_items.sized_trait().unwrap(), - external_path(cx, "Sized", None, false, vec![], empty)), - ty::BoundCopy => - (tcx.lang_items.copy_trait().unwrap(), - external_path(cx, "Copy", None, false, vec![], empty)), - ty::BoundSync => - (tcx.lang_items.sync_trait().unwrap(), - external_path(cx, "Sync", None, false, vec![], empty)), - }; - inline::record_extern_fqn(cx, did, TypeKind::Trait); - TraitBound(PolyTrait { - trait_: ResolvedPath { - path: path, - typarams: None, - did: did, - is_generic: false, - }, - lifetimes: vec![] - }, hir::TraitBoundModifier::None) - } -} - impl<'tcx> Clean for ty::TraitRef<'tcx> { fn clean(&self, cx: &DocContext) -> TyParamBound { - let tcx = match cx.tcx_opt() { - Some(tcx) => tcx, - None => return RegionBound(Lifetime::statik()) - }; inline::record_extern_fqn(cx, self.def_id, TypeKind::Trait); - let path = external_path(cx, &tcx.item_name(self.def_id).as_str(), + let path = external_path(cx, &cx.tcx.item_name(self.def_id).as_str(), Some(self.def_id), true, vec![], self.substs); debug!("ty::TraitRef\n subst: {:?}\n", self.substs); @@ -772,18 +764,16 @@ impl Lifetime { impl Clean for hir::Lifetime { fn clean(&self, cx: &DocContext) -> Lifetime { - if let Some(tcx) = cx.tcx_opt() { - let def = tcx.named_region_map.defs.get(&self.id).cloned(); - match def { - Some(DefEarlyBoundRegion(_, node_id)) | - Some(DefLateBoundRegion(_, node_id)) | - Some(DefFreeRegion(_, node_id)) => { - if let Some(lt) = cx.lt_substs.borrow().get(&node_id).cloned() { - return lt; - } + let def = cx.tcx.named_region_map.defs.get(&self.id).cloned(); + match def { + Some(DefEarlyBoundRegion(_, node_id)) | + Some(DefLateBoundRegion(_, node_id)) | + Some(DefFreeRegion(_, node_id)) => { + if let Some(lt) = cx.lt_substs.borrow().get(&node_id).cloned() { + 
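`external_path_params` above special-cases the closure traits: as its comment says, an external path like `Fn<(A, B,), C>` is rendered with the parenthesized sugar `Fn(A, B) -> C`, while every other trait keeps ordinary angle-bracketed parameters. A rough string-level illustration of that choice; the function and names here are illustrative, not rustdoc's API:

```rust
/// Render a trait bound, using the fn-trait sugar for Fn/FnMut/FnOnce and
/// plain angle brackets for everything else.
fn render_trait_bound(name: &str, inputs: &[&str], output: Option<&str>) -> String {
    let fn_traits = ["Fn", "FnMut", "FnOnce"];
    if fn_traits.contains(&name) {
        // parenthesized sugar, as rustdoc prints for the fn-trait lang items
        let args = inputs.join(", ");
        match output {
            Some(ret) => format!("{}({}) -> {}", name, args, ret),
            None => format!("{}({})", name, args),
        }
    } else {
        // ordinary angle-bracketed parameters
        format!("{}<{}>", name, inputs.join(", "))
    }
}

fn main() {
    assert_eq!(render_trait_bound("Fn", &["u32", "u32"], Some("bool")),
               "Fn(u32, u32) -> bool");
    assert_eq!(render_trait_bound("From", &["String"], None),
               "From<String>");
}
```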
return lt; } - _ => {} } + _ => {} } Lifetime(self.name.to_string()) } @@ -993,7 +983,7 @@ impl<'a, 'tcx> Clean for (&'a ty::Generics<'tcx>, // Note that associated types also have a sized bound by default, but we // don't actually know the set of associated types right here so that's // handled in cleaning associated types - let mut sized_params = FnvHashSet(); + let mut sized_params = FxHashSet(); where_predicates.retain(|pred| { match *pred { WP::BoundPredicate { ty: Generic(ref g), ref bounds } => { @@ -1048,7 +1038,7 @@ impl Clean for hir::MethodSig { }, output: self.decl.output.clean(cx), variadic: false, - attrs: Vec::new() + attrs: Attributes::default() }; Method { generics: self.generics.clean(cx), @@ -1076,7 +1066,7 @@ impl Clean for hir::MethodSig { }, output: self.decl.output.clean(cx), variadic: false, - attrs: Vec::new() + attrs: Attributes::default() }; TyMethod { unsafety: self.unsafety.clone(), @@ -1105,7 +1095,7 @@ impl Clean for doctree::Function { visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), inner: FunctionItem(Function { decl: self.decl.clean(cx), generics: self.generics.clean(cx), @@ -1122,7 +1112,7 @@ pub struct FnDecl { pub inputs: Arguments, pub output: FunctionRetTy, pub variadic: bool, - pub attrs: Vec, + pub attrs: Attributes, } impl FnDecl { @@ -1148,7 +1138,7 @@ impl Clean for hir::FnDecl { }, output: self.output.clean(cx), variadic: self.variadic, - attrs: Vec::new() + attrs: Attributes::default() } } } @@ -1156,17 +1146,17 @@ impl Clean for hir::FnDecl { impl<'a, 'tcx> Clean for (DefId, &'a ty::PolyFnSig<'tcx>) { fn clean(&self, cx: &DocContext) -> FnDecl { let (did, sig) = *self; - let mut names = if cx.map.as_local_node_id(did).is_some() { + let mut names = if cx.tcx.map.as_local_node_id(did).is_some() { vec![].into_iter() } else { - cx.tcx().sess.cstore.fn_arg_names(did).into_iter() + cx.tcx.sess.cstore.fn_arg_names(did).into_iter() }.peekable(); FnDecl { - output: Return(sig.0.output.clean(cx)), - attrs: Vec::new(), - variadic: sig.0.variadic, + output: Return(sig.skip_binder().output().clean(cx)), + attrs: Attributes::default(), + variadic: sig.skip_binder().variadic, inputs: Arguments { - values: sig.0.inputs.iter().map(|t| { + values: sig.skip_binder().inputs().iter().map(|t| { Argument { type_: t.clean(cx), id: ast::CRATE_NODE_ID, @@ -1247,7 +1237,7 @@ impl Clean for doctree::Trait { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -1297,10 +1287,10 @@ impl Clean for hir::TraitItem { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.span.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: None, - stability: get_stability(cx, cx.map.local_def_id(self.id)), - deprecation: get_deprecation(cx, cx.map.local_def_id(self.id)), + stability: get_stability(cx, cx.tcx.map.local_def_id(self.id)), + deprecation: get_deprecation(cx, cx.tcx.map.local_def_id(self.id)), inner: inner } } @@ -1329,56 +1319,125 @@ impl Clean for hir::ImplItem { name: Some(self.name.clean(cx)), source: self.span.clean(cx), attrs: self.attrs.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: 
self.vis.clean(cx), - stability: get_stability(cx, cx.map.local_def_id(self.id)), - deprecation: get_deprecation(cx, cx.map.local_def_id(self.id)), + stability: get_stability(cx, cx.tcx.map.local_def_id(self.id)), + deprecation: get_deprecation(cx, cx.tcx.map.local_def_id(self.id)), inner: inner } } } -impl<'tcx> Clean for ty::Method<'tcx> { +impl<'tcx> Clean for ty::AssociatedItem { fn clean(&self, cx: &DocContext) -> Item { - let generics = (self.generics, &self.predicates).clean(cx); - let mut decl = (self.def_id, &self.fty.sig).clean(cx); - match self.explicit_self { - ty::ExplicitSelfCategory::ByValue => { - decl.inputs.values[0].type_ = Infer; + let inner = match self.kind { + ty::AssociatedKind::Const => { + let ty = cx.tcx.item_type(self.def_id); + AssociatedConstItem(ty.clean(cx), None) } - ty::ExplicitSelfCategory::ByReference(..) => { - match decl.inputs.values[0].type_ { - BorrowedRef{ref mut type_, ..} => **type_ = Infer, - _ => unreachable!(), + ty::AssociatedKind::Method => { + let generics = (cx.tcx.item_generics(self.def_id), + &cx.tcx.item_predicates(self.def_id)).clean(cx); + let fty = match cx.tcx.item_type(self.def_id).sty { + ty::TyFnDef(_, _, f) => f, + _ => unreachable!() + }; + let mut decl = (self.def_id, &fty.sig).clean(cx); + + if self.method_has_self_argument { + let self_ty = match self.container { + ty::ImplContainer(def_id) => { + cx.tcx.item_type(def_id) + } + ty::TraitContainer(_) => cx.tcx.mk_self_type() + }; + let self_arg_ty = *fty.sig.input(0).skip_binder(); + if self_arg_ty == self_ty { + decl.inputs.values[0].type_ = Infer; + } else if let ty::TyRef(_, mt) = self_arg_ty.sty { + if mt.ty == self_ty { + match decl.inputs.values[0].type_ { + BorrowedRef{ref mut type_, ..} => **type_ = Infer, + _ => unreachable!(), + } + } + } + } + + let provided = match self.container { + ty::ImplContainer(_) => false, + ty::TraitContainer(_) => self.defaultness.has_value() + }; + if provided { + MethodItem(Method { + unsafety: fty.unsafety, + generics: generics, + decl: decl, + abi: fty.abi, + + // trait methods canot (currently, at least) be const + constness: hir::Constness::NotConst, + }) + } else { + TyMethodItem(TyMethod { + unsafety: fty.unsafety, + generics: generics, + decl: decl, + abi: fty.abi, + }) } } - _ => {} - } - let provided = match self.container { - ty::ImplContainer(..) => false, - ty::TraitContainer(did) => { - cx.tcx().provided_trait_methods(did).iter().any(|m| { - m.def_id == self.def_id - }) - } - }; - let inner = if provided { - MethodItem(Method { - unsafety: self.fty.unsafety, - generics: generics, - decl: decl, - abi: self.fty.abi, + ty::AssociatedKind::Type => { + let my_name = self.name.clean(cx); - // trait methods canot (currently, at least) be const - constness: hir::Constness::NotConst, - }) - } else { - TyMethodItem(TyMethod { - unsafety: self.fty.unsafety, - generics: generics, - decl: decl, - abi: self.fty.abi, - }) + let mut bounds = if let ty::TraitContainer(did) = self.container { + // When loading a cross-crate associated type, the bounds for this type + // are actually located on the trait/impl itself, so we need to load + // all of the generics from there and then look for bounds that are + // applied to this associated type in question. 
+ let predicates = cx.tcx.item_predicates(did); + let generics = (cx.tcx.item_generics(did), &predicates).clean(cx); + generics.where_predicates.iter().filter_map(|pred| { + let (name, self_type, trait_, bounds) = match *pred { + WherePredicate::BoundPredicate { + ty: QPath { ref name, ref self_type, ref trait_ }, + ref bounds + } => (name, self_type, trait_, bounds), + _ => return None, + }; + if *name != my_name { return None } + match **trait_ { + ResolvedPath { did, .. } if did == self.container.id() => {} + _ => return None, + } + match **self_type { + Generic(ref s) if *s == "Self" => {} + _ => return None, + } + Some(bounds) + }).flat_map(|i| i.iter().cloned()).collect::>() + } else { + vec![] + }; + + // Our Sized/?Sized bound didn't get handled when creating the generics + // because we didn't actually get our whole set of bounds until just now + // (some of them may have come from the trait). If we do have a sized + // bound, we remove it, and if we don't then we add the `?Sized` bound + // at the end. + match bounds.iter().position(|b| b.is_sized_bound(cx)) { + Some(i) => { bounds.remove(i); } + None => bounds.push(TyParamBound::maybe_sized(cx)), + } + + let ty = if self.defaultness.has_value() { + Some(cx.tcx.item_type(self.def_id)) + } else { + None + }; + + AssociatedTypeItem(bounds, ty.clean(cx)) + } }; Item { @@ -1387,23 +1446,13 @@ impl<'tcx> Clean for ty::Method<'tcx> { stability: get_stability(cx, self.def_id), deprecation: get_deprecation(cx, self.def_id), def_id: self.def_id, - attrs: inline::load_attrs(cx, cx.tcx(), self.def_id), - source: Span::empty(), + attrs: inline::load_attrs(cx, self.def_id), + source: cx.tcx.def_span(self.def_id).clean(cx), inner: inner, } } } -impl<'tcx> Clean for ty::ImplOrTraitItem<'tcx> { - fn clean(&self, cx: &DocContext) -> Item { - match *self { - ty::ConstTraitItem(ref cti) => cti.clean(cx), - ty::MethodTraitItem(ref mti) => mti.clean(cx), - ty::TypeTraitItem(ref tti) => tti.clean(cx), - } - } -} - /// A trait reference, which may have higher ranked lifetimes. #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Debug)] pub struct PolyTrait { @@ -1556,19 +1605,6 @@ impl PrimitiveType { } } - fn find(attrs: &[Attribute]) -> Option { - for attr in attrs.list("doc") { - if let NameValue(ref k, ref v) = *attr { - if "primitive" == *k { - if let ret@Some(..) = PrimitiveType::from_str(v) { - return ret; - } - } - } - } - None - } - pub fn as_str(&self) -> &'static str { match *self { PrimitiveType::Isize => "isize", @@ -1596,14 +1632,6 @@ impl PrimitiveType { pub fn to_url_str(&self) -> &'static str { self.as_str() } - - /// Creates a rustdoc-specific node id for primitive types. - /// - /// These node ids are generally never used by the AST itself. 
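The Sized handling for associated types above follows the rule in the comment: if the recovered bounds already contain a `Sized` bound it is dropped from the output, and if they do not, an explicit `?Sized` is appended (as for `Deref::Target`, which is declared `type Target: ?Sized`). A small sketch of that normalization over a simplified bound type, a stand-in for rustdoc's `TyParamBound`:

```rust
#[derive(Debug, PartialEq)]
enum Bound {
    Trait(String),
    MaybeSized,
}

/// Drop an explicit `Sized` bound, or append `?Sized` when there is none,
/// matching the `position`/`remove`/`push` logic above.
fn normalize_sized(mut bounds: Vec<Bound>) -> Vec<Bound> {
    match bounds.iter().position(|b| *b == Bound::Trait("Sized".to_string())) {
        Some(i) => { bounds.remove(i); }
        None => bounds.push(Bound::MaybeSized),
    }
    bounds
}

fn main() {
    // an explicit `Sized` bound disappears from the rendered bounds
    let sized = normalize_sized(vec![
        Bound::Trait("Sized".to_string()),
        Bound::Trait("Clone".to_string()),
    ]);
    assert_eq!(sized, vec![Bound::Trait("Clone".to_string())]);

    // no `Sized` bound recovered: an explicit `?Sized` is shown instead
    let relaxed = normalize_sized(vec![Bound::Trait("Clone".to_string())]);
    assert_eq!(relaxed, vec![Bound::Trait("Clone".to_string()), Bound::MaybeSized]);
}
```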
- pub fn to_def_index(&self) -> DefIndex { - let x = u32::MAX - 1 - (*self as u32); - DefIndex::new(x as usize) - } } impl From for PrimitiveType { @@ -1650,53 +1678,43 @@ impl Clean for hir::Ty { type_: box m.ty.clean(cx)}, TySlice(ref ty) => Vector(box ty.clean(cx)), TyArray(ref ty, ref e) => { - let n = if let Some(tcx) = cx.tcx_opt() { - use rustc_const_math::{ConstInt, ConstUsize}; - use rustc_const_eval::eval_const_expr; - use rustc::middle::const_val::ConstVal; - match eval_const_expr(tcx, e) { - ConstVal::Integral(ConstInt::Usize(u)) => match u { - ConstUsize::Us16(u) => u.to_string(), - ConstUsize::Us32(u) => u.to_string(), - ConstUsize::Us64(u) => u.to_string(), - }, - // after type checking this can't fail - _ => unreachable!(), - } - } else { - pprust::expr_to_string(e) + use rustc_const_math::{ConstInt, ConstUsize}; + use rustc_const_eval::eval_const_expr; + use rustc::middle::const_val::ConstVal; + + let n = match eval_const_expr(cx.tcx, e) { + ConstVal::Integral(ConstInt::Usize(u)) => match u { + ConstUsize::Us16(u) => u.to_string(), + ConstUsize::Us32(u) => u.to_string(), + ConstUsize::Us64(u) => u.to_string(), + }, + // after type checking this can't fail + _ => unreachable!(), }; FixedVector(box ty.clean(cx), n) }, TyTup(ref tys) => Tuple(tys.clean(cx)), - TyPath(None, ref path) => { - let tcx_and_def = cx.tcx_opt().map(|tcx| (tcx, tcx.expect_def(self.id))); - if let Some((_, def)) = tcx_and_def { - if let Some(new_ty) = cx.ty_substs.borrow().get(&def).cloned() { - return new_ty; - } + TyPath(hir::QPath::Resolved(None, ref path)) => { + if let Some(new_ty) = cx.ty_substs.borrow().get(&path.def).cloned() { + return new_ty; } - let tcx_and_alias = tcx_and_def.and_then(|(tcx, def)| { - if let Def::TyAlias(def_id) = def { - // Substitute private type aliases - tcx.map.as_local_node_id(def_id).and_then(|node_id| { - if !cx.access_levels.borrow().is_exported(def_id) { - Some((tcx, &tcx.map.expect_item(node_id).node)) - } else { - None - } - }) - } else { - None + let mut alias = None; + if let Def::TyAlias(def_id) = path.def { + // Substitute private type aliases + if let Some(node_id) = cx.tcx.map.as_local_node_id(def_id) { + if !cx.access_levels.borrow().is_exported(def_id) { + alias = Some(&cx.tcx.map.expect_item(node_id).node); + } } - }); - if let Some((tcx, &hir::ItemTy(ref ty, ref generics))) = tcx_and_alias { + }; + + if let Some(&hir::ItemTy(ref ty, ref generics)) = alias { let provided_params = &path.segments.last().unwrap().parameters; - let mut ty_substs = FnvHashMap(); - let mut lt_substs = FnvHashMap(); + let mut ty_substs = FxHashMap(); + let mut lt_substs = FxHashMap(); for (i, ty_param) in generics.ty_params.iter().enumerate() { - let ty_param_def = tcx.expect_def(ty_param.id); + let ty_param_def = Def::TyParam(cx.tcx.map.local_def_id(ty_param.id)); if let Some(ty) = provided_params.types().get(i).cloned() .cloned() { ty_substs.insert(ty_param_def, ty.unwrap().clean(cx)); @@ -1714,17 +1732,37 @@ impl Clean for hir::Ty { } resolve_type(cx, path.clean(cx), self.id) } - TyPath(Some(ref qself), ref p) => { + TyPath(hir::QPath::Resolved(Some(ref qself), ref p)) => { let mut segments: Vec<_> = p.segments.clone().into(); segments.pop(); let trait_path = hir::Path { span: p.span, global: p.global, + def: Def::Trait(cx.tcx.associated_item(p.def.def_id()).container.id()), segments: segments.into(), }; Type::QPath { name: p.segments.last().unwrap().name.clean(cx), - self_type: box qself.ty.clean(cx), + self_type: box qself.clean(cx), + trait_: box resolve_type(cx, 
trait_path.clean(cx), self.id) + } + } + TyPath(hir::QPath::TypeRelative(ref qself, ref segment)) => { + let mut def = Def::Err; + if let Some(ty) = cx.hir_ty_to_ty.get(&self.id) { + if let ty::TyProjection(proj) = ty.sty { + def = Def::Trait(proj.trait_ref.def_id); + } + } + let trait_path = hir::Path { + span: self.span, + global: false, + def: def, + segments: vec![].into(), + }; + Type::QPath { + name: segment.name.clean(cx), + self_type: box qself.clean(cx), trait_: box resolve_type(cx, trait_path.clean(cx), self.id) } } @@ -1764,9 +1802,7 @@ impl<'tcx> Clean for ty::Ty<'tcx> { ty::TyFloat(float_ty) => Primitive(float_ty.into()), ty::TyStr => Primitive(PrimitiveType::Str), ty::TyBox(t) => { - let box_did = cx.tcx_opt().and_then(|tcx| { - tcx.lang_items.owned_box() - }); + let box_did = cx.tcx.lang_items.owned_box(); lang_struct(cx, box_did, t, "Box", Unique) } ty::TySlice(ty) => Vector(box ty.clean(cx)), @@ -1786,7 +1822,7 @@ impl<'tcx> Clean for ty::Ty<'tcx> { type_params: Vec::new(), where_predicates: Vec::new() }, - decl: (cx.map.local_def_id(ast::CRATE_NODE_ID), &fty.sig).clean(cx), + decl: (cx.tcx.map.local_def_id(ast::CRATE_NODE_ID), &fty.sig).clean(cx), abi: fty.abi, }), ty::TyAdt(def, substs) => { @@ -1797,7 +1833,7 @@ impl<'tcx> Clean for ty::Ty<'tcx> { AdtKind::Enum => TypeKind::Enum, }; inline::record_extern_fqn(cx, did, kind); - let path = external_path(cx, &cx.tcx().item_name(did).as_str(), + let path = external_path(cx, &cx.tcx.item_name(did).as_str(), None, false, vec![], substs); ResolvedPath { path: path, @@ -1806,31 +1842,48 @@ impl<'tcx> Clean for ty::Ty<'tcx> { is_generic: false, } } - ty::TyTrait(ref obj) => { - let did = obj.principal.def_id(); - inline::record_extern_fqn(cx, did, TypeKind::Trait); + ty::TyDynamic(ref obj, ref reg) => { + if let Some(principal) = obj.principal() { + let did = principal.def_id(); + inline::record_extern_fqn(cx, did, TypeKind::Trait); - let mut typarams = vec![]; - obj.region_bound.clean(cx).map(|b| typarams.push(RegionBound(b))); - for bb in &obj.builtin_bounds { - typarams.push(bb.clean(cx)); - } + let mut typarams = vec![]; + reg.clean(cx).map(|b| typarams.push(RegionBound(b))); + for did in obj.auto_traits() { + let empty = cx.tcx.intern_substs(&[]); + let path = external_path(cx, &cx.tcx.item_name(did).as_str(), + Some(did), false, vec![], empty); + inline::record_extern_fqn(cx, did, TypeKind::Trait); + let bound = TraitBound(PolyTrait { + trait_: ResolvedPath { + path: path, + typarams: None, + did: did, + is_generic: false, + }, + lifetimes: vec![] + }, hir::TraitBoundModifier::None); + typarams.push(bound); + } - let mut bindings = vec![]; - for &ty::Binder(ref pb) in &obj.projection_bounds { - bindings.push(TypeBinding { - name: pb.item_name.clean(cx), - ty: pb.ty.clean(cx) - }); - } + let mut bindings = vec![]; + for ty::Binder(ref pb) in obj.projection_bounds() { + bindings.push(TypeBinding { + name: pb.item_name.clean(cx), + ty: pb.ty.clean(cx) + }); + } - let path = external_path(cx, &cx.tcx().item_name(did).as_str(), - Some(did), false, bindings, obj.principal.0.substs); - ResolvedPath { - path: path, - typarams: Some(typarams), - did: did, - is_generic: false, + let path = external_path(cx, &cx.tcx.item_name(did).as_str(), Some(did), + false, bindings, principal.0.substs); + ResolvedPath { + path: path, + typarams: Some(typarams), + did: did, + is_generic: false, + } + } else { + Never } } ty::TyTuple(ref t) => Tuple(t.clean(cx)), @@ -1842,9 +1895,9 @@ impl<'tcx> Clean for ty::Ty<'tcx> { ty::TyAnon(def_id, substs) => 
{ // Grab the "TraitA + TraitB" from `impl TraitA + TraitB`, // by looking up the projections associated with the def_id. - let item_predicates = cx.tcx().lookup_predicates(def_id); - let substs = cx.tcx().lift(&substs).unwrap(); - let bounds = item_predicates.instantiate(cx.tcx(), substs); + let item_predicates = cx.tcx.item_predicates(def_id); + let substs = cx.tcx.lift(&substs).unwrap(); + let bounds = item_predicates.instantiate(cx.tcx, substs); ImplTrait(bounds.predicates.into_iter().filter_map(|predicate| { predicate.to_opt_poly_trait_ref().clean(cx) }).collect()) @@ -1865,25 +1918,25 @@ impl Clean for hir::StructField { attrs: self.attrs.clean(cx), source: self.span.clean(cx), visibility: self.vis.clean(cx), - stability: get_stability(cx, cx.map.local_def_id(self.id)), - deprecation: get_deprecation(cx, cx.map.local_def_id(self.id)), - def_id: cx.map.local_def_id(self.id), + stability: get_stability(cx, cx.tcx.map.local_def_id(self.id)), + deprecation: get_deprecation(cx, cx.tcx.map.local_def_id(self.id)), + def_id: cx.tcx.map.local_def_id(self.id), inner: StructFieldItem(self.ty.clean(cx)), } } } -impl<'tcx> Clean for ty::FieldDefData<'tcx, 'static> { +impl<'tcx> Clean for ty::FieldDef { fn clean(&self, cx: &DocContext) -> Item { Item { name: Some(self.name).clean(cx), - attrs: cx.tcx().get_attrs(self.did).clean(cx), - source: Span::empty(), + attrs: cx.tcx.get_attrs(self.did).clean(cx), + source: cx.tcx.def_span(self.did).clean(cx), visibility: self.vis.clean(cx), stability: get_stability(cx, self.did), deprecation: get_deprecation(cx, self.did), def_id: self.did, - inner: StructFieldItem(self.unsubst_ty().clean(cx)), + inner: StructFieldItem(cx.tcx.item_type(self.did).clean(cx)), } } } @@ -1928,7 +1981,7 @@ impl Clean for doctree::Struct { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -1948,7 +2001,7 @@ impl Clean for doctree::Union { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -1995,7 +2048,7 @@ impl Clean for doctree::Enum { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -2022,7 +2075,7 @@ impl Clean for doctree::Variant { visibility: None, stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), - def_id: cx.map.local_def_id(self.def.id()), + def_id: cx.tcx.map.local_def_id(self.def.id()), inner: VariantItem(Variant { kind: self.def.clean(cx), }), @@ -2030,13 +2083,13 @@ impl Clean for doctree::Variant { } } -impl<'tcx> Clean for ty::VariantDefData<'tcx, 'static> { +impl<'tcx> Clean for ty::VariantDef { fn clean(&self, cx: &DocContext) -> Item { let kind = match self.ctor_kind { CtorKind::Const => VariantKind::CLike, CtorKind::Fn => { VariantKind::Tuple( - self.fields.iter().map(|f| f.unsubst_ty().clean(cx)).collect() + self.fields.iter().map(|f| cx.tcx.item_type(f.did).clean(cx)).collect() ) } CtorKind::Fictive => { @@ -2045,14 +2098,14 @@ impl<'tcx> Clean for 
ty::VariantDefData<'tcx, 'static> { fields_stripped: false, fields: self.fields.iter().map(|field| { Item { - source: Span::empty(), + source: cx.tcx.def_span(field.did).clean(cx), name: Some(field.name.clean(cx)), - attrs: cx.tcx().get_attrs(field.did).clean(cx), + attrs: cx.tcx.get_attrs(field.did).clean(cx), visibility: field.vis.clean(cx), def_id: field.did, stability: get_stability(cx, field.did), deprecation: get_deprecation(cx, field.did), - inner: StructFieldItem(field.unsubst_ty().clean(cx)) + inner: StructFieldItem(cx.tcx.item_type(field.did).clean(cx)) } }).collect() }) @@ -2060,8 +2113,8 @@ impl<'tcx> Clean for ty::VariantDefData<'tcx, 'static> { }; Item { name: Some(self.name.clean(cx)), - attrs: inline::load_attrs(cx, cx.tcx(), self.did), - source: Span::empty(), + attrs: inline::load_attrs(cx, self.did), + source: cx.tcx.def_span(self.did).clean(cx), visibility: Some(Inherited), def_id: self.did, inner: VariantItem(Variant { kind: kind }), @@ -2132,6 +2185,7 @@ impl Clean for syntax_pos::Span { #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Debug)] pub struct Path { pub global: bool, + pub def: Def, pub segments: Vec, } @@ -2139,6 +2193,7 @@ impl Path { pub fn singleton(name: String) -> Path { Path { global: false, + def: Def::Err, segments: vec![PathSegment { name: name, params: PathParameters::AngleBracketed { @@ -2159,6 +2214,7 @@ impl Clean for hir::Path { fn clean(&self, cx: &DocContext) -> Path { Path { global: self.global, + def: self.def, segments: self.segments.clean(cx), } } @@ -2213,11 +2269,20 @@ impl Clean for hir::PathSegment { } } -fn path_to_string(p: &hir::Path) -> String { +fn qpath_to_string(p: &hir::QPath) -> String { + let (segments, global) = match *p { + hir::QPath::Resolved(_, ref path) => { + (&path.segments, path.global) + } + hir::QPath::TypeRelative(_, ref segment) => { + return segment.name.to_string() + } + }; + let mut s = String::new(); let mut first = true; - for i in p.segments.iter().map(|x| x.name.as_str()) { - if !first || p.global { + for i in segments.iter().map(|x| x.name.as_str()) { + if !first || global { s.push_str("::"); } else { first = false; @@ -2245,7 +2310,7 @@ impl Clean for doctree::Typedef { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id.clone()), + def_id: cx.tcx.map.local_def_id(self.id.clone()), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -2297,7 +2362,7 @@ impl Clean for doctree::Static { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -2322,7 +2387,7 @@ impl Clean for doctree::Constant { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -2368,7 +2433,7 @@ impl Clean for hir::ImplPolarity { pub struct Impl { pub unsafety: hir::Unsafety, pub generics: Generics, - pub provided_trait_methods: FnvHashSet, + pub provided_trait_methods: FxHashSet, pub trait_: Option, pub for_: Type, pub items: Vec, @@ -2383,24 +2448,22 @@ impl Clean> for doctree::Impl { // If this impl block is an implementation of the Deref trait, then we // 
need to try inlining the target's inherent impl blocks as well. - if trait_.def_id() == cx.deref_trait_did.get() { + if trait_.def_id() == cx.tcx.lang_items.deref_trait() { build_deref_target_impls(cx, &items, &mut ret); } - let provided = trait_.def_id().and_then(|did| { - cx.tcx_opt().map(|tcx| { - tcx.provided_trait_methods(did) - .into_iter() - .map(|meth| meth.name.to_string()) - .collect() - }) - }).unwrap_or(FnvHashSet()); + let provided = trait_.def_id().map(|did| { + cx.tcx.provided_trait_methods(did) + .into_iter() + .map(|meth| meth.name.to_string()) + .collect() + }).unwrap_or(FxHashSet()); ret.push(Item { name: None, attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), @@ -2421,10 +2484,7 @@ impl Clean> for doctree::Impl { fn build_deref_target_impls(cx: &DocContext, items: &[Item], ret: &mut Vec) { - let tcx = match cx.tcx_opt() { - Some(t) => t, - None => return, - }; + let tcx = cx.tcx; for item in items { let target = match item.inner { @@ -2434,7 +2494,7 @@ fn build_deref_target_impls(cx: &DocContext, let primitive = match *target { ResolvedPath { did, .. } if did.is_local() => continue, ResolvedPath { did, .. } => { - ret.extend(inline::build_impls(cx, tcx, did)); + ret.extend(inline::build_impls(cx, did)); continue } _ => match target.primitive_type() { @@ -2465,7 +2525,7 @@ fn build_deref_target_impls(cx: &DocContext, }; if let Some(did) = did { if !did.is_local() { - inline::build_impl(cx, tcx, did, ret); + inline::build_impl(cx, did, ret); } } } @@ -2483,7 +2543,7 @@ impl Clean for doctree::DefaultImpl { name: None, attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: Some(Public), stability: None, deprecation: None, @@ -2517,63 +2577,34 @@ impl Clean> for doctree::Import { // #[doc(no_inline)] attribute is present. // Don't inline doc(hidden) imports so they can be stripped at a later stage. let denied = self.vis != hir::Public || self.attrs.iter().any(|a| { - &a.name()[..] == "doc" && match a.meta_item_list() { + a.name() == "doc" && match a.meta_item_list() { Some(l) => attr::list_contains_name(l, "no_inline") || attr::list_contains_name(l, "hidden"), None => false, } }); - let (mut ret, inner) = match self.node { - hir::ViewPathGlob(ref p) => { - (vec![], Import::Glob(resolve_use_source(cx, p.clean(cx), self.id))) - } - hir::ViewPathList(ref p, ref list) => { - // Attempt to inline all reexported items, but be sure - // to keep any non-inlineable reexports so they can be - // listed in the documentation. 
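`build_deref_target_impls` above exists because of how rustdoc presents smart-pointer-like types: when a documented type implements `Deref`, its page also lists the methods reachable through the deref target. A small example of the kind of source code that exercises this path:

```rust
use std::ops::Deref;

pub struct Inner;

impl Inner {
    /// Listed on `Wrapper`'s page as well, via the `Deref` impl below.
    pub fn inner_method(&self) {}
}

pub struct Wrapper(Inner);

impl Deref for Wrapper {
    type Target = Inner;
    fn deref(&self) -> &Inner {
        &self.0
    }
}

fn main() {
    let w = Wrapper(Inner);
    // the deref coercion whose documentation-side mirror is built above
    w.inner_method();
}
```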
- let mut ret = vec![]; - let remaining = if !denied { - let mut remaining = vec![]; - for path in list { - match inline::try_inline(cx, path.node.id, path.node.rename) { - Some(items) => { - ret.extend(items); - } - None => { - remaining.push(path.clean(cx)); - } - } - } - remaining - } else { - list.clean(cx) - }; - if remaining.is_empty() { - return ret; + let path = self.path.clean(cx); + let inner = if self.glob { + Import::Glob(resolve_use_source(cx, path)) + } else { + let name = self.name; + if !denied { + if let Some(items) = inline::try_inline(cx, path.def, Some(name)) { + return items; } - (ret, Import::List(resolve_use_source(cx, p.clean(cx), self.id), remaining)) - } - hir::ViewPathSimple(name, ref p) => { - if !denied { - if let Some(items) = inline::try_inline(cx, self.id, Some(name)) { - return items; - } - } - (vec![], Import::Simple(name.clean(cx), - resolve_use_source(cx, p.clean(cx), self.id))) } + Import::Simple(name.clean(cx), resolve_use_source(cx, path)) }; - ret.push(Item { + vec![Item { name: None, attrs: self.attrs.clean(cx), source: self.whence.clean(cx), - def_id: cx.map.local_def_id(ast::CRATE_NODE_ID), + def_id: cx.tcx.map.local_def_id(ast::CRATE_NODE_ID), visibility: self.vis.clean(cx), stability: None, deprecation: None, inner: ImportItem(inner) - }); - ret + }] } } @@ -2582,9 +2613,7 @@ pub enum Import { // use source as str; Simple(String, ImportSource), // use source::*; - Glob(ImportSource), - // use source::{a, b, c}; - List(ImportSource, Vec), + Glob(ImportSource) } #[derive(Clone, RustcEncodable, RustcDecodable, Debug)] @@ -2593,23 +2622,6 @@ pub struct ImportSource { pub did: Option, } -#[derive(Clone, RustcEncodable, RustcDecodable, Debug)] -pub struct ViewListIdent { - pub name: String, - pub rename: Option, - pub source: Option, -} - -impl Clean for hir::PathListItem { - fn clean(&self, cx: &DocContext) -> ViewListIdent { - ViewListIdent { - name: self.node.name.clean(cx), - rename: self.node.rename.map(|r| r.clean(cx)), - source: resolve_def(cx, self.node.id) - } - } -} - impl Clean> for hir::ForeignMod { fn clean(&self, cx: &DocContext) -> Vec { let mut items = self.items.clean(cx); @@ -2646,10 +2658,10 @@ impl Clean for hir::ForeignItem { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx), source: self.span.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), visibility: self.vis.clean(cx), - stability: get_stability(cx, cx.map.local_def_id(self.id)), - deprecation: get_deprecation(cx, cx.map.local_def_id(self.id)), + stability: get_stability(cx, cx.tcx.map.local_def_id(self.id)), + deprecation: get_deprecation(cx, cx.tcx.map.local_def_id(self.id)), inner: inner, } } @@ -2679,18 +2691,16 @@ fn name_from_pat(p: &hir::Pat) -> String { match p.node { PatKind::Wild => "_".to_string(), - PatKind::Binding(_, ref p, _) => p.node.to_string(), - PatKind::TupleStruct(ref p, ..) | PatKind::Path(None, ref p) => path_to_string(p), - PatKind::Path(..) => panic!("tried to get argument name from qualified PatKind::Path, \ - which is not allowed in function arguments"), + PatKind::Binding(_, _, ref p, _) => p.node.to_string(), + PatKind::TupleStruct(ref p, ..) | PatKind::Path(ref p) => qpath_to_string(p), PatKind::Struct(ref name, ref fields, etc) => { - format!("{} {{ {}{} }}", path_to_string(name), + format!("{} {{ {}{} }}", qpath_to_string(name), fields.iter().map(|&Spanned { node: ref fp, .. }| format!("{}: {}", fp.name, name_from_pat(&*fp.pat))) .collect::>().join(", "), if etc { ", ..." 
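The `denied` check above is the gatekeeper for re-export inlining: a `pub use` is considered for inlining unless it is non-public or carries `#[doc(no_inline)]` or `#[doc(hidden)]`. A small source-level example of the attributes involved:

```rust
mod detail {
    /// A candidate for inlining: its docs can be shown directly at the
    /// re-export site.
    pub struct Inlined;

    /// The re-export below opts out with `#[doc(no_inline)]`, so the docs
    /// keep a plain `pub use` line instead of pulling this item in.
    pub struct Listed;
}

pub use detail::Inlined;

#[doc(no_inline)]
pub use detail::Listed;

fn main() {
    let _ = (Inlined, Listed);
}
```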
} else { "" } ) - }, + } PatKind::Tuple(ref elts, _) => format!("({})", elts.iter().map(|p| name_from_pat(&**p)) .collect::>().join(", ")), PatKind::Box(ref p) => name_from_pat(&**p), @@ -2711,30 +2721,13 @@ fn name_from_pat(p: &hir::Pat) -> String { } } -/// Given a Type, resolve it using the def_map +/// Given a type Path, resolve it to a Type using the TyCtxt fn resolve_type(cx: &DocContext, path: Path, id: ast::NodeId) -> Type { debug!("resolve_type({:?},{:?})", path, id); - let tcx = match cx.tcx_opt() { - Some(tcx) => tcx, - // If we're extracting tests, this return value's accuracy is not - // important, all we want is a string representation to help people - // figure out what doctests are failing. - None => { - let did = DefId::local(DefIndex::from_u32(0)); - return ResolvedPath { - path: path, - typarams: None, - did: did, - is_generic: false - }; - } - }; - let def = tcx.expect_def(id); - debug!("resolve_type: def={:?}", def); - let is_generic = match def { + let is_generic = match path.def { Def::PrimTy(p) => match p { hir::TyStr => return Primitive(PrimitiveType::Str), hir::TyBool => return Primitive(PrimitiveType::Bool), @@ -2749,15 +2742,13 @@ fn resolve_type(cx: &DocContext, Def::SelfTy(..) | Def::TyParam(..) | Def::AssociatedTy(..) => true, _ => false, }; - let did = register_def(&*cx, def); + let did = register_def(&*cx, path.def); ResolvedPath { path: path, typarams: None, did: did, is_generic: is_generic } } fn register_def(cx: &DocContext, def: Def) -> DefId { debug!("register_def({:?})", def); - let tcx = cx.tcx(); - let (did, kind) = match def { Def::Fn(i) => (i, TypeKind::Function), Def::TyAlias(i) => (i, TypeKind::Typedef), @@ -2767,7 +2758,7 @@ fn register_def(cx: &DocContext, def: Def) -> DefId { Def::Union(i) => (i, TypeKind::Union), Def::Mod(i) => (i, TypeKind::Module), Def::Static(i, _) => (i, TypeKind::Static), - Def::Variant(i) => (tcx.parent_def_id(i).unwrap(), TypeKind::Enum), + Def::Variant(i) => (cx.tcx.parent_def_id(i).unwrap(), TypeKind::Enum), Def::SelfTy(Some(def_id), _) => (def_id, TypeKind::Trait), Def::SelfTy(_, Some(impl_def_id)) => { return impl_def_id @@ -2777,25 +2768,23 @@ fn register_def(cx: &DocContext, def: Def) -> DefId { if did.is_local() { return did } inline::record_extern_fqn(cx, did, kind); if let TypeKind::Trait = kind { - let t = inline::build_external_trait(cx, tcx, did); + let t = inline::build_external_trait(cx, did); cx.external_traits.borrow_mut().insert(did, t); } did } -fn resolve_use_source(cx: &DocContext, path: Path, id: ast::NodeId) -> ImportSource { +fn resolve_use_source(cx: &DocContext, path: Path) -> ImportSource { ImportSource { + did: if path.def == Def::Err { + None + } else { + Some(register_def(cx, path.def)) + }, path: path, - did: resolve_def(cx, id), } } -fn resolve_def(cx: &DocContext, id: ast::NodeId) -> Option { - cx.tcx_opt().and_then(|tcx| { - tcx.expect_def_or_none(id).map(|def| register_def(cx, def)) - }) -} - #[derive(Clone, RustcEncodable, RustcDecodable, Debug)] pub struct Macro { pub source: String, @@ -2812,7 +2801,7 @@ impl Clean for doctree::Macro { visibility: Some(Public), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), - def_id: cx.map.local_def_id(self.id), + def_id: cx.tcx.map.local_def_id(self.id), inner: MacroItem(Macro { source: format!("macro_rules! 
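`name_from_pat` above has to invent a printable argument name because function parameters are patterns, not just identifiers: wildcards, bindings, tuples and reference patterns all need a textual form for the rendered signature. A simplified standalone version over a toy pattern type:

```rust
enum Pat {
    Wild,             // `_`
    Binding(String),  // `x`
    Tuple(Vec<Pat>),  // `(a, b)`
    Ref(Box<Pat>),    // `&x`
}

fn name_from_pat(p: &Pat) -> String {
    match *p {
        Pat::Wild => "_".to_string(),
        Pat::Binding(ref name) => name.clone(),
        Pat::Tuple(ref elts) => {
            let names: Vec<String> = elts.iter().map(name_from_pat).collect();
            format!("({})", names.join(", "))
        }
        Pat::Ref(ref inner) => format!("&{}", name_from_pat(inner)),
    }
}

fn main() {
    let tuple_arg = Pat::Tuple(vec![Pat::Binding("x".to_string()), Pat::Wild]);
    assert_eq!(name_from_pat(&tuple_arg), "(x, _)");

    let ref_arg = Pat::Ref(Box::new(Pat::Binding("y".to_string())));
    assert_eq!(name_from_pat(&ref_arg), "&y");
}
```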
{} {{\n{}}}", name, @@ -2831,7 +2820,8 @@ pub struct Stability { pub feature: String, pub since: String, pub deprecated_since: String, - pub reason: String, + pub deprecated_reason: String, + pub unstable_reason: String, pub issue: Option } @@ -2854,12 +2844,13 @@ impl Clean for attr::Stability { Some(attr::RustcDeprecation {ref since, ..}) => since.to_string(), _=> "".to_string(), }, - reason: { - match (&self.rustc_depr, &self.level) { - (&Some(ref depr), _) => depr.reason.to_string(), - (&None, &attr::Unstable {reason: Some(ref reason), ..}) => reason.to_string(), - _ => "".to_string(), - } + deprecated_reason: match self.rustc_depr { + Some(ref depr) => depr.reason.to_string(), + _ => "".to_string(), + }, + unstable_reason: match self.level { + attr::Unstable { reason: Some(ref reason), .. } => reason.to_string(), + _ => "".to_string(), }, issue: match self.level { attr::Unstable {issue, ..} => Some(issue), @@ -2884,79 +2875,6 @@ impl Clean for attr::Deprecation { } } -impl<'tcx> Clean for ty::AssociatedConst<'tcx> { - fn clean(&self, cx: &DocContext) -> Item { - Item { - source: DUMMY_SP.clean(cx), - name: Some(self.name.clean(cx)), - attrs: Vec::new(), - inner: AssociatedConstItem(self.ty.clean(cx), None), - visibility: None, - def_id: self.def_id, - stability: None, - deprecation: None, - } - } -} - -impl<'tcx> Clean for ty::AssociatedType<'tcx> { - fn clean(&self, cx: &DocContext) -> Item { - let my_name = self.name.clean(cx); - - let mut bounds = if let ty::TraitContainer(did) = self.container { - // When loading a cross-crate associated type, the bounds for this type - // are actually located on the trait/impl itself, so we need to load - // all of the generics from there and then look for bounds that are - // applied to this associated type in question. - let def = cx.tcx().lookup_trait_def(did); - let predicates = cx.tcx().lookup_predicates(did); - let generics = (def.generics, &predicates).clean(cx); - generics.where_predicates.iter().filter_map(|pred| { - let (name, self_type, trait_, bounds) = match *pred { - WherePredicate::BoundPredicate { - ty: QPath { ref name, ref self_type, ref trait_ }, - ref bounds - } => (name, self_type, trait_, bounds), - _ => return None, - }; - if *name != my_name { return None } - match **trait_ { - ResolvedPath { did, .. } if did == self.container.id() => {} - _ => return None, - } - match **self_type { - Generic(ref s) if *s == "Self" => {} - _ => return None, - } - Some(bounds) - }).flat_map(|i| i.iter().cloned()).collect::>() - } else { - vec![] - }; - - // Our Sized/?Sized bound didn't get handled when creating the generics - // because we didn't actually get our whole set of bounds until just now - // (some of them may have come from the trait). If we do have a sized - // bound, we remove it, and if we don't then we add the `?Sized` bound - // at the end. 
- match bounds.iter().position(|b| b.is_sized_bound(cx)) { - Some(i) => { bounds.remove(i); } - None => bounds.push(TyParamBound::maybe_sized(cx)), - } - - Item { - source: DUMMY_SP.clean(cx), - name: Some(self.name.clean(cx)), - attrs: inline::load_attrs(cx, cx.tcx(), self.def_id), - inner: AssociatedTypeItem(bounds, self.ty.clean(cx)), - visibility: self.vis.clean(cx), - def_id: self.def_id, - stability: cx.tcx().lookup_stability(self.def_id).clean(cx), - deprecation: cx.tcx().lookup_deprecation(self.def_id).clean(cx), - } - } -} - fn lang_struct(cx: &DocContext, did: Option, t: ty::Ty, name: &str, fallback: fn(Box) -> Type) -> Type { @@ -2970,6 +2888,7 @@ fn lang_struct(cx: &DocContext, did: Option, did: did, path: Path { global: false, + def: Def::Err, segments: vec![PathSegment { name: name.to_string(), params: PathParameters::AngleBracketed { diff --git a/src/librustdoc/clean/simplify.rs b/src/librustdoc/clean/simplify.rs index 15e042f8c0..7240f0aedb 100644 --- a/src/librustdoc/clean/simplify.rs +++ b/src/librustdoc/clean/simplify.rs @@ -153,7 +153,7 @@ fn trait_is_same_or_supertrait(cx: &DocContext, child: DefId, if child == trait_ { return true } - let predicates = cx.tcx().lookup_super_predicates(child).predicates; + let predicates = cx.tcx.item_super_predicates(child).predicates; predicates.iter().filter_map(|pred| { if let ty::Predicate::Trait(ref pred) = *pred { if pred.0.trait_ref.self_ty().is_self() { diff --git a/src/librustdoc/core.rs b/src/librustdoc/core.rs index f03b6a5ab3..df25473ddd 100644 --- a/src/librustdoc/core.rs +++ b/src/librustdoc/core.rs @@ -7,19 +7,18 @@ // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. -pub use self::MaybeTyped::*; use rustc_lint; use rustc_driver::{driver, target_features, abort_on_err}; use rustc::dep_graph::DepGraph; use rustc::session::{self, config}; use rustc::hir::def_id::DefId; -use rustc::hir::def::Def; +use rustc::hir::def::{Def, ExportMap}; use rustc::middle::privacy::AccessLevels; -use rustc::ty::{self, TyCtxt}; +use rustc::ty::{self, TyCtxt, Ty}; use rustc::hir::map as hir_map; use rustc::lint; -use rustc::util::nodemap::FnvHashMap; +use rustc::util::nodemap::{FxHashMap, NodeMap}; use rustc_trans::back::link; use rustc_resolve as resolve; use rustc_metadata::cstore::CStore; @@ -42,21 +41,11 @@ use html::render::RenderInfo; pub use rustc::session::config::Input; pub use rustc::session::search_paths::SearchPaths; -/// Are we generating documentation (`Typed`) or tests (`NotTyped`)? -pub enum MaybeTyped<'a, 'tcx: 'a> { - Typed(TyCtxt<'a, 'tcx, 'tcx>), - NotTyped(&'a session::Session) -} - -pub type ExternalPaths = FnvHashMap, clean::TypeKind)>; +pub type ExternalPaths = FxHashMap, clean::TypeKind)>; pub struct DocContext<'a, 'tcx: 'a> { - pub map: &'a hir_map::Map<'tcx>, - pub maybe_typed: MaybeTyped<'a, 'tcx>, - pub input: Input, + pub tcx: TyCtxt<'a, 'tcx, 'tcx>, pub populated_all_crate_impls: Cell, - pub deref_trait_did: Cell>, - pub deref_mut_trait_did: Cell>, // Note that external items for which `doc(hidden)` applies to are shown as // non-reachable while local items aren't. This is because we're reusing // the access levels from crateanalysis. 
@@ -65,42 +54,31 @@ pub struct DocContext<'a, 'tcx: 'a> { /// Later on moved into `html::render::CACHE_KEY` pub renderinfo: RefCell, /// Later on moved through `clean::Crate` into `html::render::CACHE_KEY` - pub external_traits: RefCell>, + pub external_traits: RefCell>, // The current set of type and lifetime substitutions, // for expanding type aliases at the HIR level: /// Table type parameter definition -> substituted type - pub ty_substs: RefCell>, + pub ty_substs: RefCell>, /// Table node id of lifetime parameter definition -> substituted lifetime - pub lt_substs: RefCell>, + pub lt_substs: RefCell>, + pub export_map: ExportMap, + + /// Table from HIR Ty nodes to their resolved Ty. + pub hir_ty_to_ty: NodeMap>, } -impl<'b, 'tcx> DocContext<'b, 'tcx> { - pub fn sess<'a>(&'a self) -> &'a session::Session { - match self.maybe_typed { - Typed(tcx) => &tcx.sess, - NotTyped(ref sess) => sess - } - } - - pub fn tcx_opt<'a>(&'a self) -> Option> { - match self.maybe_typed { - Typed(tcx) => Some(tcx), - NotTyped(_) => None - } - } - - pub fn tcx<'a>(&'a self) -> TyCtxt<'a, 'tcx, 'tcx> { - let tcx_opt = self.tcx_opt(); - tcx_opt.expect("tcx not present") +impl<'a, 'tcx> DocContext<'a, 'tcx> { + pub fn sess(&self) -> &session::Session { + &self.tcx.sess } /// Call the closure with the given parameters set as /// the substitutions for a type alias' RHS. pub fn enter_alias(&self, - ty_substs: FnvHashMap, - lt_substs: FnvHashMap, + ty_substs: FxHashMap, + lt_substs: FxHashMap, f: F) -> R where F: FnOnce() -> R { let (old_tys, old_lts) = @@ -196,7 +174,7 @@ pub fn run_core(search_paths: SearchPaths, sess.fatal("Compilation failed, aborting rustdoc"); } - let ty::CrateAnalysis { access_levels, .. } = analysis; + let ty::CrateAnalysis { access_levels, export_map, hir_ty_to_ty, .. 
} = analysis; // Convert from a NodeId set to a DefId set since we don't always have easy access // to the map from defid -> nodeid @@ -207,23 +185,21 @@ pub fn run_core(search_paths: SearchPaths, }; let ctxt = DocContext { - map: &tcx.map, - maybe_typed: Typed(tcx), - input: input, + tcx: tcx, populated_all_crate_impls: Cell::new(false), - deref_trait_did: Cell::new(None), - deref_mut_trait_did: Cell::new(None), access_levels: RefCell::new(access_levels), external_traits: Default::default(), renderinfo: Default::default(), ty_substs: Default::default(), lt_substs: Default::default(), + export_map: export_map, + hir_ty_to_ty: hir_ty_to_ty, }; - debug!("crate: {:?}", ctxt.map.krate()); + debug!("crate: {:?}", tcx.map.krate()); let krate = { let mut v = RustdocVisitor::new(&ctxt); - v.visit(ctxt.map.krate()); + v.visit(tcx.map.krate()); v.clean(&ctxt) }; diff --git a/src/librustdoc/doctree.rs b/src/librustdoc/doctree.rs index 609ae0c0e6..21fc135eaa 100644 --- a/src/librustdoc/doctree.rs +++ b/src/librustdoc/doctree.rs @@ -254,10 +254,12 @@ pub struct ExternCrate { } pub struct Import { + pub name: Name, pub id: NodeId, pub vis: hir::Visibility, pub attrs: hir::HirVec, - pub node: hir::ViewPath_, + pub path: hir::Path, + pub glob: bool, pub whence: Span, } diff --git a/src/librustdoc/html/format.rs b/src/librustdoc/html/format.rs index 625acce27b..6dc6e80dae 100644 --- a/src/librustdoc/html/format.rs +++ b/src/librustdoc/html/format.rs @@ -18,7 +18,7 @@ use std::fmt; use std::iter::repeat; -use rustc::hir::def_id::{DefId, LOCAL_CRATE}; +use rustc::hir::def_id::DefId; use syntax::abi::Abi; use rustc::hir; @@ -42,7 +42,7 @@ pub struct UnsafetySpace(pub hir::Unsafety); #[derive(Copy, Clone)] pub struct ConstnessSpace(pub hir::Constness); /// Wrapper struct for properly emitting a method declaration. -pub struct Method<'a>(pub &'a clean::FnDecl, pub &'a str); +pub struct Method<'a>(pub &'a clean::FnDecl, pub usize); /// Similar to VisSpace, but used for mutability #[derive(Copy, Clone)] pub struct MutableSpace(pub clean::Mutability); @@ -50,7 +50,7 @@ pub struct MutableSpace(pub clean::Mutability); #[derive(Copy, Clone)] pub struct RawMutableSpace(pub clean::Mutability); /// Wrapper struct for emitting a where clause from Generics. -pub struct WhereClause<'a>(pub &'a clean::Generics); +pub struct WhereClause<'a>(pub &'a clean::Generics, pub usize); /// Wrapper struct for emitting type parameter bounds. pub struct TyParamBounds<'a>(pub &'a [clean::TyParamBound]); /// Wrapper struct for emitting a comma-separated list of items @@ -157,52 +157,71 @@ impl fmt::Display for clean::Generics { impl<'a> fmt::Display for WhereClause<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - let &WhereClause(gens) = self; + let &WhereClause(gens, pad) = self; if gens.where_predicates.is_empty() { return Ok(()); } + let mut clause = String::new(); if f.alternate() { - f.write_str(" ")?; + clause.push_str(" where "); } else { - f.write_str(" where ")?; + clause.push_str(" where "); } for (i, pred) in gens.where_predicates.iter().enumerate() { if i > 0 { - f.write_str(", ")?; + if f.alternate() { + clause.push_str(", "); + } else { + clause.push_str(",
"); + } } match pred { &clean::WherePredicate::BoundPredicate { ref ty, ref bounds } => { let bounds = bounds; if f.alternate() { - write!(f, "{:#}: {:#}", ty, TyParamBounds(bounds))?; + clause.push_str(&format!("{:#}: {:#}", ty, TyParamBounds(bounds))); } else { - write!(f, "{}: {}", ty, TyParamBounds(bounds))?; + clause.push_str(&format!("{}: {}", ty, TyParamBounds(bounds))); } } &clean::WherePredicate::RegionPredicate { ref lifetime, ref bounds } => { - write!(f, "{}: ", lifetime)?; + clause.push_str(&format!("{}: ", lifetime)); for (i, lifetime) in bounds.iter().enumerate() { if i > 0 { - f.write_str(" + ")?; + clause.push_str(" + "); } - write!(f, "{}", lifetime)?; + clause.push_str(&format!("{}", lifetime)); } } &clean::WherePredicate::EqPredicate { ref lhs, ref rhs } => { if f.alternate() { - write!(f, "{:#} == {:#}", lhs, rhs)?; + clause.push_str(&format!("{:#} == {:#}", lhs, rhs)); } else { - write!(f, "{} == {}", lhs, rhs)?; + clause.push_str(&format!("{} == {}", lhs, rhs)); } } } } if !f.alternate() { - f.write_str("
")?; + clause.push_str("
"); + let plain = format!("{:#}", self); + if plain.len() > 80 { + //break it onto its own line regardless, but make sure method impls and trait + //blocks keep their fixed padding (2 and 9, respectively) + let padding = if pad > 10 { + clause = clause.replace("class='where'", "class='where fmt-newline'"); + repeat(" ").take(8).collect::() + } else { + repeat(" ").take(pad + 6).collect::() + }; + clause = clause.replace("
", &format!("
<br>{}", padding)); + } else { + clause = clause.replace("<br>
", " "); + } } - Ok(()) + write!(f, "{}", clause) } } @@ -384,9 +403,9 @@ pub fn href(did: DefId) -> Option<(String, ItemType, Vec)> { None => match cache.external_paths.get(&did) { Some(&(ref fqp, shortty)) => { (fqp, shortty, match cache.extern_locations[&did.krate] { - (_, render::Remote(ref s)) => s.to_string(), - (_, render::Local) => repeat("../").take(loc.len()).collect(), - (_, render::Unknown) => return None, + (.., render::Remote(ref s)) => s.to_string(), + (.., render::Local) => repeat("../").take(loc.len()).collect(), + (.., render::Unknown) => return None, }) } None => return None, @@ -460,7 +479,7 @@ fn primitive_link(f: &mut fmt::Formatter, let mut needs_termination = false; if !f.alternate() { match m.primitive_locations.get(&prim) { - Some(&LOCAL_CRATE) => { + Some(&def_id) if def_id.is_local() => { let len = CURRENT_LOCATION_KEY.with(|s| s.borrow().len()); let len = if len == 0 {0} else {len - 1}; write!(f, "
", @@ -468,14 +487,16 @@ fn primitive_link(f: &mut fmt::Formatter, prim.to_url_str())?; needs_termination = true; } - Some(&cnum) => { - let loc = match m.extern_locations[&cnum] { - (ref cname, render::Remote(ref s)) => Some((cname, s.to_string())), - (ref cname, render::Local) => { + Some(&def_id) => { + let loc = match m.extern_locations[&def_id.krate] { + (ref cname, _, render::Remote(ref s)) => { + Some((cname, s.to_string())) + } + (ref cname, _, render::Local) => { let len = CURRENT_LOCATION_KEY.with(|s| s.borrow().len()); Some((cname, repeat("../").take(len).collect::())) } - (_, render::Unknown) => None, + (.., render::Unknown) => None, }; if let Some((cname, root)) = loc { write!(f, "", @@ -718,30 +739,43 @@ impl fmt::Display for clean::Type { } fn fmt_impl(i: &clean::Impl, f: &mut fmt::Formatter, link_trait: bool) -> fmt::Result { + let mut plain = String::new(); + if f.alternate() { write!(f, "impl{:#} ", i.generics)?; } else { write!(f, "impl{} ", i.generics)?; } + plain.push_str(&format!("impl{:#} ", i.generics)); + if let Some(ref ty) = i.trait_ { - write!(f, "{}", - if i.polarity == Some(clean::ImplPolarity::Negative) { "!" } else { "" })?; + if i.polarity == Some(clean::ImplPolarity::Negative) { + write!(f, "!")?; + plain.push_str("!"); + } + if link_trait { fmt::Display::fmt(ty, f)?; + plain.push_str(&format!("{:#}", ty)); } else { match *ty { clean::ResolvedPath{ typarams: None, ref path, is_generic: false, .. } => { let last = path.segments.last().unwrap(); fmt::Display::fmt(&last.name, f)?; fmt::Display::fmt(&last.params, f)?; + plain.push_str(&format!("{:#}{:#}", last.name, last.params)); } _ => unreachable!(), } } write!(f, " for ")?; + plain.push_str(" for "); } + fmt::Display::fmt(&i.for_, f)?; - fmt::Display::fmt(&WhereClause(&i.generics), f)?; + plain.push_str(&format!("{:#}", i.for_)); + + fmt::Display::fmt(&WhereClause(&i.generics, plain.len() + 1), f)?; Ok(()) } @@ -870,24 +904,30 @@ impl<'a> fmt::Display for Method<'a> { let mut output: String; let plain: String; + let pad = repeat(" ").take(indent).collect::(); if arrow.is_empty() { output = format!("({})", args); - plain = format!("{}({})", indent.replace(" ", " "), args_plain); + plain = format!("{}({})", pad, args_plain); } else { output = format!("({args})
{arrow}", args = args, arrow = arrow); - plain = format!("{indent}({args}){arrow}", - indent = indent.replace(" ", " "), + plain = format!("{pad}({args}){arrow}", + pad = pad, args = args_plain, arrow = arrow_plain); } if plain.len() > 80 { - let pad = format!("
{}", indent); + let pad = repeat(" ").take(indent).collect::(); + let pad = format!("
{}", pad); output = output.replace("
", &pad); } else { output = output.replace("
", ""); } - write!(f, "{}", output) + if f.alternate() { + write!(f, "{}", output.replace("
", "\n")) + } else { + write!(f, "{}", output) + } } } @@ -931,16 +971,6 @@ impl fmt::Display for clean::Import { clean::Import::Glob(ref src) => { write!(f, "use {}::*;", *src) } - clean::Import::List(ref src, ref names) => { - write!(f, "use {}::{{", *src)?; - for (i, n) in names.iter().enumerate() { - if i > 0 { - write!(f, ", ")?; - } - write!(f, "{}", *n)?; - } - write!(f, "}};") - } } } } @@ -962,23 +992,6 @@ impl fmt::Display for clean::ImportSource { } } -impl fmt::Display for clean::ViewListIdent { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - match self.source { - Some(did) => { - let path = clean::Path::singleton(self.name.clone()); - resolved_path(f, did, &path, false)?; - } - _ => write!(f, "{}", self.name)?, - } - - if let Some(ref name) = self.rename { - write!(f, " as {}", name)?; - } - Ok(()) - } -} - impl fmt::Display for clean::TypeBinding { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { if f.alternate() { diff --git a/src/librustdoc/html/render.rs b/src/librustdoc/html/render.rs index a848a011f8..ac336fe45e 100644 --- a/src/librustdoc/html/render.rs +++ b/src/librustdoc/html/render.rs @@ -40,7 +40,7 @@ use std::cmp::Ordering; use std::collections::BTreeMap; use std::default::Default; use std::error; -use std::fmt::{self, Display, Formatter}; +use std::fmt::{self, Display, Formatter, Write as FmtWrite}; use std::fs::{self, File, OpenOptions}; use std::io::prelude::*; use std::io::{self, BufWriter, BufReader}; @@ -53,16 +53,16 @@ use std::sync::Arc; use externalfiles::ExternalHtml; use serialize::json::{ToJson, Json, as_json}; -use syntax::abi; +use syntax::{abi, ast}; use syntax::feature_gate::UnstableFeatures; -use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefId, LOCAL_CRATE}; +use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefId}; use rustc::middle::privacy::AccessLevels; use rustc::middle::stability; use rustc::hir; -use rustc::util::nodemap::{FnvHashMap, FnvHashSet}; +use rustc::util::nodemap::{FxHashMap, FxHashSet}; use rustc_data_structures::flock; -use clean::{self, Attributes, GetDefId, SelfTy, Mutability}; +use clean::{self, AttributesExt, GetDefId, SelfTy, Mutability}; use doctree; use fold::DocFolder; use html::escape::Escape; @@ -111,9 +111,9 @@ pub struct SharedContext { /// `true`. pub include_sources: bool, /// The local file sources we've emitted and their respective url-paths. - pub local_sources: FnvHashMap, + pub local_sources: FxHashMap, /// All the passes that were run on this crate. - pub passes: FnvHashSet, + pub passes: FxHashSet, /// The base-URL of the issue tracker for when an item has been tagged with /// an issue number. pub issue_tracker_base_url: Option, @@ -208,7 +208,7 @@ pub struct Cache { /// Mapping of typaram ids to the name of the type parameter. This is used /// when pretty-printing a type (so pretty printing doesn't have to /// painfully maintain a context like this) - pub typarams: FnvHashMap, + pub typarams: FxHashMap, /// Maps a type id to all known implementations for that type. This is only /// recognized for intra-crate `ResolvedPath` types, and is used to print @@ -216,35 +216,35 @@ pub struct Cache { /// /// The values of the map are a list of implementations and documentation /// found on that implementation. - pub impls: FnvHashMap>, + pub impls: FxHashMap>, /// Maintains a mapping of local crate node ids to the fully qualified name /// and "short type description" of that node. This is used when generating /// URLs when a type is being linked to. 
External paths are not located in /// this map because the `External` type itself has all the information /// necessary. - pub paths: FnvHashMap, ItemType)>, + pub paths: FxHashMap, ItemType)>, /// Similar to `paths`, but only holds external paths. This is only used for /// generating explicit hyperlinks to other crates. - pub external_paths: FnvHashMap, ItemType)>, + pub external_paths: FxHashMap, ItemType)>, /// This map contains information about all known traits of this crate. /// Implementations of a crate should inherit the documentation of the /// parent trait if no extra documentation is specified, and default methods /// should show up in documentation about trait implementations. - pub traits: FnvHashMap, + pub traits: FxHashMap, /// When rendering traits, it's often useful to be able to list all /// implementors of the trait, and this mapping is exactly, that: a mapping /// of trait ids to the list of known implementors of the trait - pub implementors: FnvHashMap>, + pub implementors: FxHashMap>, /// Cache of where external crate documentation can be found. - pub extern_locations: FnvHashMap, + pub extern_locations: FxHashMap, /// Cache of where documentation for primitives can be found. - pub primitive_locations: FnvHashMap, + pub primitive_locations: FxHashMap, // Note that external items for which `doc(hidden)` applies to are shown as // non-reachable while local items aren't. This is because we're reusing @@ -257,8 +257,6 @@ pub struct Cache { parent_stack: Vec, parent_is_trait_impl: bool, search_index: Vec, - seen_modules: FnvHashSet, - seen_mod: bool, stripped_mod: bool, deref_trait_did: Option, deref_mut_trait_did: Option, @@ -275,9 +273,9 @@ pub struct Cache { /// Later on moved into `CACHE_KEY`. #[derive(Default)] pub struct RenderInfo { - pub inlined: FnvHashSet, + pub inlined: FxHashSet, pub external_paths: ::core::ExternalPaths, - pub external_typarams: FnvHashMap, + pub external_typarams: FxHashMap, pub deref_trait_did: Option, pub deref_mut_trait_did: Option, } @@ -376,10 +374,10 @@ impl ToJson for IndexItemFunctionType { thread_local!(static CACHE_KEY: RefCell> = Default::default()); thread_local!(pub static CURRENT_LOCATION_KEY: RefCell> = RefCell::new(Vec::new())); -thread_local!(static USED_ID_MAP: RefCell> = +thread_local!(static USED_ID_MAP: RefCell> = RefCell::new(init_ids())); -fn init_ids() -> FnvHashMap { +fn init_ids() -> FxHashMap { [ "main", "search", @@ -406,7 +404,7 @@ pub fn reset_ids(embedded: bool) { *s.borrow_mut() = if embedded { init_ids() } else { - FnvHashMap() + FxHashMap() }; }); } @@ -430,8 +428,9 @@ pub fn derive_id(candidate: String) -> String { /// Generates the documentation for `crate` into the directory `dst` pub fn run(mut krate: clean::Crate, external_html: &ExternalHtml, + playground_url: Option, dst: PathBuf, - passes: FnvHashSet, + passes: FxHashSet, css_file_extension: Option, renderinfo: RenderInfo) -> Result<(), Error> { let src_root = match krate.src.parent() { @@ -442,7 +441,7 @@ pub fn run(mut krate: clean::Crate, src_root: src_root, passes: passes, include_sources: true, - local_sources: FnvHashMap(), + local_sources: FxHashMap(), issue_tracker_base_url: None, layout: layout::Layout { logo: "".to_string(), @@ -453,34 +452,35 @@ pub fn run(mut krate: clean::Crate, css_file_extension: css_file_extension.clone(), }; + // If user passed in `--playground-url` arg, we fill in crate name here + if let Some(url) = playground_url { + markdown::PLAYGROUND.with(|slot| { + *slot.borrow_mut() = Some((Some(krate.name.clone()), url)); + 
}); + } + // Crawl the crate attributes looking for attributes which control how we're // going to emit HTML - if let Some(attrs) = krate.module.as_ref().map(|m| m.attrs.list("doc")) { - for attr in attrs { - match *attr { - clean::NameValue(ref x, ref s) - if "html_favicon_url" == *x => { + if let Some(attrs) = krate.module.as_ref().map(|m| &m.attrs) { + for attr in attrs.lists("doc") { + let name = attr.name().map(|s| s.as_str()); + match (name.as_ref().map(|s| &s[..]), attr.value_str()) { + (Some("html_favicon_url"), Some(s)) => { scx.layout.favicon = s.to_string(); } - clean::NameValue(ref x, ref s) - if "html_logo_url" == *x => { + (Some("html_logo_url"), Some(s)) => { scx.layout.logo = s.to_string(); } - clean::NameValue(ref x, ref s) - if "html_playground_url" == *x => { + (Some("html_playground_url"), Some(s)) => { markdown::PLAYGROUND.with(|slot| { - if slot.borrow().is_none() { - let name = krate.name.clone(); - *slot.borrow_mut() = Some((Some(name), s.clone())); - } + let name = krate.name.clone(); + *slot.borrow_mut() = Some((Some(name), s.to_string())); }); } - clean::NameValue(ref x, ref s) - if "issue_tracker_base_url" == *x => { + (Some("issue_tracker_base_url"), Some(s)) => { scx.issue_tracker_base_url = Some(s.to_string()); } - clean::Word(ref x) - if "html_no_source" == *x => { + (Some("html_no_source"), None) if attr.is_word() => { scx.include_sources = false; } _ => {} @@ -510,22 +510,20 @@ pub fn run(mut krate: clean::Crate, .collect(); let mut cache = Cache { - impls: FnvHashMap(), + impls: FxHashMap(), external_paths: external_paths, - paths: FnvHashMap(), - implementors: FnvHashMap(), + paths: FxHashMap(), + implementors: FxHashMap(), stack: Vec::new(), parent_stack: Vec::new(), search_index: Vec::new(), parent_is_trait_impl: false, - extern_locations: FnvHashMap(), - primitive_locations: FnvHashMap(), - seen_modules: FnvHashSet(), - seen_mod: false, + extern_locations: FxHashMap(), + primitive_locations: FxHashMap(), stripped_mod: false, access_levels: krate.access_levels.clone(), orphan_impl_items: Vec::new(), - traits: mem::replace(&mut krate.external_traits, FnvHashMap()), + traits: mem::replace(&mut krate.external_traits, FxHashMap()), deref_trait_did: deref_trait_did, deref_mut_trait_did: deref_mut_trait_did, typarams: external_typarams, @@ -533,8 +531,13 @@ pub fn run(mut krate: clean::Crate, // Cache where all our extern crates are located for &(n, ref e) in &krate.externs { - cache.extern_locations.insert(n, (e.name.clone(), + let src_root = match Path::new(&e.src).parent() { + Some(p) => p.to_path_buf(), + None => PathBuf::new(), + }; + cache.extern_locations.insert(n, (e.name.clone(), src_root, extern_location(e, &cx.dst))); + let did = DefId { krate: n, index: CRATE_DEF_INDEX }; cache.external_paths.insert(did, (vec![e.name.to_string()], ItemType::Module)); } @@ -543,13 +546,13 @@ pub fn run(mut krate: clean::Crate, // // Favor linking to as local extern as possible, so iterate all crates in // reverse topological order. 
- for &(n, ref e) in krate.externs.iter().rev() { - for &prim in &e.primitives { - cache.primitive_locations.insert(prim, n); + for &(_, ref e) in krate.externs.iter().rev() { + for &(def_id, prim, _) in &e.primitives { + cache.primitive_locations.insert(prim, def_id); } } - for &prim in &krate.primitives { - cache.primitive_locations.insert(prim, LOCAL_CRATE); + for &(def_id, prim, _) in &krate.primitives { + cache.primitive_locations.insert(prim, def_id); } cache.stack.push(krate.name.clone()); @@ -572,7 +575,7 @@ pub fn run(mut krate: clean::Crate, /// Build the search index from the collected metadata fn build_index(krate: &clean::Crate, cache: &mut Cache) -> String { - let mut nodeid_to_pathid = FnvHashMap(); + let mut nodeid_to_pathid = FxHashMap(); let mut crate_items = Vec::with_capacity(cache.search_index.len()); let mut crate_paths = Vec::::new(); @@ -723,10 +726,13 @@ fn write_shared(cx: &Context, // Update the search index let dst = cx.dst.join("search-index.js"); - let all_indexes = try_err!(collect(&dst, &krate.name, "searchIndex"), &dst); + let mut all_indexes = try_err!(collect(&dst, &krate.name, "searchIndex"), &dst); + all_indexes.push(search_index); + // Sort the indexes by crate so the file will be generated identically even + // with rustdoc running in parallel. + all_indexes.sort(); let mut w = try_err!(File::create(&dst), &dst); try_err!(writeln!(&mut w, "var searchIndex = {{}};"), &dst); - try_err!(writeln!(&mut w, "{}", search_index), &dst); for index in &all_indexes { try_err!(writeln!(&mut w, "{}", *index), &dst); } @@ -734,7 +740,6 @@ fn write_shared(cx: &Context, // Update the list of all implementors for traits let dst = cx.dst.join("implementors"); - try_err!(mkdir(&dst), &dst); for (&did, imps) in &cache.implementors { // Private modules can leak through to this phase of rustdoc, which // could contain implementations for otherwise private types. In some @@ -751,37 +756,37 @@ fn write_shared(cx: &Context, } }; - let mut mydst = dst.clone(); - for part in &remote_path[..remote_path.len() - 1] { - mydst.push(part); - try_err!(mkdir(&mydst), &mydst); - } - mydst.push(&format!("{}.{}.js", - remote_item_type.css_class(), - remote_path[remote_path.len() - 1])); - let all_implementors = try_err!(collect(&mydst, &krate.name, - "implementors"), - &mydst); - - try_err!(mkdir(mydst.parent().unwrap()), - &mydst.parent().unwrap().to_path_buf()); - let mut f = BufWriter::new(try_err!(File::create(&mydst), &mydst)); - try_err!(writeln!(&mut f, "(function() {{var implementors = {{}};"), &mydst); - - for implementor in &all_implementors { - try_err!(write!(&mut f, "{}", *implementor), &mydst); - } - - try_err!(write!(&mut f, r#"implementors["{}"] = ["#, krate.name), &mydst); + let mut implementors = format!(r#"implementors["{}"] = ["#, krate.name); for imp in imps { // If the trait and implementation are in the same crate, then // there's no need to emit information about it (there's inlining // going on). If they're in different crates then the crate defining // the trait will be interested in our implementation. 
if imp.def_id.krate == did.krate { continue } - try_err!(write!(&mut f, r#""{}","#, imp.impl_), &mydst); + write!(implementors, r#""{}","#, imp.impl_).unwrap(); + } + implementors.push_str("];"); + + let mut mydst = dst.clone(); + for part in &remote_path[..remote_path.len() - 1] { + mydst.push(part); + } + try_err!(fs::create_dir_all(&mydst), &mydst); + mydst.push(&format!("{}.{}.js", + remote_item_type.css_class(), + remote_path[remote_path.len() - 1])); + + let mut all_implementors = try_err!(collect(&mydst, &krate.name, "implementors"), &mydst); + all_implementors.push(implementors); + // Sort the implementors by crate so the file will be generated + // identically even with rustdoc running in parallel. + all_implementors.sort(); + + let mut f = try_err!(File::create(&mydst), &mydst); + try_err!(writeln!(&mut f, "(function() {{var implementors = {{}};"), &mydst); + for implementor in &all_implementors { + try_err!(writeln!(&mut f, "{}", *implementor), &mydst); } - try_err!(writeln!(&mut f, r"];"), &mydst); try_err!(writeln!(&mut f, "{}", r" if (window.register_implementors) { window.register_implementors(implementors); @@ -866,24 +871,29 @@ fn extern_location(e: &clean::ExternalCrate, dst: &Path) -> ExternalLocation { // Failing that, see if there's an attribute specifying where to find this // external crate - e.attrs.list("doc").value("html_root_url").map(|url| { - let mut url = url.to_owned(); + e.attrs.lists("doc") + .filter(|a| a.check_name("html_root_url")) + .filter_map(|a| a.value_str()) + .map(|url| { + let mut url = url.to_string(); if !url.ends_with("/") { url.push('/') } Remote(url) - }).unwrap_or(Unknown) // Well, at least we tried. + }).next().unwrap_or(Unknown) // Well, at least we tried. } impl<'a> DocFolder for SourceCollector<'a> { fn fold_item(&mut self, item: clean::Item) -> Option { // If we're including source files, and we haven't seen this file yet, - // then we need to render it out to the filesystem + // then we need to render it out to the filesystem. if self.scx.include_sources // skip all invalid spans && item.source.filename != "" - // macros from other libraries get special filenames which we can - // safely ignore + // skip non-local items + && item.def_id.is_local() + // Macros from other libraries get special filenames which we can + // safely ignore. && !(item.source.filename.starts_with("<") && item.source.filename.ends_with("macros>")) { @@ -977,41 +987,30 @@ impl DocFolder for Cache { _ => self.stripped_mod, }; - // Inlining can cause us to visit the same item multiple times. - // (i.e. relevant for gathering impls and implementors) - let orig_seen_mod = if item.is_mod() { - let seen_this = self.seen_mod || !self.seen_modules.insert(item.def_id); - mem::replace(&mut self.seen_mod, seen_this) - } else { - self.seen_mod - }; - // Register any generics to their corresponding string. This is used - // when pretty-printing types + // when pretty-printing types. if let Some(generics) = item.inner.generics() { self.generics(generics); } - if !self.seen_mod { - // Propagate a trait methods' documentation to all implementors of the - // trait - if let clean::TraitItem(ref t) = item.inner { - self.traits.insert(item.def_id, t.clone()); - } + // Propagate a trait method's documentation to all implementors of the + // trait. + if let clean::TraitItem(ref t) = item.inner { + self.traits.entry(item.def_id).or_insert_with(|| t.clone()); + } - // Collect all the implementors of traits. 
- if let clean::ImplItem(ref i) = item.inner { - if let Some(did) = i.trait_.def_id() { - self.implementors.entry(did).or_insert(vec![]).push(Implementor { - def_id: item.def_id, - stability: item.stability.clone(), - impl_: i.clone(), - }); - } + // Collect all the implementors of traits. + if let clean::ImplItem(ref i) = item.inner { + if let Some(did) = i.trait_.def_id() { + self.implementors.entry(did).or_insert(vec![]).push(Implementor { + def_id: item.def_id, + stability: item.stability.clone(), + impl_: i.clone(), + }); } } - // Index this method for searching later on + // Index this method for searching later on. if let Some(ref s) = item.name { let (parent, is_inherent_impl_item) = match item.inner { clean::StrippedItem(..) => ((None, None), false), @@ -1112,8 +1111,8 @@ impl DocFolder for Cache { (self.stack.clone(), item.type_())); } } - // link variants to their parent enum because pages aren't emitted - // for each variant + // Link variants to their parent enum because pages aren't emitted + // for each variant. clean::VariantItem(..) if !self.stripped_mod => { let mut stack = self.stack.clone(); stack.pop(); @@ -1145,13 +1144,15 @@ impl DocFolder for Cache { true } ref t => { - match t.primitive_type() { - Some(prim) => { - let did = DefId::local(prim.to_def_index()); + let prim_did = t.primitive_type().and_then(|t| { + self.primitive_locations.get(&t).cloned() + }); + match prim_did { + Some(did) => { self.parent_stack.push(did); true } - _ => false, + None => false, } } } @@ -1159,8 +1160,8 @@ impl DocFolder for Cache { _ => false }; - // Once we've recursively found all the generics, then hoard off all the - // implementations elsewhere + // Once we've recursively found all the generics, hoard off all the + // implementations elsewhere. let ret = self.fold_item_recur(item).and_then(|item| { if let clean::Item { inner: clean::ImplItem(_), .. } = item { // Figure out the id of this impl. This may map to a @@ -1176,22 +1177,17 @@ impl DocFolder for Cache { } ref t => { t.primitive_type().and_then(|t| { - self.primitive_locations.get(&t).map(|n| { - let id = t.to_def_index(); - DefId { krate: *n, index: id } - }) + self.primitive_locations.get(&t).cloned() }) } } } else { unreachable!() }; - if !self.seen_mod { - if let Some(did) = did { - self.impls.entry(did).or_insert(vec![]).push(Impl { - impl_item: item, - }); - } + if let Some(did) = did { + self.impls.entry(did).or_insert(vec![]).push(Impl { + impl_item: item, + }); } None } else { @@ -1201,7 +1197,6 @@ impl DocFolder for Cache { if pushed { self.stack.pop().unwrap(); } if parent_pushed { self.parent_stack.pop().unwrap(); } - self.seen_mod = orig_seen_mod; self.stripped_mod = orig_stripped_mod; self.parent_is_trait_impl = orig_parent_is_trait_impl; ret @@ -1224,7 +1219,7 @@ impl Context { } /// Recurse in the directory structure and change the "root path" to make - /// sure it always points to the top (relatively) + /// sure it always points to the top (relatively). 
fn recurse(&mut self, s: String, f: F) -> T where F: FnOnce(&mut Context) -> T, { @@ -1255,11 +1250,11 @@ impl Context { fn krate(self, mut krate: clean::Crate) -> Result<(), Error> { let mut item = match krate.module.take() { Some(i) => i, - None => return Ok(()) + None => return Ok(()), }; item.name = Some(krate.name); - // render the crate documentation + // Render the crate documentation let mut work = vec![(self, item)]; while let Some((mut cx, item)) = work.pop() { @@ -1460,79 +1455,57 @@ impl<'a> Item<'a> { /// If `None` is returned, then a source link couldn't be generated. This /// may happen, for example, with externally inlined items where the source /// of their crate documentation isn't known. - fn href(&self) -> Option { - let href = if self.item.source.loline == self.item.source.hiline { + fn src_href(&self) -> Option { + let mut root = self.cx.root_path(); + + let cache = cache(); + let mut path = String::new(); + let (krate, path) = if self.item.def_id.is_local() { + let path = PathBuf::from(&self.item.source.filename); + if let Some(path) = self.cx.shared.local_sources.get(&path) { + (&self.cx.shared.layout.krate, path) + } else { + return None; + } + } else { + // Macros from other libraries get special filenames which we can + // safely ignore. + if self.item.source.filename.starts_with("<") && + self.item.source.filename.ends_with("macros>") { + return None; + } + + let (krate, src_root) = match cache.extern_locations.get(&self.item.def_id.krate) { + Some(&(ref name, ref src, Local)) => (name, src), + Some(&(ref name, ref src, Remote(ref s))) => { + root = s.to_string(); + (name, src) + } + Some(&(_, _, Unknown)) | None => return None, + }; + + let file = Path::new(&self.item.source.filename); + clean_srcpath(&src_root, file, false, |component| { + path.push_str(component); + path.push('/'); + }); + let mut fname = file.file_name().expect("source has no filename") + .to_os_string(); + fname.push(".html"); + path.push_str(&fname.to_string_lossy()); + (krate, &path) + }; + + let lines = if self.item.source.loline == self.item.source.hiline { format!("{}", self.item.source.loline) } else { format!("{}-{}", self.item.source.loline, self.item.source.hiline) }; - - // First check to see if this is an imported macro source. In this case - // we need to handle it specially as cross-crate inlined macros have... - // odd locations! - let imported_macro_from = match self.item.inner { - clean::MacroItem(ref m) => m.imported_from.as_ref(), - _ => None, - }; - if let Some(krate) = imported_macro_from { - let cache = cache(); - let root = cache.extern_locations.values().find(|&&(ref n, _)| { - *krate == *n - }).map(|l| &l.1); - let root = match root { - Some(&Remote(ref s)) => s.to_string(), - Some(&Local) => self.cx.root_path(), - None | Some(&Unknown) => return None, - }; - Some(format!("{root}/{krate}/macro.{name}.html?gotomacrosrc=1", - root = root, - krate = krate, - name = self.item.name.as_ref().unwrap())) - - // If this item is part of the local crate, then we're guaranteed to - // know the span, so we plow forward and generate a proper url. The url - // has anchors for the line numbers that we're linking to. 
- } else if self.item.def_id.is_local() { - let path = PathBuf::from(&self.item.source.filename); - self.cx.shared.local_sources.get(&path).map(|path| { - format!("{root}src/{krate}/{path}#{href}", - root = self.cx.root_path(), - krate = self.cx.shared.layout.krate, - path = path, - href = href) - }) - // If this item is not part of the local crate, then things get a little - // trickier. We don't actually know the span of the external item, but - // we know that the documentation on the other end knows the span! - // - // In this case, we generate a link to the *documentation* for this type - // in the original crate. There's an extra URL parameter which says that - // we want to go somewhere else, and the JS on the destination page will - // pick it up and instantly redirect the browser to the source code. - // - // If we don't know where the external documentation for this crate is - // located, then we return `None`. - } else { - let cache = cache(); - let external_path = match cache.external_paths.get(&self.item.def_id) { - Some(&(ref path, _)) => path, - None => return None, - }; - let mut path = match cache.extern_locations.get(&self.item.def_id.krate) { - Some(&(_, Remote(ref s))) => s.to_string(), - Some(&(_, Local)) => self.cx.root_path(), - Some(&(_, Unknown)) => return None, - None => return None, - }; - for item in &external_path[..external_path.len() - 1] { - path.push_str(item); - path.push_str("/"); - } - Some(format!("{path}{file}?gotosrc={goto}", - path = path, - file = item_path(self.item.type_(), external_path.last().unwrap()), - goto = self.item.def_id.index.as_usize())) - } + Some(format!("{root}src/{krate}/{path}#{lines}", + root = root, + krate = krate, + path = path, + lines = lines)) } } @@ -1597,10 +1570,9 @@ impl<'a> fmt::Display for Item<'a> { // this page, and this link will be auto-clicked. The `id` attribute is // used to find the link to auto-click. if self.cx.shared.include_sources && !self.item.is_primitive() { - if let Some(l) = self.href() { - write!(fmt, "
[src]", - self.item.def_id.index.as_usize(), l, "goto source code")?; + if let Some(l) = self.src_href() { + write!(fmt, "[src]", + l, "goto source code")?; } } @@ -1828,11 +1800,19 @@ fn item_module(w: &mut fmt::Formatter, cx: &Context, } else { String::new() }; + + let mut unsafety_flag = ""; + if let clean::FunctionItem(ref func) = myitem.inner { + if func.unsafety == hir::Unsafety::Unsafe { + unsafety_flag = ""; + } + } + let doc_value = myitem.doc_value().unwrap_or(""); write!(w, " {name} + title='{title}'>{name}{unsafety_flag} {stab_docs} {docs} @@ -1842,6 +1822,7 @@ fn item_module(w: &mut fmt::Formatter, cx: &Context, docs = shorter(Some(&Markdown(doc_value).to_string())), class = myitem.type_(), stab = myitem.stability_class(), + unsafety_flag = unsafety_flag, href = item_path(myitem.type_(), myitem.name.as_ref().unwrap()), title = full_path(cx, myitem))?; } @@ -1869,8 +1850,8 @@ fn short_stability(item: &clean::Item, cx: &Context, show_reason: bool) -> Vec Vec{}", text)) }; @@ -1900,7 +1881,12 @@ fn short_stability(item: &clean::Item, cx: &Context, show_reason: bool) -> Vec{}", text)) }; } else if let Some(depr) = item.deprecation.as_ref() { @@ -1963,14 +1949,13 @@ fn item_function(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, UnstableFeatures::Allow => f.constness, _ => hir::Constness::NotConst }; - let prefix = format!("{}{}{}{:#}fn {}{:#}", + let indent = format!("{}{}{}{:#}fn {}{:#}", VisSpace(&it.visibility), ConstnessSpace(vis_constness), UnsafetySpace(f.unsafety), AbiSpace(f.abi), it.name.as_ref().unwrap(), - f.generics); - let indent = repeat(" ").take(prefix.len()).collect::(); + f.generics).len(); write!(w, "

{vis}{constness}{unsafety}{abi}fn \
                {name}{generics}{decl}{where_clause}
", vis = VisSpace(&it.visibility), @@ -1979,22 +1964,29 @@ fn item_function(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, abi = AbiSpace(f.abi), name = it.name.as_ref().unwrap(), generics = f.generics, - where_clause = WhereClause(&f.generics), - decl = Method(&f.decl, &indent))?; + where_clause = WhereClause(&f.generics, 2), + decl = Method(&f.decl, indent))?; document(w, cx, it) } fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, t: &clean::Trait) -> fmt::Result { let mut bounds = String::new(); + let mut bounds_plain = String::new(); if !t.bounds.is_empty() { if !bounds.is_empty() { bounds.push(' '); + bounds_plain.push(' '); } bounds.push_str(": "); + bounds_plain.push_str(": "); for (i, p) in t.bounds.iter().enumerate() { - if i > 0 { bounds.push_str(" + "); } + if i > 0 { + bounds.push_str(" + "); + bounds_plain.push_str(" + "); + } bounds.push_str(&format!("{}", *p)); + bounds_plain.push_str(&format!("{:#}", *p)); } } @@ -2005,7 +1997,8 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, it.name.as_ref().unwrap(), t.generics, bounds, - WhereClause(&t.generics))?; + // Where clauses in traits are indented nine spaces, per rustdoc.css + WhereClause(&t.generics, 9))?; let types = t.items.iter().filter(|m| m.is_associated_type()).collect::>(); let consts = t.items.iter().filter(|m| m.is_associated_const()).collect::>(); @@ -2019,7 +2012,7 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, write!(w, "{{\n")?; for t in &types { write!(w, " ")?; - render_assoc_item(w, t, AssocItemLink::Anchor(None))?; + render_assoc_item(w, t, AssocItemLink::Anchor(None), ItemType::Trait)?; write!(w, ";\n")?; } if !types.is_empty() && !consts.is_empty() { @@ -2027,7 +2020,7 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, } for t in &consts { write!(w, " ")?; - render_assoc_item(w, t, AssocItemLink::Anchor(None))?; + render_assoc_item(w, t, AssocItemLink::Anchor(None), ItemType::Trait)?; write!(w, ";\n")?; } if !consts.is_empty() && !required.is_empty() { @@ -2035,7 +2028,7 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, } for m in &required { write!(w, " ")?; - render_assoc_item(w, m, AssocItemLink::Anchor(None))?; + render_assoc_item(w, m, AssocItemLink::Anchor(None), ItemType::Trait)?; write!(w, ";\n")?; } if !required.is_empty() && !provided.is_empty() { @@ -2043,7 +2036,7 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, } for m in &provided { write!(w, " ")?; - render_assoc_item(w, m, AssocItemLink::Anchor(None))?; + render_assoc_item(w, m, AssocItemLink::Anchor(None), ItemType::Trait)?; write!(w, " {{ ... 
}}\n")?; } write!(w, "}}")?; @@ -2064,7 +2057,7 @@ fn item_trait(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, id = id, stab = m.stability_class(), ns_id = ns_id)?; - render_assoc_item(w, m, AssocItemLink::Anchor(Some(&id)))?; + render_assoc_item(w, m, AssocItemLink::Anchor(Some(&id)), ItemType::Impl)?; write!(w, "")?; render_stability_since(w, m, t)?; write!(w, "")?; @@ -2218,7 +2211,8 @@ fn render_stability_since(w: &mut fmt::Formatter, fn render_assoc_item(w: &mut fmt::Formatter, item: &clean::Item, - link: AssocItemLink) -> fmt::Result { + link: AssocItemLink, + parent: ItemType) -> fmt::Result { fn method(w: &mut fmt::Formatter, meth: &clean::Item, unsafety: hir::Unsafety, @@ -2226,7 +2220,8 @@ fn render_assoc_item(w: &mut fmt::Formatter, abi: abi::Abi, g: &clean::Generics, d: &clean::FnDecl, - link: AssocItemLink) + link: AssocItemLink, + parent: ItemType) -> fmt::Result { let name = meth.name.as_ref().unwrap(); let anchor = format!("#{}.{}", meth.type_(), name); @@ -2256,7 +2251,16 @@ fn render_assoc_item(w: &mut fmt::Formatter, AbiSpace(abi), name, *g); - let indent = repeat(" ").take(prefix.len()).collect::(); + let mut indent = prefix.len(); + let where_indent = if parent == ItemType::Trait { + indent += 4; + 8 + } else if parent == ItemType::Impl { + 2 + } else { + let prefix = prefix + &format!("{:#}", Method(d, indent)); + prefix.lines().last().unwrap().len() + 1 + }; write!(w, "{}{}{}fn {name}\ {generics}{decl}{where_clause}", ConstnessSpace(vis_constness), @@ -2265,19 +2269,18 @@ fn render_assoc_item(w: &mut fmt::Formatter, href = href, name = name, generics = *g, - decl = Method(d, &indent), - where_clause = WhereClause(g)) + decl = Method(d, indent), + where_clause = WhereClause(g, where_indent)) } match item.inner { clean::StrippedItem(..) => Ok(()), clean::TyMethodItem(ref m) => { method(w, item, m.unsafety, hir::Constness::NotConst, - m.abi, &m.generics, &m.decl, link) + m.abi, &m.generics, &m.decl, link, parent) } clean::MethodItem(ref m) => { method(w, item, m.unsafety, m.constness, - m.abi, &m.generics, &m.decl, - link) + m.abi, &m.generics, &m.decl, link, parent) } clean::AssociatedConstItem(ref ty, ref default) => { assoc_const(w, item, ty, default.as_ref(), link) @@ -2374,11 +2377,15 @@ fn item_enum(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, e: &clean::Enum) -> fmt::Result { write!(w, "
")?;
     render_attributes(w, it)?;
+    let padding = format!("{}enum {}{:#} ",
+                          VisSpace(&it.visibility),
+                          it.name.as_ref().unwrap(),
+                          e.generics).len();
     write!(w, "{}enum {}{}{}",
            VisSpace(&it.visibility),
            it.name.as_ref().unwrap(),
            e.generics,
-           WhereClause(&e.generics))?;
+           WhereClause(&e.generics, padding))?;
     if e.variants.is_empty() && !e.variants_stripped {
         write!(w, " {{}}")?;
     } else {
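
As an aside on the hunk above: the new `padding` value measures the plain-text width of the `pub enum Name<...>` prefix so that `WhereClause` can decide whether to keep the `where` clause inline or break it onto its own indented line (the `plain.len() > 80` check added in `format.rs`). Below is a rough standalone sketch of that layout rule, not part of the patch; the helper names and the plain-string signature are hypothetical, only the "measure the prefix, wrap past 80 columns" behaviour mirrors the change.

```rust
use std::iter::repeat;

// Decide whether a `where` clause fits after a prefix of width `pad`,
// or must be wrapped onto its own line indented to that width.
fn wrap_where_clause(clause: &str, pad: usize, max_width: usize) -> String {
    if pad + 1 + clause.len() <= max_width {
        // Fits on the same line: separate it with a single space.
        format!(" {}", clause)
    } else {
        // Too long: break onto its own line, indented to the prefix width.
        let indent: String = repeat(" ").take(pad).collect();
        format!("\n{}{}", indent, clause)
    }
}

fn render_enum_header(vis: &str, name: &str, generics: &str, where_clause: &str) -> String {
    // Mirrors the `format!("{}enum {}{:#} ", ...).len()` measurement above,
    // but over plain strings instead of rustdoc's clean::* types.
    let prefix = format!("{}enum {}{}", vis, name, generics);
    format!("{}{}", prefix, wrap_where_clause(where_clause, prefix.len() + 1, 80))
}

fn main() {
    // Short clause stays inline; a clause pushing past 80 columns would be
    // emitted on a new line padded to the prefix width instead.
    println!("{}", render_enum_header("pub ", "Either", "<L, R>", "where L: Clone, R: Clone"));
}
```
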
@@ -2422,7 +2429,6 @@ fn item_enum(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item,
         write!(w, "}}")?;
     }
     write!(w, "
")?; - render_stability_since_raw(w, it.stable_since(), None)?; document(w, cx, it)?; if !e.variants.is_empty() { @@ -2458,8 +2464,13 @@ fn item_enum(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, if let clean::VariantItem(Variant { kind: VariantKind::Struct(ref s) }) = variant.inner { - write!(w, "

Fields

\n - ")?; + let variant_id = derive_id(format!("{}.{}.fields", + ItemType::Variant, + variant.name.as_ref().unwrap())); + write!(w, "", + id = variant_id)?; + write!(w, "

Fields of {name}

\n +
", name = variant.name.as_ref().unwrap())?; for field in &s.fields { use clean::StructFieldItem; if let StructFieldItem(ref ty) = field.inner { @@ -2483,7 +2494,7 @@ fn item_enum(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, write!(w, "")?; } } - write!(w, "
")?; + write!(w, "")?; } render_stability_since(w, variant, it)?; } @@ -2492,17 +2503,52 @@ fn item_enum(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, Ok(()) } -fn render_attributes(w: &mut fmt::Formatter, it: &clean::Item) -> fmt::Result { - for attr in &it.attrs { - match *attr { - clean::Word(ref s) if *s == "must_use" => { - write!(w, "#[{}]\n", s)?; - } - clean::NameValue(ref k, ref v) if *k == "must_use" => { - write!(w, "#[{} = \"{}\"]\n", k, v)?; - } - _ => () +fn render_attribute(attr: &ast::MetaItem) -> Option { + let name = attr.name(); + + if attr.is_word() { + Some(format!("{}", name)) + } else if let Some(v) = attr.value_str() { + Some(format!("{} = {:?}", name, &v.as_str()[..])) + } else if let Some(values) = attr.meta_item_list() { + let display: Vec<_> = values.iter().filter_map(|attr| { + attr.meta_item().and_then(|mi| render_attribute(mi)) + }).collect(); + + if display.len() > 0 { + Some(format!("{}({})", name, display.join(", "))) + } else { + None } + } else { + None + } +} + +const ATTRIBUTE_WHITELIST: &'static [&'static str] = &[ + "export_name", + "lang", + "link_section", + "must_use", + "no_mangle", + "repr", + "unsafe_destructor_blind_to_params" +]; + +fn render_attributes(w: &mut fmt::Formatter, it: &clean::Item) -> fmt::Result { + let mut attrs = String::new(); + + for attr in &it.attrs.other_attrs { + let name = attr.name(); + if !ATTRIBUTE_WHITELIST.contains(&&name.as_str()[..]) { + continue; + } + if let Some(s) = render_attribute(attr.meta()) { + attrs.push_str(&format!("#[{}]\n", s)); + } + } + if attrs.len() > 0 { + write!(w, "
{}
", &attrs)?; } Ok(()) } @@ -2513,17 +2559,23 @@ fn render_struct(w: &mut fmt::Formatter, it: &clean::Item, fields: &[clean::Item], tab: &str, structhead: bool) -> fmt::Result { + let mut plain = String::new(); write!(w, "{}{}{}", VisSpace(&it.visibility), if structhead {"struct "} else {""}, it.name.as_ref().unwrap())?; + plain.push_str(&format!("{}{}{}", + VisSpace(&it.visibility), + if structhead {"struct "} else {""}, + it.name.as_ref().unwrap())); if let Some(g) = g { + plain.push_str(&format!("{:#}", g)); write!(w, "{}", g)? } match ty { doctree::Plain => { if let Some(g) = g { - write!(w, "{}", WhereClause(g))? + write!(w, "{}", WhereClause(g, plain.len() + 1))? } let mut has_visible_fields = false; write!(w, " {{")?; @@ -2552,30 +2604,35 @@ fn render_struct(w: &mut fmt::Formatter, it: &clean::Item, } doctree::Tuple => { write!(w, "(")?; + plain.push_str("("); for (i, field) in fields.iter().enumerate() { if i > 0 { write!(w, ", ")?; + plain.push_str(", "); } match field.inner { clean::StrippedItem(box clean::StructFieldItem(..)) => { + plain.push_str("_"); write!(w, "_")? } clean::StructFieldItem(ref ty) => { + plain.push_str(&format!("{}{:#}", VisSpace(&field.visibility), *ty)); write!(w, "{}{}", VisSpace(&field.visibility), *ty)? } _ => unreachable!() } } write!(w, ")")?; + plain.push_str(")"); if let Some(g) = g { - write!(w, "{}", WhereClause(g))? + write!(w, "{}", WhereClause(g, plain.len() + 1))? } write!(w, ";")?; } doctree::Unit => { // Needed for PhantomData. if let Some(g) = g { - write!(w, "{}", WhereClause(g))? + write!(w, "{}", WhereClause(g, plain.len() + 1))? } write!(w, ";")?; } @@ -2588,13 +2645,19 @@ fn render_union(w: &mut fmt::Formatter, it: &clean::Item, fields: &[clean::Item], tab: &str, structhead: bool) -> fmt::Result { + let mut plain = String::new(); write!(w, "{}{}{}", VisSpace(&it.visibility), if structhead {"union "} else {""}, it.name.as_ref().unwrap())?; + plain.push_str(&format!("{}{}{}", + VisSpace(&it.visibility), + if structhead {"union "} else {""}, + it.name.as_ref().unwrap())); if let Some(g) = g { write!(w, "{}", g)?; - write!(w, "{}", WhereClause(g))?; + plain.push_str(&format!("{:#}", g)); + write!(w, "{}", WhereClause(g, plain.len() + 1))?; } write!(w, " {{\n{}", tab)?; @@ -2618,7 +2681,7 @@ fn render_union(w: &mut fmt::Formatter, it: &clean::Item, #[derive(Copy, Clone)] enum AssocItemLink<'a> { Anchor(Option<&'a str>), - GotoSource(DefId, &'a FnvHashSet), + GotoSource(DefId, &'a FxHashSet), } impl<'a> AssocItemLink<'a> { @@ -2711,8 +2774,7 @@ fn render_deref_methods(w: &mut fmt::Formatter, cx: &Context, impl_: &Impl, render_assoc_items(w, cx, container_item, did, what) } else { if let Some(prim) = target.primitive_type() { - if let Some(c) = cache().primitive_locations.get(&prim) { - let did = DefId { krate: *c, index: prim.to_def_index() }; + if let Some(&did) = cache().primitive_locations.get(&prim) { render_assoc_items(w, cx, container_item, did, what)?; } } @@ -2726,18 +2788,17 @@ fn render_impl(w: &mut fmt::Formatter, cx: &Context, i: &Impl, link: AssocItemLi write!(w, "

{}", i.inner_impl())?; write!(w, "")?; let since = i.impl_item.stability.as_ref().map(|s| &s.since[..]); - if let Some(l) = (Item { item: &i.impl_item, cx: cx }).href() { + if let Some(l) = (Item { item: &i.impl_item, cx: cx }).src_href() { write!(w, "
")?; render_stability_since_raw(w, since, outer_version)?; - write!(w, "[src]", - i.impl_item.def_id.index.as_usize(), l, "goto source code")?; + write!(w, "[src]", + l, "goto source code")?; } else { render_stability_since_raw(w, since, outer_version)?; } write!(w, "
")?; write!(w, "

\n")?; - if let Some(ref dox) = i.impl_item.attrs.value("doc") { + if let Some(ref dox) = i.impl_item.doc_value() { write!(w, "
{}
", Markdown(dox))?; } } @@ -2785,7 +2846,7 @@ fn render_impl(w: &mut fmt::Formatter, cx: &Context, i: &Impl, link: AssocItemLi write!(w, "

", id, item_type)?; write!(w, "

\n")?; @@ -2895,10 +2956,11 @@ fn render_impl(w: &mut fmt::Formatter, cx: &Context, i: &Impl, link: AssocItemLi fn item_typedef(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, t: &clean::Typedef) -> fmt::Result { + let indent = format!("type {}{:#} ", it.name.as_ref().unwrap(), t.generics).len(); write!(w, "
type {}{}{where_clause} = {type_};
", it.name.as_ref().unwrap(), t.generics, - where_clause = WhereClause(&t.generics), + where_clause = WhereClause(&t.generics, indent), type_ = t.type_)?; document(w, cx, it) @@ -2910,7 +2972,7 @@ impl<'a> fmt::Display for Sidebar<'a> { let it = self.item; let parentlen = cx.current.len() - if it.is_mod() {1} else {0}; - // the sidebar is designed to display sibling functions, modules and + // The sidebar is designed to display sibling functions, modules and // other miscellaneous information. since there are lots of sibling // items (and that causes quadratic growth in large modules), // we refactor common parts into a shared JavaScript file per module. @@ -2929,7 +2991,7 @@ impl<'a> fmt::Display for Sidebar<'a> { } write!(fmt, "

")?; - // sidebar refers to the enclosing module, not this module + // Sidebar refers to the enclosing module, not this module. let relpath = if it.is_mod() { "../" } else { "" }; write!(fmt, "", @@ -2978,7 +3040,6 @@ fn item_macro(w: &mut fmt::Formatter, cx: &Context, it: &clean::Item, Some("macro"), None, None))?; - render_stability_since_raw(w, it.stable_since(), None)?; document(w, cx, it) } diff --git a/src/librustdoc/html/static/main.js b/src/librustdoc/html/static/main.js index 9bb7246e7a..6ea25fa124 100644 --- a/src/librustdoc/html/static/main.js +++ b/src/librustdoc/html/static/main.js @@ -923,15 +923,6 @@ window.register_implementors(window.pending_implementors); } - // See documentation in html/render.rs for what this is doing. - var query = getQueryStringParams(); - if (query['gotosrc']) { - window.location = $('#src-' + query['gotosrc']).attr('href'); - } - if (query['gotomacrosrc']) { - window.location = $('.srclink').attr('href'); - } - function labelForToggleButton(sectionIsCollapsed) { if (sectionIsCollapsed) { // button will expand the section @@ -963,20 +954,22 @@ } } - $("#toggle-all-docs").on("click", toggleAllDocs); - - $(document).on("click", ".collapse-toggle", function() { - var toggle = $(this); + function collapseDocs(toggle, animate) { var relatedDoc = toggle.parent().next(); if (relatedDoc.is(".stability")) { relatedDoc = relatedDoc.next(); } if (relatedDoc.is(".docblock")) { if (relatedDoc.is(":visible")) { - relatedDoc.slideUp({duration: 'fast', easing: 'linear'}); + if (animate === true) { + relatedDoc.slideUp({duration: 'fast', easing: 'linear'}); + toggle.children(".toggle-label").fadeIn(); + } else { + relatedDoc.hide(); + toggle.children(".toggle-label").show(); + } toggle.parent(".toggle-wrapper").addClass("collapsed"); toggle.children(".inner").text(labelForToggleButton(true)); - toggle.children(".toggle-label").fadeIn(); } else { relatedDoc.slideDown({duration: 'fast', easing: 'linear'}); toggle.parent(".toggle-wrapper").removeClass("collapsed"); @@ -984,6 +977,12 @@ toggle.children(".toggle-label").hide(); } } + } + + $("#toggle-all-docs").on("click", toggleAllDocs); + + $(document).on("click", ".collapse-toggle", function() { + collapseDocs($(this), true) }); $(function() { @@ -999,12 +998,38 @@ }); var mainToggle = - $(toggle).append( + $(toggle.clone()).append( $('', {'class': 'toggle-label'}) .css('display', 'none') .html(' Expand description')); var wrapper = $("
").append(mainToggle); $("#main > .docblock").before(wrapper); + + $(".docblock.autohide").each(function() { + var wrap = $(this).prev(); + if (wrap.is(".toggle-wrapper")) { + var toggle = wrap.children().first(); + if ($(this).children().first().is("h3")) { + toggle.children(".toggle-label") + .text(" Show " + $(this).children().first().text()); + } + $(this).hide(); + wrap.addClass("collapsed"); + toggle.children(".inner").text(labelForToggleButton(true)); + toggle.children(".toggle-label").show(); + } + }); + + var mainToggle = + $(toggle).append( + $('', {'class': 'toggle-label'}) + .css('display', 'none') + .html(' Expand attributes')); + var wrapper = $("
").append(mainToggle); + $("#main > pre > .attributes").each(function() { + $(this).before(wrapper); + collapseDocs($($(this).prev().children()[0]), false); + }); }); $('pre.line-numbers').on('click', 'span', function() { diff --git a/src/librustdoc/html/static/rustdoc.css b/src/librustdoc/html/static/rustdoc.css index f49b8556f6..15912b41d5 100644 --- a/src/librustdoc/html/static/rustdoc.css +++ b/src/librustdoc/html/static/rustdoc.css @@ -14,160 +14,162 @@ /* See FiraSans-LICENSE.txt for the Fira Sans license. */ @font-face { - font-family: 'Fira Sans'; - font-style: normal; - font-weight: 400; - src: local('Fira Sans'), url("FiraSans-Regular.woff") format('woff'); + font-family: 'Fira Sans'; + font-style: normal; + font-weight: 400; + src: local('Fira Sans'), url("FiraSans-Regular.woff") format('woff'); } @font-face { - font-family: 'Fira Sans'; - font-style: normal; - font-weight: 500; - src: local('Fira Sans Medium'), url("FiraSans-Medium.woff") format('woff'); + font-family: 'Fira Sans'; + font-style: normal; + font-weight: 500; + src: local('Fira Sans Medium'), url("FiraSans-Medium.woff") format('woff'); } /* See SourceSerifPro-LICENSE.txt for the Source Serif Pro license and * Heuristica-LICENSE.txt for the Heuristica license. */ @font-face { - font-family: 'Source Serif Pro'; - font-style: normal; - font-weight: 400; - src: local('Source Serif Pro'), url("SourceSerifPro-Regular.woff") format('woff'); + font-family: 'Source Serif Pro'; + font-style: normal; + font-weight: 400; + src: local('Source Serif Pro'), url("SourceSerifPro-Regular.woff") format('woff'); } @font-face { - font-family: 'Source Serif Pro'; - font-style: italic; - font-weight: 400; - src: url("Heuristica-Italic.woff") format('woff'); + font-family: 'Source Serif Pro'; + font-style: italic; + font-weight: 400; + src: url("Heuristica-Italic.woff") format('woff'); } @font-face { - font-family: 'Source Serif Pro'; - font-style: normal; - font-weight: 700; - src: local('Source Serif Pro Bold'), url("SourceSerifPro-Bold.woff") format('woff'); + font-family: 'Source Serif Pro'; + font-style: normal; + font-weight: 700; + src: local('Source Serif Pro Bold'), url("SourceSerifPro-Bold.woff") format('woff'); } /* See SourceCodePro-LICENSE.txt for the Source Code Pro license. 
*/ @font-face { - font-family: 'Source Code Pro'; - font-style: normal; - font-weight: 400; - src: local('Source Code Pro'), url("SourceCodePro-Regular.woff") format('woff'); + font-family: 'Source Code Pro'; + font-style: normal; + font-weight: 400; + /* Avoid using locally installed font because bad versions are in circulation: + * see https://github.com/rust-lang/rust/issues/24355 */ + src: url("SourceCodePro-Regular.woff") format('woff'); } @font-face { - font-family: 'Source Code Pro'; - font-style: normal; - font-weight: 600; - src: local('Source Code Pro Semibold'), url("SourceCodePro-Semibold.woff") format('woff'); + font-family: 'Source Code Pro'; + font-style: normal; + font-weight: 600; + src: url("SourceCodePro-Semibold.woff") format('woff'); } * { -webkit-box-sizing: border-box; - -moz-box-sizing: border-box; - box-sizing: border-box; + -moz-box-sizing: border-box; + box-sizing: border-box; } /* General structure and fonts */ body { - font: 16px/1.4 "Source Serif Pro", Georgia, Times, "Times New Roman", serif; - margin: 0; - position: relative; - padding: 10px 15px 20px 15px; + font: 16px/1.4 "Source Serif Pro", Georgia, Times, "Times New Roman", serif; + margin: 0; + position: relative; + padding: 10px 15px 20px 15px; - -webkit-font-feature-settings: "kern", "liga"; - -moz-font-feature-settings: "kern", "liga"; - font-feature-settings: "kern", "liga"; + -webkit-font-feature-settings: "kern", "liga"; + -moz-font-feature-settings: "kern", "liga"; + font-feature-settings: "kern", "liga"; } h1 { - font-size: 1.5em; + font-size: 1.5em; } h2 { - font-size: 1.4em; + font-size: 1.4em; } h3 { - font-size: 1.3em; + font-size: 1.3em; } h1, h2, h3:not(.impl):not(.method):not(.type):not(.tymethod), h4:not(.method):not(.type):not(.tymethod) { - font-weight: 500; - margin: 20px 0 15px 0; - padding-bottom: 6px; + font-weight: 500; + margin: 20px 0 15px 0; + padding-bottom: 6px; } h1.fqn { - border-bottom: 1px dashed; - margin-top: 0; - position: relative; + border-bottom: 1px dashed; + margin-top: 0; + position: relative; } h2, h3:not(.impl):not(.method):not(.type):not(.tymethod), h4:not(.method):not(.type):not(.tymethod) { - border-bottom: 1px solid; + border-bottom: 1px solid; } h3.impl, h3.method, h4.method, h3.type, h4.type { - font-weight: 600; - margin-top: 10px; - margin-bottom: 10px; - position: relative; + font-weight: 600; + margin-top: 10px; + margin-bottom: 10px; + position: relative; } h3.impl, h3.method, h3.type { - margin-top: 15px; + margin-top: 15px; } h1, h2, h3, h4, .sidebar, a.source, .search-input, .content table :not(code)>a, .collapse-toggle { - font-family: "Fira Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; + font-family: "Fira Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; } ol, ul { - padding-left: 25px; + padding-left: 25px; } ul ul, ol ul, ul ol, ol ol { - margin-bottom: 0; + margin-bottom: 0; } p { - margin: 0 0 .6em 0; + margin: 0 0 .6em 0; } code, pre { - font-family: "Source Code Pro", Menlo, Monaco, Consolas, "DejaVu Sans Mono", Inconsolata, monospace; - white-space: pre-wrap; + font-family: "Source Code Pro", Menlo, Monaco, Consolas, "DejaVu Sans Mono", Inconsolata, monospace; + white-space: pre-wrap; } .docblock code, .docblock-short code { - border-radius: 3px; - padding: 0 0.2em; + border-radius: 3px; + padding: 0 0.2em; } .docblock pre code, .docblock-short pre code { - padding: 0; + padding: 0; } pre { - padding: 14px; + padding: 14px; } .source pre { - padding: 20px; + padding: 20px; } img { - max-width: 100%; + max-width: 100%; } 
.content.source { - margin-top: 50px; - max-width: none; - overflow: visible; - margin-left: 0px; - min-width: 70em; + margin-top: 50px; + max-width: none; + overflow: visible; + margin-left: 0px; + min-width: 70em; } nav.sub { - font-size: 16px; - text-transform: uppercase; + font-size: 16px; + text-transform: uppercase; } .sidebar { - width: 200px; - position: absolute; - left: 0; - top: 0; - min-height: 100%; + width: 200px; + position: absolute; + left: 0; + top: 0; + min-height: 100%; } .content, nav { max-width: 960px; } @@ -177,88 +179,88 @@ nav.sub { .js-only, .hidden { display: none !important; } .sidebar { - padding: 10px; + padding: 10px; } .sidebar img { - margin: 20px auto; - display: block; + margin: 20px auto; + display: block; } .sidebar .location { - font-size: 17px; - margin: 30px 0 20px 0; - text-align: center; + font-size: 17px; + margin: 30px 0 20px 0; + text-align: center; } .location a:first-child { font-weight: 500; } .block { - padding: 0 10px; - margin-bottom: 14px; + padding: 0 10px; + margin-bottom: 14px; } .block h2, .block h3 { - margin-top: 0; - margin-bottom: 8px; - text-align: center; + margin-top: 0; + margin-bottom: 8px; + text-align: center; } .block ul, .block li { - margin: 0; - padding: 0; - list-style: none; + margin: 0; + padding: 0; + list-style: none; } .block a { - display: block; - text-overflow: ellipsis; - overflow: hidden; - line-height: 15px; - padding: 7px 5px; - font-size: 14px; - font-weight: 300; - transition: border 500ms ease-out; + display: block; + text-overflow: ellipsis; + overflow: hidden; + line-height: 15px; + padding: 7px 5px; + font-size: 14px; + font-weight: 300; + transition: border 500ms ease-out; } .content { - padding: 15px 0; + padding: 15px 0; } .content.source pre.rust { - white-space: pre; - overflow: auto; - padding-left: 0; + white-space: pre; + overflow: auto; + padding-left: 0; } .content pre.line-numbers { - float: left; - border: none; - position: relative; + float: left; + border: none; + position: relative; - -webkit-user-select: none; - -moz-user-select: none; - -ms-user-select: none; - user-select: none; + -webkit-user-select: none; + -moz-user-select: none; + -ms-user-select: none; + user-select: none; } .line-numbers span { cursor: pointer; } .docblock-short p { - display: inline; + display: inline; } .docblock-short.nowrap { - display: block; - overflow: hidden; - white-space: nowrap; - text-overflow: ellipsis; + display: block; + overflow: hidden; + white-space: nowrap; + text-overflow: ellipsis; } .docblock-short p { - overflow: hidden; - text-overflow: ellipsis; - margin: 0; + overflow: hidden; + text-overflow: ellipsis; + margin: 0; } .docblock-short code { white-space: nowrap; } .docblock h1, .docblock h2, .docblock h3, .docblock h4, .docblock h5 { - border-bottom: 1px solid; + border-bottom: 1px solid; } .docblock h1 { font-size: 1.3em; } @@ -266,53 +268,53 @@ nav.sub { .docblock h3, .docblock h4, .docblock h5 { font-size: 1em; } .docblock { - margin-left: 24px; + margin-left: 24px; } .content .out-of-band { - font-size: 23px; - margin: 0px; - padding: 0px; - text-align: right; - display: inline-block; - font-weight: normal; - position: absolute; - right: 0; + font-size: 23px; + margin: 0px; + padding: 0px; + text-align: right; + display: inline-block; + font-weight: normal; + position: absolute; + right: 0; } h3.impl > .out-of-band { - font-size: 21px; + font-size: 21px; } h4 > code, h3 > code, .invisible > code { - position: inherit; + position: inherit; } .in-band, code { - z-index: 5; + z-index: 
5; } .invisible { - background: rgba(0, 0, 0, 0); - width: 100%; - display: inline-block; + background: rgba(0, 0, 0, 0); + width: 100%; + display: inline-block; } .content .in-band { - margin: 0px; - padding: 0px; - display: inline-block; + margin: 0px; + padding: 0px; + display: inline-block; } #main { position: relative; } #main > .since { - top: inherit; - font-family: "Fira Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; + top: inherit; + font-family: "Fira Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; } .content table { - border-spacing: 0 5px; - border-collapse: separate; + border-spacing: 0 5px; + border-collapse: separate; } .content td { vertical-align: top; } .content td:first-child { padding-right: 20px; } @@ -320,102 +322,114 @@ h4 > code, h3 > code, .invisible > code { .content td h1, .content td h2 { margin-left: 0; font-size: 1.1em; } .docblock table { - border: 1px solid; - margin: .5em 0; - border-collapse: collapse; - width: 100%; + border: 1px solid; + margin: .5em 0; + border-collapse: collapse; + width: 100%; } .docblock table td { - padding: .5em; - border-top: 1px dashed; - border-bottom: 1px dashed; + padding: .5em; + border-top: 1px dashed; + border-bottom: 1px dashed; } .docblock table th { - padding: .5em; - text-align: left; - border-top: 1px solid; - border-bottom: 1px solid; + padding: .5em; + text-align: left; + border-top: 1px solid; + border-bottom: 1px solid; +} + +.fields + table { + margin-bottom: 1em; } .content .item-list { - list-style-type: none; - padding: 0; + list-style-type: none; + padding: 0; } .content .item-list li { margin-bottom: 3px; } .content .multi-column { - -moz-column-count: 5; - -moz-column-gap: 2.5em; - -webkit-column-count: 5; - -webkit-column-gap: 2.5em; - column-count: 5; - column-gap: 2.5em; + -moz-column-count: 5; + -moz-column-gap: 2.5em; + -webkit-column-count: 5; + -webkit-column-gap: 2.5em; + column-count: 5; + column-gap: 2.5em; } .content .multi-column li { width: 100%; display: inline-block; } .content .method { - font-size: 1em; - position: relative; + font-size: 1em; + position: relative; } /* Shift "where ..." 
part of method or fn definition down a line */ -.content .method .where, .content .fn .where { display: block; } +.content .method .where, +.content .fn .where, +.content .where.fmt-newline { + display: block; +} /* Bit of whitespace to indent it */ -.content .method .where::before, .content .fn .where::before { content: ' '; } +.content .method .where::before, +.content .fn .where::before, +.content .where.fmt-newline::before { + content: ' '; +} .content .methods > div { margin-left: 40px; } .content .impl-items .docblock, .content .impl-items .stability { - margin-left: 40px; + margin-left: 40px; } .content .impl-items .method, .content .impl-items > .type { - margin-left: 20px; + margin-left: 20px; } .content .stability code { - font-size: 90%; + font-size: 90%; } /* Shift where in trait listing down a line */ pre.trait .where::before { - content: '\a '; + content: '\a '; } nav { - border-bottom: 1px solid; - padding-bottom: 10px; - margin-bottom: 10px; + border-bottom: 1px solid; + padding-bottom: 10px; + margin-bottom: 10px; } nav.main { - padding: 20px 0; - text-align: center; + padding: 20px 0; + text-align: center; } nav.main .current { - border-top: 1px solid; - border-bottom: 1px solid; + border-top: 1px solid; + border-bottom: 1px solid; } nav.main .separator { - border: 1px solid; - display: inline-block; - height: 23px; - margin: 0 20px; + border: 1px solid; + display: inline-block; + height: 23px; + margin: 0 20px; } nav.sum { text-align: right; } nav.sub form { display: inline; } nav.sub, .content { - margin-left: 230px; + margin-left: 230px; } a { - text-decoration: none; - background: transparent; + text-decoration: none; + background: transparent; } .docblock a:hover, .docblock-short a:hover, .stability a { - text-decoration: underline; + text-decoration: underline; } .content span.enum, .content a.enum, .block a.current.enum { color: #5e9766; } @@ -425,40 +439,40 @@ a { .block a.current.crate { font-weight: 500; } .search-input { - width: 100%; - /* Override Normalize.css: we have margins and do - not want to overflow - the `moz` attribute is necessary - until Firefox 29, too early to drop at this point */ - -moz-box-sizing: border-box !important; - box-sizing: border-box !important; - outline: none; - border: none; - border-radius: 1px; - margin-top: 5px; - padding: 10px 16px; - font-size: 17px; - transition: border-color 300ms ease; - transition: border-radius 300ms ease-in-out; - transition: box-shadow 300ms ease-in-out; + width: 100%; + /* Override Normalize.css: we have margins and do + not want to overflow - the `moz` attribute is necessary + until Firefox 29, too early to drop at this point */ + -moz-box-sizing: border-box !important; + box-sizing: border-box !important; + outline: none; + border: none; + border-radius: 1px; + margin-top: 5px; + padding: 10px 16px; + font-size: 17px; + transition: border-color 300ms ease; + transition: border-radius 300ms ease-in-out; + transition: box-shadow 300ms ease-in-out; } .search-input:focus { - border-color: #66afe9; - border-radius: 2px; - border: 0; - outline: 0; - box-shadow: 0 0 8px #078dd8; + border-color: #66afe9; + border-radius: 2px; + border: 0; + outline: 0; + box-shadow: 0 0 8px #078dd8; } .search-results .desc { - white-space: nowrap; - text-overflow: ellipsis; - overflow: hidden; - display: block; + white-space: nowrap; + text-overflow: ellipsis; + overflow: hidden; + display: block; } .search-results a { - display: block; + display: block; } .content .search-results td:first-child { padding-right: 0; } @@ 
-468,96 +482,96 @@ tr.result span.primitive::after { content: ' (primitive type)'; font-style: ital } body.blur > :not(#help) { - filter: blur(8px); - -webkit-filter: blur(8px); - opacity: .7; + filter: blur(8px); + -webkit-filter: blur(8px); + opacity: .7; } #help { - width: 100%; - height: 100vh; - position: fixed; - top: 0; - left: 0; - display: flex; - justify-content: center; - align-items: center; + width: 100%; + height: 100vh; + position: fixed; + top: 0; + left: 0; + display: flex; + justify-content: center; + align-items: center; } #help > div { - flex: 0 0 auto; - background: #e9e9e9; - box-shadow: 0 0 6px rgba(0,0,0,.2); - width: 550px; - height: 330px; - border: 1px solid #bfbfbf; + flex: 0 0 auto; + background: #e9e9e9; + box-shadow: 0 0 6px rgba(0,0,0,.2); + width: 550px; + height: 330px; + border: 1px solid #bfbfbf; } #help dt { - float: left; - border-radius: 4px; - border: 1px solid #bfbfbf; - background: #fff; - width: 23px; - text-align: center; - clear: left; - display: block; - margin-top: -1px; + float: left; + border-radius: 4px; + border: 1px solid #bfbfbf; + background: #fff; + width: 23px; + text-align: center; + clear: left; + display: block; + margin-top: -1px; } #help dd { margin: 5px 33px; } #help .infos { padding-left: 0; } #help h1, #help h2 { margin-top: 0; } #help > div div { - width: 50%; - float: left; - padding: 20px; + width: 50%; + float: left; + padding: 20px; } em.stab { - display: inline-block; - border-width: 1px; - border-style: solid; - padding: 3px; - margin-bottom: 5px; - font-size: 90%; - font-style: normal; + display: inline-block; + border-width: 1px; + border-style: solid; + padding: 3px; + margin-bottom: 5px; + font-size: 90%; + font-style: normal; } em.stab p { - display: inline; + display: inline; } .module-item .stab { - border-width: 0; - padding: 0; - margin: 0; - background: inherit !important; + border-width: 0; + padding: 0; + margin: 0; + background: inherit !important; } .module-item.unstable { - opacity: 0.65; + opacity: 0.65; } .since { - font-weight: normal; - font-size: initial; - color: grey; - position: absolute; - right: 0; - top: 0; + font-weight: normal; + font-size: initial; + color: grey; + position: absolute; + right: 0; + top: 0; } .variants_table { - width: 100%; + width: 100%; } .variants_table tbody tr td:first-child { - width: 1%; /* make the variant name as small as possible */ + width: 1%; /* make the variant name as small as possible */ } td.summary-column { - width: 100%; + width: 100%; } .summary { - padding-right: 0px; + padding-right: 0px; } .line-numbers :target { background-color: transparent; } @@ -571,110 +585,122 @@ pre.rust .attribute, pre.rust .attribute .ident { color: #C82829; } pre.rust .macro, pre.rust .macro-nonterminal { color: #3E999F; } pre.rust .lifetime { color: #B76514; } pre.rust .question-mark { - color: #ff9011; - font-weight: bold; + color: #ff9011; + font-weight: bold; } pre.rust { position: relative; } a.test-arrow { - background-color: rgba(78, 139, 202, 0.2); - display: inline-block; - position: absolute; - padding: 5px 10px 5px 10px; - border-radius: 5px; - font-size: 130%; - top: 5px; - right: 5px; + background-color: rgba(78, 139, 202, 0.2); + display: inline-block; + position: absolute; + padding: 5px 10px 5px 10px; + border-radius: 5px; + font-size: 130%; + top: 5px; + right: 5px; } a.test-arrow:hover{ - background-color: #4e8bca; - text-decoration: none; + background-color: #4e8bca; + text-decoration: none; } .section-header:hover a:after { - content: '\2002\00a7\2002'; + 
content: '\2002\00a7\2002'; } .section-header:hover a { - text-decoration: none; + text-decoration: none; } .section-header a { - color: inherit; + color: inherit; } .collapse-toggle { - font-weight: 300; - position: absolute; - left: -23px; - color: #999; - top: 0; + font-weight: 300; + position: absolute; + left: -23px; + color: #999; + top: 0; } .toggle-wrapper > .collapse-toggle { - left: -24px; - margin-top: 0px; + left: -24px; + margin-top: 0px; } .toggle-wrapper { - position: relative; + position: relative; } .toggle-wrapper.collapsed { - height: 1em; - transition: height .2s; + height: 1em; + transition: height .2s; } .collapse-toggle > .inner { - display: inline-block; - width: 1.2ch; - text-align: center; + display: inline-block; + width: 1.2ch; + text-align: center; } .toggle-label { - color: #999; + color: #999; } .ghost { - display: none; + display: none; } .ghost + .since { - position: initial; - display: table-cell; + position: initial; + display: table-cell; } .since + .srclink { - display: table-cell; - padding-left: 10px; + display: table-cell; + padding-left: 10px; } span.since { - position: initial; - font-size: 20px; - margin-right: 5px; + position: initial; + font-size: 20px; + margin-right: 5px; } .toggle-wrapper > .collapse-toggle { - left: 0; + left: 0; } .variant + .toggle-wrapper > a { - margin-top: 5px; + margin-top: 5px; +} + +.sub-variant, .sub-variant > h3 { + margin-top: 0 !important; } .enum > .toggle-wrapper + .docblock, .struct > .toggle-wrapper + .docblock { - margin-left: 30px; - margin-bottom: 20px; - margin-top: 5px; + margin-left: 30px; + margin-bottom: 20px; + margin-top: 5px; } .enum > .collapsed, .struct > .collapsed { - margin-bottom: 25px; + margin-bottom: 25px; } .enum .variant, .struct .structfield { - display: block; + display: block; +} + +.attributes { + display: block; + margin: 0px 0px 0px 30px !important; +} +.toggle-attributes.collapsed { + margin-bottom: 5px; } :target > code { @@ -685,71 +711,71 @@ span.since { /* Media Queries */ @media (max-width: 700px) { - body { - padding-top: 0px; - } + body { + padding-top: 0px; + } - .sidebar { - height: 40px; - min-height: 40px; - width: 100%; - margin: 0px; - padding: 0px; - position: static; - } + .sidebar { + height: 40px; + min-height: 40px; + width: 100%; + margin: 0px; + padding: 0px; + position: static; + } - .sidebar .location { - float: right; - margin: 0px; - padding: 3px 10px 1px 10px; - min-height: 39px; - background: inherit; - text-align: left; - font-size: 24px; - } + .sidebar .location { + float: right; + margin: 0px; + padding: 3px 10px 1px 10px; + min-height: 39px; + background: inherit; + text-align: left; + font-size: 24px; + } - .sidebar .location:empty { - padding: 0; - } + .sidebar .location:empty { + padding: 0; + } - .sidebar img { - width: 35px; - margin-top: 5px; - margin-bottom: 0px; - float: left; - } + .sidebar img { + width: 35px; + margin-top: 5px; + margin-bottom: 0px; + float: left; + } - nav.sub { - margin: 0 auto; - } + nav.sub { + margin: 0 auto; + } - .sidebar .block { - display: none; - } + .sidebar .block { + display: none; + } - .content { - margin-left: 0px; - } + .content { + margin-left: 0px; + } - .content .in-band { - width: 100%; - } + .content .in-band { + width: 100%; + } - .content .out-of-band { - display: none; - } + .content .out-of-band { + display: none; + } - .toggle-wrapper > .collapse-toggle { - left: 0px; - } + .toggle-wrapper > .collapse-toggle { + left: 0px; + } - .toggle-wrapper { - height: 1.5em; - } + .toggle-wrapper { + height: 
1.5em; + } } @media print { - nav.sub, .content .out-of-band, .collapse-toggle { - display: none; - } + nav.sub, .content .out-of-band, .collapse-toggle { + display: none; + } } diff --git a/src/librustdoc/lib.rs b/src/librustdoc/lib.rs index ee395e0616..174118db93 100644 --- a/src/librustdoc/lib.rs +++ b/src/librustdoc/lib.rs @@ -20,7 +20,6 @@ #![feature(box_patterns)] #![feature(box_syntax)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(libc)] #![feature(rustc_private)] #![feature(set_stdio)] @@ -28,7 +27,6 @@ #![feature(staged_api)] #![feature(test)] #![feature(unicode)] -#![cfg_attr(stage0, feature(question_mark))] extern crate arena; extern crate getopts; @@ -47,7 +45,7 @@ extern crate serialize; #[macro_use] extern crate syntax; extern crate syntax_pos; extern crate test as testing; -extern crate rustc_unicode; +extern crate std_unicode; #[macro_use] extern crate log; extern crate rustc_errors as errors; @@ -56,6 +54,9 @@ extern crate serialize as rustc_serialize; // used by deriving use std::collections::{BTreeMap, BTreeSet}; use std::default::Default; use std::env; +use std::fmt::Display; +use std::io; +use std::io::Write; use std::path::PathBuf; use std::process; use std::sync::mpsc::channel; @@ -89,7 +90,7 @@ pub mod visit_ast; pub mod visit_lib; pub mod test; -use clean::Attributes; +use clean::AttributesExt; struct Output { krate: clean::Crate, @@ -162,6 +163,10 @@ pub fn opts() -> Vec { unstable(optmulti("Z", "", "internal and debugging options (only on nightly build)", "FLAG")), stable(optopt("", "sysroot", "Override the system root", "PATH")), + unstable(optopt("", "playground-url", + "URL to send code snippets to, may be reset by --markdown-playground-url \ + or `#![doc(html_playground_url=...)]`", + "URL")), ] } @@ -181,7 +186,7 @@ pub fn main_args(args: &[String]) -> isize { let matches = match getopts::getopts(&args[1..], &all_groups) { Ok(m) => m, Err(err) => { - println!("{}", err); + print_error(err); return 1; } }; @@ -209,11 +214,11 @@ pub fn main_args(args: &[String]) -> isize { } if matches.free.is_empty() { - println!("expected an input file to act on"); + print_error("missing file operand"); return 1; } if matches.free.len() > 1 { - println!("only one input file may be specified"); + print_error("too many file operands"); return 1; } let input = &matches.free[0]; @@ -225,7 +230,7 @@ pub fn main_args(args: &[String]) -> isize { let externs = match parse_externs(&matches) { Ok(ex) => ex, Err(err) => { - println!("{}", err); + print_error(err); return 1; } }; @@ -245,19 +250,22 @@ pub fn main_args(args: &[String]) -> isize { if let Some(ref p) = css_file_extension { if !p.is_file() { - println!("{}", "--extend-css option must take a css file as input"); + writeln!( + &mut io::stderr(), + "rustdoc: option --extend-css argument must be a file." 
+ ).unwrap(); return 1; } } let external_html = match ExternalHtml::load( - &matches.opt_strs("html-in-header"), - &matches.opt_strs("html-before-content"), + &matches.opt_strs("html-in-header"), &matches.opt_strs("html-before-content"), &matches.opt_strs("html-after-content")) { Some(eh) => eh, None => return 3 }; let crate_name = matches.opt_str("crate-name"); + let playground_url = matches.opt_str("playground-url"); match (should_test, markdown_input) { (true, true) => { @@ -272,43 +280,54 @@ pub fn main_args(args: &[String]) -> isize { !matches.opt_present("markdown-no-toc")), (false, false) => {} } - let out = match acquire_input(input, externs, &matches) { - Ok(out) => out, - Err(s) => { - println!("input error: {}", s); - return 1; + + let output_format = matches.opt_str("w"); + let res = acquire_input(input, externs, &matches, move |out| { + let Output { krate, passes, renderinfo } = out; + info!("going to format"); + match output_format.as_ref().map(|s| &**s) { + Some("html") | None => { + html::render::run(krate, &external_html, playground_url, + output.unwrap_or(PathBuf::from("doc")), + passes.into_iter().collect(), + css_file_extension, + renderinfo) + .expect("failed to generate documentation"); + 0 + } + Some(s) => { + print_error(format!("unknown output format: {}", s)); + 1 + } } - }; - let Output { krate, passes, renderinfo } = out; - info!("going to format"); - match matches.opt_str("w").as_ref().map(|s| &**s) { - Some("html") | None => { - html::render::run(krate, &external_html, - output.unwrap_or(PathBuf::from("doc")), - passes.into_iter().collect(), - css_file_extension, - renderinfo) - .expect("failed to generate documentation"); - 0 - } - Some(s) => { - println!("unknown output format: {}", s); - 1 - } - } + }); + res.unwrap_or_else(|s| { + print_error(format!("input error: {}", s)); + 1 + }) +} + +/// Prints an uniformised error message on the standard error output +fn print_error(error_message: T) where T: Display { + writeln!( + &mut io::stderr(), + "rustdoc: {}\nTry 'rustdoc --help' for more information.", + error_message + ).unwrap(); } /// Looks inside the command line arguments to extract the relevant input format /// and files and then generates the necessary rustdoc output for formatting. -fn acquire_input(input: &str, - externs: Externs, - matches: &getopts::Matches) -> Result { +fn acquire_input(input: &str, + externs: Externs, + matches: &getopts::Matches, + f: F) + -> Result +where R: 'static + Send, F: 'static + Send + FnOnce(Output) -> R { match matches.opt_str("r").as_ref().map(|s| &**s) { - Some("rust") => Ok(rust_input(input, externs, matches)), + Some("rust") => Ok(rust_input(input, externs, matches, f)), Some(s) => Err(format!("unknown input format: {}", s)), - None => { - Ok(rust_input(input, externs, matches)) - } + None => Ok(rust_input(input, externs, matches, f)) } } @@ -334,7 +353,8 @@ fn parse_externs(matches: &getopts::Matches) -> Result { /// generated from the cleaned AST of the crate. 
/// /// This form of input will run all of the plug/cleaning passes -fn rust_input(cratefile: &str, externs: Externs, matches: &getopts::Matches) -> Output { +fn rust_input(cratefile: &str, externs: Externs, matches: &getopts::Matches, f: F) -> R +where R: 'static + Send, F: 'static + Send + FnOnce(Output) -> R { let mut default_passes = !matches.opt_present("no-defaults"); let mut passes = matches.opt_strs("passes"); let mut plugins = matches.opt_strs("plugins"); @@ -347,6 +367,8 @@ fn rust_input(cratefile: &str, externs: Externs, matches: &getopts::Matches) -> let cfgs = matches.opt_strs("cfg"); let triple = matches.opt_str("target"); let maybe_sysroot = matches.opt_str("sysroot").map(PathBuf::from); + let crate_name = matches.opt_str("crate-name"); + let plugin_path = matches.opt_str("plugin-path"); let cr = PathBuf::from(cratefile); info!("starting to run rustc"); @@ -355,67 +377,68 @@ fn rust_input(cratefile: &str, externs: Externs, matches: &getopts::Matches) -> rustc_driver::monitor(move || { use rustc::session::config::Input; - tx.send(core::run_core(paths, cfgs, externs, Input::File(cr), - triple, maybe_sysroot)).unwrap(); - }); - let (mut krate, renderinfo) = rx.recv().unwrap(); - info!("finished with rustc"); + let (mut krate, renderinfo) = + core::run_core(paths, cfgs, externs, Input::File(cr), triple, maybe_sysroot); - if let Some(name) = matches.opt_str("crate-name") { - krate.name = name - } + info!("finished with rustc"); - // Process all of the crate attributes, extracting plugin metadata along - // with the passes which we are supposed to run. - for attr in krate.module.as_ref().unwrap().attrs.list("doc") { - match *attr { - clean::Word(ref w) if "no_default_passes" == *w => { - default_passes = false; - }, - clean::NameValue(ref name, ref value) => { - let sink = match &name[..] { - "passes" => &mut passes, - "plugins" => &mut plugins, + if let Some(name) = crate_name { + krate.name = name + } + + // Process all of the crate attributes, extracting plugin metadata along + // with the passes which we are supposed to run. 
+ for attr in krate.module.as_ref().unwrap().attrs.lists("doc") { + let name = attr.name().map(|s| s.as_str()); + let name = name.as_ref().map(|s| &s[..]); + if attr.is_word() { + if name == Some("no_default_passes") { + default_passes = false; + } + } else if let Some(value) = attr.value_str() { + let sink = match name { + Some("passes") => &mut passes, + Some("plugins") => &mut plugins, _ => continue, }; - for p in value.split_whitespace() { + for p in value.as_str().split_whitespace() { sink.push(p.to_string()); } } - _ => (), } - } - if default_passes { - for name in passes::DEFAULT_PASSES.iter().rev() { - passes.insert(0, name.to_string()); + if default_passes { + for name in passes::DEFAULT_PASSES.iter().rev() { + passes.insert(0, name.to_string()); + } } - } - // Load all plugins/passes into a PluginManager - let path = matches.opt_str("plugin-path") - .unwrap_or("/tmp/rustdoc/plugins".to_string()); - let mut pm = plugins::PluginManager::new(PathBuf::from(path)); - for pass in &passes { - let plugin = match passes::PASSES.iter() - .position(|&(p, ..)| { - p == *pass - }) { - Some(i) => passes::PASSES[i].1, - None => { - error!("unknown pass {}, skipping", *pass); - continue - }, - }; - pm.add_plugin(plugin); - } - info!("loading plugins..."); - for pname in plugins { - pm.load_plugin(pname); - } + // Load all plugins/passes into a PluginManager + let path = plugin_path.unwrap_or("/tmp/rustdoc/plugins".to_string()); + let mut pm = plugins::PluginManager::new(PathBuf::from(path)); + for pass in &passes { + let plugin = match passes::PASSES.iter() + .position(|&(p, ..)| { + p == *pass + }) { + Some(i) => passes::PASSES[i].1, + None => { + error!("unknown pass {}, skipping", *pass); + continue + }, + }; + pm.add_plugin(plugin); + } + info!("loading plugins..."); + for pname in plugins { + pm.load_plugin(pname); + } - // Run everything! - info!("Executing passes/plugins"); - let krate = pm.run_plugins(krate); - Output { krate: krate, renderinfo: renderinfo, passes: passes } + // Run everything! 
+ info!("Executing passes/plugins"); + let krate = pm.run_plugins(krate); + + tx.send(f(Output { krate: krate, renderinfo: renderinfo, passes: passes })).unwrap(); + }); + rx.recv().unwrap() } diff --git a/src/librustdoc/markdown.rs b/src/librustdoc/markdown.rs index b617acfabb..9dbc9d30e6 100644 --- a/src/librustdoc/markdown.rs +++ b/src/librustdoc/markdown.rs @@ -63,14 +63,15 @@ pub fn render(input: &str, mut output: PathBuf, matches: &getopts::Matches, Err(LoadStringError::ReadFail) => return 1, Err(LoadStringError::BadUtf8) => return 2, }; - if let Some(playground) = matches.opt_str("markdown-playground-url") { + if let Some(playground) = matches.opt_str("markdown-playground-url").or( + matches.opt_str("playground-url")) { markdown::PLAYGROUND.with(|s| { *s.borrow_mut() = Some((None, playground)); }); } let mut out = match File::create(&output) { Err(e) => { let _ = writeln!(&mut io::stderr(), - "error opening `{}` for writing: {}", + "rustdoc: {}: {}", output.display(), e); return 4; } @@ -79,8 +80,10 @@ pub fn render(input: &str, mut output: PathBuf, matches: &getopts::Matches, let (metadata, text) = extract_leading_metadata(&input_str); if metadata.is_empty() { - let _ = writeln!(&mut io::stderr(), - "invalid markdown file: expecting initial line with `% ...TITLE...`"); + let _ = writeln!( + &mut io::stderr(), + "rustdoc: invalid markdown file: expecting initial line with `% ...TITLE...`" + ); return 5; } let title = metadata[0]; @@ -131,7 +134,7 @@ pub fn render(input: &str, mut output: PathBuf, matches: &getopts::Matches, match err { Err(e) => { let _ = writeln!(&mut io::stderr(), - "error writing to `{}`: {}", + "rustdoc: cannot write to `{}`: {}", output.display(), e); 6 } diff --git a/src/librustdoc/passes/collapse_docs.rs b/src/librustdoc/passes/collapse_docs.rs index c034ef9326..3c63302127 100644 --- a/src/librustdoc/passes/collapse_docs.rs +++ b/src/librustdoc/passes/collapse_docs.rs @@ -8,40 +8,33 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use std::string::String; - use clean::{self, Item}; use plugins; use fold; use fold::DocFolder; pub fn collapse_docs(krate: clean::Crate) -> plugins::PluginResult { - let mut collapser = Collapser; - let krate = collapser.fold_crate(krate); - krate + Collapser.fold_crate(krate) } struct Collapser; impl fold::DocFolder for Collapser { fn fold_item(&mut self, mut i: Item) -> Option { - let mut docstr = String::new(); - for attr in &i.attrs { - if let clean::NameValue(ref x, ref s) = *attr { - if "doc" == *x { - docstr.push_str(s); - docstr.push('\n'); - } - } - } - let mut a: Vec = i.attrs.iter().filter(|&a| match a { - &clean::NameValue(ref x, _) if "doc" == *x => false, - _ => true - }).cloned().collect(); - if !docstr.is_empty() { - a.push(clean::NameValue("doc".to_string(), docstr)); - } - i.attrs = a; + i.attrs.collapse_doc_comments(); self.fold_item_recur(i) } } + +impl clean::Attributes { + pub fn collapse_doc_comments(&mut self) { + let mut doc_string = self.doc_strings.join("\n"); + if doc_string.is_empty() { + self.doc_strings = vec![]; + } else { + // FIXME(eddyb) Is this still needed? 
+ doc_string.push('\n'); + self.doc_strings = vec![doc_string]; + } + } +} diff --git a/src/librustdoc/passes/strip_hidden.rs b/src/librustdoc/passes/strip_hidden.rs index 927ccf9171..68c1231fc6 100644 --- a/src/librustdoc/passes/strip_hidden.rs +++ b/src/librustdoc/passes/strip_hidden.rs @@ -11,7 +11,7 @@ use rustc::util::nodemap::DefIdSet; use std::mem; -use clean::{self, Attributes}; +use clean::{self, AttributesExt, NestedAttributesExt}; use clean::Item; use plugins; use fold; @@ -41,7 +41,7 @@ struct Stripper<'a> { impl<'a> fold::DocFolder for Stripper<'a> { fn fold_item(&mut self, i: Item) -> Option { - if i.attrs.list("doc").has_word("hidden") { + if i.attrs.lists("doc").has_word("hidden") { debug!("found one in strip_hidden; removing"); // use a dedicated hidden item for given item type if any match i.inner { diff --git a/src/librustdoc/passes/unindent_comments.rs b/src/librustdoc/passes/unindent_comments.rs index 20640f3f88..4d94c30847 100644 --- a/src/librustdoc/passes/unindent_comments.rs +++ b/src/librustdoc/passes/unindent_comments.rs @@ -17,31 +17,26 @@ use plugins; use fold::{self, DocFolder}; pub fn unindent_comments(krate: clean::Crate) -> plugins::PluginResult { - let mut cleaner = CommentCleaner; - let krate = cleaner.fold_crate(krate); - krate + CommentCleaner.fold_crate(krate) } struct CommentCleaner; impl fold::DocFolder for CommentCleaner { fn fold_item(&mut self, mut i: Item) -> Option { - let mut avec: Vec = Vec::new(); - for attr in &i.attrs { - match attr { - &clean::NameValue(ref x, ref s) - if "doc" == *x => { - avec.push(clean::NameValue("doc".to_string(), - unindent(s))) - } - x => avec.push(x.clone()) - } - } - i.attrs = avec; + i.attrs.unindent_doc_comments(); self.fold_item_recur(i) } } +impl clean::Attributes { + pub fn unindent_doc_comments(&mut self) { + for doc_string in &mut self.doc_strings { + *doc_string = unindent(doc_string); + } + } +} + fn unindent(s: &str) -> String { let lines = s.lines().collect:: >(); let mut saw_first_line = false; diff --git a/src/librustdoc/test.rs b/src/librustdoc/test.rs index 1bbd67fb9b..b96a737ed0 100644 --- a/src/librustdoc/test.rs +++ b/src/librustdoc/test.rs @@ -8,7 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use std::cell::Cell; use std::env; use std::ffi::OsString; use std::io::prelude::*; @@ -23,7 +22,8 @@ use std::sync::{Arc, Mutex}; use testing; use rustc_lint; use rustc::dep_graph::DepGraph; -use rustc::hir::map as hir_map; +use rustc::hir; +use rustc::hir::intravisit; use rustc::session::{self, config}; use rustc::session::config::{OutputType, OutputTypes, Externs}; use rustc::session::search_paths::{SearchPaths, PathKind}; @@ -33,18 +33,15 @@ use rustc_driver::{driver, Compilation}; use rustc_driver::driver::phase_2_configure_and_expand; use rustc_metadata::cstore::CStore; use rustc_resolve::MakeGlobMap; +use rustc_trans::back::link; +use syntax::ast; use syntax::codemap::CodeMap; use syntax::feature_gate::UnstableFeatures; use errors; use errors::emitter::ColorConfig; -use core; -use clean; -use clean::Clean; -use fold::DocFolder; +use clean::Attributes; use html::markdown; -use passes; -use visit_ast::RustdocVisitor; #[derive(Clone, Default)] pub struct TestOptions { @@ -93,41 +90,30 @@ pub fn run(input: &str, ).expect("phase_2_configure_and_expand aborted in rustdoc!") }; - let dep_graph = DepGraph::new(false); + let crate_name = crate_name.unwrap_or_else(|| { + link::find_crate_name(None, &hir_forest.krate().attrs, &input) + }); let opts = scrape_test_config(hir_forest.krate()); - let _ignore = dep_graph.in_ignore(); - let map = hir_map::map_crate(&mut hir_forest, defs); - - let ctx = core::DocContext { - map: &map, - maybe_typed: core::NotTyped(&sess), - input: input, - populated_all_crate_impls: Cell::new(false), - external_traits: Default::default(), - deref_trait_did: Cell::new(None), - deref_mut_trait_did: Cell::new(None), - access_levels: Default::default(), - renderinfo: Default::default(), - ty_substs: Default::default(), - lt_substs: Default::default(), - }; - - let mut v = RustdocVisitor::new(&ctx); - v.visit(ctx.map.krate()); - let mut krate = v.clean(&ctx); - if let Some(name) = crate_name { - krate.name = name; - } - let krate = passes::collapse_docs(krate); - let krate = passes::unindent_comments(krate); - - let mut collector = Collector::new(krate.name.to_string(), + let mut collector = Collector::new(crate_name, cfgs, libs, externs, false, opts); - collector.fold_crate(krate); + + { + let dep_graph = DepGraph::new(false); + let _ignore = dep_graph.in_ignore(); + let map = hir::map::map_crate(&mut hir_forest, defs); + let krate = map.krate(); + let mut hir_collector = HirCollector { + collector: &mut collector, + map: &map + }; + hir_collector.visit_testable("".to_string(), &krate.attrs, |this| { + intravisit::walk_crate(this, krate); + }); + } test_args.insert(0, "rustdoctest".to_string()); @@ -359,7 +345,7 @@ pub fn maketest(s: &str, cratename: Option<&str>, dont_insert_main: bool, } fn partition_source(s: &str) -> (String, String) { - use rustc_unicode::str::UnicodeStr; + use std_unicode::str::UnicodeStr; let mut after_header = false; let mut before = String::new(); @@ -471,56 +457,88 @@ impl Collector { } } -impl DocFolder for Collector { - fn fold_item(&mut self, item: clean::Item) -> Option { - let current_name = match item.name { - Some(ref name) if !name.is_empty() => Some(name.clone()), - _ => typename_if_impl(&item) - }; +struct HirCollector<'a, 'hir: 'a> { + collector: &'a mut Collector, + map: &'a hir::map::Map<'hir> +} - let pushed = current_name.map(|name| self.names.push(name)).is_some(); - - if let Some(doc) = item.doc_value() { - self.cnt = 0; - markdown::find_testable_code(doc, &mut *self); +impl<'a, 'hir> HirCollector<'a, 'hir> { + fn 
visit_testable(&mut self, + name: String, + attrs: &[ast::Attribute], + nested: F) { + let has_name = !name.is_empty(); + if has_name { + self.collector.names.push(name); } - let ret = self.fold_item_recur(item); - if pushed { - self.names.pop(); + let mut attrs = Attributes::from_ast(attrs); + attrs.collapse_doc_comments(); + attrs.unindent_doc_comments(); + if let Some(doc) = attrs.doc_value() { + self.collector.cnt = 0; + markdown::find_testable_code(doc, self.collector); } - return ret; + nested(self); - // FIXME: it would be better to not have the escaped version in the first place - fn unescape_for_testname(mut s: String) -> String { - // for refs `&foo` - if s.contains("&") { - s = s.replace("&", "&"); - - // `::&'a mut Foo::` looks weird, let's make it `::<&'a mut Foo>`:: - if let Some('&') = s.chars().nth(0) { - s = format!("<{}>", s); - } - } - - // either `<..>` or `->` - if s.contains(">") { - s.replace(">", ">") - .replace("<", "<") - } else { - s - } - } - - fn typename_if_impl(item: &clean::Item) -> Option { - if let clean::ItemEnum::ImplItem(ref impl_) = item.inner { - let path = impl_.for_.to_string(); - let unescaped_path = unescape_for_testname(path); - Some(unescaped_path) - } else { - None - } + if has_name { + self.collector.names.pop(); } } } + +impl<'a, 'hir> intravisit::Visitor<'hir> for HirCollector<'a, 'hir> { + fn nested_visit_map<'this>(&'this mut self) -> intravisit::NestedVisitorMap<'this, 'hir> { + intravisit::NestedVisitorMap::All(&self.map) + } + + fn visit_item(&mut self, item: &'hir hir::Item) { + let name = if let hir::ItemImpl(.., ref ty, _) = item.node { + hir::print::ty_to_string(ty) + } else { + item.name.to_string() + }; + + self.visit_testable(name, &item.attrs, |this| { + intravisit::walk_item(this, item); + }); + } + + fn visit_trait_item(&mut self, item: &'hir hir::TraitItem) { + self.visit_testable(item.name.to_string(), &item.attrs, |this| { + intravisit::walk_trait_item(this, item); + }); + } + + fn visit_impl_item(&mut self, item: &'hir hir::ImplItem) { + self.visit_testable(item.name.to_string(), &item.attrs, |this| { + intravisit::walk_impl_item(this, item); + }); + } + + fn visit_foreign_item(&mut self, item: &'hir hir::ForeignItem) { + self.visit_testable(item.name.to_string(), &item.attrs, |this| { + intravisit::walk_foreign_item(this, item); + }); + } + + fn visit_variant(&mut self, + v: &'hir hir::Variant, + g: &'hir hir::Generics, + item_id: ast::NodeId) { + self.visit_testable(v.node.name.to_string(), &v.node.attrs, |this| { + intravisit::walk_variant(this, v, g, item_id); + }); + } + + fn visit_struct_field(&mut self, f: &'hir hir::StructField) { + self.visit_testable(f.name.to_string(), &f.attrs, |this| { + intravisit::walk_struct_field(this, f); + }); + } + + fn visit_macro_def(&mut self, macro_def: &'hir hir::MacroDef) { + self.visit_testable(macro_def.name.to_string(), ¯o_def.attrs, |_| ()); + } +} diff --git a/src/librustdoc/visit_ast.rs b/src/librustdoc/visit_ast.rs index 4d1af16227..4087b9a761 100644 --- a/src/librustdoc/visit_ast.rs +++ b/src/librustdoc/visit_ast.rs @@ -21,13 +21,14 @@ use syntax_pos::Span; use rustc::hir::map as hir_map; use rustc::hir::def::Def; use rustc::hir::def_id::LOCAL_CRATE; +use rustc::middle::cstore::LoadedMacro; use rustc::middle::privacy::AccessLevel; -use rustc::util::nodemap::FnvHashSet; +use rustc::util::nodemap::FxHashSet; use rustc::hir; use core; -use clean::{self, Clean, Attributes}; +use clean::{self, AttributesExt, NestedAttributesExt}; use doctree::*; // looks to me like the first 
two of these are actually @@ -42,37 +43,35 @@ pub struct RustdocVisitor<'a, 'tcx: 'a> { pub module: Module, pub attrs: hir::HirVec, pub cx: &'a core::DocContext<'a, 'tcx>, - view_item_stack: FnvHashSet, - inlining_from_glob: bool, + view_item_stack: FxHashSet, + inlining: bool, + /// Is the current module and all of its parents public? + inside_public_path: bool, } impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { pub fn new(cx: &'a core::DocContext<'a, 'tcx>) -> RustdocVisitor<'a, 'tcx> { // If the root is reexported, terminate all recursion. - let mut stack = FnvHashSet(); + let mut stack = FxHashSet(); stack.insert(ast::CRATE_NODE_ID); RustdocVisitor { module: Module::new(None), attrs: hir::HirVec::new(), cx: cx, view_item_stack: stack, - inlining_from_glob: false, + inlining: false, + inside_public_path: true, } } fn stability(&self, id: ast::NodeId) -> Option { - self.cx.tcx_opt().and_then(|tcx| { - self.cx.map.opt_local_def_id(id) - .and_then(|def_id| tcx.lookup_stability(def_id)) - .cloned() - }) + self.cx.tcx.map.opt_local_def_id(id) + .and_then(|def_id| self.cx.tcx.lookup_stability(def_id)).cloned() } fn deprecation(&self, id: ast::NodeId) -> Option { - self.cx.tcx_opt().and_then(|tcx| { - self.cx.map.opt_local_def_id(id) - .and_then(|def_id| tcx.lookup_deprecation(def_id)) - }) + self.cx.tcx.map.opt_local_def_id(id) + .and_then(|def_id| self.cx.tcx.lookup_deprecation(def_id)) } pub fn visit(&mut self, krate: &hir::Crate) { @@ -85,8 +84,9 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { &krate.module, None); // attach the crate's exported macros to the top-level module: - self.module.macros = krate.exported_macros.iter() - .map(|def| self.visit_macro(def)).collect(); + let macro_exports: Vec<_> = + krate.exported_macros.iter().map(|def| self.visit_macro(def)).collect(); + self.module.macros.extend(macro_exports); self.module.is_crate = true; } @@ -187,47 +187,42 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { om.stab = self.stability(id); om.depr = self.deprecation(id); om.id = id; + // Keep track of if there were any private modules in the path. + let orig_inside_public_path = self.inside_public_path; + self.inside_public_path &= vis == hir::Public; for i in &m.item_ids { - let item = self.cx.map.expect_item(i.id); + let item = self.cx.tcx.map.expect_item(i.id); self.visit_item(item, None, &mut om); } - om - } + self.inside_public_path = orig_inside_public_path; + if let Some(exports) = self.cx.export_map.get(&id) { + for export in exports { + if let Def::Macro(def_id) = export.def { + if def_id.krate == LOCAL_CRATE { + continue // These are `krate.exported_macros`, handled in `self.visit()`. + } + let def = match self.cx.sess().cstore.load_macro(def_id, self.cx.sess()) { + LoadedMacro::MacroRules(macro_rules) => macro_rules, + // FIXME(jseyfried): document proc macro reexports + LoadedMacro::ProcMacro(..) 
=> continue, + }; - fn visit_view_path(&mut self, path: hir::ViewPath_, - om: &mut Module, - id: ast::NodeId, - please_inline: bool) -> Option { - match path { - hir::ViewPathSimple(dst, base) => { - if self.maybe_inline_local(id, Some(dst), false, om, please_inline) { - None - } else { - Some(hir::ViewPathSimple(dst, base)) - } - } - hir::ViewPathList(p, paths) => { - let mine = paths.into_iter().filter(|path| { - !self.maybe_inline_local(path.node.id, path.node.rename, - false, om, please_inline) - }).collect::>(); - - if mine.is_empty() { - None - } else { - Some(hir::ViewPathList(p, mine)) - } - } - - hir::ViewPathGlob(base) => { - if self.maybe_inline_local(id, None, true, om, please_inline) { - None - } else { - Some(hir::ViewPathGlob(base)) + // FIXME(jseyfried) merge with `self.visit_macro()` + let matchers = def.body.chunks(4).map(|arm| arm[0].get_span()).collect(); + om.macros.push(Macro { + id: def.id, + attrs: def.attrs.clone().into(), + name: def.ident.name, + whence: def.span, + matchers: matchers, + stab: self.stability(def.id), + depr: self.deprecation(def.id), + imported_from: def.imported_from.map(|ident| ident.name), + }) } } } - + om } /// Tries to resolve the target of a `pub use` statement and inlines the @@ -239,14 +234,18 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { /// and follows different rules. /// /// Returns true if the target has been inlined. - fn maybe_inline_local(&mut self, id: ast::NodeId, renamed: Option, - glob: bool, om: &mut Module, please_inline: bool) -> bool { + fn maybe_inline_local(&mut self, + id: ast::NodeId, + def: Def, + renamed: Option, + glob: bool, + om: &mut Module, + please_inline: bool) -> bool { fn inherits_doc_hidden(cx: &core::DocContext, mut node: ast::NodeId) -> bool { - while let Some(id) = cx.map.get_enclosing_scope(node) { + while let Some(id) = cx.tcx.map.get_enclosing_scope(node) { node = id; - let attrs = cx.map.attrs(node).clean(cx); - if attrs.list("doc").has_word("hidden") { + if cx.tcx.map.attrs(node).lists("doc").has_word("hidden") { return true; } if node == ast::CRATE_NODE_ID { @@ -256,25 +255,24 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { false } - let tcx = match self.cx.tcx_opt() { - Some(tcx) => tcx, - None => return false - }; - let def = tcx.expect_def(id); + let tcx = self.cx.tcx; + if def == Def::Err { + return false; + } let def_did = def.def_id(); - let use_attrs = tcx.map.attrs(id).clean(self.cx); + let use_attrs = tcx.map.attrs(id); // Don't inline doc(hidden) imports so they can be stripped at a later stage. - let is_no_inline = use_attrs.list("doc").has_word("no_inline") || - use_attrs.list("doc").has_word("hidden"); + let is_no_inline = use_attrs.lists("doc").has_word("no_inline") || + use_attrs.lists("doc").has_word("hidden"); // For cross-crate impl inlining we need to know whether items are // reachable in documentation - a previously nonreachable item can be // made reachable by cross-crate inlining which we're checking here. 
// (this is done here because we need to know this upfront) if !def_did.is_local() && !is_no_inline { - let attrs = clean::inline::load_attrs(self.cx, tcx, def_did); - let self_is_hidden = attrs.list("doc").has_word("hidden"); + let attrs = clean::inline::load_attrs(self.cx, def_did); + let self_is_hidden = attrs.lists("doc").has_word("hidden"); match def { Def::Trait(did) | Def::Struct(did) | @@ -307,22 +305,22 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { let ret = match tcx.map.get(def_node_id) { hir_map::NodeItem(it) => { + let prev = mem::replace(&mut self.inlining, true); if glob { - let prev = mem::replace(&mut self.inlining_from_glob, true); match it.node { hir::ItemMod(ref m) => { for i in &m.item_ids { - let i = self.cx.map.expect_item(i.id); + let i = self.cx.tcx.map.expect_item(i.id); self.visit_item(i, None, om); } } hir::ItemEnum(..) => {} _ => { panic!("glob not mapped to a module or enum"); } } - self.inlining_from_glob = prev; } else { self.visit_item(it, renamed, om); } + self.inlining = prev; true } _ => false, @@ -336,6 +334,19 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { debug!("Visiting item {:?}", item); let name = renamed.unwrap_or(item.name); match item.node { + hir::ItemForeignMod(ref fm) => { + // If inlining we only want to include public functions. + om.foreigns.push(if self.inlining { + hir::ForeignMod { + abi: fm.abi, + items: fm.items.iter().filter(|i| i.vis == hir::Public).cloned().collect(), + } + } else { + fm.clone() + }); + } + // If we're inlining, skip private items. + _ if self.inlining && item.vis != hir::Public => {} hir::ItemExternCrate(ref p) => { let cstore = &self.cx.sess().cstore; om.extern_crates.push(ExternCrate { @@ -348,9 +359,13 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { whence: item.span, }) } - hir::ItemUse(ref vpath) => { - let node = vpath.node.clone(); - let node = if item.vis == hir::Public { + hir::ItemUse(_, hir::UseKind::ListStem) => {} + hir::ItemUse(ref path, kind) => { + let is_glob = kind == hir::UseKind::Glob; + + // If there was a private module in the current path then don't bother inlining + // anything as it will probably be stripped anyway. + if item.vis == hir::Public && self.inside_public_path { let please_inline = item.attrs.iter().any(|item| { match item.meta_item_list() { Some(list) if item.check_name("doc") => { @@ -359,18 +374,24 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { _ => false, } }); - match self.visit_view_path(node, om, item.id, please_inline) { - None => return, - Some(p) => p + let name = if is_glob { None } else { Some(name) }; + if self.maybe_inline_local(item.id, + path.def, + name, + is_glob, + om, + please_inline) { + return; } - } else { - node - }; + } + om.imports.push(Import { + name: name, id: item.id, vis: item.vis.clone(), attrs: item.attrs.clone(), - node: node, + path: (**path).clone(), + glob: is_glob, whence: item.span, }); } @@ -450,43 +471,44 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { }; om.traits.push(t); }, - hir::ItemImpl(unsafety, polarity, ref gen, ref tr, ref ty, ref items) => { - let i = Impl { - unsafety: unsafety, - polarity: polarity, - generics: gen.clone(), - trait_: tr.clone(), - for_: ty.clone(), - items: items.clone(), - attrs: item.attrs.clone(), - id: item.id, - whence: item.span, - vis: item.vis.clone(), - stab: self.stability(item.id), - depr: self.deprecation(item.id), - }; - // Don't duplicate impls when inlining glob imports, we'll pick - // them up regardless of where they're located. 
- if !self.inlining_from_glob { + + hir::ItemImpl(unsafety, polarity, ref gen, ref tr, ref ty, ref item_ids) => { + // Don't duplicate impls when inlining, we'll pick them up + // regardless of where they're located. + if !self.inlining { + let items = item_ids.iter() + .map(|ii| self.cx.tcx.map.impl_item(ii.id).clone()) + .collect(); + let i = Impl { + unsafety: unsafety, + polarity: polarity, + generics: gen.clone(), + trait_: tr.clone(), + for_: ty.clone(), + items: items, + attrs: item.attrs.clone(), + id: item.id, + whence: item.span, + vis: item.vis.clone(), + stab: self.stability(item.id), + depr: self.deprecation(item.id), + }; om.impls.push(i); } }, hir::ItemDefaultImpl(unsafety, ref trait_ref) => { - let i = DefaultImpl { - unsafety: unsafety, - trait_: trait_ref.clone(), - id: item.id, - attrs: item.attrs.clone(), - whence: item.span, - }; - // see comment above about ItemImpl - if !self.inlining_from_glob { + // See comment above about ItemImpl. + if !self.inlining { + let i = DefaultImpl { + unsafety: unsafety, + trait_: trait_ref.clone(), + id: item.id, + attrs: item.attrs.clone(), + whence: item.span, + }; om.def_traits.push(i); } } - hir::ItemForeignMod(ref fm) => { - om.foreigns.push(fm.clone()); - } } } diff --git a/src/librustdoc/visit_lib.rs b/src/librustdoc/visit_lib.rs index 6d2830c561..cee292f991 100644 --- a/src/librustdoc/visit_lib.rs +++ b/src/librustdoc/visit_lib.rs @@ -16,7 +16,7 @@ use rustc::ty::Visibility; use std::cell::RefMut; -use clean::{Attributes, Clean}; +use clean::{AttributesExt, NestedAttributesExt}; // FIXME: this may not be exhaustive, but is sufficient for rustdocs current uses @@ -49,10 +49,7 @@ impl<'a, 'b, 'tcx> LibEmbargoVisitor<'a, 'b, 'tcx> { // Updates node level and returns the updated level fn update(&mut self, did: DefId, level: Option) -> Option { - let attrs: Vec<_> = self.cx.tcx().get_attrs(did).iter() - .map(|a| a.clean(self.cx)) - .collect(); - let is_hidden = attrs.list("doc").has_word("hidden"); + let is_hidden = self.cx.tcx.get_attrs(did).lists("doc").has_word("hidden"); let old_level = self.access_levels.map.get(&did).cloned(); // Accessibility levels can only grow diff --git a/src/libserialize/leb128.rs b/src/libserialize/leb128.rs index 8e8e03f1f8..5b72c6d46a 100644 --- a/src/libserialize/leb128.rs +++ b/src/libserialize/leb128.rs @@ -9,18 +9,26 @@ // except according to those terms. #[inline] -pub fn write_to_vec(vec: &mut Vec, position: &mut usize, byte: u8) { - if *position == vec.len() { +fn write_to_vec(vec: &mut Vec, position: usize, byte: u8) { + if position == vec.len() { vec.push(byte); } else { - vec[*position] = byte; + vec[position] = byte; } - - *position += 1; } -pub fn write_unsigned_leb128(out: &mut Vec, start_position: usize, mut value: u64) -> usize { - let mut position = start_position; +#[inline] +/// encodes an integer using unsigned leb128 encoding and stores +/// the result using a callback function. +/// +/// The callback `write` is called once for each position +/// that is to be written to with the byte to be encoded +/// at that position. 
+pub fn write_unsigned_leb128_to(mut value: u64, mut write: W) -> usize + where W: FnMut(usize, u8) +{ + let mut position = 0; + loop { let mut byte = (value & 0x7F) as u8; value >>= 7; @@ -28,14 +36,19 @@ pub fn write_unsigned_leb128(out: &mut Vec, start_position: usize, mut value byte |= 0x80; } - write_to_vec(out, &mut position, byte); + write(position, byte); + position += 1; if value == 0 { break; } } - return position - start_position; + position +} + +pub fn write_unsigned_leb128(out: &mut Vec, start_position: usize, value: u64) -> usize { + write_unsigned_leb128_to(value, |i, v| write_to_vec(out, start_position+i, v)) } #[inline] @@ -56,9 +69,17 @@ pub fn read_unsigned_leb128(data: &[u8], start_position: usize) -> (u64, usize) (result, position - start_position) } - -pub fn write_signed_leb128(out: &mut Vec, start_position: usize, mut value: i64) -> usize { - let mut position = start_position; +#[inline] +/// encodes an integer using signed leb128 encoding and stores +/// the result using a callback function. +/// +/// The callback `write` is called once for each position +/// that is to be written to with the byte to be encoded +/// at that position. +pub fn write_signed_leb128_to(mut value: i64, mut write: W) -> usize + where W: FnMut(usize, u8) +{ + let mut position = 0; loop { let mut byte = (value as u8) & 0x7f; @@ -69,14 +90,19 @@ pub fn write_signed_leb128(out: &mut Vec, start_position: usize, mut value: byte |= 0x80; // Mark this byte to show that more bytes will follow. } - write_to_vec(out, &mut position, byte); + write(position, byte); + position += 1; if !more { break; } } - return position - start_position; + position +} + +pub fn write_signed_leb128(out: &mut Vec, start_position: usize, value: i64) -> usize { + write_signed_leb128_to(value, |i, v| write_to_vec(out, start_position+i, v)) } #[inline] diff --git a/src/libserialize/lib.rs b/src/libserialize/lib.rs index 884f24ddc4..ad2304e155 100644 --- a/src/libserialize/lib.rs +++ b/src/libserialize/lib.rs @@ -35,14 +35,13 @@ Core encoding and decoding interfaces. 
#![feature(specialization)] #![feature(staged_api)] #![feature(unicode)] -#![cfg_attr(stage0, feature(question_mark))] #![cfg_attr(test, feature(test))] // test harness access #[cfg(test)] extern crate test; #[macro_use] extern crate log; -extern crate rustc_unicode; +extern crate std_unicode; extern crate collections; pub use self::serialize::{Decoder, Encoder, Decodable, Encodable}; diff --git a/src/libstd/Cargo.toml b/src/libstd/Cargo.toml index 21e6acc37f..fcf84cb716 100644 --- a/src/libstd/Cargo.toml +++ b/src/libstd/Cargo.toml @@ -13,14 +13,14 @@ crate-type = ["dylib", "rlib"] alloc = { path = "../liballoc" } alloc_jemalloc = { path = "../liballoc_jemalloc", optional = true } alloc_system = { path = "../liballoc_system" } -panic_unwind = { path = "../libpanic_unwind" } +panic_unwind = { path = "../libpanic_unwind", optional = true } panic_abort = { path = "../libpanic_abort" } collections = { path = "../libcollections" } core = { path = "../libcore" } libc = { path = "../rustc/libc_shim" } rand = { path = "../librand" } compiler_builtins = { path = "../libcompiler_builtins" } -rustc_unicode = { path = "../librustc_unicode" } +std_unicode = { path = "../libstd_unicode" } unwind = { path = "../libunwind" } [build-dependencies] @@ -29,5 +29,6 @@ gcc = "0.3.27" [features] backtrace = [] -jemalloc = ["alloc_jemalloc"] debug-jemalloc = ["alloc_jemalloc/debug"] +jemalloc = ["alloc_jemalloc"] +panic-unwind = ["panic_unwind"] diff --git a/src/libstd/build.rs b/src/libstd/build.rs index 72cd6e4830..b3eba50831 100644 --- a/src/libstd/build.rs +++ b/src/libstd/build.rs @@ -60,6 +60,8 @@ fn main() { println!("cargo:rustc-link-lib=shell32"); } else if target.contains("fuchsia") { println!("cargo:rustc-link-lib=magenta"); + println!("cargo:rustc-link-lib=mxio"); + println!("cargo:rustc-link-lib=launchpad"); // for std::process } } @@ -102,7 +104,7 @@ fn build_libbacktrace(host: &str, target: &str) { .env("AR", &ar) .env("RANLIB", format!("{} s", ar.display())) .env("CFLAGS", cflags)); - run(Command::new("make") + run(Command::new(build_helper::make(host)) .current_dir(&build_dir) .arg(format!("INCDIR={}", src_dir.display())) .arg("-j").arg(env::var("NUM_JOBS").expect("NUM_JOBS was not set"))); diff --git a/src/libstd/collections/hash/map.rs b/src/libstd/collections/hash/map.rs index ece51d6d82..0b310eb258 100644 --- a/src/libstd/collections/hash/map.rs +++ b/src/libstd/collections/hash/map.rs @@ -371,9 +371,9 @@ fn search_hashed(table: M, hash: SafeHash, mut is_match: F) -> Inter return InternalEntry::TableIsEmpty; } - let size = table.size() as isize; + let size = table.size(); let mut probe = Bucket::new(table, hash); - let ib = probe.index() as isize; + let mut displacement = 0; loop { let full = match probe.peek() { @@ -387,15 +387,15 @@ fn search_hashed(table: M, hash: SafeHash, mut is_match: F) -> Inter Full(bucket) => bucket, }; - let robin_ib = full.index() as isize - full.displacement() as isize; + let probe_displacement = full.displacement(); - if ib < robin_ib { + if probe_displacement < displacement { // Found a luckier bucket than me. // We can finish the search early if we hit any bucket // with a lower distance to initial bucket than we've probed. 
return InternalEntry::Vacant { hash: hash, - elem: NeqElem(full, robin_ib as usize), + elem: NeqElem(full, probe_displacement), }; } @@ -406,9 +406,9 @@ fn search_hashed(table: M, hash: SafeHash, mut is_match: F) -> Inter return InternalEntry::Occupied { elem: full }; } } - + displacement += 1; probe = full.next(); - debug_assert!(probe.index() as isize != ib + size + 1); + debug_assert!(displacement <= size); } } @@ -431,12 +431,11 @@ fn pop_internal(starting_bucket: FullBucketMut) -> (K, V) { } /// Perform robin hood bucket stealing at the given `bucket`. You must -/// also pass the position of that bucket's initial bucket so we don't have -/// to recalculate it. +/// also pass that bucket's displacement so we don't have to recalculate it. /// /// `hash`, `k`, and `v` are the elements to "robin hood" into the hashtable. fn robin_hood<'a, K: 'a, V: 'a>(bucket: FullBucketMut<'a, K, V>, - mut ib: usize, + mut displacement: usize, mut hash: SafeHash, mut key: K, mut val: V) @@ -457,6 +456,7 @@ fn robin_hood<'a, K: 'a, V: 'a>(bucket: FullBucketMut<'a, K, V>, val = old_val; loop { + displacement += 1; let probe = bucket.next(); debug_assert!(probe.index() != idx_end); @@ -476,13 +476,13 @@ fn robin_hood<'a, K: 'a, V: 'a>(bucket: FullBucketMut<'a, K, V>, Full(bucket) => bucket, }; - let probe_ib = full_bucket.index() - full_bucket.displacement(); + let probe_displacement = full_bucket.displacement(); bucket = full_bucket; // Robin hood! Steal the spot. - if ib < probe_ib { - ib = probe_ib; + if probe_displacement < displacement { + displacement = probe_displacement; break; } } @@ -520,13 +520,16 @@ impl HashMap search_hashed(&mut self.table, hash, |k| q.eq(k.borrow())) } - // The caller should ensure that invariants by Robin Hood Hashing hold. + // The caller should ensure that invariants by Robin Hood Hashing hold + // and that there's space in the underlying table. fn insert_hashed_ordered(&mut self, hash: SafeHash, k: K, v: V) { let raw_cap = self.raw_capacity(); let mut buckets = Bucket::new(&mut self.table, hash); - let ib = buckets.index(); + // note that buckets.index() keeps increasing + // even if the pointer wraps back to the first bucket. + let limit_bucket = buckets.index() + raw_cap; - while buckets.index() != ib + raw_cap { + loop { // We don't need to compare hashes for value swap. // Not even DIBs for Robin Hood. 
buckets = match buckets.peek() { @@ -537,8 +540,8 @@ impl HashMap Full(b) => b.into_bucket(), }; buckets.next(); + debug_assert!(buckets.index() < limit_bucket); } - panic!("Internal HashMap error: Out of space."); } } @@ -1959,7 +1962,7 @@ impl<'a, K: 'a, V: 'a> VacantEntry<'a, K, V> { #[stable(feature = "rust1", since = "1.0.0")] pub fn insert(self, value: V) -> &'a mut V { match self.elem { - NeqElem(bucket, ib) => robin_hood(bucket, ib, self.hash, self.key, value), + NeqElem(bucket, disp) => robin_hood(bucket, disp, self.hash, self.key, value), NoElem(bucket) => bucket.put(self.hash, self.key, value).into_mut_refs().1, } } @@ -1971,10 +1974,8 @@ impl FromIterator<(K, V)> for HashMap S: BuildHasher + Default { fn from_iter>(iter: T) -> HashMap { - let iterator = iter.into_iter(); - let lower = iterator.size_hint().0; - let mut map = HashMap::with_capacity_and_hasher(lower, Default::default()); - map.extend(iterator); + let mut map = HashMap::with_hasher(Default::default()); + map.extend(iter); map } } @@ -1985,6 +1986,17 @@ impl Extend<(K, V)> for HashMap S: BuildHasher { fn extend>(&mut self, iter: T) { + // Keys may be already present or show multiple times in the iterator. + // Reserve the entire hint lower bound if the map is empty. + // Otherwise reserve half the hint (rounded up), so the map + // will only resize twice in the worst case. + let iter = iter.into_iter(); + let reserve = if self.is_empty() { + iter.size_hint().0 + } else { + (iter.size_hint().0 + 1) / 2 + }; + self.reserve(reserve); for (k, v) in iter { self.insert(k, v); } @@ -2105,6 +2117,10 @@ impl DefaultHasher { #[stable(feature = "hashmap_default_hasher", since = "1.13.0")] impl Default for DefaultHasher { + /// Creates a new `DefaultHasher` using [`DefaultHasher::new`]. See + /// [`DefaultHasher::new`] documentation for more information. + /// + /// [`DefaultHasher::new`]: #method.new fn default() -> DefaultHasher { DefaultHasher::new() } diff --git a/src/libstd/collections/hash/set.rs b/src/libstd/collections/hash/set.rs index 1ec7a4a7b6..72af612f56 100644 --- a/src/libstd/collections/hash/set.rs +++ b/src/libstd/collections/hash/set.rs @@ -663,10 +663,8 @@ impl FromIterator for HashSet S: BuildHasher + Default { fn from_iter>(iter: I) -> HashSet { - let iterator = iter.into_iter(); - let lower = iterator.size_hint().0; - let mut set = HashSet::with_capacity_and_hasher(lower, Default::default()); - set.extend(iterator); + let mut set = HashSet::with_hasher(Default::default()); + set.extend(iter); set } } @@ -677,9 +675,7 @@ impl Extend for HashSet S: BuildHasher { fn extend>(&mut self, iter: I) { - for k in iter { - self.insert(k); - } + self.map.extend(iter.into_iter().map(|k| (k, ()))); } } diff --git a/src/libstd/env.rs b/src/libstd/env.rs index e29dbe35c5..ee6a907f61 100644 --- a/src/libstd/env.rs +++ b/src/libstd/env.rs @@ -546,17 +546,23 @@ pub fn current_exe() -> io::Result { os_imp::current_exe() } -/// An iterator over the arguments of a process, yielding a `String` value +/// An iterator over the arguments of a process, yielding a [`String`] value /// for each argument. /// -/// This structure is created through the `std::env::args` method. +/// This structure is created through the [`std::env::args`] method. 
+/// +/// [`String`]: ../string/struct.String.html +/// [`std::env::args`]: ./fn.args.html #[stable(feature = "env", since = "1.0.0")] pub struct Args { inner: ArgsOs } -/// An iterator over the arguments of a process, yielding an `OsString` value +/// An iterator over the arguments of a process, yielding an [`OsString`] value /// for each argument. /// -/// This structure is created through the `std::env::args_os` method. +/// This structure is created through the [`std::env::args_os`] method. +/// +/// [`OsString`]: ../ffi/struct.OsString.html +/// [`std::env::args_os`]: ./fn.args_os.html #[stable(feature = "env", since = "1.0.0")] pub struct ArgsOs { inner: sys::args::Args } @@ -571,7 +577,7 @@ pub struct ArgsOs { inner: sys::args::Args } /// /// The returned iterator will panic during iteration if any argument to the /// process is not valid unicode. If this is not desired, -/// use the `args_os` function instead. +/// use the [`args_os`] function instead. /// /// # Examples /// @@ -583,6 +589,8 @@ pub struct ArgsOs { inner: sys::args::Args } /// println!("{}", argument); /// } /// ``` +/// +/// [`args_os`]: ./fn.args_os.html #[stable(feature = "env", since = "1.0.0")] pub fn args() -> Args { Args { inner: args_os() } @@ -622,6 +630,7 @@ impl Iterator for Args { #[stable(feature = "env", since = "1.0.0")] impl ExactSizeIterator for Args { fn len(&self) -> usize { self.inner.len() } + fn is_empty(&self) -> bool { self.inner.is_empty() } } #[stable(feature = "env_iterators", since = "1.11.0")] @@ -641,6 +650,7 @@ impl Iterator for ArgsOs { #[stable(feature = "env", since = "1.0.0")] impl ExactSizeIterator for ArgsOs { fn len(&self) -> usize { self.inner.len() } + fn is_empty(&self) -> bool { self.inner.is_empty() } } #[stable(feature = "env_iterators", since = "1.11.0")] diff --git a/src/libstd/error.rs b/src/libstd/error.rs index 454fa47cfb..e115263d2e 100644 --- a/src/libstd/error.rs +++ b/src/libstd/error.rs @@ -109,7 +109,7 @@ pub trait Error: Debug + Display { /// /// impl Error for SuperError { /// fn description(&self) -> &str { - /// "I'm the superhero of errors!" + /// "I'm the superhero of errors" /// } /// /// fn cause(&self) -> Option<&Error> { @@ -128,7 +128,7 @@ pub trait Error: Debug + Display { /// /// impl Error for SuperErrorSideKick { /// fn description(&self) -> &str { - /// "I'm SuperError side kick!" + /// "I'm SuperError side kick" /// } /// } /// diff --git a/src/libstd/ffi/c_str.rs b/src/libstd/ffi/c_str.rs index 3ad5b5627d..d1b8fcd744 100644 --- a/src/libstd/ffi/c_str.rs +++ b/src/libstd/ffi/c_str.rs @@ -686,7 +686,7 @@ impl ToOwned for CStr { type Owned = CString; fn to_owned(&self) -> CString { - unsafe { CString::from_vec_unchecked(self.to_bytes().to_vec()) } + CString { inner: self.to_bytes_with_nul().to_vec().into_boxed_slice() } } } diff --git a/src/libstd/fs.rs b/src/libstd/fs.rs index df5741d00a..e91e808c54 100644 --- a/src/libstd/fs.rs +++ b/src/libstd/fs.rs @@ -348,6 +348,41 @@ impl File { inner: self.inner.duplicate()? }) } + + /// Changes the permissions on the underlying file. + /// + /// # Platform-specific behavior + /// + /// This function currently corresponds to the `fchmod` function on Unix and + /// the `SetFileInformationByHandle` function on Windows. Note that, this + /// [may change in the future][changes]. + /// + /// [changes]: ../io/index.html#platform-specific-behavior + /// + /// # Errors + /// + /// This function will return an error if the user lacks permission change + /// attributes on the underlying file. 
It may also return an error in other + /// os-specific unspecified cases. + /// + /// # Examples + /// + /// ``` + /// #![feature(set_permissions_atomic)] + /// # fn foo() -> std::io::Result<()> { + /// use std::fs::File; + /// + /// let file = File::open("foo.txt")?; + /// let mut perms = file.metadata()?.permissions(); + /// perms.set_readonly(true); + /// file.set_permissions(perms)?; + /// # Ok(()) + /// # } + /// ``` + #[unstable(feature = "set_permissions_atomic", issue="37916")] + pub fn set_permissions(&self, perm: Permissions) -> io::Result<()> { + self.inner.set_permissions(perm.0) + } } impl AsInner for File { @@ -2469,6 +2504,24 @@ mod tests { check!(fs::set_permissions(&file, p)); } + #[test] + fn fchmod_works() { + let tmpdir = tmpdir(); + let path = tmpdir.join("in.txt"); + + let file = check!(File::create(&path)); + let attr = check!(fs::metadata(&path)); + assert!(!attr.permissions().readonly()); + let mut p = attr.permissions(); + p.set_readonly(true); + check!(file.set_permissions(p.clone())); + let attr = check!(fs::metadata(&path)); + assert!(attr.permissions().readonly()); + + p.set_readonly(false); + check!(file.set_permissions(p)); + } + #[test] fn sync_doesnt_kill_anything() { let tmpdir = tmpdir(); diff --git a/src/libstd/io/buffered.rs b/src/libstd/io/buffered.rs index 44dd4e9874..cd7a50d07e 100644 --- a/src/libstd/io/buffered.rs +++ b/src/libstd/io/buffered.rs @@ -187,7 +187,10 @@ impl BufRead for BufReader { fn fill_buf(&mut self) -> io::Result<&[u8]> { // If we've reached the end of our internal buffer then we need to fetch // some more data from the underlying reader. - if self.pos == self.cap { + // Branch using `>=` instead of the more correct `==` + // to tell the compiler that the pos..cap slice is always valid. + if self.pos >= self.cap { + debug_assert!(self.pos == self.cap); self.cap = self.inner.read(&mut self.buf)?; self.pos = 0; } diff --git a/src/libstd/io/impls.rs b/src/libstd/io/impls.rs index 6b26c01663..f691289811 100644 --- a/src/libstd/io/impls.rs +++ b/src/libstd/io/impls.rs @@ -157,7 +157,16 @@ impl<'a> Read for &'a [u8] { fn read(&mut self, buf: &mut [u8]) -> io::Result { let amt = cmp::min(buf.len(), self.len()); let (a, b) = self.split_at(amt); - buf[..amt].copy_from_slice(a); + + // First check if the amount of bytes we want to read is small: + // `copy_from_slice` will generally expand to a call to `memcpy`, and + // for a single byte the overhead is significant. + if amt == 1 { + buf[0] = a[0]; + } else { + buf[..amt].copy_from_slice(a); + } + *self = b; Ok(amt) } @@ -169,7 +178,16 @@ impl<'a> Read for &'a [u8] { "failed to fill whole buffer")); } let (a, b) = self.split_at(buf.len()); - buf.copy_from_slice(a); + + // First check if the amount of bytes we want to read is small: + // `copy_from_slice` will generally expand to a call to `memcpy`, and + // for a single byte the overhead is significant. + if buf.len() == 1 { + buf[0] = a[0]; + } else { + buf.copy_from_slice(a); + } + *self = b; Ok(()) } diff --git a/src/libstd/io/mod.rs b/src/libstd/io/mod.rs index 193f396c0d..b07da0dc26 100644 --- a/src/libstd/io/mod.rs +++ b/src/libstd/io/mod.rs @@ -21,7 +21,7 @@ //! of other types, and you can implement them for your types too. As such, //! you'll see a few different types of I/O throughout the documentation in //! this module: [`File`]s, [`TcpStream`]s, and sometimes even [`Vec`]s. For -//! example, `Read` adds a `read()` method, which we can use on `File`s: +//! 
example, [`Read`] adds a [`read()`] method, which we can use on `File`s: //! //! ``` //! use std::io; @@ -251,11 +251,12 @@ //! [`Lines`]: struct.Lines.html //! [`io::Result`]: type.Result.html //! [`try!`]: ../macro.try.html +//! [`read()`]: trait.Read.html#tymethod.read #![stable(feature = "rust1", since = "1.0.0")] use cmp; -use rustc_unicode::str as core_str; +use std_unicode::str as core_str; use error as std_error; use fmt; use result; @@ -814,19 +815,23 @@ pub trait Read { /// /// Implementors of the `Write` trait are sometimes called 'writers'. /// -/// Writers are defined by two required methods, `write()` and `flush()`: +/// Writers are defined by two required methods, [`write()`] and [`flush()`]: /// -/// * The `write()` method will attempt to write some data into the object, +/// * The [`write()`] method will attempt to write some data into the object, /// returning how many bytes were successfully written. /// -/// * The `flush()` method is useful for adaptors and explicit buffers +/// * The [`flush()`] method is useful for adaptors and explicit buffers /// themselves for ensuring that all buffered data has been pushed out to the /// 'true sink'. /// /// Writers are intended to be composable with one another. Many implementors -/// throughout `std::io` take and provide types which implement the `Write` +/// throughout [`std::io`] take and provide types which implement the `Write` /// trait. /// +/// [`write()`]: #tymethod.write +/// [`flush()`]: #tymethod.flush +/// [`std::io`]: index.html +/// /// # Examples /// /// ``` @@ -1475,10 +1480,10 @@ impl BufRead for Chain { /// Reader adaptor which limits the bytes read from an underlying reader. /// -/// This struct is generally created by calling [`take()`][take] on a reader. -/// Please see the documentation of `take()` for more details. +/// This struct is generally created by calling [`take()`] on a reader. +/// Please see the documentation of [`take()`] for more details. /// -/// [take]: trait.Read.html#method.take +/// [`take()`]: trait.Read.html#method.take #[stable(feature = "rust1", since = "1.0.0")] pub struct Take { inner: T, @@ -1491,8 +1496,10 @@ impl Take { /// /// # Note /// - /// This instance may reach EOF after reading fewer bytes than indicated by - /// this method if the underlying `Read` instance reaches EOF. + /// This instance may reach `EOF` after reading fewer bytes than indicated by + /// this method if the underlying [`Read`] instance reaches EOF. 
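The `Write` trait documentation in the hunk above describes the two required methods, `write()` and `flush()`. A minimal sketch of a custom writer satisfying them is shown below; it is not part of the patch, only an illustration using the stable `std::io::Write` API, and the `CountingWriter` type is made up.

```rust
use std::io::{self, Write};

// Hypothetical writer that merely counts bytes, implementing only the two
// required methods of `Write`.
struct CountingWriter {
    bytes_written: usize,
}

impl Write for CountingWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // Accept the whole buffer and report how many bytes were "written".
        self.bytes_written += buf.len();
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        // Nothing is buffered here, so flushing is a no-op.
        Ok(())
    }
}

fn main() {
    let mut writer = CountingWriter { bytes_written: 0 };
    writer.write_all(b"hello world").unwrap();
    writer.flush().unwrap();
    assert_eq!(writer.bytes_written, 11);
}
```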
+ /// + /// [`Read`]: ../../std/io/trait.Read.html /// /// # Examples /// @@ -1519,8 +1526,6 @@ impl Take { /// # Examples /// /// ``` - /// #![feature(io_take_into_inner)] - /// /// use std::io; /// use std::io::prelude::*; /// use std::fs::File; @@ -1536,7 +1541,7 @@ impl Take { /// # Ok(()) /// # } /// ``` - #[unstable(feature = "io_take_into_inner", issue = "23755")] + #[stable(feature = "io_take_into_inner", since = "1.15.0")] pub fn into_inner(self) -> T { self.inner } diff --git a/src/libstd/io/stdio.rs b/src/libstd/io/stdio.rs index 1777b79ea1..1a65bee13b 100644 --- a/src/libstd/io/stdio.rs +++ b/src/libstd/io/stdio.rs @@ -10,7 +10,7 @@ use io::prelude::*; -use cell::{RefCell, BorrowState}; +use cell::RefCell; use fmt; use io::lazy::Lazy; use io::{self, BufReader, LineWriter}; @@ -81,11 +81,11 @@ impl Read for StdinRaw { } impl Write for StdoutRaw { fn write(&mut self, buf: &[u8]) -> io::Result { self.0.write(buf) } - fn flush(&mut self) -> io::Result<()> { Ok(()) } + fn flush(&mut self) -> io::Result<()> { self.0.flush() } } impl Write for StderrRaw { fn write(&mut self, buf: &[u8]) -> io::Result { self.0.write(buf) } - fn flush(&mut self) -> io::Result<()> { Ok(()) } + fn flush(&mut self) -> io::Result<()> { self.0.flush() } } enum Maybe { @@ -318,10 +318,11 @@ impl<'a> BufRead for StdinLock<'a> { /// /// Each handle shares a global buffer of data to be written to the standard /// output stream. Access is also synchronized via a lock and explicit control -/// over locking is available via the `lock` method. +/// over locking is available via the [`lock()`] method. /// /// Created by the [`io::stdout`] method. /// +/// [`lock()`]: #method.lock /// [`io::stdout`]: fn.stdout.html #[stable(feature = "rust1", since = "1.0.0")] pub struct Stdout { @@ -637,8 +638,8 @@ pub fn _print(args: fmt::Arguments) { LocalKeyState::Destroyed => stdout().write_fmt(args), LocalKeyState::Valid => { LOCAL_STDOUT.with(|s| { - if s.borrow_state() == BorrowState::Unused { - if let Some(w) = s.borrow_mut().as_mut() { + if let Ok(mut borrowed) = s.try_borrow_mut() { + if let Some(w) = borrowed.as_mut() { return w.write_fmt(args); } } diff --git a/src/libstd/lib.rs b/src/libstd/lib.rs index 12dbbe3c46..414f25fa5e 100644 --- a/src/libstd/lib.rs +++ b/src/libstd/lib.rs @@ -249,8 +249,8 @@ #![feature(const_fn)] #![feature(core_float)] #![feature(core_intrinsics)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] #![feature(dropck_parametricity)] +#![feature(exact_size_is_empty)] #![feature(float_extras)] #![feature(float_from_str_radix)] #![feature(fn_traits)] @@ -276,7 +276,6 @@ #![feature(panic_unwind)] #![feature(placement_in_syntax)] #![feature(prelude_import)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(rand)] #![feature(raw)] #![feature(repr_simd)] @@ -324,7 +323,7 @@ extern crate collections as core_collections; #[allow(deprecated)] extern crate rand as core_rand; extern crate alloc; -extern crate rustc_unicode; +extern crate std_unicode; extern crate libc; // We always need an unwinder currently for backtraces @@ -421,7 +420,7 @@ pub use core_collections::string; #[stable(feature = "rust1", since = "1.0.0")] pub use core_collections::vec; #[stable(feature = "rust1", since = "1.0.0")] -pub use rustc_unicode::char; +pub use std_unicode::char; pub mod f32; pub mod f64; diff --git a/src/libstd/net/addr.rs b/src/libstd/net/addr.rs index 20dc5b3801..1ce37f6cc0 100644 --- a/src/libstd/net/addr.rs +++ b/src/libstd/net/addr.rs @@ -31,7 +31,7 @@ pub enum SocketAddr { /// An IPv4 socket address 
which is a (ip, port) combination. #[stable(feature = "rust1", since = "1.0.0")] V4(#[stable(feature = "rust1", since = "1.0.0")] SocketAddrV4), - /// An IPv6 socket address + /// An IPv6 socket address. #[stable(feature = "rust1", since = "1.0.0")] V6(#[stable(feature = "rust1", since = "1.0.0")] SocketAddrV6), } @@ -48,6 +48,16 @@ pub struct SocketAddrV6 { inner: c::sockaddr_in6 } impl SocketAddr { /// Creates a new socket address from the (ip, port) pair. + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1))); + /// assert_eq!(socket.port(), 8080); + /// ``` #[stable(feature = "ip_addr", since = "1.7.0")] pub fn new(ip: IpAddr, port: u16) -> SocketAddr { match ip { @@ -57,6 +67,15 @@ impl SocketAddr { } /// Returns the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1))); + /// ``` #[stable(feature = "ip_addr", since = "1.7.0")] pub fn ip(&self) -> IpAddr { match *self { @@ -66,6 +85,16 @@ impl SocketAddr { } /// Change the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// let mut socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// socket.set_ip(IpAddr::V4(Ipv4Addr::new(10, 10, 0, 1))); + /// assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(10, 10, 0, 1))); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_ip(&mut self, new_ip: IpAddr) { // `match (*self, new_ip)` would have us mutate a copy of self only to throw it away. @@ -77,6 +106,15 @@ impl SocketAddr { } /// Returns the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// assert_eq!(socket.port(), 8080); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn port(&self) -> u16 { match *self { @@ -86,6 +124,16 @@ impl SocketAddr { } /// Change the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// let mut socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// socket.set_port(1025); + /// assert_eq!(socket.port(), 1025); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_port(&mut self, new_port: u16) { match *self { @@ -96,6 +144,20 @@ impl SocketAddr { /// Returns true if the IP in this `SocketAddr` is a valid IPv4 address, /// false if it's a valid IPv6 address. 
+ /// + /// # Examples + /// + /// ``` + /// #![feature(sockaddr_checker)] + /// + /// use std::net::{IpAddr, Ipv4Addr, SocketAddr}; + /// + /// fn main() { + /// let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); + /// assert_eq!(socket.is_ipv4(), true); + /// assert_eq!(socket.is_ipv6(), false); + /// } + /// ``` #[unstable(feature = "sockaddr_checker", issue = "36949")] pub fn is_ipv4(&self) -> bool { match *self { @@ -106,6 +168,21 @@ impl SocketAddr { /// Returns true if the IP in this `SocketAddr` is a valid IPv6 address, /// false if it's a valid IPv4 address. + /// + /// # Examples + /// + /// ``` + /// #![feature(sockaddr_checker)] + /// + /// use std::net::{IpAddr, Ipv6Addr, SocketAddr}; + /// + /// fn main() { + /// let socket = SocketAddr::new( + /// IpAddr::V6(Ipv6Addr::new(0, 0, 0, 0, 0, 65535, 0, 1)), 8080); + /// assert_eq!(socket.is_ipv4(), false); + /// assert_eq!(socket.is_ipv6(), true); + /// } + /// ``` #[unstable(feature = "sockaddr_checker", issue = "36949")] pub fn is_ipv6(&self) -> bool { match *self { @@ -117,6 +194,14 @@ impl SocketAddr { impl SocketAddrV4 { /// Creates a new socket address from the (ip, port) pair. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV4, Ipv4Addr}; + /// + /// let socket = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn new(ip: Ipv4Addr, port: u16) -> SocketAddrV4 { SocketAddrV4 { @@ -130,6 +215,15 @@ impl SocketAddrV4 { } /// Returns the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV4, Ipv4Addr}; + /// + /// let socket = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080); + /// assert_eq!(socket.ip(), &Ipv4Addr::new(127, 0, 0, 1)); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn ip(&self) -> &Ipv4Addr { unsafe { @@ -138,18 +232,47 @@ impl SocketAddrV4 { } /// Change the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV4, Ipv4Addr}; + /// + /// let mut socket = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080); + /// socket.set_ip(Ipv4Addr::new(192, 168, 0, 1)); + /// assert_eq!(socket.ip(), &Ipv4Addr::new(192, 168, 0, 1)); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_ip(&mut self, new_ip: Ipv4Addr) { self.inner.sin_addr = *new_ip.as_inner() } /// Returns the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV4, Ipv4Addr}; + /// + /// let socket = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080); + /// assert_eq!(socket.port(), 8080); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn port(&self) -> u16 { ntoh(self.inner.sin_port) } /// Change the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV4, Ipv4Addr}; + /// + /// let mut socket = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080); + /// socket.set_port(4242); + /// assert_eq!(socket.port(), 4242); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_port(&mut self, new_port: u16) { self.inner.sin_port = hton(new_port); @@ -159,6 +282,14 @@ impl SocketAddrV4 { impl SocketAddrV6 { /// Creates a new socket address from the ip/port/flowinfo/scope_id /// components. 
+ /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 0); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn new(ip: Ipv6Addr, port: u16, flowinfo: u32, scope_id: u32) -> SocketAddrV6 { @@ -175,6 +306,15 @@ impl SocketAddrV6 { } /// Returns the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 0); + /// assert_eq!(socket.ip(), &Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1)); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn ip(&self) -> &Ipv6Addr { unsafe { @@ -183,18 +323,47 @@ impl SocketAddrV6 { } /// Change the IP address associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let mut socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 0); + /// socket.set_ip(Ipv6Addr::new(76, 45, 0, 0, 0, 0, 0, 0)); + /// assert_eq!(socket.ip(), &Ipv6Addr::new(76, 45, 0, 0, 0, 0, 0, 0)); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_ip(&mut self, new_ip: Ipv6Addr) { self.inner.sin6_addr = *new_ip.as_inner() } /// Returns the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 0); + /// assert_eq!(socket.port(), 8080); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn port(&self) -> u16 { ntoh(self.inner.sin6_port) } /// Change the port number associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let mut socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 0); + /// socket.set_port(4242); + /// assert_eq!(socket.port(), 4242); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_port(&mut self, new_port: u16) { self.inner.sin6_port = hton(new_port); @@ -202,12 +371,31 @@ impl SocketAddrV6 { /// Returns the flow information associated with this address, /// corresponding to the `sin6_flowinfo` field in C. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 10, 0); + /// assert_eq!(socket.flowinfo(), 10); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn flowinfo(&self) -> u32 { self.inner.sin6_flowinfo } /// Change the flow information associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let mut socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 10, 0); + /// socket.set_flowinfo(56); + /// assert_eq!(socket.flowinfo(), 56); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_flowinfo(&mut self, new_flowinfo: u32) { self.inner.sin6_flowinfo = new_flowinfo; @@ -215,12 +403,31 @@ impl SocketAddrV6 { /// Returns the scope ID associated with this address, /// corresponding to the `sin6_scope_id` field in C. 
+ /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 78); + /// assert_eq!(socket.scope_id(), 78); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn scope_id(&self) -> u32 { self.inner.sin6_scope_id } /// Change the scope ID associated with this socket address. + /// + /// # Examples + /// + /// ``` + /// use std::net::{SocketAddrV6, Ipv6Addr}; + /// + /// let mut socket = SocketAddrV6::new(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1), 8080, 0, 78); + /// socket.set_scope_id(42); + /// assert_eq!(socket.scope_id(), 42); + /// ``` #[stable(feature = "sockaddr_setters", since = "1.9.0")] pub fn set_scope_id(&mut self, new_scope_id: u32) { self.inner.sin6_scope_id = new_scope_id; diff --git a/src/libstd/net/ip.rs b/src/libstd/net/ip.rs index 49080680fa..c1e610f33f 100644 --- a/src/libstd/net/ip.rs +++ b/src/libstd/net/ip.rs @@ -79,8 +79,18 @@ pub enum Ipv6MulticastScope { impl IpAddr { /// Returns true for the special 'unspecified' address ([IPv4], [IPv6]). + /// /// [IPv4]: ../../std/net/struct.Ipv4Addr.html#method.is_unspecified /// [IPv6]: ../../std/net/struct.Ipv6Addr.html#method.is_unspecified + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)).is_unspecified(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0)).is_unspecified(), true); + /// ``` #[stable(feature = "ip_shared", since = "1.12.0")] pub fn is_unspecified(&self) -> bool { match *self { @@ -90,8 +100,18 @@ impl IpAddr { } /// Returns true if this is a loopback address ([IPv4], [IPv6]). + /// /// [IPv4]: ../../std/net/struct.Ipv4Addr.html#method.is_loopback /// [IPv6]: ../../std/net/struct.Ipv6Addr.html#method.is_loopback + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)).is_loopback(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0x1)).is_loopback(), true); + /// ``` #[stable(feature = "ip_shared", since = "1.12.0")] pub fn is_loopback(&self) -> bool { match *self { @@ -101,8 +121,23 @@ impl IpAddr { } /// Returns true if the address appears to be globally routable ([IPv4], [IPv6]). + /// /// [IPv4]: ../../std/net/struct.Ipv4Addr.html#method.is_global /// [IPv6]: ../../std/net/struct.Ipv6Addr.html#method.is_global + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// fn main() { + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(80, 9, 12, 3)).is_global(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0, 0, 0x1c9, 0, 0, 0xafc8, 0, 0x1)).is_global(), + /// true); + /// } + /// ``` pub fn is_global(&self) -> bool { match *self { IpAddr::V4(ref a) => a.is_global(), @@ -111,8 +146,18 @@ impl IpAddr { } /// Returns true if this is a multicast address ([IPv4], [IPv6]). 
+ /// /// [IPv4]: ../../std/net/struct.Ipv4Addr.html#method.is_multicast /// [IPv6]: ../../std/net/struct.Ipv6Addr.html#method.is_multicast + /// + /// # Examples + /// + /// ``` + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(224, 254, 0, 0)).is_multicast(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0xff00, 0, 0, 0, 0, 0, 0, 0)).is_multicast(), true); + /// ``` #[stable(feature = "ip_shared", since = "1.12.0")] pub fn is_multicast(&self) -> bool { match *self { @@ -122,8 +167,23 @@ impl IpAddr { } /// Returns true if this address is in a range designated for documentation ([IPv4], [IPv6]). + /// /// [IPv4]: ../../std/net/struct.Ipv4Addr.html#method.is_documentation /// [IPv6]: ../../std/net/struct.Ipv6Addr.html#method.is_documentation + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// fn main() { + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 6)).is_documentation(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 0)) + /// .is_documentation(), true); + /// } + /// ``` pub fn is_documentation(&self) -> bool { match *self { IpAddr::V4(ref a) => a.is_documentation(), @@ -132,6 +192,20 @@ impl IpAddr { } /// Returns true if this address is a valid IPv4 address, false if it's a valid IPv6 address. + /// + /// # Examples + /// + /// ``` + /// #![feature(ipaddr_checker)] + /// + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// fn main() { + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 6)).is_ipv4(), true); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 0)).is_ipv4(), + /// false); + /// } + /// ``` #[unstable(feature = "ipaddr_checker", issue = "36949")] pub fn is_ipv4(&self) -> bool { match *self { @@ -141,6 +215,20 @@ impl IpAddr { } /// Returns true if this address is a valid IPv6 address, false if it's a valid IPv4 address. + /// + /// # Examples + /// + /// ``` + /// #![feature(ipaddr_checker)] + /// + /// use std::net::{IpAddr, Ipv4Addr, Ipv6Addr}; + /// + /// fn main() { + /// assert_eq!(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 6)).is_ipv6(), false); + /// assert_eq!(IpAddr::V6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 0)).is_ipv6(), + /// true); + /// } + /// ``` #[unstable(feature = "ipaddr_checker", issue = "36949")] pub fn is_ipv6(&self) -> bool { match *self { @@ -154,6 +242,14 @@ impl Ipv4Addr { /// Creates a new IPv4 address from four eight-bit octets. /// /// The result will represent the IP address `a`.`b`.`c`.`d`. + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// let addr = Ipv4Addr::new(127, 0, 0, 1); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn new(a: u8, b: u8, c: u8, d: u8) -> Ipv4Addr { Ipv4Addr { @@ -167,6 +263,15 @@ impl Ipv4Addr { } /// Returns the four eight-bit integers that make up this address. + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// let addr = Ipv4Addr::new(127, 0, 0, 1); + /// assert_eq!(addr.octets(), [127, 0, 0, 1]); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn octets(&self) -> [u8; 4] { let bits = ntoh(self.inner.s_addr); @@ -176,8 +281,18 @@ impl Ipv4Addr { /// Returns true for the special 'unspecified' address (0.0.0.0). /// /// This property is defined in _UNIX Network Programming, Second Edition_, - /// W. Richard Stevens, p. 891; see also [ip7] - /// [ip7][http://man7.org/linux/man-pages/man7/ip.7.html] + /// W. Richard Stevens, p. 
891; see also [ip7]. + /// + /// [ip7]: http://man7.org/linux/man-pages/man7/ip.7.html + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(0, 0, 0, 0).is_unspecified(), true); + /// assert_eq!(Ipv4Addr::new(45, 22, 13, 197).is_unspecified(), false); + /// ``` #[stable(feature = "ip_shared", since = "1.12.0")] pub fn is_unspecified(&self) -> bool { self.inner.s_addr == 0 @@ -186,7 +301,17 @@ impl Ipv4Addr { /// Returns true if this is a loopback address (127.0.0.0/8). /// /// This property is defined by [RFC 1122]. + /// /// [RFC 1122]: https://tools.ietf.org/html/rfc1122 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(127, 0, 0, 1).is_loopback(), true); + /// assert_eq!(Ipv4Addr::new(45, 22, 13, 197).is_loopback(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_loopback(&self) -> bool { self.octets()[0] == 127 @@ -195,11 +320,26 @@ impl Ipv4Addr { /// Returns true if this is a private address. /// /// The private address ranges are defined in [RFC 1918] and include: - /// [RFC 1918]: https://tools.ietf.org/html/rfc1918 /// /// - 10.0.0.0/8 /// - 172.16.0.0/12 /// - 192.168.0.0/16 + /// + /// [RFC 1918]: https://tools.ietf.org/html/rfc1918 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(10, 0, 0, 1).is_private(), true); + /// assert_eq!(Ipv4Addr::new(10, 10, 10, 10).is_private(), true); + /// assert_eq!(Ipv4Addr::new(172, 16, 10, 10).is_private(), true); + /// assert_eq!(Ipv4Addr::new(172, 29, 45, 14).is_private(), true); + /// assert_eq!(Ipv4Addr::new(172, 32, 0, 2).is_private(), false); + /// assert_eq!(Ipv4Addr::new(192, 168, 0, 2).is_private(), true); + /// assert_eq!(Ipv4Addr::new(192, 169, 0, 2).is_private(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_private(&self) -> bool { match (self.octets()[0], self.octets()[1]) { @@ -213,7 +353,18 @@ impl Ipv4Addr { /// Returns true if the address is link-local (169.254.0.0/16). /// /// This property is defined by [RFC 3927]. + /// /// [RFC 3927]: https://tools.ietf.org/html/rfc3927 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(169, 254, 0, 0).is_link_local(), true); + /// assert_eq!(Ipv4Addr::new(169, 254, 10, 65).is_link_local(), true); + /// assert_eq!(Ipv4Addr::new(16, 89, 10, 65).is_link_local(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_link_local(&self) -> bool { self.octets()[0] == 169 && self.octets()[1] == 254 @@ -221,7 +372,6 @@ impl Ipv4Addr { /// Returns true if the address appears to be globally routable. /// See [iana-ipv4-special-registry][ipv4-sr]. 
- /// [ipv4-sr]: http://goo.gl/RaZ7lg /// /// The following return false: /// @@ -231,6 +381,24 @@ impl Ipv4Addr { /// - the broadcast address (255.255.255.255/32) /// - test addresses used for documentation (192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24) /// - the unspecified address (0.0.0.0) + /// + /// [ipv4-sr]: http://goo.gl/RaZ7lg + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv4Addr; + /// + /// fn main() { + /// assert_eq!(Ipv4Addr::new(10, 254, 0, 0).is_global(), false); + /// assert_eq!(Ipv4Addr::new(192, 168, 10, 65).is_global(), false); + /// assert_eq!(Ipv4Addr::new(172, 16, 10, 65).is_global(), false); + /// assert_eq!(Ipv4Addr::new(0, 0, 0, 0).is_global(), false); + /// assert_eq!(Ipv4Addr::new(80, 9, 12, 3).is_global(), true); + /// } + /// ``` pub fn is_global(&self) -> bool { !self.is_private() && !self.is_loopback() && !self.is_link_local() && !self.is_broadcast() && !self.is_documentation() && !self.is_unspecified() @@ -240,7 +408,18 @@ impl Ipv4Addr { /// /// Multicast addresses have a most significant octet between 224 and 239, /// and is defined by [RFC 5771]. + /// /// [RFC 5771]: https://tools.ietf.org/html/rfc5771 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(224, 254, 0, 0).is_multicast(), true); + /// assert_eq!(Ipv4Addr::new(236, 168, 10, 65).is_multicast(), true); + /// assert_eq!(Ipv4Addr::new(172, 16, 10, 65).is_multicast(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_multicast(&self) -> bool { self.octets()[0] >= 224 && self.octets()[0] <= 239 @@ -249,7 +428,17 @@ impl Ipv4Addr { /// Returns true if this is a broadcast address (255.255.255.255). /// /// A broadcast address has all octets set to 255 as defined in [RFC 919]. + /// /// [RFC 919]: https://tools.ietf.org/html/rfc919 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(255, 255, 255, 255).is_broadcast(), true); + /// assert_eq!(Ipv4Addr::new(236, 168, 10, 65).is_broadcast(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_broadcast(&self) -> bool { self.octets()[0] == 255 && self.octets()[1] == 255 && @@ -259,11 +448,23 @@ impl Ipv4Addr { /// Returns true if this address is in a range designated for documentation. /// /// This is defined in [RFC 5737]: - /// [RFC 5737]: https://tools.ietf.org/html/rfc5737 /// /// - 192.0.2.0/24 (TEST-NET-1) /// - 198.51.100.0/24 (TEST-NET-2) /// - 203.0.113.0/24 (TEST-NET-3) + /// + /// [RFC 5737]: https://tools.ietf.org/html/rfc5737 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv4Addr; + /// + /// assert_eq!(Ipv4Addr::new(192, 0, 2, 255).is_documentation(), true); + /// assert_eq!(Ipv4Addr::new(198, 51, 100, 65).is_documentation(), true); + /// assert_eq!(Ipv4Addr::new(203, 0, 113, 6).is_documentation(), true); + /// assert_eq!(Ipv4Addr::new(193, 34, 17, 19).is_documentation(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_documentation(&self) -> bool { match(self.octets()[0], self.octets()[1], self.octets()[2], self.octets()[3]) { @@ -277,6 +478,15 @@ impl Ipv4Addr { /// Converts this address to an IPv4-compatible IPv6 address. 
/// /// a.b.c.d becomes ::a.b.c.d + /// + /// # Examples + /// + /// ``` + /// use std::net::{Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(Ipv4Addr::new(192, 0, 2, 255).to_ipv6_compatible(), + /// Ipv6Addr::new(0, 0, 0, 0, 0, 0, 49152, 767)); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn to_ipv6_compatible(&self) -> Ipv6Addr { Ipv6Addr::new(0, 0, 0, 0, 0, 0, @@ -287,6 +497,15 @@ impl Ipv4Addr { /// Converts this address to an IPv4-mapped IPv6 address. /// /// a.b.c.d becomes ::ffff:a.b.c.d + /// + /// # Examples + /// + /// ``` + /// use std::net::{Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(Ipv4Addr::new(192, 0, 2, 255).to_ipv6_mapped(), + /// Ipv6Addr::new(0, 0, 0, 0, 0, 65535, 49152, 767)); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn to_ipv6_mapped(&self) -> Ipv6Addr { Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, @@ -391,6 +610,14 @@ impl Ipv6Addr { /// Creates a new IPv6 address from eight 16-bit segments. /// /// The result will represent the IP address a:b:c:d:e:f:g:h. + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// let addr = Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn new(a: u16, b: u16, c: u16, d: u16, e: u16, f: u16, g: u16, h: u16) -> Ipv6Addr { @@ -407,6 +634,15 @@ impl Ipv6Addr { } /// Returns the eight 16-bit segments that make up this address. + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).segments(), + /// [0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff]); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn segments(&self) -> [u16; 8] { let arr = &self.inner.s6_addr; @@ -425,7 +661,17 @@ impl Ipv6Addr { /// Returns true for the special 'unspecified' address (::). /// /// This property is defined in [RFC 4291]. + /// /// [RFC 4291]: https://tools.ietf.org/html/rfc4291 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unspecified(), false); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0).is_unspecified(), true); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_unspecified(&self) -> bool { self.segments() == [0, 0, 0, 0, 0, 0, 0, 0] @@ -434,7 +680,17 @@ impl Ipv6Addr { /// Returns true if this is a loopback address (::1). /// /// This property is defined in [RFC 4291]. 
+ /// /// [RFC 4291]: https://tools.ietf.org/html/rfc4291 + /// + /// # Examples + /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_loopback(), false); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0x1).is_loopback(), true); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_loopback(&self) -> bool { self.segments() == [0, 0, 0, 0, 0, 0, 0, 1] @@ -447,6 +703,20 @@ impl Ipv6Addr { /// - the loopback address /// - link-local, site-local, and unique local unicast addresses /// - interface-, link-, realm-, admin- and site-local multicast addresses + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_global(), true); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0x1).is_global(), false); + /// assert_eq!(Ipv6Addr::new(0, 0, 0x1c9, 0, 0, 0xafc8, 0, 0x1).is_global(), true); + /// } + /// ``` pub fn is_global(&self) -> bool { match self.multicast_scope() { Some(Ipv6MulticastScope::Global) => true, @@ -458,7 +728,22 @@ impl Ipv6Addr { /// Returns true if this is a unique local address (fc00::/7). /// /// This property is defined in [RFC 4193]. + /// /// [RFC 4193]: https://tools.ietf.org/html/rfc4193 + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unique_local(), + /// false); + /// assert_eq!(Ipv6Addr::new(0xfc02, 0, 0, 0, 0, 0, 0, 0).is_unique_local(), true); + /// } + /// ``` pub fn is_unique_local(&self) -> bool { (self.segments()[0] & 0xfe00) == 0xfc00 } @@ -466,13 +751,42 @@ impl Ipv6Addr { /// Returns true if the address is unicast and link-local (fe80::/10). /// /// This property is defined in [RFC 4291]. + /// /// [RFC 4291]: https://tools.ietf.org/html/rfc4291 + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unicast_link_local(), + /// false); + /// assert_eq!(Ipv6Addr::new(0xfe8a, 0, 0, 0, 0, 0, 0, 0).is_unicast_link_local(), true); + /// } + /// ``` pub fn is_unicast_link_local(&self) -> bool { (self.segments()[0] & 0xffc0) == 0xfe80 } /// Returns true if this is a deprecated unicast site-local address /// (fec0::/10). + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unicast_site_local(), + /// false); + /// assert_eq!(Ipv6Addr::new(0xfec2, 0, 0, 0, 0, 0, 0, 0).is_unicast_site_local(), true); + /// } + /// ``` pub fn is_unicast_site_local(&self) -> bool { (self.segments()[0] & 0xffc0) == 0xfec0 } @@ -481,7 +795,22 @@ impl Ipv6Addr { /// (2001:db8::/32). /// /// This property is defined in [RFC 3849]. 
+ /// /// [RFC 3849]: https://tools.ietf.org/html/rfc3849 + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_documentation(), + /// false); + /// assert_eq!(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 0).is_documentation(), true); + /// } + /// ``` pub fn is_documentation(&self) -> bool { (self.segments()[0] == 0x2001) && (self.segments()[1] == 0xdb8) } @@ -496,6 +825,20 @@ impl Ipv6Addr { /// - unique local addresses /// - the unspecified address /// - the address range reserved for documentation + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::Ipv6Addr; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 0).is_unicast_global(), false); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unicast_global(), + /// true); + /// } + /// ``` pub fn is_unicast_global(&self) -> bool { !self.is_multicast() && !self.is_loopback() && !self.is_unicast_link_local() @@ -504,6 +847,20 @@ impl Ipv6Addr { } /// Returns the address's multicast scope if the address is multicast. + /// + /// # Examples + /// + /// ``` + /// #![feature(ip)] + /// + /// use std::net::{Ipv6Addr, Ipv6MulticastScope}; + /// + /// fn main() { + /// assert_eq!(Ipv6Addr::new(0xff0e, 0, 0, 0, 0, 0, 0, 0).multicast_scope(), + /// Some(Ipv6MulticastScope::Global)); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).multicast_scope(), None); + /// } + /// ``` pub fn multicast_scope(&self) -> Option { if self.is_multicast() { match self.segments()[0] & 0x000f { @@ -524,7 +881,16 @@ impl Ipv6Addr { /// Returns true if this is a multicast address (ff00::/8). /// /// This property is defined by [RFC 4291]. + /// /// [RFC 4291]: https://tools.ietf.org/html/rfc4291 + /// # Examples + /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// assert_eq!(Ipv6Addr::new(0xff00, 0, 0, 0, 0, 0, 0, 0).is_multicast(), true); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_multicast(), false); + /// ``` #[stable(since = "1.7.0", feature = "ip_17")] pub fn is_multicast(&self) -> bool { (self.segments()[0] & 0xff00) == 0xff00 @@ -534,6 +900,16 @@ impl Ipv6Addr { /// neither IPv4-compatible or IPv4-mapped. /// /// ::a.b.c.d and ::ffff:a.b.c.d become a.b.c.d + /// + /// ``` + /// use std::net::{Ipv4Addr, Ipv6Addr}; + /// + /// assert_eq!(Ipv6Addr::new(0xff00, 0, 0, 0, 0, 0, 0, 0).to_ipv4(), None); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).to_ipv4(), + /// Some(Ipv4Addr::new(192, 10, 2, 255))); + /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1).to_ipv4(), + /// Some(Ipv4Addr::new(0, 0, 0, 1))); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn to_ipv4(&self) -> Option { match self.segments() { @@ -546,6 +922,13 @@ impl Ipv6Addr { } /// Returns the sixteen eight-bit integers the IPv6 address consists of. 
+ /// + /// ``` + /// use std::net::Ipv6Addr; + /// + /// assert_eq!(Ipv6Addr::new(0xff00, 0, 0, 0, 0, 0, 0, 0).octets(), + /// [255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]); + /// ``` #[stable(feature = "ipv6_to_octets", since = "1.12.0")] pub fn octets(&self) -> [u8; 16] { self.inner.s6_addr diff --git a/src/libstd/net/tcp.rs b/src/libstd/net/tcp.rs index 0e7c5b0671..be9636a0a1 100644 --- a/src/libstd/net/tcp.rs +++ b/src/libstd/net/tcp.rs @@ -67,11 +67,12 @@ pub struct TcpListener(net_imp::TcpListener); /// An infinite iterator over the connections from a `TcpListener`. /// -/// This iterator will infinitely yield `Some` of the accepted connections. It +/// This iterator will infinitely yield [`Some`] of the accepted connections. It /// is equivalent to calling `accept` in a loop. /// /// This `struct` is created by the [`incoming`] method on [`TcpListener`]. /// +/// [`Some`]: ../../std/option/enum.Option.html#variant.Some /// [`incoming`]: struct.TcpListener.html#method.incoming /// [`TcpListener`]: struct.TcpListener.html #[stable(feature = "rust1", since = "1.0.0")] @@ -85,18 +86,52 @@ impl TcpStream { /// documentation for concrete examples. /// In case `ToSocketAddrs::to_socket_addrs()` returns more than one entry, /// then the first valid and reachable address is used. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// if let Ok(stream) = TcpStream::connect("127.0.0.1:8080") { + /// println!("Connected to the server!"); + /// } else { + /// println!("Couldn't connect to server..."); + /// } + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn connect(addr: A) -> io::Result { super::each_addr(addr, net_imp::TcpStream::connect).map(TcpStream) } /// Returns the socket address of the remote peer of this TCP connection. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4, TcpStream}; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// assert_eq!(stream.peer_addr().unwrap(), + /// SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080))); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn peer_addr(&self) -> io::Result { self.0.peer_addr() } /// Returns the socket address of the local half of this TCP connection. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4, TcpStream}; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// assert_eq!(stream.local_addr().unwrap(), + /// SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080))); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn local_addr(&self) -> io::Result { self.0.socket_addr() @@ -106,7 +141,19 @@ impl TcpStream { /// /// This function will cause all pending and future I/O on the specified /// portions to return immediately with an appropriate value (see the - /// documentation of `Shutdown`). + /// documentation of [`Shutdown`]). 
+ /// + /// [`Shutdown`]: ../../std/net/enum.Shutdown.html + /// + /// # Examples + /// + /// ```no_run + /// use std::net::{Shutdown, TcpStream}; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.shutdown(Shutdown::Both).expect("shutdown call failed"); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn shutdown(&self, how: Shutdown) -> io::Result<()> { self.0.shutdown(how) @@ -118,6 +165,16 @@ impl TcpStream { /// object references. Both handles will read and write the same stream of /// data, and options set on one stream will be propagated to the other /// stream. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// let stream_clone = stream.try_clone().expect("clone failed..."); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn try_clone(&self) -> io::Result { self.0.duplicate().map(TcpStream) @@ -125,7 +182,7 @@ impl TcpStream { /// Sets the read timeout to the timeout specified. /// - /// If the value specified is `None`, then `read` calls will block + /// If the value specified is [`None`], then [`read()`] calls will block /// indefinitely. It is an error to pass the zero `Duration` to this /// method. /// @@ -133,7 +190,22 @@ impl TcpStream { /// /// Platforms may return a different error code whenever a read times out as /// a result of setting this option. For example Unix typically returns an - /// error of the kind `WouldBlock`, but Windows may return `TimedOut`. + /// error of the kind [`WouldBlock`], but Windows may return [`TimedOut`]. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`read()`]: ../../std/io/trait.Read.html#tymethod.read + /// [`WouldBlock`]: ../../std/io/enum.ErrorKind.html#variant.WouldBlock + /// [`TimedOut`]: ../../std/io/enum.ErrorKind.html#variant.TimedOut + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_read_timeout(None).expect("set_read_timeout call failed"); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn set_read_timeout(&self, dur: Option) -> io::Result<()> { self.0.set_read_timeout(dur) @@ -141,15 +213,31 @@ impl TcpStream { /// Sets the write timeout to the timeout specified. /// - /// If the value specified is `None`, then `write` calls will block - /// indefinitely. It is an error to pass the zero `Duration` to this + /// If the value specified is [`None`], then [`write()`] calls will block + /// indefinitely. It is an error to pass the zero [`Duration`] to this /// method. /// /// # Note /// /// Platforms may return a different error code whenever a write times out /// as a result of setting this option. For example Unix typically returns - /// an error of the kind `WouldBlock`, but Windows may return `TimedOut`. + /// an error of the kind [`WouldBlock`], but Windows may return [`TimedOut`]. 
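The timeout notes above point out that a timed-out read surfaces as `WouldBlock` on Unix but may be `TimedOut` on Windows. A hedged sketch of handling both kinds is shown here; the five-second timeout and buffer size are arbitrary placeholders, and the function name is invented for illustration.

```rust
use std::io::{ErrorKind, Read};
use std::net::TcpStream;
use std::time::Duration;

// Treats a timed-out read as "no data yet"; any other error is propagated.
fn read_or_empty(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        Ok(n) => Ok(buf[..n].to_vec()),
        // Unix typically reports WouldBlock, Windows may report TimedOut; accept either.
        Err(ref e) if e.kind() == ErrorKind::WouldBlock
                   || e.kind() == ErrorKind::TimedOut => Ok(Vec::new()),
        Err(e) => Err(e),
    }
}
```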
+ /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`write()`]: ../../std/io/trait.Write.html#tymethod.write + /// [`Duration`]: ../../std/time/struct.Duration.html + /// [`WouldBlock`]: ../../std/io/enum.ErrorKind.html#variant.WouldBlock + /// [`TimedOut`]: ../../std/io/enum.ErrorKind.html#variant.TimedOut + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_write_timeout(None).expect("set_write_timeout call failed"); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn set_write_timeout(&self, dur: Option) -> io::Result<()> { self.0.set_write_timeout(dur) @@ -157,11 +245,25 @@ impl TcpStream { /// Returns the read timeout of this socket. /// - /// If the timeout is `None`, then `read` calls will block indefinitely. + /// If the timeout is [`None`], then [`read()`] calls will block indefinitely. /// /// # Note /// /// Some platforms do not provide access to the current timeout. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`read()`]: ../../std/io/trait.Read.html#tymethod.read + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_read_timeout(None).expect("set_read_timeout call failed"); + /// assert_eq!(stream.read_timeout().unwrap(), None); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn read_timeout(&self) -> io::Result> { self.0.read_timeout() @@ -169,11 +271,25 @@ impl TcpStream { /// Returns the write timeout of this socket. /// - /// If the timeout is `None`, then `write` calls will block indefinitely. + /// If the timeout is [`None`], then [`write()`] calls will block indefinitely. /// /// # Note /// /// Some platforms do not provide access to the current timeout. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`write()`]: ../../std/io/trait.Write.html#tymethod.write + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_write_timeout(None).expect("set_write_timeout call failed"); + /// assert_eq!(stream.write_timeout().unwrap(), None); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn write_timeout(&self) -> io::Result> { self.0.write_timeout() @@ -186,6 +302,16 @@ impl TcpStream { /// small amount of data. When not set, data is buffered until there is a /// sufficient amount to send out, thereby avoiding the frequent sending of /// small packets. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_nodelay(true).expect("set_nodelay call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_nodelay(&self, nodelay: bool) -> io::Result<()> { self.0.set_nodelay(nodelay) @@ -196,6 +322,17 @@ impl TcpStream { /// For more information about this option, see [`set_nodelay`][link]. 
/// /// [link]: #method.set_nodelay + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_nodelay(true).expect("set_nodelay call failed"); + /// assert_eq!(stream.nodelay().unwrap_or(false), true); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn nodelay(&self) -> io::Result { self.0.nodelay() @@ -205,6 +342,16 @@ impl TcpStream { /// /// This value sets the time-to-live field that is used in every packet sent /// from this socket. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_ttl(100).expect("set_ttl call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_ttl(&self, ttl: u32) -> io::Result<()> { self.0.set_ttl(ttl) @@ -215,6 +362,17 @@ impl TcpStream { /// For more information about this option, see [`set_ttl`][link]. /// /// [link]: #method.set_ttl + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_ttl(100).expect("set_ttl call failed"); + /// assert_eq!(stream.ttl().unwrap_or(0), 100); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn ttl(&self) -> io::Result { self.0.ttl() @@ -225,6 +383,16 @@ impl TcpStream { /// This will retrieve the stored error in the underlying socket, clearing /// the field in the process. This can be useful for checking errors between /// calls. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.take_error().expect("No error was expected..."); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn take_error(&self) -> io::Result> { self.0.take_error() @@ -234,6 +402,16 @@ impl TcpStream { /// /// On Unix this corresponds to calling fcntl, and on Windows this /// corresponds to calling ioctlsocket. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpStream; + /// + /// let stream = TcpStream::connect("127.0.0.1:8080") + /// .expect("Couldn't connect to the server..."); + /// stream.set_nonblocking(true).expect("set_nonblocking call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> { self.0.set_nonblocking(nonblocking) @@ -296,12 +474,30 @@ impl TcpListener { /// /// The address type can be any implementor of `ToSocketAddrs` trait. See /// its documentation for concrete examples. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn bind(addr: A) -> io::Result { super::each_addr(addr, net_imp::TcpListener::bind).map(TcpListener) } /// Returns the local socket address of this listener. 
+ /// + /// # Examples + /// + /// ```no_run + /// use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4, TcpListener}; + /// + /// let listener = TcpListener::bind("127.0.0.1:8080").unwrap(); + /// assert_eq!(listener.local_addr().unwrap(), + /// SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080))); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn local_addr(&self) -> io::Result { self.0.socket_addr() @@ -312,6 +508,15 @@ impl TcpListener { /// The returned `TcpListener` is a reference to the same socket that this /// object references. Both handles can be used to accept incoming /// connections and options set on one listener will affect the other. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:8080").unwrap(); + /// let listener_clone = listener.try_clone().unwrap(); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn try_clone(&self) -> io::Result { self.0.duplicate().map(TcpListener) @@ -322,6 +527,18 @@ impl TcpListener { /// This function will block the calling thread until a new TCP connection /// is established. When established, the corresponding `TcpStream` and the /// remote peer's address will be returned. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:8080").unwrap(); + /// match listener.accept() { + /// Ok((_socket, addr)) => println!("new client: {:?}", addr), + /// Err(e) => println!("couldn't get client: {:?}", e), + /// } + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn accept(&self) -> io::Result<(TcpStream, SocketAddr)> { self.0.accept().map(|(a, b)| (TcpStream(a), b)) @@ -330,8 +547,28 @@ impl TcpListener { /// Returns an iterator over the connections being received on this /// listener. /// - /// The returned iterator will never return `None` and will also not yield - /// the peer's `SocketAddr` structure. + /// The returned iterator will never return [`None`] and will also not yield + /// the peer's [`SocketAddr`] structure. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`SocketAddr`]: ../../std/net/enum.SocketAddr.html + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// + /// for stream in listener.incoming() { + /// match stream { + /// Ok(stream) => { + /// println!("new client!"); + /// } + /// Err(e) => { /* connection failed */ } + /// } + /// } + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn incoming(&self) -> Incoming { Incoming { listener: self } @@ -341,6 +578,15 @@ impl TcpListener { /// /// This value sets the time-to-live field that is used in every packet sent /// from this socket. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.set_ttl(100).expect("could not set TTL"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_ttl(&self, ttl: u32) -> io::Result<()> { self.0.set_ttl(ttl) @@ -348,9 +594,19 @@ impl TcpListener { /// Gets the value of the `IP_TTL` option for this socket. /// - /// For more information about this option, see [`set_ttl`][link]. + /// For more information about this option, see [`set_ttl()`][link]. 
/// /// [link]: #method.set_ttl + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.set_ttl(100).expect("could not set TTL"); + /// assert_eq!(listener.ttl().unwrap_or(0), 100); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn ttl(&self) -> io::Result { self.0.ttl() @@ -364,6 +620,15 @@ impl TcpListener { /// /// If this is set to `false` then the socket can be used to send and /// receive packets from an IPv4-mapped IPv6 address. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.set_only_v6(true).expect("Cannot set to IPv6"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_only_v6(&self, only_v6: bool) -> io::Result<()> { self.0.set_only_v6(only_v6) @@ -374,6 +639,16 @@ impl TcpListener { /// For more information about this option, see [`set_only_v6`][link]. /// /// [link]: #method.set_only_v6 + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.set_only_v6(true).expect("Cannot set to IPv6"); + /// assert_eq!(listener.only_v6().unwrap_or(false), true); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn only_v6(&self) -> io::Result { self.0.only_v6() @@ -384,6 +659,15 @@ impl TcpListener { /// This will retrieve the stored error in the underlying socket, clearing /// the field in the process. This can be useful for checking errors between /// calls. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.take_error().expect("No error was expected"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn take_error(&self) -> io::Result> { self.0.take_error() @@ -393,6 +677,15 @@ impl TcpListener { /// /// On Unix this corresponds to calling fcntl, and on Windows this /// corresponds to calling ioctlsocket. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::TcpListener; + /// + /// let listener = TcpListener::bind("127.0.0.1:80").unwrap(); + /// listener.set_nonblocking(true).expect("Cannot set non-blocking"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> { self.0.set_nonblocking(nonblocking) diff --git a/src/libstd/net/udp.rs b/src/libstd/net/udp.rs index c03ac496ad..f8a5ec0b37 100644 --- a/src/libstd/net/udp.rs +++ b/src/libstd/net/udp.rs @@ -48,8 +48,18 @@ pub struct UdpSocket(net_imp::UdpSocket); impl UdpSocket { /// Creates a UDP socket from the given address. /// - /// The address type can be any implementor of `ToSocketAddr` trait. See + /// The address type can be any implementor of [`ToSocketAddrs`] trait. See /// its documentation for concrete examples. + /// + /// [`ToSocketAddrs`]: ../../std/net/trait.ToSocketAddrs.html + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn bind(addr: A) -> io::Result { super::each_addr(addr, net_imp::UdpSocket::bind).map(UdpSocket) @@ -57,6 +67,17 @@ impl UdpSocket { /// Receives data from the socket. 
On success, returns the number of bytes /// read and the address from whence the data came. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// let mut buf = [0; 10]; + /// let (number_of_bytes, src_addr) = socket.recv_from(&mut buf) + /// .expect("Didn't receive data"); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn recv_from(&self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)> { self.0.recv_from(buf) @@ -65,8 +86,24 @@ impl UdpSocket { /// Sends data on the socket to the given address. On success, returns the /// number of bytes written. /// - /// Address type can be any implementor of `ToSocketAddrs` trait. See its + /// Address type can be any implementor of [`ToSocketAddrs`] trait. See its /// documentation for concrete examples. + /// + /// This will return an error when the IP version of the local socket + /// does not match that returned from [`ToSocketAddrs`]. + /// + /// See https://github.com/rust-lang/rust/issues/34202 for more details. + /// + /// [`ToSocketAddrs`]: ../../std/net/trait.ToSocketAddrs.html + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.send_to(&[0; 10], "127.0.0.1:4242").expect("couldn't send data"); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn send_to(&self, buf: &[u8], addr: A) -> io::Result { @@ -78,6 +115,16 @@ impl UdpSocket { } /// Returns the socket address that this socket was created from. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4, UdpSocket}; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// assert_eq!(socket.local_addr().unwrap(), + /// SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 34254))); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn local_addr(&self) -> io::Result { self.0.socket_addr() @@ -88,6 +135,15 @@ impl UdpSocket { /// The returned `UdpSocket` is a reference to the same socket that this /// object references. Both handles will read and write the same port, and /// options set on one socket will be propagated to the other. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// let socket_clone = socket.try_clone().expect("couldn't clone the socket"); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn try_clone(&self) -> io::Result { self.0.duplicate().map(UdpSocket) @@ -95,15 +151,30 @@ impl UdpSocket { /// Sets the read timeout to the timeout specified. /// - /// If the value specified is `None`, then `read` calls will block - /// indefinitely. It is an error to pass the zero `Duration` to this + /// If the value specified is [`None`], then [`read()`] calls will block + /// indefinitely. It is an error to pass the zero [`Duration`] to this /// method. /// /// # Note /// /// Platforms may return a different error code whenever a read times out as /// a result of setting this option. For example Unix typically returns an - /// error of the kind `WouldBlock`, but Windows may return `TimedOut`. + /// error of the kind [`WouldBlock`], but Windows may return [`TimedOut`]. 
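As a small end-to-end sketch of `recv_from` and `send_to` together (address and buffer size are arbitrary), echoing one datagram back to its sender:

```rust
use std::net::UdpSocket;

fn main() {
    let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address");

    let mut buf = [0u8; 64];
    // Wait for one datagram and remember who sent it.
    let (len, src) = socket.recv_from(&mut buf).expect("didn't receive data");

    // Echo the same bytes back. `src` came from this socket, so its IP
    // version matches and the version-mismatch error noted above cannot occur.
    socket.send_to(&buf[..len], src).expect("couldn't send data");
}
```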
+ /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`read()`]: ../../std/io/trait.Read.html#tymethod.read + /// [`Duration`]: ../../std/time/struct.Duration.html + /// [`WouldBlock`]: ../../std/io/enum.ErrorKind.html#variant.WouldBlock + /// [`TimedOut`]: ../../std/io/enum.ErrorKind.html#variant.TimedOut + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_read_timeout(None).expect("set_read_timeout call failed"); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn set_read_timeout(&self, dur: Option) -> io::Result<()> { self.0.set_read_timeout(dur) @@ -111,15 +182,30 @@ impl UdpSocket { /// Sets the write timeout to the timeout specified. /// - /// If the value specified is `None`, then `write` calls will block - /// indefinitely. It is an error to pass the zero `Duration` to this + /// If the value specified is [`None`], then [`write()`] calls will block + /// indefinitely. It is an error to pass the zero [`Duration`] to this /// method. /// /// # Note /// /// Platforms may return a different error code whenever a write times out /// as a result of setting this option. For example Unix typically returns - /// an error of the kind `WouldBlock`, but Windows may return `TimedOut`. + /// an error of the kind [`WouldBlock`], but Windows may return [`TimedOut`]. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`write()`]: ../../std/io/trait.Write.html#tymethod.write + /// [`Duration`]: ../../std/time/struct.Duration.html + /// [`WouldBlock`]: ../../std/io/enum.ErrorKind.html#variant.WouldBlock + /// [`TimedOut`]: ../../std/io/enum.ErrorKind.html#variant.TimedOut + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_write_timeout(None).expect("set_write_timeout call failed"); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn set_write_timeout(&self, dur: Option) -> io::Result<()> { self.0.set_write_timeout(dur) @@ -127,7 +213,20 @@ impl UdpSocket { /// Returns the read timeout of this socket. /// - /// If the timeout is `None`, then `read` calls will block indefinitely. + /// If the timeout is [`None`], then [`read()`] calls will block indefinitely. + /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`read()`]: ../../std/io/trait.Read.html#tymethod.read + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_read_timeout(None).expect("set_read_timeout call failed"); + /// assert_eq!(socket.read_timeout().unwrap(), None); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn read_timeout(&self) -> io::Result> { self.0.read_timeout() @@ -135,7 +234,20 @@ impl UdpSocket { /// Returns the write timeout of this socket. /// - /// If the timeout is `None`, then `write` calls will block indefinitely. + /// If the timeout is [`None`], then [`write()`] calls will block indefinitely. 
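A sketch of using a finite read timeout rather than `None`, treating the two platform-dependent error kinds mentioned above the same way (the duration and port are arbitrary):

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;
use std::time::Duration;

fn main() {
    let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address");
    // Zero is an error; any positive duration is accepted.
    socket.set_read_timeout(Some(Duration::from_millis(200)))
          .expect("set_read_timeout call failed");

    let mut buf = [0u8; 16];
    match socket.recv_from(&mut buf) {
        Ok((len, src)) => println!("got {} bytes from {}", len, src),
        // Unix typically reports WouldBlock on timeout, Windows TimedOut.
        Err(ref e) if e.kind() == ErrorKind::WouldBlock ||
                      e.kind() == ErrorKind::TimedOut => println!("timed out"),
        Err(e) => panic!("recv_from failed: {:?}", e),
    }
}
```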
+ /// + /// [`None`]: ../../std/option/enum.Option.html#variant.None + /// [`write()`]: ../../std/io/trait.Write.html#tymethod.write + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_write_timeout(None).expect("set_write_timeout call failed"); + /// assert_eq!(socket.write_timeout().unwrap(), None); + /// ``` #[stable(feature = "socket_timeout", since = "1.4.0")] pub fn write_timeout(&self) -> io::Result> { self.0.write_timeout() @@ -145,6 +257,15 @@ impl UdpSocket { /// /// When enabled, this socket is allowed to send packets to a broadcast /// address. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_broadcast(false).expect("set_broadcast call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_broadcast(&self, broadcast: bool) -> io::Result<()> { self.0.set_broadcast(broadcast) @@ -156,6 +277,16 @@ impl UdpSocket { /// [`set_broadcast`][link]. /// /// [link]: #method.set_broadcast + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_broadcast(false).expect("set_broadcast call failed"); + /// assert_eq!(socket.broadcast().unwrap(), false); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn broadcast(&self) -> io::Result { self.0.broadcast() @@ -165,6 +296,15 @@ impl UdpSocket { /// /// If enabled, multicast packets will be looped back to the local socket. /// Note that this may not have any affect on IPv6 sockets. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_loop_v4(false).expect("set_multicast_loop_v4 call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_multicast_loop_v4(&self, multicast_loop_v4: bool) -> io::Result<()> { self.0.set_multicast_loop_v4(multicast_loop_v4) @@ -176,6 +316,16 @@ impl UdpSocket { /// [`set_multicast_loop_v4`][link]. /// /// [link]: #method.set_multicast_loop_v4 + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_loop_v4(false).expect("set_multicast_loop_v4 call failed"); + /// assert_eq!(socket.multicast_loop_v4().unwrap(), false); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn multicast_loop_v4(&self) -> io::Result { self.0.multicast_loop_v4() @@ -188,6 +338,15 @@ impl UdpSocket { /// don't leave the local network unless explicitly requested. /// /// Note that this may not have any affect on IPv6 sockets. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_ttl_v4(42).expect("set_multicast_ttl_v4 call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_multicast_ttl_v4(&self, multicast_ttl_v4: u32) -> io::Result<()> { self.0.set_multicast_ttl_v4(multicast_ttl_v4) @@ -199,6 +358,16 @@ impl UdpSocket { /// [`set_multicast_ttl_v4`][link]. 
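The broadcast option above is what allows a datagram to be addressed to a broadcast address at all. A rough sketch, with placeholder addresses and port:

```rust
use std::net::UdpSocket;

fn main() {
    // Port 0 lets the OS pick a free local port.
    let socket = UdpSocket::bind("0.0.0.0:0").expect("couldn't bind to address");

    // SO_BROADCAST must be enabled before sending to a broadcast address.
    socket.set_broadcast(true).expect("set_broadcast call failed");
    assert_eq!(socket.broadcast().unwrap(), true);

    // 255.255.255.255 is the limited broadcast address; the destination
    // port is arbitrary.
    socket.send_to(b"ping", "255.255.255.255:9999").expect("couldn't send data");
}
```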
/// /// [link]: #method.set_multicast_ttl_v4 + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_ttl_v4(42).expect("set_multicast_ttl_v4 call failed"); + /// assert_eq!(socket.multicast_ttl_v4().unwrap(), 42); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn multicast_ttl_v4(&self) -> io::Result { self.0.multicast_ttl_v4() @@ -208,6 +377,15 @@ impl UdpSocket { /// /// Controls whether this socket sees the multicast packets it sends itself. /// Note that this may not have any affect on IPv4 sockets. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_loop_v6(false).expect("set_multicast_loop_v6 call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_multicast_loop_v6(&self, multicast_loop_v6: bool) -> io::Result<()> { self.0.set_multicast_loop_v6(multicast_loop_v6) @@ -219,6 +397,16 @@ impl UdpSocket { /// [`set_multicast_loop_v6`][link]. /// /// [link]: #method.set_multicast_loop_v6 + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_multicast_loop_v6(false).expect("set_multicast_loop_v6 call failed"); + /// assert_eq!(socket.multicast_loop_v6().unwrap(), false); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn multicast_loop_v6(&self) -> io::Result { self.0.multicast_loop_v6() @@ -228,6 +416,15 @@ impl UdpSocket { /// /// This value sets the time-to-live field that is used in every packet sent /// from this socket. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_ttl(42).expect("set_ttl call failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn set_ttl(&self, ttl: u32) -> io::Result<()> { self.0.set_ttl(ttl) @@ -238,6 +435,16 @@ impl UdpSocket { /// For more information about this option, see [`set_ttl`][link]. /// /// [link]: #method.set_ttl + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.set_ttl(42).expect("set_ttl call failed"); + /// assert_eq!(socket.ttl().unwrap(), 42); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn ttl(&self) -> io::Result { self.0.ttl() @@ -292,6 +499,19 @@ impl UdpSocket { /// This will retrieve the stored error in the underlying socket, clearing /// the field in the process. This can be useful for checking errors between /// calls. 
+ /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// match socket.take_error() { + /// Ok(Some(error)) => println!("UdpSocket error: {:?}", error), + /// Ok(None) => println!("No error"), + /// Err(error) => println!("UdpSocket.take_error failed: {:?}", error), + /// } + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn take_error(&self) -> io::Result> { self.0.take_error() @@ -300,6 +520,15 @@ impl UdpSocket { /// Connects this UDP socket to a remote address, allowing the `send` and /// `recv` syscalls to be used to send data and also applies filters to only /// receive data from the specified address. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.connect("127.0.0.1:8080").expect("connect function failed"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn connect(&self, addr: A) -> io::Result<()> { super::each_addr(addr, |addr| self.0.connect(addr)) @@ -307,8 +536,20 @@ impl UdpSocket { /// Sends data on the socket to the remote address to which it is connected. /// - /// The `connect` method will connect this socket to a remote address. This + /// The [`connect()`] method will connect this socket to a remote address. This /// method will fail if the socket is not connected. + /// + /// [`connect()`]: #method.connect + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.connect("127.0.0.1:8080").expect("connect function failed"); + /// socket.send(&[0, 1, 2]).expect("couldn't send message"); + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn send(&self, buf: &[u8]) -> io::Result { self.0.send(buf) @@ -319,6 +560,20 @@ impl UdpSocket { /// /// The `connect` method will connect this socket to a remote address. This /// method will fail if the socket is not connected. + /// + /// # Examples + /// + /// ```no_run + /// use std::net::UdpSocket; + /// + /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address"); + /// socket.connect("127.0.0.1:8080").expect("connect function failed"); + /// let mut buf = [0; 10]; + /// match socket.recv(&mut buf) { + /// Ok(received) => println!("received {} bytes", received), + /// Err(e) => println!("recv function failed: {:?}", e), + /// } + /// ``` #[stable(feature = "net2_mutators", since = "1.9.0")] pub fn recv(&self, buf: &mut [u8]) -> io::Result { self.0.recv(buf) @@ -328,6 +583,15 @@ impl UdpSocket { /// /// On Unix this corresponds to calling fcntl, and on Windows this /// corresponds to calling ioctlsocket. 
+    ///
+    /// # Examples
+    ///
+    /// ```no_run
+    /// use std::net::UdpSocket;
+    ///
+    /// let socket = UdpSocket::bind("127.0.0.1:34254").expect("couldn't bind to address");
+    /// socket.set_nonblocking(true).expect("set_nonblocking call failed");
+    /// ```
     #[stable(feature = "net2_mutators", since = "1.9.0")]
     pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> {
         self.0.set_nonblocking(nonblocking)
diff --git a/src/libstd/os/mod.rs b/src/libstd/os/mod.rs
index 366a167415..58f1604de1 100644
--- a/src/libstd/os/mod.rs
+++ b/src/libstd/os/mod.rs
@@ -13,7 +13,7 @@
 #![stable(feature = "os", since = "1.0.0")]
 #![allow(missing_docs, bad_style)]
 
-#[cfg(unix)]
+#[cfg(any(target_os = "redox", unix))]
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use sys::ext as unix;
 #[cfg(windows)]
diff --git a/src/libstd/panicking.rs b/src/libstd/panicking.rs
index 1f5b3437b6..45a10d2452 100644
--- a/src/libstd/panicking.rs
+++ b/src/libstd/panicking.rs
@@ -153,12 +153,29 @@ pub fn take_hook() -> Box<Fn(&PanicInfo) + 'static + Sync + Send> {
         match hook {
             Hook::Default => Box::new(default_hook),
-            Hook::Custom(ptr) => {Box::from_raw(ptr)} // FIXME #30530
+            Hook::Custom(ptr) => Box::from_raw(ptr),
         }
     }
 }
 
 /// A struct providing information about a panic.
+///
+/// A `PanicInfo` structure is passed to a panic hook set by the [`set_hook()`]
+/// function.
+///
+/// [`set_hook()`]: ../../std/panic/fn.set_hook.html
+///
+/// # Examples
+///
+/// ```should_panic
+/// use std::panic;
+///
+/// panic::set_hook(Box::new(|panic_info| {
+///     println!("panic occurred: {:?}", panic_info.payload().downcast_ref::<&str>().unwrap());
+/// }));
+///
+/// panic!("Normal panic");
+/// ```
 #[stable(feature = "panic_hooks", since = "1.10.0")]
 pub struct PanicInfo<'a> {
     payload: &'a (Any + Send),
@@ -168,7 +185,21 @@ pub struct PanicInfo<'a> {
 impl<'a> PanicInfo<'a> {
     /// Returns the payload associated with the panic.
     ///
-    /// This will commonly, but not always, be a `&'static str` or `String`.
+    /// This will commonly, but not always, be a `&'static str` or [`String`].
+    ///
+    /// [`String`]: ../../std/string/struct.String.html
+    ///
+    /// # Examples
+    ///
+    /// ```should_panic
+    /// use std::panic;
+    ///
+    /// panic::set_hook(Box::new(|panic_info| {
+    ///     println!("panic occurred: {:?}", panic_info.payload().downcast_ref::<&str>().unwrap());
+    /// }));
+    ///
+    /// panic!("Normal panic");
+    /// ```
     #[stable(feature = "panic_hooks", since = "1.10.0")]
     pub fn payload(&self) -> &(Any + Send) {
         self.payload
@@ -177,8 +208,26 @@ impl<'a> PanicInfo<'a> {
     /// Returns information about the location from which the panic originated,
     /// if available.
     ///
-    /// This method will currently always return `Some`, but this may change
+    /// This method will currently always return [`Some`], but this may change
     /// in future versions.
+    ///
+    /// [`Some`]: ../../std/option/enum.Option.html#variant.Some
+    ///
+    /// # Examples
+    ///
+    /// ```should_panic
+    /// use std::panic;
+    ///
+    /// panic::set_hook(Box::new(|panic_info| {
+    ///     if let Some(location) = panic_info.location() {
+    ///         println!("panic occurred in file '{}' at line {}", location.file(), location.line());
+    ///     } else {
+    ///         println!("panic occurred but can't get location information...");
+    ///     }
+    /// }));
+    ///
+    /// panic!("Normal panic");
+    /// ```
     #[stable(feature = "panic_hooks", since = "1.10.0")]
     pub fn location(&self) -> Option<&Location> {
         Some(&self.location)
@@ -186,6 +235,27 @@ impl<'a> PanicInfo<'a> {
 }
 
 /// A struct containing information about the location of a panic.
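A sketch tying `set_hook`, the `PanicInfo` passed to the hook, and `take_hook` together; `catch_unwind` is used only so the program can continue past the panic:

```rust
use std::panic;

fn main() {
    panic::set_hook(Box::new(|info| {
        // payload() is `Any + Send`; a &str is the common case for panic!("...").
        if let Some(msg) = info.payload().downcast_ref::<&str>() {
            println!("custom hook saw: {}", msg);
        }
    }));

    // Keep the panic from tearing down the whole example.
    let _ = panic::catch_unwind(|| {
        panic!("boom");
    });

    // take_hook() hands back the installed hook and restores the default one.
    let _previous = panic::take_hook();
}
```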
+///
+/// This structure is created by the [`location()`] method of [`PanicInfo`].
+///
+/// [`location()`]: ../../std/panic/struct.PanicInfo.html#method.location
+/// [`PanicInfo`]: ../../std/panic/struct.PanicInfo.html
+///
+/// # Examples
+///
+/// ```should_panic
+/// use std::panic;
+///
+/// panic::set_hook(Box::new(|panic_info| {
+///     if let Some(location) = panic_info.location() {
+///         println!("panic occurred in file '{}' at line {}", location.file(), location.line());
+///     } else {
+///         println!("panic occurred but can't get location information...");
+///     }
+/// }));
+///
+/// panic!("Normal panic");
+/// ```
 #[stable(feature = "panic_hooks", since = "1.10.0")]
 pub struct Location<'a> {
     file: &'a str,
@@ -194,12 +264,44 @@ impl<'a> Location<'a> {
     /// Returns the name of the source file from which the panic originated.
+    ///
+    /// # Examples
+    ///
+    /// ```should_panic
+    /// use std::panic;
+    ///
+    /// panic::set_hook(Box::new(|panic_info| {
+    ///     if let Some(location) = panic_info.location() {
+    ///         println!("panic occurred in file '{}'", location.file());
+    ///     } else {
+    ///         println!("panic occurred but can't get location information...");
+    ///     }
+    /// }));
+    ///
+    /// panic!("Normal panic");
+    /// ```
     #[stable(feature = "panic_hooks", since = "1.10.0")]
     pub fn file(&self) -> &str {
         self.file
     }
 
     /// Returns the line number from which the panic originated.
+    ///
+    /// # Examples
+    ///
+    /// ```should_panic
+    /// use std::panic;
+    ///
+    /// panic::set_hook(Box::new(|panic_info| {
+    ///     if let Some(location) = panic_info.location() {
+    ///         println!("panic occurred at line {}", location.line());
+    ///     } else {
+    ///         println!("panic occurred but can't get location information...");
+    ///     }
+    /// }));
+    ///
+    /// panic!("Normal panic");
+    /// ```
     #[stable(feature = "panic_hooks", since = "1.10.0")]
     pub fn line(&self) -> u32 {
         self.line
diff --git a/src/libstd/path.rs b/src/libstd/path.rs
index bb6883236e..d13baea40a 100644
--- a/src/libstd/path.rs
+++ b/src/libstd/path.rs
@@ -25,11 +25,18 @@
 //!
 //! ```rust
 //! use std::path::Path;
+//! use std::ffi::OsStr;
 //!
 //! let path = Path::new("/tmp/foo/bar.txt");
-//! let file = path.file_name();
+//!
+//! let parent = path.parent();
+//! assert_eq!(parent, Some(Path::new("/tmp/foo")));
+//!
+//! let file_stem = path.file_stem();
+//! assert_eq!(file_stem, Some(OsStr::new("bar")));
+//!
 //! let extension = path.extension();
-//! let parent_dir = path.parent();
+//! assert_eq!(extension, Some(OsStr::new("txt")));
 //! ```
 //!
 //! To build or modify paths, use `PathBuf`:
@@ -247,7 +254,9 @@ pub fn is_separator(c: char) -> bool {
     c.is_ascii() && is_sep_byte(c as u8)
 }
 
-/// The primary separator for the current platform
+/// The primary separator of path components for the current platform.
+///
+/// For example, `/` on Unix and `\` on Windows.
 #[stable(feature = "rust1", since = "1.0.0")]
 pub const MAIN_SEPARATOR: char = ::sys::path::MAIN_SEP;
 
@@ -448,7 +457,17 @@ pub enum Component<'a> {
 }
 
 impl<'a> Component<'a> {
-    /// Extracts the underlying `OsStr` slice
+    /// Extracts the underlying `OsStr` slice.
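For illustration, a short sketch that walks a path with `components()` and rejoins the named parts with `MAIN_SEPARATOR` (the path literal is arbitrary):

```rust
use std::path::{Component, Path, MAIN_SEPARATOR};

fn main() {
    let path = Path::new("/tmp/foo/bar.txt");

    // Keep only the named components, skipping RootDir, CurDir, etc.
    let parts: Vec<&str> = path.components()
        .filter_map(|c| match c {
            Component::Normal(s) => s.to_str(),
            _ => None,
        })
        .collect();

    // Rejoin with the platform's primary separator.
    let sep = MAIN_SEPARATOR.to_string();
    println!("{}", parts.join(&sep));
}
```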
+ /// + /// # Examples + /// + /// ``` + /// use std::path::Path; + /// + /// let path = Path::new("./tmp/foo/bar.txt"); + /// let components: Vec<_> = path.components().map(|comp| comp.as_os_str()).collect(); + /// assert_eq!(&components, &[".", "tmp", "foo", "bar.txt"]); + /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn as_os_str(self) -> &'a OsStr { match self { @@ -983,17 +1002,24 @@ impl PathBuf { /// /// # Examples /// + /// Pushing a relative path extends the existing path: + /// /// ``` /// use std::path::PathBuf; /// - /// let mut path = PathBuf::new(); - /// path.push("/tmp"); + /// let mut path = PathBuf::from("/tmp"); /// path.push("file.bk"); /// assert_eq!(path, PathBuf::from("/tmp/file.bk")); + /// ``` /// - /// // Pushing an absolute path replaces the current path - /// path.push("/etc/passwd"); - /// assert_eq!(path, PathBuf::from("/etc/passwd")); + /// Pushing an absolute path replaces the existing path: + /// + /// ``` + /// use std::path::PathBuf; + /// + /// let mut path = PathBuf::from("/tmp"); + /// path.push("/etc"); + /// assert_eq!(path, PathBuf::from("/etc")); /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn push>(&mut self, path: P) { @@ -1312,13 +1338,19 @@ impl AsRef for PathBuf { /// /// ``` /// use std::path::Path; +/// use std::ffi::OsStr; /// /// let path = Path::new("/tmp/foo/bar.txt"); -/// let file = path.file_name(); -/// let extension = path.extension(); -/// let parent_dir = path.parent(); -/// ``` /// +/// let parent = path.parent(); +/// assert_eq!(parent, Some(Path::new("/tmp/foo"))); +/// +/// let file_stem = path.file_stem(); +/// assert_eq!(file_stem, Some(OsStr::new("bar"))); +/// +/// let extension = path.extension(); +/// assert_eq!(extension, Some(OsStr::new("txt"))); +/// ``` #[stable(feature = "rust1", since = "1.0.0")] pub struct Path { inner: OsStr, diff --git a/src/libstd/process.rs b/src/libstd/process.rs index 9d21a76e81..2dcb8c2f15 100644 --- a/src/libstd/process.rs +++ b/src/libstd/process.rs @@ -253,6 +253,14 @@ impl Command { /// Builder methods are provided to change these defaults and /// otherwise configure the process. /// + /// If `program` is not an absolute path, the `PATH` will be searched in + /// an OS-defined way. + /// + /// The search path to be used may be controlled by setting the + /// `PATH` environment variable on the Command, + /// but this has some implementation limitations on Windows + /// (see https://github.com/rust-lang/rust/issues/37519). + /// /// # Examples /// /// Basic usage: @@ -819,12 +827,52 @@ impl Child { /// will be run. If a clean shutdown is needed it is recommended to only call /// this function at a known point where there are no more destructors left /// to run. +/// +/// ## Platform-specific behavior +/// +/// **Unix**: On Unix-like platforms, it is unlikely that all 32 bits of `exit` +/// will be visible to a parent process inspecting the exit code. On most +/// Unix-like platforms, only the eight least-significant bits are considered. +/// +/// # Examples +/// +/// ``` +/// use std::process; +/// +/// process::exit(0); +/// ``` +/// +/// Due to [platform-specific behavior], the exit code for this example will be +/// `0` on Linux, but `256` on Windows: +/// +/// ```no_run +/// use std::process; +/// +/// process::exit(0x0f00); +/// ``` +/// +/// [platform-specific behavior]: #platform-specific-behavior #[stable(feature = "rust1", since = "1.0.0")] pub fn exit(code: i32) -> ! 
{ ::sys_common::cleanup(); ::sys::os::exit(code) } +/// Terminates the process in an abnormal fashion. +/// +/// The function will never return and will immediately terminate the current +/// process in a platform specific "abnormal" manner. +/// +/// Note that because this function never returns, and that it terminates the +/// process, no destructors on the current stack or any other thread's stack +/// will be run. If a clean shutdown is needed it is recommended to only call +/// this function at a known point where there are no more destructors left +/// to run. +#[unstable(feature = "process_abort", issue = "37838")] +pub fn abort() -> ! { + unsafe { ::sys::abort_internal() }; +} + #[cfg(all(test, not(target_os = "emscripten")))] mod tests { use io::prelude::*; @@ -1144,4 +1192,62 @@ mod tests { Ok(_) => panic!(), } } + + /// Test that process creation flags work by debugging a process. + /// Other creation flags make it hard or impossible to detect + /// behavioral changes in the process. + #[test] + #[cfg(windows)] + fn test_creation_flags() { + use os::windows::process::CommandExt; + use sys::c::{BOOL, DWORD, INFINITE}; + #[repr(C, packed)] + struct DEBUG_EVENT { + pub event_code: DWORD, + pub process_id: DWORD, + pub thread_id: DWORD, + // This is a union in the real struct, but we don't + // need this data for the purposes of this test. + pub _junk: [u8; 164], + } + + extern "system" { + fn WaitForDebugEvent(lpDebugEvent: *mut DEBUG_EVENT, dwMilliseconds: DWORD) -> BOOL; + fn ContinueDebugEvent(dwProcessId: DWORD, dwThreadId: DWORD, + dwContinueStatus: DWORD) -> BOOL; + } + + const DEBUG_PROCESS: DWORD = 1; + const EXIT_PROCESS_DEBUG_EVENT: DWORD = 5; + const DBG_EXCEPTION_NOT_HANDLED: DWORD = 0x80010001; + + let mut child = Command::new("cmd") + .creation_flags(DEBUG_PROCESS) + .stdin(Stdio::piped()).spawn().unwrap(); + child.stdin.take().unwrap().write_all(b"exit\r\n").unwrap(); + let mut events = 0; + let mut event = DEBUG_EVENT { + event_code: 0, + process_id: 0, + thread_id: 0, + _junk: [0; 164], + }; + loop { + if unsafe { WaitForDebugEvent(&mut event as *mut DEBUG_EVENT, INFINITE) } == 0 { + panic!("WaitForDebugEvent failed!"); + } + events += 1; + + if event.event_code == EXIT_PROCESS_DEBUG_EVENT { + break; + } + + if unsafe { ContinueDebugEvent(event.process_id, + event.thread_id, + DBG_EXCEPTION_NOT_HANDLED) } == 0 { + panic!("ContinueDebugEvent failed!"); + } + } + assert!(events > 0); + } } diff --git a/src/libstd/sync/mpsc/mod.rs b/src/libstd/sync/mpsc/mod.rs index fce640e7c7..8bcf008649 100644 --- a/src/libstd/sync/mpsc/mod.rs +++ b/src/libstd/sync/mpsc/mod.rs @@ -316,7 +316,7 @@ pub struct Iter<'a, T: 'a> { /// /// This Iterator will never block the caller in order to wait for data to /// become available. Instead, it will return `None`. -#[unstable(feature = "receiver_try_iter", issue = "34931")] +#[stable(feature = "receiver_try_iter", since = "1.15.0")] pub struct TryIter<'a, T: 'a> { rx: &'a Receiver } @@ -348,7 +348,7 @@ impl !Sync for Sender { } /// owned by one thread, but it can be cloned to send to other threads. 
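Since `try_iter` is stabilized here, a small sketch of the non-blocking drain it provides; the sleeps only stagger the two threads, and the exact output may vary:

```rust
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel();

    thread::spawn(move || {
        tx.send(1).unwrap();
        tx.send(2).unwrap();
        thread::sleep(Duration::from_millis(50));
        tx.send(3).unwrap();
    });

    thread::sleep(Duration::from_millis(10));

    // try_iter() yields whatever is already queued and then stops,
    // without waiting for the value sent after the longer sleep.
    for v in rx.try_iter() {
        println!("got {}", v);
    }
}
```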
#[stable(feature = "rust1", since = "1.0.0")] pub struct SyncSender { - inner: Arc>>, + inner: Arc>, } #[stable(feature = "rust1", since = "1.0.0")] @@ -426,10 +426,10 @@ pub enum TrySendError { } enum Flavor { - Oneshot(Arc>>), - Stream(Arc>>), - Shared(Arc>>), - Sync(Arc>>), + Oneshot(Arc>), + Stream(Arc>), + Shared(Arc>), + Sync(Arc>), } #[doc(hidden)] @@ -454,10 +454,16 @@ impl UnsafeFlavor for Receiver { } /// Creates a new asynchronous channel, returning the sender/receiver halves. -/// /// All data sent on the sender will become available on the receiver, and no /// send will block the calling thread (this channel has an "infinite buffer"). /// +/// If the [`Receiver`] is disconnected while trying to [`send()`] with the +/// [`Sender`], the [`send()`] method will return an error. +/// +/// [`send()`]: ../../../std/sync/mpsc/struct.Sender.html#method.send +/// [`Sender`]: ../../../std/sync/mpsc/struct.Sender.html +/// [`Receiver`]: ../../../std/sync/mpsc/struct.Receiver.html +/// /// # Examples /// /// ``` @@ -481,24 +487,29 @@ impl UnsafeFlavor for Receiver { /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn channel() -> (Sender, Receiver) { - let a = Arc::new(UnsafeCell::new(oneshot::Packet::new())); + let a = Arc::new(oneshot::Packet::new()); (Sender::new(Flavor::Oneshot(a.clone())), Receiver::new(Flavor::Oneshot(a))) } /// Creates a new synchronous, bounded channel. /// -/// Like asynchronous channels, the `Receiver` will block until a message +/// Like asynchronous channels, the [`Receiver`] will block until a message /// becomes available. These channels differ greatly in the semantics of the /// sender from asynchronous channels, however. /// -/// This channel has an internal buffer on which messages will be queued. When -/// the internal buffer becomes full, future sends will *block* waiting for the -/// buffer to open up. Note that a buffer size of 0 is valid, in which case this -/// becomes "rendezvous channel" where each send will not return until a recv -/// is paired with it. +/// This channel has an internal buffer on which messages will be queued. +/// `bound` specifies the buffer size. When the internal buffer becomes full, +/// future sends will *block* waiting for the buffer to open up. Note that a +/// buffer size of 0 is valid, in which case this becomes "rendezvous channel" +/// where each [`send()`] will not return until a recv is paired with it. /// -/// As with asynchronous channels, all senders will panic in `send` if the -/// `Receiver` has been destroyed. +/// Like asynchronous channels, if the [`Receiver`] is disconnected while +/// trying to [`send()`] with the [`SyncSender`], the [`send()`] method will +/// return an error. 
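The zero-capacity case described above can be seen directly: with `sync_channel(0)`, a send only returns once a matching receive has taken the value. A minimal sketch:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // A bound of 0 gives a rendezvous channel: send() blocks until the
    // receiver accepts that exact message.
    let (tx, rx) = sync_channel::<&str>(0);

    let handle = thread::spawn(move || {
        tx.send("handshake").unwrap();
        println!("send returned, so the receiver must have taken the value");
    });

    println!("received: {}", rx.recv().unwrap());
    handle.join().unwrap();
}
```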
+/// +/// [`send()`]: ../../../std/sync/mpsc/struct.SyncSender.html#method.send +/// [`SyncSender`]: ../../../std/sync/mpsc/struct.SyncSender.html +/// [`Receiver`]: ../../../std/sync/mpsc/struct.Receiver.html /// /// # Examples /// @@ -521,7 +532,7 @@ pub fn channel() -> (Sender, Receiver) { /// ``` #[stable(feature = "rust1", since = "1.0.0")] pub fn sync_channel(bound: usize) -> (SyncSender, Receiver) { - let a = Arc::new(UnsafeCell::new(sync::Packet::new(bound))); + let a = Arc::new(sync::Packet::new(bound)); (SyncSender::new(a.clone()), Receiver::new(Flavor::Sync(a))) } @@ -567,38 +578,30 @@ impl Sender { pub fn send(&self, t: T) -> Result<(), SendError> { let (new_inner, ret) = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - unsafe { - let p = p.get(); - if !(*p).sent() { - return (*p).send(t).map_err(SendError); - } else { - let a = - Arc::new(UnsafeCell::new(stream::Packet::new())); - let rx = Receiver::new(Flavor::Stream(a.clone())); - match (*p).upgrade(rx) { - oneshot::UpSuccess => { - let ret = (*a.get()).send(t); - (a, ret) - } - oneshot::UpDisconnected => (a, Err(t)), - oneshot::UpWoke(token) => { - // This send cannot panic because the thread is - // asleep (we're looking at it), so the receiver - // can't go away. - (*a.get()).send(t).ok().unwrap(); - token.signal(); - (a, Ok(())) - } + if !p.sent() { + return p.send(t).map_err(SendError); + } else { + let a = Arc::new(stream::Packet::new()); + let rx = Receiver::new(Flavor::Stream(a.clone())); + match p.upgrade(rx) { + oneshot::UpSuccess => { + let ret = a.send(t); + (a, ret) + } + oneshot::UpDisconnected => (a, Err(t)), + oneshot::UpWoke(token) => { + // This send cannot panic because the thread is + // asleep (we're looking at it), so the receiver + // can't go away. + a.send(t).ok().unwrap(); + token.signal(); + (a, Ok(())) } } } } - Flavor::Stream(ref p) => return unsafe { - (*p.get()).send(t).map_err(SendError) - }, - Flavor::Shared(ref p) => return unsafe { - (*p.get()).send(t).map_err(SendError) - }, + Flavor::Stream(ref p) => return p.send(t).map_err(SendError), + Flavor::Shared(ref p) => return p.send(t).map_err(SendError), Flavor::Sync(..) 
=> unreachable!(), }; @@ -613,41 +616,43 @@ impl Sender { #[stable(feature = "rust1", since = "1.0.0")] impl Clone for Sender { fn clone(&self) -> Sender { - let (packet, sleeper, guard) = match *unsafe { self.inner() } { + let packet = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - let a = Arc::new(UnsafeCell::new(shared::Packet::new())); - unsafe { - let guard = (*a.get()).postinit_lock(); + let a = Arc::new(shared::Packet::new()); + { + let guard = a.postinit_lock(); let rx = Receiver::new(Flavor::Shared(a.clone())); - match (*p.get()).upgrade(rx) { + let sleeper = match p.upgrade(rx) { oneshot::UpSuccess | - oneshot::UpDisconnected => (a, None, guard), - oneshot::UpWoke(task) => (a, Some(task), guard) - } + oneshot::UpDisconnected => None, + oneshot::UpWoke(task) => Some(task), + }; + a.inherit_blocker(sleeper, guard); } + a } Flavor::Stream(ref p) => { - let a = Arc::new(UnsafeCell::new(shared::Packet::new())); - unsafe { - let guard = (*a.get()).postinit_lock(); + let a = Arc::new(shared::Packet::new()); + { + let guard = a.postinit_lock(); let rx = Receiver::new(Flavor::Shared(a.clone())); - match (*p.get()).upgrade(rx) { + let sleeper = match p.upgrade(rx) { stream::UpSuccess | - stream::UpDisconnected => (a, None, guard), - stream::UpWoke(task) => (a, Some(task), guard), - } + stream::UpDisconnected => None, + stream::UpWoke(task) => Some(task), + }; + a.inherit_blocker(sleeper, guard); } + a } Flavor::Shared(ref p) => { - unsafe { (*p.get()).clone_chan(); } + p.clone_chan(); return Sender::new(Flavor::Shared(p.clone())); } Flavor::Sync(..) => unreachable!(), }; unsafe { - (*packet.get()).inherit_blocker(sleeper, guard); - let tmp = Sender::new(Flavor::Shared(packet.clone())); mem::swap(self.inner_mut(), tmp.inner_mut()); } @@ -658,10 +663,10 @@ impl Clone for Sender { #[stable(feature = "rust1", since = "1.0.0")] impl Drop for Sender { fn drop(&mut self) { - match *unsafe { self.inner_mut() } { - Flavor::Oneshot(ref mut p) => unsafe { (*p.get()).drop_chan(); }, - Flavor::Stream(ref mut p) => unsafe { (*p.get()).drop_chan(); }, - Flavor::Shared(ref mut p) => unsafe { (*p.get()).drop_chan(); }, + match *unsafe { self.inner() } { + Flavor::Oneshot(ref p) => p.drop_chan(), + Flavor::Stream(ref p) => p.drop_chan(), + Flavor::Shared(ref p) => p.drop_chan(), Flavor::Sync(..) => unreachable!(), } } @@ -679,7 +684,7 @@ impl fmt::Debug for Sender { //////////////////////////////////////////////////////////////////////////////// impl SyncSender { - fn new(inner: Arc>>) -> SyncSender { + fn new(inner: Arc>) -> SyncSender { SyncSender { inner: inner } } @@ -699,7 +704,7 @@ impl SyncSender { /// information. #[stable(feature = "rust1", since = "1.0.0")] pub fn send(&self, t: T) -> Result<(), SendError> { - unsafe { (*self.inner.get()).send(t).map_err(SendError) } + self.inner.send(t).map_err(SendError) } /// Attempts to send a value on this channel without blocking. @@ -713,14 +718,14 @@ impl SyncSender { /// receiver has received the data or not if this function is successful. 
#[stable(feature = "rust1", since = "1.0.0")] pub fn try_send(&self, t: T) -> Result<(), TrySendError> { - unsafe { (*self.inner.get()).try_send(t) } + self.inner.try_send(t) } } #[stable(feature = "rust1", since = "1.0.0")] impl Clone for SyncSender { fn clone(&self) -> SyncSender { - unsafe { (*self.inner.get()).clone_chan(); } + self.inner.clone_chan(); SyncSender::new(self.inner.clone()) } } @@ -728,7 +733,7 @@ impl Clone for SyncSender { #[stable(feature = "rust1", since = "1.0.0")] impl Drop for SyncSender { fn drop(&mut self) { - unsafe { (*self.inner.get()).drop_chan(); } + self.inner.drop_chan(); } } @@ -761,7 +766,7 @@ impl Receiver { loop { let new_port = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - match unsafe { (*p.get()).try_recv() } { + match p.try_recv() { Ok(t) => return Ok(t), Err(oneshot::Empty) => return Err(TryRecvError::Empty), Err(oneshot::Disconnected) => { @@ -771,7 +776,7 @@ impl Receiver { } } Flavor::Stream(ref p) => { - match unsafe { (*p.get()).try_recv() } { + match p.try_recv() { Ok(t) => return Ok(t), Err(stream::Empty) => return Err(TryRecvError::Empty), Err(stream::Disconnected) => { @@ -781,7 +786,7 @@ impl Receiver { } } Flavor::Shared(ref p) => { - match unsafe { (*p.get()).try_recv() } { + match p.try_recv() { Ok(t) => return Ok(t), Err(shared::Empty) => return Err(TryRecvError::Empty), Err(shared::Disconnected) => { @@ -790,7 +795,7 @@ impl Receiver { } } Flavor::Sync(ref p) => { - match unsafe { (*p.get()).try_recv() } { + match p.try_recv() { Ok(t) => return Ok(t), Err(sync::Empty) => return Err(TryRecvError::Empty), Err(sync::Disconnected) => { @@ -864,7 +869,7 @@ impl Receiver { loop { let new_port = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - match unsafe { (*p.get()).recv(None) } { + match p.recv(None) { Ok(t) => return Ok(t), Err(oneshot::Disconnected) => return Err(RecvError), Err(oneshot::Upgraded(rx)) => rx, @@ -872,7 +877,7 @@ impl Receiver { } } Flavor::Stream(ref p) => { - match unsafe { (*p.get()).recv(None) } { + match p.recv(None) { Ok(t) => return Ok(t), Err(stream::Disconnected) => return Err(RecvError), Err(stream::Upgraded(rx)) => rx, @@ -880,15 +885,13 @@ impl Receiver { } } Flavor::Shared(ref p) => { - match unsafe { (*p.get()).recv(None) } { + match p.recv(None) { Ok(t) => return Ok(t), Err(shared::Disconnected) => return Err(RecvError), Err(shared::Empty) => unreachable!(), } } - Flavor::Sync(ref p) => return unsafe { - (*p.get()).recv(None).map_err(|_| RecvError) - } + Flavor::Sync(ref p) => return p.recv(None).map_err(|_| RecvError), }; unsafe { mem::swap(self.inner_mut(), new_port.inner_mut()); @@ -941,7 +944,7 @@ impl Receiver { loop { let port_or_empty = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - match unsafe { (*p.get()).recv(Some(deadline)) } { + match p.recv(Some(deadline)) { Ok(t) => return Ok(t), Err(oneshot::Disconnected) => return Err(Disconnected), Err(oneshot::Upgraded(rx)) => Some(rx), @@ -949,7 +952,7 @@ impl Receiver { } } Flavor::Stream(ref p) => { - match unsafe { (*p.get()).recv(Some(deadline)) } { + match p.recv(Some(deadline)) { Ok(t) => return Ok(t), Err(stream::Disconnected) => return Err(Disconnected), Err(stream::Upgraded(rx)) => Some(rx), @@ -957,14 +960,14 @@ impl Receiver { } } Flavor::Shared(ref p) => { - match unsafe { (*p.get()).recv(Some(deadline)) } { + match p.recv(Some(deadline)) { Ok(t) => return Ok(t), Err(shared::Disconnected) => return Err(Disconnected), Err(shared::Empty) => None, } } Flavor::Sync(ref p) => { - match unsafe { 
(*p.get()).recv(Some(deadline)) } { + match p.recv(Some(deadline)) { Ok(t) => return Ok(t), Err(sync::Disconnected) => return Err(Disconnected), Err(sync::Empty) => None, @@ -997,7 +1000,7 @@ impl Receiver { /// It will return `None` if there are no more pending values or if the /// channel has hung up. The iterator will never `panic!` or block the /// user by waiting for values. - #[unstable(feature = "receiver_try_iter", issue = "34931")] + #[stable(feature = "receiver_try_iter", since = "1.15.0")] pub fn try_iter(&self) -> TryIter { TryIter { rx: self } } @@ -1009,23 +1012,19 @@ impl select::Packet for Receiver { loop { let new_port = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - match unsafe { (*p.get()).can_recv() } { + match p.can_recv() { Ok(ret) => return ret, Err(upgrade) => upgrade, } } Flavor::Stream(ref p) => { - match unsafe { (*p.get()).can_recv() } { + match p.can_recv() { Ok(ret) => return ret, Err(upgrade) => upgrade, } } - Flavor::Shared(ref p) => { - return unsafe { (*p.get()).can_recv() }; - } - Flavor::Sync(ref p) => { - return unsafe { (*p.get()).can_recv() }; - } + Flavor::Shared(ref p) => return p.can_recv(), + Flavor::Sync(ref p) => return p.can_recv(), }; unsafe { mem::swap(self.inner_mut(), @@ -1038,25 +1037,21 @@ impl select::Packet for Receiver { loop { let (t, new_port) = match *unsafe { self.inner() } { Flavor::Oneshot(ref p) => { - match unsafe { (*p.get()).start_selection(token) } { + match p.start_selection(token) { oneshot::SelSuccess => return Installed, oneshot::SelCanceled => return Abort, oneshot::SelUpgraded(t, rx) => (t, rx), } } Flavor::Stream(ref p) => { - match unsafe { (*p.get()).start_selection(token) } { + match p.start_selection(token) { stream::SelSuccess => return Installed, stream::SelCanceled => return Abort, stream::SelUpgraded(t, rx) => (t, rx), } } - Flavor::Shared(ref p) => { - return unsafe { (*p.get()).start_selection(token) }; - } - Flavor::Sync(ref p) => { - return unsafe { (*p.get()).start_selection(token) }; - } + Flavor::Shared(ref p) => return p.start_selection(token), + Flavor::Sync(ref p) => return p.start_selection(token), }; token = t; unsafe { @@ -1069,16 +1064,10 @@ impl select::Packet for Receiver { let mut was_upgrade = false; loop { let result = match *unsafe { self.inner() } { - Flavor::Oneshot(ref p) => unsafe { (*p.get()).abort_selection() }, - Flavor::Stream(ref p) => unsafe { - (*p.get()).abort_selection(was_upgrade) - }, - Flavor::Shared(ref p) => return unsafe { - (*p.get()).abort_selection(was_upgrade) - }, - Flavor::Sync(ref p) => return unsafe { - (*p.get()).abort_selection() - }, + Flavor::Oneshot(ref p) => p.abort_selection(), + Flavor::Stream(ref p) => p.abort_selection(was_upgrade), + Flavor::Shared(ref p) => return p.abort_selection(was_upgrade), + Flavor::Sync(ref p) => return p.abort_selection(), }; let new_port = match result { Ok(b) => return b, Err(p) => p }; was_upgrade = true; @@ -1097,7 +1086,7 @@ impl<'a, T> Iterator for Iter<'a, T> { fn next(&mut self) -> Option { self.rx.recv().ok() } } -#[unstable(feature = "receiver_try_iter", issue = "34931")] +#[stable(feature = "receiver_try_iter", since = "1.15.0")] impl<'a, T> Iterator for TryIter<'a, T> { type Item = T; @@ -1131,11 +1120,11 @@ impl IntoIterator for Receiver { #[stable(feature = "rust1", since = "1.0.0")] impl Drop for Receiver { fn drop(&mut self) { - match *unsafe { self.inner_mut() } { - Flavor::Oneshot(ref mut p) => unsafe { (*p.get()).drop_port(); }, - Flavor::Stream(ref mut p) => unsafe { (*p.get()).drop_port(); 
}, - Flavor::Shared(ref mut p) => unsafe { (*p.get()).drop_port(); }, - Flavor::Sync(ref mut p) => unsafe { (*p.get()).drop_port(); }, + match *unsafe { self.inner() } { + Flavor::Oneshot(ref p) => p.drop_port(), + Flavor::Stream(ref p) => p.drop_port(), + Flavor::Shared(ref p) => p.drop_port(), + Flavor::Sync(ref p) => p.drop_port(), } } } @@ -1267,6 +1256,38 @@ impl error::Error for TryRecvError { } } +#[stable(feature = "mpsc_recv_timeout_error", since = "1.14.0")] +impl fmt::Display for RecvTimeoutError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match *self { + RecvTimeoutError::Timeout => { + "timed out waiting on channel".fmt(f) + } + RecvTimeoutError::Disconnected => { + "channel is empty and sending half is closed".fmt(f) + } + } + } +} + +#[stable(feature = "mpsc_recv_timeout_error", since = "1.14.0")] +impl error::Error for RecvTimeoutError { + fn description(&self) -> &str { + match *self { + RecvTimeoutError::Timeout => { + "timed out waiting on channel" + } + RecvTimeoutError::Disconnected => { + "channel is empty and sending half is closed" + } + } + } + + fn cause(&self) -> Option<&error::Error> { + None + } +} + #[cfg(all(test, not(target_os = "emscripten")))] mod tests { use env; diff --git a/src/libstd/sync/mpsc/oneshot.rs b/src/libstd/sync/mpsc/oneshot.rs index 767e9f96ac..b8e50c9297 100644 --- a/src/libstd/sync/mpsc/oneshot.rs +++ b/src/libstd/sync/mpsc/oneshot.rs @@ -39,7 +39,8 @@ use self::MyUpgrade::*; use sync::mpsc::Receiver; use sync::mpsc::blocking::{self, SignalToken}; -use core::mem; +use cell::UnsafeCell; +use ptr; use sync::atomic::{AtomicUsize, Ordering}; use time::Instant; @@ -57,10 +58,10 @@ pub struct Packet { // Internal state of the chan/port pair (stores the blocked thread as well) state: AtomicUsize, // One-shot data slot location - data: Option, + data: UnsafeCell>, // when used for the second time, a oneshot channel must be upgraded, and // this contains the slot for the upgrade - upgrade: MyUpgrade, + upgrade: UnsafeCell>, } pub enum Failure { @@ -90,42 +91,44 @@ enum MyUpgrade { impl Packet { pub fn new() -> Packet { Packet { - data: None, - upgrade: NothingSent, + data: UnsafeCell::new(None), + upgrade: UnsafeCell::new(NothingSent), state: AtomicUsize::new(EMPTY), } } - pub fn send(&mut self, t: T) -> Result<(), T> { - // Sanity check - match self.upgrade { - NothingSent => {} - _ => panic!("sending on a oneshot that's already sent on "), - } - assert!(self.data.is_none()); - self.data = Some(t); - self.upgrade = SendUsed; - - match self.state.swap(DATA, Ordering::SeqCst) { - // Sent the data, no one was waiting - EMPTY => Ok(()), - - // Couldn't send the data, the port hung up first. Return the data - // back up the stack. - DISCONNECTED => { - self.state.swap(DISCONNECTED, Ordering::SeqCst); - self.upgrade = NothingSent; - Err(self.data.take().unwrap()) + pub fn send(&self, t: T) -> Result<(), T> { + unsafe { + // Sanity check + match *self.upgrade.get() { + NothingSent => {} + _ => panic!("sending on a oneshot that's already sent on "), } + assert!((*self.data.get()).is_none()); + ptr::write(self.data.get(), Some(t)); + ptr::write(self.upgrade.get(), SendUsed); - // Not possible, these are one-use channels - DATA => unreachable!(), + match self.state.swap(DATA, Ordering::SeqCst) { + // Sent the data, no one was waiting + EMPTY => Ok(()), - // There is a thread waiting on the other end. We leave the 'DATA' - // state inside so it'll pick it up on the other end. 
- ptr => unsafe { - SignalToken::cast_from_usize(ptr).signal(); - Ok(()) + // Couldn't send the data, the port hung up first. Return the data + // back up the stack. + DISCONNECTED => { + self.state.swap(DISCONNECTED, Ordering::SeqCst); + ptr::write(self.upgrade.get(), NothingSent); + Err((&mut *self.data.get()).take().unwrap()) + } + + // Not possible, these are one-use channels + DATA => unreachable!(), + + // There is a thread waiting on the other end. We leave the 'DATA' + // state inside so it'll pick it up on the other end. + ptr => { + SignalToken::cast_from_usize(ptr).signal(); + Ok(()) + } } } } @@ -133,13 +136,15 @@ impl Packet { // Just tests whether this channel has been sent on or not, this is only // safe to use from the sender. pub fn sent(&self) -> bool { - match self.upgrade { - NothingSent => false, - _ => true, + unsafe { + match *self.upgrade.get() { + NothingSent => false, + _ => true, + } } } - pub fn recv(&mut self, deadline: Option) -> Result> { + pub fn recv(&self, deadline: Option) -> Result> { // Attempt to not block the thread (it's a little expensive). If it looks // like we're not empty, then immediately go through to `try_recv`. if self.state.load(Ordering::SeqCst) == EMPTY { @@ -167,73 +172,77 @@ impl Packet { self.try_recv() } - pub fn try_recv(&mut self) -> Result> { - match self.state.load(Ordering::SeqCst) { - EMPTY => Err(Empty), + pub fn try_recv(&self) -> Result> { + unsafe { + match self.state.load(Ordering::SeqCst) { + EMPTY => Err(Empty), - // We saw some data on the channel, but the channel can be used - // again to send us an upgrade. As a result, we need to re-insert - // into the channel that there's no data available (otherwise we'll - // just see DATA next time). This is done as a cmpxchg because if - // the state changes under our feet we'd rather just see that state - // change. - DATA => { - self.state.compare_and_swap(DATA, EMPTY, Ordering::SeqCst); - match self.data.take() { - Some(data) => Ok(data), - None => unreachable!(), + // We saw some data on the channel, but the channel can be used + // again to send us an upgrade. As a result, we need to re-insert + // into the channel that there's no data available (otherwise we'll + // just see DATA next time). This is done as a cmpxchg because if + // the state changes under our feet we'd rather just see that state + // change. + DATA => { + self.state.compare_and_swap(DATA, EMPTY, Ordering::SeqCst); + match (&mut *self.data.get()).take() { + Some(data) => Ok(data), + None => unreachable!(), + } } - } - // There's no guarantee that we receive before an upgrade happens, - // and an upgrade flags the channel as disconnected, so when we see - // this we first need to check if there's data available and *then* - // we go through and process the upgrade. - DISCONNECTED => { - match self.data.take() { - Some(data) => Ok(data), - None => { - match mem::replace(&mut self.upgrade, SendUsed) { - SendUsed | NothingSent => Err(Disconnected), - GoUp(upgrade) => Err(Upgraded(upgrade)) + // There's no guarantee that we receive before an upgrade happens, + // and an upgrade flags the channel as disconnected, so when we see + // this we first need to check if there's data available and *then* + // we go through and process the upgrade. 
+ DISCONNECTED => { + match (&mut *self.data.get()).take() { + Some(data) => Ok(data), + None => { + match ptr::replace(self.upgrade.get(), SendUsed) { + SendUsed | NothingSent => Err(Disconnected), + GoUp(upgrade) => Err(Upgraded(upgrade)) + } } } } - } - // We are the sole receiver; there cannot be a blocking - // receiver already. - _ => unreachable!() + // We are the sole receiver; there cannot be a blocking + // receiver already. + _ => unreachable!() + } } } // Returns whether the upgrade was completed. If the upgrade wasn't // completed, then the port couldn't get sent to the other half (it will // never receive it). - pub fn upgrade(&mut self, up: Receiver) -> UpgradeResult { - let prev = match self.upgrade { - NothingSent => NothingSent, - SendUsed => SendUsed, - _ => panic!("upgrading again"), - }; - self.upgrade = GoUp(up); + pub fn upgrade(&self, up: Receiver) -> UpgradeResult { + unsafe { + let prev = match *self.upgrade.get() { + NothingSent => NothingSent, + SendUsed => SendUsed, + _ => panic!("upgrading again"), + }; + ptr::write(self.upgrade.get(), GoUp(up)); - match self.state.swap(DISCONNECTED, Ordering::SeqCst) { - // If the channel is empty or has data on it, then we're good to go. - // Senders will check the data before the upgrade (in case we - // plastered over the DATA state). - DATA | EMPTY => UpSuccess, + match self.state.swap(DISCONNECTED, Ordering::SeqCst) { + // If the channel is empty or has data on it, then we're good to go. + // Senders will check the data before the upgrade (in case we + // plastered over the DATA state). + DATA | EMPTY => UpSuccess, - // If the other end is already disconnected, then we failed the - // upgrade. Be sure to trash the port we were given. - DISCONNECTED => { self.upgrade = prev; UpDisconnected } + // If the other end is already disconnected, then we failed the + // upgrade. Be sure to trash the port we were given. + DISCONNECTED => { ptr::replace(self.upgrade.get(), prev); UpDisconnected } - // If someone's waiting, we gotta wake them up - ptr => UpWoke(unsafe { SignalToken::cast_from_usize(ptr) }) + // If someone's waiting, we gotta wake them up + ptr => UpWoke(SignalToken::cast_from_usize(ptr)) + } } } - pub fn drop_chan(&mut self) { + pub fn drop_chan(&self) { match self.state.swap(DISCONNECTED, Ordering::SeqCst) { DATA | DISCONNECTED | EMPTY => {} @@ -244,7 +253,7 @@ impl Packet { } } - pub fn drop_port(&mut self) { + pub fn drop_port(&self) { match self.state.swap(DISCONNECTED, Ordering::SeqCst) { // An empty channel has nothing to do, and a remotely disconnected // channel also has nothing to do b/c we're about to run the drop @@ -254,7 +263,7 @@ impl Packet { // There's data on the channel, so make sure we destroy it promptly. // This is why not using an arc is a little difficult (need the box // to stay valid while we take the data). - DATA => { self.data.take().unwrap(); } + DATA => unsafe { (&mut *self.data.get()).take().unwrap(); }, // We're the only ones that can block on this port _ => unreachable!() @@ -267,62 +276,66 @@ impl Packet { // If Ok, the value is whether this port has data, if Err, then the upgraded // port needs to be checked instead of this one. 
- pub fn can_recv(&mut self) -> Result> { - match self.state.load(Ordering::SeqCst) { - EMPTY => Ok(false), // Welp, we tried - DATA => Ok(true), // we have some un-acquired data - DISCONNECTED if self.data.is_some() => Ok(true), // we have data - DISCONNECTED => { - match mem::replace(&mut self.upgrade, SendUsed) { - // The other end sent us an upgrade, so we need to - // propagate upwards whether the upgrade can receive - // data - GoUp(upgrade) => Err(upgrade), + pub fn can_recv(&self) -> Result> { + unsafe { + match self.state.load(Ordering::SeqCst) { + EMPTY => Ok(false), // Welp, we tried + DATA => Ok(true), // we have some un-acquired data + DISCONNECTED if (*self.data.get()).is_some() => Ok(true), // we have data + DISCONNECTED => { + match ptr::replace(self.upgrade.get(), SendUsed) { + // The other end sent us an upgrade, so we need to + // propagate upwards whether the upgrade can receive + // data + GoUp(upgrade) => Err(upgrade), - // If the other end disconnected without sending an - // upgrade, then we have data to receive (the channel is - // disconnected). - up => { self.upgrade = up; Ok(true) } + // If the other end disconnected without sending an + // upgrade, then we have data to receive (the channel is + // disconnected). + up => { ptr::write(self.upgrade.get(), up); Ok(true) } + } } + _ => unreachable!(), // we're the "one blocker" } - _ => unreachable!(), // we're the "one blocker" } } // Attempts to start selection on this port. This can either succeed, fail // because there is data, or fail because there is an upgrade pending. - pub fn start_selection(&mut self, token: SignalToken) -> SelectionResult { - let ptr = unsafe { token.cast_to_usize() }; - match self.state.compare_and_swap(EMPTY, ptr, Ordering::SeqCst) { - EMPTY => SelSuccess, - DATA => { - drop(unsafe { SignalToken::cast_from_usize(ptr) }); - SelCanceled - } - DISCONNECTED if self.data.is_some() => { - drop(unsafe { SignalToken::cast_from_usize(ptr) }); - SelCanceled - } - DISCONNECTED => { - match mem::replace(&mut self.upgrade, SendUsed) { - // The other end sent us an upgrade, so we need to - // propagate upwards whether the upgrade can receive - // data - GoUp(upgrade) => { - SelUpgraded(unsafe { SignalToken::cast_from_usize(ptr) }, upgrade) - } + pub fn start_selection(&self, token: SignalToken) -> SelectionResult { + unsafe { + let ptr = token.cast_to_usize(); + match self.state.compare_and_swap(EMPTY, ptr, Ordering::SeqCst) { + EMPTY => SelSuccess, + DATA => { + drop(SignalToken::cast_from_usize(ptr)); + SelCanceled + } + DISCONNECTED if (*self.data.get()).is_some() => { + drop(SignalToken::cast_from_usize(ptr)); + SelCanceled + } + DISCONNECTED => { + match ptr::replace(self.upgrade.get(), SendUsed) { + // The other end sent us an upgrade, so we need to + // propagate upwards whether the upgrade can receive + // data + GoUp(upgrade) => { + SelUpgraded(SignalToken::cast_from_usize(ptr), upgrade) + } - // If the other end disconnected without sending an - // upgrade, then we have data to receive (the channel is - // disconnected). - up => { - self.upgrade = up; - drop(unsafe { SignalToken::cast_from_usize(ptr) }); - SelCanceled + // If the other end disconnected without sending an + // upgrade, then we have data to receive (the channel is + // disconnected). 
+ up => { + ptr::write(self.upgrade.get(), up); + drop(SignalToken::cast_from_usize(ptr)); + SelCanceled + } } } + _ => unreachable!(), // we're the "one blocker" } - _ => unreachable!(), // we're the "one blocker" } } @@ -330,7 +343,7 @@ impl Packet { // blocked thread will no longer be visible to any other threads. // // The return value indicates whether there's data on this port. - pub fn abort_selection(&mut self) -> Result> { + pub fn abort_selection(&self) -> Result> { let state = match self.state.load(Ordering::SeqCst) { // Each of these states means that no further activity will happen // with regard to abortion selection @@ -356,16 +369,16 @@ impl Packet { // // We then need to check to see if there was an upgrade requested, // and if so, the upgraded port needs to have its selection aborted. - DISCONNECTED => { - if self.data.is_some() { + DISCONNECTED => unsafe { + if (*self.data.get()).is_some() { Ok(true) } else { - match mem::replace(&mut self.upgrade, SendUsed) { + match ptr::replace(self.upgrade.get(), SendUsed) { GoUp(port) => Err(port), _ => Ok(true), } } - } + }, // We woke ourselves up from select. ptr => unsafe { diff --git a/src/libstd/sync/mpsc/shared.rs b/src/libstd/sync/mpsc/shared.rs index 2a9618251f..f9e0290416 100644 --- a/src/libstd/sync/mpsc/shared.rs +++ b/src/libstd/sync/mpsc/shared.rs @@ -24,6 +24,8 @@ use core::cmp; use core::intrinsics::abort; use core::isize; +use cell::UnsafeCell; +use ptr; use sync::atomic::{AtomicUsize, AtomicIsize, AtomicBool, Ordering}; use sync::mpsc::blocking::{self, SignalToken}; use sync::mpsc::mpsc_queue as mpsc; @@ -44,7 +46,7 @@ const MAX_STEALS: isize = 1 << 20; pub struct Packet { queue: mpsc::Queue, cnt: AtomicIsize, // How many items are on this channel - steals: isize, // How many times has a port received without blocking? + steals: UnsafeCell, // How many times has a port received without blocking? to_wake: AtomicUsize, // SignalToken for wake up // The number of channels which are currently using this packet. @@ -72,7 +74,7 @@ impl Packet { Packet { queue: mpsc::Queue::new(), cnt: AtomicIsize::new(0), - steals: 0, + steals: UnsafeCell::new(0), to_wake: AtomicUsize::new(0), channels: AtomicUsize::new(2), port_dropped: AtomicBool::new(false), @@ -95,7 +97,7 @@ impl Packet { // threads in select(). // // This can only be called at channel-creation time - pub fn inherit_blocker(&mut self, + pub fn inherit_blocker(&self, token: Option, guard: MutexGuard<()>) { token.map(|token| { @@ -122,7 +124,7 @@ impl Packet { // To offset this bad increment, we initially set the steal count to // -1. You'll find some special code in abort_selection() as well to // ensure that this -1 steal count doesn't escape too far. - self.steals = -1; + unsafe { *self.steals.get() = -1; } }); // When the shared packet is constructed, we grabbed this lock. The @@ -133,7 +135,7 @@ impl Packet { drop(guard); } - pub fn send(&mut self, t: T) -> Result<(), T> { + pub fn send(&self, t: T) -> Result<(), T> { // See Port::drop for what's going on if self.port_dropped.load(Ordering::SeqCst) { return Err(t) } @@ -218,7 +220,7 @@ impl Packet { Ok(()) } - pub fn recv(&mut self, deadline: Option) -> Result { + pub fn recv(&self, deadline: Option) -> Result { // This code is essentially the exact same as that found in the stream // case (see stream.rs) match self.try_recv() { @@ -239,37 +241,38 @@ impl Packet { } match self.try_recv() { - data @ Ok(..) => { self.steals -= 1; data } + data @ Ok(..) 
=> unsafe { *self.steals.get() -= 1; data }, data => data, } } // Essentially the exact same thing as the stream decrement function. // Returns true if blocking should proceed. - fn decrement(&mut self, token: SignalToken) -> StartResult { - assert_eq!(self.to_wake.load(Ordering::SeqCst), 0); - let ptr = unsafe { token.cast_to_usize() }; - self.to_wake.store(ptr, Ordering::SeqCst); + fn decrement(&self, token: SignalToken) -> StartResult { + unsafe { + assert_eq!(self.to_wake.load(Ordering::SeqCst), 0); + let ptr = token.cast_to_usize(); + self.to_wake.store(ptr, Ordering::SeqCst); - let steals = self.steals; - self.steals = 0; + let steals = ptr::replace(self.steals.get(), 0); - match self.cnt.fetch_sub(1 + steals, Ordering::SeqCst) { - DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); } - // If we factor in our steals and notice that the channel has no - // data, we successfully sleep - n => { - assert!(n >= 0); - if n - steals <= 0 { return Installed } + match self.cnt.fetch_sub(1 + steals, Ordering::SeqCst) { + DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); } + // If we factor in our steals and notice that the channel has no + // data, we successfully sleep + n => { + assert!(n >= 0); + if n - steals <= 0 { return Installed } + } } - } - self.to_wake.store(0, Ordering::SeqCst); - drop(unsafe { SignalToken::cast_from_usize(ptr) }); - Abort + self.to_wake.store(0, Ordering::SeqCst); + drop(SignalToken::cast_from_usize(ptr)); + Abort + } } - pub fn try_recv(&mut self) -> Result { + pub fn try_recv(&self) -> Result { let ret = match self.queue.pop() { mpsc::Data(t) => Some(t), mpsc::Empty => None, @@ -303,23 +306,23 @@ impl Packet { match ret { // See the discussion in the stream implementation for why we // might decrement steals. - Some(data) => { - if self.steals > MAX_STEALS { + Some(data) => unsafe { + if *self.steals.get() > MAX_STEALS { match self.cnt.swap(0, Ordering::SeqCst) { DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); } n => { - let m = cmp::min(n, self.steals); - self.steals -= m; + let m = cmp::min(n, *self.steals.get()); + *self.steals.get() -= m; self.bump(n - m); } } - assert!(self.steals >= 0); + assert!(*self.steals.get() >= 0); } - self.steals += 1; + *self.steals.get() += 1; Ok(data) - } + }, // See the discussion in the stream implementation for why we try // again. @@ -341,7 +344,7 @@ impl Packet { // Prepares this shared packet for a channel clone, essentially just bumping // a refcount. - pub fn clone_chan(&mut self) { + pub fn clone_chan(&self) { let old_count = self.channels.fetch_add(1, Ordering::SeqCst); // See comments on Arc::clone() on why we do this (for `mem::forget`). @@ -355,7 +358,7 @@ impl Packet { // Decrement the reference count on a channel. This is called whenever a // Chan is dropped and may end up waking up a receiver. It's the receiver's // responsibility on the other end to figure out that we've disconnected. - pub fn drop_chan(&mut self) { + pub fn drop_chan(&self) { match self.channels.fetch_sub(1, Ordering::SeqCst) { 1 => {} n if n > 1 => return, @@ -371,9 +374,9 @@ impl Packet { // See the long discussion inside of stream.rs for why the queue is drained, // and why it is done in this fashion. 
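The `try_recv` code above caps the receiver's locally accumulated steal count: once it passes `MAX_STEALS`, the excess is folded back into the shared atomic counter. Below is a single-threaded sketch of that fold with invented names; it omits the `DISCONNECTED` handling and the `bump` helper the real code uses:

```rust
use std::cell::UnsafeCell;
use std::cmp;
use std::sync::atomic::{AtomicIsize, Ordering};

const MAX_STEALS: isize = 4; // the real code uses 1 << 20

struct Counts {
    cnt: AtomicIsize,          // what the senders believe is queued
    steals: UnsafeCell<isize>, // what the receiver took without blocking
}

impl Counts {
    // Called by the receiver each time it pops a message without sleeping.
    fn note_steal(&self) {
        unsafe {
            if *self.steals.get() > MAX_STEALS {
                // Swap the shared count to zero, cancel as much of it as the
                // local steal count can absorb, and put the remainder back.
                let n = self.cnt.swap(0, Ordering::SeqCst);
                let m = cmp::min(n, *self.steals.get());
                *self.steals.get() -= m;
                self.cnt.fetch_add(n - m, Ordering::SeqCst);
            }
            *self.steals.get() += 1;
        }
    }
}

fn main() {
    let c = Counts { cnt: AtomicIsize::new(10), steals: UnsafeCell::new(0) };
    for _ in 0..8 {
        c.note_steal();
    }
    unsafe {
        println!("cnt = {}, steals = {}",
                 c.cnt.load(Ordering::SeqCst), *c.steals.get());
    }
}
```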
- pub fn drop_port(&mut self) { + pub fn drop_port(&self) { self.port_dropped.store(true, Ordering::SeqCst); - let mut steals = self.steals; + let mut steals = unsafe { *self.steals.get() }; while { let cnt = self.cnt.compare_and_swap(steals, DISCONNECTED, Ordering::SeqCst); cnt != DISCONNECTED && cnt != steals @@ -390,7 +393,7 @@ impl Packet { } // Consumes ownership of the 'to_wake' field. - fn take_to_wake(&mut self) -> SignalToken { + fn take_to_wake(&self) -> SignalToken { let ptr = self.to_wake.load(Ordering::SeqCst); self.to_wake.store(0, Ordering::SeqCst); assert!(ptr != 0); @@ -406,13 +409,13 @@ impl Packet { // // This is different than the stream version because there's no need to peek // at the queue, we can just look at the local count. - pub fn can_recv(&mut self) -> bool { + pub fn can_recv(&self) -> bool { let cnt = self.cnt.load(Ordering::SeqCst); - cnt == DISCONNECTED || cnt - self.steals > 0 + cnt == DISCONNECTED || cnt - unsafe { *self.steals.get() } > 0 } // increment the count on the channel (used for selection) - fn bump(&mut self, amt: isize) -> isize { + fn bump(&self, amt: isize) -> isize { match self.cnt.fetch_add(amt, Ordering::SeqCst) { DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); @@ -427,7 +430,7 @@ impl Packet { // // The code here is the same as in stream.rs, except that it doesn't need to // peek at the channel to see if an upgrade is pending. - pub fn start_selection(&mut self, token: SignalToken) -> StartResult { + pub fn start_selection(&self, token: SignalToken) -> StartResult { match self.decrement(token) { Installed => Installed, Abort => { @@ -443,7 +446,7 @@ impl Packet { // // This is similar to the stream implementation (hence fewer comments), but // uses a different value for the "steals" variable. - pub fn abort_selection(&mut self, _was_upgrade: bool) -> bool { + pub fn abort_selection(&self, _was_upgrade: bool) -> bool { // Before we do anything else, we bounce on this lock. The reason for // doing this is to ensure that any upgrade-in-progress is gone and // done with. Without this bounce, we can race with inherit_blocker @@ -477,12 +480,15 @@ impl Packet { thread::yield_now(); } } - // if the number of steals is -1, it was the pre-emptive -1 steal - // count from when we inherited a blocker. This is fine because - // we're just going to overwrite it with a real value. - assert!(self.steals == 0 || self.steals == -1); - self.steals = steals; - prev >= 0 + unsafe { + // if the number of steals is -1, it was the pre-emptive -1 steal + // count from when we inherited a blocker. This is fine because + // we're just going to overwrite it with a real value. + let old = self.steals.get(); + assert!(*old == 0 || *old == -1); + *old = steals; + prev >= 0 + } } } } diff --git a/src/libstd/sync/mpsc/stream.rs b/src/libstd/sync/mpsc/stream.rs index 61c8316467..47cd8977fd 100644 --- a/src/libstd/sync/mpsc/stream.rs +++ b/src/libstd/sync/mpsc/stream.rs @@ -22,8 +22,10 @@ pub use self::UpgradeResult::*; pub use self::SelectionResult::*; use self::Message::*; +use cell::UnsafeCell; use core::cmp; use core::isize; +use ptr; use thread; use time::Instant; @@ -42,7 +44,7 @@ pub struct Packet { queue: spsc::Queue>, // internal queue for all message cnt: AtomicIsize, // How many items are on this channel - steals: isize, // How many times has a port received without blocking? + steals: UnsafeCell, // How many times has a port received without blocking? 
to_wake: AtomicUsize, // SignalToken for the blocked thread to wake up port_dropped: AtomicBool, // flag if the channel has been destroyed. @@ -79,14 +81,14 @@ impl Packet { queue: unsafe { spsc::Queue::new(128) }, cnt: AtomicIsize::new(0), - steals: 0, + steals: UnsafeCell::new(0), to_wake: AtomicUsize::new(0), port_dropped: AtomicBool::new(false), } } - pub fn send(&mut self, t: T) -> Result<(), T> { + pub fn send(&self, t: T) -> Result<(), T> { // If the other port has deterministically gone away, then definitely // must return the data back up the stack. Otherwise, the data is // considered as being sent. @@ -99,7 +101,7 @@ impl Packet { Ok(()) } - pub fn upgrade(&mut self, up: Receiver) -> UpgradeResult { + pub fn upgrade(&self, up: Receiver) -> UpgradeResult { // If the port has gone away, then there's no need to proceed any // further. if self.port_dropped.load(Ordering::SeqCst) { return UpDisconnected } @@ -107,7 +109,7 @@ impl Packet { self.do_send(GoUp(up)) } - fn do_send(&mut self, t: Message) -> UpgradeResult { + fn do_send(&self, t: Message) -> UpgradeResult { self.queue.push(t); match self.cnt.fetch_add(1, Ordering::SeqCst) { // As described in the mod's doc comment, -1 == wakeup @@ -141,7 +143,7 @@ impl Packet { } // Consumes ownership of the 'to_wake' field. - fn take_to_wake(&mut self) -> SignalToken { + fn take_to_wake(&self) -> SignalToken { let ptr = self.to_wake.load(Ordering::SeqCst); self.to_wake.store(0, Ordering::SeqCst); assert!(ptr != 0); @@ -151,13 +153,12 @@ impl Packet { // Decrements the count on the channel for a sleeper, returning the sleeper // back if it shouldn't sleep. Note that this is the location where we take // steals into account. - fn decrement(&mut self, token: SignalToken) -> Result<(), SignalToken> { + fn decrement(&self, token: SignalToken) -> Result<(), SignalToken> { assert_eq!(self.to_wake.load(Ordering::SeqCst), 0); let ptr = unsafe { token.cast_to_usize() }; self.to_wake.store(ptr, Ordering::SeqCst); - let steals = self.steals; - self.steals = 0; + let steals = unsafe { ptr::replace(self.steals.get(), 0) }; match self.cnt.fetch_sub(1 + steals, Ordering::SeqCst) { DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); } @@ -173,7 +174,7 @@ impl Packet { Err(unsafe { SignalToken::cast_from_usize(ptr) }) } - pub fn recv(&mut self, deadline: Option) -> Result> { + pub fn recv(&self, deadline: Option) -> Result> { // Optimistic preflight check (scheduling is expensive). match self.try_recv() { Err(Empty) => {} @@ -199,16 +200,16 @@ impl Packet { // a steal, so offset the decrement here (we already have our // "steal" factored into the channel count above). data @ Ok(..) | - data @ Err(Upgraded(..)) => { - self.steals -= 1; + data @ Err(Upgraded(..)) => unsafe { + *self.steals.get() -= 1; data - } + }, data => data, } } - pub fn try_recv(&mut self) -> Result> { + pub fn try_recv(&self) -> Result> { match self.queue.pop() { // If we stole some data, record to that effect (this will be // factored into cnt later on). @@ -221,26 +222,26 @@ impl Packet { // a pretty slow operation, of swapping 0 into cnt, taking steals // down as much as possible (without going negative), and then // adding back in whatever we couldn't factor into steals. 
- Some(data) => { - if self.steals > MAX_STEALS { + Some(data) => unsafe { + if *self.steals.get() > MAX_STEALS { match self.cnt.swap(0, Ordering::SeqCst) { DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); } n => { - let m = cmp::min(n, self.steals); - self.steals -= m; + let m = cmp::min(n, *self.steals.get()); + *self.steals.get() -= m; self.bump(n - m); } } - assert!(self.steals >= 0); + assert!(*self.steals.get() >= 0); } - self.steals += 1; + *self.steals.get() += 1; match data { Data(t) => Ok(t), GoUp(up) => Err(Upgraded(up)), } - } + }, None => { match self.cnt.load(Ordering::SeqCst) { @@ -269,7 +270,7 @@ impl Packet { } } - pub fn drop_chan(&mut self) { + pub fn drop_chan(&self) { // Dropping a channel is pretty simple, we just flag it as disconnected // and then wakeup a blocker if there is one. match self.cnt.swap(DISCONNECTED, Ordering::SeqCst) { @@ -279,7 +280,7 @@ impl Packet { } } - pub fn drop_port(&mut self) { + pub fn drop_port(&self) { // Dropping a port seems like a fairly trivial thing. In theory all we // need to do is flag that we're disconnected and then everything else // can take over (we don't have anyone to wake up). @@ -309,7 +310,7 @@ impl Packet { // continue to fail while active senders send data while we're dropping // data, but eventually we're guaranteed to break out of this loop // (because there is a bounded number of senders). - let mut steals = self.steals; + let mut steals = unsafe { *self.steals.get() }; while { let cnt = self.cnt.compare_and_swap( steals, DISCONNECTED, Ordering::SeqCst); @@ -332,7 +333,7 @@ impl Packet { // Tests to see whether this port can receive without blocking. If Ok is // returned, then that's the answer. If Err is returned, then the returned // port needs to be queried instead (an upgrade happened) - pub fn can_recv(&mut self) -> Result> { + pub fn can_recv(&self) -> Result> { // We peek at the queue to see if there's anything on it, and we use // this return value to determine if we should pop from the queue and // upgrade this channel immediately. If it looks like we've got an @@ -351,7 +352,7 @@ impl Packet { } // increment the count on the channel (used for selection) - fn bump(&mut self, amt: isize) -> isize { + fn bump(&self, amt: isize) -> isize { match self.cnt.fetch_add(amt, Ordering::SeqCst) { DISCONNECTED => { self.cnt.store(DISCONNECTED, Ordering::SeqCst); @@ -363,7 +364,7 @@ impl Packet { // Attempts to start selecting on this port. Like a oneshot, this can fail // immediately because of an upgrade. - pub fn start_selection(&mut self, token: SignalToken) -> SelectionResult { + pub fn start_selection(&self, token: SignalToken) -> SelectionResult { match self.decrement(token) { Ok(()) => SelSuccess, Err(token) => { @@ -387,7 +388,7 @@ impl Packet { } // Removes a previous thread from being blocked in this port - pub fn abort_selection(&mut self, + pub fn abort_selection(&self, was_upgrade: bool) -> Result> { // If we're aborting selection after upgrading from a oneshot, then // we're guarantee that no one is waiting. The only way that we could @@ -403,7 +404,7 @@ impl Packet { // this end. This is fine because we know it's a small bounded windows // of time until the data is actually sent. 
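`drop_port` here (and its counterpart in shared.rs) loops on a compare-and-swap until it manages to trade the expected count for `DISCONNECTED`. The sketch below captures that retry handshake with invented names; it uses today's `compare_exchange` API rather than the `compare_and_swap` call in the patch, and simply retries with whatever value it observed, whereas the real channel recomputes its expectation by draining the queue:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

const DISCONNECTED: isize = isize::MIN;

// Keep proposing `expected -> DISCONNECTED` until either we win the race or
// someone else has already disconnected the channel.
fn disconnect(cnt: &AtomicIsize, mut expected: isize) -> bool {
    loop {
        match cnt.compare_exchange(expected, DISCONNECTED,
                                   Ordering::SeqCst, Ordering::SeqCst) {
            Ok(_) => return true,              // we flipped it
            Err(DISCONNECTED) => return false, // already disconnected
            Err(actual) => expected = actual,  // lost the race; retry
        }
    }
}

fn main() {
    let cnt = AtomicIsize::new(3);
    assert!(disconnect(&cnt, 0));  // succeeds after one retry
    assert!(!disconnect(&cnt, 0)); // observes DISCONNECTED and gives up
    assert_eq!(cnt.load(Ordering::SeqCst), DISCONNECTED);
}
```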
if was_upgrade { - assert_eq!(self.steals, 0); + assert_eq!(unsafe { *self.steals.get() }, 0); assert_eq!(self.to_wake.load(Ordering::SeqCst), 0); return Ok(true) } @@ -444,8 +445,10 @@ impl Packet { thread::yield_now(); } } - assert_eq!(self.steals, 0); - self.steals = steals; + unsafe { + assert_eq!(*self.steals.get(), 0); + *self.steals.get() = steals; + } // if we were previously positive, then there's surely data to // receive diff --git a/src/libstd/sync/mutex.rs b/src/libstd/sync/mutex.rs index 812724c7a1..df4a3746a4 100644 --- a/src/libstd/sync/mutex.rs +++ b/src/libstd/sync/mutex.rs @@ -133,7 +133,14 @@ unsafe impl Sync for Mutex { } /// dropped (falls out of scope), the lock will be unlocked. /// /// The data protected by the mutex can be access through this guard via its -/// `Deref` and `DerefMut` implementations +/// `Deref` and `DerefMut` implementations. +/// +/// This structure is created by the [`lock()`] and [`try_lock()`] methods on +/// [`Mutex`]. +/// +/// [`lock()`]: struct.Mutex.html#method.lock +/// [`try_lock()`]: struct.Mutex.html#method.try_lock +/// [`Mutex`]: struct.Mutex.html #[must_use] #[stable(feature = "rust1", since = "1.0.0")] pub struct MutexGuard<'a, T: ?Sized + 'a> { diff --git a/src/libstd/sync/rwlock.rs b/src/libstd/sync/rwlock.rs index f08b764152..f83cf7ba9c 100644 --- a/src/libstd/sync/rwlock.rs +++ b/src/libstd/sync/rwlock.rs @@ -77,6 +77,13 @@ unsafe impl Sync for RwLock {} /// RAII structure used to release the shared read access of a lock when /// dropped. +/// +/// This structure is created by the [`read()`] and [`try_read()`] methods on +/// [`RwLock`]. +/// +/// [`read()`]: struct.RwLock.html#method.read +/// [`try_read()`]: struct.RwLock.html#method.try_read +/// [`RwLock`]: struct.RwLock.html #[must_use] #[stable(feature = "rust1", since = "1.0.0")] pub struct RwLockReadGuard<'a, T: ?Sized + 'a> { @@ -88,6 +95,13 @@ impl<'a, T: ?Sized> !marker::Send for RwLockReadGuard<'a, T> {} /// RAII structure used to release the exclusive write access of a lock when /// dropped. +/// +/// This structure is created by the [`write()`] and [`try_write()`] methods +/// on [`RwLock`]. +/// +/// [`write()`]: struct.RwLock.html#method.write +/// [`try_write()`]: struct.RwLock.html#method.try_write +/// [`RwLock`]: struct.RwLock.html #[must_use] #[stable(feature = "rust1", since = "1.0.0")] pub struct RwLockWriteGuard<'a, T: ?Sized + 'a> { diff --git a/src/libstd/sys/mod.rs b/src/libstd/sys/mod.rs index 84f41a1c53..e4b0d980c9 100644 --- a/src/libstd/sys/mod.rs +++ b/src/libstd/sys/mod.rs @@ -23,7 +23,7 @@ //! integration code in `std::sys_common`. See that module's //! documentation for details. //! -//! In the future it would be desirable for the indepedent +//! In the future it would be desirable for the independent //! implementations of this module to be extracted to their own crates //! that `std` can link to, thus enabling their implementation //! out-of-tree via crate replacement. Though due to the complex @@ -32,6 +32,10 @@ pub use self::imp::*; +#[cfg(target_os = "redox")] +#[path = "redox/mod.rs"] +mod imp; + #[cfg(unix)] #[path = "unix/mod.rs"] mod imp; diff --git a/src/libstd/sys/redox/args.rs b/src/libstd/sys/redox/args.rs new file mode 100644 index 0000000000..f6fea2f107 --- /dev/null +++ b/src/libstd/sys/redox/args.rs @@ -0,0 +1,109 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. 
+// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Global initialization and retreival of command line arguments. +//! +//! On some platforms these are stored during runtime startup, +//! and on some they are retrieved from the system on demand. + +#![allow(dead_code)] // runtime init functions not used during testing + +use ffi::OsString; +use marker::PhantomData; +use vec; + +/// One-time global initialization. +pub unsafe fn init(argc: isize, argv: *const *const u8) { imp::init(argc, argv) } + +/// One-time global cleanup. +pub unsafe fn cleanup() { imp::cleanup() } + +/// Returns the command line arguments +pub fn args() -> Args { + imp::args() +} + +pub struct Args { + iter: vec::IntoIter, + _dont_send_or_sync_me: PhantomData<*mut ()>, +} + +impl Iterator for Args { + type Item = OsString; + fn next(&mut self) -> Option { self.iter.next() } + fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } +} + +impl ExactSizeIterator for Args { + fn len(&self) -> usize { self.iter.len() } +} + +impl DoubleEndedIterator for Args { + fn next_back(&mut self) -> Option { self.iter.next_back() } +} + +mod imp { + use os::unix::prelude::*; + use mem; + use ffi::OsString; + use marker::PhantomData; + use slice; + use str; + use super::Args; + + use sys_common::mutex::Mutex; + + static mut GLOBAL_ARGS_PTR: usize = 0; + static LOCK: Mutex = Mutex::new(); + + pub unsafe fn init(argc: isize, argv: *const *const u8) { + let mut args: Vec> = Vec::new(); + for i in 0..argc { + let len = *(argv.offset(i * 2)) as usize; + let ptr = *(argv.offset(i * 2 + 1)); + args.push(slice::from_raw_parts(ptr, len).to_vec()); + } + + LOCK.lock(); + let ptr = get_global_ptr(); + assert!((*ptr).is_none()); + (*ptr) = Some(box args); + LOCK.unlock(); + } + + pub unsafe fn cleanup() { + LOCK.lock(); + *get_global_ptr() = None; + LOCK.unlock(); + } + + pub fn args() -> Args { + let bytes = clone().unwrap_or(Vec::new()); + let v: Vec = bytes.into_iter().map(|v| { + OsStringExt::from_vec(v) + }).collect(); + Args { iter: v.into_iter(), _dont_send_or_sync_me: PhantomData } + } + + fn clone() -> Option>> { + unsafe { + LOCK.lock(); + let ptr = get_global_ptr(); + let ret = (*ptr).as_ref().map(|s| (**s).clone()); + LOCK.unlock(); + return ret + } + } + + fn get_global_ptr() -> *mut Option>>> { + unsafe { mem::transmute(&GLOBAL_ARGS_PTR) } + } + +} diff --git a/src/libstd/sys/redox/backtrace.rs b/src/libstd/sys/redox/backtrace.rs new file mode 100644 index 0000000000..6f53841502 --- /dev/null +++ b/src/libstd/sys/redox/backtrace.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use libc; +use io; +use sys_common::backtrace::output; + +#[inline(never)] +pub fn write(w: &mut io::Write) -> io::Result<()> { + output(w, 0, 0 as *mut libc::c_void, None) +} diff --git a/src/libstd/sys/redox/condvar.rs b/src/libstd/sys/redox/condvar.rs new file mode 100644 index 0000000000..0ca0987b24 --- /dev/null +++ b/src/libstd/sys/redox/condvar.rs @@ -0,0 +1,106 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use cell::UnsafeCell; +use intrinsics::{atomic_cxchg, atomic_xadd, atomic_xchg}; +use ptr; +use time::Duration; + +use sys::mutex::{mutex_lock, mutex_unlock, Mutex}; +use sys::syscall::{futex, FUTEX_WAIT, FUTEX_WAKE, FUTEX_REQUEUE}; + +pub struct Condvar { + lock: UnsafeCell<*mut i32>, + seq: UnsafeCell +} + +impl Condvar { + pub const fn new() -> Condvar { + Condvar { + lock: UnsafeCell::new(ptr::null_mut()), + seq: UnsafeCell::new(0) + } + } + + #[inline] + pub unsafe fn init(&self) { + *self.lock.get() = ptr::null_mut(); + *self.seq.get() = 0; + } + + #[inline] + pub fn notify_one(&self) { + unsafe { + let seq = self.seq.get(); + + atomic_xadd(seq, 1); + + let _ = futex(seq, FUTEX_WAKE, 1, 0, ptr::null_mut()); + } + } + + #[inline] + pub fn notify_all(&self) { + unsafe { + let lock = self.lock.get(); + let seq = self.seq.get(); + + if *lock == ptr::null_mut() { + return; + } + + atomic_xadd(seq, 1); + + let _ = futex(seq, FUTEX_REQUEUE, 1, ::usize::MAX, *lock); + } + } + + #[inline] + pub fn wait(&self, mutex: &Mutex) { + unsafe { + let lock = self.lock.get(); + let seq = self.seq.get(); + + if *lock != mutex.lock.get() { + if *lock != ptr::null_mut() { + panic!("Condvar used with more than one Mutex"); + } + + atomic_cxchg(lock as *mut usize, 0, mutex.lock.get() as usize); + } + + mutex_unlock(*lock); + + let _ = futex(seq, FUTEX_WAIT, *seq, 0, ptr::null_mut()); + + while atomic_xchg(*lock, 2) != 0 { + let _ = futex(*lock, FUTEX_WAIT, 2, 0, ptr::null_mut()); + } + + mutex_lock(*lock); + } + } + + #[inline] + pub fn wait_timeout(&self, _mutex: &Mutex, _dur: Duration) -> bool { + ::sys_common::util::dumb_print(format_args!("condvar wait_timeout\n")); + unimplemented!(); + } + + #[inline] + pub unsafe fn destroy(&self) { + *self.lock.get() = ptr::null_mut(); + *self.seq.get() = 0; + } +} + +unsafe impl Send for Condvar {} + +unsafe impl Sync for Condvar {} diff --git a/src/libstd/sys/redox/env.rs b/src/libstd/sys/redox/env.rs new file mode 100644 index 0000000000..669b7520df --- /dev/null +++ b/src/libstd/sys/redox/env.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +pub mod os { + pub const FAMILY: &'static str = "redox"; + pub const OS: &'static str = "redox"; + pub const DLL_PREFIX: &'static str = "lib"; + pub const DLL_SUFFIX: &'static str = ".so"; + pub const DLL_EXTENSION: &'static str = "so"; + pub const EXE_SUFFIX: &'static str = ""; + pub const EXE_EXTENSION: &'static str = ""; +} diff --git a/src/libstd/sys/redox/ext/ffi.rs b/src/libstd/sys/redox/ext/ffi.rs new file mode 100644 index 0000000000..d59b4fc0b7 --- /dev/null +++ b/src/libstd/sys/redox/ext/ffi.rs @@ -0,0 +1,61 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +//! Unix-specific extension to the primitives in the `std::ffi` module + +#![stable(feature = "rust1", since = "1.0.0")] + +use ffi::{OsStr, OsString}; +use mem; +use sys::os_str::Buf; +use sys_common::{FromInner, IntoInner, AsInner}; + +/// Unix-specific extensions to `OsString`. +#[stable(feature = "rust1", since = "1.0.0")] +pub trait OsStringExt { + /// Creates an `OsString` from a byte vector. + #[stable(feature = "rust1", since = "1.0.0")] + fn from_vec(vec: Vec) -> Self; + + /// Yields the underlying byte vector of this `OsString`. + #[stable(feature = "rust1", since = "1.0.0")] + fn into_vec(self) -> Vec; +} + +#[stable(feature = "rust1", since = "1.0.0")] +impl OsStringExt for OsString { + fn from_vec(vec: Vec) -> OsString { + FromInner::from_inner(Buf { inner: vec }) + } + fn into_vec(self) -> Vec { + self.into_inner().inner + } +} + +/// Unix-specific extensions to `OsStr`. +#[stable(feature = "rust1", since = "1.0.0")] +pub trait OsStrExt { + #[stable(feature = "rust1", since = "1.0.0")] + fn from_bytes(slice: &[u8]) -> &Self; + + /// Gets the underlying byte view of the `OsStr` slice. + #[stable(feature = "rust1", since = "1.0.0")] + fn as_bytes(&self) -> &[u8]; +} + +#[stable(feature = "rust1", since = "1.0.0")] +impl OsStrExt for OsStr { + fn from_bytes(slice: &[u8]) -> &OsStr { + unsafe { mem::transmute(slice) } + } + fn as_bytes(&self) -> &[u8] { + &self.as_inner().inner + } +} diff --git a/src/libstd/sys/redox/ext/fs.rs b/src/libstd/sys/redox/ext/fs.rs new file mode 100644 index 0000000000..b4e220971f --- /dev/null +++ b/src/libstd/sys/redox/ext/fs.rs @@ -0,0 +1,298 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Unix-specific extensions to primitives in the `std::fs` module. + +#![stable(feature = "rust1", since = "1.0.0")] + +use fs::{self, Permissions, OpenOptions}; +use io; +use path::Path; +use sys; +use sys_common::{FromInner, AsInner, AsInnerMut}; + +/// Unix-specific extensions to `Permissions` +#[stable(feature = "fs_ext", since = "1.1.0")] +pub trait PermissionsExt { + /// Returns the underlying raw `mode_t` bits that are the standard Unix + /// permissions for this file. + /// + /// # Examples + /// + /// ```rust,ignore + /// use std::fs::File; + /// use std::os::unix::fs::PermissionsExt; + /// + /// let f = try!(File::create("foo.txt")); + /// let metadata = try!(f.metadata()); + /// let permissions = metadata.permissions(); + /// + /// println!("permissions: {}", permissions.mode()); + /// ``` + #[stable(feature = "fs_ext", since = "1.1.0")] + fn mode(&self) -> u32; + + /// Sets the underlying raw bits for this set of permissions. + /// + /// # Examples + /// + /// ```rust,ignore + /// use std::fs::File; + /// use std::os::unix::fs::PermissionsExt; + /// + /// let f = try!(File::create("foo.txt")); + /// let metadata = try!(f.metadata()); + /// let mut permissions = metadata.permissions(); + /// + /// permissions.set_mode(0o644); // Read/write for owner and read for others. 
+ /// assert_eq!(permissions.mode(), 0o644); + /// ``` + #[stable(feature = "fs_ext", since = "1.1.0")] + fn set_mode(&mut self, mode: u32); + + /// Creates a new instance of `Permissions` from the given set of Unix + /// permission bits. + /// + /// # Examples + /// + /// ```rust,ignore + /// use std::fs::Permissions; + /// use std::os::unix::fs::PermissionsExt; + /// + /// // Read/write for owner and read for others. + /// let permissions = Permissions::from_mode(0o644); + /// assert_eq!(permissions.mode(), 0o644); + /// ``` + #[stable(feature = "fs_ext", since = "1.1.0")] + fn from_mode(mode: u32) -> Self; +} + +#[stable(feature = "fs_ext", since = "1.1.0")] +impl PermissionsExt for Permissions { + fn mode(&self) -> u32 { + self.as_inner().mode() + } + + fn set_mode(&mut self, mode: u32) { + *self = Permissions::from_inner(FromInner::from_inner(mode)); + } + + fn from_mode(mode: u32) -> Permissions { + Permissions::from_inner(FromInner::from_inner(mode)) + } +} + +/// Unix-specific extensions to `OpenOptions` +#[stable(feature = "fs_ext", since = "1.1.0")] +pub trait OpenOptionsExt { + /// Sets the mode bits that a new file will be created with. + /// + /// If a new file is created as part of a `File::open_opts` call then this + /// specified `mode` will be used as the permission bits for the new file. + /// If no `mode` is set, the default of `0o666` will be used. + /// The operating system masks out bits with the systems `umask`, to produce + /// the final permissions. + /// + /// # Examples + /// + /// ```rust,ignore + /// extern crate libc; + /// use std::fs::OpenOptions; + /// use std::os::unix::fs::OpenOptionsExt; + /// + /// let mut options = OpenOptions::new(); + /// options.mode(0o644); // Give read/write for owner and read for others. + /// let file = options.open("foo.txt"); + /// ``` + #[stable(feature = "fs_ext", since = "1.1.0")] + fn mode(&mut self, mode: u32) -> &mut Self; + + /// Pass custom flags to the `flags` agument of `open`. + /// + /// The bits that define the access mode are masked out with `O_ACCMODE`, to + /// ensure they do not interfere with the access mode set by Rusts options. + /// + /// Custom flags can only set flags, not remove flags set by Rusts options. + /// This options overwrites any previously set custom flags. + /// + /// # Examples + /// + /// ```rust,ignore + /// extern crate libc; + /// use std::fs::OpenOptions; + /// use std::os::unix::fs::OpenOptionsExt; + /// + /// let mut options = OpenOptions::new(); + /// options.write(true); + /// if cfg!(unix) { + /// options.custom_flags(libc::O_NOFOLLOW); + /// } + /// let file = options.open("foo.txt"); + /// ``` + #[stable(feature = "open_options_ext", since = "1.10.0")] + fn custom_flags(&mut self, flags: i32) -> &mut Self; +} + +#[stable(feature = "fs_ext", since = "1.1.0")] +impl OpenOptionsExt for OpenOptions { + fn mode(&mut self, mode: u32) -> &mut OpenOptions { + self.as_inner_mut().mode(mode); self + } + + fn custom_flags(&mut self, flags: i32) -> &mut OpenOptions { + self.as_inner_mut().custom_flags(flags); self + } +} + +// Hm, why are there casts here to the returned type, shouldn't the types always +// be the same? Right you are! Turns out, however, on android at least the types +// in the raw `stat` structure are not the same as the types being returned. Who +// knew! +// +// As a result to make sure this compiles for all platforms we do the manual +// casts and rely on manual lowering to `stat` if the raw type is desired. 
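The `MetadataExt` trait declared next hands back the raw `stat` fields as plain integers. A small usage sketch, written against the identically shaped trait in `std::os::unix::fs` (the path is only an example):

```rust
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;

fn main() -> io::Result<()> {
    let meta = fs::metadata("Cargo.toml")?;
    println!("mode:  {:o}", meta.mode() & 0o777);
    println!("owner: uid {} / gid {}", meta.uid(), meta.gid());
    println!("size:  {} bytes", meta.size());
    println!("mtime: {} s ({} ns)", meta.mtime(), meta.mtime_nsec());
    Ok(())
}
```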
+#[stable(feature = "metadata_ext", since = "1.1.0")] +pub trait MetadataExt { + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn mode(&self) -> u32; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn uid(&self) -> u32; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn gid(&self) -> u32; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn size(&self) -> u64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn atime(&self) -> i64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn atime_nsec(&self) -> i64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn mtime(&self) -> i64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn mtime_nsec(&self) -> i64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn ctime(&self) -> i64; + #[stable(feature = "metadata_ext", since = "1.1.0")] + fn ctime_nsec(&self) -> i64; +} + +#[stable(feature = "metadata_ext", since = "1.1.0")] +impl MetadataExt for fs::Metadata { + fn mode(&self) -> u32 { + self.as_inner().as_inner().st_mode as u32 + } + fn uid(&self) -> u32 { + self.as_inner().as_inner().st_uid as u32 + } + fn gid(&self) -> u32 { + self.as_inner().as_inner().st_gid as u32 + } + fn size(&self) -> u64 { + self.as_inner().as_inner().st_size as u64 + } + fn atime(&self) -> i64 { + self.as_inner().as_inner().st_atime as i64 + } + fn atime_nsec(&self) -> i64 { + self.as_inner().as_inner().st_atime_nsec as i64 + } + fn mtime(&self) -> i64 { + self.as_inner().as_inner().st_mtime as i64 + } + fn mtime_nsec(&self) -> i64 { + self.as_inner().as_inner().st_mtime_nsec as i64 + } + fn ctime(&self) -> i64 { + self.as_inner().as_inner().st_ctime as i64 + } + fn ctime_nsec(&self) -> i64 { + self.as_inner().as_inner().st_ctime_nsec as i64 + } +} + +/// Add special unix types (block/char device, fifo and socket) +#[stable(feature = "file_type_ext", since = "1.5.0")] +pub trait FileTypeExt { + /// Returns whether this file type is a block device. + #[stable(feature = "file_type_ext", since = "1.5.0")] + fn is_block_device(&self) -> bool; + /// Returns whether this file type is a char device. + #[stable(feature = "file_type_ext", since = "1.5.0")] + fn is_char_device(&self) -> bool; + /// Returns whether this file type is a fifo. + #[stable(feature = "file_type_ext", since = "1.5.0")] + fn is_fifo(&self) -> bool; + /// Returns whether this file type is a socket. + #[stable(feature = "file_type_ext", since = "1.5.0")] + fn is_socket(&self) -> bool; +} + +#[stable(feature = "file_type_ext", since = "1.5.0")] +impl FileTypeExt for fs::FileType { + fn is_block_device(&self) -> bool { false /*FIXME: Implement block device mode*/ } + fn is_char_device(&self) -> bool { false /*FIXME: Implement char device mode*/ } + fn is_fifo(&self) -> bool { false /*FIXME: Implement fifo mode*/ } + fn is_socket(&self) -> bool { false /*FIXME: Implement socket mode*/ } +} + +/// Creates a new symbolic link on the filesystem. +/// +/// The `dst` path will be a symbolic link pointing to the `src` path. +/// +/// # Note +/// +/// On Windows, you must specify whether a symbolic link points to a file +/// or directory. Use `os::windows::fs::symlink_file` to create a +/// symbolic link to a file, or `os::windows::fs::symlink_dir` to create a +/// symbolic link to a directory. Additionally, the process must have +/// `SeCreateSymbolicLinkPrivilege` in order to be able to create a +/// symbolic link. 
+/// +/// # Examples +/// +/// ``` +/// use std::os::unix::fs; +/// +/// # fn foo() -> std::io::Result<()> { +/// try!(fs::symlink("a.txt", "b.txt")); +/// # Ok(()) +/// # } +/// ``` +#[stable(feature = "symlink", since = "1.1.0")] +pub fn symlink, Q: AsRef>(src: P, dst: Q) -> io::Result<()> +{ + sys::fs::symlink(src.as_ref(), dst.as_ref()) +} + +#[stable(feature = "dir_builder", since = "1.6.0")] +/// An extension trait for `fs::DirBuilder` for unix-specific options. +pub trait DirBuilderExt { + /// Sets the mode to create new directories with. This option defaults to + /// 0o777. + /// + /// # Examples + /// + /// ```ignore + /// use std::fs::DirBuilder; + /// use std::os::unix::fs::DirBuilderExt; + /// + /// let mut builder = DirBuilder::new(); + /// builder.mode(0o755); + /// ``` + #[stable(feature = "dir_builder", since = "1.6.0")] + fn mode(&mut self, mode: u32) -> &mut Self; +} + +#[stable(feature = "dir_builder", since = "1.6.0")] +impl DirBuilderExt for fs::DirBuilder { + fn mode(&mut self, mode: u32) -> &mut fs::DirBuilder { + self.as_inner_mut().set_mode(mode); + self + } +} diff --git a/src/libstd/sys/redox/ext/io.rs b/src/libstd/sys/redox/ext/io.rs new file mode 100644 index 0000000000..135e31fae1 --- /dev/null +++ b/src/libstd/sys/redox/ext/io.rs @@ -0,0 +1,146 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Unix-specific extensions to general I/O primitives + +#![stable(feature = "rust1", since = "1.0.0")] + +use fs; +use sys; +use sys_common::{AsInner, FromInner, IntoInner}; + +/// Raw file descriptors. +#[stable(feature = "rust1", since = "1.0.0")] +pub type RawFd = usize; + +/// A trait to extract the raw unix file descriptor from an underlying +/// object. +/// +/// This is only available on unix platforms and must be imported in order +/// to call the method. Windows platforms have a corresponding `AsRawHandle` +/// and `AsRawSocket` set of traits. +#[stable(feature = "rust1", since = "1.0.0")] +pub trait AsRawFd { + /// Extracts the raw file descriptor. + /// + /// This method does **not** pass ownership of the raw file descriptor + /// to the caller. The descriptor is only guaranteed to be valid while + /// the original object has not yet been destroyed. + #[stable(feature = "rust1", since = "1.0.0")] + fn as_raw_fd(&self) -> RawFd; +} + +/// A trait to express the ability to construct an object from a raw file +/// descriptor. +#[stable(feature = "from_raw_os", since = "1.1.0")] +pub trait FromRawFd { + /// Constructs a new instances of `Self` from the given raw file + /// descriptor. + /// + /// This function **consumes ownership** of the specified file + /// descriptor. The returned object will take responsibility for closing + /// it when the object goes out of scope. + /// + /// This function is also unsafe as the primitives currently returned + /// have the contract that they are the sole owner of the file + /// descriptor they are wrapping. Usage of this function could + /// accidentally allow violating this contract which can cause memory + /// unsafety in code that relies on it being true. 
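Together with `IntoRawFd` below, these traits let a `File` cross a low-level or FFI boundary as a bare descriptor. A round-trip sketch against the matching traits in `std::os::unix::io` (the path is only an example):

```rust
use std::fs::File;
use std::io::{self, Write};
use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd};

fn main() -> io::Result<()> {
    let file = File::create("/tmp/raw-fd-demo.txt")?;
    println!("borrowed fd: {}", file.as_raw_fd());

    // Give up ownership of the descriptor, then rebuild a File from it.
    let fd = file.into_raw_fd();
    let mut file = unsafe { File::from_raw_fd(fd) }; // sole owner once more
    file.write_all(b"hello\n")?;
    Ok(())
}
```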
+ #[stable(feature = "from_raw_os", since = "1.1.0")] + unsafe fn from_raw_fd(fd: RawFd) -> Self; +} + +/// A trait to express the ability to consume an object and acquire ownership of +/// its raw file descriptor. +#[stable(feature = "into_raw_os", since = "1.4.0")] +pub trait IntoRawFd { + /// Consumes this object, returning the raw underlying file descriptor. + /// + /// This function **transfers ownership** of the underlying file descriptor + /// to the caller. Callers are then the unique owners of the file descriptor + /// and must close the descriptor once it's no longer needed. + #[stable(feature = "into_raw_os", since = "1.4.0")] + fn into_raw_fd(self) -> RawFd; +} + +#[stable(feature = "rust1", since = "1.0.0")] +impl AsRawFd for fs::File { + fn as_raw_fd(&self) -> RawFd { + self.as_inner().fd().raw() + } +} +#[stable(feature = "from_raw_os", since = "1.1.0")] +impl FromRawFd for fs::File { + unsafe fn from_raw_fd(fd: RawFd) -> fs::File { + fs::File::from_inner(sys::fs::File::from_inner(fd)) + } +} +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for fs::File { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_fd().into_raw() + } +} + +/* +#[stable(feature = "rust1", since = "1.0.0")] +impl AsRawFd for net::TcpStream { + fn as_raw_fd(&self) -> RawFd { *self.as_inner().socket().as_inner() } +} +#[stable(feature = "rust1", since = "1.0.0")] +impl AsRawFd for net::TcpListener { + fn as_raw_fd(&self) -> RawFd { *self.as_inner().socket().as_inner() } +} +#[stable(feature = "rust1", since = "1.0.0")] +impl AsRawFd for net::UdpSocket { + fn as_raw_fd(&self) -> RawFd { *self.as_inner().socket().as_inner() } +} + +#[stable(feature = "from_raw_os", since = "1.1.0")] +impl FromRawFd for net::TcpStream { + unsafe fn from_raw_fd(fd: RawFd) -> net::TcpStream { + let socket = sys::net::Socket::from_inner(fd); + net::TcpStream::from_inner(sys_common::net::TcpStream::from_inner(socket)) + } +} +#[stable(feature = "from_raw_os", since = "1.1.0")] +impl FromRawFd for net::TcpListener { + unsafe fn from_raw_fd(fd: RawFd) -> net::TcpListener { + let socket = sys::net::Socket::from_inner(fd); + net::TcpListener::from_inner(sys_common::net::TcpListener::from_inner(socket)) + } +} +#[stable(feature = "from_raw_os", since = "1.1.0")] +impl FromRawFd for net::UdpSocket { + unsafe fn from_raw_fd(fd: RawFd) -> net::UdpSocket { + let socket = sys::net::Socket::from_inner(fd); + net::UdpSocket::from_inner(sys_common::net::UdpSocket::from_inner(socket)) + } +} + +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for net::TcpStream { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_socket().into_inner() + } +} +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for net::TcpListener { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_socket().into_inner() + } +} +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for net::UdpSocket { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_socket().into_inner() + } +} +*/ diff --git a/src/libstd/sys/redox/ext/mod.rs b/src/libstd/sys/redox/ext/mod.rs new file mode 100644 index 0000000000..513ef272e9 --- /dev/null +++ b/src/libstd/sys/redox/ext/mod.rs @@ -0,0 +1,50 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +//! Experimental extensions to `std` for Unix platforms. +//! +//! For now, this module is limited to extracting file descriptors, +//! but its functionality will grow over time. +//! +//! # Example +//! +//! ```no_run +//! use std::fs::File; +//! use std::os::unix::prelude::*; +//! +//! fn main() { +//! let f = File::create("foo.txt").unwrap(); +//! let fd = f.as_raw_fd(); +//! +//! // use fd with native unix bindings +//! } +//! ``` + +#![stable(feature = "rust1", since = "1.0.0")] + +pub mod ffi; +pub mod fs; +pub mod io; +pub mod process; + +/// A prelude for conveniently writing platform-specific code. +/// +/// Includes all extension traits, and some important type definitions. +#[stable(feature = "rust1", since = "1.0.0")] +pub mod prelude { + #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] + pub use super::io::{RawFd, AsRawFd, FromRawFd, IntoRawFd}; + #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] + pub use super::ffi::{OsStrExt, OsStringExt}; + #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] + pub use super::fs::{FileTypeExt, PermissionsExt, OpenOptionsExt, MetadataExt}; + #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] + pub use super::process::{CommandExt, ExitStatusExt}; +} diff --git a/src/libstd/sys/redox/ext/process.rs b/src/libstd/sys/redox/ext/process.rs new file mode 100644 index 0000000000..c59524974b --- /dev/null +++ b/src/libstd/sys/redox/ext/process.rs @@ -0,0 +1,183 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Unix-specific extensions to primitives in the `std::process` module. + +#![stable(feature = "rust1", since = "1.0.0")] + +use io; +use os::unix::io::{FromRawFd, RawFd, AsRawFd, IntoRawFd}; +use process; +use sys; +use sys_common::{AsInnerMut, AsInner, FromInner, IntoInner}; + +/// Unix-specific extensions to the `std::process::Command` builder +#[stable(feature = "rust1", since = "1.0.0")] +pub trait CommandExt { + /// Sets the child process's user id. This translates to a + /// `setuid` call in the child process. Failure in the `setuid` + /// call will cause the spawn to fail. + #[stable(feature = "rust1", since = "1.0.0")] + fn uid(&mut self, id: u32) -> &mut process::Command; + + /// Similar to `uid`, but sets the group id of the child process. This has + /// the same semantics as the `uid` field. + #[stable(feature = "rust1", since = "1.0.0")] + fn gid(&mut self, id: u32) -> &mut process::Command; + + /// Schedules a closure to be run just before the `exec` function is + /// invoked. + /// + /// The closure is allowed to return an I/O error whose OS error code will + /// be communicated back to the parent and returned as an error from when + /// the spawn was requested. + /// + /// Multiple closures can be registered and they will be called in order of + /// their registration. If a closure returns `Err` then no further closures + /// will be called and the spawn operation will immediately return with a + /// failure. + /// + /// # Notes + /// + /// This closure will be run in the context of the child process after a + /// `fork`. 
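The `before_exec` hook described above runs in the forked child just before `exec`. A usage sketch against the matching trait in `std::os::unix::process`; note that on current compilers this method still exists but is deprecated in favour of the unsafe `pre_exec`:

```rust
use std::io;
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> io::Result<()> {
    #[allow(deprecated)]
    let status = Command::new("true")
        .before_exec(|| {
            // Runs after fork() and before exec(); keep the work
            // async-signal-safe (no allocation, no locks).
            Ok(())
        })
        .status()?;
    println!("child exited with {}", status);
    Ok(())
}
```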
This primarily means that any modificatons made to memory on + /// behalf of this closure will **not** be visible to the parent process. + /// This is often a very constrained environment where normal operations + /// like `malloc` or acquiring a mutex are not guaranteed to work (due to + /// other threads perhaps still running when the `fork` was run). + /// + /// When this closure is run, aspects such as the stdio file descriptors and + /// working directory have successfully been changed, so output to these + /// locations may not appear where intended. + #[stable(feature = "process_exec", since = "1.15.0")] + fn before_exec(&mut self, f: F) -> &mut process::Command + where F: FnMut() -> io::Result<()> + Send + Sync + 'static; + + /// Performs all the required setup by this `Command`, followed by calling + /// the `execvp` syscall. + /// + /// On success this function will not return, and otherwise it will return + /// an error indicating why the exec (or another part of the setup of the + /// `Command`) failed. + /// + /// This function, unlike `spawn`, will **not** `fork` the process to create + /// a new child. Like spawn, however, the default behavior for the stdio + /// descriptors will be to inherited from the current process. + /// + /// # Notes + /// + /// The process may be in a "broken state" if this function returns in + /// error. For example the working directory, environment variables, signal + /// handling settings, various user/group information, or aspects of stdio + /// file descriptors may have changed. If a "transactional spawn" is + /// required to gracefully handle errors it is recommended to use the + /// cross-platform `spawn` instead. + #[stable(feature = "process_exec2", since = "1.9.0")] + fn exec(&mut self) -> io::Error; +} + +#[stable(feature = "rust1", since = "1.0.0")] +impl CommandExt for process::Command { + fn uid(&mut self, id: u32) -> &mut process::Command { + self.as_inner_mut().uid(id); + self + } + + fn gid(&mut self, id: u32) -> &mut process::Command { + self.as_inner_mut().gid(id); + self + } + + fn before_exec(&mut self, f: F) -> &mut process::Command + where F: FnMut() -> io::Result<()> + Send + Sync + 'static + { + self.as_inner_mut().before_exec(Box::new(f)); + self + } + + fn exec(&mut self) -> io::Error { + self.as_inner_mut().exec(sys::process::Stdio::Inherit) + } +} + +/// Unix-specific extensions to `std::process::ExitStatus` +#[stable(feature = "rust1", since = "1.0.0")] +pub trait ExitStatusExt { + /// Creates a new `ExitStatus` from the raw underlying `i32` return value of + /// a process. + #[stable(feature = "exit_status_from", since = "1.12.0")] + fn from_raw(raw: i32) -> Self; + + /// If the process was terminated by a signal, returns that signal. 
+ #[stable(feature = "rust1", since = "1.0.0")] + fn signal(&self) -> Option; +} + +#[stable(feature = "rust1", since = "1.0.0")] +impl ExitStatusExt for process::ExitStatus { + fn from_raw(raw: i32) -> Self { + process::ExitStatus::from_inner(From::from(raw)) + } + + fn signal(&self) -> Option { + self.as_inner().signal() + } +} + +#[stable(feature = "process_extensions", since = "1.2.0")] +impl FromRawFd for process::Stdio { + unsafe fn from_raw_fd(fd: RawFd) -> process::Stdio { + let fd = sys::fd::FileDesc::new(fd); + let io = sys::process::Stdio::Fd(fd); + process::Stdio::from_inner(io) + } +} + +#[stable(feature = "process_extensions", since = "1.2.0")] +impl AsRawFd for process::ChildStdin { + fn as_raw_fd(&self) -> RawFd { + self.as_inner().fd().raw() + } +} + +#[stable(feature = "process_extensions", since = "1.2.0")] +impl AsRawFd for process::ChildStdout { + fn as_raw_fd(&self) -> RawFd { + self.as_inner().fd().raw() + } +} + +#[stable(feature = "process_extensions", since = "1.2.0")] +impl AsRawFd for process::ChildStderr { + fn as_raw_fd(&self) -> RawFd { + self.as_inner().fd().raw() + } +} + +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for process::ChildStdin { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_fd().into_raw() + } +} + +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for process::ChildStdout { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_fd().into_raw() + } +} + +#[stable(feature = "into_raw_os", since = "1.4.0")] +impl IntoRawFd for process::ChildStderr { + fn into_raw_fd(self) -> RawFd { + self.into_inner().into_fd().into_raw() + } +} diff --git a/src/libstd/sys/redox/fast_thread_local.rs b/src/libstd/sys/redox/fast_thread_local.rs new file mode 100644 index 0000000000..6eeae2d90e --- /dev/null +++ b/src/libstd/sys/redox/fast_thread_local.rs @@ -0,0 +1,116 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![cfg(target_thread_local)] +#![unstable(feature = "thread_local_internals", issue = "0")] + +use cell::{Cell, UnsafeCell}; +use intrinsics; +use ptr; + +pub struct Key { + inner: UnsafeCell>, + + // Metadata to keep track of the state of the destructor. Remember that + // these variables are thread-local, not global. + dtor_registered: Cell, + dtor_running: Cell, +} + +unsafe impl ::marker::Sync for Key { } + +impl Key { + pub const fn new() -> Key { + Key { + inner: UnsafeCell::new(None), + dtor_registered: Cell::new(false), + dtor_running: Cell::new(false) + } + } + + pub fn get(&'static self) -> Option<&'static UnsafeCell>> { + unsafe { + if intrinsics::needs_drop::() && self.dtor_running.get() { + return None + } + self.register_dtor(); + } + Some(&self.inner) + } + + unsafe fn register_dtor(&self) { + if !intrinsics::needs_drop::() || self.dtor_registered.get() { + return + } + + register_dtor(self as *const _ as *mut u8, + destroy_value::); + self.dtor_registered.set(true); + } +} + +unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) { + // The fallback implementation uses a vanilla OS-based TLS key to track + // the list of destructors that need to be run for this thread. The key + // then has its own destructor which runs all the other destructors. 
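This destructor bookkeeping is what ultimately runs `Drop` for `thread_local!` values when a thread exits. A user-level illustration of the behaviour the machinery provides (nothing below is Redox-specific):

```rust
use std::thread;

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

thread_local! {
    // Destroyed when the owning thread exits, via the platform's
    // TLS destructor registration.
    static LOCAL: Noisy = Noisy("thread-local value");
}

fn main() {
    thread::spawn(|| {
        LOCAL.with(|v| println!("using {}", v.0));
    }).join().unwrap();
    println!("worker thread joined; its TLS destructor has already run");
}
```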
+ // + // The destructor for DTORS is a little special in that it has a `while` + // loop to continuously drain the list of registered destructors. It + // *should* be the case that this loop always terminates because we + // provide the guarantee that a TLS key cannot be set after it is + // flagged for destruction. + use sys_common::thread_local as os; + + static DTORS: os::StaticKey = os::StaticKey::new(Some(run_dtors)); + type List = Vec<(*mut u8, unsafe extern fn(*mut u8))>; + if DTORS.get().is_null() { + let v: Box = box Vec::new(); + DTORS.set(Box::into_raw(v) as *mut u8); + } + let list: &mut List = &mut *(DTORS.get() as *mut List); + list.push((t, dtor)); + + unsafe extern fn run_dtors(mut ptr: *mut u8) { + while !ptr.is_null() { + let list: Box = Box::from_raw(ptr as *mut List); + for &(ptr, dtor) in list.iter() { + dtor(ptr); + } + ptr = DTORS.get(); + DTORS.set(ptr::null_mut()); + } + } +} + +pub unsafe extern fn destroy_value(ptr: *mut u8) { + let ptr = ptr as *mut Key; + // Right before we run the user destructor be sure to flag the + // destructor as running for this thread so calls to `get` will return + // `None`. + (*ptr).dtor_running.set(true); + + // The OSX implementation of TLS apparently had an odd aspect to it + // where the pointer we have may be overwritten while this destructor + // is running. Specifically if a TLS destructor re-accesses TLS it may + // trigger a re-initialization of all TLS variables, paving over at + // least some destroyed ones with initial values. + // + // This means that if we drop a TLS value in place on OSX that we could + // revert the value to its original state halfway through the + // destructor, which would be bad! + // + // Hence, we use `ptr::read` on OSX (to move to a "safe" location) + // instead of drop_in_place. + if cfg!(target_os = "macos") { + ptr::read((*ptr).inner.get()); + } else { + ptr::drop_in_place((*ptr).inner.get()); + } +} diff --git a/src/libstd/sys/redox/fd.rs b/src/libstd/sys/redox/fd.rs new file mode 100644 index 0000000000..b6de68a9dc --- /dev/null +++ b/src/libstd/sys/redox/fd.rs @@ -0,0 +1,100 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![unstable(reason = "not public", issue = "0", feature = "fd")] + +use io::{self, Read}; +use mem; +use sys::{cvt, syscall}; +use sys_common::AsInner; +use sys_common::io::read_to_end_uninitialized; + +pub struct FileDesc { + fd: usize, +} + +impl FileDesc { + pub fn new(fd: usize) -> FileDesc { + FileDesc { fd: fd } + } + + pub fn raw(&self) -> usize { self.fd } + + /// Extracts the actual filedescriptor without closing it. 
+ pub fn into_raw(self) -> usize { + let fd = self.fd; + mem::forget(self); + fd + } + + pub fn read(&self, buf: &mut [u8]) -> io::Result { + cvt(syscall::read(self.fd, buf)) + } + + pub fn read_to_end(&self, buf: &mut Vec) -> io::Result { + let mut me = self; + (&mut me).read_to_end(buf) + } + + pub fn write(&self, buf: &[u8]) -> io::Result { + cvt(syscall::write(self.fd, buf)) + } + + pub fn duplicate(&self) -> io::Result { + let new_fd = cvt(syscall::dup(self.fd, &[]))?; + Ok(FileDesc::new(new_fd)) + } + + pub fn nonblocking(&self) -> io::Result { + let flags = cvt(syscall::fcntl(self.fd, syscall::F_GETFL, 0))?; + Ok(flags & syscall::O_NONBLOCK == syscall::O_NONBLOCK) + } + + pub fn set_cloexec(&self) -> io::Result<()> { + let mut flags = cvt(syscall::fcntl(self.fd, syscall::F_GETFL, 0))?; + flags |= syscall::O_CLOEXEC; + cvt(syscall::fcntl(self.fd, syscall::F_SETFL, flags)).and(Ok(())) + } + + pub fn set_nonblocking(&self, nonblocking: bool) -> io::Result<()> { + let mut flags = cvt(syscall::fcntl(self.fd, syscall::F_GETFL, 0))?; + if nonblocking { + flags |= syscall::O_NONBLOCK; + } else { + flags &= !syscall::O_NONBLOCK; + } + cvt(syscall::fcntl(self.fd, syscall::F_SETFL, flags)).and(Ok(())) + } +} + +impl<'a> Read for &'a FileDesc { + fn read(&mut self, buf: &mut [u8]) -> io::Result { + (**self).read(buf) + } + + fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { + unsafe { read_to_end_uninitialized(self, buf) } + } +} + +impl AsInner for FileDesc { + fn as_inner(&self) -> &usize { &self.fd } +} + +impl Drop for FileDesc { + fn drop(&mut self) { + // Note that errors are ignored when closing a file descriptor. The + // reason for this is that if an error occurs we don't actually know if + // the file descriptor was closed or not, and if we retried (for + // something like EINTR), we might close another valid file descriptor + // (opened after we closed ours. + let _ = syscall::close(self.fd); + } +} diff --git a/src/libstd/sys/redox/fs.rs b/src/libstd/sys/redox/fs.rs new file mode 100644 index 0000000000..e3bd77f400 --- /dev/null +++ b/src/libstd/sys/redox/fs.rs @@ -0,0 +1,469 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
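The `FileDesc` type above closes its descriptor in `Drop`, and `into_raw` hands the raw value back to the caller by calling `mem::forget` so that destructor never runs. Below is a minimal, platform-independent sketch of that ownership-transfer pattern; `RawHandle` and `close_handle` are hypothetical stand-ins, not part of this patch.

```rust
use std::mem;

// Hypothetical stand-ins for a raw OS resource and its close routine.
type RawHandle = usize;
fn close_handle(h: RawHandle) { println!("closed handle {}", h); }

struct Handle(RawHandle);

impl Handle {
    fn new(raw: RawHandle) -> Handle { Handle(raw) }

    // Transfer ownership of the raw value to the caller: `mem::forget`
    // suppresses `Drop`, so the handle is *not* closed here.
    fn into_raw(self) -> RawHandle {
        let raw = self.0;
        mem::forget(self);
        raw
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Errors from closing are ignored, mirroring the FileDesc Drop impl.
        close_handle(self.0);
    }
}

fn main() {
    let h = Handle::new(3);
    let raw = h.into_raw(); // no "closed" message here: ownership moved out
    close_handle(raw);      // the caller is now responsible for closing
}
```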
+ +use os::unix::prelude::*; + +use ffi::{OsString, OsStr}; +use fmt; +use io::{self, Error, ErrorKind, SeekFrom}; +use path::{Path, PathBuf}; +use sync::Arc; +use sys::fd::FileDesc; +use sys::time::SystemTime; +use sys::{cvt, syscall}; +use sys_common::{AsInner, FromInner}; + +pub struct File(FileDesc); + +#[derive(Clone)] +pub struct FileAttr { + stat: syscall::Stat, +} + +pub struct ReadDir { + data: Vec, + i: usize, + root: Arc, +} + +struct Dir(FileDesc); + +unsafe impl Send for Dir {} +unsafe impl Sync for Dir {} + +pub struct DirEntry { + root: Arc, + name: Box<[u8]> +} + +#[derive(Clone)] +pub struct OpenOptions { + // generic + read: bool, + write: bool, + append: bool, + truncate: bool, + create: bool, + create_new: bool, + // system-specific + custom_flags: i32, + mode: u16, +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub struct FilePermissions { mode: u16 } + +#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)] +pub struct FileType { mode: u16 } + +pub struct DirBuilder { mode: u16 } + +impl FileAttr { + pub fn size(&self) -> u64 { self.stat.st_size as u64 } + pub fn perm(&self) -> FilePermissions { + FilePermissions { mode: (self.stat.st_mode as u16) & 0o777 } + } + + pub fn file_type(&self) -> FileType { + FileType { mode: self.stat.st_mode as u16 } + } +} + +impl FileAttr { + pub fn modified(&self) -> io::Result { + Ok(SystemTime::from(syscall::TimeSpec { + tv_sec: self.stat.st_mtime as i64, + tv_nsec: self.stat.st_mtime_nsec as i32, + })) + } + + pub fn accessed(&self) -> io::Result { + Ok(SystemTime::from(syscall::TimeSpec { + tv_sec: self.stat.st_atime as i64, + tv_nsec: self.stat.st_atime_nsec as i32, + })) + } + + pub fn created(&self) -> io::Result { + Ok(SystemTime::from(syscall::TimeSpec { + tv_sec: self.stat.st_ctime as i64, + tv_nsec: self.stat.st_ctime_nsec as i32, + })) + } +} + +impl AsInner for FileAttr { + fn as_inner(&self) -> &syscall::Stat { &self.stat } +} + +impl FilePermissions { + pub fn readonly(&self) -> bool { self.mode & 0o222 == 0 } + pub fn set_readonly(&mut self, readonly: bool) { + if readonly { + self.mode &= !0o222; + } else { + self.mode |= 0o222; + } + } + pub fn mode(&self) -> u32 { self.mode as u32 } +} + +impl FileType { + pub fn is_dir(&self) -> bool { self.is(syscall::MODE_DIR) } + pub fn is_file(&self) -> bool { self.is(syscall::MODE_FILE) } + pub fn is_symlink(&self) -> bool { false /*FIXME: Implement symlink mode*/ } + + pub fn is(&self, mode: u16) -> bool { + self.mode & (syscall::MODE_DIR | syscall::MODE_FILE) == mode + } +} + +impl FromInner for FilePermissions { + fn from_inner(mode: u32) -> FilePermissions { + FilePermissions { mode: mode as u16 } + } +} + +impl fmt::Debug for ReadDir { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + // This will only be called from std::fs::ReadDir, which will add a "ReadDir()" frame. + // Thus the result will be e g 'ReadDir("/home")' + fmt::Debug::fmt(&*self.root, f) + } +} + +impl Iterator for ReadDir { + type Item = io::Result; + + fn next(&mut self) -> Option> { + loop { + let start = self.i; + let mut i = self.i; + while i < self.data.len() { + self.i += 1; + if self.data[i] == b'\n' { + break; + } + i += 1; + } + if start < self.i { + let ret = DirEntry { + name: self.data[start .. i].to_owned().into_boxed_slice(), + root: self.root.clone() + }; + if ret.name_bytes() != b"." && ret.name_bytes() != b".." 
{ + return Some(Ok(ret)) + } + } else { + return None; + } + } + } +} + +impl DirEntry { + pub fn path(&self) -> PathBuf { + self.root.join(OsStr::from_bytes(self.name_bytes())) + } + + pub fn file_name(&self) -> OsString { + OsStr::from_bytes(self.name_bytes()).to_os_string() + } + + pub fn metadata(&self) -> io::Result { + lstat(&self.path()) + } + + pub fn file_type(&self) -> io::Result { + lstat(&self.path()).map(|m| m.file_type()) + } + + fn name_bytes(&self) -> &[u8] { + &*self.name + } +} + +impl OpenOptions { + pub fn new() -> OpenOptions { + OpenOptions { + // generic + read: false, + write: false, + append: false, + truncate: false, + create: false, + create_new: false, + // system-specific + custom_flags: 0, + mode: 0o666, + } + } + + pub fn read(&mut self, read: bool) { self.read = read; } + pub fn write(&mut self, write: bool) { self.write = write; } + pub fn append(&mut self, append: bool) { self.append = append; } + pub fn truncate(&mut self, truncate: bool) { self.truncate = truncate; } + pub fn create(&mut self, create: bool) { self.create = create; } + pub fn create_new(&mut self, create_new: bool) { self.create_new = create_new; } + + pub fn custom_flags(&mut self, flags: i32) { self.custom_flags = flags; } + pub fn mode(&mut self, mode: u32) { self.mode = mode as u16; } + + fn get_access_mode(&self) -> io::Result { + match (self.read, self.write, self.append) { + (true, false, false) => Ok(syscall::O_RDONLY), + (false, true, false) => Ok(syscall::O_WRONLY), + (true, true, false) => Ok(syscall::O_RDWR), + (false, _, true) => Ok(syscall::O_WRONLY | syscall::O_APPEND), + (true, _, true) => Ok(syscall::O_RDWR | syscall::O_APPEND), + (false, false, false) => Err(Error::from_raw_os_error(syscall::EINVAL)), + } + } + + fn get_creation_mode(&self) -> io::Result { + match (self.write, self.append) { + (true, false) => {} + (false, false) => + if self.truncate || self.create || self.create_new { + return Err(Error::from_raw_os_error(syscall::EINVAL)); + }, + (_, true) => + if self.truncate && !self.create_new { + return Err(Error::from_raw_os_error(syscall::EINVAL)); + }, + } + + Ok(match (self.create, self.truncate, self.create_new) { + (false, false, false) => 0, + (true, false, false) => syscall::O_CREAT, + (false, true, false) => syscall::O_TRUNC, + (true, true, false) => syscall::O_CREAT | syscall::O_TRUNC, + (_, _, true) => syscall::O_CREAT | syscall::O_EXCL, + }) + } +} + +impl File { + pub fn open(path: &Path, opts: &OpenOptions) -> io::Result { + let flags = syscall::O_CLOEXEC | + opts.get_access_mode()? as usize | + opts.get_creation_mode()? 
as usize | + (opts.custom_flags as usize & !syscall::O_ACCMODE); + let fd = cvt(syscall::open(path.to_str().unwrap(), flags | opts.mode as usize))?; + Ok(File(FileDesc::new(fd))) + } + + pub fn file_attr(&self) -> io::Result { + let mut stat = syscall::Stat::default(); + cvt(syscall::fstat(self.0.raw(), &mut stat))?; + Ok(FileAttr { stat: stat }) + } + + pub fn fsync(&self) -> io::Result<()> { + cvt(syscall::fsync(self.0.raw()))?; + Ok(()) + } + + pub fn datasync(&self) -> io::Result<()> { + self.fsync() + } + + pub fn truncate(&self, size: u64) -> io::Result<()> { + cvt(syscall::ftruncate(self.0.raw(), size as usize))?; + Ok(()) + } + + pub fn read(&self, buf: &mut [u8]) -> io::Result { + self.0.read(buf) + } + + pub fn read_to_end(&self, buf: &mut Vec) -> io::Result { + self.0.read_to_end(buf) + } + + pub fn write(&self, buf: &[u8]) -> io::Result { + self.0.write(buf) + } + + pub fn flush(&self) -> io::Result<()> { Ok(()) } + + pub fn seek(&self, pos: SeekFrom) -> io::Result { + let (whence, pos) = match pos { + // Casting to `i64` is fine, too large values will end up as + // negative which will cause an error in `lseek64`. + SeekFrom::Start(off) => (syscall::SEEK_SET, off as i64), + SeekFrom::End(off) => (syscall::SEEK_END, off), + SeekFrom::Current(off) => (syscall::SEEK_CUR, off), + }; + let n = cvt(syscall::lseek(self.0.raw(), pos as isize, whence))?; + Ok(n as u64) + } + + pub fn duplicate(&self) -> io::Result { + self.0.duplicate().map(File) + } + + pub fn dup(&self, buf: &[u8]) -> io::Result { + let fd = cvt(syscall::dup(*self.fd().as_inner() as usize, buf))?; + Ok(File(FileDesc::new(fd))) + } + + pub fn set_permissions(&self, perm: FilePermissions) -> io::Result<()> { + set_perm(&self.path()?, perm) + } + + pub fn path(&self) -> io::Result { + let mut buf: [u8; 4096] = [0; 4096]; + let count = cvt(syscall::fpath(*self.fd().as_inner() as usize, &mut buf))?; + Ok(PathBuf::from(unsafe { String::from_utf8_unchecked(Vec::from(&buf[..count])) })) + } + + pub fn fd(&self) -> &FileDesc { &self.0 } + + pub fn into_fd(self) -> FileDesc { self.0 } +} + +impl DirBuilder { + pub fn new() -> DirBuilder { + DirBuilder { mode: 0o777 } + } + + pub fn mkdir(&self, p: &Path) -> io::Result<()> { + let flags = syscall::O_CREAT | syscall::O_DIRECTORY | syscall::O_EXCL; + let fd = cvt(syscall::open(p.to_str().unwrap(), flags | (self.mode as usize & 0o777)))?; + let _ = syscall::close(fd); + Ok(()) + } + + pub fn set_mode(&mut self, mode: u32) { + self.mode = mode as u16; + } +} + +impl FromInner for File { + fn from_inner(fd: usize) -> File { + File(FileDesc::new(fd)) + } +} + +impl fmt::Debug for File { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let mut b = f.debug_struct("File"); + b.field("fd", &self.0.raw()); + if let Ok(path) = self.path() { + b.field("path", &path); + } + /* + if let Some((read, write)) = get_mode(fd) { + b.field("read", &read).field("write", &write); + } + */ + b.finish() + } +} + +pub fn readdir(p: &Path) -> io::Result { + let root = Arc::new(p.to_path_buf()); + + let flags = syscall::O_CLOEXEC | syscall::O_RDONLY | syscall::O_DIRECTORY; + let fd = cvt(syscall::open(p.to_str().unwrap(), flags))?; + let file = FileDesc::new(fd); + let mut data = Vec::new(); + file.read_to_end(&mut data)?; + + Ok(ReadDir { data: data, i: 0, root: root }) +} + +pub fn unlink(p: &Path) -> io::Result<()> { + cvt(syscall::unlink(p.to_str().unwrap()))?; + Ok(()) +} + +pub fn rename(_old: &Path, _new: &Path) -> io::Result<()> { + 
::sys_common::util::dumb_print(format_args!("Rename\n")); + unimplemented!(); +} + +pub fn set_perm(p: &Path, perm: FilePermissions) -> io::Result<()> { + cvt(syscall::chmod(p.to_str().unwrap(), perm.mode as usize))?; + Ok(()) +} + +pub fn rmdir(p: &Path) -> io::Result<()> { + cvt(syscall::rmdir(p.to_str().unwrap()))?; + Ok(()) +} + +pub fn remove_dir_all(path: &Path) -> io::Result<()> { + let filetype = lstat(path)?.file_type(); + if filetype.is_symlink() { + unlink(path) + } else { + remove_dir_all_recursive(path) + } +} + +fn remove_dir_all_recursive(path: &Path) -> io::Result<()> { + for child in readdir(path)? { + let child = child?; + if child.file_type()?.is_dir() { + remove_dir_all_recursive(&child.path())?; + } else { + unlink(&child.path())?; + } + } + rmdir(path) +} + +pub fn readlink(p: &Path) -> io::Result { + canonicalize(p) +} + +pub fn symlink(_src: &Path, _dst: &Path) -> io::Result<()> { + ::sys_common::util::dumb_print(format_args!("Symlink\n")); + unimplemented!(); +} + +pub fn link(_src: &Path, _dst: &Path) -> io::Result<()> { + ::sys_common::util::dumb_print(format_args!("Link\n")); + unimplemented!(); +} + +pub fn stat(p: &Path) -> io::Result { + let fd = cvt(syscall::open(p.to_str().unwrap(), syscall::O_CLOEXEC | syscall::O_STAT))?; + let file = File(FileDesc::new(fd)); + file.file_attr() +} + +pub fn lstat(p: &Path) -> io::Result { + stat(p) +} + +pub fn canonicalize(p: &Path) -> io::Result { + let fd = cvt(syscall::open(p.to_str().unwrap(), syscall::O_CLOEXEC | syscall::O_STAT))?; + let file = File(FileDesc::new(fd)); + file.path() +} + +pub fn copy(from: &Path, to: &Path) -> io::Result { + use fs::{File, set_permissions}; + if !from.is_file() { + return Err(Error::new(ErrorKind::InvalidInput, + "the source path is not an existing regular file")) + } + + let mut reader = File::open(from)?; + let mut writer = File::create(to)?; + let perm = reader.metadata()?.permissions(); + + let ret = io::copy(&mut reader, &mut writer)?; + set_permissions(to, perm)?; + Ok(ret) +} diff --git a/src/libstd/sys/redox/memchr.rs b/src/libstd/sys/redox/memchr.rs new file mode 100644 index 0000000000..4c314b7a47 --- /dev/null +++ b/src/libstd/sys/redox/memchr.rs @@ -0,0 +1,14 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. +// +// Original implementation taken from rust-memchr +// Copyright 2015 Andrew Gallant, bluss and Nicolas Koch + +pub use sys_common::memchr::fallback::{memchr, memrchr}; diff --git a/src/libstd/sys/redox/mod.rs b/src/libstd/sys/redox/mod.rs new file mode 100644 index 0000000000..96efa27c0d --- /dev/null +++ b/src/libstd/sys/redox/mod.rs @@ -0,0 +1,95 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
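The `ReadDir` iterator earlier in this file consumes a directory as a flat byte buffer of newline-separated entry names and skips the `.` and `..` entries. A simplified, self-contained illustration of that parsing (the input bytes here are made-up demo data, not what Redox actually returns for any particular directory):

```rust
// Split a newline-separated name list, dropping empty entries and the
// "." / ".." pseudo-entries, as the Redox ReadDir iterator does.
fn entry_names(data: &[u8]) -> Vec<String> {
    data.split(|&b| b == b'\n')
        .map(|name| String::from_utf8_lossy(name).into_owned())
        .filter(|name| !name.is_empty() && name != "." && name != "..")
        .collect()
}

fn main() {
    let data = b".\n..\nCargo.toml\nsrc\n";
    assert_eq!(entry_names(data), ["Cargo.toml", "src"]);
}
```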
+ +#![allow(dead_code, missing_docs, bad_style)] + +pub extern crate syscall; + +use io::{self, ErrorKind}; + +pub mod args; +pub mod backtrace; +pub mod condvar; +pub mod env; +pub mod ext; +pub mod fast_thread_local; +pub mod fd; +pub mod fs; +pub mod memchr; +pub mod mutex; +pub mod net; +pub mod os; +pub mod os_str; +pub mod path; +pub mod pipe; +pub mod process; +pub mod rand; +pub mod rwlock; +pub mod stack_overflow; +pub mod stdio; +pub mod thread; +pub mod thread_local; +pub mod time; + +#[cfg(not(test))] +pub fn init() { + use alloc::oom; + + oom::set_oom_handler(oom_handler); + + // A nicer handler for out-of-memory situations than the default one. This + // one prints a message to stderr before aborting. It is critical that this + // code does not allocate any memory since we are in an OOM situation. Any + // errors are ignored while printing since there's nothing we can do about + // them and we are about to exit anyways. + fn oom_handler() -> ! { + use intrinsics; + let msg = "fatal runtime error: out of memory\n"; + unsafe { + let _ = syscall::write(2, msg.as_bytes()); + intrinsics::abort(); + } + } +} + +pub fn decode_error_kind(errno: i32) -> ErrorKind { + match errno { + syscall::ECONNREFUSED => ErrorKind::ConnectionRefused, + syscall::ECONNRESET => ErrorKind::ConnectionReset, + syscall::EPERM | syscall::EACCES => ErrorKind::PermissionDenied, + syscall::EPIPE => ErrorKind::BrokenPipe, + syscall::ENOTCONN => ErrorKind::NotConnected, + syscall::ECONNABORTED => ErrorKind::ConnectionAborted, + syscall::EADDRNOTAVAIL => ErrorKind::AddrNotAvailable, + syscall::EADDRINUSE => ErrorKind::AddrInUse, + syscall::ENOENT => ErrorKind::NotFound, + syscall::EINTR => ErrorKind::Interrupted, + syscall::EINVAL => ErrorKind::InvalidInput, + syscall::ETIMEDOUT => ErrorKind::TimedOut, + syscall::EEXIST => ErrorKind::AlreadyExists, + + // These two constants can have the same value on some systems, + // but different values on others, so we can't use a match + // clause + x if x == syscall::EAGAIN || x == syscall::EWOULDBLOCK => + ErrorKind::WouldBlock, + + _ => ErrorKind::Other, + } +} + +pub fn cvt(result: Result) -> io::Result { + result.map_err(|err| io::Error::from_raw_os_error(err.errno)) +} + +/// On Redox, use an illegal instruction to abort +pub unsafe fn abort_internal() -> ! { + ::core::intrinsics::abort(); +} diff --git a/src/libstd/sys/redox/mutex.rs b/src/libstd/sys/redox/mutex.rs new file mode 100644 index 0000000000..a995f597fc --- /dev/null +++ b/src/libstd/sys/redox/mutex.rs @@ -0,0 +1,179 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
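The `cvt` helper defined at the end of the module above is the glue between kernel-level results and `io::Result`: it maps the syscall error's errno into an `io::Error` and passes the success value through. A self-contained sketch of the same pattern, where `SysError` stands in for `syscall::Error`:

```rust
use std::io;

// Hypothetical stand-in for the syscall error type: just an errno value.
struct SysError { errno: i32 }

// Map a kernel-level Result into io::Result by converting the errno,
// leaving the success value untouched.
fn cvt<T>(result: Result<T, SysError>) -> io::Result<T> {
    result.map_err(|err| io::Error::from_raw_os_error(err.errno))
}

fn main() {
    let ok: Result<usize, SysError> = Ok(42);
    assert_eq!(cvt(ok).unwrap(), 42);

    let err: Result<usize, SysError> = Err(SysError { errno: 2 }); // ENOENT on most systems
    assert_eq!(cvt(err).unwrap_err().raw_os_error(), Some(2));
}
```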
+ +use cell::UnsafeCell; +use intrinsics::{atomic_cxchg, atomic_xchg}; +use ptr; + +use sys::syscall::{futex, getpid, FUTEX_WAIT, FUTEX_WAKE}; + +pub unsafe fn mutex_try_lock(m: *mut i32) -> bool { + atomic_cxchg(m, 0, 1).0 == 0 +} + +pub unsafe fn mutex_lock(m: *mut i32) { + let mut c = 0; + //Set to larger value for longer spin test + for _i in 0..100 { + c = atomic_cxchg(m, 0, 1).0; + if c == 0 { + break; + } + //cpu_relax() + } + if c == 1 { + c = atomic_xchg(m, 2); + } + while c != 0 { + let _ = futex(m, FUTEX_WAIT, 2, 0, ptr::null_mut()); + c = atomic_xchg(m, 2); + } +} + +pub unsafe fn mutex_unlock(m: *mut i32) { + if *m == 2 { + *m = 0; + } else if atomic_xchg(m, 0) == 1 { + return; + } + //Set to larger value for longer spin test + for _i in 0..100 { + if *m != 0 { + if atomic_cxchg(m, 1, 2).0 != 0 { + return; + } + } + //cpu_relax() + } + let _ = futex(m, FUTEX_WAKE, 1, 0, ptr::null_mut()); +} + +pub struct Mutex { + pub lock: UnsafeCell, +} + +impl Mutex { + /// Create a new mutex. + pub const fn new() -> Self { + Mutex { + lock: UnsafeCell::new(0), + } + } + + #[inline] + pub unsafe fn init(&self) { + *self.lock.get() = 0; + } + + /// Try to lock the mutex + #[inline] + pub unsafe fn try_lock(&self) -> bool { + mutex_try_lock(self.lock.get()) + } + + /// Lock the mutex + #[inline] + pub unsafe fn lock(&self) { + mutex_lock(self.lock.get()); + } + + /// Unlock the mutex + #[inline] + pub unsafe fn unlock(&self) { + mutex_unlock(self.lock.get()); + } + + #[inline] + pub unsafe fn destroy(&self) { + *self.lock.get() = 0; + } +} + +unsafe impl Send for Mutex {} + +unsafe impl Sync for Mutex {} + +pub struct ReentrantMutex { + pub lock: UnsafeCell, + pub owner: UnsafeCell, + pub own_count: UnsafeCell, +} + +impl ReentrantMutex { + pub const fn uninitialized() -> Self { + ReentrantMutex { + lock: UnsafeCell::new(0), + owner: UnsafeCell::new(0), + own_count: UnsafeCell::new(0), + } + } + + #[inline] + pub unsafe fn init(&mut self) { + *self.lock.get() = 0; + *self.owner.get() = 0; + *self.own_count.get() = 0; + } + + /// Try to lock the mutex + #[inline] + pub unsafe fn try_lock(&self) -> bool { + let pid = getpid().unwrap(); + if *self.own_count.get() > 0 && *self.owner.get() == pid { + *self.own_count.get() += 1; + true + } else { + if mutex_try_lock(self.lock.get()) { + *self.owner.get() = pid; + *self.own_count.get() = 1; + true + } else { + false + } + } + } + + /// Lock the mutex + #[inline] + pub unsafe fn lock(&self) { + let pid = getpid().unwrap(); + if *self.own_count.get() > 0 && *self.owner.get() == pid { + *self.own_count.get() += 1; + } else { + mutex_lock(self.lock.get()); + *self.owner.get() = pid; + *self.own_count.get() = 1; + } + } + + /// Unlock the mutex + #[inline] + pub unsafe fn unlock(&self) { + let pid = getpid().unwrap(); + if *self.own_count.get() > 0 && *self.owner.get() == pid { + *self.own_count.get() -= 1; + if *self.own_count.get() == 0 { + *self.owner.get() = 0; + mutex_unlock(self.lock.get()); + } + } + } + + #[inline] + pub unsafe fn destroy(&self) { + *self.lock.get() = 0; + *self.owner.get() = 0; + *self.own_count.get() = 0; + } +} + +unsafe impl Send for ReentrantMutex {} + +unsafe impl Sync for ReentrantMutex {} diff --git a/src/libstd/sys/redox/net/dns/answer.rs b/src/libstd/sys/redox/net/dns/answer.rs new file mode 100644 index 0000000000..8e6aaeb029 --- /dev/null +++ b/src/libstd/sys/redox/net/dns/answer.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use string::String; +use vec::Vec; + +#[derive(Clone, Debug)] +pub struct DnsAnswer { + pub name: String, + pub a_type: u16, + pub a_class: u16, + pub ttl_a: u16, + pub ttl_b: u16, + pub data: Vec +} diff --git a/src/libstd/sys/redox/net/dns/mod.rs b/src/libstd/sys/redox/net/dns/mod.rs new file mode 100644 index 0000000000..43c4fe7ac9 --- /dev/null +++ b/src/libstd/sys/redox/net/dns/mod.rs @@ -0,0 +1,217 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +pub use self::answer::DnsAnswer; +pub use self::query::DnsQuery; + +use slice; +use u16; +use string::String; +use vec::Vec; + +mod answer; +mod query; + +#[unstable(feature = "n16", issue="0")] +#[allow(non_camel_case_types)] +#[derive(Copy, Clone, Debug, Default)] +#[repr(packed)] +pub struct n16 { + inner: u16 +} + +impl n16 { + #[unstable(feature = "n16", issue="0")] + pub fn as_bytes(&self) -> &[u8] { + unsafe { slice::from_raw_parts((&self.inner as *const u16) as *const u8, 2) } + } + + #[unstable(feature = "n16", issue="0")] + pub fn from_bytes(bytes: &[u8]) -> Self { + n16 { + inner: unsafe { slice::from_raw_parts(bytes.as_ptr() as *const u16, bytes.len()/2)[0] } + } + } +} + +#[unstable(feature = "n16", issue="0")] +impl From for n16 { + fn from(value: u16) -> Self { + n16 { + inner: value.to_be() + } + } +} + +#[unstable(feature = "n16", issue="0")] +impl From for u16 { + fn from(value: n16) -> Self { + u16::from_be(value.inner) + } +} + +#[derive(Clone, Debug)] +pub struct Dns { + pub transaction_id: u16, + pub flags: u16, + pub queries: Vec, + pub answers: Vec +} + +impl Dns { + pub fn compile(&self) -> Vec { + let mut data = Vec::new(); + + macro_rules! push_u8 { + ($value:expr) => { + data.push($value); + }; + }; + + macro_rules! push_n16 { + ($value:expr) => { + data.extend_from_slice(n16::from($value).as_bytes()); + }; + }; + + push_n16!(self.transaction_id); + push_n16!(self.flags); + push_n16!(self.queries.len() as u16); + push_n16!(self.answers.len() as u16); + push_n16!(0); + push_n16!(0); + + for query in self.queries.iter() { + for part in query.name.split('.') { + push_u8!(part.len() as u8); + data.extend_from_slice(part.as_bytes()); + } + push_u8!(0); + push_n16!(query.q_type); + push_n16!(query.q_class); + } + + data + } + + pub fn parse(data: &[u8]) -> Result { + let mut i = 0; + + macro_rules! pop_u8 { + () => { + { + i += 1; + if i > data.len() { + return Err(format!("{}: {}: pop_u8", file!(), line!())); + } + data[i - 1] + } + }; + }; + + macro_rules! pop_n16 { + () => { + { + i += 2; + if i > data.len() { + return Err(format!("{}: {}: pop_n16", file!(), line!())); + } + u16::from(n16::from_bytes(&data[i - 2 .. i])) + } + }; + }; + + macro_rules! pop_data { + () => { + { + let mut data = Vec::new(); + + let data_len = pop_n16!(); + for _data_i in 0..data_len { + data.push(pop_u8!()); + } + + data + } + }; + }; + + macro_rules! 
pop_name { + () => { + { + let mut name = String::new(); + + loop { + let name_len = pop_u8!(); + if name_len == 0 { + break; + } + if ! name.is_empty() { + name.push('.'); + } + for _name_i in 0..name_len { + name.push(pop_u8!() as char); + } + } + + name + } + }; + }; + + let transaction_id = pop_n16!(); + let flags = pop_n16!(); + let queries_len = pop_n16!(); + let answers_len = pop_n16!(); + pop_n16!(); + pop_n16!(); + + let mut queries = Vec::new(); + for _query_i in 0..queries_len { + queries.push(DnsQuery { + name: pop_name!(), + q_type: pop_n16!(), + q_class: pop_n16!() + }); + } + + let mut answers = Vec::new(); + for _answer_i in 0..answers_len { + let name_ind = 0b11000000; + let name_test = pop_u8!(); + i -= 1; + + answers.push(DnsAnswer { + name: if name_test & name_ind == name_ind { + let name_off = pop_n16!() - ((name_ind as u16) << 8); + let old_i = i; + i = name_off as usize; + let name = pop_name!(); + i = old_i; + name + } else { + pop_name!() + }, + a_type: pop_n16!(), + a_class: pop_n16!(), + ttl_a: pop_n16!(), + ttl_b: pop_n16!(), + data: pop_data!() + }); + } + + Ok(Dns { + transaction_id: transaction_id, + flags: flags, + queries: queries, + answers: answers, + }) + } +} diff --git a/src/libstd/sys/redox/net/dns/query.rs b/src/libstd/sys/redox/net/dns/query.rs new file mode 100644 index 0000000000..b0dcdcb624 --- /dev/null +++ b/src/libstd/sys/redox/net/dns/query.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use string::String; + +#[derive(Clone, Debug)] +pub struct DnsQuery { + pub name: String, + pub q_type: u16, + pub q_class: u16 +} diff --git a/src/libstd/sys/redox/net/mod.rs b/src/libstd/sys/redox/net/mod.rs new file mode 100644 index 0000000000..334c5e51c3 --- /dev/null +++ b/src/libstd/sys/redox/net/mod.rs @@ -0,0 +1,112 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +use fs::File; +use io::{Error, Result, Read}; +use iter::Iterator; +use net::{Ipv4Addr, SocketAddr, SocketAddrV4}; +use str::FromStr; +use string::{String, ToString}; +use sys::syscall::EINVAL; +use time; +use vec::{IntoIter, Vec}; + +use self::dns::{Dns, DnsQuery}; + +pub extern crate libc as netc; +pub use self::tcp::{TcpStream, TcpListener}; +pub use self::udp::UdpSocket; + +mod dns; +mod tcp; +mod udp; + +pub struct LookupHost(IntoIter); + +impl Iterator for LookupHost { + type Item = SocketAddr; + fn next(&mut self) -> Option { + self.0.next() + } +} + +pub fn lookup_host(host: &str) -> Result { + let mut ip_string = String::new(); + File::open("/etc/net/ip")?.read_to_string(&mut ip_string)?; + let ip: Vec = ip_string.trim().split(".").map(|part| part.parse::() + .unwrap_or(0)).collect(); + + let mut dns_string = String::new(); + File::open("/etc/net/dns")?.read_to_string(&mut dns_string)?; + let dns: Vec = dns_string.trim().split(".").map(|part| part.parse::() + .unwrap_or(0)).collect(); + + if ip.len() == 4 && dns.len() == 4 { + let time = time::SystemTime::now().duration_since(time::UNIX_EPOCH).unwrap(); + let tid = (time.subsec_nanos() >> 16) as u16; + + let packet = Dns { + transaction_id: tid, + flags: 0x0100, + queries: vec![DnsQuery { + name: host.to_string(), + q_type: 0x0001, + q_class: 0x0001, + }], + answers: vec![] + }; + + let packet_data = packet.compile(); + + let my_ip = Ipv4Addr::new(ip[0], ip[1], ip[2], ip[3]); + let dns_ip = Ipv4Addr::new(dns[0], dns[1], dns[2], dns[3]); + let socket = UdpSocket::bind(&SocketAddr::V4(SocketAddrV4::new(my_ip, 0)))?; + socket.connect(&SocketAddr::V4(SocketAddrV4::new(dns_ip, 53)))?; + socket.send(&packet_data)?; + + let mut buf = [0; 65536]; + let count = socket.recv(&mut buf)?; + + match Dns::parse(&buf[.. count]) { + Ok(response) => { + let mut addrs = vec![]; + for answer in response.answers.iter() { + if answer.a_type == 0x0001 && answer.a_class == 0x0001 + && answer.data.len() == 4 + { + let answer_ip = Ipv4Addr::new(answer.data[0], + answer.data[1], + answer.data[2], + answer.data[3]); + addrs.push(SocketAddr::V4(SocketAddrV4::new(answer_ip, 0))); + } + } + Ok(LookupHost(addrs.into_iter())) + }, + Err(_err) => Err(Error::from_raw_os_error(EINVAL)) + } + } else { + Err(Error::from_raw_os_error(EINVAL)) + } +} + +fn path_to_peer_addr(path_str: &str) -> SocketAddr { + let mut parts = path_str.split('/').next().unwrap_or("").split(':').skip(1); + let host = Ipv4Addr::from_str(parts.next().unwrap_or("")).unwrap_or(Ipv4Addr::new(0, 0, 0, 0)); + let port = parts.next().unwrap_or("").parse::().unwrap_or(0); + SocketAddr::V4(SocketAddrV4::new(host, port)) +} + +fn path_to_local_addr(path_str: &str) -> SocketAddr { + let mut parts = path_str.split('/').nth(1).unwrap_or("").split(':'); + let host = Ipv4Addr::from_str(parts.next().unwrap_or("")).unwrap_or(Ipv4Addr::new(0, 0, 0, 0)); + let port = parts.next().unwrap_or("").parse::().unwrap_or(0); + SocketAddr::V4(SocketAddrV4::new(host, port)) +} diff --git a/src/libstd/sys/redox/net/tcp.rs b/src/libstd/sys/redox/net/tcp.rs new file mode 100644 index 0000000000..1bfec2e861 --- /dev/null +++ b/src/libstd/sys/redox/net/tcp.rs @@ -0,0 +1,170 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +use io::{Error, ErrorKind, Result}; +use net::{SocketAddr, Shutdown}; +use path::Path; +use sys::fs::{File, OpenOptions}; +use time::Duration; +use vec::Vec; + +use super::{path_to_peer_addr, path_to_local_addr}; + +#[derive(Debug)] +pub struct TcpStream(File); + +impl TcpStream { + pub fn connect(addr: &SocketAddr) -> Result { + let path = format!("tcp:{}", addr); + let mut options = OpenOptions::new(); + options.read(true); + options.write(true); + Ok(TcpStream(File::open(&Path::new(path.as_str()), &options)?)) + } + + pub fn duplicate(&self) -> Result { + Ok(TcpStream(self.0.dup(&[])?)) + } + + pub fn read(&self, buf: &mut [u8]) -> Result { + self.0.read(buf) + } + + pub fn read_to_end(&self, buf: &mut Vec) -> Result { + self.0.read_to_end(buf) + } + + pub fn write(&self, buf: &[u8]) -> Result { + self.0.write(buf) + } + + pub fn take_error(&self) -> Result> { + Ok(None) + } + + pub fn peer_addr(&self) -> Result { + let path = self.0.path()?; + Ok(path_to_peer_addr(path.to_str().unwrap_or(""))) + } + + pub fn socket_addr(&self) -> Result { + let path = self.0.path()?; + Ok(path_to_local_addr(path.to_str().unwrap_or(""))) + } + + pub fn shutdown(&self, _how: Shutdown) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::shutdown not implemented")) + } + + pub fn nodelay(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpStream::nodelay not implemented")) + } + + pub fn nonblocking(&self) -> Result { + self.0.fd().nonblocking() + } + + pub fn only_v6(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpStream::only_v6 not implemented")) + } + + pub fn ttl(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpStream::ttl not implemented")) + } + + pub fn read_timeout(&self) -> Result> { + Err(Error::new(ErrorKind::Other, "TcpStream::read_timeout not implemented")) + } + + pub fn write_timeout(&self) -> Result> { + Err(Error::new(ErrorKind::Other, "TcpStream::write_timeout not implemented")) + } + + pub fn set_nodelay(&self, _nodelay: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::set_nodelay not implemented")) + } + + pub fn set_nonblocking(&self, nonblocking: bool) -> Result<()> { + self.0.fd().set_nonblocking(nonblocking) + } + + pub fn set_only_v6(&self, _only_v6: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::set_only_v6 not implemented")) + } + + pub fn set_ttl(&self, _ttl: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::set_ttl not implemented")) + } + + pub fn set_read_timeout(&self, _dur: Option) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::set_read_timeout not implemented")) + } + + pub fn set_write_timeout(&self, _dur: Option) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpStream::set_write_timeout not implemented")) + } +} + +#[derive(Debug)] +pub struct TcpListener(File); + +impl TcpListener { + pub fn bind(addr: &SocketAddr) -> Result { + let path = format!("tcp:/{}", addr); + let mut options = OpenOptions::new(); + options.read(true); + options.write(true); + Ok(TcpListener(File::open(&Path::new(path.as_str()), &options)?)) + } + + pub fn accept(&self) -> Result<(TcpStream, SocketAddr)> { + let file = self.0.dup(b"listen")?; + let path = file.path()?; + let peer_addr = path_to_peer_addr(path.to_str().unwrap_or("")); + Ok((TcpStream(file), peer_addr)) + } + + pub fn duplicate(&self) -> Result { + Ok(TcpListener(self.0.dup(&[])?)) + } + + pub fn take_error(&self) -> 
Result> { + Ok(None) + } + + pub fn socket_addr(&self) -> Result { + let path = self.0.path()?; + Ok(path_to_local_addr(path.to_str().unwrap_or(""))) + } + + pub fn nonblocking(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpListener::nonblocking not implemented")) + } + + pub fn only_v6(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpListener::only_v6 not implemented")) + } + + pub fn ttl(&self) -> Result { + Err(Error::new(ErrorKind::Other, "TcpListener::ttl not implemented")) + } + + pub fn set_nonblocking(&self, _nonblocking: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpListener::set_nonblocking not implemented")) + } + + pub fn set_only_v6(&self, _only_v6: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpListener::set_only_v6 not implemented")) + } + + pub fn set_ttl(&self, _ttl: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "TcpListener::set_ttl not implemented")) + } +} diff --git a/src/libstd/sys/redox/net/udp.rs b/src/libstd/sys/redox/net/udp.rs new file mode 100644 index 0000000000..b81508e8f0 --- /dev/null +++ b/src/libstd/sys/redox/net/udp.rs @@ -0,0 +1,173 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use cell::UnsafeCell; +use io::{Error, ErrorKind, Result}; +use net::{SocketAddr, Ipv4Addr, Ipv6Addr}; +use path::Path; +use sys::fs::{File, OpenOptions}; +use time::Duration; + +use super::{path_to_peer_addr, path_to_local_addr}; + +#[derive(Debug)] +pub struct UdpSocket(File, UnsafeCell>); + +impl UdpSocket { + pub fn bind(addr: &SocketAddr) -> Result { + let path = format!("udp:/{}", addr); + let mut options = OpenOptions::new(); + options.read(true); + options.write(true); + Ok(UdpSocket(File::open(&Path::new(path.as_str()), &options)?, UnsafeCell::new(None))) + } + + fn get_conn(&self) -> &mut Option { + unsafe { &mut *(self.1.get()) } + } + + pub fn connect(&self, addr: &SocketAddr) -> Result<()> { + unsafe { *self.1.get() = Some(*addr) }; + Ok(()) + } + + pub fn duplicate(&self) -> Result { + let new_bind = self.0.dup(&[])?; + let new_conn = *self.get_conn(); + Ok(UdpSocket(new_bind, UnsafeCell::new(new_conn))) + } + + pub fn recv_from(&self, buf: &mut [u8]) -> Result<(usize, SocketAddr)> { + let from = self.0.dup(b"listen")?; + let path = from.path()?; + let peer_addr = path_to_peer_addr(path.to_str().unwrap_or("")); + let count = from.read(buf)?; + Ok((count, peer_addr)) + } + + pub fn recv(&self, buf: &mut [u8]) -> Result { + if let Some(addr) = *self.get_conn() { + let from = self.0.dup(format!("{}", addr).as_bytes())?; + from.read(buf) + } else { + Err(Error::new(ErrorKind::Other, "UdpSocket::recv not connected")) + } + } + + pub fn send_to(&self, buf: &[u8], addr: &SocketAddr) -> Result { + let to = self.0.dup(format!("{}", addr).as_bytes())?; + to.write(buf) + } + + pub fn send(&self, buf: &[u8]) -> Result { + if let Some(addr) = *self.get_conn() { + self.send_to(buf, &addr) + } else { + Err(Error::new(ErrorKind::Other, "UdpSocket::send not connected")) + } + } + + pub fn take_error(&self) -> Result> { + Ok(None) + } + + pub fn socket_addr(&self) -> Result { + let path = self.0.path()?; + Ok(path_to_local_addr(path.to_str().unwrap_or(""))) + } + + pub fn broadcast(&self) -> Result { + 
Err(Error::new(ErrorKind::Other, "UdpSocket::broadcast not implemented")) + } + + pub fn multicast_loop_v4(&self) -> Result { + Err(Error::new(ErrorKind::Other, "UdpSocket::multicast_loop_v4 not implemented")) + } + + pub fn multicast_loop_v6(&self) -> Result { + Err(Error::new(ErrorKind::Other, "UdpSocket::multicast_loop_v6 not implemented")) + } + + pub fn multicast_ttl_v4(&self) -> Result { + Err(Error::new(ErrorKind::Other, "UdpSocket::multicast_ttl_v4 not implemented")) + } + + pub fn nonblocking(&self) -> Result { + self.0.fd().nonblocking() + } + + pub fn only_v6(&self) -> Result { + Err(Error::new(ErrorKind::Other, "UdpSocket::only_v6 not implemented")) + } + + pub fn ttl(&self) -> Result { + Err(Error::new(ErrorKind::Other, "UdpSocket::ttl not implemented")) + } + + pub fn read_timeout(&self) -> Result> { + Err(Error::new(ErrorKind::Other, "UdpSocket::read_timeout not implemented")) + } + + pub fn write_timeout(&self) -> Result> { + Err(Error::new(ErrorKind::Other, "UdpSocket::write_timeout not implemented")) + } + + pub fn set_broadcast(&self, _broadcast: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_broadcast not implemented")) + } + + pub fn set_multicast_loop_v4(&self, _multicast_loop_v4: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_multicast_loop_v4 not implemented")) + } + + pub fn set_multicast_loop_v6(&self, _multicast_loop_v6: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_multicast_loop_v6 not implemented")) + } + + pub fn set_multicast_ttl_v4(&self, _multicast_ttl_v4: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_multicast_ttl_v4 not implemented")) + } + + pub fn set_nonblocking(&self, nonblocking: bool) -> Result<()> { + self.0.fd().set_nonblocking(nonblocking) + } + + pub fn set_only_v6(&self, _only_v6: bool) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_only_v6 not implemented")) + } + + pub fn set_ttl(&self, _ttl: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_ttl not implemented")) + } + + pub fn set_read_timeout(&self, _dur: Option) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_read_timeout not implemented")) + } + + pub fn set_write_timeout(&self, _dur: Option) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::set_write_timeout not implemented")) + } + + pub fn join_multicast_v4(&self, _multiaddr: &Ipv4Addr, _interface: &Ipv4Addr) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::join_multicast_v4 not implemented")) + } + + pub fn join_multicast_v6(&self, _multiaddr: &Ipv6Addr, _interface: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::join_multicast_v6 not implemented")) + } + + pub fn leave_multicast_v4(&self, _multiaddr: &Ipv4Addr, _interface: &Ipv4Addr) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::leave_multicast_v4 not implemented")) + } + + pub fn leave_multicast_v6(&self, _multiaddr: &Ipv6Addr, _interface: u32) -> Result<()> { + Err(Error::new(ErrorKind::Other, "UdpSocket::leave_multicast_v6 not implemented")) + } +} diff --git a/src/libstd/sys/redox/os.rs b/src/libstd/sys/redox/os.rs new file mode 100644 index 0000000000..135e972bca --- /dev/null +++ b/src/libstd/sys/redox/os.rs @@ -0,0 +1,204 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. 
+// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Implementation of `std::os` functionality for unix systems + +#![allow(unused_imports)] // lots of cfg code here + +use os::unix::prelude::*; + +use error::Error as StdError; +use ffi::{OsString, OsStr}; +use fmt; +use io::{self, Read, Write}; +use iter; +use marker::PhantomData; +use mem; +use memchr; +use path::{self, PathBuf}; +use ptr; +use slice; +use str; +use sys_common::mutex::Mutex; +use sys::{cvt, fd, syscall}; +use vec; + +const TMPBUF_SZ: usize = 128; +static ENV_LOCK: Mutex = Mutex::new(); + +/// Returns the platform-specific value of errno +pub fn errno() -> i32 { + 0 +} + +/// Gets a detailed string description for the given error number. +pub fn error_string(errno: i32) -> String { + if let Some(string) = syscall::STR_ERROR.get(errno as usize) { + string.to_string() + } else { + "unknown error".to_string() + } +} + +pub fn getcwd() -> io::Result { + let mut buf = [0; 4096]; + let count = cvt(syscall::getcwd(&mut buf))?; + Ok(PathBuf::from(OsString::from_vec(buf[.. count].to_vec()))) +} + +pub fn chdir(p: &path::Path) -> io::Result<()> { + cvt(syscall::chdir(p.to_str().unwrap())).and(Ok(())) +} + +pub struct SplitPaths<'a> { + iter: iter::Map bool>, + fn(&'a [u8]) -> PathBuf>, +} + +pub fn split_paths(unparsed: &OsStr) -> SplitPaths { + fn bytes_to_path(b: &[u8]) -> PathBuf { + PathBuf::from(::from_bytes(b)) + } + fn is_colon(b: &u8) -> bool { *b == b':' } + let unparsed = unparsed.as_bytes(); + SplitPaths { + iter: unparsed.split(is_colon as fn(&u8) -> bool) + .map(bytes_to_path as fn(&[u8]) -> PathBuf) + } +} + +impl<'a> Iterator for SplitPaths<'a> { + type Item = PathBuf; + fn next(&mut self) -> Option { self.iter.next() } + fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } +} + +#[derive(Debug)] +pub struct JoinPathsError; + +pub fn join_paths(paths: I) -> Result + where I: Iterator, T: AsRef +{ + let mut joined = Vec::new(); + let sep = b':'; + + for (i, path) in paths.enumerate() { + let path = path.as_ref().as_bytes(); + if i > 0 { joined.push(sep) } + if path.contains(&sep) { + return Err(JoinPathsError) + } + joined.extend_from_slice(path); + } + Ok(OsStringExt::from_vec(joined)) +} + +impl fmt::Display for JoinPathsError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + "path segment contains separator `:`".fmt(f) + } +} + +impl StdError for JoinPathsError { + fn description(&self) -> &str { "failed to join paths" } +} + +pub fn current_exe() -> io::Result { + use fs::File; + + let mut file = File::open("sys:exe")?; + + let mut path = String::new(); + file.read_to_string(&mut path)?; + + if path.ends_with('\n') { + path.pop(); + } + + Ok(PathBuf::from(path)) +} + +pub struct Env { + iter: vec::IntoIter<(OsString, OsString)>, + _dont_send_or_sync_me: PhantomData<*mut ()>, +} + +impl Iterator for Env { + type Item = (OsString, OsString); + fn next(&mut self) -> Option<(OsString, OsString)> { self.iter.next() } + fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } +} + +/// Returns a vector of (variable, value) byte-vector pairs for all the +/// environment variables of the current process. 
+pub fn env() -> Env { + let mut variables: Vec<(OsString, OsString)> = Vec::new(); + if let Ok(mut file) = ::fs::File::open("env:") { + let mut string = String::new(); + if file.read_to_string(&mut string).is_ok() { + for line in string.lines() { + let mut parts = line.splitn(2, '='); + if let Some(name) = parts.next() { + let value = parts.next().unwrap_or(""); + variables.push((OsString::from(name.to_string()), + OsString::from(value.to_string()))); + } + } + } + } + Env { iter: variables.into_iter(), _dont_send_or_sync_me: PhantomData } +} + +pub fn getenv(key: &OsStr) -> io::Result> { + if ! key.is_empty() { + if let Ok(mut file) = ::fs::File::open(&("env:".to_owned() + key.to_str().unwrap())) { + let mut string = String::new(); + file.read_to_string(&mut string)?; + Ok(Some(OsString::from(string))) + } else { + Ok(None) + } + } else { + Ok(None) + } +} + +pub fn setenv(key: &OsStr, value: &OsStr) -> io::Result<()> { + if ! key.is_empty() { + let mut file = ::fs::File::open(&("env:".to_owned() + key.to_str().unwrap()))?; + file.write_all(value.as_bytes())?; + file.set_len(value.len() as u64)?; + } + Ok(()) +} + +pub fn unsetenv(key: &OsStr) -> io::Result<()> { + ::fs::remove_file(&("env:".to_owned() + key.to_str().unwrap()))?; + Ok(()) +} + +pub fn page_size() -> usize { + 4096 +} + +pub fn temp_dir() -> PathBuf { + ::env::var_os("TMPDIR").map(PathBuf::from).unwrap_or_else(|| { + PathBuf::from("/tmp") + }) +} + +pub fn home_dir() -> Option { + return ::env::var_os("HOME").map(PathBuf::from); +} + +pub fn exit(code: i32) -> ! { + let _ = syscall::exit(code as usize); + unreachable!(); +} diff --git a/src/libstd/sys/redox/os_str.rs b/src/libstd/sys/redox/os_str.rs new file mode 100644 index 0000000000..8922bf04f5 --- /dev/null +++ b/src/libstd/sys/redox/os_str.rs @@ -0,0 +1,119 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +/// The underlying OsString/OsStr implementation on Unix systems: just +/// a `Vec`/`[u8]`. 
+ +use borrow::Cow; +use fmt::{self, Debug}; +use str; +use mem; +use sys_common::{AsInner, IntoInner}; + +#[derive(Clone, Hash)] +pub struct Buf { + pub inner: Vec +} + +pub struct Slice { + pub inner: [u8] +} + +impl Debug for Slice { + fn fmt(&self, formatter: &mut fmt::Formatter) -> Result<(), fmt::Error> { + self.to_string_lossy().fmt(formatter) + } +} + +impl Debug for Buf { + fn fmt(&self, formatter: &mut fmt::Formatter) -> Result<(), fmt::Error> { + self.as_slice().fmt(formatter) + } +} + +impl IntoInner> for Buf { + fn into_inner(self) -> Vec { + self.inner + } +} + +impl AsInner<[u8]> for Buf { + fn as_inner(&self) -> &[u8] { + &self.inner + } +} + + +impl Buf { + pub fn from_string(s: String) -> Buf { + Buf { inner: s.into_bytes() } + } + + #[inline] + pub fn with_capacity(capacity: usize) -> Buf { + Buf { + inner: Vec::with_capacity(capacity) + } + } + + #[inline] + pub fn clear(&mut self) { + self.inner.clear() + } + + #[inline] + pub fn capacity(&self) -> usize { + self.inner.capacity() + } + + #[inline] + pub fn reserve(&mut self, additional: usize) { + self.inner.reserve(additional) + } + + #[inline] + pub fn reserve_exact(&mut self, additional: usize) { + self.inner.reserve_exact(additional) + } + + pub fn as_slice(&self) -> &Slice { + unsafe { mem::transmute(&*self.inner) } + } + + pub fn into_string(self) -> Result { + String::from_utf8(self.inner).map_err(|p| Buf { inner: p.into_bytes() } ) + } + + pub fn push_slice(&mut self, s: &Slice) { + self.inner.extend_from_slice(&s.inner) + } +} + +impl Slice { + fn from_u8_slice(s: &[u8]) -> &Slice { + unsafe { mem::transmute(s) } + } + + pub fn from_str(s: &str) -> &Slice { + Slice::from_u8_slice(s.as_bytes()) + } + + pub fn to_str(&self) -> Option<&str> { + str::from_utf8(&self.inner).ok() + } + + pub fn to_string_lossy(&self) -> Cow { + String::from_utf8_lossy(&self.inner) + } + + pub fn to_owned(&self) -> Buf { + Buf { inner: self.inner.to_vec() } + } +} diff --git a/src/libstd/sys/redox/path.rs b/src/libstd/sys/redox/path.rs new file mode 100644 index 0000000000..e6a267dd5d --- /dev/null +++ b/src/libstd/sys/redox/path.rs @@ -0,0 +1,39 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use ffi::OsStr; +use path::Prefix; + +#[inline] +pub fn is_sep_byte(b: u8) -> bool { + b == b'/' +} + +#[inline] +pub fn is_verbatim_sep(b: u8) -> bool { + b == b'/' +} + +pub fn parse_prefix(path: &OsStr) -> Option { + if let Some(path_str) = path.to_str() { + if let Some(_i) = path_str.find(':') { + // FIXME: Redox specific prefix + // Some(Prefix::Verbatim(OsStr::new(&path_str[..i]))) + None + } else { + None + } + } else { + None + } +} + +pub const MAIN_SEP_STR: &'static str = "/"; +pub const MAIN_SEP: char = '/'; diff --git a/src/libstd/sys/redox/pipe.rs b/src/libstd/sys/redox/pipe.rs new file mode 100644 index 0000000000..e7240fbe7b --- /dev/null +++ b/src/libstd/sys/redox/pipe.rs @@ -0,0 +1,107 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +use io; +use sys::{cvt, syscall}; +use sys::fd::FileDesc; + +//////////////////////////////////////////////////////////////////////////////// +// Anonymous pipes +//////////////////////////////////////////////////////////////////////////////// + +pub struct AnonPipe(FileDesc); + +pub fn anon_pipe() -> io::Result<(AnonPipe, AnonPipe)> { + let mut fds = [0; 2]; + cvt(syscall::pipe2(&mut fds, syscall::O_CLOEXEC))?; + Ok((AnonPipe(FileDesc::new(fds[0])), AnonPipe(FileDesc::new(fds[1])))) +} + +impl AnonPipe { + pub fn from_fd(fd: FileDesc) -> io::Result { + fd.set_cloexec()?; + Ok(AnonPipe(fd)) + } + + pub fn read(&self, buf: &mut [u8]) -> io::Result { + self.0.read(buf) + } + + pub fn read_to_end(&self, buf: &mut Vec) -> io::Result { + self.0.read_to_end(buf) + } + + pub fn write(&self, buf: &[u8]) -> io::Result { + self.0.write(buf) + } + + pub fn fd(&self) -> &FileDesc { &self.0 } + pub fn into_fd(self) -> FileDesc { self.0 } +} + +pub fn read2(p1: AnonPipe, + v1: &mut Vec, + p2: AnonPipe, + v2: &mut Vec) -> io::Result<()> { + //FIXME: Use event based I/O multiplexing + //unimplemented!() + + p1.read_to_end(v1)?; + p2.read_to_end(v2)?; + + Ok(()) + + /* + // Set both pipes into nonblocking mode as we're gonna be reading from both + // in the `select` loop below, and we wouldn't want one to block the other! + let p1 = p1.into_fd(); + let p2 = p2.into_fd(); + p1.set_nonblocking(true)?; + p2.set_nonblocking(true)?; + + loop { + // wait for either pipe to become readable using `select` + cvt_r(|| unsafe { + let mut read: libc::fd_set = mem::zeroed(); + libc::FD_SET(p1.raw(), &mut read); + libc::FD_SET(p2.raw(), &mut read); + libc::select(max + 1, &mut read, ptr::null_mut(), ptr::null_mut(), + ptr::null_mut()) + })?; + + // Read as much as we can from each pipe, ignoring EWOULDBLOCK or + // EAGAIN. If we hit EOF, then this will happen because the underlying + // reader will return Ok(0), in which case we'll see `Ok` ourselves. In + // this case we flip the other fd back into blocking mode and read + // whatever's leftover on that file descriptor. + let read = |fd: &FileDesc, dst: &mut Vec| { + match fd.read_to_end(dst) { + Ok(_) => Ok(true), + Err(e) => { + if e.raw_os_error() == Some(libc::EWOULDBLOCK) || + e.raw_os_error() == Some(libc::EAGAIN) { + Ok(false) + } else { + Err(e) + } + } + } + }; + if read(&p1, v1)? { + p2.set_nonblocking(false)?; + return p2.read_to_end(v2).map(|_| ()); + } + if read(&p2, v2)? { + p1.set_nonblocking(false)?; + return p1.read_to_end(v1).map(|_| ()); + } + } + */ +} diff --git a/src/libstd/sys/redox/process.rs b/src/libstd/sys/redox/process.rs new file mode 100644 index 0000000000..849f51013e --- /dev/null +++ b/src/libstd/sys/redox/process.rs @@ -0,0 +1,504 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
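The commented-out code in `read2` above explains the end goal: drain both pipes at the same time so neither one fills up and stalls the child while the parent is blocked on the other. The Redox version currently just reads them one after the other. Below is a portable sketch of the concurrent-drain idea using a plain thread instead of `select`-style multiplexing; it is an illustration, not the implementation in this patch.

```rust
use std::io::Read;
use std::thread;

// Drain two readers concurrently: one on a helper thread, one on the
// current thread, then join. This avoids blocking on an empty stream
// while the other side still has data to deliver.
fn read_both<R1, R2>(mut a: R1, mut b: R2) -> std::io::Result<(Vec<u8>, Vec<u8>)>
where
    R1: Read + Send + 'static,
    R2: Read,
{
    let handle = thread::spawn(move || {
        let mut buf = Vec::new();
        a.read_to_end(&mut buf).map(|_| buf)
    });
    let mut buf_b = Vec::new();
    b.read_to_end(&mut buf_b)?;
    let buf_a = handle.join().expect("reader thread panicked")?;
    Ok((buf_a, buf_b))
}

fn main() -> std::io::Result<()> {
    let (a, b) = (&b"stdout data"[..], &b"stderr data"[..]);
    let (out, err) = read_both(a, b)?;
    assert_eq!(out, b"stdout data");
    assert_eq!(err, b"stderr data");
    Ok(())
}
```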
+ +use collections::hash_map::HashMap; +use env; +use ffi::OsStr; +use fmt; +use io::{self, Error, ErrorKind}; +use path::Path; +use sys::fd::FileDesc; +use sys::fs::{File, OpenOptions}; +use sys::pipe::{self, AnonPipe}; +use sys::{cvt, syscall}; + +//////////////////////////////////////////////////////////////////////////////// +// Command +//////////////////////////////////////////////////////////////////////////////// + +pub struct Command { + // Currently we try hard to ensure that the call to `.exec()` doesn't + // actually allocate any memory. While many platforms try to ensure that + // memory allocation works after a fork in a multithreaded process, it's + // been observed to be buggy and somewhat unreliable, so we do our best to + // just not do it at all! + // + // Along those lines, the `argv` and `envp` raw pointers here are exactly + // what's gonna get passed to `execvp`. The `argv` array starts with the + // `program` and ends with a NULL, and the `envp` pointer, if present, is + // also null-terminated. + // + // Right now we don't support removing arguments, so there's no much fancy + // support there, but we support adding and removing environment variables, + // so a side table is used to track where in the `envp` array each key is + // located. Whenever we add a key we update it in place if it's already + // present, and whenever we remove a key we update the locations of all + // other keys. + program: String, + args: Vec, + env: HashMap, + + cwd: Option, + uid: Option, + gid: Option, + saw_nul: bool, + closures: Vec io::Result<()> + Send + Sync>>, + stdin: Option, + stdout: Option, + stderr: Option, +} + +// passed back to std::process with the pipes connected to the child, if any +// were requested +pub struct StdioPipes { + pub stdin: Option, + pub stdout: Option, + pub stderr: Option, +} + +// passed to do_exec() with configuration of what the child stdio should look +// like +struct ChildPipes { + stdin: ChildStdio, + stdout: ChildStdio, + stderr: ChildStdio, +} + +enum ChildStdio { + Inherit, + Explicit(usize), + Owned(FileDesc), +} + +pub enum Stdio { + Inherit, + Null, + MakePipe, + Fd(FileDesc), +} + +impl Command { + pub fn new(program: &OsStr) -> Command { + Command { + program: program.to_str().unwrap().to_owned(), + args: Vec::new(), + env: HashMap::new(), + cwd: None, + uid: None, + gid: None, + saw_nul: false, + closures: Vec::new(), + stdin: None, + stdout: None, + stderr: None, + } + } + + pub fn arg(&mut self, arg: &OsStr) { + self.args.push(arg.to_str().unwrap().to_owned()); + } + + pub fn env(&mut self, key: &OsStr, val: &OsStr) { + self.env.insert(key.to_str().unwrap().to_owned(), val.to_str().unwrap().to_owned()); + } + + pub fn env_remove(&mut self, key: &OsStr) { + self.env.remove(key.to_str().unwrap()); + } + + pub fn env_clear(&mut self) { + self.env.clear(); + } + + pub fn cwd(&mut self, dir: &OsStr) { + self.cwd = Some(dir.to_str().unwrap().to_owned()); + } + pub fn uid(&mut self, id: u32) { + self.uid = Some(id); + } + pub fn gid(&mut self, id: u32) { + self.gid = Some(id); + } + + pub fn before_exec(&mut self, + f: Box io::Result<()> + Send + Sync>) { + self.closures.push(f); + } + + pub fn stdin(&mut self, stdin: Stdio) { + self.stdin = Some(stdin); + } + pub fn stdout(&mut self, stdout: Stdio) { + self.stdout = Some(stdout); + } + pub fn stderr(&mut self, stderr: Stdio) { + self.stderr = Some(stderr); + } + + pub fn spawn(&mut self, default: Stdio, needs_stdin: bool) + -> io::Result<(Process, StdioPipes)> { + const CLOEXEC_MSG_FOOTER: 
&'static [u8] = b"NOEX"; + + if self.saw_nul { + return Err(io::Error::new(ErrorKind::InvalidInput, + "nul byte found in provided data")); + } + + let (ours, theirs) = self.setup_io(default, needs_stdin)?; + let (input, output) = pipe::anon_pipe()?; + + let pid = unsafe { + match cvt(syscall::clone(0))? { + 0 => { + drop(input); + let err = self.do_exec(theirs); + let errno = err.raw_os_error().unwrap_or(syscall::EINVAL) as u32; + let bytes = [ + (errno >> 24) as u8, + (errno >> 16) as u8, + (errno >> 8) as u8, + (errno >> 0) as u8, + CLOEXEC_MSG_FOOTER[0], CLOEXEC_MSG_FOOTER[1], + CLOEXEC_MSG_FOOTER[2], CLOEXEC_MSG_FOOTER[3] + ]; + // pipe I/O up to PIPE_BUF bytes should be atomic, and then + // we want to be sure we *don't* run at_exit destructors as + // we're being torn down regardless + assert!(output.write(&bytes).is_ok()); + let _ = syscall::exit(1); + panic!("failed to exit"); + } + n => n, + } + }; + + let mut p = Process { pid: pid, status: None }; + drop(output); + let mut bytes = [0; 8]; + + // loop to handle EINTR + loop { + match input.read(&mut bytes) { + Ok(0) => return Ok((p, ours)), + Ok(8) => { + assert!(combine(CLOEXEC_MSG_FOOTER) == combine(&bytes[4.. 8]), + "Validation on the CLOEXEC pipe failed: {:?}", bytes); + let errno = combine(&bytes[0.. 4]); + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + return Err(Error::from_raw_os_error(errno)) + } + Err(ref e) if e.kind() == ErrorKind::Interrupted => {} + Err(e) => { + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + panic!("the CLOEXEC pipe failed: {:?}", e) + }, + Ok(..) => { // pipe I/O up to PIPE_BUF bytes should be atomic + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + panic!("short read on the CLOEXEC pipe") + } + } + } + + fn combine(arr: &[u8]) -> i32 { + let a = arr[0] as u32; + let b = arr[1] as u32; + let c = arr[2] as u32; + let d = arr[3] as u32; + + ((a << 24) | (b << 16) | (c << 8) | (d << 0)) as i32 + } + } + + pub fn exec(&mut self, default: Stdio) -> io::Error { + if self.saw_nul { + return io::Error::new(ErrorKind::InvalidInput, + "nul byte found in provided data") + } + + match self.setup_io(default, true) { + Ok((_, theirs)) => unsafe { self.do_exec(theirs) }, + Err(e) => e, + } + } + + // And at this point we've reached a special time in the life of the + // child. The child must now be considered hamstrung and unable to + // do anything other than syscalls really. Consider the following + // scenario: + // + // 1. Thread A of process 1 grabs the malloc() mutex + // 2. Thread B of process 1 forks(), creating thread C + // 3. Thread C of process 2 then attempts to malloc() + // 4. The memory of process 2 is the same as the memory of + // process 1, so the mutex is locked. + // + // This situation looks a lot like deadlock, right? It turns out + // that this is what pthread_atfork() takes care of, which is + // presumably implemented across platforms. The first thing that + // threads to *before* forking is to do things like grab the malloc + // mutex, and then after the fork they unlock it. + // + // Despite this information, libnative's spawn has been witnessed to + // deadlock on both OSX and FreeBSD. I'm not entirely sure why, but + // all collected backtraces point at malloc/free traffic in the + // child spawned process. + // + // For this reason, the block of code below should contain 0 + // invocations of either malloc of free (or their related friends). 
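The `spawn()` above reports an exec failure back to the parent over the CLOEXEC pipe as an eight-byte message: the errno in big-endian order followed by the `b"NOEX"` footer, which the parent validates and decodes with the nested `combine` helper. A standalone round-trip of that encoding, not part of the patch; the errno value 22 is just an example.

```rust
// Standalone illustration of the 8-byte message the child writes on exec
// failure: 4 big-endian errno bytes followed by the b"NOEX" footer.
fn encode(errno: u32) -> [u8; 8] {
    [
        (errno >> 24) as u8,
        (errno >> 16) as u8,
        (errno >> 8) as u8,
        errno as u8,
        b'N', b'O', b'E', b'X',
    ]
}

// Mirrors the nested `combine` helper in spawn() above.
fn combine(arr: &[u8]) -> i32 {
    (((arr[0] as u32) << 24) |
     ((arr[1] as u32) << 16) |
     ((arr[2] as u32) << 8)  |
      (arr[3] as u32)) as i32
}

fn main() {
    let msg = encode(22); // 22 is only an example errno (EINVAL on most systems)
    assert_eq!(&msg[4..8], b"NOEX");     // footer validates the message
    assert_eq!(combine(&msg[0..4]), 22); // parent recovers the errno
}
```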
+ // + // As an example of not having malloc/free traffic, we don't close + // this file descriptor by dropping the FileDesc (which contains an + // allocation). Instead we just close it manually. This will never + // have the drop glue anyway because this code never returns (the + // child will either exec() or invoke syscall::exit) + unsafe fn do_exec(&mut self, stdio: ChildPipes) -> io::Error { + macro_rules! t { + ($e:expr) => (match $e { + Ok(e) => e, + Err(e) => return e, + }) + } + + if let Some(fd) = stdio.stderr.fd() { + let _ = syscall::close(2); + t!(cvt(syscall::dup(fd, &[]))); + let _ = syscall::close(fd); + } + if let Some(fd) = stdio.stdout.fd() { + let _ = syscall::close(1); + t!(cvt(syscall::dup(fd, &[]))); + let _ = syscall::close(fd); + } + if let Some(fd) = stdio.stdin.fd() { + let _ = syscall::close(0); + t!(cvt(syscall::dup(fd, &[]))); + let _ = syscall::close(fd); + } + + if let Some(g) = self.gid { + t!(cvt(syscall::setregid(g as usize, g as usize))); + } + if let Some(u) = self.uid { + t!(cvt(syscall::setreuid(u as usize, u as usize))); + } + if let Some(ref cwd) = self.cwd { + t!(cvt(syscall::chdir(cwd))); + } + + for callback in self.closures.iter_mut() { + t!(callback()); + } + + let mut args: Vec<[usize; 2]> = Vec::new(); + args.push([self.program.as_ptr() as usize, self.program.len()]); + for arg in self.args.iter() { + args.push([arg.as_ptr() as usize, arg.len()]); + } + + for (key, val) in self.env.iter() { + env::set_var(key, val); + } + + let program = if self.program.contains(':') || self.program.contains('/') { + self.program.to_owned() + } else { + let mut path_env = ::env::var("PATH").unwrap_or(".".to_string()); + + if ! path_env.ends_with('/') { + path_env.push('/'); + } + + path_env.push_str(&self.program); + + path_env + }; + + if let Err(err) = syscall::execve(&program, &args) { + io::Error::from_raw_os_error(err.errno as i32) + } else { + panic!("return from exec without err"); + } + } + + + fn setup_io(&self, default: Stdio, needs_stdin: bool) + -> io::Result<(StdioPipes, ChildPipes)> { + let null = Stdio::Null; + let default_stdin = if needs_stdin {&default} else {&null}; + let stdin = self.stdin.as_ref().unwrap_or(default_stdin); + let stdout = self.stdout.as_ref().unwrap_or(&default); + let stderr = self.stderr.as_ref().unwrap_or(&default); + let (their_stdin, our_stdin) = stdin.to_child_stdio(true)?; + let (their_stdout, our_stdout) = stdout.to_child_stdio(false)?; + let (their_stderr, our_stderr) = stderr.to_child_stdio(false)?; + let ours = StdioPipes { + stdin: our_stdin, + stdout: our_stdout, + stderr: our_stderr, + }; + let theirs = ChildPipes { + stdin: their_stdin, + stdout: their_stdout, + stderr: their_stderr, + }; + Ok((ours, theirs)) + } +} + +impl Stdio { + fn to_child_stdio(&self, readable: bool) + -> io::Result<(ChildStdio, Option)> { + match *self { + Stdio::Inherit => Ok((ChildStdio::Inherit, None)), + + // Make sure that the source descriptors are not an stdio + // descriptor, otherwise the order which we set the child's + // descriptors may blow away a descriptor which we are hoping to + // save. For example, suppose we want the child's stderr to be the + // parent's stdout, and the child's stdout to be the parent's + // stderr. No matter which we dup first, the second will get + // overwritten prematurely. 
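The comment above describes why low-numbered source descriptors must be duplicated before the child's stdio is rewired; the `Stdio::Fd` arm that follows applies the fix. A standalone illustration of the hazard, using a plain array as a stand-in for the fd table (the labels are invented for the example).

```rust
// Illustration only: swap the child's stdout and stderr relative to the parent
// using naive in-place "dup2" operations, and watch one of them get clobbered.
fn main() {
    // table[1] plays the role of fd 1 (stdout), table[2] of fd 2 (stderr).
    let mut table = ["", "parent stdout", "parent stderr"];

    // Naive order: dup2(2 -> 1), then dup2(1 -> 2).
    table[1] = table[2]; // fd 1 now refers to the old stderr -- fine so far
    table[2] = table[1]; // fd 2 copies fd 1, which was already overwritten
    assert_eq!(table[1], "parent stderr");
    assert_eq!(table[2], "parent stderr"); // "parent stdout" is lost

    // The fix in to_child_stdio: descriptors <= 2 are duplicated to a fresh,
    // higher-numbered fd first, so the later dup2 calls read stable copies.
}
```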
+ Stdio::Fd(ref fd) => { + if fd.raw() <= 2 { + Ok((ChildStdio::Owned(fd.duplicate()?), None)) + } else { + Ok((ChildStdio::Explicit(fd.raw()), None)) + } + } + + Stdio::MakePipe => { + let (reader, writer) = pipe::anon_pipe()?; + let (ours, theirs) = if readable { + (writer, reader) + } else { + (reader, writer) + }; + Ok((ChildStdio::Owned(theirs.into_fd()), Some(ours))) + } + + Stdio::Null => { + let mut opts = OpenOptions::new(); + opts.read(readable); + opts.write(!readable); + let fd = File::open(&Path::new("null:"), &opts)?; + Ok((ChildStdio::Owned(fd.into_fd()), None)) + } + } + } +} + +impl ChildStdio { + fn fd(&self) -> Option { + match *self { + ChildStdio::Inherit => None, + ChildStdio::Explicit(fd) => Some(fd), + ChildStdio::Owned(ref fd) => Some(fd.raw()), + } + } +} + +impl fmt::Debug for Command { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "{:?}", self.program)?; + for arg in &self.args { + write!(f, " {:?}", arg)?; + } + Ok(()) + } +} + +//////////////////////////////////////////////////////////////////////////////// +// Processes +//////////////////////////////////////////////////////////////////////////////// + +/// Unix exit statuses +#[derive(PartialEq, Eq, Clone, Copy, Debug)] +pub struct ExitStatus(i32); + +impl ExitStatus { + fn exited(&self) -> bool { + self.0 & 0x7F == 0 + } + + pub fn success(&self) -> bool { + self.code() == Some(0) + } + + pub fn code(&self) -> Option { + if self.exited() { + Some((self.0 >> 8) & 0xFF) + } else { + None + } + } + + pub fn signal(&self) -> Option { + if !self.exited() { + Some(self.0 & 0x7F) + } else { + None + } + } +} + +impl From for ExitStatus { + fn from(a: i32) -> ExitStatus { + ExitStatus(a) + } +} + +impl fmt::Display for ExitStatus { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + if let Some(code) = self.code() { + write!(f, "exit code: {}", code) + } else { + let signal = self.signal().unwrap(); + write!(f, "signal: {}", signal) + } + } +} + +/// The unique id of the process (this should never be negative). +pub struct Process { + pid: usize, + status: Option, +} + +impl Process { + pub fn id(&self) -> u32 { + self.pid as u32 + } + + pub fn kill(&mut self) -> io::Result<()> { + // If we've already waited on this process then the pid can be recycled + // and used for another process, and we probably shouldn't be killing + // random processes, so just return an error. + if self.status.is_some() { + Err(Error::new(ErrorKind::InvalidInput, + "invalid argument: can't kill an exited process")) + } else { + cvt(syscall::kill(self.pid, syscall::SIGKILL))?; + Ok(()) + } + } + + pub fn wait(&mut self) -> io::Result { + if let Some(status) = self.status { + return Ok(status) + } + let mut status = 0; + cvt(syscall::waitpid(self.pid, &mut status, 0))?; + self.status = Some(ExitStatus(status as i32)); + Ok(ExitStatus(status as i32)) + } +} diff --git a/src/libstd/sys/redox/rand.rs b/src/libstd/sys/redox/rand.rs new file mode 100644 index 0000000000..d7e4d09a9d --- /dev/null +++ b/src/libstd/sys/redox/rand.rs @@ -0,0 +1,40 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use io; +use libc; +use rand::Rng; + +pub struct OsRng; + +impl OsRng { + /// Create a new `OsRng`. 
+ pub fn new() -> io::Result { + Ok(OsRng) + } +} + +impl Rng for OsRng { + fn next_u32(&mut self) -> u32 { + self.next_u64() as u32 + } + fn next_u64(&mut self) -> u64 { + unsafe { libc::random() } + } + fn fill_bytes(&mut self, buf: &mut [u8]) { + for chunk in buf.chunks_mut(8) { + let mut rand: u64 = self.next_u64(); + for b in chunk.iter_mut() { + *b = rand as u8; + rand = rand >> 8; + } + } + } +} diff --git a/src/libstd/sys/redox/rwlock.rs b/src/libstd/sys/redox/rwlock.rs new file mode 100644 index 0000000000..d74b614ba4 --- /dev/null +++ b/src/libstd/sys/redox/rwlock.rs @@ -0,0 +1,61 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use super::mutex::Mutex; + +pub struct RWLock { + mutex: Mutex +} + +unsafe impl Send for RWLock {} +unsafe impl Sync for RWLock {} + +impl RWLock { + pub const fn new() -> RWLock { + RWLock { + mutex: Mutex::new() + } + } + + #[inline] + pub unsafe fn read(&self) { + self.mutex.lock(); + } + + #[inline] + pub unsafe fn try_read(&self) -> bool { + self.mutex.try_lock() + } + + #[inline] + pub unsafe fn write(&self) { + self.mutex.lock(); + } + + #[inline] + pub unsafe fn try_write(&self) -> bool { + self.mutex.try_lock() + } + + #[inline] + pub unsafe fn read_unlock(&self) { + self.mutex.unlock(); + } + + #[inline] + pub unsafe fn write_unlock(&self) { + self.mutex.unlock(); + } + + #[inline] + pub unsafe fn destroy(&self) { + self.mutex.destroy(); + } +} diff --git a/src/libstd/sys/redox/stack_overflow.rs b/src/libstd/sys/redox/stack_overflow.rs new file mode 100644 index 0000000000..760fe06c57 --- /dev/null +++ b/src/libstd/sys/redox/stack_overflow.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![cfg_attr(test, allow(dead_code))] + +pub struct Handler; + +impl Handler { + pub unsafe fn new() -> Handler { + Handler + } +} + +pub unsafe fn init() { + +} + +pub unsafe fn cleanup() { + +} diff --git a/src/libstd/sys/redox/stdio.rs b/src/libstd/sys/redox/stdio.rs new file mode 100644 index 0000000000..607eef051d --- /dev/null +++ b/src/libstd/sys/redox/stdio.rs @@ -0,0 +1,81 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
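The `OsRng::fill_bytes` loop in `rand.rs` above fills each 8-byte chunk of the output buffer from one `u64`, least-significant byte first. A standalone worked example of that byte splitting (the constant is an arbitrary example value, not anything the patch uses):

```rust
// Standalone sketch of the byte-splitting loop used by OsRng::fill_bytes above.
fn fill_from_u64(chunk: &mut [u8], mut rand: u64) {
    for b in chunk.iter_mut() {
        *b = rand as u8; // take the low byte...
        rand >>= 8;      // ...then shift the next one down
    }
}

fn main() {
    let mut buf = [0u8; 8];
    fill_from_u64(&mut buf, 0x0123_4567_89ab_cdef);
    assert_eq!(buf, [0xef, 0xcd, 0xab, 0x89, 0x67, 0x45, 0x23, 0x01]);
}
```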
+ +use io; +use sys::{cvt, syscall}; +use sys::fd::FileDesc; + +pub struct Stdin(()); +pub struct Stdout(()); +pub struct Stderr(()); + +impl Stdin { + pub fn new() -> io::Result { Ok(Stdin(())) } + + pub fn read(&self, data: &mut [u8]) -> io::Result { + let fd = FileDesc::new(0); + let ret = fd.read(data); + fd.into_raw(); + ret + } + + pub fn read_to_end(&self, buf: &mut Vec) -> io::Result { + let fd = FileDesc::new(0); + let ret = fd.read_to_end(buf); + fd.into_raw(); + ret + } +} + +impl Stdout { + pub fn new() -> io::Result { Ok(Stdout(())) } + + pub fn write(&self, data: &[u8]) -> io::Result { + let fd = FileDesc::new(1); + let ret = fd.write(data); + fd.into_raw(); + ret + } + + pub fn flush(&self) -> io::Result<()> { + cvt(syscall::fsync(1)).and(Ok(())) + } +} + +impl Stderr { + pub fn new() -> io::Result { Ok(Stderr(())) } + + pub fn write(&self, data: &[u8]) -> io::Result { + let fd = FileDesc::new(2); + let ret = fd.write(data); + fd.into_raw(); + ret + } + + pub fn flush(&self) -> io::Result<()> { + cvt(syscall::fsync(2)).and(Ok(())) + } +} + +// FIXME: right now this raw stderr handle is used in a few places because +// std::io::stderr_raw isn't exposed, but once that's exposed this impl +// should go away +impl io::Write for Stderr { + fn write(&mut self, data: &[u8]) -> io::Result { + Stderr::write(self, data) + } + + fn flush(&mut self) -> io::Result<()> { + Stderr::flush(self) + } +} + +pub const EBADF_ERR: i32 = ::sys::syscall::EBADF; +pub const STDIN_BUF_SIZE: usize = ::sys_common::io::DEFAULT_BUF_SIZE; diff --git a/src/libstd/sys/redox/thread.rs b/src/libstd/sys/redox/thread.rs new file mode 100644 index 0000000000..b2c0e285f0 --- /dev/null +++ b/src/libstd/sys/redox/thread.rs @@ -0,0 +1,91 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use alloc::boxed::FnBox; +use ffi::CStr; +use io; +use mem; +use sys_common::thread::start_thread; +use sys::{cvt, syscall}; +use time::Duration; + +pub struct Thread { + id: usize, +} + +// Some platforms may have pthread_t as a pointer in which case we still want +// a thread to be Send/Sync +unsafe impl Send for Thread {} +unsafe impl Sync for Thread {} + +impl Thread { + pub unsafe fn new<'a>(_stack: usize, p: Box) -> io::Result { + let p = box p; + + let id = cvt(syscall::clone(syscall::CLONE_VM | syscall::CLONE_FS | syscall::CLONE_FILES))?; + if id == 0 { + start_thread(&*p as *const _ as *mut _); + let _ = syscall::exit(0); + panic!("thread failed to exit"); + } else { + mem::forget(p); + Ok(Thread { id: id }) + } + } + + pub fn yield_now() { + let ret = syscall::sched_yield().expect("failed to sched_yield"); + debug_assert_eq!(ret, 0); + } + + pub fn set_name(_name: &CStr) { + + } + + pub fn sleep(dur: Duration) { + let mut secs = dur.as_secs(); + let mut nsecs = dur.subsec_nanos() as i32; + + // If we're awoken with a signal then the return value will be -1 and + // nanosleep will fill in `ts` with the remaining time. 
+ while secs > 0 || nsecs > 0 { + let req = syscall::TimeSpec { + tv_sec: secs as i64, + tv_nsec: nsecs, + }; + secs -= req.tv_sec as u64; + let mut rem = syscall::TimeSpec::default(); + if syscall::nanosleep(&req, &mut rem).is_err() { + secs += rem.tv_sec as u64; + nsecs = rem.tv_nsec; + } else { + nsecs = 0; + } + } + } + + pub fn join(self) { + let mut status = 0; + syscall::waitpid(self.id, &mut status, 0).unwrap(); + } + + pub fn id(&self) -> usize { self.id } + + pub fn into_id(self) -> usize { + let id = self.id; + mem::forget(self); + id + } +} + +pub mod guard { + pub unsafe fn current() -> Option { None } + pub unsafe fn init() -> Option { None } +} diff --git a/src/libstd/sys/redox/thread_local.rs b/src/libstd/sys/redox/thread_local.rs new file mode 100644 index 0000000000..abdd9ace79 --- /dev/null +++ b/src/libstd/sys/redox/thread_local.rs @@ -0,0 +1,66 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![allow(dead_code)] // not used on all platforms + +use collections::BTreeMap; +use ptr; +use sync::atomic::{AtomicUsize, ATOMIC_USIZE_INIT, Ordering}; + +pub type Key = usize; + +type Dtor = unsafe extern fn(*mut u8); + +static NEXT_KEY: AtomicUsize = ATOMIC_USIZE_INIT; + +static mut KEYS: *mut BTreeMap> = ptr::null_mut(); + +#[thread_local] +static mut LOCALS: *mut BTreeMap = ptr::null_mut(); + +unsafe fn keys() -> &'static mut BTreeMap> { + if KEYS == ptr::null_mut() { + KEYS = Box::into_raw(Box::new(BTreeMap::new())); + } + &mut *KEYS +} + +unsafe fn locals() -> &'static mut BTreeMap { + if LOCALS == ptr::null_mut() { + LOCALS = Box::into_raw(Box::new(BTreeMap::new())); + } + &mut *LOCALS +} + +#[inline] +pub unsafe fn create(dtor: Option) -> Key { + let key = NEXT_KEY.fetch_add(1, Ordering::SeqCst); + keys().insert(key, dtor); + key +} + +#[inline] +pub unsafe fn get(key: Key) -> *mut u8 { + if let Some(&entry) = locals().get(&key) { + entry + } else { + ptr::null_mut() + } +} + +#[inline] +pub unsafe fn set(key: Key, value: *mut u8) { + locals().insert(key, value); +} + +#[inline] +pub unsafe fn destroy(key: Key) { + keys().remove(&key); +} diff --git a/src/libstd/sys/redox/time.rs b/src/libstd/sys/redox/time.rs new file mode 100644 index 0000000000..dea406efe6 --- /dev/null +++ b/src/libstd/sys/redox/time.rs @@ -0,0 +1,198 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
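The `thread_local` module above keeps its key and per-thread value tables in `BTreeMap`s that are boxed on first use and reached through raw pointers. A self-contained miniature of that pattern, for illustration only: the real module keeps one `LOCALS` map per thread via `#[thread_local]`, whereas this sketch uses a single global map and is not thread-safe.

```rust
// Miniature of the lazily-allocated, raw-pointer-backed map pattern above.
use std::collections::BTreeMap;
use std::ptr;

static mut KEYS: *mut BTreeMap<usize, u32> = ptr::null_mut();

fn keys() -> &'static mut BTreeMap<usize, u32> {
    unsafe {
        if KEYS.is_null() {
            // First use: allocate the map and stash the raw pointer.
            KEYS = Box::into_raw(Box::new(BTreeMap::new()));
        }
        &mut *KEYS
    }
}

fn main() {
    keys().insert(1, 42);
    assert_eq!(keys().get(&1), Some(&42));
}
```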
+ +use cmp::Ordering; +use fmt; +use sys::{cvt, syscall}; +use time::Duration; + +const NSEC_PER_SEC: u64 = 1_000_000_000; + +#[derive(Copy, Clone)] +struct Timespec { + t: syscall::TimeSpec, +} + +impl Timespec { + fn sub_timespec(&self, other: &Timespec) -> Result { + if self >= other { + Ok(if self.t.tv_nsec >= other.t.tv_nsec { + Duration::new((self.t.tv_sec - other.t.tv_sec) as u64, + (self.t.tv_nsec - other.t.tv_nsec) as u32) + } else { + Duration::new((self.t.tv_sec - 1 - other.t.tv_sec) as u64, + self.t.tv_nsec as u32 + (NSEC_PER_SEC as u32) - + other.t.tv_nsec as u32) + }) + } else { + match other.sub_timespec(self) { + Ok(d) => Err(d), + Err(d) => Ok(d), + } + } + } + + fn add_duration(&self, other: &Duration) -> Timespec { + let secs = (self.t.tv_sec as i64).checked_add(other.as_secs() as i64); + let mut secs = secs.expect("overflow when adding duration to time"); + + // Nano calculations can't overflow because nanos are <1B which fit + // in a u32. + let mut nsec = other.subsec_nanos() + self.t.tv_nsec as u32; + if nsec >= NSEC_PER_SEC as u32 { + nsec -= NSEC_PER_SEC as u32; + secs = secs.checked_add(1).expect("overflow when adding \ + duration to time"); + } + Timespec { + t: syscall::TimeSpec { + tv_sec: secs as i64, + tv_nsec: nsec as i32, + }, + } + } + + fn sub_duration(&self, other: &Duration) -> Timespec { + let secs = (self.t.tv_sec as i64).checked_sub(other.as_secs() as i64); + let mut secs = secs.expect("overflow when subtracting duration \ + from time"); + + // Similar to above, nanos can't overflow. + let mut nsec = self.t.tv_nsec as i32 - other.subsec_nanos() as i32; + if nsec < 0 { + nsec += NSEC_PER_SEC as i32; + secs = secs.checked_sub(1).expect("overflow when subtracting \ + duration from time"); + } + Timespec { + t: syscall::TimeSpec { + tv_sec: secs as i64, + tv_nsec: nsec as i32, + }, + } + } +} + +impl PartialEq for Timespec { + fn eq(&self, other: &Timespec) -> bool { + self.t.tv_sec == other.t.tv_sec && self.t.tv_nsec == other.t.tv_nsec + } +} + +impl Eq for Timespec {} + +impl PartialOrd for Timespec { + fn partial_cmp(&self, other: &Timespec) -> Option { + Some(self.cmp(other)) + } +} + +impl Ord for Timespec { + fn cmp(&self, other: &Timespec) -> Ordering { + let me = (self.t.tv_sec, self.t.tv_nsec); + let other = (other.t.tv_sec, other.t.tv_nsec); + me.cmp(&other) + } +} + +#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)] +pub struct Instant { + t: Timespec, +} + +#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)] +pub struct SystemTime { + t: Timespec, +} + +pub const UNIX_EPOCH: SystemTime = SystemTime { + t: Timespec { + t: syscall::TimeSpec { + tv_sec: 0, + tv_nsec: 0, + }, + }, +}; + +impl Instant { + pub fn now() -> Instant { + Instant { t: now(syscall::CLOCK_MONOTONIC) } + } + + pub fn sub_instant(&self, other: &Instant) -> Duration { + self.t.sub_timespec(&other.t).unwrap_or_else(|_| { + panic!("other was less than the current instant") + }) + } + + pub fn add_duration(&self, other: &Duration) -> Instant { + Instant { t: self.t.add_duration(other) } + } + + pub fn sub_duration(&self, other: &Duration) -> Instant { + Instant { t: self.t.sub_duration(other) } + } +} + +impl fmt::Debug for Instant { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.debug_struct("Instant") + .field("tv_sec", &self.t.t.tv_sec) + .field("tv_nsec", &self.t.t.tv_nsec) + .finish() + } +} + +impl SystemTime { + pub fn now() -> SystemTime { + SystemTime { t: now(syscall::CLOCK_REALTIME) } + } + + pub fn sub_time(&self, other: &SystemTime) + -> 
Result { + self.t.sub_timespec(&other.t) + } + + pub fn add_duration(&self, other: &Duration) -> SystemTime { + SystemTime { t: self.t.add_duration(other) } + } + + pub fn sub_duration(&self, other: &Duration) -> SystemTime { + SystemTime { t: self.t.sub_duration(other) } + } +} + +impl From for SystemTime { + fn from(t: syscall::TimeSpec) -> SystemTime { + SystemTime { t: Timespec { t: t } } + } +} + +impl fmt::Debug for SystemTime { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.debug_struct("SystemTime") + .field("tv_sec", &self.t.t.tv_sec) + .field("tv_nsec", &self.t.t.tv_nsec) + .finish() + } +} + +pub type clock_t = usize; + +fn now(clock: clock_t) -> Timespec { + let mut t = Timespec { + t: syscall::TimeSpec { + tv_sec: 0, + tv_nsec: 0, + } + }; + cvt(syscall::clock_gettime(clock, &mut t.t)).unwrap(); + t +} diff --git a/src/libstd/sys/unix/args.rs b/src/libstd/sys/unix/args.rs index c04fd86367..0f447ff4ec 100644 --- a/src/libstd/sys/unix/args.rs +++ b/src/libstd/sys/unix/args.rs @@ -172,10 +172,23 @@ mod imp { extern { fn sel_registerName(name: *const libc::c_uchar) -> Sel; - fn objc_msgSend(obj: NsId, sel: Sel, ...) -> NsId; fn objc_getClass(class_name: *const libc::c_uchar) -> NsId; } + #[cfg(target_arch="aarch64")] + extern { + fn objc_msgSend(obj: NsId, sel: Sel) -> NsId; + #[link_name="objc_msgSend"] + fn objc_msgSend_ul(obj: NsId, sel: Sel, i: libc::c_ulong) -> NsId; + } + + #[cfg(not(target_arch="aarch64"))] + extern { + fn objc_msgSend(obj: NsId, sel: Sel, ...) -> NsId; + #[link_name="objc_msgSend"] + fn objc_msgSend_ul(obj: NsId, sel: Sel, ...) -> NsId; + } + #[link(name = "Foundation", kind = "framework")] #[link(name = "objc")] #[cfg(not(cargobuild))] @@ -199,7 +212,7 @@ mod imp { let cnt: usize = mem::transmute(objc_msgSend(args, count_sel)); for i in 0..cnt { - let tmp = objc_msgSend(args, object_at_sel, i); + let tmp = objc_msgSend_ul(args, object_at_sel, i as libc::c_ulong); let utf_c_str: *const libc::c_char = mem::transmute(objc_msgSend(tmp, utf8_sel)); let bytes = CStr::from_ptr(utf_c_str).to_bytes(); diff --git a/src/libstd/sys/unix/ext/fs.rs b/src/libstd/sys/unix/ext/fs.rs index fcfab05158..900f463fa8 100644 --- a/src/libstd/sys/unix/ext/fs.rs +++ b/src/libstd/sys/unix/ext/fs.rs @@ -21,7 +21,7 @@ use sys_common::{FromInner, AsInner, AsInnerMut}; use sys::platform::fs::MetadataExt as UnixMetadataExt; /// Unix-specific extensions to `File` -#[unstable(feature = "file_offset", issue = "35918")] +#[stable(feature = "file_offset", since = "1.15.0")] pub trait FileExt { /// Reads a number of bytes starting from a given offset. /// @@ -34,7 +34,7 @@ pub trait FileExt { /// /// Note that similar to `File::read`, it is not an error to return with a /// short read. - #[unstable(feature = "file_offset", issue = "35918")] + #[stable(feature = "file_offset", since = "1.15.0")] fn read_at(&self, buf: &mut [u8], offset: u64) -> io::Result; /// Writes a number of bytes starting from a given offset. @@ -51,11 +51,11 @@ pub trait FileExt { /// /// Note that similar to `File::write`, it is not an error to return a /// short write. 
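The hunk above stabilizes the Unix `FileExt` trait. For reference, a small usage example of the positioned reads it provides; the path is illustrative and any readable file works.

```rust
use std::fs::File;
use std::os::unix::fs::FileExt;

fn main() {
    let file = File::open("/etc/hostname").unwrap(); // illustrative path
    let mut buf = [0u8; 16];
    // read_at does not move the file cursor and, like File::read, may return
    // a short read.
    let n = file.read_at(&mut buf, 0).unwrap();
    println!("{}", String::from_utf8_lossy(&buf[..n]));
}
```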
- #[unstable(feature = "file_offset", issue = "35918")] + #[stable(feature = "file_offset", since = "1.15.0")] fn write_at(&self, buf: &[u8], offset: u64) -> io::Result; } -#[unstable(feature = "file_offset", issue = "35918")] +#[stable(feature = "file_offset", since = "1.15.0")] impl FileExt for fs::File { fn read_at(&self, buf: &mut [u8], offset: u64) -> io::Result { self.as_inner().read_at(buf, offset) diff --git a/src/libstd/sys/unix/ext/io.rs b/src/libstd/sys/unix/ext/io.rs index 4163ede46a..296235e173 100644 --- a/src/libstd/sys/unix/ext/io.rs +++ b/src/libstd/sys/unix/ext/io.rs @@ -43,7 +43,7 @@ pub trait AsRawFd { /// descriptor. #[stable(feature = "from_raw_os", since = "1.1.0")] pub trait FromRawFd { - /// Constructs a new instances of `Self` from the given raw file + /// Constructs a new instance of `Self` from the given raw file /// descriptor. /// /// This function **consumes ownership** of the specified file diff --git a/src/libstd/sys/unix/ext/mod.rs b/src/libstd/sys/unix/ext/mod.rs index b2483f4e20..1be9f11b92 100644 --- a/src/libstd/sys/unix/ext/mod.rs +++ b/src/libstd/sys/unix/ext/mod.rs @@ -50,7 +50,7 @@ pub mod prelude { pub use super::fs::{PermissionsExt, OpenOptionsExt, MetadataExt, FileTypeExt}; #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] pub use super::fs::DirEntryExt; - #[doc(no_inline)] #[unstable(feature = "file_offset", issue = "35918")] + #[doc(no_inline)] #[stable(feature = "file_offset", since = "1.15.0")] pub use super::fs::FileExt; #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] pub use super::thread::JoinHandleExt; diff --git a/src/libstd/sys/unix/ext/process.rs b/src/libstd/sys/unix/ext/process.rs index 3a7c59d4e6..585dcbb9a3 100644 --- a/src/libstd/sys/unix/ext/process.rs +++ b/src/libstd/sys/unix/ext/process.rs @@ -56,7 +56,7 @@ pub trait CommandExt { /// When this closure is run, aspects such as the stdio file descriptors and /// working directory have successfully been changed, so output to these /// locations may not appear where intended. 
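The doc comment above describes the environment the `before_exec` closure runs in (after `fork`, before `exec`, with stdio and cwd already rewired). A small usage example of the method being stabilized in this hunk; the `id` command is only an illustration.

```rust
use std::process::Command;
use std::os::unix::process::CommandExt;

fn main() {
    let output = Command::new("id")
        .before_exec(|| {
            // Runs in the forked child just before exec; keep it minimal and
            // async-signal-safe for the reasons discussed elsewhere in this patch.
            Ok(())
        })
        .output()
        .expect("failed to run `id`");
    println!("{}", String::from_utf8_lossy(&output.stdout));
}
```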
- #[unstable(feature = "process_exec", issue = "31398")] + #[stable(feature = "process_exec", since = "1.15.0")] fn before_exec(&mut self, f: F) -> &mut process::Command where F: FnMut() -> io::Result<()> + Send + Sync + 'static; diff --git a/src/libstd/sys/unix/fd.rs b/src/libstd/sys/unix/fd.rs index 41bb96fed1..61eb60da48 100644 --- a/src/libstd/sys/unix/fd.rs +++ b/src/libstd/sys/unix/fd.rs @@ -110,6 +110,7 @@ impl FileDesc { #[cfg(not(any(target_env = "newlib", target_os = "solaris", target_os = "emscripten", + target_os = "fuchsia", target_os = "haiku")))] pub fn set_cloexec(&self) -> io::Result<()> { unsafe { @@ -120,6 +121,7 @@ impl FileDesc { #[cfg(any(target_env = "newlib", target_os = "solaris", target_os = "emscripten", + target_os = "fuchsia", target_os = "haiku"))] pub fn set_cloexec(&self) -> io::Result<()> { unsafe { diff --git a/src/libstd/sys/unix/fs.rs b/src/libstd/sys/unix/fs.rs index 0b43fd2ac8..9ee0458b5d 100644 --- a/src/libstd/sys/unix/fs.rs +++ b/src/libstd/sys/unix/fs.rs @@ -526,6 +526,11 @@ impl File { pub fn fd(&self) -> &FileDesc { &self.0 } pub fn into_fd(self) -> FileDesc { self.0 } + + pub fn set_permissions(&self, perm: FilePermissions) -> io::Result<()> { + cvt_r(|| unsafe { libc::fchmod(self.0.raw(), perm.mode) })?; + Ok(()) + } } impl DirBuilder { diff --git a/src/libstd/sys/unix/os.rs b/src/libstd/sys/unix/os.rs index e591f25cac..6992a17832 100644 --- a/src/libstd/sys/unix/os.rs +++ b/src/libstd/sys/unix/os.rs @@ -78,7 +78,7 @@ pub fn errno() -> i32 { static errno: c_int; } - errno as i32 + unsafe { errno as i32 } } /// Gets a detailed string description for the given error number. @@ -193,7 +193,7 @@ impl StdError for JoinPathsError { fn description(&self) -> &str { "failed to join paths" } } -#[cfg(target_os = "freebsd")] +#[cfg(any(target_os = "freebsd", target_os = "dragonfly"))] pub fn current_exe() -> io::Result { unsafe { let mut mib = [libc::CTL_KERN as c_int, @@ -218,11 +218,6 @@ pub fn current_exe() -> io::Result { } } -#[cfg(target_os = "dragonfly")] -pub fn current_exe() -> io::Result { - ::fs::read_link("/proc/curproc/file") -} - #[cfg(target_os = "netbsd")] pub fn current_exe() -> io::Result { ::fs::read_link("/proc/curproc/exe") diff --git a/src/libstd/sys/unix/pipe.rs b/src/libstd/sys/unix/pipe.rs index ffe8032e46..a8ed415b7f 100644 --- a/src/libstd/sys/unix/pipe.rs +++ b/src/libstd/sys/unix/pipe.rs @@ -77,6 +77,7 @@ pub fn read2(p1: AnonPipe, v1: &mut Vec, p2: AnonPipe, v2: &mut Vec) -> io::Result<()> { + // Set both pipes into nonblocking mode as we're gonna be reading from both // in the `select` loop below, and we wouldn't want one to block the other! let p1 = p1.into_fd(); diff --git a/src/libstd/sys/unix/process/magenta.rs b/src/libstd/sys/unix/process/magenta.rs new file mode 100644 index 0000000000..319fbce35c --- /dev/null +++ b/src/libstd/sys/unix/process/magenta.rs @@ -0,0 +1,191 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
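Earlier in this patch, `sys::unix::fs::File` gains a `set_permissions` method implemented with `fchmod`, which backs the std-level `fs::File::set_permissions`. A short example of that entry point; the path and mode are illustrative.

```rust
use std::fs::File;
use std::os::unix::fs::PermissionsExt;

fn main() {
    let file = File::create("/tmp/fchmod-example").unwrap(); // illustrative path
    let mut perms = file.metadata().unwrap().permissions();
    perms.set_mode(0o600); // owner read/write only
    // Applies fchmod to the open descriptor rather than chmod on the path.
    file.set_permissions(perms).unwrap();
}
```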
+ +#![allow(non_camel_case_types)] + +use convert::TryInto; +use io; +use os::raw::c_char; +use u64; + +use libc::{c_int, c_void}; + +pub type mx_handle_t = i32; +pub type mx_vaddr_t = usize; +pub type mx_rights_t = u32; +pub type mx_status_t = i32; + +pub type mx_size_t = usize; +pub type mx_ssize_t = isize; + +pub const MX_HANDLE_INVALID: mx_handle_t = 0; + +pub type mx_time_t = u64; +pub const MX_TIME_INFINITE : mx_time_t = u64::MAX; + +pub type mx_signals_t = u32; + +pub const MX_OBJECT_SIGNAL_3 : mx_signals_t = 1 << 3; + +pub const MX_TASK_TERMINATED : mx_signals_t = MX_OBJECT_SIGNAL_3; + +pub const MX_RIGHT_SAME_RIGHTS : mx_rights_t = 1 << 31; + +pub type mx_object_info_topic_t = u32; + +pub const MX_INFO_PROCESS : mx_object_info_topic_t = 3; + +pub const MX_HND_TYPE_JOB: u32 = 6; + +pub fn mx_cvt(t: T) -> io::Result where T: TryInto+Copy { + if let Ok(status) = TryInto::try_into(t) { + if status < 0 { + Err(io::Error::from_raw_os_error(status)) + } else { + Ok(t) + } + } else { + Err(io::Error::last_os_error()) + } +} + +// Safe wrapper around mx_handle_t +pub struct Handle { + raw: mx_handle_t, +} + +impl Handle { + pub fn new(raw: mx_handle_t) -> Handle { + Handle { + raw: raw, + } + } + + pub fn raw(&self) -> mx_handle_t { + self.raw + } +} + +impl Drop for Handle { + fn drop(&mut self) { + unsafe { mx_cvt(mx_handle_close(self.raw)).expect("Failed to close mx_handle_t"); } + } +} + +// Common MX_INFO header +#[derive(Default)] +#[repr(C)] +pub struct mx_info_header_t { + pub topic: u32, // identifies the info struct + pub avail_topic_size: u16, // “native” size of the struct + pub topic_size: u16, // size of the returned struct (<=topic_size) + pub avail_count: u32, // number of records the kernel has + pub count: u32, // number of records returned (limited by buffer size) +} + +#[derive(Default)] +#[repr(C)] +pub struct mx_record_process_t { + pub return_code: c_int, +} + +// Returned for topic MX_INFO_PROCESS +#[derive(Default)] +#[repr(C)] +pub struct mx_info_process_t { + pub hdr: mx_info_header_t, + pub rec: mx_record_process_t, +} + +extern { + pub fn mx_task_kill(handle: mx_handle_t) -> mx_status_t; + + pub fn mx_handle_close(handle: mx_handle_t) -> mx_status_t; + + pub fn mx_handle_duplicate(handle: mx_handle_t, rights: mx_rights_t, + out: *const mx_handle_t) -> mx_handle_t; + + pub fn mx_handle_wait_one(handle: mx_handle_t, signals: mx_signals_t, timeout: mx_time_t, + pending: *mut mx_signals_t) -> mx_status_t; + + pub fn mx_object_get_info(handle: mx_handle_t, topic: u32, buffer: *mut c_void, + buffer_size: mx_size_t, actual_size: *mut mx_size_t, + avail: *mut mx_size_t) -> mx_status_t; +} + +// Handle Info entries associate a type and optional +// argument with each handle included in the process +// arguments message. 
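The comment above describes how a handle-info word is laid out, and the `mx_hnd_info` helper defined just below implements it: handle type in the low 16 bits, argument in the high 16 bits. A standalone check of that packing (the argument value 1 is only an example):

```rust
// Standalone illustration of the handle-info packing described above.
fn mx_hnd_info(hnd_type: u32, arg: u32) -> u32 {
    (hnd_type & 0xFFFF) | ((arg & 0xFFFF) << 16)
}

fn main() {
    const MX_HND_TYPE_JOB: u32 = 6;
    let packed = mx_hnd_info(MX_HND_TYPE_JOB, 1);
    assert_eq!(packed, 0x0001_0006);
    assert_eq!(packed & 0xFFFF, MX_HND_TYPE_JOB); // low half: handle type
    assert_eq!(packed >> 16, 1);                  // high half: argument
}
```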
+pub fn mx_hnd_info(hnd_type: u32, arg: u32) -> u32 { + (hnd_type & 0xFFFF) | ((arg & 0xFFFF) << 16) +} + +extern { + pub fn mxio_get_startup_handle(id: u32) -> mx_handle_t; +} + +// From `enum special_handles` in system/ulib/launchpad/launchpad.c +#[allow(unused)] pub const HND_LOADER_SVC: usize = 0; +// HND_EXEC_VMO = 1 +#[allow(unused)] pub const HND_SPECIAL_COUNT: usize = 2; + +#[repr(C)] +pub struct launchpad_t { + argc: u32, + envc: u32, + args: *const c_char, + args_len: usize, + env: *const c_char, + env_len: usize, + + handles: *mut mx_handle_t, + handles_info: *mut u32, + handle_count: usize, + handle_alloc: usize, + + entry: mx_vaddr_t, + base: mx_vaddr_t, + vdso_base: mx_vaddr_t, + + stack_size: usize, + + special_handles: [mx_handle_t; HND_SPECIAL_COUNT], + loader_message: bool, +} + +extern { + pub fn launchpad_create(job: mx_handle_t, name: *const c_char, + lp: *mut *mut launchpad_t) -> mx_status_t; + + pub fn launchpad_start(lp: *mut launchpad_t) -> mx_status_t; + + pub fn launchpad_destroy(lp: *mut launchpad_t); + + pub fn launchpad_arguments(lp: *mut launchpad_t, argc: c_int, + argv: *const *const c_char) -> mx_status_t; + + pub fn launchpad_environ(lp: *mut launchpad_t, envp: *const *const c_char) -> mx_status_t; + + pub fn launchpad_clone_mxio_root(lp: *mut launchpad_t) -> mx_status_t; + + pub fn launchpad_clone_mxio_cwd(lp: *mut launchpad_t) -> mx_status_t; + + pub fn launchpad_clone_fd(lp: *mut launchpad_t, fd: c_int, target_fd: c_int) -> mx_status_t; + + pub fn launchpad_transfer_fd(lp: *mut launchpad_t, fd: c_int, target_fd: c_int) -> mx_status_t; + + pub fn launchpad_elf_load(lp: *mut launchpad_t, vmo: mx_handle_t) -> mx_status_t; + + pub fn launchpad_add_vdso_vmo(lp: *mut launchpad_t) -> mx_status_t; + + pub fn launchpad_load_vdso(lp: *mut launchpad_t, vmo: mx_handle_t) -> mx_status_t; + + pub fn launchpad_vmo_from_file(filename: *const c_char) -> mx_handle_t; +} diff --git a/src/libstd/sys/unix/process/mod.rs b/src/libstd/sys/unix/process/mod.rs new file mode 100644 index 0000000000..b50384d8ee --- /dev/null +++ b/src/libstd/sys/unix/process/mod.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +pub use self::process_common::{Command, ExitStatus, Stdio, StdioPipes}; +pub use self::process_inner::Process; + +mod process_common; +#[cfg(not(target_os = "fuchsia"))] +#[path = "process_unix.rs"] +mod process_inner; +#[cfg(target_os = "fuchsia")] +#[path = "process_fuchsia.rs"] +mod process_inner; +#[cfg(target_os = "fuchsia")] +mod magenta; diff --git a/src/libstd/sys/unix/process.rs b/src/libstd/sys/unix/process/process_common.rs similarity index 59% rename from src/libstd/sys/unix/process.rs rename to src/libstd/sys/unix/process/process_common.rs index dafc11d9cc..3497b26634 100644 --- a/src/libstd/sys/unix/process.rs +++ b/src/libstd/sys/unix/process/process_common.rs @@ -1,4 +1,4 @@ -// Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -14,14 +14,12 @@ use collections::hash_map::{HashMap, Entry}; use env; use ffi::{OsString, OsStr, CString, CStr}; use fmt; -use io::{self, Error, ErrorKind}; -use libc::{self, pid_t, c_int, gid_t, uid_t, c_char}; -use mem; +use io; +use libc::{self, c_int, gid_t, uid_t, c_char}; use ptr; use sys::fd::FileDesc; use sys::fs::{File, OpenOptions}; use sys::pipe::{self, AnonPipe}; -use sys::{self, cvt, cvt_r}; //////////////////////////////////////////////////////////////////////////////// // Command @@ -71,13 +69,13 @@ pub struct StdioPipes { // passed to do_exec() with configuration of what the child stdio should look // like -struct ChildPipes { - stdin: ChildStdio, - stdout: ChildStdio, - stderr: ChildStdio, +pub struct ChildPipes { + pub stdin: ChildStdio, + pub stdout: ChildStdio, + pub stderr: ChildStdio, } -enum ChildStdio { +pub enum ChildStdio { Inherit, Explicit(c_int), Owned(FileDesc), @@ -195,6 +193,33 @@ impl Command { self.gid = Some(id); } + pub fn saw_nul(&self) -> bool { + self.saw_nul + } + pub fn get_envp(&self) -> &Option> { + &self.envp + } + pub fn get_argv(&self) -> &Vec<*const c_char> { + &self.argv + } + + #[allow(dead_code)] + pub fn get_cwd(&self) -> &Option { + &self.cwd + } + #[allow(dead_code)] + pub fn get_uid(&self) -> Option { + self.uid + } + #[allow(dead_code)] + pub fn get_gid(&self) -> Option { + self.gid + } + + pub fn get_closures(&mut self) -> &mut Vec io::Result<()> + Send + Sync>> { + &mut self.closures + } + pub fn before_exec(&mut self, f: Box io::Result<()> + Send + Sync>) { self.closures.push(f); @@ -203,200 +228,16 @@ impl Command { pub fn stdin(&mut self, stdin: Stdio) { self.stdin = Some(stdin); } + pub fn stdout(&mut self, stdout: Stdio) { self.stdout = Some(stdout); } + pub fn stderr(&mut self, stderr: Stdio) { self.stderr = Some(stderr); } - pub fn spawn(&mut self, default: Stdio, needs_stdin: bool) - -> io::Result<(Process, StdioPipes)> { - const CLOEXEC_MSG_FOOTER: &'static [u8] = b"NOEX"; - - if self.saw_nul { - return Err(io::Error::new(ErrorKind::InvalidInput, - "nul byte found in provided data")); - } - - let (ours, theirs) = self.setup_io(default, needs_stdin)?; - let (input, output) = sys::pipe::anon_pipe()?; - - let pid = unsafe { - match cvt(libc::fork())? { - 0 => { - drop(input); - let err = self.do_exec(theirs); - let errno = err.raw_os_error().unwrap_or(libc::EINVAL) as u32; - let bytes = [ - (errno >> 24) as u8, - (errno >> 16) as u8, - (errno >> 8) as u8, - (errno >> 0) as u8, - CLOEXEC_MSG_FOOTER[0], CLOEXEC_MSG_FOOTER[1], - CLOEXEC_MSG_FOOTER[2], CLOEXEC_MSG_FOOTER[3] - ]; - // pipe I/O up to PIPE_BUF bytes should be atomic, and then - // we want to be sure we *don't* run at_exit destructors as - // we're being torn down regardless - assert!(output.write(&bytes).is_ok()); - libc::_exit(1) - } - n => n, - } - }; - - let mut p = Process { pid: pid, status: None }; - drop(output); - let mut bytes = [0; 8]; - - // loop to handle EINTR - loop { - match input.read(&mut bytes) { - Ok(0) => return Ok((p, ours)), - Ok(8) => { - assert!(combine(CLOEXEC_MSG_FOOTER) == combine(&bytes[4.. 8]), - "Validation on the CLOEXEC pipe failed: {:?}", bytes); - let errno = combine(&bytes[0.. 4]); - assert!(p.wait().is_ok(), - "wait() should either return Ok or panic"); - return Err(Error::from_raw_os_error(errno)) - } - Err(ref e) if e.kind() == ErrorKind::Interrupted => {} - Err(e) => { - assert!(p.wait().is_ok(), - "wait() should either return Ok or panic"); - panic!("the CLOEXEC pipe failed: {:?}", e) - }, - Ok(..) 
=> { // pipe I/O up to PIPE_BUF bytes should be atomic - assert!(p.wait().is_ok(), - "wait() should either return Ok or panic"); - panic!("short read on the CLOEXEC pipe") - } - } - } - - fn combine(arr: &[u8]) -> i32 { - let a = arr[0] as u32; - let b = arr[1] as u32; - let c = arr[2] as u32; - let d = arr[3] as u32; - - ((a << 24) | (b << 16) | (c << 8) | (d << 0)) as i32 - } - } - - pub fn exec(&mut self, default: Stdio) -> io::Error { - if self.saw_nul { - return io::Error::new(ErrorKind::InvalidInput, - "nul byte found in provided data") - } - - match self.setup_io(default, true) { - Ok((_, theirs)) => unsafe { self.do_exec(theirs) }, - Err(e) => e, - } - } - - // And at this point we've reached a special time in the life of the - // child. The child must now be considered hamstrung and unable to - // do anything other than syscalls really. Consider the following - // scenario: - // - // 1. Thread A of process 1 grabs the malloc() mutex - // 2. Thread B of process 1 forks(), creating thread C - // 3. Thread C of process 2 then attempts to malloc() - // 4. The memory of process 2 is the same as the memory of - // process 1, so the mutex is locked. - // - // This situation looks a lot like deadlock, right? It turns out - // that this is what pthread_atfork() takes care of, which is - // presumably implemented across platforms. The first thing that - // threads to *before* forking is to do things like grab the malloc - // mutex, and then after the fork they unlock it. - // - // Despite this information, libnative's spawn has been witnessed to - // deadlock on both OSX and FreeBSD. I'm not entirely sure why, but - // all collected backtraces point at malloc/free traffic in the - // child spawned process. - // - // For this reason, the block of code below should contain 0 - // invocations of either malloc of free (or their related friends). - // - // As an example of not having malloc/free traffic, we don't close - // this file descriptor by dropping the FileDesc (which contains an - // allocation). Instead we just close it manually. This will never - // have the drop glue anyway because this code never returns (the - // child will either exec() or invoke libc::exit) - unsafe fn do_exec(&mut self, stdio: ChildPipes) -> io::Error { - macro_rules! t { - ($e:expr) => (match $e { - Ok(e) => e, - Err(e) => return e, - }) - } - - if let Some(fd) = stdio.stdin.fd() { - t!(cvt_r(|| libc::dup2(fd, libc::STDIN_FILENO))); - } - if let Some(fd) = stdio.stdout.fd() { - t!(cvt_r(|| libc::dup2(fd, libc::STDOUT_FILENO))); - } - if let Some(fd) = stdio.stderr.fd() { - t!(cvt_r(|| libc::dup2(fd, libc::STDERR_FILENO))); - } - - if let Some(u) = self.gid { - t!(cvt(libc::setgid(u as gid_t))); - } - if let Some(u) = self.uid { - // When dropping privileges from root, the `setgroups` call - // will remove any extraneous groups. If we don't call this, - // then even though our uid has dropped, we may still have - // groups that enable us to do super-user things. This will - // fail if we aren't root, so don't bother checking the - // return value, this is just done as an optimistic - // privilege dropping function. - let _ = libc::setgroups(0, ptr::null()); - - t!(cvt(libc::setuid(u as uid_t))); - } - if let Some(ref cwd) = self.cwd { - t!(cvt(libc::chdir(cwd.as_ptr()))); - } - if let Some(ref envp) = self.envp { - *sys::os::environ() = envp.as_ptr(); - } - - // NaCl has no signal support. 
- if cfg!(not(any(target_os = "nacl", target_os = "emscripten"))) { - // Reset signal handling so the child process starts in a - // standardized state. libstd ignores SIGPIPE, and signal-handling - // libraries often set a mask. Child processes inherit ignored - // signals and the signal mask from their parent, but most - // UNIX programs do not reset these things on their own, so we - // need to clean things up now to avoid confusing the program - // we're about to run. - let mut set: libc::sigset_t = mem::uninitialized(); - t!(cvt(libc::sigemptyset(&mut set))); - t!(cvt(libc::pthread_sigmask(libc::SIG_SETMASK, &set, - ptr::null_mut()))); - let ret = super::signal(libc::SIGPIPE, libc::SIG_DFL); - if ret == libc::SIG_ERR { - return io::Error::last_os_error() - } - } - - for callback in self.closures.iter_mut() { - t!(callback()); - } - - libc::execvp(self.argv[0], self.argv.as_ptr()); - io::Error::last_os_error() - } - - - fn setup_io(&self, default: Stdio, needs_stdin: bool) + pub fn setup_io(&self, default: Stdio, needs_stdin: bool) -> io::Result<(StdioPipes, ChildPipes)> { let null = Stdio::Null; let default_stdin = if needs_stdin {&default} else {&null}; @@ -428,10 +269,12 @@ fn os2c(s: &OsStr, saw_nul: &mut bool) -> CString { } impl Stdio { - fn to_child_stdio(&self, readable: bool) + pub fn to_child_stdio(&self, readable: bool) -> io::Result<(ChildStdio, Option)> { match *self { - Stdio::Inherit => Ok((ChildStdio::Inherit, None)), + Stdio::Inherit => { + Ok((ChildStdio::Inherit, None)) + }, // Make sure that the source descriptors are not an stdio // descriptor, otherwise the order which we set the child's @@ -473,7 +316,7 @@ impl Stdio { } impl ChildStdio { - fn fd(&self) -> Option { + pub fn fd(&self) -> Option { match *self { ChildStdio::Inherit => None, ChildStdio::Explicit(fd) => Some(fd), @@ -504,15 +347,15 @@ impl fmt::Debug for Command { } } -//////////////////////////////////////////////////////////////////////////////// -// Processes -//////////////////////////////////////////////////////////////////////////////// - /// Unix exit statuses #[derive(PartialEq, Eq, Clone, Copy, Debug)] pub struct ExitStatus(c_int); impl ExitStatus { + pub fn new(status: c_int) -> ExitStatus { + ExitStatus(status) + } + fn exited(&self) -> bool { unsafe { libc::WIFEXITED(self.0) } } @@ -555,40 +398,6 @@ impl fmt::Display for ExitStatus { } } -/// The unique id of the process (this should never be negative). -pub struct Process { - pid: pid_t, - status: Option, -} - -impl Process { - pub fn id(&self) -> u32 { - self.pid as u32 - } - - pub fn kill(&mut self) -> io::Result<()> { - // If we've already waited on this process then the pid can be recycled - // and used for another process, and we probably shouldn't be killing - // random processes, so just return an error. 
- if self.status.is_some() { - Err(Error::new(ErrorKind::InvalidInput, - "invalid argument: can't kill an exited process")) - } else { - cvt(unsafe { libc::kill(self.pid, libc::SIGKILL) }).map(|_| ()) - } - } - - pub fn wait(&mut self) -> io::Result { - if let Some(status) = self.status { - return Ok(status) - } - let mut status = 0 as c_int; - cvt_r(|| unsafe { libc::waitpid(self.pid, &mut status, 0) })?; - self.status = Some(ExitStatus(status)); - Ok(ExitStatus(status)) - } -} - #[cfg(all(test, not(target_os = "emscripten")))] mod tests { use super::*; diff --git a/src/libstd/sys/unix/process/process_fuchsia.rs b/src/libstd/sys/unix/process/process_fuchsia.rs new file mode 100644 index 0000000000..f0a42b1279 --- /dev/null +++ b/src/libstd/sys/unix/process/process_fuchsia.rs @@ -0,0 +1,174 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use io; +use libc; +use mem; +use ptr; + +use sys::process::magenta::{Handle, launchpad_t, mx_handle_t}; +use sys::process::process_common::*; + +//////////////////////////////////////////////////////////////////////////////// +// Command +//////////////////////////////////////////////////////////////////////////////// + +impl Command { + pub fn spawn(&mut self, default: Stdio, needs_stdin: bool) + -> io::Result<(Process, StdioPipes)> { + if self.saw_nul() { + return Err(io::Error::new(io::ErrorKind::InvalidInput, + "nul byte found in provided data")); + } + + let (ours, theirs) = self.setup_io(default, needs_stdin)?; + + let (launchpad, process_handle) = unsafe { self.do_exec(theirs)? 
}; + + Ok((Process { launchpad: launchpad, handle: Handle::new(process_handle) }, ours)) + } + + pub fn exec(&mut self, default: Stdio) -> io::Error { + if self.saw_nul() { + return io::Error::new(io::ErrorKind::InvalidInput, + "nul byte found in provided data") + } + + match self.setup_io(default, true) { + Ok((_, _)) => { + // FIXME: This is tough because we don't support the exec syscalls + unimplemented!(); + }, + Err(e) => e, + } + } + + unsafe fn do_exec(&mut self, stdio: ChildPipes) + -> io::Result<(*mut launchpad_t, mx_handle_t)> { + use sys::process::magenta::*; + + let job_handle = mxio_get_startup_handle(mx_hnd_info(MX_HND_TYPE_JOB, 0)); + let envp = match *self.get_envp() { + Some(ref envp) => envp.as_ptr(), + None => ptr::null(), + }; + + // To make sure launchpad_destroy gets called on the launchpad if this function fails + struct LaunchpadDestructor(*mut launchpad_t); + impl Drop for LaunchpadDestructor { + fn drop(&mut self) { unsafe { launchpad_destroy(self.0); } } + } + + let mut launchpad: *mut launchpad_t = ptr::null_mut(); + let launchpad_destructor = LaunchpadDestructor(launchpad); + + // Duplicate the job handle + let mut job_copy: mx_handle_t = MX_HANDLE_INVALID; + mx_cvt(mx_handle_duplicate(job_handle, MX_RIGHT_SAME_RIGHTS, &mut job_copy))?; + // Create a launchpad + mx_cvt(launchpad_create(job_copy, self.get_argv()[0], &mut launchpad))?; + // Set the process argv + mx_cvt(launchpad_arguments(launchpad, self.get_argv().len() as i32 - 1, + self.get_argv().as_ptr()))?; + // Setup the environment vars + mx_cvt(launchpad_environ(launchpad, envp))?; + mx_cvt(launchpad_add_vdso_vmo(launchpad))?; + mx_cvt(launchpad_clone_mxio_root(launchpad))?; + // Load the executable + mx_cvt(launchpad_elf_load(launchpad, launchpad_vmo_from_file(self.get_argv()[0])))?; + mx_cvt(launchpad_load_vdso(launchpad, MX_HANDLE_INVALID))?; + mx_cvt(launchpad_clone_mxio_cwd(launchpad))?; + + // Clone stdin, stdout, and stderr + if let Some(fd) = stdio.stdin.fd() { + launchpad_transfer_fd(launchpad, fd, 0); + } else { + launchpad_clone_fd(launchpad, 0, 0); + } + if let Some(fd) = stdio.stdout.fd() { + launchpad_transfer_fd(launchpad, fd, 1); + } else { + launchpad_clone_fd(launchpad, 1, 1); + } + if let Some(fd) = stdio.stderr.fd() { + launchpad_transfer_fd(launchpad, fd, 2); + } else { + launchpad_clone_fd(launchpad, 2, 2); + } + + // We don't want FileDesc::drop to be called on any stdio. It would close their fds. The + // fds will be closed once the child process finishes. 
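The comment above explains why the child's stdio `FileDesc`s must not run their destructors here, and the `mem::forget(stdio)` call that follows is what prevents that. A tiny standalone illustration of the effect of `mem::forget` (the `Noisy` type is invented for the example):

```rust
// Forgetting a value skips its Drop impl, so the wrapped resource is not
// released at this point.
use std::mem;

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let a = Noisy("a");
    let b = Noisy("b");
    drop(a);        // prints "dropping a"
    mem::forget(b); // prints nothing: b's destructor never runs
}
```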
+ mem::forget(stdio); + + for callback in self.get_closures().iter_mut() { + callback()?; + } + + let process_handle = mx_cvt(launchpad_start(launchpad))?; + + // Successfully started the launchpad + mem::forget(launchpad_destructor); + + Ok((launchpad, process_handle)) + } +} + +//////////////////////////////////////////////////////////////////////////////// +// Processes +//////////////////////////////////////////////////////////////////////////////// + +pub struct Process { + launchpad: *mut launchpad_t, + handle: Handle, +} + +impl Process { + pub fn id(&self) -> u32 { + self.handle.raw() as u32 + } + + pub fn kill(&mut self) -> io::Result<()> { + use sys::process::magenta::*; + + unsafe { mx_cvt(mx_task_kill(self.handle.raw()))?; } + + Ok(()) + } + + pub fn wait(&mut self) -> io::Result { + use default::Default; + use sys::process::magenta::*; + + let mut proc_info: mx_info_process_t = Default::default(); + let mut actual: mx_size_t = 0; + let mut avail: mx_size_t = 0; + + unsafe { + mx_cvt(mx_handle_wait_one(self.handle.raw(), MX_TASK_TERMINATED, + MX_TIME_INFINITE, ptr::null_mut()))?; + mx_cvt(mx_object_get_info(self.handle.raw(), MX_INFO_PROCESS, + &mut proc_info as *mut _ as *mut libc::c_void, + mem::size_of::(), &mut actual, + &mut avail))?; + } + if actual != 1 { + return Err(io::Error::new(io::ErrorKind::InvalidData, + "Failed to get exit status of process")); + } + Ok(ExitStatus::new(proc_info.rec.return_code)) + } +} + +impl Drop for Process { + fn drop(&mut self) { + use sys::process::magenta::launchpad_destroy; + unsafe { launchpad_destroy(self.launchpad); } + } +} diff --git a/src/libstd/sys/unix/process/process_unix.rs b/src/libstd/sys/unix/process/process_unix.rs new file mode 100644 index 0000000000..aa42672202 --- /dev/null +++ b/src/libstd/sys/unix/process/process_unix.rs @@ -0,0 +1,250 @@ +// Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use io::{self, Error, ErrorKind}; +use libc::{self, c_int, gid_t, pid_t, uid_t}; +use mem; +use ptr; + +use sys::cvt; +use sys::process::process_common::*; + +//////////////////////////////////////////////////////////////////////////////// +// Command +//////////////////////////////////////////////////////////////////////////////// + +impl Command { + pub fn spawn(&mut self, default: Stdio, needs_stdin: bool) + -> io::Result<(Process, StdioPipes)> { + use sys; + + const CLOEXEC_MSG_FOOTER: &'static [u8] = b"NOEX"; + + if self.saw_nul() { + return Err(io::Error::new(ErrorKind::InvalidInput, + "nul byte found in provided data")); + } + + let (ours, theirs) = self.setup_io(default, needs_stdin)?; + let (input, output) = sys::pipe::anon_pipe()?; + + let pid = unsafe { + match cvt(libc::fork())? 
{ + 0 => { + drop(input); + let err = self.do_exec(theirs); + let errno = err.raw_os_error().unwrap_or(libc::EINVAL) as u32; + let bytes = [ + (errno >> 24) as u8, + (errno >> 16) as u8, + (errno >> 8) as u8, + (errno >> 0) as u8, + CLOEXEC_MSG_FOOTER[0], CLOEXEC_MSG_FOOTER[1], + CLOEXEC_MSG_FOOTER[2], CLOEXEC_MSG_FOOTER[3] + ]; + // pipe I/O up to PIPE_BUF bytes should be atomic, and then + // we want to be sure we *don't* run at_exit destructors as + // we're being torn down regardless + assert!(output.write(&bytes).is_ok()); + libc::_exit(1) + } + n => n, + } + }; + + let mut p = Process { pid: pid, status: None }; + drop(output); + let mut bytes = [0; 8]; + + // loop to handle EINTR + loop { + match input.read(&mut bytes) { + Ok(0) => return Ok((p, ours)), + Ok(8) => { + assert!(combine(CLOEXEC_MSG_FOOTER) == combine(&bytes[4.. 8]), + "Validation on the CLOEXEC pipe failed: {:?}", bytes); + let errno = combine(&bytes[0.. 4]); + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + return Err(Error::from_raw_os_error(errno)) + } + Err(ref e) if e.kind() == ErrorKind::Interrupted => {} + Err(e) => { + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + panic!("the CLOEXEC pipe failed: {:?}", e) + }, + Ok(..) => { // pipe I/O up to PIPE_BUF bytes should be atomic + assert!(p.wait().is_ok(), + "wait() should either return Ok or panic"); + panic!("short read on the CLOEXEC pipe") + } + } + } + + fn combine(arr: &[u8]) -> i32 { + let a = arr[0] as u32; + let b = arr[1] as u32; + let c = arr[2] as u32; + let d = arr[3] as u32; + + ((a << 24) | (b << 16) | (c << 8) | (d << 0)) as i32 + } + } + + pub fn exec(&mut self, default: Stdio) -> io::Error { + if self.saw_nul() { + return io::Error::new(ErrorKind::InvalidInput, + "nul byte found in provided data") + } + + match self.setup_io(default, true) { + Ok((_, theirs)) => unsafe { self.do_exec(theirs) }, + Err(e) => e, + } + } + + // And at this point we've reached a special time in the life of the + // child. The child must now be considered hamstrung and unable to + // do anything other than syscalls really. Consider the following + // scenario: + // + // 1. Thread A of process 1 grabs the malloc() mutex + // 2. Thread B of process 1 forks(), creating thread C + // 3. Thread C of process 2 then attempts to malloc() + // 4. The memory of process 2 is the same as the memory of + // process 1, so the mutex is locked. + // + // This situation looks a lot like deadlock, right? It turns out + // that this is what pthread_atfork() takes care of, which is + // presumably implemented across platforms. The first thing that + // threads to *before* forking is to do things like grab the malloc + // mutex, and then after the fork they unlock it. + // + // Despite this information, libnative's spawn has been witnessed to + // deadlock on both OSX and FreeBSD. I'm not entirely sure why, but + // all collected backtraces point at malloc/free traffic in the + // child spawned process. + // + // For this reason, the block of code below should contain 0 + // invocations of either malloc of free (or their related friends). + // + // As an example of not having malloc/free traffic, we don't close + // this file descriptor by dropping the FileDesc (which contains an + // allocation). Instead we just close it manually. 
This will never + // have the drop glue anyway because this code never returns (the + // child will either exec() or invoke libc::exit) + unsafe fn do_exec(&mut self, stdio: ChildPipes) -> io::Error { + use sys::{self, cvt_r}; + + macro_rules! t { + ($e:expr) => (match $e { + Ok(e) => e, + Err(e) => return e, + }) + } + + if let Some(fd) = stdio.stdin.fd() { + t!(cvt_r(|| libc::dup2(fd, libc::STDIN_FILENO))); + } + if let Some(fd) = stdio.stdout.fd() { + t!(cvt_r(|| libc::dup2(fd, libc::STDOUT_FILENO))); + } + if let Some(fd) = stdio.stderr.fd() { + t!(cvt_r(|| libc::dup2(fd, libc::STDERR_FILENO))); + } + + if let Some(u) = self.get_gid() { + t!(cvt(libc::setgid(u as gid_t))); + } + if let Some(u) = self.get_uid() { + // When dropping privileges from root, the `setgroups` call + // will remove any extraneous groups. If we don't call this, + // then even though our uid has dropped, we may still have + // groups that enable us to do super-user things. This will + // fail if we aren't root, so don't bother checking the + // return value, this is just done as an optimistic + // privilege dropping function. + let _ = libc::setgroups(0, ptr::null()); + + t!(cvt(libc::setuid(u as uid_t))); + } + if let Some(ref cwd) = *self.get_cwd() { + t!(cvt(libc::chdir(cwd.as_ptr()))); + } + if let Some(ref envp) = *self.get_envp() { + *sys::os::environ() = envp.as_ptr(); + } + + // NaCl has no signal support. + if cfg!(not(any(target_os = "nacl", target_os = "emscripten"))) { + // Reset signal handling so the child process starts in a + // standardized state. libstd ignores SIGPIPE, and signal-handling + // libraries often set a mask. Child processes inherit ignored + // signals and the signal mask from their parent, but most + // UNIX programs do not reset these things on their own, so we + // need to clean things up now to avoid confusing the program + // we're about to run. + let mut set: libc::sigset_t = mem::uninitialized(); + t!(cvt(libc::sigemptyset(&mut set))); + t!(cvt(libc::pthread_sigmask(libc::SIG_SETMASK, &set, + ptr::null_mut()))); + let ret = sys::signal(libc::SIGPIPE, libc::SIG_DFL); + if ret == libc::SIG_ERR { + return io::Error::last_os_error() + } + } + + for callback in self.get_closures().iter_mut() { + t!(callback()); + } + + libc::execvp(self.get_argv()[0], self.get_argv().as_ptr()); + io::Error::last_os_error() + } +} + +//////////////////////////////////////////////////////////////////////////////// +// Processes +//////////////////////////////////////////////////////////////////////////////// + +/// The unique id of the process (this should never be negative). +pub struct Process { + pid: pid_t, + status: Option, +} + +impl Process { + pub fn id(&self) -> u32 { + self.pid as u32 + } + + pub fn kill(&mut self) -> io::Result<()> { + // If we've already waited on this process then the pid can be recycled + // and used for another process, and we probably shouldn't be killing + // random processes, so just return an error. 
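
Not part of the patch: the pid-recycling guard that the comment above describes is observable through the public `std::process` API. A small sketch, assuming a Unix-like system where the `true` binary exists:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    let mut child = Command::new("true").spawn()?;
    child.wait()?; // reaps the child; its pid may now be recycled
    // Because the process has already been waited on, the library refuses to
    // signal it rather than risk killing an unrelated process that reused the pid.
    assert!(child.kill().is_err());
    Ok(())
}
```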
+ if self.status.is_some() { + Err(Error::new(ErrorKind::InvalidInput, + "invalid argument: can't kill an exited process")) + } else { + cvt(unsafe { libc::kill(self.pid, libc::SIGKILL) }).map(|_| ()) + } + } + pub fn wait(&mut self) -> io::Result { + use sys::cvt_r; + if let Some(status) = self.status { + return Ok(status) + } + let mut status = 0 as c_int; + cvt_r(|| unsafe { libc::waitpid(self.pid, &mut status, 0) })?; + self.status = Some(ExitStatus::new(status)); + Ok(ExitStatus::new(status)) + } +} diff --git a/src/libstd/sys/unix/stdio.rs b/src/libstd/sys/unix/stdio.rs index 273341b191..6d38b00b39 100644 --- a/src/libstd/sys/unix/stdio.rs +++ b/src/libstd/sys/unix/stdio.rs @@ -43,6 +43,10 @@ impl Stdout { fd.into_raw(); ret } + + pub fn flush(&self) -> io::Result<()> { + Ok(()) + } } impl Stderr { @@ -54,6 +58,10 @@ impl Stderr { fd.into_raw(); ret } + + pub fn flush(&self) -> io::Result<()> { + Ok(()) + } } // FIXME: right now this raw stderr handle is used in a few places because @@ -63,7 +71,10 @@ impl io::Write for Stderr { fn write(&mut self, data: &[u8]) -> io::Result { Stderr::write(self, data) } - fn flush(&mut self) -> io::Result<()> { Ok(()) } + + fn flush(&mut self) -> io::Result<()> { + Stderr::flush(self) + } } pub const EBADF_ERR: i32 = ::libc::EBADF as i32; diff --git a/src/libstd/sys/windows/c.rs b/src/libstd/sys/windows/c.rs index ce563dc7b1..e99be0cfc5 100644 --- a/src/libstd/sys/windows/c.rs +++ b/src/libstd/sys/windows/c.rs @@ -47,7 +47,9 @@ pub type CHAR = c_char; pub type HCRYPTPROV = LONG_PTR; pub type ULONG_PTR = c_ulonglong; pub type ULONG = c_ulong; +#[cfg(target_arch = "x86_64")] pub type ULONGLONG = u64; +#[cfg(target_arch = "x86_64")] pub type DWORDLONG = ULONGLONG; pub type LPBOOL = *mut BOOL; @@ -66,7 +68,6 @@ pub type LPVOID = *mut c_void; pub type LPWCH = *mut WCHAR; pub type LPWIN32_FIND_DATAW = *mut WIN32_FIND_DATAW; pub type LPWSADATA = *mut WSADATA; -pub type LPWSAPROTOCOLCHAIN = *mut WSAPROTOCOLCHAIN; pub type LPWSAPROTOCOL_INFO = *mut WSAPROTOCOL_INFO; pub type LPWSTR = *mut WCHAR; pub type LPFILETIME = *mut FILETIME; @@ -182,6 +183,7 @@ pub const ERROR_INVALID_HANDLE: DWORD = 6; pub const ERROR_NO_MORE_FILES: DWORD = 18; pub const ERROR_HANDLE_EOF: DWORD = 38; pub const ERROR_FILE_EXISTS: DWORD = 80; +pub const ERROR_INVALID_PARAMETER: DWORD = 87; pub const ERROR_BROKEN_PIPE: DWORD = 109; pub const ERROR_CALL_NOT_IMPLEMENTED: DWORD = 120; pub const ERROR_INSUFFICIENT_BUFFER: DWORD = 122; @@ -280,6 +282,7 @@ pub const EXCEPTION_STACK_OVERFLOW: DWORD = 0xc00000fd; pub const EXCEPTION_MAXIMUM_PARAMETERS: usize = 15; pub const PIPE_ACCESS_INBOUND: DWORD = 0x00000001; +pub const PIPE_ACCESS_OUTBOUND: DWORD = 0x00000002; pub const FILE_FLAG_FIRST_PIPE_INSTANCE: DWORD = 0x00080000; pub const FILE_FLAG_OVERLAPPED: DWORD = 0x40000000; pub const PIPE_WAIT: DWORD = 0x00000000; @@ -310,8 +313,6 @@ pub struct WSADATA { pub szSystemStatus: [u8; WSASYS_STATUS_LEN + 1], } -pub type WSAEVENT = HANDLE; - #[repr(C)] pub struct WSAPROTOCOL_INFO { pub dwServiceFlags1: DWORD, @@ -388,6 +389,15 @@ pub enum FILE_INFO_BY_HANDLE_CLASS { MaximumFileInfoByHandlesClass } +#[repr(C)] +pub struct FILE_BASIC_INFO { + pub CreationTime: LARGE_INTEGER, + pub LastAccessTime: LARGE_INTEGER, + pub LastWriteTime: LARGE_INTEGER, + pub ChangeTime: LARGE_INTEGER, + pub FileAttributes: DWORD, +} + #[repr(C)] pub struct FILE_END_OF_FILE_INFO { pub EndOfFile: LARGE_INTEGER, diff --git a/src/libstd/sys/windows/ext/fs.rs b/src/libstd/sys/windows/ext/fs.rs index 1e2b8bf38f..7fc04ad69d 
100644 --- a/src/libstd/sys/windows/ext/fs.rs +++ b/src/libstd/sys/windows/ext/fs.rs @@ -19,7 +19,7 @@ use sys; use sys_common::{AsInnerMut, AsInner}; /// Windows-specific extensions to `File` -#[unstable(feature = "file_offset", issue = "35918")] +#[stable(feature = "file_offset", since = "1.15.0")] pub trait FileExt { /// Seeks to a given position and reads a number of bytes. /// @@ -35,7 +35,7 @@ pub trait FileExt { /// Note that similar to `File::read`, it is not an error to return with a /// short read. When returning from such a short read, the file pointer is /// still updated. - #[unstable(feature = "file_offset", issue = "35918")] + #[stable(feature = "file_offset", since = "1.15.0")] fn seek_read(&self, buf: &mut [u8], offset: u64) -> io::Result; /// Seeks to a given position and writes a number of bytes. @@ -52,11 +52,11 @@ pub trait FileExt { /// Note that similar to `File::write`, it is not an error to return a /// short write. When returning from such a short write, the file pointer /// is still updated. - #[unstable(feature = "file_offset", issue = "35918")] + #[stable(feature = "file_offset", since = "1.15.0")] fn seek_write(&self, buf: &[u8], offset: u64) -> io::Result; } -#[unstable(feature = "file_offset", issue = "35918")] +#[stable(feature = "file_offset", since = "1.15.0")] impl FileExt for fs::File { fn seek_read(&self, buf: &mut [u8], offset: u64) -> io::Result { self.as_inner().read_at(buf, offset) diff --git a/src/libstd/sys/windows/ext/mod.rs b/src/libstd/sys/windows/ext/mod.rs index 932bb5e956..f12e50cc92 100644 --- a/src/libstd/sys/windows/ext/mod.rs +++ b/src/libstd/sys/windows/ext/mod.rs @@ -36,6 +36,6 @@ pub mod prelude { pub use super::ffi::{OsStrExt, OsStringExt}; #[doc(no_inline)] #[stable(feature = "rust1", since = "1.0.0")] pub use super::fs::{OpenOptionsExt, MetadataExt}; - #[doc(no_inline)] #[unstable(feature = "file_offset", issue = "35918")] + #[doc(no_inline)] #[stable(feature = "file_offset", since = "1.15.0")] pub use super::fs::FileExt; } diff --git a/src/libstd/sys/windows/ext/process.rs b/src/libstd/sys/windows/ext/process.rs index bce32959a2..0a3221aeae 100644 --- a/src/libstd/sys/windows/ext/process.rs +++ b/src/libstd/sys/windows/ext/process.rs @@ -15,7 +15,7 @@ use os::windows::io::{FromRawHandle, RawHandle, AsRawHandle, IntoRawHandle}; use process; use sys; -use sys_common::{AsInner, FromInner, IntoInner}; +use sys_common::{AsInnerMut, AsInner, FromInner, IntoInner}; #[stable(feature = "process_extensions", since = "1.2.0")] impl FromRawHandle for process::Stdio { @@ -97,3 +97,22 @@ impl ExitStatusExt for process::ExitStatus { process::ExitStatus::from_inner(From::from(raw)) } } + +/// Windows-specific extensions to the `std::process::Command` builder +#[unstable(feature = "windows_process_extensions", issue = "37827")] +pub trait CommandExt { + /// Sets the [process creation flags][1] to be passed to `CreateProcess`. + /// + /// These will always be ORed with `CREATE_UNICODE_ENVIRONMENT`. 
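
Not part of the patch: a hypothetical consumer of the new `creation_flags` extension. It assumes a nightly toolchain (the feature is still unstable here) and uses the documented Win32 `CREATE_NO_WINDOW` flag, whose value comes from the Windows SDK rather than from this diff:

```rust
// Windows-only, nightly-only sketch of the new CommandExt method.
#![feature(windows_process_extensions)]

use std::os::windows::process::CommandExt;
use std::process::Command;

// Documented Win32 process-creation flag (not defined in this patch).
const CREATE_NO_WINDOW: u32 = 0x0800_0000;

fn main() {
    let status = Command::new("cmd")
        .args(&["/C", "echo hello"])
        // The library ORs this with CREATE_UNICODE_ENVIRONMENT, as noted above.
        .creation_flags(CREATE_NO_WINDOW)
        .status()
        .expect("failed to run cmd");
    println!("exited with {}", status);
}
```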
+ /// [1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx + #[unstable(feature = "windows_process_extensions", issue = "37827")] + fn creation_flags(&mut self, flags: u32) -> &mut process::Command; +} + +#[unstable(feature = "windows_process_extensions", issue = "37827")] +impl CommandExt for process::Command { + fn creation_flags(&mut self, flags: u32) -> &mut process::Command { + self.as_inner_mut().creation_flags(flags); + self + } +} diff --git a/src/libstd/sys/windows/fs.rs b/src/libstd/sys/windows/fs.rs index 98fd15f863..7d7d78bbd8 100644 --- a/src/libstd/sys/windows/fs.rs +++ b/src/libstd/sys/windows/fs.rs @@ -417,6 +417,24 @@ impl File { Ok(PathBuf::from(OsString::from_wide(subst))) } } + + pub fn set_permissions(&self, perm: FilePermissions) -> io::Result<()> { + let mut info = c::FILE_BASIC_INFO { + CreationTime: 0, + LastAccessTime: 0, + LastWriteTime: 0, + ChangeTime: 0, + FileAttributes: perm.attrs, + }; + let size = mem::size_of_val(&info); + cvt(unsafe { + c::SetFileInformationByHandle(self.handle.raw(), + c::FileBasicInfo, + &mut info as *mut _ as *mut _, + size as c::DWORD) + })?; + Ok(()) + } } impl FromInner for File { diff --git a/src/libstd/sys/windows/pipe.rs b/src/libstd/sys/windows/pipe.rs index ed7e88e72c..8073473f7f 100644 --- a/src/libstd/sys/windows/pipe.rs +++ b/src/libstd/sys/windows/pipe.rs @@ -29,20 +29,46 @@ pub struct AnonPipe { inner: Handle, } -pub fn anon_pipe() -> io::Result<(AnonPipe, AnonPipe)> { +pub struct Pipes { + pub ours: AnonPipe, + pub theirs: AnonPipe, +} + +/// Although this looks similar to `anon_pipe` in the Unix module it's actually +/// subtly different. Here we'll return two pipes in the `Pipes` return value, +/// but one is intended for "us" where as the other is intended for "someone +/// else". +/// +/// Currently the only use case for this function is pipes for stdio on +/// processes in the standard library, so "ours" is the one that'll stay in our +/// process whereas "theirs" will be inherited to a child. +/// +/// The ours/theirs pipes are *not* specifically readable or writable. Each +/// one only supports a read or a write, but which is which depends on the +/// boolean flag given. If `ours_readable` is true then `ours` is readable where +/// `theirs` is writable. Conversely if `ours_readable` is false then `ours` is +/// writable where `theirs` is readable. +/// +/// Also note that the `ours` pipe is always a handle opened up in overlapped +/// mode. This means that technically speaking it should only ever be used +/// with `OVERLAPPED` instances, but also works out ok if it's only ever used +/// once at a time (which we do indeed guarantee). +pub fn anon_pipe(ours_readable: bool) -> io::Result { // Note that we specifically do *not* use `CreatePipe` here because // unfortunately the anonymous pipes returned do not support overlapped - // operations. + // operations. Instead, we create a "hopefully unique" name and create a + // named pipe which has overlapped operations enabled. // - // Instead, we create a "hopefully unique" name and create a named pipe - // which has overlapped operations enabled. - // - // Once we do this, we connect do it as usual via `CreateFileW`, and then we - // return those reader/writer halves. + // Once we do this, we connect do it as usual via `CreateFileW`, and then + // we return those reader/writer halves. Note that the `ours` pipe return + // value is always the named pipe, whereas `theirs` is just the normal file. 
+ // This should hopefully shield us from child processes which assume their + // stdout is a named pipe, which would indeed be odd! unsafe { - let reader; + let ours; let mut name; let mut tries = 0; + let mut reject_remote_clients_flag = c::PIPE_REJECT_REMOTE_CLIENTS; loop { tries += 1; let key: u64 = rand::thread_rng().gen(); @@ -53,15 +79,20 @@ pub fn anon_pipe() -> io::Result<(AnonPipe, AnonPipe)> { .encode_wide() .chain(Some(0)) .collect::>(); + let mut flags = c::FILE_FLAG_FIRST_PIPE_INSTANCE | + c::FILE_FLAG_OVERLAPPED; + if ours_readable { + flags |= c::PIPE_ACCESS_INBOUND; + } else { + flags |= c::PIPE_ACCESS_OUTBOUND; + } let handle = c::CreateNamedPipeW(wide_name.as_ptr(), - c::PIPE_ACCESS_INBOUND | - c::FILE_FLAG_FIRST_PIPE_INSTANCE | - c::FILE_FLAG_OVERLAPPED, + flags, c::PIPE_TYPE_BYTE | - c::PIPE_READMODE_BYTE | - c::PIPE_WAIT | - c::PIPE_REJECT_REMOTE_CLIENTS, + c::PIPE_READMODE_BYTE | + c::PIPE_WAIT | + reject_remote_clients_flag, 1, 4096, 4096, @@ -76,29 +107,52 @@ pub fn anon_pipe() -> io::Result<(AnonPipe, AnonPipe)> { // // Don't try again too much though as this could also perhaps be a // legit error. + // If ERROR_INVALID_PARAMETER is returned, this probably means we're + // running on pre-Vista version where PIPE_REJECT_REMOTE_CLIENTS is + // not supported, so we continue retrying without it. This implies + // reduced security on Windows versions older than Vista by allowing + // connections to this pipe from remote machines. + // Proper fix would increase the number of FFI imports and introduce + // significant amount of Windows XP specific code with no clean + // testing strategy + // for more info see https://github.com/rust-lang/rust/pull/37677 if handle == c::INVALID_HANDLE_VALUE { let err = io::Error::last_os_error(); - if tries < 10 && - err.raw_os_error() == Some(c::ERROR_ACCESS_DENIED as i32) { - continue + let raw_os_err = err.raw_os_error(); + if tries < 10 { + if raw_os_err == Some(c::ERROR_ACCESS_DENIED as i32) { + continue + } else if reject_remote_clients_flag != 0 && + raw_os_err == Some(c::ERROR_INVALID_PARAMETER as i32) { + reject_remote_clients_flag = 0; + tries -= 1; + continue + } } return Err(err) } - reader = Handle::new(handle); + ours = Handle::new(handle); break } - // Connect to the named pipe we just created in write-only mode (also - // overlapped for async I/O below). + // Connect to the named pipe we just created. This handle is going to be + // returned in `theirs`, so if `ours` is readable we want this to be + // writable, otherwise if `ours` is writable we want this to be + // readable. + // + // Additionally we don't enable overlapped mode on this because most + // client processes aren't enabled to work with that. 
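
Not part of the patch: the comment above describes a try-then-retry-without-the-flag protocol for pre-Vista systems. A generic sketch of that shape, with a hypothetical `with_flag_fallback` helper; the only detail borrowed from the diff is the `ERROR_INVALID_PARAMETER` code (87), and the real loop above also keeps generating fresh pipe names, which this sketch omits:

```rust
use std::io;

const ERROR_INVALID_PARAMETER: i32 = 87; // Win32 error code used by the fallback above

// Try an operation with an optional flag enabled; if the OS rejects the flag
// with ERROR_INVALID_PARAMETER, retry once without it.
fn with_flag_fallback<T>(mut attempt: impl FnMut(bool) -> io::Result<T>) -> io::Result<T> {
    match attempt(true) {
        Err(ref e) if e.raw_os_error() == Some(ERROR_INVALID_PARAMETER) => attempt(false),
        other => other,
    }
}

fn main() {
    // Pretend the flagged variant is unsupported, as on pre-Vista Windows.
    let result = with_flag_fallback(|use_flag| {
        if use_flag {
            Err(io::Error::from_raw_os_error(ERROR_INVALID_PARAMETER))
        } else {
            Ok("pipe created without PIPE_REJECT_REMOTE_CLIENTS")
        }
    });
    println!("{:?}", result);
}
```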
let mut opts = OpenOptions::new(); - opts.write(true); - opts.read(false); + opts.write(ours_readable); + opts.read(!ours_readable); opts.share_mode(0); - opts.attributes(c::FILE_FLAG_OVERLAPPED); - let writer = File::open(Path::new(&name), &opts)?; - let writer = AnonPipe { inner: writer.into_handle() }; + let theirs = File::open(Path::new(&name), &opts)?; + let theirs = AnonPipe { inner: theirs.into_handle() }; - Ok((AnonPipe { inner: reader }, AnonPipe { inner: writer.into_handle() })) + Ok(Pipes { + ours: AnonPipe { inner: ours }, + theirs: AnonPipe { inner: theirs.into_handle() }, + }) } } diff --git a/src/libstd/sys/windows/process.rs b/src/libstd/sys/windows/process.rs index d371714ff0..7dc8959e1b 100644 --- a/src/libstd/sys/windows/process.rs +++ b/src/libstd/sys/windows/process.rs @@ -54,6 +54,7 @@ pub struct Command { args: Vec, env: Option>, cwd: Option, + flags: u32, detach: bool, // not currently exposed in std::process stdin: Option, stdout: Option, @@ -84,6 +85,7 @@ impl Command { args: Vec::new(), env: None, cwd: None, + flags: 0, detach: false, stdin: None, stdout: None, @@ -124,6 +126,9 @@ impl Command { pub fn stderr(&mut self, stderr: Stdio) { self.stderr = Some(stderr); } + pub fn creation_flags(&mut self, flags: u32) { + self.flags = flags; + } pub fn spawn(&mut self, default: Stdio, needs_stdin: bool) -> io::Result<(Process, StdioPipes)> { @@ -157,7 +162,7 @@ impl Command { cmd_str.push(0); // add null terminator // stolen from the libuv code. - let mut flags = c::CREATE_UNICODE_ENVIRONMENT; + let mut flags = self.flags | c::CREATE_UNICODE_ENVIRONMENT; if self.detach { flags |= c::DETACHED_PROCESS | c::CREATE_NEW_PROCESS_GROUP; } @@ -259,19 +264,15 @@ impl Stdio { } Stdio::MakePipe => { - let (reader, writer) = pipe::anon_pipe()?; - let (ours, theirs) = if stdio_id == c::STD_INPUT_HANDLE { - (writer, reader) - } else { - (reader, writer) - }; - *pipe = Some(ours); + let ours_readable = stdio_id != c::STD_INPUT_HANDLE; + let pipes = pipe::anon_pipe(ours_readable)?; + *pipe = Some(pipes.ours); cvt(unsafe { - c::SetHandleInformation(theirs.handle().raw(), + c::SetHandleInformation(pipes.theirs.handle().raw(), c::HANDLE_FLAG_INHERIT, c::HANDLE_FLAG_INHERIT) })?; - Ok(theirs.into_handle()) + Ok(pipes.theirs.into_handle()) } Stdio::Handle(ref handle) => { diff --git a/src/libstd/sys/windows/stdio.rs b/src/libstd/sys/windows/stdio.rs index 72788776de..a74e7699ba 100644 --- a/src/libstd/sys/windows/stdio.rs +++ b/src/libstd/sys/windows/stdio.rs @@ -156,6 +156,10 @@ impl Stdout { pub fn write(&self, data: &[u8]) -> io::Result { write(&self.0, data) } + + pub fn flush(&self) -> io::Result<()> { + Ok(()) + } } impl Stderr { @@ -166,6 +170,10 @@ impl Stderr { pub fn write(&self, data: &[u8]) -> io::Result { write(&self.0, data) } + + pub fn flush(&self) -> io::Result<()> { + Ok(()) + } } // FIXME: right now this raw stderr handle is used in a few places because @@ -175,7 +183,10 @@ impl io::Write for Stderr { fn write(&mut self, data: &[u8]) -> io::Result { Stderr::write(self, data) } - fn flush(&mut self) -> io::Result<()> { Ok(()) } + + fn flush(&mut self) -> io::Result<()> { + Stderr::flush(self) + } } impl NoClose { diff --git a/src/libstd/sys_common/mod.rs b/src/libstd/sys_common/mod.rs index 1dd9b73e26..5c07e36508 100644 --- a/src/libstd/sys_common/mod.rs +++ b/src/libstd/sys_common/mod.rs @@ -10,7 +10,7 @@ //! Platform-independent platform abstraction //! -//! This is the platform-independent portion of the standard libraries +//! 
This is the platform-independent portion of the standard library's //! platform abstraction layer, whereas `std::sys` is the //! platform-specific portion. //! @@ -34,7 +34,6 @@ pub mod condvar; pub mod io; pub mod memchr; pub mod mutex; -pub mod net; pub mod poison; pub mod remutex; pub mod rwlock; @@ -44,6 +43,12 @@ pub mod thread_local; pub mod util; pub mod wtf8; +#[cfg(target_os = "redox")] +pub use sys::net; + +#[cfg(not(target_os = "redox"))] +pub mod net; + #[cfg(any(not(cargobuild), feature = "backtrace"))] #[cfg(any(all(unix, not(any(target_os = "macos", target_os = "ios", target_os = "emscripten"))), all(windows, target_env = "gnu")))] diff --git a/src/libstd/thread/mod.rs b/src/libstd/thread/mod.rs index 255cd2a9bc..55adc3dabf 100644 --- a/src/libstd/thread/mod.rs +++ b/src/libstd/thread/mod.rs @@ -17,13 +17,11 @@ //! provide some built-in support for low-level synchronization. //! //! Communication between threads can be done through -//! [channels](../../std/sync/mpsc/index.html), Rust's message-passing -//! types, along with [other forms of thread +//! [channels], Rust's message-passing types, along with [other forms of thread //! synchronization](../../std/sync/index.html) and shared-memory data //! structures. In particular, types that are guaranteed to be //! threadsafe are easily shared between threads using the -//! atomically-reference-counted container, -//! [`Arc`](../../std/sync/struct.Arc.html). +//! atomically-reference-counted container, [`Arc`]. //! //! Fatal logic errors in Rust cause *thread panic*, during which //! a thread will unwind the stack, running destructors and freeing @@ -40,7 +38,7 @@ //! //! ## Spawning a thread //! -//! A new thread can be spawned using the `thread::spawn` function: +//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function: //! //! ```rust //! use std::thread; @@ -55,7 +53,7 @@ //! it), unless this parent is the main thread. //! //! The parent thread can also wait on the completion of the child -//! thread; a call to `spawn` produces a `JoinHandle`, which provides +//! thread; a call to [`spawn`] produces a [`JoinHandle`], which provides //! a `join` method for waiting: //! //! ```rust @@ -68,13 +66,13 @@ //! let res = child.join(); //! ``` //! -//! The `join` method returns a `Result` containing `Ok` of the final -//! value produced by the child thread, or `Err` of the value given to -//! a call to `panic!` if the child panicked. +//! The [`join`] method returns a [`Result`] containing [`Ok`] of the final +//! value produced by the child thread, or [`Err`] of the value given to +//! a call to [`panic!`] if the child panicked. //! //! ## Configuring threads //! -//! A new thread can be configured before it is spawned via the `Builder` type, +//! A new thread can be configured before it is spawned via the [`Builder`] type, //! which currently allows you to set the name and stack size for the child thread: //! //! ```rust @@ -88,43 +86,43 @@ //! //! ## The `Thread` type //! -//! Threads are represented via the `Thread` type, which you can get in one of +//! Threads are represented via the [`Thread`] type, which you can get in one of //! two ways: //! -//! * By spawning a new thread, e.g. using the `thread::spawn` function, and -//! calling `thread()` on the `JoinHandle`. -//! * By requesting the current thread, using the `thread::current` function. +//! * By spawning a new thread, e.g. using the [`thread::spawn`][`spawn`] +//! function, and calling [`thread()`] on the [`JoinHandle`]. +//! 
* By requesting the current thread, using the [`thread::current()`] function. //! -//! The `thread::current()` function is available even for threads not spawned +//! The [`thread::current()`] function is available even for threads not spawned //! by the APIs of this module. //! //! ## Blocking support: park and unpark //! //! Every thread is equipped with some basic low-level blocking support, via the -//! `thread::park()` function and `thread::Thread::unpark()` method. `park()` -//! blocks the current thread, which can then be resumed from another thread by -//! calling the `unpark()` method on the blocked thread's handle. +//! [`thread::park()`][`park()`] function and [`thread::Thread::unpark()`][`unpark()`] +//! method. [`park()`] blocks the current thread, which can then be resumed from +//! another thread by calling the [`unpark()`] method on the blocked thread's handle. //! -//! Conceptually, each `Thread` handle has an associated token, which is +//! Conceptually, each [`Thread`] handle has an associated token, which is //! initially not present: //! -//! * The `thread::park()` function blocks the current thread unless or until +//! * The [`thread::park()`][`park()`] function blocks the current thread unless or until //! the token is available for its thread handle, at which point it atomically //! consumes the token. It may also return *spuriously*, without consuming the -//! token. `thread::park_timeout()` does the same, but allows specifying a +//! token. [`thread::park_timeout()`] does the same, but allows specifying a //! maximum time to block the thread for. //! -//! * The `unpark()` method on a `Thread` atomically makes the token available +//! * The [`unpark()`] method on a [`Thread`] atomically makes the token available //! if it wasn't already. //! -//! In other words, each `Thread` acts a bit like a semaphore with initial count +//! In other words, each [`Thread`] acts a bit like a semaphore with initial count //! 0, except that the semaphore is *saturating* (the count cannot go above 1), //! and can return spuriously. //! //! The API is typically used by acquiring a handle to the current thread, //! placing that handle in a shared data structure so that other threads can //! find it, and then `park`ing. When some desired condition is met, another -//! thread calls `unpark` on the handle. +//! thread calls [`unpark()`] on the handle. //! //! The motivation for this design is twofold: //! @@ -149,6 +147,22 @@ //! will want to make use of some form of **interior mutability** through the //! [`Cell`] or [`RefCell`] types. //! +//! [channels]: ../../std/sync/mpsc/index.html +//! [`Arc`]: ../../std/sync/struct.Arc.html +//! [`spawn`]: ../../std/thread/fn.spawn.html +//! [`JoinHandle`]: ../../std/thread/struct.JoinHandle.html +//! [`thread()`]: ../../std/thread/struct.JoinHandle.html#method.thread +//! [`join`]: ../../std/thread/struct.JoinHandle.html#method.join +//! [`Result`]: ../../std/result/enum.Result.html +//! [`Ok`]: ../../std/result/enum.Result.html#variant.Ok +//! [`Err`]: ../../std/result/enum.Result.html#variant.Err +//! [`panic!`]: ../../std/macro.panic.html +//! [`Builder`]: ../../std/thread/struct.Builder.html +//! [`thread::current()`]: ../../std/thread/fn.spawn.html +//! [`Thread`]: ../../std/thread/struct.Thread.html +//! [`park()`]: ../../std/thread/fn.park.html +//! [`unpark()`]: ../../std/thread/struct.Thread.html#method.unpark +//! [`thread::park_timeout()`]: ../../std/thread/fn.park_timeout.html //! [`Cell`]: ../cell/struct.Cell.html //! 
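
Not part of the patch: a small program acting out the token protocol described above, publish the current thread's handle, `park`, and let another thread `unpark` it. The `Mutex<Option<Thread>>` is just one way to share the handle, and a real use would re-check a condition in a loop because `park` may return spuriously:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let slot: Arc<Mutex<Option<thread::Thread>>> = Arc::new(Mutex::new(None));
    let slot2 = slot.clone();

    let parker = thread::spawn(move || {
        // Publish our handle so another thread can find it...
        *slot2.lock().unwrap() = Some(thread::current());
        // ...then block until the token is made available. If unpark() already
        // ran, the token is present and park() returns immediately.
        thread::park();
        println!("unparked");
    });

    // Wait for the child to publish its handle, then hand it the token.
    loop {
        if let Some(t) = slot.lock().unwrap().take() {
            t.unpark();
            break;
        }
        thread::sleep(Duration::from_millis(1));
    }
    parker.join().unwrap();
}
```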
[`RefCell`]: ../cell/struct.RefCell.html //! [`thread_local!`]: ../macro.thread_local.html diff --git a/src/libstd/time/duration.rs b/src/libstd/time/duration.rs index 10a0c567e1..41d675b6f8 100644 --- a/src/libstd/time/duration.rs +++ b/src/libstd/time/duration.rs @@ -39,7 +39,7 @@ const MILLIS_PER_SEC: u64 = 1_000; /// let ten_millis = Duration::from_millis(10); /// ``` #[stable(feature = "duration", since = "1.3.0")] -#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug, Hash)] +#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug, Hash, Default)] pub struct Duration { secs: u64, nanos: u32, // Always 0 <= nanos < NANOS_PER_SEC diff --git a/src/librustc_unicode/Cargo.toml b/src/libstd_unicode/Cargo.toml similarity index 73% rename from src/librustc_unicode/Cargo.toml rename to src/libstd_unicode/Cargo.toml index 1f4213f0ab..28fbd3c1aa 100644 --- a/src/librustc_unicode/Cargo.toml +++ b/src/libstd_unicode/Cargo.toml @@ -1,12 +1,13 @@ [package] authors = ["The Rust Project Developers"] -name = "rustc_unicode" +name = "std_unicode" version = "0.0.0" [lib] -name = "rustc_unicode" +name = "std_unicode" path = "lib.rs" test = false +bench = false [dependencies] core = { path = "../libcore" } diff --git a/src/librustc_unicode/char.rs b/src/libstd_unicode/char.rs similarity index 98% rename from src/librustc_unicode/char.rs rename to src/libstd_unicode/char.rs index 702d7d8b4b..53dafadb5d 100644 --- a/src/librustc_unicode/char.rs +++ b/src/libstd_unicode/char.rs @@ -138,7 +138,7 @@ impl char { /// A 'radix' here is sometimes also called a 'base'. A radix of two /// indicates a binary number, a radix of ten, decimal, and a radix of /// sixteen, hexadecimal, to give some common values. Arbitrary - /// radicum are supported. + /// radices are supported. /// /// Compared to `is_numeric()`, this function only recognizes the characters /// `0-9`, `a-z` and `A-Z`. @@ -190,7 +190,7 @@ impl char { /// A 'radix' here is sometimes also called a 'base'. A radix of two /// indicates a binary number, a radix of ten, decimal, and a radix of /// sixteen, hexadecimal, to give some common values. Arbitrary - /// radicum are supported. + /// radices are supported. /// /// 'Digit' is defined to be only the following characters: /// @@ -448,8 +448,6 @@ impl char { /// In both of these examples, 'ß' takes two bytes to encode. /// /// ``` - /// #![feature(unicode)] - /// /// let mut b = [0; 2]; /// /// let result = 'ß'.encode_utf8(&mut b); @@ -462,7 +460,6 @@ impl char { /// A buffer that's too small: /// /// ``` - /// #![feature(unicode)] /// use std::thread; /// /// let result = thread::spawn(|| { @@ -474,9 +471,7 @@ impl char { /// /// assert!(result.is_err()); /// ``` - #[unstable(feature = "unicode", - reason = "pending decision about Iterator/Writer/Reader", - issue = "27784")] + #[stable(feature = "unicode_encode_char", since = "1.15.0")] #[inline] pub fn encode_utf8(self, dst: &mut [u8]) -> &mut str { C::encode_utf8(self, dst) @@ -495,8 +490,6 @@ impl char { /// In both of these examples, '𝕊' takes two `u16`s to encode. 
/// /// ``` - /// #![feature(unicode)] - /// /// let mut b = [0; 2]; /// /// let result = '𝕊'.encode_utf16(&mut b); @@ -507,7 +500,6 @@ impl char { /// A buffer that's too small: /// /// ``` - /// #![feature(unicode)] /// use std::thread; /// /// let result = thread::spawn(|| { @@ -519,9 +511,7 @@ impl char { /// /// assert!(result.is_err()); /// ``` - #[unstable(feature = "unicode", - reason = "pending decision about Iterator/Writer/Reader", - issue = "27784")] + #[stable(feature = "unicode_encode_char", since = "1.15.0")] #[inline] pub fn encode_utf16(self, dst: &mut [u16]) -> &mut [u16] { C::encode_utf16(self, dst) diff --git a/src/librustc_unicode/lib.rs b/src/libstd_unicode/lib.rs similarity index 97% rename from src/librustc_unicode/lib.rs rename to src/libstd_unicode/lib.rs index 65bd717e01..11724e74cd 100644 --- a/src/librustc_unicode/lib.rs +++ b/src/libstd_unicode/lib.rs @@ -20,7 +20,7 @@ //! provide for basic string-related manipulations. This crate does not //! (yet) aim to provide a full set of Unicode tables. -#![crate_name = "rustc_unicode"] +#![crate_name = "std_unicode"] #![unstable(feature = "unicode", issue = "27783")] #![crate_type = "rlib"] #![doc(html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png", @@ -39,7 +39,6 @@ #![feature(lang_items)] #![feature(staged_api)] #![feature(try_from)] -#![feature(unicode)] mod tables; mod u_str; diff --git a/src/librustc_unicode/tables.rs b/src/libstd_unicode/tables.rs similarity index 100% rename from src/librustc_unicode/tables.rs rename to src/libstd_unicode/tables.rs diff --git a/src/librustc_unicode/u_str.rs b/src/libstd_unicode/u_str.rs similarity index 100% rename from src/librustc_unicode/u_str.rs rename to src/libstd_unicode/u_str.rs diff --git a/src/libsyntax/Cargo.toml b/src/libsyntax/Cargo.toml index 8b61e1b0d3..0b38f5450b 100644 --- a/src/libsyntax/Cargo.toml +++ b/src/libsyntax/Cargo.toml @@ -14,3 +14,4 @@ log = { path = "../liblog" } rustc_bitflags = { path = "../librustc_bitflags" } syntax_pos = { path = "../libsyntax_pos" } rustc_errors = { path = "../librustc_errors" } +rustc_data_structures = { path = "../librustc_data_structures" } diff --git a/src/libsyntax/ast.rs b/src/libsyntax/ast.rs index f7581924eb..2a911aceb9 100644 --- a/src/libsyntax/ast.rs +++ b/src/libsyntax/ast.rs @@ -14,71 +14,43 @@ pub use self::TyParamBound::*; pub use self::UnsafeSource::*; pub use self::ViewPath_::*; pub use self::PathParameters::*; +pub use symbol::Symbol as Name; pub use util::ThinVec; use syntax_pos::{mk_sp, Span, DUMMY_SP, ExpnId}; use codemap::{respan, Spanned}; use abi::Abi; use ext::hygiene::SyntaxContext; -use parse::token::{self, keywords, InternedString}; use print::pprust; use ptr::P; +use symbol::{Symbol, keywords}; use tokenstream::{TokenTree}; +use std::collections::HashSet; use std::fmt; use std::rc::Rc; use std::u32; use serialize::{self, Encodable, Decodable, Encoder, Decoder}; -/// A name is a part of an identifier, representing a string or gensym. It's -/// the result of interning. 
-#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)] -pub struct Name(pub u32); - /// An identifier contains a Name (index into the interner /// table) and a SyntaxContext to track renaming and /// macro expansion per Flatt et al., "Macros That Work Together" #[derive(Clone, Copy, PartialEq, Eq, Hash)] pub struct Ident { - pub name: Name, + pub name: Symbol, pub ctxt: SyntaxContext } -impl Name { - pub fn as_str(self) -> token::InternedString { - token::InternedString::new_from_name(self) - } -} - -impl fmt::Debug for Name { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - write!(f, "{}({})", self, self.0) - } -} - -impl fmt::Display for Name { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - fmt::Display::fmt(&self.as_str(), f) - } -} - -impl Encodable for Name { - fn encode(&self, s: &mut S) -> Result<(), S::Error> { - s.emit_str(&self.as_str()) - } -} - -impl Decodable for Name { - fn decode(d: &mut D) -> Result { - Ok(token::intern(&d.read_str()?)) - } -} - impl Ident { pub const fn with_empty_ctxt(name: Name) -> Ident { Ident { name: name, ctxt: SyntaxContext::empty() } } + + /// Maps a string to an identifier with an empty syntax context. + pub fn from_str(s: &str) -> Ident { + Ident::with_empty_ctxt(Symbol::intern(s)) + } } impl fmt::Debug for Ident { @@ -399,6 +371,14 @@ impl Generics { pub fn is_parameterized(&self) -> bool { self.is_lt_parameterized() || self.is_type_parameterized() } + pub fn span_for_name(&self, name: &str) -> Option { + for t in &self.ty_params { + if t.ident.name == name { + return Some(t.span); + } + } + None + } } impl Default for Generics { @@ -471,7 +451,7 @@ pub struct WhereEqPredicate { /// The set of MetaItems that define the compilation environment of the crate, /// used to drive conditional compilation -pub type CrateConfig = Vec>; +pub type CrateConfig = HashSet<(Name, Option)>; #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub struct Crate { @@ -490,7 +470,7 @@ pub type NestedMetaItem = Spanned; #[derive(Clone, Eq, RustcEncodable, RustcDecodable, Hash, Debug, PartialEq)] pub enum NestedMetaItemKind { /// A full MetaItem, for recursive meta items. - MetaItem(P), + MetaItem(MetaItem), /// A literal. /// /// E.g. "foo", 64, true @@ -500,53 +480,30 @@ pub enum NestedMetaItemKind { /// A spanned compile-time attribute item. /// /// E.g. `#[test]`, `#[derive(..)]` or `#[feature = "foo"]` -pub type MetaItem = Spanned; +#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] +pub struct MetaItem { + pub name: Name, + pub node: MetaItemKind, + pub span: Span, +} /// A compile-time attribute item. /// /// E.g. `#[test]`, `#[derive(..)]` or `#[feature = "foo"]` -#[derive(Clone, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] +#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub enum MetaItemKind { /// Word meta item. /// /// E.g. `test` as in `#[test]` - Word(InternedString), + Word, /// List meta item. /// /// E.g. `derive(..)` as in `#[derive(..)]` - List(InternedString, Vec), + List(Vec), /// Name value meta item. /// /// E.g. 
`feature = "foo"` as in `#[feature = "foo"]` - NameValue(InternedString, Lit), -} - -// can't be derived because the MetaItemKind::List requires an unordered comparison -impl PartialEq for MetaItemKind { - fn eq(&self, other: &MetaItemKind) -> bool { - use self::MetaItemKind::*; - match *self { - Word(ref ns) => match *other { - Word(ref no) => (*ns) == (*no), - _ => false - }, - List(ref ns, ref miss) => match *other { - List(ref no, ref miso) => { - ns == no && - miss.iter().all(|mi| { - miso.iter().any(|x| x.node == mi.node) - }) - } - _ => false - }, - NameValue(ref ns, ref vs) => match *other { - NameValue(ref no, ref vo) => { - (*ns) == (*no) && vs.node == vo.node - } - _ => false - }, - } - } + NameValue(Lit) } /// A Block (`{ .. }`). @@ -1009,10 +966,10 @@ pub enum ExprKind { Loop(P, Option), /// A `match` block. Match(P, Vec), - /// A closure (for example, `move |a, b, c| {a + b + c}`) + /// A closure (for example, `move |a, b, c| a + b + c`) /// /// The final span is the span of the argument block `|...|` - Closure(CaptureBy, P, P, Span), + Closure(CaptureBy, P, P, Span), /// A block (`{ ... }`) Block(P), @@ -1042,8 +999,8 @@ pub enum ExprKind { /// A referencing operation (`&a` or `&mut a`) AddrOf(Mutability, P), - /// A `break`, with an optional label to break - Break(Option), + /// A `break`, with an optional label to break, and an optional expression + Break(Option, Option>), /// A `continue`, with an optional label Continue(Option), /// A `return`, with an optional value to be returned @@ -1141,7 +1098,7 @@ pub enum LitIntType { #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub enum LitKind { /// A string literal (`"foo"`) - Str(InternedString, StrStyle), + Str(Symbol, StrStyle), /// A byte string (`b"foo"`) ByteStr(Rc>), /// A byte char (`b'f'`) @@ -1151,9 +1108,9 @@ pub enum LitKind { /// An integer literal (`1`) Int(u64, LitIntType), /// A float literal (`1f64` or `1E10f64`) - Float(InternedString, FloatTy), + Float(Symbol, FloatTy), /// A float literal without a suffix (`1.0 or 1.0E10`) - FloatUnsuffixed(InternedString), + FloatUnsuffixed(Symbol), /// A boolean literal Bool(bool), } @@ -1485,7 +1442,7 @@ pub enum AsmDialect { /// E.g. `"={eax}"(result)` as in `asm!("mov eax, 2" : "={eax}"(result) : : : "intel")`` #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub struct InlineAsmOutput { - pub constraint: InternedString, + pub constraint: Symbol, pub expr: P, pub is_rw: bool, pub is_indirect: bool, @@ -1496,11 +1453,11 @@ pub struct InlineAsmOutput { /// E.g. `asm!("NOP");` #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] pub struct InlineAsm { - pub asm: InternedString, + pub asm: Symbol, pub asm_str_style: StrStyle, pub outputs: Vec, - pub inputs: Vec<(InternedString, P)>, - pub clobbers: Vec, + pub inputs: Vec<(Symbol, P)>, + pub clobbers: Vec, pub volatile: bool, pub alignstack: bool, pub dialect: AsmDialect, @@ -1747,8 +1704,6 @@ impl ViewPath_ { } } -/// Meta-data associated with an item -pub type Attribute = Spanned; /// Distinguishes between Attributes that decorate items and Attributes that /// are contained as statements within items. 
These two cases need to be @@ -1762,13 +1717,15 @@ pub enum AttrStyle { #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)] pub struct AttrId(pub usize); +/// Meta-data associated with an item /// Doc-comments are promoted to attributes that have is_sugared_doc = true #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)] -pub struct Attribute_ { +pub struct Attribute { pub id: AttrId, pub style: AttrStyle, - pub value: P, + pub value: MetaItem, pub is_sugared_doc: bool, + pub span: Span, } /// TraitRef's appear in impls. diff --git a/src/libsyntax/attr.rs b/src/libsyntax/attr.rs index 0335f21034..45c120e0b9 100644 --- a/src/libsyntax/attr.rs +++ b/src/libsyntax/attr.rs @@ -15,29 +15,30 @@ pub use self::ReprAttr::*; pub use self::IntType::*; use ast; -use ast::{AttrId, Attribute, Attribute_}; +use ast::{AttrId, Attribute, Name}; use ast::{MetaItem, MetaItemKind, NestedMetaItem, NestedMetaItemKind}; use ast::{Lit, Expr, Item, Local, Stmt, StmtKind}; -use codemap::{respan, spanned, dummy_spanned}; +use codemap::{spanned, dummy_spanned, mk_sp}; use syntax_pos::{Span, BytePos, DUMMY_SP}; use errors::Handler; use feature_gate::{Features, GatedCfg}; use parse::lexer::comments::{doc_comment_style, strip_doc_comment_decoration}; -use parse::token::InternedString; -use parse::{ParseSess, token}; +use parse::ParseSess; use ptr::P; +use symbol::Symbol; use util::ThinVec; use std::cell::{RefCell, Cell}; use std::collections::HashSet; thread_local! { - static USED_ATTRS: RefCell> = RefCell::new(Vec::new()) + static USED_ATTRS: RefCell> = RefCell::new(Vec::new()); + static KNOWN_ATTRS: RefCell> = RefCell::new(Vec::new()); } enum AttrError { - MultipleItem(InternedString), - UnknownMetaItem(InternedString), + MultipleItem(Name), + UnknownMetaItem(Name), MissingSince, MissingFeature, MultipleStabilityLevels, @@ -60,7 +61,7 @@ fn handle_errors(diag: &Handler, span: Span, error: AttrError) { pub fn mark_used(attr: &Attribute) { debug!("Marking {:?} as used.", attr); - let AttrId(id) = attr.node.id; + let AttrId(id) = attr.id; USED_ATTRS.with(|slot| { let idx = (id / 64) as usize; let shift = id % 64; @@ -72,7 +73,7 @@ pub fn mark_used(attr: &Attribute) { } pub fn is_used(attr: &Attribute) -> bool { - let AttrId(id) = attr.node.id; + let AttrId(id) = attr.id; USED_ATTRS.with(|slot| { let idx = (id / 64) as usize; let shift = id % 64; @@ -81,9 +82,32 @@ pub fn is_used(attr: &Attribute) -> bool { }) } +pub fn mark_known(attr: &Attribute) { + debug!("Marking {:?} as known.", attr); + let AttrId(id) = attr.id; + KNOWN_ATTRS.with(|slot| { + let idx = (id / 64) as usize; + let shift = id % 64; + if slot.borrow().len() <= idx { + slot.borrow_mut().resize(idx + 1, 0); + } + slot.borrow_mut()[idx] |= 1 << shift; + }); +} + +pub fn is_known(attr: &Attribute) -> bool { + let AttrId(id) = attr.id; + KNOWN_ATTRS.with(|slot| { + let idx = (id / 64) as usize; + let shift = id % 64; + slot.borrow().get(idx).map(|bits| bits & (1 << shift) != 0) + .unwrap_or(false) + }) +} + impl NestedMetaItem { /// Returns the MetaItem if self is a NestedMetaItemKind::MetaItem. - pub fn meta_item(&self) -> Option<&P> { + pub fn meta_item(&self) -> Option<&MetaItem> { match self.node { NestedMetaItemKind::MetaItem(ref item) => Some(&item), _ => None @@ -110,18 +134,18 @@ impl NestedMetaItem { /// Returns the name of the meta item, e.g. 
`foo` in `#[foo]`, /// `#[foo="bar"]` and `#[foo(bar)]`, if self is a MetaItem - pub fn name(&self) -> Option { + pub fn name(&self) -> Option { self.meta_item().and_then(|meta_item| Some(meta_item.name())) } /// Gets the string value if self is a MetaItem and the MetaItem is a /// MetaItemKind::NameValue variant containing a string, otherwise None. - pub fn value_str(&self) -> Option { + pub fn value_str(&self) -> Option { self.meta_item().and_then(|meta_item| meta_item.value_str()) } /// Returns a MetaItem if self is a MetaItem with Kind Word. - pub fn word(&self) -> Option<&P> { + pub fn word(&self) -> Option<&MetaItem> { self.meta_item().and_then(|meta_item| if meta_item.is_word() { Some(meta_item) } else { @@ -162,16 +186,16 @@ impl NestedMetaItem { impl Attribute { pub fn check_name(&self, name: &str) -> bool { - let matches = name == &self.name()[..]; + let matches = self.name() == name; if matches { mark_used(self); } matches } - pub fn name(&self) -> InternedString { self.meta().name() } + pub fn name(&self) -> Name { self.meta().name() } - pub fn value_str(&self) -> Option { + pub fn value_str(&self) -> Option { self.meta().value_str() } @@ -194,17 +218,13 @@ impl Attribute { } impl MetaItem { - pub fn name(&self) -> InternedString { - match self.node { - MetaItemKind::Word(ref n) => (*n).clone(), - MetaItemKind::NameValue(ref n, _) => (*n).clone(), - MetaItemKind::List(ref n, _) => (*n).clone(), - } + pub fn name(&self) -> Name { + self.name } - pub fn value_str(&self) -> Option { + pub fn value_str(&self) -> Option { match self.node { - MetaItemKind::NameValue(_, ref v) => { + MetaItemKind::NameValue(ref v) => { match v.node { ast::LitKind::Str(ref s, _) => Some((*s).clone()), _ => None, @@ -216,14 +236,14 @@ impl MetaItem { pub fn meta_item_list(&self) -> Option<&[NestedMetaItem]> { match self.node { - MetaItemKind::List(_, ref l) => Some(&l[..]), + MetaItemKind::List(ref l) => Some(&l[..]), _ => None } } pub fn is_word(&self) -> bool { match self.node { - MetaItemKind::Word(_) => true, + MetaItemKind::Word => true, _ => false, } } @@ -231,7 +251,7 @@ impl MetaItem { pub fn span(&self) -> Span { self.span } pub fn check_name(&self, name: &str) -> bool { - name == &self.name()[..] + self.name() == name } pub fn is_value_str(&self) -> bool { @@ -246,7 +266,7 @@ impl MetaItem { impl Attribute { /// Extract the MetaItem from inside this Attribute. 
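
Not part of the patch: a self-contained mirror of the reshaped `MetaItem` from the `ast.rs` hunk above, showing how the three attribute forms map onto the new `name` + `node` split that these accessors consume. `Name`, `Lit`, spans, and `NestedMetaItem` are simplified to plain strings here:

```rust
// Simplified stand-ins for the libsyntax types (illustrative only).
#[allow(dead_code)]
#[derive(Debug)]
enum MetaItemKind {
    Word,                // #[test]
    List(Vec<MetaItem>), // #[derive(Clone)]   (nested items simplified away)
    NameValue(String),   // #[feature = "foo"]
}

#[derive(Debug)]
struct MetaItem {
    name: String, // the name now lives on MetaItem itself, not on each variant
    node: MetaItemKind,
}

// Counterpart of MetaItem::value_str() under this simplification.
fn value_str(mi: &MetaItem) -> Option<&str> {
    match mi.node {
        MetaItemKind::NameValue(ref v) => Some(v.as_str()),
        _ => None,
    }
}

fn main() {
    let feature = MetaItem {
        name: "feature".to_string(),
        node: MetaItemKind::NameValue("foo".to_string()),
    };
    assert_eq!(value_str(&feature), Some("foo"));
    println!("{:?}", feature);
}
```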
pub fn meta(&self) -> &MetaItem { - &self.node.value + &self.value } /// Convert self to a normal #[doc="foo"] comment, if it is a @@ -255,16 +275,15 @@ impl Attribute { pub fn with_desugared_doc(&self, f: F) -> T where F: FnOnce(&Attribute) -> T, { - if self.node.is_sugared_doc { + if self.is_sugared_doc { let comment = self.value_str().unwrap(); let meta = mk_name_value_item_str( - InternedString::new("doc"), - token::intern_and_get_ident(&strip_doc_comment_decoration( - &comment))); - if self.node.style == ast::AttrStyle::Outer { - f(&mk_attr_outer(self.node.id, meta)) + Symbol::intern("doc"), + Symbol::intern(&strip_doc_comment_decoration(&comment.as_str()))); + if self.style == ast::AttrStyle::Outer { + f(&mk_attr_outer(self.id, meta)) } else { - f(&mk_attr_inner(self.node.id, meta)) + f(&mk_attr_inner(self.id, meta)) } } else { f(self) @@ -274,41 +293,37 @@ impl Attribute { /* Constructors */ -pub fn mk_name_value_item_str(name: InternedString, value: InternedString) - -> P { +pub fn mk_name_value_item_str(name: Name, value: Symbol) -> MetaItem { let value_lit = dummy_spanned(ast::LitKind::Str(value, ast::StrStyle::Cooked)); mk_spanned_name_value_item(DUMMY_SP, name, value_lit) } -pub fn mk_name_value_item(name: InternedString, value: ast::Lit) - -> P { +pub fn mk_name_value_item(name: Name, value: ast::Lit) -> MetaItem { mk_spanned_name_value_item(DUMMY_SP, name, value) } -pub fn mk_list_item(name: InternedString, items: Vec) -> P { +pub fn mk_list_item(name: Name, items: Vec) -> MetaItem { mk_spanned_list_item(DUMMY_SP, name, items) } -pub fn mk_list_word_item(name: InternedString) -> ast::NestedMetaItem { +pub fn mk_list_word_item(name: Name) -> ast::NestedMetaItem { dummy_spanned(NestedMetaItemKind::MetaItem(mk_spanned_word_item(DUMMY_SP, name))) } -pub fn mk_word_item(name: InternedString) -> P { +pub fn mk_word_item(name: Name) -> MetaItem { mk_spanned_word_item(DUMMY_SP, name) } -pub fn mk_spanned_name_value_item(sp: Span, name: InternedString, value: ast::Lit) - -> P { - P(respan(sp, MetaItemKind::NameValue(name, value))) +pub fn mk_spanned_name_value_item(sp: Span, name: Name, value: ast::Lit) -> MetaItem { + MetaItem { span: sp, name: name, node: MetaItemKind::NameValue(value) } } -pub fn mk_spanned_list_item(sp: Span, name: InternedString, items: Vec) - -> P { - P(respan(sp, MetaItemKind::List(name, items))) +pub fn mk_spanned_list_item(sp: Span, name: Name, items: Vec) -> MetaItem { + MetaItem { span: sp, name: name, node: MetaItemKind::List(items) } } -pub fn mk_spanned_word_item(sp: Span, name: InternedString) -> P { - P(respan(sp, MetaItemKind::Word(name))) +pub fn mk_spanned_word_item(sp: Span, name: Name) -> MetaItem { + MetaItem { span: sp, name: name, node: MetaItemKind::Word } } @@ -325,71 +340,63 @@ pub fn mk_attr_id() -> AttrId { } /// Returns an inner attribute with the given value. -pub fn mk_attr_inner(id: AttrId, item: P) -> Attribute { +pub fn mk_attr_inner(id: AttrId, item: MetaItem) -> Attribute { mk_spanned_attr_inner(DUMMY_SP, id, item) } /// Returns an innter attribute with the given value and span. -pub fn mk_spanned_attr_inner(sp: Span, id: AttrId, item: P) -> Attribute { - respan(sp, - Attribute_ { - id: id, - style: ast::AttrStyle::Inner, - value: item, - is_sugared_doc: false, - }) +pub fn mk_spanned_attr_inner(sp: Span, id: AttrId, item: MetaItem) -> Attribute { + Attribute { + id: id, + style: ast::AttrStyle::Inner, + value: item, + is_sugared_doc: false, + span: sp, + } } /// Returns an outer attribute with the given value. 
-pub fn mk_attr_outer(id: AttrId, item: P) -> Attribute { +pub fn mk_attr_outer(id: AttrId, item: MetaItem) -> Attribute { mk_spanned_attr_outer(DUMMY_SP, id, item) } /// Returns an outer attribute with the given value and span. -pub fn mk_spanned_attr_outer(sp: Span, id: AttrId, item: P) -> Attribute { - respan(sp, - Attribute_ { - id: id, - style: ast::AttrStyle::Outer, - value: item, - is_sugared_doc: false, - }) +pub fn mk_spanned_attr_outer(sp: Span, id: AttrId, item: MetaItem) -> Attribute { + Attribute { + id: id, + style: ast::AttrStyle::Outer, + value: item, + is_sugared_doc: false, + span: sp, + } } -pub fn mk_doc_attr_outer(id: AttrId, item: P, is_sugared_doc: bool) -> Attribute { - dummy_spanned(Attribute_ { +pub fn mk_doc_attr_outer(id: AttrId, item: MetaItem, is_sugared_doc: bool) -> Attribute { + Attribute { id: id, style: ast::AttrStyle::Outer, value: item, is_sugared_doc: is_sugared_doc, - }) + span: DUMMY_SP, + } } -pub fn mk_sugared_doc_attr(id: AttrId, text: InternedString, lo: BytePos, - hi: BytePos) +pub fn mk_sugared_doc_attr(id: AttrId, text: Symbol, lo: BytePos, hi: BytePos) -> Attribute { - let style = doc_comment_style(&text); + let style = doc_comment_style(&text.as_str()); let lit = spanned(lo, hi, ast::LitKind::Str(text, ast::StrStyle::Cooked)); - let attr = Attribute_ { + Attribute { id: id, style: style, - value: P(spanned(lo, hi, MetaItemKind::NameValue(InternedString::new("doc"), lit))), - is_sugared_doc: true - }; - spanned(lo, hi, attr) -} - -/* Searching */ -/// Check if `needle` occurs in `haystack` by a structural -/// comparison. This is slightly subtle, and relies on ignoring the -/// span included in the `==` comparison a plain MetaItem. -pub fn contains(haystack: &[P], needle: &MetaItem) -> bool { - debug!("attr::contains (name={})", needle.name()); - haystack.iter().any(|item| { - debug!(" testing: {}", item.name()); - item.node == needle.node - }) + value: MetaItem { + span: mk_sp(lo, hi), + name: Symbol::intern("doc"), + node: MetaItemKind::NameValue(lit), + }, + is_sugared_doc: true, + span: mk_sp(lo, hi), + } } pub fn list_contains_name(items: &[NestedMetaItem], name: &str) -> bool { @@ -408,15 +415,13 @@ pub fn contains_name(attrs: &[Attribute], name: &str) -> bool { }) } -pub fn first_attr_value_str_by_name(attrs: &[Attribute], name: &str) - -> Option { +pub fn first_attr_value_str_by_name(attrs: &[Attribute], name: &str) -> Option { attrs.iter() .find(|at| at.check_name(name)) .and_then(|at| at.value_str()) } -pub fn last_meta_item_value_str_by_name(items: &[P], name: &str) - -> Option { +pub fn last_meta_item_value_str_by_name(items: &[MetaItem], name: &str) -> Option { items.iter() .rev() .find(|mi| mi.check_name(name)) @@ -425,12 +430,12 @@ pub fn last_meta_item_value_str_by_name(items: &[P], name: &str) /* Higher-level applications */ -pub fn find_crate_name(attrs: &[Attribute]) -> Option { +pub fn find_crate_name(attrs: &[Attribute]) -> Option { first_attr_value_str_by_name(attrs, "crate_name") } /// Find the value of #[export_name=*] attribute and check its validity. -pub fn find_export_name_attr(diag: &Handler, attrs: &[Attribute]) -> Option { +pub fn find_export_name_attr(diag: &Handler, attrs: &[Attribute]) -> Option { attrs.iter().fold(None, |ia,attr| { if attr.check_name("export_name") { if let s@Some(_) = attr.value_str() { @@ -464,13 +469,14 @@ pub enum InlineAttr { /// Determine what `#[inline]` attribute is present in `attrs`, if any. 
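
Not part of the patch: the three surface forms that `find_inline_attr` below has to distinguish, written out as ordinary Rust. The bare word maps to `InlineAttr::Hint` as shown in the hunk; the single-item list forms carry `always`/`never` (their variant names are outside the shown hunk, so treat the comments as a sketch):

```rust
#[inline]         // word form: MetaItemKind::Word -> InlineAttr::Hint
fn hinted() { println!("hinted"); }

#[inline(always)] // list form with exactly one item, "always"
fn always_inlined() { println!("always"); }

#[inline(never)]  // list form with exactly one item, "never"
fn never_inlined() { println!("never"); }

fn main() {
    hinted();
    always_inlined();
    never_inlined();
}
```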
pub fn find_inline_attr(diagnostic: Option<&Handler>, attrs: &[Attribute]) -> InlineAttr { - attrs.iter().fold(InlineAttr::None, |ia,attr| { - match attr.node.value.node { - MetaItemKind::Word(ref n) if n == "inline" => { + attrs.iter().fold(InlineAttr::None, |ia, attr| { + match attr.value.node { + _ if attr.value.name != "inline" => ia, + MetaItemKind::Word => { mark_used(attr); InlineAttr::Hint } - MetaItemKind::List(ref n, ref items) if n == "inline" => { + MetaItemKind::List(ref items) => { mark_used(attr); if items.len() != 1 { diagnostic.map(|d|{ span_err!(d, attr.span, E0534, "expected one argument"); }); @@ -503,7 +509,7 @@ pub fn requests_inline(attrs: &[Attribute]) -> bool { /// Tests if a cfg-pattern matches the cfg set pub fn cfg_matches(cfg: &ast::MetaItem, sess: &ParseSess, features: Option<&Features>) -> bool { match cfg.node { - ast::MetaItemKind::List(ref pred, ref mis) => { + ast::MetaItemKind::List(ref mis) => { for mi in mis.iter() { if !mi.is_meta_item() { handle_errors(&sess.span_diagnostic, mi.span, AttrError::UnsupportedLiteral); @@ -513,7 +519,7 @@ pub fn cfg_matches(cfg: &ast::MetaItem, sess: &ParseSess, features: Option<&Feat // The unwraps below may look dangerous, but we've already asserted // that they won't fail with the loop above. - match &pred[..] { + match &*cfg.name.as_str() { "any" => mis.iter().any(|mi| { cfg_matches(mi.meta_item().unwrap(), sess, features) }), @@ -534,11 +540,11 @@ pub fn cfg_matches(cfg: &ast::MetaItem, sess: &ParseSess, features: Option<&Feat } } }, - ast::MetaItemKind::Word(_) | ast::MetaItemKind::NameValue(..) => { + ast::MetaItemKind::Word | ast::MetaItemKind::NameValue(..) => { if let (Some(feats), Some(gated_cfg)) = (features, GatedCfg::gate(cfg)) { gated_cfg.check_and_emit(sess, feats); } - contains(&sess.config, cfg) + sess.config.contains(&(cfg.name(), cfg.value_str())) } } } @@ -547,7 +553,7 @@ pub fn cfg_matches(cfg: &ast::MetaItem, sess: &ParseSess, features: Option<&Feat #[derive(RustcEncodable, RustcDecodable, Clone, Debug, PartialEq, Eq, Hash)] pub struct Stability { pub level: StabilityLevel, - pub feature: InternedString, + pub feature: Symbol, pub rustc_depr: Option, } @@ -555,20 +561,20 @@ pub struct Stability { #[derive(RustcEncodable, RustcDecodable, PartialEq, PartialOrd, Clone, Debug, Eq, Hash)] pub enum StabilityLevel { // Reason for the current stability level and the relevant rust-lang issue - Unstable { reason: Option, issue: u32 }, - Stable { since: InternedString }, + Unstable { reason: Option, issue: u32 }, + Stable { since: Symbol }, } #[derive(RustcEncodable, RustcDecodable, PartialEq, PartialOrd, Clone, Debug, Eq, Hash)] pub struct RustcDeprecation { - pub since: InternedString, - pub reason: InternedString, + pub since: Symbol, + pub reason: Symbol, } #[derive(RustcEncodable, RustcDecodable, PartialEq, PartialOrd, Clone, Debug, Eq, Hash)] pub struct Deprecation { - pub since: Option, - pub note: Option, + pub since: Option, + pub note: Option, } impl StabilityLevel { @@ -587,7 +593,6 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, 'outer: for attr in attrs_iter { let tag = attr.name(); - let tag = &*tag; if tag != "rustc_deprecated" && tag != "unstable" && tag != "stable" { continue // not a stability level } @@ -595,7 +600,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, mark_used(attr); if let Some(metas) = attr.meta_item_list() { - let get = |meta: &MetaItem, item: &mut Option| { + let get = |meta: &MetaItem, item: &mut Option| { if item.is_some() { handle_errors(diagnostic, 
meta.span, AttrError::MultipleItem(meta.name())); return false @@ -609,7 +614,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, } }; - match tag { + match &*tag.as_str() { "rustc_deprecated" => { if rustc_depr.is_some() { span_err!(diagnostic, item_sp, E0540, @@ -621,7 +626,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, let mut reason = None; for meta in metas { if let Some(mi) = meta.meta_item() { - match &*mi.name() { + match &*mi.name().as_str() { "since" => if !get(mi, &mut since) { continue 'outer }, "reason" => if !get(mi, &mut reason) { continue 'outer }, _ => { @@ -664,7 +669,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, let mut issue = None; for meta in metas { if let Some(mi) = meta.meta_item() { - match &*mi.name() { + match &*mi.name().as_str() { "feature" => if !get(mi, &mut feature) { continue 'outer }, "reason" => if !get(mi, &mut reason) { continue 'outer }, "issue" => if !get(mi, &mut issue) { continue 'outer }, @@ -686,7 +691,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, level: Unstable { reason: reason, issue: { - if let Ok(issue) = issue.parse() { + if let Ok(issue) = issue.as_str().parse() { issue } else { span_err!(diagnostic, attr.span(), E0545, @@ -719,7 +724,7 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, let mut since = None; for meta in metas { if let NestedMetaItemKind::MetaItem(ref mi) = meta.node { - match &*mi.name() { + match &*mi.name().as_str() { "feature" => if !get(mi, &mut feature) { continue 'outer }, "since" => if !get(mi, &mut since) { continue 'outer }, _ => { @@ -765,9 +770,6 @@ fn find_stability_generic<'a, I>(diagnostic: &Handler, // Merge the deprecation info into the stability info if let Some(rustc_depr) = rustc_depr { if let Some(ref mut stab) = stab { - if let Unstable {reason: ref mut reason @ None, ..} = stab.level { - *reason = Some(rustc_depr.reason.clone()) - } stab.rustc_depr = Some(rustc_depr); } else { span_err!(diagnostic, item_sp, E0549, @@ -800,7 +802,7 @@ fn find_deprecation_generic<'a, I>(diagnostic: &Handler, } depr = if let Some(metas) = attr.meta_item_list() { - let get = |meta: &MetaItem, item: &mut Option| { + let get = |meta: &MetaItem, item: &mut Option| { if item.is_some() { handle_errors(diagnostic, meta.span, AttrError::MultipleItem(meta.name())); return false @@ -818,7 +820,7 @@ fn find_deprecation_generic<'a, I>(diagnostic: &Handler, let mut note = None; for meta in metas { if let NestedMetaItemKind::MetaItem(ref mi) = meta.node { - match &*mi.name() { + match &*mi.name().as_str() { "since" => if !get(mi, &mut since) { continue 'outer }, "note" => if !get(mi, &mut note) { continue 'outer }, _ => { @@ -854,7 +856,7 @@ pub fn find_deprecation(diagnostic: &Handler, attrs: &[Attribute], find_deprecation_generic(diagnostic, attrs.iter(), item_sp) } -pub fn require_unique_names(diagnostic: &Handler, metas: &[P]) { +pub fn require_unique_names(diagnostic: &Handler, metas: &[MetaItem]) { let mut set = HashSet::new(); for meta in metas { let name = meta.name(); @@ -875,8 +877,8 @@ pub fn require_unique_names(diagnostic: &Handler, metas: &[P]) { /// structure layout, and `packed` to remove padding. 
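
Not part of the patch: a tiny illustration of the `#[repr]` hints that `find_repr_attrs` below parses; the sizes in the comments assume a typical 64-bit target:

```rust
#![allow(dead_code)]
use std::mem::size_of;

#[repr(C)]      // use C-compatible structure layout
struct CLayout { a: u8, b: u32 }

#[repr(packed)] // remove padding
struct Packed { a: u8, b: u32 }

#[repr(u8)]     // pick the enum discriminant type explicitly
enum Small { A, B }

fn main() {
    println!("repr(C):      {} bytes", size_of::<CLayout>()); // 8 (3 bytes of padding)
    println!("repr(packed): {} bytes", size_of::<Packed>());  // 5 (no padding)
    println!("repr(u8):     {} byte",  size_of::<Small>());   // 1
}
```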
pub fn find_repr_attrs(diagnostic: &Handler, attr: &Attribute) -> Vec { let mut acc = Vec::new(); - match attr.node.value.node { - ast::MetaItemKind::List(ref s, ref items) if s == "repr" => { + match attr.value.node { + ast::MetaItemKind::List(ref items) if attr.value.name == "repr" => { mark_used(attr); for item in items { if !item.is_meta_item() { @@ -885,7 +887,7 @@ pub fn find_repr_attrs(diagnostic: &Handler, attr: &Attribute) -> Vec } if let Some(mi) = item.word() { - let word = &*mi.name(); + let word = &*mi.name().as_str(); let hint = match word { // Can't use "extern" because it's not a lexical identifier. "C" => Some(ReprExtern), diff --git a/src/libsyntax/codemap.rs b/src/libsyntax/codemap.rs index 49012ad036..12ce642891 100644 --- a/src/libsyntax/codemap.rs +++ b/src/libsyntax/codemap.rs @@ -51,6 +51,8 @@ pub enum ExpnFormat { MacroAttribute(Name), /// e.g. `format!()` MacroBang(Name), + /// Desugaring done by the compiler during HIR lowering. + CompilerDesugaring(Name) } #[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)] @@ -105,8 +107,9 @@ pub struct NameAndSpan { impl NameAndSpan { pub fn name(&self) -> Name { match self.format { - ExpnFormat::MacroAttribute(s) => s, - ExpnFormat::MacroBang(s) => s, + ExpnFormat::MacroAttribute(s) | + ExpnFormat::MacroBang(s) | + ExpnFormat::CompilerDesugaring(s) => s, } } } @@ -813,6 +816,7 @@ impl CodeMap { let (pre, post) = match ei.callee.format { MacroAttribute(..) => ("#[", "]"), MacroBang(..) => ("", "!"), + CompilerDesugaring(..) => ("desugaring of `", "`"), }; let macro_decl_name = format!("{}{}{}", pre, @@ -871,6 +875,7 @@ impl CodeMapper for CodeMap { #[cfg(test)] mod tests { use super::*; + use symbol::keywords; use std::rc::Rc; #[test] @@ -1097,10 +1102,9 @@ mod tests { #[test] fn t11() { // Test span_to_expanded_string works with expansion - use ast::Name; let cm = init_code_map(); let root = Span { lo: BytePos(0), hi: BytePos(11), expn_id: NO_EXPANSION }; - let format = ExpnFormat::MacroBang(Name(0u32)); + let format = ExpnFormat::MacroBang(keywords::Invalid.name()); let callee = NameAndSpan { format: format, allow_internal_unstable: false, span: None }; @@ -1197,11 +1201,9 @@ mod tests { fn init_expansion_chain(cm: &CodeMap) -> Span { // Creates an expansion chain containing two recursive calls // root -> expA -> expA -> expB -> expB -> end - use ast::Name; - let root = Span { lo: BytePos(0), hi: BytePos(11), expn_id: NO_EXPANSION }; - let format_root = ExpnFormat::MacroBang(Name(0u32)); + let format_root = ExpnFormat::MacroBang(keywords::Invalid.name()); let callee_root = NameAndSpan { format: format_root, allow_internal_unstable: false, span: Some(root) }; @@ -1210,7 +1212,7 @@ mod tests { let id_a1 = cm.record_expansion(info_a1); let span_a1 = Span { lo: BytePos(12), hi: BytePos(23), expn_id: id_a1 }; - let format_a = ExpnFormat::MacroBang(Name(1u32)); + let format_a = ExpnFormat::MacroBang(keywords::As.name()); let callee_a = NameAndSpan { format: format_a, allow_internal_unstable: false, span: Some(span_a1) }; @@ -1223,7 +1225,7 @@ mod tests { let id_b1 = cm.record_expansion(info_b1); let span_b1 = Span { lo: BytePos(25), hi: BytePos(36), expn_id: id_b1 }; - let format_b = ExpnFormat::MacroBang(Name(2u32)); + let format_b = ExpnFormat::MacroBang(keywords::Box.name()); let callee_b = NameAndSpan { format: format_b, allow_internal_unstable: false, span: None }; diff --git a/src/libsyntax/config.rs b/src/libsyntax/config.rs index 946257a16d..89eea3f6f8 100644 --- a/src/libsyntax/config.rs +++ 
b/src/libsyntax/config.rs @@ -12,7 +12,7 @@ use attr::HasAttrs; use feature_gate::{feature_err, EXPLAIN_STMT_ATTR_SYNTAX, Features, get_features, GateIssue}; use {fold, attr}; use ast; -use codemap::{Spanned, respan}; +use codemap::Spanned; use parse::ParseSess; use ptr::P; @@ -106,12 +106,13 @@ impl<'a> StripUnconfigured<'a> { match (cfg.meta_item(), mi.meta_item()) { (Some(cfg), Some(mi)) => if cfg_matches(&cfg, self.sess, self.features) { - self.process_cfg_attr(respan(mi.span, ast::Attribute_ { + self.process_cfg_attr(ast::Attribute { id: attr::mk_attr_id(), - style: attr.node.style, + style: attr.style, value: mi.clone(), is_sugared_doc: false, - })) + span: mi.span, + }) } else { None }, @@ -131,8 +132,8 @@ impl<'a> StripUnconfigured<'a> { return false; } - let mis = match attr.node.value.node { - ast::MetaItemKind::List(_, ref mis) if is_cfg(&attr) => mis, + let mis = match attr.value.node { + ast::MetaItemKind::List(ref mis) if is_cfg(&attr) => mis, _ => return true }; @@ -160,7 +161,7 @@ impl<'a> StripUnconfigured<'a> { attr.span, GateIssue::Language, EXPLAIN_STMT_ATTR_SYNTAX); - if attr.node.is_sugared_doc { + if attr.is_sugared_doc { err.help("`///` is for documentation comments. For a plain comment, use `//`."); } err.emit(); @@ -277,7 +278,7 @@ impl<'a> fold::Folder for StripUnconfigured<'a> { fn fold_stmt(&mut self, stmt: ast::Stmt) -> SmallVector { match self.configure_stmt(stmt) { Some(stmt) => fold::noop_fold_stmt(stmt, self), - None => return SmallVector::zero(), + None => return SmallVector::new(), } } diff --git a/src/libsyntax/diagnostics/plugin.rs b/src/libsyntax/diagnostics/plugin.rs index 81c8e0bdb8..fe5cb87ad5 100644 --- a/src/libsyntax/diagnostics/plugin.rs +++ b/src/libsyntax/diagnostics/plugin.rs @@ -19,6 +19,7 @@ use ext::base::{ExtCtxt, MacEager, MacResult}; use ext::build::AstBuilder; use parse::token; use ptr::P; +use symbol::Symbol; use tokenstream::{TokenTree}; use util::small_vector::SmallVector; @@ -141,7 +142,7 @@ pub fn expand_register_diagnostic<'cx>(ecx: &'cx mut ExtCtxt, )); } }); - let sym = Ident::with_empty_ctxt(token::gensym(&format!( + let sym = Ident::with_empty_ctxt(Symbol::gensym(&format!( "__register_diagnostic_{}", code ))); MacEager::items(SmallVector::many(vec![ @@ -194,11 +195,11 @@ pub fn expand_build_diagnostic_array<'cx>(ecx: &'cx mut ExtCtxt, let (count, expr) = with_registered_diagnostics(|diagnostics| { let descriptions: Vec> = - diagnostics.iter().filter_map(|(code, info)| { + diagnostics.iter().filter_map(|(&code, info)| { info.description.map(|description| { ecx.expr_tuple(span, vec![ - ecx.expr_str(span, code.as_str()), - ecx.expr_str(span, description.as_str()) + ecx.expr_str(span, code), + ecx.expr_str(span, description) ]) }) }).collect(); diff --git a/src/libsyntax/entry.rs b/src/libsyntax/entry.rs index 7014e576e2..93ca1948ed 100644 --- a/src/libsyntax/entry.rs +++ b/src/libsyntax/entry.rs @@ -28,7 +28,7 @@ pub fn entry_point_type(item: &Item, depth: usize) -> EntryPointType { EntryPointType::Start } else if attr::contains_name(&item.attrs, "main") { EntryPointType::MainAttr - } else if item.ident.name.as_str() == "main" { + } else if item.ident.name == "main" { if depth == 1 { // This is a top-level function so can be 'main' EntryPointType::MainNamed diff --git a/src/libsyntax/ext/base.rs b/src/libsyntax/ext/base.rs index 1f47a91fcc..ddbca47429 100644 --- a/src/libsyntax/ext/base.rs +++ b/src/libsyntax/ext/base.rs @@ -18,10 +18,10 @@ use errors::DiagnosticBuilder; use ext::expand::{self, Expansion}; use 
ext::hygiene::Mark; use fold::{self, Folder}; -use parse::{self, parser}; +use parse::{self, parser, DirectoryOwnership}; use parse::token; -use parse::token::{InternedString, str_to_ident}; use ptr::P; +use symbol::Symbol; use util::small_vector::SmallVector; use std::path::PathBuf; @@ -440,7 +440,7 @@ impl MacResult for DummyResult { if self.expr_only { None } else { - Some(SmallVector::zero()) + Some(SmallVector::new()) } } @@ -448,7 +448,7 @@ impl MacResult for DummyResult { if self.expr_only { None } else { - Some(SmallVector::zero()) + Some(SmallVector::new()) } } @@ -456,7 +456,7 @@ impl MacResult for DummyResult { if self.expr_only { None } else { - Some(SmallVector::zero()) + Some(SmallVector::new()) } } @@ -517,12 +517,14 @@ pub type NamedSyntaxExtension = (Name, SyntaxExtension); pub trait Resolver { fn next_node_id(&mut self) -> ast::NodeId; fn get_module_scope(&mut self, id: ast::NodeId) -> Mark; + fn eliminate_crate_var(&mut self, item: P) -> P; fn visit_expansion(&mut self, mark: Mark, expansion: &Expansion); fn add_macro(&mut self, scope: Mark, def: ast::MacroDef, export: bool); fn add_ext(&mut self, ident: ast::Ident, ext: Rc); fn add_expansions_at_stmt(&mut self, id: ast::NodeId, macros: Vec); + fn resolve_imports(&mut self); fn find_attr_invoc(&mut self, attrs: &mut Vec) -> Option; fn resolve_macro(&mut self, scope: Mark, path: &ast::Path, force: bool) -> Result, Determinacy>; @@ -539,12 +541,14 @@ pub struct DummyResolver; impl Resolver for DummyResolver { fn next_node_id(&mut self) -> ast::NodeId { ast::DUMMY_NODE_ID } fn get_module_scope(&mut self, _id: ast::NodeId) -> Mark { Mark::root() } + fn eliminate_crate_var(&mut self, item: P) -> P { item } fn visit_expansion(&mut self, _invoc: Mark, _expansion: &Expansion) {} fn add_macro(&mut self, _scope: Mark, _def: ast::MacroDef, _export: bool) {} fn add_ext(&mut self, _ident: ast::Ident, _ext: Rc) {} fn add_expansions_at_stmt(&mut self, _id: ast::NodeId, _macros: Vec) {} + fn resolve_imports(&mut self) {} fn find_attr_invoc(&mut self, _attrs: &mut Vec) -> Option { None } fn resolve_macro(&mut self, _scope: Mark, _path: &ast::Path, _force: bool) -> Result, Determinacy> { @@ -564,9 +568,7 @@ pub struct ExpansionData { pub depth: usize, pub backtrace: ExpnId, pub module: Rc, - - // True if non-inline modules without a `#[path]` are forbidden at the root of this expansion. - pub no_noninline_mod: bool, + pub directory_ownership: DirectoryOwnership, } /// One of these is made during expansion and incrementally updated as we go; @@ -597,7 +599,7 @@ impl<'a> ExtCtxt<'a> { depth: 0, backtrace: NO_EXPANSION, module: Rc::new(ModuleData { mod_path: Vec::new(), directory: PathBuf::new() }), - no_noninline_mod: false, + directory_ownership: DirectoryOwnership::Owned, }, } } @@ -639,7 +641,7 @@ impl<'a> ExtCtxt<'a> { loop { if self.codemap().with_expn_info(expn_id, |info| { info.map_or(None, |i| { - if i.callee.name().as_str() == "include" { + if i.callee.name() == "include" { // Stop going up the backtrace once include! is encountered return None; } @@ -731,7 +733,7 @@ impl<'a> ExtCtxt<'a> { self.ecfg.trace_mac = x } pub fn ident_of(&self, st: &str) -> ast::Ident { - str_to_ident(st) + ast::Ident::from_str(st) } pub fn std_path(&self, components: &[&str]) -> Vec { let mut v = Vec::new(); @@ -742,7 +744,7 @@ impl<'a> ExtCtxt<'a> { return v } pub fn name_of(&self, st: &str) -> ast::Name { - token::intern(st) + Symbol::intern(st) } } @@ -750,7 +752,7 @@ impl<'a> ExtCtxt<'a> { /// emitting `err_msg` if `expr` is not a string literal. 
This does not stop /// compilation on error, merely emits a non-fatal error and returns None. pub fn expr_to_spanned_string(cx: &mut ExtCtxt, expr: P, err_msg: &str) - -> Option> { + -> Option> { // Update `expr.span`'s expn_id now in case expr is an `include!` macro invocation. let expr = expr.map(|mut expr| { expr.span.expn_id = cx.backtrace(); @@ -761,7 +763,7 @@ pub fn expr_to_spanned_string(cx: &mut ExtCtxt, expr: P, err_msg: &st let expr = cx.expander().fold_expr(expr); match expr.node { ast::ExprKind::Lit(ref l) => match l.node { - ast::LitKind::Str(ref s, style) => return Some(respan(expr.span, (s.clone(), style))), + ast::LitKind::Str(s, style) => return Some(respan(expr.span, (s, style))), _ => cx.span_err(l.span, err_msg) }, _ => cx.span_err(expr.span, err_msg) @@ -770,7 +772,7 @@ pub fn expr_to_spanned_string(cx: &mut ExtCtxt, expr: P, err_msg: &st } pub fn expr_to_string(cx: &mut ExtCtxt, expr: P, err_msg: &str) - -> Option<(InternedString, ast::StrStyle)> { + -> Option<(Symbol, ast::StrStyle)> { expr_to_spanned_string(cx, expr, err_msg).map(|s| s.node) } diff --git a/src/libsyntax/ext/build.rs b/src/libsyntax/ext/build.rs index 37bd83be7b..6c0d40b69d 100644 --- a/src/libsyntax/ext/build.rs +++ b/src/libsyntax/ext/build.rs @@ -11,11 +11,11 @@ use abi::Abi; use ast::{self, Ident, Generics, Expr, BlockCheckMode, UnOp, PatKind}; use attr; -use syntax_pos::{Span, DUMMY_SP, Pos}; +use syntax_pos::{Span, DUMMY_SP}; use codemap::{dummy_spanned, respan, Spanned}; use ext::base::ExtCtxt; -use parse::token::{self, keywords, InternedString}; use ptr::P; +use symbol::{Symbol, keywords}; // Transitional reexports so qquote can find the paths it is looking for mod syntax { @@ -149,7 +149,7 @@ pub trait AstBuilder { fn expr_vec(&self, sp: Span, exprs: Vec>) -> P; fn expr_vec_ng(&self, sp: Span) -> P; fn expr_vec_slice(&self, sp: Span, exprs: Vec>) -> P; - fn expr_str(&self, sp: Span, s: InternedString) -> P; + fn expr_str(&self, sp: Span, s: Symbol) -> P; fn expr_some(&self, sp: Span, expr: P) -> P; fn expr_none(&self, sp: Span) -> P; @@ -158,7 +158,7 @@ pub trait AstBuilder { fn expr_tuple(&self, sp: Span, exprs: Vec>) -> P; - fn expr_fail(&self, span: Span, msg: InternedString) -> P; + fn expr_fail(&self, span: Span, msg: Symbol) -> P; fn expr_unreachable(&self, span: Span) -> P; fn expr_ok(&self, span: Span, expr: P) -> P; @@ -198,17 +198,13 @@ pub trait AstBuilder { fn lambda_fn_decl(&self, span: Span, fn_decl: P, - blk: P, + body: P, fn_decl_span: Span) -> P; - fn lambda(&self, span: Span, ids: Vec, blk: P) -> P; - fn lambda0(&self, span: Span, blk: P) -> P; - fn lambda1(&self, span: Span, blk: P, ident: ast::Ident) -> P; - - fn lambda_expr(&self, span: Span, ids: Vec , blk: P) -> P; - fn lambda_expr_0(&self, span: Span, expr: P) -> P; - fn lambda_expr_1(&self, span: Span, expr: P, ident: ast::Ident) -> P; + fn lambda(&self, span: Span, ids: Vec, body: P) -> P; + fn lambda0(&self, span: Span, body: P) -> P; + fn lambda1(&self, span: Span, body: P, ident: ast::Ident) -> P; fn lambda_stmts(&self, span: Span, ids: Vec, blk: Vec) -> P; @@ -279,22 +275,22 @@ pub trait AstBuilder { generics: Generics) -> P; fn item_ty(&self, span: Span, name: Ident, ty: P) -> P; - fn attribute(&self, sp: Span, mi: P) -> ast::Attribute; + fn attribute(&self, sp: Span, mi: ast::MetaItem) -> ast::Attribute; - fn meta_word(&self, sp: Span, w: InternedString) -> P; + fn meta_word(&self, sp: Span, w: ast::Name) -> ast::MetaItem; - fn meta_list_item_word(&self, sp: Span, w: InternedString) -> ast::NestedMetaItem; + 
fn meta_list_item_word(&self, sp: Span, w: ast::Name) -> ast::NestedMetaItem; fn meta_list(&self, sp: Span, - name: InternedString, + name: ast::Name, mis: Vec ) - -> P; + -> ast::MetaItem; fn meta_name_value(&self, sp: Span, - name: InternedString, + name: ast::Name, value: ast::LitKind) - -> P; + -> ast::MetaItem; fn item_use(&self, sp: Span, vis: ast::Visibility, vp: P) -> P; @@ -663,23 +659,11 @@ impl<'a> AstBuilder for ExtCtxt<'a> { } fn expr_field_access(&self, sp: Span, expr: P, ident: ast::Ident) -> P { - let field_span = Span { - lo: sp.lo - Pos::from_usize(ident.name.as_str().len()), - hi: sp.hi, - expn_id: sp.expn_id, - }; - - let id = Spanned { node: ident, span: field_span }; + let id = Spanned { node: ident, span: sp }; self.expr(sp, ast::ExprKind::Field(expr, id)) } fn expr_tup_field_access(&self, sp: Span, expr: P, idx: usize) -> P { - let field_span = Span { - lo: sp.lo - Pos::from_usize(idx.to_string().len()), - hi: sp.hi, - expn_id: sp.expn_id, - }; - - let id = Spanned { node: idx, span: field_span }; + let id = Spanned { node: idx, span: sp }; self.expr(sp, ast::ExprKind::TupField(expr, id)) } fn expr_addr_of(&self, sp: Span, e: P) -> P { @@ -759,7 +743,7 @@ impl<'a> AstBuilder for ExtCtxt<'a> { fn expr_vec_slice(&self, sp: Span, exprs: Vec>) -> P { self.expr_addr_of(sp, self.expr_vec(sp, exprs)) } - fn expr_str(&self, sp: Span, s: InternedString) -> P { + fn expr_str(&self, sp: Span, s: Symbol) -> P { self.expr_lit(sp, ast::LitKind::Str(s, ast::StrStyle::Cooked)) } @@ -781,7 +765,7 @@ impl<'a> AstBuilder for ExtCtxt<'a> { fn expr_break(&self, sp: Span) -> P { - self.expr(sp, ast::ExprKind::Break(None)) + self.expr(sp, ast::ExprKind::Break(None, None)) } @@ -789,10 +773,9 @@ impl<'a> AstBuilder for ExtCtxt<'a> { self.expr(sp, ast::ExprKind::Tup(exprs)) } - fn expr_fail(&self, span: Span, msg: InternedString) -> P { + fn expr_fail(&self, span: Span, msg: Symbol) -> P { let loc = self.codemap().lookup_char_pos(span.lo); - let expr_file = self.expr_str(span, - token::intern_and_get_ident(&loc.file.name)); + let expr_file = self.expr_str(span, Symbol::intern(&loc.file.name)); let expr_line = self.expr_u32(span, loc.line as u32); let expr_file_line_tuple = self.expr_tuple(span, vec![expr_file, expr_line]); let expr_file_line_ptr = self.expr_addr_of(span, expr_file_line_tuple); @@ -805,9 +788,7 @@ impl<'a> AstBuilder for ExtCtxt<'a> { } fn expr_unreachable(&self, span: Span) -> P { - self.expr_fail(span, - InternedString::new( - "internal error: entered unreachable code")) + self.expr_fail(span, Symbol::intern("internal error: entered unreachable code")) } fn expr_ok(&self, sp: Span, expr: P) -> P { @@ -940,19 +921,19 @@ impl<'a> AstBuilder for ExtCtxt<'a> { fn lambda_fn_decl(&self, span: Span, fn_decl: P, - blk: P, + body: P, fn_decl_span: Span) // span of the `|...|` part -> P { self.expr(span, ast::ExprKind::Closure(ast::CaptureBy::Ref, fn_decl, - blk, + body, fn_decl_span)) } fn lambda(&self, span: Span, ids: Vec, - blk: P) + body: P) -> P { let fn_decl = self.fn_decl( ids.iter().map(|id| self.arg(span, *id, self.ty_infer(span))).collect(), @@ -962,26 +943,15 @@ impl<'a> AstBuilder for ExtCtxt<'a> { // part of the lambda, but it probably (maybe?) corresponds to // the entire lambda body. Probably we should extend the API // here, but that's not entirely clear. 
- self.expr(span, ast::ExprKind::Closure(ast::CaptureBy::Ref, fn_decl, blk, span)) + self.expr(span, ast::ExprKind::Closure(ast::CaptureBy::Ref, fn_decl, body, span)) } - fn lambda0(&self, span: Span, blk: P) -> P { - self.lambda(span, Vec::new(), blk) + fn lambda0(&self, span: Span, body: P) -> P { + self.lambda(span, Vec::new(), body) } - fn lambda1(&self, span: Span, blk: P, ident: ast::Ident) -> P { - self.lambda(span, vec![ident], blk) - } - - fn lambda_expr(&self, span: Span, ids: Vec, - expr: P) -> P { - self.lambda(span, ids, self.block_expr(expr)) - } - fn lambda_expr_0(&self, span: Span, expr: P) -> P { - self.lambda0(span, self.block_expr(expr)) - } - fn lambda_expr_1(&self, span: Span, expr: P, ident: ast::Ident) -> P { - self.lambda1(span, self.block_expr(expr), ident) + fn lambda1(&self, span: Span, body: P, ident: ast::Ident) -> P { + self.lambda(span, vec![ident], body) } fn lambda_stmts(&self, @@ -989,14 +959,14 @@ impl<'a> AstBuilder for ExtCtxt<'a> { ids: Vec, stmts: Vec) -> P { - self.lambda(span, ids, self.block(span, stmts)) + self.lambda(span, ids, self.expr_block(self.block(span, stmts))) } fn lambda_stmts_0(&self, span: Span, stmts: Vec) -> P { - self.lambda0(span, self.block(span, stmts)) + self.lambda0(span, self.expr_block(self.block(span, stmts))) } fn lambda_stmts_1(&self, span: Span, stmts: Vec, ident: ast::Ident) -> P { - self.lambda1(span, self.block(span, stmts), ident) + self.lambda1(span, self.expr_block(self.block(span, stmts)), ident) } fn arg(&self, span: Span, ident: ast::Ident, ty: P) -> ast::Arg { @@ -1161,25 +1131,25 @@ impl<'a> AstBuilder for ExtCtxt<'a> { self.item_ty_poly(span, name, ty, Generics::default()) } - fn attribute(&self, sp: Span, mi: P) -> ast::Attribute { + fn attribute(&self, sp: Span, mi: ast::MetaItem) -> ast::Attribute { attr::mk_spanned_attr_outer(sp, attr::mk_attr_id(), mi) } - fn meta_word(&self, sp: Span, w: InternedString) -> P { + fn meta_word(&self, sp: Span, w: ast::Name) -> ast::MetaItem { attr::mk_spanned_word_item(sp, w) } - fn meta_list_item_word(&self, sp: Span, w: InternedString) -> ast::NestedMetaItem { + fn meta_list_item_word(&self, sp: Span, w: ast::Name) -> ast::NestedMetaItem { respan(sp, ast::NestedMetaItemKind::MetaItem(attr::mk_spanned_word_item(sp, w))) } - fn meta_list(&self, sp: Span, name: InternedString, mis: Vec) - -> P { + fn meta_list(&self, sp: Span, name: ast::Name, mis: Vec) + -> ast::MetaItem { attr::mk_spanned_list_item(sp, name, mis) } - fn meta_name_value(&self, sp: Span, name: InternedString, value: ast::LitKind) - -> P { + fn meta_name_value(&self, sp: Span, name: ast::Name, value: ast::LitKind) + -> ast::MetaItem { attr::mk_spanned_name_value_item(sp, name, respan(sp, value)) } diff --git a/src/libsyntax/ext/expand.rs b/src/libsyntax/ext/expand.rs index e3b23e239f..8abd3d252c 100644 --- a/src/libsyntax/ext/expand.rs +++ b/src/libsyntax/ext/expand.rs @@ -21,12 +21,13 @@ use ext::base::*; use feature_gate::{self, Features}; use fold; use fold::*; -use parse::{ParseSess, PResult, lexer}; +use parse::{ParseSess, DirectoryOwnership, PResult, lexer}; use parse::parser::Parser; -use parse::token::{self, intern, keywords}; +use parse::token; use print::pprust; use ptr::P; use std_inject; +use symbol::keywords; use tokenstream::{TokenTree, TokenStream}; use util::small_vector::SmallVector; use visit::Visitor; @@ -84,12 +85,12 @@ macro_rules! 
expansions { } } - pub fn visit_with(&self, visitor: &mut V) { + pub fn visit_with<'a, V: Visitor<'a>>(&'a self, visitor: &mut V) { match *self { Expansion::OptExpr(Some(ref expr)) => visitor.visit_expr(expr), Expansion::OptExpr(None) => {} $($( Expansion::$kind(ref ast) => visitor.$visit(ast), )*)* - $($( Expansion::$kind(ref ast) => for ast in ast.as_slice() { + $($( Expansion::$kind(ref ast) => for ast in &ast[..] { visitor.$visit_elt(ast); }, )*)* } @@ -190,7 +191,7 @@ impl<'a, 'b> MacroExpander<'a, 'b> { pub fn expand_crate(&mut self, mut krate: ast::Crate) -> ast::Crate { self.cx.crate_root = std_inject::injected_crate_name(&krate); let mut module = ModuleData { - mod_path: vec![token::str_to_ident(&self.cx.ecfg.crate_name)], + mod_path: vec![Ident::from_str(&self.cx.ecfg.crate_name)], directory: PathBuf::from(self.cx.codemap().span_to_filename(krate.span)), }; module.directory.pop(); @@ -222,6 +223,7 @@ impl<'a, 'b> MacroExpander<'a, 'b> { self.cx.current_expansion.depth = 0; let (expansion, mut invocations) = self.collect_invocations(expansion); + self.resolve_imports(); invocations.reverse(); let mut expansions = Vec::new(); @@ -230,9 +232,9 @@ impl<'a, 'b> MacroExpander<'a, 'b> { loop { let invoc = if let Some(invoc) = invocations.pop() { invoc - } else if undetermined_invocations.is_empty() { - break } else { + self.resolve_imports(); + if undetermined_invocations.is_empty() { break } invocations = mem::replace(&mut undetermined_invocations, Vec::new()); force = !mem::replace(&mut progress, false); continue @@ -245,7 +247,7 @@ impl<'a, 'b> MacroExpander<'a, 'b> { self.cx.resolver.resolve_macro(scope, &mac.node.path, force) } InvocationKind::Attr { ref attr, .. } => { - let ident = ast::Ident::with_empty_ctxt(intern(&*attr.name())); + let ident = Ident::with_empty_ctxt(attr.name()); let path = ast::Path::from_ident(attr.span, ident); self.cx.resolver.resolve_macro(scope, &path, force) } @@ -292,6 +294,14 @@ impl<'a, 'b> MacroExpander<'a, 'b> { expansion.fold_with(&mut placeholder_expander) } + fn resolve_imports(&mut self) { + if self.monotonic { + let err_count = self.cx.parse_sess.span_diagnostic.err_count(); + self.cx.resolver.resolve_imports(); + self.cx.resolve_err_count += self.cx.parse_sess.span_diagnostic.err_count() - err_count; + } + } + fn collect_invocations(&mut self, expansion: Expansion) -> (Expansion, Vec) { let result = { let mut collector = InvocationCollector { @@ -332,7 +342,7 @@ impl<'a, 'b> MacroExpander<'a, 'b> { }; attr::mark_used(&attr); - let name = intern(&attr.name()); + let name = attr.name(); self.cx.bt_push(ExpnInfo { call_site: attr.span, callee: NameAndSpan { @@ -344,12 +354,12 @@ impl<'a, 'b> MacroExpander<'a, 'b> { match *ext { MultiModifier(ref mac) => { - let item = mac.expand(self.cx, attr.span, &attr.node.value, item); + let item = mac.expand(self.cx, attr.span, &attr.value, item); kind.expect_from_annotatables(item) } MultiDecorator(ref mac) => { let mut items = Vec::new(); - mac.expand(self.cx, attr.span, &attr.node.value, &item, + mac.expand(self.cx, attr.span, &attr.value, &item, &mut |item| items.push(item)); items.push(item); kind.expect_from_annotatables(items) @@ -390,12 +400,7 @@ impl<'a, 'b> MacroExpander<'a, 'b> { &self.cx.ecfg.features.unwrap()); } - if path.segments.len() > 1 || path.global || !path.segments[0].parameters.is_empty() { - self.cx.span_err(path.span, "expected macro name without module separators"); - return kind.dummy(span); - } - - let extname = path.segments[0].identifier.name; + let extname = 
path.segments.last().unwrap().identifier.name; let ident = ident.unwrap_or(keywords::Invalid.ident()); let marked_tts = mark_tts(&tts, mark); let opt_expanded = match *ext { @@ -511,29 +516,31 @@ impl<'a> Parser<'a> { -> PResult<'a, Expansion> { Ok(match kind { ExpansionKind::Items => { - let mut items = SmallVector::zero(); + let mut items = SmallVector::new(); while let Some(item) = self.parse_item()? { items.push(item); } Expansion::Items(items) } ExpansionKind::TraitItems => { - let mut items = SmallVector::zero(); + let mut items = SmallVector::new(); while self.token != token::Eof { items.push(self.parse_trait_item()?); } Expansion::TraitItems(items) } ExpansionKind::ImplItems => { - let mut items = SmallVector::zero(); + let mut items = SmallVector::new(); while self.token != token::Eof { items.push(self.parse_impl_item()?); } Expansion::ImplItems(items) } ExpansionKind::Stmts => { - let mut stmts = SmallVector::zero(); - while self.token != token::Eof { + let mut stmts = SmallVector::new(); + while self.token != token::Eof && + // won't make progress on a `}` + self.token != token::CloseDelim(token::Brace) { if let Some(stmt) = self.parse_full_stmt(macro_legacy_warnings)? { stmts.push(stmt); } @@ -571,7 +578,7 @@ macro_rules! fully_configure { ($this:ident, $node:ident, $noop_fold:ident) => { match $noop_fold($node, &mut $this.cfg).pop() { Some(node) => node, - None => return SmallVector::zero(), + None => return SmallVector::new(), } } } @@ -643,7 +650,7 @@ fn string_to_tts(text: String, parse_sess: &ParseSess) -> Vec { .new_filemap(String::from(""), None, text); let lexer = lexer::StringReader::new(&parse_sess.span_diagnostic, filemap); - let mut parser = Parser::new(parse_sess, Box::new(lexer)); + let mut parser = Parser::new(parse_sess, Box::new(lexer), None, false); panictry!(parser.parse_all_token_trees()) } @@ -687,7 +694,7 @@ impl<'a, 'b> Folder for InvocationCollector<'a, 'b> { fn fold_stmt(&mut self, stmt: ast::Stmt) -> SmallVector { let stmt = match self.cfg.configure_stmt(stmt) { Some(stmt) => stmt, - None => return SmallVector::zero(), + None => return SmallVector::new(), }; let (mac, style, attrs) = if let StmtKind::Mac(mac) = stmt.node { @@ -715,9 +722,10 @@ impl<'a, 'b> Folder for InvocationCollector<'a, 'b> { } fn fold_block(&mut self, block: P) -> P { - let no_noninline_mod = mem::replace(&mut self.cx.current_expansion.no_noninline_mod, true); + let old_directory_ownership = self.cx.current_expansion.directory_ownership; + self.cx.current_expansion.directory_ownership = DirectoryOwnership::UnownedViaBlock; let result = noop_fold_block(block, self); - self.cx.current_expansion.no_noninline_mod = no_noninline_mod; + self.cx.current_expansion.directory_ownership = old_directory_ownership; result } @@ -756,7 +764,7 @@ impl<'a, 'b> Folder for InvocationCollector<'a, 'b> { return noop_fold_item(item, self); } - let orig_no_noninline_mod = self.cx.current_expansion.no_noninline_mod; + let orig_directory_ownership = self.cx.current_expansion.directory_ownership; let mut module = (*self.cx.current_expansion.module).clone(); module.mod_path.push(item.ident); @@ -767,23 +775,28 @@ impl<'a, 'b> Folder for InvocationCollector<'a, 'b> { if inline_module { if let Some(path) = attr::first_attr_value_str_by_name(&item.attrs, "path") { - self.cx.current_expansion.no_noninline_mod = false; - module.directory.push(&*path); + self.cx.current_expansion.directory_ownership = DirectoryOwnership::Owned; + module.directory.push(&*path.as_str()); } else { 
module.directory.push(&*item.ident.name.as_str()); } } else { - self.cx.current_expansion.no_noninline_mod = false; - module.directory = + let mut path = PathBuf::from(self.cx.parse_sess.codemap().span_to_filename(inner)); - module.directory.pop(); + let directory_ownership = match path.file_name().unwrap().to_str() { + Some("mod.rs") => DirectoryOwnership::Owned, + _ => DirectoryOwnership::UnownedViaMod(false), + }; + path.pop(); + module.directory = path; + self.cx.current_expansion.directory_ownership = directory_ownership; } let orig_module = mem::replace(&mut self.cx.current_expansion.module, Rc::new(module)); let result = noop_fold_item(item, self); self.cx.current_expansion.module = orig_module; - self.cx.current_expansion.no_noninline_mod = orig_no_noninline_mod; + self.cx.current_expansion.directory_ownership = orig_directory_ownership; return result; } // Ensure that test functions are accessible from the test harness. @@ -910,7 +923,6 @@ impl<'feat> ExpansionConfig<'feat> { fn enable_allow_internal_unstable = allow_internal_unstable, fn enable_custom_derive = custom_derive, fn enable_pushpop_unsafe = pushpop_unsafe, - fn enable_proc_macro = proc_macro, } } diff --git a/src/libsyntax/ext/placeholders.rs b/src/libsyntax/ext/placeholders.rs index e323dd2f62..4fe57a8345 100644 --- a/src/libsyntax/ext/placeholders.rs +++ b/src/libsyntax/ext/placeholders.rs @@ -13,8 +13,8 @@ use codemap::{DUMMY_SP, dummy_spanned}; use ext::base::ExtCtxt; use ext::expand::{Expansion, ExpansionKind}; use fold::*; -use parse::token::{intern, keywords}; use ptr::P; +use symbol::{Symbol, keywords}; use util::move_map::MoveMap; use util::small_vector::SmallVector; @@ -227,7 +227,7 @@ pub fn reconstructed_macro_rules(def: &ast::MacroDef) -> Expansion { span: DUMMY_SP, global: false, segments: vec![ast::PathSegment { - identifier: ast::Ident::with_empty_ctxt(intern("macro_rules")), + identifier: ast::Ident::with_empty_ctxt(Symbol::intern("macro_rules")), parameters: ast::PathParameters::none(), }], }, diff --git a/src/libsyntax/ext/proc_macro_shim.rs b/src/libsyntax/ext/proc_macro_shim.rs index dc3a01f41b..21ce89a6dd 100644 --- a/src/libsyntax/ext/proc_macro_shim.rs +++ b/src/libsyntax/ext/proc_macro_shim.rs @@ -66,6 +66,7 @@ pub mod prelude { pub use ast::Ident; pub use codemap::{DUMMY_SP, Span}; pub use ext::base::{ExtCtxt, MacResult}; - pub use parse::token::{self, Token, DelimToken, keywords, str_to_ident}; + pub use parse::token::{self, Token, DelimToken}; + pub use symbol::keywords; pub use tokenstream::{TokenTree, TokenStream}; } diff --git a/src/libsyntax/ext/quote.rs b/src/libsyntax/ext/quote.rs index 969cfa292c..aa777a19a9 100644 --- a/src/libsyntax/ext/quote.rs +++ b/src/libsyntax/ext/quote.rs @@ -33,6 +33,7 @@ pub mod rt { use parse::{self, token, classify}; use ptr::P; use std::rc::Rc; + use symbol::Symbol; use tokenstream::{self, TokenTree}; @@ -211,7 +212,7 @@ pub mod rt { impl_to_tokens_slice! { P, [] } impl_to_tokens_slice! 
{ ast::Arg, [TokenTree::Token(DUMMY_SP, token::Comma)] } - impl ToTokens for P { + impl ToTokens for ast::MetaItem { fn to_tokens(&self, _cx: &ExtCtxt) -> Vec { let nt = token::NtMeta(self.clone()); vec![TokenTree::Token(DUMMY_SP, token::Interpolated(Rc::new(nt)))] @@ -223,13 +224,13 @@ pub mod rt { let mut r = vec![]; // FIXME: The spans could be better r.push(TokenTree::Token(self.span, token::Pound)); - if self.node.style == ast::AttrStyle::Inner { + if self.style == ast::AttrStyle::Inner { r.push(TokenTree::Token(self.span, token::Not)); } r.push(TokenTree::Delimited(self.span, Rc::new(tokenstream::Delimited { delim: token::Bracket, open_span: self.span, - tts: self.node.value.to_tokens(cx), + tts: self.value.to_tokens(cx), close_span: self.span, }))); r @@ -238,8 +239,7 @@ pub mod rt { impl ToTokens for str { fn to_tokens(&self, cx: &ExtCtxt) -> Vec { - let lit = ast::LitKind::Str( - token::intern_and_get_ident(self), ast::StrStyle::Cooked); + let lit = ast::LitKind::Str(Symbol::intern(self), ast::StrStyle::Cooked); dummy_spanned(lit).to_tokens(cx) } } @@ -405,7 +405,7 @@ pub fn parse_block_panic(parser: &mut Parser) -> P { panictry!(parser.parse_block()) } -pub fn parse_meta_item_panic(parser: &mut Parser) -> P { +pub fn parse_meta_item_panic(parser: &mut Parser) -> ast::MetaItem { panictry!(parser.parse_meta_item()) } @@ -527,17 +527,17 @@ pub fn expand_quote_matcher(cx: &mut ExtCtxt, base::MacEager::expr(expanded) } -fn ids_ext(strs: Vec ) -> Vec { - strs.iter().map(|str| str_to_ident(&(*str))).collect() +fn ids_ext(strs: Vec) -> Vec { + strs.iter().map(|s| ast::Ident::from_str(s)).collect() } -fn id_ext(str: &str) -> ast::Ident { - str_to_ident(str) +fn id_ext(s: &str) -> ast::Ident { + ast::Ident::from_str(s) } // Lift an ident to the expr that evaluates to that ident. 
fn mk_ident(cx: &ExtCtxt, sp: Span, ident: ast::Ident) -> P { - let e_str = cx.expr_str(sp, ident.name.as_str()); + let e_str = cx.expr_str(sp, ident.name); cx.expr_method_call(sp, cx.expr_ident(sp, id_ext("ext_cx")), id_ext("ident_of"), @@ -546,7 +546,7 @@ fn mk_ident(cx: &ExtCtxt, sp: Span, ident: ast::Ident) -> P { // Lift a name to the expr that evaluates to that name fn mk_name(cx: &ExtCtxt, sp: Span, ident: ast::Ident) -> P { - let e_str = cx.expr_str(sp, ident.name.as_str()); + let e_str = cx.expr_str(sp, ident.name); cx.expr_method_call(sp, cx.expr_ident(sp, id_ext("ext_cx")), id_ext("name_of"), diff --git a/src/libsyntax/ext/source_util.rs b/src/libsyntax/ext/source_util.rs index ec48cae3f7..39b92c7d00 100644 --- a/src/libsyntax/ext/source_util.rs +++ b/src/libsyntax/ext/source_util.rs @@ -13,10 +13,11 @@ use syntax_pos::{self, Pos, Span}; use ext::base::*; use ext::base; use ext::build::AstBuilder; -use parse::token; +use parse::{token, DirectoryOwnership}; use parse; use print::pprust; use ptr::P; +use symbol::Symbol; use tokenstream; use util::small_vector::SmallVector; @@ -60,15 +61,13 @@ pub fn expand_file(cx: &mut ExtCtxt, sp: Span, tts: &[tokenstream::TokenTree]) let topmost = cx.expansion_cause(); let loc = cx.codemap().lookup_char_pos(topmost.lo); - let filename = token::intern_and_get_ident(&loc.file.name); - base::MacEager::expr(cx.expr_str(topmost, filename)) + base::MacEager::expr(cx.expr_str(topmost, Symbol::intern(&loc.file.name))) } pub fn expand_stringify(cx: &mut ExtCtxt, sp: Span, tts: &[tokenstream::TokenTree]) -> Box { let s = pprust::tts_to_string(tts); - base::MacEager::expr(cx.expr_str(sp, - token::intern_and_get_ident(&s[..]))) + base::MacEager::expr(cx.expr_str(sp, Symbol::intern(&s))) } pub fn expand_mod(cx: &mut ExtCtxt, sp: Span, tts: &[tokenstream::TokenTree]) @@ -77,9 +76,7 @@ pub fn expand_mod(cx: &mut ExtCtxt, sp: Span, tts: &[tokenstream::TokenTree]) let mod_path = &cx.current_expansion.module.mod_path; let string = mod_path.iter().map(|x| x.to_string()).collect::>().join("::"); - base::MacEager::expr(cx.expr_str( - sp, - token::intern_and_get_ident(&string[..]))) + base::MacEager::expr(cx.expr_str(sp, Symbol::intern(&string))) } /// include! 
: parse the given file as an expr @@ -93,7 +90,8 @@ pub fn expand_include<'cx>(cx: &'cx mut ExtCtxt, sp: Span, tts: &[tokenstream::T }; // The file will be added to the code map by the parser let path = res_rel_file(cx, sp, Path::new(&file)); - let p = parse::new_sub_parser_from_file(cx.parse_sess(), &path, true, None, sp); + let directory_ownership = DirectoryOwnership::Owned; + let p = parse::new_sub_parser_from_file(cx.parse_sess(), &path, directory_ownership, None, sp); struct ExpandResult<'a> { p: parse::parser::Parser<'a>, @@ -104,7 +102,7 @@ pub fn expand_include<'cx>(cx: &'cx mut ExtCtxt, sp: Span, tts: &[tokenstream::T } fn make_items(mut self: Box>) -> Option>> { - let mut ret = SmallVector::zero(); + let mut ret = SmallVector::new(); while self.p.token != token::Eof { match panictry!(self.p.parse_item()) { Some(item) => ret.push(item), @@ -144,10 +142,9 @@ pub fn expand_include_str(cx: &mut ExtCtxt, sp: Span, tts: &[tokenstream::TokenT // Add this input file to the code map to make it available as // dependency information let filename = format!("{}", file.display()); - let interned = token::intern_and_get_ident(&src[..]); cx.codemap().new_filemap_and_lines(&filename, None, &src); - base::MacEager::expr(cx.expr_str(sp, interned)) + base::MacEager::expr(cx.expr_str(sp, Symbol::intern(&src))) } Err(_) => { cx.span_err(sp, diff --git a/src/libsyntax/ext/tt/macro_parser.rs b/src/libsyntax/ext/tt/macro_parser.rs index 1066646aa8..2de3116607 100644 --- a/src/libsyntax/ext/tt/macro_parser.rs +++ b/src/libsyntax/ext/tt/macro_parser.rs @@ -83,7 +83,7 @@ use syntax_pos::{self, BytePos, mk_sp, Span}; use codemap::Spanned; use errors::FatalError; use parse::lexer::*; //resolve bug? -use parse::ParseSess; +use parse::{Directory, ParseSess}; use parse::parser::{PathStyle, Parser}; use parse::token::{DocComment, MatchNt, SubstNt}; use parse::token::{Token, Nonterminal}; @@ -130,7 +130,7 @@ struct MatcherTtFrame { } #[derive(Clone)] -pub struct MatcherPos { +struct MatcherPos { stack: Vec, top_elts: TokenTreeOrTokenTreeVec, sep: Option, @@ -143,6 +143,8 @@ pub struct MatcherPos { sp_lo: BytePos, } +pub type NamedParseResult = ParseResult>>; + pub fn count_names(ms: &[TokenTree]) -> usize { ms.iter().fold(0, |count, elt| { count + match *elt { @@ -160,14 +162,13 @@ pub fn count_names(ms: &[TokenTree]) -> usize { }) } -pub fn initial_matcher_pos(ms: Vec, sep: Option, lo: BytePos) - -> Box { +fn initial_matcher_pos(ms: Vec, lo: BytePos) -> Box { let match_idx_hi = count_names(&ms[..]); - let matches: Vec<_> = (0..match_idx_hi).map(|_| Vec::new()).collect(); + let matches = create_matches(match_idx_hi); Box::new(MatcherPos { stack: vec![], top_elts: TtSeq(ms), - sep: sep, + sep: None, idx: 0, up: None, matches: matches, @@ -200,27 +201,25 @@ pub enum NamedMatch { MatchedNonterminal(Rc) } -pub fn nameize(p_s: &ParseSess, ms: &[TokenTree], res: &[Rc]) - -> ParseResult>> { - fn n_rec(p_s: &ParseSess, m: &TokenTree, res: &[Rc], - ret_val: &mut HashMap>, idx: &mut usize) +fn nameize>>(ms: &[TokenTree], mut res: I) -> NamedParseResult { + fn n_rec>>(m: &TokenTree, mut res: &mut I, + ret_val: &mut HashMap>) -> Result<(), (syntax_pos::Span, String)> { match *m { TokenTree::Sequence(_, ref seq) => { for next_m in &seq.tts { - n_rec(p_s, next_m, res, ret_val, idx)? + n_rec(next_m, res.by_ref(), ret_val)? 
} } TokenTree::Delimited(_, ref delim) => { for next_m in &delim.tts { - n_rec(p_s, next_m, res, ret_val, idx)?; + n_rec(next_m, res.by_ref(), ret_val)?; } } TokenTree::Token(sp, MatchNt(bind_name, _)) => { match ret_val.entry(bind_name) { Vacant(spot) => { - spot.insert(res[*idx].clone()); - *idx += 1; + spot.insert(res.next().unwrap()); } Occupied(..) => { return Err((sp, format!("duplicated bind name: {}", bind_name))) @@ -237,9 +236,8 @@ pub fn nameize(p_s: &ParseSess, ms: &[TokenTree], res: &[Rc]) } let mut ret_val = HashMap::new(); - let mut idx = 0; for m in ms { - match n_rec(p_s, m, res, &mut ret_val, &mut idx) { + match n_rec(m, res.by_ref(), &mut ret_val) { Ok(_) => {}, Err((sp, msg)) => return Error(sp, msg), } @@ -265,11 +263,8 @@ pub fn parse_failure_msg(tok: Token) -> String { } } -pub type NamedParseResult = ParseResult>>; - -/// Perform a token equality check, ignoring syntax context (that is, an -/// unhygienic comparison) -pub fn token_name_eq(t1 : &Token, t2 : &Token) -> bool { +/// Perform a token equality check, ignoring syntax context (that is, an unhygienic comparison) +fn token_name_eq(t1 : &Token, t2 : &Token) -> bool { match (t1,t2) { (&token::Ident(id1),&token::Ident(id2)) | (&token::Lifetime(id1),&token::Lifetime(id2)) => @@ -278,234 +273,229 @@ pub fn token_name_eq(t1 : &Token, t2 : &Token) -> bool { } } -pub fn parse(sess: &ParseSess, rdr: TtReader, ms: &[TokenTree]) -> NamedParseResult { - let mut parser = Parser::new_with_doc_flag(sess, Box::new(rdr), true); - let mut cur_eis = SmallVector::one(initial_matcher_pos(ms.to_owned(), None, parser.span.lo)); +fn create_matches(len: usize) -> Vec>> { + (0..len).into_iter().map(|_| Vec::new()).collect() +} - loop { - let mut bb_eis = Vec::new(); // black-box parsed by parser.rs - let mut next_eis = Vec::new(); // or proceed normally - let mut eof_eis = Vec::new(); - - let (sp, tok) = (parser.span, parser.token.clone()); - - /* we append new items to this while we go */ - loop { - let mut ei = match cur_eis.pop() { - None => break, /* for each Earley Item */ - Some(ei) => ei, - }; - - // When unzipped trees end, remove them - while ei.idx >= ei.top_elts.len() { - match ei.stack.pop() { - Some(MatcherTtFrame { elts, idx }) => { - ei.top_elts = elts; - ei.idx = idx + 1; - } - None => break +fn inner_parse_loop(cur_eis: &mut SmallVector>, + next_eis: &mut Vec>, + eof_eis: &mut SmallVector>, + bb_eis: &mut SmallVector>, + token: &Token, span: &syntax_pos::Span) -> ParseResult<()> { + while let Some(mut ei) = cur_eis.pop() { + // When unzipped trees end, remove them + while ei.idx >= ei.top_elts.len() { + match ei.stack.pop() { + Some(MatcherTtFrame { elts, idx }) => { + ei.top_elts = elts; + ei.idx = idx + 1; } + None => break } + } - let idx = ei.idx; - let len = ei.top_elts.len(); + let idx = ei.idx; + let len = ei.top_elts.len(); - /* at end of sequence */ - if idx >= len { - // can't move out of `match`es, so: - if ei.up.is_some() { - // hack: a matcher sequence is repeating iff it has a - // parent (the top level is just a container) + // at end of sequence + if idx >= len { + // We are repeating iff there is a parent + if ei.up.is_some() { + // Disregarding the separator, add the "up" case to the tokens that should be + // examined. + // (remove this condition to make trailing seps ok) + if idx == len { + let mut new_pos = ei.up.clone().unwrap(); + // update matches (the MBE "parse tree") by appending + // each tree as a subtree. 
- // disregard separator, try to go up - // (remove this condition to make trailing seps ok) - if idx == len { - // pop from the matcher position + // I bet this is a perf problem: we're preemptively + // doing a lot of array work that will get thrown away + // most of the time. - let mut new_pos = ei.up.clone().unwrap(); - - // update matches (the MBE "parse tree") by appending - // each tree as a subtree. - - // I bet this is a perf problem: we're preemptively - // doing a lot of array work that will get thrown away - // most of the time. - - // Only touch the binders we have actually bound - for idx in ei.match_lo..ei.match_hi { - let sub = (ei.matches[idx]).clone(); - (&mut new_pos.matches[idx]) - .push(Rc::new(MatchedSeq(sub, mk_sp(ei.sp_lo, - sp.hi)))); - } - - new_pos.match_cur = ei.match_hi; - new_pos.idx += 1; - cur_eis.push(new_pos); + // Only touch the binders we have actually bound + for idx in ei.match_lo..ei.match_hi { + let sub = ei.matches[idx].clone(); + new_pos.matches[idx] + .push(Rc::new(MatchedSeq(sub, mk_sp(ei.sp_lo, + span.hi)))); } - // can we go around again? + new_pos.match_cur = ei.match_hi; + new_pos.idx += 1; + cur_eis.push(new_pos); + } - // the *_t vars are workarounds for the lack of unary move - match ei.sep { - Some(ref t) if idx == len => { // we need a separator - // i'm conflicted about whether this should be hygienic.... - // though in this case, if the separators are never legal - // idents, it shouldn't matter. - if token_name_eq(&tok, t) { //pass the separator - let mut ei_t = ei.clone(); - // ei_t.match_cur = ei_t.match_lo; - ei_t.idx += 1; - next_eis.push(ei_t); - } - } - _ => { // we don't need a separator - let mut ei_t = ei; - ei_t.match_cur = ei_t.match_lo; - ei_t.idx = 0; - cur_eis.push(ei_t); - } + // Check if we need a separator + if idx == len && ei.sep.is_some() { + // We have a separator, and it is the current token. + if ei.sep.as_ref().map(|ref sep| token_name_eq(&token, sep)).unwrap_or(false) { + ei.idx += 1; + next_eis.push(ei); } - } else { - eof_eis.push(ei); + } else { // we don't need a separator + ei.match_cur = ei.match_lo; + ei.idx = 0; + cur_eis.push(ei); } } else { - match ei.top_elts.get_tt(idx) { - /* need to descend into sequence */ - TokenTree::Sequence(sp, seq) => { - if seq.op == tokenstream::KleeneOp::ZeroOrMore { - let mut new_ei = ei.clone(); - new_ei.match_cur += seq.num_captures; - new_ei.idx += 1; - //we specifically matched zero repeats. - for idx in ei.match_cur..ei.match_cur + seq.num_captures { - (&mut new_ei.matches[idx]).push(Rc::new(MatchedSeq(vec![], sp))); - } + // We aren't repeating, so we must be potentially at the end of the input. 
+ eof_eis.push(ei); + } + } else { + match ei.top_elts.get_tt(idx) { + /* need to descend into sequence */ + TokenTree::Sequence(sp, seq) => { + if seq.op == tokenstream::KleeneOp::ZeroOrMore { + // Examine the case where there are 0 matches of this sequence + let mut new_ei = ei.clone(); + new_ei.match_cur += seq.num_captures; + new_ei.idx += 1; + for idx in ei.match_cur..ei.match_cur + seq.num_captures { + new_ei.matches[idx].push(Rc::new(MatchedSeq(vec![], sp))); + } + cur_eis.push(new_ei); + } - cur_eis.push(new_ei); - } - - let matches: Vec<_> = (0..ei.matches.len()) - .map(|_| Vec::new()).collect(); - let ei_t = ei; - cur_eis.push(Box::new(MatcherPos { - stack: vec![], - sep: seq.separator.clone(), - idx: 0, - matches: matches, - match_lo: ei_t.match_cur, - match_cur: ei_t.match_cur, - match_hi: ei_t.match_cur + seq.num_captures, - up: Some(ei_t), - sp_lo: sp.lo, - top_elts: Tt(TokenTree::Sequence(sp, seq)), - })); + // Examine the case where there is at least one match of this sequence + let matches = create_matches(ei.matches.len()); + cur_eis.push(Box::new(MatcherPos { + stack: vec![], + sep: seq.separator.clone(), + idx: 0, + matches: matches, + match_lo: ei.match_cur, + match_cur: ei.match_cur, + match_hi: ei.match_cur + seq.num_captures, + up: Some(ei), + sp_lo: sp.lo, + top_elts: Tt(TokenTree::Sequence(sp, seq)), + })); + } + TokenTree::Token(_, MatchNt(..)) => { + // Built-in nonterminals never start with these tokens, + // so we can eliminate them from consideration. + match *token { + token::CloseDelim(_) => {}, + _ => bb_eis.push(ei), } - TokenTree::Token(_, MatchNt(..)) => { - // Built-in nonterminals never start with these tokens, - // so we can eliminate them from consideration. - match tok { - token::CloseDelim(_) => {}, - _ => bb_eis.push(ei), - } - } - TokenTree::Token(sp, SubstNt(..)) => { - return Error(sp, "missing fragment specifier".to_string()) - } - seq @ TokenTree::Delimited(..) | seq @ TokenTree::Token(_, DocComment(..)) => { - let lower_elts = mem::replace(&mut ei.top_elts, Tt(seq)); - let idx = ei.idx; - ei.stack.push(MatcherTtFrame { - elts: lower_elts, - idx: idx, - }); - ei.idx = 0; - cur_eis.push(ei); - } - TokenTree::Token(_, ref t) => { - if token_name_eq(t,&tok) { - let mut ei_t = ei.clone(); - ei_t.idx += 1; - next_eis.push(ei_t); - } + } + TokenTree::Token(sp, SubstNt(..)) => { + return Error(sp, "missing fragment specifier".to_string()) + } + seq @ TokenTree::Delimited(..) 
| seq @ TokenTree::Token(_, DocComment(..)) => { + let lower_elts = mem::replace(&mut ei.top_elts, Tt(seq)); + let idx = ei.idx; + ei.stack.push(MatcherTtFrame { + elts: lower_elts, + idx: idx, + }); + ei.idx = 0; + cur_eis.push(ei); + } + TokenTree::Token(_, ref t) => { + if token_name_eq(t, &token) { + ei.idx += 1; + next_eis.push(ei); } } } } + } + + Success(()) +} + +pub fn parse(sess: &ParseSess, rdr: TtReader, ms: &[TokenTree], directory: Option) + -> NamedParseResult { + let mut parser = Parser::new(sess, Box::new(rdr), directory, true); + let mut cur_eis = SmallVector::one(initial_matcher_pos(ms.to_owned(), parser.span.lo)); + let mut next_eis = Vec::new(); // or proceed normally + + loop { + let mut bb_eis = SmallVector::new(); // black-box parsed by parser.rs + let mut eof_eis = SmallVector::new(); + assert!(next_eis.is_empty()); + + match inner_parse_loop(&mut cur_eis, &mut next_eis, &mut eof_eis, &mut bb_eis, + &parser.token, &parser.span) { + Success(_) => {}, + Failure(sp, tok) => return Failure(sp, tok), + Error(sp, msg) => return Error(sp, msg), + } + + // inner parse loop handled all cur_eis, so it's empty + assert!(cur_eis.is_empty()); /* error messages here could be improved with links to orig. rules */ - if token_name_eq(&tok, &token::Eof) { + if token_name_eq(&parser.token, &token::Eof) { if eof_eis.len() == 1 { - let mut v = Vec::new(); - for dv in &mut (&mut eof_eis[0]).matches { - v.push(dv.pop().unwrap()); - } - return nameize(sess, ms, &v[..]); + return nameize(ms, eof_eis[0].matches.iter_mut().map(|mut dv| dv.pop().unwrap())); } else if eof_eis.len() > 1 { - return Error(sp, "ambiguity: multiple successful parses".to_string()); + return Error(parser.span, "ambiguity: multiple successful parses".to_string()); } else { - return Failure(sp, token::Eof); + return Failure(parser.span, token::Eof); } - } else { - if (!bb_eis.is_empty() && !next_eis.is_empty()) - || bb_eis.len() > 1 { - let nts = bb_eis.iter().map(|ei| match ei.top_elts.get_tt(ei.idx) { - TokenTree::Token(_, MatchNt(bind, name)) => { - format!("{} ('{}')", name, bind) - } - _ => panic!() - }).collect::>().join(" or "); + } else if (!bb_eis.is_empty() && !next_eis.is_empty()) || bb_eis.len() > 1 { + let nts = bb_eis.iter().map(|ei| match ei.top_elts.get_tt(ei.idx) { + TokenTree::Token(_, MatchNt(bind, name)) => { + format!("{} ('{}')", name, bind) + } + _ => panic!() + }).collect::>().join(" or "); - return Error(sp, format!( - "local ambiguity: multiple parsing options: {}", - match next_eis.len() { - 0 => format!("built-in NTs {}.", nts), - 1 => format!("built-in NTs {} or 1 other option.", nts), - n => format!("built-in NTs {} or {} other options.", nts, n), - } - )) - } else if bb_eis.is_empty() && next_eis.is_empty() { - return Failure(sp, tok); - } else if !next_eis.is_empty() { - /* Now process the next token */ - while !next_eis.is_empty() { - cur_eis.push(next_eis.pop().unwrap()); + return Error(parser.span, format!( + "local ambiguity: multiple parsing options: {}", + match next_eis.len() { + 0 => format!("built-in NTs {}.", nts), + 1 => format!("built-in NTs {} or 1 other option.", nts), + n => format!("built-in NTs {} or {} other options.", nts, n), } - parser.bump(); - } else /* bb_eis.len() == 1 */ { - let mut ei = bb_eis.pop().unwrap(); - if let TokenTree::Token(span, MatchNt(_, ident)) = ei.top_elts.get_tt(ei.idx) { - let match_cur = ei.match_cur; - (&mut ei.matches[match_cur]).push(Rc::new(MatchedNonterminal( - Rc::new(parse_nt(&mut parser, span, &ident.name.as_str()))))); - ei.idx += 1; 
- ei.match_cur += 1; - } else { - unreachable!() - } - cur_eis.push(ei); + )); + } else if bb_eis.is_empty() && next_eis.is_empty() { + return Failure(parser.span, parser.token); + } else if !next_eis.is_empty() { + /* Now process the next token */ + cur_eis.extend(next_eis.drain(..)); + parser.bump(); + } else /* bb_eis.len() == 1 */ { + let mut ei = bb_eis.pop().unwrap(); + if let TokenTree::Token(span, MatchNt(_, ident)) = ei.top_elts.get_tt(ei.idx) { + let match_cur = ei.match_cur; + ei.matches[match_cur].push(Rc::new(MatchedNonterminal( + Rc::new(parse_nt(&mut parser, span, &ident.name.as_str()))))); + ei.idx += 1; + ei.match_cur += 1; + } else { + unreachable!() } + cur_eis.push(ei); } assert!(!cur_eis.is_empty()); } } -pub fn parse_nt<'a>(p: &mut Parser<'a>, sp: Span, name: &str) -> Nonterminal { +fn parse_nt<'a>(p: &mut Parser<'a>, sp: Span, name: &str) -> Nonterminal { match name { "tt" => { p.quote_depth += 1; //but in theory, non-quoted tts might be useful let mut tt = panictry!(p.parse_token_tree()); p.quote_depth -= 1; - loop { - let nt = match tt { - TokenTree::Token(_, token::Interpolated(ref nt)) => nt.clone(), - _ => break, - }; - match *nt { - token::NtTT(ref sub_tt) => tt = sub_tt.clone(), - _ => break, + while let TokenTree::Token(sp, token::Interpolated(nt)) = tt { + if let token::NtTT(..) = *nt { + match Rc::try_unwrap(nt) { + Ok(token::NtTT(sub_tt)) => tt = sub_tt, + Ok(_) => unreachable!(), + Err(nt_rc) => match *nt_rc { + token::NtTT(ref sub_tt) => tt = sub_tt.clone(), + _ => unreachable!(), + }, + } + } else { + tt = TokenTree::Token(sp, token::Interpolated(nt.clone())); + break } } return token::NtTT(tt); diff --git a/src/libsyntax/ext/tt/macro_rules.rs b/src/libsyntax/ext/tt/macro_rules.rs index 552d4de961..ca18e580ec 100644 --- a/src/libsyntax/ext/tt/macro_rules.rs +++ b/src/libsyntax/ext/tt/macro_rules.rs @@ -17,12 +17,13 @@ use ext::placeholders; use ext::tt::macro_parser::{Success, Error, Failure}; use ext::tt::macro_parser::{MatchedSeq, MatchedNonterminal}; use ext::tt::macro_parser::{parse, parse_failure_msg}; -use parse::ParseSess; +use parse::{Directory, ParseSess}; use parse::lexer::new_tt_reader; -use parse::parser::{Parser, Restrictions}; -use parse::token::{self, gensym_ident, NtTT, Token}; +use parse::parser::Parser; +use parse::token::{self, NtTT, Token}; use parse::token::Token::*; use print; +use symbol::Symbol; use tokenstream::{self, TokenTree}; use std::collections::{HashMap}; @@ -115,12 +116,14 @@ fn generic_extension<'cx>(cx: &'cx ExtCtxt, // rhs has holes ( `$id` and `$(...)` that need filled) let trncbr = new_tt_reader(&cx.parse_sess.span_diagnostic, Some(named_matches), rhs); - let mut p = Parser::new(cx.parse_sess(), Box::new(trncbr)); - p.directory = cx.current_expansion.module.directory.clone(); - p.restrictions = match cx.current_expansion.no_noninline_mod { - true => Restrictions::NO_NONINLINE_MOD, - false => Restrictions::empty(), + let directory = Directory { + path: cx.current_expansion.module.directory.clone(), + ownership: cx.current_expansion.directory_ownership, }; + let mut p = Parser::new(cx.parse_sess(), Box::new(trncbr), Some(directory), false); + p.root_module_name = cx.current_expansion.module.mod_path.last() + .map(|id| (*id.name.as_str()).to_owned()); + p.check_unknown_macro_variable(); // Let the context choose how to interpret the result. // Weird, but useful for X-macros. @@ -187,16 +190,16 @@ impl IdentMacroExpander for MacroRulesExpander { /// Converts a `macro_rules!` invocation into a syntax extension. 
pub fn compile(sess: &ParseSess, def: &ast::MacroDef) -> SyntaxExtension { - let lhs_nm = gensym_ident("lhs"); - let rhs_nm = gensym_ident("rhs"); + let lhs_nm = ast::Ident::with_empty_ctxt(Symbol::gensym("lhs")); + let rhs_nm = ast::Ident::with_empty_ctxt(Symbol::gensym("rhs")); // The pattern that macro_rules matches. // The grammar for macro_rules! is: // $( $lhs:tt => $rhs:tt );+ // ...quasiquoting this would be nice. // These spans won't matter, anyways - let match_lhs_tok = MatchNt(lhs_nm, token::str_to_ident("tt")); - let match_rhs_tok = MatchNt(rhs_nm, token::str_to_ident("tt")); + let match_lhs_tok = MatchNt(lhs_nm, ast::Ident::from_str("tt")); + let match_rhs_tok = MatchNt(rhs_nm, ast::Ident::from_str("tt")); let argument_gram = vec![ TokenTree::Sequence(DUMMY_SP, Rc::new(tokenstream::SequenceRepetition { tts: vec![ @@ -220,7 +223,7 @@ pub fn compile(sess: &ParseSess, def: &ast::MacroDef) -> SyntaxExtension { // Parse the macro_rules! invocation (`none` is for no interpolations): let arg_reader = new_tt_reader(&sess.span_diagnostic, None, def.body.clone()); - let argument_map = match parse(sess, arg_reader, &argument_gram) { + let argument_map = match parse(sess, arg_reader, &argument_gram, None) { Success(m) => m, Failure(sp, tok) => { let s = parse_failure_msg(tok); @@ -790,8 +793,7 @@ fn is_in_follow(tok: &Token, frag: &str) -> Result "pat" => { match *tok { FatArrow | Comma | Eq | BinOp(token::Or) => Ok(true), - Ident(i) if (i.name.as_str() == "if" || - i.name.as_str() == "in") => Ok(true), + Ident(i) if i.name == "if" || i.name == "in" => Ok(true), _ => Ok(false) } }, @@ -799,8 +801,8 @@ fn is_in_follow(tok: &Token, frag: &str) -> Result match *tok { OpenDelim(token::DelimToken::Brace) | OpenDelim(token::DelimToken::Bracket) | Comma | FatArrow | Colon | Eq | Gt | Semi | BinOp(token::Or) => Ok(true), - MatchNt(_, ref frag) if frag.name.as_str() == "block" => Ok(true), - Ident(i) if i.name.as_str() == "as" || i.name.as_str() == "where" => Ok(true), + MatchNt(_, ref frag) if frag.name == "block" => Ok(true), + Ident(i) if i.name == "as" || i.name == "where" => Ok(true), _ => Ok(false) } }, diff --git a/src/libsyntax/feature_gate.rs b/src/libsyntax/feature_gate.rs index c5fae9f323..10d76692b1 100644 --- a/src/libsyntax/feature_gate.rs +++ b/src/libsyntax/feature_gate.rs @@ -33,7 +33,7 @@ use syntax_pos::Span; use errors::{DiagnosticBuilder, Handler}; use visit::{self, FnKind, Visitor}; use parse::ParseSess; -use parse::token::InternedString; +use symbol::Symbol; use std::ascii::AsciiExt; use std::env; @@ -59,9 +59,9 @@ macro_rules! declare_features { /// A set of features to be used by later passes. pub struct Features { /// #![feature] attrs for stable language features, for error reporting - pub declared_stable_lang_features: Vec<(InternedString, Span)>, + pub declared_stable_lang_features: Vec<(Symbol, Span)>, /// #![feature] attrs for non-language (library) features - pub declared_lib_features: Vec<(InternedString, Span)>, + pub declared_lib_features: Vec<(Symbol, Span)>, $(pub $feature: bool),+ } @@ -132,7 +132,6 @@ declare_features! ( (active, allocator, "1.0.0", Some(27389)), (active, fundamental, "1.0.0", Some(29635)), - (active, linked_from, "1.3.0", Some(29629)), (active, main, "1.0.0", Some(29634)), (active, needs_allocator, "1.4.0", Some(27389)), (active, on_unimplemented, "1.0.0", Some(29628)), @@ -153,10 +152,6 @@ declare_features! 
( // rustc internal (active, staged_api, "1.0.0", None), - // Allows using items which are missing stability attributes - // rustc internal - (active, unmarked_api, "1.0.0", None), - // Allows using #![no_core] (active, no_core, "1.3.0", Some(29639)), @@ -271,7 +266,6 @@ declare_features! ( // Allows `impl Trait` in function return types. (active, conservative_impl_trait, "1.12.0", Some(34511)), - // Allows tuple structs and variants in more contexts, // Permits numeric fields in struct expressions and patterns. (active, relaxed_adts, "1.12.0", Some(35626)), @@ -285,12 +279,6 @@ declare_features! ( // instead of just the platforms on which it is the C ABI (active, abi_sysv64, "1.13.0", Some(36167)), - // Use the import semantics from RFC 1560. - (active, item_like_imports, "1.13.0", Some(35120)), - - // Macros 1.1 - (active, proc_macro, "1.13.0", Some(35900)), - // Allows untagged unions `union U { ... }` (active, untagged_unions, "1.13.0", Some(32836)), @@ -312,6 +300,17 @@ declare_features! ( // Allows using `Self` and associated types in struct expressions and patterns. (active, more_struct_aliases, "1.14.0", Some(37544)), + + // Allows #[link(..., cfg(..))] + (active, link_cfg, "1.14.0", Some(37406)), + + (active, use_extern_macros, "1.15.0", Some(35896)), + + // Allows `break {expr}` with a value inside `loop`s. + (active, loop_break_value, "1.14.0", Some(37339)), + + // Allows #[target_feature(...)] + (active, target_feature, "1.15.0", None), ); declare_features! ( @@ -326,6 +325,9 @@ declare_features! ( (removed, test_removed_feature, "1.0.0", None), (removed, visible_private_types, "1.0.0", None), (removed, unsafe_no_drop_flag, "1.0.0", None), + // Allows using items which are missing stability attributes + // rustc internal + (removed, unmarked_api, "1.0.0", None), ); declare_features! ( @@ -355,9 +357,12 @@ declare_features! ( // Allows `#[deprecated]` attribute (accepted, deprecated, "1.9.0", Some(29935)), // `expr?` - (accepted, question_mark, "1.14.0", Some(31436)), + (accepted, question_mark, "1.13.0", Some(31436)), // Allows `..` in tuple (struct) patterns (accepted, dotdot_in_tuple_patterns, "1.14.0", Some(33627)), + (accepted, item_like_imports, "1.14.0", Some(35120)), + // Macros 1.1 + (accepted, proc_macro, "1.15.0", Some(35900)), ); // (changing above list without updating src/doc/reference.md makes @cmr sad) @@ -422,11 +427,11 @@ macro_rules! 
cfg_fn { } pub fn deprecated_attributes() -> Vec<&'static (&'static str, AttributeType, AttributeGate)> { - KNOWN_ATTRIBUTES.iter().filter(|a| a.2.is_deprecated()).collect() + BUILTIN_ATTRIBUTES.iter().filter(|a| a.2.is_deprecated()).collect() } // Attributes that have a special meaning to rustc or rustdoc -pub const KNOWN_ATTRIBUTES: &'static [(&'static str, AttributeType, AttributeGate)] = &[ +pub const BUILTIN_ATTRIBUTES: &'static [(&'static str, AttributeType, AttributeGate)] = &[ // Normal attributes ("warn", Normal, Ungated), @@ -631,17 +636,7 @@ pub const KNOWN_ATTRIBUTES: &'static [(&'static str, AttributeType, AttributeGat is an experimental feature", cfg_fn!(fundamental))), - ("linked_from", Normal, Gated(Stability::Unstable, - "linked_from", - "the `#[linked_from]` attribute \ - is an experimental feature", - cfg_fn!(linked_from))), - - ("proc_macro_derive", Normal, Gated(Stability::Unstable, - "proc_macro", - "the `#[proc_macro_derive]` attribute \ - is an experimental feature", - cfg_fn!(proc_macro))), + ("proc_macro_derive", Normal, Ungated), ("rustc_copy_clone_marker", Whitelisted, Gated(Stability::Unstable, "rustc_attrs", @@ -659,6 +654,10 @@ pub const KNOWN_ATTRIBUTES: &'static [(&'static str, AttributeType, AttributeGat "the `#[naked]` attribute \ is an experimental feature", cfg_fn!(naked_functions))), + ("target_feature", Whitelisted, Gated( + Stability::Unstable, "target_feature", + "the `#[target_feature]` attribute is an experimental feature", + cfg_fn!(target_feature))), ("export_name", Whitelisted, Ungated), ("inline", Whitelisted, Ungated), ("link", Whitelisted, Ungated), @@ -733,6 +732,7 @@ pub const KNOWN_ATTRIBUTES: &'static [(&'static str, AttributeType, AttributeGat ("no_main", CrateLevel, Ungated), ("no_builtins", CrateLevel, Ungated), ("recursion_limit", CrateLevel, Ungated), + ("type_length_limit", CrateLevel, Ungated), ]; // cfg(...)'s that are feature gated @@ -742,7 +742,6 @@ const GATED_CFGS: &'static [(&'static str, &'static str, fn(&Features) -> bool)] ("target_vendor", "cfg_target_vendor", cfg_fn!(cfg_target_vendor)), ("target_thread_local", "cfg_target_thread_local", cfg_fn!(cfg_target_thread_local)), ("target_has_atomic", "cfg_target_has_atomic", cfg_fn!(cfg_target_has_atomic)), - ("proc_macro", "proc_macro", cfg_fn!(proc_macro)), ]; #[derive(Debug, Eq, PartialEq)] @@ -753,7 +752,7 @@ pub struct GatedCfg { impl GatedCfg { pub fn gate(cfg: &ast::MetaItem) -> Option { - let name = cfg.name(); + let name = &*cfg.name().as_str(); GATED_CFGS.iter() .position(|info| info.0 == name) .map(|idx| { @@ -800,13 +799,13 @@ macro_rules! 
gate_feature { impl<'a> Context<'a> { fn check_attribute(&self, attr: &ast::Attribute, is_macro: bool) { debug!("check_attribute(attr = {:?})", attr); - let name = &*attr.name(); - for &(n, ty, ref gateage) in KNOWN_ATTRIBUTES { + let name = &*attr.name().as_str(); + for &(n, ty, ref gateage) in BUILTIN_ATTRIBUTES { if n == name { if let &Gated(_, ref name, ref desc, ref has_feature) = gateage { gate_feature_fn!(self, has_feature, attr.span, name, desc); } - debug!("check_attribute: {:?} is known, {:?}, {:?}", name, ty, gateage); + debug!("check_attribute: {:?} is builtin, {:?}, {:?}", name, ty, gateage); return; } } @@ -826,6 +825,8 @@ impl<'a> Context<'a> { are reserved for internal compiler diagnostics"); } else if name.starts_with("derive_") { gate_feature!(self, custom_derive, attr.span, EXPLAIN_DERIVE_UNDERSCORE); + } else if attr::is_known(attr) { + debug!("check_attribute: {:?} is known", name); } else { // Only run the custom attribute lint during regular // feature gate checking. Macro gating runs @@ -985,25 +986,29 @@ fn contains_novel_literal(item: &ast::MetaItem) -> bool { use ast::NestedMetaItemKind::*; match item.node { - Word(..) => false, - NameValue(_, ref lit) => !lit.node.is_str(), - List(_, ref list) => list.iter().any(|li| { + Word => false, + NameValue(ref lit) => !lit.node.is_str(), + List(ref list) => list.iter().any(|li| { match li.node { - MetaItem(ref mi) => contains_novel_literal(&**mi), + MetaItem(ref mi) => contains_novel_literal(&mi), Literal(_) => true, } }), } } -impl<'a> Visitor for PostExpansionVisitor<'a> { +fn starts_with_digit(s: &str) -> bool { + s.as_bytes().first().cloned().map_or(false, |b| b >= b'0' && b <= b'9') +} + +impl<'a> Visitor<'a> for PostExpansionVisitor<'a> { fn visit_attribute(&mut self, attr: &ast::Attribute) { if !self.context.cm.span_allows_unstable(attr.span) { // check for gated attributes self.context.check_attribute(attr, false); } - if contains_novel_literal(&*(attr.node.value)) { + if contains_novel_literal(&attr.value) { gate_feature_post!(&self, attr_literals, attr.span, "non-string literals in attributes, or string \ literals in top-level positions, are experimental"); @@ -1017,7 +1022,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { } } - fn visit_item(&mut self, i: &ast::Item) { + fn visit_item(&mut self, i: &'a ast::Item) { match i.node { ast::ItemKind::ExternCrate(_) => { if attr::contains_name(&i.attrs[..], "macro_reexport") { @@ -1110,10 +1115,9 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_item(self, i); } - fn visit_foreign_item(&mut self, i: &ast::ForeignItem) { - let links_to_llvm = match attr::first_attr_value_str_by_name(&i.attrs, - "link_name") { - Some(val) => val.starts_with("llvm."), + fn visit_foreign_item(&mut self, i: &'a ast::ForeignItem) { + let links_to_llvm = match attr::first_attr_value_str_by_name(&i.attrs, "link_name") { + Some(val) => val.as_str().starts_with("llvm."), _ => false }; if links_to_llvm { @@ -1124,7 +1128,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_foreign_item(self, i) } - fn visit_ty(&mut self, ty: &ast::Ty) { + fn visit_ty(&mut self, ty: &'a ast::Ty) { match ty.node { ast::TyKind::BareFn(ref bare_fn_ty) => { self.check_abi(bare_fn_ty.abi, ty.span); @@ -1142,7 +1146,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_ty(self, ty) } - fn visit_fn_ret_ty(&mut self, ret_ty: &ast::FunctionRetTy) { + fn visit_fn_ret_ty(&mut self, ret_ty: &'a ast::FunctionRetTy) { if let ast::FunctionRetTy::Ty(ref output_ty) = *ret_ty { match 
output_ty.node { ast::TyKind::Never => return, @@ -1152,7 +1156,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { } } - fn visit_expr(&mut self, e: &ast::Expr) { + fn visit_expr(&mut self, e: &'a ast::Expr) { match e.node { ast::ExprKind::Box(_) => { gate_feature_post!(&self, box_syntax, e.span, EXPLAIN_BOX_SYNTAX); @@ -1175,14 +1179,23 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { gate_feature_post!(&self, field_init_shorthand, field.span, "struct field shorthands are unstable"); } + if starts_with_digit(&field.ident.node.name.as_str()) { + gate_feature_post!(&self, relaxed_adts, + field.span, + "numeric fields in struct expressions are unstable"); + } } } + ast::ExprKind::Break(_, Some(_)) => { + gate_feature_post!(&self, loop_break_value, e.span, + "`break` with a value is experimental"); + } _ => {} } visit::walk_expr(self, e); } - fn visit_pat(&mut self, pattern: &ast::Pat) { + fn visit_pat(&mut self, pattern: &'a ast::Pat) { match pattern.node { PatKind::Slice(_, Some(_), ref last) if !last.is_empty() => { gate_feature_post!(&self, advanced_slice_patterns, @@ -1201,10 +1214,14 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { pattern.span, "box pattern syntax is experimental"); } - PatKind::TupleStruct(_, ref fields, ddpos) - if ddpos.is_none() && fields.is_empty() => { - gate_feature_post!(&self, relaxed_adts, pattern.span, - "empty tuple structs patterns are unstable"); + PatKind::Struct(_, ref fields, _) => { + for field in fields { + if starts_with_digit(&field.node.ident.name.as_str()) { + gate_feature_post!(&self, relaxed_adts, + field.span, + "numeric fields in struct patterns are unstable"); + } + } } _ => {} } @@ -1212,14 +1229,13 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { } fn visit_fn(&mut self, - fn_kind: FnKind, - fn_decl: &ast::FnDecl, - block: &ast::Block, + fn_kind: FnKind<'a>, + fn_decl: &'a ast::FnDecl, span: Span, _node_id: NodeId) { // check for const fn declarations match fn_kind { - FnKind::ItemFn(_, _, _, Spanned { node: ast::Constness::Const, .. }, _, _) => { + FnKind::ItemFn(_, _, _, Spanned { node: ast::Constness::Const, .. }, _, _, _) => { gate_feature_post!(&self, const_fn, span, "const fn is unstable"); } _ => { @@ -1231,16 +1247,16 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { } match fn_kind { - FnKind::ItemFn(_, _, _, _, abi, _) | - FnKind::Method(_, &ast::MethodSig { abi, .. }, _) => { + FnKind::ItemFn(_, _, _, _, abi, _, _) | + FnKind::Method(_, &ast::MethodSig { abi, .. }, _, _) => { self.check_abi(abi, span); } _ => {} } - visit::walk_fn(self, fn_kind, fn_decl, block, span); + visit::walk_fn(self, fn_kind, fn_decl, span); } - fn visit_trait_item(&mut self, ti: &ast::TraitItem) { + fn visit_trait_item(&mut self, ti: &'a ast::TraitItem) { match ti.node { ast::TraitItemKind::Const(..) 
=> { gate_feature_post!(&self, associated_consts, @@ -1264,7 +1280,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_trait_item(self, ti); } - fn visit_impl_item(&mut self, ii: &ast::ImplItem) { + fn visit_impl_item(&mut self, ii: &'a ast::ImplItem) { if ii.defaultness == ast::Defaultness::Default { gate_feature_post!(&self, specialization, ii.span, @@ -1287,20 +1303,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_impl_item(self, ii); } - fn visit_variant_data(&mut self, vdata: &ast::VariantData, _: ast::Ident, - _: &ast::Generics, _: NodeId, span: Span) { - if vdata.fields().is_empty() { - if vdata.is_tuple() { - gate_feature_post!(&self, relaxed_adts, span, - "empty tuple structs and enum variants are unstable, \ - use unit structs and enum variants instead"); - } - } - - visit::walk_struct_def(self, vdata) - } - - fn visit_vis(&mut self, vis: &ast::Visibility) { + fn visit_vis(&mut self, vis: &'a ast::Visibility) { let span = match *vis { ast::Visibility::Crate(span) => span, ast::Visibility::Restricted { ref path, .. } => path.span, @@ -1311,7 +1314,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_vis(self, vis) } - fn visit_generics(&mut self, g: &ast::Generics) { + fn visit_generics(&mut self, g: &'a ast::Generics) { for t in &g.ty_params { if !t.attrs.is_empty() { gate_feature_post!(&self, generic_param_attrs, t.attrs[0].span, @@ -1321,7 +1324,7 @@ impl<'a> Visitor for PostExpansionVisitor<'a> { visit::walk_generics(self, g) } - fn visit_lifetime_def(&mut self, lifetime_def: &ast::LifetimeDef) { + fn visit_lifetime_def(&mut self, lifetime_def: &'a ast::LifetimeDef) { if !lifetime_def.attrs.is_empty() { gate_feature_post!(&self, generic_param_attrs, lifetime_def.attrs[0].span, "attributes on lifetime bindings are experimental"); diff --git a/src/libsyntax/fold.rs b/src/libsyntax/fold.rs index 1deeaf4223..b7276d82b5 100644 --- a/src/libsyntax/fold.rs +++ b/src/libsyntax/fold.rs @@ -22,8 +22,9 @@ use ast::*; use ast; use syntax_pos::Span; use codemap::{Spanned, respan}; -use parse::token::{self, keywords}; +use parse::token; use ptr::P; +use symbol::keywords; use tokenstream::*; use util::small_vector::SmallVector; use util::move_map::MoveMap; @@ -43,7 +44,7 @@ pub trait Folder : Sized { noop_fold_crate(c, self) } - fn fold_meta_items(&mut self, meta_items: Vec>) -> Vec> { + fn fold_meta_items(&mut self, meta_items: Vec) -> Vec { noop_fold_meta_items(meta_items, self) } @@ -51,7 +52,7 @@ pub trait Folder : Sized { noop_fold_meta_list_item(list_item, self) } - fn fold_meta_item(&mut self, meta_item: P) -> P { + fn fold_meta_item(&mut self, meta_item: MetaItem) -> MetaItem { noop_fold_meta_item(meta_item, self) } @@ -293,8 +294,7 @@ pub trait Folder : Sized { } } -pub fn noop_fold_meta_items(meta_items: Vec>, fld: &mut T) - -> Vec> { +pub fn noop_fold_meta_items(meta_items: Vec, fld: &mut T) -> Vec { meta_items.move_map(|x| fld.fold_meta_item(x)) } @@ -486,16 +486,13 @@ pub fn noop_fold_local(l: P, fld: &mut T) -> P { }) } -pub fn noop_fold_attribute(at: Attribute, fld: &mut T) -> Option { - let Spanned {node: Attribute_ {id, style, value, is_sugared_doc}, span} = at; - Some(Spanned { - node: Attribute_ { - id: id, - style: style, - value: fld.fold_meta_item(value), - is_sugared_doc: is_sugared_doc - }, - span: fld.new_span(span) +pub fn noop_fold_attribute(attr: Attribute, fld: &mut T) -> Option { + Some(Attribute { + id: attr.id, + style: attr.style, + value: fld.fold_meta_item(attr.value), + is_sugared_doc: attr.is_sugared_doc, + span: 
fld.new_span(attr.span), }) } @@ -522,17 +519,18 @@ pub fn noop_fold_meta_list_item(li: NestedMetaItem, fld: &mut T) } } -pub fn noop_fold_meta_item(mi: P, fld: &mut T) -> P { - mi.map(|Spanned {node, span}| Spanned { - node: match node { - MetaItemKind::Word(id) => MetaItemKind::Word(id), - MetaItemKind::List(id, mis) => { - MetaItemKind::List(id, mis.move_map(|e| fld.fold_meta_list_item(e))) - } - MetaItemKind::NameValue(id, s) => MetaItemKind::NameValue(id, s) +pub fn noop_fold_meta_item(mi: MetaItem, fld: &mut T) -> MetaItem { + MetaItem { + name: mi.name, + node: match mi.node { + MetaItemKind::Word => MetaItemKind::Word, + MetaItemKind::List(mis) => { + MetaItemKind::List(mis.move_map(|e| fld.fold_meta_list_item(e))) + }, + MetaItemKind::NameValue(s) => MetaItemKind::NameValue(s), }, - span: fld.new_span(span) - }) + span: fld.new_span(mi.span) + } } pub fn noop_fold_arg(Arg {id, pat, ty}: Arg, fld: &mut T) -> Arg { @@ -546,19 +544,19 @@ pub fn noop_fold_arg(Arg {id, pat, ty}: Arg, fld: &mut T) -> Arg { pub fn noop_fold_tt(tt: &TokenTree, fld: &mut T) -> TokenTree { match *tt { TokenTree::Token(span, ref tok) => - TokenTree::Token(span, fld.fold_token(tok.clone())), + TokenTree::Token(fld.new_span(span), fld.fold_token(tok.clone())), TokenTree::Delimited(span, ref delimed) => { - TokenTree::Delimited(span, Rc::new( + TokenTree::Delimited(fld.new_span(span), Rc::new( Delimited { delim: delimed.delim, - open_span: delimed.open_span, + open_span: fld.new_span(delimed.open_span), tts: fld.fold_tts(&delimed.tts), - close_span: delimed.close_span, + close_span: fld.new_span(delimed.close_span), } )) }, TokenTree::Sequence(span, ref seq) => - TokenTree::Sequence(span, + TokenTree::Sequence(fld.new_span(span), Rc::new(SequenceRepetition { tts: fld.fold_tts(&seq.tts), separator: seq.separator.clone().map(|tok| fld.fold_token(tok)), @@ -651,7 +649,7 @@ pub fn noop_fold_fn_decl(decl: P, fld: &mut T) -> P { inputs: inputs.move_map(|x| fld.fold_arg(x)), output: match output { FunctionRetTy::Ty(ty) => FunctionRetTy::Ty(fld.fold_ty(ty)), - FunctionRetTy::Default(span) => FunctionRetTy::Default(span), + FunctionRetTy::Default(span) => FunctionRetTy::Default(fld.new_span(span)), }, variadic: variadic }) @@ -678,7 +676,7 @@ pub fn noop_fold_ty_param(tp: TyParam, fld: &mut T) -> TyParam { ident: ident, bounds: fld.fold_bounds(bounds), default: default.map(|x| fld.fold_ty(x)), - span: span + span: fld.new_span(span), } } @@ -1201,7 +1199,7 @@ pub fn noop_fold_expr(Expr {id, node, span, attrs}: Expr, folder: &mu ExprKind::Closure(capture_clause, decl, body, span) => { ExprKind::Closure(capture_clause, folder.fold_fn_decl(decl), - folder.fold_block(body), + folder.fold_expr(body), folder.new_span(span)) } ExprKind::Block(blk) => ExprKind::Block(folder.fold_block(blk)), @@ -1240,10 +1238,11 @@ pub fn noop_fold_expr(Expr {id, node, span, attrs}: Expr, folder: &mu }); ExprKind::Path(qself, folder.fold_path(path)) } - ExprKind::Break(opt_ident) => ExprKind::Break(opt_ident.map(|label| - respan(folder.new_span(label.span), - folder.fold_ident(label.node))) - ), + ExprKind::Break(opt_ident, opt_expr) => { + ExprKind::Break(opt_ident.map(|label| respan(folder.new_span(label.span), + folder.fold_ident(label.node))), + opt_expr.map(|e| folder.fold_expr(e))) + } ExprKind::Continue(opt_ident) => ExprKind::Continue(opt_ident.map(|label| respan(folder.new_span(label.span), folder.fold_ident(label.node))) @@ -1334,9 +1333,8 @@ pub fn noop_fold_vis(vis: Visibility, folder: &mut T) -> Visibility { #[cfg(test)] mod tests 
{ use std::io; - use ast; + use ast::{self, Ident}; use util::parser_testing::{string_to_crate, matches_codepattern}; - use parse::token; use print::pprust; use fold; use super::*; @@ -1352,7 +1350,7 @@ mod tests { impl Folder for ToZzIdentFolder { fn fold_ident(&mut self, _: ast::Ident) -> ast::Ident { - token::str_to_ident("zz") + Ident::from_str("zz") } fn fold_mac(&mut self, mac: ast::Mac) -> ast::Mac { fold::noop_fold_mac(mac, self) diff --git a/src/libsyntax/json.rs b/src/libsyntax/json.rs index a1c273baee..adab76309f 100644 --- a/src/libsyntax/json.rs +++ b/src/libsyntax/json.rs @@ -296,7 +296,7 @@ impl DiagnosticSpanLine { h_end: usize) -> DiagnosticSpanLine { DiagnosticSpanLine { - text: fm.get_line(index).unwrap().to_owned(), + text: fm.get_line(index).unwrap_or("").to_owned(), highlight_start: h_start, highlight_end: h_end, } diff --git a/src/libsyntax/lib.rs b/src/libsyntax/lib.rs index feed400897..b3b0ee6093 100644 --- a/src/libsyntax/lib.rs +++ b/src/libsyntax/lib.rs @@ -27,14 +27,13 @@ #![feature(associated_consts)] #![feature(const_fn)] #![feature(libc)] +#![feature(optin_builtin_traits)] #![feature(rustc_private)] #![feature(staged_api)] #![feature(str_escape)] #![feature(unicode)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(rustc_diagnostic_macros)] #![feature(specialization)] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] extern crate core; extern crate serialize; @@ -42,9 +41,10 @@ extern crate term; extern crate libc; #[macro_use] extern crate log; #[macro_use] #[no_link] extern crate rustc_bitflags; -extern crate rustc_unicode; +extern crate std_unicode; pub extern crate rustc_errors as errors; extern crate syntax_pos; +extern crate rustc_data_structures; extern crate serialize as rustc_serialize; // used by deriving @@ -82,7 +82,6 @@ pub mod diagnostics { pub mod diagnostic_list; pub mod util { - pub mod interner; pub mod lev_distance; pub mod node_count; pub mod parser; @@ -117,6 +116,7 @@ pub mod ptr; pub mod show_span; pub mod std_inject; pub mod str; +pub mod symbol; pub mod test; pub mod tokenstream; pub mod visit; @@ -143,4 +143,7 @@ pub mod ext { } } +#[cfg(test)] +mod test_snippet; + // __build_diagnostic_array! { libsyntax, DIAGNOSTICS } diff --git a/src/libsyntax/parse/attr.rs b/src/libsyntax/parse/attr.rs index 983c882eaf..ded676da3c 100644 --- a/src/libsyntax/parse/attr.rs +++ b/src/libsyntax/parse/attr.rs @@ -11,12 +11,11 @@ use attr; use ast; use syntax_pos::{mk_sp, Span}; -use codemap::{spanned, Spanned}; +use codemap::spanned; use parse::common::SeqSep; use parse::PResult; use parse::token; use parse::parser::{Parser, TokenType}; -use ptr::P; #[derive(PartialEq, Eq, Debug)] enum InnerAttributeParsePolicy<'a> { @@ -49,13 +48,9 @@ impl<'a> Parser<'a> { just_parsed_doc_comment = false; } token::DocComment(s) => { - let attr = ::attr::mk_sugared_doc_attr( - attr::mk_attr_id(), - self.id_to_interned_str(ast::Ident::with_empty_ctxt(s)), - self.span.lo, - self.span.hi - ); - if attr.node.style != ast::AttrStyle::Outer { + let Span { lo, hi, .. 
} = self.span; + let attr = attr::mk_sugared_doc_attr(attr::mk_attr_id(), s, lo, hi); + if attr.style != ast::AttrStyle::Outer { let mut err = self.fatal("expected outer doc comment"); err.note("inner doc comments like this (starting with \ `//!` or `/*!`) can only appear before items"); @@ -145,14 +140,12 @@ impl<'a> Parser<'a> { style = ast::AttrStyle::Inner; } - Ok(Spanned { + Ok(ast::Attribute { + id: attr::mk_attr_id(), + style: style, + value: value, + is_sugared_doc: false, span: span, - node: ast::Attribute_ { - id: attr::mk_attr_id(), - style: style, - value: value, - is_sugared_doc: false, - }, }) } @@ -172,15 +165,14 @@ impl<'a> Parser<'a> { } let attr = self.parse_attribute(true)?; - assert!(attr.node.style == ast::AttrStyle::Inner); + assert!(attr.style == ast::AttrStyle::Inner); attrs.push(attr); } token::DocComment(s) => { // we need to get the position of this token before we bump. let Span { lo, hi, .. } = self.span; - let str = self.id_to_interned_str(ast::Ident::with_empty_ctxt(s)); - let attr = attr::mk_sugared_doc_attr(attr::mk_attr_id(), str, lo, hi); - if attr.node.style == ast::AttrStyle::Inner { + let attr = attr::mk_sugared_doc_attr(attr::mk_attr_id(), s, lo, hi); + if attr.style == ast::AttrStyle::Inner { attrs.push(attr); self.bump(); } else { @@ -213,7 +205,7 @@ impl<'a> Parser<'a> { /// /// meta_item : IDENT ( '=' UNSUFFIXED_LIT | '(' meta_item_inner? ')' )? ; /// meta_item_inner : (meta_item | UNSUFFIXED_LIT) (',' meta_item_inner)? ; - pub fn parse_meta_item(&mut self) -> PResult<'a, P> { + pub fn parse_meta_item(&mut self) -> PResult<'a, ast::MetaItem> { let nt_meta = match self.token { token::Interpolated(ref nt) => match **nt { token::NtMeta(ref e) => Some(e.clone()), @@ -229,24 +221,15 @@ impl<'a> Parser<'a> { let lo = self.span.lo; let ident = self.parse_ident()?; - let name = self.id_to_interned_str(ident); - match self.token { - token::Eq => { - self.bump(); - let lit = self.parse_unsuffixed_lit()?; - let hi = self.prev_span.hi; - Ok(P(spanned(lo, hi, ast::MetaItemKind::NameValue(name, lit)))) - } - token::OpenDelim(token::Paren) => { - let inner_items = self.parse_meta_seq()?; - let hi = self.prev_span.hi; - Ok(P(spanned(lo, hi, ast::MetaItemKind::List(name, inner_items)))) - } - _ => { - let hi = self.prev_span.hi; - Ok(P(spanned(lo, hi, ast::MetaItemKind::Word(name)))) - } - } + let node = if self.eat(&token::Eq) { + ast::MetaItemKind::NameValue(self.parse_unsuffixed_lit()?) + } else if self.token == token::OpenDelim(token::Paren) { + ast::MetaItemKind::List(self.parse_meta_seq()?) + } else { + ast::MetaItemKind::Word + }; + let hi = self.prev_span.hi; + Ok(ast::MetaItem { name: ident.name, node: node, span: mk_sp(lo, hi) }) } /// matches meta_item_inner : (meta_item | UNSUFFIXED_LIT) ; diff --git a/src/libsyntax/parse/lexer/mod.rs b/src/libsyntax/parse/lexer/mod.rs index cf48c445c8..818742e449 100644 --- a/src/libsyntax/parse/lexer/mod.rs +++ b/src/libsyntax/parse/lexer/mod.rs @@ -8,14 +8,15 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use ast; +use ast::{self, Ident}; use syntax_pos::{self, BytePos, CharPos, Pos, Span}; use codemap::CodeMap; use errors::{FatalError, Handler, DiagnosticBuilder}; use ext::tt::transcribe::tt_next_token; -use parse::token::{self, keywords, str_to_ident}; +use parse::token; use str::char_at; -use rustc_unicode::property::Pattern_White_Space; +use symbol::{Symbol, keywords}; +use std_unicode::property::Pattern_White_Space; use std::borrow::Cow; use std::char; @@ -350,13 +351,13 @@ impl<'a> StringReader<'a> { /// single-byte delimiter). pub fn name_from(&self, start: BytePos) -> ast::Name { debug!("taking an ident from {:?} to {:?}", start, self.pos); - self.with_str_from(start, token::intern) + self.with_str_from(start, Symbol::intern) } /// As name_from, with an explicit endpoint. pub fn name_from_to(&self, start: BytePos, end: BytePos) -> ast::Name { debug!("taking an ident from {:?} to {:?}", start, end); - self.with_str_from_to(start, end, token::intern) + self.with_str_from_to(start, end, Symbol::intern) } /// Calls `f` with a string slice of the source text spanning from `start` @@ -492,7 +493,7 @@ impl<'a> StringReader<'a> { if string == "_" { None } else { - Some(token::intern(string)) + Some(Symbol::intern(string)) } }) } @@ -540,7 +541,7 @@ impl<'a> StringReader<'a> { self.with_str_from(start_bpos, |string| { // comments with only more "/"s are not doc comments let tok = if is_doc_comment(string) { - token::DocComment(token::intern(string)) + token::DocComment(Symbol::intern(string)) } else { token::Comment }; @@ -669,7 +670,7 @@ impl<'a> StringReader<'a> { } else { string.into() }; - token::DocComment(token::intern(&string[..])) + token::DocComment(Symbol::intern(&string[..])) } else { token::Comment }; @@ -758,7 +759,7 @@ impl<'a> StringReader<'a> { self.err_span_(start_bpos, self.pos, "no valid digits found for number"); - return token::Integer(token::intern("0")); + return token::Integer(Symbol::intern("0")); } // might be a float, but don't be greedy if this is actually an @@ -1097,7 +1098,7 @@ impl<'a> StringReader<'a> { token::Underscore } else { // FIXME: perform NFKC normalization here. (Issue #2253) - token::Ident(str_to_ident(string)) + token::Ident(Ident::from_str(string)) } })); } @@ -1277,13 +1278,13 @@ impl<'a> StringReader<'a> { // expansion purposes. See #12512 for the gory details of why // this is necessary. let ident = self.with_str_from(start, |lifetime_name| { - str_to_ident(&format!("'{}", lifetime_name)) + Ident::from_str(&format!("'{}", lifetime_name)) }); // Conjure up a "keyword checking ident" to make sure that // the lifetime name is not a keyword. 
let keyword_checking_ident = self.with_str_from(start, |lifetime_name| { - str_to_ident(lifetime_name) + Ident::from_str(lifetime_name) }); let keyword_checking_token = &token::Ident(keyword_checking_ident); let last_bpos = self.pos; @@ -1310,7 +1311,7 @@ impl<'a> StringReader<'a> { let id = if valid { self.name_from(start) } else { - token::intern("0") + Symbol::intern("0") }; self.bump(); // advance ch past token let suffix = self.scan_optional_raw_name(); @@ -1352,7 +1353,7 @@ impl<'a> StringReader<'a> { let id = if valid { self.name_from(start_bpos + BytePos(1)) } else { - token::intern("??") + Symbol::intern("??") }; self.bump(); let suffix = self.scan_optional_raw_name(); @@ -1424,7 +1425,7 @@ impl<'a> StringReader<'a> { let id = if valid { self.name_from_to(content_start_bpos, content_end_bpos) } else { - token::intern("??") + Symbol::intern("??") }; let suffix = self.scan_optional_raw_name(); return Ok(token::Literal(token::StrRaw(id, hash_count), suffix)); @@ -1551,7 +1552,7 @@ impl<'a> StringReader<'a> { let id = if valid { self.name_from(start) } else { - token::intern("?") + Symbol::intern("?") }; self.bump(); // advance ch past token return token::Byte(id); @@ -1584,7 +1585,7 @@ impl<'a> StringReader<'a> { let id = if valid { self.name_from(start) } else { - token::intern("??") + Symbol::intern("??") }; self.bump(); return token::ByteStr(id); @@ -1700,11 +1701,12 @@ fn ident_continue(c: Option) -> bool { mod tests { use super::*; + use ast::Ident; + use symbol::Symbol; use syntax_pos::{BytePos, Span, NO_EXPANSION}; use codemap::CodeMap; use errors; use parse::token; - use parse::token::str_to_ident; use std::io; use std::rc::Rc; @@ -1732,7 +1734,7 @@ mod tests { &sh, "/* my source file */ fn main() { println!(\"zebra\"); }\n" .to_string()); - let id = str_to_ident("fn"); + let id = Ident::from_str("fn"); assert_eq!(string_reader.next_token().tok, token::Comment); assert_eq!(string_reader.next_token().tok, token::Whitespace); let tok1 = string_reader.next_token(); @@ -1751,7 +1753,7 @@ mod tests { // read another token: let tok3 = string_reader.next_token(); let tok4 = TokenAndSpan { - tok: token::Ident(str_to_ident("main")), + tok: token::Ident(Ident::from_str("main")), sp: Span { lo: BytePos(24), hi: BytePos(28), @@ -1773,7 +1775,7 @@ mod tests { // make the identifier by looking up the string in the interner fn mk_ident(id: &str) -> token::Token { - token::Ident(str_to_ident(id)) + token::Ident(Ident::from_str(id)) } #[test] @@ -1813,7 +1815,7 @@ mod tests { let cm = Rc::new(CodeMap::new()); let sh = mk_sh(cm.clone()); assert_eq!(setup(&cm, &sh, "'a'".to_string()).next_token().tok, - token::Literal(token::Char(token::intern("a")), None)); + token::Literal(token::Char(Symbol::intern("a")), None)); } #[test] @@ -1821,7 +1823,7 @@ mod tests { let cm = Rc::new(CodeMap::new()); let sh = mk_sh(cm.clone()); assert_eq!(setup(&cm, &sh, "' '".to_string()).next_token().tok, - token::Literal(token::Char(token::intern(" ")), None)); + token::Literal(token::Char(Symbol::intern(" ")), None)); } #[test] @@ -1829,7 +1831,7 @@ mod tests { let cm = Rc::new(CodeMap::new()); let sh = mk_sh(cm.clone()); assert_eq!(setup(&cm, &sh, "'\\n'".to_string()).next_token().tok, - token::Literal(token::Char(token::intern("\\n")), None)); + token::Literal(token::Char(Symbol::intern("\\n")), None)); } #[test] @@ -1837,7 +1839,7 @@ mod tests { let cm = Rc::new(CodeMap::new()); let sh = mk_sh(cm.clone()); assert_eq!(setup(&cm, &sh, "'abc".to_string()).next_token().tok, - 
token::Lifetime(token::str_to_ident("'abc"))); + token::Lifetime(Ident::from_str("'abc"))); } #[test] @@ -1847,7 +1849,7 @@ mod tests { assert_eq!(setup(&cm, &sh, "r###\"\"#a\\b\x00c\"\"###".to_string()) .next_token() .tok, - token::Literal(token::StrRaw(token::intern("\"#a\\b\x00c\""), 3), None)); + token::Literal(token::StrRaw(Symbol::intern("\"#a\\b\x00c\""), 3), None)); } #[test] @@ -1857,11 +1859,11 @@ mod tests { macro_rules! test { ($input: expr, $tok_type: ident, $tok_contents: expr) => {{ assert_eq!(setup(&cm, &sh, format!("{}suffix", $input)).next_token().tok, - token::Literal(token::$tok_type(token::intern($tok_contents)), - Some(token::intern("suffix")))); + token::Literal(token::$tok_type(Symbol::intern($tok_contents)), + Some(Symbol::intern("suffix")))); // with a whitespace separator: assert_eq!(setup(&cm, &sh, format!("{} suffix", $input)).next_token().tok, - token::Literal(token::$tok_type(token::intern($tok_contents)), + token::Literal(token::$tok_type(Symbol::intern($tok_contents)), None)); }} } @@ -1877,14 +1879,14 @@ mod tests { test!("1.0e10", Float, "1.0e10"); assert_eq!(setup(&cm, &sh, "2us".to_string()).next_token().tok, - token::Literal(token::Integer(token::intern("2")), - Some(token::intern("us")))); + token::Literal(token::Integer(Symbol::intern("2")), + Some(Symbol::intern("us")))); assert_eq!(setup(&cm, &sh, "r###\"raw\"###suffix".to_string()).next_token().tok, - token::Literal(token::StrRaw(token::intern("raw"), 3), - Some(token::intern("suffix")))); + token::Literal(token::StrRaw(Symbol::intern("raw"), 3), + Some(Symbol::intern("suffix")))); assert_eq!(setup(&cm, &sh, "br###\"raw\"###suffix".to_string()).next_token().tok, - token::Literal(token::ByteStrRaw(token::intern("raw"), 3), - Some(token::intern("suffix")))); + token::Literal(token::ByteStrRaw(Symbol::intern("raw"), 3), + Some(Symbol::intern("suffix")))); } #[test] @@ -1904,7 +1906,7 @@ mod tests { _ => panic!("expected a comment!"), } assert_eq!(lexer.next_token().tok, - token::Literal(token::Char(token::intern("a")), None)); + token::Literal(token::Char(Symbol::intern("a")), None)); } #[test] @@ -1917,6 +1919,6 @@ mod tests { assert_eq!(comment.sp, ::syntax_pos::mk_sp(BytePos(0), BytePos(7))); assert_eq!(lexer.next_token().tok, token::Whitespace); assert_eq!(lexer.next_token().tok, - token::DocComment(token::intern("/// test"))); + token::DocComment(Symbol::intern("/// test"))); } } diff --git a/src/libsyntax/parse/mod.rs b/src/libsyntax/parse/mod.rs index 12408c7d3c..c982205f0e 100644 --- a/src/libsyntax/parse/mod.rs +++ b/src/libsyntax/parse/mod.rs @@ -16,12 +16,13 @@ use syntax_pos::{self, Span, FileMap}; use errors::{Handler, ColorConfig, DiagnosticBuilder}; use feature_gate::UnstableFeatures; use parse::parser::Parser; -use parse::token::InternedString; use ptr::P; use str::char_at; +use symbol::Symbol; use tokenstream; use std::cell::RefCell; +use std::collections::HashSet; use std::iter; use std::path::{Path, PathBuf}; use std::rc::Rc; @@ -64,7 +65,7 @@ impl ParseSess { ParseSess { span_diagnostic: handler, unstable_features: UnstableFeatures::from_environment(), - config: Vec::new(), + config: HashSet::new(), included_mod_stack: RefCell::new(vec![]), code_map: code_map } @@ -75,6 +76,19 @@ impl ParseSess { } } +#[derive(Clone)] +pub struct Directory { + pub path: PathBuf, + pub ownership: DirectoryOwnership, +} + +#[derive(Copy, Clone)] +pub enum DirectoryOwnership { + Owned, + UnownedViaBlock, + UnownedViaMod(bool /* legacy warnings? 
*/), +} + // a bunch of utility functions of the form parse__from_ // where includes crate, expr, item, stmt, tts, and one that // uses a HOF to parse anything, and includes file and @@ -116,7 +130,7 @@ pub fn parse_item_from_source_str<'a>(name: String, source: String, sess: &'a Pa } pub fn parse_meta_from_source_str<'a>(name: String, source: String, sess: &'a ParseSess) - -> PResult<'a, P> { + -> PResult<'a, ast::MetaItem> { new_parser_from_source_str(sess, name, source).parse_meta_item() } @@ -151,11 +165,11 @@ pub fn new_parser_from_file<'a>(sess: &'a ParseSess, path: &Path) -> Parser<'a> /// On an error, use the given span as the source of the problem. pub fn new_sub_parser_from_file<'a>(sess: &'a ParseSess, path: &Path, - owns_directory: bool, + directory_ownership: DirectoryOwnership, module_name: Option, sp: Span) -> Parser<'a> { let mut p = filemap_to_parser(sess, file_to_filemap(sess, path, Some(sp))); - p.owns_directory = owns_directory; + p.directory.ownership = directory_ownership; p.root_module_name = module_name; p } @@ -208,14 +222,14 @@ pub fn filemap_to_tts(sess: &ParseSess, filemap: Rc) // it appears to me that the cfg doesn't matter here... indeed, // parsing tt's probably shouldn't require a parser at all. let srdr = lexer::StringReader::new(&sess.span_diagnostic, filemap); - let mut p1 = Parser::new(sess, Box::new(srdr)); + let mut p1 = Parser::new(sess, Box::new(srdr), None, false); panictry!(p1.parse_all_token_trees()) } /// Given tts and the ParseSess, produce a parser pub fn tts_to_parser<'a>(sess: &'a ParseSess, tts: Vec) -> Parser<'a> { let trdr = lexer::new_tt_reader(&sess.span_diagnostic, None, tts); - let mut p = Parser::new(sess, Box::new(trdr)); + let mut p = Parser::new(sess, Box::new(trdr), None, false); p.check_unknown_macro_variable(); p } @@ -371,13 +385,18 @@ fn looks_like_width_suffix(first_chars: &[char], s: &str) -> bool { s[1..].chars().all(|c| '0' <= c && c <= '9') } -fn filtered_float_lit(data: token::InternedString, suffix: Option<&str>, - sd: &Handler, sp: Span) -> ast::LitKind { +fn filtered_float_lit(data: Symbol, suffix: Option, sd: &Handler, sp: Span) + -> ast::LitKind { debug!("filtered_float_lit: {}, {:?}", data, suffix); - match suffix.as_ref().map(|s| &**s) { - Some("f32") => ast::LitKind::Float(data, ast::FloatTy::F32), - Some("f64") => ast::LitKind::Float(data, ast::FloatTy::F64), - Some(suf) => { + let suffix = match suffix { + Some(suffix) => suffix, + None => return ast::LitKind::FloatUnsuffixed(data), + }; + + match &*suffix.as_str() { + "f32" => ast::LitKind::Float(data, ast::FloatTy::F32), + "f64" => ast::LitKind::Float(data, ast::FloatTy::F64), + suf => { if suf.len() >= 2 && looks_like_width_suffix(&['f'], suf) { // if it looks like a width, lets try to be helpful. 
sd.struct_span_err(sp, &format!("invalid width `{}` for float literal", &suf[1..])) @@ -391,16 +410,13 @@ fn filtered_float_lit(data: token::InternedString, suffix: Option<&str>, ast::LitKind::FloatUnsuffixed(data) } - None => ast::LitKind::FloatUnsuffixed(data) } } -pub fn float_lit(s: &str, suffix: Option, - sd: &Handler, sp: Span) -> ast::LitKind { +pub fn float_lit(s: &str, suffix: Option, sd: &Handler, sp: Span) -> ast::LitKind { debug!("float_lit: {:?}, {:?}", s, suffix); // FIXME #2252: bounds checking float literals is deferred until trans let s = s.chars().filter(|&c| c != '_').collect::(); - let data = token::intern_and_get_ident(&s); - filtered_float_lit(data, suffix.as_ref().map(|s| &**s), sd, sp) + filtered_float_lit(Symbol::intern(&s), suffix, sd, sp) } /// Parse a string representing a byte literal into its final form. Similar to `char_lit` @@ -495,11 +511,7 @@ pub fn byte_str_lit(lit: &str) -> Rc> { Rc::new(res) } -pub fn integer_lit(s: &str, - suffix: Option, - sd: &Handler, - sp: Span) - -> ast::LitKind { +pub fn integer_lit(s: &str, suffix: Option, sd: &Handler, sp: Span) -> ast::LitKind { // s can only be ascii, byte indexing is fine let s2 = s.chars().filter(|&c| c != '_').collect::(); @@ -521,16 +533,15 @@ pub fn integer_lit(s: &str, } // 1f64 and 2f32 etc. are valid float literals. - if let Some(ref suf) = suffix { - if looks_like_width_suffix(&['f'], suf) { + if let Some(suf) = suffix { + if looks_like_width_suffix(&['f'], &suf.as_str()) { match base { 16 => sd.span_err(sp, "hexadecimal float literal is not supported"), 8 => sd.span_err(sp, "octal float literal is not supported"), 2 => sd.span_err(sp, "binary float literal is not supported"), _ => () } - let ident = token::intern_and_get_ident(&s); - return filtered_float_lit(ident, Some(&suf), sd, sp) + return filtered_float_lit(Symbol::intern(&s), Some(suf), sd, sp) } } @@ -538,9 +549,9 @@ pub fn integer_lit(s: &str, s = &s[2..]; } - if let Some(ref suf) = suffix { - if suf.is_empty() { sd.span_bug(sp, "found empty literal suffix in Some")} - ty = match &**suf { + if let Some(suf) = suffix { + if suf.as_str().is_empty() { sd.span_bug(sp, "found empty literal suffix in Some")} + ty = match &*suf.as_str() { "isize" => ast::LitIntType::Signed(ast::IntTy::Is), "i8" => ast::LitIntType::Signed(ast::IntTy::I8), "i16" => ast::LitIntType::Signed(ast::IntTy::I16), @@ -551,7 +562,7 @@ pub fn integer_lit(s: &str, "u16" => ast::LitIntType::Unsigned(ast::UintTy::U16), "u32" => ast::LitIntType::Unsigned(ast::UintTy::U32), "u64" => ast::LitIntType::Unsigned(ast::UintTy::U64), - _ => { + suf => { // i and u look like widths, so lets // give an error message along those lines if looks_like_width_suffix(&['i', 'u'], suf) { @@ -599,12 +610,11 @@ mod tests { use std::rc::Rc; use syntax_pos::{self, Span, BytePos, Pos, NO_EXPANSION}; use codemap::Spanned; - use ast::{self, PatKind}; + use ast::{self, Ident, PatKind}; use abi::Abi; use attr::first_attr_value_str_by_name; use parse; use parse::parser::Parser; - use parse::token::{str_to_ident}; use print::pprust::item_to_string; use ptr::P; use tokenstream::{self, TokenTree}; @@ -626,7 +636,7 @@ mod tests { global: false, segments: vec![ ast::PathSegment { - identifier: str_to_ident("a"), + identifier: Ident::from_str("a"), parameters: ast::PathParameters::none(), } ], @@ -645,11 +655,11 @@ mod tests { global: true, segments: vec![ ast::PathSegment { - identifier: str_to_ident("a"), + identifier: Ident::from_str("a"), parameters: ast::PathParameters::none(), }, ast::PathSegment { - 
identifier: str_to_ident("b"), + identifier: Ident::from_str("b"), parameters: ast::PathParameters::none(), } ] @@ -678,8 +688,8 @@ mod tests { Some(&TokenTree::Token(_, token::Ident(name_zip))), Some(&TokenTree::Delimited(_, ref macro_delimed)), ) - if name_macro_rules.name.as_str() == "macro_rules" - && name_zip.name.as_str() == "zip" => { + if name_macro_rules.name == "macro_rules" + && name_zip.name == "zip" => { let tts = ¯o_delimed.tts[..]; match (tts.len(), tts.get(0), tts.get(1), tts.get(2)) { ( @@ -696,8 +706,7 @@ mod tests { Some(&TokenTree::Token(_, token::Dollar)), Some(&TokenTree::Token(_, token::Ident(ident))), ) - if first_delimed.delim == token::Paren - && ident.name.as_str() == "a" => {}, + if first_delimed.delim == token::Paren && ident.name == "a" => {}, _ => panic!("value 3: {:?}", **first_delimed), } let tts = &second_delimed.tts[..]; @@ -708,7 +717,7 @@ mod tests { Some(&TokenTree::Token(_, token::Ident(ident))), ) if second_delimed.delim == token::Paren - && ident.name.as_str() == "a" => {}, + && ident.name == "a" => {}, _ => panic!("value 4: {:?}", **second_delimed), } }, @@ -724,17 +733,17 @@ mod tests { let tts = string_to_tts("fn a (b : i32) { b; }".to_string()); let expected = vec![ - TokenTree::Token(sp(0, 2), token::Ident(str_to_ident("fn"))), - TokenTree::Token(sp(3, 4), token::Ident(str_to_ident("a"))), + TokenTree::Token(sp(0, 2), token::Ident(Ident::from_str("fn"))), + TokenTree::Token(sp(3, 4), token::Ident(Ident::from_str("a"))), TokenTree::Delimited( sp(5, 14), Rc::new(tokenstream::Delimited { delim: token::DelimToken::Paren, open_span: sp(5, 6), tts: vec![ - TokenTree::Token(sp(6, 7), token::Ident(str_to_ident("b"))), + TokenTree::Token(sp(6, 7), token::Ident(Ident::from_str("b"))), TokenTree::Token(sp(8, 9), token::Colon), - TokenTree::Token(sp(10, 13), token::Ident(str_to_ident("i32"))), + TokenTree::Token(sp(10, 13), token::Ident(Ident::from_str("i32"))), ], close_span: sp(13, 14), })), @@ -744,7 +753,7 @@ mod tests { delim: token::DelimToken::Brace, open_span: sp(15, 16), tts: vec![ - TokenTree::Token(sp(17, 18), token::Ident(str_to_ident("b"))), + TokenTree::Token(sp(17, 18), token::Ident(Ident::from_str("b"))), TokenTree::Token(sp(18, 19), token::Semi), ], close_span: sp(20, 21), @@ -765,7 +774,7 @@ mod tests { global: false, segments: vec![ ast::PathSegment { - identifier: str_to_ident("d"), + identifier: Ident::from_str("d"), parameters: ast::PathParameters::none(), } ], @@ -788,7 +797,7 @@ mod tests { global:false, segments: vec![ ast::PathSegment { - identifier: str_to_ident("b"), + identifier: Ident::from_str("b"), parameters: ast::PathParameters::none(), } ], @@ -812,7 +821,7 @@ mod tests { id: ast::DUMMY_NODE_ID, node: PatKind::Ident(ast::BindingMode::ByValue(ast::Mutability::Immutable), Spanned{ span:sp(0, 1), - node: str_to_ident("b") + node: Ident::from_str("b") }, None), span: sp(0,1)})); @@ -824,7 +833,7 @@ mod tests { // this test depends on the intern order of "fn" and "i32" assert_eq!(string_to_item("fn a (b : i32) { b; }".to_string()), Some( - P(ast::Item{ident:str_to_ident("a"), + P(ast::Item{ident:Ident::from_str("a"), attrs:Vec::new(), id: ast::DUMMY_NODE_ID, node: ast::ItemKind::Fn(P(ast::FnDecl { @@ -835,8 +844,7 @@ mod tests { global:false, segments: vec![ ast::PathSegment { - identifier: - str_to_ident("i32"), + identifier: Ident::from_str("i32"), parameters: ast::PathParameters::none(), } ], @@ -849,7 +857,7 @@ mod tests { ast::BindingMode::ByValue(ast::Mutability::Immutable), Spanned{ span: sp(6,7), - node: 
str_to_ident("b")}, + node: Ident::from_str("b")}, None ), span: sp(6,7) @@ -884,9 +892,7 @@ mod tests { global:false, segments: vec![ ast::PathSegment { - identifier: - str_to_ident( - "b"), + identifier: Ident::from_str("b"), parameters: ast::PathParameters::none(), } @@ -934,8 +940,8 @@ mod tests { struct PatIdentVisitor { spans: Vec } - impl ::visit::Visitor for PatIdentVisitor { - fn visit_pat(&mut self, p: &ast::Pat) { + impl<'a> ::visit::Visitor<'a> for PatIdentVisitor { + fn visit_pat(&mut self, p: &'a ast::Pat) { match p.node { PatKind::Ident(_ , ref spannedident, _) => { self.spans.push(spannedident.span.clone()); @@ -998,12 +1004,12 @@ mod tests { let item = parse_item_from_source_str(name.clone(), source, &sess) .unwrap().unwrap(); let doc = first_attr_value_str_by_name(&item.attrs, "doc").unwrap(); - assert_eq!(&doc[..], "/// doc comment"); + assert_eq!(doc, "/// doc comment"); let source = "/// doc comment\r\n/// line 2\r\nfn foo() {}".to_string(); let item = parse_item_from_source_str(name.clone(), source, &sess) .unwrap().unwrap(); - let docs = item.attrs.iter().filter(|a| &*a.name() == "doc") + let docs = item.attrs.iter().filter(|a| a.name() == "doc") .map(|a| a.value_str().unwrap().to_string()).collect::>(); let b: &[_] = &["/// doc comment".to_string(), "/// line 2".to_string()]; assert_eq!(&docs[..], b); @@ -1011,7 +1017,7 @@ mod tests { let source = "/** doc comment\r\n * with CRLF */\r\nfn foo() {}".to_string(); let item = parse_item_from_source_str(name, source, &sess).unwrap().unwrap(); let doc = first_attr_value_str_by_name(&item.attrs, "doc").unwrap(); - assert_eq!(&doc[..], "/** doc comment\n * with CRLF */"); + assert_eq!(doc, "/** doc comment\n * with CRLF */"); } #[test] diff --git a/src/libsyntax/parse/parser.rs b/src/libsyntax/parse/parser.rs index b670a73847..5bd03cc68a 100644 --- a/src/libsyntax/parse/parser.rs +++ b/src/libsyntax/parse/parser.rs @@ -38,7 +38,7 @@ use ast::{Ty, TyKind, TypeBinding, TyParam, TyParamBounds}; use ast::{ViewPath, ViewPathGlob, ViewPathList, ViewPathSimple}; use ast::{Visibility, WhereClause}; use ast::{BinOpKind, UnOp}; -use ast; +use {ast, attr}; use codemap::{self, CodeMap, Spanned, spanned, respan}; use syntax_pos::{self, Span, BytePos, mk_sp}; use errors::{self, DiagnosticBuilder}; @@ -48,13 +48,14 @@ use parse::classify; use parse::common::SeqSep; use parse::lexer::{Reader, TokenAndSpan}; use parse::obsolete::ObsoleteSyntax; -use parse::token::{self, intern, keywords, MatchNt, SubstNt, InternedString}; -use parse::{new_sub_parser_from_file, ParseSess}; +use parse::token::{self, MatchNt, SubstNt}; +use parse::{new_sub_parser_from_file, ParseSess, Directory, DirectoryOwnership}; use util::parser::{AssocOp, Fixity}; use print::pprust; use ptr::P; use parse::PResult; use tokenstream::{self, Delimited, SequenceRepetition, TokenTree}; +use symbol::{Symbol, keywords}; use util::ThinVec; use std::collections::HashSet; @@ -67,7 +68,6 @@ bitflags! { flags Restrictions: u8 { const RESTRICTION_STMT_EXPR = 1 << 0, const RESTRICTION_NO_STRUCT_LITERAL = 1 << 1, - const NO_NONINLINE_MOD = 1 << 2, } } @@ -88,13 +88,6 @@ pub enum PathStyle { Expr, } -/// How to parse a bound, whether to allow bound modifiers such as `?`. 
-#[derive(Copy, Clone, PartialEq)] -pub enum BoundParsingMode { - Bare, - Modified, -} - #[derive(Clone, Copy, PartialEq)] pub enum SemiColonMode { Break, @@ -199,12 +192,9 @@ pub struct Parser<'a> { /// extra detail when the same error is seen twice pub obsolete_set: HashSet, /// Used to determine the path to externally loaded source files - pub directory: PathBuf, + pub directory: Directory, /// Stack of open delimiters and their spans. Used for error message. pub open_braces: Vec<(token::DelimToken, Span)>, - /// Flag if this parser "owns" the directory that it is currently parsing - /// in. This will affect how nested files are looked up. - pub owns_directory: bool, /// Name of the root module this parser originated from. If `None`, then the /// name is not known. This does not change while the parser is descending /// into modules, and sub-parsers have new values for this name. @@ -244,8 +234,9 @@ pub struct ModulePath { } pub struct ModulePathSuccess { - pub path: ::std::path::PathBuf, - pub owns_directory: bool, + pub path: PathBuf, + pub directory_ownership: DirectoryOwnership, + warn: bool, } pub struct ModulePathError { @@ -276,12 +267,11 @@ impl From> for LhsExpr { } impl<'a> Parser<'a> { - pub fn new(sess: &'a ParseSess, rdr: Box) -> Self { - Parser::new_with_doc_flag(sess, rdr, false) - } - - pub fn new_with_doc_flag(sess: &'a ParseSess, rdr: Box, desugar_doc_comments: bool) - -> Self { + pub fn new(sess: &'a ParseSess, + rdr: Box, + directory: Option, + desugar_doc_comments: bool) + -> Self { let mut parser = Parser { reader: rdr, sess: sess, @@ -295,9 +285,8 @@ impl<'a> Parser<'a> { quote_depth: 0, parsing_token_tree: false, obsolete_set: HashSet::new(), - directory: PathBuf::new(), + directory: Directory { path: PathBuf::new(), ownership: DirectoryOwnership::Owned }, open_braces: Vec::new(), - owns_directory: true, root_module_name: None, expected_tokens: Vec::new(), tts: Vec::new(), @@ -308,9 +297,11 @@ impl<'a> Parser<'a> { let tok = parser.next_tok(); parser.token = tok.tok; parser.span = tok.sp; - if parser.span != syntax_pos::DUMMY_SP { - parser.directory = PathBuf::from(sess.codemap().span_to_filename(parser.span)); - parser.directory.pop(); + if let Some(directory) = directory { + parser.directory = directory; + } else if parser.span != syntax_pos::DUMMY_SP { + parser.directory.path = PathBuf::from(sess.codemap().span_to_filename(parser.span)); + parser.directory.path.pop(); } parser } @@ -998,10 +989,6 @@ impl<'a> Parser<'a> { &self.sess.span_diagnostic } - pub fn id_to_interned_str(&mut self, id: Ident) -> InternedString { - id.name.as_str() - } - /// Is the current token one of the keywords that signals a bare function /// type? pub fn token_is_bare_fn_keyword(&mut self) -> bool { @@ -1048,7 +1035,7 @@ impl<'a> Parser<'a> { trait_ref: trait_ref, span: mk_sp(lo, hi)}; let other_bounds = if self.eat(&token::BinOp(token::Plus)) { - self.parse_ty_param_bounds(BoundParsingMode::Bare)? + self.parse_ty_param_bounds()? } else { P::new() }; @@ -1066,7 +1053,7 @@ impl<'a> Parser<'a> { The `impl` has already been consumed. */ - let bounds = self.parse_ty_param_bounds(BoundParsingMode::Modified)?; + let bounds = self.parse_ty_param_bounds()?; if !bounds.iter().any(|b| if let TraitTyParamBound(..) 
= *b { true } else { false }) { self.span_err(self.prev_span, "at least one trait must be specified"); @@ -1278,7 +1265,7 @@ impl<'a> Parser<'a> { return Ok(lhs); } - let bounds = self.parse_ty_param_bounds(BoundParsingMode::Bare)?; + let bounds = self.parse_ty_param_bounds()?; // In type grammar, `+` is treated like a binary operator, // and hence both L and R side are required. @@ -1523,34 +1510,28 @@ impl<'a> Parser<'a> { // float literals, so all the handling is done // internally. token::Integer(s) => { - (false, parse::integer_lit(&s.as_str(), - suf.as_ref().map(|s| s.as_str()), - &self.sess.span_diagnostic, - self.span)) + let diag = &self.sess.span_diagnostic; + (false, parse::integer_lit(&s.as_str(), suf, diag, self.span)) } token::Float(s) => { - (false, parse::float_lit(&s.as_str(), - suf.as_ref().map(|s| s.as_str()), - &self.sess.span_diagnostic, - self.span)) + let diag = &self.sess.span_diagnostic; + (false, parse::float_lit(&s.as_str(), suf, diag, self.span)) } token::Str_(s) => { - (true, - LitKind::Str(token::intern_and_get_ident(&parse::str_lit(&s.as_str())), - ast::StrStyle::Cooked)) + let s = Symbol::intern(&parse::str_lit(&s.as_str())); + (true, LitKind::Str(s, ast::StrStyle::Cooked)) } token::StrRaw(s, n) => { - (true, - LitKind::Str( - token::intern_and_get_ident(&parse::raw_str_lit(&s.as_str())), - ast::StrStyle::Raw(n))) + let s = Symbol::intern(&parse::raw_str_lit(&s.as_str())); + (true, LitKind::Str(s, ast::StrStyle::Raw(n))) + } + token::ByteStr(i) => { + (true, LitKind::ByteStr(parse::byte_str_lit(&i.as_str()))) + } + token::ByteStrRaw(i, _) => { + (true, LitKind::ByteStr(Rc::new(i.to_string().into_bytes()))) } - token::ByteStr(i) => - (true, LitKind::ByteStr(parse::byte_str_lit(&i.as_str()))), - token::ByteStrRaw(i, _) => - (true, - LitKind::ByteStr(Rc::new(i.to_string().into_bytes()))), }; if suffix_illegal { @@ -1902,12 +1883,12 @@ impl<'a> Parser<'a> { if let Some(recv) = followed_by_ty_params { assert!(recv.is_empty()); *recv = attrs; - } else { + debug!("parse_lifetime_defs ret {:?}", res); + return Ok(res); + } else if !attrs.is_empty() { let msg = "trailing attribute after lifetime parameters"; return Err(self.fatal(msg)); } - debug!("parse_lifetime_defs ret {:?}", res); - return Ok(res); } } @@ -2269,15 +2250,25 @@ impl<'a> Parser<'a> { ex = ExprKind::Ret(None); } } else if self.eat_keyword(keywords::Break) { - if self.token.is_lifetime() { - ex = ExprKind::Break(Some(Spanned { + let lt = if self.token.is_lifetime() { + let spanned_lt = Spanned { node: self.get_lifetime(), span: self.span - })); + }; self.bump(); + Some(spanned_lt) } else { - ex = ExprKind::Break(None); - } + None + }; + let e = if self.token.can_begin_expr() + && !(self.token == token::OpenDelim(token::Brace) + && self.restrictions.contains( + Restrictions::RESTRICTION_NO_STRUCT_LITERAL)) { + Some(self.parse_expr()?) 
+ } else { + None + }; + ex = ExprKind::Break(lt, e); hi = self.prev_span.hi; } else if self.token.is_keyword(keywords::Let) { // Catch this syntax error here, instead of in `check_strict_keywords`, so @@ -2544,7 +2535,7 @@ impl<'a> Parser<'a> { let prev_span = self.prev_span; let fstr = n.as_str(); let mut err = self.diagnostic().struct_span_err(prev_span, - &format!("unexpected token: `{}`", n.as_str())); + &format!("unexpected token: `{}`", n)); if fstr.chars().all(|x| "0123456789.".contains(x)) { let float = match fstr.parse::().ok() { Some(f) => f, @@ -2627,7 +2618,7 @@ impl<'a> Parser<'a> { }))); } else if self.token.is_keyword(keywords::Crate) { let ident = match self.token { - token::Ident(id) => ast::Ident { name: token::intern("$crate"), ..id }, + token::Ident(id) => ast::Ident { name: Symbol::intern("$crate"), ..id }, _ => unreachable!(), }; self.bump(); @@ -3162,25 +3153,12 @@ impl<'a> Parser<'a> { let decl = self.parse_fn_block_decl()?; let decl_hi = self.prev_span.hi; let body = match decl.output { - FunctionRetTy::Default(_) => { - // If no explicit return type is given, parse any - // expr and wrap it up in a dummy block: - let body_expr = self.parse_expr()?; - P(ast::Block { - id: ast::DUMMY_NODE_ID, - span: body_expr.span, - stmts: vec![Stmt { - span: body_expr.span, - node: StmtKind::Expr(body_expr), - id: ast::DUMMY_NODE_ID, - }], - rules: BlockCheckMode::Default, - }) - } + FunctionRetTy::Default(_) => self.parse_expr()?, _ => { // If an explicit return type is given, require a // block to appear (RFC 968). - self.parse_block()? + let body_lo = self.span.lo; + self.parse_block_expr(body_lo, BlockCheckMode::Default, ThinVec::new())? } }; @@ -3764,9 +3742,7 @@ impl<'a> Parser<'a> { /// Emit an expected item after attributes error. fn expected_item_err(&self, attrs: &[Attribute]) { let message = match attrs.last() { - Some(&Attribute { node: ast::Attribute_ { is_sugared_doc: true, .. }, .. }) => { - "expected item after doc comment" - } + Some(&Attribute { is_sugared_doc: true, .. }) => "expected item after doc comment", _ => "expected item after attributes", }; @@ -3990,9 +3966,11 @@ impl<'a> Parser<'a> { } } else { // FIXME: Bad copy of attrs - let restrictions = self.restrictions | Restrictions::NO_NONINLINE_MOD; - match self.with_res(restrictions, - |this| this.parse_item_(attrs.clone(), false, true))? { + let old_directory_ownership = + mem::replace(&mut self.directory.ownership, DirectoryOwnership::UnownedViaBlock); + let item = self.parse_item_(attrs.clone(), false, true)?; + self.directory.ownership = old_directory_ownership; + match item { Some(i) => Stmt { id: ast::DUMMY_NODE_ID, span: mk_sp(lo, i.span.hi), @@ -4164,14 +4142,12 @@ impl<'a> Parser<'a> { // Parses a sequence of bounds if a `:` is found, // otherwise returns empty list. - fn parse_colon_then_ty_param_bounds(&mut self, - mode: BoundParsingMode) - -> PResult<'a, TyParamBounds> + fn parse_colon_then_ty_param_bounds(&mut self) -> PResult<'a, TyParamBounds> { if !self.eat(&token::Colon) { Ok(P::new()) } else { - self.parse_ty_param_bounds(mode) + self.parse_ty_param_bounds() } } @@ -4179,9 +4155,7 @@ impl<'a> Parser<'a> { // where boundseq = ( polybound + boundseq ) | polybound // and polybound = ( 'for' '<' 'region '>' )? 
bound // and bound = 'region | trait_ref - fn parse_ty_param_bounds(&mut self, - mode: BoundParsingMode) - -> PResult<'a, TyParamBounds> + fn parse_ty_param_bounds(&mut self) -> PResult<'a, TyParamBounds> { let mut result = vec![]; loop { @@ -4200,16 +4174,10 @@ impl<'a> Parser<'a> { })); self.bump(); } - token::ModSep | token::Ident(..) => { + _ if self.token.is_path_start() || self.token.is_keyword(keywords::For) => { let poly_trait_ref = self.parse_poly_trait_ref()?; let modifier = if ate_question { - if mode == BoundParsingMode::Modified { - TraitBoundModifier::Maybe - } else { - self.span_err(question_span, - "unexpected `?`"); - TraitBoundModifier::None - } + TraitBoundModifier::Maybe } else { TraitBoundModifier::None }; @@ -4231,7 +4199,7 @@ impl<'a> Parser<'a> { let span = self.span; let ident = self.parse_ident()?; - let bounds = self.parse_colon_then_ty_param_bounds(BoundParsingMode::Modified)?; + let bounds = self.parse_colon_then_ty_param_bounds()?; let default = if self.check(&token::Eq) { self.bump(); @@ -4410,6 +4378,23 @@ impl<'a> Parser<'a> { return Ok(where_clause); } + // This is a temporary hack. + // + // We are considering adding generics to the `where` keyword as an alternative higher-rank + // parameter syntax (as in `where<'a>` or `where`. To avoid that being a breaking + // change, for now we refuse to parse `where < (ident | lifetime) (> | , | :)`. + if token::Lt == self.token { + let ident_or_lifetime = self.look_ahead(1, |t| t.is_ident() || t.is_lifetime()); + if ident_or_lifetime { + let gt_comma_or_colon = self.look_ahead(2, |t| { + *t == token::Gt || *t == token::Comma || *t == token::Colon + }); + if gt_comma_or_colon { + self.span_err(self.span, "syntax `where` is reserved for future use"); + } + } + } + let mut parsed_something = false; loop { let lo = self.span.lo; @@ -4422,7 +4407,7 @@ impl<'a> Parser<'a> { let bounded_lifetime = self.parse_lifetime()?; - self.eat(&token::Colon); + self.expect(&token::Colon)?; let bounds = self.parse_lifetimes(token::BinOp(token::Plus))?; @@ -4455,7 +4440,7 @@ impl<'a> Parser<'a> { let bounded_ty = self.parse_ty()?; if self.eat(&token::Colon) { - let bounds = self.parse_ty_param_bounds(BoundParsingMode::Bare)?; + let bounds = self.parse_ty_param_bounds()?; let hi = self.prev_span.hi; let span = mk_sp(lo, hi); @@ -4850,7 +4835,7 @@ impl<'a> Parser<'a> { Visibility::Inherited => (), _ => { let is_macro_rules: bool = match self.token { - token::Ident(sid) => sid.name == intern("macro_rules"), + token::Ident(sid) => sid.name == Symbol::intern("macro_rules"), _ => false, }; if is_macro_rules { @@ -4917,7 +4902,7 @@ impl<'a> Parser<'a> { let mut tps = self.parse_generics()?; // Parse supertrait bounds. - let bounds = self.parse_colon_then_ty_param_bounds(BoundParsingMode::Bare)?; + let bounds = self.parse_colon_then_ty_param_bounds()?; tps.where_clause = self.parse_where_clause()?; @@ -5295,39 +5280,53 @@ impl<'a> Parser<'a> { self.bump(); if in_cfg { // This mod is in an external file. Let's go get it! 
- let (m, attrs) = self.eval_src_mod(id, &outer_attrs, id_span)?; - Ok((id, m, Some(attrs))) + let ModulePathSuccess { path, directory_ownership, warn } = + self.submod_path(id, &outer_attrs, id_span)?; + let (module, mut attrs) = + self.eval_src_mod(path, directory_ownership, id.to_string(), id_span)?; + if warn { + let attr = ast::Attribute { + id: attr::mk_attr_id(), + style: ast::AttrStyle::Outer, + value: ast::MetaItem { + name: Symbol::intern("warn_directory_ownership"), + node: ast::MetaItemKind::Word, + span: syntax_pos::DUMMY_SP, + }, + is_sugared_doc: false, + span: syntax_pos::DUMMY_SP, + }; + attr::mark_known(&attr); + attrs.push(attr); + } + Ok((id, module, Some(attrs))) } else { let placeholder = ast::Mod { inner: syntax_pos::DUMMY_SP, items: Vec::new() }; Ok((id, ItemKind::Mod(placeholder), None)) } } else { - let directory = self.directory.clone(); - let restrictions = self.push_directory(id, &outer_attrs); + let old_directory = self.directory.clone(); + self.push_directory(id, &outer_attrs); self.expect(&token::OpenDelim(token::Brace))?; let mod_inner_lo = self.span.lo; let attrs = self.parse_inner_attributes()?; - let m = self.with_res(restrictions, |this| { - this.parse_mod_items(&token::CloseDelim(token::Brace), mod_inner_lo) - })?; - self.directory = directory; - Ok((id, ItemKind::Mod(m), Some(attrs))) + let module = self.parse_mod_items(&token::CloseDelim(token::Brace), mod_inner_lo)?; + self.directory = old_directory; + Ok((id, ItemKind::Mod(module), Some(attrs))) } } - fn push_directory(&mut self, id: Ident, attrs: &[Attribute]) -> Restrictions { - if let Some(path) = ::attr::first_attr_value_str_by_name(attrs, "path") { - self.directory.push(&*path); - self.restrictions - Restrictions::NO_NONINLINE_MOD + fn push_directory(&mut self, id: Ident, attrs: &[Attribute]) { + if let Some(path) = attr::first_attr_value_str_by_name(attrs, "path") { + self.directory.path.push(&*path.as_str()); + self.directory.ownership = DirectoryOwnership::Owned; } else { - let default_path = self.id_to_interned_str(id); - self.directory.push(&*default_path); - self.restrictions + self.directory.path.push(&*id.name.as_str()); } } pub fn submod_path_from_attr(attrs: &[ast::Attribute], dir_path: &Path) -> Option { - ::attr::first_attr_value_str_by_name(attrs, "path").map(|d| dir_path.join(&*d)) + attr::first_attr_value_str_by_name(attrs, "path").map(|d| dir_path.join(&*d.as_str())) } /// Returns either a path to a module, or . 
@@ -5342,8 +5341,16 @@ impl<'a> Parser<'a> { let secondary_exists = codemap.file_exists(&secondary_path); let result = match (default_exists, secondary_exists) { - (true, false) => Ok(ModulePathSuccess { path: default_path, owns_directory: false }), - (false, true) => Ok(ModulePathSuccess { path: secondary_path, owns_directory: true }), + (true, false) => Ok(ModulePathSuccess { + path: default_path, + directory_ownership: DirectoryOwnership::UnownedViaMod(false), + warn: false, + }), + (false, true) => Ok(ModulePathSuccess { + path: secondary_path, + directory_ownership: DirectoryOwnership::Owned, + warn: false, + }), (false, false) => Err(ModulePathError { err_msg: format!("file not found for module `{}`", mod_name), help_msg: format!("name the file either {} or {} inside the directory {:?}", @@ -5371,13 +5378,20 @@ impl<'a> Parser<'a> { id: ast::Ident, outer_attrs: &[ast::Attribute], id_sp: Span) -> PResult<'a, ModulePathSuccess> { - if let Some(p) = Parser::submod_path_from_attr(outer_attrs, &self.directory) { - return Ok(ModulePathSuccess { path: p, owns_directory: true }); + if let Some(path) = Parser::submod_path_from_attr(outer_attrs, &self.directory.path) { + return Ok(ModulePathSuccess { + directory_ownership: match path.file_name().and_then(|s| s.to_str()) { + Some("mod.rs") => DirectoryOwnership::Owned, + _ => DirectoryOwnership::UnownedViaMod(true), + }, + path: path, + warn: false, + }); } - let paths = Parser::default_submod_path(id, &self.directory, self.sess.codemap()); + let paths = Parser::default_submod_path(id, &self.directory.path, self.sess.codemap()); - if self.restrictions.contains(Restrictions::NO_NONINLINE_MOD) { + if let DirectoryOwnership::UnownedViaBlock = self.directory.ownership { let msg = "Cannot declare a non-inline module inside a block unless it has a path attribute"; let mut err = self.diagnostic().struct_span_err(id_sp, msg); @@ -5387,10 +5401,15 @@ impl<'a> Parser<'a> { err.span_note(id_sp, &msg); } return Err(err); - } else if !self.owns_directory { + } else if let DirectoryOwnership::UnownedViaMod(warn) = self.directory.ownership { + if warn { + if let Ok(result) = paths.result { + return Ok(ModulePathSuccess { warn: true, ..result }); + } + } let mut err = self.diagnostic().struct_span_err(id_sp, "cannot declare a new module at this location"); - let this_module = match self.directory.file_name() { + let this_module = match self.directory.path.file_name() { Some(file_name) => file_name.to_str().unwrap().to_owned(), None => self.root_module_name.as_ref().unwrap().clone(), }; @@ -5403,8 +5422,10 @@ impl<'a> Parser<'a> { &format!("... or maybe `use` the module `{}` instead \ of possibly redeclaring it", paths.name)); - } - return Err(err); + return Err(err); + } else { + return Err(err); + }; } match paths.result { @@ -5415,25 +5436,11 @@ impl<'a> Parser<'a> { /// Read a module from a source file. 
fn eval_src_mod(&mut self, - id: ast::Ident, - outer_attrs: &[ast::Attribute], + path: PathBuf, + directory_ownership: DirectoryOwnership, + name: String, id_sp: Span) -> PResult<'a, (ast::ItemKind, Vec )> { - let ModulePathSuccess { path, owns_directory } = self.submod_path(id, - outer_attrs, - id_sp)?; - - self.eval_src_mod_from_path(path, - owns_directory, - id.to_string(), - id_sp) - } - - fn eval_src_mod_from_path(&mut self, - path: PathBuf, - owns_directory: bool, - name: String, - id_sp: Span) -> PResult<'a, (ast::ItemKind, Vec )> { let mut included_mod_stack = self.sess.included_mod_stack.borrow_mut(); if let Some(i) = included_mod_stack.iter().position(|p| *p == path) { let mut err = String::from("circular modules: "); @@ -5448,7 +5455,8 @@ impl<'a> Parser<'a> { included_mod_stack.push(path.clone()); drop(included_mod_stack); - let mut p0 = new_sub_parser_from_file(self.sess, &path, owns_directory, Some(name), id_sp); + let mut p0 = + new_sub_parser_from_file(self.sess, &path, directory_ownership, Some(name), id_sp); let mod_inner_lo = p0.span.lo; let mod_attrs = p0.parse_inner_attributes()?; let m0 = p0.parse_mod_items(&token::Eof, mod_inner_lo)?; @@ -6141,26 +6149,17 @@ impl<'a> Parser<'a> { }) } - pub fn parse_optional_str(&mut self) - -> Option<(InternedString, - ast::StrStyle, - Option)> { + pub fn parse_optional_str(&mut self) -> Option<(Symbol, ast::StrStyle, Option)> { let ret = match self.token { - token::Literal(token::Str_(s), suf) => { - let s = self.id_to_interned_str(ast::Ident::with_empty_ctxt(s)); - (s, ast::StrStyle::Cooked, suf) - } - token::Literal(token::StrRaw(s, n), suf) => { - let s = self.id_to_interned_str(ast::Ident::with_empty_ctxt(s)); - (s, ast::StrStyle::Raw(n), suf) - } + token::Literal(token::Str_(s), suf) => (s, ast::StrStyle::Cooked, suf), + token::Literal(token::StrRaw(s, n), suf) => (s, ast::StrStyle::Raw(n), suf), _ => return None }; self.bump(); Some(ret) } - pub fn parse_str(&mut self) -> PResult<'a, (InternedString, StrStyle)> { + pub fn parse_str(&mut self) -> PResult<'a, (Symbol, StrStyle)> { match self.parse_optional_str() { Some((s, style, suf)) => { let sp = self.prev_span; diff --git a/src/libsyntax/parse/token.rs b/src/libsyntax/parse/token.rs index 0198ee073d..8ac39dd462 100644 --- a/src/libsyntax/parse/token.rs +++ b/src/libsyntax/parse/token.rs @@ -16,13 +16,10 @@ pub use self::Token::*; use ast::{self}; use ptr::P; -use util::interner::Interner; +use symbol::keywords; use tokenstream; -use serialize::{Decodable, Decoder, Encodable, Encoder}; -use std::cell::RefCell; use std::fmt; -use std::ops::Deref; use std::rc::Rc; #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Eq, Hash, Debug, Copy)] @@ -301,7 +298,7 @@ pub enum Nonterminal { NtTy(P), NtIdent(ast::SpannedIdent), /// Stuff inside brackets for attributes - NtMeta(P), + NtMeta(ast::MetaItem), NtPath(ast::Path), NtTT(tokenstream::TokenTree), // These are not exposed to macros, but are used by quasiquote. @@ -335,270 +332,3 @@ impl fmt::Debug for Nonterminal { } } } - -// In this macro, there is the requirement that the name (the number) must be monotonically -// increasing by one in the special identifiers, starting at 0; the same holds for the keywords, -// except starting from the next number instead of zero. -macro_rules! 
declare_keywords {( - $( ($index: expr, $konst: ident, $string: expr) )* -) => { - pub mod keywords { - use ast; - #[derive(Clone, Copy, PartialEq, Eq)] - pub struct Keyword { - ident: ast::Ident, - } - impl Keyword { - #[inline] pub fn ident(self) -> ast::Ident { self.ident } - #[inline] pub fn name(self) -> ast::Name { self.ident.name } - } - $( - #[allow(non_upper_case_globals)] - pub const $konst: Keyword = Keyword { - ident: ast::Ident::with_empty_ctxt(ast::Name($index)) - }; - )* - } - - fn mk_fresh_ident_interner() -> IdentInterner { - Interner::prefill(&[$($string,)*]) - } -}} - -// NB: leaving holes in the ident table is bad! a different ident will get -// interned with the id from the hole, but it will be between the min and max -// of the reserved words, and thus tagged as "reserved". -// After modifying this list adjust `is_strict_keyword`/`is_reserved_keyword`, -// this should be rarely necessary though if the keywords are kept in alphabetic order. -declare_keywords! { - // Invalid identifier - (0, Invalid, "") - - // Strict keywords used in the language. - (1, As, "as") - (2, Box, "box") - (3, Break, "break") - (4, Const, "const") - (5, Continue, "continue") - (6, Crate, "crate") - (7, Else, "else") - (8, Enum, "enum") - (9, Extern, "extern") - (10, False, "false") - (11, Fn, "fn") - (12, For, "for") - (13, If, "if") - (14, Impl, "impl") - (15, In, "in") - (16, Let, "let") - (17, Loop, "loop") - (18, Match, "match") - (19, Mod, "mod") - (20, Move, "move") - (21, Mut, "mut") - (22, Pub, "pub") - (23, Ref, "ref") - (24, Return, "return") - (25, SelfValue, "self") - (26, SelfType, "Self") - (27, Static, "static") - (28, Struct, "struct") - (29, Super, "super") - (30, Trait, "trait") - (31, True, "true") - (32, Type, "type") - (33, Unsafe, "unsafe") - (34, Use, "use") - (35, Where, "where") - (36, While, "while") - - // Keywords reserved for future use. - (37, Abstract, "abstract") - (38, Alignof, "alignof") - (39, Become, "become") - (40, Do, "do") - (41, Final, "final") - (42, Macro, "macro") - (43, Offsetof, "offsetof") - (44, Override, "override") - (45, Priv, "priv") - (46, Proc, "proc") - (47, Pure, "pure") - (48, Sizeof, "sizeof") - (49, Typeof, "typeof") - (50, Unsized, "unsized") - (51, Virtual, "virtual") - (52, Yield, "yield") - - // Weak keywords, have special meaning only in specific contexts. - (53, Default, "default") - (54, StaticLifetime, "'static") - (55, Union, "union") -} - -// looks like we can get rid of this completely... -pub type IdentInterner = Interner; - -// if an interner exists in TLS, return it. Otherwise, prepare a -// fresh one. -// FIXME(eddyb) #8726 This should probably use a thread-local reference. -pub fn with_ident_interner T>(f: F) -> T { - thread_local!(static KEY: RefCell = { - RefCell::new(mk_fresh_ident_interner()) - }); - KEY.with(|interner| f(&mut *interner.borrow_mut())) -} - -/// Reset the ident interner to its initial state. -pub fn reset_ident_interner() { - with_ident_interner(|interner| *interner = mk_fresh_ident_interner()); -} - -pub fn clear_ident_interner() { - with_ident_interner(|interner| *interner = IdentInterner::new()); -} - -/// Represents a string stored in the thread-local interner. Because the -/// interner lives for the life of the thread, this can be safely treated as an -/// immortal string, as long as it never crosses between threads. -/// -/// FIXME(pcwalton): You must be careful about what you do in the destructors -/// of objects stored in TLS, because they may run after the interner is -/// destroyed. 
In particular, they must not access string contents. This can -/// be fixed in the future by just leaking all strings until thread death -/// somehow. -#[derive(Clone, PartialEq, Hash, PartialOrd, Eq, Ord)] -pub struct InternedString { - string: Rc, -} - -impl InternedString { - #[inline] - pub fn new(string: &'static str) -> InternedString { - InternedString { - string: Rc::__from_str(string), - } - } - - #[inline] - pub fn new_from_name(name: ast::Name) -> InternedString { - with_ident_interner(|interner| InternedString { string: interner.get(name) }) - } -} - -impl Deref for InternedString { - type Target = str; - - fn deref(&self) -> &str { &self.string } -} - -impl fmt::Debug for InternedString { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - fmt::Debug::fmt(&self.string, f) - } -} - -impl fmt::Display for InternedString { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - fmt::Display::fmt(&self.string, f) - } -} - -impl<'a> PartialEq<&'a str> for InternedString { - #[inline(always)] - fn eq(&self, other: & &'a str) -> bool { - PartialEq::eq(&self.string[..], *other) - } - #[inline(always)] - fn ne(&self, other: & &'a str) -> bool { - PartialEq::ne(&self.string[..], *other) - } -} - -impl<'a> PartialEq for &'a str { - #[inline(always)] - fn eq(&self, other: &InternedString) -> bool { - PartialEq::eq(*self, &other.string[..]) - } - #[inline(always)] - fn ne(&self, other: &InternedString) -> bool { - PartialEq::ne(*self, &other.string[..]) - } -} - -impl PartialEq for InternedString { - #[inline(always)] - fn eq(&self, other: &str) -> bool { - PartialEq::eq(&self.string[..], other) - } - #[inline(always)] - fn ne(&self, other: &str) -> bool { - PartialEq::ne(&self.string[..], other) - } -} - -impl PartialEq for str { - #[inline(always)] - fn eq(&self, other: &InternedString) -> bool { - PartialEq::eq(self, &other.string[..]) - } - #[inline(always)] - fn ne(&self, other: &InternedString) -> bool { - PartialEq::ne(self, &other.string[..]) - } -} - -impl Decodable for InternedString { - fn decode(d: &mut D) -> Result { - Ok(intern(&d.read_str()?).as_str()) - } -} - -impl Encodable for InternedString { - fn encode(&self, s: &mut S) -> Result<(), S::Error> { - s.emit_str(&self.string) - } -} - -/// Interns and returns the string contents of an identifier, using the -/// thread-local interner. -#[inline] -pub fn intern_and_get_ident(s: &str) -> InternedString { - intern(s).as_str() -} - -/// Maps a string to its interned representation. -#[inline] -pub fn intern(s: &str) -> ast::Name { - with_ident_interner(|interner| interner.intern(s)) -} - -/// gensym's a new usize, using the current interner. -#[inline] -pub fn gensym(s: &str) -> ast::Name { - with_ident_interner(|interner| interner.gensym(s)) -} - -/// Maps a string to an identifier with an empty syntax context. -#[inline] -pub fn str_to_ident(s: &str) -> ast::Ident { - ast::Ident::with_empty_ctxt(intern(s)) -} - -/// Maps a string to a gensym'ed identifier. -#[inline] -pub fn gensym_ident(s: &str) -> ast::Ident { - ast::Ident::with_empty_ctxt(gensym(s)) -} - -// create a fresh name that maps to the same string as the old one. -// note that this guarantees that str_ptr_eq(ident_to_string(src),interner_get(fresh_name(src))); -// that is, that the new name and the old one are connected to ptr_eq strings. -pub fn fresh_name(src: ast::Ident) -> ast::Name { - with_ident_interner(|interner| interner.gensym_copy(src.name)) - // following: debug version. 
Could work in final except that it's incompatible with - // good error messages and uses of struct names in ambiguous could-be-binding - // locations. Also definitely destroys the guarantee given above about ptr_eq. - /*let num = rand::thread_rng().gen_uint_range(0,0xffff); - gensym(format!("{}_{}",ident_to_string(src),num))*/ -} diff --git a/src/libsyntax/print/pprust.rs b/src/libsyntax/print/pprust.rs index 7352792a8a..c28b9d0050 100644 --- a/src/libsyntax/print/pprust.rs +++ b/src/libsyntax/print/pprust.rs @@ -19,7 +19,7 @@ use attr; use codemap::{self, CodeMap}; use syntax_pos::{self, BytePos}; use errors; -use parse::token::{self, keywords, BinOpToken, Token, InternedString}; +use parse::token::{self, BinOpToken, Token}; use parse::lexer::comments; use parse; use print::pp::{self, break_offset, word, space, zerobreak, hardbreak}; @@ -27,6 +27,7 @@ use print::pp::{Breaks, eof}; use print::pp::Breaks::{Consistent, Inconsistent}; use ptr::P; use std_inject; +use symbol::{Symbol, keywords}; use tokenstream::{self, TokenTree}; use std::ascii; @@ -119,14 +120,13 @@ pub fn print_crate<'a>(cm: &'a CodeMap, // of the feature gate, so we fake them up here. // #![feature(prelude_import)] - let prelude_import_meta = attr::mk_list_word_item(InternedString::new("prelude_import")); - let list = attr::mk_list_item(InternedString::new("feature"), - vec![prelude_import_meta]); + let prelude_import_meta = attr::mk_list_word_item(Symbol::intern("prelude_import")); + let list = attr::mk_list_item(Symbol::intern("feature"), vec![prelude_import_meta]); let fake_attr = attr::mk_attr_inner(attr::mk_attr_id(), list); try!(s.print_attribute(&fake_attr)); // #![no_std] - let no_std_meta = attr::mk_word_item(InternedString::new("no_std")); + let no_std_meta = attr::mk_word_item(Symbol::intern("no_std")); let fake_attr = attr::mk_attr_inner(attr::mk_attr_id(), no_std_meta); try!(s.print_attribute(&fake_attr)); } @@ -630,7 +630,7 @@ pub trait PrintState<'a> { _ => () } match lit.node { - ast::LitKind::Str(ref st, style) => self.print_string(&st, style), + ast::LitKind::Str(st, style) => self.print_string(&st.as_str(), style), ast::LitKind::Byte(byte) => { let mut res = String::from("b'"); res.extend(ascii::escape_default(byte).map(|c| c as char)); @@ -664,7 +664,7 @@ pub trait PrintState<'a> { &f, t.ty_to_string())) } - ast::LitKind::FloatUnsuffixed(ref f) => word(self.writer(), &f[..]), + ast::LitKind::FloatUnsuffixed(ref f) => word(self.writer(), &f.as_str()), ast::LitKind::Bool(val) => { if val { word(self.writer(), "true") } else { word(self.writer(), "false") } } @@ -727,7 +727,7 @@ pub trait PrintState<'a> { trailing_hardbreak: bool) -> io::Result<()> { let mut count = 0; for attr in attrs { - if attr.node.style == kind { + if attr.style == kind { try!(self.print_attribute_inline(attr, is_inline)); if is_inline { try!(self.nbsp()); @@ -751,11 +751,11 @@ pub trait PrintState<'a> { try!(self.hardbreak_if_not_bol()); } try!(self.maybe_print_comment(attr.span.lo)); - if attr.node.is_sugared_doc { - try!(word(self.writer(), &attr.value_str().unwrap())); + if attr.is_sugared_doc { + try!(word(self.writer(), &attr.value_str().unwrap().as_str())); hardbreak(self.writer()) } else { - match attr.node.style { + match attr.style { ast::AttrStyle::Inner => try!(word(self.writer(), "#![")), ast::AttrStyle::Outer => try!(word(self.writer(), "#[")), } @@ -778,16 +778,16 @@ pub trait PrintState<'a> { fn print_meta_item(&mut self, item: &ast::MetaItem) -> io::Result<()> { try!(self.ibox(INDENT_UNIT)); match item.node { - 
ast::MetaItemKind::Word(ref name) => { - try!(word(self.writer(), &name)); + ast::MetaItemKind::Word => { + try!(word(self.writer(), &item.name.as_str())); } - ast::MetaItemKind::NameValue(ref name, ref value) => { - try!(self.word_space(&name[..])); + ast::MetaItemKind::NameValue(ref value) => { + try!(self.word_space(&item.name.as_str())); try!(self.word_space("=")); try!(self.print_literal(value)); } - ast::MetaItemKind::List(ref name, ref items) => { - try!(word(self.writer(), &name)); + ast::MetaItemKind::List(ref items) => { + try!(word(self.writer(), &item.name.as_str())); try!(self.popen()); try!(self.commasep(Consistent, &items[..], @@ -2128,26 +2128,8 @@ impl<'a> State<'a> { try!(self.print_fn_block_args(&decl)); try!(space(&mut self.s)); - - let default_return = match decl.output { - ast::FunctionRetTy::Default(..) => true, - _ => false - }; - - match body.stmts.last().map(|stmt| &stmt.node) { - Some(&ast::StmtKind::Expr(ref i_expr)) if default_return && - body.stmts.len() == 1 => { - // we extract the block, so as not to create another set of boxes - if let ast::ExprKind::Block(ref blk) = i_expr.node { - try!(self.print_block_unclosed_with_attrs(&blk, &i_expr.attrs)); - } else { - // this is a bare expression - try!(self.print_expr(&i_expr)); - try!(self.end()); // need to close a box - } - } - _ => try!(self.print_block_unclosed(&body)), - } + try!(self.print_expr(body)); + try!(self.end()); // need to close a box // a box will be closed by print_expr, but we didn't want an overall // wrapper so we closed the corresponding opening. so create an @@ -2209,13 +2191,17 @@ impl<'a> State<'a> { ast::ExprKind::Path(Some(ref qself), ref path) => { try!(self.print_qpath(path, qself, true)) } - ast::ExprKind::Break(opt_ident) => { + ast::ExprKind::Break(opt_ident, ref opt_expr) => { try!(word(&mut self.s, "break")); try!(space(&mut self.s)); if let Some(ident) = opt_ident { try!(self.print_ident(ident.node)); try!(space(&mut self.s)); } + if let Some(ref expr) = *opt_expr { + try!(self.print_expr(expr)); + try!(space(&mut self.s)); + } } ast::ExprKind::Continue(opt_ident) => { try!(word(&mut self.s, "continue")); @@ -2238,19 +2224,18 @@ impl<'a> State<'a> { ast::ExprKind::InlineAsm(ref a) => { try!(word(&mut self.s, "asm!")); try!(self.popen()); - try!(self.print_string(&a.asm, a.asm_str_style)); + try!(self.print_string(&a.asm.as_str(), a.asm_str_style)); try!(self.word_space(":")); - try!(self.commasep(Inconsistent, &a.outputs, - |s, out| { - let mut ch = out.constraint.chars(); + try!(self.commasep(Inconsistent, &a.outputs, |s, out| { + let constraint = out.constraint.as_str(); + let mut ch = constraint.chars(); match ch.next() { Some('=') if out.is_rw => { try!(s.print_string(&format!("+{}", ch.as_str()), ast::StrStyle::Cooked)) } - _ => try!(s.print_string(&out.constraint, - ast::StrStyle::Cooked)) + _ => try!(s.print_string(&constraint, ast::StrStyle::Cooked)) } try!(s.popen()); try!(s.print_expr(&out.expr)); @@ -2260,9 +2245,8 @@ impl<'a> State<'a> { try!(space(&mut self.s)); try!(self.word_space(":")); - try!(self.commasep(Inconsistent, &a.inputs, - |s, &(ref co, ref o)| { - try!(s.print_string(&co, ast::StrStyle::Cooked)); + try!(self.commasep(Inconsistent, &a.inputs, |s, &(co, ref o)| { + try!(s.print_string(&co.as_str(), ast::StrStyle::Cooked)); try!(s.popen()); try!(s.print_expr(&o)); try!(s.pclose()); @@ -2273,7 +2257,7 @@ impl<'a> State<'a> { try!(self.commasep(Inconsistent, &a.clobbers, |s, co| { - try!(s.print_string(&co, ast::StrStyle::Cooked)); + 
try!(s.print_string(&co.as_str(), ast::StrStyle::Cooked)); Ok(()) })); @@ -3100,12 +3084,11 @@ mod tests { use ast; use codemap; - use parse::token; use syntax_pos; #[test] fn test_fun_to_string() { - let abba_ident = token::str_to_ident("abba"); + let abba_ident = ast::Ident::from_str("abba"); let decl = ast::FnDecl { inputs: Vec::new(), @@ -3121,7 +3104,7 @@ mod tests { #[test] fn test_variant_to_string() { - let ident = token::str_to_ident("principal_skinner"); + let ident = ast::Ident::from_str("principal_skinner"); let var = codemap::respan(syntax_pos::DUMMY_SP, ast::Variant_ { name: ident, diff --git a/src/libsyntax/show_span.rs b/src/libsyntax/show_span.rs index 928ffb202d..263a4f13c1 100644 --- a/src/libsyntax/show_span.rs +++ b/src/libsyntax/show_span.rs @@ -44,29 +44,29 @@ struct ShowSpanVisitor<'a> { mode: Mode, } -impl<'a> Visitor for ShowSpanVisitor<'a> { - fn visit_expr(&mut self, e: &ast::Expr) { +impl<'a> Visitor<'a> for ShowSpanVisitor<'a> { + fn visit_expr(&mut self, e: &'a ast::Expr) { if let Mode::Expression = self.mode { self.span_diagnostic.span_warn(e.span, "expression"); } visit::walk_expr(self, e); } - fn visit_pat(&mut self, p: &ast::Pat) { + fn visit_pat(&mut self, p: &'a ast::Pat) { if let Mode::Pattern = self.mode { self.span_diagnostic.span_warn(p.span, "pattern"); } visit::walk_pat(self, p); } - fn visit_ty(&mut self, t: &ast::Ty) { + fn visit_ty(&mut self, t: &'a ast::Ty) { if let Mode::Type = self.mode { self.span_diagnostic.span_warn(t.span, "type"); } visit::walk_ty(self, t); } - fn visit_mac(&mut self, mac: &ast::Mac) { + fn visit_mac(&mut self, mac: &'a ast::Mac) { visit::walk_mac(self, mac); } } diff --git a/src/libsyntax/std_inject.rs b/src/libsyntax/std_inject.rs index 1b63a2b707..6a291ad9c4 100644 --- a/src/libsyntax/std_inject.rs +++ b/src/libsyntax/std_inject.rs @@ -10,10 +10,10 @@ use ast; use attr; +use symbol::{Symbol, keywords}; use syntax_pos::{DUMMY_SP, Span}; use codemap::{self, ExpnInfo, NameAndSpan, MacroAttribute}; -use parse::token::{intern, InternedString, keywords}; -use parse::{token, ParseSess}; +use parse::ParseSess; use ptr::P; /// Craft a span that will be ignored by the stability lint's @@ -23,7 +23,7 @@ fn ignored_span(sess: &ParseSess, sp: Span) -> Span { let info = ExpnInfo { call_site: DUMMY_SP, callee: NameAndSpan { - format: MacroAttribute(intern("std_inject")), + format: MacroAttribute(Symbol::intern("std_inject")), span: None, allow_internal_unstable: true, } @@ -53,14 +53,14 @@ pub fn maybe_inject_crates_ref(sess: &ParseSess, None => return krate, }; - let crate_name = token::intern(&alt_std_name.unwrap_or(name.to_string())); + let crate_name = Symbol::intern(&alt_std_name.unwrap_or(name.to_string())); krate.module.items.insert(0, P(ast::Item { attrs: vec![attr::mk_attr_outer(attr::mk_attr_id(), - attr::mk_word_item(InternedString::new("macro_use")))], + attr::mk_word_item(Symbol::intern("macro_use")))], vis: ast::Visibility::Inherited, node: ast::ItemKind::ExternCrate(Some(crate_name)), - ident: token::str_to_ident(name), + ident: ast::Ident::from_str(name), id: ast::DUMMY_NODE_ID, span: DUMMY_SP, })); @@ -68,22 +68,21 @@ pub fn maybe_inject_crates_ref(sess: &ParseSess, let span = ignored_span(sess, DUMMY_SP); krate.module.items.insert(0, P(ast::Item { attrs: vec![ast::Attribute { - node: ast::Attribute_ { - style: ast::AttrStyle::Outer, - value: P(ast::MetaItem { - node: ast::MetaItemKind::Word(token::intern_and_get_ident("prelude_import")), - span: span, - }), - id: attr::mk_attr_id(), - is_sugared_doc: false, + 
style: ast::AttrStyle::Outer, + value: ast::MetaItem { + name: Symbol::intern("prelude_import"), + node: ast::MetaItemKind::Word, + span: span, }, + id: attr::mk_attr_id(), + is_sugared_doc: false, span: span, }], vis: ast::Visibility::Inherited, node: ast::ItemKind::Use(P(codemap::dummy_spanned(ast::ViewPathGlob(ast::Path { global: false, segments: vec![name, "prelude", "v1"].into_iter().map(|name| ast::PathSegment { - identifier: token::str_to_ident(name), + identifier: ast::Ident::from_str(name), parameters: ast::PathParameters::none(), }).collect(), span: span, diff --git a/src/libsyntax/symbol.rs b/src/libsyntax/symbol.rs new file mode 100644 index 0000000000..fe9a176179 --- /dev/null +++ b/src/libsyntax/symbol.rs @@ -0,0 +1,303 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! An "interner" is a data structure that associates values with usize tags and +//! allows bidirectional lookup; i.e. given a value, one can easily find the +//! type, and vice versa. + +use serialize::{Decodable, Decoder, Encodable, Encoder}; +use std::cell::RefCell; +use std::collections::HashMap; +use std::fmt; + +/// A symbol is an interned or gensymed string. +#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)] +pub struct Symbol(u32); + +// The interner in thread-local, so `Symbol` shouldn't move between threads. +impl !Send for Symbol { } + +impl Symbol { + /// Maps a string to its interned representation. + pub fn intern(string: &str) -> Self { + with_interner(|interner| interner.intern(string)) + } + + /// gensym's a new usize, using the current interner. 
+ pub fn gensym(string: &str) -> Self { + with_interner(|interner| interner.gensym(string)) + } + + pub fn as_str(self) -> InternedString { + with_interner(|interner| unsafe { + InternedString { + string: ::std::mem::transmute::<&str, &str>(interner.get(self)) + } + }) + } + + pub fn as_u32(self) -> u32 { + self.0 + } +} + +impl fmt::Debug for Symbol { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "{}({})", self, self.0) + } +} + +impl fmt::Display for Symbol { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Display::fmt(&self.as_str(), f) + } +} + +impl Encodable for Symbol { + fn encode(&self, s: &mut S) -> Result<(), S::Error> { + s.emit_str(&self.as_str()) + } +} + +impl Decodable for Symbol { + fn decode(d: &mut D) -> Result { + Ok(Symbol::intern(&d.read_str()?)) + } +} + +impl<'a> PartialEq<&'a str> for Symbol { + fn eq(&self, other: &&str) -> bool { + *self.as_str() == **other + } +} + +#[derive(Default)] +pub struct Interner { + names: HashMap, Symbol>, + strings: Vec>, +} + +impl Interner { + pub fn new() -> Self { + Interner::default() + } + + fn prefill(init: &[&str]) -> Self { + let mut this = Interner::new(); + for &string in init { + this.intern(string); + } + this + } + + pub fn intern(&mut self, string: &str) -> Symbol { + if let Some(&name) = self.names.get(string) { + return name; + } + + let name = Symbol(self.strings.len() as u32); + let string = string.to_string().into_boxed_str(); + self.strings.push(string.clone()); + self.names.insert(string, name); + name + } + + fn gensym(&mut self, string: &str) -> Symbol { + let gensym = Symbol(self.strings.len() as u32); + // leave out of `names` to avoid colliding + self.strings.push(string.to_string().into_boxed_str()); + gensym + } + + pub fn get(&self, name: Symbol) -> &str { + &self.strings[name.0 as usize] + } +} + +// In this macro, there is the requirement that the name (the number) must be monotonically +// increasing by one in the special identifiers, starting at 0; the same holds for the keywords, +// except starting from the next number instead of zero. +macro_rules! declare_keywords {( + $( ($index: expr, $konst: ident, $string: expr) )* +) => { + pub mod keywords { + use ast; + #[derive(Clone, Copy, PartialEq, Eq)] + pub struct Keyword { + ident: ast::Ident, + } + impl Keyword { + #[inline] pub fn ident(self) -> ast::Ident { self.ident } + #[inline] pub fn name(self) -> ast::Name { self.ident.name } + } + $( + #[allow(non_upper_case_globals)] + pub const $konst: Keyword = Keyword { + ident: ast::Ident::with_empty_ctxt(ast::Name($index)) + }; + )* + } + + impl Interner { + fn fresh() -> Self { + Interner::prefill(&[$($string,)*]) + } + } +}} + +// NB: leaving holes in the ident table is bad! a different ident will get +// interned with the id from the hole, but it will be between the min and max +// of the reserved words, and thus tagged as "reserved". +// After modifying this list adjust `is_strict_keyword`/`is_reserved_keyword`, +// this should be rarely necessary though if the keywords are kept in alphabetic order. +declare_keywords! { + // Invalid identifier + (0, Invalid, "") + + // Strict keywords used in the language. 
+ (1, As, "as") + (2, Box, "box") + (3, Break, "break") + (4, Const, "const") + (5, Continue, "continue") + (6, Crate, "crate") + (7, Else, "else") + (8, Enum, "enum") + (9, Extern, "extern") + (10, False, "false") + (11, Fn, "fn") + (12, For, "for") + (13, If, "if") + (14, Impl, "impl") + (15, In, "in") + (16, Let, "let") + (17, Loop, "loop") + (18, Match, "match") + (19, Mod, "mod") + (20, Move, "move") + (21, Mut, "mut") + (22, Pub, "pub") + (23, Ref, "ref") + (24, Return, "return") + (25, SelfValue, "self") + (26, SelfType, "Self") + (27, Static, "static") + (28, Struct, "struct") + (29, Super, "super") + (30, Trait, "trait") + (31, True, "true") + (32, Type, "type") + (33, Unsafe, "unsafe") + (34, Use, "use") + (35, Where, "where") + (36, While, "while") + + // Keywords reserved for future use. + (37, Abstract, "abstract") + (38, Alignof, "alignof") + (39, Become, "become") + (40, Do, "do") + (41, Final, "final") + (42, Macro, "macro") + (43, Offsetof, "offsetof") + (44, Override, "override") + (45, Priv, "priv") + (46, Proc, "proc") + (47, Pure, "pure") + (48, Sizeof, "sizeof") + (49, Typeof, "typeof") + (50, Unsized, "unsized") + (51, Virtual, "virtual") + (52, Yield, "yield") + + // Weak keywords, have special meaning only in specific contexts. + (53, Default, "default") + (54, StaticLifetime, "'static") + (55, Union, "union") +} + +// If an interner exists in TLS, return it. Otherwise, prepare a fresh one. +fn with_interner T>(f: F) -> T { + thread_local!(static INTERNER: RefCell = { + RefCell::new(Interner::fresh()) + }); + INTERNER.with(|interner| f(&mut *interner.borrow_mut())) +} + +/// Represents a string stored in the thread-local interner. Because the +/// interner lives for the life of the thread, this can be safely treated as an +/// immortal string, as long as it never crosses between threads. +/// +/// FIXME(pcwalton): You must be careful about what you do in the destructors +/// of objects stored in TLS, because they may run after the interner is +/// destroyed. In particular, they must not access string contents. This can +/// be fixed in the future by just leaking all strings until thread death +/// somehow. 
+#[derive(Clone, PartialEq, Hash, PartialOrd, Eq, Ord)] +pub struct InternedString { + string: &'static str, +} + +impl !Send for InternedString { } + +impl ::std::ops::Deref for InternedString { + type Target = str; + fn deref(&self) -> &str { self.string } +} + +impl fmt::Debug for InternedString { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Debug::fmt(self.string, f) + } +} + +impl fmt::Display for InternedString { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Display::fmt(self.string, f) + } +} + +impl Decodable for InternedString { + fn decode(d: &mut D) -> Result { + Ok(Symbol::intern(&d.read_str()?).as_str()) + } +} + +impl Encodable for InternedString { + fn encode(&self, s: &mut S) -> Result<(), S::Error> { + s.emit_str(self.string) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use ast::Name; + + #[test] + fn interner_tests() { + let mut i: Interner = Interner::new(); + // first one is zero: + assert_eq!(i.intern("dog"), Name(0)); + // re-use gets the same entry: + assert_eq!(i.intern ("dog"), Name(0)); + // different string gets a different #: + assert_eq!(i.intern("cat"), Name(1)); + assert_eq!(i.intern("cat"), Name(1)); + // dog is still at zero + assert_eq!(i.intern("dog"), Name(0)); + // gensym gets 3 + assert_eq!(i.gensym("zebra"), Name(2)); + // gensym of same string gets new number : + assert_eq!(i.gensym("zebra"), Name(3)); + // gensym of *existing* string gets new number: + assert_eq!(i.gensym("dog"), Name(4)); + } +} diff --git a/src/libsyntax/test.rs b/src/libsyntax/test.rs index 618878c1f7..fca89e265e 100644 --- a/src/libsyntax/test.rs +++ b/src/libsyntax/test.rs @@ -34,21 +34,21 @@ use ext::expand::ExpansionConfig; use fold::Folder; use util::move_map::MoveMap; use fold; -use parse::token::{intern, keywords, InternedString}; use parse::{token, ParseSess}; use print::pprust; -use ast; +use ast::{self, Ident}; use ptr::P; +use symbol::{self, Symbol, keywords}; use util::small_vector::SmallVector; enum ShouldPanic { No, - Yes(Option), + Yes(Option), } struct Test { span: Span, - path: Vec , + path: Vec , bench: bool, ignore: bool, should_panic: ShouldPanic @@ -57,14 +57,14 @@ struct Test { struct TestCtxt<'a> { sess: &'a ParseSess, span_diagnostic: &'a errors::Handler, - path: Vec, + path: Vec, ext_cx: ExtCtxt<'a>, testfns: Vec, - reexport_test_harness_main: Option, + reexport_test_harness_main: Option, is_test_crate: bool, // top-level re-export submodule, filled out after folding is finished - toplevel_reexport: Option, + toplevel_reexport: Option, } // Traverse the crate, collecting all the test functions, eliding any @@ -91,10 +91,10 @@ pub fn modify_for_testing(sess: &ParseSess, struct TestHarnessGenerator<'a> { cx: TestCtxt<'a>, - tests: Vec, + tests: Vec, // submodule name, gensym'd identifier for re-exports - tested_submods: Vec<(ast::Ident, ast::Ident)>, + tested_submods: Vec<(Ident, Ident)>, } impl<'a> fold::Folder for TestHarnessGenerator<'a> { @@ -132,7 +132,7 @@ impl<'a> fold::Folder for TestHarnessGenerator<'a> { path: self.cx.path.clone(), bench: is_bench_fn(&self.cx, &i), ignore: is_ignored(&i), - should_panic: should_panic(&i) + should_panic: should_panic(&i, &self.cx) }; self.cx.testfns.push(test); self.tests.push(i.ident); @@ -191,8 +191,8 @@ impl fold::Folder for EntryPointCleaner { EntryPointType::MainAttr | EntryPointType::Start => folded.map(|ast::Item {id, ident, attrs, node, vis, span}| { - let allow_str = InternedString::new("allow"); - let dead_code_str = InternedString::new("dead_code"); + let 
allow_str = Symbol::intern("allow"); + let dead_code_str = Symbol::intern("dead_code"); let word_vec = vec![attr::mk_list_word_item(dead_code_str)]; let allow_dead_code_item = attr::mk_list_item(allow_str, word_vec); let allow_dead_code = attr::mk_attr_outer(attr::mk_attr_id(), @@ -222,15 +222,18 @@ impl fold::Folder for EntryPointCleaner { fn fold_mac(&mut self, mac: ast::Mac) -> ast::Mac { mac } } -fn mk_reexport_mod(cx: &mut TestCtxt, parent: ast::NodeId, tests: Vec, - tested_submods: Vec<(ast::Ident, ast::Ident)>) -> (P, ast::Ident) { - let super_ = token::str_to_ident("super"); +fn mk_reexport_mod(cx: &mut TestCtxt, + parent: ast::NodeId, + tests: Vec, + tested_submods: Vec<(Ident, Ident)>) + -> (P, Ident) { + let super_ = Ident::from_str("super"); // Generate imports with `#[allow(private_in_public)]` to work around issue #36768. let allow_private_in_public = cx.ext_cx.attribute(DUMMY_SP, cx.ext_cx.meta_list( DUMMY_SP, - InternedString::new("allow"), - vec![cx.ext_cx.meta_list_item_word(DUMMY_SP, InternedString::new("private_in_public"))], + Symbol::intern("allow"), + vec![cx.ext_cx.meta_list_item_word(DUMMY_SP, Symbol::intern("private_in_public"))], )); let items = tests.into_iter().map(|r| { cx.ext_cx.item_use_simple(DUMMY_SP, ast::Visibility::Public, @@ -247,7 +250,7 @@ fn mk_reexport_mod(cx: &mut TestCtxt, parent: ast::NodeId, tests: Vec, + reexport_test_harness_main: Option, krate: ast::Crate, sd: &errors::Handler) -> ast::Crate { // Remove the entry points @@ -286,7 +289,7 @@ fn generate_test_harness(sess: &ParseSess, cx.ext_cx.bt_push(ExpnInfo { call_site: DUMMY_SP, callee: NameAndSpan { - format: MacroAttribute(intern("test")), + format: MacroAttribute(Symbol::intern("test")), span: None, allow_internal_unstable: false, } @@ -304,9 +307,9 @@ fn generate_test_harness(sess: &ParseSess, /// The expanded code calls some unstable functions in the test crate. 
fn ignored_span(cx: &TestCtxt, sp: Span) -> Span { let info = ExpnInfo { - call_site: DUMMY_SP, + call_site: sp, callee: NameAndSpan { - format: MacroAttribute(intern("test")), + format: MacroAttribute(Symbol::intern("test")), span: None, allow_internal_unstable: true, } @@ -395,14 +398,44 @@ fn is_ignored(i: &ast::Item) -> bool { i.attrs.iter().any(|attr| attr.check_name("ignore")) } -fn should_panic(i: &ast::Item) -> ShouldPanic { +fn should_panic(i: &ast::Item, cx: &TestCtxt) -> ShouldPanic { match i.attrs.iter().find(|attr| attr.check_name("should_panic")) { Some(attr) => { - let msg = attr.meta_item_list() - .and_then(|list| list.iter().find(|mi| mi.check_name("expected"))) - .and_then(|li| li.meta_item()) - .and_then(|mi| mi.value_str()); - ShouldPanic::Yes(msg) + let sd = cx.span_diagnostic; + if attr.is_value_str() { + sd.struct_span_warn( + attr.span(), + "attribute must be of the form: \ + `#[should_panic]` or \ + `#[should_panic(expected = \"error message\")]`" + ).note("Errors in this attribute were erroneously allowed \ + and will become a hard error in a future release.") + .emit(); + return ShouldPanic::Yes(None); + } + match attr.meta_item_list() { + // Handle #[should_panic] + None => ShouldPanic::Yes(None), + // Handle #[should_panic(expected = "foo")] + Some(list) => { + let msg = list.iter() + .find(|mi| mi.check_name("expected")) + .and_then(|mi| mi.meta_item()) + .and_then(|mi| mi.value_str()); + if list.len() != 1 || msg.is_none() { + sd.struct_span_warn( + attr.span(), + "argument must be of the form: \ + `expected = \"error message\"`" + ).note("Errors in this attribute were erroneously \ + allowed and will become a hard error in a \ + future release.").emit(); + ShouldPanic::Yes(None) + } else { + ShouldPanic::Yes(msg) + } + }, + } } None => ShouldPanic::No, } @@ -426,7 +459,8 @@ mod __test { */ fn mk_std(cx: &TestCtxt) -> P { - let id_test = token::str_to_ident("test"); + let id_test = Ident::from_str("test"); + let sp = ignored_span(cx, DUMMY_SP); let (vi, vis, ident) = if cx.is_test_crate { (ast::ItemKind::Use( P(nospan(ast::ViewPathSimple(id_test, @@ -441,7 +475,7 @@ fn mk_std(cx: &TestCtxt) -> P { node: vi, attrs: vec![], vis: vis, - span: DUMMY_SP + span: sp }) } @@ -457,16 +491,17 @@ fn mk_main(cx: &mut TestCtxt) -> P { let ecx = &cx.ext_cx; // test::test_main_static - let test_main_path = ecx.path(sp, vec![token::str_to_ident("test"), - token::str_to_ident("test_main_static")]); + let test_main_path = + ecx.path(sp, vec![Ident::from_str("test"), Ident::from_str("test_main_static")]); + // test::test_main_static(...) let test_main_path_expr = ecx.expr_path(test_main_path); - let tests_ident_expr = ecx.expr_ident(sp, token::str_to_ident("TESTS")); + let tests_ident_expr = ecx.expr_ident(sp, Ident::from_str("TESTS")); let call_test_main = ecx.expr_call(sp, test_main_path_expr, vec![tests_ident_expr]); let call_test_main = ecx.stmt_expr(call_test_main); // #![main] - let main_meta = ecx.meta_word(sp, token::intern_and_get_ident("main")); + let main_meta = ecx.meta_word(sp, Symbol::intern("main")); let main_attr = ecx.attribute(sp, main_meta); // pub fn main() { ... 
} let main_ret_ty = ecx.ty(sp, ast::TyKind::Tup(vec![])); @@ -476,7 +511,7 @@ fn mk_main(cx: &mut TestCtxt) -> P { dummy_spanned(ast::Constness::NotConst), ::abi::Abi::Rust, ast::Generics::default(), main_body); let main = P(ast::Item { - ident: token::str_to_ident("main"), + ident: Ident::from_str("main"), attrs: vec![main_attr], id: ast::DUMMY_NODE_ID, node: main, @@ -503,7 +538,7 @@ fn mk_test_module(cx: &mut TestCtxt) -> (P, Option>) { items: vec![import, mainfn, tests], }; let item_ = ast::ItemKind::Mod(testmod); - let mod_ident = token::gensym_ident("__test"); + let mod_ident = Ident::with_empty_ctxt(Symbol::gensym("__test")); let mut expander = cx.ext_cx.monotonic_expander(); let item = expander.fold_item(P(ast::Item { @@ -514,13 +549,13 @@ fn mk_test_module(cx: &mut TestCtxt) -> (P, Option>) { vis: ast::Visibility::Public, span: DUMMY_SP, })).pop().unwrap(); - let reexport = cx.reexport_test_harness_main.as_ref().map(|s| { + let reexport = cx.reexport_test_harness_main.map(|s| { // building `use = __test::main` - let reexport_ident = token::str_to_ident(&s); + let reexport_ident = Ident::with_empty_ctxt(s); let use_path = nospan(ast::ViewPathSimple(reexport_ident, - path_node(vec![mod_ident, token::str_to_ident("main")]))); + path_node(vec![mod_ident, Ident::from_str("main")]))); expander.fold_item(P(ast::Item { id: ast::DUMMY_NODE_ID, @@ -541,7 +576,7 @@ fn nospan(t: T) -> codemap::Spanned { codemap::Spanned { node: t, span: DUMMY_SP } } -fn path_node(ids: Vec ) -> ast::Path { +fn path_node(ids: Vec) -> ast::Path { ast::Path { span: DUMMY_SP, global: false, @@ -552,7 +587,7 @@ fn path_node(ids: Vec ) -> ast::Path { } } -fn path_name_i(idents: &[ast::Ident]) -> String { +fn path_name_i(idents: &[Ident]) -> String { // FIXME: Bad copies (#2543 -- same for everything else that says "bad") idents.iter().map(|i| i.to_string()).collect::>().join("::") } @@ -564,7 +599,7 @@ fn mk_tests(cx: &TestCtxt) -> P { // FIXME #15962: should be using quote_item, but that stringifies // __test_reexports, causing it to be reinterned, losing the // gensym information. - let sp = DUMMY_SP; + let sp = ignored_span(cx, DUMMY_SP); let ecx = &cx.ext_cx; let struct_type = ecx.ty_path(ecx.path(sp, vec![ecx.ident_of("self"), ecx.ident_of("test"), @@ -584,7 +619,7 @@ fn mk_tests(cx: &TestCtxt) -> P { fn is_test_crate(krate: &ast::Crate) -> bool { match attr::find_crate_name(&krate.attrs) { - Some(ref s) if "test" == &s[..] 
=> true, + Some(s) if "test" == &*s.as_str() => true, _ => false } } @@ -630,7 +665,7 @@ fn mk_test_desc_and_fn_rec(cx: &TestCtxt, test: &Test) -> P { // path to the #[test] function: "foo::bar::baz" let path_string = path_name_i(&path[..]); - let name_expr = ecx.expr_str(span, token::intern_and_get_ident(&path_string[..])); + let name_expr = ecx.expr_str(span, Symbol::intern(&path_string)); // self::test::StaticTestName($name_expr) let name_expr = ecx.expr_call(span, @@ -643,10 +678,10 @@ fn mk_test_desc_and_fn_rec(cx: &TestCtxt, test: &Test) -> P { }; let fail_expr = match test.should_panic { ShouldPanic::No => ecx.expr_path(should_panic_path("No")), - ShouldPanic::Yes(ref msg) => { - match *msg { - Some(ref msg) => { - let msg = ecx.expr_str(span, msg.clone()); + ShouldPanic::Yes(msg) => { + match msg { + Some(msg) => { + let msg = ecx.expr_str(span, msg); let path = should_panic_path("YesWithMessage"); ecx.expr_call(span, ecx.expr_path(path), vec![msg]) } diff --git a/src/libsyntax/test_snippet.rs b/src/libsyntax/test_snippet.rs new file mode 100644 index 0000000000..98e574867b --- /dev/null +++ b/src/libsyntax/test_snippet.rs @@ -0,0 +1,546 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use codemap::CodeMap; +use errors::Handler; +use errors::emitter::EmitterWriter; +use std::io; +use std::io::prelude::*; +use std::rc::Rc; +use std::str; +use std::sync::{Arc, Mutex}; +use syntax_pos::{BytePos, NO_EXPANSION, Span, MultiSpan}; + +/// Identify a position in the text by the Nth occurrence of a string. 
+struct Position { + string: &'static str, + count: usize, +} + +struct SpanLabel { + start: Position, + end: Position, + label: &'static str, +} + +struct Shared { + data: Arc>, +} + +impl Write for Shared { + fn write(&mut self, buf: &[u8]) -> io::Result { + self.data.lock().unwrap().write(buf) + } + + fn flush(&mut self) -> io::Result<()> { + self.data.lock().unwrap().flush() + } +} + +fn test_harness(file_text: &str, span_labels: Vec, expected_output: &str) { + let output = Arc::new(Mutex::new(Vec::new())); + + let code_map = Rc::new(CodeMap::new()); + code_map.new_filemap_and_lines("test.rs", None, &file_text); + + let primary_span = make_span(&file_text, &span_labels[0].start, &span_labels[0].end); + let mut msp = MultiSpan::from_span(primary_span); + for span_label in span_labels { + let span = make_span(&file_text, &span_label.start, &span_label.end); + msp.push_span_label(span, span_label.label.to_string()); + println!("span: {:?} label: {:?}", span, span_label.label); + println!("text: {:?}", code_map.span_to_snippet(span)); + } + + let emitter = EmitterWriter::new(Box::new(Shared { data: output.clone() }), + Some(code_map.clone())); + let handler = Handler::with_emitter(true, false, Box::new(emitter)); + handler.span_err(msp, "foo"); + + assert!(expected_output.chars().next() == Some('\n'), + "expected output should begin with newline"); + let expected_output = &expected_output[1..]; + + let bytes = output.lock().unwrap(); + let actual_output = str::from_utf8(&bytes).unwrap(); + println!("expected output:\n------\n{}------", expected_output); + println!("actual output:\n------\n{}------", actual_output); + + assert!(expected_output == actual_output) +} + +fn make_span(file_text: &str, start: &Position, end: &Position) -> Span { + let start = make_pos(file_text, start); + let end = make_pos(file_text, end) + end.string.len(); // just after matching thing ends + assert!(start <= end); + Span { + lo: BytePos(start as u32), + hi: BytePos(end as u32), + expn_id: NO_EXPANSION, + } +} + +fn make_pos(file_text: &str, pos: &Position) -> usize { + let mut remainder = file_text; + let mut offset = 0; + for _ in 0..pos.count { + if let Some(n) = remainder.find(&pos.string) { + offset += n; + remainder = &remainder[n + 1..]; + } else { + panic!("failed to find {} instances of {:?} in {:?}", + pos.count, + pos.string, + file_text); + } + } + offset +} + +#[test] +fn ends_on_col0() { + test_harness(r#" +fn foo() { +} +"#, + vec![ + SpanLabel { + start: Position { + string: "{", + count: 1, + }, + end: Position { + string: "}", + count: 1, + }, + label: "test", + }, + ], + r#" +error: foo + --> test.rs:2:10 + | +2 | fn foo() { + | __________^ starting here... +3 | | } + | |_^ ...ending here: test + +"#); +} + +#[test] +fn ends_on_col2() { + test_harness(r#" +fn foo() { + + + } +"#, + vec![ + SpanLabel { + start: Position { + string: "{", + count: 1, + }, + end: Position { + string: "}", + count: 1, + }, + label: "test", + }, + ], + r#" +error: foo + --> test.rs:2:10 + | +2 | fn foo() { + | __________^ starting here... 
+3 | | +4 | | +5 | | } + | |___^ ...ending here: test + +"#); +} +#[test] +fn non_nested() { + test_harness(r#" +fn foo() { + X0 Y0 + X1 Y1 + X2 Y2 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "X0", + count: 1, + }, + end: Position { + string: "X2", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "Y2", + count: 1, + }, + label: "`Y` is a good letter too", + }, + ], + r#" +error: foo + --> test.rs:3:3 + | +3 | X0 Y0 + | ____^__- starting here... + | | ___| + | || starting here... +4 | || X1 Y1 +5 | || X2 Y2 + | ||____^__- ...ending here: `Y` is a good letter too + | |____| + | ...ending here: `X` is a good letter + +"#); +} + +#[test] +fn nested() { + test_harness(r#" +fn foo() { + X0 Y0 + Y1 X1 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "X0", + count: 1, + }, + end: Position { + string: "X1", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "Y1", + count: 1, + }, + label: "`Y` is a good letter too", + }, + ], +r#" +error: foo + --> test.rs:3:3 + | +3 | X0 Y0 + | ____^__- starting here... + | | ___| + | || starting here... +4 | || Y1 X1 + | ||____-__^ ...ending here: `X` is a good letter + | |_____| + | ...ending here: `Y` is a good letter too + +"#); +} + +#[test] +fn different_overlap() { + test_harness(r#" +fn foo() { + X0 Y0 Z0 + X1 Y1 Z1 + X2 Y2 Z2 + X3 Y3 Z3 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "X2", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Z1", + count: 1, + }, + end: Position { + string: "X3", + count: 1, + }, + label: "`Y` is a good letter too", + }, + ], + r#" +error: foo + --> test.rs:3:6 + | +3 | X0 Y0 Z0 + | ______^ starting here... +4 | | X1 Y1 Z1 + | |_________- starting here... +5 | || X2 Y2 Z2 + | ||____^ ...ending here: `X` is a good letter +6 | | X3 Y3 Z3 + | |_____- ...ending here: `Y` is a good letter too + +"#); +} + +#[test] +fn triple_overlap() { + test_harness(r#" +fn foo() { + X0 Y0 Z0 + X1 Y1 Z1 + X2 Y2 Z2 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "X0", + count: 1, + }, + end: Position { + string: "X2", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "Y2", + count: 1, + }, + label: "`Y` is a good letter too", + }, + SpanLabel { + start: Position { + string: "Z0", + count: 1, + }, + end: Position { + string: "Z2", + count: 1, + }, + label: "`Z` label", + }, + ], + r#" +error: foo + --> test.rs:3:3 + | +3 | X0 Y0 Z0 + | _____^__-__- starting here... + | | ____|__| + | || ___| starting here... + | ||| starting here... 
+4 | ||| X1 Y1 Z1 +5 | ||| X2 Y2 Z2 + | |||____^__-__- ...ending here: `Z` label + | ||____|__| + | |____| ...ending here: `Y` is a good letter too + | ...ending here: `X` is a good letter + +"#); +} + +#[test] +fn minimum_depth() { + test_harness(r#" +fn foo() { + X0 Y0 Z0 + X1 Y1 Z1 + X2 Y2 Z2 + X3 Y3 Z3 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "X1", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Y1", + count: 1, + }, + end: Position { + string: "Z2", + count: 1, + }, + label: "`Y` is a good letter too", + }, + SpanLabel { + start: Position { + string: "X2", + count: 1, + }, + end: Position { + string: "Y3", + count: 1, + }, + label: "`Z`", + }, + ], + r#" +error: foo + --> test.rs:3:6 + | +3 | X0 Y0 Z0 + | ______^ starting here... +4 | | X1 Y1 Z1 + | |____^_- starting here... + | ||____| + | | ...ending here: `X` is a good letter +5 | | X2 Y2 Z2 + | |____-______- ...ending here: `Y` is a good letter too + | ____| + | | starting here... +6 | | X3 Y3 Z3 + | |________- ...ending here: `Z` + +"#); +} + +#[test] +fn non_overlaping() { + test_harness(r#" +fn foo() { + X0 Y0 Z0 + X1 Y1 Z1 + X2 Y2 Z2 + X3 Y3 Z3 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "X1", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Y2", + count: 1, + }, + end: Position { + string: "Z3", + count: 1, + }, + label: "`Y` is a good letter too", + }, + ], + r#" +error: foo + --> test.rs:3:6 + | +3 | X0 Y0 Z0 + | ______^ starting here... +4 | | X1 Y1 Z1 + | |____^ ...ending here: `X` is a good letter +5 | X2 Y2 Z2 + | ______- starting here... +6 | | X3 Y3 Z3 + | |__________- ...ending here: `Y` is a good letter too + +"#); +} +#[test] +fn overlaping_start_and_end() { + test_harness(r#" +fn foo() { + X0 Y0 Z0 + X1 Y1 Z1 + X2 Y2 Z2 + X3 Y3 Z3 +} +"#, + vec![ + SpanLabel { + start: Position { + string: "Y0", + count: 1, + }, + end: Position { + string: "X1", + count: 1, + }, + label: "`X` is a good letter", + }, + SpanLabel { + start: Position { + string: "Z1", + count: 1, + }, + end: Position { + string: "Z3", + count: 1, + }, + label: "`Y` is a good letter too", + }, + ], + r#" +error: foo + --> test.rs:3:6 + | +3 | X0 Y0 Z0 + | ______^ starting here... +4 | | X1 Y1 Z1 + | |____^____- starting here... 
+ | ||____| + | | ...ending here: `X` is a good letter +5 | | X2 Y2 Z2 +6 | | X3 Y3 Z3 + | |___________- ...ending here: `Y` is a good letter too + +"#); +} diff --git a/src/libsyntax/tokenstream.rs b/src/libsyntax/tokenstream.rs index 9ef6c07e48..e352e7853c 100644 --- a/src/libsyntax/tokenstream.rs +++ b/src/libsyntax/tokenstream.rs @@ -31,9 +31,10 @@ use ext::base; use ext::tt::macro_parser; use parse::lexer::comments::{doc_comment_style, strip_doc_comment_decoration}; use parse::lexer; -use parse; +use parse::{self, Directory}; use parse::token::{self, Token, Lit, Nonterminal}; use print::pprust; +use symbol::Symbol; use std::fmt; use std::iter::*; @@ -173,10 +174,10 @@ impl TokenTree { TokenTree::Delimited(sp, Rc::new(Delimited { delim: token::Bracket, open_span: sp, - tts: vec![TokenTree::Token(sp, token::Ident(token::str_to_ident("doc"))), + tts: vec![TokenTree::Token(sp, token::Ident(ast::Ident::from_str("doc"))), TokenTree::Token(sp, token::Eq), TokenTree::Token(sp, token::Literal( - token::StrRaw(token::intern(&stripped), num_of_hashes), None))], + token::StrRaw(Symbol::intern(&stripped), num_of_hashes), None))], close_span: sp, })) } @@ -217,7 +218,11 @@ impl TokenTree { let diag = &cx.parse_sess().span_diagnostic; // `None` is because we're not interpolating let arg_rdr = lexer::new_tt_reader(diag, None, tts.iter().cloned().collect()); - macro_parser::parse(cx.parse_sess(), arg_rdr, mtch) + let directory = Directory { + path: cx.current_expansion.module.directory.clone(), + ownership: cx.current_expansion.directory_ownership, + }; + macro_parser::parse(cx.parse_sess(), arg_rdr, mtch, Some(directory)) } /// Check if this TokenTree is equal to the other, regardless of span information. @@ -295,7 +300,7 @@ impl TokenTree { pub fn maybe_str(&self) -> Option { match *self { TokenTree::Token(sp, Token::Literal(Lit::Str_(s), _)) => { - let l = LitKind::Str(token::intern_and_get_ident(&parse::str_lit(&s.as_str())), + let l = LitKind::Str(Symbol::intern(&parse::str_lit(&s.as_str())), ast::StrStyle::Cooked); Some(Spanned { node: l, @@ -303,7 +308,7 @@ impl TokenTree { }) } TokenTree::Token(sp, Token::Literal(Lit::StrRaw(s, n), _)) => { - let l = LitKind::Str(token::intern_and_get_ident(&parse::raw_str_lit(&s.as_str())), + let l = LitKind::Str(Symbol::intern(&parse::raw_str_lit(&s.as_str())), ast::StrStyle::Raw(n)); Some(Spanned { node: l, @@ -871,8 +876,9 @@ impl Index for InternalTS { #[cfg(test)] mod tests { use super::*; + use syntax::ast::Ident; use syntax_pos::{Span, BytePos, NO_EXPANSION, DUMMY_SP}; - use parse::token::{self, str_to_ident, Token}; + use parse::token::{self, Token}; use util::parser_testing::string_to_tts; use std::rc::Rc; @@ -967,15 +973,17 @@ mod tests { let test_res = TokenStream::from_tts(string_to_tts("foo::bar::baz".to_string())) .slice(2..3); let test_eqs = TokenStream::from_tts(vec![TokenTree::Token(sp(5,8), - token::Ident(str_to_ident("bar")))]); + token::Ident(Ident::from_str("bar")))]); assert_eq!(test_res, test_eqs) } #[test] fn test_is_empty() { let test0 = TokenStream::from_tts(Vec::new()); - let test1 = TokenStream::from_tts(vec![TokenTree::Token(sp(0, 1), - Token::Ident(str_to_ident("a")))]); + let test1 = TokenStream::from_tts( + vec![TokenTree::Token(sp(0, 1), Token::Ident(Ident::from_str("a")))] + ); + let test2 = TokenStream::from_tts(string_to_tts("foo(bar::baz)".to_string())); assert_eq!(test0.is_empty(), true); @@ -1035,20 +1043,20 @@ mod tests { assert_eq!(test0, None); let test1_expected = TokenStream::from_tts(vec![TokenTree::Token(sp(1, 4), 
- token::Ident(str_to_ident("bar"))), + token::Ident(Ident::from_str("bar"))), TokenTree::Token(sp(4, 6), token::ModSep), TokenTree::Token(sp(6, 9), - token::Ident(str_to_ident("baz")))]); + token::Ident(Ident::from_str("baz")))]); assert_eq!(test1, Some(test1_expected)); let test2_expected = TokenStream::from_tts(vec![TokenTree::Token(sp(1, 4), - token::Ident(str_to_ident("foo"))), + token::Ident(Ident::from_str("foo"))), TokenTree::Token(sp(4, 5), token::Comma), TokenTree::Token(sp(5, 8), - token::Ident(str_to_ident("bar"))), + token::Ident(Ident::from_str("bar"))), TokenTree::Token(sp(8, 9), token::Comma), TokenTree::Token(sp(9, 12), - token::Ident(str_to_ident("baz")))]); + token::Ident(Ident::from_str("baz")))]); assert_eq!(test2, Some(test2_expected)); assert_eq!(test3, None); @@ -1069,7 +1077,7 @@ mod tests { assert_eq!(test0, None); assert_eq!(test1, None); - assert_eq!(test2, Some(str_to_ident("foo"))); + assert_eq!(test2, Some(Ident::from_str("foo"))); assert_eq!(test3, None); assert_eq!(test4, None); } @@ -1079,9 +1087,9 @@ mod tests { let test0 = as_paren_delimited_stream(string_to_tts("foo,bar,".to_string())); let test1 = as_paren_delimited_stream(string_to_tts("baz(foo,bar)".to_string())); - let test0_tts = vec![TokenTree::Token(sp(0, 3), token::Ident(str_to_ident("foo"))), + let test0_tts = vec![TokenTree::Token(sp(0, 3), token::Ident(Ident::from_str("foo"))), TokenTree::Token(sp(3, 4), token::Comma), - TokenTree::Token(sp(4, 7), token::Ident(str_to_ident("bar"))), + TokenTree::Token(sp(4, 7), token::Ident(Ident::from_str("bar"))), TokenTree::Token(sp(7, 8), token::Comma)]; let test0_stream = TokenStream::from_tts(vec![TokenTree::Delimited(sp(0, 8), Rc::new(Delimited { @@ -1094,11 +1102,11 @@ mod tests { assert_eq!(test0, test0_stream); - let test1_tts = vec![TokenTree::Token(sp(4, 7), token::Ident(str_to_ident("foo"))), + let test1_tts = vec![TokenTree::Token(sp(4, 7), token::Ident(Ident::from_str("foo"))), TokenTree::Token(sp(7, 8), token::Comma), - TokenTree::Token(sp(8, 11), token::Ident(str_to_ident("bar")))]; + TokenTree::Token(sp(8, 11), token::Ident(Ident::from_str("bar")))]; - let test1_parse = vec![TokenTree::Token(sp(0, 3), token::Ident(str_to_ident("baz"))), + let test1_parse = vec![TokenTree::Token(sp(0, 3), token::Ident(Ident::from_str("baz"))), TokenTree::Delimited(sp(3, 12), Rc::new(Delimited { delim: token::DelimToken::Paren, diff --git a/src/libsyntax/util/interner.rs b/src/libsyntax/util/interner.rs deleted file mode 100644 index f56c6cedcd..0000000000 --- a/src/libsyntax/util/interner.rs +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright 2012 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -//! An "interner" is a data structure that associates values with usize tags and -//! allows bidirectional lookup; i.e. given a value, one can easily find the -//! type, and vice versa. 
- -use ast::Name; - -use std::collections::HashMap; -use std::rc::Rc; - -#[derive(Default)] -pub struct Interner { - names: HashMap, Name>, - strings: Vec>, -} - -/// When traits can extend traits, we should extend index to get [] -impl Interner { - pub fn new() -> Self { - Interner::default() - } - - pub fn prefill(init: &[&str]) -> Self { - let mut this = Interner::new(); - for &string in init { - this.intern(string); - } - this - } - - pub fn intern(&mut self, string: &str) -> Name { - if let Some(&name) = self.names.get(string) { - return name; - } - - let name = Name(self.strings.len() as u32); - let string = Rc::__from_str(string); - self.strings.push(string.clone()); - self.names.insert(string, name); - name - } - - pub fn gensym(&mut self, string: &str) -> Name { - let gensym = Name(self.strings.len() as u32); - // leave out of `names` to avoid colliding - self.strings.push(Rc::__from_str(string)); - gensym - } - - /// Create a gensym with the same name as an existing entry. - pub fn gensym_copy(&mut self, name: Name) -> Name { - let gensym = Name(self.strings.len() as u32); - // leave out of `names` to avoid colliding - let string = self.strings[name.0 as usize].clone(); - self.strings.push(string); - gensym - } - - pub fn get(&self, name: Name) -> Rc { - self.strings[name.0 as usize].clone() - } - - pub fn find(&self, string: &str) -> Option { - self.names.get(string).cloned() - } -} - -#[cfg(test)] -mod tests { - use super::*; - use ast::Name; - - #[test] - fn interner_tests() { - let mut i: Interner = Interner::new(); - // first one is zero: - assert_eq!(i.intern("dog"), Name(0)); - // re-use gets the same entry: - assert_eq!(i.intern ("dog"), Name(0)); - // different string gets a different #: - assert_eq!(i.intern("cat"), Name(1)); - assert_eq!(i.intern("cat"), Name(1)); - // dog is still at zero - assert_eq!(i.intern("dog"), Name(0)); - // gensym gets 3 - assert_eq!(i.gensym("zebra"), Name(2)); - // gensym of same string gets new number : - assert_eq!(i.gensym("zebra"), Name(3)); - // gensym of *existing* string gets new number: - assert_eq!(i.gensym("dog"), Name(4)); - // gensym tests again with gensym_copy: - assert_eq!(i.gensym_copy(Name(2)), Name(5)); - assert_eq!(&*i.get(Name(5)), "zebra"); - assert_eq!(i.gensym_copy(Name(2)), Name(6)); - assert_eq!(&*i.get(Name(6)), "zebra"); - assert_eq!(&*i.get(Name(0)), "dog"); - assert_eq!(&*i.get(Name(1)), "cat"); - assert_eq!(&*i.get(Name(2)), "zebra"); - assert_eq!(&*i.get(Name(3)), "zebra"); - assert_eq!(&*i.get(Name(4)), "dog"); - } -} diff --git a/src/libsyntax/util/lev_distance.rs b/src/libsyntax/util/lev_distance.rs index e0796c34e5..a6fff2d707 100644 --- a/src/libsyntax/util/lev_distance.rs +++ b/src/libsyntax/util/lev_distance.rs @@ -8,9 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use ast::Name; use std::cmp; -use parse::token::InternedString; +use symbol::Symbol; /// To find the Levenshtein distance between two strings pub fn lev_distance(a: &str, b: &str) -> usize { @@ -48,14 +47,14 @@ pub fn lev_distance(a: &str, b: &str) -> usize { /// to one-third of the given word pub fn find_best_match_for_name<'a, T>(iter_names: T, lookup: &str, - dist: Option) -> Option - where T: Iterator { + dist: Option) -> Option + where T: Iterator { let max_dist = dist.map_or_else(|| cmp::max(lookup.len(), 3) / 3, |d| d); iter_names - .filter_map(|name| { + .filter_map(|&name| { let dist = lev_distance(lookup, &name.as_str()); match dist <= max_dist { // filter the unwanted cases - true => Some((name.as_str(), dist)), + true => Some((name, dist)), false => None, } }) diff --git a/src/libsyntax/util/move_map.rs b/src/libsyntax/util/move_map.rs index e1078b719b..fe05e2958b 100644 --- a/src/libsyntax/util/move_map.rs +++ b/src/libsyntax/util/move_map.rs @@ -10,6 +10,8 @@ use std::ptr; +use util::small_vector::SmallVector; + pub trait MoveMap: Sized { fn move_map(self, mut f: F) -> Self where F: FnMut(T) -> T { self.move_flat_map(|e| Some(f(e))) @@ -75,3 +77,50 @@ impl MoveMap for ::ptr::P<[T]> { ::ptr::P::from_vec(self.into_vec().move_flat_map(f)) } } + +impl MoveMap for SmallVector { + fn move_flat_map(mut self, mut f: F) -> Self + where F: FnMut(T) -> I, + I: IntoIterator + { + let mut read_i = 0; + let mut write_i = 0; + unsafe { + let mut old_len = self.len(); + self.set_len(0); // make sure we just leak elements in case of panic + + while read_i < old_len { + // move the read_i'th item out of the vector and map it + // to an iterator + let e = ptr::read(self.get_unchecked(read_i)); + let mut iter = f(e).into_iter(); + read_i += 1; + + while let Some(e) = iter.next() { + if write_i < read_i { + ptr::write(self.get_unchecked_mut(write_i), e); + write_i += 1; + } else { + // If this is reached we ran out of space + // in the middle of the vector. + // However, the vector is in a valid state here, + // so we just do a somewhat inefficient insert. + self.set_len(old_len); + self.insert(write_i, e); + + old_len = self.len(); + self.set_len(0); + + read_i += 1; + write_i += 1; + } + } + } + + // write_i tracks the number of actually written new items. + self.set_len(write_i); + } + + self + } +} diff --git a/src/libsyntax/util/node_count.rs b/src/libsyntax/util/node_count.rs index 14244bbddd..b90802d1e7 100644 --- a/src/libsyntax/util/node_count.rs +++ b/src/libsyntax/util/node_count.rs @@ -26,7 +26,7 @@ impl NodeCounter { } } -impl Visitor for NodeCounter { +impl<'ast> Visitor<'ast> for NodeCounter { fn visit_ident(&mut self, span: Span, ident: Ident) { self.count += 1; walk_ident(self, span, ident); @@ -75,9 +75,9 @@ impl Visitor for NodeCounter { self.count += 1; walk_generics(self, g) } - fn visit_fn(&mut self, fk: FnKind, fd: &FnDecl, b: &Block, s: Span, _: NodeId) { + fn visit_fn(&mut self, fk: FnKind, fd: &FnDecl, s: Span, _: NodeId) { self.count += 1; - walk_fn(self, fk, fd, b, s) + walk_fn(self, fk, fd, s) } fn visit_trait_item(&mut self, ti: &TraitItem) { self.count += 1; diff --git a/src/libsyntax/util/parser.rs b/src/libsyntax/util/parser.rs index df4eb1c9ed..ce24fe1eb6 100644 --- a/src/libsyntax/util/parser.rs +++ b/src/libsyntax/util/parser.rs @@ -7,7 +7,8 @@ // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
-use parse::token::{Token, BinOpToken, keywords}; +use parse::token::{Token, BinOpToken}; +use symbol::keywords; use ast::BinOpKind; /// Associative operator with precedence. diff --git a/src/libsyntax/util/parser_testing.rs b/src/libsyntax/util/parser_testing.rs index 76d3f2a063..e703dc6b41 100644 --- a/src/libsyntax/util/parser_testing.rs +++ b/src/libsyntax/util/parser_testing.rs @@ -8,11 +8,10 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use ast; +use ast::{self, Ident}; use parse::{ParseSess,PResult,filemap_to_tts}; use parse::{lexer, new_parser_from_source_str}; use parse::parser::Parser; -use parse::token; use ptr::P; use tokenstream; use std::iter::Peekable; @@ -78,9 +77,9 @@ pub fn string_to_pat(source_str: String) -> P { }) } -/// Convert a vector of strings to a vector of ast::Ident's -pub fn strs_to_idents(ids: Vec<&str> ) -> Vec { - ids.iter().map(|u| token::str_to_ident(*u)).collect() +/// Convert a vector of strings to a vector of Ident's +pub fn strs_to_idents(ids: Vec<&str> ) -> Vec { + ids.iter().map(|u| Ident::from_str(*u)).collect() } /// Does the given string match the pattern? whitespace in the first string diff --git a/src/libsyntax/util/small_vector.rs b/src/libsyntax/util/small_vector.rs index 9be7dbd681..31e675836f 100644 --- a/src/libsyntax/util/small_vector.rs +++ b/src/libsyntax/util/small_vector.rs @@ -8,253 +8,9 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -use self::SmallVectorRepr::*; -use self::IntoIterRepr::*; +use rustc_data_structures::small_vec::SmallVec; -use core::ops; -use std::iter::{IntoIterator, FromIterator}; -use std::mem; -use std::slice; -use std::vec; - -use util::move_map::MoveMap; - -/// A vector type optimized for cases where the size is almost always 0 or 1 -#[derive(Clone)] -pub struct SmallVector { - repr: SmallVectorRepr, -} - -#[derive(Clone)] -enum SmallVectorRepr { - Zero, - One(T), - Many(Vec), -} - -impl Default for SmallVector { - fn default() -> Self { - SmallVector { repr: Zero } - } -} - -impl Into> for SmallVector { - fn into(self) -> Vec { - match self.repr { - Zero => Vec::new(), - One(t) => vec![t], - Many(vec) => vec, - } - } -} - -impl FromIterator for SmallVector { - fn from_iter>(iter: I) -> SmallVector { - let mut v = SmallVector::zero(); - v.extend(iter); - v - } -} - -impl Extend for SmallVector { - fn extend>(&mut self, iter: I) { - for val in iter { - self.push(val); - } - } -} - -impl SmallVector { - pub fn zero() -> SmallVector { - SmallVector { repr: Zero } - } - - pub fn one(v: T) -> SmallVector { - SmallVector { repr: One(v) } - } - - pub fn many(vs: Vec) -> SmallVector { - SmallVector { repr: Many(vs) } - } - - pub fn as_slice(&self) -> &[T] { - self - } - - pub fn as_mut_slice(&mut self) -> &mut [T] { - self - } - - pub fn pop(&mut self) -> Option { - match self.repr { - Zero => None, - One(..) => { - let one = mem::replace(&mut self.repr, Zero); - match one { - One(v1) => Some(v1), - _ => unreachable!() - } - } - Many(ref mut vs) => vs.pop(), - } - } - - pub fn push(&mut self, v: T) { - match self.repr { - Zero => self.repr = One(v), - One(..) 
=> { - let one = mem::replace(&mut self.repr, Zero); - match one { - One(v1) => mem::replace(&mut self.repr, Many(vec![v1, v])), - _ => unreachable!() - }; - } - Many(ref mut vs) => vs.push(v) - } - } - - pub fn push_all(&mut self, other: SmallVector) { - for v in other.into_iter() { - self.push(v); - } - } - - pub fn get(&self, idx: usize) -> &T { - match self.repr { - One(ref v) if idx == 0 => v, - Many(ref vs) => &vs[idx], - _ => panic!("out of bounds access") - } - } - - pub fn expect_one(self, err: &'static str) -> T { - match self.repr { - One(v) => v, - Many(v) => { - if v.len() == 1 { - v.into_iter().next().unwrap() - } else { - panic!(err) - } - } - _ => panic!(err) - } - } - - pub fn len(&self) -> usize { - match self.repr { - Zero => 0, - One(..) => 1, - Many(ref vals) => vals.len() - } - } - - pub fn is_empty(&self) -> bool { self.len() == 0 } - - pub fn map U>(self, mut f: F) -> SmallVector { - let repr = match self.repr { - Zero => Zero, - One(t) => One(f(t)), - Many(vec) => Many(vec.into_iter().map(f).collect()), - }; - SmallVector { repr: repr } - } -} - -impl ops::Deref for SmallVector { - type Target = [T]; - - fn deref(&self) -> &[T] { - match self.repr { - Zero => { - let result: &[T] = &[]; - result - } - One(ref v) => { - unsafe { slice::from_raw_parts(v, 1) } - } - Many(ref vs) => vs - } - } -} - -impl ops::DerefMut for SmallVector { - fn deref_mut(&mut self) -> &mut [T] { - match self.repr { - Zero => { - let result: &mut [T] = &mut []; - result - } - One(ref mut v) => { - unsafe { slice::from_raw_parts_mut(v, 1) } - } - Many(ref mut vs) => vs - } - } -} - -impl IntoIterator for SmallVector { - type Item = T; - type IntoIter = IntoIter; - fn into_iter(self) -> Self::IntoIter { - let repr = match self.repr { - Zero => ZeroIterator, - One(v) => OneIterator(v), - Many(vs) => ManyIterator(vs.into_iter()) - }; - IntoIter { repr: repr } - } -} - -pub struct IntoIter { - repr: IntoIterRepr, -} - -enum IntoIterRepr { - ZeroIterator, - OneIterator(T), - ManyIterator(vec::IntoIter), -} - -impl Iterator for IntoIter { - type Item = T; - - fn next(&mut self) -> Option { - match self.repr { - ZeroIterator => None, - OneIterator(..) => { - let mut replacement = ZeroIterator; - mem::swap(&mut self.repr, &mut replacement); - match replacement { - OneIterator(v) => Some(v), - _ => unreachable!() - } - } - ManyIterator(ref mut inner) => inner.next() - } - } - - fn size_hint(&self) -> (usize, Option) { - match self.repr { - ZeroIterator => (0, Some(0)), - OneIterator(..) 
=> (1, Some(1)), - ManyIterator(ref inner) => inner.size_hint() - } - } -} - -impl MoveMap for SmallVector { - fn move_flat_map(self, mut f: F) -> Self - where F: FnMut(T) -> I, - I: IntoIterator - { - match self.repr { - Zero => Self::zero(), - One(v) => f(v).into_iter().collect(), - Many(vs) => SmallVector { repr: Many(vs.move_flat_map(f)) }, - } - } -} +pub type SmallVector = SmallVec<[T; 1]>; #[cfg(test)] mod tests { @@ -262,7 +18,7 @@ mod tests { #[test] fn test_len() { - let v: SmallVector = SmallVector::zero(); + let v: SmallVector = SmallVector::new(); assert_eq!(0, v.len()); assert_eq!(1, SmallVector::one(1).len()); @@ -271,30 +27,30 @@ mod tests { #[test] fn test_push_get() { - let mut v = SmallVector::zero(); + let mut v = SmallVector::new(); v.push(1); assert_eq!(1, v.len()); - assert_eq!(&1, v.get(0)); + assert_eq!(1, v[0]); v.push(2); assert_eq!(2, v.len()); - assert_eq!(&2, v.get(1)); + assert_eq!(2, v[1]); v.push(3); assert_eq!(3, v.len()); - assert_eq!(&3, v.get(2)); + assert_eq!(3, v[2]); } #[test] fn test_from_iter() { let v: SmallVector = (vec![1, 2, 3]).into_iter().collect(); assert_eq!(3, v.len()); - assert_eq!(&1, v.get(0)); - assert_eq!(&2, v.get(1)); - assert_eq!(&3, v.get(2)); + assert_eq!(1, v[0]); + assert_eq!(2, v[1]); + assert_eq!(3, v[2]); } #[test] fn test_move_iter() { - let v = SmallVector::zero(); + let v = SmallVector::new(); let v: Vec = v.into_iter().collect(); assert_eq!(v, Vec::new()); @@ -308,7 +64,7 @@ mod tests { #[test] #[should_panic] fn test_expect_one_zero() { - let _: isize = SmallVector::zero().expect_one(""); + let _: isize = SmallVector::new().expect_one(""); } #[test] diff --git a/src/libsyntax/visit.rs b/src/libsyntax/visit.rs index 7fb3e5c6be..3e0353d532 100644 --- a/src/libsyntax/visit.rs +++ b/src/libsyntax/visit.rs @@ -31,13 +31,13 @@ use codemap::Spanned; #[derive(Copy, Clone, PartialEq, Eq)] pub enum FnKind<'a> { /// fn foo() or extern "Abi" fn foo() - ItemFn(Ident, &'a Generics, Unsafety, Spanned, Abi, &'a Visibility), + ItemFn(Ident, &'a Generics, Unsafety, Spanned, Abi, &'a Visibility, &'a Block), /// fn foo(&self) - Method(Ident, &'a MethodSig, Option<&'a Visibility>), + Method(Ident, &'a MethodSig, Option<&'a Visibility>, &'a Block), - /// |x, y| {} - Closure, + /// |x, y| body + Closure(&'a Expr), } /// Each method of the Visitor trait is a hook to be potentially @@ -49,56 +49,56 @@ pub enum FnKind<'a> { /// explicitly, you need to override each method. (And you also need /// to monitor future changes to `Visitor` in case a new method with a /// new default implementation gets introduced.) -pub trait Visitor: Sized { +pub trait Visitor<'ast>: Sized { fn visit_name(&mut self, _span: Span, _name: Name) { // Nothing to do. 
} fn visit_ident(&mut self, span: Span, ident: Ident) { walk_ident(self, span, ident); } - fn visit_mod(&mut self, m: &Mod, _s: Span, _n: NodeId) { walk_mod(self, m) } - fn visit_foreign_item(&mut self, i: &ForeignItem) { walk_foreign_item(self, i) } - fn visit_item(&mut self, i: &Item) { walk_item(self, i) } - fn visit_local(&mut self, l: &Local) { walk_local(self, l) } - fn visit_block(&mut self, b: &Block) { walk_block(self, b) } - fn visit_stmt(&mut self, s: &Stmt) { walk_stmt(self, s) } - fn visit_arm(&mut self, a: &Arm) { walk_arm(self, a) } - fn visit_pat(&mut self, p: &Pat) { walk_pat(self, p) } - fn visit_expr(&mut self, ex: &Expr) { walk_expr(self, ex) } - fn visit_expr_post(&mut self, _ex: &Expr) { } - fn visit_ty(&mut self, t: &Ty) { walk_ty(self, t) } - fn visit_generics(&mut self, g: &Generics) { walk_generics(self, g) } - fn visit_fn(&mut self, fk: FnKind, fd: &FnDecl, b: &Block, s: Span, _: NodeId) { - walk_fn(self, fk, fd, b, s) + fn visit_mod(&mut self, m: &'ast Mod, _s: Span, _n: NodeId) { walk_mod(self, m) } + fn visit_foreign_item(&mut self, i: &'ast ForeignItem) { walk_foreign_item(self, i) } + fn visit_item(&mut self, i: &'ast Item) { walk_item(self, i) } + fn visit_local(&mut self, l: &'ast Local) { walk_local(self, l) } + fn visit_block(&mut self, b: &'ast Block) { walk_block(self, b) } + fn visit_stmt(&mut self, s: &'ast Stmt) { walk_stmt(self, s) } + fn visit_arm(&mut self, a: &'ast Arm) { walk_arm(self, a) } + fn visit_pat(&mut self, p: &'ast Pat) { walk_pat(self, p) } + fn visit_expr(&mut self, ex: &'ast Expr) { walk_expr(self, ex) } + fn visit_expr_post(&mut self, _ex: &'ast Expr) { } + fn visit_ty(&mut self, t: &'ast Ty) { walk_ty(self, t) } + fn visit_generics(&mut self, g: &'ast Generics) { walk_generics(self, g) } + fn visit_fn(&mut self, fk: FnKind<'ast>, fd: &'ast FnDecl, s: Span, _: NodeId) { + walk_fn(self, fk, fd, s) } - fn visit_trait_item(&mut self, ti: &TraitItem) { walk_trait_item(self, ti) } - fn visit_impl_item(&mut self, ii: &ImplItem) { walk_impl_item(self, ii) } - fn visit_trait_ref(&mut self, t: &TraitRef) { walk_trait_ref(self, t) } - fn visit_ty_param_bound(&mut self, bounds: &TyParamBound) { + fn visit_trait_item(&mut self, ti: &'ast TraitItem) { walk_trait_item(self, ti) } + fn visit_impl_item(&mut self, ii: &'ast ImplItem) { walk_impl_item(self, ii) } + fn visit_trait_ref(&mut self, t: &'ast TraitRef) { walk_trait_ref(self, t) } + fn visit_ty_param_bound(&mut self, bounds: &'ast TyParamBound) { walk_ty_param_bound(self, bounds) } - fn visit_poly_trait_ref(&mut self, t: &PolyTraitRef, m: &TraitBoundModifier) { + fn visit_poly_trait_ref(&mut self, t: &'ast PolyTraitRef, m: &'ast TraitBoundModifier) { walk_poly_trait_ref(self, t, m) } - fn visit_variant_data(&mut self, s: &VariantData, _: Ident, - _: &Generics, _: NodeId, _: Span) { + fn visit_variant_data(&mut self, s: &'ast VariantData, _: Ident, + _: &'ast Generics, _: NodeId, _: Span) { walk_struct_def(self, s) } - fn visit_struct_field(&mut self, s: &StructField) { walk_struct_field(self, s) } - fn visit_enum_def(&mut self, enum_definition: &EnumDef, - generics: &Generics, item_id: NodeId, _: Span) { + fn visit_struct_field(&mut self, s: &'ast StructField) { walk_struct_field(self, s) } + fn visit_enum_def(&mut self, enum_definition: &'ast EnumDef, + generics: &'ast Generics, item_id: NodeId, _: Span) { walk_enum_def(self, enum_definition, generics, item_id) } - fn visit_variant(&mut self, v: &Variant, g: &Generics, item_id: NodeId) { + fn visit_variant(&mut self, v: &'ast Variant, g: 
&'ast Generics, item_id: NodeId) { walk_variant(self, v, g, item_id) } - fn visit_lifetime(&mut self, lifetime: &Lifetime) { + fn visit_lifetime(&mut self, lifetime: &'ast Lifetime) { walk_lifetime(self, lifetime) } - fn visit_lifetime_def(&mut self, lifetime: &LifetimeDef) { + fn visit_lifetime_def(&mut self, lifetime: &'ast LifetimeDef) { walk_lifetime_def(self, lifetime) } - fn visit_mac(&mut self, _mac: &Mac) { + fn visit_mac(&mut self, _mac: &'ast Mac) { panic!("visit_mac disabled by default"); // NB: see note about macros above. // if you really want a visitor that @@ -106,29 +106,29 @@ pub trait Visitor: Sized { // definition in your trait impl: // visit::walk_mac(self, _mac) } - fn visit_path(&mut self, path: &Path, _id: NodeId) { + fn visit_path(&mut self, path: &'ast Path, _id: NodeId) { walk_path(self, path) } - fn visit_path_list_item(&mut self, prefix: &Path, item: &PathListItem) { + fn visit_path_list_item(&mut self, prefix: &'ast Path, item: &'ast PathListItem) { walk_path_list_item(self, prefix, item) } - fn visit_path_segment(&mut self, path_span: Span, path_segment: &PathSegment) { + fn visit_path_segment(&mut self, path_span: Span, path_segment: &'ast PathSegment) { walk_path_segment(self, path_span, path_segment) } - fn visit_path_parameters(&mut self, path_span: Span, path_parameters: &PathParameters) { + fn visit_path_parameters(&mut self, path_span: Span, path_parameters: &'ast PathParameters) { walk_path_parameters(self, path_span, path_parameters) } - fn visit_assoc_type_binding(&mut self, type_binding: &TypeBinding) { + fn visit_assoc_type_binding(&mut self, type_binding: &'ast TypeBinding) { walk_assoc_type_binding(self, type_binding) } - fn visit_attribute(&mut self, _attr: &Attribute) {} - fn visit_macro_def(&mut self, macro_def: &MacroDef) { + fn visit_attribute(&mut self, _attr: &'ast Attribute) {} + fn visit_macro_def(&mut self, macro_def: &'ast MacroDef) { walk_macro_def(self, macro_def) } - fn visit_vis(&mut self, vis: &Visibility) { + fn visit_vis(&mut self, vis: &'ast Visibility) { walk_vis(self, vis) } - fn visit_fn_ret_ty(&mut self, ret_ty: &FunctionRetTy) { + fn visit_fn_ret_ty(&mut self, ret_ty: &'ast FunctionRetTy) { walk_fn_ret_ty(self, ret_ty) } } @@ -147,45 +147,46 @@ macro_rules! 
walk_list { } } -pub fn walk_opt_name(visitor: &mut V, span: Span, opt_name: Option) { +pub fn walk_opt_name<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, opt_name: Option) { if let Some(name) = opt_name { visitor.visit_name(span, name); } } -pub fn walk_opt_ident(visitor: &mut V, span: Span, opt_ident: Option) { +pub fn walk_opt_ident<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, opt_ident: Option) { if let Some(ident) = opt_ident { visitor.visit_ident(span, ident); } } -pub fn walk_opt_sp_ident(visitor: &mut V, opt_sp_ident: &Option>) { +pub fn walk_opt_sp_ident<'a, V: Visitor<'a>>(visitor: &mut V, + opt_sp_ident: &Option>) { if let Some(ref sp_ident) = *opt_sp_ident { visitor.visit_ident(sp_ident.span, sp_ident.node); } } -pub fn walk_ident(visitor: &mut V, span: Span, ident: Ident) { +pub fn walk_ident<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, ident: Ident) { visitor.visit_name(span, ident.name); } -pub fn walk_crate(visitor: &mut V, krate: &Crate) { +pub fn walk_crate<'a, V: Visitor<'a>>(visitor: &mut V, krate: &'a Crate) { visitor.visit_mod(&krate.module, krate.span, CRATE_NODE_ID); walk_list!(visitor, visit_attribute, &krate.attrs); walk_list!(visitor, visit_macro_def, &krate.exported_macros); } -pub fn walk_macro_def(visitor: &mut V, macro_def: &MacroDef) { +pub fn walk_macro_def<'a, V: Visitor<'a>>(visitor: &mut V, macro_def: &'a MacroDef) { visitor.visit_ident(macro_def.span, macro_def.ident); walk_opt_ident(visitor, macro_def.span, macro_def.imported_from); walk_list!(visitor, visit_attribute, ¯o_def.attrs); } -pub fn walk_mod(visitor: &mut V, module: &Mod) { +pub fn walk_mod<'a, V: Visitor<'a>>(visitor: &mut V, module: &'a Mod) { walk_list!(visitor, visit_item, &module.items); } -pub fn walk_local(visitor: &mut V, local: &Local) { +pub fn walk_local<'a, V: Visitor<'a>>(visitor: &mut V, local: &'a Local) { for attr in local.attrs.iter() { visitor.visit_attribute(attr); } @@ -194,28 +195,30 @@ pub fn walk_local(visitor: &mut V, local: &Local) { walk_list!(visitor, visit_expr, &local.init); } -pub fn walk_lifetime(visitor: &mut V, lifetime: &Lifetime) { +pub fn walk_lifetime<'a, V: Visitor<'a>>(visitor: &mut V, lifetime: &'a Lifetime) { visitor.visit_name(lifetime.span, lifetime.name); } -pub fn walk_lifetime_def(visitor: &mut V, lifetime_def: &LifetimeDef) { +pub fn walk_lifetime_def<'a, V: Visitor<'a>>(visitor: &mut V, lifetime_def: &'a LifetimeDef) { visitor.visit_lifetime(&lifetime_def.lifetime); walk_list!(visitor, visit_lifetime, &lifetime_def.bounds); walk_list!(visitor, visit_attribute, &*lifetime_def.attrs); } -pub fn walk_poly_trait_ref(visitor: &mut V, trait_ref: &PolyTraitRef, _: &TraitBoundModifier) - where V: Visitor, +pub fn walk_poly_trait_ref<'a, V>(visitor: &mut V, + trait_ref: &'a PolyTraitRef, + _: &TraitBoundModifier) + where V: Visitor<'a>, { walk_list!(visitor, visit_lifetime_def, &trait_ref.bound_lifetimes); visitor.visit_trait_ref(&trait_ref.trait_ref); } -pub fn walk_trait_ref(visitor: &mut V, trait_ref: &TraitRef) { +pub fn walk_trait_ref<'a, V: Visitor<'a>>(visitor: &mut V, trait_ref: &'a TraitRef) { visitor.visit_path(&trait_ref.path, trait_ref.ref_id) } -pub fn walk_item(visitor: &mut V, item: &Item) { +pub fn walk_item<'a, V: Visitor<'a>>(visitor: &mut V, item: &'a Item) { visitor.visit_vis(&item.vis); visitor.visit_ident(item.span, item.ident); match item.node { @@ -246,9 +249,8 @@ pub fn walk_item(visitor: &mut V, item: &Item) { } ItemKind::Fn(ref declaration, unsafety, constness, abi, ref generics, ref body) => { 
visitor.visit_fn(FnKind::ItemFn(item.ident, generics, unsafety, - constness, abi, &item.vis), + constness, abi, &item.vis, body), declaration, - body, item.span, item.id) } @@ -295,15 +297,18 @@ pub fn walk_item(visitor: &mut V, item: &Item) { walk_list!(visitor, visit_attribute, &item.attrs); } -pub fn walk_enum_def(visitor: &mut V, - enum_definition: &EnumDef, - generics: &Generics, +pub fn walk_enum_def<'a, V: Visitor<'a>>(visitor: &mut V, + enum_definition: &'a EnumDef, + generics: &'a Generics, item_id: NodeId) { walk_list!(visitor, visit_variant, &enum_definition.variants, generics, item_id); } -pub fn walk_variant(visitor: &mut V, variant: &Variant, generics: &Generics, item_id: NodeId) - where V: Visitor, +pub fn walk_variant<'a, V>(visitor: &mut V, + variant: &'a Variant, + generics: &'a Generics, + item_id: NodeId) + where V: Visitor<'a>, { visitor.visit_ident(variant.span, variant.node.name); visitor.visit_variant_data(&variant.node.data, variant.node.name, @@ -312,7 +317,7 @@ pub fn walk_variant(visitor: &mut V, variant: &Variant, generics: &Generics, walk_list!(visitor, visit_attribute, &variant.node.attrs); } -pub fn walk_ty(visitor: &mut V, typ: &Ty) { +pub fn walk_ty<'a, V: Visitor<'a>>(visitor: &mut V, typ: &'a Ty) { match typ.node { TyKind::Slice(ref ty) | TyKind::Paren(ref ty) => { visitor.visit_ty(ty) @@ -362,24 +367,30 @@ pub fn walk_ty(visitor: &mut V, typ: &Ty) { } } -pub fn walk_path(visitor: &mut V, path: &Path) { +pub fn walk_path<'a, V: Visitor<'a>>(visitor: &mut V, path: &'a Path) { for segment in &path.segments { visitor.visit_path_segment(path.span, segment); } } -pub fn walk_path_list_item(visitor: &mut V, _prefix: &Path, item: &PathListItem) { +pub fn walk_path_list_item<'a, V: Visitor<'a>>(visitor: &mut V, + _prefix: &Path, + item: &'a PathListItem) { visitor.visit_ident(item.span, item.node.name); walk_opt_ident(visitor, item.span, item.node.rename); } -pub fn walk_path_segment(visitor: &mut V, path_span: Span, segment: &PathSegment) { +pub fn walk_path_segment<'a, V: Visitor<'a>>(visitor: &mut V, + path_span: Span, + segment: &'a PathSegment) { visitor.visit_ident(path_span, segment.identifier); visitor.visit_path_parameters(path_span, &segment.parameters); } -pub fn walk_path_parameters(visitor: &mut V, _path_span: Span, path_parameters: &PathParameters) - where V: Visitor, +pub fn walk_path_parameters<'a, V>(visitor: &mut V, + _path_span: Span, + path_parameters: &'a PathParameters) + where V: Visitor<'a>, { match *path_parameters { PathParameters::AngleBracketed(ref data) => { @@ -394,12 +405,13 @@ pub fn walk_path_parameters(visitor: &mut V, _path_span: Span, path_parameter } } -pub fn walk_assoc_type_binding(visitor: &mut V, type_binding: &TypeBinding) { +pub fn walk_assoc_type_binding<'a, V: Visitor<'a>>(visitor: &mut V, + type_binding: &'a TypeBinding) { visitor.visit_ident(type_binding.span, type_binding.ident); visitor.visit_ty(&type_binding.ty); } -pub fn walk_pat(visitor: &mut V, pattern: &Pat) { +pub fn walk_pat<'a, V: Visitor<'a>>(visitor: &mut V, pattern: &'a Pat) { match pattern.node { PatKind::TupleStruct(ref path, ref children, _) => { visitor.visit_path(path, pattern.id); @@ -444,7 +456,7 @@ pub fn walk_pat(visitor: &mut V, pattern: &Pat) { } } -pub fn walk_foreign_item(visitor: &mut V, foreign_item: &ForeignItem) { +pub fn walk_foreign_item<'a, V: Visitor<'a>>(visitor: &mut V, foreign_item: &'a ForeignItem) { visitor.visit_vis(&foreign_item.vis); visitor.visit_ident(foreign_item.span, foreign_item.ident); @@ -459,7 +471,7 @@ pub fn 
walk_foreign_item(visitor: &mut V, foreign_item: &ForeignItem walk_list!(visitor, visit_attribute, &foreign_item.attrs); } -pub fn walk_ty_param_bound(visitor: &mut V, bound: &TyParamBound) { +pub fn walk_ty_param_bound<'a, V: Visitor<'a>>(visitor: &mut V, bound: &'a TyParamBound) { match *bound { TraitTyParamBound(ref typ, ref modifier) => { visitor.visit_poly_trait_ref(typ, modifier); @@ -470,7 +482,7 @@ pub fn walk_ty_param_bound(visitor: &mut V, bound: &TyParamBound) { } } -pub fn walk_generics(visitor: &mut V, generics: &Generics) { +pub fn walk_generics<'a, V: Visitor<'a>>(visitor: &mut V, generics: &'a Generics) { for param in &generics.ty_params { visitor.visit_ident(param.span, param.ident); walk_list!(visitor, visit_ty_param_bound, ¶m.bounds); @@ -505,13 +517,13 @@ pub fn walk_generics(visitor: &mut V, generics: &Generics) { } } -pub fn walk_fn_ret_ty(visitor: &mut V, ret_ty: &FunctionRetTy) { +pub fn walk_fn_ret_ty<'a, V: Visitor<'a>>(visitor: &mut V, ret_ty: &'a FunctionRetTy) { if let FunctionRetTy::Ty(ref output_ty) = *ret_ty { visitor.visit_ty(output_ty) } } -pub fn walk_fn_decl(visitor: &mut V, function_declaration: &FnDecl) { +pub fn walk_fn_decl<'a, V: Visitor<'a>>(visitor: &mut V, function_declaration: &'a FnDecl) { for argument in &function_declaration.inputs { visitor.visit_pat(&argument.pat); visitor.visit_ty(&argument.ty) @@ -519,27 +531,28 @@ pub fn walk_fn_decl(visitor: &mut V, function_declaration: &FnDecl) visitor.visit_fn_ret_ty(&function_declaration.output) } -pub fn walk_fn_kind(visitor: &mut V, function_kind: FnKind) { - match function_kind { - FnKind::ItemFn(_, generics, _, _, _, _) => { +pub fn walk_fn<'a, V>(visitor: &mut V, kind: FnKind<'a>, declaration: &'a FnDecl, _span: Span) + where V: Visitor<'a>, +{ + match kind { + FnKind::ItemFn(_, generics, _, _, _, _, body) => { visitor.visit_generics(generics); + walk_fn_decl(visitor, declaration); + visitor.visit_block(body); } - FnKind::Method(_, ref sig, _) => { + FnKind::Method(_, ref sig, _, body) => { visitor.visit_generics(&sig.generics); + walk_fn_decl(visitor, declaration); + visitor.visit_block(body); + } + FnKind::Closure(body) => { + walk_fn_decl(visitor, declaration); + visitor.visit_expr(body); } - FnKind::Closure => {} } } -pub fn walk_fn(visitor: &mut V, kind: FnKind, declaration: &FnDecl, body: &Block, _span: Span) - where V: Visitor, -{ - walk_fn_kind(visitor, kind); - walk_fn_decl(visitor, declaration); - visitor.visit_block(body) -} - -pub fn walk_trait_item(visitor: &mut V, trait_item: &TraitItem) { +pub fn walk_trait_item<'a, V: Visitor<'a>>(visitor: &mut V, trait_item: &'a TraitItem) { visitor.visit_ident(trait_item.span, trait_item.ident); walk_list!(visitor, visit_attribute, &trait_item.attrs); match trait_item.node { @@ -552,8 +565,8 @@ pub fn walk_trait_item(visitor: &mut V, trait_item: &TraitItem) { walk_fn_decl(visitor, &sig.decl); } TraitItemKind::Method(ref sig, Some(ref body)) => { - visitor.visit_fn(FnKind::Method(trait_item.ident, sig, None), &sig.decl, - body, trait_item.span, trait_item.id); + visitor.visit_fn(FnKind::Method(trait_item.ident, sig, None, body), + &sig.decl, trait_item.span, trait_item.id); } TraitItemKind::Type(ref bounds, ref default) => { walk_list!(visitor, visit_ty_param_bound, bounds); @@ -565,7 +578,7 @@ pub fn walk_trait_item(visitor: &mut V, trait_item: &TraitItem) { } } -pub fn walk_impl_item(visitor: &mut V, impl_item: &ImplItem) { +pub fn walk_impl_item<'a, V: Visitor<'a>>(visitor: &mut V, impl_item: &'a ImplItem) { 
visitor.visit_vis(&impl_item.vis); visitor.visit_ident(impl_item.span, impl_item.ident); walk_list!(visitor, visit_attribute, &impl_item.attrs); @@ -575,8 +588,8 @@ pub fn walk_impl_item(visitor: &mut V, impl_item: &ImplItem) { visitor.visit_expr(expr); } ImplItemKind::Method(ref sig, ref body) => { - visitor.visit_fn(FnKind::Method(impl_item.ident, sig, Some(&impl_item.vis)), &sig.decl, - body, impl_item.span, impl_item.id); + visitor.visit_fn(FnKind::Method(impl_item.ident, sig, Some(&impl_item.vis), body), + &sig.decl, impl_item.span, impl_item.id); } ImplItemKind::Type(ref ty) => { visitor.visit_ty(ty); @@ -587,22 +600,22 @@ pub fn walk_impl_item(visitor: &mut V, impl_item: &ImplItem) { } } -pub fn walk_struct_def(visitor: &mut V, struct_definition: &VariantData) { +pub fn walk_struct_def<'a, V: Visitor<'a>>(visitor: &mut V, struct_definition: &'a VariantData) { walk_list!(visitor, visit_struct_field, struct_definition.fields()); } -pub fn walk_struct_field(visitor: &mut V, struct_field: &StructField) { +pub fn walk_struct_field<'a, V: Visitor<'a>>(visitor: &mut V, struct_field: &'a StructField) { visitor.visit_vis(&struct_field.vis); walk_opt_ident(visitor, struct_field.span, struct_field.ident); visitor.visit_ty(&struct_field.ty); walk_list!(visitor, visit_attribute, &struct_field.attrs); } -pub fn walk_block(visitor: &mut V, block: &Block) { +pub fn walk_block<'a, V: Visitor<'a>>(visitor: &mut V, block: &'a Block) { walk_list!(visitor, visit_stmt, &block.stmts); } -pub fn walk_stmt(visitor: &mut V, statement: &Stmt) { +pub fn walk_stmt<'a, V: Visitor<'a>>(visitor: &mut V, statement: &'a Stmt) { match statement.node { StmtKind::Local(ref local) => visitor.visit_local(local), StmtKind::Item(ref item) => visitor.visit_item(item), @@ -619,11 +632,11 @@ pub fn walk_stmt(visitor: &mut V, statement: &Stmt) { } } -pub fn walk_mac(_: &mut V, _: &Mac) { +pub fn walk_mac<'a, V: Visitor<'a>>(_: &mut V, _: &Mac) { // Empty! 
} -pub fn walk_expr(visitor: &mut V, expression: &Expr) { +pub fn walk_expr<'a, V: Visitor<'a>>(visitor: &mut V, expression: &'a Expr) { for attr in expression.attrs.iter() { visitor.visit_attribute(attr); } @@ -711,9 +724,8 @@ pub fn walk_expr(visitor: &mut V, expression: &Expr) { walk_list!(visitor, visit_arm, arms); } ExprKind::Closure(_, ref function_declaration, ref body, _decl_span) => { - visitor.visit_fn(FnKind::Closure, + visitor.visit_fn(FnKind::Closure(body), function_declaration, - body, expression.span, expression.id) } @@ -747,7 +759,11 @@ pub fn walk_expr(visitor: &mut V, expression: &Expr) { } visitor.visit_path(path, expression.id) } - ExprKind::Break(ref opt_sp_ident) | ExprKind::Continue(ref opt_sp_ident) => { + ExprKind::Break(ref opt_sp_ident, ref opt_expr) => { + walk_opt_sp_ident(visitor, opt_sp_ident); + walk_list!(visitor, visit_expr, opt_expr); + } + ExprKind::Continue(ref opt_sp_ident) => { walk_opt_sp_ident(visitor, opt_sp_ident); } ExprKind::Ret(ref optional_expression) => { @@ -773,14 +789,14 @@ pub fn walk_expr(visitor: &mut V, expression: &Expr) { visitor.visit_expr_post(expression) } -pub fn walk_arm(visitor: &mut V, arm: &Arm) { +pub fn walk_arm<'a, V: Visitor<'a>>(visitor: &mut V, arm: &'a Arm) { walk_list!(visitor, visit_pat, &arm.pats); walk_list!(visitor, visit_expr, &arm.guard); visitor.visit_expr(&arm.body); walk_list!(visitor, visit_attribute, &arm.attrs); } -pub fn walk_vis(visitor: &mut V, vis: &Visibility) { +pub fn walk_vis<'a, V: Visitor<'a>>(visitor: &mut V, vis: &'a Visibility) { if let Visibility::Restricted { ref path, id } = *vis { visitor.visit_path(path, id); } diff --git a/src/libsyntax_ext/asm.rs b/src/libsyntax_ext/asm.rs index e4d0cb7404..a5e083f926 100644 --- a/src/libsyntax_ext/asm.rs +++ b/src/libsyntax_ext/asm.rs @@ -17,9 +17,9 @@ use syntax::codemap; use syntax::ext::base; use syntax::ext::base::*; use syntax::feature_gate; -use syntax::parse::token::intern; use syntax::parse::{self, token}; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax::ast::AsmDialect; use syntax_pos::Span; use syntax::tokenstream; @@ -73,7 +73,7 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, }) .unwrap_or(tts.len()); let mut p = cx.new_parser_from_tts(&tts[first_colon..]); - let mut asm = token::InternedString::new(""); + let mut asm = Symbol::intern(""); let mut asm_str_style = None; let mut outputs = Vec::new(); let mut inputs = Vec::new(); @@ -135,11 +135,12 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, // It's the opposite of '=&' which means that the memory // cannot be shared with any other operand (usually when // a register is clobbered early.) 
- let mut ch = constraint.chars(); + let constraint_str = constraint.as_str(); + let mut ch = constraint_str.chars(); let output = match ch.next() { Some('=') => None, Some('+') => { - Some(token::intern_and_get_ident(&format!("={}", ch.as_str()))) + Some(Symbol::intern(&format!("={}", ch.as_str()))) } _ => { cx.span_err(span, "output operand constraint lacks '=' or '+'"); @@ -148,9 +149,9 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, }; let is_rw = output.is_some(); - let is_indirect = constraint.contains("*"); + let is_indirect = constraint_str.contains("*"); outputs.push(ast::InlineAsmOutput { - constraint: output.unwrap_or(constraint.clone()), + constraint: output.unwrap_or(constraint), expr: out, is_rw: is_rw, is_indirect: is_indirect, @@ -166,9 +167,9 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, let (constraint, _str_style) = panictry!(p.parse_str()); - if constraint.starts_with("=") { + if constraint.as_str().starts_with("=") { cx.span_err(p.prev_span, "input operand constraint contains '='"); - } else if constraint.starts_with("+") { + } else if constraint.as_str().starts_with("+") { cx.span_err(p.prev_span, "input operand constraint contains '+'"); } @@ -190,7 +191,7 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, if OPTIONS.iter().any(|&opt| s == opt) { cx.span_warn(p.prev_span, "expected a clobber, found an option"); - } else if s.starts_with("{") || s.ends_with("}") { + } else if s.as_str().starts_with("{") || s.as_str().ends_with("}") { cx.span_err(p.prev_span, "clobber should not be surrounded by braces"); } @@ -242,7 +243,7 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, let expn_id = cx.codemap().record_expansion(codemap::ExpnInfo { call_site: sp, callee: codemap::NameAndSpan { - format: codemap::MacroBang(intern("asm")), + format: codemap::MacroBang(Symbol::intern("asm")), span: None, allow_internal_unstable: false, }, @@ -251,7 +252,7 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, MacEager::expr(P(ast::Expr { id: ast::DUMMY_NODE_ID, node: ast::ExprKind::InlineAsm(P(ast::InlineAsm { - asm: token::intern_and_get_ident(&asm), + asm: asm, asm_str_style: asm_str_style.unwrap(), outputs: outputs, inputs: inputs, diff --git a/src/libsyntax_ext/concat.rs b/src/libsyntax_ext/concat.rs index 02b44f2d01..bfe18dc406 100644 --- a/src/libsyntax_ext/concat.rs +++ b/src/libsyntax_ext/concat.rs @@ -11,7 +11,7 @@ use syntax::ast; use syntax::ext::base; use syntax::ext::build::AstBuilder; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax_pos; use syntax::tokenstream; @@ -33,7 +33,7 @@ pub fn expand_syntax_ext(cx: &mut base::ExtCtxt, ast::LitKind::Str(ref s, _) | ast::LitKind::Float(ref s, _) | ast::LitKind::FloatUnsuffixed(ref s) => { - accumulator.push_str(&s); + accumulator.push_str(&s.as_str()); } ast::LitKind::Char(c) => { accumulator.push(c); @@ -57,5 +57,5 @@ pub fn expand_syntax_ext(cx: &mut base::ExtCtxt, } } } - base::MacEager::expr(cx.expr_str(sp, token::intern_and_get_ident(&accumulator[..]))) + base::MacEager::expr(cx.expr_str(sp, Symbol::intern(&accumulator))) } diff --git a/src/libsyntax_ext/concat_idents.rs b/src/libsyntax_ext/concat_idents.rs index e56c6e2229..b26e33eb38 100644 --- a/src/libsyntax_ext/concat_idents.rs +++ b/src/libsyntax_ext/concat_idents.rs @@ -13,7 +13,6 @@ use syntax::ext::base::*; use syntax::ext::base; use syntax::feature_gate; use syntax::parse::token; -use syntax::parse::token::str_to_ident; use syntax::ptr::P; use syntax_pos::Span; use syntax::tokenstream::TokenTree; @@ -51,7 +50,7 @@ pub fn expand_syntax_ext<'cx>(cx: &'cx mut 
ExtCtxt, } } } - let res = str_to_ident(&res_str); + let res = ast::Ident::from_str(&res_str); struct Result { ident: ast::Ident, diff --git a/src/libsyntax_ext/deriving/clone.rs b/src/libsyntax_ext/deriving/clone.rs index d7bc2a6fae..d14b59d6c7 100644 --- a/src/libsyntax_ext/deriving/clone.rs +++ b/src/libsyntax_ext/deriving/clone.rs @@ -15,8 +15,8 @@ use syntax::ast::{self, Expr, Generics, ItemKind, MetaItem, VariantData}; use syntax::attr; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::{keywords, InternedString}; use syntax::ptr::P; +use syntax::symbol::{Symbol, keywords}; use syntax_pos::Span; pub fn expand_deriving_clone(cx: &mut ExtCtxt, @@ -74,7 +74,7 @@ pub fn expand_deriving_clone(cx: &mut ExtCtxt, _ => cx.span_bug(span, "#[derive(Clone)] on trait item or impl item"), } - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; let trait_def = TraitDef { span: span, diff --git a/src/libsyntax_ext/deriving/cmp/eq.rs b/src/libsyntax_ext/deriving/cmp/eq.rs index fa0fb2492c..6ab5987a15 100644 --- a/src/libsyntax_ext/deriving/cmp/eq.rs +++ b/src/libsyntax_ext/deriving/cmp/eq.rs @@ -14,8 +14,8 @@ use deriving::generic::ty::*; use syntax::ast::{self, Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_eq(cx: &mut ExtCtxt, @@ -23,9 +23,9 @@ pub fn expand_deriving_eq(cx: &mut ExtCtxt, mitem: &MetaItem, item: &Annotatable, push: &mut FnMut(Annotatable)) { - let inline = cx.meta_word(span, InternedString::new("inline")); - let hidden = cx.meta_list_item_word(span, InternedString::new("hidden")); - let doc = cx.meta_list(span, InternedString::new("doc"), vec![hidden]); + let inline = cx.meta_word(span, Symbol::intern("inline")); + let hidden = cx.meta_list_item_word(span, Symbol::intern("hidden")); + let doc = cx.meta_list(span, Symbol::intern("doc"), vec![hidden]); let attrs = vec![cx.attribute(span, inline), cx.attribute(span, doc)]; let trait_def = TraitDef { span: span, diff --git a/src/libsyntax_ext/deriving/cmp/ord.rs b/src/libsyntax_ext/deriving/cmp/ord.rs index 6b2e36e63b..9fc3d99758 100644 --- a/src/libsyntax_ext/deriving/cmp/ord.rs +++ b/src/libsyntax_ext/deriving/cmp/ord.rs @@ -14,8 +14,8 @@ use deriving::generic::ty::*; use syntax::ast::{self, Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_ord(cx: &mut ExtCtxt, @@ -23,7 +23,7 @@ pub fn expand_deriving_ord(cx: &mut ExtCtxt, mitem: &MetaItem, item: &Annotatable, push: &mut FnMut(Annotatable)) { - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; let trait_def = TraitDef { span: span, diff --git a/src/libsyntax_ext/deriving/cmp/partial_eq.rs b/src/libsyntax_ext/deriving/cmp/partial_eq.rs index c46d4b3417..f2a050ce97 100644 --- a/src/libsyntax_ext/deriving/cmp/partial_eq.rs +++ b/src/libsyntax_ext/deriving/cmp/partial_eq.rs @@ -14,8 +14,8 @@ use deriving::generic::ty::*; use syntax::ast::{BinOpKind, Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use 
syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_partial_eq(cx: &mut ExtCtxt, @@ -64,7 +64,7 @@ pub fn expand_deriving_partial_eq(cx: &mut ExtCtxt, macro_rules! md { ($name:expr, $f:ident) => { { - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; MethodDef { name: $name, diff --git a/src/libsyntax_ext/deriving/cmp/partial_ord.rs b/src/libsyntax_ext/deriving/cmp/partial_ord.rs index 597ff306b3..ce4d549d69 100644 --- a/src/libsyntax_ext/deriving/cmp/partial_ord.rs +++ b/src/libsyntax_ext/deriving/cmp/partial_ord.rs @@ -16,8 +16,8 @@ use deriving::generic::ty::*; use syntax::ast::{self, BinOpKind, Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_partial_ord(cx: &mut ExtCtxt, @@ -27,7 +27,7 @@ pub fn expand_deriving_partial_ord(cx: &mut ExtCtxt, push: &mut FnMut(Annotatable)) { macro_rules! md { ($name:expr, $op:expr, $equal:expr) => { { - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; MethodDef { name: $name, @@ -51,7 +51,7 @@ pub fn expand_deriving_partial_ord(cx: &mut ExtCtxt, vec![Box::new(ordering_ty)], true)); - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; let partial_cmp_def = MethodDef { diff --git a/src/libsyntax_ext/deriving/custom.rs b/src/libsyntax_ext/deriving/custom.rs index f8cb1294a6..64ec460a52 100644 --- a/src/libsyntax_ext/deriving/custom.rs +++ b/src/libsyntax_ext/deriving/custom.rs @@ -12,20 +12,34 @@ use std::panic; use errors::FatalError; use proc_macro::{TokenStream, __internal}; -use syntax::ast::{self, ItemKind}; -use syntax::codemap::{ExpnInfo, MacroAttribute, NameAndSpan, Span}; +use syntax::ast::{self, ItemKind, Attribute, Mac}; +use syntax::attr::{mark_used, mark_known}; +use syntax::codemap::Span; use syntax::ext::base::*; use syntax::fold::Folder; -use syntax::parse::token::intern; -use syntax::print::pprust; +use syntax::visit::Visitor; + +struct MarkAttrs<'a>(&'a [ast::Name]); + +impl<'a> Visitor<'a> for MarkAttrs<'a> { + fn visit_attribute(&mut self, attr: &Attribute) { + if self.0.contains(&attr.name()) { + mark_used(attr); + mark_known(attr); + } + } + + fn visit_mac(&mut self, _mac: &Mac) {} +} pub struct CustomDerive { inner: fn(TokenStream) -> TokenStream, + attrs: Vec, } impl CustomDerive { - pub fn new(inner: fn(TokenStream) -> TokenStream) -> CustomDerive { - CustomDerive { inner: inner } + pub fn new(inner: fn(TokenStream) -> TokenStream, attrs: Vec) -> CustomDerive { + CustomDerive { inner: inner, attrs: attrs } } } @@ -33,7 +47,7 @@ impl MultiItemModifier for CustomDerive { fn expand(&self, ecx: &mut ExtCtxt, span: Span, - meta_item: &ast::MetaItem, + _meta_item: &ast::MetaItem, item: Annotatable) -> Vec { let item = match item { @@ -47,7 +61,7 @@ impl MultiItemModifier for CustomDerive { }; match item.node { ItemKind::Struct(..) | - ItemKind::Enum(..) => {} + ItemKind::Enum(..) 
=> {}, _ => { ecx.span_err(span, "custom derive attributes may only be \ applied to struct/enum items"); @@ -55,23 +69,15 @@ impl MultiItemModifier for CustomDerive { } } - let input_span = Span { - expn_id: ecx.codemap().record_expansion(ExpnInfo { - call_site: span, - callee: NameAndSpan { - format: MacroAttribute(intern(&pprust::meta_item_to_string(meta_item))), - span: Some(span), - allow_internal_unstable: true, - }, - }), - ..item.span - }; - let input = __internal::new_token_stream(item); + // Mark attributes as known, and used. + MarkAttrs(&self.attrs).visit_item(&item); + + let input = __internal::new_token_stream(ecx.resolver.eliminate_crate_var(item.clone())); let res = __internal::set_parse_sess(&ecx.parse_sess, || { let inner = self.inner; panic::catch_unwind(panic::AssertUnwindSafe(|| inner(input))) }); - let item = match res { + let new_items = match res { Ok(stream) => __internal::token_stream_items(stream), Err(e) => { let msg = "custom derive attribute panicked"; @@ -88,12 +94,12 @@ impl MultiItemModifier for CustomDerive { } }; - // Right now we have no knowledge of spans at all in custom derive - // macros, everything is just parsed as a string. Reassign all spans to - // the input `item` for better errors here. - item.into_iter().flat_map(|item| { - ChangeSpan { span: input_span }.fold_item(item) - }).map(Annotatable::Item).collect() + let mut res = vec![Annotatable::Item(item)]; + // Reassign spans of all expanded items to the input `item` + // for better errors here. + res.extend(new_items.into_iter().flat_map(|item| { + ChangeSpan { span: span }.fold_item(item) + }).map(Annotatable::Item)); + res } } - diff --git a/src/libsyntax_ext/deriving/debug.rs b/src/libsyntax_ext/deriving/debug.rs index f367fed9cc..a767716466 100644 --- a/src/libsyntax_ext/deriving/debug.rs +++ b/src/libsyntax_ext/deriving/debug.rs @@ -11,11 +11,10 @@ use deriving::generic::*; use deriving::generic::ty::*; -use syntax::ast; +use syntax::ast::{self, Ident}; use syntax::ast::{Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token; use syntax::ptr::P; use syntax_pos::{DUMMY_SP, Span}; @@ -69,9 +68,8 @@ fn show_substructure(cx: &mut ExtCtxt, span: Span, substr: &Substructure) -> P P P P P P unreachable!(), }; - let expr = cx.expr_method_call(span, builder_expr, token::str_to_ident("finish"), vec![]); + let expr = cx.expr_method_call(span, builder_expr, Ident::from_str("finish"), vec![]); stmts.push(cx.stmt_expr(expr)); let block = cx.block(span, stmts); diff --git a/src/libsyntax_ext/deriving/decodable.rs b/src/libsyntax_ext/deriving/decodable.rs index 10db56d46f..e2634c60dc 100644 --- a/src/libsyntax_ext/deriving/decodable.rs +++ b/src/libsyntax_ext/deriving/decodable.rs @@ -18,9 +18,8 @@ use syntax::ast; use syntax::ast::{Expr, MetaItem, Mutability}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; -use syntax::parse::token; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_rustc_decodable(cx: &mut ExtCtxt, @@ -131,9 +130,9 @@ fn decodable_substructure(cx: &mut ExtCtxt, cx.expr_method_call(trait_span, decoder, cx.ident_of("read_struct"), - vec![cx.expr_str(trait_span, substr.type_ident.name.as_str()), + vec![cx.expr_str(trait_span, substr.type_ident.name), cx.expr_usize(trait_span, nfields), - cx.lambda_expr_1(trait_span, result, blkarg)]) + cx.lambda1(trait_span, result, blkarg)]) } StaticEnum(_, ref 
fields) => { let variant = cx.ident_of("i"); @@ -143,7 +142,7 @@ fn decodable_substructure(cx: &mut ExtCtxt, let rvariant_arg = cx.ident_of("read_enum_variant_arg"); for (i, &(ident, v_span, ref parts)) in fields.iter().enumerate() { - variants.push(cx.expr_str(v_span, ident.name.as_str())); + variants.push(cx.expr_str(v_span, ident.name)); let path = cx.path(trait_span, vec![substr.type_ident, ident]); let decoded = decode_static_fields(cx, v_span, path, parts, |cx, span, _, field| { @@ -165,7 +164,7 @@ fn decodable_substructure(cx: &mut ExtCtxt, let result = cx.expr_ok(trait_span, cx.expr_match(trait_span, cx.expr_ident(trait_span, variant), arms)); - let lambda = cx.lambda_expr(trait_span, vec![blkarg, variant], result); + let lambda = cx.lambda(trait_span, vec![blkarg, variant], result); let variant_vec = cx.expr_vec(trait_span, variants); let variant_vec = cx.expr_addr_of(trait_span, variant_vec); let result = cx.expr_method_call(trait_span, @@ -175,8 +174,8 @@ fn decodable_substructure(cx: &mut ExtCtxt, cx.expr_method_call(trait_span, decoder, cx.ident_of("read_enum"), - vec![cx.expr_str(trait_span, substr.type_ident.name.as_str()), - cx.lambda_expr_1(trait_span, result, blkarg)]) + vec![cx.expr_str(trait_span, substr.type_ident.name), + cx.lambda1(trait_span, result, blkarg)]) } _ => cx.bug("expected StaticEnum or StaticStruct in derive(Decodable)"), }; @@ -191,7 +190,7 @@ fn decode_static_fields(cx: &mut ExtCtxt, fields: &StaticFields, mut getarg: F) -> P - where F: FnMut(&mut ExtCtxt, Span, InternedString, usize) -> P + where F: FnMut(&mut ExtCtxt, Span, Symbol, usize) -> P { match *fields { Unnamed(ref fields, is_tuple) => { @@ -202,10 +201,7 @@ fn decode_static_fields(cx: &mut ExtCtxt, let fields = fields.iter() .enumerate() .map(|(i, &span)| { - getarg(cx, - span, - token::intern_and_get_ident(&format!("_field{}", i)), - i) + getarg(cx, span, Symbol::intern(&format!("_field{}", i)), i) }) .collect(); @@ -217,7 +213,7 @@ fn decode_static_fields(cx: &mut ExtCtxt, let fields = fields.iter() .enumerate() .map(|(i, &(ident, span))| { - let arg = getarg(cx, span, ident.name.as_str(), i); + let arg = getarg(cx, span, ident.name, i); cx.field_imm(span, ident, arg) }) .collect(); diff --git a/src/libsyntax_ext/deriving/default.rs b/src/libsyntax_ext/deriving/default.rs index b15fd2b49a..69391f48c2 100644 --- a/src/libsyntax_ext/deriving/default.rs +++ b/src/libsyntax_ext/deriving/default.rs @@ -14,8 +14,8 @@ use deriving::generic::ty::*; use syntax::ast::{Expr, MetaItem}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token::InternedString; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_default(cx: &mut ExtCtxt, @@ -23,7 +23,7 @@ pub fn expand_deriving_default(cx: &mut ExtCtxt, mitem: &MetaItem, item: &Annotatable, push: &mut FnMut(Annotatable)) { - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; let trait_def = TraitDef { span: span, diff --git a/src/libsyntax_ext/deriving/encodable.rs b/src/libsyntax_ext/deriving/encodable.rs index 640296d7f0..092738ab8a 100644 --- a/src/libsyntax_ext/deriving/encodable.rs +++ b/src/libsyntax_ext/deriving/encodable.rs @@ -95,8 +95,8 @@ use deriving::generic::ty::*; use syntax::ast::{Expr, ExprKind, MetaItem, Mutability}; use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; -use syntax::parse::token; 
use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; pub fn expand_deriving_rustc_encodable(cx: &mut ExtCtxt, @@ -192,12 +192,12 @@ fn encodable_substructure(cx: &mut ExtCtxt, let mut stmts = Vec::new(); for (i, &FieldInfo { name, ref self_, span, .. }) in fields.iter().enumerate() { let name = match name { - Some(id) => id.name.as_str(), - None => token::intern_and_get_ident(&format!("_field{}", i)), + Some(id) => id.name, + None => Symbol::intern(&format!("_field{}", i)), }; let self_ref = cx.expr_addr_of(span, self_.clone()); let enc = cx.expr_call(span, fn_path.clone(), vec![self_ref, blkencoder.clone()]); - let lambda = cx.lambda_expr_1(span, enc, blkarg); + let lambda = cx.lambda1(span, enc, blkarg); let call = cx.expr_method_call(span, blkencoder.clone(), emit_struct_field, @@ -226,7 +226,7 @@ fn encodable_substructure(cx: &mut ExtCtxt, cx.expr_method_call(trait_span, encoder, cx.ident_of("emit_struct"), - vec![cx.expr_str(trait_span, substr.type_ident.name.as_str()), + vec![cx.expr_str(trait_span, substr.type_ident.name), cx.expr_usize(trait_span, fields.len()), blk]) } @@ -246,7 +246,7 @@ fn encodable_substructure(cx: &mut ExtCtxt, let self_ref = cx.expr_addr_of(span, self_.clone()); let enc = cx.expr_call(span, fn_path.clone(), vec![self_ref, blkencoder.clone()]); - let lambda = cx.lambda_expr_1(span, enc, blkarg); + let lambda = cx.lambda1(span, enc, blkarg); let call = cx.expr_method_call(span, blkencoder.clone(), emit_variant_arg, @@ -265,7 +265,7 @@ fn encodable_substructure(cx: &mut ExtCtxt, } let blk = cx.lambda_stmts_1(trait_span, stmts, blkarg); - let name = cx.expr_str(trait_span, variant.node.name.name.as_str()); + let name = cx.expr_str(trait_span, variant.node.name.name); let call = cx.expr_method_call(trait_span, blkencoder, cx.ident_of("emit_enum_variant"), @@ -273,12 +273,11 @@ fn encodable_substructure(cx: &mut ExtCtxt, cx.expr_usize(trait_span, idx), cx.expr_usize(trait_span, fields.len()), blk]); - let blk = cx.lambda_expr_1(trait_span, call, blkarg); + let blk = cx.lambda1(trait_span, call, blkarg); let ret = cx.expr_method_call(trait_span, encoder, cx.ident_of("emit_enum"), - vec![cx.expr_str(trait_span, - substr.type_ident.name.as_str()), + vec![cx.expr_str(trait_span ,substr.type_ident.name), blk]); cx.expr_block(cx.block(trait_span, vec![me, cx.stmt_expr(ret)])) } diff --git a/src/libsyntax_ext/deriving/generic/mod.rs b/src/libsyntax_ext/deriving/generic/mod.rs index e6b63be3ef..51199819df 100644 --- a/src/libsyntax_ext/deriving/generic/mod.rs +++ b/src/libsyntax_ext/deriving/generic/mod.rs @@ -198,8 +198,8 @@ use syntax::ext::base::{Annotatable, ExtCtxt}; use syntax::ext::build::AstBuilder; use syntax::codemap::{self, dummy_spanned, respan}; use syntax::util::move_map::MoveMap; -use syntax::parse::token::{InternedString, keywords}; use syntax::ptr::P; +use syntax::symbol::{Symbol, keywords}; use syntax_pos::{DUMMY_SP, Span}; use errors::Handler; @@ -361,8 +361,8 @@ fn find_type_parameters(ty: &ast::Ty, types: Vec>, } - impl<'a, 'b> visit::Visitor for Visitor<'a, 'b> { - fn visit_ty(&mut self, ty: &ast::Ty) { + impl<'a, 'b> visit::Visitor<'a> for Visitor<'a, 'b> { + fn visit_ty(&mut self, ty: &'a ast::Ty) { match ty.node { ast::TyKind::Path(_, ref path) if !path.global => { if let Some(segment) = path.segments.first() { @@ -442,7 +442,7 @@ impl<'a> TraitDef<'a> { attrs.extend(item.attrs .iter() .filter(|a| { - match &a.name()[..] 
{ + match &*a.name().as_str() { "allow" | "warn" | "deny" | "forbid" | "stable" | "unstable" => true, _ => false, } @@ -639,15 +639,15 @@ impl<'a> TraitDef<'a> { let attr = cx.attribute(self.span, cx.meta_word(self.span, - InternedString::new("automatically_derived"))); + Symbol::intern("automatically_derived"))); // Just mark it now since we know that it'll end up used downstream attr::mark_used(&attr); let opt_trait_ref = Some(trait_ref); - let unused_qual = cx.attribute(self.span, - cx.meta_list(self.span, - InternedString::new("allow"), - vec![cx.meta_list_item_word(self.span, - InternedString::new("unused_qualifications"))])); + let unused_qual = { + let word = cx.meta_list_item_word(self.span, Symbol::intern("unused_qualifications")); + cx.attribute(self.span, cx.meta_list(self.span, Symbol::intern("allow"), vec![word])) + }; + let mut a = vec![attr, unused_qual]; a.extend(self.attributes.iter().cloned()); diff --git a/src/libsyntax_ext/deriving/mod.rs b/src/libsyntax_ext/deriving/mod.rs index c2bfead568..ac4b07bbf2 100644 --- a/src/libsyntax_ext/deriving/mod.rs +++ b/src/libsyntax_ext/deriving/mod.rs @@ -12,12 +12,12 @@ use syntax::ast::{self, MetaItem}; use syntax::attr::HasAttrs; +use syntax::codemap; use syntax::ext::base::{Annotatable, ExtCtxt, SyntaxExtension}; use syntax::ext::build::AstBuilder; use syntax::feature_gate; -use syntax::codemap; -use syntax::parse::token::{intern, intern_and_get_ident}; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; macro_rules! pathvec { @@ -80,7 +80,7 @@ fn allow_unstable(cx: &mut ExtCtxt, span: Span, attr_name: &str) -> Span { expn_id: cx.codemap().record_expansion(codemap::ExpnInfo { call_site: span, callee: codemap::NameAndSpan { - format: codemap::MacroAttribute(intern(attr_name)), + format: codemap::MacroAttribute(Symbol::intern(attr_name)), span: Some(span), allow_internal_unstable: true, }, @@ -105,9 +105,10 @@ pub fn expand_derive(cx: &mut ExtCtxt, } }; + let derive = Symbol::intern("derive"); let mut derive_attrs = Vec::new(); item = item.map_attrs(|attrs| { - let partition = attrs.into_iter().partition(|attr| &attr.name() == "derive"); + let partition = attrs.into_iter().partition(|attr| attr.name() == derive); derive_attrs = partition.0; partition.1 }); @@ -115,7 +116,7 @@ pub fn expand_derive(cx: &mut ExtCtxt, // Expand `#[derive]`s after other attribute macro invocations. 
if cx.resolver.find_attr_invoc(&mut item.attrs.clone()).is_some() { return vec![Annotatable::Item(item.map_attrs(|mut attrs| { - attrs.push(cx.attribute(span, P(mitem.clone()))); + attrs.push(cx.attribute(span, mitem.clone())); attrs.extend(derive_attrs); attrs }))]; @@ -135,7 +136,7 @@ pub fn expand_derive(cx: &mut ExtCtxt, let mut traits = get_traits(mitem, cx); for derive_attr in derive_attrs { - traits.extend(get_traits(&derive_attr.node.value, cx)); + traits.extend(get_traits(&derive_attr.value, cx)); } // First, weed out malformed #[derive] @@ -158,9 +159,8 @@ pub fn expand_derive(cx: &mut ExtCtxt, let tword = titem.word().unwrap(); let tname = tword.name(); - if is_builtin_trait(&tname) || { - let derive_mode = - ast::Path::from_ident(titem.span, ast::Ident::with_empty_ctxt(intern(&tname))); + if is_builtin_trait(tname) || { + let derive_mode = ast::Path::from_ident(titem.span, ast::Ident::with_empty_ctxt(tname)); cx.resolver.resolve_macro(cx.current_expansion.mark, &derive_mode, false).map(|ext| { if let SyntaxExtension::CustomDerive(_) = *ext { true } else { false } }).unwrap_or(false) @@ -176,7 +176,7 @@ pub fn expand_derive(cx: &mut ExtCtxt, feature_gate::EXPLAIN_CUSTOM_DERIVE); } else { cx.span_warn(titem.span, feature_gate::EXPLAIN_DEPR_CUSTOM_DERIVE); - let name = intern_and_get_ident(&format!("derive_{}", tname)); + let name = Symbol::intern(&format!("derive_{}", tname)); let mitem = cx.meta_word(titem.span, name); new_attributes.push(cx.attribute(mitem.span, mitem)); } @@ -186,9 +186,7 @@ pub fn expand_derive(cx: &mut ExtCtxt, item = item.map(|mut i| { i.attrs.extend(new_attributes); if traits.len() > 0 { - let list = cx.meta_list(mitem.span, - intern_and_get_ident("derive"), - traits); + let list = cx.meta_list(mitem.span, derive, traits); i.attrs.push(cx.attribute(mitem.span, list)); } i @@ -217,27 +215,23 @@ pub fn expand_derive(cx: &mut ExtCtxt, let macros_11_derive = traits.iter() .cloned() .enumerate() - .filter(|&(_, ref name)| !is_builtin_trait(&name.name().unwrap())) + .filter(|&(_, ref name)| !is_builtin_trait(name.name().unwrap())) .next(); if let Some((i, titem)) = macros_11_derive { - let tname = ast::Ident::with_empty_ctxt(intern(&titem.name().unwrap())); + let tname = ast::Ident::with_empty_ctxt(titem.name().unwrap()); let path = ast::Path::from_ident(titem.span, tname); let ext = cx.resolver.resolve_macro(cx.current_expansion.mark, &path, false).unwrap(); traits.remove(i); if traits.len() > 0 { item = item.map(|mut i| { - let list = cx.meta_list(mitem.span, - intern_and_get_ident("derive"), - traits); + let list = cx.meta_list(mitem.span, derive, traits); i.attrs.push(cx.attribute(mitem.span, list)); i }); } let titem = cx.meta_list_item_word(titem.span, titem.name().unwrap()); - let mitem = cx.meta_list(titem.span, - intern_and_get_ident("derive"), - vec![titem]); + let mitem = cx.meta_list(titem.span, derive, vec![titem]); let item = Annotatable::Item(item); if let SyntaxExtension::CustomDerive(ref ext) = *ext { return ext.expand(cx, mitem.span, &mitem, item); @@ -251,9 +245,10 @@ pub fn expand_derive(cx: &mut ExtCtxt, // RFC #1445. `#[derive(PartialEq, Eq)]` adds a (trusted) // `#[structural_match]` attribute. 
- if traits.iter().filter_map(|t| t.name()).any(|t| t == "PartialEq") && - traits.iter().filter_map(|t| t.name()).any(|t| t == "Eq") { - let structural_match = intern_and_get_ident("structural_match"); + let (partial_eq, eq) = (Symbol::intern("PartialEq"), Symbol::intern("Eq")); + if traits.iter().any(|t| t.name() == Some(partial_eq)) && + traits.iter().any(|t| t.name() == Some(eq)) { + let structural_match = Symbol::intern("structural_match"); let span = allow_unstable(cx, span, "derive(PartialEq, Eq)"); let meta = cx.meta_word(span, structural_match); item = item.map(|mut i| { @@ -266,9 +261,10 @@ pub fn expand_derive(cx: &mut ExtCtxt, // the same as the copy implementation. // // Add a marker attribute here picked up during #[derive(Clone)] - if traits.iter().filter_map(|t| t.name()).any(|t| t == "Clone") && - traits.iter().filter_map(|t| t.name()).any(|t| t == "Copy") { - let marker = intern_and_get_ident("rustc_copy_clone_marker"); + let (copy, clone) = (Symbol::intern("Copy"), Symbol::intern("Clone")); + if traits.iter().any(|t| t.name() == Some(clone)) && + traits.iter().any(|t| t.name() == Some(copy)) { + let marker = Symbol::intern("rustc_copy_clone_marker"); let span = allow_unstable(cx, span, "derive(Copy, Clone)"); let meta = cx.meta_word(span, marker); item = item.map(|mut i| { @@ -280,14 +276,14 @@ pub fn expand_derive(cx: &mut ExtCtxt, let mut items = Vec::new(); for titem in traits.iter() { let tname = titem.word().unwrap().name(); - let name = intern_and_get_ident(&format!("derive({})", tname)); + let name = Symbol::intern(&format!("derive({})", tname)); let mitem = cx.meta_word(titem.span, name); let span = Span { expn_id: cx.codemap().record_expansion(codemap::ExpnInfo { call_site: titem.span, callee: codemap::NameAndSpan { - format: codemap::MacroAttribute(intern(&format!("derive({})", tname))), + format: codemap::MacroAttribute(Symbol::intern(&format!("derive({})", tname))), span: Some(titem.span), allow_internal_unstable: true, }, @@ -296,7 +292,7 @@ pub fn expand_derive(cx: &mut ExtCtxt, }; let my_item = Annotatable::Item(item); - expand_builtin(&tname, cx, span, &mitem, &my_item, &mut |a| { + expand_builtin(&tname.as_str(), cx, span, &mitem, &my_item, &mut |a| { items.push(a); }); item = my_item.expect_item(); @@ -308,8 +304,8 @@ pub fn expand_derive(cx: &mut ExtCtxt, macro_rules! derive_traits { ($( $name:expr => $func:path, )+) => { - pub fn is_builtin_trait(name: &str) -> bool { - match name { + pub fn is_builtin_trait(name: ast::Name) -> bool { + match &*name.as_str() { $( $name )|+ => true, _ => false, } @@ -406,7 +402,7 @@ fn call_intrinsic(cx: &ExtCtxt, span.expn_id = cx.codemap().record_expansion(codemap::ExpnInfo { call_site: span, callee: codemap::NameAndSpan { - format: codemap::MacroAttribute(intern("derive")), + format: codemap::MacroAttribute(Symbol::intern("derive")), span: Some(span), allow_internal_unstable: true, }, diff --git a/src/libsyntax_ext/env.rs b/src/libsyntax_ext/env.rs index 5c081b9896..ecf0a8f377 100644 --- a/src/libsyntax_ext/env.rs +++ b/src/libsyntax_ext/env.rs @@ -17,7 +17,7 @@ use syntax::ast; use syntax::ext::base::*; use syntax::ext::base; use syntax::ext::build::AstBuilder; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax_pos::Span; use syntax::tokenstream; @@ -32,7 +32,7 @@ pub fn expand_option_env<'cx>(cx: &'cx mut ExtCtxt, Some(v) => v, }; - let e = match env::var(&var[..]) { + let e = match env::var(&*var.as_str()) { Err(..) 
=> { cx.expr_path(cx.path_all(sp, true, @@ -49,7 +49,7 @@ pub fn expand_option_env<'cx>(cx: &'cx mut ExtCtxt, Ok(s) => { cx.expr_call_global(sp, cx.std_path(&["option", "Option", "Some"]), - vec![cx.expr_str(sp, token::intern_and_get_ident(&s[..]))]) + vec![cx.expr_str(sp, Symbol::intern(&s))]) } }; MacEager::expr(e) @@ -73,7 +73,7 @@ pub fn expand_env<'cx>(cx: &'cx mut ExtCtxt, Some((v, _style)) => v, }; let msg = match exprs.next() { - None => token::intern_and_get_ident(&format!("environment variable `{}` not defined", var)), + None => Symbol::intern(&format!("environment variable `{}` not defined", var)), Some(second) => { match expr_to_string(cx, second, "expected string literal") { None => return DummyResult::expr(sp), @@ -87,12 +87,12 @@ pub fn expand_env<'cx>(cx: &'cx mut ExtCtxt, return DummyResult::expr(sp); } - let e = match env::var(&var[..]) { + let e = match env::var(&*var.as_str()) { Err(_) => { - cx.span_err(sp, &msg); + cx.span_err(sp, &msg.as_str()); cx.expr_usize(sp, 0) } - Ok(s) => cx.expr_str(sp, token::intern_and_get_ident(&s)), + Ok(s) => cx.expr_str(sp, Symbol::intern(&s)), }; MacEager::expr(e) } diff --git a/src/libsyntax_ext/format.rs b/src/libsyntax_ext/format.rs index de78f859f0..d2afa08cad 100644 --- a/src/libsyntax_ext/format.rs +++ b/src/libsyntax_ext/format.rs @@ -17,12 +17,13 @@ use syntax::ast; use syntax::ext::base::*; use syntax::ext::base; use syntax::ext::build::AstBuilder; -use syntax::parse::token::{self, keywords}; +use syntax::parse::token; use syntax::ptr::P; +use syntax::symbol::{Symbol, keywords}; use syntax_pos::{Span, DUMMY_SP}; use syntax::tokenstream; -use std::collections::HashMap; +use std::collections::{HashMap, HashSet}; use std::collections::hash_map::Entry; #[derive(PartialEq)] @@ -369,7 +370,7 @@ impl<'a, 'b> Context<'a, 'b> { /// Translate the accumulated string literals to a literal expression fn trans_literal_string(&mut self) -> P { let sp = self.fmtsp; - let s = token::intern_and_get_ident(&self.literal); + let s = Symbol::intern(&self.literal); self.literal.clear(); self.ecx.expr_str(sp, s) } @@ -727,7 +728,8 @@ pub fn expand_preparsed_format_args(ecx: &mut ExtCtxt, fmtsp: fmt.span, }; - let mut parser = parse::Parser::new(&fmt.node.0); + let fmt_str = &*fmt.node.0.as_str(); + let mut parser = parse::Parser::new(fmt_str); let mut pieces = vec![]; loop { @@ -756,8 +758,12 @@ pub fn expand_preparsed_format_args(ecx: &mut ExtCtxt, } if !parser.errors.is_empty() { - cx.ecx.span_err(cx.fmtsp, - &format!("invalid format string: {}", parser.errors.remove(0))); + let (err, note) = parser.errors.remove(0); + let mut e = cx.ecx.struct_span_err(cx.fmtsp, &format!("invalid format string: {}", err)); + if let Some(note) = note { + e.note(¬e); + } + e.emit(); return DummyResult::raw_expr(sp); } if !cx.literal.is_empty() { @@ -767,6 +773,7 @@ pub fn expand_preparsed_format_args(ecx: &mut ExtCtxt, // Make sure that all arguments were used and all arguments have types. 
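+    // Unused-argument errors are collected into `errs` and reported together below; when more
+    // arguments are unused than used, the format string is also scanned for printf- or
+    // shell-style directives (see `format_foreign`) so that a `{}`-style equivalent can be
+    // suggested via `diag.help`.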
let num_pos_args = cx.args.len() - cx.names.len(); + let mut errs = vec![]; for (i, ty) in cx.arg_types.iter().enumerate() { if ty.len() == 0 { if cx.count_positions.contains_key(&i) { @@ -779,9 +786,79 @@ pub fn expand_preparsed_format_args(ecx: &mut ExtCtxt, // positional argument "argument never used" }; - cx.ecx.span_err(cx.args[i].span, msg); + errs.push((cx.args[i].span, msg)); } } + if errs.len() > 0 { + let args_used = cx.arg_types.len() - errs.len(); + let args_unused = errs.len(); + + let mut diag = { + if errs.len() == 1 { + let (sp, msg) = errs.into_iter().next().unwrap(); + cx.ecx.struct_span_err(sp, msg) + } else { + let mut diag = cx.ecx.struct_span_err(cx.fmtsp, + "multiple unused formatting arguments"); + for (sp, msg) in errs { + diag.span_note(sp, msg); + } + diag + } + }; + + // Decide if we want to look for foreign formatting directives. + if args_used < args_unused { + use super::format_foreign as foreign; + + // The set of foreign substitutions we've explained. This prevents spamming the user + // with `%d should be written as {}` over and over again. + let mut explained = HashSet::new(); + + // Used to ensure we only report translations for *one* kind of foreign format. + let mut found_foreign = false; + + macro_rules! check_foreign { + ($kind:ident) => {{ + let mut show_doc_note = false; + + for sub in foreign::$kind::iter_subs(fmt_str) { + let trn = match sub.translate() { + Some(trn) => trn, + + // If it has no translation, don't call it out specifically. + None => continue, + }; + + let sub = String::from(sub.as_str()); + if explained.contains(&sub) { + continue; + } + explained.insert(sub.clone()); + + if !found_foreign { + found_foreign = true; + show_doc_note = true; + } + + diag.help(&format!("`{}` should be written as `{}`", sub, trn)); + } + + if show_doc_note { + diag.note(concat!(stringify!($kind), " formatting not supported; see \ + the documentation for `std::fmt`")); + } + }}; + } + + check_foreign!(printf); + if !found_foreign { + check_foreign!(shell); + } + } + + diag.emit(); + } cx.into_expr() } diff --git a/src/libsyntax_ext/format_foreign.rs b/src/libsyntax_ext/format_foreign.rs new file mode 100644 index 0000000000..3c802e8334 --- /dev/null +++ b/src/libsyntax_ext/format_foreign.rs @@ -0,0 +1,1013 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +macro_rules! try_opt { + ($e:expr) => { + match $e { + Some(v) => v, + None => return None, + } + }; +} + +pub mod printf { + use super::strcursor::StrCursor as Cur; + + /// Represents a single `printf`-style substitution. + #[derive(Clone, Eq, PartialEq, Debug)] + pub enum Substitution<'a> { + /// A formatted output substitution. + Format(Format<'a>), + /// A literal `%%` escape. + Escape, + } + + impl<'a> Substitution<'a> { + pub fn as_str(&self) -> &str { + match *self { + Substitution::Format(ref fmt) => fmt.span, + Substitution::Escape => "%%", + } + } + + /// Translate this substitution into an equivalent Rust formatting directive. + /// + /// This ignores cases where the substitution does not have an exact equivalent, or where + /// the substitution would be unnecessary. 
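+    /// For example (matching the `test_trans` cases below), `%d` maps to `{}`, `%08X` to
+    /// `{:08X}`, and `%-10s` to `{:<10}`; the literal `%%` escape has no translation and
+    /// yields `None`.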
+ pub fn translate(&self) -> Option { + match *self { + Substitution::Format(ref fmt) => fmt.translate(), + Substitution::Escape => None, + } + } + } + + #[derive(Clone, Eq, PartialEq, Debug)] + /// A single `printf`-style formatting directive. + pub struct Format<'a> { + /// The entire original formatting directive. + pub span: &'a str, + /// The (1-based) parameter to be converted. + pub parameter: Option, + /// Formatting flags. + pub flags: &'a str, + /// Minimum width of the output. + pub width: Option, + /// Precision of the conversion. + pub precision: Option, + /// Length modifier for the conversion. + pub length: Option<&'a str>, + /// Type of parameter being converted. + pub type_: &'a str, + } + + impl<'a> Format<'a> { + /// Translate this directive into an equivalent Rust formatting directive. + /// + /// Returns `None` in cases where the `printf` directive does not have an exact Rust + /// equivalent, rather than guessing. + pub fn translate(&self) -> Option { + use std::fmt::Write; + + let (c_alt, c_zero, c_left, c_plus) = { + let mut c_alt = false; + let mut c_zero = false; + let mut c_left = false; + let mut c_plus = false; + for c in self.flags.chars() { + match c { + '#' => c_alt = true, + '0' => c_zero = true, + '-' => c_left = true, + '+' => c_plus = true, + _ => return None + } + } + (c_alt, c_zero, c_left, c_plus) + }; + + // Has a special form in Rust for numbers. + let fill = if c_zero { Some("0") } else { None }; + + let align = if c_left { Some("<") } else { None }; + + // Rust doesn't have an equivalent to the `' '` flag. + let sign = if c_plus { Some("+") } else { None }; + + // Not *quite* the same, depending on the type... + let alt = c_alt; + + let width = match self.width { + Some(Num::Next) => { + // NOTE: Rust doesn't support this. + return None; + } + w @ Some(Num::Arg(_)) => w, + w @ Some(Num::Num(_)) => w, + None => None, + }; + + let precision = self.precision; + + // NOTE: although length *can* have an effect, we can't duplicate the effect in Rust, so + // we just ignore it. + + let (type_, use_zero_fill, is_int) = match self.type_ { + "d" | "i" | "u" => (None, true, true), + "f" | "F" => (None, false, false), + "s" | "c" => (None, false, false), + "e" | "E" => (Some(self.type_), true, false), + "x" | "X" | "o" => (Some(self.type_), true, true), + "p" => (Some(self.type_), false, true), + "g" => (Some("e"), true, false), + "G" => (Some("E"), true, false), + _ => return None, + }; + + let (fill, width, precision) = match (is_int, width, precision) { + (true, Some(_), Some(_)) => { + // Rust can't duplicate this insanity. + return None; + }, + (true, None, Some(p)) => (Some("0"), Some(p), None), + (true, w, None) => (fill, w, None), + (false, w, p) => (fill, w, p), + }; + + let align = match (self.type_, width.is_some(), align.is_some()) { + ("s", true, false) => Some(">"), + _ => align, + }; + + let (fill, zero_fill) = match (fill, use_zero_fill) { + (Some("0"), true) => (None, true), + (fill, _) => (fill, false), + }; + + let alt = match type_ { + Some("x") | Some("X") => alt, + _ => false, + }; + + let has_options = fill.is_some() + || align.is_some() + || sign.is_some() + || alt + || zero_fill + || width.is_some() + || precision.is_some() + || type_.is_some() + ; + + // Initialise with a rough guess. 
+ let cap = self.span.len() + if has_options { 2 } else { 0 }; + let mut s = String::with_capacity(cap); + + s.push_str("{"); + + if let Some(arg) = self.parameter { + try_opt!(write!(s, "{}", try_opt!(arg.checked_sub(1))).ok()); + } + + if has_options { + s.push_str(":"); + + let align = if let Some(fill) = fill { + s.push_str(fill); + align.or(Some(">")) + } else { + align + }; + + if let Some(align) = align { + s.push_str(align); + } + + if let Some(sign) = sign { + s.push_str(sign); + } + + if alt { + s.push_str("#"); + } + + if zero_fill { + s.push_str("0"); + } + + if let Some(width) = width { + try_opt!(width.translate(&mut s).ok()); + } + + if let Some(precision) = precision { + s.push_str("."); + try_opt!(precision.translate(&mut s).ok()); + } + + if let Some(type_) = type_ { + s.push_str(type_); + } + } + + s.push_str("}"); + Some(s) + } + } + + /// A general number used in a `printf` formatting directive. + #[derive(Copy, Clone, Eq, PartialEq, Debug)] + pub enum Num { + // The range of these values is technically bounded by `NL_ARGMAX`... but, at least for GNU + // libc, it apparently has no real fixed limit. A `u16` is used here on the basis that it + // is *vanishingly* unlikely that *anyone* is going to try formatting something wider, or + // with more precision, than 32 thousand positions which is so wide it couldn't possibly fit + // on a screen. + + /// A specific, fixed value. + Num(u16), + /// The value is derived from a positional argument. + Arg(u16), + /// The value is derived from the "next" unconverted argument. + Next, + } + + impl Num { + fn from_str(s: &str, arg: Option<&str>) -> Self { + if let Some(arg) = arg { + Num::Arg(arg.parse().expect(&format!("invalid format arg `{:?}`", arg))) + } else if s == "*" { + Num::Next + } else { + Num::Num(s.parse().expect(&format!("invalid format num `{:?}`", s))) + } + } + + fn translate(&self, s: &mut String) -> ::std::fmt::Result { + use std::fmt::Write; + match *self { + Num::Num(n) => write!(s, "{}", n), + Num::Arg(n) => { + let n = try!(n.checked_sub(1).ok_or(::std::fmt::Error)); + write!(s, "{}$", n) + }, + Num::Next => write!(s, "*"), + } + } + } + + /// Returns an iterator over all substitutions in a given string. + pub fn iter_subs(s: &str) -> Substitutions { + Substitutions { + s: s, + } + } + + /// Iterator over substitutions in a string. + pub struct Substitutions<'a> { + s: &'a str, + } + + impl<'a> Iterator for Substitutions<'a> { + type Item = Substitution<'a>; + fn next(&mut self) -> Option { + match parse_next_substitution(self.s) { + Some((sub, tail)) => { + self.s = tail; + Some(sub) + }, + None => None, + } + } + } + + enum State { + Start, + Flags, + Width, + WidthArg, + Prec, + PrecInner, + Length, + Type, + } + + /// Parse the next substitution from the input string. + pub fn parse_next_substitution(s: &str) -> Option<(Substitution, &str)> { + use self::State::*; + + let at = { + let start = try_opt!(s.find('%')); + match s[start+1..].chars().next() { + Some('%') => return Some((Substitution::Escape, &s[start+2..])), + Some(_) => {/* fall-through */}, + None => return None, + } + + Cur::new_at_start(&s[start..]) + }; + + // This is meant to be a translation of the following regex: + // + // ```regex + // (?x) + // ^ % + // (?: (?P \d+) \$ )? + // (?P [-+ 0\#']* ) + // (?P \d+ | \* (?: (?P \d+) \$ )? )? + // (?: \. (?P \d+ | \* (?: (?P \d+) \$ )? ) )? + // (?P + // # Standard + // hh | h | ll | l | L | z | j | t + // + // # Other + // | I32 | I64 | I | q + // )? + // (?P . 
) + // ``` + + // Used to establish the full span at the end. + let start = at; + // The current position within the string. + let mut at = try_opt!(at.at_next_cp()); + // `c` is the next codepoint, `next` is a cursor after it. + let (mut c, mut next) = try_opt!(at.next_cp()); + + // Update `at`, `c`, and `next`, exiting if we're out of input. + macro_rules! move_to { + ($cur:expr) => { + { + at = $cur; + let (c_, next_) = try_opt!(at.next_cp()); + c = c_; + next = next_; + } + }; + } + + // Constructs a result when parsing fails. + // + // Note: `move` used to capture copies of the cursors as they are *now*. + let fallback = move || { + return Some(( + Substitution::Format(Format { + span: start.slice_between(next).unwrap(), + parameter: None, + flags: "", + width: None, + precision: None, + length: None, + type_: at.slice_between(next).unwrap(), + }), + next.slice_after() + )); + }; + + // Next parsing state. + let mut state = Start; + + // Sadly, Rust isn't *quite* smart enough to know these *must* be initialised by the end. + let mut parameter: Option = None; + let mut flags: &str = ""; + let mut width: Option = None; + let mut precision: Option = None; + let mut length: Option<&str> = None; + let mut type_: &str = ""; + let end: Cur; + + if let Start = state { + match c { + '1'...'9' => { + let end = at_next_cp_while(next, is_digit); + match end.next_cp() { + // Yes, this *is* the parameter. + Some(('$', end2)) => { + state = Flags; + parameter = Some(at.slice_between(end).unwrap().parse().unwrap()); + move_to!(end2); + }, + // Wait, no, actually, it's the width. + Some(_) => { + state = Prec; + parameter = None; + flags = ""; + width = Some(Num::from_str(at.slice_between(end).unwrap(), None)); + move_to!(end); + }, + // It's invalid, is what it is. + None => return fallback(), + } + }, + _ => { + state = Flags; + parameter = None; + move_to!(at); + } + } + } + + if let Flags = state { + let end = at_next_cp_while(at, is_flag); + state = Width; + flags = at.slice_between(end).unwrap(); + move_to!(end); + } + + if let Width = state { + match c { + '*' => { + state = WidthArg; + move_to!(next); + }, + '1' ... '9' => { + let end = at_next_cp_while(next, is_digit); + state = Prec; + width = Some(Num::from_str(at.slice_between(end).unwrap(), None)); + move_to!(end); + }, + _ => { + state = Prec; + width = None; + move_to!(at); + } + } + } + + if let WidthArg = state { + let end = at_next_cp_while(at, is_digit); + match end.next_cp() { + Some(('$', end2)) => { + state = Prec; + width = Some(Num::from_str("", Some(at.slice_between(end).unwrap()))); + move_to!(end2); + }, + _ => { + state = Prec; + width = Some(Num::Next); + move_to!(end); + } + } + } + + if let Prec = state { + match c { + '.' => { + state = PrecInner; + move_to!(next); + }, + _ => { + state = Length; + precision = None; + move_to!(at); + } + } + } + + if let PrecInner = state { + match c { + '*' => { + let end = at_next_cp_while(next, is_digit); + match end.next_cp() { + Some(('$', end2)) => { + state = Length; + precision = Some(Num::from_str("*", next.slice_between(end))); + move_to!(end2); + }, + _ => { + state = Length; + precision = Some(Num::Next); + move_to!(end); + } + } + }, + '0' ... 
'9' => { + let end = at_next_cp_while(next, is_digit); + state = Length; + precision = Some(Num::from_str(at.slice_between(end).unwrap(), None)); + move_to!(end); + }, + _ => return fallback(), + } + } + + if let Length = state { + let c1_next1 = next.next_cp(); + match (c, c1_next1) { + ('h', Some(('h', next1))) + | ('l', Some(('l', next1))) + => { + state = Type; + length = Some(at.slice_between(next1).unwrap()); + move_to!(next1); + }, + + ('h', _) | ('l', _) | ('L', _) + | ('z', _) | ('j', _) | ('t', _) + | ('q', _) + => { + state = Type; + length = Some(at.slice_between(next).unwrap()); + move_to!(next); + }, + + ('I', _) => { + let end = next.at_next_cp() + .and_then(|end| end.at_next_cp()) + .map(|end| (next.slice_between(end).unwrap(), end)); + let end = match end { + Some(("32", end)) => end, + Some(("64", end)) => end, + _ => next + }; + state = Type; + length = Some(at.slice_between(end).unwrap()); + move_to!(end); + }, + + _ => { + state = Type; + length = None; + move_to!(at); + } + } + } + + if let Type = state { + drop(c); + type_ = at.slice_between(next).unwrap(); + + // Don't use `move_to!` here, as we *can* be at the end of the input. + at = next; + } + + drop(c); + drop(next); + + end = at; + + let f = Format { + span: start.slice_between(end).unwrap(), + parameter: parameter, + flags: flags, + width: width, + precision: precision, + length: length, + type_: type_, + }; + Some((Substitution::Format(f), end.slice_after())) + } + + fn at_next_cp_while(mut cur: Cur, mut pred: F) -> Cur + where F: FnMut(char) -> bool { + loop { + match cur.next_cp() { + Some((c, next)) => if pred(c) { + cur = next; + } else { + return cur; + }, + None => return cur, + } + } + } + + fn is_digit(c: char) -> bool { + match c { + '0' ... '9' => true, + _ => false + } + } + + fn is_flag(c: char) -> bool { + match c { + '0' | '-' | '+' | ' ' | '#' | '\'' => true, + _ => false + } + } + + #[cfg(test)] + mod tests { + use super::{ + Format as F, + Num as N, + Substitution as S, + iter_subs, + parse_next_substitution as pns, + }; + + macro_rules! assert_eq_pnsat { + ($lhs:expr, $rhs:expr) => { + assert_eq!( + pns($lhs).and_then(|(s, _)| s.translate()), + $rhs.map(>::from) + ) + }; + } + + #[test] + fn test_escape() { + assert_eq!(pns("has no escapes"), None); + assert_eq!(pns("has no escapes, either %"), None); + assert_eq!(pns("*so* has a %% escape"), Some((S::Escape," escape"))); + assert_eq!(pns("%% leading escape"), Some((S::Escape, " leading escape"))); + assert_eq!(pns("trailing escape %%"), Some((S::Escape, ""))); + } + + #[test] + fn test_parse() { + macro_rules! assert_pns_eq_sub { + ($in_:expr, { + $param:expr, $flags:expr, + $width:expr, $prec:expr, $len:expr, $type_:expr, + }) => { + assert_eq!( + pns(concat!($in_, "!")), + Some(( + S::Format(F { + span: $in_, + parameter: $param, + flags: $flags, + width: $width, + precision: $prec, + length: $len, + type_: $type_, + }), + "!" 
+ )) + ) + }; + } + + assert_pns_eq_sub!("%!", + { None, "", None, None, None, "!", }); + assert_pns_eq_sub!("%c", + { None, "", None, None, None, "c", }); + assert_pns_eq_sub!("%s", + { None, "", None, None, None, "s", }); + assert_pns_eq_sub!("%06d", + { None, "0", Some(N::Num(6)), None, None, "d", }); + assert_pns_eq_sub!("%4.2f", + { None, "", Some(N::Num(4)), Some(N::Num(2)), None, "f", }); + assert_pns_eq_sub!("%#x", + { None, "#", None, None, None, "x", }); + assert_pns_eq_sub!("%-10s", + { None, "-", Some(N::Num(10)), None, None, "s", }); + assert_pns_eq_sub!("%*s", + { None, "", Some(N::Next), None, None, "s", }); + assert_pns_eq_sub!("%-10.*s", + { None, "-", Some(N::Num(10)), Some(N::Next), None, "s", }); + assert_pns_eq_sub!("%-*.*s", + { None, "-", Some(N::Next), Some(N::Next), None, "s", }); + assert_pns_eq_sub!("%.6i", + { None, "", None, Some(N::Num(6)), None, "i", }); + assert_pns_eq_sub!("%+i", + { None, "+", None, None, None, "i", }); + assert_pns_eq_sub!("%08X", + { None, "0", Some(N::Num(8)), None, None, "X", }); + assert_pns_eq_sub!("%lu", + { None, "", None, None, Some("l"), "u", }); + assert_pns_eq_sub!("%Iu", + { None, "", None, None, Some("I"), "u", }); + assert_pns_eq_sub!("%I32u", + { None, "", None, None, Some("I32"), "u", }); + assert_pns_eq_sub!("%I64u", + { None, "", None, None, Some("I64"), "u", }); + assert_pns_eq_sub!("%'d", + { None, "'", None, None, None, "d", }); + assert_pns_eq_sub!("%10s", + { None, "", Some(N::Num(10)), None, None, "s", }); + assert_pns_eq_sub!("%-10.10s", + { None, "-", Some(N::Num(10)), Some(N::Num(10)), None, "s", }); + assert_pns_eq_sub!("%1$d", + { Some(1), "", None, None, None, "d", }); + assert_pns_eq_sub!("%2$.*3$d", + { Some(2), "", None, Some(N::Arg(3)), None, "d", }); + assert_pns_eq_sub!("%1$*2$.*3$d", + { Some(1), "", Some(N::Arg(2)), Some(N::Arg(3)), None, "d", }); + assert_pns_eq_sub!("%-8ld", + { None, "-", Some(N::Num(8)), None, Some("l"), "d", }); + } + + #[test] + fn test_iter() { + let s = "The %d'th word %% is: `%.*s` %!\n"; + let subs: Vec<_> = iter_subs(s).map(|sub| sub.translate()).collect(); + assert_eq!( + subs.iter().map(|ms| ms.as_ref().map(|s| &s[..])).collect::>(), + vec![Some("{}"), None, Some("{:.*}"), None] + ); + } + + /// Check that the translations are what we expect. 
+ #[test] + fn test_trans() { + assert_eq_pnsat!("%c", Some("{}")); + assert_eq_pnsat!("%d", Some("{}")); + assert_eq_pnsat!("%u", Some("{}")); + assert_eq_pnsat!("%x", Some("{:x}")); + assert_eq_pnsat!("%X", Some("{:X}")); + assert_eq_pnsat!("%e", Some("{:e}")); + assert_eq_pnsat!("%E", Some("{:E}")); + assert_eq_pnsat!("%f", Some("{}")); + assert_eq_pnsat!("%g", Some("{:e}")); + assert_eq_pnsat!("%G", Some("{:E}")); + assert_eq_pnsat!("%s", Some("{}")); + assert_eq_pnsat!("%p", Some("{:p}")); + + assert_eq_pnsat!("%06d", Some("{:06}")); + assert_eq_pnsat!("%4.2f", Some("{:4.2}")); + assert_eq_pnsat!("%#x", Some("{:#x}")); + assert_eq_pnsat!("%-10s", Some("{:<10}")); + assert_eq_pnsat!("%*s", None); + assert_eq_pnsat!("%-10.*s", Some("{:<10.*}")); + assert_eq_pnsat!("%-*.*s", None); + assert_eq_pnsat!("%.6i", Some("{:06}")); + assert_eq_pnsat!("%+i", Some("{:+}")); + assert_eq_pnsat!("%08X", Some("{:08X}")); + assert_eq_pnsat!("%lu", Some("{}")); + assert_eq_pnsat!("%Iu", Some("{}")); + assert_eq_pnsat!("%I32u", Some("{}")); + assert_eq_pnsat!("%I64u", Some("{}")); + assert_eq_pnsat!("%'d", None); + assert_eq_pnsat!("%10s", Some("{:>10}")); + assert_eq_pnsat!("%-10.10s", Some("{:<10.10}")); + assert_eq_pnsat!("%1$d", Some("{0}")); + assert_eq_pnsat!("%2$.*3$d", Some("{1:02$}")); + assert_eq_pnsat!("%1$*2$.*3$s", Some("{0:>1$.2$}")); + assert_eq_pnsat!("%-8ld", Some("{:<8}")); + } + } +} + +pub mod shell { + use super::strcursor::StrCursor as Cur; + + #[derive(Clone, Eq, PartialEq, Debug)] + pub enum Substitution<'a> { + Ordinal(u8), + Name(&'a str), + Escape, + } + + impl<'a> Substitution<'a> { + pub fn as_str(&self) -> String { + match *self { + Substitution::Ordinal(n) => format!("${}", n), + Substitution::Name(n) => format!("${}", n), + Substitution::Escape => "$$".into(), + } + } + + pub fn translate(&self) -> Option { + match *self { + Substitution::Ordinal(n) => Some(format!("{{{}}}", n)), + Substitution::Name(n) => Some(format!("{{{}}}", n)), + Substitution::Escape => None, + } + } + } + + /// Returns an iterator over all substitutions in a given string. + pub fn iter_subs(s: &str) -> Substitutions { + Substitutions { + s: s, + } + } + + /// Iterator over substitutions in a string. + pub struct Substitutions<'a> { + s: &'a str, + } + + impl<'a> Iterator for Substitutions<'a> { + type Item = Substitution<'a>; + fn next(&mut self) -> Option { + match parse_next_substitution(self.s) { + Some((sub, tail)) => { + self.s = tail; + Some(sub) + }, + None => None, + } + } + } + + /// Parse the next substitution from the input string. + pub fn parse_next_substitution(s: &str) -> Option<(Substitution, &str)> { + let at = { + let start = try_opt!(s.find('$')); + match s[start+1..].chars().next() { + Some('$') => return Some((Substitution::Escape, &s[start+2..])), + Some(c @ '0' ... 
'9') => { + let n = (c as u8) - b'0'; + return Some((Substitution::Ordinal(n), &s[start+2..])); + }, + Some(_) => {/* fall-through */}, + None => return None, + } + + Cur::new_at_start(&s[start..]) + }; + + let at = try_opt!(at.at_next_cp()); + match at.next_cp() { + Some((c, inner)) => { + if !is_ident_head(c) { + None + } else { + let end = at_next_cp_while(inner, is_ident_tail); + Some((Substitution::Name(at.slice_between(end).unwrap()), end.slice_after())) + } + }, + _ => None + } + } + + fn at_next_cp_while(mut cur: Cur, mut pred: F) -> Cur + where F: FnMut(char) -> bool { + loop { + match cur.next_cp() { + Some((c, next)) => if pred(c) { + cur = next; + } else { + return cur; + }, + None => return cur, + } + } + } + + fn is_ident_head(c: char) -> bool { + match c { + 'a' ... 'z' | 'A' ... 'Z' | '_' => true, + _ => false + } + } + + fn is_ident_tail(c: char) -> bool { + match c { + '0' ... '9' => true, + c => is_ident_head(c) + } + } + + #[cfg(test)] + mod tests { + use super::{ + Substitution as S, + parse_next_substitution as pns, + }; + + macro_rules! assert_eq_pnsat { + ($lhs:expr, $rhs:expr) => { + assert_eq!( + pns($lhs).and_then(|(f, _)| f.translate()), + $rhs.map(>::from) + ) + }; + } + + #[test] + fn test_escape() { + assert_eq!(pns("has no escapes"), None); + assert_eq!(pns("has no escapes, either $"), None); + assert_eq!(pns("*so* has a $$ escape"), Some((S::Escape, " escape"))); + assert_eq!(pns("$$ leading escape"), Some((S::Escape, " leading escape"))); + assert_eq!(pns("trailing escape $$"), Some((S::Escape, ""))); + } + + #[test] + fn test_parse() { + macro_rules! assert_pns_eq_sub { + ($in_:expr, $kind:ident($arg:expr)) => { + assert_eq!(pns(concat!($in_, "!")), Some((S::$kind($arg.into()), "!"))) + }; + } + + assert_pns_eq_sub!("$0", Ordinal(0)); + assert_pns_eq_sub!("$1", Ordinal(1)); + assert_pns_eq_sub!("$9", Ordinal(9)); + assert_pns_eq_sub!("$N", Name("N")); + assert_pns_eq_sub!("$NAME", Name("NAME")); + } + + #[test] + fn test_iter() { + use super::iter_subs; + let s = "The $0'th word $$ is: `$WORD` $!\n"; + let subs: Vec<_> = iter_subs(s).map(|sub| sub.translate()).collect(); + assert_eq!( + subs.iter().map(|ms| ms.as_ref().map(|s| &s[..])).collect::>(), + vec![Some("{0}"), None, Some("{WORD}")] + ); + } + + #[test] + fn test_trans() { + assert_eq_pnsat!("$0", Some("{0}")); + assert_eq_pnsat!("$9", Some("{9}")); + assert_eq_pnsat!("$1", Some("{1}")); + assert_eq_pnsat!("$10", Some("{1}")); + assert_eq_pnsat!("$stuff", Some("{stuff}")); + assert_eq_pnsat!("$NAME", Some("{NAME}")); + assert_eq_pnsat!("$PREFIX/bin", Some("{PREFIX}")); + } + + } +} + +mod strcursor { + use std; + + pub struct StrCursor<'a> { + s: &'a str, + at: usize, + } + + impl<'a> StrCursor<'a> { + pub fn new_at_start(s: &'a str) -> StrCursor<'a> { + StrCursor { + s: s, + at: 0, + } + } + + pub fn at_next_cp(mut self) -> Option> { + match self.try_seek_right_cp() { + true => Some(self), + false => None + } + } + + pub fn next_cp(mut self) -> Option<(char, StrCursor<'a>)> { + let cp = match self.cp_after() { + Some(cp) => cp, + None => return None, + }; + self.seek_right(cp.len_utf8()); + Some((cp, self)) + } + + fn slice_before(&self) -> &'a str { + &self.s[0..self.at] + } + + pub fn slice_after(&self) -> &'a str { + &self.s[self.at..] 
+ } + + pub fn slice_between(&self, until: StrCursor<'a>) -> Option<&'a str> { + if !str_eq_literal(self.s, until.s) { + None + } else { + use std::cmp::{max, min}; + let beg = min(self.at, until.at); + let end = max(self.at, until.at); + Some(&self.s[beg..end]) + } + } + + fn cp_after(&self) -> Option { + self.slice_after().chars().next() + } + + fn try_seek_right_cp(&mut self) -> bool { + match self.slice_after().chars().next() { + Some(c) => { + self.at += c.len_utf8(); + true + }, + None => false, + } + } + + fn seek_right(&mut self, bytes: usize) { + self.at += bytes; + } + } + + impl<'a> Copy for StrCursor<'a> {} + + impl<'a> Clone for StrCursor<'a> { + fn clone(&self) -> StrCursor<'a> { + *self + } + } + + impl<'a> std::fmt::Debug for StrCursor<'a> { + fn fmt(&self, fmt: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> { + write!(fmt, "StrCursor({:?} | {:?})", self.slice_before(), self.slice_after()) + } + } + + fn str_eq_literal(a: &str, b: &str) -> bool { + a.as_bytes().as_ptr() == b.as_bytes().as_ptr() + && a.len() == b.len() + } +} diff --git a/src/libsyntax_ext/lib.rs b/src/libsyntax_ext/lib.rs index e1542c9e46..74464ca531 100644 --- a/src/libsyntax_ext/lib.rs +++ b/src/libsyntax_ext/lib.rs @@ -19,8 +19,6 @@ html_root_url = "https://doc.rust-lang.org/nightly/")] #![cfg_attr(not(stage0), deny(warnings))] -#![cfg_attr(stage0, feature(dotdot_in_tuple_patterns))] -#![feature(proc_macro_lib)] #![feature(proc_macro_internals)] #![feature(rustc_private)] #![feature(staged_api)] @@ -40,6 +38,7 @@ mod concat; mod concat_idents; mod env; mod format; +mod format_foreign; mod log_syntax; mod trace_macros; @@ -52,7 +51,7 @@ use std::rc::Rc; use syntax::ast; use syntax::ext::base::{MacroExpanderFn, NormalTT, IdentTT, MultiModifier, NamedSyntaxExtension}; use syntax::ext::tt::macro_rules::MacroRulesExpander; -use syntax::parse::token::intern; +use syntax::symbol::Symbol; pub fn register_builtins(resolver: &mut syntax::ext::base::Resolver, user_exts: Vec, @@ -61,11 +60,11 @@ pub fn register_builtins(resolver: &mut syntax::ext::base::Resolver, resolver.add_ext(ast::Ident::with_empty_ctxt(name), Rc::new(ext)); }; - register(intern("macro_rules"), IdentTT(Box::new(MacroRulesExpander), None, false)); + register(Symbol::intern("macro_rules"), IdentTT(Box::new(MacroRulesExpander), None, false)); macro_rules! register { ($( $name:ident: $f:expr, )*) => { $( - register(intern(stringify!($name)), + register(Symbol::intern(stringify!($name)), NormalTT(Box::new($f as MacroExpanderFn), None, false)); )* } } @@ -111,9 +110,10 @@ pub fn register_builtins(resolver: &mut syntax::ext::base::Resolver, } // format_args uses `unstable` things internally. 
- register(intern("format_args"), NormalTT(Box::new(format::expand_format_args), None, true)); + register(Symbol::intern("format_args"), + NormalTT(Box::new(format::expand_format_args), None, true)); - register(intern("derive"), MultiModifier(Box::new(deriving::expand_derive))); + register(Symbol::intern("derive"), MultiModifier(Box::new(deriving::expand_derive))); for (name, ext) in user_exts { register(name, ext); diff --git a/src/libsyntax_ext/proc_macro_registrar.rs b/src/libsyntax_ext/proc_macro_registrar.rs index a8accd63dc..c93e2c054d 100644 --- a/src/libsyntax_ext/proc_macro_registrar.rs +++ b/src/libsyntax_ext/proc_macro_registrar.rs @@ -17,19 +17,19 @@ use syntax::ext::base::ExtCtxt; use syntax::ext::build::AstBuilder; use syntax::ext::expand::ExpansionConfig; use syntax::parse::ParseSess; -use syntax::parse::token::{self, InternedString}; -use syntax::feature_gate::Features; use syntax::fold::Folder; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::{Span, DUMMY_SP}; use syntax::visit::{self, Visitor}; use deriving; struct CustomDerive { - trait_name: InternedString, + trait_name: ast::Name, function_name: Ident, span: Span, + attrs: Vec, } struct CollectCustomDerives<'a> { @@ -37,41 +37,44 @@ struct CollectCustomDerives<'a> { in_root: bool, handler: &'a errors::Handler, is_proc_macro_crate: bool, + is_test_crate: bool, } pub fn modify(sess: &ParseSess, resolver: &mut ::syntax::ext::base::Resolver, mut krate: ast::Crate, is_proc_macro_crate: bool, + is_test_crate: bool, num_crate_types: usize, - handler: &errors::Handler, - features: &Features) -> ast::Crate { + handler: &errors::Handler) -> ast::Crate { let ecfg = ExpansionConfig::default("proc_macro".to_string()); let mut cx = ExtCtxt::new(sess, ecfg, resolver); - let mut collect = CollectCustomDerives { - derives: Vec::new(), - in_root: true, - handler: handler, - is_proc_macro_crate: is_proc_macro_crate, + let derives = { + let mut collect = CollectCustomDerives { + derives: Vec::new(), + in_root: true, + handler: handler, + is_proc_macro_crate: is_proc_macro_crate, + is_test_crate: is_test_crate, + }; + visit::walk_crate(&mut collect, &krate); + collect.derives }; - visit::walk_crate(&mut collect, &krate); if !is_proc_macro_crate { return krate - } else if !features.proc_macro { - let mut err = handler.struct_err("the `proc-macro` crate type is \ - experimental"); - err.help("add #![feature(proc_macro)] to the crate attributes to \ - enable"); - err.emit(); } if num_crate_types > 1 { handler.err("cannot mix `proc-macro` crate type with others"); } - krate.module.items.push(mk_registrar(&mut cx, &collect.derives)); + if is_test_crate { + return krate; + } + + krate.module.items.push(mk_registrar(&mut cx, &derives)); if krate.exported_macros.len() > 0 { handler.err("cannot export macro_rules! macros from a `proc-macro` \ @@ -95,8 +98,10 @@ impl<'a> CollectCustomDerives<'a> { } } -impl<'a> Visitor for CollectCustomDerives<'a> { - fn visit_item(&mut self, item: &ast::Item) { +impl<'a> Visitor<'a> for CollectCustomDerives<'a> { + fn visit_item(&mut self, item: &'a ast::Item) { + let mut attrs = item.attrs.iter().filter(|a| a.check_name("proc_macro_derive")); + // First up, make sure we're checking a bare function. If we're not then // we're just not interested in this item. // @@ -106,10 +111,7 @@ impl<'a> Visitor for CollectCustomDerives<'a> { ast::ItemKind::Fn(..) 
=> {} _ => { // Check for invalid use of proc_macro_derive - let attr = item.attrs.iter() - .filter(|a| a.check_name("proc_macro_derive")) - .next(); - if let Some(attr) = attr { + if let Some(attr) = attrs.next() { self.handler.span_err(attr.span(), "the `#[proc_macro_derive]` \ attribute may only be used \ @@ -121,8 +123,6 @@ impl<'a> Visitor for CollectCustomDerives<'a> { } } - let mut attrs = item.attrs.iter() - .filter(|a| a.check_name("proc_macro_derive")); let attr = match attrs.next() { Some(attr) => attr, None => { @@ -136,6 +136,10 @@ impl<'a> Visitor for CollectCustomDerives<'a> { attributes found"); } + if self.is_test_crate { + return; + } + if !self.is_proc_macro_crate { self.handler.span_err(attr.span(), "the `#[proc_macro_derive]` attribute is \ @@ -144,7 +148,8 @@ impl<'a> Visitor for CollectCustomDerives<'a> { } // Once we've located the `#[proc_macro_derive]` attribute, verify - // that it's of the form `#[proc_macro_derive(Foo)]` + // that it's of the form `#[proc_macro_derive(Foo)]` or + // `#[proc_macro_derive(Foo, attributes(A, ..))]` let list = match attr.meta_item_list() { Some(list) => list, None => { @@ -154,49 +159,84 @@ impl<'a> Visitor for CollectCustomDerives<'a> { return } }; - if list.len() != 1 { + if list.len() != 1 && list.len() != 2 { self.handler.span_err(attr.span(), - "attribute must only have one argument"); + "attribute must have either one or two arguments"); return } - let attr = &list[0]; - let trait_name = match attr.name() { + let trait_attr = &list[0]; + let attributes_attr = list.get(1); + let trait_name = match trait_attr.name() { Some(name) => name, _ => { - self.handler.span_err(attr.span(), "not a meta item"); + self.handler.span_err(trait_attr.span(), "not a meta item"); return } }; - if !attr.is_word() { - self.handler.span_err(attr.span(), "must only be one word"); + if !trait_attr.is_word() { + self.handler.span_err(trait_attr.span(), "must only be one word"); } - if deriving::is_builtin_trait(&trait_name) { - self.handler.span_err(attr.span(), + if deriving::is_builtin_trait(trait_name) { + self.handler.span_err(trait_attr.span(), "cannot override a built-in #[derive] mode"); } if self.derives.iter().any(|d| d.trait_name == trait_name) { - self.handler.span_err(attr.span(), + self.handler.span_err(trait_attr.span(), "derive mode defined twice in this crate"); } - if self.in_root { + let proc_attrs: Vec<_> = if let Some(attr) = attributes_attr { + if !attr.check_name("attributes") { + self.handler.span_err(attr.span(), "second argument must be `attributes`") + } + attr.meta_item_list().unwrap_or_else(|| { + self.handler.span_err(attr.span(), + "attribute must be of form: \ + `attributes(foo, bar)`"); + &[] + }).into_iter().filter_map(|attr| { + let name = match attr.name() { + Some(name) => name, + _ => { + self.handler.span_err(attr.span(), "not a meta item"); + return None; + }, + }; + + if !attr.is_word() { + self.handler.span_err(attr.span(), "must only be one word"); + return None; + } + + Some(name) + }).collect() + } else { + Vec::new() + }; + + if self.in_root && item.vis == ast::Visibility::Public { self.derives.push(CustomDerive { span: item.span, trait_name: trait_name, function_name: item.ident, + attrs: proc_attrs, }); } else { - let msg = "functions tagged with `#[proc_macro_derive]` must \ - currently reside in the root of the crate"; + let msg = if !self.in_root { + "functions tagged with `#[proc_macro_derive]` must \ + currently reside in the root of the crate" + } else { + "functions tagged with 
`#[proc_macro_derive]` must be `pub`" + }; self.handler.span_err(item.span, msg); } visit::walk_item(self, item); } - fn visit_mod(&mut self, m: &ast::Mod, _s: Span, id: NodeId) { + fn visit_mod(&mut self, m: &'a ast::Mod, _s: Span, id: NodeId) { let mut prev_in_root = self.in_root; if id != ast::CRATE_NODE_ID { prev_in_root = mem::replace(&mut self.in_root, false); @@ -219,8 +259,8 @@ impl<'a> Visitor for CollectCustomDerives<'a> { // // #[plugin_registrar] // fn registrar(registrar: &mut Registry) { -// registrar.register_custom_derive($name_trait1, ::$name1); -// registrar.register_custom_derive($name_trait2, ::$name2); +// registrar.register_custom_derive($name_trait1, ::$name1, &[]); +// registrar.register_custom_derive($name_trait2, ::$name2, &["attribute_name"]); // // ... // } // } @@ -229,34 +269,38 @@ fn mk_registrar(cx: &mut ExtCtxt, let eid = cx.codemap().record_expansion(ExpnInfo { call_site: DUMMY_SP, callee: NameAndSpan { - format: MacroAttribute(token::intern("proc_macro")), + format: MacroAttribute(Symbol::intern("proc_macro")), span: None, allow_internal_unstable: true, } }); let span = Span { expn_id: eid, ..DUMMY_SP }; - let proc_macro = token::str_to_ident("proc_macro"); + let proc_macro = Ident::from_str("proc_macro"); let krate = cx.item(span, proc_macro, Vec::new(), ast::ItemKind::ExternCrate(None)); - let __internal = token::str_to_ident("__internal"); - let registry = token::str_to_ident("Registry"); - let registrar = token::str_to_ident("registrar"); - let register_custom_derive = token::str_to_ident("register_custom_derive"); + let __internal = Ident::from_str("__internal"); + let registry = Ident::from_str("Registry"); + let registrar = Ident::from_str("registrar"); + let register_custom_derive = Ident::from_str("register_custom_derive"); let stmts = custom_derives.iter().map(|cd| { let path = cx.path_global(cd.span, vec![cd.function_name]); - let trait_name = cx.expr_str(cd.span, cd.trait_name.clone()); - (path, trait_name) - }).map(|(path, trait_name)| { + let trait_name = cx.expr_str(cd.span, cd.trait_name); + let attrs = cx.expr_vec_slice( + span, + cd.attrs.iter().map(|&s| cx.expr_str(cd.span, s)).collect::>() + ); + (path, trait_name, attrs) + }).map(|(path, trait_name, attrs)| { let registrar = cx.expr_ident(span, registrar); let ufcs_path = cx.path(span, vec![proc_macro, __internal, registry, register_custom_derive]); cx.expr_call(span, cx.expr_path(ufcs_path), - vec![registrar, trait_name, cx.expr_path(path)]) + vec![registrar, trait_name, cx.expr_path(path), attrs]) }).map(|expr| { cx.stmt_expr(expr) }).collect::>(); @@ -270,15 +314,14 @@ fn mk_registrar(cx: &mut ExtCtxt, cx.ty(span, ast::TyKind::Tup(Vec::new())), cx.block(span, stmts)); - let derive_registrar = token::intern_and_get_ident("rustc_derive_registrar"); - let derive_registrar = cx.meta_word(span, derive_registrar); + let derive_registrar = cx.meta_word(span, Symbol::intern("rustc_derive_registrar")); let derive_registrar = cx.attribute(span, derive_registrar); let func = func.map(|mut i| { i.attrs.push(derive_registrar); i.vis = ast::Visibility::Public; i }); - let ident = ast::Ident::with_empty_ctxt(token::gensym("registrar")); + let ident = ast::Ident::with_empty_ctxt(Symbol::gensym("registrar")); let module = cx.item_mod(span, span, ident, Vec::new(), vec![krate, func]).map(|mut i| { i.vis = ast::Visibility::Public; i diff --git a/src/libsyntax_ext/trace_macros.rs b/src/libsyntax_ext/trace_macros.rs index 9578af6810..48be8e0c53 100644 --- a/src/libsyntax_ext/trace_macros.rs +++ 
b/src/libsyntax_ext/trace_macros.rs @@ -11,7 +11,7 @@ use syntax::ext::base::ExtCtxt; use syntax::ext::base; use syntax::feature_gate; -use syntax::parse::token::keywords; +use syntax::symbol::keywords; use syntax_pos::Span; use syntax::tokenstream::TokenTree; diff --git a/src/libsyntax_pos/lib.rs b/src/libsyntax_pos/lib.rs index d99850332c..44067b3132 100644 --- a/src/libsyntax_pos/lib.rs +++ b/src/libsyntax_pos/lib.rs @@ -27,7 +27,6 @@ #![allow(unused_attributes)] #![feature(rustc_private)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(specialization)] use std::cell::{Cell, RefCell}; @@ -52,7 +51,7 @@ pub type FileName = String; /// able to use many of the functions on spans in codemap and you cannot assume /// that the length of the span = hi - lo; there may be space in the BytePos /// range between files. -#[derive(Clone, Copy, Hash, PartialEq, Eq)] +#[derive(Clone, Copy, Hash, PartialEq, Eq, Ord, PartialOrd)] pub struct Span { pub lo: BytePos, pub hi: BytePos, @@ -67,7 +66,7 @@ pub struct Span { /// the error, and would be rendered with `^^^`. /// - they can have a *label*. In this case, the label is written next /// to the mark in the snippet when we render. -#[derive(Clone, Debug, PartialEq, Eq)] +#[derive(Clone, Debug, Hash, PartialEq, Eq)] pub struct MultiSpan { primary_spans: Vec, span_labels: Vec<(Span, String)>, @@ -254,7 +253,7 @@ impl From for MultiSpan { } } -#[derive(PartialEq, Eq, Clone, Debug, Hash, RustcEncodable, RustcDecodable, Copy)] +#[derive(PartialEq, Eq, Clone, Debug, Hash, RustcEncodable, RustcDecodable, Copy, Ord, PartialOrd)] pub struct ExpnId(pub u32); pub const NO_EXPANSION: ExpnId = ExpnId(!0); diff --git a/src/libterm/lib.rs b/src/libterm/lib.rs index caef808f47..01daa93814 100644 --- a/src/libterm/lib.rs +++ b/src/libterm/lib.rs @@ -59,7 +59,6 @@ #![cfg_attr(windows, feature(libc))] // Handle rustfmt skips #![feature(custom_attribute)] -#![cfg_attr(stage0, feature(question_mark))] #![allow(unused_attributes)] use std::io::prelude::*; diff --git a/src/libterm/terminfo/searcher.rs b/src/libterm/terminfo/searcher.rs index 4b1df7d170..011d06b1c0 100644 --- a/src/libterm/terminfo/searcher.rs +++ b/src/libterm/terminfo/searcher.rs @@ -26,38 +26,34 @@ pub fn get_dbpath_for_term(term: &str) -> Option { }; // Find search directory - match env::var_os("TERMINFO") { - Some(dir) => dirs_to_search.push(PathBuf::from(dir)), - None => { - if let Some(mut homedir) = env::home_dir() { - // ncurses compatibility; - homedir.push(".terminfo"); - dirs_to_search.push(homedir) - } - match env::var("TERMINFO_DIRS") { - Ok(dirs) => { - for i in dirs.split(':') { - if i == "" { - dirs_to_search.push(PathBuf::from("/usr/share/terminfo")); - } else { - dirs_to_search.push(PathBuf::from(i)); - } - } - } - // Found nothing in TERMINFO_DIRS, use the default paths: - // According to /etc/terminfo/README, after looking at - // ~/.terminfo, ncurses will search /etc/terminfo, then - // /lib/terminfo, and eventually /usr/share/terminfo. - // On Haiku the database can be found at /boot/system/data/terminfo - Err(..) 
=> { - dirs_to_search.push(PathBuf::from("/etc/terminfo")); - dirs_to_search.push(PathBuf::from("/lib/terminfo")); - dirs_to_search.push(PathBuf::from("/usr/share/terminfo")); - dirs_to_search.push(PathBuf::from("/boot/system/data/terminfo")); - } + if let Some(dir) = env::var_os("TERMINFO") { + dirs_to_search.push(PathBuf::from(dir)); + } + + if let Ok(dirs) = env::var("TERMINFO_DIRS") { + for i in dirs.split(':') { + if i == "" { + dirs_to_search.push(PathBuf::from("/usr/share/terminfo")); + } else { + dirs_to_search.push(PathBuf::from(i)); } } - }; + } else { + // Found nothing in TERMINFO_DIRS, use the default paths: + // According to /etc/terminfo/README, after looking at + // ~/.terminfo, ncurses will search /etc/terminfo, then + // /lib/terminfo, and eventually /usr/share/terminfo. + // On Haiku the database can be found at /boot/system/data/terminfo + if let Some(mut homedir) = env::home_dir() { + homedir.push(".terminfo"); + dirs_to_search.push(homedir) + } + + dirs_to_search.push(PathBuf::from("/etc/terminfo")); + dirs_to_search.push(PathBuf::from("/lib/terminfo")); + dirs_to_search.push(PathBuf::from("/usr/share/terminfo")); + dirs_to_search.push(PathBuf::from("/boot/system/data/terminfo")); + } // Look for the terminal in all of the search directories for mut p in dirs_to_search { diff --git a/src/libtest/lib.rs b/src/libtest/lib.rs index 95ae6eb2ef..f5546b6aac 100644 --- a/src/libtest/lib.rs +++ b/src/libtest/lib.rs @@ -38,7 +38,6 @@ #![feature(rustc_private)] #![feature(set_stdio)] #![feature(staged_api)] -#![cfg_attr(stage0, feature(question_mark))] #![feature(panic_unwind)] extern crate getopts; @@ -75,9 +74,9 @@ const TEST_WARN_TIMEOUT_S: u64 = 60; // to be used by rustc to compile tests in libtest pub mod test { pub use {Bencher, TestName, TestResult, TestDesc, TestDescAndFn, TestOpts, TrFailed, - TrIgnored, TrOk, Metric, MetricMap, StaticTestFn, StaticTestName, DynTestName, - DynTestFn, run_test, test_main, test_main_static, filter_tests, parse_opts, - StaticBenchFn, ShouldPanic}; + TrFailedMsg, TrIgnored, TrOk, Metric, MetricMap, StaticTestFn, StaticTestName, + DynTestName, DynTestFn, run_test, test_main, test_main_static, filter_tests, + parse_opts, StaticBenchFn, ShouldPanic}; } pub mod stats; @@ -255,10 +254,16 @@ pub fn test_main(args: &[String], tests: Vec) { Some(Err(msg)) => panic!("{:?}", msg), None => return, }; - match run_tests_console(&opts, tests) { - Ok(true) => {} - Ok(false) => std::process::exit(101), - Err(e) => panic!("io error when running tests: {:?}", e), + if opts.list { + if let Err(e) = list_tests_console(&opts, tests) { + panic!("io error when listing tests: {:?}", e); + } + } else { + match run_tests_console(&opts, tests) { + Ok(true) => {} + Ok(false) => std::process::exit(101), + Err(e) => panic!("io error when running tests: {:?}", e), + } } } @@ -301,7 +306,9 @@ pub enum ColorConfig { } pub struct TestOpts { + pub list: bool, pub filter: Option, + pub filter_exact: bool, pub run_ignored: bool, pub run_tests: bool, pub bench_benchmarks: bool, @@ -317,7 +324,9 @@ impl TestOpts { #[cfg(test)] fn new() -> TestOpts { TestOpts { + list: false, filter: None, + filter_exact: false, run_ignored: false, run_tests: false, bench_benchmarks: false, @@ -339,6 +348,7 @@ fn optgroups() -> Vec { vec![getopts::optflag("", "ignored", "Run ignored tests"), getopts::optflag("", "test", "Run tests and not benchmarks"), getopts::optflag("", "bench", "Run benchmarks instead of tests"), + getopts::optflag("", "list", "List all tests and benchmarks"), 
getopts::optflag("h", "help", "Display this message (longer with --help)"), getopts::optopt("", "logfile", "Write logs to the specified file instead \ of stdout", "PATH"), @@ -349,6 +359,7 @@ fn optgroups() -> Vec { getopts::optmulti("", "skip", "Skip tests whose names contain FILTER (this flag can \ be used multiple times)","FILTER"), getopts::optflag("q", "quiet", "Display one character per test instead of one line"), + getopts::optflag("", "exact", "Exactly match filters rather than by substring"), getopts::optopt("", "color", "Configure coloring of output: auto = colorize if stdout is a tty and tests are run on serially (default); always = always colorize output; @@ -408,6 +419,8 @@ pub fn parse_opts(args: &[String]) -> Option { let run_ignored = matches.opt_present("ignored"); let quiet = matches.opt_present("quiet"); + let exact = matches.opt_present("exact"); + let list = matches.opt_present("list"); let logfile = matches.opt_str("logfile"); let logfile = logfile.map(|s| PathBuf::from(&s)); @@ -448,7 +461,9 @@ pub fn parse_opts(args: &[String]) -> Option { }; let test_opts = TestOpts { + list: list, filter: filter, + filter_exact: exact, run_ignored: run_ignored, run_tests: run_tests, bench_benchmarks: bench_benchmarks, @@ -473,6 +488,7 @@ pub struct BenchSamples { pub enum TestResult { TrOk, TrFailed, + TrFailedMsg(String), TrIgnored, TrMetrics(MetricMap), TrBench(BenchSamples), @@ -576,7 +592,8 @@ impl ConsoleTestState { } } - pub fn write_plain(&mut self, s: &str) -> io::Result<()> { + pub fn write_plain>(&mut self, s: S) -> io::Result<()> { + let s = s.as_ref(); match self.out { Pretty(ref mut term) => { term.write_all(s.as_bytes())?; @@ -611,7 +628,7 @@ impl ConsoleTestState { pub fn write_result(&mut self, result: &TestResult) -> io::Result<()> { match *result { TrOk => self.write_ok(), - TrFailed => self.write_failed(), + TrFailed | TrFailedMsg(_) => self.write_failed(), TrIgnored => self.write_ignored(), TrMetrics(ref mm) => { self.write_metric()?; @@ -630,24 +647,28 @@ impl ConsoleTestState { TEST_WARN_TIMEOUT_S)) } - pub fn write_log(&mut self, test: &TestDesc, result: &TestResult) -> io::Result<()> { + pub fn write_log>(&mut self, msg: S) -> io::Result<()> { + let msg = msg.as_ref(); match self.log_out { None => Ok(()), - Some(ref mut o) => { - let s = format!("{} {}\n", - match *result { - TrOk => "ok".to_owned(), - TrFailed => "failed".to_owned(), - TrIgnored => "ignored".to_owned(), - TrMetrics(ref mm) => mm.fmt_metrics(), - TrBench(ref bs) => fmt_bench_samples(bs), - }, - test.name); - o.write_all(s.as_bytes()) - } + Some(ref mut o) => o.write_all(msg.as_bytes()), } } + pub fn write_log_result(&mut self, test: &TestDesc, result: &TestResult) -> io::Result<()> { + self.write_log( + format!("{} {}\n", + match *result { + TrOk => "ok".to_owned(), + TrFailed => "failed".to_owned(), + TrFailedMsg(ref msg) => format!("failed: {}", msg), + TrIgnored => "ignored".to_owned(), + TrMetrics(ref mm) => mm.fmt_metrics(), + TrBench(ref bs) => fmt_bench_samples(bs), + }, + test.name)) + } + pub fn write_failures(&mut self) -> io::Result<()> { self.write_plain("\nfailures:\n")?; let mut failures = Vec::new(); @@ -740,6 +761,49 @@ pub fn fmt_bench_samples(bs: &BenchSamples) -> String { output } +// List the tests to console, and optionally to logfile. Filters are honored. 
+pub fn list_tests_console(opts: &TestOpts, tests: Vec) -> io::Result<()> { + let mut st = ConsoleTestState::new(opts, None::)?; + + let mut ntest = 0; + let mut nbench = 0; + let mut nmetric = 0; + + for test in filter_tests(&opts, tests) { + use TestFn::*; + + let TestDescAndFn { desc: TestDesc { name, .. }, testfn } = test; + + let fntype = match testfn { + StaticTestFn(..) | DynTestFn(..) => { ntest += 1; "test" }, + StaticBenchFn(..) | DynBenchFn(..) => { nbench += 1; "benchmark" }, + StaticMetricFn(..) | DynMetricFn(..) => { nmetric += 1; "metric" }, + }; + + st.write_plain(format!("{}: {}\n", name, fntype))?; + st.write_log(format!("{} {}\n", fntype, name))?; + } + + fn plural(count: u32, s: &str) -> String { + match count { + 1 => format!("{} {}", 1, s), + n => format!("{} {}s", n, s), + } + } + + if !opts.quiet { + if ntest != 0 || nbench != 0 || nmetric != 0 { + st.write_plain("\n")?; + } + st.write_plain(format!("{}, {}, {}\n", + plural(ntest, "test"), + plural(nbench, "benchmark"), + plural(nmetric, "metric")))?; + } + + Ok(()) +} + // A simple console test runner pub fn run_tests_console(opts: &TestOpts, tests: Vec) -> io::Result { @@ -749,7 +813,7 @@ pub fn run_tests_console(opts: &TestOpts, tests: Vec) -> io::Resu TeWait(ref test, padding) => st.write_test_start(test, padding), TeTimeout(ref test) => st.write_timeout(test), TeResult(test, result, stdout) => { - st.write_log(&test, &result)?; + st.write_log_result(&test, &result)?; st.write_result(&result)?; match result { TrOk => st.passed += 1, @@ -773,6 +837,14 @@ pub fn run_tests_console(opts: &TestOpts, tests: Vec) -> io::Resu st.failed += 1; st.failures.push((test, stdout)); } + TrFailedMsg(msg) => { + st.failed += 1; + let mut stdout = stdout; + stdout.extend_from_slice( + format!("note: {}", msg).as_bytes() + ); + st.failures.push((test, stdout)); + } } Ok(()) } @@ -1109,14 +1181,26 @@ pub fn filter_tests(opts: &TestOpts, tests: Vec) -> Vec filtered, Some(ref filter) => { filtered.into_iter() - .filter(|test| test.desc.name.as_slice().contains(&filter[..])) + .filter(|test| { + if opts.filter_exact { + test.desc.name.as_slice() == &filter[..] + } else { + test.desc.name.as_slice().contains(&filter[..]) + } + }) .collect() } }; // Skip tests that match any of the skip filters filtered = filtered.into_iter() - .filter(|t| !opts.skip.iter().any(|sf| t.desc.name.as_slice().contains(&sf[..]))) + .filter(|t| !opts.skip.iter().any(|sf| { + if opts.filter_exact { + t.desc.name.as_slice() == &sf[..] 
+ } else { + t.desc.name.as_slice().contains(&sf[..]) + } + })) .collect(); // Maybe pull out the ignored test and unignore them @@ -1270,12 +1354,16 @@ fn calc_result(desc: &TestDesc, task_result: Result<(), Box>) -> Tes match (&desc.should_panic, task_result) { (&ShouldPanic::No, Ok(())) | (&ShouldPanic::Yes, Err(_)) => TrOk, - (&ShouldPanic::YesWithMessage(msg), Err(ref err)) + (&ShouldPanic::YesWithMessage(msg), Err(ref err)) => if err.downcast_ref::() - .map(|e| &**e) - .or_else(|| err.downcast_ref::<&'static str>().map(|e| *e)) - .map(|e| e.contains(msg)) - .unwrap_or(false) => TrOk, + .map(|e| &**e) + .or_else(|| err.downcast_ref::<&'static str>().map(|e| *e)) + .map(|e| e.contains(msg)) + .unwrap_or(false) { + TrOk + } else { + TrFailedMsg(format!("Panic did not include expected string '{}'", msg)) + }, _ => TrFailed, } } @@ -1482,8 +1570,9 @@ pub mod bench { #[cfg(test)] mod tests { - use test::{TrFailed, TrIgnored, TrOk, filter_tests, parse_opts, TestDesc, TestDescAndFn, - TestOpts, run_test, MetricMap, StaticTestName, DynTestName, DynTestFn, ShouldPanic}; + use test::{TrFailed, TrFailedMsg, TrIgnored, TrOk, filter_tests, parse_opts, TestDesc, + TestDescAndFn, TestOpts, run_test, MetricMap, StaticTestName, DynTestName, + DynTestFn, ShouldPanic}; use std::sync::mpsc::channel; #[test] @@ -1565,18 +1654,20 @@ mod tests { fn f() { panic!("an error message"); } + let expected = "foobar"; + let failed_msg = "Panic did not include expected string"; let desc = TestDescAndFn { desc: TestDesc { name: StaticTestName("whatever"), ignore: false, - should_panic: ShouldPanic::YesWithMessage("foobar"), + should_panic: ShouldPanic::YesWithMessage(expected), }, testfn: DynTestFn(Box::new(move |()| f())), }; let (tx, rx) = channel(); run_test(&TestOpts::new(), false, desc, tx); let (_, res, _) = rx.recv().unwrap(); - assert!(res == TrFailed); + assert!(res == TrFailedMsg(format!("{} '{}'", failed_msg, expected))); } #[test] @@ -1638,6 +1729,77 @@ mod tests { assert!(!filtered[0].desc.ignore); } + #[test] + pub fn exact_filter_match() { + fn tests() -> Vec { + vec!["base", + "base::test", + "base::test1", + "base::test2", + ].into_iter() + .map(|name| TestDescAndFn { + desc: TestDesc { + name: StaticTestName(name), + ignore: false, + should_panic: ShouldPanic::No, + }, + testfn: DynTestFn(Box::new(move |()| {})) + }) + .collect() + } + + let substr = filter_tests(&TestOpts { + filter: Some("base".into()), + ..TestOpts::new() + }, tests()); + assert_eq!(substr.len(), 4); + + let substr = filter_tests(&TestOpts { + filter: Some("bas".into()), + ..TestOpts::new() + }, tests()); + assert_eq!(substr.len(), 4); + + let substr = filter_tests(&TestOpts { + filter: Some("::test".into()), + ..TestOpts::new() + }, tests()); + assert_eq!(substr.len(), 3); + + let substr = filter_tests(&TestOpts { + filter: Some("base::test".into()), + ..TestOpts::new() + }, tests()); + assert_eq!(substr.len(), 3); + + let exact = filter_tests(&TestOpts { + filter: Some("base".into()), + filter_exact: true, ..TestOpts::new() + }, tests()); + assert_eq!(exact.len(), 1); + + let exact = filter_tests(&TestOpts { + filter: Some("bas".into()), + filter_exact: true, + ..TestOpts::new() + }, tests()); + assert_eq!(exact.len(), 0); + + let exact = filter_tests(&TestOpts { + filter: Some("::test".into()), + filter_exact: true, + ..TestOpts::new() + }, tests()); + assert_eq!(exact.len(), 0); + + let exact = filter_tests(&TestOpts { + filter: Some("base::test".into()), + filter_exact: true, + ..TestOpts::new() + }, tests()); + 
assert_eq!(exact.len(), 1); + } + #[test] pub fn sort_tests() { let mut opts = TestOpts::new(); diff --git a/src/libunwind/Cargo.toml b/src/libunwind/Cargo.toml index b537c6b1b7..fbd9789d2f 100644 --- a/src/libunwind/Cargo.toml +++ b/src/libunwind/Cargo.toml @@ -8,6 +8,8 @@ build = "build.rs" name = "unwind" path = "lib.rs" test = false +bench = false +doc = false [dependencies] core = { path = "../libcore" } diff --git a/src/rustc/Cargo.toml b/src/rustc/Cargo.toml index 24499cb8f0..dce1a0a8ec 100644 --- a/src/rustc/Cargo.toml +++ b/src/rustc/Cargo.toml @@ -11,17 +11,6 @@ path = "rustc.rs" name = "rustdoc" path = "rustdoc.rs" -[profile.release] -opt-level = 2 -[profile.bench] -opt-level = 2 - -# These options are controlled from our rustc wrapper script, so turn them off -# here and have them controlled elsewhere. -[profile.dev] -debug = false -debug-assertions = false - # All optional dependencies so the features passed to this Cargo.toml select # what should actually be built. [dependencies] diff --git a/src/rustc/libc_shim/Cargo.toml b/src/rustc/libc_shim/Cargo.toml index 8fc713e0f1..39df3528be 100644 --- a/src/rustc/libc_shim/Cargo.toml +++ b/src/rustc/libc_shim/Cargo.toml @@ -16,6 +16,8 @@ build = "build.rs" name = "libc" path = "../../liblibc/src/lib.rs" test = false +bench = false +doc = false [dependencies] core = { path = "../../libcore" } diff --git a/src/rustc/std_shim/Cargo.toml b/src/rustc/std_shim/Cargo.toml index 58a7bd8a1c..1fa9177243 100644 --- a/src/rustc/std_shim/Cargo.toml +++ b/src/rustc/std_shim/Cargo.toml @@ -27,17 +27,7 @@ authors = ["The Rust Project Developers"] [lib] name = "std_shim" path = "lib.rs" - -[profile.release] -opt-level = 2 -[profile.bench] -opt-level = 2 - -# These options are controlled from our rustc wrapper script, so turn them off -# here and have them controlled elsewhere. -[profile.dev] -debug = false -debug-assertions = false +doc = false [dependencies] std = { path = "../../libstd" } @@ -45,6 +35,7 @@ core = { path = "../../libcore" } # Reexport features from std [features] -jemalloc = ["std/jemalloc"] -debug-jemalloc = ["std/debug-jemalloc"] backtrace = ["std/backtrace"] +debug-jemalloc = ["std/debug-jemalloc"] +jemalloc = ["std/jemalloc"] +panic-unwind = ["std/panic-unwind"] diff --git a/src/rustc/test_shim/Cargo.toml b/src/rustc/test_shim/Cargo.toml index 87f2ccd51e..ac7842770f 100644 --- a/src/rustc/test_shim/Cargo.toml +++ b/src/rustc/test_shim/Cargo.toml @@ -12,16 +12,5 @@ authors = ["The Rust Project Developers"] name = "test_shim" path = "lib.rs" -[profile.release] -opt-level = 2 -[profile.bench] -opt-level = 2 - -# These options are controlled from our rustc wrapper script, so turn them off -# here and have them controlled elsewhere. 
-[profile.dev] -debug = false -debug-assertions = false - [dependencies] test = { path = "../../libtest" } diff --git a/src/rustllvm/ArchiveWrapper.cpp b/src/rustllvm/ArchiveWrapper.cpp index 12cd81ec70..c7f426fbfa 100644 --- a/src/rustllvm/ArchiveWrapper.cpp +++ b/src/rustllvm/ArchiveWrapper.cpp @@ -37,6 +37,8 @@ struct RustArchiveIterator { Archive::child_iterator end; #if LLVM_VERSION_GE(3, 9) Error err; + + RustArchiveIterator() : err(Error::success()) { } #endif }; @@ -163,9 +165,20 @@ LLVMRustArchiveIteratorFree(LLVMRustArchiveIteratorRef rai) { extern "C" const char* LLVMRustArchiveChildName(LLVMRustArchiveChildConstRef child, size_t *size) { +#if LLVM_VERSION_GE(4, 0) + Expected name_or_err = child->getName(); + if (!name_or_err) { + // rustc_llvm currently doesn't use this error string, but it might be useful + // in the future, and in the meantime this tells LLVM that the error was + // not ignored and that it shouldn't abort the process. + LLVMRustSetLastError(toString(name_or_err.takeError()).c_str()); + return NULL; + } +#else ErrorOr name_or_err = child->getName(); if (name_or_err.getError()) return NULL; +#endif StringRef name = name_or_err.get(); *size = name.size(); return name.data(); @@ -174,11 +187,19 @@ LLVMRustArchiveChildName(LLVMRustArchiveChildConstRef child, size_t *size) { extern "C" const char* LLVMRustArchiveChildData(LLVMRustArchiveChildRef child, size_t *size) { StringRef buf; +#if LLVM_VERSION_GE(4, 0) + Expected buf_or_err = child->getBuffer(); + if (!buf_or_err) { + LLVMRustSetLastError(toString(buf_or_err.takeError()).c_str()); + return NULL; + } +#else ErrorOr buf_or_err = child->getBuffer(); if (buf_or_err.getError()) { LLVMRustSetLastError(buf_or_err.getError().message().c_str()); return NULL; } +#endif buf = buf_or_err.get(); *size = buf.size(); return buf.data(); diff --git a/src/rustllvm/PassWrapper.cpp b/src/rustllvm/PassWrapper.cpp index 60093e9bd3..c45d1c2d08 100644 --- a/src/rustllvm/PassWrapper.cpp +++ b/src/rustllvm/PassWrapper.cpp @@ -22,6 +22,9 @@ #include "llvm/Target/TargetSubtargetInfo.h" #include "llvm/Transforms/IPO/PassManagerBuilder.h" +#if LLVM_VERSION_GE(4, 0) +#include "llvm/Transforms/IPO/AlwaysInliner.h" +#endif #include "llvm-c/Transforms/PassManagerBuilder.h" @@ -137,13 +140,20 @@ LLVMRustAddPass(LLVMPassManagerRef PM, LLVMPassRef rust_pass) { #define SUBTARGET_SYSTEMZ #endif +#ifdef LLVM_COMPONENT_MSP430 +#define SUBTARGET_MSP430 SUBTARGET(MSP430) +#else +#define SUBTARGET_MSP430 +#endif + #define GEN_SUBTARGETS \ SUBTARGET_X86 \ SUBTARGET_ARM \ SUBTARGET_AARCH64 \ SUBTARGET_MIPS \ SUBTARGET_PPC \ - SUBTARGET_SYSTEMZ + SUBTARGET_SYSTEMZ \ + SUBTARGET_MSP430 #define SUBTARGET(x) namespace llvm { \ extern const SubtargetFeatureKV x##FeatureKV[]; \ @@ -519,10 +529,22 @@ LLVMRustPrintPasses() { LLVMInitializePasses(); struct MyListener : PassRegistrationListener { void passEnumerate(const PassInfo *info) { +#if LLVM_VERSION_GE(4, 0) + StringRef PassArg = info->getPassArgument(); + StringRef PassName = info->getPassName(); + if (!PassArg.empty()) { + // These unsigned->signed casts could theoretically overflow, but + // realistically never will (and even if, the result is implementation + // defined rather than plain UB).
+ printf("%15.*s - %.*s\n", (int)PassArg.size(), PassArg.data(), + (int)PassName.size(), PassName.data()); + } +#else if (info->getPassArgument() && *info->getPassArgument()) { printf("%15s - %s\n", info->getPassArgument(), info->getPassName()); } +#endif } } listener; @@ -532,7 +554,11 @@ LLVMRustPrintPasses() { extern "C" void LLVMRustAddAlwaysInlinePass(LLVMPassManagerBuilderRef PMB, bool AddLifetimes) { +#if LLVM_VERSION_GE(4, 0) + unwrap(PMB)->Inliner = llvm::createAlwaysInlinerLegacyPass(AddLifetimes); +#else unwrap(PMB)->Inliner = createAlwaysInlinerPass(AddLifetimes); +#endif } extern "C" void diff --git a/src/rustllvm/RustWrapper.cpp b/src/rustllvm/RustWrapper.cpp index 369388caa0..f5fa66f1b0 100644 --- a/src/rustllvm/RustWrapper.cpp +++ b/src/rustllvm/RustWrapper.cpp @@ -109,37 +109,82 @@ extern "C" LLVMTypeRef LLVMRustMetadataTypeInContext(LLVMContextRef C) { return wrap(Type::getMetadataTy(*unwrap(C))); } -extern "C" void LLVMRustAddCallSiteAttribute(LLVMValueRef Instr, unsigned index, uint64_t Val) { +static Attribute::AttrKind +from_rust(LLVMRustAttribute kind) { + switch (kind) { + case AlwaysInline: + return Attribute::AlwaysInline; + case ByVal: + return Attribute::ByVal; + case Cold: + return Attribute::Cold; + case InlineHint: + return Attribute::InlineHint; + case MinSize: + return Attribute::MinSize; + case Naked: + return Attribute::Naked; + case NoAlias: + return Attribute::NoAlias; + case NoCapture: + return Attribute::NoCapture; + case NoInline: + return Attribute::NoInline; + case NonNull: + return Attribute::NonNull; + case NoRedZone: + return Attribute::NoRedZone; + case NoReturn: + return Attribute::NoReturn; + case NoUnwind: + return Attribute::NoUnwind; + case OptimizeForSize: + return Attribute::OptimizeForSize; + case ReadOnly: + return Attribute::ReadOnly; + case SExt: + return Attribute::SExt; + case StructRet: + return Attribute::StructRet; + case UWTable: + return Attribute::UWTable; + case ZExt: + return Attribute::ZExt; + default: + llvm_unreachable("bad AttributeKind"); + } +} + +extern "C" void LLVMRustAddCallSiteAttribute(LLVMValueRef Instr, unsigned index, LLVMRustAttribute attr) { CallSite Call = CallSite(unwrap(Instr)); - AttrBuilder B; - B.addRawValue(Val); + Attribute Attr = Attribute::get(Call->getContext(), from_rust(attr)); + AttrBuilder B(Attr); Call.setAttributes( Call.getAttributes().addAttributes(Call->getContext(), index, AttributeSet::get(Call->getContext(), index, B))); } - extern "C" void LLVMRustAddDereferenceableCallSiteAttr(LLVMValueRef Instr, - unsigned idx, - uint64_t b) + unsigned index, + uint64_t bytes) { CallSite Call = CallSite(unwrap(Instr)); AttrBuilder B; - B.addDereferenceableAttr(b); + B.addDereferenceableAttr(bytes); Call.setAttributes( - Call.getAttributes().addAttributes(Call->getContext(), idx, + Call.getAttributes().addAttributes(Call->getContext(), index, AttributeSet::get(Call->getContext(), - idx, B))); + index, B))); } extern "C" void LLVMRustAddFunctionAttribute(LLVMValueRef Fn, unsigned index, - uint64_t Val) + LLVMRustAttribute attr) { Function *A = unwrap(Fn); - AttrBuilder B; - B.addRawValue(Val); + Attribute Attr = Attribute::get(A->getContext(), from_rust(attr)); + AttrBuilder B(Attr); A->addAttributes(index, AttributeSet::get(A->getContext(), index, B)); } @@ -153,16 +198,6 @@ extern "C" void LLVMRustAddDereferenceableAttr(LLVMValueRef Fn, A->addAttributes(index, AttributeSet::get(A->getContext(), index, B)); } -extern "C" void LLVMRustAddFunctionAttrString(LLVMValueRef Fn, - unsigned index, - const 
char *Name) -{ - Function *F = unwrap(Fn); - AttrBuilder B; - B.addAttribute(Name); - F->addAttributes(index, AttributeSet::get(F->getContext(), index, B)); -} - extern "C" void LLVMRustAddFunctionAttrStringValue(LLVMValueRef Fn, unsigned index, const char *Name, @@ -175,31 +210,16 @@ extern "C" void LLVMRustAddFunctionAttrStringValue(LLVMValueRef Fn, extern "C" void LLVMRustRemoveFunctionAttributes(LLVMValueRef Fn, unsigned index, - uint64_t Val) + LLVMRustAttribute attr) { - Function *A = unwrap(Fn); - const AttributeSet PAL = A->getAttributes(); - AttrBuilder B(Val); + Function *F = unwrap(Fn); + const AttributeSet PAL = F->getAttributes(); + Attribute Attr = Attribute::get(F->getContext(), from_rust(attr)); + AttrBuilder B(Attr); const AttributeSet PALnew = - PAL.removeAttributes(A->getContext(), index, - AttributeSet::get(A->getContext(), index, B)); - A->setAttributes(PALnew); -} - -extern "C" void LLVMRustRemoveFunctionAttrString(LLVMValueRef fn, - unsigned index, - const char *Name) -{ - Function *f = unwrap(fn); - LLVMContext &C = f->getContext(); - AttrBuilder B; - B.addAttribute(Name); - AttributeSet to_remove = AttributeSet::get(C, index, B); - - AttributeSet attrs = f->getAttributes(); - f->setAttributes(attrs.removeAttributes(f->getContext(), - index, - to_remove)); + PAL.removeAttributes(F->getContext(), index, + AttributeSet::get(F->getContext(), index, B)); + F->setAttributes(PALnew); } // enable fpmath flag UnsafeAlgebra @@ -332,6 +352,91 @@ DIT* unwrapDIptr(LLVMRustMetadataRef ref) { #define DIArray DINodeArray #define unwrapDI unwrapDIptr +// These values **must** match debuginfo::DIFlags! They also *happen* +// to match LLVM, but that isn't required as we do giant sets of +// matching below. The value shouldn't be directly passed to LLVM. +enum class LLVMRustDIFlags : uint32_t { + FlagZero = 0, + FlagPrivate = 1, + FlagProtected = 2, + FlagPublic = 3, + FlagFwdDecl = (1 << 2), + FlagAppleBlock = (1 << 3), + FlagBlockByrefStruct = (1 << 4), + FlagVirtual = (1 << 5), + FlagArtificial = (1 << 6), + FlagExplicit = (1 << 7), + FlagPrototyped = (1 << 8), + FlagObjcClassComplete = (1 << 9), + FlagObjectPointer = (1 << 10), + FlagVector = (1 << 11), + FlagStaticMember = (1 << 12), + FlagLValueReference = (1 << 13), + FlagRValueReference = (1 << 14), + // Do not add values that are not supported by the minimum LLVM + // version we support! 
+}; + +inline LLVMRustDIFlags operator& (LLVMRustDIFlags a, LLVMRustDIFlags b) { + return static_cast(static_cast(a) & static_cast(b)); +} + +inline LLVMRustDIFlags operator| (LLVMRustDIFlags a, LLVMRustDIFlags b) { + return static_cast(static_cast(a) | static_cast(b)); +} + +inline LLVMRustDIFlags& operator|= (LLVMRustDIFlags& a, LLVMRustDIFlags b) { + return a = a | b; +} + +inline bool is_set(LLVMRustDIFlags f) { + return f != LLVMRustDIFlags::FlagZero; +} + +inline LLVMRustDIFlags visibility(LLVMRustDIFlags f) { + return static_cast(static_cast(f) & 0x3); +} + +#if LLVM_VERSION_GE(4, 0) +static DINode::DIFlags from_rust(LLVMRustDIFlags flags) { + DINode::DIFlags result = DINode::DIFlags::FlagZero; +#else +static unsigned from_rust(LLVMRustDIFlags flags) { + unsigned result = 0; +#endif + + switch (visibility(flags)) { + case LLVMRustDIFlags::FlagPrivate: + result |= DINode::DIFlags::FlagPrivate; + break; + case LLVMRustDIFlags::FlagProtected: + result |= DINode::DIFlags::FlagProtected; + break; + case LLVMRustDIFlags::FlagPublic: + result |= DINode::DIFlags::FlagPublic; + break; + default: + // The rest are handled below + break; + } + + if (is_set(flags & LLVMRustDIFlags::FlagFwdDecl)) { result |= DINode::DIFlags::FlagFwdDecl; } + if (is_set(flags & LLVMRustDIFlags::FlagAppleBlock)) { result |= DINode::DIFlags::FlagAppleBlock; } + if (is_set(flags & LLVMRustDIFlags::FlagBlockByrefStruct)) { result |= DINode::DIFlags::FlagBlockByrefStruct; } + if (is_set(flags & LLVMRustDIFlags::FlagVirtual)) { result |= DINode::DIFlags::FlagVirtual; } + if (is_set(flags & LLVMRustDIFlags::FlagArtificial)) { result |= DINode::DIFlags::FlagArtificial; } + if (is_set(flags & LLVMRustDIFlags::FlagExplicit)) { result |= DINode::DIFlags::FlagExplicit; } + if (is_set(flags & LLVMRustDIFlags::FlagPrototyped)) { result |= DINode::DIFlags::FlagPrototyped; } + if (is_set(flags & LLVMRustDIFlags::FlagObjcClassComplete)) { result |= DINode::DIFlags::FlagObjcClassComplete; } + if (is_set(flags & LLVMRustDIFlags::FlagObjectPointer)) { result |= DINode::DIFlags::FlagObjectPointer; } + if (is_set(flags & LLVMRustDIFlags::FlagVector)) { result |= DINode::DIFlags::FlagVector; } + if (is_set(flags & LLVMRustDIFlags::FlagStaticMember)) { result |= DINode::DIFlags::FlagStaticMember; } + if (is_set(flags & LLVMRustDIFlags::FlagLValueReference)) { result |= DINode::DIFlags::FlagLValueReference; } + if (is_set(flags & LLVMRustDIFlags::FlagRValueReference)) { result |= DINode::DIFlags::FlagRValueReference; } + + return result; +} + extern "C" uint32_t LLVMRustDebugMetadataVersion() { return DEBUG_METADATA_VERSION; } @@ -411,7 +516,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateFunction( bool isLocalToUnit, bool isDefinition, unsigned ScopeLine, - unsigned Flags, + LLVMRustDIFlags Flags, bool isOptimized, LLVMValueRef Fn, LLVMRustMetadataRef TParam, @@ -423,7 +528,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateFunction( unwrapDI(Scope), Name, LinkageName, unwrapDI(File), LineNo, unwrapDI(Ty), isLocalToUnit, isDefinition, ScopeLine, - Flags, isOptimized, + from_rust(Flags), isOptimized, TParams, unwrapDIptr(Decl)); unwrap(Fn)->setSubprogram(Sub); @@ -433,7 +538,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateFunction( unwrapDI(Scope), Name, LinkageName, unwrapDI(File), LineNo, unwrapDI(Ty), isLocalToUnit, isDefinition, ScopeLine, - Flags, isOptimized, + from_rust(Flags), isOptimized, unwrap(Fn), unwrapDIptr(TParam), unwrapDIptr(Decl))); @@ -447,8 +552,13 @@ extern "C" LLVMRustMetadataRef 
LLVMRustDIBuilderCreateBasicType( uint64_t AlignInBits, unsigned Encoding) { return wrap(Builder->createBasicType( - Name, SizeInBits, - AlignInBits, Encoding)); + Name, + SizeInBits, +#if LLVM_VERSION_LE(3, 9) + AlignInBits, +#endif + Encoding + )); } extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreatePointerType( @@ -469,7 +579,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateStructType( unsigned LineNumber, uint64_t SizeInBits, uint64_t AlignInBits, - unsigned Flags, + LLVMRustDIFlags Flags, LLVMRustMetadataRef DerivedFrom, LLVMRustMetadataRef Elements, unsigned RunTimeLang, @@ -482,7 +592,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateStructType( LineNumber, SizeInBits, AlignInBits, - Flags, + from_rust(Flags), unwrapDI(DerivedFrom), DINodeArray(unwrapDI(Elements)), RunTimeLang, @@ -500,12 +610,12 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateMemberType( uint64_t SizeInBits, uint64_t AlignInBits, uint64_t OffsetInBits, - unsigned Flags, + LLVMRustDIFlags Flags, LLVMRustMetadataRef Ty) { return wrap(Builder->createMemberType( unwrapDI(Scope), Name, unwrapDI(File), LineNo, - SizeInBits, AlignInBits, OffsetInBits, Flags, + SizeInBits, AlignInBits, OffsetInBits, from_rust(Flags), unwrapDI(Ty))); } @@ -540,7 +650,21 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateStaticVariable( LLVMRustMetadataRef Ty, bool isLocalToUnit, LLVMValueRef Val, - LLVMRustMetadataRef Decl = NULL) { + LLVMRustMetadataRef Decl = NULL, + uint64_t AlignInBits = 0) { + Constant *InitVal = cast(unwrap(Val)); + +#if LLVM_VERSION_GE(4, 0) + llvm::DIExpression *InitExpr = nullptr; + if (llvm::ConstantInt *IntVal = llvm::dyn_cast(InitVal)) { + InitExpr = Builder->createConstantValueExpression( + IntVal->getValue().getSExtValue()); + } else if (llvm::ConstantFP *FPVal = llvm::dyn_cast(InitVal)) { + InitExpr = Builder->createConstantValueExpression( + FPVal->getValueAPF().bitcastToAPInt().getZExtValue()); + } +#endif + return wrap(Builder->createGlobalVariable(unwrapDI(Context), Name, LinkageName, @@ -548,8 +672,16 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateStaticVariable( LineNo, unwrapDI(Ty), isLocalToUnit, - cast(unwrap(Val)), - unwrapDIptr(Decl))); +#if LLVM_VERSION_GE(4, 0) + InitExpr, +#else + InitVal, +#endif + unwrapDIptr(Decl) +#if LLVM_VERSION_GE(4, 0) + , AlignInBits +#endif + )); } extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateVariable( @@ -561,28 +693,37 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateVariable( unsigned LineNo, LLVMRustMetadataRef Ty, bool AlwaysPreserve, - unsigned Flags, - unsigned ArgNo) { + LLVMRustDIFlags Flags, + unsigned ArgNo, + uint64_t AlignInBits) +{ #if LLVM_VERSION_GE(3, 8) if (Tag == 0x100) { // DW_TAG_auto_variable return wrap(Builder->createAutoVariable( - unwrapDI(Scope), Name, + unwrapDI(Scope), + Name, unwrapDI(File), LineNo, - unwrapDI(Ty), AlwaysPreserve, Flags)); + unwrapDI(Ty), + AlwaysPreserve, + from_rust(Flags) +#if LLVM_VERSION_GE(4,0) + , AlignInBits +#endif + )); } else { return wrap(Builder->createParameterVariable( unwrapDI(Scope), Name, ArgNo, unwrapDI(File), LineNo, - unwrapDI(Ty), AlwaysPreserve, Flags)); + unwrapDI(Ty), AlwaysPreserve, from_rust(Flags))); } #else return wrap(Builder->createLocalVariable(Tag, unwrapDI(Scope), Name, unwrapDI(File), LineNo, - unwrapDI(Ty), AlwaysPreserve, Flags, ArgNo)); + unwrapDI(Ty), AlwaysPreserve, from_rust(Flags), ArgNo)); #endif } @@ -681,7 +822,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateUnionType( unsigned LineNumber, uint64_t 
SizeInBits, uint64_t AlignInBits, - unsigned Flags, + LLVMRustDIFlags Flags, LLVMRustMetadataRef Elements, unsigned RunTimeLang, const char* UniqueId) @@ -693,7 +834,7 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateUnionType( LineNumber, SizeInBits, AlignInBits, - Flags, + from_rust(Flags), DINodeArray(unwrapDI(Elements)), RunTimeLang, UniqueId @@ -727,7 +868,11 @@ extern "C" LLVMRustMetadataRef LLVMRustDIBuilderCreateNameSpace( unwrapDI(Scope), Name, unwrapDI(File), - LineNo)); + LineNo +#if LLVM_VERSION_GE(4, 0) + , false // ExportSymbols (only relevant for C++ anonymous namespaces) +#endif + )); } extern "C" void LLVMRustDICompositeTypeSetTypeArray( @@ -783,19 +928,34 @@ extern "C" void LLVMRustWriteValueToString(LLVMValueRef Value, RustStringRef str extern "C" bool LLVMRustLinkInExternalBitcode(LLVMModuleRef dst, char *bc, size_t len) { Module *Dst = unwrap(dst); + std::unique_ptr buf = MemoryBuffer::getMemBufferCopy(StringRef(bc, len)); + +#if LLVM_VERSION_GE(4, 0) + Expected> SrcOrError = + llvm::getLazyBitcodeModule(buf->getMemBufferRef(), Dst->getContext()); + if (!SrcOrError) { + LLVMRustSetLastError(toString(SrcOrError.takeError()).c_str()); + return false; + } + + auto Src = std::move(*SrcOrError); +#else ErrorOr> Src = llvm::getLazyBitcodeModule(std::move(buf), Dst->getContext()); if (!Src) { LLVMRustSetLastError(Src.getError().message().c_str()); return false; } +#endif std::string Err; raw_string_ostream Stream(Err); DiagnosticPrinterRawOStream DP(Stream); -#if LLVM_VERSION_GE(3, 8) +#if LLVM_VERSION_GE(4, 0) + if (Linker::linkModules(*Dst, std::move(Src))) { +#elif LLVM_VERSION_GE(3, 8) if (Linker::linkModules(*Dst, std::move(Src.get()))) { #else if (Linker::LinkModules(Dst, Src->get(), [&](const DiagnosticInfo &DI) { DI.print(DP); })) { @@ -848,19 +1008,21 @@ LLVMRustWriteTwineToString(LLVMTwineRef T, RustStringRef str) { extern "C" void LLVMRustUnpackOptimizationDiagnostic( LLVMDiagnosticInfoRef di, - const char **pass_name_out, + RustStringRef pass_name_out, LLVMValueRef *function_out, LLVMDebugLocRef *debugloc_out, - LLVMTwineRef *message_out) + RustStringRef message_out) { // Undefined to call this not on an optimization diagnostic! 
llvm::DiagnosticInfoOptimizationBase *opt = static_cast(unwrap(di)); - *pass_name_out = opt->getPassName(); + raw_rust_string_ostream pass_name_os(pass_name_out); + pass_name_os << opt->getPassName(); *function_out = wrap(&opt->getFunction()); *debugloc_out = wrap(&opt->getDebugLoc()); - *message_out = wrap(&opt->getMsg()); + raw_rust_string_ostream message_os(message_out); + message_os << opt->getMsg(); } extern "C" void @@ -1293,3 +1455,49 @@ extern "C" LLVMRustLinkage LLVMRustGetLinkage(LLVMValueRef V) { extern "C" void LLVMRustSetLinkage(LLVMValueRef V, LLVMRustLinkage RustLinkage) { LLVMSetLinkage(V, from_rust(RustLinkage)); } + +extern "C" LLVMContextRef LLVMRustGetValueContext(LLVMValueRef V) { + return wrap(&unwrap(V)->getContext()); +} + +enum class LLVMRustVisibility { + Default = 0, + Hidden = 1, + Protected = 2, +}; + +static LLVMRustVisibility to_rust(LLVMVisibility vis) { + switch (vis) { + case LLVMDefaultVisibility: + return LLVMRustVisibility::Default; + case LLVMHiddenVisibility: + return LLVMRustVisibility::Hidden; + case LLVMProtectedVisibility: + return LLVMRustVisibility::Protected; + + default: + llvm_unreachable("Invalid LLVMRustVisibility value!"); + } +} + +static LLVMVisibility from_rust(LLVMRustVisibility vis) { + switch (vis) { + case LLVMRustVisibility::Default: + return LLVMDefaultVisibility; + case LLVMRustVisibility::Hidden: + return LLVMHiddenVisibility; + case LLVMRustVisibility::Protected: + return LLVMProtectedVisibility; + + default: + llvm_unreachable("Invalid LLVMRustVisibility value!"); + } +} + +extern "C" LLVMRustVisibility LLVMRustGetVisibility(LLVMValueRef V) { + return to_rust(LLVMGetVisibility(V)); +} + +extern "C" void LLVMRustSetVisibility(LLVMValueRef V, LLVMRustVisibility RustVisibility) { + LLVMSetVisibility(V, from_rust(RustVisibility)); +} diff --git a/src/rustllvm/llvm-auto-clean-trigger b/src/rustllvm/llvm-auto-clean-trigger index 37fded948e..73c8bb97a1 100644 --- a/src/rustllvm/llvm-auto-clean-trigger +++ b/src/rustllvm/llvm-auto-clean-trigger @@ -1,4 +1,4 @@ # If this file is modified, then llvm will be forcibly cleaned and then rebuilt. # The actual contents of this file do not matter, but to trigger a change on the # build bots then the contents should be changed so git updates the mtime. 
-2016-10-29 +2016-12-16 diff --git a/src/rustllvm/rustllvm.h b/src/rustllvm/rustllvm.h index ffe94d1e22..b8c4076f4c 100644 --- a/src/rustllvm/rustllvm.h +++ b/src/rustllvm/rustllvm.h @@ -39,7 +39,6 @@ #include "llvm/Transforms/IPO.h" #include "llvm/Transforms/Instrumentation.h" #include "llvm/Transforms/Vectorize.h" -#include "llvm/Bitcode/ReaderWriter.h" #include "llvm-c/Core.h" #include "llvm-c/BitReader.h" #include "llvm-c/ExecutionEngine.h" @@ -60,6 +59,13 @@ #include "llvm/PassManager.h" #endif +#if LLVM_VERSION_GE(4, 0) +#include "llvm/Bitcode/BitcodeReader.h" +#include "llvm/Bitcode/BitcodeWriter.h" +#else +#include "llvm/Bitcode/ReaderWriter.h" +#endif + #include "llvm/IR/IRPrintingPasses.h" #include "llvm/IR/DebugInfo.h" #include "llvm/IR/DIBuilder.h" @@ -72,6 +78,28 @@ enum class LLVMRustResult { Failure }; +enum LLVMRustAttribute { + AlwaysInline = 0, + ByVal = 1, + Cold = 2, + InlineHint = 3, + MinSize = 4, + Naked = 5, + NoAlias = 6, + NoCapture = 7, + NoInline = 8, + NonNull = 9, + NoRedZone = 10, + NoReturn = 11, + NoUnwind = 12, + OptimizeForSize = 13, + ReadOnly = 14, + SExt = 15, + StructRet = 16, + UWTable = 17, + ZExt = 18, +}; + typedef struct OpaqueRustString *RustStringRef; typedef struct LLVMOpaqueTwine *LLVMTwineRef; typedef struct LLVMOpaqueDebugLoc *LLVMDebugLocRef; diff --git a/src/stage0.txt b/src/stage0.txt index 7535c32146..43310a2c36 100644 --- a/src/stage0.txt +++ b/src/stage0.txt @@ -2,7 +2,7 @@ # compiler itself. For the rustbuild build system, this also describes the # relevant Cargo revision that we're using. # -# Currently Rust always bootstrap from the previous stable release, and in our +# Currently Rust always bootstraps from the previous stable release, and in our # train model this means that the master branch bootstraps from beta, beta # bootstraps from current stable, and stable bootstraps from the previous stable # release. @@ -12,5 +12,5 @@ # tarball for a stable release you'll likely see `1.x.0-$date` where `1.x.0` was # released on `$date` -rustc: 1.13.0-2016-11-08 -cargo: nightly-2016-11-02 +rustc: 1.14.0-2016-12-18 +cargo: fbeea902d2c9a5be6d99cc35681565d8f7832592 diff --git a/src/test/codegen/dllimports/auxiliary/dummy.rs b/src/test/codegen/dllimports/auxiliary/dummy.rs new file mode 100644 index 0000000000..06001c6b01 --- /dev/null +++ b/src/test/codegen/dllimports/auxiliary/dummy.rs @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic +#![crate_type = "staticlib"] + +// Since codegen tests don't actually perform linking, this library doesn't need to export +// any symbols. It's here just to satisfy the compiler looking for a .lib file when processing +// #[link(...)] attributes in wrapper.rs. diff --git a/src/test/codegen/dllimports/auxiliary/wrapper.rs b/src/test/codegen/dllimports/auxiliary/wrapper.rs new file mode 100644 index 0000000000..c03f88092e --- /dev/null +++ b/src/test/codegen/dllimports/auxiliary/wrapper.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic +#![crate_type = "rlib"] + +#[link(name = "dummy", kind="dylib")] +extern "C" { + pub fn dylib_func2(x: i32) -> i32; + pub static dylib_global2: i32; +} + +#[link(name = "dummy", kind="static")] +extern "C" { + pub fn static_func2(x: i32) -> i32; + pub static static_global2: i32; +} diff --git a/src/test/codegen/dllimports/main.rs b/src/test/codegen/dllimports/main.rs new file mode 100644 index 0000000000..64f516aa27 --- /dev/null +++ b/src/test/codegen/dllimports/main.rs @@ -0,0 +1,64 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// This test is for *-windows-msvc only. +// ignore-gnu +// ignore-android +// ignore-bitrig +// ignore-macos +// ignore-dragonfly +// ignore-freebsd +// ignore-haiku +// ignore-ios +// ignore-linux +// ignore-netbsd +// ignore-openbsd +// ignore-solaris +// ignore-emscripten + +// aux-build:dummy.rs +// aux-build:wrapper.rs + +extern crate wrapper; + +// Check that external symbols coming from foreign dylibs are adorned with 'dllimport', +// whereas symbols coming from foreign staticlibs are not. (RFC-1717) + +// CHECK: @dylib_global1 = external dllimport local_unnamed_addr global i32 +// CHECK: @dylib_global2 = external dllimport local_unnamed_addr global i32 +// CHECK: @static_global1 = external local_unnamed_addr global i32 +// CHECK: @static_global2 = external local_unnamed_addr global i32 + +// CHECK: declare dllimport i32 @dylib_func1(i32) +// CHECK: declare dllimport i32 @dylib_func2(i32) +// CHECK: declare i32 @static_func1(i32) +// CHECK: declare i32 @static_func2(i32) + +#[link(name = "dummy", kind="dylib")] +extern "C" { + pub fn dylib_func1(x: i32) -> i32; + pub static dylib_global1: i32; +} + +#[link(name = "dummy", kind="static")] +extern "C" { + pub fn static_func1(x: i32) -> i32; + pub static static_global1: i32; +} + +fn main() { + unsafe { + dylib_func1(dylib_global1); + wrapper::dylib_func2(wrapper::dylib_global2); + + static_func1(static_global1); + wrapper::static_func2(wrapper::static_global2); + } +} diff --git a/src/test/codegen/lifetime_start_end.rs b/src/test/codegen/lifetime_start_end.rs index 81f6cf309d..e3b35cf355 100644 --- a/src/test/codegen/lifetime_start_end.rs +++ b/src/test/codegen/lifetime_start_end.rs @@ -27,16 +27,16 @@ pub fn test() { let b = &Some(a); &b; // keep variable in an alloca -// CHECK: [[S_b:%[0-9]+]] = bitcast %"2.std::option::Option"** %b to i8* +// CHECK: [[S_b:%[0-9]+]] = bitcast %"core::option::Option"** %b to i8* // CHECK: call void @llvm.lifetime.start(i{{[0-9 ]+}}, i8* [[S_b]]) -// CHECK: [[S__5:%[0-9]+]] = bitcast %"2.std::option::Option"* %_5 to i8* +// CHECK: [[S__5:%[0-9]+]] = bitcast %"core::option::Option"* %_5 to i8* // CHECK: call void @llvm.lifetime.start(i{{[0-9 ]+}}, i8* [[S__5]]) -// CHECK: [[E__5:%[0-9]+]] = bitcast %"2.std::option::Option"* %_5 to i8* +// CHECK: [[E__5:%[0-9]+]] = bitcast %"core::option::Option"* %_5 to i8* // CHECK: call void @llvm.lifetime.end(i{{[0-9 ]+}}, i8* [[E__5]]) -// CHECK: [[E_b:%[0-9]+]] = bitcast %"2.std::option::Option"** %b to i8* +// CHECK: [[E_b:%[0-9]+]] = bitcast %"core::option::Option"** %b to i8* // CHECK: call void 
@llvm.lifetime.end(i{{[0-9 ]+}}, i8* [[E_b]]) } diff --git a/src/test/compile-fail-fulldeps/auxiliary/lint_for_crate.rs b/src/test/compile-fail-fulldeps/auxiliary/lint_for_crate.rs index a424517da1..fc53031e7f 100644 --- a/src/test/compile-fail-fulldeps/auxiliary/lint_for_crate.rs +++ b/src/test/compile-fail-fulldeps/auxiliary/lint_for_crate.rs @@ -32,7 +32,7 @@ impl LintPass for Pass { } } -impl LateLintPass for Pass { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass { fn check_crate(&mut self, cx: &LateContext, krate: &hir::Crate) { if !attr::contains_name(&krate.attrs, "crate_okay") { cx.span_lint(CRATE_NOT_OKAY, krate.span, @@ -43,5 +43,5 @@ impl LateLintPass for Pass { #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_late_lint_pass(box Pass as LateLintPassObject); + reg.register_late_lint_pass(box Pass); } diff --git a/src/test/compile-fail-fulldeps/auxiliary/lint_group_plugin_test.rs b/src/test/compile-fail-fulldeps/auxiliary/lint_group_plugin_test.rs index 1e9a77724a..490aa0d469 100644 --- a/src/test/compile-fail-fulldeps/auxiliary/lint_group_plugin_test.rs +++ b/src/test/compile-fail-fulldeps/auxiliary/lint_group_plugin_test.rs @@ -34,7 +34,7 @@ impl LintPass for Pass { } } -impl LateLintPass for Pass { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { match &*it.name.as_str() { "lintme" => cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"), @@ -46,6 +46,6 @@ impl LateLintPass for Pass { #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_late_lint_pass(box Pass as LateLintPassObject); + reg.register_late_lint_pass(box Pass); reg.register_lint_group("lint_me", vec![TEST_LINT, PLEASE_LINT]); } diff --git a/src/test/compile-fail-fulldeps/auxiliary/lint_plugin_test.rs b/src/test/compile-fail-fulldeps/auxiliary/lint_plugin_test.rs index 8ea131da33..8647797270 100644 --- a/src/test/compile-fail-fulldeps/auxiliary/lint_plugin_test.rs +++ b/src/test/compile-fail-fulldeps/auxiliary/lint_plugin_test.rs @@ -36,7 +36,7 @@ impl LintPass for Pass { impl EarlyLintPass for Pass { fn check_item(&mut self, cx: &EarlyContext, it: &ast::Item) { - if it.ident.name.as_str() == "lintme" { + if it.ident.name == "lintme" { cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"); } } diff --git a/src/test/compile-fail-fulldeps/auxiliary/macro_crate_test.rs b/src/test/compile-fail-fulldeps/auxiliary/macro_crate_test.rs index 409f9dbf03..dc88bfc405 100644 --- a/src/test/compile-fail-fulldeps/auxiliary/macro_crate_test.rs +++ b/src/test/compile-fail-fulldeps/auxiliary/macro_crate_test.rs @@ -19,8 +19,9 @@ extern crate rustc_plugin; use syntax::ast::{self, Item, MetaItem, ItemKind}; use syntax::ext::base::*; -use syntax::parse::{self, token}; +use syntax::parse; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax::tokenstream::TokenTree; use syntax_pos::Span; use rustc_plugin::Registry; @@ -34,11 +35,11 @@ pub fn plugin_registrar(reg: &mut Registry) { reg.register_macro("make_a_1", expand_make_a_1); reg.register_macro("identity", expand_identity); reg.register_syntax_extension( - token::intern("into_multi_foo"), + Symbol::intern("into_multi_foo"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. MultiModifier(Box::new(expand_into_foo_multi))); reg.register_syntax_extension( - token::intern("duplicate"), + Symbol::intern("duplicate"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. 
MultiDecorator(Box::new(expand_duplicate))); } @@ -102,9 +103,9 @@ fn expand_duplicate(cx: &mut ExtCtxt, push: &mut FnMut(Annotatable)) { let copy_name = match mi.node { - ast::MetaItemKind::List(_, ref xs) => { + ast::MetaItemKind::List(ref xs) => { if let Some(word) = xs[0].word() { - token::str_to_ident(&word.name()) + ast::Ident::with_empty_ctxt(word.name()) } else { cx.span_err(mi.span, "Expected word"); return; diff --git a/src/test/compile-fail-fulldeps/auxiliary/pub_and_stability.rs b/src/test/compile-fail-fulldeps/auxiliary/pub_and_stability.rs new file mode 100644 index 0000000000..9dc4cf1252 --- /dev/null +++ b/src/test/compile-fail-fulldeps/auxiliary/pub_and_stability.rs @@ -0,0 +1,144 @@ +// Copyright 2017 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// This crate attempts to enumerate the various scenarios for how a +// type can define fields and methods with various visibilities and +// stabilities. +// +// The basic stability pattern in this file has four cases: +// 1. no stability attribute at all +// 2. a stable attribute (feature "unit_test") +// 3. an unstable attribute that unit test declares (feature "unstable_declared") +// 4. an unstable attribute that unit test fails to declare (feature "unstable_undeclared") +// +// This file also covers four kinds of visibility: private, +// pub(module), pub(crate), and pub. +// +// However, since stability attributes can only be observed in +// cross-crate linkage scenarios, there is little reason to take the +// cross-product (4 stability cases * 4 visibility cases), because the +// first three visibility cases cannot be accessed outside this crate, +// and therefore stability is only relevant when the visibility is pub +// to the whole universe. +// +// (The only reason to do so would be if one were worried about the +// compiler having some subtle bug where adding a stability attribute +// introduces a privacy violation.
As a way to provide evidence that +// this is not occurring, I have put stability attributes on some +// non-pub fields, marked with SILLY below) + +#![feature(staged_api)] +#![feature(pub_restricted)] + +#![stable(feature = "unit_test", since = "0.0.0")] + +#[stable(feature = "unit_test", since = "0.0.0")] +pub use m::{Record, Trait, Tuple}; + +mod m { + #[derive(Default)] + #[stable(feature = "unit_test", since = "0.0.0")] + pub struct Record { + #[stable(feature = "unit_test", since = "0.0.0")] + pub a_stable_pub: i32, + #[unstable(feature = "unstable_declared", issue = "38412")] + pub a_unstable_declared_pub: i32, + #[unstable(feature = "unstable_undeclared", issue = "38412")] + pub a_unstable_undeclared_pub: i32, + #[unstable(feature = "unstable_undeclared", issue = "38412")] // SILLY + pub(crate) b_crate: i32, + #[unstable(feature = "unstable_declared", issue = "38412")] // SILLY + pub(m) c_mod: i32, + #[stable(feature = "unit_test", since = "0.0.0")] // SILLY + d_priv: i32 + } + + #[derive(Default)] + #[stable(feature = "unit_test", since = "1.0.0")] + pub struct Tuple( + #[stable(feature = "unit_test", since = "0.0.0")] + pub i32, + #[unstable(feature = "unstable_declared", issue = "38412")] + pub i32, + #[unstable(feature = "unstable_undeclared", issue = "38412")] + pub i32, + + pub(crate) i32, + pub(m) i32, + i32); + + impl Record { + #[stable(feature = "unit_test", since = "1.0.0")] + pub fn new() -> Self { Default::default() } + } + + impl Tuple { + #[stable(feature = "unit_test", since = "1.0.0")] + pub fn new() -> Self { Default::default() } + } + + + #[stable(feature = "unit_test", since = "0.0.0")] + pub trait Trait { + #[stable(feature = "unit_test", since = "0.0.0")] + type Type; + #[stable(feature = "unit_test", since = "0.0.0")] + fn stable_trait_method(&self) -> Self::Type; + #[unstable(feature = "unstable_undeclared", issue = "38412")] + fn unstable_undeclared_trait_method(&self) -> Self::Type; + #[unstable(feature = "unstable_declared", issue = "38412")] + fn unstable_declared_trait_method(&self) -> Self::Type; + } + + #[stable(feature = "unit_test", since = "0.0.0")] + impl Trait for Record { + type Type = i32; + fn stable_trait_method(&self) -> i32 { self.d_priv } + fn unstable_undeclared_trait_method(&self) -> i32 { self.d_priv } + fn unstable_declared_trait_method(&self) -> i32 { self.d_priv } + } + + #[stable(feature = "unit_test", since = "0.0.0")] + impl Trait for Tuple { + type Type = i32; + fn stable_trait_method(&self) -> i32 { self.3 } + fn unstable_undeclared_trait_method(&self) -> i32 { self.3 } + fn unstable_declared_trait_method(&self) -> i32 { self.3 } + } + + impl Record { + #[unstable(feature = "unstable_undeclared", issue = "38412")] + pub fn unstable_undeclared(&self) -> i32 { self.d_priv } + #[unstable(feature = "unstable_declared", issue = "38412")] + pub fn unstable_declared(&self) -> i32 { self.d_priv } + #[stable(feature = "unit_test", since = "0.0.0")] + pub fn stable(&self) -> i32 { self.d_priv } + + #[unstable(feature = "unstable_undeclared", issue = "38412")] // SILLY + pub(crate) fn pub_crate(&self) -> i32 { self.d_priv } + #[unstable(feature = "unstable_declared", issue = "38412")] // SILLY + pub(m) fn pub_mod(&self) -> i32 { self.d_priv } + #[stable(feature = "unit_test", since = "0.0.0")] // SILLY + fn private(&self) -> i32 { self.d_priv } + } + + impl Tuple { + #[unstable(feature = "unstable_undeclared", issue = "38412")] + pub fn unstable_undeclared(&self) -> i32 { self.0 } + #[unstable(feature = "unstable_declared", issue = 
"38412")] + pub fn unstable_declared(&self) -> i32 { self.0 } + #[stable(feature = "unit_test", since = "0.0.0")] + pub fn stable(&self) -> i32 { self.0 } + + pub(crate) fn pub_crate(&self) -> i32 { self.0 } + pub(m) fn pub_mod(&self) -> i32 { self.0 } + fn private(&self) -> i32 { self.0 } + } +} diff --git a/src/test/compile-fail-fulldeps/dropck_tarena_cycle_checked.rs b/src/test/compile-fail-fulldeps/dropck_tarena_cycle_checked.rs index bbdc59c843..7de6e58c78 100644 --- a/src/test/compile-fail-fulldeps/dropck_tarena_cycle_checked.rs +++ b/src/test/compile-fail-fulldeps/dropck_tarena_cycle_checked.rs @@ -16,7 +16,7 @@ // which is a reduction of this code to more directly show the reason // for the error message we see here.) -#![feature(const_fn)] +#![feature(const_fn, rustc_private)] extern crate arena; diff --git a/src/test/compile-fail-fulldeps/dropck_tarena_unsound_drop.rs b/src/test/compile-fail-fulldeps/dropck_tarena_unsound_drop.rs index 46cb760557..30829847a3 100644 --- a/src/test/compile-fail-fulldeps/dropck_tarena_unsound_drop.rs +++ b/src/test/compile-fail-fulldeps/dropck_tarena_unsound_drop.rs @@ -19,6 +19,8 @@ // (Also compare against dropck_tarena_cycle_checked.rs, from which // this was reduced to better understand its error message.) +#![feature(rustc_private)] + extern crate arena; use arena::TypedArena; diff --git a/src/test/compile-fail-fulldeps/explore-issue-38412.rs b/src/test/compile-fail-fulldeps/explore-issue-38412.rs new file mode 100644 index 0000000000..aab9257532 --- /dev/null +++ b/src/test/compile-fail-fulldeps/explore-issue-38412.rs @@ -0,0 +1,85 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:pub_and_stability.rs + +#![feature(staged_api)] +#![feature(unused_feature)] + +// A big point of this test is that we *declare* `unstable_declared`, +// but do *not* declare `unstable_undeclared`. This way we can check +// that the compiler is letting in uses of declared feature-gated +// stuff but still rejecting uses of undeclared feature-gated stuff. +#![feature(unstable_declared)] + +extern crate pub_and_stability; +use pub_and_stability::{Record, Trait, Tuple}; + +fn main() { + // Okay + let Record { .. } = Record::new(); + // Okay (for now; see RFC Issue #902) + let Tuple(..) = Tuple::new(); + + // Okay + let Record { a_stable_pub: _, a_unstable_declared_pub: _, .. } = Record::new(); + // Okay (for now; see RFC Issue #902) + let Tuple(_, _, ..) = Tuple::new(); // analogous to above + + let Record { a_stable_pub: _, a_unstable_declared_pub: _, a_unstable_undeclared_pub: _, .. } = + Record::new(); + //~^^ ERROR use of unstable library feature 'unstable_undeclared' + + let Tuple(_, _, _, ..) 
= Tuple::new(); // analogous to previous + //~^ ERROR use of unstable library feature 'unstable_undeclared' + + let r = Record::new(); + let t = Tuple::new(); + + r.a_stable_pub; + r.a_unstable_declared_pub; + r.a_unstable_undeclared_pub; //~ ERROR use of unstable library feature + r.b_crate; //~ ERROR is private + r.c_mod; //~ ERROR is private + r.d_priv; //~ ERROR is private + + t.0; + t.1; + t.2; //~ ERROR use of unstable library feature + t.3; //~ ERROR is private + t.4; //~ ERROR is private + t.5; //~ ERROR is private + + r.stable_trait_method(); + r.unstable_declared_trait_method(); + r.unstable_undeclared_trait_method(); //~ ERROR use of unstable library feature + + r.stable(); + r.unstable_declared(); + r.unstable_undeclared(); //~ ERROR use of unstable library feature + + r.pub_crate(); //~ ERROR `pub_crate` is private + r.pub_mod(); //~ ERROR `pub_mod` is private + r.private(); //~ ERROR `private` is private + + let t = Tuple::new(); + t.stable_trait_method(); + t.unstable_declared_trait_method(); + t.unstable_undeclared_trait_method(); //~ ERROR use of unstable library feature + + t.stable(); + t.unstable_declared(); + t.unstable_undeclared(); //~ ERROR use of unstable library feature + + t.pub_crate(); //~ ERROR `pub_crate` is private + t.pub_mod(); //~ ERROR `pub_mod` is private + t.private(); //~ ERROR `private` is private + +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/attribute.rs b/src/test/compile-fail-fulldeps/proc-macro/attribute.rs index d1b2aa330e..5d5e61270b 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/attribute.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/attribute.rs @@ -9,7 +9,6 @@ // except according to those terms. #![crate_type = "proc-macro"] -#![feature(proc_macro)] extern crate proc_macro; @@ -33,8 +32,8 @@ pub fn foo3(input: proc_macro::TokenStream) -> proc_macro::TokenStream { input } -#[proc_macro_derive(b, c)] -//~^ ERROR: attribute must only have one argument +#[proc_macro_derive(b, c, d)] +//~^ ERROR: attribute must have either one or two arguments pub fn foo4(input: proc_macro::TokenStream) -> proc_macro::TokenStream { input } @@ -44,3 +43,21 @@ pub fn foo4(input: proc_macro::TokenStream) -> proc_macro::TokenStream { pub fn foo5(input: proc_macro::TokenStream) -> proc_macro::TokenStream { input } + +#[proc_macro_derive(f, attributes(g = "h"))] +//~^ ERROR: must only be one word +pub fn foo6(input: proc_macro::TokenStream) -> proc_macro::TokenStream { + input +} + +#[proc_macro_derive(i, attributes(j(k)))] +//~^ ERROR: must only be one word +pub fn foo7(input: proc_macro::TokenStream) -> proc_macro::TokenStream { + input +} + +#[proc_macro_derive(l, attributes(m), n)] +//~^ ERROR: attribute must have either one or two arguments +pub fn foo8(input: proc_macro::TokenStream) -> proc_macro::TokenStream { + input +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a-b.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a-b.rs new file mode 100644 index 0000000000..cd8750bc89 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a-b.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// force-host +// no-prefer-dynamic + +#![crate_type = "proc-macro"] + +extern crate proc_macro; +use proc_macro::TokenStream; + +#[proc_macro_derive(A)] +pub fn derive_a(_: TokenStream) -> TokenStream { + "".parse().unwrap() +} + +#[proc_macro_derive(B)] +pub fn derive_b(_: TokenStream) -> TokenStream { + "".parse().unwrap() +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a.rs index 4aa4238611..53b2c23e5d 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-a.rs @@ -11,8 +11,6 @@ // force-host // no-prefer-dynamic -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; @@ -21,5 +19,5 @@ use proc_macro::TokenStream; #[proc_macro_derive(A)] pub fn derive_a(input: TokenStream) -> TokenStream { - input + "".parse().unwrap() } diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-b.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-b.rs new file mode 100644 index 0000000000..5787546fb1 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-b.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// force-host +// no-prefer-dynamic + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +use proc_macro::TokenStream; + +#[proc_macro_derive(B, attributes(B))] +pub fn derive_b(input: TokenStream) -> TokenStream { + "".parse().unwrap() +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-bad.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-bad.rs index aae8b63e25..841b39eaed 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-bad.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-bad.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic // force-host -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-panic.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-panic.rs index f426fe5437..3274f0324e 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-panic.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-panic.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic // force-host -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable-2.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable-2.rs index d8952e3478..2d492d341e 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable-2.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable-2.rs @@ -11,8 +11,6 @@ // force-host // no-prefer-dynamic -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable.rs index 
1187b5102e..a7b5d1e3e5 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/derive-unstable.rs @@ -11,8 +11,6 @@ // force-host // no-prefer-dynamic -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/auxiliary/issue_38586.rs b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/issue_38586.rs new file mode 100644 index 0000000000..10da846a86 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/auxiliary/issue_38586.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// force-host +// no-prefer-dynamic + +#![feature(proc_macro, proc_macro_lib)] +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +#[proc_macro_derive(A)] +pub fn derive_a(_: proc_macro::TokenStream) -> proc_macro::TokenStream { + "fn f() { println!(\"{}\", foo); }".parse().unwrap() +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/define-two.rs b/src/test/compile-fail-fulldeps/proc-macro/define-two.rs index 420249b258..87b32096d7 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/define-two.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/define-two.rs @@ -11,7 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/derive-bad.rs b/src/test/compile-fail-fulldeps/proc-macro/derive-bad.rs index 4cc6b9f984..0e4ac9fe1e 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/derive-bad.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/derive-bad.rs @@ -10,8 +10,6 @@ // aux-build:derive-bad.rs -#![feature(proc_macro)] - #[macro_use] extern crate derive_bad; diff --git a/src/test/compile-fail-fulldeps/proc-macro/derive-still-gated.rs b/src/test/compile-fail-fulldeps/proc-macro/derive-still-gated.rs index eb0bb00be9..f36236c535 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/derive-still-gated.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/derive-still-gated.rs @@ -10,7 +10,6 @@ // aux-build:derive-a.rs -#![feature(proc_macro)] #![allow(warnings)] #[macro_use] diff --git a/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable-2.rs b/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable-2.rs index 23dcbe03b5..e4fcbb117a 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable-2.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable-2.rs @@ -10,15 +10,14 @@ // aux-build:derive-unstable-2.rs -#![feature(proc_macro)] #![allow(warnings)] #[macro_use] extern crate derive_unstable_2; #[derive(Unstable)] -struct A; //~^ ERROR: reserved for internal compiler +struct A; fn main() { foo(); diff --git a/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable.rs b/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable.rs index fb86f6f1b6..836e336fc2 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/expand-to-unstable.rs @@ -10,15 +10,14 @@ // aux-build:derive-unstable.rs -#![feature(proc_macro)] #![allow(warnings)] #[macro_use] extern crate 
derive_unstable; #[derive(Unstable)] -struct A; //~^ ERROR: use of unstable library feature +struct A; fn main() { unsafe { foo(); } diff --git a/src/test/compile-fail-fulldeps/proc-macro/export-macro.rs b/src/test/compile-fail-fulldeps/proc-macro/export-macro.rs index 48b73e7333..477039bd7a 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/export-macro.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/export-macro.rs @@ -11,7 +11,6 @@ // error-pattern: cannot export macro_rules! macros from a `proc-macro` crate #![crate_type = "proc-macro"] -#![feature(proc_macro)] #[macro_export] macro_rules! foo { diff --git a/src/test/compile-fail-fulldeps/proc-macro/illegal-proc-macro-derive-use.rs b/src/test/compile-fail-fulldeps/proc-macro/illegal-proc-macro-derive-use.rs index 405994b36e..f37980c5e0 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/illegal-proc-macro-derive-use.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/illegal-proc-macro-derive-use.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(proc_macro)] - extern crate proc_macro; #[proc_macro_derive(Foo)] diff --git a/src/test/compile-fail-fulldeps/proc-macro/import.rs b/src/test/compile-fail-fulldeps/proc-macro/import.rs index 8f907183cc..fae2439344 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/import.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/import.rs @@ -10,7 +10,6 @@ // aux-build:derive-a.rs -#![feature(proc_macro)] #![allow(warnings)] #[macro_use] diff --git a/src/test/compile-fail-fulldeps/proc-macro/issue-37788.rs b/src/test/compile-fail-fulldeps/proc-macro/issue-37788.rs new file mode 100644 index 0000000000..691c280580 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/issue-37788.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-a-b.rs + +#[macro_use] +extern crate derive_a_b; + +fn main() { + // Test that constructing the `visible_parent_map` (in `cstore_impl.rs`) does not ICE. + std::cell::Cell::new(0) //~ ERROR mismatched types +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/issue-38586.rs b/src/test/compile-fail-fulldeps/proc-macro/issue-38586.rs new file mode 100644 index 0000000000..42475e6de9 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/issue-38586.rs @@ -0,0 +1,21 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:issue_38586.rs + +#![feature(proc_macro)] + +#[macro_use] +extern crate issue_38586; + +#[derive(A)] //~ ERROR `foo` +struct A; + +fn main() {} diff --git a/src/test/compile-fail-fulldeps/proc-macro/item-error.rs b/src/test/compile-fail-fulldeps/proc-macro/item-error.rs new file mode 100644 index 0000000000..4133e75e3a --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/item-error.rs @@ -0,0 +1,25 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-b.rs + +#![allow(warnings)] + +#[macro_use] +extern crate derive_b; + +#[derive(B)] +struct A { + a: &u64 +//~^ ERROR: missing lifetime specifier +} + +fn main() { +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/load-panic.rs b/src/test/compile-fail-fulldeps/proc-macro/load-panic.rs index 39c27e82fa..f9906b650f 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/load-panic.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/load-panic.rs @@ -10,8 +10,6 @@ // aux-build:derive-panic.rs -#![feature(proc_macro)] - #[macro_use] extern crate derive_panic; diff --git a/src/test/compile-fail-fulldeps/proc-macro/no-macro-use-attr.rs b/src/test/compile-fail-fulldeps/proc-macro/no-macro-use-attr.rs new file mode 100644 index 0000000000..f61b8b4073 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/no-macro-use-attr.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-a.rs + +#![feature(rustc_attrs)] + +extern crate derive_a; +//~^ WARN custom derive crates and `#[no_link]` crates have no effect without `#[macro_use]` + +#[rustc_error] +fn main() {} //~ ERROR compilation successful diff --git a/src/test/compile-fail-fulldeps/proc-macro/proc-macro-attributes.rs b/src/test/compile-fail-fulldeps/proc-macro/proc-macro-attributes.rs new file mode 100644 index 0000000000..4ad1cf79d6 --- /dev/null +++ b/src/test/compile-fail-fulldeps/proc-macro/proc-macro-attributes.rs @@ -0,0 +1,25 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-b.rs + +#![allow(warnings)] + +#[macro_use] +extern crate derive_b; + +#[derive(B)] +#[B] +#[C] //~ ERROR: The attribute `C` is currently unknown to the compiler +#[B(D)] +#[B(E = "foo")] +struct B; + +fn main() {} diff --git a/src/test/compile-fail-fulldeps/proc-macro/at-the-root.rs b/src/test/compile-fail-fulldeps/proc-macro/pub-at-crate-root.rs similarity index 82% rename from src/test/compile-fail-fulldeps/proc-macro/at-the-root.rs rename to src/test/compile-fail-fulldeps/proc-macro/pub-at-crate-root.rs index b03e1e4f91..238dde5160 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/at-the-root.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/pub-at-crate-root.rs @@ -9,7 +9,6 @@ // except according to those terms. 
#![crate_type = "proc-macro"] -#![feature(proc_macro)] extern crate proc_macro; @@ -23,3 +22,8 @@ pub mod a { //~ `proc-macro` crate types cannot export any items } } +#[proc_macro_derive(B)] +fn bar(a: proc_macro::TokenStream) -> proc_macro::TokenStream { +//~^ ERROR: functions tagged with `#[proc_macro_derive]` must be `pub` + a +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/shadow-builtin.rs b/src/test/compile-fail-fulldeps/proc-macro/shadow-builtin.rs index 5cb2cc59aa..8c5affb7b5 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/shadow-builtin.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/shadow-builtin.rs @@ -9,7 +9,6 @@ // except according to those terms. #![crate_type = "proc-macro"] -#![feature(proc_macro)] extern crate proc_macro; diff --git a/src/test/compile-fail-fulldeps/proc-macro/shadow.rs b/src/test/compile-fail-fulldeps/proc-macro/shadow.rs index a04756ca19..d76cf003ed 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/shadow.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/shadow.rs @@ -10,11 +10,9 @@ // aux-build:derive-a.rs -#![feature(proc_macro)] - #[macro_use] extern crate derive_a; #[macro_use] -extern crate derive_a; //~ ERROR `derive_a` has already been defined +extern crate derive_a; //~ ERROR `derive_a` has already been imported fn main() {} diff --git a/src/test/compile-fail-fulldeps/proc-macro/signature.rs b/src/test/compile-fail-fulldeps/proc-macro/signature.rs index 468c970599..6d6b5a23e9 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/signature.rs +++ b/src/test/compile-fail-fulldeps/proc-macro/signature.rs @@ -9,13 +9,12 @@ // except according to those terms. #![crate_type = "proc-macro"] -#![feature(proc_macro)] #![allow(warnings)] extern crate proc_macro; #[proc_macro_derive(A)] -unsafe extern fn foo(a: i32, b: u32) -> u32 { +pub unsafe extern fn foo(a: i32, b: u32) -> u32 { //~^ ERROR: mismatched types //~| NOTE: expected normal fn, found unsafe fn //~| NOTE: expected type `fn(proc_macro::TokenStream) -> proc_macro::TokenStream` diff --git a/src/test/compile-fail-fulldeps/qquote.rs b/src/test/compile-fail-fulldeps/qquote.rs index 4a7033d44b..8acab3369e 100644 --- a/src/test/compile-fail-fulldeps/qquote.rs +++ b/src/test/compile-fail-fulldeps/qquote.rs @@ -16,8 +16,8 @@ extern crate syntax; extern crate syntax_pos; use syntax::ast; -use syntax::parse; use syntax::print::pprust; +use syntax::symbol::Symbol; use syntax_pos::DUMMY_SP; fn main() { @@ -30,7 +30,7 @@ fn main() { cx.bt_push(syntax::codemap::ExpnInfo { call_site: DUMMY_SP, callee: syntax::codemap::NameAndSpan { - format: syntax::codemap::MacroBang(parse::token::intern("")), + format: syntax::codemap::MacroBang(Symbol::intern("")), allow_internal_unstable: false, span: None, } diff --git a/src/test/compile-fail/E0060.rs b/src/test/compile-fail/E0060.rs index 5182a2bf5a..8246c42a4d 100644 --- a/src/test/compile-fail/E0060.rs +++ b/src/test/compile-fail/E0060.rs @@ -10,10 +10,11 @@ extern "C" { fn printf(_: *const u8, ...) -> u32; + //~^ NOTE defined here } fn main() { unsafe { printf(); } //~^ ERROR E0060 - //~| NOTE the following parameter type was expected: *const u8 + //~| expected at least 1 parameter } diff --git a/src/test/compile-fail/E0061.rs b/src/test/compile-fail/E0061.rs index 4c7c0dfd44..ebd4ad2e29 100644 --- a/src/test/compile-fail/E0061.rs +++ b/src/test/compile-fail/E0061.rs @@ -9,16 +9,17 @@ // except according to those terms. 
fn f(a: u16, b: &str) {} +//~^ NOTE defined here fn f2(a: u16) {} +//~^ NOTE defined here fn main() { f(0); //~^ ERROR E0061 - //~| NOTE the following parameter types were expected: - //~| NOTE u16, &str + //~| expected 2 parameters f2(); //~^ ERROR E0061 - //~| NOTE the following parameter type was expected: u16 + //~| expected 1 parameter } diff --git a/src/test/compile-fail/E0088.rs b/src/test/compile-fail/E0088.rs index 0b235aa240..9ec0960322 100644 --- a/src/test/compile-fail/E0088.rs +++ b/src/test/compile-fail/E0088.rs @@ -9,7 +9,12 @@ // except according to those terms. fn f() {} +fn g<'a>() {} fn main() { f::<'static>(); //~ ERROR E0088 + //~^ unexpected lifetime parameter + + g::<'static, 'static>(); //~ ERROR E0088 + //~^ unexpected lifetime parameters } diff --git a/src/test/compile-fail/E0090.rs b/src/test/compile-fail/E0090.rs new file mode 100644 index 0000000000..4600d2d638 --- /dev/null +++ b/src/test/compile-fail/E0090.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn foo<'a: 'b, 'b: 'a>() {} +fn main() { + foo::<'static>();//~ ERROR E0090 + //~^ too few lifetime parameters +} diff --git a/src/test/compile-fail/E0138.rs b/src/test/compile-fail/E0138.rs index d4630d7c2e..11d90658ab 100644 --- a/src/test/compile-fail/E0138.rs +++ b/src/test/compile-fail/E0138.rs @@ -11,10 +11,10 @@ #![feature(start)] #[start] -fn foo(argc: isize, argv: *const *const u8) -> isize {} +fn foo(argc: isize, argv: *const *const u8) -> isize { 0 } //~^ NOTE previous `start` function here #[start] -fn f(argc: isize, argv: *const *const u8) -> isize {} +fn f(argc: isize, argv: *const *const u8) -> isize { 0 } //~^ ERROR E0138 //~| NOTE multiple `start` functions diff --git a/src/test/compile-fail/E0225.rs b/src/test/compile-fail/E0225.rs index b013788cef..8c79c15e3d 100644 --- a/src/test/compile-fail/E0225.rs +++ b/src/test/compile-fail/E0225.rs @@ -10,6 +10,6 @@ fn main() { let _: Box; - //~^ ERROR only the builtin traits can be used as closure or object bounds [E0225] - //~| NOTE non-builtin trait used as bounds + //~^ ERROR only Send/Sync traits can be used as additional traits in a trait object [E0225] + //~| NOTE non-Send/Sync additional trait } diff --git a/src/test/compile-fail/E0243.rs b/src/test/compile-fail/E0243.rs index 4434723e12..d20435a37f 100644 --- a/src/test/compile-fail/E0243.rs +++ b/src/test/compile-fail/E0243.rs @@ -10,8 +10,8 @@ struct Foo { x: T } struct Bar { x: Foo } - //~^ ERROR E0243 - //~| NOTE expected 1 type argument, found 0 + //~^ ERROR wrong number of type arguments: expected 1, found 0 [E0243] + //~| NOTE expected 1 type argument fn main() { } diff --git a/src/test/compile-fail/E0244.rs b/src/test/compile-fail/E0244.rs index 5678a7fd45..02d4b33789 100644 --- a/src/test/compile-fail/E0244.rs +++ b/src/test/compile-fail/E0244.rs @@ -10,8 +10,8 @@ struct Foo { x: bool } struct Bar { x: Foo } - //~^ ERROR E0244 - //~| NOTE expected no type arguments, found 2 + //~^ ERROR wrong number of type arguments: expected 0, found 2 [E0244] + //~| NOTE expected no type arguments fn main() { diff --git a/src/test/compile-fail/E0254.rs b/src/test/compile-fail/E0254.rs index 3e4b7b9cad..fe7ee1c129 100644 --- a/src/test/compile-fail/E0254.rs +++ 
b/src/test/compile-fail/E0254.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(collections)] + extern crate collections; //~^ NOTE previous import of `collections` here diff --git a/src/test/compile-fail/E0259.rs b/src/test/compile-fail/E0259.rs index d3e876e252..95be48b5ff 100644 --- a/src/test/compile-fail/E0259.rs +++ b/src/test/compile-fail/E0259.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(collections, libc)] + extern crate collections; //~^ NOTE previous import of `collections` here diff --git a/src/test/compile-fail/E0260.rs b/src/test/compile-fail/E0260.rs index 63647cb410..ae018d2ada 100644 --- a/src/test/compile-fail/E0260.rs +++ b/src/test/compile-fail/E0260.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(collections)] + extern crate collections; //~^ NOTE previous import of `collections` here diff --git a/src/test/compile-fail/E0428.rs b/src/test/compile-fail/E0428.rs index 63b4efb73f..f8502140c4 100644 --- a/src/test/compile-fail/E0428.rs +++ b/src/test/compile-fail/E0428.rs @@ -9,11 +9,8 @@ // except according to those terms. struct Bar; //~ previous definition of `Bar` here - //~| previous definition of `Bar` here struct Bar; //~ ERROR E0428 //~| NOTE already defined - //~| ERROR E0428 - //~| NOTE already defined fn main () { } diff --git a/src/test/compile-fail/E0445.rs b/src/test/compile-fail/E0445.rs index 7c5c862a6f..efef8305e5 100644 --- a/src/test/compile-fail/E0445.rs +++ b/src/test/compile-fail/E0445.rs @@ -13,13 +13,13 @@ trait Foo { } pub trait Bar : Foo {} -//~^ ERROR private trait in public interface [E0445] +//~^ ERROR private trait `Foo` in public interface [E0445] //~| NOTE private trait can't be public pub struct Bar2(pub T); -//~^ ERROR private trait in public interface [E0445] +//~^ ERROR private trait `Foo` in public interface [E0445] //~| NOTE private trait can't be public pub fn foo (t: T) {} -//~^ ERROR private trait in public interface [E0445] +//~^ ERROR private trait `Foo` in public interface [E0445] //~| NOTE private trait can't be public fn main() {} diff --git a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-2.rs b/src/test/compile-fail/E0572.rs similarity index 87% rename from src/test/compile-fail-fulldeps/proc-macro/feature-gate-2.rs rename to src/test/compile-fail/E0572.rs index 9c4053266f..bbaab102de 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-2.rs +++ b/src/test/compile-fail/E0572.rs @@ -8,6 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -extern crate proc_macro; //~ ERROR: use of unstable library feature +const FOO: u32 = return 0; //~ ERROR E0572 fn main() {} diff --git a/src/test/compile-fail/auxiliary/empty-struct.rs b/src/test/compile-fail/auxiliary/empty-struct.rs index dcbb0ce178..4a30286563 100644 --- a/src/test/compile-fail/auxiliary/empty-struct.rs +++ b/src/test/compile-fail/auxiliary/empty-struct.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#![feature(relaxed_adts)] - pub struct XEmpty1 {} pub struct XEmpty2; pub struct XEmpty6(); diff --git a/src/test/compile-fail/auxiliary/import_crate_var.rs b/src/test/compile-fail/auxiliary/import_crate_var.rs index 1dfc7a128a..a8a55afa41 100644 --- a/src/test/compile-fail/auxiliary/import_crate_var.rs +++ b/src/test/compile-fail/auxiliary/import_crate_var.rs @@ -8,5 +8,10 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +pub fn f() {} + #[macro_export] -macro_rules! m { () => { use $crate; } } +macro_rules! m { () => { + use $crate; + import_crate_var::f(); +} } diff --git a/src/test/compile-fail/auxiliary/namespace-mix-old.rs b/src/test/compile-fail/auxiliary/namespace-mix-old.rs deleted file mode 100644 index 7bbba7163b..0000000000 --- a/src/test/compile-fail/auxiliary/namespace-mix-old.rs +++ /dev/null @@ -1,85 +0,0 @@ -// Copyright 2016 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// FIXME: Remove when `item_like_imports` is stabilized. - -#![feature(relaxed_adts)] - -pub mod c { - pub struct S {} - pub struct TS(); - pub struct US; - pub enum E { - V {}, - TV(), - UV, - } - - pub struct Item; -} - -pub mod proxy { - pub use c::*; - pub use c::E::*; -} - -pub mod xm1 { - pub use ::proxy::*; - pub type S = ::c::Item; -} -pub mod xm2 { - pub use ::proxy::*; - pub const S: ::c::Item = ::c::Item; -} - -pub mod xm3 { - pub use ::proxy::*; - pub type TS = ::c::Item; -} -pub mod xm4 { - pub use ::proxy::*; - pub const TS: ::c::Item = ::c::Item; -} - -pub mod xm5 { - pub use ::proxy::*; - pub type US = ::c::Item; -} -pub mod xm6 { - pub use ::proxy::*; - pub const US: ::c::Item = ::c::Item; -} - -pub mod xm7 { - pub use ::proxy::*; - pub type V = ::c::Item; -} -pub mod xm8 { - pub use ::proxy::*; - pub const V: ::c::Item = ::c::Item; -} - -pub mod xm9 { - pub use ::proxy::*; - pub type TV = ::c::Item; -} -pub mod xmA { - pub use ::proxy::*; - pub const TV: ::c::Item = ::c::Item; -} - -pub mod xmB { - pub use ::proxy::*; - pub type UV = ::c::Item; -} -pub mod xmC { - pub use ::proxy::*; - pub const UV: ::c::Item = ::c::Item; -} diff --git a/src/test/compile-fail/auxiliary/namespace-mix-new.rs b/src/test/compile-fail/auxiliary/namespace-mix.rs similarity index 97% rename from src/test/compile-fail/auxiliary/namespace-mix-new.rs rename to src/test/compile-fail/auxiliary/namespace-mix.rs index 88e8b0d56f..d82e9bb702 100644 --- a/src/test/compile-fail/auxiliary/namespace-mix-new.rs +++ b/src/test/compile-fail/auxiliary/namespace-mix.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#![feature(item_like_imports, relaxed_adts)] - pub mod c { pub struct S {} pub struct TS(); diff --git a/src/test/compile-fail/bad-sized.rs b/src/test/compile-fail/bad-sized.rs index e9d0b986c1..a2e2e5caaf 100644 --- a/src/test/compile-fail/bad-sized.rs +++ b/src/test/compile-fail/bad-sized.rs @@ -12,8 +12,7 @@ trait Trait {} pub fn main() { let x: Vec = Vec::new(); - //~^ ERROR `Trait + Sized: std::marker::Sized` is not satisfied - //~| ERROR the trait `std::marker::Sized` cannot be made into an object - //~| ERROR `Trait + Sized: std::marker::Sized` is not satisfied - //~| ERROR the trait `std::marker::Sized` cannot be made into an object + //~^ ERROR only Send/Sync traits can be used as additional traits in a trait object + //~| ERROR the trait bound `Trait: std::marker::Sized` is not satisfied + //~| ERROR the trait bound `Trait: std::marker::Sized` is not satisfied } diff --git a/src/test/compile-fail/blind-item-block-item-shadow.rs b/src/test/compile-fail/blind-item-block-item-shadow.rs index 03af0d51ec..2d53aee39e 100644 --- a/src/test/compile-fail/blind-item-block-item-shadow.rs +++ b/src/test/compile-fail/blind-item-block-item-shadow.rs @@ -15,6 +15,5 @@ fn main() { struct Bar; use foo::Bar; //~^ ERROR a type named `Bar` has already been defined in this block - //~^^ ERROR a value named `Bar` has already been defined in this block } } diff --git a/src/test/compile-fail/borrowck/borrowck-borrowed-uniq-rvalue.rs b/src/test/compile-fail/borrowck/borrowck-borrowed-uniq-rvalue.rs index 8bbecfd48c..e4eca7e7ec 100644 --- a/src/test/compile-fail/borrowck/borrowck-borrowed-uniq-rvalue.rs +++ b/src/test/compile-fail/borrowck/borrowck-borrowed-uniq-rvalue.rs @@ -12,7 +12,6 @@ #![feature(box_syntax)] -extern crate collections; use std::collections::HashMap; fn main() { diff --git a/src/test/compile-fail/borrowck/borrowck-insert-during-each.rs b/src/test/compile-fail/borrowck/borrowck-insert-during-each.rs index 2c63486598..8499ebb8ac 100644 --- a/src/test/compile-fail/borrowck/borrowck-insert-during-each.rs +++ b/src/test/compile-fail/borrowck/borrowck-insert-during-each.rs @@ -8,7 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -extern crate collections; use std::collections::HashSet; struct Foo { diff --git a/src/test/compile-fail/borrowck/borrowck-overloaded-call.rs b/src/test/compile-fail/borrowck/borrowck-overloaded-call.rs index 93c37524bf..4c20688331 100644 --- a/src/test/compile-fail/borrowck/borrowck-overloaded-call.rs +++ b/src/test/compile-fail/borrowck/borrowck-overloaded-call.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] use std::ops::{Fn, FnMut, FnOnce}; diff --git a/src/test/compile-fail/cast-rfc0401.rs b/src/test/compile-fail/cast-rfc0401.rs index 0c373057c7..b98f464c90 100644 --- a/src/test/compile-fail/cast-rfc0401.rs +++ b/src/test/compile-fail/cast-rfc0401.rs @@ -48,16 +48,13 @@ fn main() let _ = v as f32; //~^ ERROR casting - //~^^ HELP through a usize first let _ = main as f64; //~^ ERROR casting - //~^^ HELP through a usize first let _ = &v as usize; //~^ ERROR casting //~^^ HELP through a raw pointer first let _ = f as *const u8; //~^ ERROR casting - //~^^ HELP through a usize first let _ = 3_i32 as bool; //~^ ERROR cannot cast as `bool` [E0054] //~| unsupported cast @@ -80,13 +77,10 @@ fn main() let _ = false as *const u8; //~^ ERROR casting - //~^^ HELP through a usize first let _ = E::A as *const u8; //~^ ERROR casting - //~^^ HELP through a usize first let _ = 'a' as *const u8; //~^ ERROR casting - //~^^ HELP through a usize first let _ = 42usize as *const [u8]; //~ ERROR casting let _ = v as *const [u8]; //~ ERROR cannot cast @@ -121,4 +115,9 @@ fn main() let _ = cf as *const Bar; //~^ ERROR casting //~^^ NOTE vtable kinds + + vec![0.0].iter().map(|s| s as f32).collect::>(); + //~^ ERROR casting `&{float}` as `f32` is invalid + //~| NOTE cannot cast `&{float}` as `f32` + //~| NOTE did you mean `*s`? } diff --git a/src/test/compile-fail/coherence-error-suppression.rs b/src/test/compile-fail/coherence-error-suppression.rs new file mode 100644 index 0000000000..b33f27fbc8 --- /dev/null +++ b/src/test/compile-fail/coherence-error-suppression.rs @@ -0,0 +1,25 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// check that error types in coherence do not cause error cascades. + +trait Foo {} + +impl Foo for i8 {} +impl Foo for i16 {} +impl Foo for i32 {} +impl Foo for i64 {} +impl Foo for DoesNotExist {} //~ ERROR `DoesNotExist` is undefined +impl Foo for u8 {} +impl Foo for u16 {} +impl Foo for u32 {} +impl Foo for u64 {} + +fn main() {} diff --git a/src/test/compile-fail/consider-removing-last-semi.rs b/src/test/compile-fail/consider-removing-last-semi.rs index 2e110cb3d0..530a0e4156 100644 --- a/src/test/compile-fail/consider-removing-last-semi.rs +++ b/src/test/compile-fail/consider-removing-last-semi.rs @@ -8,12 +8,12 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-fn f() -> String { //~ ERROR E0269 +fn f() -> String { //~ ERROR mismatched types 0u8; "bla".to_string(); //~ HELP consider removing this semicolon } -fn g() -> String { //~ ERROR E0269 +fn g() -> String { //~ ERROR mismatched types "this won't work".to_string(); "removeme".to_string(); //~ HELP consider removing this semicolon } diff --git a/src/test/compile-fail/const-unsized.rs b/src/test/compile-fail/const-unsized.rs index 226b567c54..23cff4ac6a 100644 --- a/src/test/compile-fail/const-unsized.rs +++ b/src/test/compile-fail/const-unsized.rs @@ -11,8 +11,8 @@ use std::fmt::Debug; const CONST_0: Debug+Sync = *(&0 as &(Debug+Sync)); -//~^ ERROR `std::fmt::Debug + Sync + 'static: std::marker::Sized` is not satisfied -//~| NOTE the trait `std::marker::Sized` is not implemented for `std::fmt::Debug + Sync + 'static` +//~^ ERROR `std::fmt::Debug + std::marker::Sync + 'static: std::marker::Sized` is not satisfied +//~| NOTE the trait `std::marker::Sized` is not implemented for `std::fmt::Debug + std::marker::Syn //~| NOTE does not have a constant size known at compile-time //~| NOTE constant expressions must have a statically known size @@ -23,8 +23,8 @@ const CONST_FOO: str = *"foo"; //~| NOTE constant expressions must have a statically known size static STATIC_1: Debug+Sync = *(&1 as &(Debug+Sync)); -//~^ ERROR `std::fmt::Debug + Sync + 'static: std::marker::Sized` is not satisfied -//~| NOTE the trait `std::marker::Sized` is not implemented for `std::fmt::Debug + Sync + 'static` +//~^ ERROR `std::fmt::Debug + std::marker::Sync + 'static: std::marker::Sized` is not satisfied +//~| NOTE the trait `std::marker::Sized` is not implemented for `std::fmt::Debug + std::marker::Syn //~| NOTE does not have a constant size known at compile-time //~| NOTE constant expressions must have a statically known size diff --git a/src/test/compile-fail-fulldeps/proc-macro/cannot-link.rs b/src/test/compile-fail/crt-static-gated.rs similarity index 79% rename from src/test/compile-fail-fulldeps/proc-macro/cannot-link.rs rename to src/test/compile-fail/crt-static-gated.rs index f6f1be37fc..6c7c60b653 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/cannot-link.rs +++ b/src/test/compile-fail/crt-static-gated.rs @@ -8,9 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-// aux-build:derive-a.rs - -extern crate derive_a; -//~^ ERROR: crates of the `proc-macro` crate type cannot be linked at runtime +// compile-flags:-C target-feature=+crt-static +// error-pattern: specifying the `crt-static` target feature is only allowed fn main() {} diff --git a/src/test/compile-fail/dep-graph-type-alias.rs b/src/test/compile-fail/dep-graph-type-alias.rs index 80cc9e71c7..2e33f11c04 100644 --- a/src/test/compile-fail/dep-graph-type-alias.rs +++ b/src/test/compile-fail/dep-graph-type-alias.rs @@ -42,8 +42,9 @@ trait Trait { struct SomeType; -#[rustc_then_this_would_need(ItemSignature)] //~ ERROR OK +#[rustc_then_this_would_need(ItemSignature)] //~ ERROR no path impl SomeType { + #[rustc_then_this_would_need(ItemSignature)] //~ ERROR OK fn method(&self, _: TypeAlias) {} } diff --git a/src/test/compile-fail/deriving-span-Clone-enum-struct-variant.rs b/src/test/compile-fail/derives-span-Clone-enum-struct-variant.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Clone-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-Clone-enum-struct-variant.rs index 9badb5b262..0b73f5bebb 100644 --- a/src/test/compile-fail/deriving-span-Clone-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-Clone-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Clone-enum.rs b/src/test/compile-fail/derives-span-Clone-enum.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Clone-enum.rs rename to src/test/compile-fail/derives-span-Clone-enum.rs index 6b71610778..6944ea38b3 100644 --- a/src/test/compile-fail/deriving-span-Clone-enum.rs +++ b/src/test/compile-fail/derives-span-Clone-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Clone-struct.rs b/src/test/compile-fail/derives-span-Clone-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Clone-struct.rs rename to src/test/compile-fail/derives-span-Clone-struct.rs index 845da771de..92bf148ccb 100644 --- a/src/test/compile-fail/deriving-span-Clone-struct.rs +++ b/src/test/compile-fail/derives-span-Clone-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Clone-tuple-struct.rs b/src/test/compile-fail/derives-span-Clone-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Clone-tuple-struct.rs rename to src/test/compile-fail/derives-span-Clone-tuple-struct.rs index 698e5a79be..21adfc9030 100644 --- a/src/test/compile-fail/deriving-span-Clone-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Clone-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Show-enum-struct-variant.rs b/src/test/compile-fail/derives-span-Debug-enum-struct-variant.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Show-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-Debug-enum-struct-variant.rs index 1d9099e8ed..da777e8a14 100644 --- a/src/test/compile-fail/deriving-span-Show-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-Debug-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Show-enum.rs b/src/test/compile-fail/derives-span-Debug-enum.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Show-enum.rs rename to src/test/compile-fail/derives-span-Debug-enum.rs index ab31ca95bd..bf5d3f2d81 100644 --- a/src/test/compile-fail/deriving-span-Show-enum.rs +++ b/src/test/compile-fail/derives-span-Debug-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Show-struct.rs b/src/test/compile-fail/derives-span-Debug-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Show-struct.rs rename to src/test/compile-fail/derives-span-Debug-struct.rs index eb8ac4649f..b0b275fa2d 100644 --- a/src/test/compile-fail/deriving-span-Show-struct.rs +++ b/src/test/compile-fail/derives-span-Debug-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Show-tuple-struct.rs b/src/test/compile-fail/derives-span-Debug-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Show-tuple-struct.rs rename to src/test/compile-fail/derives-span-Debug-tuple-struct.rs index b93db4ab53..9689054a7b 100644 --- a/src/test/compile-fail/deriving-span-Show-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Debug-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Default-struct.rs b/src/test/compile-fail/derives-span-Default-struct.rs similarity index 88% rename from src/test/compile-fail/deriving-span-Default-struct.rs rename to src/test/compile-fail/derives-span-Default-struct.rs index 56fb386117..68b99ed25b 100644 --- a/src/test/compile-fail/deriving-span-Default-struct.rs +++ b/src/test/compile-fail/derives-span-Default-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Default-tuple-struct.rs b/src/test/compile-fail/derives-span-Default-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Default-tuple-struct.rs rename to src/test/compile-fail/derives-span-Default-tuple-struct.rs index d0b9a7a3db..822abe975a 100644 --- a/src/test/compile-fail/deriving-span-Default-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Default-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalEq-enum-struct-variant.rs b/src/test/compile-fail/derives-span-Eq-enum-struct-variant.rs similarity index 87% rename from src/test/compile-fail/deriving-span-TotalEq-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-Eq-enum-struct-variant.rs index 6994aa76ff..fdc74d5fef 100644 --- a/src/test/compile-fail/deriving-span-TotalEq-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-Eq-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalEq-enum.rs b/src/test/compile-fail/derives-span-Eq-enum.rs similarity index 87% rename from src/test/compile-fail/deriving-span-TotalEq-enum.rs rename to src/test/compile-fail/derives-span-Eq-enum.rs index 279368d64a..4bf30fdf93 100644 --- a/src/test/compile-fail/deriving-span-TotalEq-enum.rs +++ b/src/test/compile-fail/derives-span-Eq-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalEq-struct.rs b/src/test/compile-fail/derives-span-Eq-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-TotalEq-struct.rs rename to src/test/compile-fail/derives-span-Eq-struct.rs index 8672e8e050..685188f133 100644 --- a/src/test/compile-fail/deriving-span-TotalEq-struct.rs +++ b/src/test/compile-fail/derives-span-Eq-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalEq-tuple-struct.rs b/src/test/compile-fail/derives-span-Eq-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-TotalEq-tuple-struct.rs rename to src/test/compile-fail/derives-span-Eq-tuple-struct.rs index e79b3b9741..0e636d027d 100644 --- a/src/test/compile-fail/deriving-span-TotalEq-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Eq-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-Hash-enum-struct-variant.rs b/src/test/compile-fail/derives-span-Hash-enum-struct-variant.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Hash-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-Hash-enum-struct-variant.rs index d9f4bfe102..bfb6566223 100644 --- a/src/test/compile-fail/deriving-span-Hash-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-Hash-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Hash-enum.rs b/src/test/compile-fail/derives-span-Hash-enum.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Hash-enum.rs rename to src/test/compile-fail/derives-span-Hash-enum.rs index 1f5a5d5201..99f28b376d 100644 --- a/src/test/compile-fail/deriving-span-Hash-enum.rs +++ b/src/test/compile-fail/derives-span-Hash-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Hash-struct.rs b/src/test/compile-fail/derives-span-Hash-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Hash-struct.rs rename to src/test/compile-fail/derives-span-Hash-struct.rs index 55a5e9ee6b..acfd5aa7b2 100644 --- a/src/test/compile-fail/deriving-span-Hash-struct.rs +++ b/src/test/compile-fail/derives-span-Hash-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-Hash-tuple-struct.rs b/src/test/compile-fail/derives-span-Hash-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-Hash-tuple-struct.rs rename to src/test/compile-fail/derives-span-Hash-tuple-struct.rs index 5c81c57dbc..3d76b29834 100644 --- a/src/test/compile-fail/deriving-span-Hash-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Hash-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalOrd-enum-struct-variant.rs b/src/test/compile-fail/derives-span-Ord-enum-struct-variant.rs similarity index 88% rename from src/test/compile-fail/deriving-span-TotalOrd-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-Ord-enum-struct-variant.rs index 6d5e1fb75d..06ee588e69 100644 --- a/src/test/compile-fail/deriving-span-TotalOrd-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-Ord-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(Eq,PartialOrd,PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalOrd-enum.rs b/src/test/compile-fail/derives-span-Ord-enum.rs similarity index 88% rename from src/test/compile-fail/deriving-span-TotalOrd-enum.rs rename to src/test/compile-fail/derives-span-Ord-enum.rs index 5b34290133..af9cfbc911 100644 --- a/src/test/compile-fail/deriving-span-TotalOrd-enum.rs +++ b/src/test/compile-fail/derives-span-Ord-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(Eq,PartialOrd,PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalOrd-struct.rs b/src/test/compile-fail/derives-span-Ord-struct.rs similarity index 88% rename from src/test/compile-fail/deriving-span-TotalOrd-struct.rs rename to src/test/compile-fail/derives-span-Ord-struct.rs index 61d9d8a76a..4477d933a6 100644 --- a/src/test/compile-fail/deriving-span-TotalOrd-struct.rs +++ b/src/test/compile-fail/derives-span-Ord-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(Eq,PartialOrd,PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-TotalOrd-tuple-struct.rs b/src/test/compile-fail/derives-span-Ord-tuple-struct.rs similarity index 88% rename from src/test/compile-fail/deriving-span-TotalOrd-tuple-struct.rs rename to src/test/compile-fail/derives-span-Ord-tuple-struct.rs index caef796875..ebc7518641 100644 --- a/src/test/compile-fail/deriving-span-TotalOrd-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-Ord-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(Eq,PartialOrd,PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialEq-enum-struct-variant.rs b/src/test/compile-fail/derives-span-PartialEq-enum-struct-variant.rs similarity index 87% rename from src/test/compile-fail/deriving-span-PartialEq-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-PartialEq-enum-struct-variant.rs index c340ad8a46..7c98dcc2a6 100644 --- a/src/test/compile-fail/deriving-span-PartialEq-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-PartialEq-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialEq-enum.rs b/src/test/compile-fail/derives-span-PartialEq-enum.rs similarity index 87% rename from src/test/compile-fail/deriving-span-PartialEq-enum.rs rename to src/test/compile-fail/derives-span-PartialEq-enum.rs index 9051a6371f..fe6355e456 100644 --- a/src/test/compile-fail/deriving-span-PartialEq-enum.rs +++ b/src/test/compile-fail/derives-span-PartialEq-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialEq-struct.rs b/src/test/compile-fail/derives-span-PartialEq-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-PartialEq-struct.rs rename to src/test/compile-fail/derives-span-PartialEq-struct.rs index 310d4ecd03..10d9d64277 100644 --- a/src/test/compile-fail/deriving-span-PartialEq-struct.rs +++ b/src/test/compile-fail/derives-span-PartialEq-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialEq-tuple-struct.rs b/src/test/compile-fail/derives-span-PartialEq-tuple-struct.rs similarity index 87% rename from src/test/compile-fail/deriving-span-PartialEq-tuple-struct.rs rename to src/test/compile-fail/derives-span-PartialEq-tuple-struct.rs index 9b6df0e77e..c92eb0f63c 100644 --- a/src/test/compile-fail/deriving-span-PartialEq-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-PartialEq-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialOrd-enum-struct-variant.rs b/src/test/compile-fail/derives-span-PartialOrd-enum-struct-variant.rs similarity index 89% rename from src/test/compile-fail/deriving-span-PartialOrd-enum-struct-variant.rs rename to src/test/compile-fail/derives-span-PartialOrd-enum-struct-variant.rs index 5a2d2063d1..898104d0ab 100644 --- a/src/test/compile-fail/deriving-span-PartialOrd-enum-struct-variant.rs +++ b/src/test/compile-fail/derives-span-PartialOrd-enum-struct-variant.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialOrd-enum.rs b/src/test/compile-fail/derives-span-PartialOrd-enum.rs similarity index 89% rename from src/test/compile-fail/deriving-span-PartialOrd-enum.rs rename to src/test/compile-fail/derives-span-PartialOrd-enum.rs index 9341b6c3e8..c058599947 100644 --- a/src/test/compile-fail/deriving-span-PartialOrd-enum.rs +++ b/src/test/compile-fail/derives-span-PartialOrd-enum.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialOrd-struct.rs b/src/test/compile-fail/derives-span-PartialOrd-struct.rs similarity index 89% rename from src/test/compile-fail/deriving-span-PartialOrd-struct.rs rename to src/test/compile-fail/derives-span-PartialOrd-struct.rs index 8a707566ef..af05434af9 100644 --- a/src/test/compile-fail/deriving-span-PartialOrd-struct.rs +++ b/src/test/compile-fail/derives-span-PartialOrd-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/deriving-span-PartialOrd-tuple-struct.rs b/src/test/compile-fail/derives-span-PartialOrd-tuple-struct.rs similarity index 89% rename from src/test/compile-fail/deriving-span-PartialOrd-tuple-struct.rs rename to src/test/compile-fail/derives-span-PartialOrd-tuple-struct.rs index ae1b8b4437..1afb7bc2b4 100644 --- a/src/test/compile-fail/deriving-span-PartialOrd-tuple-struct.rs +++ b/src/test/compile-fail/derives-span-PartialOrd-tuple-struct.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -10,8 +10,6 @@ // This file was auto-generated using 'src/etc/generate-deriving-span-tests.py' -extern crate rand; - #[derive(PartialEq)] struct Error; diff --git a/src/test/compile-fail/directory_ownership/backcompat-warnings.rs b/src/test/compile-fail/directory_ownership/backcompat-warnings.rs new file mode 100644 index 0000000000..75e3426a39 --- /dev/null +++ b/src/test/compile-fail/directory_ownership/backcompat-warnings.rs @@ -0,0 +1,21 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// error-pattern: cannot declare a new module at this location +// error-pattern: will become a hard error +// error-pattern: compilation successful + +#![feature(rustc_attrs)] + +#[path="mod_file_not_owning_aux3.rs"] +mod foo; + +#[rustc_error] +fn main() {} diff --git a/src/test/compile-fail/macro-expanded-mod.rs b/src/test/compile-fail/directory_ownership/macro-expanded-mod.rs similarity index 100% rename from src/test/compile-fail/macro-expanded-mod.rs rename to src/test/compile-fail/directory_ownership/macro-expanded-mod.rs diff --git a/src/test/compile-fail/macro_expanded_mod_helper/foo/bar.rs b/src/test/compile-fail/directory_ownership/macro_expanded_mod_helper/foo/bar.rs similarity index 100% rename from src/test/compile-fail/macro_expanded_mod_helper/foo/bar.rs rename to src/test/compile-fail/directory_ownership/macro_expanded_mod_helper/foo/bar.rs diff --git a/src/test/compile-fail/macro_expanded_mod_helper/foo/mod.rs b/src/test/compile-fail/directory_ownership/macro_expanded_mod_helper/foo/mod.rs similarity index 100% rename from src/test/compile-fail/macro_expanded_mod_helper/foo/mod.rs rename to src/test/compile-fail/directory_ownership/macro_expanded_mod_helper/foo/mod.rs diff --git a/src/test/compile-fail/mod_file_not_owning.rs b/src/test/compile-fail/directory_ownership/mod_file_not_owning.rs similarity index 94% rename from src/test/compile-fail/mod_file_not_owning.rs rename to src/test/compile-fail/directory_ownership/mod_file_not_owning.rs index 7dcff6e664..adbcedd91f 100644 --- a/src/test/compile-fail/mod_file_not_owning.rs +++ b/src/test/compile-fail/directory_ownership/mod_file_not_owning.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// compile-flags: -Z parse-only - // error-pattern: cannot declare a new module at this location mod mod_file_not_owning_aux1; diff --git a/src/test/compile-fail/mod_file_not_owning_aux1.rs b/src/test/compile-fail/directory_ownership/mod_file_not_owning_aux1.rs similarity index 87% rename from src/test/compile-fail/mod_file_not_owning_aux1.rs rename to src/test/compile-fail/directory_ownership/mod_file_not_owning_aux1.rs index 2d522be6dc..4ac94a92e3 100644 --- a/src/test/compile-fail/mod_file_not_owning_aux1.rs +++ b/src/test/compile-fail/directory_ownership/mod_file_not_owning_aux1.rs @@ -10,4 +10,7 @@ // ignore-test this is not a test -mod mod_file_not_owning_aux2; +macro_rules! m { + () => { mod mod_file_not_owning_aux2; } +} +m!(); diff --git a/src/test/compile-fail/mod_file_not_owning_aux2.rs b/src/test/compile-fail/directory_ownership/mod_file_not_owning_aux2.rs similarity index 100% rename from src/test/compile-fail/mod_file_not_owning_aux2.rs rename to src/test/compile-fail/directory_ownership/mod_file_not_owning_aux2.rs diff --git a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-5.rs b/src/test/compile-fail/directory_ownership/mod_file_not_owning_aux3.rs similarity index 86% rename from src/test/compile-fail-fulldeps/proc-macro/feature-gate-5.rs rename to src/test/compile-fail/directory_ownership/mod_file_not_owning_aux3.rs index 672579ce8f..3a164fd55d 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-5.rs +++ b/src/test/compile-fail/directory_ownership/mod_file_not_owning_aux3.rs @@ -8,5 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#[cfg(proc_macro)] //~ ERROR: experimental and subject to change -fn foo() {} +// ignore-test this is not a test + +mod mod_file_not_owning_aux2; diff --git a/src/test/compile-fail/non-inline-mod-restriction.rs b/src/test/compile-fail/directory_ownership/non-inline-mod-restriction.rs similarity index 100% rename from src/test/compile-fail/non-inline-mod-restriction.rs rename to src/test/compile-fail/directory_ownership/non-inline-mod-restriction.rs diff --git a/src/test/compile-fail/directory_ownership/unowned_mod_with_path.rs b/src/test/compile-fail/directory_ownership/unowned_mod_with_path.rs new file mode 100644 index 0000000000..854f790bef --- /dev/null +++ b/src/test/compile-fail/directory_ownership/unowned_mod_with_path.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// error-pattern: cannot declare a new module at this location + +// This is not a directory owner since the file name is not "mod.rs". +#[path = "mod_file_not_owning_aux1.rs"] +mod foo; diff --git a/src/test/compile-fail/diverging-fn-tail-35849.rs b/src/test/compile-fail/diverging-fn-tail-35849.rs index 6dc447b4dc..3a27c08413 100644 --- a/src/test/compile-fail/diverging-fn-tail-35849.rs +++ b/src/test/compile-fail/diverging-fn-tail-35849.rs @@ -8,8 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -fn _converge() -> ! { //~ ERROR computation may converge - 42 +fn _converge() -> ! { + 42 //~ ERROR mismatched types } fn main() { } diff --git a/src/test/compile-fail/double-type-import.rs b/src/test/compile-fail/double-type-import.rs index 923f95e69d..760612c05c 100644 --- a/src/test/compile-fail/double-type-import.rs +++ b/src/test/compile-fail/double-type-import.rs @@ -11,8 +11,7 @@ mod foo { pub use self::bar::X; use self::bar::X; - //~^ ERROR a value named `X` has already been imported in this module - //~| ERROR a type named `X` has already been imported in this module + //~^ ERROR a type named `X` has already been imported in this module mod bar { pub struct X; diff --git a/src/test/compile-fail/empty-struct-braces-pat-2.rs b/src/test/compile-fail/empty-struct-braces-pat-2.rs index 58e3ca6b3a..4349e72c5d 100644 --- a/src/test/compile-fail/empty-struct-braces-pat-2.rs +++ b/src/test/compile-fail/empty-struct-braces-pat-2.rs @@ -12,8 +12,6 @@ // aux-build:empty-struct.rs -#![feature(relaxed_adts)] - extern crate empty_struct; use empty_struct::*; diff --git a/src/test/compile-fail/empty-struct-braces-pat-3.rs b/src/test/compile-fail/empty-struct-braces-pat-3.rs index 1960eca9f8..d6c5b95349 100644 --- a/src/test/compile-fail/empty-struct-braces-pat-3.rs +++ b/src/test/compile-fail/empty-struct-braces-pat-3.rs @@ -12,8 +12,6 @@ // aux-build:empty-struct.rs -#![feature(relaxed_adts)] - extern crate empty_struct; use empty_struct::*; diff --git a/src/test/compile-fail/empty-struct-tuple-pat.rs b/src/test/compile-fail/empty-struct-tuple-pat.rs index f15c126a12..5e683eafad 100644 --- a/src/test/compile-fail/empty-struct-tuple-pat.rs +++ b/src/test/compile-fail/empty-struct-tuple-pat.rs @@ -12,8 +12,6 @@ // aux-build:empty-struct.rs -#![feature(relaxed_adts)] - extern crate empty_struct; use empty_struct::*; diff --git 
a/src/test/compile-fail/empty-struct-unit-expr.rs b/src/test/compile-fail/empty-struct-unit-expr.rs index 350b96c764..273ce91a7c 100644 --- a/src/test/compile-fail/empty-struct-unit-expr.rs +++ b/src/test/compile-fail/empty-struct-unit-expr.rs @@ -23,7 +23,11 @@ enum E { fn main() { let e2 = Empty2(); //~ ERROR expected function, found `Empty2` - let e4 = E::Empty4(); //~ ERROR expected function, found `E` + let e4 = E::Empty4(); + //~^ ERROR `E::Empty4` is being called, but it is not a function + //~| HELP did you mean to write `E::Empty4`? let xe2 = XEmpty2(); //~ ERROR expected function, found `empty_struct::XEmpty2` - let xe4 = XE::XEmpty4(); //~ ERROR expected function, found `empty_struct::XE` + let xe4 = XE::XEmpty4(); + //~^ ERROR `XE::XEmpty4` is being called, but it is not a function + //~| HELP did you mean to write `XE::XEmpty4`? } diff --git a/src/test/compile-fail/empty-struct-unit-pat.rs b/src/test/compile-fail/empty-struct-unit-pat.rs index 90f6ae5755..532c2d8505 100644 --- a/src/test/compile-fail/empty-struct-unit-pat.rs +++ b/src/test/compile-fail/empty-struct-unit-pat.rs @@ -12,8 +12,6 @@ // aux-build:empty-struct.rs -#![feature(relaxed_adts)] - extern crate empty_struct; use empty_struct::*; diff --git a/src/test/compile-fail/fat-ptr-cast.rs b/src/test/compile-fail/fat-ptr-cast.rs index b2fd11d4b3..c62987a5b9 100644 --- a/src/test/compile-fail/fat-ptr-cast.rs +++ b/src/test/compile-fail/fat-ptr-cast.rs @@ -19,6 +19,9 @@ fn main() { a as usize; //~ ERROR casting //~^ HELP cast through a raw pointer first + a as isize; //~ ERROR casting + a as i16; //~ ERROR casting `&[i32]` as `i16` is invalid + a as u32; //~ ERROR casting `&[i32]` as `u32` is invalid b as usize; //~ ERROR non-scalar cast p as usize; //~^ ERROR casting diff --git a/src/test/compile-fail/feature-gate-loop-break-value.rs b/src/test/compile-fail/feature-gate-loop-break-value.rs new file mode 100644 index 0000000000..1632c40d59 --- /dev/null +++ b/src/test/compile-fail/feature-gate-loop-break-value.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn main() { + loop { + break 123; //~ ERROR `break` with a value is experimental + } +} diff --git a/src/test/compile-fail/forget-init-unsafe.rs b/src/test/compile-fail/forget-init-unsafe.rs index 46a18c9818..521f122f8a 100644 --- a/src/test/compile-fail/forget-init-unsafe.rs +++ b/src/test/compile-fail/forget-init-unsafe.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(core_intrinsics)] + use std::intrinsics::{init, forget}; // Test that the `forget` and `init` intrinsics are really unsafe diff --git a/src/test/compile-fail/gated-target_feature.rs b/src/test/compile-fail/gated-target_feature.rs new file mode 100644 index 0000000000..da2e41a0f5 --- /dev/null +++ b/src/test/compile-fail/gated-target_feature.rs @@ -0,0 +1,13 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +#[target_feature = "+sse2"] +//~^ the `#[target_feature]` attribute is an experimental feature +fn foo() {} diff --git a/src/test/compile-fail/generic-type-less-params-with-defaults.rs b/src/test/compile-fail/generic-type-less-params-with-defaults.rs index 9b1f3e5164..9b19e09eea 100644 --- a/src/test/compile-fail/generic-type-less-params-with-defaults.rs +++ b/src/test/compile-fail/generic-type-less-params-with-defaults.rs @@ -17,6 +17,6 @@ struct Vec( fn main() { let _: Vec; - //~^ ERROR E0243 - //~| NOTE expected at least 1 type argument, found 0 + //~^ ERROR wrong number of type arguments: expected at least 1, found 0 [E0243] + //~| NOTE expected at least 1 type argument } diff --git a/src/test/compile-fail/generic-type-more-params-with-defaults.rs b/src/test/compile-fail/generic-type-more-params-with-defaults.rs index 8f733ddfce..b5764ef89a 100644 --- a/src/test/compile-fail/generic-type-more-params-with-defaults.rs +++ b/src/test/compile-fail/generic-type-more-params-with-defaults.rs @@ -17,6 +17,6 @@ struct Vec( fn main() { let _: Vec; - //~^ ERROR E0244 - //~| NOTE expected at most 2 type arguments, found 3 + //~^ ERROR wrong number of type arguments: expected at most 2, found 3 [E0244] + //~| NOTE expected at most 2 type arguments } diff --git a/src/test/compile-fail/glob-cycles.rs b/src/test/compile-fail/glob-cycles.rs index 077ae19b4c..8f1b8ec91d 100644 --- a/src/test/compile-fail/glob-cycles.rs +++ b/src/test/compile-fail/glob-cycles.rs @@ -8,9 +8,11 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(rustc_attrs)] + mod foo { pub use bar::*; - pub use main as f; //~ ERROR has already been imported + pub use main as f; } mod bar { @@ -18,9 +20,10 @@ mod bar { } pub use foo::*; -pub use baz::*; //~ ERROR has already been imported +pub use baz::*; mod baz { pub use super::*; } -pub fn main() {} +#[rustc_error] +pub fn main() {} //~ ERROR compilation successful diff --git a/src/test/compile-fail/ifmt-bad-arg.rs b/src/test/compile-fail/ifmt-bad-arg.rs index 59c61a42e0..a23b4b0774 100644 --- a/src/test/compile-fail/ifmt-bad-arg.rs +++ b/src/test/compile-fail/ifmt-bad-arg.rs @@ -17,6 +17,7 @@ fn main() { //~^ ERROR: argument never used format!("{foo}"); //~ ERROR: no argument named `foo` + format!("", 1, 2); //~ ERROR: multiple unused formatting arguments format!("{}", 1, 2); //~ ERROR: argument never used format!("{1}", 1, 2); //~ ERROR: argument never used format!("{}", 1, foo=2); //~ ERROR: named argument never used @@ -53,4 +54,6 @@ fn main() { format!("foo } bar"); //~ ERROR: unmatched `}` found format!("foo }"); //~ ERROR: unmatched `}` found + + format!("foo %s baz", "bar"); //~ ERROR: argument never used } diff --git a/src/test/compile-fail/import-crate-var.rs b/src/test/compile-fail/import-crate-var.rs index 9f57394548..e58ba2c889 100644 --- a/src/test/compile-fail/import-crate-var.rs +++ b/src/test/compile-fail/import-crate-var.rs @@ -11,11 +11,13 @@ // aux-build:import_crate_var.rs // error-pattern: `$crate` may not be imported // error-pattern: `use $crate;` was erroneously allowed and will become a hard error +// error-pattern: compilation successful #![feature(rustc_attrs)] #[macro_use] extern crate import_crate_var; -m!(); #[rustc_error] -fn main() {} +fn main() { + m!(); +} diff --git a/src/test/compile-fail/import-shadow-4.rs b/src/test/compile-fail/import-shadow-4.rs deleted file mode 100644 index 
f21fdaae47..0000000000 --- a/src/test/compile-fail/import-shadow-4.rs +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use foo::*; -use bar::Baz; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; -} - -mod bar { - pub type Baz = isize; -} - -mod qux { - pub use bar::Baz; -} - -fn main() {} diff --git a/src/test/compile-fail/import-shadow-5.rs b/src/test/compile-fail/import-shadow-5.rs deleted file mode 100644 index dc300bc7ba..0000000000 --- a/src/test/compile-fail/import-shadow-5.rs +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use foo::Baz; -use bar::Baz; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; -} - -mod bar { - pub type Baz = isize; -} - -mod qux { - pub use bar::Baz; -} - -fn main() {} diff --git a/src/test/compile-fail/import-shadow-6.rs b/src/test/compile-fail/import-shadow-6.rs deleted file mode 100644 index fa3b75c70f..0000000000 --- a/src/test/compile-fail/import-shadow-6.rs +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use qux::*; -use foo::*; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; -} - -mod bar { - pub type Baz = isize; -} - -mod qux { - pub use bar::Baz; -} - -fn main() {} diff --git a/src/test/compile-fail/import-shadow-7.rs b/src/test/compile-fail/import-shadow-7.rs deleted file mode 100644 index 34aba15b39..0000000000 --- a/src/test/compile-fail/import-shadow-7.rs +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. 
- -// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use foo::*; -use qux::*; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; -} - -mod bar { - pub type Baz = isize; -} - -mod qux { - pub use bar::Baz; -} - -fn main() {} diff --git a/src/test/compile-fail/import.rs b/src/test/compile-fail/import.rs index 1ca1c06041..81a5334ed7 100644 --- a/src/test/compile-fail/import.rs +++ b/src/test/compile-fail/import.rs @@ -20,6 +20,6 @@ mod zed { } fn main() { - zed::foo(); //~ ERROR unresolved name + zed::foo(); //~ ERROR `foo` is private bar(); } diff --git a/src/test/compile-fail/imports/auxiliary/two_macros.rs b/src/test/compile-fail/imports/auxiliary/two_macros.rs new file mode 100644 index 0000000000..2ac8e3ef98 --- /dev/null +++ b/src/test/compile-fail/imports/auxiliary/two_macros.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#[macro_export] +macro_rules! m { ($($t:tt)*) => { $($t)* } } + +#[macro_export] +macro_rules! n { ($($t:tt)*) => { $($t)* } } diff --git a/src/test/compile-fail/imports/duplicate.rs b/src/test/compile-fail/imports/duplicate.rs index fb61bb8e48..8dd69d8c24 100644 --- a/src/test/compile-fail/imports/duplicate.rs +++ b/src/test/compile-fail/imports/duplicate.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(item_like_imports)] - mod a { pub fn foo() {} } @@ -46,9 +44,9 @@ mod g { fn main() { e::foo(); f::foo(); //~ ERROR `foo` is ambiguous - //~| NOTE Consider adding an explicit import of `foo` to disambiguate + //~| NOTE consider adding an explicit import of `foo` to disambiguate g::foo(); //~ ERROR `foo` is ambiguous - //~| NOTE Consider adding an explicit import of `foo` to disambiguate + //~| NOTE consider adding an explicit import of `foo` to disambiguate } mod ambiguous_module_errors { diff --git a/src/test/compile-fail/imports/macro-paths.rs b/src/test/compile-fail/imports/macro-paths.rs new file mode 100644 index 0000000000..97c05392e7 --- /dev/null +++ b/src/test/compile-fail/imports/macro-paths.rs @@ -0,0 +1,42 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:two_macros.rs + +#![feature(use_extern_macros)] + +extern crate two_macros; + +mod foo { + pub mod bar { + pub use two_macros::m; + } +} + +fn f() { + use foo::*; //~ NOTE could also resolve to the name imported here + bar::m! { //~ ERROR ambiguous + //~| NOTE macro-expanded items do not shadow when used in a macro invocation path + mod bar { pub use two_macros::m; } //~ NOTE could resolve to the name defined here + //~^^^ NOTE in this expansion + } +} + +pub mod baz { //~ NOTE could also resolve to the name defined here + pub use two_macros::m; +} + +fn g() { + baz::m! 
{ //~ ERROR ambiguous + //~| NOTE macro-expanded items do not shadow when used in a macro invocation path + mod baz { pub use two_macros::m; } //~ NOTE could resolve to the name defined here + //~^^^ NOTE in this expansion + } +} diff --git a/src/test/compile-fail/imports/macros.rs b/src/test/compile-fail/imports/macros.rs new file mode 100644 index 0000000000..c11d2aab7c --- /dev/null +++ b/src/test/compile-fail/imports/macros.rs @@ -0,0 +1,55 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:two_macros.rs + +#![feature(item_like_imports, use_extern_macros)] + +extern crate two_macros; // two identity macros `m` and `n` + +mod foo { + pub use two_macros::n as m; +} + +mod m1 { + m!(use two_macros::*;); + use foo::m; // This shadows the glob import +} + +mod m2 { + use two_macros::*; //~ NOTE could also resolve + m! { //~ ERROR ambiguous + //~| NOTE macro-expanded macro imports do not shadow + use foo::m; //~ NOTE could resolve to the name imported here + //~^^^ NOTE in this expansion + } +} + +mod m3 { + use two_macros::m; //~ NOTE could also resolve + fn f() { + use two_macros::n as m; // This shadows the above import + m!(); + } + + fn g() { + m! { //~ ERROR ambiguous + //~| NOTE macro-expanded macro imports do not shadow + use two_macros::n as m; //~ NOTE could resolve to the name imported here + //~^^^ NOTE in this expansion + } + } +} + +mod m4 { + macro_rules! m { () => {} } //~ NOTE could resolve to the macro defined here + use two_macros::m; //~ NOTE could also resolve to the macro imported here + m!(); //~ ERROR ambiguous +} diff --git a/src/test/compile-fail/imports/reexports.rs b/src/test/compile-fail/imports/reexports.rs index fc46b23351..65e6e8d01b 100644 --- a/src/test/compile-fail/imports/reexports.rs +++ b/src/test/compile-fail/imports/reexports.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(item_like_imports)] - mod a { fn foo() {} mod foo {} diff --git a/src/test/compile-fail/imports/rfc-1560-warning-cycle.rs b/src/test/compile-fail/imports/rfc-1560-warning-cycle.rs new file mode 100644 index 0000000000..bed10c87ae --- /dev/null +++ b/src/test/compile-fail/imports/rfc-1560-warning-cycle.rs @@ -0,0 +1,30 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![feature(rustc_attrs)] +#![allow(unused)] + +pub struct Foo; + +mod bar { + struct Foo; + + mod baz { + use *; //~ NOTE `Foo` could resolve to the name imported here + use bar::*; //~ NOTE `Foo` could also resolve to the name imported here + fn f(_: Foo) {} + //~^ WARN `Foo` is ambiguous + //~| WARN hard error in a future release + //~| NOTE see issue #38260 + } +} + +#[rustc_error] +fn main() {} //~ ERROR compilation successful diff --git a/src/test/compile-fail/imports/unused.rs b/src/test/compile-fail/imports/unused.rs index 4ec9987df4..05ecc781af 100644 --- a/src/test/compile-fail/imports/unused.rs +++ b/src/test/compile-fail/imports/unused.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(pub_restricted, item_like_imports)] +#![feature(pub_restricted)] #![deny(unused)] mod foo { diff --git a/src/test/compile-fail/incr_comp_with_macro_export.rs b/src/test/compile-fail/incr_comp_with_macro_export.rs new file mode 100644 index 0000000000..eafef17230 --- /dev/null +++ b/src/test/compile-fail/incr_comp_with_macro_export.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Zincremental=tmp/cfail-tests/incr_comp_with_macro_export +// must-compile-successfully + + +// This test case makes sure that we can compile with incremental compilation +// enabled when there are macros exported from this crate. (See #37756) + +#![crate_type="rlib"] + +#[macro_export] +macro_rules! some_macro { + ($e:expr) => ($e + 1) +} diff --git a/src/test/compile-fail/indexing-requires-a-uint.rs b/src/test/compile-fail/indexing-requires-a-uint.rs index 61d54b3f8e..1889d76c03 100644 --- a/src/test/compile-fail/indexing-requires-a-uint.rs +++ b/src/test/compile-fail/indexing-requires-a-uint.rs @@ -13,7 +13,7 @@ fn main() { fn bar(_: T) {} - [0][0u8]; //~ ERROR: `[{integer}]: std::ops::Index` is not satisfied + [0][0u8]; //~ ERROR: the trait bound `u8: std::slice::SliceIndex<{integer}>` is not satisfied [0][0]; // should infer to be a usize diff --git a/src/test/compile-fail/integral-indexing.rs b/src/test/compile-fail/integral-indexing.rs index c8f33c3caf..1815d0e978 100644 --- a/src/test/compile-fail/integral-indexing.rs +++ b/src/test/compile-fail/integral-indexing.rs @@ -19,8 +19,8 @@ pub fn main() { v[3i32]; //~ERROR : std::ops::Index` is not satisfied s.as_bytes()[3_usize]; s.as_bytes()[3]; - s.as_bytes()[3u8]; //~ERROR : std::ops::Index` is not satisfied - s.as_bytes()[3i8]; //~ERROR : std::ops::Index` is not satisfied - s.as_bytes()[3u32]; //~ERROR : std::ops::Index` is not satisfied - s.as_bytes()[3i32]; //~ERROR : std::ops::Index` is not satisfied + s.as_bytes()[3u8]; //~ERROR : std::slice::SliceIndex` is not satisfied + s.as_bytes()[3i8]; //~ERROR : std::slice::SliceIndex` is not satisfied + s.as_bytes()[3u32]; //~ERROR : std::slice::SliceIndex` is not satisfied + s.as_bytes()[3i32]; //~ERROR : std::slice::SliceIndex` is not satisfied } diff --git a/src/test/compile-fail/issue-11714.rs b/src/test/compile-fail/issue-11714.rs index 998576097a..192f78e41c 100644 --- a/src/test/compile-fail/issue-11714.rs +++ b/src/test/compile-fail/issue-11714.rs @@ -8,7 +8,7 @@ // option. 
This file may not be copied, modified, or distributed // except according to those terms. -fn blah() -> i32 { //~ ERROR not all control paths return a value +fn blah() -> i32 { //~ ERROR mismatched types 1 ; //~ HELP consider removing this semicolon: diff --git a/src/test/compile-fail/issue-12612.rs b/src/test/compile-fail/issue-12612.rs deleted file mode 100644 index c6f76ca788..0000000000 --- a/src/test/compile-fail/issue-12612.rs +++ /dev/null @@ -1,22 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// aux-build:issue_12612_1.rs - -extern crate issue_12612_1 as foo; - -use foo::bar; - -mod test { - use bar::foo; //~ ERROR unresolved import `bar::foo` [E0432] - //~^ Maybe a missing `extern crate bar;`? -} - -fn main() {} diff --git a/src/test/compile-fail/issue-13428.rs b/src/test/compile-fail/issue-13428.rs index c771970650..9406199afc 100644 --- a/src/test/compile-fail/issue-13428.rs +++ b/src/test/compile-fail/issue-13428.rs @@ -10,7 +10,7 @@ // Regression test for #13428 -fn foo() -> String { //~ ERROR not all control paths return a value +fn foo() -> String { //~ ERROR mismatched types format!("Hello {}", "world") // Put the trailing semicolon on its own line to test that the @@ -18,7 +18,7 @@ fn foo() -> String { //~ ERROR not all control paths return a value ; //~ HELP consider removing this semicolon } -fn bar() -> String { //~ ERROR not all control paths return a value +fn bar() -> String { //~ ERROR mismatched types "foobar".to_string() ; //~ HELP consider removing this semicolon } diff --git a/src/test/compile-fail/issue-14092.rs b/src/test/compile-fail/issue-14092.rs index df8707ab82..85dd88e614 100644 --- a/src/test/compile-fail/issue-14092.rs +++ b/src/test/compile-fail/issue-14092.rs @@ -9,7 +9,7 @@ // except according to those terms. fn fn1(0: Box) {} - //~^ ERROR E0243 - //~| NOTE expected 1 type argument, found 0 + //~^ ERROR wrong number of type arguments: expected 1, found 0 [E0243] + //~| NOTE expected 1 type argument fn main() {} diff --git a/src/test/compile-fail/issue-15094.rs b/src/test/compile-fail/issue-15094.rs index da48bbb3ec..1dd6763cbe 100644 --- a/src/test/compile-fail/issue-15094.rs +++ b/src/test/compile-fail/issue-15094.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] use std::{fmt, ops}; diff --git a/src/test/compile-fail/issue-16966.rs b/src/test/compile-fail/issue-16966.rs index 5dbf7546de..508442fcb9 100644 --- a/src/test/compile-fail/issue-16966.rs +++ b/src/test/compile-fail/issue-16966.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-// error-pattern:type annotations required +// error-pattern:type annotations or generic parameter binding required fn main() { panic!( std::default::Default::default() diff --git a/src/test/compile-fail/issue-17444.rs b/src/test/compile-fail/issue-17444.rs index c1d5827eb9..dafcff2383 100644 --- a/src/test/compile-fail/issue-17444.rs +++ b/src/test/compile-fail/issue-17444.rs @@ -15,5 +15,4 @@ enum Test { fn main() { let _x = Test::Foo as *const isize; //~^ ERROR casting `Test` as `*const isize` is invalid - //~^^ HELP cast through a usize first } diff --git a/src/test/compile-fail/issue-17545.rs b/src/test/compile-fail/issue-17545.rs index 49435f83ce..45bc5ee07a 100644 --- a/src/test/compile-fail/issue-17545.rs +++ b/src/test/compile-fail/issue-17545.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(fn_traits)] + pub fn foo<'a, F: Fn(&'a ())>(bar: F) { bar.call(( &(), //~ ERROR borrowed value does not live long enough diff --git a/src/test/compile-fail/issue-17800.rs b/src/test/compile-fail/issue-17800.rs index d5f1614c14..f7cae91aa9 100644 --- a/src/test/compile-fail/issue-17800.rs +++ b/src/test/compile-fail/issue-17800.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(relaxed_adts)] - enum MyOption { MySome(T), MyNone, diff --git a/src/test/compile-fail/issue-18389.rs b/src/test/compile-fail/issue-18389.rs index 300fc5a6ef..aad3d52153 100644 --- a/src/test/compile-fail/issue-18389.rs +++ b/src/test/compile-fail/issue-18389.rs @@ -14,7 +14,8 @@ use std::any::TypeId; trait Private { fn call(&self, p: P, r: R); } -pub trait Public: Private< //~ ERROR private trait in public interface +pub trait Public: Private< +//~^ ERROR private trait `Private<::P, ::R>` in public interface ::P, ::R > { diff --git a/src/test/compile-fail/issue-18819.rs b/src/test/compile-fail/issue-18819.rs index 8035d798e3..148eea31ec 100644 --- a/src/test/compile-fail/issue-18819.rs +++ b/src/test/compile-fail/issue-18819.rs @@ -19,12 +19,12 @@ impl Foo for X { } fn print_x(_: &Foo, extra: &str) { + //~^ NOTE defined here println!("{}", extra); } fn main() { print_x(X); - //~^ ERROR this function takes 2 parameters but 1 parameter was supplied - //~| NOTE the following parameter types were expected: - //~| NOTE &Foo, &str + //~^ ERROR E0061 + //~| NOTE expected 2 parameters } diff --git a/src/test/compile-fail/issue-20225.rs b/src/test/compile-fail/issue-20225.rs index f38961c427..da98f21e46 100644 --- a/src/test/compile-fail/issue-20225.rs +++ b/src/test/compile-fail/issue-20225.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] struct Foo; diff --git a/src/test/compile-fail/issue-21554.rs b/src/test/compile-fail/issue-21554.rs index 741707a47b..1b87862a05 100644 --- a/src/test/compile-fail/issue-21554.rs +++ b/src/test/compile-fail/issue-21554.rs @@ -13,5 +13,4 @@ struct Inches(i32); fn main() { Inches as f32; //~^ ERROR casting - //~^^ cast through a usize first } diff --git a/src/test/compile-fail/issue-22034.rs b/src/test/compile-fail/issue-22034.rs index 3e0ab6d892..dfa9520f38 100644 --- a/src/test/compile-fail/issue-22034.rs +++ b/src/test/compile-fail/issue-22034.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
+#![feature(libc)] + extern crate libc; fn main() { diff --git a/src/test/compile-fail/issue-22560.rs b/src/test/compile-fail/issue-22560.rs index 45b110bf56..2ad804fc8c 100644 --- a/src/test/compile-fail/issue-22560.rs +++ b/src/test/compile-fail/issue-22560.rs @@ -20,6 +20,6 @@ type Test = Add + //~| NOTE missing associated type `Output` value Sub; //~^ ERROR E0225 - //~| NOTE non-builtin trait used as bounds + //~| NOTE non-Send/Sync additional trait fn main() { } diff --git a/src/test/compile-fail/issue-22638.rs b/src/test/compile-fail/issue-22638.rs index 0c8c2311dc..65d1d837d7 100644 --- a/src/test/compile-fail/issue-22638.rs +++ b/src/test/compile-fail/issue-22638.rs @@ -10,7 +10,8 @@ #![allow(unused)] -#![recursion_limit = "32"] +#![recursion_limit = "20"] +#![type_length_limit = "20000000"] #[derive(Clone)] struct A (B); diff --git a/src/test/compile-fail/issue-22645.rs b/src/test/compile-fail/issue-22645.rs index 402b9a0449..81f66e3e2c 100644 --- a/src/test/compile-fail/issue-22645.rs +++ b/src/test/compile-fail/issue-22645.rs @@ -17,7 +17,7 @@ struct Bob; impl Add for Bob { type Output = Bob; - fn add(self, rhs : RHS) -> Bob {} + fn add(self, rhs : RHS) -> Bob { Bob } } fn main() { diff --git a/src/test/compile-fail/issue-22684.rs b/src/test/compile-fail/issue-22684.rs index b7ffbefba6..a791758ad1 100644 --- a/src/test/compile-fail/issue-22684.rs +++ b/src/test/compile-fail/issue-22684.rs @@ -15,7 +15,7 @@ mod foo { } pub trait Baz { - fn bar(&self) -> bool {} + fn bar(&self) -> bool { true } } impl Baz for Foo {} } diff --git a/src/test/compile-fail/issue-23024.rs b/src/test/compile-fail/issue-23024.rs index e266f00431..5d9b49f486 100644 --- a/src/test/compile-fail/issue-23024.rs +++ b/src/test/compile-fail/issue-23024.rs @@ -18,6 +18,6 @@ fn main() vfnfer.push(box h); println!("{:?}",(vfnfer[0] as Fn)(3)); //~^ ERROR the precise format of `Fn`-family traits' - //~| ERROR E0243 + //~| ERROR wrong number of type arguments: expected 1, found 0 [E0243] //~| ERROR the value of the associated type `Output` (from the trait `std::ops::FnOnce`) } diff --git a/src/test/compile-fail/issue-23046.rs b/src/test/compile-fail/issue-23046.rs index dba9c32f9b..c274665530 100644 --- a/src/test/compile-fail/issue-23046.rs +++ b/src/test/compile-fail/issue-23046.rs @@ -25,6 +25,6 @@ pub fn let_<'var, VAR, F: for<'v: 'var> Fn(Expr<'v, VAR>) -> Expr<'v, VAR>> fn main() { let ex = |x| { - let_(add(x,x), |y| { //~ ERROR unable to infer enough type information about `_` + let_(add(x,x), |y| { //~ ERROR unable to infer enough type information about `VAR` let_(add(x, x), |x|x)})}; } diff --git a/src/test/compile-fail/issue-2392.rs b/src/test/compile-fail/issue-2392.rs index 790b774bd2..805725dd74 100644 --- a/src/test/compile-fail/issue-2392.rs +++ b/src/test/compile-fail/issue-2392.rs @@ -8,7 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(core)] +#![feature(core, fnbox)] + use std::boxed::FnBox; struct FuncContainer { diff --git a/src/test/compile-fail/issue-27942.rs b/src/test/compile-fail/issue-27942.rs new file mode 100644 index 0000000000..b8552794eb --- /dev/null +++ b/src/test/compile-fail/issue-27942.rs @@ -0,0 +1,31 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +pub trait Resources<'a> {} + +pub trait Buffer<'a, R: Resources<'a>> { + fn select(&self) -> BufferViewHandle; + //~^ ERROR mismatched types + //~| lifetime mismatch + //~| NOTE expected type `Resources<'_>` + //~| NOTE found type `Resources<'a>` + //~| NOTE the lifetime 'a as defined on the method body at 14:4... + //~| NOTE ...does not necessarily outlive the anonymous lifetime #1 defined on the method body + //~| ERROR mismatched types + //~| lifetime mismatch + //~| NOTE expected type `Resources<'_>` + //~| NOTE found type `Resources<'a>` + //~| NOTE the anonymous lifetime #1 defined on the method body at 14:4... + //~| NOTE ...does not necessarily outlive the lifetime 'a as defined on the method body +} + +pub struct BufferViewHandle<'a, R: 'a+Resources<'a>>(&'a R); + +fn main() {} diff --git a/src/test/compile-fail/issue-28075.rs b/src/test/compile-fail/issue-28075.rs index d75f5f606a..3e3d898e36 100644 --- a/src/test/compile-fail/issue-28075.rs +++ b/src/test/compile-fail/issue-28075.rs @@ -17,7 +17,6 @@ extern crate lint_stability; use lint_stability::{unstable, deprecated}; //~ ERROR use of unstable library feature 'test_feature' -//~^ WARNING use of deprecated item use lint_stability::unstable::{self as u}; //~ ERROR use of unstable library feature 'test_feature' diff --git a/src/test/compile-fail/issue-28388-1.rs b/src/test/compile-fail/issue-28388-1.rs index ef97b400b0..ed7851ec0f 100644 --- a/src/test/compile-fail/issue-28388-1.rs +++ b/src/test/compile-fail/issue-28388-1.rs @@ -10,6 +10,8 @@ // Prefix in imports with empty braces should be resolved and checked privacy, stability, etc. -use foo::{}; //~ ERROR failed to resolve. foo +use foo::{}; +//~^ ERROR failed to resolve. Maybe a missing `extern crate foo;`? +//~| NOTE foo fn main() {} diff --git a/src/test/compile-fail/issue-28388-2.rs b/src/test/compile-fail/issue-28388-2.rs index 837dc67c80..4ed5bfab06 100644 --- a/src/test/compile-fail/issue-28388-2.rs +++ b/src/test/compile-fail/issue-28388-2.rs @@ -14,6 +14,7 @@ mod m { mod n {} } -use m::n::{}; //~ ERROR module `n` is private +use m::n::{}; +//~^ ERROR module `n` is private fn main() {} diff --git a/src/test/compile-fail/issue-28388-3.rs b/src/test/compile-fail/issue-28388-3.rs index 0cb669f5f8..4baaa16e77 100644 --- a/src/test/compile-fail/issue-28388-3.rs +++ b/src/test/compile-fail/issue-28388-3.rs @@ -14,7 +14,8 @@ extern crate lint_stability; -use lint_stability::UnstableStruct::{}; //~ ERROR use of unstable library feature 'test_feature' +use lint_stability::UnstableStruct::{}; +//~^ ERROR use of unstable library feature 'test_feature' use lint_stability::StableStruct::{}; // OK fn main() {} diff --git a/src/test/compile-fail/issue-28514.rs b/src/test/compile-fail/issue-28514.rs index fb25166531..3488310b12 100644 --- a/src/test/compile-fail/issue-28514.rs +++ b/src/test/compile-fail/issue-28514.rs @@ -21,7 +21,7 @@ mod inner { fn b(&self) { } } - pub trait C: A + B { //~ ERROR private trait in public interface + pub trait C: A + B { //~ ERROR private trait `inner::A` in public interface //~^ WARN will become a hard error fn c(&self) { } } diff --git a/src/test/compile-fail/issue-28992-empty.rs b/src/test/compile-fail/issue-28992-empty.rs index d47fdda020..48aabce708 100644 --- a/src/test/compile-fail/issue-28992-empty.rs +++ b/src/test/compile-fail/issue-28992-empty.rs @@ -23,5 +23,5 @@ impl S { fn main() { if let C1(..) 
= 0 {} //~ ERROR expected tuple struct/variant, found constant `C1` if let S::C2(..) = 0 {} - //~^ ERROR expected tuple struct/variant, found associated constant `S::C2` + //~^ ERROR expected tuple struct/variant, found associated constant `::C2` } diff --git a/src/test/compile-fail/issue-29161.rs b/src/test/compile-fail/issue-29161.rs index bc09f61a75..97ba222fe4 100644 --- a/src/test/compile-fail/issue-29161.rs +++ b/src/test/compile-fail/issue-29161.rs @@ -13,7 +13,7 @@ mod a { impl Default for A { pub fn default() -> A { //~ ERROR unnecessary visibility qualifier - A; + A } } } diff --git a/src/test/compile-fail/issue-30079.rs b/src/test/compile-fail/issue-30079.rs index 6a54e53f14..15b7edb32d 100644 --- a/src/test/compile-fail/issue-30079.rs +++ b/src/test/compile-fail/issue-30079.rs @@ -16,7 +16,7 @@ struct SemiPriv; mod m1 { struct Priv; impl ::SemiPriv { - pub fn f(_: Priv) {} //~ ERROR private type in public interface + pub fn f(_: Priv) {} //~ ERROR private type `m1::Priv` in public interface //~^ WARNING hard error } @@ -28,7 +28,7 @@ mod m1 { mod m2 { struct Priv; impl ::std::ops::Deref for ::SemiPriv { - type Target = Priv; //~ ERROR private type in public interface + type Target = Priv; //~ ERROR private type `m2::Priv` in public interface //~^ WARNING hard error fn deref(&self) -> &Self::Target { unimplemented!() } } @@ -46,7 +46,7 @@ trait SemiPrivTrait { mod m3 { struct Priv; impl ::SemiPrivTrait for () { - type Assoc = Priv; //~ ERROR private type in public interface + type Assoc = Priv; //~ ERROR private type `m3::Priv` in public interface //~^ WARNING hard error } } diff --git a/src/test/compile-fail/issue-3044.rs b/src/test/compile-fail/issue-3044.rs index c7b276da57..4c63f7761a 100644 --- a/src/test/compile-fail/issue-3044.rs +++ b/src/test/compile-fail/issue-3044.rs @@ -14,7 +14,5 @@ fn main() { needlesArr.iter().fold(|x, y| { }); //~^^ ERROR this function takes 2 parameters but 1 parameter was supplied - //~| NOTE the following parameter types were expected - //~| NOTE _, _ - // the first error is, um, non-ideal. + //~| NOTE expected 2 parameters } diff --git a/src/test/compile-fail/issue-32119.rs b/src/test/compile-fail/issue-32119.rs index 4743b779ef..e630a01a59 100644 --- a/src/test/compile-fail/issue-32119.rs +++ b/src/test/compile-fail/issue-32119.rs @@ -9,6 +9,7 @@ // except according to those terms. #![feature(rustc_attrs)] +#![allow(dead_code)] pub type T = (); mod foo { pub use super::T; } diff --git a/src/test/compile-fail/issue-3214.rs b/src/test/compile-fail/issue-3214.rs index d3b932fbc5..010cfb54c1 100644 --- a/src/test/compile-fail/issue-3214.rs +++ b/src/test/compile-fail/issue-3214.rs @@ -15,7 +15,6 @@ fn foo() { impl Drop for foo { //~^ ERROR wrong number of type arguments - //~^^ ERROR the type parameter `T` is not constrained fn drop(&mut self) {} } } diff --git a/src/test/compile-fail/issue-32323.rs b/src/test/compile-fail/issue-32323.rs index e3461e52e1..e5cb813032 100644 --- a/src/test/compile-fail/issue-32323.rs +++ b/src/test/compile-fail/issue-32323.rs @@ -13,6 +13,6 @@ pub trait Tr<'a> { } pub fn f<'a, T: Tr<'a>>() -> >::Out {} -//~^ ERROR not all control paths return a value +//~^ ERROR mismatched types pub fn main() {} diff --git a/src/test/compile-fail/issue-32797.rs b/src/test/compile-fail/issue-32797.rs index af75783a71..2c54ed3e85 100644 --- a/src/test/compile-fail/issue-32797.rs +++ b/src/test/compile-fail/issue-32797.rs @@ -8,14 +8,17 @@ // option. 
This file may not be copied, modified, or distributed // except according to those terms. +#![feature(rustc_attrs)] + pub use bar::*; mod bar { pub use super::*; } -pub use baz::*; //~ ERROR already been imported +pub use baz::*; mod baz { pub use main as f; } -pub fn main() {} +#[rustc_error] +pub fn main() {} //~ ERROR compilation successful diff --git a/src/test/compile-fail/issue-32833.rs b/src/test/compile-fail/issue-32833.rs index d610e8b483..41383e9360 100644 --- a/src/test/compile-fail/issue-32833.rs +++ b/src/test/compile-fail/issue-32833.rs @@ -11,8 +11,7 @@ use bar::Foo; //~ ERROR unresolved import `bar::Foo` [E0432] //~^ no `Foo` in `bar` mod bar { - use Foo; //~ ERROR unresolved import `Foo` [E0432] - //~^ no `Foo` in the root + use Foo; } fn main() {} diff --git a/src/test/compile-fail/issue-32963.rs b/src/test/compile-fail/issue-32963.rs index 8ba95d1493..f146cfbe68 100644 --- a/src/test/compile-fail/issue-32963.rs +++ b/src/test/compile-fail/issue-32963.rs @@ -16,6 +16,6 @@ fn size_of_copy() -> usize { mem::size_of::() } fn main() { size_of_copy::(); - //~^ ERROR `Misc + Copy: std::marker::Copy` is not satisfied - //~| ERROR the trait `std::marker::Copy` cannot be made into an object + //~^ ERROR only Send/Sync traits can be used as additional traits in a trait object + //~| ERROR the trait bound `Misc: std::marker::Copy` is not satisfied } diff --git a/src/test/compile-fail/issue-3521.rs b/src/test/compile-fail/issue-3521.rs index 1b6e4b1d28..e2acdcee3d 100644 --- a/src/test/compile-fail/issue-3521.rs +++ b/src/test/compile-fail/issue-3521.rs @@ -16,7 +16,7 @@ fn main() { Bar = foo //~^ ERROR attempt to use a non-constant value in a constant //~^^ ERROR constant evaluation error - //~| non-constant path in constant expression + //~| unresolved path in constant expression } println!("{}", Stuff::Bar); diff --git a/src/test/compile-fail/import-shadow-3.rs b/src/test/compile-fail/issue-36163.rs similarity index 52% rename from src/test/compile-fail/import-shadow-3.rs rename to src/test/compile-fail/issue-36163.rs index bf90973c28..9dad6ede77 100644 --- a/src/test/compile-fail/import-shadow-3.rs +++ b/src/test/compile-fail/issue-36163.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2012 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -8,23 +8,19 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// Test that import shadowing using globs causes errors +const A: i32 = Foo::B; //~ ERROR E0265 + //~^ NOTE recursion not allowed in constant -#![no_implicit_prelude] - -use foo::Baz; -use bar::*; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; +enum Foo { + B = A, //~ ERROR E0265 + //~^ NOTE recursion not allowed in constant } -mod bar { - pub type Baz = isize; +enum Bar { + C = Bar::C, //~ ERROR E0265 + //~^ NOTE recursion not allowed in constant } -mod qux { - pub use bar::Baz; -} +const D: i32 = A; fn main() {} diff --git a/src/test/compile-fail/issue-36881.rs b/src/test/compile-fail/issue-36881.rs index cca20e968e..d75ac0c7f2 100644 --- a/src/test/compile-fail/issue-36881.rs +++ b/src/test/compile-fail/issue-36881.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
+#![feature(rand)] + fn main() { extern crate rand; use rand::Rng; //~ ERROR unresolved import diff --git a/src/test/compile-fail/issue-37131.rs b/src/test/compile-fail/issue-37131.rs new file mode 100644 index 0000000000..88c6eb7f51 --- /dev/null +++ b/src/test/compile-fail/issue-37131.rs @@ -0,0 +1,18 @@ +// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Tests that compiling for a target which is not installed will result in a helpful +// error message. + +// compile-flags: --target=s390x-unknown-linux-gnu +// ignore s390x + +// error-pattern:target may not be installed +fn main() { } diff --git a/src/test/compile-fail/issue-37884.rs b/src/test/compile-fail/issue-37884.rs new file mode 100644 index 0000000000..a73b1dbe34 --- /dev/null +++ b/src/test/compile-fail/issue-37884.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +struct RepeatMut<'a, T>(T, &'a ()); + +impl<'a, T: 'a> Iterator for RepeatMut<'a, T> { + type Item = &'a mut T; + fn next(&'a mut self) -> Option + //~^ ERROR method not compatible with trait + //~| lifetime mismatch + //~| NOTE expected type `fn(&mut RepeatMut<'a, T>) -> std::option::Option<&mut T>` + //~| NOTE found type `fn(&'a mut RepeatMut<'a, T>) -> std::option::Option<&mut T>` + { + //~^ NOTE the anonymous lifetime #1 defined on the body + //~| NOTE ...does not necessarily outlive the lifetime 'a as defined on the body + Some(&mut self.0) + } +} + +fn main() {} diff --git a/src/test/compile-fail/feature-gate-relaxed-adts-2.rs b/src/test/compile-fail/issue-38412.rs similarity index 57% rename from src/test/compile-fail/feature-gate-relaxed-adts-2.rs rename to src/test/compile-fail/issue-38412.rs index a75f2647f4..00305eb2bc 100644 --- a/src/test/compile-fail/feature-gate-relaxed-adts-2.rs +++ b/src/test/compile-fail/issue-38412.rs @@ -8,20 +8,13 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -struct Z(u8, u8); - -enum E { - U(u8, u8), -} - fn main() { - match Z(0, 1) { - Z{..} => {} //~ ERROR tuple structs and variants in struct patterns are unstable - } - match E::U(0, 1) { - E::U{..} => {} //~ ERROR tuple structs and variants in struct patterns are unstable - } + let Box(a) = loop { }; + //~^ ERROR field `0` of struct `std::boxed::Box` is private - let z1 = Z(0, 1); - let z2 = Z { ..z1 }; //~ ERROR tuple structs and variants in struct patterns are unstable + // (The below is a trick to allow compiler to infer a type for + // variable `a` without attempting to ascribe a type to the + // pattern or otherwise attempting to name the Box type, which + // would run afoul of issue #22207) + let _b: *mut i32 = *a; } diff --git a/src/test/compile-fail/issue-38919.rs b/src/test/compile-fail/issue-38919.rs new file mode 100644 index 0000000000..e6cee4afd5 --- /dev/null +++ b/src/test/compile-fail/issue-38919.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn foo() { + T::Item; //~ ERROR no associated item named `Item` found for type `T` in the current scope +} + +fn main() { } diff --git a/src/test/compile-fail/issue-3907.rs b/src/test/compile-fail/issue-3907.rs index 93556577ad..86906ed9af 100644 --- a/src/test/compile-fail/issue-3907.rs +++ b/src/test/compile-fail/issue-3907.rs @@ -18,7 +18,7 @@ struct S { } impl Foo for S { //~ ERROR: `Foo` is not a trait - //~| NOTE: not a trait + //~| NOTE: expected trait, found type alias //~| NOTE: type aliases cannot be used for traits fn bar() { } } diff --git a/src/test/compile-fail/issue-4335.rs b/src/test/compile-fail/issue-4335.rs index 09371fbafc..51f5fc5ee9 100644 --- a/src/test/compile-fail/issue-4335.rs +++ b/src/test/compile-fail/issue-4335.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(fn_traits)] + fn id(t: T) -> T { t } fn f<'r, T>(v: &'r T) -> Box T + 'r> { diff --git a/src/test/compile-fail/issue-4736.rs b/src/test/compile-fail/issue-4736.rs index c93e75042d..19803079d0 100644 --- a/src/test/compile-fail/issue-4736.rs +++ b/src/test/compile-fail/issue-4736.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(relaxed_adts)] - struct NonCopyable(()); fn main() { diff --git a/src/test/compile-fail/issue-4935.rs b/src/test/compile-fail/issue-4935.rs index 08707a187d..e9f8367378 100644 --- a/src/test/compile-fail/issue-4935.rs +++ b/src/test/compile-fail/issue-4935.rs @@ -11,6 +11,7 @@ // Regression test for issue #4935 fn foo(a: usize) {} +//~^ defined here fn main() { foo(5, 6) } //~^ ERROR this function takes 1 parameter but 2 parameters were supplied -//~| NOTE the following parameter type was expected +//~| NOTE expected 1 parameter diff --git a/src/test/compile-fail/issue-5035.rs b/src/test/compile-fail/issue-5035.rs index 7a36012925..8ebcba4713 100644 --- a/src/test/compile-fail/issue-5035.rs +++ b/src/test/compile-fail/issue-5035.rs @@ -11,7 +11,7 @@ trait I {} type K = I; impl K for isize {} //~ ERROR: `K` is not a trait - //~| NOTE: not a trait + //~| NOTE: expected trait, found type alias //~| NOTE: aliases cannot be used for traits use ImportError; //~ ERROR unresolved import `ImportError` [E0432] diff --git a/src/test/compile-fail/issue-5062.rs b/src/test/compile-fail/issue-5062.rs index f5aa4fadbe..cf78d6d8c0 100644 --- a/src/test/compile-fail/issue-5062.rs +++ b/src/test/compile-fail/issue-5062.rs @@ -9,4 +9,4 @@ // except according to those terms. 
fn main() { format!("{:?}", None); } - //~^ ERROR unable to infer enough type information about `_` [E0282] + //~^ ERROR unable to infer enough type information about `T` [E0282] diff --git a/src/test/compile-fail/issue-5239-1.rs b/src/test/compile-fail/issue-5239-1.rs index 06e3c9a207..a77b27150d 100644 --- a/src/test/compile-fail/issue-5239-1.rs +++ b/src/test/compile-fail/issue-5239-1.rs @@ -11,7 +11,7 @@ // Regression test for issue #5239 fn main() { - let x = |ref x: isize| -> isize { x += 1; }; + let x = |ref x: isize| { x += 1; }; //~^ ERROR E0368 //~| NOTE cannot use `+=` on type `&isize` } diff --git a/src/test/compile-fail/issue-6458-2.rs b/src/test/compile-fail/issue-6458-2.rs index 71f2805457..3816896d43 100644 --- a/src/test/compile-fail/issue-6458-2.rs +++ b/src/test/compile-fail/issue-6458-2.rs @@ -11,5 +11,5 @@ fn main() { // Unconstrained type: format!("{:?}", None); - //~^ ERROR unable to infer enough type information about `_` [E0282] + //~^ ERROR unable to infer enough type information about `T` [E0282] } diff --git a/src/test/compile-fail/issue-6458-3.rs b/src/test/compile-fail/issue-6458-3.rs index e397805565..8029522f5d 100644 --- a/src/test/compile-fail/issue-6458-3.rs +++ b/src/test/compile-fail/issue-6458-3.rs @@ -12,7 +12,7 @@ use std::mem; fn main() { mem::transmute(0); - //~^ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + //~^ ERROR unable to infer enough type information about `U` [E0282] + //~| NOTE cannot infer type for `U` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/issue-6458-4.rs b/src/test/compile-fail/issue-6458-4.rs index c3f3a718ad..a078cdea4a 100644 --- a/src/test/compile-fail/issue-6458-4.rs +++ b/src/test/compile-fail/issue-6458-4.rs @@ -8,11 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-fn foo(b: bool) -> Result { - Err("bar".to_string()); - //~^ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` - //~| NOTE type annotations or generic parameter binding +fn foo(b: bool) -> Result { //~ ERROR mismatched types + Err("bar".to_string()); //~ HELP consider removing this semicolon } fn main() { diff --git a/src/test/compile-fail/issue-6458.rs b/src/test/compile-fail/issue-6458.rs index a64522a0e5..f8354ddbf1 100644 --- a/src/test/compile-fail/issue-6458.rs +++ b/src/test/compile-fail/issue-6458.rs @@ -17,8 +17,8 @@ pub fn foo(_: TypeWithState) {} pub fn bar() { foo(TypeWithState(marker::PhantomData)); - //~^ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + //~^ ERROR unable to infer enough type information about `State` [E0282] + //~| NOTE cannot infer type for `State` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/issue-7813.rs b/src/test/compile-fail/issue-7813.rs index e3cb1d0c7d..e37a881642 100644 --- a/src/test/compile-fail/issue-7813.rs +++ b/src/test/compile-fail/issue-7813.rs @@ -10,7 +10,7 @@ fn main() { let v = &[]; - let it = v.iter(); //~ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + let it = v.iter(); //~ ERROR unable to infer enough type information about `T` [E0282] + //~| NOTE cannot infer type for `T` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/lifetime-inference-give-expl-lifetime-param.rs b/src/test/compile-fail/lifetime-inference-give-expl-lifetime-param.rs index 6da87fca3f..4323929e2e 100644 --- a/src/test/compile-fail/lifetime-inference-give-expl-lifetime-param.rs +++ b/src/test/compile-fail/lifetime-inference-give-expl-lifetime-param.rs @@ -49,8 +49,6 @@ struct Baz<'x> { impl<'a> Baz<'a> { fn baz2<'b>(&self, x: &isize) -> (&'b isize, &'b isize) { - //~^ HELP consider using an explicit lifetime parameter as shown: fn baz2<'b>(&self, x: &' - // FIXME #35038: The above suggestion is different on Linux and Mac. (self.bar, x) //~ ERROR E0312 //~^ ERROR E0312 } diff --git a/src/test/compile-fail/link-cfg-gated.rs b/src/test/compile-fail/link-cfg-gated.rs new file mode 100644 index 0000000000..27918a27ca --- /dev/null +++ b/src/test/compile-fail/link-cfg-gated.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#[link(name = "foo", cfg(foo))] +//~^ ERROR: is feature gated +extern {} + +fn main() {} diff --git a/src/test/compile-fail/lint-dead-code-type-alias.rs b/src/test/compile-fail/lint-dead-code-type-alias.rs new file mode 100644 index 0000000000..aaa01aa6bb --- /dev/null +++ b/src/test/compile-fail/lint-dead-code-type-alias.rs @@ -0,0 +1,20 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![deny(dead_code)] + +type Used = u8; +type Unused = u8; //~ ERROR type alias is never used + +fn id(x: Used) -> Used { x } + +fn main() { + id(0); +} diff --git a/src/test/compile-fail/lint-output-format-2.rs b/src/test/compile-fail/lint-output-format-2.rs new file mode 100644 index 0000000000..2f74325d19 --- /dev/null +++ b/src/test/compile-fail/lint-output-format-2.rs @@ -0,0 +1,25 @@ +// Copyright 2013 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -F unused_features +// aux-build:lint_output_format.rs + +#![feature(foo)] //~ ERROR unused or unknown feature + +#![feature(test_feature)] + +extern crate lint_output_format; +use lint_output_format::{foo, bar}; +//~^ WARNING use of deprecated item: text, + +fn main() { + let _x = foo(); //~ WARNING #[warn(deprecated)] on by default + let _y = bar(); +} diff --git a/src/test/compile-fail/lint-output-format.rs b/src/test/compile-fail/lint-output-format.rs index c22ad3182d..81e0b708b8 100644 --- a/src/test/compile-fail/lint-output-format.rs +++ b/src/test/compile-fail/lint-output-format.rs @@ -11,13 +11,12 @@ // compile-flags: -F unused_features // aux-build:lint_output_format.rs -#![feature(foo)] //~ ERROR unused or unknown feature +#![allow(deprecated)] extern crate lint_output_format; //~ ERROR use of unstable library feature use lint_output_format::{foo, bar}; //~ ERROR use of unstable library feature -//~^ WARNING use of deprecated item: text, fn main() { - let _x = foo(); //~ WARNING #[warn(deprecated)] on by default + let _x = foo(); let _y = bar(); //~ ERROR use of unstable library feature } diff --git a/src/test/compile-fail/lint-qualification.rs b/src/test/compile-fail/lint-qualification.rs index af9b21dadd..57c2166565 100644 --- a/src/test/compile-fail/lint-qualification.rs +++ b/src/test/compile-fail/lint-qualification.rs @@ -21,8 +21,9 @@ fn main() { let _ = || -> Result<(), ()> { try!(Ok(())); Ok(()) }; // issue #37345 - macro_rules! m { - () => { $crate::foo::bar(); } - } - m!(); // issue #37357 + macro_rules! m { () => { + $crate::foo::bar(); // issue #37357 + ::foo::bar(); // issue #38682 + } } + m!(); } diff --git a/src/test/compile-fail/lint-stability-2.rs b/src/test/compile-fail/lint-stability-2.rs new file mode 100644 index 0000000000..77917ff3cd --- /dev/null +++ b/src/test/compile-fail/lint-stability-2.rs @@ -0,0 +1,423 @@ +// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// aux-build:lint_stability.rs +// aux-build:stability_cfg1.rs + +#![allow(deprecated)] +#![allow(dead_code)] +#![feature(staged_api)] + +#![stable(feature = "rust1", since = "1.0.0")] + +#[macro_use] +extern crate lint_stability; + +mod cross_crate { + extern crate stability_cfg1; + + use lint_stability::*; + + fn test() { + type Foo = MethodTester; + let foo = MethodTester; + + deprecated(); + foo.method_deprecated(); + Foo::method_deprecated(&foo); + ::method_deprecated(&foo); + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + + deprecated_text(); + foo.method_deprecated_text(); + Foo::method_deprecated_text(&foo); + ::method_deprecated_text(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + + foo.method_deprecated_unstable(); + //~^ ERROR use of unstable library feature + Foo::method_deprecated_unstable(&foo); + //~^ ERROR use of unstable library feature + ::method_deprecated_unstable(&foo); + //~^ ERROR use of unstable library feature + foo.trait_deprecated_unstable(); + //~^ ERROR use of unstable library feature + ::trait_deprecated_unstable(&foo); + //~^ ERROR use of unstable library feature + + foo.method_deprecated_unstable_text(); + //~^ ERROR use of unstable library feature + Foo::method_deprecated_unstable_text(&foo); + //~^ ERROR use of unstable library feature + ::method_deprecated_unstable_text(&foo); + //~^ ERROR use of unstable library feature + foo.trait_deprecated_unstable_text(); + //~^ ERROR use of unstable library feature + ::trait_deprecated_unstable_text(&foo); + //~^ ERROR use of unstable library feature + + foo.method_unstable(); //~ ERROR use of unstable library feature + Foo::method_unstable(&foo); //~ ERROR use of unstable library feature + ::method_unstable(&foo); //~ ERROR use of unstable library feature + foo.trait_unstable(); //~ ERROR use of unstable library feature + ::trait_unstable(&foo); //~ ERROR use of unstable library feature + + foo.method_unstable_text(); + //~^ ERROR use of unstable library feature 'test_feature': text + Foo::method_unstable_text(&foo); + //~^ ERROR use of unstable library feature 'test_feature': text + ::method_unstable_text(&foo); + //~^ ERROR use of unstable library feature 'test_feature': text + foo.trait_unstable_text(); + //~^ ERROR use of unstable library feature 'test_feature': text + ::trait_unstable_text(&foo); + //~^ ERROR use of unstable library feature 'test_feature': text + + stable(); + foo.method_stable(); + Foo::method_stable(&foo); + ::method_stable(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + + stable_text(); + foo.method_stable_text(); + Foo::method_stable_text(&foo); + ::method_stable_text(&foo); + foo.trait_stable_text(); + Trait::trait_stable_text(&foo); + ::trait_stable_text(&foo); + ::trait_stable_text(&foo); + + struct S2(T::TypeDeprecated); + + let _ = DeprecatedStruct { + i: 0 + }; + let _ = StableStruct { i: 0 }; + + let _ = DeprecatedUnitStruct; + let _ = StableUnitStruct; + + let _ = Enum::DeprecatedVariant; + let _ = Enum::StableVariant; + + let _ = DeprecatedTupleStruct (1); + let _ = StableTupleStruct (1); + + // At the moment, the lint checker only checks stability in + // in the arguments of macros. + // Eventually, we will want to lint the contents of the + // macro in the module *defining* it. Also, stability levels + // on macros themselves are not yet linted. 
+ macro_test_arg!(deprecated_text()); + macro_test_arg!(macro_test_arg!(deprecated_text())); + } + + fn test_method_param(foo: Foo) { + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + foo.trait_deprecated_unstable(); + //~^ ERROR use of unstable library feature + ::trait_deprecated_unstable(&foo); + //~^ ERROR use of unstable library feature + foo.trait_deprecated_unstable_text(); + //~^ ERROR use of unstable library feature + ::trait_deprecated_unstable_text(&foo); + //~^ ERROR use of unstable library feature + foo.trait_unstable(); //~ ERROR use of unstable library feature + ::trait_unstable(&foo); //~ ERROR use of unstable library feature + foo.trait_unstable_text(); + //~^ ERROR use of unstable library feature 'test_feature': text + ::trait_unstable_text(&foo); + //~^ ERROR use of unstable library feature 'test_feature': text + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + } + + fn test_method_object(foo: &Trait) { + foo.trait_deprecated(); + foo.trait_deprecated_text(); + foo.trait_deprecated_unstable(); + //~^ ERROR use of unstable library feature + foo.trait_deprecated_unstable_text(); + //~^ ERROR use of unstable library feature + foo.trait_unstable(); //~ ERROR use of unstable library feature + foo.trait_unstable_text(); + //~^ ERROR use of unstable library feature 'test_feature': text + foo.trait_stable(); + } + + struct S; + + impl DeprecatedTrait for S {} + trait LocalTrait2 : DeprecatedTrait { } +} + +mod this_crate { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn deprecated() {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn deprecated_text() {} + + #[unstable(feature = "test_feature", issue = "0")] + pub fn unstable() {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + pub fn unstable_text() {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub fn stable() {} + #[stable(feature = "rust1", since = "1.0.0")] + pub fn stable_text() {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub struct MethodTester; + + impl MethodTester { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn method_deprecated(&self) {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn method_deprecated_text(&self) {} + + #[unstable(feature = "test_feature", issue = "0")] + pub fn method_unstable(&self) {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + pub fn method_unstable_text(&self) {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub fn method_stable(&self) {} + #[stable(feature = "rust1", since = "1.0.0")] + pub fn method_stable_text(&self) {} + } + + pub trait Trait { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn trait_deprecated(&self) {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn trait_deprecated_text(&self) {} + + #[unstable(feature = "test_feature", issue = "0")] + fn trait_unstable(&self) {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + fn 
trait_unstable_text(&self) {} + + #[stable(feature = "rust1", since = "1.0.0")] + fn trait_stable(&self) {} + #[stable(feature = "rust1", since = "1.0.0")] + fn trait_stable_text(&self) {} + } + + impl Trait for MethodTester {} + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + #[stable(feature = "rust1", since = "1.0.0")] + pub struct StableStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedUnitStruct; + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableUnitStruct; + #[stable(feature = "rust1", since = "1.0.0")] + pub struct StableUnitStruct; + + pub enum Enum { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + DeprecatedVariant, + #[unstable(feature = "test_feature", issue = "0")] + UnstableVariant, + + #[stable(feature = "rust1", since = "1.0.0")] + StableVariant, + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedTupleStruct(isize); + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableTupleStruct(isize); + #[stable(feature = "rust1", since = "1.0.0")] + pub struct StableTupleStruct(isize); + + fn test() { + // Only the deprecated cases of the following should generate + // errors, because other stability attributes now have meaning + // only *across* crates, not within a single crate. 
+ + type Foo = MethodTester; + let foo = MethodTester; + + deprecated(); + foo.method_deprecated(); + Foo::method_deprecated(&foo); + ::method_deprecated(&foo); + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + + deprecated_text(); + foo.method_deprecated_text(); + Foo::method_deprecated_text(&foo); + ::method_deprecated_text(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + + unstable(); + foo.method_unstable(); + Foo::method_unstable(&foo); + ::method_unstable(&foo); + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + + unstable_text(); + foo.method_unstable_text(); + Foo::method_unstable_text(&foo); + ::method_unstable_text(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + + stable(); + foo.method_stable(); + Foo::method_stable(&foo); + ::method_stable(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + + stable_text(); + foo.method_stable_text(); + Foo::method_stable_text(&foo); + ::method_stable_text(&foo); + foo.trait_stable_text(); + Trait::trait_stable_text(&foo); + ::trait_stable_text(&foo); + ::trait_stable_text(&foo); + + let _ = DeprecatedStruct { + i: 0 + }; + let _ = UnstableStruct { i: 0 }; + let _ = StableStruct { i: 0 }; + + let _ = DeprecatedUnitStruct; + let _ = UnstableUnitStruct; + let _ = StableUnitStruct; + + let _ = Enum::DeprecatedVariant; + let _ = Enum::UnstableVariant; + let _ = Enum::StableVariant; + + let _ = DeprecatedTupleStruct (1); + let _ = UnstableTupleStruct (1); + let _ = StableTupleStruct (1); + } + + fn test_method_param(foo: Foo) { + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + } + + fn test_method_object(foo: &Trait) { + foo.trait_deprecated(); + foo.trait_deprecated_text(); + foo.trait_unstable(); + foo.trait_unstable_text(); + foo.trait_stable(); + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn test_fn_body() { + fn fn_in_body() {} + fn_in_body(); + } + + impl MethodTester { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn test_method_body(&self) { + fn fn_in_body() {} + fn_in_body(); + } + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub trait DeprecatedTrait { + fn dummy(&self) { } + } + + struct S; + + impl DeprecatedTrait for S { } + + trait LocalTrait : DeprecatedTrait { } +} + +fn main() {} diff --git a/src/test/compile-fail/lint-stability-deprecated.rs b/src/test/compile-fail/lint-stability-deprecated.rs new file mode 100644 index 0000000000..d8813b6a61 --- /dev/null +++ b/src/test/compile-fail/lint-stability-deprecated.rs @@ -0,0 +1,467 @@ +// Copyright 
2013-2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:lint_stability.rs +// aux-build:inherited_stability.rs +// aux-build:stability_cfg1.rs +// aux-build:stability_cfg2.rs + +#![deny(deprecated)] +#![allow(dead_code)] +#![feature(staged_api, test_feature)] + +#![stable(feature = "rust1", since = "1.0.0")] + +#[macro_use] +extern crate lint_stability; + +mod cross_crate { + extern crate stability_cfg1; + extern crate stability_cfg2; + + use lint_stability::*; + + fn test() { + type Foo = MethodTester; + let foo = MethodTester; + + deprecated(); //~ ERROR use of deprecated item + foo.method_deprecated(); //~ ERROR use of deprecated item + Foo::method_deprecated(&foo); //~ ERROR use of deprecated item + ::method_deprecated(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated(); //~ ERROR use of deprecated item + Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + + deprecated_text(); //~ ERROR use of deprecated item: text + foo.method_deprecated_text(); //~ ERROR use of deprecated item: text + Foo::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + + deprecated_unstable(); //~ ERROR use of deprecated item + foo.method_deprecated_unstable(); //~ ERROR use of deprecated item + Foo::method_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::method_deprecated_unstable(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item + Trait::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + + deprecated_unstable_text(); //~ ERROR use of deprecated item: text + foo.method_deprecated_unstable_text(); //~ ERROR use of deprecated item: text + Foo::method_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::method_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + + unstable(); + foo.method_unstable(); + Foo::method_unstable(&foo); + ::method_unstable(&foo); + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + + unstable_text(); + foo.method_unstable_text(); + Foo::method_unstable_text(&foo); + ::method_unstable_text(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + 
+ stable(); + foo.method_stable(); + Foo::method_stable(&foo); + ::method_stable(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + + stable_text(); + foo.method_stable_text(); + Foo::method_stable_text(&foo); + ::method_stable_text(&foo); + foo.trait_stable_text(); + Trait::trait_stable_text(&foo); + ::trait_stable_text(&foo); + ::trait_stable_text(&foo); + + struct S1(T::TypeUnstable); + struct S2(T::TypeDeprecated); + //~^ ERROR use of deprecated item + + let _ = DeprecatedStruct { //~ ERROR use of deprecated item + i: 0 //~ ERROR use of deprecated item + }; + let _ = DeprecatedUnstableStruct { + //~^ ERROR use of deprecated item + i: 0 //~ ERROR use of deprecated item + }; + let _ = UnstableStruct { i: 0 }; + let _ = StableStruct { i: 0 }; + + let _ = DeprecatedUnitStruct; //~ ERROR use of deprecated item + let _ = DeprecatedUnstableUnitStruct; //~ ERROR use of deprecated item + let _ = UnstableUnitStruct; + let _ = StableUnitStruct; + + let _ = Enum::DeprecatedVariant; //~ ERROR use of deprecated item + let _ = Enum::DeprecatedUnstableVariant; //~ ERROR use of deprecated item + let _ = Enum::UnstableVariant; + let _ = Enum::StableVariant; + + let _ = DeprecatedTupleStruct (1); //~ ERROR use of deprecated item + let _ = DeprecatedUnstableTupleStruct (1); //~ ERROR use of deprecated item + let _ = UnstableTupleStruct (1); + let _ = StableTupleStruct (1); + + // At the moment, the lint checker only checks stability in + // in the arguments of macros. + // Eventually, we will want to lint the contents of the + // macro in the module *defining* it. Also, stability levels + // on macros themselves are not yet linted. + macro_test_arg!(deprecated_text()); //~ ERROR use of deprecated item: text + macro_test_arg!(deprecated_unstable_text()); //~ ERROR use of deprecated item: text + macro_test_arg!(macro_test_arg!(deprecated_text())); //~ ERROR use of deprecated item: text + } + + fn test_method_param(foo: Foo) { + foo.trait_deprecated(); //~ ERROR use of deprecated item + Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item + Trait::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + } + + fn test_method_object(foo: &Trait) { + foo.trait_deprecated(); //~ 
ERROR use of deprecated item + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item + foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text + foo.trait_unstable(); + foo.trait_unstable_text(); + foo.trait_stable(); + } + + struct S; + + impl UnstableTrait for S { } + impl DeprecatedTrait for S {} //~ ERROR use of deprecated item: text + trait LocalTrait : UnstableTrait { } + trait LocalTrait2 : DeprecatedTrait { } //~ ERROR use of deprecated item: text + + impl Trait for S { + fn trait_stable(&self) {} + fn trait_unstable(&self) {} + } +} + +mod inheritance { + extern crate inherited_stability; + use self::inherited_stability::*; + + fn test_inheritance() { + unstable(); + stable(); + + stable_mod::unstable(); + stable_mod::stable(); + + unstable_mod::deprecated(); //~ ERROR use of deprecated item + unstable_mod::unstable(); + + let _ = Unstable::UnstableVariant; + let _ = Unstable::StableVariant; + + let x: usize = 0; + x.unstable(); + x.stable(); + } +} + +mod this_crate { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn deprecated() {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn deprecated_text() {} + + #[unstable(feature = "test_feature", issue = "0")] + pub fn unstable() {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + pub fn unstable_text() {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub fn stable() {} + #[stable(feature = "rust1", since = "1.0.0")] + pub fn stable_text() {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub struct MethodTester; + + impl MethodTester { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn method_deprecated(&self) {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub fn method_deprecated_text(&self) {} + + #[unstable(feature = "test_feature", issue = "0")] + pub fn method_unstable(&self) {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + pub fn method_unstable_text(&self) {} + + #[stable(feature = "rust1", since = "1.0.0")] + pub fn method_stable(&self) {} + #[stable(feature = "rust1", since = "1.0.0")] + pub fn method_stable_text(&self) {} + } + + pub trait Trait { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn trait_deprecated(&self) {} + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn trait_deprecated_text(&self) {} + + #[unstable(feature = "test_feature", issue = "0")] + fn trait_unstable(&self) {} + #[unstable(feature = "test_feature", reason = "text", issue = "0")] + fn trait_unstable_text(&self) {} + + #[stable(feature = "rust1", since = "1.0.0")] + fn trait_stable(&self) {} + #[stable(feature = "rust1", since = "1.0.0")] + fn trait_stable_text(&self) {} + } + + impl Trait for MethodTester {} + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + #[stable(feature = "rust1", 
since = "1.0.0")] + pub struct StableStruct { + #[stable(feature = "test_feature", since = "1.0.0")] i: isize + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedUnitStruct; + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableUnitStruct; + #[stable(feature = "rust1", since = "1.0.0")] + pub struct StableUnitStruct; + + pub enum Enum { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + DeprecatedVariant, + #[unstable(feature = "test_feature", issue = "0")] + UnstableVariant, + + #[stable(feature = "rust1", since = "1.0.0")] + StableVariant, + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub struct DeprecatedTupleStruct(isize); + #[unstable(feature = "test_feature", issue = "0")] + pub struct UnstableTupleStruct(isize); + #[stable(feature = "rust1", since = "1.0.0")] + pub struct StableTupleStruct(isize); + + fn test() { + // Only the deprecated cases of the following should generate + // errors, because other stability attributes now have meaning + // only *across* crates, not within a single crate. + + type Foo = MethodTester; + let foo = MethodTester; + + deprecated(); //~ ERROR use of deprecated item + foo.method_deprecated(); //~ ERROR use of deprecated item + Foo::method_deprecated(&foo); //~ ERROR use of deprecated item + ::method_deprecated(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated(); //~ ERROR use of deprecated item + Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + + deprecated_text(); //~ ERROR use of deprecated item: text + foo.method_deprecated_text(); //~ ERROR use of deprecated item: text + Foo::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + + unstable(); + foo.method_unstable(); + Foo::method_unstable(&foo); + ::method_unstable(&foo); + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + + unstable_text(); + foo.method_unstable_text(); + Foo::method_unstable_text(&foo); + ::method_unstable_text(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + + stable(); + foo.method_stable(); + Foo::method_stable(&foo); + ::method_stable(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + + stable_text(); + foo.method_stable_text(); + Foo::method_stable_text(&foo); + ::method_stable_text(&foo); + foo.trait_stable_text(); + Trait::trait_stable_text(&foo); + ::trait_stable_text(&foo); + ::trait_stable_text(&foo); + + let _ = DeprecatedStruct { + //~^ ERROR use of deprecated item + i: 0 //~ ERROR use of deprecated item + }; + let _ = UnstableStruct { i: 0 }; + let _ = StableStruct { i: 0 }; + + let _ = DeprecatedUnitStruct; //~ ERROR use of deprecated item + let _ = UnstableUnitStruct; + let _ = StableUnitStruct; + + let _ = 
Enum::DeprecatedVariant; //~ ERROR use of deprecated item + let _ = Enum::UnstableVariant; + let _ = Enum::StableVariant; + + let _ = DeprecatedTupleStruct (1); //~ ERROR use of deprecated item + let _ = UnstableTupleStruct (1); + let _ = StableTupleStruct (1); + } + + fn test_method_param(foo: Foo) { + foo.trait_deprecated(); //~ ERROR use of deprecated item + Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + ::trait_deprecated(&foo); //~ ERROR use of deprecated item + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_unstable(); + Trait::trait_unstable(&foo); + ::trait_unstable(&foo); + ::trait_unstable(&foo); + foo.trait_unstable_text(); + Trait::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + ::trait_unstable_text(&foo); + foo.trait_stable(); + Trait::trait_stable(&foo); + ::trait_stable(&foo); + ::trait_stable(&foo); + } + + fn test_method_object(foo: &Trait) { + foo.trait_deprecated(); //~ ERROR use of deprecated item + foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + foo.trait_unstable(); + foo.trait_unstable_text(); + foo.trait_stable(); + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn test_fn_body() { + fn fn_in_body() {} + fn_in_body(); //~ ERROR use of deprecated item: text + } + + impl MethodTester { + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + fn test_method_body(&self) { + fn fn_in_body() {} + fn_in_body(); //~ ERROR use of deprecated item: text + } + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + pub trait DeprecatedTrait { + fn dummy(&self) { } + } + + struct S; + + impl DeprecatedTrait for S { } //~ ERROR use of deprecated item + + trait LocalTrait : DeprecatedTrait { } //~ ERROR use of deprecated item +} + +fn main() {} diff --git a/src/test/compile-fail/lint-stability-fields-deprecated.rs b/src/test/compile-fail/lint-stability-fields-deprecated.rs new file mode 100644 index 0000000000..5da3e1a930 --- /dev/null +++ b/src/test/compile-fail/lint-stability-fields-deprecated.rs @@ -0,0 +1,348 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:lint_stability_fields.rs +#![deny(deprecated)] +#![allow(dead_code)] +#![feature(staged_api, test_feature)] + +#![stable(feature = "rust1", since = "1.0.0")] + +mod cross_crate { + extern crate lint_stability_fields; + + use self::lint_stability_fields::*; + + pub fn foo() { + let x = Stable { + inherit: 1, + override1: 2, + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + let _ = x.override1; + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Stable { + inherit: _, + override1: _, + override2: _ + //~^ ERROR use of deprecated item + } = x; + // all fine + let Stable { .. 
} = x; + + let x = Stable2(1, 2, 3); + + let _ = x.0; + let _ = x.1; + let _ = x.2; + //~^ ERROR use of deprecated item + + let Stable2(_, + _, + _) + //~^ ERROR use of deprecated item + = x; + // all fine + let Stable2(..) = x; + + + let x = Unstable { + inherit: 1, + override1: 2, + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + let _ = x.override1; + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Unstable { + inherit: _, + override1: _, + override2: _ + //~^ ERROR use of deprecated item + } = x; + + let Unstable + // the patterns are all fine: + { .. } = x; + + + let x = Unstable2(1, 2, 3); + + let _ = x.0; + let _ = x.1; + let _ = x.2; + //~^ ERROR use of deprecated item + + let Unstable2 + (_, + _, + _) + //~^ ERROR use of deprecated item + = x; + let Unstable2 + // the patterns are all fine: + (..) = x; + + + let x = Deprecated { + //~^ ERROR use of deprecated item + inherit: 1, + //~^ ERROR use of deprecated item + override1: 2, + //~^ ERROR use of deprecated item + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + //~^ ERROR use of deprecated item + let _ = x.override1; + //~^ ERROR use of deprecated item + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Deprecated { + //~^ ERROR use of deprecated item + inherit: _, + //~^ ERROR use of deprecated item + override1: _, + //~^ ERROR use of deprecated item + override2: _ + //~^ ERROR use of deprecated item + } = x; + + let Deprecated + //~^ ERROR use of deprecated item + // the patterns are all fine: + { .. } = x; + + let x = Deprecated2(1, 2, 3); + //~^ ERROR use of deprecated item + + let _ = x.0; + //~^ ERROR use of deprecated item + let _ = x.1; + //~^ ERROR use of deprecated item + let _ = x.2; + //~^ ERROR use of deprecated item + + let Deprecated2 + //~^ ERROR use of deprecated item + (_, + //~^ ERROR use of deprecated item + _, + //~^ ERROR use of deprecated item + _) + //~^ ERROR use of deprecated item + = x; + let Deprecated2 + //~^ ERROR use of deprecated item + // the patterns are all fine: + (..) 
= x; + } +} + +mod this_crate { + #[stable(feature = "rust1", since = "1.0.0")] + struct Stable { + inherit: u8, + #[unstable(feature = "test_feature", issue = "0")] + override1: u8, + #[rustc_deprecated(since = "1.0.0", reason = "text")] + #[unstable(feature = "test_feature", issue = "0")] + override2: u8, + } + + #[stable(feature = "rust1", since = "1.0.0")] + struct Stable2(u8, + #[stable(feature = "rust1", since = "1.0.0")] u8, + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] u8); + + #[unstable(feature = "test_feature", issue = "0")] + struct Unstable { + inherit: u8, + #[stable(feature = "rust1", since = "1.0.0")] + override1: u8, + #[rustc_deprecated(since = "1.0.0", reason = "text")] + #[unstable(feature = "test_feature", issue = "0")] + override2: u8, + } + + #[unstable(feature = "test_feature", issue = "0")] + struct Unstable2(u8, + #[stable(feature = "rust1", since = "1.0.0")] u8, + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] u8); + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + struct Deprecated { + inherit: u8, + #[stable(feature = "rust1", since = "1.0.0")] + override1: u8, + #[unstable(feature = "test_feature", issue = "0")] + override2: u8, + } + + #[unstable(feature = "test_feature", issue = "0")] + #[rustc_deprecated(since = "1.0.0", reason = "text")] + struct Deprecated2(u8, + #[stable(feature = "rust1", since = "1.0.0")] u8, + #[unstable(feature = "test_feature", issue = "0")] u8); + + pub fn foo() { + let x = Stable { + inherit: 1, + override1: 2, + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + let _ = x.override1; + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Stable { + inherit: _, + override1: _, + override2: _ + //~^ ERROR use of deprecated item + } = x; + // all fine + let Stable { .. } = x; + + let x = Stable2(1, 2, 3); + + let _ = x.0; + let _ = x.1; + let _ = x.2; + //~^ ERROR use of deprecated item + + let Stable2(_, + _, + _) + //~^ ERROR use of deprecated item + = x; + // all fine + let Stable2(..) = x; + + + let x = Unstable { + inherit: 1, + override1: 2, + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + let _ = x.override1; + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Unstable { + inherit: _, + override1: _, + override2: _ + //~^ ERROR use of deprecated item + } = x; + + let Unstable + // the patterns are all fine: + { .. } = x; + + + let x = Unstable2(1, 2, 3); + + let _ = x.0; + let _ = x.1; + let _ = x.2; + //~^ ERROR use of deprecated item + + let Unstable2 + (_, + _, + _) + //~^ ERROR use of deprecated item + = x; + let Unstable2 + // the patterns are all fine: + (..) 
= x; + + + let x = Deprecated { + //~^ ERROR use of deprecated item + inherit: 1, + //~^ ERROR use of deprecated item + override1: 2, + //~^ ERROR use of deprecated item + override2: 3, + //~^ ERROR use of deprecated item + }; + + let _ = x.inherit; + //~^ ERROR use of deprecated item + let _ = x.override1; + //~^ ERROR use of deprecated item + let _ = x.override2; + //~^ ERROR use of deprecated item + + let Deprecated { + //~^ ERROR use of deprecated item + inherit: _, + //~^ ERROR use of deprecated item + override1: _, + //~^ ERROR use of deprecated item + override2: _ + //~^ ERROR use of deprecated item + } = x; + + let Deprecated + //~^ ERROR use of deprecated item + // the patterns are all fine: + { .. } = x; + + let x = Deprecated2(1, 2, 3); + //~^ ERROR use of deprecated item + + let _ = x.0; + //~^ ERROR use of deprecated item + let _ = x.1; + //~^ ERROR use of deprecated item + let _ = x.2; + //~^ ERROR use of deprecated item + + let Deprecated2 + //~^ ERROR use of deprecated item + (_, + //~^ ERROR use of deprecated item + _, + //~^ ERROR use of deprecated item + _) + //~^ ERROR use of deprecated item + = x; + let Deprecated2 + //~^ ERROR use of deprecated item + // the patterns are all fine: + (..) = x; + } +} + +fn main() {} diff --git a/src/test/compile-fail/lint-stability-fields.rs b/src/test/compile-fail/lint-stability-fields.rs index d63e1f901f..1b605bdb89 100644 --- a/src/test/compile-fail/lint-stability-fields.rs +++ b/src/test/compile-fail/lint-stability-fields.rs @@ -9,7 +9,7 @@ // except according to those terms. // aux-build:lint_stability_fields.rs -#![deny(deprecated)] +#![allow(deprecated)] #![allow(dead_code)] #![feature(staged_api)] @@ -24,23 +24,17 @@ mod cross_crate { let x = Stable { inherit: 1, override1: 2, //~ ERROR use of unstable - override2: 3, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + override2: 3, //~ ERROR use of unstable }; let _ = x.inherit; let _ = x.override1; //~ ERROR use of unstable - let _ = x.override2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.override2; //~ ERROR use of unstable let Stable { inherit: _, override1: _, //~ ERROR use of unstable - override2: _ - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + override2: _ //~ ERROR use of unstable } = x; // all fine let Stable { .. } = x; @@ -49,15 +43,11 @@ mod cross_crate { let _ = x.0; let _ = x.1; //~ ERROR use of unstable - let _ = x.2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.2; //~ ERROR use of unstable let Stable2(_, _, //~ ERROR use of unstable - _) - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + _) //~ ERROR use of unstable = x; // all fine let Stable2(..) 
= x; @@ -66,23 +56,17 @@ mod cross_crate { let x = Unstable { //~ ERROR use of unstable inherit: 1, //~ ERROR use of unstable override1: 2, - override2: 3, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + override2: 3, //~ ERROR use of unstable }; let _ = x.inherit; //~ ERROR use of unstable let _ = x.override1; - let _ = x.override2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.override2; //~ ERROR use of unstable let Unstable { //~ ERROR use of unstable inherit: _, //~ ERROR use of unstable override1: _, - override2: _ - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + override2: _ //~ ERROR use of unstable } = x; let Unstable //~ ERROR use of unstable @@ -94,91 +78,50 @@ mod cross_crate { let _ = x.0; //~ ERROR use of unstable let _ = x.1; - let _ = x.2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.2; //~ ERROR use of unstable let Unstable2 //~ ERROR use of unstable (_, //~ ERROR use of unstable _, - _) - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + _) //~ ERROR use of unstable = x; let Unstable2 //~ ERROR use of unstable // the patterns are all fine: (..) = x; - let x = Deprecated { - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable - inherit: 1, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let x = Deprecated { //~ ERROR use of unstable + inherit: 1, //~ ERROR use of unstable override1: 2, - //~^ ERROR use of deprecated item - override2: 3, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + override2: 3, //~ ERROR use of unstable }; - let _ = x.inherit; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.inherit; //~ ERROR use of unstable let _ = x.override1; - //~^ ERROR use of deprecated item - let _ = x.override2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.override2; //~ ERROR use of unstable - let Deprecated { - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable - inherit: _, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let Deprecated { //~ ERROR use of unstable + inherit: _, //~ ERROR use of unstable override1: _, - //~^ ERROR use of deprecated item - override2: _ - //~^ ERROR use of unstable - //~^^ ERROR use of deprecated item + override2: _ //~ ERROR use of unstable } = x; - let Deprecated - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let Deprecated //~ ERROR use of unstable // the patterns are all fine: { .. } = x; - let x = Deprecated2(1, 2, 3); - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let x = Deprecated2(1, 2, 3); //~ ERROR use of unstable - let _ = x.0; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.0; //~ ERROR use of unstable let _ = x.1; - //~^ ERROR use of deprecated item - let _ = x.2; - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let _ = x.2; //~ ERROR use of unstable - let Deprecated2 - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable - (_, - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let Deprecated2 //~ ERROR use of unstable + (_, //~ ERROR use of unstable _, - //~^ ERROR use of deprecated item - _) - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + _) //~ ERROR use of unstable = x; - let Deprecated2 - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable + let Deprecated2 //~ ERROR use of unstable // the patterns are all fine: (..) 
= x; } @@ -238,19 +181,16 @@ mod this_crate { inherit: 1, override1: 2, override2: 3, - //~^ ERROR use of deprecated item }; let _ = x.inherit; let _ = x.override1; let _ = x.override2; - //~^ ERROR use of deprecated item let Stable { inherit: _, override1: _, override2: _ - //~^ ERROR use of deprecated item } = x; // all fine let Stable { .. } = x; @@ -260,12 +200,10 @@ mod this_crate { let _ = x.0; let _ = x.1; let _ = x.2; - //~^ ERROR use of deprecated item let Stable2(_, _, _) - //~^ ERROR use of deprecated item = x; // all fine let Stable2(..) = x; @@ -275,19 +213,16 @@ mod this_crate { inherit: 1, override1: 2, override2: 3, - //~^ ERROR use of deprecated item }; let _ = x.inherit; let _ = x.override1; let _ = x.override2; - //~^ ERROR use of deprecated item let Unstable { inherit: _, override1: _, override2: _ - //~^ ERROR use of deprecated item } = x; let Unstable @@ -300,13 +235,11 @@ mod this_crate { let _ = x.0; let _ = x.1; let _ = x.2; - //~^ ERROR use of deprecated item let Unstable2 (_, _, _) - //~^ ERROR use of deprecated item = x; let Unstable2 // the patterns are all fine: @@ -314,58 +247,37 @@ mod this_crate { let x = Deprecated { - //~^ ERROR use of deprecated item inherit: 1, - //~^ ERROR use of deprecated item override1: 2, - //~^ ERROR use of deprecated item override2: 3, - //~^ ERROR use of deprecated item }; let _ = x.inherit; - //~^ ERROR use of deprecated item let _ = x.override1; - //~^ ERROR use of deprecated item let _ = x.override2; - //~^ ERROR use of deprecated item let Deprecated { - //~^ ERROR use of deprecated item inherit: _, - //~^ ERROR use of deprecated item override1: _, - //~^ ERROR use of deprecated item override2: _ - //~^ ERROR use of deprecated item } = x; let Deprecated - //~^ ERROR use of deprecated item // the patterns are all fine: { .. } = x; let x = Deprecated2(1, 2, 3); - //~^ ERROR use of deprecated item let _ = x.0; - //~^ ERROR use of deprecated item let _ = x.1; - //~^ ERROR use of deprecated item let _ = x.2; - //~^ ERROR use of deprecated item let Deprecated2 - //~^ ERROR use of deprecated item (_, - //~^ ERROR use of deprecated item _, - //~^ ERROR use of deprecated item _) - //~^ ERROR use of deprecated item = x; let Deprecated2 - //~^ ERROR use of deprecated item // the patterns are all fine: (..) 
= x; } diff --git a/src/test/compile-fail/lint-stability.rs b/src/test/compile-fail/lint-stability.rs index 953cd4a2ff..1ece7a0b8e 100644 --- a/src/test/compile-fail/lint-stability.rs +++ b/src/test/compile-fail/lint-stability.rs @@ -13,7 +13,7 @@ // aux-build:stability_cfg1.rs // aux-build:stability_cfg2.rs -#![deny(deprecated)] +#![allow(deprecated)] #![allow(dead_code)] #![feature(staged_api)] @@ -32,81 +32,46 @@ mod cross_crate { type Foo = MethodTester; let foo = MethodTester; - deprecated(); //~ ERROR use of deprecated item - foo.method_deprecated(); //~ ERROR use of deprecated item - Foo::method_deprecated(&foo); //~ ERROR use of deprecated item - ::method_deprecated(&foo); //~ ERROR use of deprecated item - foo.trait_deprecated(); //~ ERROR use of deprecated item - Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item + deprecated(); + foo.method_deprecated(); + Foo::method_deprecated(&foo); + ::method_deprecated(&foo); + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); - deprecated_text(); //~ ERROR use of deprecated item: text - foo.method_deprecated_text(); //~ ERROR use of deprecated item: text - Foo::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text - Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + deprecated_text(); + foo.method_deprecated_text(); + Foo::method_deprecated_text(&foo); + ::method_deprecated_text(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); - deprecated_unstable(); //~ ERROR use of deprecated item + deprecated_unstable(); //~^ ERROR use of unstable library feature - foo.method_deprecated_unstable(); //~ ERROR use of deprecated item + Trait::trait_deprecated_unstable(&foo); //~^ ERROR use of unstable library feature - Foo::method_deprecated_unstable(&foo); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - ::method_deprecated_unstable(&foo); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - Trait::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~^ ERROR use of unstable library feature - deprecated_unstable_text(); //~ ERROR use of deprecated item: text + deprecated_unstable_text(); //~^ ERROR use of unstable library feature - foo.method_deprecated_unstable_text(); //~ ERROR use of deprecated item: text + Trait::trait_deprecated_unstable_text(&foo); //~^ ERROR use of unstable library feature - Foo::method_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - ::method_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: 
text - //~^ ERROR use of unstable library feature - foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - Trait::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text + ::trait_deprecated_unstable_text(&foo); //~^ ERROR use of unstable library feature unstable(); //~ ERROR use of unstable library feature - foo.method_unstable(); //~ ERROR use of unstable library feature - Foo::method_unstable(&foo); //~ ERROR use of unstable library feature - ::method_unstable(&foo); //~ ERROR use of unstable library feature - foo.trait_unstable(); //~ ERROR use of unstable library feature Trait::trait_unstable(&foo); //~ ERROR use of unstable library feature - ::trait_unstable(&foo); //~ ERROR use of unstable library feature ::trait_unstable(&foo); //~ ERROR use of unstable library feature unstable_text(); //~^ ERROR use of unstable library feature 'test_feature': text - foo.method_unstable_text(); - //~^ ERROR use of unstable library feature 'test_feature': text - Foo::method_unstable_text(&foo); - //~^ ERROR use of unstable library feature 'test_feature': text - ::method_unstable_text(&foo); - //~^ ERROR use of unstable library feature 'test_feature': text - foo.trait_unstable_text(); - //~^ ERROR use of unstable library feature 'test_feature': text Trait::trait_unstable_text(&foo); //~^ ERROR use of unstable library feature 'test_feature': text - ::trait_unstable_text(&foo); - //~^ ERROR use of unstable library feature 'test_feature': text ::trait_unstable_text(&foo); //~^ ERROR use of unstable library feature 'test_feature': text @@ -131,33 +96,31 @@ mod cross_crate { struct S1(T::TypeUnstable); //~^ ERROR use of unstable library feature struct S2(T::TypeDeprecated); - //~^ ERROR use of deprecated item - let _ = DeprecatedStruct { //~ ERROR use of deprecated item - i: 0 //~ ERROR use of deprecated item + let _ = DeprecatedStruct { + i: 0 }; let _ = DeprecatedUnstableStruct { - //~^ ERROR use of deprecated item - //~^^ ERROR use of unstable library feature - i: 0 //~ ERROR use of deprecated item + //~^ ERROR use of unstable library feature + i: 0 }; let _ = UnstableStruct { i: 0 }; //~ ERROR use of unstable library feature let _ = StableStruct { i: 0 }; - let _ = DeprecatedUnitStruct; //~ ERROR use of deprecated item - let _ = DeprecatedUnstableUnitStruct; //~ ERROR use of deprecated item + let _ = DeprecatedUnitStruct; + let _ = DeprecatedUnstableUnitStruct; //~^ ERROR use of unstable library feature let _ = UnstableUnitStruct; //~ ERROR use of unstable library feature let _ = StableUnitStruct; - let _ = Enum::DeprecatedVariant; //~ ERROR use of deprecated item - let _ = Enum::DeprecatedUnstableVariant; //~ ERROR use of deprecated item + let _ = Enum::DeprecatedVariant; + let _ = Enum::DeprecatedUnstableVariant; //~^ ERROR use of unstable library feature let _ = Enum::UnstableVariant; //~ ERROR use of unstable library feature let _ = Enum::StableVariant; - let _ = DeprecatedTupleStruct (1); //~ ERROR use of deprecated item - let _ = DeprecatedUnstableTupleStruct (1); //~ ERROR use of deprecated item + let _ = DeprecatedTupleStruct (1); + let _ = DeprecatedUnstableTupleStruct (1); //~^ ERROR use of unstable library feature let _ = UnstableTupleStruct (1); //~ 
ERROR use of unstable library feature let _ = StableTupleStruct (1); @@ -167,47 +130,33 @@ mod cross_crate { // Eventually, we will want to lint the contents of the // macro in the module *defining* it. Also, stability levels // on macros themselves are not yet linted. - macro_test_arg!(deprecated_text()); //~ ERROR use of deprecated item: text - macro_test_arg!(deprecated_unstable_text()); //~ ERROR use of deprecated item: text + macro_test_arg!(deprecated_text()); + macro_test_arg!(deprecated_unstable_text()); //~^ ERROR use of unstable library feature - macro_test_arg!(macro_test_arg!(deprecated_text())); //~ ERROR use of deprecated item: text + macro_test_arg!(macro_test_arg!(deprecated_text())); } fn test_method_param(foo: Foo) { - foo.trait_deprecated(); //~ ERROR use of deprecated item - Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text - Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + Trait::trait_deprecated_unstable(&foo); //~^ ERROR use of unstable library feature - Trait::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable(&foo); //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + Trait::trait_deprecated_unstable_text(&foo); //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable(&foo); //~ ERROR use of deprecated item + ::trait_deprecated_unstable_text(&foo); //~^ ERROR use of unstable library feature - foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - Trait::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - ::trait_deprecated_unstable_text(&foo); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - foo.trait_unstable(); //~ ERROR use of unstable library feature Trait::trait_unstable(&foo); //~ ERROR use of unstable library feature - ::trait_unstable(&foo); //~ ERROR use of unstable library feature ::trait_unstable(&foo); //~ ERROR use of unstable library feature - foo.trait_unstable_text(); - //~^ ERROR use of unstable library feature 'test_feature': text Trait::trait_unstable_text(&foo); //~^ ERROR use of unstable library feature 'test_feature': text - ::trait_unstable_text(&foo); - //~^ ERROR use of unstable library feature 'test_feature': text ::trait_unstable_text(&foo); //~^ ERROR use of unstable library feature 'test_feature': text foo.trait_stable(); @@ -217,24 +166,17 @@ mod cross_crate { } fn test_method_object(foo: &Trait) { - foo.trait_deprecated(); //~ ERROR use of deprecated item - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text - 
foo.trait_deprecated_unstable(); //~ ERROR use of deprecated item - //~^ ERROR use of unstable library feature - foo.trait_deprecated_unstable_text(); //~ ERROR use of deprecated item: text - //~^ ERROR use of unstable library feature - foo.trait_unstable(); //~ ERROR use of unstable library feature - foo.trait_unstable_text(); - //~^ ERROR use of unstable library feature 'test_feature': text + foo.trait_deprecated(); + foo.trait_deprecated_text(); foo.trait_stable(); } struct S; impl UnstableTrait for S { } //~ ERROR use of unstable library feature - impl DeprecatedTrait for S {} //~ ERROR use of deprecated item: text + impl DeprecatedTrait for S {} trait LocalTrait : UnstableTrait { } //~ ERROR use of unstable library feature - trait LocalTrait2 : DeprecatedTrait { } //~ ERROR use of deprecated item: text + trait LocalTrait2 : DeprecatedTrait { } impl Trait for S { fn trait_stable(&self) {} @@ -253,14 +195,13 @@ mod inheritance { stable_mod::unstable(); //~ ERROR use of unstable library feature stable_mod::stable(); - unstable_mod::deprecated(); //~ ERROR use of deprecated item + unstable_mod::deprecated(); unstable_mod::unstable(); //~ ERROR use of unstable library feature let _ = Unstable::UnstableVariant; //~ ERROR use of unstable library feature let _ = Unstable::StableVariant; let x: usize = 0; - x.unstable(); //~ ERROR use of unstable library feature x.stable(); } } @@ -375,23 +316,23 @@ mod this_crate { type Foo = MethodTester; let foo = MethodTester; - deprecated(); //~ ERROR use of deprecated item - foo.method_deprecated(); //~ ERROR use of deprecated item - Foo::method_deprecated(&foo); //~ ERROR use of deprecated item - ::method_deprecated(&foo); //~ ERROR use of deprecated item - foo.trait_deprecated(); //~ ERROR use of deprecated item - Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item + deprecated(); + foo.method_deprecated(); + Foo::method_deprecated(&foo); + ::method_deprecated(&foo); + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); - deprecated_text(); //~ ERROR use of deprecated item: text - foo.method_deprecated_text(); //~ ERROR use of deprecated item: text - Foo::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::method_deprecated_text(&foo); //~ ERROR use of deprecated item: text - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text - Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + deprecated_text(); + foo.method_deprecated_text(); + Foo::method_deprecated_text(&foo); + ::method_deprecated_text(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); unstable(); foo.method_unstable(); @@ -430,34 +371,33 @@ mod this_crate { ::trait_stable_text(&foo); let _ = DeprecatedStruct { - //~^ ERROR use of deprecated item - i: 0 //~ ERROR use of deprecated item + i: 0 }; let _ = UnstableStruct { i: 0 }; let _ = StableStruct { i: 0 }; - let _ = DeprecatedUnitStruct; //~ ERROR use of deprecated item + let _ = DeprecatedUnitStruct; let _ = UnstableUnitStruct; let _ = StableUnitStruct; - let _ = Enum::DeprecatedVariant; //~ ERROR use of deprecated item + let _ = Enum::DeprecatedVariant; let _ 
= Enum::UnstableVariant; let _ = Enum::StableVariant; - let _ = DeprecatedTupleStruct (1); //~ ERROR use of deprecated item + let _ = DeprecatedTupleStruct (1); let _ = UnstableTupleStruct (1); let _ = StableTupleStruct (1); } fn test_method_param(foo: Foo) { - foo.trait_deprecated(); //~ ERROR use of deprecated item - Trait::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - ::trait_deprecated(&foo); //~ ERROR use of deprecated item - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text - Trait::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text - ::trait_deprecated_text(&foo); //~ ERROR use of deprecated item: text + foo.trait_deprecated(); + Trait::trait_deprecated(&foo); + ::trait_deprecated(&foo); + ::trait_deprecated(&foo); + foo.trait_deprecated_text(); + Trait::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); + ::trait_deprecated_text(&foo); foo.trait_unstable(); Trait::trait_unstable(&foo); ::trait_unstable(&foo); @@ -473,8 +413,8 @@ mod this_crate { } fn test_method_object(foo: &Trait) { - foo.trait_deprecated(); //~ ERROR use of deprecated item - foo.trait_deprecated_text(); //~ ERROR use of deprecated item: text + foo.trait_deprecated(); + foo.trait_deprecated_text(); foo.trait_unstable(); foo.trait_unstable_text(); foo.trait_stable(); @@ -484,7 +424,7 @@ mod this_crate { #[rustc_deprecated(since = "1.0.0", reason = "text")] fn test_fn_body() { fn fn_in_body() {} - fn_in_body(); //~ ERROR use of deprecated item: text + fn_in_body(); } impl MethodTester { @@ -492,7 +432,7 @@ mod this_crate { #[rustc_deprecated(since = "1.0.0", reason = "text")] fn test_method_body(&self) { fn fn_in_body() {} - fn_in_body(); //~ ERROR use of deprecated item: text + fn_in_body(); } } @@ -504,9 +444,9 @@ mod this_crate { struct S; - impl DeprecatedTrait for S { } //~ ERROR use of deprecated item + impl DeprecatedTrait for S { } - trait LocalTrait : DeprecatedTrait { } //~ ERROR use of deprecated item + trait LocalTrait : DeprecatedTrait { } } fn main() {} diff --git a/src/test/compile-fail/lint-unused-imports.rs b/src/test/compile-fail/lint-unused-imports.rs index 3f91c3e1e5..f6f7c210f4 100644 --- a/src/test/compile-fail/lint-unused-imports.rs +++ b/src/test/compile-fail/lint-unused-imports.rs @@ -15,10 +15,13 @@ use bar::c::cc as cal; use std::mem::*; // shouldn't get errors for not using // everything imported +use std::fmt::{}; +//~^ ERROR unused import: `use std::fmt::{};` // Should get errors for both 'Some' and 'None' -use std::option::Option::{Some, None}; //~ ERROR unused import: `Some` - //~^ ERROR unused import: `None` +use std::option::Option::{Some, None}; +//~^ ERROR unused imports: `None`, `Some` +//~| ERROR unused imports: `None`, `Some` use test::A; //~ ERROR unused import: `test::A` // Be sure that if we just bring some methods into scope that they're also diff --git a/src/test/compile-fail/liveness-forgot-ret.rs b/src/test/compile-fail/liveness-forgot-ret.rs index e08515e40a..1ee4be08a1 100644 --- a/src/test/compile-fail/liveness-forgot-ret.rs +++ b/src/test/compile-fail/liveness-forgot-ret.rs @@ -8,10 +8,9 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-// error-pattern: not all control paths return a value - fn god_exists(a: isize) -> bool { return god_exists(a); } fn f(a: isize) -> isize { if god_exists(a) { return 5; }; } +//~^ ERROR mismatched types fn main() { f(12); } diff --git a/src/test/compile-fail/liveness-issue-2163.rs b/src/test/compile-fail/liveness-issue-2163.rs index 7c94e33b47..69bceec8c3 100644 --- a/src/test/compile-fail/liveness-issue-2163.rs +++ b/src/test/compile-fail/liveness-issue-2163.rs @@ -13,6 +13,6 @@ use std::vec::Vec; fn main() { let a: Vec = Vec::new(); a.iter().all(|_| -> bool { - //~^ ERROR not all control paths return a value + //~^ ERROR mismatched types }); } diff --git a/src/test/compile-fail/liveness-missing-ret2.rs b/src/test/compile-fail/liveness-missing-ret2.rs index b53bb6159e..a35eb1af4f 100644 --- a/src/test/compile-fail/liveness-missing-ret2.rs +++ b/src/test/compile-fail/liveness-missing-ret2.rs @@ -8,9 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// error-pattern: not all control paths return a value - -fn f() -> isize { +fn f() -> isize { //~ ERROR mismatched types // Make sure typestate doesn't interpret this match expression as // the function result match true { true => { } _ => {} }; diff --git a/src/test/compile-fail/liveness-return-last-stmt-semi.rs b/src/test/compile-fail/liveness-return-last-stmt-semi.rs index 03733cc2eb..ada91c38d4 100644 --- a/src/test/compile-fail/liveness-return-last-stmt-semi.rs +++ b/src/test/compile-fail/liveness-return-last-stmt-semi.rs @@ -11,16 +11,16 @@ // regression test for #8005 macro_rules! test { () => { fn foo() -> i32 { 1; } } } - //~^ ERROR not all control paths return a value + //~^ ERROR mismatched types //~| HELP consider removing this semicolon -fn no_return() -> i32 {} //~ ERROR not all control paths return a value +fn no_return() -> i32 {} //~ ERROR mismatched types -fn bar(x: u32) -> u32 { //~ ERROR not all control paths return a value +fn bar(x: u32) -> u32 { //~ ERROR mismatched types x * 2; //~ HELP consider removing this semicolon } -fn baz(x: u64) -> u32 { //~ ERROR not all control paths return a value +fn baz(x: u64) -> u32 { //~ ERROR mismatched types x * 2; } diff --git a/src/test/compile-fail/loop-break-value.rs b/src/test/compile-fail/loop-break-value.rs new file mode 100644 index 0000000000..d4f2959748 --- /dev/null +++ b/src/test/compile-fail/loop-break-value.rs @@ -0,0 +1,101 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(loop_break_value)] +#![feature(never_type)] + +fn main() { + let val: ! 
= loop { break break; }; + //~^ ERROR mismatched types + + loop { + if true { + break "asdf"; + } else { + break 123; //~ ERROR mismatched types + } + }; + + let _: i32 = loop { + break "asdf"; //~ ERROR mismatched types + }; + + let _: i32 = 'outer_loop: loop { + loop { + break 'outer_loop "nope"; //~ ERROR mismatched types + break "ok"; + }; + }; + + 'while_loop: while true { + break; + break (); //~ ERROR `break` with value from a `while` loop + loop { + break 'while_loop 123; + //~^ ERROR `break` with value from a `while` loop + //~| ERROR mismatched types + break 456; + break 789; + }; + } + + 'while_let_loop: while let Some(_) = Some(()) { + if break () { //~ ERROR `break` with value from a `while let` loop + break; + break None; + //~^ ERROR `break` with value from a `while let` loop + //~| ERROR mismatched types + } + loop { + break 'while_let_loop "nope"; + //~^ ERROR `break` with value from a `while let` loop + //~| ERROR mismatched types + break 33; + }; + } + + 'for_loop: for _ in &[1,2,3] { + break (); //~ ERROR `break` with value from a `for` loop + break [()]; + //~^ ERROR `break` with value from a `for` loop + //~| ERROR mismatched types + loop { + break Some(3); + break 'for_loop Some(17); + //~^ ERROR `break` with value from a `for` loop + //~| ERROR mismatched types + }; + } + + let _: i32 = 'a: loop { + let _: () = 'b: loop { + break ('c: loop { + break; + break 'c 123; //~ ERROR mismatched types + }); + break 'a 123; + }; + }; + + loop { + break (break, break); //~ ERROR mismatched types + }; + + loop { + break; + break 2; //~ ERROR mismatched types + }; + + loop { + break 2; + break; //~ ERROR mismatched types + break 4; + }; +} diff --git a/src/test/compile-fail/macro-tt-matchers.rs b/src/test/compile-fail/macro-tt-matchers.rs index 945490cefb..969f150071 100644 --- a/src/test/compile-fail/macro-tt-matchers.rs +++ b/src/test/compile-fail/macro-tt-matchers.rs @@ -9,6 +9,7 @@ // except according to those terms. #![feature(rustc_attrs)] +#![allow(dead_code)] macro_rules! foo { ($x:tt) => (type Alias = $x;) diff --git a/src/test/compile-fail/macro-with-seps-err-msg.rs b/src/test/compile-fail/macro-with-seps-err-msg.rs index 408bb15ba2..d5fc9a510f 100644 --- a/src/test/compile-fail/macro-with-seps-err-msg.rs +++ b/src/test/compile-fail/macro-with-seps-err-msg.rs @@ -9,7 +9,7 @@ // except according to those terms. 
fn main() { - globnar::brotz!(); //~ ERROR expected macro name without module separators - ::foo!(); //~ ERROR expected macro name without module separators - foo::!(); //~ ERROR expected macro name without module separators + globnar::brotz!(); //~ ERROR non-ident macro paths are experimental + ::foo!(); //~ ERROR non-ident macro paths are experimental + foo::!(); //~ ERROR type parameters are not allowed on macros } diff --git a/src/test/compile-fail/main-wrong-type-2.rs b/src/test/compile-fail/main-wrong-type-2.rs index 7434a6c960..2878cbc7fc 100644 --- a/src/test/compile-fail/main-wrong-type-2.rs +++ b/src/test/compile-fail/main-wrong-type-2.rs @@ -10,4 +10,5 @@ fn main() -> char { //~^ ERROR: main function has wrong type + ' ' } diff --git a/src/test/compile-fail/map-types.rs b/src/test/compile-fail/map-types.rs index a419c6480e..e24441c549 100644 --- a/src/test/compile-fail/map-types.rs +++ b/src/test/compile-fail/map-types.rs @@ -10,8 +10,6 @@ #![feature(box_syntax)] -extern crate collections; - use std::collections::HashMap; trait Map diff --git a/src/test/compile-fail/match-slice-patterns.rs b/src/test/compile-fail/match-slice-patterns.rs new file mode 100644 index 0000000000..c0fc75f971 --- /dev/null +++ b/src/test/compile-fail/match-slice-patterns.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(advanced_slice_patterns, slice_patterns)] + +fn check(list: &[Option<()>]) { + match list { + //~^ ERROR `&[None, Some(_), None, _]` and `&[Some(_), Some(_), None, _]` not covered + &[] => {}, + &[_] => {}, + &[_, _] => {}, + &[_, None, ..] => {}, + &[.., Some(_), _] => {}, + } +} + +fn main() {} diff --git a/src/test/compile-fail/maybe-bounds-where-cpass.rs b/src/test/compile-fail/maybe-bounds-where-cpass.rs new file mode 100644 index 0000000000..f10526200f --- /dev/null +++ b/src/test/compile-fail/maybe-bounds-where-cpass.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(rustc_attrs)] + +struct S(*const T) where T: ?Sized; + +#[rustc_error] +fn main() { //~ ERROR compilation successful + let u = vec![1, 2, 3]; + let _s: S<[u8]> = S(&u[..]); +} diff --git a/src/test/compile-fail/maybe-bounds-where.rs b/src/test/compile-fail/maybe-bounds-where.rs new file mode 100644 index 0000000000..211fac2ee2 --- /dev/null +++ b/src/test/compile-fail/maybe-bounds-where.rs @@ -0,0 +1,37 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+
+struct S1(T) where (T): ?Sized;
+//~^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
+
+struct S2(T) where u8: ?Sized;
+//~^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
+
+struct S3(T) where &'static T: ?Sized;
+//~^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
+
+trait Trait<'a> {}
+
+struct S4(T) where for<'a> T: ?Trait<'a>;
+//~^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
+
+struct S5(*const T) where T: ?Trait<'static> + ?Sized;
+//~^ ERROR type parameter has more than one relaxed default bound
+//~| WARN default bound relaxed for a type parameter
+
+impl S1 {
+    fn f() where T: ?Sized {}
+    //~^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
+}
+
+fn main() {
+    let u = vec![1, 2, 3];
+    let _s: S5<[u8]> = S5(&u[..]); // OK
+}
diff --git a/src/test/compile-fail/maybe-bounds.rs b/src/test/compile-fail/maybe-bounds.rs
new file mode 100644
index 0000000000..b0b412bbf8
--- /dev/null
+++ b/src/test/compile-fail/maybe-bounds.rs
@@ -0,0 +1,17 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 or the MIT license
+// , at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+trait Tr: ?Sized {} //~ ERROR `?Trait` is not permitted in supertraits
+    //~^ NOTE traits are `?Sized` by default
+
+type A1 = Tr + ?Sized; //~ ERROR `?Trait` is not permitted in trait object types
+type A2 = for<'a> Tr + ?Sized; //~ ERROR `?Trait` is not permitted in trait object types
+
+fn main() {}
diff --git a/src/test/compile-fail/method-ambig-one-trait-unknown-int-type.rs b/src/test/compile-fail/method-ambig-one-trait-unknown-int-type.rs
index 4f86909765..1cf41f95a2 100644
--- a/src/test/compile-fail/method-ambig-one-trait-unknown-int-type.rs
+++ b/src/test/compile-fail/method-ambig-one-trait-unknown-int-type.rs
@@ -32,7 +32,7 @@ impl foo for Vec {
 fn m1() {
     // we couldn't infer the type of the vector just based on calling foo()...
let mut x = Vec::new(); - //~^ ERROR unable to infer enough type information about `_` [E0282] + //~^ ERROR unable to infer enough type information about `T` [E0282] x.foo(); } diff --git a/src/test/compile-fail/method-call-err-msg.rs b/src/test/compile-fail/method-call-err-msg.rs index b7e0c5b81d..b8eb8434b3 100644 --- a/src/test/compile-fail/method-call-err-msg.rs +++ b/src/test/compile-fail/method-call-err-msg.rs @@ -13,8 +13,11 @@ pub struct Foo; impl Foo { fn zero(self) -> Foo { self } + //~^ NOTE defined here fn one(self, _: isize) -> Foo { self } + //~^ NOTE defined here fn two(self, _: isize, _: isize) -> Foo { self } + //~^ NOTE defined here } fn main() { @@ -22,10 +25,9 @@ fn main() { x.zero(0) //~ ERROR this function takes 0 parameters but 1 parameter was supplied //~^ NOTE expected 0 parameters .one() //~ ERROR this function takes 1 parameter but 0 parameters were supplied - //~^ NOTE the following parameter type was expected + //~^ NOTE expected 1 parameter .two(0); //~ ERROR this function takes 2 parameters but 1 parameter was supplied - //~^ NOTE the following parameter types were expected - //~| NOTE isize, isize + //~^ NOTE expected 2 parameters let y = Foo; y.zero() diff --git a/src/test/compile-fail/method-path-in-pattern.rs b/src/test/compile-fail/method-path-in-pattern.rs index aaa89b2282..671a518073 100644 --- a/src/test/compile-fail/method-path-in-pattern.rs +++ b/src/test/compile-fail/method-path-in-pattern.rs @@ -22,13 +22,15 @@ impl MyTrait for Foo {} fn main() { match 0u32 { - Foo::bar => {} //~ ERROR expected unit struct/variant or constant, found method `Foo::bar` + Foo::bar => {} + //~^ ERROR expected unit struct/variant or constant, found method `::bar` } match 0u32 { - ::bar => {} //~ ERROR expected unit struct/variant or constant, found method `bar` + ::bar => {} + //~^ ERROR expected unit struct/variant or constant, found method `::bar` } match 0u32 { ::trait_bar => {} - //~^ ERROR expected unit struct/variant or constant, found method `trait_bar` + //~^ ERROR expected unit struct/variant or constant, found method `::trait_bar` } } diff --git a/src/test/compile-fail/mir-dataflow/def-inits-1.rs b/src/test/compile-fail/mir-dataflow/def-inits-1.rs index 1ba1bb35bb..f3c9f29821 100644 --- a/src/test/compile-fail/mir-dataflow/def-inits-1.rs +++ b/src/test/compile-fail/mir-dataflow/def-inits-1.rs @@ -10,7 +10,7 @@ // General test of maybe_uninits state computed by MIR dataflow. -#![feature(rustc_attrs)] +#![feature(core_intrinsics, rustc_attrs)] use std::intrinsics::rustc_peek; use std::mem::{drop, replace}; diff --git a/src/test/compile-fail/mir-dataflow/inits-1.rs b/src/test/compile-fail/mir-dataflow/inits-1.rs index c8cf44adb9..8a5ab6e420 100644 --- a/src/test/compile-fail/mir-dataflow/inits-1.rs +++ b/src/test/compile-fail/mir-dataflow/inits-1.rs @@ -10,7 +10,7 @@ // General test of maybe_inits state computed by MIR dataflow. -#![feature(rustc_attrs)] +#![feature(core_intrinsics, rustc_attrs)] use std::intrinsics::rustc_peek; use std::mem::{drop, replace}; diff --git a/src/test/compile-fail/mir-dataflow/uninits-1.rs b/src/test/compile-fail/mir-dataflow/uninits-1.rs index a82bfc8969..8df66ea815 100644 --- a/src/test/compile-fail/mir-dataflow/uninits-1.rs +++ b/src/test/compile-fail/mir-dataflow/uninits-1.rs @@ -10,7 +10,7 @@ // General test of maybe_uninits state computed by MIR dataflow. 
-#![feature(rustc_attrs)] +#![feature(core_intrinsics, rustc_attrs)] use std::intrinsics::rustc_peek; use std::mem::{drop, replace}; diff --git a/src/test/compile-fail/mir-dataflow/uninits-2.rs b/src/test/compile-fail/mir-dataflow/uninits-2.rs index 8cfdae5066..2edd275e78 100644 --- a/src/test/compile-fail/mir-dataflow/uninits-2.rs +++ b/src/test/compile-fail/mir-dataflow/uninits-2.rs @@ -10,7 +10,7 @@ // General test of maybe_uninits state computed by MIR dataflow. -#![feature(rustc_attrs)] +#![feature(core_intrinsics, rustc_attrs)] use std::intrinsics::rustc_peek; use std::mem::{drop, replace}; diff --git a/src/test/compile-fail/namespace-mix-old.rs b/src/test/compile-fail/namespace-mix-old.rs deleted file mode 100644 index ad67664419..0000000000 --- a/src/test/compile-fail/namespace-mix-old.rs +++ /dev/null @@ -1,174 +0,0 @@ -// Copyright 2016 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -// FIXME: Remove when `item_like_imports` is stabilized. - -// aux-build:namespace-mix-old.rs - -#![feature(relaxed_adts)] - -extern crate namespace_mix_old; -use namespace_mix_old::{xm1, xm2, xm3, xm4, xm5, xm6, xm7, xm8, xm9, xmA, xmB, xmC}; - -mod c { - pub struct S {} - pub struct TS(); - pub struct US; - pub enum E { - V {}, - TV(), - UV, - } - - pub struct Item; -} - -mod proxy { - pub use c::*; - pub use c::E::*; -} - -// Use something emitting the type argument name, e.g. unsatisfied bound. -trait Impossible {} -fn check(_: T) {} - -mod m1 { - pub use ::proxy::*; - pub type S = ::c::Item; -} -mod m2 { - pub use ::proxy::*; - pub const S: ::c::Item = ::c::Item; -} - -fn f12() { - check(m1::S{}); //~ ERROR c::Item - check(m1::S); //~ ERROR unresolved name - check(m2::S{}); //~ ERROR c::S - check(m2::S); //~ ERROR c::Item -} -fn xf12() { - check(xm1::S{}); //~ ERROR c::Item - check(xm1::S); //~ ERROR unresolved name - check(xm2::S{}); //~ ERROR c::S - check(xm2::S); //~ ERROR c::Item -} - -mod m3 { - pub use ::proxy::*; - pub type TS = ::c::Item; -} -mod m4 { - pub use ::proxy::*; - pub const TS: ::c::Item = ::c::Item; -} - -fn f34() { - check(m3::TS{}); //~ ERROR c::Item - check(m3::TS); //~ ERROR c::TS - check(m4::TS{}); //~ ERROR c::TS - check(m4::TS); //~ ERROR c::Item -} -fn xf34() { - check(xm3::TS{}); //~ ERROR c::Item - check(xm3::TS); //~ ERROR c::TS - check(xm4::TS{}); //~ ERROR c::TS - check(xm4::TS); //~ ERROR c::Item -} - -mod m5 { - pub use ::proxy::*; - pub type US = ::c::Item; -} -mod m6 { - pub use ::proxy::*; - pub const US: ::c::Item = ::c::Item; -} - -fn f56() { - check(m5::US{}); //~ ERROR c::Item - check(m5::US); //~ ERROR c::US - check(m6::US{}); //~ ERROR c::US - check(m6::US); //~ ERROR c::Item -} -fn xf56() { - check(xm5::US{}); //~ ERROR c::Item - check(xm5::US); //~ ERROR c::US - check(xm6::US{}); //~ ERROR c::US - check(xm6::US); //~ ERROR c::Item -} - -mod m7 { - pub use ::proxy::*; - pub type V = ::c::Item; -} -mod m8 { - pub use ::proxy::*; - pub const V: ::c::Item = ::c::Item; -} - -fn f78() { - check(m7::V{}); //~ ERROR c::Item - check(m7::V); //~ ERROR name of a struct or struct variant - check(m8::V{}); //~ ERROR c::E - check(m8::V); //~ ERROR c::Item -} -fn xf78() { - check(xm7::V{}); //~ ERROR c::Item - check(xm7::V); //~ ERROR name of a struct or struct variant - 
check(xm8::V{}); //~ ERROR c::E - check(xm8::V); //~ ERROR c::Item -} - -mod m9 { - pub use ::proxy::*; - pub type TV = ::c::Item; -} -mod mA { - pub use ::proxy::*; - pub const TV: ::c::Item = ::c::Item; -} - -fn f9A() { - check(m9::TV{}); //~ ERROR c::Item - check(m9::TV); //~ ERROR c::E - check(mA::TV{}); //~ ERROR c::E - check(mA::TV); //~ ERROR c::Item -} -fn xf9A() { - check(xm9::TV{}); //~ ERROR c::Item - check(xm9::TV); //~ ERROR c::E - check(xmA::TV{}); //~ ERROR c::E - check(xmA::TV); //~ ERROR c::Item -} - -mod mB { - pub use ::proxy::*; - pub type UV = ::c::Item; -} -mod mC { - pub use ::proxy::*; - pub const UV: ::c::Item = ::c::Item; -} - -fn fBC() { - check(mB::UV{}); //~ ERROR c::Item - check(mB::UV); //~ ERROR c::E - check(mC::UV{}); //~ ERROR c::E - check(mC::UV); //~ ERROR c::Item -} -fn xfBC() { - check(xmB::UV{}); //~ ERROR c::Item - check(xmB::UV); //~ ERROR c::E - check(xmC::UV{}); //~ ERROR c::E - check(xmC::UV); //~ ERROR c::Item -} - -fn main() {} diff --git a/src/test/compile-fail/namespace-mix-new.rs b/src/test/compile-fail/namespace-mix.rs similarity index 96% rename from src/test/compile-fail/namespace-mix-new.rs rename to src/test/compile-fail/namespace-mix.rs index 0abe8bd439..cb7894b726 100644 --- a/src/test/compile-fail/namespace-mix-new.rs +++ b/src/test/compile-fail/namespace-mix.rs @@ -8,12 +8,10 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// aux-build:namespace-mix-new.rs +// aux-build:namespace-mix.rs -#![feature(item_like_imports, relaxed_adts)] - -extern crate namespace_mix_new; -use namespace_mix_new::*; +extern crate namespace_mix; +use namespace_mix::*; mod c { pub struct S {} diff --git a/src/test/compile-fail/no-link.rs b/src/test/compile-fail/no-link.rs index 8f6da99806..c4737a3739 100644 --- a/src/test/compile-fail/no-link.rs +++ b/src/test/compile-fail/no-link.rs @@ -8,11 +8,12 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
+// aux-build:empty-struct.rs + #[no_link] -extern crate libc; +extern crate empty_struct; +//~^ WARN custom derive crates and `#[no_link]` crates have no effect without `#[macro_use]` fn main() { - unsafe { - libc::abs(0); //~ ERROR unresolved name - } + empty_struct::XEmpty1; //~ ERROR unresolved name } diff --git a/src/test/compile-fail/no-method-suggested-traits.rs b/src/test/compile-fail/no-method-suggested-traits.rs index 9ccc7cc75a..ea8796d38f 100644 --- a/src/test/compile-fail/no-method-suggested-traits.rs +++ b/src/test/compile-fail/no-method-suggested-traits.rs @@ -34,31 +34,31 @@ fn main() { 1u32.method(); //~^ HELP following traits are implemented but not in scope, perhaps add a `use` for one of them //~^^ ERROR no method named - //~^^^ HELP `use foo::Bar` - //~^^^^ HELP `use no_method_suggested_traits::foo::PubPub` + //~^^^ HELP `use foo::Bar;` + //~^^^^ HELP `use no_method_suggested_traits::foo::PubPub;` std::rc::Rc::new(&mut Box::new(&1u32)).method(); //~^ HELP following traits are implemented but not in scope, perhaps add a `use` for one of them //~^^ ERROR no method named - //~^^^ HELP `use foo::Bar` - //~^^^^ HELP `use no_method_suggested_traits::foo::PubPub` + //~^^^ HELP `use foo::Bar;` + //~^^^^ HELP `use no_method_suggested_traits::foo::PubPub;` 'a'.method(); //~^ ERROR no method named //~^^ HELP the following trait is implemented but not in scope, perhaps add a `use` for it: - //~^^^ HELP `use foo::Bar` + //~^^^ HELP `use foo::Bar;` std::rc::Rc::new(&mut Box::new(&'a')).method(); //~^ ERROR no method named //~^^ HELP the following trait is implemented but not in scope, perhaps add a `use` for it: - //~^^^ HELP `use foo::Bar` + //~^^^ HELP `use foo::Bar;` 1i32.method(); //~^ ERROR no method named //~^^ HELP the following trait is implemented but not in scope, perhaps add a `use` for it: - //~^^^ HELP `use no_method_suggested_traits::foo::PubPub` + //~^^^ HELP `use no_method_suggested_traits::foo::PubPub;` std::rc::Rc::new(&mut Box::new(&1i32)).method(); //~^ ERROR no method named //~^^ HELP the following trait is implemented but not in scope, perhaps add a `use` for it: - //~^^^ HELP `use no_method_suggested_traits::foo::PubPub` + //~^^^ HELP `use no_method_suggested_traits::foo::PubPub;` Foo.method(); //~^ ERROR no method named diff --git a/src/test/compile-fail/non-copyable-void.rs b/src/test/compile-fail/non-copyable-void.rs index 6067b71280..4383f3ede0 100644 --- a/src/test/compile-fail/non-copyable-void.rs +++ b/src/test/compile-fail/non-copyable-void.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(libc)] + extern crate libc; fn main() { diff --git a/src/test/compile-fail/not-enough-arguments.rs b/src/test/compile-fail/not-enough-arguments.rs index 660d48da4d..e13008df0d 100644 --- a/src/test/compile-fail/not-enough-arguments.rs +++ b/src/test/compile-fail/not-enough-arguments.rs @@ -13,12 +13,12 @@ // unrelated errors. 
fn foo(a: isize, b: isize, c: isize, d:isize) { + //~^ NOTE defined here panic!(); } fn main() { foo(1, 2, 3); //~^ ERROR this function takes 4 parameters but 3 - //~| NOTE the following parameter types were expected: - //~| NOTE isize, isize, isize, isize + //~| NOTE expected 4 parameters } diff --git a/src/test/compile-fail/numeric-fields-feature-gate.rs b/src/test/compile-fail/numeric-fields-feature-gate.rs new file mode 100644 index 0000000000..3ce85813a9 --- /dev/null +++ b/src/test/compile-fail/numeric-fields-feature-gate.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +struct S(u8); + +fn main() { + let s = S{0: 10}; //~ ERROR numeric fields in struct expressions are unstable + match s { + S{0: a, ..} => {} //~ ERROR numeric fields in struct patterns are unstable + } +} diff --git a/src/test/compile-fail/on-unimplemented/on-trait.rs b/src/test/compile-fail/on-unimplemented/on-trait.rs index 3a789f3fae..0f4b0919b6 100644 --- a/src/test/compile-fail/on-unimplemented/on-trait.rs +++ b/src/test/compile-fail/on-unimplemented/on-trait.rs @@ -16,7 +16,7 @@ trait Foo {} fn foobar>() -> T { - + panic!() } #[rustc_on_unimplemented="a collection of type `{Self}` cannot be built from an iterator over elements of type `{A}`"] diff --git a/src/test/compile-fail/on-unimplemented/slice-index.rs b/src/test/compile-fail/on-unimplemented/slice-index.rs index d528d0e626..d28b823ddc 100644 --- a/src/test/compile-fail/on-unimplemented/slice-index.rs +++ b/src/test/compile-fail/on-unimplemented/slice-index.rs @@ -9,6 +9,7 @@ // except according to those terms. // Test new Index error message for slices +// ignore-tidy-linelength #![feature(rustc_attrs)] @@ -17,12 +18,12 @@ use std::ops::Index; #[rustc_error] fn main() { let x = &[1, 2, 3] as &[i32]; - x[1i32]; - //~^ ERROR E0277 - //~| NOTE the trait `std::ops::Index` is not implemented for `[i32]` - //~| NOTE slice indices are of type `usize` - x[..1i32]; - //~^ ERROR E0277 - //~| NOTE the trait `std::ops::Index>` is not implemented for `[i32]` - //~| NOTE slice indices are of type `usize` + x[1i32]; //~ ERROR E0277 + //~| NOTE slice indices are of type `usize` or ranges of `usize` + //~| NOTE trait `std::slice::SliceIndex` is not implemented for `i32` + //~| NOTE required because of the requirements on the impl of `std::ops::Index` + x[..1i32]; //~ ERROR E0277 + //~| NOTE slice indices are of type `usize` or ranges of `usize` + //~| NOTE trait `std::slice::SliceIndex` is not implemented for `std::ops::RangeTo` + //~| NOTE requirements on the impl of `std::ops::Index>` } diff --git a/src/test/compile-fail/overloaded-calls-bad.rs b/src/test/compile-fail/overloaded-calls-bad.rs index 0aa9af3c8d..3295e2bebd 100644 --- a/src/test/compile-fail/overloaded-calls-bad.rs +++ b/src/test/compile-fail/overloaded-calls-bad.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] use std::ops::FnMut; @@ -41,8 +41,8 @@ fn main() { //~| NOTE found type let ans = s(); //~^ ERROR this function takes 1 parameter but 0 parameters were supplied - //~| NOTE the following parameter type was expected + //~| NOTE expected 1 parameter let ans = s("burma", "shave"); //~^ ERROR this function takes 1 parameter but 2 parameters were supplied - //~| NOTE the following parameter type was expected + //~| NOTE expected 1 parameter } diff --git a/src/test/compile-fail/overloaded-calls-nontuple.rs b/src/test/compile-fail/overloaded-calls-nontuple.rs index ea47d67641..7113224664 100644 --- a/src/test/compile-fail/overloaded-calls-nontuple.rs +++ b/src/test/compile-fail/overloaded-calls-nontuple.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] use std::ops::FnMut; diff --git a/src/test/compile-fail/paths-in-macro-invocations.rs b/src/test/compile-fail/paths-in-macro-invocations.rs deleted file mode 100644 index c69b7e526c..0000000000 --- a/src/test/compile-fail/paths-in-macro-invocations.rs +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright 2016 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -::foo::bar!(); //~ ERROR expected macro name without module separators -foo::bar!(); //~ ERROR expected macro name without module separators - -trait T { - foo::bar!(); //~ ERROR expected macro name without module separators - ::foo::bar!(); //~ ERROR expected macro name without module separators -} - -struct S { - x: foo::bar!(), //~ ERROR expected macro name without module separators - y: ::foo::bar!(), //~ ERROR expected macro name without module separators -} - -impl S { - foo::bar!(); //~ ERROR expected macro name without module separators - ::foo::bar!(); //~ ERROR expected macro name without module separators -} - -fn main() { - foo::bar!(); //~ ERROR expected macro name without module separators - ::foo::bar!(); //~ ERROR expected macro name without module separators - - let _ = foo::bar!(); //~ ERROR expected macro name without module separators - let _ = ::foo::bar!(); //~ ERROR expected macro name without module separators - - let foo::bar!() = 0; //~ ERROR expected macro name without module separators - let ::foo::bar!() = 0; //~ ERROR expected macro name without module separators -} diff --git a/src/test/compile-fail/privacy2.rs b/src/test/compile-fail/privacy2.rs index 376e95312b..113dd28794 100644 --- a/src/test/compile-fail/privacy2.rs +++ b/src/test/compile-fail/privacy2.rs @@ -31,8 +31,7 @@ fn test1() { fn test2() { use bar::glob::foo; - //~^ ERROR unresolved import `bar::glob::foo` [E0432] - //~| no `foo` in `bar::glob` + //~^ ERROR `foo` is private } #[start] fn main(_: isize, _: *const *const u8) -> isize { 3 } diff --git a/src/test/compile-fail/private-in-public-lint.rs b/src/test/compile-fail/private-in-public-lint.rs index 8e23bfcfb1..030fbfc491 100644 --- a/src/test/compile-fail/private-in-public-lint.rs +++ b/src/test/compile-fail/private-in-public-lint.rs @@ -13,7 +13,7 @@ mod m1 { struct Priv; impl Pub { - pub fn f() -> Priv {} //~ ERROR private type in public interface + pub fn 
f() -> Priv {Priv} //~ ERROR private type `m1::Priv` in public interface } } @@ -24,7 +24,7 @@ mod m2 { struct Priv; impl Pub { - pub fn f() -> Priv {} //~ ERROR private type in public interface + pub fn f() -> Priv {Priv} //~ ERROR private type `m2::Priv` in public interface } } diff --git a/src/test/compile-fail/private-in-public-warn.rs b/src/test/compile-fail/private-in-public-warn.rs index 455de37aee..3496348985 100644 --- a/src/test/compile-fail/private-in-public-warn.rs +++ b/src/test/compile-fail/private-in-public-warn.rs @@ -24,34 +24,34 @@ mod types { type Alias; } - pub type Alias = Priv; //~ ERROR private type in public interface + pub type Alias = Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error pub enum E { - V1(Priv), //~ ERROR private type in public interface + V1(Priv), //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - V2 { field: Priv }, //~ ERROR private type in public interface + V2 { field: Priv }, //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error } pub trait Tr { - const C: Priv = Priv; //~ ERROR private type in public interface + const C: Priv = Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - type Alias = Priv; //~ ERROR private type in public interface + type Alias = Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - fn f1(arg: Priv) {} //~ ERROR private type in public interface + fn f1(arg: Priv) {} //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - fn f2() -> Priv { panic!() } //~ ERROR private type in public interface + fn f2() -> Priv { panic!() } //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error } extern { - pub static ES: Priv; //~ ERROR private type in public interface + pub static ES: Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - pub fn ef1(arg: Priv); //~ ERROR private type in public interface + pub fn ef1(arg: Priv); //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error - pub fn ef2() -> Priv; //~ ERROR private type in public interface + pub fn ef2() -> Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error } impl PubTr for Pub { - type Alias = Priv; //~ ERROR private type in public interface + type Alias = Priv; //~ ERROR private type `types::Priv` in public interface //~^ WARNING hard error } } @@ -61,22 +61,23 @@ mod traits { pub struct Pub(T); pub trait PubTr {} - pub type Alias = T; //~ ERROR private trait in public interface + pub type Alias = T; //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARN trait bounds are not (yet) enforced in type definitions //~| WARNING hard error - pub trait Tr1: PrivTr {} //~ ERROR private trait in public interface + pub trait Tr1: PrivTr {} //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARNING hard error - pub trait Tr2 {} //~ ERROR private trait in public interface + pub trait Tr2 {} //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARNING hard error pub trait Tr3 { - type Alias: PrivTr; //~ ERROR private trait in public interface - //~^ WARNING hard error - fn f(arg: T) {} //~ ERROR private trait in public interface + //~^ ERROR private trait `traits::PrivTr` in public interface + //~| WARNING hard error + type Alias: PrivTr; + fn f(arg: T) {} //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARNING hard 
error } - impl Pub {} //~ ERROR private trait in public interface + impl Pub {} //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARNING hard error - impl PubTr for Pub {} //~ ERROR private trait in public interface + impl PubTr for Pub {} //~ ERROR private trait `traits::PrivTr` in public interface //~^ WARNING hard error } @@ -85,18 +86,23 @@ mod traits_where { pub struct Pub(T); pub trait PubTr {} - pub type Alias where T: PrivTr = T; //~ ERROR private trait in public interface - //~^ WARNING hard error - pub trait Tr2 where T: PrivTr {} //~ ERROR private trait in public interface - //~^ WARNING hard error + pub type Alias where T: PrivTr = T; + //~^ ERROR private trait `traits_where::PrivTr` in public interface + //~| WARNING hard error + pub trait Tr2 where T: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface + //~| WARNING hard error pub trait Tr3 { - fn f(arg: T) where T: PrivTr {} //~ ERROR private trait in public interface - //~^ WARNING hard error + fn f(arg: T) where T: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface + //~| WARNING hard error } - impl Pub where T: PrivTr {} //~ ERROR private trait in public interface - //~^ WARNING hard error - impl PubTr for Pub where T: PrivTr {} //~ ERROR private trait in public interface - //~^ WARNING hard error + impl Pub where T: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface + //~| WARNING hard error + impl PubTr for Pub where T: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface + //~| WARNING hard error } mod generics { @@ -105,13 +111,14 @@ mod generics { trait PrivTr {} pub trait PubTr {} - pub trait Tr1: PrivTr {} //~ ERROR private trait in public interface + pub trait Tr1: PrivTr {} + //~^ ERROR private trait `generics::PrivTr` in public interface + //~| WARNING hard error + pub trait Tr2: PubTr {} //~ ERROR private type `generics::Priv` in public interface //~^ WARNING hard error - pub trait Tr2: PubTr {} //~ ERROR private type in public interface + pub trait Tr3: PubTr<[Priv; 1]> {} //~ ERROR private type `generics::Priv` in public interface //~^ WARNING hard error - pub trait Tr3: PubTr<[Priv; 1]> {} //~ ERROR private type in public interface - //~^ WARNING hard error - pub trait Tr4: PubTr> {} //~ ERROR private type in public interface + pub trait Tr4: PubTr> {} //~ ERROR private type `generics::Priv` in public interface //~^ WARNING hard error } @@ -138,7 +145,7 @@ mod impls { type Alias = Priv; // OK } impl PubTr for Pub { - type Alias = Priv; //~ ERROR private type in public interface + type Alias = Priv; //~ ERROR private type `impls::Priv` in public interface //~^ WARNING hard error } } @@ -210,23 +217,23 @@ mod aliases_pub { pub trait Tr2: PrivUseAliasTr {} // OK impl PrivAlias { - pub fn f(arg: Priv) {} //~ ERROR private type in public interface + pub fn f(arg: Priv) {} //~ ERROR private type `aliases_pub::Priv` in public interface //~^ WARNING hard error } // This doesn't even parse // impl ::AssocAlias { - // pub fn f(arg: Priv) {} // ERROR private type in public interface + // pub fn f(arg: Priv) {} // ERROR private type `aliases_pub::Priv` in public interface // } impl PrivUseAliasTr for PrivUseAlias { - type Check = Priv; //~ ERROR private type in public interface + type Check = Priv; //~ ERROR private type `aliases_pub::Priv` in public interface //~^ WARNING hard error } impl PrivUseAliasTr for PrivAlias { - type Check = Priv; //~ ERROR private type in public interface + type 
Check = Priv; //~ ERROR private type `aliases_pub::Priv` in public interface //~^ WARNING hard error } impl PrivUseAliasTr for ::AssocAlias { - type Check = Priv; //~ ERROR private type in public interface + type Check = Priv; //~ ERROR private type `aliases_pub::Priv` in public interface //~^ WARNING hard error } } @@ -251,11 +258,13 @@ mod aliases_priv { type AssocAlias = Priv3; } - pub trait Tr1: PrivUseAliasTr {} //~ ERROR private trait in public interface - //~^ WARNING hard error - pub trait Tr2: PrivUseAliasTr {} //~ ERROR private trait in public interface - //~^ ERROR private type in public interface + pub trait Tr1: PrivUseAliasTr {} + //~^ ERROR private trait `aliases_priv::PrivTr1` in public interface //~| WARNING hard error + pub trait Tr2: PrivUseAliasTr {} + //~^ ERROR private trait `aliases_priv::PrivTr1` in public interface + //~| WARNING hard error + //~| ERROR private type `aliases_priv::Priv2` in public interface //~| WARNING hard error impl PrivUseAlias { diff --git a/src/test/compile-fail/private-in-public.rs b/src/test/compile-fail/private-in-public.rs index 7d4dcfd314..b819ef116e 100644 --- a/src/test/compile-fail/private-in-public.rs +++ b/src/test/compile-fail/private-in-public.rs @@ -21,16 +21,16 @@ mod types { type Alias; } - pub const C: Priv = Priv; //~ ERROR private type in public interface - pub static S: Priv = Priv; //~ ERROR private type in public interface - pub fn f1(arg: Priv) {} //~ ERROR private type in public interface - pub fn f2() -> Priv { panic!() } //~ ERROR private type in public interface - pub struct S1(pub Priv); //~ ERROR private type in public interface - pub struct S2 { pub field: Priv } //~ ERROR private type in public interface + pub const C: Priv = Priv; //~ ERROR private type `types::Priv` in public interface + pub static S: Priv = Priv; //~ ERROR private type `types::Priv` in public interface + pub fn f1(arg: Priv) {} //~ ERROR private type `types::Priv` in public interface + pub fn f2() -> Priv { panic!() } //~ ERROR private type `types::Priv` in public interface + pub struct S1(pub Priv); //~ ERROR private type `types::Priv` in public interface + pub struct S2 { pub field: Priv } //~ ERROR private type `types::Priv` in public interface impl Pub { - pub const C: Priv = Priv; //~ ERROR private type in public interface - pub fn f1(arg: Priv) {} //~ ERROR private type in public interface - pub fn f2() -> Priv { panic!() } //~ ERROR private type in public interface + pub const C: Priv = Priv; //~ ERROR private type `types::Priv` in public interface + pub fn f1(arg: Priv) {} //~ ERROR private type `types::Priv` in public interface + pub fn f2() -> Priv { panic!() } //~ ERROR private type `types::Priv` in public interface } } @@ -39,11 +39,11 @@ mod traits { pub struct Pub(T); pub trait PubTr {} - pub enum E { V(T) } //~ ERROR private trait in public interface - pub fn f(arg: T) {} //~ ERROR private trait in public interface - pub struct S1(T); //~ ERROR private trait in public interface - impl Pub { - pub fn f(arg: U) {} //~ ERROR private trait in public interface + pub enum E { V(T) } //~ ERROR private trait `traits::PrivTr` in public interface + pub fn f(arg: T) {} //~ ERROR private trait `traits::PrivTr` in public interface + pub struct S1(T); //~ ERROR private trait `traits::PrivTr` in public interface + impl Pub { //~ ERROR private trait `traits::PrivTr` in public interface + pub fn f(arg: U) {} //~ ERROR private trait `traits::PrivTr` in public interface } } @@ -52,11 +52,16 @@ mod traits_where { pub struct Pub(T); pub trait PubTr 
{} - pub enum E where T: PrivTr { V(T) } //~ ERROR private trait in public interface - pub fn f(arg: T) where T: PrivTr {} //~ ERROR private trait in public interface - pub struct S1(T) where T: PrivTr; //~ ERROR private trait in public interface + pub enum E where T: PrivTr { V(T) } + //~^ ERROR private trait `traits_where::PrivTr` in public interface + pub fn f(arg: T) where T: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface + pub struct S1(T) where T: PrivTr; + //~^ ERROR private trait `traits_where::PrivTr` in public interface impl Pub where T: PrivTr { - pub fn f(arg: U) where U: PrivTr {} //~ ERROR private trait in public interface + //~^ ERROR private trait `traits_where::PrivTr` in public interface + pub fn f(arg: U) where U: PrivTr {} + //~^ ERROR private trait `traits_where::PrivTr` in public interface } } @@ -66,9 +71,10 @@ mod generics { trait PrivTr {} pub trait PubTr {} - pub fn f1(arg: [Priv; 1]) {} //~ ERROR private type in public interface - pub fn f2(arg: Pub) {} //~ ERROR private type in public interface - pub fn f3(arg: Priv) {} //~ ERROR private type in public interface + pub fn f1(arg: [Priv; 1]) {} //~ ERROR private type `generics::Priv` in public interface + pub fn f2(arg: Pub) {} //~ ERROR private type `generics::Priv` in public interface + pub fn f3(arg: Priv) {} + //~^ ERROR private type `generics::Priv` in public interface } mod impls { @@ -82,7 +88,7 @@ mod impls { } impl Pub { - pub fn f(arg: Priv) {} //~ ERROR private type in public interface + pub fn f(arg: Priv) {} //~ ERROR private type `impls::Priv` in public interface } } @@ -101,15 +107,17 @@ mod aliases_pub { use self::m::PubTr as PrivUseAliasTr; type PrivAlias = m::Pub2; trait PrivTr { - type AssocAlias = m::Pub3; + type Assoc = m::Pub3; } impl PrivTr for Priv {} // This should be OK, but associated type aliases are not substituted yet - pub fn f3(arg: ::AssocAlias) {} //~ ERROR private type in public interface + pub fn f3(arg: ::Assoc) {} + //~^ ERROR private type `::Assoc` in public interface + //~| ERROR private type `aliases_pub::Priv` in public interface impl PrivUseAlias { - pub fn f(arg: Priv) {} //~ ERROR private type in public interface + pub fn f(arg: Priv) {} //~ ERROR private type `aliases_pub::Priv` in public interface } } @@ -127,13 +135,15 @@ mod aliases_priv { use self::PrivTr1 as PrivUseAliasTr; type PrivAlias = Priv2; trait PrivTr { - type AssocAlias = Priv3; + type Assoc = Priv3; } impl PrivTr for Priv {} - pub fn f1(arg: PrivUseAlias) {} //~ ERROR private type in public interface - pub fn f2(arg: PrivAlias) {} //~ ERROR private type in public interface - pub fn f3(arg: ::AssocAlias) {} //~ ERROR private type in public interface + pub fn f1(arg: PrivUseAlias) {} //~ ERROR private type `aliases_priv::Priv1` in public interface + pub fn f2(arg: PrivAlias) {} //~ ERROR private type `aliases_priv::Priv2` in public interface + pub fn f3(arg: ::Assoc) {} + //~^ ERROR private type `::Assoc` in public + //~| ERROR private type `aliases_priv::Priv` in public interface } mod aliases_params { @@ -141,8 +151,9 @@ mod aliases_params { type PrivAliasGeneric = T; type Result = ::std::result::Result; - pub fn f2(arg: PrivAliasGeneric) {} //~ ERROR private type in public interface - pub fn f3(arg: Result) {} //~ ERROR private type in public interface + pub fn f2(arg: PrivAliasGeneric) {} + //~^ ERROR private type `aliases_params::Priv` in public interface + pub fn f3(arg: Result) {} //~ ERROR private type `aliases_params::Priv` in public interface } fn main() {} diff 
--git a/src/test/compile-fail/qualified-path-params.rs b/src/test/compile-fail/qualified-path-params.rs index 82b0536a64..a7bc27e174 100644 --- a/src/test/compile-fail/qualified-path-params.rs +++ b/src/test/compile-fail/qualified-path-params.rs @@ -28,7 +28,7 @@ impl S { fn main() { match 10 { ::A::f:: => {} - //~^ ERROR expected unit struct/variant or constant, found method `Tr::A::f` + //~^ ERROR expected unit struct/variant or constant, found method `<::A>::f` 0 ... ::A::f:: => {} //~ ERROR only char and numeric types are allowed in range } } diff --git a/src/test/compile-fail/range_inclusive_gate.rs b/src/test/compile-fail/range_inclusive_gate.rs index deac152ec8..1d1153e951 100644 --- a/src/test/compile-fail/range_inclusive_gate.rs +++ b/src/test/compile-fail/range_inclusive_gate.rs @@ -16,10 +16,8 @@ pub fn main() { let _: std::ops::RangeInclusive<_> = { use std::intrinsics; 1 } ... { use std::intrinsics; 2 }; //~^ ERROR use of unstable library feature 'inclusive_range' - //~^^ ERROR core_intrinsics - //~^^^ ERROR core_intrinsics - //~^^^^ WARN unused_imports - //~^^^^^ WARN unused_imports + //~| ERROR core_intrinsics + //~| ERROR core_intrinsics } diff --git a/src/test/compile-fail/recursive-reexports.rs b/src/test/compile-fail/recursive-reexports.rs index 6fd52beeec..aa444d45ee 100644 --- a/src/test/compile-fail/recursive-reexports.rs +++ b/src/test/compile-fail/recursive-reexports.rs @@ -10,6 +10,8 @@ // aux-build:recursive_reexports.rs -fn f() -> recursive_reexports::S {} //~ ERROR undeclared +extern crate recursive_reexports; + +fn f() -> recursive_reexports::S {} //~ ERROR type name `recursive_reexports::S` is undefined fn main() {} diff --git a/src/test/compile-fail/reflect-assoc.rs b/src/test/compile-fail/reflect-assoc.rs index 7cac3f41d5..47da97daaf 100644 --- a/src/test/compile-fail/reflect-assoc.rs +++ b/src/test/compile-fail/reflect-assoc.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(reflect_marker)] + // Test that types that appear in assoc bindings in an object // type are subject to the reflect check. diff --git a/src/test/compile-fail/reflect-object-param.rs b/src/test/compile-fail/reflect-object-param.rs index 476b498ae6..be0dbd801b 100644 --- a/src/test/compile-fail/reflect-object-param.rs +++ b/src/test/compile-fail/reflect-object-param.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(reflect_marker)] + // Test that types that appear in input types in an object type are // subject to the reflect check. diff --git a/src/test/compile-fail/reflect.rs b/src/test/compile-fail/reflect.rs index fdd569e2c1..28ff7c82c2 100644 --- a/src/test/compile-fail/reflect.rs +++ b/src/test/compile-fail/reflect.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(reflect_marker)] + // Test that there is no way to get a generic type `T` to be // considered as `Reflect` (or accessible via something that is // considered `Reflect`) without a reflect bound, but that any diff --git a/src/test/compile-fail/regions-steal-closure.rs b/src/test/compile-fail/regions-steal-closure.rs index 8ade8b239b..59fe1ce3af 100644 --- a/src/test/compile-fail/regions-steal-closure.rs +++ b/src/test/compile-fail/regions-steal-closure.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
+#![feature(fn_traits)]
+
 struct closure_box<'a> {
     cl: Box,
 }
diff --git a/src/test/compile-fail/required-lang-item.rs b/src/test/compile-fail/required-lang-item.rs
index 1aa22a1676..ce40702b3d 100644
--- a/src/test/compile-fail/required-lang-item.rs
+++ b/src/test/compile-fail/required-lang-item.rs
@@ -11,6 +11,7 @@
 #![feature(lang_items, no_core)]
 #![no_core]
 
+#[lang="copy"] pub trait Copy { }
 #[lang="sized"] pub trait Sized { }
 
 // error-pattern:requires `start` lang_item
diff --git a/src/test/compile-fail/resolve-primitive-fallback.rs b/src/test/compile-fail/resolve-primitive-fallback.rs
new file mode 100644
index 0000000000..1e43933ad0
--- /dev/null
+++ b/src/test/compile-fail/resolve-primitive-fallback.rs
@@ -0,0 +1,20 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 or the MIT license
+// , at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+    // Make sure primitive type fallback doesn't work in value namespace
+    std::mem::size_of(u16);
+    //~^ ERROR unresolved name `u16`
+    //~| ERROR this function takes 0 parameters but 1 parameter was supplied
+
+    // Make sure primitive type fallback doesn't work with global paths
+    let _: ::u8;
+    //~^ ERROR type name `u8` is undefined or not in scope
+}
diff --git a/src/test/compile-fail/resolve_self_super_hint.rs b/src/test/compile-fail/resolve_self_super_hint.rs
index a23ac80fca..530dc873f7 100644
--- a/src/test/compile-fail/resolve_self_super_hint.rs
+++ b/src/test/compile-fail/resolve_self_super_hint.rs
@@ -8,6 +8,8 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
+#![feature(collections)]
+
 mod a {
     extern crate collections;
     use collections::HashMap;
diff --git a/src/test/compile-fail/rfc1717/missing-link-attr.rs b/src/test/compile-fail/rfc1717/missing-link-attr.rs
new file mode 100644
index 0000000000..810efdedfd
--- /dev/null
+++ b/src/test/compile-fail/rfc1717/missing-link-attr.rs
@@ -0,0 +1,14 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 or the MIT license
+// , at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -l foo:bar
+// error-pattern: renaming of the library `foo` was specified
+
+#![crate_type = "lib"]
diff --git a/src/test/compile-fail/rfc1717/multiple-renames.rs b/src/test/compile-fail/rfc1717/multiple-renames.rs
new file mode 100644
index 0000000000..e75c1a14b2
--- /dev/null
+++ b/src/test/compile-fail/rfc1717/multiple-renames.rs
@@ -0,0 +1,17 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 or the MIT license
+// , at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+ +// compile-flags: -l foo:bar -l foo:baz +// error-pattern: multiple renamings were specified for library + +#![crate_type = "lib"] + +#[link(name = "foo")] +extern "C" {} diff --git a/src/test/compile-fail/rfc1717/rename-to-empty.rs b/src/test/compile-fail/rfc1717/rename-to-empty.rs new file mode 100644 index 0000000000..ab8c238bc2 --- /dev/null +++ b/src/test/compile-fail/rfc1717/rename-to-empty.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -l foo: +// error-pattern: an empty renaming target was specified for library + +#![crate_type = "lib"] + +#[link(name = "foo")] +extern "C" {} diff --git a/src/test/compile-fail/shadowed-use-visibility.rs b/src/test/compile-fail/shadowed-use-visibility.rs index 1bf7f39338..e7e57a73de 100644 --- a/src/test/compile-fail/shadowed-use-visibility.rs +++ b/src/test/compile-fail/shadowed-use-visibility.rs @@ -16,11 +16,11 @@ mod foo { } mod bar { - use foo::bar::f as g; //~ ERROR unresolved import + use foo::bar::f as g; //~ ERROR module `bar` is private use foo as f; pub use foo::*; } -use bar::f::f; //~ ERROR unresolved import +use bar::f::f; //~ ERROR module `f` is private fn main() {} diff --git a/src/test/compile-fail/stability-attribute-sanity-2.rs b/src/test/compile-fail/stability-attribute-sanity-2.rs index d978d4ce0e..0ddc3a8dce 100644 --- a/src/test/compile-fail/stability-attribute-sanity-2.rs +++ b/src/test/compile-fail/stability-attribute-sanity-2.rs @@ -23,9 +23,4 @@ fn f2() { } #[unstable(feature = "a", issue = "no")] //~ ERROR incorrect 'issue' fn f3() { } -#[macro_export] -macro_rules! mac { //~ ERROR This node does not have a stability attribute - () => () -} - fn main() { } diff --git a/src/test/compile-fail/stability-attribute-sanity-3.rs b/src/test/compile-fail/stability-attribute-sanity-3.rs new file mode 100644 index 0000000000..ddefd24b92 --- /dev/null +++ b/src/test/compile-fail/stability-attribute-sanity-3.rs @@ -0,0 +1,22 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// More checks that stability attributes are used correctly + +#![feature(staged_api)] + +#![stable(feature = "test_feature", since = "1.0.0")] + +#[macro_export] +macro_rules! mac { //~ ERROR This node does not have a stability attribute + () => () +} + +fn main() { } diff --git a/src/test/compile-fail/static-mut-foreign-requires-unsafe.rs b/src/test/compile-fail/static-mut-foreign-requires-unsafe.rs index 0e44af19a7..f52b128e7e 100644 --- a/src/test/compile-fail/static-mut-foreign-requires-unsafe.rs +++ b/src/test/compile-fail/static-mut-foreign-requires-unsafe.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
+#![feature(libc)] + extern crate libc; extern { diff --git a/src/test/compile-fail/task-rng-isnt-sendable.rs b/src/test/compile-fail/task-rng-isnt-sendable.rs index c987d9f2f4..d85717f8ce 100644 --- a/src/test/compile-fail/task-rng-isnt-sendable.rs +++ b/src/test/compile-fail/task-rng-isnt-sendable.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(rand)] + // ensure that the ThreadRng isn't/doesn't become accidentally sendable. use std::__rand::ThreadRng; diff --git a/src/test/compile-fail/trait-bounds-cant-coerce.rs b/src/test/compile-fail/trait-bounds-cant-coerce.rs index 1fff812af5..9f832c7b6e 100644 --- a/src/test/compile-fail/trait-bounds-cant-coerce.rs +++ b/src/test/compile-fail/trait-bounds-cant-coerce.rs @@ -21,10 +21,10 @@ fn c(x: Box) { } fn d(x: Box) { - a(x); //~ ERROR mismatched types - //~| expected type `Box` - //~| found type `Box` - //~| expected bounds `Send`, found no bounds + a(x); //~ ERROR mismatched types [E0308] + //~| NOTE expected type `Box` + //~| NOTE found type `Box` + //~| NOTE expected trait `Foo + std::marker::Send`, found trait `Foo` } fn main() { } diff --git a/src/test/compile-fail/trait-bounds-not-on-bare-trait.rs b/src/test/compile-fail/trait-bounds-not-on-bare-trait.rs index fd46d1a629..983c66ec1c 100644 --- a/src/test/compile-fail/trait-bounds-not-on-bare-trait.rs +++ b/src/test/compile-fail/trait-bounds-not-on-bare-trait.rs @@ -15,7 +15,7 @@ trait Foo { // This should emit the less confusing error, not the more confusing one. fn foo(_x: Foo + Send) { - //~^ ERROR `Foo + Send + 'static: std::marker::Sized` is not satisfied + //~^ ERROR the trait bound `Foo + std::marker::Send + 'static: std::marker::Sized` is not } fn main() { } diff --git a/src/test/compile-fail/traits-multidispatch-convert-ambig-dest.rs b/src/test/compile-fail/traits-multidispatch-convert-ambig-dest.rs index e6545063db..ed2ffa995e 100644 --- a/src/test/compile-fail/traits-multidispatch-convert-ambig-dest.rs +++ b/src/test/compile-fail/traits-multidispatch-convert-ambig-dest.rs @@ -34,8 +34,8 @@ where T : Convert fn a() { test(22, std::default::Default::default()); - //~^ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + //~^ ERROR unable to infer enough type information about `U` [E0282] + //~| NOTE cannot infer type for `U` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/type_length_limit.rs b/src/test/compile-fail/type_length_limit.rs new file mode 100644 index 0000000000..d283f392d7 --- /dev/null +++ b/src/test/compile-fail/type_length_limit.rs @@ -0,0 +1,35 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// error-pattern: reached the type-length limit while instantiating + +// Test that the type length limit can be changed. + +#![allow(dead_code)] +#![type_length_limit="256"] + +macro_rules! link { + ($id:ident, $t:ty) => { + pub type $id = ($t, $t, $t); + } +} + +link! { A, B } +link! { B, C } +link! { C, D } +link! { D, E } +link! { E, F } +link! 
{ F, G } + +pub struct G; + +fn main() { + drop::>(None); +} diff --git a/src/test/compile-fail/typeck-builtin-bound-type-parameters.rs b/src/test/compile-fail/typeck-builtin-bound-type-parameters.rs index 41242a44f5..0d98e044ab 100644 --- a/src/test/compile-fail/typeck-builtin-bound-type-parameters.rs +++ b/src/test/compile-fail/typeck-builtin-bound-type-parameters.rs @@ -9,16 +9,16 @@ // except according to those terms. fn foo1, U>(x: T) {} -//~^ ERROR E0244 -//~| NOTE expected no type arguments, found 1 +//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244] +//~| NOTE expected no type arguments trait Trait: Copy {} -//~^ ERROR E0244 -//~| NOTE expected no type arguments, found 1 +//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244] +//~| NOTE expected no type arguments struct MyStruct1>; -//~^ ERROR E0244 -//~| NOTE expected no type arguments, found 1 +//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244] +//~| NOTE expected no type arguments struct MyStruct2<'a, T: Copy<'a>>; //~^ ERROR: wrong number of lifetime parameters: expected 0, found 1 @@ -26,8 +26,8 @@ struct MyStruct2<'a, T: Copy<'a>>; fn foo2<'a, T:Copy<'a, U>, U>(x: T) {} -//~^ ERROR E0244 -//~| NOTE expected no type arguments, found 1 +//~^ ERROR wrong number of type arguments: expected 0, found 1 [E0244] +//~| NOTE expected no type arguments //~| ERROR: wrong number of lifetime parameters: expected 0, found 1 //~| NOTE unexpected lifetime parameter diff --git a/src/test/compile-fail/typeck-cast-pointer-to-float.rs b/src/test/compile-fail/typeck-cast-pointer-to-float.rs index 2277b1bad7..3f8b8f49cb 100644 --- a/src/test/compile-fail/typeck-cast-pointer-to-float.rs +++ b/src/test/compile-fail/typeck-cast-pointer-to-float.rs @@ -12,5 +12,4 @@ fn main() { let x : i16 = 22; ((&x) as *const i16) as f32; //~^ ERROR casting `*const i16` as `f32` is invalid - //~^^ HELP cast through a usize first } diff --git a/src/test/compile-fail/typeck_type_placeholder_lifetime_1.rs b/src/test/compile-fail/typeck_type_placeholder_lifetime_1.rs index f40445a030..ad57752b6f 100644 --- a/src/test/compile-fail/typeck_type_placeholder_lifetime_1.rs +++ b/src/test/compile-fail/typeck_type_placeholder_lifetime_1.rs @@ -17,6 +17,6 @@ struct Foo<'a, T:'a> { pub fn main() { let c: Foo<_, _> = Foo { r: &5 }; - //~^ ERROR E0244 - //~| NOTE expected 1 type argument, found 2 + //~^ ERROR wrong number of type arguments: expected 1, found 2 [E0244] + //~| NOTE expected 1 type argument } diff --git a/src/test/compile-fail/typeck_type_placeholder_lifetime_2.rs b/src/test/compile-fail/typeck_type_placeholder_lifetime_2.rs index 47898690fc..f1ecad0056 100644 --- a/src/test/compile-fail/typeck_type_placeholder_lifetime_2.rs +++ b/src/test/compile-fail/typeck_type_placeholder_lifetime_2.rs @@ -17,6 +17,6 @@ struct Foo<'a, T:'a> { pub fn main() { let c: Foo<_, usize> = Foo { r: &5 }; - //~^ ERROR E0244 - //~| NOTE expected 1 type argument, found 2 + //~^ ERROR wrong number of type arguments: expected 1, found 2 [E0244] + //~| NOTE expected 1 type argument } diff --git a/src/test/compile-fail/unboxed-closure-sugar-wrong-trait.rs b/src/test/compile-fail/unboxed-closure-sugar-wrong-trait.rs index 50f4f3b98b..95d78c0750 100644 --- a/src/test/compile-fail/unboxed-closure-sugar-wrong-trait.rs +++ b/src/test/compile-fail/unboxed-closure-sugar-wrong-trait.rs @@ -13,8 +13,8 @@ trait Trait {} fn f isize>(x: F) {} -//~^ ERROR E0244 -//~| NOTE expected no type arguments, found 1 +//~^ ERROR wrong number of type arguments: 
expected 0, found 1 [E0244] +//~| NOTE expected no type arguments //~| ERROR E0220 //~| NOTE associated type `Output` not found diff --git a/src/test/compile-fail/unboxed-closures-fnmut-as-fn.rs b/src/test/compile-fail/unboxed-closures-fnmut-as-fn.rs index b25b331880..2e865b2aac 100644 --- a/src/test/compile-fail/unboxed-closures-fnmut-as-fn.rs +++ b/src/test/compile-fail/unboxed-closures-fnmut-as-fn.rs @@ -11,7 +11,7 @@ // Checks that the Fn trait hierarchy rules do not permit // Fn to be used where FnMut is implemented. -#![feature(unboxed_closures)] +#![feature(fn_traits, unboxed_closures)] #![feature(overloaded_calls)] use std::ops::{Fn,FnMut,FnOnce}; diff --git a/src/test/compile-fail/unboxed-closures-infer-argument-types-two-region-pointers.rs b/src/test/compile-fail/unboxed-closures-infer-argument-types-two-region-pointers.rs index 5436a855ee..bfb24c5872 100644 --- a/src/test/compile-fail/unboxed-closures-infer-argument-types-two-region-pointers.rs +++ b/src/test/compile-fail/unboxed-closures-infer-argument-types-two-region-pointers.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(fn_traits)] + // That a closure whose expected argument types include two distinct // bound regions. diff --git a/src/test/compile-fail/unboxed-closures-recursive-fn-using-fn-mut.rs b/src/test/compile-fail/unboxed-closures-recursive-fn-using-fn-mut.rs index 23306823c7..433c0c839c 100644 --- a/src/test/compile-fail/unboxed-closures-recursive-fn-using-fn-mut.rs +++ b/src/test/compile-fail/unboxed-closures-recursive-fn-using-fn-mut.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(core,unboxed_closures)] +#![feature(core, fn_traits, unboxed_closures)] use std::marker::PhantomData; diff --git a/src/test/compile-fail/unconstrained-none.rs b/src/test/compile-fail/unconstrained-none.rs index 380cdd266c..88080bc70c 100644 --- a/src/test/compile-fail/unconstrained-none.rs +++ b/src/test/compile-fail/unconstrained-none.rs @@ -11,7 +11,7 @@ // Issue #5062 fn main() { - None; //~ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + None; //~ ERROR unable to infer enough type information about `T` [E0282] + //~| NOTE cannot infer type for `T` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/unconstrained-ref.rs b/src/test/compile-fail/unconstrained-ref.rs index ba94bf613d..1227854921 100644 --- a/src/test/compile-fail/unconstrained-ref.rs +++ b/src/test/compile-fail/unconstrained-ref.rs @@ -13,7 +13,7 @@ struct S<'a, T:'a> { } fn main() { - S { o: &None }; //~ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + S { o: &None }; //~ ERROR unable to infer enough type information about `T` [E0282] + //~| NOTE cannot infer type for `T` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/unreachable-in-call.rs b/src/test/compile-fail/unreachable-in-call.rs index 5a3257d54d..7246246843 100644 --- a/src/test/compile-fail/unreachable-in-call.rs +++ b/src/test/compile-fail/unreachable-in-call.rs @@ -24,7 +24,7 @@ fn diverge_first() { get_u8()); //~ ERROR unreachable expression } fn diverge_second() { - call( //~ ERROR unreachable call + call( //~ ERROR unreachable expression get_u8(), diverge()); } diff --git a/src/test/compile-fail/unspecified-self-in-trait-ref.rs 
b/src/test/compile-fail/unspecified-self-in-trait-ref.rs index 2c2f113a77..84bcca3fc7 100644 --- a/src/test/compile-fail/unspecified-self-in-trait-ref.rs +++ b/src/test/compile-fail/unspecified-self-in-trait-ref.rs @@ -9,11 +9,11 @@ // except according to those terms. pub trait Foo { - fn foo(); + fn foo(&self); } pub trait Bar { - fn foo(); + fn foo(&self); } fn main() { diff --git a/src/test/compile-fail/variadic-ffi-3.rs b/src/test/compile-fail/variadic-ffi-3.rs index 334b8bb08a..565d8549b3 100644 --- a/src/test/compile-fail/variadic-ffi-3.rs +++ b/src/test/compile-fail/variadic-ffi-3.rs @@ -10,6 +10,8 @@ extern { fn foo(f: isize, x: u8, ...); + //~^ defined here + //~| defined here } extern "C" fn bar(f: isize, x: u8) {} @@ -17,11 +19,9 @@ extern "C" fn bar(f: isize, x: u8) {} fn main() { unsafe { foo(); //~ ERROR: this function takes at least 2 parameters but 0 parameters were supplied - //~^ NOTE the following parameter types were expected: - //~| NOTE isize, u8 + //~| NOTE expected at least 2 parameters foo(1); //~ ERROR: this function takes at least 2 parameters but 1 parameter was supplied - //~^ NOTE the following parameter types were expected: - //~| NOTE isize, u8 + //~| NOTE expected at least 2 parameters let x: unsafe extern "C" fn(f: isize, x: u8) = foo; //~^ ERROR: mismatched types diff --git a/src/test/compile-fail/variant-namespacing.rs b/src/test/compile-fail/variant-namespacing.rs index a8bb94b78f..44e9260770 100644 --- a/src/test/compile-fail/variant-namespacing.rs +++ b/src/test/compile-fail/variant-namespacing.rs @@ -31,19 +31,13 @@ const XTuple: u8 = 0; const XUnit: u8 = 0; extern crate variant_namespacing; -pub use variant_namespacing::XE::*; +pub use variant_namespacing::XE::{XStruct, XTuple, XUnit}; //~^ ERROR `XStruct` has already been defined -//~| ERROR `XStruct` has already been defined -//~| ERROR `XTuple` has already been defined //~| ERROR `XTuple` has already been defined //~| ERROR `XUnit` has already been defined -//~| ERROR `XUnit` has already been defined -pub use E::*; +pub use E::{Struct, Tuple, Unit}; //~^ ERROR `Struct` has already been defined -//~| ERROR `Struct` has already been defined //~| ERROR `Tuple` has already been defined -//~| ERROR `Tuple` has already been defined -//~| ERROR `Unit` has already been defined //~| ERROR `Unit` has already been defined fn main() {} diff --git a/src/test/compile-fail/vector-no-ann.rs b/src/test/compile-fail/vector-no-ann.rs index 25709f3524..d559caf77a 100644 --- a/src/test/compile-fail/vector-no-ann.rs +++ b/src/test/compile-fail/vector-no-ann.rs @@ -11,7 +11,7 @@ fn main() { let _foo = Vec::new(); - //~^ ERROR unable to infer enough type information about `_` [E0282] - //~| NOTE cannot infer type for `_` + //~^ ERROR unable to infer enough type information about `T` [E0282] + //~| NOTE cannot infer type for `T` //~| NOTE type annotations or generic parameter binding } diff --git a/src/test/compile-fail/where-clauses-unsatisfied.rs b/src/test/compile-fail/where-clauses-unsatisfied.rs index 278a8db4e1..ffc39008c4 100644 --- a/src/test/compile-fail/where-clauses-unsatisfied.rs +++ b/src/test/compile-fail/where-clauses-unsatisfied.rs @@ -8,8 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-fn equal(_: &T, _: &T) -> bool where T : Eq { -} +fn equal(a: &T, b: &T) -> bool where T : Eq { a == b } struct Struct; diff --git a/src/test/debuginfo/borrowed-enum.rs b/src/test/debuginfo/borrowed-enum.rs index ddc29c6430..f34fc3b20d 100644 --- a/src/test/debuginfo/borrowed-enum.rs +++ b/src/test/debuginfo/borrowed-enum.rs @@ -18,11 +18,11 @@ // gdb-command:run // gdb-command:print *the_a_ref -// gdbg-check:$1 = {{RUST$ENUM$DISR = TheA, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = TheA, __0 = 0, __1 = 2088533116, __2 = 2088533116}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = TheA, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = TheA, [...]}} // gdbr-check:$1 = borrowed_enum::ABC::TheA{x: 0, y: 8970181431921507452} // gdb-command:print *the_b_ref -// gdbg-check:$2 = {{RUST$ENUM$DISR = TheB, x = 0, y = 1229782938247303441}, {RUST$ENUM$DISR = TheB, __0 = 0, __1 = 286331153, __2 = 286331153}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = TheB, [...]}, {RUST$ENUM$DISR = TheB, __0 = 0, __1 = 286331153, __2 = 286331153}} // gdbr-check:$2 = borrowed_enum::ABC::TheB(0, 286331153, 286331153) // gdb-command:print *univariant_ref diff --git a/src/test/debuginfo/by-value-non-immediate-argument.rs b/src/test/debuginfo/by-value-non-immediate-argument.rs index 6d821dbc15..0fe08c3a22 100644 --- a/src/test/debuginfo/by-value-non-immediate-argument.rs +++ b/src/test/debuginfo/by-value-non-immediate-argument.rs @@ -42,7 +42,7 @@ // gdb-command:continue // gdb-command:print x -// gdbg-check:$7 = {{RUST$ENUM$DISR = Case1, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 2088533116, __2 = 2088533116}} +// gdbg-check:$7 = {{RUST$ENUM$DISR = Case1, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$7 = by_value_non_immediate_argument::Enum::Case1{x: 0, y: 8970181431921507452} // gdb-command:continue diff --git a/src/test/debuginfo/generic-struct-style-enum.rs b/src/test/debuginfo/generic-struct-style-enum.rs index dba9422721..a328eec689 100644 --- a/src/test/debuginfo/generic-struct-style-enum.rs +++ b/src/test/debuginfo/generic-struct-style-enum.rs @@ -17,15 +17,15 @@ // gdb-command:run // gdb-command:print case1 -// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, a = 0, b = 31868, c = 31868, d = 31868, e = 31868}, {RUST$ENUM$DISR = Case1, a = 0, b = 2088533116, c = 2088533116}, {RUST$ENUM$DISR = Case1, a = 0, b = 8970181431921507452}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, a = 0, b = 31868, c = 31868, d = 31868, e = 31868}, {RUST$ENUM$DISR = Case1, [...]}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$1 = generic_struct_style_enum::Regular::Case1{a: 0, b: 31868, c: 31868, d: 31868, e: 31868} // gdb-command:print case2 -// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, a = 0, b = 4369, c = 4369, d = 4369, e = 4369}, {RUST$ENUM$DISR = Case2, a = 0, b = 286331153, c = 286331153}, {RUST$ENUM$DISR = Case2, a = 0, b = 1229782938247303441}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, [...]}, {RUST$ENUM$DISR = Case2, a = 0, b = 286331153, c = 286331153}, {RUST$ENUM$DISR = Case2, [...]}} // gdbr-check:$2 = generic_struct_style_enum::Regular::Case2{a: 0, b: 286331153, c: 286331153} // gdb-command:print case3 -// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, a = 0, b = 22873, c = 22873, d = 22873, e = 22873}, {RUST$ENUM$DISR = Case3, a = 0, b = 1499027801, c = 1499027801}, {RUST$ENUM$DISR = Case3, a = 0, b = 6438275382588823897}} +// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, a = 0, b = 6438275382588823897}} 
// gdbr-check:$3 = generic_struct_style_enum::Regular::Case3{a: 0, b: 6438275382588823897} // gdb-command:print univariant diff --git a/src/test/debuginfo/generic-tuple-style-enum.rs b/src/test/debuginfo/generic-tuple-style-enum.rs index 01d2ff4e33..9ada5fdeff 100644 --- a/src/test/debuginfo/generic-tuple-style-enum.rs +++ b/src/test/debuginfo/generic-tuple-style-enum.rs @@ -19,15 +19,15 @@ // gdb-command:run // gdb-command:print case1 -// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = 31868, __2 = 31868, __3 = 31868, __4 = 31868}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 2088533116, __2 = 2088533116}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 8970181431921507452}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = 31868, __2 = 31868, __3 = 31868, __4 = 31868}, {RUST$ENUM$DISR = Case1, [...]}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$1 = generic_tuple_style_enum::Regular::Case1(0, 31868, 31868, 31868, 31868) // gdb-command:print case2 -// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, __0 = 0, __1 = 4369, __2 = 4369, __3 = 4369, __4 = 4369}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 286331153, __2 = 286331153}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 1229782938247303441}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, [...]}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 286331153, __2 = 286331153}, {RUST$ENUM$DISR = Case2, [...]}} // gdbr-check:$2 = generic_tuple_style_enum::Regular::Case2(0, 286331153, 286331153) // gdb-command:print case3 -// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, __0 = 0, __1 = 22873, __2 = 22873, __3 = 22873, __4 = 22873}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 1499027801, __2 = 1499027801}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 6438275382588823897}} +// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 6438275382588823897}} // gdbr-check:$3 = generic_tuple_style_enum::Regular::Case3(0, 6438275382588823897) // gdb-command:print univariant diff --git a/src/test/debuginfo/macro-stepping.rs b/src/test/debuginfo/macro-stepping.rs index 52a2a58ed7..37355ed377 100644 --- a/src/test/debuginfo/macro-stepping.rs +++ b/src/test/debuginfo/macro-stepping.rs @@ -10,6 +10,7 @@ // ignore-windows // ignore-android +// ignore-aarch64 // min-lldb-version: 310 // aux-build:macro-stepping.rs diff --git a/src/test/debuginfo/struct-in-enum.rs b/src/test/debuginfo/struct-in-enum.rs index d0aceaa4f3..ffd36ae14a 100644 --- a/src/test/debuginfo/struct-in-enum.rs +++ b/src/test/debuginfo/struct-in-enum.rs @@ -19,11 +19,11 @@ // gdb-command:run // gdb-command:print case1 -// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = {x = 2088533116, y = 2088533116, z = 31868}}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 8970181431921507452, __2 = 31868}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = {x = 2088533116, y = 2088533116, z = 31868}}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$1 = struct_in_enum::Regular::Case1(0, struct_in_enum::Struct {x: 2088533116, y: 2088533116, z: 31868}) // gdb-command:print case2 -// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, __0 = 0, __1 = {x = 286331153, y = 286331153, z = 4369}}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 1229782938247303441, __2 = 4369}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, [...]}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 1229782938247303441, __2 = 4369}} // gdbr-check:$2 = struct_in_enum::Regular::Case2(0, 1229782938247303441, 4369) // gdb-command:print univariant diff --git a/src/test/debuginfo/struct-style-enum.rs 
b/src/test/debuginfo/struct-style-enum.rs index 8abc139eb1..b6196daaa4 100644 --- a/src/test/debuginfo/struct-style-enum.rs +++ b/src/test/debuginfo/struct-style-enum.rs @@ -19,15 +19,15 @@ // gdb-command:run // gdb-command:print case1 -// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, a = 0, b = 31868, c = 31868, d = 31868, e = 31868}, {RUST$ENUM$DISR = Case1, a = 0, b = 2088533116, c = 2088533116}, {RUST$ENUM$DISR = Case1, a = 0, b = 8970181431921507452}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, a = 0, b = 31868, c = 31868, d = 31868, e = 31868}, {RUST$ENUM$DISR = Case1, [...]}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$1 = struct_style_enum::Regular::Case1{a: 0, b: 31868, c: 31868, d: 31868, e: 31868} // gdb-command:print case2 -// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, a = 0, b = 4369, c = 4369, d = 4369, e = 4369}, {RUST$ENUM$DISR = Case2, a = 0, b = 286331153, c = 286331153}, {RUST$ENUM$DISR = Case2, a = 0, b = 1229782938247303441}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, [...]}, {RUST$ENUM$DISR = Case2, a = 0, b = 286331153, c = 286331153}, {RUST$ENUM$DISR = Case2, [...]}} // gdbr-check:$2 = struct_style_enum::Regular::Case2{a: 0, b: 286331153, c: 286331153} // gdb-command:print case3 -// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, a = 0, b = 22873, c = 22873, d = 22873, e = 22873}, {RUST$ENUM$DISR = Case3, a = 0, b = 1499027801, c = 1499027801}, {RUST$ENUM$DISR = Case3, a = 0, b = 6438275382588823897}} +// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, a = 0, b = 6438275382588823897}} // gdbr-check:$3 = struct_style_enum::Regular::Case3{a: 0, b: 6438275382588823897} // gdb-command:print univariant diff --git a/src/test/debuginfo/tuple-style-enum.rs b/src/test/debuginfo/tuple-style-enum.rs index d05edec3e7..988f223b3b 100644 --- a/src/test/debuginfo/tuple-style-enum.rs +++ b/src/test/debuginfo/tuple-style-enum.rs @@ -19,15 +19,15 @@ // gdb-command:run // gdb-command:print case1 -// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = 31868, __2 = 31868, __3 = 31868, __4 = 31868}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 2088533116, __2 = 2088533116}, {RUST$ENUM$DISR = Case1, __0 = 0, __1 = 8970181431921507452}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = Case1, __0 = 0, __1 = 31868, __2 = 31868, __3 = 31868, __4 = 31868}, {RUST$ENUM$DISR = Case1, [...]}, {RUST$ENUM$DISR = Case1, [...]}} // gdbr-check:$1 = tuple_style_enum::Regular::Case1(0, 31868, 31868, 31868, 31868) // gdb-command:print case2 -// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, __0 = 0, __1 = 4369, __2 = 4369, __3 = 4369, __4 = 4369}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 286331153, __2 = 286331153}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 1229782938247303441}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = Case2, [...]}, {RUST$ENUM$DISR = Case2, __0 = 0, __1 = 286331153, __2 = 286331153}, {RUST$ENUM$DISR = Case2, [...]}} // gdbr-check:$2 = tuple_style_enum::Regular::Case2(0, 286331153, 286331153) // gdb-command:print case3 -// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, __0 = 0, __1 = 22873, __2 = 22873, __3 = 22873, __4 = 22873}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 1499027801, __2 = 1499027801}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 6438275382588823897}} +// gdbg-check:$3 = {{RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, [...]}, {RUST$ENUM$DISR = Case3, __0 = 0, __1 = 6438275382588823897}} // gdbr-check:$3 = tuple_style_enum::Regular::Case3(0, 6438275382588823897) // gdb-command:print univariant diff --git 
a/src/test/debuginfo/unique-enum.rs b/src/test/debuginfo/unique-enum.rs index e882544b80..cf8d90e30f 100644 --- a/src/test/debuginfo/unique-enum.rs +++ b/src/test/debuginfo/unique-enum.rs @@ -18,11 +18,11 @@ // gdb-command:run // gdb-command:print *the_a -// gdbg-check:$1 = {{RUST$ENUM$DISR = TheA, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = TheA, __0 = 0, __1 = 2088533116, __2 = 2088533116}} +// gdbg-check:$1 = {{RUST$ENUM$DISR = TheA, x = 0, y = 8970181431921507452}, {RUST$ENUM$DISR = TheA, [...]}} // gdbr-check:$1 = unique_enum::ABC::TheA{x: 0, y: 8970181431921507452} // gdb-command:print *the_b -// gdbg-check:$2 = {{RUST$ENUM$DISR = TheB, x = 0, y = 1229782938247303441}, {RUST$ENUM$DISR = TheB, __0 = 0, __1 = 286331153, __2 = 286331153}} +// gdbg-check:$2 = {{RUST$ENUM$DISR = TheB, [...]}, {RUST$ENUM$DISR = TheB, __0 = 0, __1 = 286331153, __2 = 286331153}} // gdbr-check:$2 = unique_enum::ABC::TheB(0, 286331153, 286331153) // gdb-command:print *univariant diff --git a/src/test/debuginfo/vec-slices.rs b/src/test/debuginfo/vec-slices.rs index 5553f8427e..d321df8431 100644 --- a/src/test/debuginfo/vec-slices.rs +++ b/src/test/debuginfo/vec-slices.rs @@ -21,21 +21,21 @@ // gdb-command:print singleton.length // gdb-check:$2 = 1 -// gdbg-command:print *((int64_t[1]*)(singleton.data_ptr)) +// gdbg-command:print *((i64[1]*)(singleton.data_ptr)) // gdbr-command:print *(singleton.data_ptr as &[i64; 1]) // gdbg-check:$3 = {1} // gdbr-check:$3 = [1] // gdb-command:print multiple.length // gdb-check:$4 = 4 -// gdbg-command:print *((int64_t[4]*)(multiple.data_ptr)) +// gdbg-command:print *((i64[4]*)(multiple.data_ptr)) // gdbr-command:print *(multiple.data_ptr as &[i64; 4]) // gdbg-check:$5 = {2, 3, 4, 5} // gdbr-check:$5 = [2, 3, 4, 5] // gdb-command:print slice_of_slice.length // gdb-check:$6 = 2 -// gdbg-command:print *((int64_t[2]*)(slice_of_slice.data_ptr)) +// gdbg-command:print *((i64[2]*)(slice_of_slice.data_ptr)) // gdbr-command:print *(slice_of_slice.data_ptr as &[i64; 2]) // gdbg-check:$7 = {3, 4} // gdbr-check:$7 = [3, 4] @@ -61,14 +61,14 @@ // gdbg-command:print 'vec_slices::MUT_VECT_SLICE'.length // gdbr-command:print MUT_VECT_SLICE.length // gdb-check:$14 = 2 -// gdbg-command:print *((int64_t[2]*)('vec_slices::MUT_VECT_SLICE'.data_ptr)) +// gdbg-command:print *((i64[2]*)('vec_slices::MUT_VECT_SLICE'.data_ptr)) // gdbr-command:print *(MUT_VECT_SLICE.data_ptr as &[i64; 2]) // gdbg-check:$15 = {64, 65} // gdbr-check:$15 = [64, 65] //gdb-command:print mut_slice.length //gdb-check:$16 = 5 -//gdbg-command:print *((int64_t[5]*)(mut_slice.data_ptr)) +//gdbg-command:print *((i64[5]*)(mut_slice.data_ptr)) //gdbr-command:print *(mut_slice.data_ptr as &[i64; 5]) //gdbg-check:$17 = {1, 2, 3, 4, 5} //gdbr-check:$17 = [1, 2, 3, 4, 5] diff --git a/src/test/compile-fail/import-shadow-1.rs b/src/test/incremental/add_private_fn_at_krate_root_cc/auxiliary/point.rs similarity index 61% rename from src/test/compile-fail/import-shadow-1.rs rename to src/test/incremental/add_private_fn_at_krate_root_cc/auxiliary/point.rs index 503fa4eca5..adc2b23441 100644 --- a/src/test/compile-fail/import-shadow-1.rs +++ b/src/test/incremental/add_private_fn_at_krate_root_cc/auxiliary/point.rs @@ -8,23 +8,21 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use foo::*; -use bar::*; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; +pub struct Point { + pub x: f32, + pub y: f32, } -mod bar { - pub type Baz = isize; +#[cfg(rpass2)] +fn unused_helper() { } -mod qux { - pub use bar::Baz; +pub fn distance_squared(this: &Point) -> f32 { + return this.x * this.x + this.y * this.y; } -fn main() {} +impl Point { + pub fn distance_from_origin(&self) -> f32 { + distance_squared(self).sqrt() + } +} diff --git a/src/test/incremental/add_private_fn_at_krate_root_cc/struct_point.rs b/src/test/incremental/add_private_fn_at_krate_root_cc/struct_point.rs new file mode 100644 index 0000000000..489427ba1c --- /dev/null +++ b/src/test/incremental/add_private_fn_at_krate_root_cc/struct_point.rs @@ -0,0 +1,84 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Test where we add a private item into the root of an external. +// crate. This should not cause anything we use to be invalidated. +// Regression test for #36168. + +// revisions:rpass1 rpass2 +// compile-flags: -Z query-dep-graph +// aux-build:point.rs + +#![feature(rustc_attrs)] +#![feature(stmt_expr_attributes)] +#![allow(dead_code)] + +#![rustc_partition_reused(module="struct_point-fn_calls_methods_in_same_impl", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_calls_free_fn", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] + +extern crate point; + +/// A fn item that calls (public) methods on `Point` from the same impl +mod fn_calls_methods_in_same_impl { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let x = Point { x: 2.0, y: 2.0 }; + x.distance_from_origin(); + } +} + +/// A fn item that calls (public) methods on `Point` from another impl +mod fn_calls_free_fn { + use point::{self, Point}; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let x = Point { x: 2.0, y: 2.0 }; + point::distance_squared(&x); + } +} + +/// A fn item that makes an instance of `Point` but does not invoke methods +mod fn_make_struct { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn make_origin() -> Point { + Point { x: 2.0, y: 2.0 } + } +} + +/// A fn item that reads fields from `Point` but does not invoke methods +mod fn_read_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn get_x(p: Point) -> f32 { + p.x + } +} + +/// A fn item that writes to a field of `Point` but does not invoke methods +mod fn_write_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn inc_x(p: &mut Point) { + p.x += 1.0; + } +} + +fn main() { +} diff --git a/src/test/incremental/change_add_field/struct_point.rs b/src/test/incremental/change_add_field/struct_point.rs new file mode 100644 index 0000000000..261eb38a51 --- /dev/null +++ b/src/test/incremental/change_add_field/struct_point.rs @@ 
-0,0 +1,164 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Test where we change a type definition by adding a field. Fns with +// this type in their signature are recompiled, as are their callers. +// Fns with that type used only in their body are also recompiled, but +// their callers are not. + +// revisions:rpass1 rpass2 +// compile-flags: -Z query-dep-graph + +#![feature(rustc_attrs)] +#![feature(stmt_expr_attributes)] +#![feature(static_in_const)] +#![allow(dead_code)] + +// These are expected to require translation. +#![rustc_partition_translated(module="struct_point-point", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_with_type_in_sig", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-call_fn_with_type_in_sig", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_with_type_in_body", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_make_struct", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_read_field", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_write_field", cfg="rpass2")] + +#![rustc_partition_reused(module="struct_point-call_fn_with_type_in_body", cfg="rpass2")] + +mod point { + #[cfg(rpass1)] + pub struct Point { + pub x: f32, + pub y: f32, + } + + #[cfg(rpass2)] + pub struct Point { + pub x: f32, + pub y: f32, + pub z: f32, + } + + impl Point { + pub fn origin() -> Point { + #[cfg(rpass1)] + return Point { x: 0.0, y: 0.0 }; + + #[cfg(rpass2)] + return Point { x: 0.0, y: 0.0, z: 0.0 }; + } + + pub fn total(&self) -> f32 { + #[cfg(rpass1)] + return self.x + self.y; + + #[cfg(rpass2)] + return self.x + self.y + self.z; + } + + pub fn x(&self) -> f32 { + self.x + } + } +} + +/// A fn that has the changed type in its signature; must currently be +/// rebuilt. +/// +/// You could imagine that, in the future, if the change were +/// sufficiently "private", we might not need to type-check again. +/// Rebuilding is probably always necessary since the layout may be +/// affected. +mod fn_with_type_in_sig { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn boop(p: Option<&Point>) -> f32 { + p.map(|p| p.total()).unwrap_or(0.0) + } +} + +/// Call a fn that has the changed type in its signature; this +/// currently must also be rebuilt. +/// +/// You could imagine that, in the future, if the change were +/// sufficiently "private", we might not need to type-check again. +/// Rebuilding is probably always necessary since the layout may be +/// affected. +mod call_fn_with_type_in_sig { + use fn_with_type_in_sig; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn bip() -> f32 { + fn_with_type_in_sig::boop(None) + } +} + +/// A fn that uses the changed type, but only in its body, not its +/// signature. +/// +/// You could imagine that, in the future, if the change were +/// sufficiently "private", we might not need to type-check again. +/// Rebuilding is probably always necessary since the layout may be +/// affected. 
+mod fn_with_type_in_body { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn boop() -> f32 { + Point::origin().total() + } +} + +/// A fn X that calls a fn Y, where Y uses the changed type in its +/// body. In this case, the effects of the change should be contained +/// to Y; X should not have to be rebuilt, nor should it need to be +/// typechecked again. +mod call_fn_with_type_in_body { + use fn_with_type_in_body; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn bip() -> f32 { + fn_with_type_in_body::boop() + } +} + +/// A fn item that makes an instance of `Point` but does not invoke methods +mod fn_make_struct { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn make_origin(p: Point) -> Point { + Point { ..p } + } +} + +/// A fn item that reads fields from `Point` but does not invoke methods +mod fn_read_field { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn get_x(p: Point) -> f32 { + p.x + } +} + +/// A fn item that writes to a field of `Point` but does not invoke methods +mod fn_write_field { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn inc_x(p: &mut Point) { + p.x += 1.0; + } +} + +fn main() { +} diff --git a/src/test/incremental/change_private_fn_cc/struct_point.rs b/src/test/incremental/change_private_fn_cc/struct_point.rs index d6d2b5436f..ded87dd27f 100644 --- a/src/test/incremental/change_private_fn_cc/struct_point.rs +++ b/src/test/incremental/change_private_fn_cc/struct_point.rs @@ -23,10 +23,7 @@ #![rustc_partition_reused(module="struct_point-fn_calls_methods_in_another_impl", cfg="rpass2")] #![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] #![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] - -// FIXME(#37335) -- should be reused, but an errant Krate edge causes -// it to get translated (at least I *think* this is that same problem) -#![rustc_partition_translated(module="struct_point-fn_make_struct", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] extern crate point; diff --git a/src/test/incremental/change_private_impl_method/struct_point.rs b/src/test/incremental/change_private_impl_method/struct_point.rs index 8fa34bde17..46e5a88eef 100644 --- a/src/test/incremental/change_private_impl_method/struct_point.rs +++ b/src/test/incremental/change_private_impl_method/struct_point.rs @@ -20,9 +20,8 @@ #![rustc_partition_translated(module="struct_point-point", cfg="rpass2")] -// FIXME(#37121) -- the following two modules *should* be reused but are not -#![rustc_partition_translated(module="struct_point-fn_calls_methods_in_same_impl", cfg="rpass2")] -#![rustc_partition_translated(module="struct_point-fn_calls_methods_in_another_impl", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_calls_methods_in_same_impl", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_calls_methods_in_another_impl", cfg="rpass2")] #![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] #![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] #![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] @@ -60,8 +59,7 @@ mod point { mod fn_calls_methods_in_same_impl { use point::Point; - // FIXME(#37121) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", 
cfg="rpass2")] pub fn check() { let x = Point { x: 2.0, y: 2.0 }; x.distance_from_origin(); @@ -72,8 +70,7 @@ mod fn_calls_methods_in_same_impl { mod fn_calls_methods_in_another_impl { use point::Point; - // FIXME(#37121) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn check() { let mut x = Point { x: 2.0, y: 2.0 }; x.translate(3.0, 3.0); diff --git a/src/test/incremental/change_private_impl_method_cc/struct_point.rs b/src/test/incremental/change_private_impl_method_cc/struct_point.rs index d8e5fbadad..4d9ca77969 100644 --- a/src/test/incremental/change_private_impl_method_cc/struct_point.rs +++ b/src/test/incremental/change_private_impl_method_cc/struct_point.rs @@ -19,12 +19,12 @@ #![feature(stmt_expr_attributes)] #![allow(dead_code)] -// FIXME(#37333) -- the following modules *should* be reused but are not -#![rustc_partition_translated(module="struct_point-fn_calls_methods_in_same_impl", cfg="rpass2")] -#![rustc_partition_translated(module="struct_point-fn_calls_methods_in_another_impl", cfg="rpass2")] -#![rustc_partition_translated(module="struct_point-fn_make_struct", cfg="rpass2")] -#![rustc_partition_translated(module="struct_point-fn_read_field", cfg="rpass2")] -#![rustc_partition_translated(module="struct_point-fn_write_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] + +#![rustc_partition_reused(module="struct_point-fn_calls_methods_in_same_impl", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_calls_methods_in_another_impl", cfg="rpass2")] extern crate point; @@ -32,8 +32,7 @@ extern crate point; mod fn_calls_methods_in_same_impl { use point::Point; - // FIXME(#37333) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn check() { let x = Point { x: 2.0, y: 2.0 }; x.distance_from_origin(); @@ -44,9 +43,8 @@ mod fn_calls_methods_in_same_impl { mod fn_calls_methods_in_another_impl { use point::Point; - // FIXME(#37333) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] - pub fn check() { + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn dirty() { let mut x = Point { x: 2.0, y: 2.0 }; x.translate(3.0, 3.0); } @@ -56,8 +54,7 @@ mod fn_calls_methods_in_another_impl { mod fn_make_struct { use point::Point; - // FIXME(#37333) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn make_origin() -> Point { Point { x: 2.0, y: 2.0 } } @@ -67,8 +64,7 @@ mod fn_make_struct { mod fn_read_field { use point::Point; - // FIXME(#37333) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn get_x(p: Point) -> f32 { p.x } @@ -78,8 +74,7 @@ mod fn_read_field { mod fn_write_field { use point::Point; - // FIXME(#37333) -- we should not need to typeck this again - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn inc_x(p: &mut Point) { p.x += 1.0; } diff --git a/src/test/incremental/change_pub_inherent_method_body/struct_point.rs 
b/src/test/incremental/change_pub_inherent_method_body/struct_point.rs new file mode 100644 index 0000000000..e0047e5ec6 --- /dev/null +++ b/src/test/incremental/change_pub_inherent_method_body/struct_point.rs @@ -0,0 +1,102 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Test where we change the body of a public, inherent method. + +// revisions:rpass1 rpass2 +// compile-flags: -Z query-dep-graph + +#![feature(rustc_attrs)] +#![feature(stmt_expr_attributes)] +#![allow(dead_code)] + +#![rustc_partition_translated(module="struct_point-point", cfg="rpass2")] + +#![rustc_partition_reused(module="struct_point-fn_calls_changed_method", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_calls_another_method", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] + +mod point { + pub struct Point { + pub x: f32, + pub y: f32, + } + + impl Point { + pub fn distance_from_origin(&self) -> f32 { + #[cfg(rpass1)] + return self.x * self.x + self.y * self.y; + + #[cfg(rpass2)] + return (self.x * self.x + self.y * self.y).sqrt(); + } + + pub fn x(&self) -> f32 { + self.x + } + } +} + +/// A fn item that calls the method on `Point` which changed +mod fn_calls_changed_method { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let p = Point { x: 2.0, y: 2.0 }; + p.distance_from_origin(); + } +} + +/// A fn item that calls a method on `Point` which did not change +mod fn_calls_another_method { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let p = Point { x: 2.0, y: 2.0 }; + p.x(); + } +} + +/// A fn item that makes an instance of `Point` but does not invoke methods +mod fn_make_struct { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn make_origin() -> Point { + Point { x: 2.0, y: 2.0 } + } +} + +/// A fn item that reads fields from `Point` but does not invoke methods +mod fn_read_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn get_x(p: Point) -> f32 { + p.x + } +} + +/// A fn item that writes to a field of `Point` but does not invoke methods +mod fn_write_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn inc_x(p: &mut Point) { + p.x += 1.0; + } +} + +fn main() { +} diff --git a/src/test/incremental/change_pub_inherent_method_sig/struct_point.rs b/src/test/incremental/change_pub_inherent_method_sig/struct_point.rs new file mode 100644 index 0000000000..54e06e1699 --- /dev/null +++ b/src/test/incremental/change_pub_inherent_method_sig/struct_point.rs @@ -0,0 +1,113 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// Test where we change the *signature* of a public, inherent method. + +// revisions:rpass1 rpass2 +// compile-flags: -Z query-dep-graph + +#![feature(rustc_attrs)] +#![feature(stmt_expr_attributes)] +#![feature(static_in_const)] +#![allow(dead_code)] + +// These are expected to require translation. +#![rustc_partition_translated(module="struct_point-point", cfg="rpass2")] +#![rustc_partition_translated(module="struct_point-fn_calls_changed_method", cfg="rpass2")] + +#![rustc_partition_reused(module="struct_point-fn_calls_another_method", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_make_struct", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_read_field", cfg="rpass2")] +#![rustc_partition_reused(module="struct_point-fn_write_field", cfg="rpass2")] + +mod point { + pub struct Point { + pub x: f32, + pub y: f32, + } + + impl Point { + #[cfg(rpass1)] + pub fn distance_from_point(&self, p: Option) -> f32 { + let p = p.unwrap_or(Point { x: 0.0, y: 0.0 }); + let x_diff = self.x - p.x; + let y_diff = self.y - p.y; + return x_diff * x_diff + y_diff * y_diff; + } + + #[cfg(rpass2)] + pub fn distance_from_point(&self, p: Option<&Point>) -> f32 { + const ORIGIN: &Point = &Point { x: 0.0, y: 0.0 }; + let p = p.unwrap_or(ORIGIN); + let x_diff = self.x - p.x; + let y_diff = self.y - p.y; + return x_diff * x_diff + y_diff * y_diff; + } + + pub fn x(&self) -> f32 { + self.x + } + } +} + +/// A fn item that calls the method that was changed +mod fn_calls_changed_method { + use point::Point; + + #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let p = Point { x: 2.0, y: 2.0 }; + p.distance_from_point(None); + } +} + +/// A fn item that calls a method that was not changed +mod fn_calls_another_method { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn check() { + let p = Point { x: 2.0, y: 2.0 }; + p.x(); + } +} + +/// A fn item that makes an instance of `Point` but does not invoke methods +mod fn_make_struct { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn make_origin() -> Point { + Point { x: 2.0, y: 2.0 } + } +} + +/// A fn item that reads fields from `Point` but does not invoke methods +mod fn_read_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn get_x(p: Point) -> f32 { + p.x + } +} + +/// A fn item that writes to a field of `Point` but does not invoke methods +mod fn_write_field { + use point::Point; + + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn inc_x(p: &mut Point) { + p.x += 1.0; + } +} + +fn main() { +} diff --git a/src/test/incremental/change_symbol_export_status.rs b/src/test/incremental/change_symbol_export_status.rs new file mode 100644 index 0000000000..71f46c641b --- /dev/null +++ b/src/test/incremental/change_symbol_export_status.rs @@ -0,0 +1,42 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// revisions: rpass1 rpass2 + +#![feature(rustc_attrs)] +#![allow(private_no_mangle_fns)] + +#![rustc_partition_reused(module="change_symbol_export_status", cfg="rpass2")] +#![rustc_partition_translated(module="change_symbol_export_status-mod1", cfg="rpass2")] + + +// This test case makes sure that a change in symbol visibility is detected by +// our dependency tracking. We do this by changing a module's visibility to +// `private` in rpass2, causing the contained function to go from `default` to +// `hidden` visibility. +// The function is marked with #[no_mangle] so it is considered for exporting +// even from an executable. Plain Rust functions are only exported from Rust +// libraries, which our test infrastructure does not support. + +#[cfg(rpass1)] +pub mod mod1 { + #[no_mangle] + pub fn foo() {} +} + +#[cfg(rpass2)] +mod mod1 { + #[no_mangle] + pub fn foo() {} +} + +fn main() { + mod1::foo(); +} diff --git a/src/test/incremental/hashes/call_expressions.rs b/src/test/incremental/hashes/call_expressions.rs new file mode 100644 index 0000000000..647ff5dedf --- /dev/null +++ b/src/test/incremental/hashes/call_expressions.rs @@ -0,0 +1,221 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for function and method call expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +fn callee1(_x: u32, _y: i64) {} +fn callee2(_x: u32, _y: i64) {} + + +// Change Callee (Function) ---------------------------------------------------- +#[cfg(cfail1)] +pub fn change_callee_function() { + callee1(1, 2) +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_callee_function() { + callee2(1, 2) +} + + + +// Change Argument (Function) -------------------------------------------------- +#[cfg(cfail1)] +pub fn change_argument_function() { + callee1(1, 2) +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_argument_function() { + callee1(1, 3) +} + + + +// Change Callee Indirectly (Function) ----------------------------------------- +mod change_callee_indirectly_function { + #[cfg(cfail1)] + use super::callee1 as callee; + #[cfg(not(cfail1))] + use super::callee2 as callee; + + #[rustc_clean(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_clean(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn change_callee_indirectly_function() { + callee(1, 2) + } +} + + +struct Struct; +impl Struct { + fn method1(&self, _x: char, _y: bool) {} + fn method2(&self, _x: char, _y: bool) {} +} + +// Change Callee (Method) ------------------------------------------------------ +#[cfg(cfail1)] +pub fn change_callee_method() { + let s = Struct; + s.method1('x', true); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_callee_method() { + let s = Struct; + s.method2('x', true); +} + + + +// Change Argument (Method) ---------------------------------------------------- +#[cfg(cfail1)] +pub fn change_argument_method() { + let s = Struct; + s.method1('x', true); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_argument_method() { + let s = Struct; + s.method1('y', true); +} + + + +// Change Callee (Method, UFCS) ------------------------------------------------ +#[cfg(cfail1)] +pub fn change_ufcs_callee_method() { + let s = Struct; + Struct::method1(&s, 'x', true); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_ufcs_callee_method() { + let s = Struct; + 
Struct::method2(&s, 'x', true); +} + + + +// Change Argument (Method, UFCS) ---------------------------------------------- +#[cfg(cfail1)] +pub fn change_argument_method_ufcs() { + let s = Struct; + Struct::method1(&s, 'x', true); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_argument_method_ufcs() { + let s = Struct; + Struct::method1(&s, 'x', false); +} + + + +// Change To UFCS -------------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_to_ufcs() { + let s = Struct; + s.method1('x', true); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_to_ufcs() { + let s = Struct; + Struct::method1(&s, 'x', true); +} + + +struct Struct2; +impl Struct2 { + fn method1(&self, _x: char, _y: bool) {} +} + +// Change UFCS Callee Indirectly ----------------------------------------------- +mod change_ufcs_callee_indirectly { + #[cfg(cfail1)] + use super::Struct as Struct; + #[cfg(not(cfail1))] + use super::Struct2 as Struct; + + #[rustc_clean(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_clean(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn change_ufcs_callee_indirectly() { + let s = Struct; + Struct::method1(&s, 'q', false) + } +} diff --git a/src/test/incremental/hashes/closure_expressions.rs b/src/test/incremental/hashes/closure_expressions.rs new file mode 100644 index 0000000000..38fe5cdffe --- /dev/null +++ b/src/test/incremental/hashes/closure_expressions.rs @@ -0,0 +1,144 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for closure expression. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +// Change closure body --------------------------------------------------------- +#[cfg(cfail1)] +fn change_closure_body() { + let _ = || 1u32; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_closure_body() { + let _ = || 3u32; +} + + + +// Add parameter --------------------------------------------------------------- +#[cfg(cfail1)] +fn add_parameter() { + let x = 0u32; + let _ = || x + 1; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_parameter() { + let x = 0u32; + let _ = |x: u32| x + 1; +} + + + +// Change parameter pattern ---------------------------------------------------- +#[cfg(cfail1)] +fn change_parameter_pattern() { + let _ = |x: &u32| x; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_parameter_pattern() { + let _ = |&x: &u32| x; +} + + + +// Add `move` to closure ------------------------------------------------------- +#[cfg(cfail1)] +fn add_move() { + let _ = || 1; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_move() { + let _ = move || 1; +} + + + +// Add type ascription to parameter -------------------------------------------- +#[cfg(cfail1)] +fn add_type_ascription_to_parameter() { + let closure = |x| x + 1u32; + let _: u32 = closure(1); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_type_ascription_to_parameter() { + let closure = |x: u32| x + 1u32; + let _: u32 = closure(1); +} + + + +// Change parameter type ------------------------------------------------------- +#[cfg(cfail1)] +fn change_parameter_type() { + let closure = |x: u32| (x as u64) + 1; + let _ = closure(1); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_parameter_type() { + let closure = |x: u16| (x as u64) + 1; + let _ = closure(1); +} diff --git a/src/test/incremental/hashes/enum_constructors.rs b/src/test/incremental/hashes/enum_constructors.rs new file mode 100644 index 0000000000..7f991b30fc --- /dev/null +++ b/src/test/incremental/hashes/enum_constructors.rs @@ -0,0 
+1,387 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for struct constructor expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +enum Enum { + Struct { + x: i32, + y: i64, + z: i16, + }, + Tuple(i32, i64, i16) +} + +// Change field value (struct-like) ----------------------------------------- +#[cfg(cfail1)] +fn change_field_value_struct_like() -> Enum { + Enum::Struct { + x: 0, + y: 1, + z: 2, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_value_struct_like() -> Enum { + Enum::Struct { + x: 0, + y: 2, + z: 2, + } +} + + + +// Change field order (struct-like) ----------------------------------------- +#[cfg(cfail1)] +fn change_field_order_struct_like() -> Enum { + Enum::Struct { + x: 3, + y: 4, + z: 5, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_order_struct_like() -> Enum { + Enum::Struct { + y: 4, + x: 3, + z: 5, + } +} + + +enum Enum2 { + Struct { + x: i8, + y: i8, + z: i8, + }, + Struct2 { + x: i8, + y: i8, + z: i8, + }, + Tuple(u16, u16, u16), + Tuple2(u64, u64, u64), +} + +// Change constructor path (struct-like) ------------------------------------ +#[cfg(cfail1)] +fn change_constructor_path_struct_like() { + let _ = Enum::Struct { + x: 0, + y: 1, + z: 2, + }; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_path_struct_like() { + let _ = Enum2::Struct { + x: 0, + y: 1, + z: 2, + }; +} + + + +// Change variant (regular struct) ------------------------------------ +#[cfg(cfail1)] +fn change_constructor_variant_struct_like() { + let _ = Enum2::Struct { + x: 0, + y: 1, + z: 2, + }; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_variant_struct_like() { + let _ = Enum2::Struct2 { + x: 0, + y: 1, + z: 2, + }; +} + + +// Change constructor path indirectly (struct-like) ------------------------- +mod 
change_constructor_path_indirectly_struct_like { + #[cfg(cfail1)] + use super::Enum as TheEnum; + #[cfg(not(cfail1))] + use super::Enum2 as TheEnum; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> TheEnum { + TheEnum::Struct { + x: 0, + y: 1, + z: 2, + } + } +} + + +// Change constructor variant indirectly (struct-like) --------------------------- +mod change_constructor_variant_indirectly_struct_like { + use super::Enum2; + #[cfg(cfail1)] + use super::Enum2::Struct as Variant; + #[cfg(not(cfail1))] + use super::Enum2::Struct2 as Variant; + + #[rustc_clean(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_clean(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> Enum2 { + Variant { + x: 0, + y: 1, + z: 2, + } + } +} + + +// Change field value (tuple-like) ------------------------------------------- +#[cfg(cfail1)] +fn change_field_value_tuple_like() -> Enum { + Enum::Tuple(0, 1, 2) +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_value_tuple_like() -> Enum { + Enum::Tuple(0, 1, 3) +} + + + +// Change constructor path (tuple-like) -------------------------------------- +#[cfg(cfail1)] +fn change_constructor_path_tuple_like() { + let _ = Enum::Tuple(0, 1, 2); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_path_tuple_like() { + let _ = Enum2::Tuple(0, 1, 2); +} + + + +// Change constructor variant (tuple-like) -------------------------------------- +#[cfg(cfail1)] +fn change_constructor_variant_tuple_like() { + let _ = Enum2::Tuple(0, 1, 2); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_variant_tuple_like() { + let _ = Enum2::Tuple2(0, 1, 2); +} + + +// Change constructor path indirectly (tuple-like) --------------------------- +mod change_constructor_path_indirectly_tuple_like { + #[cfg(cfail1)] + use super::Enum as TheEnum; + #[cfg(not(cfail1))] + use super::Enum2 as TheEnum; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> TheEnum { + TheEnum::Tuple(0, 1, 2) + } +} + + + +// Change constructor variant indirectly (tuple-like) --------------------------- +mod change_constructor_variant_indirectly_tuple_like { + use super::Enum2; + #[cfg(cfail1)] + use super::Enum2::Tuple as Variant; + #[cfg(not(cfail1))] + use 
super::Enum2::Tuple2 as Variant; + + #[rustc_clean(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_clean(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> Enum2 { + Variant(0, 1, 2) + } +} + + +enum Clike { + A, + B, + C +} + +enum Clike2 { + B, + C, + D +} + +// Change constructor path (C-like) -------------------------------------- +#[cfg(cfail1)] +fn change_constructor_path_c_like() { + let _ = Clike::B; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_path_c_like() { + let _ = Clike2::B; +} + + + +// Change constructor variant (C-like) -------------------------------------- +#[cfg(cfail1)] +fn change_constructor_variant_c_like() { + let _ = Clike::A; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_variant_c_like() { + let _ = Clike::C; +} + + +// Change constructor path indirectly (C-like) --------------------------- +mod change_constructor_path_indirectly_c_like { + #[cfg(cfail1)] + use super::Clike as TheEnum; + #[cfg(not(cfail1))] + use super::Clike2 as TheEnum; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> TheEnum { + TheEnum::B + } +} + + + +// Change constructor variant indirectly (C-like) --------------------------- +mod change_constructor_variant_indirectly_c_like { + use super::Clike; + #[cfg(cfail1)] + use super::Clike::A as Variant; + #[cfg(not(cfail1))] + use super::Clike::B as Variant; + + #[rustc_clean(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_clean(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> Clike { + Variant + } +} diff --git a/src/test/incremental/hashes/exported_vs_not.rs b/src/test/incremental/hashes/exported_vs_not.rs new file mode 100644 index 0000000000..082badacc6 --- /dev/null +++ b/src/test/incremental/hashes/exported_vs_not.rs @@ -0,0 +1,86 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +// Case 1: The function body is not exported to metadata. If the body changes, +// the hash of the HirBody node should change, but not the hash of +// either the Hir or the Metadata node. 
+ +#[cfg(cfail1)] +pub fn body_not_exported_to_metadata() -> u32 { + 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn body_not_exported_to_metadata() -> u32 { + 2 +} + + + +// Case 2: The function body *is* exported to metadata because the function is +// marked as #[inline]. Only the hash of the Hir depnode should be +// unaffected by a change to the body. + +#[cfg(cfail1)] +#[inline] +pub fn body_exported_to_metadata_because_of_inline() -> u32 { + 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +#[inline] +pub fn body_exported_to_metadata_because_of_inline() -> u32 { + 2 +} + + + +// Case 2: The function body *is* exported to metadata because the function is +// generic. Only the hash of the Hir depnode should be +// unaffected by a change to the body. + +#[cfg(cfail1)] +#[inline] +pub fn body_exported_to_metadata_because_of_generic() -> u32 { + 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +#[inline] +pub fn body_exported_to_metadata_because_of_generic() -> u32 { + 2 +} + diff --git a/src/test/incremental/hashes/for_loops.rs b/src/test/incremental/hashes/for_loops.rs new file mode 100644 index 0000000000..bae3c9bf59 --- /dev/null +++ b/src/test/incremental/hashes/for_loops.rs @@ -0,0 +1,328 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for `for` loops. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
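+//
+// Put differently, attributes carrying `cfg="cfail2"` describe the expected
+// effect of the one edit made between the cfail1 and cfail2 revisions, while
+// attributes carrying `cfg="cfail3"` cover the cfail2 -> cfail3 step, where
+// nothing is edited and every hash is therefore expected to be clean.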
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +// Change loop body ------------------------------------------------------------ +#[cfg(cfail1)] +fn change_loop_body() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_loop_body() { + let mut _x = 0; + for _ in 0..1 { + _x = 2; + break; + } +} + + + +// Change iteration variable name ---------------------------------------------- +#[cfg(cfail1)] +fn change_iteration_variable_name() { + let mut _x = 0; + for _i in 0..1 { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_iteration_variable_name() { + let mut _x = 0; + for _a in 0..1 { + _x = 1; + break; + } +} + + + +// Change iteration variable pattern ------------------------------------------- +#[cfg(cfail1)] +fn change_iteration_variable_pattern() { + let mut _x = 0; + for _i in &[0, 1, 2] { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_iteration_variable_pattern() { + let mut _x = 0; + for &_i in &[0, 1, 2] { + _x = 1; + break; + } +} + + + +// Change iterable ------------------------------------------------------------- +#[cfg(cfail1)] +fn change_iterable() { + let mut _x = 0; + for _ in &[0, 1, 2] { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_iterable() { + let mut _x = 0; + for _ in &[0, 1, 3] { + _x = 1; + break; + } +} + + + +// Add break ------------------------------------------------------------------- +#[cfg(cfail1)] +fn add_break() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_break() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + break; + } +} + + + +// Add loop label -------------------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label() { + let mut _x = 0; + 
'label: for _ in 0..1 { + _x = 1; + break; + } +} + + + +// Add loop label to break ----------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_break() { + let mut _x = 0; + 'label: for _ in 0..1 { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_break() { + let mut _x = 0; + 'label: for _ in 0..1 { + _x = 1; + break 'label; + } +} + + + +// Change break label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_break_label() { + let mut _x = 0; + 'outer: for _ in 0..1 { + 'inner: for _ in 0..1 { + _x = 1; + break 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_break_label() { + let mut _x = 0; + 'outer: for _ in 0..1 { + 'inner: for _ in 0..1 { + _x = 1; + break 'outer; + } + } +} + + + +// Add loop label to continue -------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: for _ in 0..1 { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: for _ in 0..1 { + _x = 1; + continue 'label; + } +} + + + +// Change continue label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_label() { + let mut _x = 0; + 'outer: for _ in 0..1 { + 'inner: for _ in 0..1 { + _x = 1; + continue 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_label() { + let mut _x = 0; + 'outer: for _ in 0..1 { + 'inner: for _ in 0..1 { + _x = 1; + continue 'outer; + } + } +} + + + +// Change continue to break ---------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_to_break() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_to_break() { + let mut _x = 0; + for _ in 0..1 { + _x = 1; + break; + } +} diff --git a/src/test/incremental/hashes/if_expressions.rs b/src/test/incremental/hashes/if_expressions.rs new file mode 100644 index 0000000000..c39eeab34c --- /dev/null +++ b/src/test/incremental/hashes/if_expressions.rs @@ -0,0 +1,248 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for if expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +// Change condition (if) ------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_condition(x: bool) -> u32 { + if x { + return 1 + } + + return 0 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_condition(x: bool) -> u32 { + if !x { + return 1 + } + + return 0 +} + +// Change then branch (if) ----------------------------------------------------- +#[cfg(cfail1)] +pub fn change_then_branch(x: bool) -> u32 { + if x { + return 1 + } + + return 0 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_then_branch(x: bool) -> u32 { + if x { + return 2 + } + + return 0 +} + + + +// Change else branch (if) ----------------------------------------------------- +#[cfg(cfail1)] +pub fn change_else_branch(x: bool) -> u32 { + if x { + 1 + } else { + 2 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_else_branch(x: bool) -> u32 { + if x { + 1 + } else { + 3 + } +} + + + +// Add else branch (if) -------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_else_branch(x: bool) -> u32 { + let mut ret = 1; + + if x { + ret += 1; + } + + ret +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_else_branch(x: bool) -> u32 { + let mut ret = 1; + + if x { + ret += 1; + } else { + } + + ret +} + + + +// Change condition (if let) --------------------------------------------------- +#[cfg(cfail1)] +pub fn change_condition_if_let(x: Option) -> u32 { + if let Some(_x) = x { + return 1 + } + + 0 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn 
change_condition_if_let(x: Option) -> u32 { + if let Some(_) = x { + return 1 + } + + 0 +} + + + +// Change then branch (if let) ------------------------------------------------- +#[cfg(cfail1)] +pub fn change_then_branch_if_let(x: Option) -> u32 { + if let Some(x) = x { + return x + } + + 0 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_then_branch_if_let(x: Option) -> u32 { + if let Some(x) = x { + return x + 1 + } + + 0 +} + + + +// Change else branch (if let) ------------------------------------------------- +#[cfg(cfail1)] +pub fn change_else_branch_if_let(x: Option) -> u32 { + if let Some(x) = x { + x + } else { + 1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_else_branch_if_let(x: Option) -> u32 { + if let Some(x) = x { + x + } else { + 2 + } +} + + + +// Add else branch (if let) ---------------------------------------------------- +#[cfg(cfail1)] +pub fn add_else_branch_if_let(x: Option) -> u32 { + let mut ret = 1; + + if let Some(x) = x { + ret += x; + } + + ret +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_else_branch_if_let(x: Option) -> u32 { + let mut ret = 1; + + if let Some(x) = x { + ret += x; + } else { + } + + ret +} diff --git a/src/test/incremental/hashes/indexing_expressions.rs b/src/test/incremental/hashes/indexing_expressions.rs new file mode 100644 index 0000000000..bb31982d93 --- /dev/null +++ b/src/test/incremental/hashes/indexing_expressions.rs @@ -0,0 +1,157 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for closure expression. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
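+//
+// Every case below edits only an indexing or slicing expression inside a
+// function body, so the expectations are the same throughout: an annotation
+// such as
+//
+//     #[rustc_dirty(label="HirBody", cfg="cfail2")]
+//
+// asserts that the hash of the function's body changed between cfail1 and
+// cfail2, while the accompanying `rustc_clean` and `rustc_metadata_clean`
+// attributes assert that the item-level HIR hash and the exported metadata
+// did not change.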
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] +#![feature(inclusive_range_syntax)] + +// Change simple index --------------------------------------------------------- +#[cfg(cfail1)] +fn change_simple_index(slice: &[u32]) -> u32 { + slice[3] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_simple_index(slice: &[u32]) -> u32 { + slice[4] +} + + + +// Change lower bound ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_lower_bound(slice: &[u32]) -> &[u32] { + &slice[3..5] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_lower_bound(slice: &[u32]) -> &[u32] { + &slice[2..5] +} + + + +// Change upper bound ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_upper_bound(slice: &[u32]) -> &[u32] { + &slice[3..5] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_upper_bound(slice: &[u32]) -> &[u32] { + &slice[3..7] +} + + + +// Add lower bound ------------------------------------------------------------- +#[cfg(cfail1)] +fn add_lower_bound(slice: &[u32]) -> &[u32] { + &slice[..4] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_lower_bound(slice: &[u32]) -> &[u32] { + &slice[3..4] +} + + + +// Add upper bound ------------------------------------------------------------- +#[cfg(cfail1)] +fn add_upper_bound(slice: &[u32]) -> &[u32] { + &slice[3..] 
+} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_upper_bound(slice: &[u32]) -> &[u32] { + &slice[3..7] +} + + + +// Change mutability ----------------------------------------------------------- +#[cfg(cfail1)] +fn change_mutability(slice: &mut [u32]) -> u32 { + (&mut slice[3..5])[0] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_mutability(slice: &mut [u32]) -> u32 { + (&slice[3..5])[0] +} + + + +// Exclusive to inclusive range ------------------------------------------------ +#[cfg(cfail1)] +fn exclusive_to_inclusive_range(slice: &[u32]) -> &[u32] { + &slice[3..7] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn exclusive_to_inclusive_range(slice: &[u32]) -> &[u32] { + &slice[3...7] +} diff --git a/src/test/incremental/hashes/inherent_impls.rs b/src/test/incremental/hashes/inherent_impls.rs new file mode 100644 index 0000000000..f7a390e874 --- /dev/null +++ b/src/test/incremental/hashes/inherent_impls.rs @@ -0,0 +1,128 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for let expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +struct Foo; + +// Change Method Name ----------------------------------------------------------- +#[cfg(cfail1)] +impl Foo { + pub fn method_name() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn method_name2() { } +} + +// Change Method Body ----------------------------------------------------------- +// +// This should affect the method itself, but not the impl. 
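+//
+// That expectation is encoded by the two attribute sets below: the outer
+// attributes assert that the `impl` block's HIR and metadata hashes stay
+// clean, while the attributes on the method itself assert that the method's
+// hashes become dirty in cfail2.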
+#[cfg(cfail1)] +impl Foo { + pub fn method_body() { } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn method_body() { + println!("Hello, world!"); + } +} + +// Change Method Privacy ----------------------------------------------------------- +#[cfg(cfail1)] +impl Foo { + pub fn method_privacy() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn method_privacy() { } +} + +// Change Method Selfness ----------------------------------------------------------- +#[cfg(cfail1)] +impl Foo { + pub fn method_selfness() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn method_selfness(&self) { } +} + +// Change Method Selfmutness ----------------------------------------------------------- +#[cfg(cfail1)] +impl Foo { + pub fn method_selfmutness(&self) { } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + pub fn method_selfmutness(&mut self) { } +} + diff --git a/src/test/incremental/hashes/let_expressions.rs b/src/test/incremental/hashes/let_expressions.rs new file mode 100644 index 0000000000..9e532548e1 --- /dev/null +++ b/src/test/incremental/hashes/let_expressions.rs @@ -0,0 +1,252 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for let expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
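+//
+// Note that even a pure rename of a local binding (the first case below,
+// `_x` -> `_y`) is expected to change the body hash: `HirBody` is dirty in
+// cfail2, while the item-level `Hir` hash and the exported metadata stay
+// clean.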
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +// Change Name ----------------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_name() { + let _x = 2u64; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_name() { + let _y = 2u64; +} + + + +// Add Type -------------------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_type() { + let _x = 2u32; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_type() { + let _x: u32 = 2u32; +} + + + +// Change Type ----------------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_type() { + let _x: u64 = 2; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_type() { + let _x: u8 = 2; +} + + + +// Change Mutability of Reference Type ----------------------------------------- +#[cfg(cfail1)] +pub fn change_mutability_of_reference_type() { + let _x: &u64; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_mutability_of_reference_type() { + let _x: &mut u64; +} + + + +// Change Mutability of Slot --------------------------------------------------- +#[cfg(cfail1)] +pub fn change_mutability_of_slot() { + let mut _x: u64 = 0; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_mutability_of_slot() { + let _x: u64 = 0; +} + + + +// Change Simple Binding to Pattern -------------------------------------------- +#[cfg(cfail1)] +pub fn change_simple_binding_to_pattern() { + let _x = (0u8, 'x'); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_simple_binding_to_pattern() { + let (_a, _b) = (0u8, 'x'); +} + + + +// Change Name in Pattern ------------------------------------------------------ +#[cfg(cfail1)] +pub fn change_name_in_pattern() { + let (_a, _b) = (1u8, 'y'); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] 
+#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_name_in_pattern() { + let (_a, _c) = (1u8, 'y'); +} + + + +// Add `ref` in Pattern -------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_ref_in_pattern() { + let (_a, _b) = (1u8, 'y'); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_ref_in_pattern() { + let (ref _a, _b) = (1u8, 'y'); +} + + + +// Add `&` in Pattern ---------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_amp_in_pattern() { + let (_a, _b) = (&1u8, 'y'); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_amp_in_pattern() { + let (&_a, _b) = (&1u8, 'y'); +} + + + +// Change Mutability of Binding in Pattern ------------------------------------- +#[cfg(cfail1)] +pub fn change_mutability_of_binding_in_pattern() { + let (_a, _b) = (99u8, 'q'); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_mutability_of_binding_in_pattern() { + let (mut _a, _b) = (99u8, 'q'); +} + + + +// Add Initializer ------------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_initializer() { + let _x: i16; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_initializer() { + let _x: i16 = 3i16; +} + + + +// Change Initializer ---------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_initializer() { + let _x = 4u16; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_initializer() { + let _x = 5u16; +} diff --git a/src/test/incremental/hashes/loop_expressions.rs b/src/test/incremental/hashes/loop_expressions.rs new file mode 100644 index 0000000000..da43ef3c46 --- /dev/null +++ b/src/test/incremental/hashes/loop_expressions.rs @@ -0,0 +1,247 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for `loop` loops. 
+ +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +// Change loop body ------------------------------------------------------------ +#[cfg(cfail1)] +fn change_loop_body() { + let mut _x = 0; + loop { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_loop_body() { + let mut _x = 0; + loop { + _x = 2; + break; + } +} + + + +// Add break ------------------------------------------------------------------- +#[cfg(cfail1)] +fn add_break() { + let mut _x = 0; + loop { + _x = 1; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_break() { + let mut _x = 0; + loop { + _x = 1; + break; + } +} + + + +// Add loop label -------------------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label() { + let mut _x = 0; + loop { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label() { + let mut _x = 0; + 'label: loop { + _x = 1; + break; + } +} + + + +// Add loop label to break ----------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_break() { + let mut _x = 0; + 'label: loop { + _x = 1; + break; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_break() { + let mut _x = 0; + 'label: loop { + _x = 1; + break 'label; + } +} + + + +// Change break label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_break_label() { + let mut _x = 0; + 'outer: loop { + 'inner: loop { + _x = 1; + break 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_break_label() { + let mut _x = 0; + 'outer: loop { + 'inner: loop { + _x = 1; + break 'outer; + } + } +} + + + +// Add loop label to continue -------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: loop { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] 
+#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: loop { + _x = 1; + continue 'label; + } +} + + + +// Change continue label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_label() { + let mut _x = 0; + 'outer: loop { + 'inner: loop { + _x = 1; + continue 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_label() { + let mut _x = 0; + 'outer: loop { + 'inner: loop { + _x = 1; + continue 'outer; + } + } +} + + + +// Change continue to break ---------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_to_break() { + let mut _x = 0; + loop { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_to_break() { + let mut _x = 0; + loop { + _x = 1; + break; + } +} diff --git a/src/test/incremental/hashes/match_expressions.rs b/src/test/incremental/hashes/match_expressions.rs new file mode 100644 index 0000000000..48f99b834c --- /dev/null +++ b/src/test/incremental/hashes/match_expressions.rs @@ -0,0 +1,368 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for match expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
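+//
+// The edits below all stay inside the `match` expression of a function body:
+// adding or reordering arms, adding or changing guards, `@`-bindings, `ref`
+// and `&` in patterns, and changing an arm's right-hand side. Each case is
+// therefore expected to dirty only `HirBody` in cfail2 and to leave `Hir`
+// and the exported metadata clean.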
+ +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + +// Add Arm --------------------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_arm(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => 100, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_arm(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + 2 => 2, + _ => 100, + } +} + + + +// Change Order Of Arms -------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_order_of_arms(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => 100, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_order_of_arms(x: u32) -> u32 { + match x { + 1 => 1, + 0 => 0, + _ => 100, + } +} + + + +// Add Guard Clause ------------------------------------------------------------ +#[cfg(cfail1)] +pub fn add_guard_clause(x: u32, y: bool) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => 100, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_guard_clause(x: u32, y: bool) -> u32 { + match x { + 0 => 0, + 1 if y => 1, + _ => 100, + } +} + + + +// Change Guard Clause ------------------------------------------------------------ +#[cfg(cfail1)] +pub fn change_guard_clause(x: u32, y: bool) -> u32 { + match x { + 0 => 0, + 1 if y => 1, + _ => 100, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_guard_clause(x: u32, y: bool) -> u32 { + match x { + 0 => 0, + 1 if !y => 1, + _ => 100, + } +} + + + +// Add @-Binding --------------------------------------------------------------- +#[cfg(cfail1)] +pub fn add_at_binding(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => x, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_at_binding(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + x @ _ => x, + } +} + + + +// Change Name of @-Binding ---------------------------------------------------- +#[cfg(cfail1)] +pub fn change_name_of_at_binding(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + x @ _ => 7, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] 
+#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_name_of_at_binding(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + y @ _ => 7, + } +} + + + +// Change Simple Binding To Pattern -------------------------------------------- +#[cfg(cfail1)] +pub fn change_simple_name_to_pattern(x: u32) -> u32 { + match (x, x & 1) { + (0, 0) => 0, + a => 1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_simple_name_to_pattern(x: u32) -> u32 { + match (x, x & 1) { + (0, 0) => 0, + (x, y) => 1 + } +} + + + +// Change Name In Pattern ------------------------------------------------------ +#[cfg(cfail1)] +pub fn change_name_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (a, 0) => 0, + (a, 1) => a, + _ => 100, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_name_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (b, 0) => 0, + (a, 1) => a, + _ => 100, + } +} + + + +// Change Mutability Of Binding In Pattern ------------------------------------- +#[cfg(cfail1)] +pub fn change_mutability_of_binding_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (a, 0) => 0, + _ => 1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_mutability_of_binding_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (mut a, 0) => 0, + _ => 1 + } +} + + + +// Add `ref` To Binding In Pattern ------------------------------------- +#[cfg(cfail1)] +pub fn add_ref_to_binding_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (a, 0) => 0, + _ => 1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_ref_to_binding_in_pattern(x: u32) -> u32 { + match (x, x & 1) { + (ref a, 0) => 0, + _ => 1, + } +} + + + +// Add `&` To Binding In Pattern ------------------------------------- +#[cfg(cfail1)] +pub fn add_amp_to_binding_in_pattern(x: u32) -> u32 { + match (&x, x & 1) { + (a, 0) => 0, + _ => 1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_amp_to_binding_in_pattern(x: u32) -> u32 { + match (&x, x & 1) { + (&a, 0) => 0, + _ => 1, + } +} + + + +// Change RHS Of Arm ----------------------------------------------------------- +#[cfg(cfail1)] +pub fn change_rhs_of_arm(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => 2, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] 
+#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn change_rhs_of_arm(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 3, + _ => 2, + } +} + + + +// Add Alternative To Arm ------------------------------------------------------ +#[cfg(cfail1)] +pub fn add_alternative_to_arm(x: u32) -> u32 { + match x { + 0 => 0, + 1 => 1, + _ => 2, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn add_alternative_to_arm(x: u32) -> u32 { + match x { + 0 | 7 => 0, + 1 => 3, + _ => 2, + } +} diff --git a/src/test/incremental/hashes/panic_exprs.rs b/src/test/incremental/hashes/panic_exprs.rs index f5f4c0042b..5d4d434fd6 100644 --- a/src/test/incremental/hashes/panic_exprs.rs +++ b/src/test/incremental/hashes/panic_exprs.rs @@ -34,9 +34,11 @@ pub fn indexing(slice: &[u8]) -> u8 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn indexing(slice: &[u8]) -> u8 { slice[100] @@ -50,9 +52,11 @@ pub fn arithmetic_overflow_plus(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_plus(val: i32) -> i32 { val + 1 @@ -66,9 +70,11 @@ pub fn arithmetic_overflow_minus(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_minus(val: i32) -> i32 { val - 1 @@ -82,9 +88,11 @@ pub fn arithmetic_overflow_mult(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_mult(val: i32) -> i32 { val * 2 @@ -98,9 +106,11 @@ pub fn arithmetic_overflow_negation(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_negation(val: i32) -> i32 { -val @@ -114,9 +124,11 @@ pub fn division_by_zero(val: i32) -> i32 { } #[cfg(not(cfail1))] 
-#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn division_by_zero(val: i32) -> i32 { 2 / val @@ -129,9 +141,11 @@ pub fn mod_by_zero(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn mod_by_zero(val: i32) -> i32 { 2 % val @@ -150,6 +164,8 @@ pub fn bitwise(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn bitwise(val: i32) -> i32 { @@ -166,6 +182,8 @@ pub fn logical(val1: bool, val2: bool, val3: bool) -> bool { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn logical(val1: bool, val2: bool, val3: bool) -> bool { diff --git a/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs b/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs index b84b7f5f37..b3fc8e2d36 100644 --- a/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs +++ b/src/test/incremental/hashes/panic_exprs_no_overflow_checks.rs @@ -41,9 +41,11 @@ pub fn indexing(slice: &[u8]) -> u8 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn indexing(slice: &[u8]) -> u8 { slice[100] @@ -58,9 +60,11 @@ pub fn arithmetic_overflow_plus_inherit(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] #[rustc_inherit_overflow_checks] pub fn arithmetic_overflow_plus_inherit(val: i32) -> i32 { @@ -76,9 +80,11 @@ pub fn arithmetic_overflow_minus_inherit(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] #[rustc_inherit_overflow_checks] pub fn arithmetic_overflow_minus_inherit(val: i32) -> i32 { @@ -94,9 +100,11 @@ pub fn arithmetic_overflow_mult_inherit(val: i32) -> i32 { } 
#[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] #[rustc_inherit_overflow_checks] pub fn arithmetic_overflow_mult_inherit(val: i32) -> i32 { @@ -112,9 +120,11 @@ pub fn arithmetic_overflow_negation_inherit(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] #[rustc_inherit_overflow_checks] pub fn arithmetic_overflow_negation_inherit(val: i32) -> i32 { @@ -129,9 +139,11 @@ pub fn division_by_zero(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn division_by_zero(val: i32) -> i32 { 2 / val @@ -144,9 +156,11 @@ pub fn mod_by_zero(val: i32) -> i32 { } #[cfg(not(cfail1))] -#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] -#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn mod_by_zero(val: i32) -> i32 { 2 % val @@ -165,6 +179,8 @@ pub fn bitwise(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn bitwise(val: i32) -> i32 { @@ -181,6 +197,8 @@ pub fn logical(val1: bool, val2: bool, val3: bool) -> bool { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn logical(val1: bool, val2: bool, val3: bool) -> bool { @@ -196,6 +214,8 @@ pub fn arithmetic_overflow_plus(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_plus(val: i32) -> i32 { @@ -212,6 +232,8 @@ pub fn arithmetic_overflow_minus(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_minus(val: i32) -> i32 { @@ -228,6 +250,8 @@ pub fn 
arithmetic_overflow_mult(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_mult(val: i32) -> i32 { @@ -244,6 +268,8 @@ pub fn arithmetic_overflow_negation(val: i32) -> i32 { #[cfg(not(cfail1))] #[rustc_clean(label="Hir", cfg="cfail2")] #[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_clean(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] #[rustc_metadata_clean(cfg="cfail2")] #[rustc_metadata_clean(cfg="cfail3")] pub fn arithmetic_overflow_negation(val: i32) -> i32 { diff --git a/src/test/incremental/hashes/struct_constructors.rs b/src/test/incremental/hashes/struct_constructors.rs new file mode 100644 index 0000000000..0e23d953ba --- /dev/null +++ b/src/test/incremental/hashes/struct_constructors.rs @@ -0,0 +1,280 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for struct constructor expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +struct RegularStruct { + x: i32, + y: i64, + z: i16, +} + +// Change field value (regular struct) ----------------------------------------- +#[cfg(cfail1)] +fn change_field_value_regular_struct() -> RegularStruct { + RegularStruct { + x: 0, + y: 1, + z: 2, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_value_regular_struct() -> RegularStruct { + RegularStruct { + x: 0, + y: 2, + z: 2, + } +} + + + +// Change field order (regular struct) ----------------------------------------- +#[cfg(cfail1)] +fn change_field_order_regular_struct() -> RegularStruct { + RegularStruct { + x: 3, + y: 4, + z: 5, + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_order_regular_struct() -> RegularStruct { + RegularStruct { + y: 4, + x: 3, + z: 5, + } +} + + + +// Add field (regular struct) -------------------------------------------------- +#[cfg(cfail1)] +fn add_field_regular_struct() -> RegularStruct { + let struct1 = RegularStruct { + x: 3, + y: 4, + z: 5, + }; + + RegularStruct { + x: 7, + .. 
struct1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_field_regular_struct() -> RegularStruct { + let struct1 = RegularStruct { + x: 3, + y: 4, + z: 5, + }; + + RegularStruct { + x: 7, + y: 8, + .. struct1 + } +} + + + +// Change field label (regular struct) ----------------------------------------- +#[cfg(cfail1)] +fn change_field_label_regular_struct() -> RegularStruct { + let struct1 = RegularStruct { + x: 3, + y: 4, + z: 5, + }; + + RegularStruct { + x: 7, + y: 9, + .. struct1 + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_label_regular_struct() -> RegularStruct { + let struct1 = RegularStruct { + x: 3, + y: 4, + z: 5, + }; + + RegularStruct { + x: 7, + z: 9, + .. struct1 + } +} + + + +struct RegularStruct2 { + x: i8, + y: i8, + z: i8, +} + +// Change constructor path (regular struct) ------------------------------------ +#[cfg(cfail1)] +fn change_constructor_path_regular_struct() { + let _ = RegularStruct { + x: 0, + y: 1, + z: 2, + }; +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_constructor_path_regular_struct() { + let _ = RegularStruct2 { + x: 0, + y: 1, + z: 2, + }; +} + + + +// Change constructor path indirectly (regular struct) ------------------------- +mod change_constructor_path_indirectly_regular_struct { + #[cfg(cfail1)] + use super::RegularStruct as Struct; + #[cfg(not(cfail1))] + use super::RegularStruct2 as Struct; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_dirty(label="HirBody", cfg="cfail2")] + #[rustc_clean(label="HirBody", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn function() -> Struct { + Struct { + x: 0, + y: 1, + z: 2, + } + } +} + + + +struct TupleStruct(i32, i64, i16); + +// Change field value (tuple struct) ------------------------------------------- +#[cfg(cfail1)] +fn change_field_value_tuple_struct() -> TupleStruct { + TupleStruct(0, 1, 2) +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_field_value_tuple_struct() -> TupleStruct { + TupleStruct(0, 1, 3) +} + + + +struct TupleStruct2(u16, u16, u16); + +// Change constructor path (tuple struct) -------------------------------------- +#[cfg(cfail1)] +fn change_constructor_path_tuple_struct() { + let _ = TupleStruct(0, 1, 2); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn 
change_constructor_path_tuple_struct() {
+    let _ = TupleStruct2(0, 1, 2);
+}
+
+
+
+// Change constructor path indirectly (tuple struct) ---------------------------
+mod change_constructor_path_indirectly_tuple_struct {
+    #[cfg(cfail1)]
+    use super::TupleStruct as Struct;
+    #[cfg(not(cfail1))]
+    use super::TupleStruct2 as Struct;
+
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_dirty(label="HirBody", cfg="cfail2")]
+    #[rustc_clean(label="HirBody", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn function() -> Struct {
+        Struct(0, 1, 2)
+    }
+}
diff --git a/src/test/incremental/hashes/trait_impls.rs b/src/test/incremental/hashes/trait_impls.rs
new file mode 100644
index 0000000000..500aaf5232
--- /dev/null
+++ b/src/test/incremental/hashes/trait_impls.rs
@@ -0,0 +1,404 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+
+// This test case tests the incremental compilation hash (ICH) implementation
+// for trait implementations.
+
+// The general pattern followed here is: Change one thing between rev1 and rev2
+// and make sure that the hash has changed, then change nothing between rev2 and
+// rev3 and make sure that the hash has not changed.
+
+// must-compile-successfully
+// revisions: cfail1 cfail2 cfail3
+// compile-flags: -Z query-dep-graph
+
+
+#![allow(warnings)]
+#![feature(rustc_attrs)]
+#![feature(specialization)]
+#![crate_type="rlib"]
+
+struct Foo;
+
+// Change Method Name -----------------------------------------------------------
+
+#[cfg(cfail1)]
+pub trait ChangeMethodNameTrait {
+    fn method_name();
+}
+
+#[cfg(cfail1)]
+impl ChangeMethodNameTrait for Foo {
+    fn method_name() { }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+pub trait ChangeMethodNameTrait {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name2();
+}
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeMethodNameTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name2() { }
+}
+
+// Change Method Body -----------------------------------------------------------
+//
+// This should affect the method itself, but not the trait.
+
+pub trait ChangeMethodBodyTrait {
+    fn method_name();
+}
+
+#[cfg(cfail1)]
+impl ChangeMethodBodyTrait for Foo {
+    fn method_name() { }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeMethodBodyTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name() {
+        ()
+    }
+}
+
+// Change Method Selfness -----------------------------------------------------------
+
+#[cfg(cfail1)]
+pub trait ChangeMethodSelfnessTrait {
+    fn method_name();
+}
+
+#[cfg(cfail1)]
+impl ChangeMethodSelfnessTrait for Foo {
+    fn method_name() { }
+}
+
+#[cfg(not(cfail1))]
+pub trait ChangeMethodSelfnessTrait {
+    fn method_name(&self);
+}
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeMethodSelfnessTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name(&self) {
+        ()
+    }
+}
+
+// Remove Method Selfness -----------------------------------------------------------
+
+#[cfg(cfail1)]
+pub trait RemoveMethodSelfnessTrait {
+    fn method_name(&self);
+}
+
+#[cfg(cfail1)]
+impl RemoveMethodSelfnessTrait for Foo {
+    fn method_name(&self) { }
+}
+
+#[cfg(not(cfail1))]
+pub trait RemoveMethodSelfnessTrait {
+    fn method_name();
+}
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl RemoveMethodSelfnessTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name() {
+        ()
+    }
+}
+
+// Change Method Selfmutness -----------------------------------------------------------
+
+#[cfg(cfail1)]
+pub trait ChangeMethodSelfmutnessTrait {
+    fn method_name(&self);
+}
+
+#[cfg(cfail1)]
+impl ChangeMethodSelfmutnessTrait for Foo {
+    fn method_name(&self) { }
+}
+
+#[cfg(not(cfail1))]
+pub trait ChangeMethodSelfmutnessTrait {
+    fn method_name(&mut self);
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeMethodSelfmutnessTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name(&mut self) {
+        ()
+    }
+}
+
+// Change item kind -----------------------------------------------------------
+
+#[cfg(cfail1)]
+pub trait ChangeItemKindTrait {
+    fn name();
+}
+
+#[cfg(cfail1)]
+impl ChangeItemKindTrait for Foo {
+    fn name() { }
+}
+
+#[cfg(not(cfail1))]
+pub trait ChangeItemKindTrait {
+    type name;
+}
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeItemKindTrait for Foo {
+    type name = ();
+}
+
+// Remove item
----------------------------------------------------------- + +#[cfg(cfail1)] +pub trait RemoveItemTrait { + type TypeName; + fn method_name(); +} + +#[cfg(cfail1)] +impl RemoveItemTrait for Foo { + type TypeName = (); + fn method_name() { } +} + +#[cfg(not(cfail1))] +pub trait RemoveItemTrait { + type TypeName; +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl RemoveItemTrait for Foo { + type TypeName = (); +} + +// Add item ----------------------------------------------------------- + +#[cfg(cfail1)] +pub trait AddItemTrait { + type TypeName; +} + +#[cfg(cfail1)] +impl AddItemTrait for Foo { + type TypeName = (); +} + +#[cfg(not(cfail1))] +pub trait AddItemTrait { + type TypeName; + fn method_name(); +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl AddItemTrait for Foo { + type TypeName = (); + fn method_name() { } +} + +// Change has-value ----------------------------------------------------------- + +#[cfg(cfail1)] +pub trait ChangeHasValueTrait { + fn method_name(); +} + +#[cfg(cfail1)] +impl ChangeHasValueTrait for Foo { + fn method_name() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub trait ChangeHasValueTrait { + fn method_name() { } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl ChangeHasValueTrait for Foo { + fn method_name() { } +} + +// Add default + +pub trait AddDefaultTrait { + fn method_name(); +} + +#[cfg(cfail1)] +impl AddDefaultTrait for Foo { + fn method_name() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl AddDefaultTrait for Foo { + default fn method_name() { } +} + +// Remove default + +pub trait RemoveDefaultTrait { + fn method_name(); +} + +#[cfg(cfail1)] +impl RemoveDefaultTrait for Foo { + default fn method_name() { } +} + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl RemoveDefaultTrait for Foo { + fn method_name() { } +} + +// Add arguments + +#[cfg(cfail1)] +pub trait AddArgumentTrait { + fn method_name(&self); +} + +#[cfg(cfail1)] +impl AddArgumentTrait for Foo { + fn method_name(&self) { } +} + +#[cfg(not(cfail1))] +pub trait AddArgumentTrait { + fn method_name(&self, x: u32); +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_dirty(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +impl AddArgumentTrait for Foo { + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + fn method_name(&self, _x: u32) { } +} + +// Change argument type + +#[cfg(cfail1)] +pub trait ChangeArgumentTypeTrait { + fn method_name(&self, x: u32); +} + +#[cfg(cfail1)] +impl ChangeArgumentTypeTrait for Foo { + fn 
method_name(&self, _x: u32) { }
+}
+
+#[cfg(not(cfail1))]
+pub trait ChangeArgumentTypeTrait {
+    fn method_name(&self, x: char);
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_dirty(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+impl ChangeArgumentTypeTrait for Foo {
+    #[rustc_dirty(label="Hir", cfg="cfail2")]
+    #[rustc_clean(label="Hir", cfg="cfail3")]
+    #[rustc_metadata_dirty(cfg="cfail2")]
+    #[rustc_metadata_clean(cfg="cfail3")]
+    fn method_name(&self, _x: char) { }
+}
+
diff --git a/src/test/incremental/hashes/type_defs.rs b/src/test/incremental/hashes/type_defs.rs
new file mode 100644
index 0000000000..35fb583cd4
--- /dev/null
+++ b/src/test/incremental/hashes/type_defs.rs
@@ -0,0 +1,249 @@
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+
+// This test case tests the incremental compilation hash (ICH) implementation
+// for `type` definitions.
+
+// The general pattern followed here is: Change one thing between rev1 and rev2
+// and make sure that the hash has changed, then change nothing between rev2 and
+// rev3 and make sure that the hash has not changed.
+
+// We also test the ICH for `type` definitions exported in metadata. Same as
+// above, we want to make sure that the change between rev1 and rev2 also
+// results in a change of the ICH for the type definition's metadata, and that
+// it stays the same between rev2 and rev3.
+
+// must-compile-successfully
+// revisions: cfail1 cfail2 cfail3
+// compile-flags: -Z query-dep-graph
+
+#![allow(warnings)]
+#![feature(rustc_attrs)]
+#![crate_type="rlib"]
+
+
+// Change type (primitive) -----------------------------------------------------
+#[cfg(cfail1)]
+type ChangePrimitiveType = i32;
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail3")]
+type ChangePrimitiveType = i64;
+
+
+
+// Change mutability -----------------------------------------------------------
+#[cfg(cfail1)]
+type ChangeMutability = &'static i32;
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail3")]
+type ChangeMutability = &'static mut i32;
+
+
+
+// Change lifetime -------------------------------------------------------------
+#[cfg(cfail1)]
+type ChangeLifetime<'a> = (&'static i32, &'a i32);
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail3")]
+type ChangeLifetime<'a> = (&'a i32, &'a i32);
+
+
+
+// Change type (struct) -----------------------------------------------------------
+struct Struct1;
+struct Struct2;
+
+#[cfg(cfail1)]
+type ChangeTypeStruct = Struct1;
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail3")]
+type ChangeTypeStruct = Struct2;
+
+
+
+// Change type (tuple) ---------------------------------------------------------
+#[cfg(cfail1)]
+type ChangeTypeTuple = (u32, u64);
+
+#[cfg(not(cfail1))]
+#[rustc_dirty(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail3")] +type ChangeTypeTuple = (u32, i64); + + + +// Change type (enum) ---------------------------------------------------------- +enum Enum1 { + Var1, + Var2, +} +enum Enum2 { + Var1, + Var2, +} + +#[cfg(cfail1)] +type ChangeTypeEnum = Enum1; + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type ChangeTypeEnum = Enum2; + + + +// Add tuple field ------------------------------------------------------------- +#[cfg(cfail1)] +type AddTupleField = (i32, i64); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddTupleField = (i32, i64, i16); + + + +// Change nested tuple field --------------------------------------------------- +#[cfg(cfail1)] +type ChangeNestedTupleField = (i32, (i64, i16)); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type ChangeNestedTupleField = (i32, (i64, i8)); + + + +// Add type param -------------------------------------------------------------- +#[cfg(cfail1)] +type AddTypeParam = (T1, T1); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddTypeParam = (T1, T2); + + + +// Add type param bound -------------------------------------------------------- +#[cfg(cfail1)] +type AddTypeParamBound = (T1, u32); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddTypeParamBound = (T1, u32); + + + +// Add type param bound in where clause ---------------------------------------- +#[cfg(cfail1)] +type AddTypeParamBoundWhereClause where T1: Clone = (T1, u32); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddTypeParamBoundWhereClause where T1: Clone+Copy = (T1, u32); + + + +// Add lifetime param ---------------------------------------------------------- +#[cfg(cfail1)] +type AddLifetimeParam<'a> = (&'a u32, &'a u32); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddLifetimeParam<'a, 'b> = (&'a u32, &'b u32); + + + +// Add lifetime param bound ---------------------------------------------------- +#[cfg(cfail1)] +type AddLifetimeParamBound<'a, 'b> = (&'a u32, &'b u32); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddLifetimeParamBound<'a, 'b: 'a> = (&'a u32, &'b u32); + + + +// Add lifetime param bound in where clause ------------------------------------ +#[cfg(cfail1)] +type AddLifetimeParamBoundWhereClause<'a, 'b, 'c> +where 'b: 'a + = (&'a u32, &'b u32, &'c u32); + +#[cfg(not(cfail1))] +#[rustc_dirty(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail3")] +type AddLifetimeParamBoundWhereClause<'a, 'b, 'c> +where 'b: 'a, + 'c: 'a + = (&'a u32, &'b u32, &'c u32); + + + +// Change Trait Bound Indirectly ----------------------------------------------- +trait ReferencedTrait1 {} +trait ReferencedTrait2 {} + +mod change_trait_bound_indirectly { + #[cfg(cfail1)] + use 
super::ReferencedTrait1 as Trait; + #[cfg(not(cfail1))] + use super::ReferencedTrait2 as Trait; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + type ChangeTraitBoundIndirectly = (T, u32); +} + + + +// Change Trait Bound Indirectly In Where Clause ------------------------------- +mod change_trait_bound_indirectly_in_where_clause { + #[cfg(cfail1)] + use super::ReferencedTrait1 as Trait; + #[cfg(not(cfail1))] + use super::ReferencedTrait2 as Trait; + + #[rustc_dirty(label="Hir", cfg="cfail2")] + #[rustc_clean(label="Hir", cfg="cfail3")] + #[rustc_metadata_dirty(cfg="cfail2")] + #[rustc_metadata_clean(cfg="cfail3")] + type ChangeTraitBoundIndirectly where T : Trait = (T, u32); +} diff --git a/src/test/incremental/hashes/unary_and_binary_exprs.rs b/src/test/incremental/hashes/unary_and_binary_exprs.rs new file mode 100644 index 0000000000..05b0dec4e7 --- /dev/null +++ b/src/test/incremental/hashes/unary_and_binary_exprs.rs @@ -0,0 +1,570 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for unary and binary expressions. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. + +// must-compile-successfully +// revisions: cfail1 cfail2 cfail3 +// compile-flags: -Z query-dep-graph -Z force-overflow-checks=off + +#![allow(warnings)] +#![feature(rustc_attrs)] +#![crate_type="rlib"] + + +// Change constant operand of negation ----------------------------------------- +#[cfg(cfail1)] +pub fn const_negation() -> i32 { + -10 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn const_negation() -> i32 { + -1 +} + + + +// Change constant operand of bitwise not -------------------------------------- +#[cfg(cfail1)] +pub fn const_bitwise_not() -> i32 { + !100 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn const_bitwise_not() -> i32 { + !99 +} + + + +// Change variable operand of negation ----------------------------------------- +#[cfg(cfail1)] +pub fn var_negation(x: i32, y: i32) -> i32 { + -x +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn var_negation(x: i32, y: i32) -> i32 { + -y +} + + + +// Change variable operand of bitwise not -------------------------------------- +#[cfg(cfail1)] +pub fn var_bitwise_not(x: i32, 
y: i32) -> i32 { + !x +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn var_bitwise_not(x: i32, y: i32) -> i32 { + !y +} + + + +// Change variable operand of deref -------------------------------------------- +#[cfg(cfail1)] +pub fn var_deref(x: &i32, y: &i32) -> i32 { + *x +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn var_deref(x: &i32, y: &i32) -> i32 { + *y +} + + + +// Change first constant operand of addition ----------------------------------- +#[cfg(cfail1)] +pub fn first_const_add() -> i32 { + 1 + 3 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn first_const_add() -> i32 { + 2 + 3 +} + + + +// Change second constant operand of addition ----------------------------------- +#[cfg(cfail1)] +pub fn second_const_add() -> i32 { + 1 + 2 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn second_const_add() -> i32 { + 1 + 3 +} + + + +// Change first variable operand of addition ----------------------------------- +#[cfg(cfail1)] +pub fn first_var_add(a: i32, b: i32) -> i32 { + a + 2 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn first_var_add(a: i32, b: i32) -> i32 { + b + 2 +} + + + +// Change second variable operand of addition ---------------------------------- +#[cfg(cfail1)] +pub fn second_var_add(a: i32, b: i32) -> i32 { + 1 + a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn second_var_add(a: i32, b: i32) -> i32 { + 1 + b +} + + + +// Change operator from + to - ------------------------------------------------- +#[cfg(cfail1)] +pub fn plus_to_minus(a: i32) -> i32 { + 1 + a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn plus_to_minus(a: i32) -> i32 { + 1 - a +} + + + +// Change operator from + to * ------------------------------------------------- +#[cfg(cfail1)] +pub fn plus_to_mult(a: i32) -> i32 { + 1 + a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", 
cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn plus_to_mult(a: i32) -> i32 { + 1 * a +} + + + +// Change operator from + to / ------------------------------------------------- +#[cfg(cfail1)] +pub fn plus_to_div(a: i32) -> i32 { + 1 + a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn plus_to_div(a: i32) -> i32 { + 1 / a +} + + + +// Change operator from + to % ------------------------------------------------- +#[cfg(cfail1)] +pub fn plus_to_mod(a: i32) -> i32 { + 1 + a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn plus_to_mod(a: i32) -> i32 { + 1 % a +} + + + +// Change operator from && to || ----------------------------------------------- +#[cfg(cfail1)] +pub fn and_to_or(a: bool, b: bool) -> bool { + a && b +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn and_to_or(a: bool, b: bool) -> bool { + a || b +} + + + +// Change operator from & to | ------------------------------------------------- +#[cfg(cfail1)] +pub fn bitwise_and_to_bitwise_or(a: i32) -> i32 { + 1 & a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn bitwise_and_to_bitwise_or(a: i32) -> i32 { + 1 | a +} + + + +// Change operator from & to ^ ------------------------------------------------- +#[cfg(cfail1)] +pub fn bitwise_and_to_bitwise_xor(a: i32) -> i32 { + 1 & a +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn bitwise_and_to_bitwise_xor(a: i32) -> i32 { + 1 ^ a +} + + + +// Change operator from & to << ------------------------------------------------ +#[cfg(cfail1)] +pub fn bitwise_and_to_lshift(a: i32) -> i32 { + a & 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn bitwise_and_to_lshift(a: i32) -> i32 { + a << 1 +} + + + +// Change operator from & to >> ------------------------------------------------ +#[cfg(cfail1)] +pub fn bitwise_and_to_rshift(a: i32) -> i32 { + a & 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] 
+#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn bitwise_and_to_rshift(a: i32) -> i32 { + a >> 1 +} + + + +// Change operator from == to != ----------------------------------------------- +#[cfg(cfail1)] +pub fn eq_to_uneq(a: i32) -> bool { + a == 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn eq_to_uneq(a: i32) -> bool { + a != 1 +} + + + +// Change operator from == to < ------------------------------------------------ +#[cfg(cfail1)] +pub fn eq_to_lt(a: i32) -> bool { + a == 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn eq_to_lt(a: i32) -> bool { + a < 1 +} + + + +// Change operator from == to > ------------------------------------------------ +#[cfg(cfail1)] +pub fn eq_to_gt(a: i32) -> bool { + a == 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn eq_to_gt(a: i32) -> bool { + a > 1 +} + + + +// Change operator from == to <= ----------------------------------------------- +#[cfg(cfail1)] +pub fn eq_to_le(a: i32) -> bool { + a == 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn eq_to_le(a: i32) -> bool { + a <= 1 +} + + + +// Change operator from == to >= ----------------------------------------------- +#[cfg(cfail1)] +pub fn eq_to_ge(a: i32) -> bool { + a == 1 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn eq_to_ge(a: i32) -> bool { + a >= 1 +} + + + +// Change type in cast expression ---------------------------------------------- +#[cfg(cfail1)] +pub fn type_cast(a: u8) -> u64 { + let b = a as i32; + let c = b as u64; + c +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn type_cast(a: u8) -> u64 { + let b = a as u32; + let c = b as u64; + c +} + + + +// Change value in cast expression --------------------------------------------- +#[cfg(cfail1)] +pub fn value_cast(a: u32) -> i32 { + 1 as i32 +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] 
+#[rustc_metadata_clean(cfg="cfail3")] +pub fn value_cast(a: u32) -> i32 { + 2 as i32 +} + + + +// Change l-value in assignment ------------------------------------------------ +#[cfg(cfail1)] +pub fn lvalue() -> i32 { + let mut x = 10; + let mut y = 11; + x = 9; + x +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn lvalue() -> i32 { + let mut x = 10; + let mut y = 11; + y = 9; + x +} + + + +// Change r-value in assignment ------------------------------------------------ +#[cfg(cfail1)] +pub fn rvalue() -> i32 { + let mut x = 10; + x = 9; + x +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn rvalue() -> i32 { + let mut x = 10; + x = 8; + x +} + + + +// Change index into slice ----------------------------------------------------- +#[cfg(cfail1)] +pub fn index_to_slice(s: &[u8], i: usize, j: usize) -> u8 { + s[i] +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +pub fn index_to_slice(s: &[u8], i: usize, j: usize) -> u8 { + s[j] +} diff --git a/src/test/incremental/hashes/while_let_loops.rs b/src/test/incremental/hashes/while_let_loops.rs new file mode 100644 index 0000000000..f4fd7e709b --- /dev/null +++ b/src/test/incremental/hashes/while_let_loops.rs @@ -0,0 +1,274 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for `while let` loops. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
+
+// must-compile-successfully
+// revisions: cfail1 cfail2 cfail3
+// compile-flags: -Z query-dep-graph
+
+#![allow(warnings)]
+#![feature(rustc_attrs)]
+#![crate_type="rlib"]
+
+
+// Change loop body ------------------------------------------------------------
+#[cfg(cfail1)]
+fn change_loop_body() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn change_loop_body() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 2;
+        break;
+    }
+}
+
+
+
+// Change loop condition -------------------------------------------------------
+#[cfg(cfail1)]
+fn change_loop_condition() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn change_loop_condition() {
+    let mut _x = 0;
+    while let Some(1u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add break -------------------------------------------------------------------
+#[cfg(cfail1)]
+fn add_break() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 1;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_break() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add loop label --------------------------------------------------------------
+#[cfg(cfail1)]
+fn add_loop_label() {
+    let mut _x = 0;
+    while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_loop_label() {
+    let mut _x = 0;
+    'label: while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add loop label to break -----------------------------------------------------
+#[cfg(cfail1)]
+fn add_loop_label_to_break() {
+    let mut _x = 0;
+    'label: while let Some(0u32) = None {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_loop_label_to_break() {
+    let mut _x = 0;
+    'label: while let Some(0u32) = None {
+        _x = 1;
+        break 'label;
+    }
+}
+
+
+
+// Change break label ----------------------------------------------------------
+#[cfg(cfail1)]
+fn change_break_label() {
+    let mut _x = 0;
+    'outer: while let Some(0u32) = None {
+        'inner: while let Some(0u32) = None {
+            _x = 1;
+            break 'inner;
+        }
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody",
cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_break_label() { + let mut _x = 0; + 'outer: while let Some(0u32) = None { + 'inner: while let Some(0u32) = None { + _x = 1; + break 'outer; + } + } +} + + + +// Add loop label to continue -------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: while let Some(0u32) = None { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: while let Some(0u32) = None { + _x = 1; + continue 'label; + } +} + + + +// Change continue label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_label() { + let mut _x = 0; + 'outer: while let Some(0u32) = None { + 'inner: while let Some(0u32) = None { + _x = 1; + continue 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_label() { + let mut _x = 0; + 'outer: while let Some(0u32) = None { + 'inner: while let Some(0u32) = None { + _x = 1; + continue 'outer; + } + } +} + + + +// Change continue to break ---------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_to_break() { + let mut _x = 0; + while let Some(0u32) = None { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_to_break() { + let mut _x = 0; + while let Some(0u32) = None { + _x = 1; + break; + } +} diff --git a/src/test/incremental/hashes/while_loops.rs b/src/test/incremental/hashes/while_loops.rs new file mode 100644 index 0000000000..aa70d7e9fc --- /dev/null +++ b/src/test/incremental/hashes/while_loops.rs @@ -0,0 +1,274 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// This test case tests the incremental compilation hash (ICH) implementation +// for `while` loops. + +// The general pattern followed here is: Change one thing between rev1 and rev2 +// and make sure that the hash has changed, then change nothing between rev2 and +// rev3 and make sure that the hash has not changed. 
+
+// must-compile-successfully
+// revisions: cfail1 cfail2 cfail3
+// compile-flags: -Z query-dep-graph
+
+#![allow(warnings)]
+#![feature(rustc_attrs)]
+#![crate_type="rlib"]
+
+
+// Change loop body ------------------------------------------------------------
+#[cfg(cfail1)]
+fn change_loop_body() {
+    let mut _x = 0;
+    while true {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn change_loop_body() {
+    let mut _x = 0;
+    while true {
+        _x = 2;
+        break;
+    }
+}
+
+
+
+// Change loop condition -------------------------------------------------------
+#[cfg(cfail1)]
+fn change_loop_condition() {
+    let mut _x = 0;
+    while true {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn change_loop_condition() {
+    let mut _x = 0;
+    while false {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add break -------------------------------------------------------------------
+#[cfg(cfail1)]
+fn add_break() {
+    let mut _x = 0;
+    while true {
+        _x = 1;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_break() {
+    let mut _x = 0;
+    while true {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add loop label --------------------------------------------------------------
+#[cfg(cfail1)]
+fn add_loop_label() {
+    let mut _x = 0;
+    while true {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_loop_label() {
+    let mut _x = 0;
+    'label: while true {
+        _x = 1;
+        break;
+    }
+}
+
+
+
+// Add loop label to break -----------------------------------------------------
+#[cfg(cfail1)]
+fn add_loop_label_to_break() {
+    let mut _x = 0;
+    'label: while true {
+        _x = 1;
+        break;
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn add_loop_label_to_break() {
+    let mut _x = 0;
+    'label: while true {
+        _x = 1;
+        break 'label;
+    }
+}
+
+
+
+// Change break label ----------------------------------------------------------
+#[cfg(cfail1)]
+fn change_break_label() {
+    let mut _x = 0;
+    'outer: while true {
+        'inner: while true {
+            _x = 1;
+            break 'inner;
+        }
+    }
+}
+
+#[cfg(not(cfail1))]
+#[rustc_clean(label="Hir", cfg="cfail2")]
+#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_metadata_clean(cfg="cfail2")]
+#[rustc_metadata_clean(cfg="cfail3")]
+fn change_break_label() {
+    let mut _x = 0;
+    'outer: while true {
+
'inner: while true { + _x = 1; + break 'outer; + } + } +} + + + +// Add loop label to continue -------------------------------------------------- +#[cfg(cfail1)] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: while true { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn add_loop_label_to_continue() { + let mut _x = 0; + 'label: while true { + _x = 1; + continue 'label; + } +} + + + +// Change continue label ---------------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_label() { + let mut _x = 0; + 'outer: while true { + 'inner: while true { + _x = 1; + continue 'inner; + } + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_label() { + let mut _x = 0; + 'outer: while true { + 'inner: while true { + _x = 1; + continue 'outer; + } + } +} + + + +// Change continue to break ---------------------------------------------------- +#[cfg(cfail1)] +fn change_continue_to_break() { + let mut _x = 0; + while true { + _x = 1; + continue; + } +} + +#[cfg(not(cfail1))] +#[rustc_clean(label="Hir", cfg="cfail2")] +#[rustc_clean(label="Hir", cfg="cfail3")] +#[rustc_dirty(label="HirBody", cfg="cfail2")] +#[rustc_clean(label="HirBody", cfg="cfail3")] +#[rustc_metadata_clean(cfg="cfail2")] +#[rustc_metadata_clean(cfg="cfail3")] +fn change_continue_to_break() { + let mut _x = 0; + while true { + _x = 1; + break; + } +} diff --git a/src/test/incremental/hello_world.rs b/src/test/incremental/hello_world.rs index a06c25ac05..b7f90c09b5 100644 --- a/src/test/incremental/hello_world.rs +++ b/src/test/incremental/hello_world.rs @@ -18,12 +18,12 @@ fn main() { } mod x { #[cfg(rpass1)] - pub fn x() -> i32 { + pub fn xxxx() -> i32 { 1 } #[cfg(rpass2)] - pub fn x() -> i32 { + pub fn xxxx() -> i32 { 2 } } @@ -31,9 +31,9 @@ mod x { mod y { use x; - #[rustc_dirty(label="TypeckItemBody", cfg="rpass2")] - pub fn y() { - x::x(); + #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] + pub fn yyyy() { + x::xxxx(); } } @@ -42,6 +42,6 @@ mod z { #[rustc_clean(label="TypeckItemBody", cfg="rpass2")] pub fn z() { - y::y(); + y::yyyy(); } } diff --git a/src/test/incremental/ich_method_call_trait_scope.rs b/src/test/incremental/ich_method_call_trait_scope.rs index f28ecf74dd..0a36e3c693 100644 --- a/src/test/incremental/ich_method_call_trait_scope.rs +++ b/src/test/incremental/ich_method_call_trait_scope.rs @@ -46,12 +46,14 @@ mod mod3 { mod mod3 { use Trait2; - #[rustc_dirty(label="Hir", cfg="rpass2")] + #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_dirty(label="HirBody", cfg="rpass2")] fn bar() { ().method(); } #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_clean(label="HirBody", cfg="rpass2")] fn baz() { 22; // no method call, traits in scope don't matter } diff --git a/src/test/incremental/ich_nested_items.rs b/src/test/incremental/ich_nested_items.rs index 4466cfb131..e8e40d57b1 100644 --- a/src/test/incremental/ich_nested_items.rs +++ b/src/test/incremental/ich_nested_items.rs @@ -23,11 +23,14 @@ fn foo() { #[cfg(rpass2)] #[rustc_clean(label="Hir", 
cfg="rpass2")] +#[rustc_clean(label="HirBody", cfg="rpass2")] fn foo() { #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_clean(label="HirBody", cfg="rpass2")] fn baz() { } // order is different... #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_clean(label="HirBody", cfg="rpass2")] fn bar() { } // but that doesn't matter. fn bap() { } // neither does adding a new item diff --git a/src/test/incremental/ich_resolve_results.rs b/src/test/incremental/ich_resolve_results.rs index 680a91da09..49a88c530f 100644 --- a/src/test/incremental/ich_resolve_results.rs +++ b/src/test/incremental/ich_resolve_results.rs @@ -45,11 +45,13 @@ mod mod3 { use test; #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_clean(label="HirBody", cfg="rpass2")] fn in_expr() { Foo(0); } #[rustc_clean(label="Hir", cfg="rpass2")] + #[rustc_clean(label="HirBody", cfg="rpass2")] fn in_type() { test::(); } @@ -60,12 +62,14 @@ mod mod3 { use test; use mod2::Foo; // <-- This changed! - #[rustc_dirty(label="Hir", cfg="rpass3")] + #[rustc_clean(label="Hir", cfg="rpass3")] + #[rustc_dirty(label="HirBody", cfg="rpass3")] fn in_expr() { Foo(0); } - #[rustc_dirty(label="Hir", cfg="rpass3")] + #[rustc_clean(label="Hir", cfg="rpass3")] + #[rustc_dirty(label="HirBody", cfg="rpass3")] fn in_type() { test::(); } diff --git a/src/test/incremental/issue-38222.rs b/src/test/incremental/issue-38222.rs new file mode 100644 index 0000000000..d14b1cfd6c --- /dev/null +++ b/src/test/incremental/issue-38222.rs @@ -0,0 +1,40 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Test that debuginfo does not introduce a dependency edge to the Krate +// dep-node. + +// revisions:rpass1 rpass2 + +#![feature(rustc_attrs)] + + +#![rustc_partition_translated(module="issue_38222-mod1", cfg="rpass2")] + +// If trans had added a dependency edge to the Krate dep-node, nothing would +// be re-used, so checking that this module was re-used is sufficient. 
+#![rustc_partition_reused(module="issue_38222", cfg="rpass2")] + +//[rpass1] compile-flags: -C debuginfo=1 +//[rpass2] compile-flags: -C debuginfo=1 + +pub fn main() { + mod1::some_fn(); +} + +mod mod1 { + pub fn some_fn() { + let _ = 1; + } + + #[cfg(rpass2)] + fn _some_other_fn() { + } +} diff --git a/src/test/incremental/source_loc_macros.rs b/src/test/incremental/source_loc_macros.rs index f922ac0da4..36d1b3ecbc 100644 --- a/src/test/incremental/source_loc_macros.rs +++ b/src/test/incremental/source_loc_macros.rs @@ -18,16 +18,19 @@ #![feature(rustc_attrs)] #[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_clean(label="HirBody", cfg="rpass2")] fn line_same() { let _ = line!(); } #[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_clean(label="HirBody", cfg="rpass2")] fn col_same() { let _ = column!(); } #[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_clean(label="HirBody", cfg="rpass2")] fn file_same() { let _ = file!(); } @@ -38,7 +41,8 @@ fn line_different() { } #[cfg(rpass2)] -#[rustc_dirty(label="Hir", cfg="rpass2")] +#[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_dirty(label="HirBody", cfg="rpass2")] fn line_different() { let _ = line!(); } @@ -49,7 +53,8 @@ fn col_different() { } #[cfg(rpass2)] -#[rustc_dirty(label="Hir", cfg="rpass2")] +#[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_dirty(label="HirBody", cfg="rpass2")] fn col_different() { let _ = column!(); } diff --git a/src/test/incremental/spans_insignificant_w_o_debuginfo.rs b/src/test/incremental/spans_insignificant_w_o_debuginfo.rs index 9c8b855249..90ec4a9d55 100644 --- a/src/test/incremental/spans_insignificant_w_o_debuginfo.rs +++ b/src/test/incremental/spans_insignificant_w_o_debuginfo.rs @@ -22,4 +22,5 @@ pub fn main() {} #[cfg(rpass2)] #[rustc_clean(label="Hir", cfg="rpass2")] +#[rustc_clean(label="HirBody", cfg="rpass2")] pub fn main() {} diff --git a/src/test/incremental/spans_significant_w_debuginfo.rs b/src/test/incremental/spans_significant_w_debuginfo.rs index b0920aa1fa..cdab8de982 100644 --- a/src/test/incremental/spans_significant_w_debuginfo.rs +++ b/src/test/incremental/spans_significant_w_debuginfo.rs @@ -22,4 +22,5 @@ pub fn main() {} #[cfg(rpass2)] #[rustc_dirty(label="Hir", cfg="rpass2")] +#[rustc_dirty(label="HirBody", cfg="rpass2")] pub fn main() {} diff --git a/src/test/mir-opt/copy_propagation.rs b/src/test/mir-opt/copy_propagation.rs new file mode 100644 index 0000000000..26b042d034 --- /dev/null +++ b/src/test/mir-opt/copy_propagation.rs @@ -0,0 +1,34 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +fn test(x: u32) -> u32 { + let y = x; + y +} + +fn main() { } + +// END RUST SOURCE +// START rustc.node4.CopyPropagation.before.mir +// bb0: { +// _2 = _1; +// _4 = _2; +// _3 = _4; +// _5 = _3; +// _0 = _5; +// return; +// } +// END rustc.node4.CopyPropagation.before.mir +// START rustc.node4.CopyPropagation.after.mir +// bb0: { +// _0 = _1; +// return; +// } +// END rustc.node4.CopyPropagation.after.mir diff --git a/src/test/mir-opt/deaggregator_test_enum_2.rs b/src/test/mir-opt/deaggregator_test_enum_2.rs new file mode 100644 index 0000000000..02d496b290 --- /dev/null +++ b/src/test/mir-opt/deaggregator_test_enum_2.rs @@ -0,0 +1,57 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Test that deaggregate fires in more than one basic block + +enum Foo { + A(i32), + B(i32), +} + +fn test1(x: bool, y: i32) -> Foo { + if x { + Foo::A(y) + } else { + Foo::B(y) + } +} + +fn main() {} + +// END RUST SOURCE +// START rustc.node12.Deaggregator.before.mir +// bb1: { +// _6 = _4; +// _0 = Foo::A(_6,); +// goto -> bb3; +// } +// +// bb2: { +// _7 = _4; +// _0 = Foo::B(_7,); +// goto -> bb3; +// } +// END rustc.node12.Deaggregator.before.mir +// START rustc.node12.Deaggregator.after.mir +// bb1: { +// _6 = _4; +// ((_0 as A).0: i32) = _6; +// discriminant(_0) = 0; +// goto -> bb3; +// } +// +// bb2: { +// _7 = _4; +// ((_0 as B).0: i32) = _7; +// discriminant(_0) = 1; +// goto -> bb3; +// } +// END rustc.node12.Deaggregator.after.mir +// diff --git a/src/test/mir-opt/deaggregator_test_multiple.rs b/src/test/mir-opt/deaggregator_test_multiple.rs new file mode 100644 index 0000000000..a180a69be5 --- /dev/null +++ b/src/test/mir-opt/deaggregator_test_multiple.rs @@ -0,0 +1,48 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// Test that deaggregate fires more than once per block + +enum Foo { + A(i32), + B, +} + +fn test(x: i32) -> [Foo; 2] { + [Foo::A(x), Foo::A(x)] +} + +fn main() { } + +// END RUST SOURCE +// START rustc.node10.Deaggregator.before.mir +// bb0: { +// _2 = _1; +// _4 = _2; +// _3 = Foo::A(_4,); +// _6 = _2; +// _5 = Foo::A(_6,); +// _0 = [_3, _5]; +// return; +// } +// END rustc.node10.Deaggregator.before.mir +// START rustc.node10.Deaggregator.after.mir +// bb0: { +// _2 = _1; +// _4 = _2; +// ((_3 as A).0: i32) = _4; +// discriminant(_3) = 0; +// _6 = _2; +// ((_5 as A).0: i32) = _6; +// discriminant(_5) = 0; +// _0 = [_3, _5]; +// return; +// } +// END rustc.node10.Deaggregator.after.mir diff --git a/src/test/parse-fail/attr-bad-meta.rs b/src/test/parse-fail/attr-bad-meta.rs index 7def91da5e..092adbf29e 100644 --- a/src/test/parse-fail/attr-bad-meta.rs +++ b/src/test/parse-fail/attr-bad-meta.rs @@ -10,7 +10,7 @@ // compile-flags: -Z parse-only -// error-pattern:expected `]` +// error-pattern:expected one of `=` or `]` // asterisk is bogus #[attr*] diff --git a/src/test/parse-fail/circular_modules_hello.rs b/src/test/parse-fail/circular_modules_hello.rs index 4de817dbd9..94770aa875 100644 --- a/src/test/parse-fail/circular_modules_hello.rs +++ b/src/test/parse-fail/circular_modules_hello.rs @@ -12,6 +12,7 @@ // ignore-test: this is an auxiliary file for circular-modules-main.rs +#[path = "circular_modules_main.rs"] mod circular_modules_main; pub fn say_hello() { diff --git a/src/test/parse-fail/closure-return-syntax.rs b/src/test/parse-fail/closure-return-syntax.rs index da6245597f..1da6735918 100644 --- a/src/test/parse-fail/closure-return-syntax.rs +++ b/src/test/parse-fail/closure-return-syntax.rs @@ -12,5 +12,6 @@ // unless it uses braces. fn main() { - let x = || -> i32 22; //~ ERROR expected `{`, found `22` + let x = || -> i32 22; + //~^ ERROR expected one of `!`, `(`, `::`, `<`, or `{`, found `22` } diff --git a/src/test/parse-fail/issue-37234.rs b/src/test/parse-fail/issue-37234.rs new file mode 100644 index 0000000000..651e11d9d2 --- /dev/null +++ b/src/test/parse-fail/issue-37234.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +macro_rules! failed { + () => {{ + let x = 5 ""; //~ ERROR found `""` + }} //~ ERROR macro expansion ignores token `}` +} + +fn main() { + failed!(); +} diff --git a/src/test/parse-fail/where-clauses-no-bounds-or-predicates.rs b/src/test/parse-fail/where-clauses-no-bounds-or-predicates.rs index 45165b76c4..78d9745408 100644 --- a/src/test/parse-fail/where-clauses-no-bounds-or-predicates.rs +++ b/src/test/parse-fail/where-clauses-no-bounds-or-predicates.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// @@ -20,5 +20,8 @@ fn equal2(_: &T, _: &T) -> bool where T: { true } +fn foo<'a>() where 'a {} +//~^ ERROR expected `:`, found `{` + fn main() { } diff --git a/src/test/parse-fail/where_with_bound.rs b/src/test/parse-fail/where_with_bound.rs new file mode 100644 index 0000000000..cb57500df7 --- /dev/null +++ b/src/test/parse-fail/where_with_bound.rs @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z parse-only + +fn foo() where ::Item: ToString, T: Iterator { } + //~^ syntax `where` is reserved for future use + +fn main() {} diff --git a/src/test/pretty/issue-4264.pp b/src/test/pretty/issue-4264.pp index 40ff4852e3..fdb7f9c68b 100644 --- a/src/test/pretty/issue-4264.pp +++ b/src/test/pretty/issue-4264.pp @@ -18,18 +18,18 @@ extern crate std as std; // #4264 fixed-length vector types -pub fn foo(_: [i32; (3 as usize)]) { } +pub fn foo(_: [i32; (3 as usize)]) ({ } as ()) -pub fn bar() { - const FOO: usize = ((5 as usize) - (4 as usize) as usize); - let _: [(); (FOO as usize)] = ([(() as ())] as [(); 1]); +pub fn bar() ({ + const FOO: usize = ((5 as usize) - (4 as usize) as usize); + let _: [(); (FOO as usize)] = ([(() as ())] as [(); 1]); - let _: [(); (1 as usize)] = ([(() as ())] as [(); 1]); + let _: [(); (1 as usize)] = ([(() as ())] as [(); 1]); - let _ = - (((&([(1 as i32), (2 as i32), (3 as i32)] as [i32; 3]) as &[i32; 3]) - as *const _ as *const [i32; 3]) as *const [i32; (3 as usize)] as - *const [i32; 3]); + let _ = + (((&([(1 as i32), (2 as i32), (3 as i32)] as [i32; 3]) + as &[i32; 3]) as *const _ as *const [i32; 3]) as + *const [i32; (3 as usize)] as *const [i32; 3]); @@ -38,58 +38,66 @@ pub fn bar() { - (($crate::fmt::format as - fn(std::fmt::Arguments<'_>) -> std::string::String {std::fmt::format})(((::std::fmt::Arguments::new_v1 - as - fn(&[&str], &[std::fmt::ArgumentV1<'_>]) -> std::fmt::Arguments<'_> {std::fmt::Arguments<'_>::new_v1})(({ - static __STATIC_FMTSTR: - &'static [&'static str] - = - (&([("test" - as - &'static str)] - as - [&'static str; 1]) - as - &'static [&'static str; 1]); - (__STATIC_FMTSTR - as - &'static [&'static str]) - } - as - &[&str]), - (&(match (() - as - ()) - { - () - => - ([] - as - [std::fmt::ArgumentV1<'_>; 0]), - } - as - [std::fmt::ArgumentV1<'_>; 0]) - as - &[std::fmt::ArgumentV1<'_>; 0])) - as - std::fmt::Arguments<'_>)) - as std::string::String); -} + + (($crate::fmt::format as + fn(std::fmt::Arguments<'_>) -> std::string::String {std::fmt::format})(((<::std::fmt::Arguments>::new_v1 + as + fn(&[&str], &[std::fmt::ArgumentV1<'_>]) -> std::fmt::Arguments<'_> {std::fmt::Arguments<'_>::new_v1})(({ + static __STATIC_FMTSTR: + &'static [&'static str] + = + (&([("test" + as + &'static str)] + as + [&'static str; 1]) + as + &'static [&'static str; 1]); + (__STATIC_FMTSTR + as + &'static [&'static str]) + } + as + &[&str]), + (&(match (() + as + ()) + { + () + => + ([] + as + [std::fmt::ArgumentV1<'_>; 0]), + } + as + [std::fmt::ArgumentV1<'_>; 0]) + as + &[std::fmt::ArgumentV1<'_>; 0])) + as + std::fmt::Arguments<'_>)) + as std::string::String); + } as ()) pub type Foo = [i32; (3 as usize)]; pub struct Bar { pub x: [i32; (3 as usize)], } pub struct TupleBar([i32; (4 as usize)]); pub enum Baz { 
BazVariant([i32; (5 as usize)]), } -pub fn id(x: T) -> T { (x as T) } -pub fn use_id() { - let _ = - ((id::<[i32; (3 as usize)]> as - fn([i32; 3]) -> [i32; 3] {id::<[i32; 3]>})(([(1 as i32), - (2 as i32), - (3 as i32)] as - [i32; 3])) as - [i32; 3]); -} -fn main() { } +pub fn id(x: T) -> T ({ (x as T) } as T) +pub fn use_id() ({ + let _ = + ((id::<[i32; (3 as usize)]> as + fn([i32; 3]) -> [i32; 3] {id::<[i32; 3]>})(([(1 + as + i32), + (2 + as + i32), + (3 + as + i32)] + as + [i32; 3])) + as [i32; 3]); + } as ()) +fn main() ({ } as ()) diff --git a/src/test/pretty/stmt_expr_attributes.rs b/src/test/pretty/stmt_expr_attributes.rs index e52932cd7b..1c443020d2 100644 --- a/src/test/pretty/stmt_expr_attributes.rs +++ b/src/test/pretty/stmt_expr_attributes.rs @@ -198,14 +198,20 @@ fn _11() { }; let _ = #[attr] || #[attr] (); let _ = #[attr] move || #[attr] (); - let _ = #[attr] || { - #![attr] - #[attr] - () }; - let _ = #[attr] move || { - #![attr] - #[attr] - () }; + let _ = + #[attr] || + { + #![attr] + #[attr] + () + }; + let _ = + #[attr] move || + { + #![attr] + #[attr] + () + }; let _ = #[attr] { #![attr] diff --git a/src/test/run-fail-fulldeps/qquote.rs b/src/test/run-fail-fulldeps/qquote.rs index d2a16ac750..d692bb519c 100644 --- a/src/test/run-fail-fulldeps/qquote.rs +++ b/src/test/run-fail-fulldeps/qquote.rs @@ -19,8 +19,8 @@ extern crate syntax_pos; use syntax::ast; use syntax::codemap; -use syntax::parse; use syntax::print::pprust; +use syntax::symbol::Symbol; use syntax_pos::DUMMY_SP; fn main() { @@ -33,7 +33,7 @@ fn main() { cx.bt_push(syntax::codemap::ExpnInfo { call_site: DUMMY_SP, callee: syntax::codemap::NameAndSpan { - format: syntax::codemap::MacroBang(parse::token::intern("")), + format: syntax::codemap::MacroBang(Symbol::intern("")), allow_internal_unstable: false, span: None, } diff --git a/src/test/run-fail/test-should-panic-bad-message.rs b/src/test/run-fail/test-should-panic-bad-message.rs new file mode 100644 index 0000000000..7186672b40 --- /dev/null +++ b/src/test/run-fail/test-should-panic-bad-message.rs @@ -0,0 +1,19 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: --test + +// error-pattern:panicked at 'bar' +// check-stdout +#[test] +#[should_panic(expected = "foo")] +pub fn test_bar() { + panic!("bar") +} diff --git a/src/test/run-fail/test-should-panic-no-message.rs b/src/test/run-fail/test-should-panic-no-message.rs new file mode 100644 index 0000000000..50dc2aed8e --- /dev/null +++ b/src/test/run-fail/test-should-panic-no-message.rs @@ -0,0 +1,19 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// compile-flags: --test + +// error-pattern:panicked at 'explicit panic' +// check-stdout +#[test] +#[should_panic(expected = "foo")] +pub fn test_explicit() { + panic!() +} diff --git a/src/test/run-make/c-static-dylib/foo.rs b/src/test/run-make/c-static-dylib/foo.rs index 04253be71d..44be5ac890 100644 --- a/src/test/run-make/c-static-dylib/foo.rs +++ b/src/test/run-make/c-static-dylib/foo.rs @@ -10,7 +10,7 @@ #![crate_type = "dylib"] -#[link(name = "cfoo")] +#[link(name = "cfoo", kind = "static")] extern { fn foo(); } diff --git a/src/test/run-make/c-static-rlib/foo.rs b/src/test/run-make/c-static-rlib/foo.rs index a1f01bd2b6..cbd7b020bd 100644 --- a/src/test/run-make/c-static-rlib/foo.rs +++ b/src/test/run-make/c-static-rlib/foo.rs @@ -10,7 +10,7 @@ #![crate_type = "rlib"] -#[link(name = "cfoo")] +#[link(name = "cfoo", kind = "static")] extern { fn foo(); } diff --git a/src/test/run-make/codegen-options-parsing/Makefile b/src/test/run-make/codegen-options-parsing/Makefile index 9543fad8e5..2b8b0712cc 100644 --- a/src/test/run-make/codegen-options-parsing/Makefile +++ b/src/test/run-make/codegen-options-parsing/Makefile @@ -25,7 +25,7 @@ all: # Should not link dead code... $(RUSTC) -Z print-link-args dummy.rs 2>&1 | \ - grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF,ICF' + grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF' # ... unless you specifically ask to keep it $(RUSTC) -Z print-link-args -C link-dead-code dummy.rs 2>&1 | \ - (! grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF,ICF') + (! grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF') diff --git a/src/test/run-make/extern-fn-generic/test.rs b/src/test/run-make/extern-fn-generic/test.rs index ee0485683e..8f5ff091b3 100644 --- a/src/test/run-make/extern-fn-generic/test.rs +++ b/src/test/run-make/extern-fn-generic/test.rs @@ -12,7 +12,7 @@ extern crate testcrate; extern "C" fn bar(ts: testcrate::TestStruct) -> T { ts.y } -#[link(name = "test")] +#[link(name = "test", kind = "static")] extern { fn call(c: extern "C" fn(testcrate::TestStruct) -> i32) -> i32; } diff --git a/src/test/run-make/extern-fn-generic/testcrate.rs b/src/test/run-make/extern-fn-generic/testcrate.rs index 5fd61bb419..d02c05047c 100644 --- a/src/test/run-make/extern-fn-generic/testcrate.rs +++ b/src/test/run-make/extern-fn-generic/testcrate.rs @@ -18,7 +18,7 @@ pub struct TestStruct { pub extern "C" fn foo(ts: TestStruct) -> T { ts.y } -#[link(name = "test")] +#[link(name = "test", kind = "static")] extern { pub fn call(c: extern "C" fn(TestStruct) -> i32) -> i32; } diff --git a/src/test/run-make/graphviz-flowgraph/f00.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f00.dot-expected.dot index f699771ef2..8ea8370ab2 100644 --- a/src/test/run-make/graphviz-flowgraph/f00.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f00.dot-expected.dot @@ -2,6 +2,8 @@ digraph block { N0[label="entry"]; N1[label="exit"]; N2[label="block { }"]; + N3[label="expr { }"]; N0 -> N2; - N2 -> N1; + N2 -> N3; + N3 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f01.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f01.dot-expected.dot index d924890b31..5982fbea76 100644 --- a/src/test/run-make/graphviz-flowgraph/f01.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f01.dot-expected.dot @@ -4,8 +4,10 @@ digraph block { N2[label="expr 1"]; N3[label="stmt 1;"]; N4[label="block { 1; }"]; + N5[label="expr { 1; }"]; N0 -> N2; N2 -> N3; N3 -> N4; - N4 -> N1; + N4 -> N5; + N5 -> N1; } diff --git 
a/src/test/run-make/graphviz-flowgraph/f02.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f02.dot-expected.dot index 1f4a58ba0a..1639785bd6 100644 --- a/src/test/run-make/graphviz-flowgraph/f02.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f02.dot-expected.dot @@ -4,8 +4,10 @@ digraph block { N2[label="local _x"]; N3[label="stmt let _x: isize;"]; N4[label="block { let _x: isize; }"]; + N5[label="expr { let _x: isize; }"]; N0 -> N2; N2 -> N3; N3 -> N4; - N4 -> N1; + N4 -> N5; + N5 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f03.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f03.dot-expected.dot index 8b65007618..b0ae00d816 100644 --- a/src/test/run-make/graphviz-flowgraph/f03.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f03.dot-expected.dot @@ -6,10 +6,12 @@ digraph block { N4[label="expr 3 + 4"]; N5[label="stmt 3 + 4;"]; N6[label="block { 3 + 4; }"]; + N7[label="expr { 3 + 4; }"]; N0 -> N2; N2 -> N3; N3 -> N4; N4 -> N5; N5 -> N6; - N6 -> N1; + N6 -> N7; + N7 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f04.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f04.dot-expected.dot index fde6cc2900..41ace15a4c 100644 --- a/src/test/run-make/graphviz-flowgraph/f04.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f04.dot-expected.dot @@ -5,9 +5,11 @@ digraph block { N3[label="local _x"]; N4[label="stmt let _x = 4;"]; N5[label="block { let _x = 4; }"]; + N6[label="expr { let _x = 4; }"]; N0 -> N2; N2 -> N3; N3 -> N4; N4 -> N5; - N5 -> N1; + N5 -> N6; + N6 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f05.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f05.dot-expected.dot index efd56cd0c7..72b8ae7175 100644 --- a/src/test/run-make/graphviz-flowgraph/f05.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f05.dot-expected.dot @@ -9,6 +9,7 @@ digraph block { N7[label="pat (_x, _y)"]; N8[label="stmt let (_x, _y) = (5, 55);"]; N9[label="block { let (_x, _y) = (5, 55); }"]; + N10[label="expr { let (_x, _y) = (5, 55); }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -17,5 +18,6 @@ digraph block { N6 -> N7; N7 -> N8; N8 -> N9; - N9 -> N1; + N9 -> N10; + N10 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f06.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f06.dot-expected.dot index 54e9d89d3f..acba71ef62 100644 --- a/src/test/run-make/graphviz-flowgraph/f06.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f06.dot-expected.dot @@ -7,11 +7,13 @@ digraph block { N5[label="pat S6 { val: _x }"]; N6[label="stmt let S6 { val: _x } = S6{val: 6,};"]; N7[label="block { let S6 { val: _x } = S6{val: 6,}; }"]; + N8[label="expr { let S6 { val: _x } = S6{val: 6,}; }"]; N0 -> N2; N2 -> N3; N3 -> N4; N4 -> N5; N5 -> N6; N6 -> N7; - N7 -> N1; + N7 -> N8; + N8 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f07.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f07.dot-expected.dot index c60cd1cfd2..251e2b39f1 100644 --- a/src/test/run-make/graphviz-flowgraph/f07.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f07.dot-expected.dot @@ -17,6 +17,7 @@ digraph block { N15[label="expr x + y"]; N16[label="stmt match [7, 77, 777, 7777] { [x, y, ..] => x + y, };"]; N17[label="block { match [7, 77, 777, 7777] { [x, y, ..] => x + y, }; }"]; + N18[label="expr { match [7, 77, 777, 7777] { [x, y, ..] 
=> x + y, }; }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -33,5 +34,6 @@ digraph block { N15 -> N7; N7 -> N16; N16 -> N17; - N17 -> N1; + N17 -> N18; + N18 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f08.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f08.dot-expected.dot index da0120b7bd..e2779c9414 100644 --- a/src/test/run-make/graphviz-flowgraph/f08.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f08.dot-expected.dot @@ -16,6 +16,7 @@ digraph block { N14[label="block { _y = 888; }"]; N15[label="expr if x > 88 { _y = 888; }"]; N16[label="block { let x = 8; let _y; if x > 88 { _y = 888; } }"]; + N17[label="expr { let x = 8; let _y; if x > 88 { _y = 888; } }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -32,5 +33,6 @@ digraph block { N9 -> N15; N14 -> N15; N15 -> N16; - N16 -> N1; + N16 -> N17; + N17 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f09.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f09.dot-expected.dot index c98d1b0bed..536abde91e 100644 --- a/src/test/run-make/graphviz-flowgraph/f09.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f09.dot-expected.dot @@ -24,6 +24,7 @@ digraph block { N22[label="expr { _y = 94 + 95; }"]; N23[label="expr if x > 92 { _y = 93; } else { _y = 94 + 95; }"]; N24[label="block { let x = 91; let _y; if x > 92 { _y = 93; } else { _y = 94 + 95; } }"]; + N25[label="expr { let x = 91; let _y; if x > 92 { _y = 93; } else { _y = 94 + 95; } }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -48,5 +49,6 @@ digraph block { N14 -> N23; N22 -> N23; N23 -> N24; - N24 -> N1; + N24 -> N25; + N25 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f10.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f10.dot-expected.dot index 516c39ef56..a3b531b1e2 100644 --- a/src/test/run-make/graphviz-flowgraph/f10.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f10.dot-expected.dot @@ -15,6 +15,7 @@ digraph block { N13[label="stmt x -= 1;"]; N14[label="block { x -= 1; }"]; N15[label="block { let mut x = 10; while x > 0 { x -= 1; } }"]; + N16[label="expr { let mut x = 10; while x > 0 { x -= 1; } }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -30,5 +31,6 @@ digraph block { N13 -> N14; N14 -> N5; N9 -> N15; - N15 -> N1; + N15 -> N16; + N16 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f11.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f11.dot-expected.dot index 9b66fd581c..70034d299b 100644 --- a/src/test/run-make/graphviz-flowgraph/f11.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f11.dot-expected.dot @@ -15,6 +15,7 @@ digraph block { N13[label="expr \"unreachable\""]; N14[label="stmt \"unreachable\";"]; N15[label="block { let mut _x = 11; loop { _x -= 1; } \"unreachable\"; }"]; + N16[label="expr { let mut _x = 11; loop { _x -= 1; } \"unreachable\"; }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -29,5 +30,6 @@ digraph block { N12 -> N13; N13 -> N14; N14 -> N15; - N15 -> N1; + N15 -> N16; + N16 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f12.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f12.dot-expected.dot index 071af6faf6..245afc4350 100644 --- a/src/test/run-make/graphviz-flowgraph/f12.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f12.dot-expected.dot @@ -22,6 +22,7 @@ digraph block { N20[label="expr if x == 2 { break ; \"unreachable\"; }"]; N21[label="block { x -= 1; if x == 2 { break ; \"unreachable\"; } }"]; N22[label="block { let mut x = 12; loop { x -= 1; if x == 2 { break ; \"unreachable\"; } } }"]; + N23[label="expr { let mut x = 12; loop { x -= 1; if 
x == 2 { break ; \"unreachable\"; } } }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -44,5 +45,6 @@ digraph block { N20 -> N21; N21 -> N5; N6 -> N22; - N22 -> N1; + N22 -> N23; + N23 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f13.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f13.dot-expected.dot index fb7d2ad97b..0f268bd0f2 100644 --- a/src/test/run-make/graphviz-flowgraph/f13.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f13.dot-expected.dot @@ -24,6 +24,7 @@ digraph block { N22[label="expr _y"]; N23[label="expr _y = v + 1"]; N24[label="block {\l let x = E13::E13b(13);\l let _y;\l match x { E13::E13a => _y = 1, E13::E13b(v) => _y = v + 1, }\l}\l"]; + N25[label="expr {\l let x = E13::E13b(13);\l let _y;\l match x { E13::E13a => _y = 1, E13::E13b(v) => _y = v + 1, }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -48,5 +49,6 @@ digraph block { N22 -> N23; N23 -> N10; N10 -> N24; - N24 -> N1; + N24 -> N25; + N25 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f14.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f14.dot-expected.dot index 66250aa441..719a6cf261 100644 --- a/src/test/run-make/graphviz-flowgraph/f14.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f14.dot-expected.dot @@ -15,6 +15,7 @@ digraph block { N13[label="block { return; \"unreachable\"; }"]; N14[label="expr if x > 1 { return; \"unreachable\"; }"]; N15[label="block { let x = 14; if x > 1 { return; \"unreachable\"; } }"]; + N16[label="expr { let x = 14; if x > 1 { return; \"unreachable\"; } }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -30,5 +31,6 @@ digraph block { N7 -> N14; N13 -> N14; N14 -> N15; - N15 -> N1; + N15 -> N16; + N16 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f15.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f15.dot-expected.dot index 4c94630f4e..d8cbd8411e 100644 --- a/src/test/run-make/graphviz-flowgraph/f15.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f15.dot-expected.dot @@ -49,6 +49,7 @@ digraph block { N47[label="stmt x -= 5;"]; N48[label="block {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { break ; \"unreachable\"; }\l y -= 3;\l }\l y -= 4;\l x -= 5;\l}\l"]; N49[label="block {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { break ; \"unreachable\"; }\l y -= 3;\l }\l y -= 4;\l x -= 5;\l }\l}\l"]; + N50[label="expr {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { break ; \"unreachable\"; }\l y -= 3;\l }\l y -= 4;\l x -= 5;\l }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -99,5 +100,6 @@ digraph block { N47 -> N48; N48 -> N8; N9 -> N49; - N49 -> N1; + N49 -> N50; + N50 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f16.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f16.dot-expected.dot index d7d027cefb..b11881247f 100644 --- a/src/test/run-make/graphviz-flowgraph/f16.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f16.dot-expected.dot @@ -52,6 +52,7 @@ digraph block { N50[label="expr \"unreachable\""]; N51[label="stmt \"unreachable\";"]; N52[label="block {\l let mut x = 16;\l let mut y = 16;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { continue \'outer ; \"unreachable\"; }\l if y >= 1 { break ; \"unreachable\"; }\l y -= 1;\l }\l y -= 1;\l x -= 1;\l }\l \"unreachable\";\l}\l"]; + N53[label="expr {\l let mut x = 16;\l let mut y = 16;\l \'outer:\l loop {\l 
\'inner:\l loop {\l if x == 1 { continue \'outer ; \"unreachable\"; }\l if y >= 1 { break ; \"unreachable\"; }\l y -= 1;\l }\l y -= 1;\l x -= 1;\l }\l \"unreachable\";\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -105,5 +106,6 @@ digraph block { N49 -> N50; N50 -> N51; N51 -> N52; - N52 -> N1; + N52 -> N53; + N53 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f17.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f17.dot-expected.dot index f87b70a71c..705eece775 100644 --- a/src/test/run-make/graphviz-flowgraph/f17.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f17.dot-expected.dot @@ -8,6 +8,7 @@ digraph block { N6[label="local _v"]; N7[label="stmt let _v = [1, 7, 17];"]; N8[label="block { let _v = [1, 7, 17]; }"]; + N9[label="expr { let _v = [1, 7, 17]; }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -15,5 +16,6 @@ digraph block { N5 -> N6; N6 -> N7; N7 -> N8; - N8 -> N1; + N8 -> N9; + N9 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f18.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f18.dot-expected.dot index 8ea4256133..c1d6e3023f 100644 --- a/src/test/run-make/graphviz-flowgraph/f18.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f18.dot-expected.dot @@ -9,6 +9,7 @@ digraph block { N7[label="expr inner(inner(18))"]; N8[label="stmt inner(inner(18));"]; N9[label="block { inner(inner(18)); }"]; + N10[label="expr { inner(inner(18)); }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -17,5 +18,6 @@ digraph block { N6 -> N7; N7 -> N8; N8 -> N9; - N9 -> N1; + N9 -> N10; + N10 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f19.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f19.dot-expected.dot index bc0ca08d42..d2f9f41f64 100644 --- a/src/test/run-make/graphviz-flowgraph/f19.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f19.dot-expected.dot @@ -12,6 +12,7 @@ digraph block { N10[label="expr s.inner().inner()"]; N11[label="stmt s.inner().inner();"]; N12[label="block { let s = S19{x: 19,}; s.inner().inner(); }"]; + N13[label="expr { let s = S19{x: 19,}; s.inner().inner(); }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -23,5 +24,6 @@ digraph block { N9 -> N10; N10 -> N11; N11 -> N12; - N12 -> N1; + N12 -> N13; + N13 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f20.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f20.dot-expected.dot index 21e84fb858..120eab4dac 100644 --- a/src/test/run-make/graphviz-flowgraph/f20.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f20.dot-expected.dot @@ -12,6 +12,7 @@ digraph block { N10[label="expr v[20]"]; N11[label="stmt v[20];"]; N12[label="block { let v = [2, 0, 20]; v[20]; }"]; + N13[label="expr { let v = [2, 0, 20]; v[20]; }"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -23,5 +24,6 @@ digraph block { N9 -> N10; N10 -> N11; N11 -> N12; - N12 -> N1; + N12 -> N13; + N13 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f21.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f21.dot-expected.dot index 796bf4910c..370dcdd855 100644 --- a/src/test/run-make/graphviz-flowgraph/f21.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f21.dot-expected.dot @@ -47,6 +47,7 @@ digraph block { N45[label="stmt \"unreachable\";"]; N46[label="block {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { return; \"unreachable\"; }\l y -= 3;\l x -= 5;\l }\l \"unreachable\";\l}\l"]; N47[label="block {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { return; 
\"unreachable\"; }\l y -= 3;\l x -= 5;\l }\l \"unreachable\";\l }\l}\l"]; + N48[label="expr {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { break \'outer ; \"unreachable\"; }\l if y >= 2 { return; \"unreachable\"; }\l y -= 3;\l x -= 5;\l }\l \"unreachable\";\l }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -95,5 +96,6 @@ digraph block { N45 -> N46; N46 -> N8; N9 -> N47; - N47 -> N1; + N47 -> N48; + N48 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f22.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f22.dot-expected.dot index 9e8049f074..9d3bc22831 100644 --- a/src/test/run-make/graphviz-flowgraph/f22.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f22.dot-expected.dot @@ -50,6 +50,7 @@ digraph block { N48[label="expr \"unreachable\""]; N49[label="stmt \"unreachable\";"]; N50[label="block {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { continue \'outer ; \"unreachable\"; }\l if y >= 2 { return; \"unreachable\"; }\l x -= 1;\l y -= 3;\l }\l \"unreachable\";\l }\l \"unreachable\";\l}\l"]; + N51[label="expr {\l let mut x = 15;\l let mut y = 151;\l \'outer:\l loop {\l \'inner:\l loop {\l if x == 1 { continue \'outer ; \"unreachable\"; }\l if y >= 2 { return; \"unreachable\"; }\l x -= 1;\l y -= 3;\l }\l \"unreachable\";\l }\l \"unreachable\";\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -101,5 +102,6 @@ digraph block { N47 -> N48; N48 -> N49; N49 -> N50; - N50 -> N1; + N50 -> N51; + N51 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f23.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f23.dot-expected.dot index b3f285049c..f152977438 100644 --- a/src/test/run-make/graphviz-flowgraph/f23.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f23.dot-expected.dot @@ -52,6 +52,7 @@ digraph block { N50[label="block { y -= 1; while z > 0 { z -= 1; } if x > 10 { return; \"unreachable\"; } }"]; N51[label="block {\l x -= 1;\l while y > 0 {\l y -= 1;\l while z > 0 { z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l}\l"]; N52[label="block {\l let mut x = 23;\l let mut y = 23;\l let mut z = 23;\l while x > 0 {\l x -= 1;\l while y > 0 {\l y -= 1;\l while z > 0 { z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l }\l}\l"]; + N53[label="expr {\l let mut x = 23;\l let mut y = 23;\l let mut z = 23;\l while x > 0 {\l x -= 1;\l while y > 0 {\l y -= 1;\l while z > 0 { z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -107,5 +108,6 @@ digraph block { N24 -> N51; N51 -> N11; N15 -> N52; - N52 -> N1; + N52 -> N53; + N53 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f24.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f24.dot-expected.dot index 43b3295bf3..e40dd014f0 100644 --- a/src/test/run-make/graphviz-flowgraph/f24.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f24.dot-expected.dot @@ -76,6 +76,7 @@ digraph block { N74[label="block {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l}\l"]; N75[label="block {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l loop {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l}\l"]; N76[label="block {\l let mut x = 24;\l let mut y = 24;\l let mut z = 24;\l loop {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l loop {\l if y == 0 { break 
; \"unreachable\"; }\l y -= 1;\l loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l }\l}\l"]; + N77[label="expr {\l let mut x = 24;\l let mut y = 24;\l let mut z = 24;\l loop {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l loop {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { return; \"unreachable\"; }\l }\l }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -155,5 +156,6 @@ digraph block { N29 -> N75; N75 -> N11; N12 -> N76; - N76 -> N1; + N76 -> N77; + N77 -> N1; } diff --git a/src/test/run-make/graphviz-flowgraph/f25.dot-expected.dot b/src/test/run-make/graphviz-flowgraph/f25.dot-expected.dot index 50fdffb781..1e2df1ab5e 100644 --- a/src/test/run-make/graphviz-flowgraph/f25.dot-expected.dot +++ b/src/test/run-make/graphviz-flowgraph/f25.dot-expected.dot @@ -76,6 +76,7 @@ digraph block { N74[label="block {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l \'a: loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { continue \'a ; \"unreachable\"; }\l}\l"]; N75[label="block {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l \'a:\l loop {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l \'a: loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { continue \'a ; \"unreachable\"; }\l }\l}\l"]; N76[label="block {\l let mut x = 25;\l let mut y = 25;\l let mut z = 25;\l \'a:\l loop {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l \'a:\l loop {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l \'a: loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { continue \'a ; \"unreachable\"; }\l }\l }\l}\l"]; + N77[label="expr {\l let mut x = 25;\l let mut y = 25;\l let mut z = 25;\l \'a:\l loop {\l if x == 0 { break ; \"unreachable\"; }\l x -= 1;\l \'a:\l loop {\l if y == 0 { break ; \"unreachable\"; }\l y -= 1;\l \'a: loop { if z == 0 { break ; \"unreachable\"; } z -= 1; }\l if x > 10 { continue \'a ; \"unreachable\"; }\l }\l }\l}\l"]; N0 -> N2; N2 -> N3; N3 -> N4; @@ -155,5 +156,6 @@ digraph block { N29 -> N75; N75 -> N11; N12 -> N76; - N76 -> N1; + N76 -> N77; + N77 -> N1; } diff --git a/src/test/run-make/interdependent-c-libraries/bar.rs b/src/test/run-make/interdependent-c-libraries/bar.rs index 88fc98615f..1963976b4b 100644 --- a/src/test/run-make/interdependent-c-libraries/bar.rs +++ b/src/test/run-make/interdependent-c-libraries/bar.rs @@ -12,7 +12,7 @@ extern crate foo; -#[link(name = "bar")] +#[link(name = "bar", kind = "static")] extern { fn bar(); } diff --git a/src/test/run-make/interdependent-c-libraries/foo.rs b/src/test/run-make/interdependent-c-libraries/foo.rs index f94c6edb97..7a0fe6bb18 100644 --- a/src/test/run-make/interdependent-c-libraries/foo.rs +++ b/src/test/run-make/interdependent-c-libraries/foo.rs @@ -10,7 +10,7 @@ #![crate_type = "rlib"] -#[link(name = "foo")] +#[link(name = "foo", kind = "static")] extern { fn foo(); } diff --git a/src/test/run-make/issue-15460/foo.rs b/src/test/run-make/issue-15460/foo.rs index 8b96fe3682..6917fa5557 100644 --- a/src/test/run-make/issue-15460/foo.rs +++ b/src/test/run-make/issue-15460/foo.rs @@ -8,11 +8,9 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-#![feature(linked_from)] #![crate_type = "dylib"] #[link(name = "foo", kind = "static")] -#[linked_from = "foo"] extern { pub fn foo(); } diff --git a/src/test/run-make/issue-19371/foo.rs b/src/test/run-make/issue-19371/foo.rs index ed127b017b..0336fe277c 100644 --- a/src/test/run-make/issue-19371/foo.rs +++ b/src/test/run-make/issue-19371/foo.rs @@ -25,6 +25,7 @@ use rustc_driver::driver::{compile_input, CompileController, anon_src}; use rustc_metadata::cstore::CStore; use rustc_errors::registry::Registry; +use std::collections::HashSet; use std::path::PathBuf; use std::rc::Rc; @@ -65,7 +66,7 @@ fn basic_sess(sysroot: PathBuf) -> (Session, Rc) { fn compile(code: String, output: PathBuf, sysroot: PathBuf) { let (sess, cstore) = basic_sess(sysroot); - let cfg = build_configuration(&sess, vec![]); + let cfg = build_configuration(&sess, HashSet::new()); let control = CompileController::basic(); let input = Input::Str { name: anon_src(), input: code }; compile_input(&sess, &cstore, &input, &None, &Some(output), None, &control); diff --git a/src/test/run-make/issue-25581/test.rs b/src/test/run-make/issue-25581/test.rs index e2e86df59c..6717d16cb7 100644 --- a/src/test/run-make/issue-25581/test.rs +++ b/src/test/run-make/issue-25581/test.rs @@ -12,7 +12,7 @@ extern crate libc; -#[link(name = "test")] +#[link(name = "test", kind = "static")] extern { fn slice_len(s: &[u8]) -> libc::size_t; fn slice_elem(s: &[u8], idx: libc::size_t) -> u8; diff --git a/src/test/run-make/issue-37839/Makefile b/src/test/run-make/issue-37839/Makefile new file mode 100644 index 0000000000..f17ce537fb --- /dev/null +++ b/src/test/run-make/issue-37839/Makefile @@ -0,0 +1,6 @@ +-include ../tools.mk + +all: + $(RUSTC) a.rs && $(RUSTC) b.rs + $(BARE_RUSTC) c.rs -L dependency=$(TMPDIR) --extern b=$(TMPDIR)/libb.rlib \ + --out-dir=$(TMPDIR) diff --git a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-1.rs b/src/test/run-make/issue-37839/a.rs similarity index 88% rename from src/test/compile-fail-fulldeps/proc-macro/feature-gate-1.rs rename to src/test/run-make/issue-37839/a.rs index f5618fc642..052317438c 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-1.rs +++ b/src/test/run-make/issue-37839/a.rs @@ -8,6 +8,5 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// error-pattern: the `proc-macro` crate type is experimental - +#![allow(unused)] #![crate_type = "proc-macro"] diff --git a/src/test/run-make/issue-37839/b.rs b/src/test/run-make/issue-37839/b.rs new file mode 100644 index 0000000000..82f48f6d8d --- /dev/null +++ b/src/test/run-make/issue-37839/b.rs @@ -0,0 +1,12 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "lib"] +#[macro_use] extern crate a; diff --git a/src/test/run-make/issue-37839/c.rs b/src/test/run-make/issue-37839/c.rs new file mode 100644 index 0000000000..85bece5142 --- /dev/null +++ b/src/test/run-make/issue-37839/c.rs @@ -0,0 +1,12 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. 
This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "lib"] +extern crate b; diff --git a/src/test/run-make/issue-37893/Makefile b/src/test/run-make/issue-37893/Makefile new file mode 100644 index 0000000000..27b69baf97 --- /dev/null +++ b/src/test/run-make/issue-37893/Makefile @@ -0,0 +1,4 @@ +-include ../tools.mk + +all: + $(RUSTC) a.rs && $(RUSTC) b.rs && $(RUSTC) c.rs diff --git a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-3.rs b/src/test/run-make/issue-37893/a.rs similarity index 85% rename from src/test/compile-fail-fulldeps/proc-macro/feature-gate-3.rs rename to src/test/run-make/issue-37893/a.rs index bb6b575962..052317438c 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-3.rs +++ b/src/test/run-make/issue-37893/a.rs @@ -8,8 +8,5 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![allow(unused)] #![crate_type = "proc-macro"] - -#[proc_macro_derive(Foo)] //~ ERROR: is an experimental feature -pub fn foo() { -} diff --git a/src/test/run-make/issue-37893/b.rs b/src/test/run-make/issue-37893/b.rs new file mode 100644 index 0000000000..82f48f6d8d --- /dev/null +++ b/src/test/run-make/issue-37893/b.rs @@ -0,0 +1,12 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "lib"] +#[macro_use] extern crate a; diff --git a/src/test/run-make/issue-37893/c.rs b/src/test/run-make/issue-37893/c.rs new file mode 100644 index 0000000000..eee55cc236 --- /dev/null +++ b/src/test/run-make/issue-37893/c.rs @@ -0,0 +1,13 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "staticlib"] +extern crate b; +extern crate a; diff --git a/src/test/run-make/issue-38237/Makefile b/src/test/run-make/issue-38237/Makefile new file mode 100644 index 0000000000..0a681401b1 --- /dev/null +++ b/src/test/run-make/issue-38237/Makefile @@ -0,0 +1,5 @@ +-include ../tools.mk + +all: + $(RUSTC) foo.rs; $(RUSTC) bar.rs + $(RUSTDOC) baz.rs -L $(TMPDIR) -o $(TMPDIR) diff --git a/src/test/run-make/issue-38237/bar.rs b/src/test/run-make/issue-38237/bar.rs new file mode 100644 index 0000000000..794e08c2fe --- /dev/null +++ b/src/test/run-make/issue-38237/bar.rs @@ -0,0 +1,14 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "lib"] + +#[derive(Debug)] +pub struct S; diff --git a/src/test/run-make/issue-38237/baz.rs b/src/test/run-make/issue-38237/baz.rs new file mode 100644 index 0000000000..c2a2c89db0 --- /dev/null +++ b/src/test/run-make/issue-38237/baz.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +extern crate foo; +extern crate bar; + +pub struct Bar; +impl ::std::ops::Deref for Bar { + type Target = bar::S; + fn deref(&self) -> &Self::Target { unimplemented!() } +} diff --git a/src/test/run-make/issue-38237/foo.rs b/src/test/run-make/issue-38237/foo.rs new file mode 100644 index 0000000000..6fb315731d --- /dev/null +++ b/src/test/run-make/issue-38237/foo.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +#[proc_macro_derive(A)] +pub fn derive(ts: proc_macro::TokenStream) -> proc_macro::TokenStream { ts } + +#[derive(Debug)] +struct S; diff --git a/src/test/run-make/link-cfg/Makefile b/src/test/run-make/link-cfg/Makefile new file mode 100644 index 0000000000..4abc0caa69 --- /dev/null +++ b/src/test/run-make/link-cfg/Makefile @@ -0,0 +1,22 @@ +-include ../tools.mk + +all: $(call DYLIB,return1) $(call DYLIB,return2) $(call NATIVE_STATICLIB,return3) + ls $(TMPDIR) + $(RUSTC) --print cfg --target x86_64-unknown-linux-musl | grep crt-static + + $(RUSTC) no-deps.rs --cfg foo + $(call RUN,no-deps) + $(RUSTC) no-deps.rs --cfg bar + $(call RUN,no-deps) + + $(RUSTC) dep.rs + $(RUSTC) with-deps.rs --cfg foo + $(call RUN,with-deps) + $(RUSTC) with-deps.rs --cfg bar + $(call RUN,with-deps) + + $(RUSTC) dep-with-staticlib.rs + $(RUSTC) with-staticlib-deps.rs --cfg foo + $(call RUN,with-staticlib-deps) + $(RUSTC) with-staticlib-deps.rs --cfg bar + $(call RUN,with-staticlib-deps) diff --git a/src/test/run-make/link-cfg/dep-with-staticlib.rs b/src/test/run-make/link-cfg/dep-with-staticlib.rs new file mode 100644 index 0000000000..ecc2365ddb --- /dev/null +++ b/src/test/run-make/link-cfg/dep-with-staticlib.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(link_cfg)] +#![crate_type = "rlib"] + +#[link(name = "return1", cfg(foo))] +#[link(name = "return3", kind = "static", cfg(bar))] +extern { + pub fn my_function() -> i32; +} diff --git a/src/test/run-make/link-cfg/dep.rs b/src/test/run-make/link-cfg/dep.rs new file mode 100644 index 0000000000..7da879c2bf --- /dev/null +++ b/src/test/run-make/link-cfg/dep.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![feature(link_cfg)] +#![crate_type = "rlib"] + +#[link(name = "return1", cfg(foo))] +#[link(name = "return2", cfg(bar))] +extern { + pub fn my_function() -> i32; +} diff --git a/src/test/compile-fail/feature-gate-relaxed-adts.rs b/src/test/run-make/link-cfg/no-deps.rs similarity index 57% rename from src/test/compile-fail/feature-gate-relaxed-adts.rs rename to src/test/run-make/link-cfg/no-deps.rs index dc5e347aad..6b11410674 100644 --- a/src/test/compile-fail/feature-gate-relaxed-adts.rs +++ b/src/test/run-make/link-cfg/no-deps.rs @@ -8,19 +8,23 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -struct S(); //~ ERROR empty tuple structs and enum variants are unstable -struct Z(u8, u8); +#![feature(link_cfg)] -enum E { - V(), //~ ERROR empty tuple structs and enum variants are unstable - U(u8, u8), +#[link(name = "return1", cfg(foo))] +#[link(name = "return2", cfg(bar))] +extern { + fn my_function() -> i32; } fn main() { - match S() { - S() => {} //~ ERROR empty tuple structs patterns are unstable - } - match E::V() { - E::V() => {} //~ ERROR empty tuple structs patterns are unstable + unsafe { + let v = my_function(); + if cfg!(foo) { + assert_eq!(v, 1); + } else if cfg!(bar) { + assert_eq!(v, 2); + } else { + panic!("unknown"); + } } } diff --git a/src/test/run-make/link-cfg/return1.c b/src/test/run-make/link-cfg/return1.c new file mode 100644 index 0000000000..a2a3d051dd --- /dev/null +++ b/src/test/run-make/link-cfg/return1.c @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#ifdef _WIN32 +__declspec(dllexport) +#endif +int my_function() { + return 1; +} diff --git a/src/test/run-make/link-cfg/return2.c b/src/test/run-make/link-cfg/return2.c new file mode 100644 index 0000000000..d6ddcccf2f --- /dev/null +++ b/src/test/run-make/link-cfg/return2.c @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#ifdef _WIN32 +__declspec(dllexport) +#endif +int my_function() { + return 2; +} diff --git a/src/test/run-make/link-cfg/return3.c b/src/test/run-make/link-cfg/return3.c new file mode 100644 index 0000000000..6a3b695f20 --- /dev/null +++ b/src/test/run-make/link-cfg/return3.c @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#ifdef _WIN32 +__declspec(dllexport) +#endif +int my_function() { + return 3; +} diff --git a/src/test/run-make/link-cfg/with-deps.rs b/src/test/run-make/link-cfg/with-deps.rs new file mode 100644 index 0000000000..799555c500 --- /dev/null +++ b/src/test/run-make/link-cfg/with-deps.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +extern crate dep; + +fn main() { + unsafe { + let v = dep::my_function(); + if cfg!(foo) { + assert_eq!(v, 1); + } else if cfg!(bar) { + assert_eq!(v, 2); + } else { + panic!("unknown"); + } + } +} diff --git a/src/test/run-make/link-cfg/with-staticlib-deps.rs b/src/test/run-make/link-cfg/with-staticlib-deps.rs new file mode 100644 index 0000000000..33a9c7720e --- /dev/null +++ b/src/test/run-make/link-cfg/with-staticlib-deps.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +extern crate dep_with_staticlib; + +fn main() { + unsafe { + let v = dep_with_staticlib::my_function(); + if cfg!(foo) { + assert_eq!(v, 1); + } else if cfg!(bar) { + assert_eq!(v, 3); + } else { + panic!("unknown"); + } + } +} diff --git a/src/test/run-make/link-path-order/main.rs b/src/test/run-make/link-path-order/main.rs index 450460cf19..aaac3927f1 100644 --- a/src/test/run-make/link-path-order/main.rs +++ b/src/test/run-make/link-path-order/main.rs @@ -12,7 +12,7 @@ extern crate libc; -#[link(name="foo")] +#[link(name="foo", kind = "static")] extern { fn should_return_one() -> libc::c_int; } diff --git a/src/test/run-make/llvm-pass/llvm-function-pass.so.cc b/src/test/run-make/llvm-pass/llvm-function-pass.so.cc index 4470c40076..880c9bce56 100644 --- a/src/test/run-make/llvm-pass/llvm-function-pass.so.cc +++ b/src/test/run-make/llvm-pass/llvm-function-pass.so.cc @@ -28,7 +28,12 @@ namespace { bool runOnFunction(Function &F) override; - const char *getPassName() const override { +#if LLVM_VERSION_MAJOR >= 4 + StringRef +#else + const char * +#endif + getPassName() const override { return "Some LLVM pass"; } diff --git a/src/test/run-make/llvm-pass/llvm-module-pass.so.cc b/src/test/run-make/llvm-pass/llvm-module-pass.so.cc index 510375a5e6..280eca7e8f 100644 --- a/src/test/run-make/llvm-pass/llvm-module-pass.so.cc +++ b/src/test/run-make/llvm-pass/llvm-module-pass.so.cc @@ -27,7 +27,12 @@ namespace { bool runOnModule(Module &M) override; - const char *getPassName() const override { +#if LLVM_VERSION_MAJOR >= 4 + StringRef +#else + const char * +#endif + getPassName() const override { return "Some LLVM pass"; } diff --git a/src/test/run-make/rustc-macro-dep-files/bar.rs b/src/test/run-make/rustc-macro-dep-files/bar.rs index a2db98049d..03330c3d17 100644 --- a/src/test/run-make/rustc-macro-dep-files/bar.rs +++ b/src/test/run-make/rustc-macro-dep-files/bar.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(proc_macro)] - #[macro_use] extern crate foo; diff --git a/src/test/run-make/rustc-macro-dep-files/foo.rs b/src/test/run-make/rustc-macro-dep-files/foo.rs index bd9e9158c5..2f2524f6ef 100644 --- a/src/test/run-make/rustc-macro-dep-files/foo.rs +++ b/src/test/run-make/rustc-macro-dep-files/foo.rs @@ -9,8 +9,6 @@ // except according to those terms. 
#![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] extern crate proc_macro; diff --git a/src/test/run-make/save-analysis-fail/Makefile b/src/test/run-make/save-analysis-fail/Makefile new file mode 100644 index 0000000000..f29f907cf3 --- /dev/null +++ b/src/test/run-make/save-analysis-fail/Makefile @@ -0,0 +1,6 @@ +-include ../tools.mk +all: code +krate2: krate2.rs + $(RUSTC) $< +code: foo.rs krate2 + $(RUSTC) foo.rs -Zsave-analysis || exit 0 diff --git a/src/test/run-make/save-analysis-fail/SameDir.rs b/src/test/run-make/save-analysis-fail/SameDir.rs new file mode 100644 index 0000000000..fe70ac1ede --- /dev/null +++ b/src/test/run-make/save-analysis-fail/SameDir.rs @@ -0,0 +1,15 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// sub-module in the same directory as the main crate file + +pub struct SameStruct { + pub name: String +} diff --git a/src/test/compile-fail/feature-gate-linked-from.rs b/src/test/run-make/save-analysis-fail/SameDir3.rs similarity index 83% rename from src/test/compile-fail/feature-gate-linked-from.rs rename to src/test/run-make/save-analysis-fail/SameDir3.rs index 8705684111..315f900868 100644 --- a/src/test/compile-fail/feature-gate-linked-from.rs +++ b/src/test/run-make/save-analysis-fail/SameDir3.rs @@ -8,9 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#[linked_from = "foo"] //~ ERROR experimental feature -extern { - fn foo(); +pub fn hello(x: isize) { + println!("macro {} :-(", x); } - -fn main() {} diff --git a/src/test/run-make/save-analysis-fail/SubDir/mod.rs b/src/test/run-make/save-analysis-fail/SubDir/mod.rs new file mode 100644 index 0000000000..fe84db08da --- /dev/null +++ b/src/test/run-make/save-analysis-fail/SubDir/mod.rs @@ -0,0 +1,37 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// sub-module in a sub-directory + +use sub::sub2 as msalias; +use sub::sub2; + +static yy: usize = 25; + +mod sub { + pub mod sub2 { + pub mod sub3 { + pub fn hello() { + println!("hello from module 3"); + } + } + pub fn hello() { + println!("hello from a module"); + } + + pub struct nested_struct { + pub field2: u32, + } + } +} + +pub struct SubStruct { + pub name: String +} diff --git a/src/test/run-make/save-analysis-fail/foo.rs b/src/test/run-make/save-analysis-fail/foo.rs new file mode 100644 index 0000000000..e331f65abb --- /dev/null +++ b/src/test/run-make/save-analysis-fail/foo.rs @@ -0,0 +1,450 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![ crate_name = "test" ] +#![feature(box_syntax)] +#![feature(rustc_private)] + +extern crate graphviz; +// A simple rust project + +extern crate krate2; +extern crate krate2 as krate3; +extern crate flate as myflate; + +use graphviz::RenderOption; +use std::collections::{HashMap,HashSet}; +use std::cell::RefCell; +use std::io::Write; + + +use sub::sub2 as msalias; +use sub::sub2; +use sub::sub2::nested_struct as sub_struct; + +use std::mem::size_of; + +use std::char::from_u32; + +static uni: &'static str = "Les Miséééééééérables"; +static yy: usize = 25; + +static bob: Option = None; + +// buglink test - see issue #1337. + +fn test_alias(i: Option<::Item>) { + let s = sub_struct{ field2: 45u32, }; + + // import tests + fn foo(x: &Write) {} + let _: Option<_> = from_u32(45); + + let x = 42usize; + + krate2::hello(); + krate3::hello(); + myflate::deflate_bytes(&[]); + + let x = (3isize, 4usize); + let y = x.1; +} + +// Issue #37700 +const LUT_BITS: usize = 3; +pub struct HuffmanTable { + ac_lut: Option<[(i16, u8); 1 << LUT_BITS]>, +} + +struct TupStruct(isize, isize, Box); + +fn test_tup_struct(x: TupStruct) -> isize { + x.1 +} + +fn println(s: &str) { + std::io::stdout().write_all(s.as_bytes()); +} + +mod sub { + pub mod sub2 { + use std::io::Write; + pub mod sub3 { + use std::io::Write; + pub fn hello() { + ::println("hello from module 3"); + } + } + pub fn hello() { + ::println("hello from a module"); + } + + pub struct nested_struct { + pub field2: u32, + } + + pub enum nested_enum { + Nest2 = 2, + Nest3 = 3 + } + } +} + +pub mod SameDir; +pub mod SubDir; + +#[path = "SameDir3.rs"] +pub mod SameDir2; + +struct nofields; + +#[derive(Clone)] +struct some_fields { + field1: u32, +} + +type SF = some_fields; + +trait SuperTrait { + fn qux(&self) { panic!(); } +} + +trait SomeTrait: SuperTrait { + fn Method(&self, x: u32) -> u32; + + fn prov(&self, x: u32) -> u32 { + println(&x.to_string()); + 42 + } + fn provided_method(&self) -> u32 { + 42 + } +} + +trait SubTrait: SomeTrait { + fn stat2(x: &Self) -> u32 { + 32 + } +} + +trait SizedTrait: Sized {} + +fn error(s: &SizedTrait) { + let foo = 42; + println!("Hello world! 
{}", foo); +} + +impl SomeTrait for some_fields { + fn Method(&self, x: u32) -> u32 { + println(&x.to_string()); + self.field1 + } +} + +impl SuperTrait for some_fields { +} + +impl SubTrait for some_fields {} + +impl some_fields { + fn stat(x: u32) -> u32 { + println(&x.to_string()); + 42 + } + fn stat2(x: &some_fields) -> u32 { + 42 + } + + fn align_to(&mut self) { + + } + + fn test(&mut self) { + self.align_to::(); + } +} + +impl SuperTrait for nofields { +} +impl SomeTrait for nofields { + fn Method(&self, x: u32) -> u32 { + self.Method(x); + 43 + } + + fn provided_method(&self) -> u32 { + 21 + } +} + +impl SubTrait for nofields {} + +impl SuperTrait for (Box, Box) {} + +fn f_with_params(x: &T) { + x.Method(41); +} + +type MyType = Box; + +enum SomeEnum<'a> { + Ints(isize, isize), + Floats(f64, f64), + Strings(&'a str, &'a str, &'a str), + MyTypes(MyType, MyType) +} + +#[derive(Copy, Clone)] +enum SomeOtherEnum { + SomeConst1, + SomeConst2, + SomeConst3 +} + +enum SomeStructEnum { + EnumStruct{a:isize, b:isize}, + EnumStruct2{f1:MyType, f2:MyType}, + EnumStruct3{f1:MyType, f2:MyType, f3:SomeEnum<'static>} +} + +fn matchSomeEnum(val: SomeEnum) { + match val { + SomeEnum::Ints(int1, int2) => { println(&(int1+int2).to_string()); } + SomeEnum::Floats(float1, float2) => { println(&(float2*float1).to_string()); } + SomeEnum::Strings(.., s3) => { println(s3); } + SomeEnum::MyTypes(mt1, mt2) => { println(&(mt1.field1 - mt2.field1).to_string()); } + } +} + +fn matchSomeStructEnum(se: SomeStructEnum) { + match se { + SomeStructEnum::EnumStruct{a:a, ..} => println(&a.to_string()), + SomeStructEnum::EnumStruct2{f1:f1, f2:f_2} => println(&f_2.field1.to_string()), + SomeStructEnum::EnumStruct3{f1, ..} => println(&f1.field1.to_string()), + } +} + + +fn matchSomeStructEnum2(se: SomeStructEnum) { + use SomeStructEnum::*; + match se { + EnumStruct{a: ref aaa, ..} => println(&aaa.to_string()), + EnumStruct2{f1, f2: f2} => println(&f1.field1.to_string()), + EnumStruct3{f1, f3: SomeEnum::Ints(..), f2} => println(&f1.field1.to_string()), + _ => {}, + } +} + +fn matchSomeOtherEnum(val: SomeOtherEnum) { + use SomeOtherEnum::{SomeConst2, SomeConst3}; + match val { + SomeOtherEnum::SomeConst1 => { println("I'm const1."); } + SomeConst2 | SomeConst3 => { println("I'm const2 or const3."); } + } +} + +fn hello((z, a) : (u32, String), ex: X) { + SameDir2::hello(43); + + println(&yy.to_string()); + let (x, y): (u32, u32) = (5, 3); + println(&x.to_string()); + println(&z.to_string()); + let x: u32 = x; + println(&x.to_string()); + let x = "hello"; + println(x); + + let x = 32.0f32; + let _ = (x + ((x * x) + 1.0).sqrt()).ln(); + + let s: Box = box some_fields {field1: 43}; + let s2: Box = box some_fields {field1: 43}; + let s3 = box nofields; + + s.Method(43); + s3.Method(43); + s2.Method(43); + + ex.prov(43); + + let y: u32 = 56; + // static method on struct + let r = some_fields::stat(y); + // trait static method, calls default + let r = SubTrait::stat2(&*s3); + + let s4 = s3 as Box; + s4.Method(43); + + s4.provided_method(); + s2.prov(45); + + let closure = |x: u32, s: &SomeTrait| { + s.Method(23); + return x + y; + }; + + let z = closure(10, &*s); +} + +pub struct blah { + used_link_args: RefCell<[&'static str; 0]>, +} + +#[macro_use] +mod macro_use_test { + macro_rules! test_rec { + (q, $src: expr) => {{ + print!("{}", $src); + test_rec!($src); + }}; + ($src: expr) => { + print!("{}", $src); + }; + } + + macro_rules! 
internal_vars { + ($src: ident) => {{ + let mut x = $src; + x += 100; + }}; + } +} + +fn main() { // foo + let s = box some_fields {field1: 43}; + hello((43, "a".to_string()), *s); + sub::sub2::hello(); + sub2::sub3::hello(); + + let h = sub2::sub3::hello; + h(); + + // utf8 chars + let ut = "Les Miséééééééérables"; + + // For some reason, this pattern of macro_rules foiled our generated code + // avoiding strategy. + macro_rules! variable_str(($name:expr) => ( + some_fields { + field1: $name, + } + )); + let vs = variable_str!(32); + + let mut candidates: RefCell> = RefCell::new(HashMap::new()); + let _ = blah { + used_link_args: RefCell::new([]), + }; + let s1 = nofields; + let s2 = SF { field1: 55}; + let s3: some_fields = some_fields{ field1: 55}; + let s4: msalias::nested_struct = sub::sub2::nested_struct{ field2: 55}; + let s4: msalias::nested_struct = sub2::nested_struct{ field2: 55}; + println(&s2.field1.to_string()); + let s5: MyType = box some_fields{ field1: 55}; + let s = SameDir::SameStruct{name: "Bob".to_string()}; + let s = SubDir::SubStruct{name:"Bob".to_string()}; + let s6: SomeEnum = SomeEnum::MyTypes(box s2.clone(), s5); + let s7: SomeEnum = SomeEnum::Strings("one", "two", "three"); + matchSomeEnum(s6); + matchSomeEnum(s7); + let s8: SomeOtherEnum = SomeOtherEnum::SomeConst2; + matchSomeOtherEnum(s8); + let s9: SomeStructEnum = SomeStructEnum::EnumStruct2{ f1: box some_fields{ field1:10 }, + f2: box s2 }; + matchSomeStructEnum(s9); + + for x in &vec![1, 2, 3] { + let _y = x; + } + + let s7: SomeEnum = SomeEnum::Strings("one", "two", "three"); + if let SomeEnum::Strings(..) = s7 { + println!("hello!"); + } + + for i in 0..5 { + foo_foo(i); + } + + if let Some(x) = None { + foo_foo(x); + } + + if false { + } else if let Some(y) = None { + foo_foo(y); + } + + while let Some(z) = None { + foo_foo(z); + } + + let mut x = 4; + test_rec!(q, "Hello"); + assert_eq!(x, 4); + internal_vars!(x); +} + +fn foo_foo(_: i32) {} + +impl Iterator for nofields { + type Item = (usize, usize); + + fn next(&mut self) -> Option<(usize, usize)> { + panic!() + } + + fn size_hint(&self) -> (usize, Option) { + panic!() + } +} + +trait Pattern<'a> { + type Searcher; +} + +struct CharEqPattern; + +impl<'a> Pattern<'a> for CharEqPattern { + type Searcher = CharEqPattern; +} + +struct CharSearcher<'a>(>::Searcher); + +pub trait Error { +} + +impl Error + 'static { + pub fn is(&self) -> bool { + panic!() + } +} + +impl Error + 'static + Send { + pub fn is(&self) -> bool { + ::is::(self) + } +} +extern crate serialize; +#[derive(Clone, Copy, Hash, Encodable, Decodable, PartialEq, Eq, PartialOrd, Ord, Debug, Default)] +struct AllDerives(i32); + +fn test_format_args() { + let x = 1; + let y = 2; + let name = "Joe Blogg"; + println!("Hello {}", name); + print!("Hello {0}", name); + print!("{0} + {} = {}", x, y); + print!("x is {}, y is {1}, name is {n}", x, y, n = name); +} diff --git a/src/test/run-make/save-analysis-fail/krate2.rs b/src/test/run-make/save-analysis-fail/krate2.rs new file mode 100644 index 0000000000..2c6f517ff3 --- /dev/null +++ b/src/test/run-make/save-analysis-fail/krate2.rs @@ -0,0 +1,18 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![ crate_name = "krate2" ] +#![ crate_type = "lib" ] + +use std::io::Write; + +pub fn hello() { + std::io::stdout().write_all(b"hello world!\n"); +} diff --git a/src/test/run-make/save-analysis/foo.rs b/src/test/run-make/save-analysis/foo.rs index e9ed897bf3..e8b69729af 100644 --- a/src/test/run-make/save-analysis/foo.rs +++ b/src/test/run-make/save-analysis/foo.rs @@ -57,6 +57,12 @@ fn test_alias(i: Option<::Item>) { let y = x.1; } +// Issue #37700 +const LUT_BITS: usize = 3; +pub struct HuffmanTable { + ac_lut: Option<[(i16, u8); 1 << LUT_BITS]>, +} + struct TupStruct(isize, isize, Box); fn test_tup_struct(x: TupStruct) -> isize { diff --git a/src/test/run-make/sepcomp-inlining/Makefile b/src/test/run-make/sepcomp-inlining/Makefile index ef43b0d97e..720dfff2c0 100644 --- a/src/test/run-make/sepcomp-inlining/Makefile +++ b/src/test/run-make/sepcomp-inlining/Makefile @@ -10,5 +10,5 @@ all: $(RUSTC) foo.rs --emit=llvm-ir -C codegen-units=3 [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ i32\ .*inlined)" -eq "0" ] [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ internal\ i32\ .*inlined)" -eq "2" ] - [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ i32\ .*normal)" -eq "1" ] - [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c declare\ i32\ .*normal)" -eq "2" ] + [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ hidden\ i32\ .*normal)" -eq "1" ] + [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c declare\ hidden\ i32\ .*normal)" -eq "2" ] diff --git a/src/test/run-make/symbol-visibility/Makefile b/src/test/run-make/symbol-visibility/Makefile new file mode 100644 index 0000000000..988c9473f6 --- /dev/null +++ b/src/test/run-make/symbol-visibility/Makefile @@ -0,0 +1,50 @@ +include ../tools.mk + +ifdef IS_WINDOWS +# Do nothing on MSVC. +# On MINGW the --version-script, --dynamic-list, and --retain-symbol args don't +# seem to work reliably. 
+all: + exit 0 +else + +NM=nm -D +DYLIB_EXT=so +CDYLIB_NAME=liba_cdylib.so +RDYLIB_NAME=liba_rust_dylib.so +EXE_NAME=an_executable + +ifeq ($(UNAME),Darwin) +NM=nm -gU +DYLIB_EXT=dylib +CDYLIB_NAME=liba_cdylib.dylib +RDYLIB_NAME=liba_rust_dylib.dylib +EXE_NAME=an_executable +endif + +all: + $(RUSTC) an_rlib.rs + $(RUSTC) a_cdylib.rs + $(RUSTC) a_rust_dylib.rs + $(RUSTC) an_executable.rs + + # Check that a cdylib exports its public #[no_mangle] functions + [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c public_c_function_from_cdylib)" -eq "1" ] + # Check that a cdylib exports the public #[no_mangle] functions of dependencies + [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c public_c_function_from_rlib)" -eq "1" ] + # Check that a cdylib DOES NOT export any public Rust functions + [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c _ZN.*h.*E)" -eq "0" ] + + # Check that a Rust dylib exports its monomorphic functions + [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_c_function_from_rust_dylib)" -eq "1" ] + [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c _ZN.*public_rust_function_from_rust_dylib.*E)" -eq "1" ] + + # Check that a Rust dylib exports the monomorphic functions from its dependencies + [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_c_function_from_rlib)" -eq "1" ] + [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_rust_function_from_rlib)" -eq "1" ] + + # Check that an executable does not export any dynamic symbols + [ "$$($(NM) $(TMPDIR)/$(EXE_NAME) | grep -c public_c_function_from_rlib)" -eq "0" ] + [ "$$($(NM) $(TMPDIR)/$(EXE_NAME) | grep -c public_rust_function_from_exe)" -eq "0" ] + +endif diff --git a/src/test/run-make/symbol-visibility/a_cdylib.rs b/src/test/run-make/symbol-visibility/a_cdylib.rs new file mode 100644 index 0000000000..9a70542c06 --- /dev/null +++ b/src/test/run-make/symbol-visibility/a_cdylib.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type="cdylib"] + +extern crate an_rlib; + +// This should not be exported +pub fn public_rust_function_from_cdylib() {} + +// This should be exported +#[no_mangle] +pub extern "C" fn public_c_function_from_cdylib() { + an_rlib::public_c_function_from_rlib(); +} diff --git a/src/test/run-make/symbol-visibility/a_rust_dylib.rs b/src/test/run-make/symbol-visibility/a_rust_dylib.rs new file mode 100644 index 0000000000..b826211c9a --- /dev/null +++ b/src/test/run-make/symbol-visibility/a_rust_dylib.rs @@ -0,0 +1,20 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![crate_type="dylib"] + +extern crate an_rlib; + +// This should be exported +pub fn public_rust_function_from_rust_dylib() {} + +// This should be exported +#[no_mangle] +pub extern "C" fn public_c_function_from_rust_dylib() {} diff --git a/src/test/run-make/symbol-visibility/an_executable.rs b/src/test/run-make/symbol-visibility/an_executable.rs new file mode 100644 index 0000000000..73059c5e37 --- /dev/null +++ b/src/test/run-make/symbol-visibility/an_executable.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type="bin"] + +extern crate an_rlib; + +pub fn public_rust_function_from_exe() {} + +fn main() {} diff --git a/src/test/run-make/symbol-visibility/an_rlib.rs b/src/test/run-make/symbol-visibility/an_rlib.rs new file mode 100644 index 0000000000..cd19500d14 --- /dev/null +++ b/src/test/run-make/symbol-visibility/an_rlib.rs @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type="rlib"] + +pub fn public_rust_function_from_rlib() {} + +#[no_mangle] +pub extern "C" fn public_c_function_from_rlib() {} diff --git a/src/test/run-make/target-specs/Makefile b/src/test/run-make/target-specs/Makefile index 0c9a0169c1..6b58ad7b6d 100644 --- a/src/test/run-make/target-specs/Makefile +++ b/src/test/run-make/target-specs/Makefile @@ -6,3 +6,4 @@ all: $(RUSTC) foo.rs --target=my-incomplete-platform.json 2>&1 | grep 'Field llvm-target' RUST_TARGET_PATH=. $(RUSTC) foo.rs --target=my-awesome-platform --crate-type=lib --emit=asm RUST_TARGET_PATH=. 
$(RUSTC) foo.rs --target=x86_64-unknown-linux-gnu --crate-type=lib --emit=asm + $(RUSTC) -Z unstable-options --target=my-awesome-platform.json --print target-spec-json > $(TMPDIR)/test-platform.json && $(RUSTC) -Z unstable-options --target=$(TMPDIR)/test-platform.json --print target-spec-json | diff -q $(TMPDIR)/test-platform.json - diff --git a/src/test/run-pass-fulldeps/auxiliary/cond_noprelude_plugin.rs b/src/test/run-pass-fulldeps/auxiliary/cond_noprelude_plugin.rs index 48919fe876..664bb9da89 100644 --- a/src/test/run-pass-fulldeps/auxiliary/cond_noprelude_plugin.rs +++ b/src/test/run-pass-fulldeps/auxiliary/cond_noprelude_plugin.rs @@ -20,10 +20,10 @@ extern crate syntax; use proc_macro_tokens::build::ident_eq; +use syntax::ast::Ident; use syntax::ext::base::{ExtCtxt, MacResult}; use syntax::ext::proc_macro_shim::build_block_emitter; use syntax::tokenstream::{TokenTree, TokenStream}; -use syntax::parse::token::str_to_ident; use syntax::codemap::Span; use rustc_plugin::Registry; @@ -57,7 +57,7 @@ fn cond_rec(input: TokenStream) -> TokenStream { let test: TokenStream = clause.slice(0..1); let rhs: TokenStream = clause.slice_from(1..); - if ident_eq(&test[0], str_to_ident("else")) || rest.is_empty() { + if ident_eq(&test[0], Ident::from_str("else")) || rest.is_empty() { qquote!({unquote(rhs)}) } else { qquote!({if unquote(test) { unquote(rhs) } else { cond!(unquote(rest)) } }) diff --git a/src/test/run-pass-fulldeps/auxiliary/cond_plugin.rs b/src/test/run-pass-fulldeps/auxiliary/cond_plugin.rs index 0ea4cec75c..31a5f5968b 100644 --- a/src/test/run-pass-fulldeps/auxiliary/cond_plugin.rs +++ b/src/test/run-pass-fulldeps/auxiliary/cond_plugin.rs @@ -26,7 +26,7 @@ use syntax::ast::Ident; use syntax::codemap::{DUMMY_SP, Span}; use syntax::ext::proc_macro_shim::build_block_emitter; use syntax::ext::base::{ExtCtxt, MacResult}; -use syntax::parse::token::{self, Token, DelimToken, keywords, str_to_ident}; +use syntax::parse::token::{self, Token, DelimToken}; use syntax::tokenstream::{TokenTree, TokenStream}; #[plugin_registrar] @@ -58,7 +58,7 @@ fn cond_rec(input: TokenStream) -> TokenStream { let test: TokenStream = clause.slice(0..1); let rhs: TokenStream = clause.slice_from(1..); - if ident_eq(&test[0], str_to_ident("else")) || rest.is_empty() { + if ident_eq(&test[0], Ident::from_str("else")) || rest.is_empty() { qquote!({unquote(rhs)}) } else { qquote!({if unquote(test) { unquote(rhs) } else { cond!(unquote(rest)) } }) diff --git a/src/test/run-pass-fulldeps/auxiliary/cond_prelude_plugin.rs b/src/test/run-pass-fulldeps/auxiliary/cond_prelude_plugin.rs index 169c96b438..6a2d159a4b 100644 --- a/src/test/run-pass-fulldeps/auxiliary/cond_prelude_plugin.rs +++ b/src/test/run-pass-fulldeps/auxiliary/cond_prelude_plugin.rs @@ -52,7 +52,7 @@ fn cond_rec(input: TokenStream) -> TokenStream { let test: TokenStream = clause.slice(0..1); let rhs: TokenStream = clause.slice_from(1..); - if ident_eq(&test[0], str_to_ident("else")) || rest.is_empty() { + if ident_eq(&test[0], Ident::from_str("else")) || rest.is_empty() { qquote!({unquote(rhs)}) } else { qquote!({if unquote(test) { unquote(rhs) } else { cond!(unquote(rest)) } }) diff --git a/src/test/run-pass-fulldeps/auxiliary/custom_derive_partial_eq.rs b/src/test/run-pass-fulldeps/auxiliary/custom_derive_partial_eq.rs index e750d1fb1e..63dbd4d5be 100644 --- a/src/test/run-pass-fulldeps/auxiliary/custom_derive_partial_eq.rs +++ b/src/test/run-pass-fulldeps/auxiliary/custom_derive_partial_eq.rs @@ -10,7 +10,7 @@ // force-host -#![feature(plugin_registrar, 
rustc_private, item_like_imports)] +#![feature(plugin_registrar, rustc_private)] extern crate syntax; extern crate syntax_ext; @@ -25,12 +25,12 @@ use syntax::ast::*; use syntax::codemap::Span; use syntax::ext::base::*; use syntax::ext::build::AstBuilder; -use syntax::parse::token::{intern, InternedString}; +use syntax::symbol::Symbol; use syntax::ptr::P; #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_syntax_extension(intern("derive_CustomPartialEq"), + reg.register_syntax_extension(Symbol::intern("derive_CustomPartialEq"), MultiDecorator(Box::new(expand_deriving_partial_eq))); } @@ -52,7 +52,7 @@ fn expand_deriving_partial_eq(cx: &mut ExtCtxt, span: Span, mitem: &MetaItem, it substr) } - let inline = cx.meta_word(span, InternedString::new("inline")); + let inline = cx.meta_word(span, Symbol::intern("inline")); let attrs = vec![cx.attribute(span, inline)]; let methods = vec![MethodDef { name: "eq", diff --git a/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin.rs b/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin.rs index 6b688b006b..07f7d6bad7 100644 --- a/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin.rs +++ b/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin.rs @@ -23,7 +23,7 @@ extern crate rustc_plugin; use syntax::ast; use syntax::ext::base::{MultiDecorator, ExtCtxt, Annotatable}; use syntax::ext::build::AstBuilder; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax_ext::deriving::generic::{cs_fold, TraitDef, MethodDef, combine_substructure}; use syntax_ext::deriving::generic::ty::{Literal, LifetimeBounds, Path, borrowed_explicit_self}; use syntax_pos::Span; @@ -32,7 +32,7 @@ use rustc_plugin::Registry; #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { reg.register_syntax_extension( - token::intern("derive_TotalSum"), + Symbol::intern("derive_TotalSum"), MultiDecorator(box expand)); } @@ -66,7 +66,7 @@ fn expand(cx: &mut ExtCtxt, |cx, span, subexpr, field, _| { cx.expr_binary(span, ast::BinOpKind::Add, subexpr, cx.expr_method_call(span, field, - token::str_to_ident("total_sum"), vec![])) + ast::Ident::from_str("total_sum"), vec![])) }, zero, box |cx, span, _, _| { cx.span_bug(span, "wtf??"); }, diff --git a/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin_attr.rs b/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin_attr.rs index 6b58fee157..50b16a0e26 100644 --- a/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin_attr.rs +++ b/src/test/run-pass-fulldeps/auxiliary/custom_derive_plugin_attr.rs @@ -23,7 +23,7 @@ extern crate rustc_plugin; use syntax::ast; use syntax::ext::base::{MultiDecorator, ExtCtxt, Annotatable}; use syntax::ext::build::AstBuilder; -use syntax::parse::token; +use syntax::symbol::Symbol; use syntax::ptr::P; use syntax_ext::deriving::generic::{TraitDef, MethodDef, combine_substructure}; use syntax_ext::deriving::generic::{Substructure, Struct, EnumMatching}; @@ -34,7 +34,7 @@ use rustc_plugin::Registry; #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { reg.register_syntax_extension( - token::intern("derive_TotalSum"), + Symbol::intern("derive_TotalSum"), MultiDecorator(box expand)); } diff --git a/src/test/run-pass-fulldeps/auxiliary/lint_for_crate.rs b/src/test/run-pass-fulldeps/auxiliary/lint_for_crate.rs index a424517da1..fc53031e7f 100644 --- a/src/test/run-pass-fulldeps/auxiliary/lint_for_crate.rs +++ b/src/test/run-pass-fulldeps/auxiliary/lint_for_crate.rs @@ -32,7 +32,7 @@ impl LintPass for Pass { } } -impl LateLintPass 
for Pass { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass { fn check_crate(&mut self, cx: &LateContext, krate: &hir::Crate) { if !attr::contains_name(&krate.attrs, "crate_okay") { cx.span_lint(CRATE_NOT_OKAY, krate.span, @@ -43,5 +43,5 @@ impl LateLintPass for Pass { #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_late_lint_pass(box Pass as LateLintPassObject); + reg.register_late_lint_pass(box Pass); } diff --git a/src/test/run-pass-fulldeps/auxiliary/lint_group_plugin_test.rs b/src/test/run-pass-fulldeps/auxiliary/lint_group_plugin_test.rs index 1e9a77724a..490aa0d469 100644 --- a/src/test/run-pass-fulldeps/auxiliary/lint_group_plugin_test.rs +++ b/src/test/run-pass-fulldeps/auxiliary/lint_group_plugin_test.rs @@ -34,7 +34,7 @@ impl LintPass for Pass { } } -impl LateLintPass for Pass { +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass { fn check_item(&mut self, cx: &LateContext, it: &hir::Item) { match &*it.name.as_str() { "lintme" => cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"), @@ -46,6 +46,6 @@ impl LateLintPass for Pass { #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_late_lint_pass(box Pass as LateLintPassObject); + reg.register_late_lint_pass(box Pass); reg.register_lint_group("lint_me", vec![TEST_LINT, PLEASE_LINT]); } diff --git a/src/test/run-pass-fulldeps/auxiliary/lint_plugin_test.rs b/src/test/run-pass-fulldeps/auxiliary/lint_plugin_test.rs index 8ea131da33..8647797270 100644 --- a/src/test/run-pass-fulldeps/auxiliary/lint_plugin_test.rs +++ b/src/test/run-pass-fulldeps/auxiliary/lint_plugin_test.rs @@ -36,7 +36,7 @@ impl LintPass for Pass { impl EarlyLintPass for Pass { fn check_item(&mut self, cx: &EarlyContext, it: &ast::Item) { - if it.ident.name.as_str() == "lintme" { + if it.ident.name == "lintme" { cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"); } } diff --git a/src/test/run-pass-fulldeps/auxiliary/macro_crate_test.rs b/src/test/run-pass-fulldeps/auxiliary/macro_crate_test.rs index 7257444ee8..29cc6b7db9 100644 --- a/src/test/run-pass-fulldeps/auxiliary/macro_crate_test.rs +++ b/src/test/run-pass-fulldeps/auxiliary/macro_crate_test.rs @@ -23,6 +23,7 @@ use syntax::ext::base::*; use syntax::ext::quote::rt::ToTokens; use syntax::parse::{self, token}; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax::tokenstream::TokenTree; use syntax_pos::Span; use rustc_plugin::Registry; @@ -36,15 +37,15 @@ pub fn plugin_registrar(reg: &mut Registry) { reg.register_macro("make_a_1", expand_make_a_1); reg.register_macro("identity", expand_identity); reg.register_syntax_extension( - token::intern("into_multi_foo"), + Symbol::intern("into_multi_foo"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. MultiModifier(Box::new(expand_into_foo_multi))); reg.register_syntax_extension( - token::intern("duplicate"), + Symbol::intern("duplicate"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. MultiDecorator(Box::new(expand_duplicate))); reg.register_syntax_extension( - token::intern("caller"), + Symbol::intern("caller"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. 
MultiDecorator(Box::new(expand_caller))); } @@ -108,9 +109,9 @@ fn expand_duplicate(cx: &mut ExtCtxt, it: &Annotatable, push: &mut FnMut(Annotatable)) { let copy_name = match mi.node { - ast::MetaItemKind::List(_, ref xs) => { + ast::MetaItemKind::List(ref xs) => { if let Some(word) = xs[0].word() { - token::str_to_ident(&word.name()) + ast::Ident::with_empty_ctxt(word.name()) } else { cx.span_err(mi.span, "Expected word"); return; @@ -179,7 +180,7 @@ fn expand_caller(cx: &mut ExtCtxt, } let fn_name = match list[0].name() { - Some(name) => token::str_to_ident(&name), + Some(name) => ast::Ident::with_empty_ctxt(name), None => cx.span_fatal(list[0].span(), "First parameter must be an ident.") }; diff --git a/src/test/run-pass-fulldeps/auxiliary/plugin_args.rs b/src/test/run-pass-fulldeps/auxiliary/plugin_args.rs index f21c914a76..ba2af77cdb 100644 --- a/src/test/run-pass-fulldeps/auxiliary/plugin_args.rs +++ b/src/test/run-pass-fulldeps/auxiliary/plugin_args.rs @@ -22,9 +22,9 @@ use std::borrow::ToOwned; use syntax::ast; use syntax::ext::build::AstBuilder; use syntax::ext::base::{TTMacroExpander, ExtCtxt, MacResult, MacEager, NormalTT}; -use syntax::parse::token; use syntax::print::pprust; use syntax::ptr::P; +use syntax::symbol::Symbol; use syntax_pos::Span; use syntax::tokenstream; use rustc_plugin::Registry; @@ -40,15 +40,14 @@ impl TTMacroExpander for Expander { _: &[tokenstream::TokenTree]) -> Box { let args = self.args.iter().map(|i| pprust::meta_list_item_to_string(i)) .collect::>().join(", "); - let interned = token::intern_and_get_ident(&args[..]); - MacEager::expr(ecx.expr_str(sp, interned)) + MacEager::expr(ecx.expr_str(sp, Symbol::intern(&args))) } } #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { let args = reg.args().to_owned(); - reg.register_syntax_extension(token::intern("plugin_args"), + reg.register_syntax_extension(Symbol::intern("plugin_args"), // FIXME (#22405): Replace `Box::new` with `box` here when/if possible. 
NormalTT(Box::new(Expander { args: args, }), None, false)); } diff --git a/src/test/run-pass-fulldeps/auxiliary/proc_macro_def.rs b/src/test/run-pass-fulldeps/auxiliary/proc_macro_def.rs index 9fce19f46f..f97fb04aad 100644 --- a/src/test/run-pass-fulldeps/auxiliary/proc_macro_def.rs +++ b/src/test/run-pass-fulldeps/auxiliary/proc_macro_def.rs @@ -18,18 +18,19 @@ use proc_macro_tokens::prelude::*; use rustc_plugin::Registry; use syntax::ext::base::SyntaxExtension; use syntax::ext::proc_macro_shim::prelude::*; +use syntax::symbol::Symbol; #[plugin_registrar] pub fn plugin_registrar(reg: &mut Registry) { - reg.register_syntax_extension(token::intern("attr_tru"), + reg.register_syntax_extension(Symbol::intern("attr_tru"), SyntaxExtension::AttrProcMacro(Box::new(attr_tru))); - reg.register_syntax_extension(token::intern("attr_identity"), + reg.register_syntax_extension(Symbol::intern("attr_identity"), SyntaxExtension::AttrProcMacro(Box::new(attr_identity))); - reg.register_syntax_extension(token::intern("tru"), + reg.register_syntax_extension(Symbol::intern("tru"), SyntaxExtension::ProcMacro(Box::new(tru))); - reg.register_syntax_extension(token::intern("ret_tru"), + reg.register_syntax_extension(Symbol::intern("ret_tru"), SyntaxExtension::ProcMacro(Box::new(ret_tru))); - reg.register_syntax_extension(token::intern("identity"), + reg.register_syntax_extension(Symbol::intern("identity"), SyntaxExtension::ProcMacro(Box::new(identity))); } diff --git a/src/test/run-pass-fulldeps/auxiliary/procedural_mbe_matching.rs b/src/test/run-pass-fulldeps/auxiliary/procedural_mbe_matching.rs index 6ac0d5ad1a..2b3857048f 100644 --- a/src/test/run-pass-fulldeps/auxiliary/procedural_mbe_matching.rs +++ b/src/test/run-pass-fulldeps/auxiliary/procedural_mbe_matching.rs @@ -18,8 +18,8 @@ extern crate syntax_pos; extern crate rustc; extern crate rustc_plugin; -use syntax::parse::token::{str_to_ident, NtExpr, NtPat}; -use syntax::ast::{Pat}; +use syntax::parse::token::{NtExpr, NtPat}; +use syntax::ast::{Ident, Pat}; use syntax::tokenstream::{TokenTree}; use syntax::ext::base::{ExtCtxt, MacResult, MacEager}; use syntax::ext::build::AstBuilder; @@ -44,12 +44,12 @@ fn expand_mbe_matches(cx: &mut ExtCtxt, _: Span, args: &[TokenTree]) } }; - let matched_nt = match *map[&str_to_ident("matched")] { + let matched_nt = match *map[&Ident::from_str("matched")] { MatchedNonterminal(ref nt) => nt.clone(), _ => unreachable!(), }; - let mac_expr = match (&*matched_nt, &*map[&str_to_ident("pat")]) { + let mac_expr = match (&*matched_nt, &*map[&Ident::from_str("pat")]) { (&NtExpr(ref matched_expr), &MatchedSeq(ref pats, seq_sp)) => { let pats: Vec> = pats.iter().map(|pat_nt| { match **pat_nt { diff --git a/src/test/run-pass-fulldeps/empty-struct-braces-derive.rs b/src/test/run-pass-fulldeps/empty-struct-braces-derive.rs index 66ffff9433..79ce3cb68d 100644 --- a/src/test/run-pass-fulldeps/empty-struct-braces-derive.rs +++ b/src/test/run-pass-fulldeps/empty-struct-braces-derive.rs @@ -10,7 +10,6 @@ // `#[derive(Trait)]` works for empty structs/variants with braces or parens. -#![feature(relaxed_adts)] #![feature(rustc_private)] extern crate serialize as rustc_serialize; diff --git a/src/test/run-pass-fulldeps/issue-37290/auxiliary/lint.rs b/src/test/run-pass-fulldeps/issue-37290/auxiliary/lint.rs new file mode 100644 index 0000000000..c6892757c6 --- /dev/null +++ b/src/test/run-pass-fulldeps/issue-37290/auxiliary/lint.rs @@ -0,0 +1,68 @@ +// Copyright 2012-2014 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// This flag is needed for plugins to work: +// compile-flags: -C prefer-dynamic + +#![feature(plugin_registrar, rustc_private)] +#![crate_type = "dylib"] +#![deny(region_hierarchy)] + +extern crate syntax; +#[macro_use] +extern crate rustc; +extern crate rustc_plugin; + +use rustc::lint::{LateContext, LintPass, LateLintPass, LintArray, LintContext}; +use rustc::hir; +use rustc::hir::intravisit::FnKind; +use rustc::middle::region::CodeExtent; +use rustc::util::nodemap::FxHashMap; + +use syntax::ast::{self, NodeId}; +use syntax::codemap::Span; + +declare_lint!(REGION_HIERARCHY, Warn, "warn about bogus region hierarchy"); + +struct Pass { + map: FxHashMap +} + +impl LintPass for Pass { + fn get_lints(&self) -> LintArray { lint_array!(REGION_HIERARCHY) } +} + +impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass { + fn check_fn(&mut self, cx: &LateContext, + fk: FnKind, _: &hir::FnDecl, expr: &hir::Expr, + span: Span, node: ast::NodeId) + { + if let FnKind::Closure(..) = fk { return } + + let mut extent = cx.tcx.region_maps.node_extent(expr.id); + while let Some(parent) = cx.tcx.region_maps.opt_encl_scope(extent) { + extent = parent; + } + if let Some(other) = self.map.insert(extent, node) { + cx.span_lint(REGION_HIERARCHY, span, &format!( + "different fns {:?}, {:?} with the same root extent {:?}", + cx.tcx.map.local_def_id(other), + cx.tcx.map.local_def_id(node), + extent)); + } + } +} + +#[plugin_registrar] +pub fn plugin_registrar(reg: &mut ::rustc_plugin::Registry) { + reg.register_late_lint_pass(Box::new( + Pass { map: FxHashMap() } + )); +} diff --git a/src/test/run-pass-fulldeps/issue-37290/main.rs b/src/test/run-pass-fulldeps/issue-37290/main.rs new file mode 100644 index 0000000000..394ad92b1d --- /dev/null +++ b/src/test/run-pass-fulldeps/issue-37290/main.rs @@ -0,0 +1,30 @@ +// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// aux-build:lint.rs + +#![feature(plugin)] +#![plugin(lint)] + +struct Foo { +} + +impl Foo { + fn bar(&self) -> usize { + 22 + } + + fn baz(&self) -> usize { + 22 + } +} + +fn main() { } + diff --git a/src/test/run-pass-fulldeps/macro-quote-1.rs b/src/test/run-pass-fulldeps/macro-quote-1.rs index 914da3f746..948b20c147 100644 --- a/src/test/run-pass-fulldeps/macro-quote-1.rs +++ b/src/test/run-pass-fulldeps/macro-quote-1.rs @@ -18,9 +18,6 @@ extern crate proc_macro_tokens; use proc_macro_tokens::prelude::*; extern crate syntax; -use syntax::ast::Ident; -use syntax::codemap::DUMMY_SP; -use syntax::parse::token::{self, Token, keywords, str_to_ident}; fn main() { let lex_true = lex("true"); diff --git a/src/test/run-pass/myriad-closures.rs b/src/test/run-pass-fulldeps/myriad-closures.rs similarity index 91% rename from src/test/run-pass/myriad-closures.rs rename to src/test/run-pass-fulldeps/myriad-closures.rs index d2c9a5d562..a946ec635b 100644 --- a/src/test/run-pass/myriad-closures.rs +++ b/src/test/run-pass-fulldeps/myriad-closures.rs @@ -13,6 +13,9 @@ // toolchain. // See https://github.com/rust-lang/rust/issues/34793 for more information. +// Make sure we don't optimize anything away: +// compile-flags: -C no-prepopulate-passes + // Expand something exponentially macro_rules! go_bacterial { ($mac:ident) => ($mac!()); @@ -23,10 +26,7 @@ macro_rules! go_bacterial { } macro_rules! mk_closure { - () => ({ - let c = |a: u32| a + 4; - let _ = c(2); - }) + () => ((move || {})()) } macro_rules! mk_fn { diff --git a/src/test/run-pass-fulldeps/proc-macro/add-impl.rs b/src/test/run-pass-fulldeps/proc-macro/add-impl.rs index e82282f3d8..7ea7ceafc2 100644 --- a/src/test/run-pass-fulldeps/proc-macro/add-impl.rs +++ b/src/test/run-pass-fulldeps/proc-macro/add-impl.rs @@ -10,8 +10,6 @@ // aux-build:add-impl.rs -#![feature(proc_macro)] - #[macro_use] extern crate add_impl; diff --git a/src/test/run-pass-fulldeps/proc-macro/append-impl.rs b/src/test/run-pass-fulldeps/proc-macro/append-impl.rs index f062111df9..591f3331d2 100644 --- a/src/test/run-pass-fulldeps/proc-macro/append-impl.rs +++ b/src/test/run-pass-fulldeps/proc-macro/append-impl.rs @@ -10,7 +10,6 @@ // aux-build:append-impl.rs -#![feature(proc_macro)] #![allow(warnings)] #[macro_use] diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/add-impl.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/add-impl.rs index 99586b0bb4..3959eccd81 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/add-impl.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/add-impl.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] extern crate proc_macro; @@ -21,13 +19,12 @@ use proc_macro::TokenStream; #[proc_macro_derive(AddImpl)] // #[cfg(proc_macro)] pub fn derive(input: TokenStream) -> TokenStream { - (input.to_string() + " - impl B { + "impl B { fn foo(&self) {} } fn foo() {} mod bar { pub fn foo() {} } - ").parse().unwrap() + ".parse().unwrap() } diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/append-impl.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/append-impl.rs index 27c3d643ca..fdce709e5b 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/append-impl.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/append-impl.rs @@ -11,8 +11,6 @@ // force-host // no-prefer-dynamic -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; @@ -21,11 +19,8 @@ use 
proc_macro::TokenStream; #[proc_macro_derive(Append)] pub fn derive_a(input: TokenStream) -> TokenStream { - let mut input = input.to_string(); - input.push_str(" - impl Append for A { - fn foo(&self) {} - } - "); - input.parse().unwrap() + "impl Append for A { + fn foo(&self) {} + } + ".parse().unwrap() } diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-a.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-a.rs index c2de173568..a253c6224a 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-a.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-a.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] extern crate proc_macro; @@ -23,5 +21,5 @@ pub fn derive(input: TokenStream) -> TokenStream { let input = input.to_string(); assert!(input.contains("struct A;")); assert!(input.contains("#[derive(Debug, PartialEq, Eq, Copy, Clone)]")); - "#[derive(Debug, PartialEq, Eq, Copy, Clone)] struct A;".parse().unwrap() + "".parse().unwrap() } diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-atob.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-atob.rs index a942adc4c8..713fb7d10f 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-atob.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-atob.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] extern crate proc_macro; diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-b.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-b.rs new file mode 100644 index 0000000000..c18cda8953 --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-b.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +use proc_macro::TokenStream; + +#[proc_macro_derive(B, attributes(B, C))] +pub fn derive(input: TokenStream) -> TokenStream { + let input = input.to_string(); + assert!(input.contains("#[B]")); + assert!(input.contains("struct B {")); + assert!(input.contains("#[C]")); + assert!(input.contains("#[derive(Debug, PartialEq, Eq, Copy, Clone)]")); + "".parse().unwrap() +} diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-ctod.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-ctod.rs index 50f1a390b2..19caafd17b 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-ctod.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-ctod.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] extern crate proc_macro; diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-nothing.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-nothing.rs new file mode 100644 index 0000000000..cfe428bf5f --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-nothing.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +use proc_macro::TokenStream; + +#[proc_macro_derive(Nothing)] +pub fn nothing(input: TokenStream) -> TokenStream { + "".parse().unwrap() +} diff --git a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-4.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-reexport.rs similarity index 81% rename from src/test/compile-fail-fulldeps/proc-macro/feature-gate-4.rs rename to src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-reexport.rs index 0fdd13bc30..fae2d27ab6 100644 --- a/src/test/compile-fail-fulldeps/proc-macro/feature-gate-4.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-reexport.rs @@ -8,8 +8,9 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -// aux-build:derive-a.rs +// ignore-test -#[macro_use] +#![feature(macro_reexport)] + +#[macro_reexport(A)] extern crate derive_a; -//~^ ERROR: loading custom derive macro crates is experimentally supported diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-same-struct.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-same-struct.rs index bd283ca57e..a2c25ae50e 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-same-struct.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/derive-same-struct.rs @@ -11,9 +11,6 @@ // no-prefer-dynamic // compile-flags:--crate-type proc-macro -#![feature(proc_macro)] -#![feature(proc_macro_lib)] - extern crate proc_macro; use proc_macro::TokenStream; @@ -21,7 +18,7 @@ use proc_macro::TokenStream; #[proc_macro_derive(AToB)] pub fn derive1(input: TokenStream) -> TokenStream { println!("input1: {:?}", input.to_string()); - assert_eq!(input.to_string(), "#[derive(BToC)]\nstruct A;\n"); + assert_eq!(input.to_string(), "struct A;\n"); "#[derive(BToC)] struct B;".parse().unwrap() } diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/double.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/double.rs new file mode 100644 index 0000000000..f4404b737f --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/double.rs @@ -0,0 +1,22 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// no-prefer-dynamic + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +use proc_macro::TokenStream; + +#[proc_macro_derive(Double)] +pub fn derive(input: TokenStream) -> TokenStream { + format!("mod foo {{ {} }}", input.to_string()).parse().unwrap() +} diff --git a/src/test/run-pass-fulldeps/proc-macro/auxiliary/expand-with-a-macro.rs b/src/test/run-pass-fulldeps/proc-macro/auxiliary/expand-with-a-macro.rs index 155b125690..e6831b6bfd 100644 --- a/src/test/run-pass-fulldeps/proc-macro/auxiliary/expand-with-a-macro.rs +++ b/src/test/run-pass-fulldeps/proc-macro/auxiliary/expand-with-a-macro.rs @@ -11,8 +11,6 @@ // no-prefer-dynamic #![crate_type = "proc-macro"] -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![deny(warnings)] extern crate proc_macro; @@ -24,8 +22,6 @@ pub fn derive(input: TokenStream) -> TokenStream { let input = input.to_string(); assert!(input.contains("struct A;")); r#" - struct A; - impl A { fn a(&self) { panic!("hello"); diff --git a/src/test/run-pass-fulldeps/proc-macro/crate-var.rs b/src/test/run-pass-fulldeps/proc-macro/crate-var.rs new file mode 100644 index 0000000000..ba1417ecb5 --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/crate-var.rs @@ -0,0 +1,26 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:double.rs + +#![allow(unused)] + +#[macro_use] +extern crate double; + +struct Foo; + +macro_rules! m { () => { + #[derive(Double)] + struct Bar($crate::Foo); +} } +m!(); + +fn main() {} diff --git a/src/test/run-pass-fulldeps/proc-macro/derive-b.rs b/src/test/run-pass-fulldeps/proc-macro/derive-b.rs new file mode 100644 index 0000000000..f1e1626ddf --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/derive-b.rs @@ -0,0 +1,30 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-b.rs +// ignore-stage1 + +#[macro_use] +extern crate derive_b; + +#[derive(Debug, PartialEq, B, Eq, Copy, Clone)] +#[B] +struct B { + #[C] + a: u64 +} + +fn main() { + B { a: 3 }; + assert_eq!(B { a: 3 }, B { a: 3 }); + let b = B { a: 3 }; + let _d = b; + let _e = b; +} diff --git a/src/test/run-pass-fulldeps/proc-macro/derive-same-struct.rs b/src/test/run-pass-fulldeps/proc-macro/derive-same-struct.rs index b3edc8f1c3..ce3ba60b0e 100644 --- a/src/test/run-pass-fulldeps/proc-macro/derive-same-struct.rs +++ b/src/test/run-pass-fulldeps/proc-macro/derive-same-struct.rs @@ -10,12 +10,10 @@ // aux-build:derive-same-struct.rs -#![feature(proc_macro)] - #[macro_use] extern crate derive_same_struct; -#[derive(AToB, BToC)] +#[derive(AToB)] struct A; fn main() { diff --git a/src/test/run-pass-fulldeps/proc-macro/derive-test.rs b/src/test/run-pass-fulldeps/proc-macro/derive-test.rs new file mode 100644 index 0000000000..a07e8b6cd7 --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/derive-test.rs @@ -0,0 +1,28 @@ +// Copyright 2016 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic +// compile-flags: --test + +#![crate_type = "proc-macro"] + +extern crate proc_macro; + +use proc_macro::TokenStream; + +#[proc_macro_derive(Foo)] +pub fn derive_foo(_input: TokenStream) -> TokenStream { + "".parse().unwrap() +} + +#[test] +pub fn test_derive() { + assert!(true); +} diff --git a/src/test/run-pass-fulldeps/proc-macro/expand-with-a-macro.rs b/src/test/run-pass-fulldeps/proc-macro/expand-with-a-macro.rs index 16f3535ada..4ccd4615fb 100644 --- a/src/test/run-pass-fulldeps/proc-macro/expand-with-a-macro.rs +++ b/src/test/run-pass-fulldeps/proc-macro/expand-with-a-macro.rs @@ -11,7 +11,6 @@ // aux-build:expand-with-a-macro.rs // ignore-stage1 -#![feature(proc_macro)] #![deny(warnings)] #[macro_use] diff --git a/src/test/run-pass-fulldeps/proc-macro/load-two.rs b/src/test/run-pass-fulldeps/proc-macro/load-two.rs index 431c8c5902..d15a83a2cb 100644 --- a/src/test/run-pass-fulldeps/proc-macro/load-two.rs +++ b/src/test/run-pass-fulldeps/proc-macro/load-two.rs @@ -11,8 +11,6 @@ // aux-build:derive-atob.rs // aux-build:derive-ctod.rs -#![feature(proc_macro)] - #[macro_use] extern crate derive_atob; #[macro_use] diff --git a/src/test/run-pass-fulldeps/proc-macro/smoke.rs b/src/test/run-pass-fulldeps/proc-macro/smoke.rs index cd7edb7264..54d651df1f 100644 --- a/src/test/run-pass-fulldeps/proc-macro/smoke.rs +++ b/src/test/run-pass-fulldeps/proc-macro/smoke.rs @@ -11,8 +11,6 @@ // aux-build:derive-a.rs // ignore-stage1 -#![feature(proc_macro)] - #[macro_use] extern crate derive_a; diff --git a/src/test/run-pass-fulldeps/proc-macro/struct-field-macro.rs b/src/test/run-pass-fulldeps/proc-macro/struct-field-macro.rs new file mode 100644 index 0000000000..c9056e08d4 --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/struct-field-macro.rs @@ -0,0 +1,26 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:derive-nothing.rs +// ignore-stage1 + +#[macro_use] +extern crate derive_nothing; + +macro_rules! int { + () => { i32 } +} + +#[derive(Nothing)] +struct S { + x: int!(), +} + +fn main() {} diff --git a/src/test/run-pass-fulldeps/proc-macro/use-reexport.rs b/src/test/run-pass-fulldeps/proc-macro/use-reexport.rs new file mode 100644 index 0000000000..f0a1bfe652 --- /dev/null +++ b/src/test/run-pass-fulldeps/proc-macro/use-reexport.rs @@ -0,0 +1,20 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// aux-build:derive-a.rs +// aux-build:derive-reexport.rs + +#[macro_use] +extern crate derive_reexport; + +#[derive(Debug, PartialEq, A, Eq, Copy, Clone)] +struct A; + +fn main() {} diff --git a/src/test/run-pass-fulldeps/qquote.rs b/src/test/run-pass-fulldeps/qquote.rs index 7c0c24163f..b4ed57192c 100644 --- a/src/test/run-pass-fulldeps/qquote.rs +++ b/src/test/run-pass-fulldeps/qquote.rs @@ -16,7 +16,7 @@ extern crate syntax; extern crate syntax_pos; use syntax::print::pprust::*; -use syntax::parse::token::intern; +use syntax::symbol::Symbol; use syntax_pos::DUMMY_SP; fn main() { @@ -29,7 +29,7 @@ fn main() { cx.bt_push(syntax::codemap::ExpnInfo { call_site: DUMMY_SP, callee: syntax::codemap::NameAndSpan { - format: syntax::codemap::MacroBang(intern("")), + format: syntax::codemap::MacroBang(Symbol::intern("")), allow_internal_unstable: false, span: None, } @@ -97,7 +97,7 @@ fn main() { // quote_meta_item! let meta = quote_meta_item!(cx, cfg(foo = "bar")); - check!(meta_item_to_string, meta, *quote_meta_item!(cx, $meta); r#"cfg(foo = "bar")"#); + check!(meta_item_to_string, meta, quote_meta_item!(cx, $meta); r#"cfg(foo = "bar")"#); let attr = quote_attr!(cx, #![$meta]); check!(attribute_to_string, attr; r#"#![cfg(foo = "bar")]"#); diff --git a/src/test/run-pass/abi-sysv64-arg-passing.rs b/src/test/run-pass/abi-sysv64-arg-passing.rs index 989155bdfd..23dd060318 100644 --- a/src/test/run-pass/abi-sysv64-arg-passing.rs +++ b/src/test/run-pass/abi-sysv64-arg-passing.rs @@ -98,7 +98,7 @@ mod tests { #[derive(Copy, Clone)] pub struct Floats { a: f64, b: u8, c: f64 } - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern "sysv64" { pub fn rust_int8_to_int32(_: i8) -> i32; pub fn rust_dbg_extern_identity_u8(v: u8) -> u8; diff --git a/src/test/run-pass/anon-extern-mod.rs b/src/test/run-pass/anon-extern-mod.rs index e96b0cc144..208b4df3c3 100644 --- a/src/test/run-pass/anon-extern-mod.rs +++ b/src/test/run-pass/anon-extern-mod.rs @@ -14,7 +14,7 @@ extern crate libc; -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_get_test_int() -> libc::intptr_t; } diff --git a/src/test/compile-fail/issue-16819.rs b/src/test/run-pass/associated-const-const-eval.rs similarity index 65% rename from src/test/compile-fail/issue-16819.rs rename to src/test/run-pass/associated-const-const-eval.rs index 4301b47f2e..0b230df414 100644 --- a/src/test/compile-fail/issue-16819.rs +++ b/src/test/run-pass/associated-const-const-eval.rs @@ -8,19 +8,23 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -struct TS ( //~ ERROR empty tuple structs and enum variants are unstable - #[cfg(untrue)] - i32, -); +#![feature(associated_consts)] -enum E { - TV ( //~ ERROR empty tuple structs and enum variants are unstable - #[cfg(untrue)] - i32, - ) +trait Foo { + const NUM: usize; } +impl Foo for i32 { + const NUM: usize = 1; +} + +const FOO: usize = <i32 as Foo>::NUM; + fn main() { - let s = TS; - let tv = E::TV; + assert_eq!(1, FOO); + + match 1 { + <i32 as Foo>::NUM => {}, + _ => assert!(false) + } } diff --git a/src/test/run-pass/associated-const-cross-crate-const-eval.rs b/src/test/run-pass/associated-const-cross-crate-const-eval.rs new file mode 100644 index 0000000000..7d31bb5b1a --- /dev/null +++ b/src/test/run-pass/associated-const-cross-crate-const-eval.rs @@ -0,0 +1,38 @@ +// Copyright 2015 The Rust Project Developers.
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:associated-const-cc-lib.rs + +#![feature(associated_consts)] + +extern crate associated_const_cc_lib as foolib; + +pub struct LocalFoo; + +impl foolib::Foo for LocalFoo { + const BAR: usize = 1; +} + +const FOO_1: usize = ::BAR; +const FOO_2: usize = ::BAR; +const FOO_3: usize = foolib::InherentBar::BAR; + +fn main() { + assert_eq!(0, FOO_1); + assert_eq!(1, FOO_2); + assert_eq!(3, FOO_3); + + match 0 { + ::BAR => {}, + ::BAR => assert!(false), + foolib::InherentBar::BAR => assert!(false), + _ => assert!(false) + } +} diff --git a/src/test/run-pass/auxiliary/anon-extern-mod-cross-crate-1.rs b/src/test/run-pass/auxiliary/anon-extern-mod-cross-crate-1.rs index 197fb9a6d0..741ce351da 100644 --- a/src/test/run-pass/auxiliary/anon-extern-mod-cross-crate-1.rs +++ b/src/test/run-pass/auxiliary/anon-extern-mod-cross-crate-1.rs @@ -13,7 +13,7 @@ extern crate libc; -#[link(name="rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_get_test_int() -> libc::intptr_t; } diff --git a/src/test/run-pass/auxiliary/empty-struct.rs b/src/test/run-pass/auxiliary/empty-struct.rs index b599d7bee7..734e57a774 100644 --- a/src/test/run-pass/auxiliary/empty-struct.rs +++ b/src/test/run-pass/auxiliary/empty-struct.rs @@ -8,8 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(relaxed_adts)] - pub struct XEmpty1 {} pub struct XEmpty2; pub struct XEmpty7(); diff --git a/src/test/run-pass/auxiliary/extern-crosscrate-source.rs b/src/test/run-pass/auxiliary/extern-crosscrate-source.rs index fc2e328f68..150dffeea8 100644 --- a/src/test/run-pass/auxiliary/extern-crosscrate-source.rs +++ b/src/test/run-pass/auxiliary/extern-crosscrate-source.rs @@ -17,7 +17,7 @@ extern crate libc; pub mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t, data: libc::uintptr_t) diff --git a/src/test/run-pass/auxiliary/foreign_lib.rs b/src/test/run-pass/auxiliary/foreign_lib.rs index 460d0a0088..cef36274c6 100644 --- a/src/test/run-pass/auxiliary/foreign_lib.rs +++ b/src/test/run-pass/auxiliary/foreign_lib.rs @@ -15,7 +15,7 @@ pub mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_get_test_int() -> libc::intptr_t; } diff --git a/src/test/run-pass/auxiliary/issue-25185-1.rs b/src/test/run-pass/auxiliary/issue-25185-1.rs index 1ec29501b7..b9da39cbbc 100644 --- a/src/test/run-pass/auxiliary/issue-25185-1.rs +++ b/src/test/run-pass/auxiliary/issue-25185-1.rs @@ -10,12 +10,9 @@ // no-prefer-dynamic -#![feature(linked_from)] - #![crate_type = "rlib"] #[link(name = "rust_test_helpers", kind = "static")] -#[linked_from = "rust_test_helpers"] extern { pub fn rust_dbg_extern_identity_u32(u: u32) -> u32; } diff --git a/src/test/run-pass/auxiliary/issue13507.rs b/src/test/run-pass/auxiliary/issue13507.rs index ca1027b11a..ba50aed42c 100644 --- a/src/test/run-pass/auxiliary/issue13507.rs +++ b/src/test/run-pass/auxiliary/issue13507.rs @@ -70,7 +70,7 @@ pub mod testtypes { // Tests 
TyFnPtr pub type FooFnPtr = fn(u8) -> bool; - // Tests TyTrait + // Tests TyDynamic pub trait FooTrait { fn foo_method(&self) -> usize; } diff --git a/src/test/run-pass/auxiliary/issue_38190.rs b/src/test/run-pass/auxiliary/issue_38190.rs new file mode 100644 index 0000000000..7fc4390d6d --- /dev/null +++ b/src/test/run-pass/auxiliary/issue_38190.rs @@ -0,0 +1,12 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#[macro_export] +macro_rules! m { ([$i:item]) => {} } diff --git a/src/test/run-pass/auxiliary/issue_38226_aux.rs b/src/test/run-pass/auxiliary/issue_38226_aux.rs new file mode 100644 index 0000000000..d48a973368 --- /dev/null +++ b/src/test/run-pass/auxiliary/issue_38226_aux.rs @@ -0,0 +1,33 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![crate_type="rlib"] + +#[inline(never)] +pub fn foo<T>() { + let _: Box<SomeTrait> = Box::new(SomeTraitImpl); +} + +pub fn bar() { + SomeTraitImpl.bar(); +} + +mod submod { + pub trait SomeTrait { + fn bar(&self) { + panic!("NO") + } + } +} + +use self::submod::SomeTrait; + +pub struct SomeTraitImpl; +impl SomeTrait for SomeTraitImpl {} diff --git a/src/test/run-pass/auxiliary/issue_38715.rs b/src/test/run-pass/auxiliary/issue_38715.rs new file mode 100644 index 0000000000..cad3996ead --- /dev/null +++ b/src/test/run-pass/auxiliary/issue_38715.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#[macro_export] +macro_rules! foo { ($i:ident) => {} } + +#[macro_export] +macro_rules! foo { () => {} } diff --git a/src/test/run-pass/auxiliary/link-cfg-works-transitive-dylib.rs b/src/test/run-pass/auxiliary/link-cfg-works-transitive-dylib.rs new file mode 100644 index 0000000000..d41fd490f5 --- /dev/null +++ b/src/test/run-pass/auxiliary/link-cfg-works-transitive-dylib.rs @@ -0,0 +1,14 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(link_cfg)] + +#[link(name = "foo", cfg(foo))] +extern {} diff --git a/src/test/run-pass/auxiliary/link-cfg-works-transitive-rlib.rs b/src/test/run-pass/auxiliary/link-cfg-works-transitive-rlib.rs new file mode 100644 index 0000000000..9f096c351f --- /dev/null +++ b/src/test/run-pass/auxiliary/link-cfg-works-transitive-rlib.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers.
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic + +#![feature(link_cfg)] +#![crate_type = "rlib"] + +#[link(name = "foo", cfg(foo))] +extern {} diff --git a/src/test/run-pass/auxiliary/two_macros.rs b/src/test/run-pass/auxiliary/two_macros.rs index 060960f0db..0da6ba1369 100644 --- a/src/test/run-pass/auxiliary/two_macros.rs +++ b/src/test/run-pass/auxiliary/two_macros.rs @@ -9,7 +9,7 @@ // except according to those terms. #[macro_export] -macro_rules! macro_one { () => ("one") } +macro_rules! macro_one { ($($t:tt)*) => ($($t)*) } #[macro_export] -macro_rules! macro_two { () => ("two") } +macro_rules! macro_two { ($($t:tt)*) => ($($t)*) } diff --git a/src/test/run-pass/backtrace.rs b/src/test/run-pass/backtrace.rs index c438c17f51..75c665b04a 100644 --- a/src/test/run-pass/backtrace.rs +++ b/src/test/run-pass/backtrace.rs @@ -10,6 +10,7 @@ // ignore-android FIXME #17520 // ignore-emscripten spawning processes is not supported +// ignore-openbsd no support for libbacktrace without filename // compile-flags:-g use std::env; diff --git a/src/test/run-pass/c-stack-as-value.rs b/src/test/run-pass/c-stack-as-value.rs index b678f149fa..5319693405 100644 --- a/src/test/run-pass/c-stack-as-value.rs +++ b/src/test/run-pass/c-stack-as-value.rs @@ -15,7 +15,7 @@ mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_get_test_int() -> libc::intptr_t; } diff --git a/src/test/run-pass/cabi-int-widening.rs b/src/test/run-pass/cabi-int-widening.rs index c7a2275933..bf94dd1788 100644 --- a/src/test/run-pass/cabi-int-widening.rs +++ b/src/test/run-pass/cabi-int-widening.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_int8_to_int32(_: i8) -> i32; } diff --git a/src/test/run-pass/closure-immediate.rs b/src/test/run-pass/closure-immediate.rs new file mode 100644 index 0000000000..e566c10583 --- /dev/null +++ b/src/test/run-pass/closure-immediate.rs @@ -0,0 +1,22 @@ +// Copyright 2012 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +// After the work to reoptimize structs, it became possible for immediate logic to fail. +// This test verifies that it actually works. + +fn main() { + let c = |a: u8, b: u16, c: u8| { + assert_eq!(a, 1); + assert_eq!(b, 2); + assert_eq!(c, 3); + }; + c(1, 2, 3); +} diff --git a/src/test/run-pass/crt-static-off-works.rs b/src/test/run-pass/crt-static-off-works.rs new file mode 100644 index 0000000000..c94c877c12 --- /dev/null +++ b/src/test/run-pass/crt-static-off-works.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. 
+// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags:-C target-feature=-crt-static -Z unstable-options +// ignore-musl - requires changing the linker which is hard + +#![feature(cfg_target_feature)] + +#[cfg(not(target_feature = "crt-static"))] +fn main() {} diff --git a/src/test/run-pass/crt-static-on-works.rs b/src/test/run-pass/crt-static-on-works.rs new file mode 100644 index 0000000000..ae8e5f6297 --- /dev/null +++ b/src/test/run-pass/crt-static-on-works.rs @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags:-C target-feature=+crt-static -Z unstable-options + +#![feature(cfg_target_feature)] + +#[cfg(target_feature = "crt-static")] +fn main() {} diff --git a/src/test/run-pass/empty-struct-braces.rs b/src/test/run-pass/empty-struct-braces.rs index 48966f24a2..7c161ba8dd 100644 --- a/src/test/run-pass/empty-struct-braces.rs +++ b/src/test/run-pass/empty-struct-braces.rs @@ -13,8 +13,6 @@ // aux-build:empty-struct.rs -#![feature(relaxed_adts)] - extern crate empty_struct; use empty_struct::*; diff --git a/src/test/run-pass/enum-size-variance.rs b/src/test/run-pass/enum-size-variance.rs index 26deb0ed72..a3e95a1534 100644 --- a/src/test/run-pass/enum-size-variance.rs +++ b/src/test/run-pass/enum-size-variance.rs @@ -11,6 +11,9 @@ #![warn(variant_size_differences)] #![allow(dead_code)] +// Note that the following test works because all fields of the enum variants are of the same size. +// If this test is modified and the reordering logic in librustc/ty/layout.rs kicks in, it fails. 
+ enum Enum1 { } enum Enum2 { A, B, C } diff --git a/src/test/run-pass/extern-call-deep.rs b/src/test/run-pass/extern-call-deep.rs index 2138b12fb1..6a9da767ad 100644 --- a/src/test/run-pass/extern-call-deep.rs +++ b/src/test/run-pass/extern-call-deep.rs @@ -15,7 +15,7 @@ extern crate libc; mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t, data: libc::uintptr_t) diff --git a/src/test/run-pass/extern-call-deep2.rs b/src/test/run-pass/extern-call-deep2.rs index 1a0191b705..3bdc8c1886 100644 --- a/src/test/run-pass/extern-call-deep2.rs +++ b/src/test/run-pass/extern-call-deep2.rs @@ -18,7 +18,7 @@ use std::thread; mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t, data: libc::uintptr_t) diff --git a/src/test/run-pass/extern-call-indirect.rs b/src/test/run-pass/extern-call-indirect.rs index 4f1abbeb5c..256eedccb8 100644 --- a/src/test/run-pass/extern-call-indirect.rs +++ b/src/test/run-pass/extern-call-indirect.rs @@ -15,7 +15,7 @@ extern crate libc; mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t, data: libc::uintptr_t) diff --git a/src/test/run-pass/extern-call-scrub.rs b/src/test/run-pass/extern-call-scrub.rs index 1beb6d3519..a27474dcf8 100644 --- a/src/test/run-pass/extern-call-scrub.rs +++ b/src/test/run-pass/extern-call-scrub.rs @@ -22,7 +22,7 @@ use std::thread; mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t, data: libc::uintptr_t) diff --git a/src/test/run-pass/extern-pass-TwoU16s.rs b/src/test/run-pass/extern-pass-TwoU16s.rs index 9d304ea9e1..afdd53db77 100644 --- a/src/test/run-pass/extern-pass-TwoU16s.rs +++ b/src/test/run-pass/extern-pass-TwoU16s.rs @@ -16,7 +16,7 @@ pub struct TwoU16s { one: u16, two: u16 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_TwoU16s(v: TwoU16s) -> TwoU16s; } diff --git a/src/test/run-pass/extern-pass-TwoU32s.rs b/src/test/run-pass/extern-pass-TwoU32s.rs index 8dae0473fd..035084ae9b 100644 --- a/src/test/run-pass/extern-pass-TwoU32s.rs +++ b/src/test/run-pass/extern-pass-TwoU32s.rs @@ -16,7 +16,7 @@ pub struct TwoU32s { one: u32, two: u32 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_TwoU32s(v: TwoU32s) -> TwoU32s; } diff --git a/src/test/run-pass/extern-pass-TwoU64s.rs b/src/test/run-pass/extern-pass-TwoU64s.rs index 14aeea3465..cb1a4d2782 100644 --- a/src/test/run-pass/extern-pass-TwoU64s.rs +++ b/src/test/run-pass/extern-pass-TwoU64s.rs @@ -16,7 +16,7 @@ pub struct TwoU64s { one: u64, two: u64 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_TwoU64s(v: TwoU64s) -> TwoU64s; } diff --git a/src/test/run-pass/extern-pass-TwoU8s.rs b/src/test/run-pass/extern-pass-TwoU8s.rs index 75a109e442..657348c99a 100644 --- a/src/test/run-pass/extern-pass-TwoU8s.rs +++ 
b/src/test/run-pass/extern-pass-TwoU8s.rs @@ -16,7 +16,7 @@ pub struct TwoU8s { one: u8, two: u8 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_TwoU8s(v: TwoU8s) -> TwoU8s; } diff --git a/src/test/run-pass/extern-pass-char.rs b/src/test/run-pass/extern-pass-char.rs index e75aa2d72c..9042aed663 100644 --- a/src/test/run-pass/extern-pass-char.rs +++ b/src/test/run-pass/extern-pass-char.rs @@ -11,7 +11,7 @@ // Test a function that takes/returns a u8. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_u8(v: u8) -> u8; } diff --git a/src/test/run-pass/extern-pass-double.rs b/src/test/run-pass/extern-pass-double.rs index e92f9b6a1a..38d29180fb 100644 --- a/src/test/run-pass/extern-pass-double.rs +++ b/src/test/run-pass/extern-pass-double.rs @@ -9,7 +9,7 @@ // except according to those terms. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_double(v: f64) -> f64; } diff --git a/src/test/run-pass/extern-pass-empty.rs b/src/test/run-pass/extern-pass-empty.rs index 801a3c40ab..2606c92868 100644 --- a/src/test/run-pass/extern-pass-empty.rs +++ b/src/test/run-pass/extern-pass-empty.rs @@ -14,11 +14,13 @@ // ignore-msvc // ignore-emscripten +#[repr(C)] struct TwoU8s { one: u8, two: u8, } +#[repr(C)] struct ManyInts { arg1: i8, arg2: i16, @@ -28,9 +30,10 @@ struct ManyInts { arg6: TwoU8s, } +#[repr(C)] struct Empty; -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_dbg_extern_empty_struct(v1: ManyInts, e: Empty, v2: ManyInts); } diff --git a/src/test/run-pass/extern-pass-u32.rs b/src/test/run-pass/extern-pass-u32.rs index 0753ea1bcf..ed254ac46f 100644 --- a/src/test/run-pass/extern-pass-u32.rs +++ b/src/test/run-pass/extern-pass-u32.rs @@ -11,7 +11,7 @@ // Test a function that takes/returns a u32. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_u32(v: u32) -> u32; } diff --git a/src/test/run-pass/extern-pass-u64.rs b/src/test/run-pass/extern-pass-u64.rs index 89faa3bb47..6fc630e6d7 100644 --- a/src/test/run-pass/extern-pass-u64.rs +++ b/src/test/run-pass/extern-pass-u64.rs @@ -11,7 +11,7 @@ // Test a call to a function that takes/returns a u64. 
-#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_identity_u64(v: u64) -> u64; } diff --git a/src/test/run-pass/extern-return-TwoU16s.rs b/src/test/run-pass/extern-return-TwoU16s.rs index 3c58646e0c..ec1c6130e7 100644 --- a/src/test/run-pass/extern-return-TwoU16s.rs +++ b/src/test/run-pass/extern-return-TwoU16s.rs @@ -13,7 +13,7 @@ pub struct TwoU16s { one: u16, two: u16 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_return_TwoU16s() -> TwoU16s; } diff --git a/src/test/run-pass/extern-return-TwoU32s.rs b/src/test/run-pass/extern-return-TwoU32s.rs index 0eb6be2d68..e829e99305 100644 --- a/src/test/run-pass/extern-return-TwoU32s.rs +++ b/src/test/run-pass/extern-return-TwoU32s.rs @@ -13,7 +13,7 @@ pub struct TwoU32s { one: u32, two: u32 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_return_TwoU32s() -> TwoU32s; } diff --git a/src/test/run-pass/extern-return-TwoU64s.rs b/src/test/run-pass/extern-return-TwoU64s.rs index d5eab86351..ef7325b33f 100644 --- a/src/test/run-pass/extern-return-TwoU64s.rs +++ b/src/test/run-pass/extern-return-TwoU64s.rs @@ -13,7 +13,7 @@ pub struct TwoU64s { one: u64, two: u64 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_return_TwoU64s() -> TwoU64s; } diff --git a/src/test/run-pass/extern-return-TwoU8s.rs b/src/test/run-pass/extern-return-TwoU8s.rs index d8f476bcd0..46f2e81a55 100644 --- a/src/test/run-pass/extern-return-TwoU8s.rs +++ b/src/test/run-pass/extern-return-TwoU8s.rs @@ -13,7 +13,7 @@ pub struct TwoU8s { one: u8, two: u8 } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_extern_return_TwoU8s() -> TwoU8s; } diff --git a/src/test/run-pass/foreign-call-no-runtime.rs b/src/test/run-pass/foreign-call-no-runtime.rs index ca11889979..697e9074c4 100644 --- a/src/test/run-pass/foreign-call-no-runtime.rs +++ b/src/test/run-pass/foreign-call-no-runtime.rs @@ -18,7 +18,7 @@ extern crate libc; use std::mem; use std::thread; -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t), data: libc::uintptr_t) -> libc::uintptr_t; diff --git a/src/test/run-pass/foreign-fn-with-byval.rs b/src/test/run-pass/foreign-fn-with-byval.rs index d3d872620c..2d4542540e 100644 --- a/src/test/run-pass/foreign-fn-with-byval.rs +++ b/src/test/run-pass/foreign-fn-with-byval.rs @@ -16,7 +16,7 @@ pub struct S { z: u64, } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { pub fn get_x(x: S) -> u64; pub fn get_y(x: S) -> u64; diff --git a/src/test/run-pass/foreign-no-abi.rs b/src/test/run-pass/foreign-no-abi.rs index a9b3f60566..979e57eba9 100644 --- a/src/test/run-pass/foreign-no-abi.rs +++ b/src/test/run-pass/foreign-no-abi.rs @@ -17,7 +17,7 @@ mod rustrt { extern crate libc; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_get_test_int() -> libc::intptr_t; } diff --git a/src/test/run-pass/impl-trait/auxiliary/xcrate.rs b/src/test/run-pass/impl-trait/auxiliary/xcrate.rs new file mode 100644 index 0000000000..be353f6d56 --- /dev/null +++ b/src/test/run-pass/impl-trait/auxiliary/xcrate.rs @@ -0,0 +1,15 @@ +// Copyright 2016 
The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(conservative_impl_trait)] + +pub fn fourway_add(a: i32) -> impl Fn(i32) -> impl Fn(i32) -> impl Fn(i32) -> i32 { + move |b| move |c| move |d| a + b + c + d +} diff --git a/src/test/run-pass/impl-trait/xcrate.rs b/src/test/run-pass/impl-trait/xcrate.rs new file mode 100644 index 0000000000..fe3ed7b346 --- /dev/null +++ b/src/test/run-pass/impl-trait/xcrate.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:xcrate.rs + +extern crate xcrate; + +fn main() { + assert_eq!(xcrate::fourway_add(1)(2)(3)(4), 10); +} diff --git a/src/test/run-pass/imports.rs b/src/test/run-pass/imports.rs index 195b99c978..f845a2ee57 100644 --- a/src/test/run-pass/imports.rs +++ b/src/test/run-pass/imports.rs @@ -8,7 +8,6 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#![feature(item_like_imports)] #![allow(unused)] // Like other items, private imports can be imported and used non-lexically in paths. diff --git a/src/test/run-pass/issue-23699.rs b/src/test/run-pass/issue-23699.rs new file mode 100644 index 0000000000..1909be4df7 --- /dev/null +++ b/src/test/run-pass/issue-23699.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn gimme_a_raw_pointer(_: *const T) { } + +fn test(t: T) { } + +fn main() { + // Clearly `pointer` must be of type `*const ()`. + let pointer = &() as *const _; + gimme_a_raw_pointer(pointer); + + let t = test as fn (i32); + t(0i32); +} + diff --git a/src/test/run-pass/issue-28676.rs b/src/test/run-pass/issue-28676.rs index b8d43c392d..8f83e51f0a 100644 --- a/src/test/run-pass/issue-28676.rs +++ b/src/test/run-pass/issue-28676.rs @@ -15,7 +15,7 @@ pub struct Quad { a: u64, b: u64, c: u64, d: u64 } mod rustrt { use super::Quad; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn get_c_many_params(_: *const (), _: *const (), _: *const (), _: *const (), f: Quad) -> u64; diff --git a/src/test/run-pass/issue-37598.rs b/src/test/run-pass/issue-37598.rs new file mode 100644 index 0000000000..d32d2fc295 --- /dev/null +++ b/src/test/run-pass/issue-37598.rs @@ -0,0 +1,21 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![feature(advanced_slice_patterns, slice_patterns)] + +fn check(list: &[u8]) { + match list { + &[] => {}, + &[_u1, _u2, ref _next..] => {}, + &[_u1] => {}, + } +} + +fn main() {} diff --git a/src/test/run-pass/issue-37655.rs b/src/test/run-pass/issue-37655.rs new file mode 100644 index 0000000000..d229bcacc5 --- /dev/null +++ b/src/test/run-pass/issue-37655.rs @@ -0,0 +1,46 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Regression test for #37655. The problem was a false edge created by +// coercion that wound up requiring that `'a` (in `split()`) outlive +// `'b`, which shouldn't be necessary. + +#![allow(warnings)] + +trait SliceExt { + type Item; + + fn get_me(&self, index: I) -> &I::Output + where I: SliceIndex; +} + +impl SliceExt for [T] { + type Item = T; + + fn get_me(&self, index: I) -> &I::Output + where I: SliceIndex + { + panic!() + } +} + +pub trait SliceIndex { + type Output: ?Sized; +} + +impl SliceIndex for usize { + type Output = T; +} + +fn foo<'a, 'b>(split: &'b [&'a [u8]]) -> &'a [u8] { + split.get_me(0) +} + +fn main() { } diff --git a/src/test/run-pass/issue-37733.rs b/src/test/run-pass/issue-37733.rs new file mode 100644 index 0000000000..358b93254d --- /dev/null +++ b/src/test/run-pass/issue-37733.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +type A = for<> fn(); + +type B = for<'a,> fn(); + +pub fn main() {} diff --git a/src/test/run-pass/issue-37991.rs b/src/test/run-pass/issue-37991.rs new file mode 100644 index 0000000000..9bdde02d00 --- /dev/null +++ b/src/test/run-pass/issue-37991.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(const_fn)] + +const fn foo() -> i64 { + 3 +} + +const fn bar(x: i64) -> i64 { + x*2 +} + +fn main() { + let val = &(foo() % 2); + assert_eq!(*val, 1); + + let val2 = &(bar(1+1) % 3); + assert_eq!(*val2, 1); +} diff --git a/src/test/run-pass/issue-38190.rs b/src/test/run-pass/issue-38190.rs new file mode 100644 index 0000000000..ed9bf9e809 --- /dev/null +++ b/src/test/run-pass/issue-38190.rs @@ -0,0 +1,21 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// aux-build:issue_38190.rs +// ignore-pretty issue #37195 + +#[macro_use] +extern crate issue_38190; + +mod auxiliary { + m!([mod issue_38190;]); +} + +fn main() {} diff --git a/src/test/run-pass/issue-38226.rs b/src/test/run-pass/issue-38226.rs new file mode 100644 index 0000000000..33604212af --- /dev/null +++ b/src/test/run-pass/issue-38226.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// This test makes sure that we don't run into a linker error because of the +// middle::reachable pass missing trait methods with default impls. + +// aux-build:issue_38226_aux.rs + +// Need -Cno-prepopulate-passes to really disable inlining, otherwise the faulty +// code gets optimized out: +// compile-flags: -Cno-prepopulate-passes + +extern crate issue_38226_aux; + +fn main() { + issue_38226_aux::foo::<()>(); +} diff --git a/src/test/run-pass/issue-38437.rs b/src/test/run-pass/issue-38437.rs new file mode 100644 index 0000000000..a6e7df1c01 --- /dev/null +++ b/src/test/run-pass/issue-38437.rs @@ -0,0 +1,54 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Check that drop elaboration clears the "master" discriminant +// drop flag even if it protects no fields. + +struct Good(usize); +impl Drop for Good { + #[inline(never)] + fn drop(&mut self) { + println!("dropping Good({})", self.0); + } +} + +struct Void; +impl Drop for Void { + #[inline(never)] + fn drop(&mut self) { + panic!("Suddenly, a Void appears."); + } +} + +enum E { + Never(Void), + Fine(Good) +} + +fn main() { + let mut go = true; + + loop { + let next; + match go { + true => next = E::Fine(Good(123)), + false => return, + } + + match next { + E::Never(_) => return, + E::Fine(_good) => go = false, + } + + // `next` is dropped and StorageDead'd here. We must reset the + // discriminant's drop flag to avoid random variants being + // dropped. + } +} diff --git a/src/test/run-pass/issue-38715.rs b/src/test/run-pass/issue-38715.rs new file mode 100644 index 0000000000..054785e62b --- /dev/null +++ b/src/test/run-pass/issue-38715.rs @@ -0,0 +1,20 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:issue_38715.rs + +// Test that `#[macro_export] macro_rules!` shadow earlier `#[macro_export] macro_rules!` + +#[macro_use] +extern crate issue_38715; + +fn main() { + foo!(); +} diff --git a/src/test/run-pass/issue-38727.rs b/src/test/run-pass/issue-38727.rs new file mode 100644 index 0000000000..e60b6a99f9 --- /dev/null +++ b/src/test/run-pass/issue-38727.rs @@ -0,0 +1,21 @@ +// Copyright 2017 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#[repr(u64)] +enum A { + A = 0u64, + B = !0u64, +} + +fn cmp() -> A { + A::B +} + +fn main() {} diff --git a/src/test/run-pass/issue-8521.rs b/src/test/run-pass/issue-8521.rs new file mode 100644 index 0000000000..ce362c4bcd --- /dev/null +++ b/src/test/run-pass/issue-8521.rs @@ -0,0 +1,34 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +trait Foo1 {} + +trait A {} + +macro_rules! foo1(($t:path) => { + impl Foo1 for T {} +}); + +foo1!(A); + +trait Foo2 {} + +trait B {} + +#[allow(unused)] +struct C {} + +macro_rules! foo2(($t:path) => { + impl Foo2 for T {} +}); + +foo2!(B); + +fn main() {} diff --git a/src/test/run-pass/link-cfg-works.rs b/src/test/run-pass/link-cfg-works.rs new file mode 100644 index 0000000000..7db948c7da --- /dev/null +++ b/src/test/run-pass/link-cfg-works.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:link-cfg-works-transitive-rlib.rs +// aux-build:link-cfg-works-transitive-dylib.rs + +#![feature(link_cfg)] + +extern crate link_cfg_works_transitive_rlib; +extern crate link_cfg_works_transitive_dylib; + +#[link(name = "foo", cfg(foo))] +extern {} + +fn main() {} + diff --git a/src/test/run-pass/loop-break-value.rs b/src/test/run-pass/loop-break-value.rs new file mode 100644 index 0000000000..6a5e051c0c --- /dev/null +++ b/src/test/run-pass/loop-break-value.rs @@ -0,0 +1,133 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(loop_break_value)] +#![feature(never_type)] + +#[allow(unused)] +fn never_returns() { + loop { + break loop {}; + } +} + +pub fn main() { + let value = 'outer: loop { + if 1 == 1 { + break 13; + } else { + let _never: ! 
= loop { + break loop { + break 'outer panic!(); + } + }; + } + }; + assert_eq!(value, 13); + + let x = [1, 3u32, 5]; + let y = [17]; + let z = []; + let coerced: &[_] = loop { + match 2 { + 1 => break &x, + 2 => break &y, + 3 => break &z, + _ => (), + } + }; + assert_eq!(coerced, &[17u32]); + + let trait_unified = loop { + break if true { + break Default::default() + } else { + break [13, 14] + }; + }; + assert_eq!(trait_unified, [0, 0]); + + let trait_unified_2 = loop { + if false { + break [String::from("Hello")] + } else { + break Default::default() + }; + }; + assert_eq!(trait_unified_2, [""]); + + let trait_unified_3 = loop { + break if false { + break [String::from("Hello")] + } else { + ["Yes".into()] + }; + }; + assert_eq!(trait_unified_3, ["Yes"]); + + let regular_break = loop { + if true { + break; + } else { + break break Default::default(); + } + }; + assert_eq!(regular_break, ()); + + let regular_break_2 = loop { + if true { + break Default::default(); + } else { + break; + } + }; + assert_eq!(regular_break_2, ()); + + let regular_break_3 = loop { + break if true { + Default::default() + } else { + break; + } + }; + assert_eq!(regular_break_3, ()); + + let regular_break_4 = loop { + break (); + break; + }; + assert_eq!(regular_break_4, ()); + + let regular_break_5 = loop { + break; + break (); + }; + assert_eq!(regular_break_5, ()); + + let nested_break_value = 'outer2: loop { + let _a: u32 = 'inner: loop { + if true { + break 'outer2 "hello"; + } else { + break 'inner 17; + } + }; + panic!(); + }; + assert_eq!(nested_break_value, "hello"); + + let break_from_while_cond = loop { + while break { + panic!(); + } + break 123; + }; + assert_eq!(break_from_while_cond, 123); +} diff --git a/src/test/run-pass/mir_trans_calls_variadic.rs b/src/test/run-pass/mir_trans_calls_variadic.rs index 4e06738da4..e4d528e80e 100644 --- a/src/test/run-pass/mir_trans_calls_variadic.rs +++ b/src/test/run-pass/mir_trans_calls_variadic.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_interesting_average(_: i64, ...) -> f64; } diff --git a/src/test/run-pass/multiple-reprs.rs b/src/test/run-pass/multiple-reprs.rs new file mode 100644 index 0000000000..c2fe943eed --- /dev/null +++ b/src/test/run-pass/multiple-reprs.rs @@ -0,0 +1,43 @@ +// Copyright 2012 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + + +use std::mem::size_of; + +// The two enums that follow are designed so that bugs trigger layout optimization. +// Specifically, if either of the following reprs used here is not detected by the compiler, +// then the sizes will be wrong. 
+ +#[repr(C, u8)] +enum E1 { + A(u8, u16, u8), + B(u8, u16, u8) +} + +#[repr(u8, C)] +enum E2 { + A(u8, u16, u8), + B(u8, u16, u8) +} + +// From pr 37429 + +#[repr(C,packed)] +pub struct p0f_api_query { + pub magic: u32, + pub addr_type: u8, + pub addr: [u8; 16], +} + +pub fn main() { + assert_eq!(size_of::(), 6); + assert_eq!(size_of::(), 6); + assert_eq!(size_of::(), 21); +} diff --git a/src/test/run-pass/no-stdio.rs b/src/test/run-pass/no-stdio.rs index ad4d56ec50..85c63e184f 100644 --- a/src/test/run-pass/no-stdio.rs +++ b/src/test/run-pass/no-stdio.rs @@ -9,6 +9,7 @@ // except according to those terms. // ignore-emscripten +// ignore-android #![feature(libc)] diff --git a/src/test/run-pass/nonzero-enum.rs b/src/test/run-pass/nonzero-enum.rs index 266506e04b..fc92c9df9f 100644 --- a/src/test/run-pass/nonzero-enum.rs +++ b/src/test/run-pass/nonzero-enum.rs @@ -26,8 +26,7 @@ fn main() { assert_eq!(size_of::(), 1); assert_eq!(size_of::>(), 1); assert_eq!(size_of::>(), 1); - assert_eq!(size_of::(), 4); - assert_eq!(size_of::>(), 4); + assert_eq!(size_of::>(), size_of::()); let enone = None::; let esome = Some(E::A); if let Some(..) = enone { diff --git a/src/test/run-pass/path-lookahead.rs b/src/test/run-pass/path-lookahead.rs new file mode 100644 index 0000000000..017259af19 --- /dev/null +++ b/src/test/run-pass/path-lookahead.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// Parser test for #37765 + +fn with_parens(arg: T) -> String { //~WARN dead_code + return (::to_string(&arg)); //~WARN unused_parens +} + +fn no_parens(arg: T) -> String { //~WARN dead_code + return ::to_string(&arg); +} + +fn main() { + +} diff --git a/src/test/run-pass/paths-in-macro-invocations.rs b/src/test/run-pass/paths-in-macro-invocations.rs new file mode 100644 index 0000000000..69f8906778 --- /dev/null +++ b/src/test/run-pass/paths-in-macro-invocations.rs @@ -0,0 +1,46 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:two_macros.rs + +#![feature(use_extern_macros)] + +extern crate two_macros; + +::two_macros::macro_one!(); +two_macros::macro_one!(); + +mod foo { pub use two_macros::macro_one as bar; } + +trait T { + foo::bar!(); + ::foo::bar!(); +} + +struct S { + x: foo::bar!(i32), + y: ::foo::bar!(i32), +} + +impl S { + foo::bar!(); + ::foo::bar!(); +} + +fn main() { + foo::bar!(); + ::foo::bar!(); + + let _ = foo::bar!(0); + let _ = ::foo::bar!(0); + + let foo::bar!(_) = 0; + let ::foo::bar!(_) = 0; +} diff --git a/src/test/run-pass/rfc1717/auxiliary/clibrary.rs b/src/test/run-pass/rfc1717/auxiliary/clibrary.rs new file mode 100644 index 0000000000..7438ba21bf --- /dev/null +++ b/src/test/run-pass/rfc1717/auxiliary/clibrary.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. 
+// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// no-prefer-dynamic +#![crate_type = "staticlib"] + +#[no_mangle] +pub extern "C" fn foo(x:i32) -> i32 { x } diff --git a/src/test/run-pass/rfc1717/library-override.rs b/src/test/run-pass/rfc1717/library-override.rs new file mode 100644 index 0000000000..d6ef96c5ad --- /dev/null +++ b/src/test/run-pass/rfc1717/library-override.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:clibrary.rs +// compile-flags: -lstatic=wronglibrary:clibrary + +#[link(name = "wronglibrary", kind = "dylib")] +extern "C" { + pub fn foo(x:i32) -> i32; +} + +fn main() { + unsafe { + foo(42); + } +} diff --git a/src/test/run-pass/segfault-no-out-of-stack.rs b/src/test/run-pass/segfault-no-out-of-stack.rs index df64d7140b..0f98cfe27f 100644 --- a/src/test/run-pass/segfault-no-out-of-stack.rs +++ b/src/test/run-pass/segfault-no-out-of-stack.rs @@ -17,7 +17,7 @@ extern crate libc; use std::process::{Command, ExitStatus}; use std::env; -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_get_null_ptr() -> *mut ::libc::c_char; } diff --git a/src/test/run-pass/specialization/specialization-translate-projections-with-lifetimes.rs b/src/test/run-pass/specialization/specialization-translate-projections-with-lifetimes.rs new file mode 100644 index 0000000000..9702f63241 --- /dev/null +++ b/src/test/run-pass/specialization/specialization-translate-projections-with-lifetimes.rs @@ -0,0 +1,41 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(specialization)] + +trait Iterator { + fn next(&self); +} + +trait WithAssoc { + type Item; +} + +impl<'a> WithAssoc for &'a () { + type Item = &'a u32; +} + +struct Cloned(I); + +impl<'a, I, T: 'a> Iterator for Cloned + where I: WithAssoc, T: Clone +{ + fn next(&self) {} +} + +impl<'a, I, T: 'a> Iterator for Cloned + where I: WithAssoc, T: Copy +{ + +} + +fn main() { + Cloned(&()).next(); +} diff --git a/src/test/run-pass/static-mut-foreign.rs b/src/test/run-pass/static-mut-foreign.rs index 4dcb82c4b4..24d58487f0 100644 --- a/src/test/run-pass/static-mut-foreign.rs +++ b/src/test/run-pass/static-mut-foreign.rs @@ -17,7 +17,7 @@ extern crate libc; -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { static mut rust_dbg_static_mut: libc::c_int; pub fn rust_dbg_static_mut_check_four(); diff --git a/src/test/run-pass/stdio-is-blocking.rs b/src/test/run-pass/stdio-is-blocking.rs new file mode 100644 index 0000000000..74170ca650 --- /dev/null +++ b/src/test/run-pass/stdio-is-blocking.rs @@ -0,0 +1,90 @@ +// Copyright 2017 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::env; +use std::io::prelude::*; +use std::process::Command; +use std::thread; + +const THREADS: usize = 20; +const WRITES: usize = 100; +const WRITE_SIZE: usize = 1024 * 32; + +fn main() { + let args = env::args().collect::>(); + if args.len() == 1 { + parent(); + } else { + child(); + } +} + +fn parent() { + let me = env::current_exe().unwrap(); + let mut cmd = Command::new(me); + cmd.arg("run-the-test"); + let output = cmd.output().unwrap(); + assert!(output.status.success()); + assert_eq!(output.stderr.len(), 0); + assert_eq!(output.stdout.len(), WRITES * THREADS * WRITE_SIZE); + for byte in output.stdout.iter() { + assert_eq!(*byte, b'a'); + } +} + +fn child() { + let threads = (0..THREADS).map(|_| { + thread::spawn(|| { + let buf = [b'a'; WRITE_SIZE]; + for _ in 0..WRITES { + write_all(&buf); + } + }) + }).collect::>(); + + for thread in threads { + thread.join().unwrap(); + } +} + +#[cfg(unix)] +fn write_all(buf: &[u8]) { + use std::fs::File; + use std::mem; + use std::os::unix::prelude::*; + + let mut file = unsafe { File::from_raw_fd(1) }; + let res = file.write_all(buf); + mem::forget(file); + res.unwrap(); +} + +#[cfg(windows)] +fn write_all(buf: &[u8]) { + use std::fs::File; + use std::mem; + use std::os::windows::raw::*; + use std::os::windows::prelude::*; + + const STD_OUTPUT_HANDLE: u32 = (-11i32) as u32; + + extern "system" { + fn GetStdHandle(handle: u32) -> HANDLE; + } + + let mut file = unsafe { + let handle = GetStdHandle(STD_OUTPUT_HANDLE); + assert!(!handle.is_null()); + File::from_raw_handle(handle) + }; + let res = file.write_all(buf); + mem::forget(file); + res.unwrap(); +} diff --git a/src/test/run-pass/struct-return.rs b/src/test/run-pass/struct-return.rs index 6f23263790..ed618cea98 100644 --- a/src/test/run-pass/struct-return.rs +++ b/src/test/run-pass/struct-return.rs @@ -9,16 +9,18 @@ // except according to those terms. // +#[repr(C)] #[derive(Copy, Clone)] pub struct Quad { a: u64, b: u64, c: u64, d: u64 } +#[repr(C)] #[derive(Copy, Clone)] pub struct Floats { a: f64, b: u8, c: f64 } mod rustrt { use super::{Floats, Quad}; - #[link(name = "rust_test_helpers")] + #[link(name = "rust_test_helpers", kind = "static")] extern { pub fn rust_dbg_abi_1(q: Quad) -> Quad; pub fn rust_dbg_abi_2(f: Floats) -> Floats; diff --git a/src/test/run-pass/test-should-panic-attr.rs b/src/test/run-pass/test-should-panic-attr.rs new file mode 100644 index 0000000000..2d068872a4 --- /dev/null +++ b/src/test/run-pass/test-should-panic-attr.rs @@ -0,0 +1,46 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +// compile-flags: --test + +#[test] +#[should_panic = "foo"] +//~^ WARN: attribute must be of the form: +fn test1() { + panic!(); +} + +#[test] +#[should_panic(expected)] +//~^ WARN: argument must be of the form: +fn test2() { + panic!(); +} + +#[test] +#[should_panic(expect)] +//~^ WARN: argument must be of the form: +fn test3() { + panic!(); +} + +#[test] +#[should_panic(expected(foo, bar))] +//~^ WARN: argument must be of the form: +fn test4() { + panic!(); +} + +#[test] +#[should_panic(expected = "foo", bar)] +//~^ WARN: argument must be of the form: +fn test5() { + panic!(); +} diff --git a/src/test/run-pass/type-sizes.rs b/src/test/run-pass/type-sizes.rs index 86159ce340..bbb01eaaf4 100644 --- a/src/test/run-pass/type-sizes.rs +++ b/src/test/run-pass/type-sizes.rs @@ -26,6 +26,7 @@ enum e2 { a(u32), b } +#[repr(C, u8)] enum e3 { a([u16; 0], u8), b } diff --git a/src/test/run-pass/union/union-c-interop.rs b/src/test/run-pass/union/union-c-interop.rs index bea4d5f923..13dfd41461 100644 --- a/src/test/run-pass/union/union-c-interop.rs +++ b/src/test/run-pass/union/union-c-interop.rs @@ -25,7 +25,7 @@ union LARGE_INTEGER { QuadPart: u64, } -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern "C" { fn increment_all_parts(_: LARGE_INTEGER) -> LARGE_INTEGER; } diff --git a/src/test/run-pass/variadic-ffi.rs b/src/test/run-pass/variadic-ffi.rs index 0131563d36..ec6261febc 100644 --- a/src/test/run-pass/variadic-ffi.rs +++ b/src/test/run-pass/variadic-ffi.rs @@ -8,7 +8,7 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. -#[link(name = "rust_test_helpers")] +#[link(name = "rust_test_helpers", kind = "static")] extern { fn rust_interesting_average(_: u64, ...) -> f64; } diff --git a/src/test/run-pass/vector-sort-panic-safe.rs b/src/test/run-pass/vector-sort-panic-safe.rs index 911bfc7454..87f1968918 100644 --- a/src/test/run-pass/vector-sort-panic-safe.rs +++ b/src/test/run-pass/vector-sort-panic-safe.rs @@ -17,86 +17,111 @@ use std::sync::atomic::{AtomicUsize, Ordering}; use std::__rand::{thread_rng, Rng}; use std::thread; -const REPEATS: usize = 5; -const MAX_LEN: usize = 32; -static drop_counts: [AtomicUsize; MAX_LEN] = - // FIXME #5244: AtomicUsize is not Copy. - [ - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), - AtomicUsize::new(0), AtomicUsize::new(0), - ]; +const MAX_LEN: usize = 80; -static creation_count: AtomicUsize = AtomicUsize::new(0); +static DROP_COUNTS: [AtomicUsize; MAX_LEN] = [ + // FIXME #5244: AtomicUsize is not Copy. 
+ AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), + AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), AtomicUsize::new(0), +]; #[derive(Clone, PartialEq, PartialOrd, Eq, Ord)] -struct DropCounter { x: u32, creation_id: usize } +struct DropCounter { + x: u32, + id: usize, +} impl Drop for DropCounter { fn drop(&mut self) { - drop_counts[self.creation_id].fetch_add(1, Ordering::Relaxed); + DROP_COUNTS[self.id].fetch_add(1, Ordering::Relaxed); } } -pub fn main() { - // len can't go above 64. - for len in 2..MAX_LEN { - for _ in 0..REPEATS { - // reset the count for these new DropCounters, so their - // IDs start from 0. - creation_count.store(0, Ordering::Relaxed); +fn test(input: &[DropCounter]) { + let len = input.len(); - let mut rng = thread_rng(); - let main = (0..len).map(|_| { - DropCounter { - x: rng.next_u32(), - creation_id: creation_count.fetch_add(1, Ordering::Relaxed), + // Work out the total number of comparisons required to sort + // this array... + let mut count = 0usize; + input.to_owned().sort_by(|a, b| { count += 1; a.cmp(b) }); + + // ... and then panic on each and every single one. + for panic_countdown in 0..count { + // Refresh the counters. + for i in 0..len { + DROP_COUNTS[i].store(0, Ordering::Relaxed); + } + + let v = input.to_owned(); + let _ = thread::spawn(move || { + let mut v = v; + let mut panic_countdown = panic_countdown; + v.sort_by(|a, b| { + if panic_countdown == 0 { + panic!(); } - }).collect::>(); + panic_countdown -= 1; + a.cmp(b) + }) + }).join(); - // work out the total number of comparisons required to sort - // this array... - let mut count = 0_usize; - main.clone().sort_by(|a, b| { count += 1; a.cmp(b) }); - - // ... and then panic on each and every single one. - for panic_countdown in 0..count { - // refresh the counters. 
- for c in &drop_counts { - c.store(0, Ordering::Relaxed); - } - - let v = main.clone(); - - let _ = thread::spawn(move|| { - let mut v = v; - let mut panic_countdown = panic_countdown; - v.sort_by(|a, b| { - if panic_countdown == 0 { - panic!() - } - panic_countdown -= 1; - a.cmp(b) - }) - }).join(); - - // check that the number of things dropped is exactly - // what we expect (i.e. the contents of `v`). - for (i, c) in drop_counts.iter().enumerate().take(len) { - let count = c.load(Ordering::Relaxed); - assert!(count == 1, - "found drop count == {} for i == {}, len == {}", - count, i, len); - } - } + // Check that the number of things dropped is exactly + // what we expect (i.e. the contents of `v`). + for (i, c) in DROP_COUNTS.iter().enumerate().take(len) { + let count = c.load(Ordering::Relaxed); + assert!(count == 1, + "found drop count == {} for i == {}, len == {}", + count, i, len); + } + } +} + +fn main() { + for len in (1..20).chain(70..MAX_LEN) { + // Test on a random array. + let mut rng = thread_rng(); + let input = (0..len).map(|id| { + DropCounter { + x: rng.next_u32(), + id: id, + } + }).collect::>(); + test(&input); + + // Test on a sorted array with two elements randomly swapped, creating several natural + // runs of random lengths. Such arrays have very high chances of hitting all code paths in + // the merge procedure. + for _ in 0..5 { + let mut input = (0..len).map(|i| + DropCounter { + x: i as u32, + id: i, + } + ).collect::>(); + + let a = rng.gen::() % len; + let b = rng.gen::() % len; + input.swap(a, b); + + test(&input); } } } diff --git a/src/test/rustdoc/inline_local/glob-extern-no-defaults.rs b/src/test/rustdoc/inline_local/glob-extern-no-defaults.rs new file mode 100644 index 0000000000..fd2fdd7b8d --- /dev/null +++ b/src/test/rustdoc/inline_local/glob-extern-no-defaults.rs @@ -0,0 +1,35 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: --no-defaults + +#![crate_name = "foo"] + +mod mod1 { + extern { + pub fn public_fn(); + fn private_fn(); + } +} + +pub use mod1::*; + +// @has foo/index.html +// @has - "mod1" +// @has - "public_fn" +// @!has - "private_fn" +// @has foo/fn.public_fn.html +// @!has foo/fn.private_fn.html + +// @has foo/mod1/index.html +// @has - "public_fn" +// @has - "private_fn" +// @has foo/mod1/fn.public_fn.html +// @has foo/mod1/fn.private_fn.html diff --git a/src/test/rustdoc/inline_local/glob-extern.rs b/src/test/rustdoc/inline_local/glob-extern.rs new file mode 100644 index 0000000000..cf899d7728 --- /dev/null +++ b/src/test/rustdoc/inline_local/glob-extern.rs @@ -0,0 +1,31 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![crate_name = "foo"] + +mod mod1 { + extern { + pub fn public_fn(); + fn private_fn(); + } +} + +pub use mod1::*; + +// @has foo/index.html +// @!has - "mod1" +// @has - "public_fn" +// @!has - "private_fn" +// @has foo/fn.public_fn.html +// @!has foo/fn.private_fn.html + +// @!has foo/mod1/index.html +// @has foo/mod1/fn.public_fn.html +// @!has foo/mod1/fn.private_fn.html diff --git a/src/test/rustdoc/inline_local/glob-private-no-defaults.rs b/src/test/rustdoc/inline_local/glob-private-no-defaults.rs new file mode 100644 index 0000000000..420b60f2ac --- /dev/null +++ b/src/test/rustdoc/inline_local/glob-private-no-defaults.rs @@ -0,0 +1,58 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: --no-defaults + +#![crate_name = "foo"] + +mod mod1 { + mod mod2 { + pub struct Mod2Public; + struct Mod2Private; + } + pub use self::mod2::*; + + pub struct Mod1Public; + struct Mod1Private; +} +pub use mod1::*; + +// @has foo/index.html +// @has - "mod1" +// @has - "Mod1Public" +// @!has - "Mod1Private" +// @!has - "mod2" +// @has - "Mod2Public" +// @!has - "Mod2Private" +// @has foo/struct.Mod1Public.html +// @!has foo/struct.Mod1Private.html +// @has foo/struct.Mod2Public.html +// @!has foo/struct.Mod2Private.html + +// @has foo/mod1/index.html +// @has - "mod2" +// @has - "Mod1Public" +// @has - "Mod1Private" +// @!has - "Mod2Public" +// @!has - "Mod2Private" +// @has foo/mod1/struct.Mod1Public.html +// @has foo/mod1/struct.Mod1Private.html +// @!has foo/mod1/struct.Mod2Public.html +// @!has foo/mod1/struct.Mod2Private.html + +// @has foo/mod1/mod2/index.html +// @has - "Mod2Public" +// @has - "Mod2Private" +// @has foo/mod1/mod2/struct.Mod2Public.html +// @has foo/mod1/mod2/struct.Mod2Private.html + +// @!has foo/mod2/index.html +// @!has foo/mod2/struct.Mod2Public.html +// @!has foo/mod2/struct.Mod2Private.html diff --git a/src/test/rustdoc/inline_local/glob-private.rs b/src/test/rustdoc/inline_local/glob-private.rs new file mode 100644 index 0000000000..b5e256dfdc --- /dev/null +++ b/src/test/rustdoc/inline_local/glob-private.rs @@ -0,0 +1,49 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +#![crate_name = "foo"] + +mod mod1 { + mod mod2 { + pub struct Mod2Public; + struct Mod2Private; + } + pub use self::mod2::*; + + pub struct Mod1Public; + struct Mod1Private; +} +pub use mod1::*; + +// @has foo/index.html +// @!has - "mod1" +// @has - "Mod1Public" +// @!has - "Mod1Private" +// @!has - "mod2" +// @has - "Mod2Public" +// @!has - "Mod2Private" +// @has foo/struct.Mod1Public.html +// @!has foo/struct.Mod1Private.html +// @has foo/struct.Mod2Public.html +// @!has foo/struct.Mod2Private.html + +// @!has foo/mod1/index.html +// @has foo/mod1/struct.Mod1Public.html +// @!has foo/mod1/struct.Mod1Private.html +// @!has foo/mod1/struct.Mod2Public.html +// @!has foo/mod1/struct.Mod2Private.html + +// @!has foo/mod1/mod2/index.html +// @has foo/mod1/mod2/struct.Mod2Public.html +// @!has foo/mod1/mod2/struct.Mod2Private.html + +// @!has foo/mod2/index.html +// @!has foo/mod2/struct.Mod2Public.html +// @!has foo/mod2/struct.Mod2Private.html diff --git a/src/test/rustdoc/issue-32374.rs b/src/test/rustdoc/issue-32374.rs index 262a1ffce7..dea73317e5 100644 --- a/src/test/rustdoc/issue-32374.rs +++ b/src/test/rustdoc/issue-32374.rs @@ -20,6 +20,16 @@ // 'Deprecated since 1.0.0: text' // @has - 'test' // @has - '#32374' +// @matches issue_32374/struct.T.html '//*[@class="stab unstable"]' \ +// 'Unstable \(test #32374\)$' #[rustc_deprecated(since = "1.0.0", reason = "text")] #[unstable(feature = "test", issue = "32374")] pub struct T; + +// @has issue_32374/struct.U.html '//*[@class="stab deprecated"]' \ +// 'Deprecated since 1.0.0: deprecated' +// @has issue_32374/struct.U.html '//*[@class="stab unstable"]' \ +// 'Unstable (test #32374): unstable' +#[rustc_deprecated(since = "1.0.0", reason = "deprecated")] +#[unstable(feature = "test", issue = "32374", reason = "unstable")] +pub struct U; diff --git a/src/test/rustdoc/issue-34274.rs b/src/test/rustdoc/issue-34274.rs index 971c89b161..12f8804216 100644 --- a/src/test/rustdoc/issue-34274.rs +++ b/src/test/rustdoc/issue-34274.rs @@ -16,5 +16,5 @@ extern crate issue_34274; -// @has foo/fn.extern_c_fn.html '//a/@href' '../issue_34274/fn.extern_c_fn.html?gotosrc=' +// @has foo/fn.extern_c_fn.html '//a/@href' '../src/issue_34274/issue-34274.rs.html#12' pub use issue_34274::extern_c_fn; diff --git a/src/test/rustdoc/issue-38219.rs b/src/test/rustdoc/issue-38219.rs new file mode 100644 index 0000000000..19b338bf56 --- /dev/null +++ b/src/test/rustdoc/issue-38219.rs @@ -0,0 +1,18 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags:--test +// should-fail + +/// ``` +/// fail +/// ``` +#[macro_export] +macro_rules! 
foo { () => {} } diff --git a/src/test/rustdoc/line-breaks.rs b/src/test/rustdoc/line-breaks.rs index cc608a2447..a1eabb515a 100644 --- a/src/test/rustdoc/line-breaks.rs +++ b/src/test/rustdoc/line-breaks.rs @@ -10,6 +10,9 @@ #![crate_name = "foo"] +use std::ops::Add; +use std::fmt::Display; + //@count foo/fn.function_with_a_really_long_name.html //pre/br 2 pub fn function_with_a_really_long_name(parameter_one: i32, parameter_two: i32) @@ -19,3 +22,19 @@ pub fn function_with_a_really_long_name(parameter_one: i32, //@count foo/fn.short_name.html //pre/br 0 pub fn short_name(param: i32) -> i32 { param + 1 } + +//@count foo/fn.where_clause.html //pre/br 4 +pub fn where_clause(param_one: T, + param_two: U) + where T: Add + Display + Copy, + U: Add + Display + Copy, + T::Output: Display + Add + Copy, + >::Output: Display, + U::Output: Display + Copy +{ + let x = param_one + param_two; + println!("{} + {} = {}", param_one, param_two, x); + let y = param_two + param_one; + println!("{} + {} = {}", param_two, param_one, y); + println!("{} + {} = {}", x, y, x + y); +} diff --git a/src/test/rustdoc/playground-arg.rs b/src/test/rustdoc/playground-arg.rs new file mode 100644 index 0000000000..f0d55ef6e9 --- /dev/null +++ b/src/test/rustdoc/playground-arg.rs @@ -0,0 +1,24 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: --playground-url=https://example.com/ -Z unstable-options +// ignore-tidy-linelength + +#![crate_name = "foo"] + +//! ``` +//! use foo::dummy; +//! dummy(); +//! 
``` + +pub fn dummy() {} + +// ensure that `extern crate foo;` was inserted into code snips automatically: +// @matches foo/index.html '//a[@class="test-arrow"][@href="https://example.com/?code=extern%20crate%20foo%3B%0Afn%20main()%20%7B%0Ause%20foo%3A%3Adummy%3B%0Adummy()%3B%0A%7D"]' "Run" diff --git a/src/test/rustdoc/rustc-macro-crate.rs b/src/test/rustdoc/rustc-macro-crate.rs index fe80a90955..dc28732b55 100644 --- a/src/test/rustdoc/rustc-macro-crate.rs +++ b/src/test/rustdoc/rustc-macro-crate.rs @@ -10,8 +10,6 @@ // no-prefer-dynamic -#![feature(proc_macro)] -#![feature(proc_macro_lib)] #![crate_type = "proc-macro"] extern crate proc_macro; diff --git a/src/test/rustdoc/src-links-external.rs b/src/test/rustdoc/src-links-external.rs index e9db4f519e..d3307bb4d4 100644 --- a/src/test/rustdoc/src-links-external.rs +++ b/src/test/rustdoc/src-links-external.rs @@ -11,12 +11,13 @@ // aux-build:src-links-external.rs // build-aux-docs // ignore-cross-compile +// ignore-tidy-linelength #![crate_name = "foo"] extern crate src_links_external; -// @has foo/bar/index.html '//a/@href' '../src_links_external/index.html?gotosrc=' +// @has foo/bar/index.html '//a/@href' '../../src/src_links_external/src-links-external.rs.html#11' pub use src_links_external as bar; -// @has foo/bar/struct.Foo.html '//a/@href' '../src_links_external/struct.Foo.html?gotosrc=' +// @has foo/bar/struct.Foo.html '//a/@href' '../../src/src_links_external/src-links-external.rs.html#11' diff --git a/src/test/rustdoc/viewpath-rename.rs b/src/test/rustdoc/viewpath-rename.rs index ccc0acab7f..4b6843d33f 100644 --- a/src/test/rustdoc/viewpath-rename.rs +++ b/src/test/rustdoc/viewpath-rename.rs @@ -21,8 +21,11 @@ pub enum Maybe { // @has foo/prelude/index.html pub mod prelude { - // @has foo/prelude/index.html '//code' 'pub use io::{self as FooIo, Reader as FooReader}' + // @has foo/prelude/index.html '//code' 'pub use io as FooIo;' + // @has foo/prelude/index.html '//code' 'pub use io::Reader as FooReader;' #[doc(no_inline)] pub use io::{self as FooIo, Reader as FooReader}; - // @has foo/prelude/index.html '//code' 'pub use Maybe::{self, Just as MaybeJust, Nothing}' + // @has foo/prelude/index.html '//code' 'pub use Maybe;' + // @has foo/prelude/index.html '//code' 'pub use Maybe::Just as MaybeJust;' + // @has foo/prelude/index.html '//code' 'pub use Maybe::Nothing;' #[doc(no_inline)] pub use Maybe::{self, Just as MaybeJust, Nothing}; } diff --git a/src/test/rustdoc/viewpath-self.rs b/src/test/rustdoc/viewpath-self.rs index 65a981353f..000960ad97 100644 --- a/src/test/rustdoc/viewpath-self.rs +++ b/src/test/rustdoc/viewpath-self.rs @@ -21,8 +21,11 @@ pub enum Maybe { // @has foo/prelude/index.html pub mod prelude { - // @has foo/prelude/index.html '//code' 'pub use io::{self, Reader}' + // @has foo/prelude/index.html '//code' 'pub use io;' + // @has foo/prelude/index.html '//code' 'pub use io::Reader;' #[doc(no_inline)] pub use io::{self, Reader}; - // @has foo/prelude/index.html '//code' 'pub use Maybe::{self, Just, Nothing}' + // @has foo/prelude/index.html '//code' 'pub use Maybe;' + // @has foo/prelude/index.html '//code' 'pub use Maybe::Just;' + // @has foo/prelude/index.html '//code' 'pub use Maybe::Nothing;' #[doc(no_inline)] pub use Maybe::{self, Just, Nothing}; } diff --git a/src/test/ui/codemap_tests/repair_span_std_macros.stderr b/src/test/ui/codemap_tests/repair_span_std_macros.stderr index 73a1c5bae8..7e0d778a3b 100644 --- a/src/test/ui/codemap_tests/repair_span_std_macros.stderr +++ 
b/src/test/ui/codemap_tests/repair_span_std_macros.stderr @@ -1,8 +1,8 @@ -error[E0282]: unable to infer enough type information about `_` +error[E0282]: unable to infer enough type information about `T` --> $DIR/repair_span_std_macros.rs:12:13 | 12 | let x = vec![]; - | ^^^^^^ cannot infer type for `_` + | ^^^^^^ cannot infer type for `T` | = note: type annotations or generic parameter binding required = note: this error originates in a macro outside of the current crate diff --git a/src/test/ui/codemap_tests/two_files.stderr b/src/test/ui/codemap_tests/two_files.stderr index d58e7148f6..d05e6eb2bb 100644 --- a/src/test/ui/codemap_tests/two_files.stderr +++ b/src/test/ui/codemap_tests/two_files.stderr @@ -2,7 +2,7 @@ error[E0404]: `Bar` is not a trait --> $DIR/two_files.rs:15:6 | 15 | impl Bar for Baz { } - | ^^^ not a trait + | ^^^ expected trait, found type alias | = note: type aliases cannot be used for traits diff --git a/src/test/ui/compare-method/region-extra-2.stderr b/src/test/ui/compare-method/region-extra-2.stderr index 54a551bcfe..12b0ecabcc 100644 --- a/src/test/ui/compare-method/region-extra-2.stderr +++ b/src/test/ui/compare-method/region-extra-2.stderr @@ -1,11 +1,15 @@ error[E0276]: impl has stricter requirements than trait --> $DIR/region-extra-2.rs:19:5 | -15 | fn renew<'b: 'a>(self) -> &'b mut [T]; - | -------------------------------------- definition of `renew` from trait +15 | fn renew<'b: 'a>(self) -> &'b mut [T]; + | -------------------------------------- definition of `renew` from trait ... -19 | fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b { - | ^ impl has extra requirement `'a: 'b` +19 | fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b { + | _____^ starting here... +20 | | //~^ ERROR E0276 +21 | | &mut self[..] +22 | | } + | |_____^ ...ending here: impl has extra requirement `'a: 'b` error: aborting due to previous error diff --git a/src/test/ui/compare-method/traits-misc-mismatch-2.stderr b/src/test/ui/compare-method/traits-misc-mismatch-2.stderr index 5003550fd1..77b056f697 100644 --- a/src/test/ui/compare-method/traits-misc-mismatch-2.stderr +++ b/src/test/ui/compare-method/traits-misc-mismatch-2.stderr @@ -1,11 +1,15 @@ error[E0276]: impl has stricter requirements than trait --> $DIR/traits-misc-mismatch-2.rs:23:5 | -19 | fn zip>(self, other: U) -> ZipIterator; - | ------------------------------------------------------------------ definition of `zip` from trait +19 | fn zip>(self, other: U) -> ZipIterator; + | ------------------------------------------------------------------ definition of `zip` from trait ... -23 | fn zip>(self, other: U) -> ZipIterator { - | ^ impl has extra requirement `U: Iterator` +23 | fn zip>(self, other: U) -> ZipIterator { + | _____^ starting here... +24 | | //~^ ERROR E0276 +25 | | ZipIterator{a: self, b: other} +26 | | } + | |_____^ ...ending here: impl has extra requirement `U: Iterator` error: aborting due to previous error diff --git a/src/test/compile-fail/issue-31424.rs b/src/test/ui/did_you_mean/issue-31424.rs similarity index 79% rename from src/test/compile-fail/issue-31424.rs rename to src/test/ui/did_you_mean/issue-31424.rs index 262efab22a..374d06bb71 100644 --- a/src/test/compile-fail/issue-31424.rs +++ b/src/test/ui/did_you_mean/issue-31424.rs @@ -15,15 +15,12 @@ struct Struct; impl Struct { fn foo(&mut self) { (&mut self).bar(); - //~^ ERROR cannot borrow immutable argument `self` as mutable - // ... 
and no SUGGESTION that suggests `&mut mut self` } // In this case we could keep the suggestion, but to distinguish the // two cases is pretty hard. It's an obscure case anyway. fn bar(self: &mut Self) { (&mut self).bar(); - //~^ ERROR cannot borrow immutable argument `self` as mutable } } diff --git a/src/test/ui/did_you_mean/issue-31424.stderr b/src/test/ui/did_you_mean/issue-31424.stderr new file mode 100644 index 0000000000..4873acf551 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-31424.stderr @@ -0,0 +1,17 @@ +error: cannot borrow immutable argument `self` as mutable + --> $DIR/issue-31424.rs:17:15 + | +17 | (&mut self).bar(); + | ^^^^ + | | + | try removing `&mut` here + | cannot reborrow mutably + +error: cannot borrow immutable argument `self` as mutable + --> $DIR/issue-31424.rs:23:15 + | +23 | (&mut self).bar(); + | ^^^^ cannot borrow mutably + +error: aborting due to 2 previous errors + diff --git a/src/test/ui/did_you_mean/issue-34126.rs b/src/test/ui/did_you_mean/issue-34126.rs new file mode 100644 index 0000000000..9523e6bbf3 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-34126.rs @@ -0,0 +1,23 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +struct Z { } + +impl Z { + fn run(&self, z: &mut Z) { } + fn start(&mut self) { + self.run(&mut self); + } +} + +fn main() { + let mut z = Z {}; + z.start(); +} diff --git a/src/test/ui/did_you_mean/issue-34126.stderr b/src/test/ui/did_you_mean/issue-34126.stderr new file mode 100644 index 0000000000..8011298c80 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-34126.stderr @@ -0,0 +1,11 @@ +error: cannot borrow immutable argument `self` as mutable + --> $DIR/issue-34126.rs:16:23 + | +16 | self.run(&mut self); + | ^^^^ + | | + | try removing `&mut` here + | cannot reborrow mutably + +error: aborting due to previous error + diff --git a/src/test/ui/did_you_mean/issue-34337.rs b/src/test/ui/did_you_mean/issue-34337.rs new file mode 100644 index 0000000000..42853a5d83 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-34337.rs @@ -0,0 +1,17 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +fn get(key: &mut String) { } + +fn main() { + let mut v: Vec = Vec::new(); + let ref mut key = v[0]; + get(&mut key); +} diff --git a/src/test/ui/did_you_mean/issue-34337.stderr b/src/test/ui/did_you_mean/issue-34337.stderr new file mode 100644 index 0000000000..d658912835 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-34337.stderr @@ -0,0 +1,11 @@ +error: cannot borrow immutable local variable `key` as mutable + --> $DIR/issue-34337.rs:16:14 + | +16 | get(&mut key); + | ^^^ + | | + | try removing `&mut` here + | cannot reborrow mutably + +error: aborting due to previous error + diff --git a/src/test/ui/did_you_mean/issue-37139.rs b/src/test/ui/did_you_mean/issue-37139.rs new file mode 100644 index 0000000000..6518176805 --- /dev/null +++ b/src/test/ui/did_you_mean/issue-37139.rs @@ -0,0 +1,25 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +enum TestEnum { + Item(i32), +} + +fn test(_: &mut i32) { +} + +fn main() { + let mut x = TestEnum::Item(10); + match x { + TestEnum::Item(ref mut x) => { + test(&mut x); + } + } +} diff --git a/src/test/ui/did_you_mean/issue-37139.stderr b/src/test/ui/did_you_mean/issue-37139.stderr new file mode 100644 index 0000000000..b1a8231fdb --- /dev/null +++ b/src/test/ui/did_you_mean/issue-37139.stderr @@ -0,0 +1,11 @@ +error: cannot borrow immutable local variable `x` as mutable + --> $DIR/issue-37139.rs:22:23 + | +22 | test(&mut x); + | ^ + | | + | try removing `&mut` here + | cannot reborrow mutably + +error: aborting due to previous error + diff --git a/src/test/ui/dropck/dropck-eyepatch-implies-unsafe-impl.stderr b/src/test/ui/dropck/dropck-eyepatch-implies-unsafe-impl.stderr index c53cf020a9..92e2fe8e93 100644 --- a/src/test/ui/dropck/dropck-eyepatch-implies-unsafe-impl.stderr +++ b/src/test/ui/dropck/dropck-eyepatch-implies-unsafe-impl.stderr @@ -1,14 +1,26 @@ error[E0569]: requires an `unsafe impl` declaration due to `#[may_dangle]` attribute --> $DIR/dropck-eyepatch-implies-unsafe-impl.rs:32:1 | -32 | impl<#[may_dangle] A, B: fmt::Debug> Drop for Pt { - | ^ +32 | impl<#[may_dangle] A, B: fmt::Debug> Drop for Pt { + | _^ starting here... +33 | | //~^ ERROR requires an `unsafe impl` declaration due to `#[may_dangle]` attribute +34 | | +35 | | // (unsafe to access self.1 due to #[may_dangle] on A) +36 | | fn drop(&mut self) { println!("drop {} {:?}", self.0, self.2); } +37 | | } + | |_^ ...ending here error[E0569]: requires an `unsafe impl` declaration due to `#[may_dangle]` attribute --> $DIR/dropck-eyepatch-implies-unsafe-impl.rs:38:1 | -38 | impl<#[may_dangle] 'a, 'b, B: fmt::Debug> Drop for Pr<'a, 'b, B> { - | ^ +38 | impl<#[may_dangle] 'a, 'b, B: fmt::Debug> Drop for Pr<'a, 'b, B> { + | _^ starting here... 
+39 | | //~^ ERROR requires an `unsafe impl` declaration due to `#[may_dangle]` attribute +40 | | +41 | | // (unsafe to access self.1 due to #[may_dangle] on 'a) +42 | | fn drop(&mut self) { println!("drop {} {:?}", self.0, self.2); } +43 | | } + | |_^ ...ending here error: aborting due to 2 previous errors diff --git a/src/test/ui/fmt/format-string-error.rs b/src/test/ui/fmt/format-string-error.rs new file mode 100644 index 0000000000..ec715b3f0b --- /dev/null +++ b/src/test/ui/fmt/format-string-error.rs @@ -0,0 +1,16 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn main() { + println!("{"); + println!("{{}}"); + println!("}"); +} + diff --git a/src/test/ui/fmt/format-string-error.stderr b/src/test/ui/fmt/format-string-error.stderr new file mode 100644 index 0000000000..58b392f0b8 --- /dev/null +++ b/src/test/ui/fmt/format-string-error.stderr @@ -0,0 +1,20 @@ +error: invalid format string: expected `'}'` but string was terminated + --> $DIR/format-string-error.rs:12:5 + | +12 | println!("{"); + | ^^^^^^^^^^^^^^ + | + = note: if you intended to print `{`, you can escape it using `{{` + = note: this error originates in a macro outside of the current crate + +error: invalid format string: unmatched `}` found + --> $DIR/format-string-error.rs:14:5 + | +14 | println!("}"); + | ^^^^^^^^^^^^^^ + | + = note: if you intended to print `}`, you can escape it using `}}` + = note: this error originates in a macro outside of the current crate + +error: aborting due to 2 previous errors + diff --git a/src/test/ui/issue-37311-type-length-limit/issue-37311.rs b/src/test/ui/issue-37311-type-length-limit/issue-37311.rs new file mode 100644 index 0000000000..add96461f1 --- /dev/null +++ b/src/test/ui/issue-37311-type-length-limit/issue-37311.rs @@ -0,0 +1,30 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +trait Mirror { + type Image; +} + +impl Mirror for T { type Image = T; } + +trait Foo { + fn recurse(&self); +} + +impl Foo for T { + #[allow(unconditional_recursion)] + fn recurse(&self) { + (self, self).recurse(); + } +} + +fn main() { + ().recurse(); +} diff --git a/src/test/ui/issue-37311-type-length-limit/issue-37311.stderr b/src/test/ui/issue-37311-type-length-limit/issue-37311.stderr new file mode 100644 index 0000000000..5a63d235a7 --- /dev/null +++ b/src/test/ui/issue-37311-type-length-limit/issue-37311.stderr @@ -0,0 +1,13 @@ +error: reached the type-length limit while instantiating `<(&(&(&(&(&(&(&(&(&(&(&(&(&(&(&(&(&(&(&(), &()), &(&()...` + --> $DIR/issue-37311.rs:23:5 + | +23 | fn recurse(&self) { + | _____^ starting here... 
+24 | | (self, self).recurse(); +25 | | } + | |_____^ ...ending here + | + = note: consider adding a `#![type_length_limit="2097152"]` attribute to your crate + +error: aborting due to previous error + diff --git a/src/test/ui/lifetimes/borrowck-let-suggestion.stderr b/src/test/ui/lifetimes/borrowck-let-suggestion.stderr index 9160034001..d85483f43c 100644 --- a/src/test/ui/lifetimes/borrowck-let-suggestion.stderr +++ b/src/test/ui/lifetimes/borrowck-let-suggestion.stderr @@ -1,8 +1,8 @@ error: borrowed value does not live long enough - --> $DIR/borrowck-let-suggestion.rs:12:13 + --> $DIR/borrowck-let-suggestion.rs:12:23 | 12 | let x = [1].iter(); - | ^^^ - temporary value only lives until here + | --- ^ temporary value dropped here while still borrowed | | | temporary value created here 13 | } diff --git a/src/test/ui/lifetimes/consider-using-explicit-lifetime.rs b/src/test/ui/lifetimes/consider-using-explicit-lifetime.rs new file mode 100644 index 0000000000..603f55af46 --- /dev/null +++ b/src/test/ui/lifetimes/consider-using-explicit-lifetime.rs @@ -0,0 +1,28 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::str::FromStr; + +pub struct Foo<'a> { + field: &'a str, +} + +impl<'a> Foo<'a> { + fn bar(path: &str) -> Result { + Ok(Foo { field: path }) + } +} + +impl<'a> FromStr for Foo<'a> { + type Err = (); + fn from_str(path: &str) -> Result { + Ok(Foo { field: path }) + } +} diff --git a/src/test/ui/lifetimes/consider-using-explicit-lifetime.stderr b/src/test/ui/lifetimes/consider-using-explicit-lifetime.stderr new file mode 100644 index 0000000000..153aaa0783 --- /dev/null +++ b/src/test/ui/lifetimes/consider-using-explicit-lifetime.stderr @@ -0,0 +1,25 @@ +error: main function not found + +error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements + --> $DIR/consider-using-explicit-lifetime.rs:19:12 + | +19 | Ok(Foo { field: path }) + | ^^^ + +error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements + --> $DIR/consider-using-explicit-lifetime.rs:26:12 + | +26 | Ok(Foo { field: path }) + | ^^^ + | +help: consider using an explicit lifetime parameter as shown: fn from_str(path: &'a str) -> Result + --> $DIR/consider-using-explicit-lifetime.rs:25:5 + | +25 | fn from_str(path: &str) -> Result { + | _____^ starting here... +26 | | Ok(Foo { field: path }) +27 | | } + | |_____^ ...ending here + +error: aborting due to 2 previous errors + diff --git a/src/test/ui/macros/format-foreign.rs b/src/test/ui/macros/format-foreign.rs new file mode 100644 index 0000000000..cca45ca9ec --- /dev/null +++ b/src/test/ui/macros/format-foreign.rs @@ -0,0 +1,20 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +fn main() { + println!("%.*3$s %s!\n", "Hello,", "World", 4); + println!("%1$*2$.*3$f", 123.456); + + // This should *not* produce hints, on the basis that there's equally as + // many "correct" format specifiers. 
It's *probably* just an actual typo. + println!("{} %f", "one", 2.0); + + println!("Hi there, $NAME.", NAME="Tim"); +} diff --git a/src/test/ui/macros/format-foreign.stderr b/src/test/ui/macros/format-foreign.stderr new file mode 100644 index 0000000000..0283052a89 --- /dev/null +++ b/src/test/ui/macros/format-foreign.stderr @@ -0,0 +1,52 @@ +error: multiple unused formatting arguments + --> $DIR/format-foreign.rs:12:5 + | +12 | println!("%.*3$s %s!/n", "Hello,", "World", 4); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | +note: argument never used + --> $DIR/format-foreign.rs:12:30 + | +12 | println!("%.*3$s %s!/n", "Hello,", "World", 4); + | ^^^^^^^^ +note: argument never used + --> $DIR/format-foreign.rs:12:40 + | +12 | println!("%.*3$s %s!/n", "Hello,", "World", 4); + | ^^^^^^^ +note: argument never used + --> $DIR/format-foreign.rs:12:49 + | +12 | println!("%.*3$s %s!/n", "Hello,", "World", 4); + | ^ + = help: `%.*3$s` should be written as `{:.2$}` + = help: `%s` should be written as `{}` + = note: printf formatting not supported; see the documentation for `std::fmt` + = note: this error originates in a macro outside of the current crate + +error: argument never used + --> $DIR/format-foreign.rs:13:29 + | +13 | println!("%1$*2$.*3$f", 123.456); + | ^^^^^^^ + | + = help: `%1$*2$.*3$f` should be written as `{0:1$.2$}` + = note: printf formatting not supported; see the documentation for `std::fmt` + +error: argument never used + --> $DIR/format-foreign.rs:17:30 + | +17 | println!("{} %f", "one", 2.0); + | ^^^ + +error: named argument never used + --> $DIR/format-foreign.rs:19:39 + | +19 | println!("Hi there, $NAME.", NAME="Tim"); + | ^^^^^ + | + = help: `$NAME` should be written as `{NAME}` + = note: shell formatting not supported; see the documentation for `std::fmt` + +error: aborting due to 4 previous errors + diff --git a/src/test/ui/macros/macro_path_as_generic_bound.rs b/src/test/ui/macros/macro_path_as_generic_bound.rs new file mode 100644 index 0000000000..781ea30ed8 --- /dev/null +++ b/src/test/ui/macros/macro_path_as_generic_bound.rs @@ -0,0 +1,19 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +trait Foo {} + +macro_rules! foo(($t:path) => { + impl Foo for T {} +}); + +foo!(m::m2::A); + +fn main() {} diff --git a/src/test/ui/macros/macro_path_as_generic_bound.stderr b/src/test/ui/macros/macro_path_as_generic_bound.stderr new file mode 100644 index 0000000000..9663503210 --- /dev/null +++ b/src/test/ui/macros/macro_path_as_generic_bound.stderr @@ -0,0 +1,11 @@ +error[E0433]: failed to resolve. 
Use of undeclared type or module `m` + --> $DIR/macro_path_as_generic_bound.rs:17:6 + | +17 | foo!(m::m2::A); + | -----^^^^^^^^-- + | | | + | | Use of undeclared type or module `m` + | in this macro invocation + +error: cannot continue compilation due to previous error + diff --git a/src/test/ui/mismatched_types/main.stderr b/src/test/ui/mismatched_types/main.stderr index 9e26be6fdd..c87b635521 100644 --- a/src/test/ui/mismatched_types/main.stderr +++ b/src/test/ui/mismatched_types/main.stderr @@ -1,8 +1,10 @@ error[E0308]: mismatched types --> $DIR/main.rs:12:18 | -12 | let x: u32 = ( - | ^ expected u32, found () +12 | let x: u32 = ( + | __________________^ starting here... +13 | | ); + | |_____^ ...ending here: expected u32, found () | = note: expected type `u32` = note: found type `()` diff --git a/src/test/ui/missing-items/auxiliary/m1.rs b/src/test/ui/missing-items/auxiliary/m1.rs new file mode 100644 index 0000000000..f838969226 --- /dev/null +++ b/src/test/ui/missing-items/auxiliary/m1.rs @@ -0,0 +1,17 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +#![feature(associated_consts)] + +pub trait X { + const CONSTANT: u32; + type Type; + fn method(&self, s: String) -> Self::Type; +} diff --git a/src/test/ui/missing-items/m2.rs b/src/test/ui/missing-items/m2.rs new file mode 100644 index 0000000000..fc09039640 --- /dev/null +++ b/src/test/ui/missing-items/m2.rs @@ -0,0 +1,21 @@ +// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// aux-build:m1.rs + +#![feature(associated_consts)] + +extern crate m1; + +struct X { +} + +impl m1::X for X { +} diff --git a/src/test/ui/missing-items/m2.stderr b/src/test/ui/missing-items/m2.stderr new file mode 100644 index 0000000000..3313543454 --- /dev/null +++ b/src/test/ui/missing-items/m2.stderr @@ -0,0 +1,16 @@ +error: main function not found + +error[E0046]: not all trait items implemented, missing: `CONSTANT`, `Type`, `method` + --> $DIR/m2.rs:20:1 + | +20 | impl m1::X for X { + | _^ starting here... +21 | | } + | |_^ ...ending here: missing `CONSTANT`, `Type`, `method` in implementation + | + = note: `CONSTANT` from trait: `const CONSTANT: u32;` + = note: `Type` from trait: `type Type;` + = note: `method` from trait: `fn(&Self, std::string::String) -> ::Type` + +error: aborting due to previous error + diff --git a/src/test/compile-fail/auxiliary/issue_12612_1.rs b/src/test/ui/missing-items/missing-type-parameter.rs similarity index 92% rename from src/test/compile-fail/auxiliary/issue_12612_1.rs rename to src/test/ui/missing-items/missing-type-parameter.rs index a0234c1185..3671abd662 100644 --- a/src/test/compile-fail/auxiliary/issue_12612_1.rs +++ b/src/test/ui/missing-items/missing-type-parameter.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-pub mod bar { - pub fn foo() {} +fn foo() { } + +fn main() { + foo(); } diff --git a/src/test/ui/missing-items/missing-type-parameter.stderr b/src/test/ui/missing-items/missing-type-parameter.stderr new file mode 100644 index 0000000000..2d007af498 --- /dev/null +++ b/src/test/ui/missing-items/missing-type-parameter.stderr @@ -0,0 +1,10 @@ +error[E0282]: unable to infer enough type information about `X` + --> $DIR/missing-type-parameter.rs:14:5 + | +14 | foo(); + | ^^^ cannot infer type for `X` + | + = note: type annotations or generic parameter binding required + +error: aborting due to previous error + diff --git a/src/test/ui/print_type_sizes/anonymous.rs b/src/test/ui/print_type_sizes/anonymous.rs new file mode 100644 index 0000000000..dc93bddbad --- /dev/null +++ b/src/test/ui/print_type_sizes/anonymous.rs @@ -0,0 +1,27 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// All of the types that occur in this function are uninteresting, in +// that one cannot control the sizes of these types with the same sort +// of enum-variant manipulation tricks. + +pub fn main() { + let _byte: u8 = 0; + let _word: usize = 0; + let _tuple: (u8, usize)= (0, 0); + let _array: [u8; 128] = [0; 128]; + let _fn: fn (u8) -> u8 = id; + let _diverging: fn (u8) -> ! = bye; + + fn id(x: u8) -> u8 { x }; + fn bye(_: u8) -> ! { loop { } } +} diff --git a/src/test/ui/print_type_sizes/generics.rs b/src/test/ui/print_type_sizes/generics.rs new file mode 100644 index 0000000000..93bcd1c36e --- /dev/null +++ b/src/test/ui/print_type_sizes/generics.rs @@ -0,0 +1,73 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates how generics are handled: types have to be +// monomorphized, in the MIR of the original function in which they +// occur, to have their size reported. + +// In an ad-hoc attempt to avoid the injection of unwinding code +// (which clutters the output of `-Z print-type-sizes` with types from +// `unwind::libunwind`): +// +// * I am not using Default to build values because that seems to +// cause the injection of unwinding code. (Instead I just make `fn new` +// methods.) +// +// * Pair derive Copy to ensure that we don't inject +// unwinding code into generic uses of Pair when T itself is also +// Copy. +// +// (I suspect this reflect some naivety within the rust compiler +// itself; it should be checking for drop glue, i.e. a destructor +// somewhere in the monomorphized types. It should not matter whether +// the type is Copy.) 
+#[derive(Copy, Clone)] +pub struct Pair { + _car: T, + _cdr: T, +} + +impl Pair { + fn new(a: T, d: T) -> Self { + Pair { + _car: a, + _cdr: d, + } + } +} + +#[derive(Copy, Clone)] +pub struct SevenBytes([u8; 7]); +pub struct FiftyBytes([u8; 50]); + +pub struct ZeroSized; + +impl SevenBytes { + fn new() -> Self { SevenBytes([0; 7]) } +} + +impl FiftyBytes { + fn new() -> Self { FiftyBytes([0; 50]) } +} + +pub fn f1(x: T) { + let _v: Pair = Pair::new(x, x); + let _v2: Pair = + Pair::new(FiftyBytes::new(), FiftyBytes::new()); +} + +pub fn main() { + let _b: Pair = Pair::new(0, 0); + let _s: Pair = Pair::new(SevenBytes::new(), SevenBytes::new()); + let _z: ZeroSized = ZeroSized; + f1::(SevenBytes::new()); +} diff --git a/src/test/ui/print_type_sizes/generics.stdout b/src/test/ui/print_type_sizes/generics.stdout new file mode 100644 index 0000000000..0f02f39795 --- /dev/null +++ b/src/test/ui/print_type_sizes/generics.stdout @@ -0,0 +1,14 @@ +print-type-size type: `Pair`: 100 bytes, alignment: 1 bytes +print-type-size field `._car`: 50 bytes +print-type-size field `._cdr`: 50 bytes +print-type-size type: `FiftyBytes`: 50 bytes, alignment: 1 bytes +print-type-size field `.0`: 50 bytes +print-type-size type: `Pair`: 14 bytes, alignment: 1 bytes +print-type-size field `._car`: 7 bytes +print-type-size field `._cdr`: 7 bytes +print-type-size type: `SevenBytes`: 7 bytes, alignment: 1 bytes +print-type-size field `.0`: 7 bytes +print-type-size type: `Pair`: 2 bytes, alignment: 1 bytes +print-type-size field `._car`: 1 bytes +print-type-size field `._cdr`: 1 bytes +print-type-size type: `ZeroSized`: 0 bytes, alignment: 1 bytes diff --git a/src/test/ui/print_type_sizes/multiple_types.rs b/src/test/ui/print_type_sizes/multiple_types.rs new file mode 100644 index 0000000000..2b5010767f --- /dev/null +++ b/src/test/ui/print_type_sizes/multiple_types.rs @@ -0,0 +1,28 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates that when multiple structural types occur in +// a function, every one of them is included in the output. 
+ +pub struct SevenBytes([u8; 7]); +pub struct FiftyBytes([u8; 50]); + +pub enum Enum { + Small(SevenBytes), + Large(FiftyBytes), +} + +pub fn main() { + let _e: Enum; + let _f: FiftyBytes; + let _s: SevenBytes; +} diff --git a/src/test/ui/print_type_sizes/multiple_types.stdout b/src/test/ui/print_type_sizes/multiple_types.stdout new file mode 100644 index 0000000000..eed9af2698 --- /dev/null +++ b/src/test/ui/print_type_sizes/multiple_types.stdout @@ -0,0 +1,10 @@ +print-type-size type: `Enum`: 51 bytes, alignment: 1 bytes +print-type-size discriminant: 1 bytes +print-type-size variant `Small`: 7 bytes +print-type-size field `.0`: 7 bytes +print-type-size variant `Large`: 50 bytes +print-type-size field `.0`: 50 bytes +print-type-size type: `FiftyBytes`: 50 bytes, alignment: 1 bytes +print-type-size field `.0`: 50 bytes +print-type-size type: `SevenBytes`: 7 bytes, alignment: 1 bytes +print-type-size field `.0`: 7 bytes diff --git a/src/test/ui/print_type_sizes/no_duplicates.rs b/src/test/ui/print_type_sizes/no_duplicates.rs new file mode 100644 index 0000000000..6008a346c0 --- /dev/null +++ b/src/test/ui/print_type_sizes/no_duplicates.rs @@ -0,0 +1,25 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates that when the same type occurs repeatedly +// (even if multiple functions), it is only printed once in the +// print-type-sizes output. + +pub struct SevenBytes([u8; 7]); + +pub fn f1() { + let _s: SevenBytes = SevenBytes([0; 7]); +} + +pub fn main() { + let _s: SevenBytes = SevenBytes([0; 7]); +} diff --git a/src/test/ui/print_type_sizes/no_duplicates.stdout b/src/test/ui/print_type_sizes/no_duplicates.stdout new file mode 100644 index 0000000000..50180f356e --- /dev/null +++ b/src/test/ui/print_type_sizes/no_duplicates.stdout @@ -0,0 +1,2 @@ +print-type-size type: `SevenBytes`: 7 bytes, alignment: 1 bytes +print-type-size field `.0`: 7 bytes diff --git a/src/test/ui/print_type_sizes/nullable.rs b/src/test/ui/print_type_sizes/nullable.rs new file mode 100644 index 0000000000..f7fdcac81d --- /dev/null +++ b/src/test/ui/print_type_sizes/nullable.rs @@ -0,0 +1,69 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates how enums with a non-null field are handled, +// modelled after cases like `Option<&u32>` and such. +// +// It uses NonZero directly, rather than `&_` or `Unique<_>`, because +// the test is not set up to deal with target-dependent pointer width. +// +// It avoids using u64/i64 because on some targets that is only 4-byte +// aligned (while on most it is 8-byte aligned) and so the resulting +// padding and overall computed sizes can be quite different. 
+ +#![feature(nonzero)] +#![allow(dead_code)] + +extern crate core; +use core::nonzero::{NonZero, Zeroable}; + +pub enum MyOption { None, Some(T) } + +impl Default for MyOption { + fn default() -> Self { MyOption::None } +} + +pub enum EmbeddedDiscr { + None, + Record { pre: u8, val: NonZero, post: u16 }, +} + +impl Default for EmbeddedDiscr { + fn default() -> Self { EmbeddedDiscr::None } +} + +#[derive(Default)] +pub struct IndirectNonZero { + pre: u8, + nested: NestedNonZero, + post: u16, +} + +pub struct NestedNonZero { + pre: u8, + val: NonZero, + post: u16, +} + +impl Default for NestedNonZero { + fn default() -> Self { + unsafe { + NestedNonZero { pre: 0, val: NonZero::new(Default::default()), post: 0 } + } + } +} + +pub fn main() { + let _x: MyOption> = Default::default(); + let _y: EmbeddedDiscr = Default::default(); + let _z: MyOption> = Default::default(); +} diff --git a/src/test/ui/print_type_sizes/nullable.stdout b/src/test/ui/print_type_sizes/nullable.stdout new file mode 100644 index 0000000000..dd999c4a5e --- /dev/null +++ b/src/test/ui/print_type_sizes/nullable.stdout @@ -0,0 +1,27 @@ +print-type-size type: `IndirectNonZero`: 20 bytes, alignment: 4 bytes +print-type-size field `.pre`: 1 bytes +print-type-size padding: 3 bytes +print-type-size field `.nested`: 12 bytes, alignment: 4 bytes +print-type-size field `.post`: 2 bytes +print-type-size end padding: 2 bytes +print-type-size type: `MyOption>`: 20 bytes, alignment: 4 bytes +print-type-size variant `Some`: 20 bytes +print-type-size field `.0`: 20 bytes +print-type-size type: `EmbeddedDiscr`: 12 bytes, alignment: 4 bytes +print-type-size variant `Record`: 10 bytes +print-type-size field `.pre`: 1 bytes +print-type-size padding: 3 bytes +print-type-size field `.val`: 4 bytes, alignment: 4 bytes +print-type-size field `.post`: 2 bytes +print-type-size end padding: 2 bytes +print-type-size type: `NestedNonZero`: 12 bytes, alignment: 4 bytes +print-type-size field `.pre`: 1 bytes +print-type-size padding: 3 bytes +print-type-size field `.val`: 4 bytes, alignment: 4 bytes +print-type-size field `.post`: 2 bytes +print-type-size end padding: 2 bytes +print-type-size type: `MyOption>`: 4 bytes, alignment: 4 bytes +print-type-size variant `Some`: 4 bytes +print-type-size field `.0`: 4 bytes +print-type-size type: `core::nonzero::NonZero`: 4 bytes, alignment: 4 bytes +print-type-size field `.0`: 4 bytes diff --git a/src/test/ui/print_type_sizes/packed.rs b/src/test/ui/print_type_sizes/packed.rs new file mode 100644 index 0000000000..cd7ef86d70 --- /dev/null +++ b/src/test/ui/print_type_sizes/packed.rs @@ -0,0 +1,49 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates how packing is handled; it should cause +// the elimination of padding that would normally be introduced +// to satisfy alignment desirata. +// +// It avoids using u64/i64 because on some targets that is only 4-byte +// aligned (while on most it is 8-byte aligned) and so the resulting +// padding and overall computed sizes can be quite different. 
+ +#![feature(untagged_unions)] + +#![allow(dead_code)] + +#[derive(Default)] +#[repr(packed)] +struct Packed { + a: u8, + b: u8, + g: i32, + c: u8, + h: i16, + d: u8, +} + +#[derive(Default)] +struct Padded { + a: u8, + b: u8, + g: i32, + c: u8, + h: i16, + d: u8, +} + +pub fn main() { + let _c: Packed = Default::default(); + let _d: Padded = Default::default(); +} diff --git a/src/test/ui/print_type_sizes/packed.stdout b/src/test/ui/print_type_sizes/packed.stdout new file mode 100644 index 0000000000..1278a7d7c9 --- /dev/null +++ b/src/test/ui/print_type_sizes/packed.stdout @@ -0,0 +1,17 @@ +print-type-size type: `Padded`: 16 bytes, alignment: 4 bytes +print-type-size field `.a`: 1 bytes +print-type-size field `.b`: 1 bytes +print-type-size padding: 2 bytes +print-type-size field `.g`: 4 bytes, alignment: 4 bytes +print-type-size field `.c`: 1 bytes +print-type-size padding: 1 bytes +print-type-size field `.h`: 2 bytes, alignment: 2 bytes +print-type-size field `.d`: 1 bytes +print-type-size end padding: 3 bytes +print-type-size type: `Packed`: 10 bytes, alignment: 1 bytes +print-type-size field `.a`: 1 bytes +print-type-size field `.b`: 1 bytes +print-type-size field `.g`: 4 bytes +print-type-size field `.c`: 1 bytes +print-type-size field `.h`: 2 bytes +print-type-size field `.d`: 1 bytes diff --git a/src/test/ui/print_type_sizes/padding.rs b/src/test/ui/print_type_sizes/padding.rs new file mode 100644 index 0000000000..af34a908ce --- /dev/null +++ b/src/test/ui/print_type_sizes/padding.rs @@ -0,0 +1,39 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates how padding is handled: alignment +// requirements can lead to the introduction of padding, either before +// fields or at the end of the structure as a whole. +// +// It avoids using u64/i64 because on some targets that is only 4-byte +// aligned (while on most it is 8-byte aligned) and so the resulting +// padding and overall computed sizes can be quite different. 
+ +#![allow(dead_code)] + +struct S { + a: bool, + b: bool, + g: i32, +} + +enum E1 { + A(i32, i8), + B(S), +} + +enum E2 { + A(i8, i32), + B(S), +} + +fn main() { } diff --git a/src/test/ui/print_type_sizes/padding.stdout b/src/test/ui/print_type_sizes/padding.stdout new file mode 100644 index 0000000000..bb95f790bd --- /dev/null +++ b/src/test/ui/print_type_sizes/padding.stdout @@ -0,0 +1,21 @@ +print-type-size type: `E1`: 12 bytes, alignment: 4 bytes +print-type-size discriminant: 4 bytes +print-type-size variant `A`: 5 bytes +print-type-size field `.0`: 4 bytes +print-type-size field `.1`: 1 bytes +print-type-size variant `B`: 8 bytes +print-type-size field `.0`: 8 bytes +print-type-size type: `E2`: 12 bytes, alignment: 4 bytes +print-type-size discriminant: 1 bytes +print-type-size variant `A`: 7 bytes +print-type-size field `.0`: 1 bytes +print-type-size padding: 2 bytes +print-type-size field `.1`: 4 bytes, alignment: 4 bytes +print-type-size variant `B`: 11 bytes +print-type-size padding: 3 bytes +print-type-size field `.0`: 8 bytes, alignment: 4 bytes +print-type-size type: `S`: 8 bytes, alignment: 4 bytes +print-type-size field `.a`: 1 bytes +print-type-size field `.b`: 1 bytes +print-type-size padding: 2 bytes +print-type-size field `.g`: 4 bytes, alignment: 4 bytes diff --git a/src/test/ui/print_type_sizes/variants.rs b/src/test/ui/print_type_sizes/variants.rs new file mode 100644 index 0000000000..875edb4515 --- /dev/null +++ b/src/test/ui/print_type_sizes/variants.rs @@ -0,0 +1,31 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z print-type-sizes + +// This file illustrates two things: +// +// 1. Only types that appear in a monomorphized function appear in the +// print-type-sizes output, and +// +// 2. For an enum, the print-type-sizes output will also include the +// size of each variant. 
+ +pub struct SevenBytes([u8; 7]); +pub struct FiftyBytes([u8; 50]); + +pub enum Enum { + Small(SevenBytes), + Large(FiftyBytes), +} + +pub fn main() { + let _e: Enum; +} diff --git a/src/test/ui/print_type_sizes/variants.stdout b/src/test/ui/print_type_sizes/variants.stdout new file mode 100644 index 0000000000..eed9af2698 --- /dev/null +++ b/src/test/ui/print_type_sizes/variants.stdout @@ -0,0 +1,10 @@ +print-type-size type: `Enum`: 51 bytes, alignment: 1 bytes +print-type-size discriminant: 1 bytes +print-type-size variant `Small`: 7 bytes +print-type-size field `.0`: 7 bytes +print-type-size variant `Large`: 50 bytes +print-type-size field `.0`: 50 bytes +print-type-size type: `FiftyBytes`: 50 bytes, alignment: 1 bytes +print-type-size field `.0`: 50 bytes +print-type-size type: `SevenBytes`: 7 bytes, alignment: 1 bytes +print-type-size field `.0`: 7 bytes diff --git a/src/test/compile-fail/E0046.rs b/src/test/ui/span/E0046.rs similarity index 95% rename from src/test/compile-fail/E0046.rs rename to src/test/ui/span/E0046.rs index a8b56b2b9a..9e757860a8 100644 --- a/src/test/compile-fail/E0046.rs +++ b/src/test/ui/span/E0046.rs @@ -10,6 +10,7 @@ trait Foo { fn foo(); + //~^ NOTE `foo` from trait } struct Bar; diff --git a/src/test/ui/span/E0046.stderr b/src/test/ui/span/E0046.stderr new file mode 100644 index 0000000000..729a515612 --- /dev/null +++ b/src/test/ui/span/E0046.stderr @@ -0,0 +1,11 @@ +error[E0046]: not all trait items implemented, missing: `foo` + --> $DIR/E0046.rs:18:1 + | +12 | fn foo(); + | --------- `foo` from trait +... +18 | impl Foo for Bar {} + | ^^^^^^^^^^^^^^^^^^^ missing `foo` in implementation + +error: aborting due to previous error + diff --git a/src/test/ui/span/E0057.stderr b/src/test/ui/span/E0057.stderr index 656fdbe2b2..0d6b0a552e 100644 --- a/src/test/ui/span/E0057.stderr +++ b/src/test/ui/span/E0057.stderr @@ -2,17 +2,13 @@ error[E0057]: this function takes 1 parameter but 0 parameters were supplied --> $DIR/E0057.rs:13:13 | 13 | let a = f(); //~ ERROR E0057 - | ^^^ - | - = note: the following parameter type was expected: (_,) + | ^^^ expected 1 parameter error[E0057]: this function takes 1 parameter but 2 parameters were supplied --> $DIR/E0057.rs:15:15 | 15 | let c = f(2, 3); //~ ERROR E0057 - | ^^^^ - | - = note: the following parameter type was expected: (_,) + | ^^^^ expected 1 parameter error: aborting due to 2 previous errors diff --git a/src/test/compile-fail/borrowck/borrowck-borrow-overloaded-auto-deref-mut.rs b/src/test/ui/span/borrowck-borrow-overloaded-auto-deref-mut.rs similarity index 100% rename from src/test/compile-fail/borrowck/borrowck-borrow-overloaded-auto-deref-mut.rs rename to src/test/ui/span/borrowck-borrow-overloaded-auto-deref-mut.rs diff --git a/src/test/ui/span/borrowck-borrow-overloaded-auto-deref-mut.stderr b/src/test/ui/span/borrowck-borrow-overloaded-auto-deref-mut.stderr new file mode 100644 index 0000000000..1109351bff --- /dev/null +++ b/src/test/ui/span/borrowck-borrow-overloaded-auto-deref-mut.stderr @@ -0,0 +1,86 @@ +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:63:24 + | +62 | fn deref_mut_field1(x: Own) { + | - use `mut x` here to make mutable +63 | let __isize = &mut x.y; //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:75:10 + | +74 | fn deref_extend_mut_field1(x: &Own) -> &mut isize { + | ----------- use `&mut Own` here 
to make mutable +75 | &mut x.y //~ ERROR cannot borrow + | ^ + +error[E0499]: cannot borrow `*x` as mutable more than once at a time + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:88:19 + | +87 | let _x = &mut x.x; + | - first mutable borrow occurs here +88 | let _y = &mut x.y; //~ ERROR cannot borrow + | ^ second mutable borrow occurs here +89 | } + | - first borrow ends here + +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:98:5 + | +97 | fn assign_field1<'a>(x: Own) { + | - use `mut x` here to make mutable +98 | x.y = 3; //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:102:5 + | +101 | fn assign_field2<'a>(x: &'a Own) { + | -------------- use `&'a mut Own` here to make mutable +102 | x.y = 3; //~ ERROR cannot borrow + | ^ + +error[E0499]: cannot borrow `*x` as mutable more than once at a time + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:111:5 + | +110 | let _p: &mut Point = &mut **x; + | -- first mutable borrow occurs here +111 | x.y = 3; //~ ERROR cannot borrow + | ^ second mutable borrow occurs here +112 | } + | - first borrow ends here + +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:119:5 + | +118 | fn deref_mut_method1(x: Own) { + | - use `mut x` here to make mutable +119 | x.set(0, 0); //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:131:5 + | +130 | fn deref_extend_mut_method1(x: &Own) -> &mut isize { + | ----------- use `&mut Own` here to make mutable +131 | x.y_mut() //~ ERROR cannot borrow + | ^ + +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:139:6 + | +138 | fn assign_method1<'a>(x: Own) { + | - use `mut x` here to make mutable +139 | *x.y_mut() = 3; //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-auto-deref-mut.rs:143:6 + | +142 | fn assign_method2<'a>(x: &'a Own) { + | -------------- use `&'a mut Own` here to make mutable +143 | *x.y_mut() = 3; //~ ERROR cannot borrow + | ^ + +error: aborting due to 10 previous errors + diff --git a/src/test/compile-fail/borrowck/borrowck-borrow-overloaded-deref-mut.rs b/src/test/ui/span/borrowck-borrow-overloaded-deref-mut.rs similarity index 100% rename from src/test/compile-fail/borrowck/borrowck-borrow-overloaded-deref-mut.rs rename to src/test/ui/span/borrowck-borrow-overloaded-deref-mut.rs diff --git a/src/test/ui/span/borrowck-borrow-overloaded-deref-mut.stderr b/src/test/ui/span/borrowck-borrow-overloaded-deref-mut.stderr new file mode 100644 index 0000000000..a5b7045916 --- /dev/null +++ b/src/test/ui/span/borrowck-borrow-overloaded-deref-mut.stderr @@ -0,0 +1,34 @@ +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-deref-mut.rs:39:25 + | +38 | fn deref_mut1(x: Own) { + | - use `mut x` here to make mutable +39 | let __isize = &mut *x; //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-deref-mut.rs:51:11 + | +50 | fn deref_extend_mut1<'a>(x: &'a Own) -> &'a mut isize { + | -------------- use `&'a mut Own` here to make 
mutable +51 | &mut **x //~ ERROR cannot borrow + | ^^ + +error: cannot borrow immutable argument `x` as mutable + --> $DIR/borrowck-borrow-overloaded-deref-mut.rs:59:6 + | +58 | fn assign1<'a>(x: Own) { + | - use `mut x` here to make mutable +59 | *x = 3; //~ ERROR cannot borrow + | ^ cannot borrow mutably + +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-borrow-overloaded-deref-mut.rs:63:6 + | +62 | fn assign2<'a>(x: &'a Own) { + | -------------- use `&'a mut Own` here to make mutable +63 | **x = 3; //~ ERROR cannot borrow + | ^^ + +error: aborting due to 4 previous errors + diff --git a/src/test/compile-fail/borrowck/borrowck-call-is-borrow-issue-12224.rs b/src/test/ui/span/borrowck-call-is-borrow-issue-12224.rs similarity index 98% rename from src/test/compile-fail/borrowck/borrowck-call-is-borrow-issue-12224.rs rename to src/test/ui/span/borrowck-call-is-borrow-issue-12224.rs index e4ae565fe9..ba1ae64ec3 100644 --- a/src/test/compile-fail/borrowck/borrowck-call-is-borrow-issue-12224.rs +++ b/src/test/ui/span/borrowck-call-is-borrow-issue-12224.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(fn_traits)] + // Ensure that invoking a closure counts as a unique immutable borrow type Fn<'a> = Box; diff --git a/src/test/ui/span/borrowck-call-is-borrow-issue-12224.stderr b/src/test/ui/span/borrowck-call-is-borrow-issue-12224.stderr new file mode 100644 index 0000000000..16bb600136 --- /dev/null +++ b/src/test/ui/span/borrowck-call-is-borrow-issue-12224.stderr @@ -0,0 +1,43 @@ +error[E0499]: cannot borrow `f` as mutable more than once at a time + --> $DIR/borrowck-call-is-borrow-issue-12224.rs:23:16 + | +23 | f(Box::new(|| { + | - ^^ second mutable borrow occurs here + | | + | first mutable borrow occurs here +24 | //~^ ERROR: cannot borrow `f` as mutable more than once +25 | f((Box::new(|| {}))) + | - borrow occurs due to use of `f` in closure +26 | })); + | - first borrow ends here + +error: cannot borrow immutable borrowed content `*f` as mutable + --> $DIR/borrowck-call-is-borrow-issue-12224.rs:36:5 + | +35 | fn test2(f: &F) where F: FnMut() { + | -- use `&mut F` here to make mutable +36 | (*f)(); //~ ERROR: cannot borrow immutable borrowed content `*f` as mutable + | ^^^^ + +error: cannot borrow immutable `Box` content `*f.f` as mutable + --> $DIR/borrowck-call-is-borrow-issue-12224.rs:44:5 + | +44 | f.f.call_mut(()) //~ ERROR: cannot borrow immutable `Box` content `*f.f` as mutable + | ^^^ + +error[E0504]: cannot move `f` into closure because it is borrowed + --> $DIR/borrowck-call-is-borrow-issue-12224.rs:63:13 + | +62 | f(Box::new(|a| { + | - borrow of `f` occurs here +63 | foo(f); + | ^ move into closure occurs here + +error[E0507]: cannot move out of captured outer variable in an `FnMut` closure + --> $DIR/borrowck-call-is-borrow-issue-12224.rs:63:13 + | +63 | foo(f); + | ^ cannot move out of captured outer variable in an `FnMut` closure + +error: aborting due to 5 previous errors + diff --git a/src/test/compile-fail/borrowck/borrowck-call-method-from-mut-aliasable.rs b/src/test/ui/span/borrowck-call-method-from-mut-aliasable.rs similarity index 100% rename from src/test/compile-fail/borrowck/borrowck-call-method-from-mut-aliasable.rs rename to src/test/ui/span/borrowck-call-method-from-mut-aliasable.rs diff --git a/src/test/ui/span/borrowck-call-method-from-mut-aliasable.stderr b/src/test/ui/span/borrowck-call-method-from-mut-aliasable.stderr new file mode 100644 
index 0000000000..a1af1ca740 --- /dev/null +++ b/src/test/ui/span/borrowck-call-method-from-mut-aliasable.stderr @@ -0,0 +1,11 @@ +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-call-method-from-mut-aliasable.rs:27:5 + | +25 | fn b(x: &Foo) { + | ---- use `&mut Foo` here to make mutable +26 | x.f(); +27 | x.h(); //~ ERROR cannot borrow + | ^ + +error: aborting due to previous error + diff --git a/src/test/compile-fail/borrowck/borrowck-fn-in-const-b.rs b/src/test/ui/span/borrowck-fn-in-const-b.rs similarity index 100% rename from src/test/compile-fail/borrowck/borrowck-fn-in-const-b.rs rename to src/test/ui/span/borrowck-fn-in-const-b.rs diff --git a/src/test/ui/span/borrowck-fn-in-const-b.stderr b/src/test/ui/span/borrowck-fn-in-const-b.stderr new file mode 100644 index 0000000000..41f549c708 --- /dev/null +++ b/src/test/ui/span/borrowck-fn-in-const-b.stderr @@ -0,0 +1,10 @@ +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-fn-in-const-b.rs:17:9 + | +16 | fn broken(x: &Vec) { + | ------------ use `&mut Vec` here to make mutable +17 | x.push(format!("this is broken")); + | ^ + +error: aborting due to previous error + diff --git a/src/test/ui/span/borrowck-let-suggestion-suffixes.stderr b/src/test/ui/span/borrowck-let-suggestion-suffixes.stderr index 0bba986e43..5bb656878b 100644 --- a/src/test/ui/span/borrowck-let-suggestion-suffixes.stderr +++ b/src/test/ui/span/borrowck-let-suggestion-suffixes.stderr @@ -10,10 +10,10 @@ error: `young[..]` does not live long enough = note: values in a scope are dropped in the opposite order they are created error: borrowed value does not live long enough - --> $DIR/borrowck-let-suggestion-suffixes.rs:24:14 + --> $DIR/borrowck-let-suggestion-suffixes.rs:24:18 | 24 | v3.push(&'x'); // statement 6 - | ^^^ - temporary value only lives until here + | --- ^ temporary value dropped here while still borrowed | | | temporary value created here ... @@ -23,10 +23,10 @@ error: borrowed value does not live long enough = note: consider using a `let` binding to increase its lifetime error: borrowed value does not live long enough - --> $DIR/borrowck-let-suggestion-suffixes.rs:34:18 + --> $DIR/borrowck-let-suggestion-suffixes.rs:34:22 | 34 | v4.push(&'y'); - | ^^^ - temporary value only lives until here + | --- ^ temporary value dropped here while still borrowed | | | temporary value created here ... @@ -36,10 +36,10 @@ error: borrowed value does not live long enough = note: consider using a `let` binding to increase its lifetime error: borrowed value does not live long enough - --> $DIR/borrowck-let-suggestion-suffixes.rs:45:14 + --> $DIR/borrowck-let-suggestion-suffixes.rs:45:18 | 45 | v5.push(&'z'); - | ^^^ - temporary value only lives until here + | --- ^ temporary value dropped here while still borrowed | | | temporary value created here ... 
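
(Editorial aside, not part of the patch: the borrowck `.stderr` expectations in this region all exercise the new "use `&mut ...` here to make mutable" hint. Below is a minimal sketch of the kind of source that produces this diagnostic, mirroring the renamed `borrowck-call-method-from-mut-aliasable` / `borrowck-object-mutability` tests; identifiers are illustrative and the exact error wording varies by compiler version.)

```rust
// Minimal compile-fail sketch (2015-edition style, matching the era of
// this patch): calling a `&mut self` method through a shared reference
// produces a "cannot borrow ... as mutable" error, and the hint points
// at the parameter, suggesting `&mut Foo` instead of `&Foo`.
trait Foo {
    fn f(&self);
    fn h(&mut self);
}

fn b(x: &Foo) {
    x.f();
    x.h(); // ERROR: cannot borrow immutable borrowed content `*x` as mutable
}

fn main() {}
```
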
diff --git a/src/test/compile-fail/borrowck/borrowck-object-mutability.rs b/src/test/ui/span/borrowck-object-mutability.rs similarity index 100% rename from src/test/compile-fail/borrowck/borrowck-object-mutability.rs rename to src/test/ui/span/borrowck-object-mutability.rs diff --git a/src/test/ui/span/borrowck-object-mutability.stderr b/src/test/ui/span/borrowck-object-mutability.stderr new file mode 100644 index 0000000000..32e4da1805 --- /dev/null +++ b/src/test/ui/span/borrowck-object-mutability.stderr @@ -0,0 +1,17 @@ +error: cannot borrow immutable borrowed content `*x` as mutable + --> $DIR/borrowck-object-mutability.rs:19:5 + | +17 | fn borrowed_receiver(x: &Foo) { + | ---- use `&mut Foo` here to make mutable +18 | x.borrowed(); +19 | x.borrowed_mut(); //~ ERROR cannot borrow + | ^ + +error: cannot borrow immutable `Box` content `*x` as mutable + --> $DIR/borrowck-object-mutability.rs:29:5 + | +29 | x.borrowed_mut(); //~ ERROR cannot borrow + | ^ + +error: aborting due to 2 previous errors + diff --git a/src/test/compile-fail/impl-wrong-item-for-trait.rs b/src/test/ui/span/impl-wrong-item-for-trait.rs similarity index 92% rename from src/test/compile-fail/impl-wrong-item-for-trait.rs rename to src/test/ui/span/impl-wrong-item-for-trait.rs index 388c9a1729..54ed42af5d 100644 --- a/src/test/compile-fail/impl-wrong-item-for-trait.rs +++ b/src/test/ui/span/impl-wrong-item-for-trait.rs @@ -10,11 +10,11 @@ #![feature(associated_consts)] +use std::fmt::Debug; + trait Foo { fn bar(&self); - //~^ NOTE item in trait - //~| NOTE item in trait - const MY_CONST: u32; //~ NOTE item in trait + const MY_CONST: u32; } pub struct FooConstForMethod; @@ -50,4 +50,7 @@ impl Foo for FooTypeForMethod { const MY_CONST: u32 = 1; } +impl Debug for FooTypeForMethod { +} + fn main () {} diff --git a/src/test/ui/span/impl-wrong-item-for-trait.stderr b/src/test/ui/span/impl-wrong-item-for-trait.stderr new file mode 100644 index 0000000000..815893e0c8 --- /dev/null +++ b/src/test/ui/span/impl-wrong-item-for-trait.stderr @@ -0,0 +1,90 @@ +error[E0323]: item `bar` is an associated const, which doesn't match its trait `Foo` + --> $DIR/impl-wrong-item-for-trait.rs:25:5 + | +16 | fn bar(&self); + | -------------- item in trait +... +25 | const bar: u64 = 1; + | ^^^^^^^^^^^^^^^^^^^ does not match trait + +error[E0046]: not all trait items implemented, missing: `bar` + --> $DIR/impl-wrong-item-for-trait.rs:22:1 + | +16 | fn bar(&self); + | -------------- `bar` from trait +... +22 | impl Foo for FooConstForMethod { + | _^ starting here... +23 | | //~^ ERROR E0046 +24 | | //~| NOTE missing `bar` in implementation +25 | | const bar: u64 = 1; +26 | | //~^ ERROR E0323 +27 | | //~| NOTE does not match trait +28 | | const MY_CONST: u32 = 1; +29 | | } + | |_^ ...ending here: missing `bar` in implementation + +error[E0324]: item `MY_CONST` is an associated method, which doesn't match its trait `Foo` + --> $DIR/impl-wrong-item-for-trait.rs:37:5 + | +17 | const MY_CONST: u32; + | -------------------- item in trait +... +37 | fn MY_CONST() {} + | ^^^^^^^^^^^^^^^^ does not match trait + +error[E0046]: not all trait items implemented, missing: `MY_CONST` + --> $DIR/impl-wrong-item-for-trait.rs:33:1 + | +17 | const MY_CONST: u32; + | -------------------- `MY_CONST` from trait +... +33 | impl Foo for FooMethodForConst { + | _^ starting here... 
+34 | | //~^ ERROR E0046 +35 | | //~| NOTE missing `MY_CONST` in implementation +36 | | fn bar(&self) {} +37 | | fn MY_CONST() {} +38 | | //~^ ERROR E0324 +39 | | //~| NOTE does not match trait +40 | | } + | |_^ ...ending here: missing `MY_CONST` in implementation + +error[E0325]: item `bar` is an associated type, which doesn't match its trait `Foo` + --> $DIR/impl-wrong-item-for-trait.rs:47:5 + | +16 | fn bar(&self); + | -------------- item in trait +... +47 | type bar = u64; + | ^^^^^^^^^^^^^^^ does not match trait + +error[E0046]: not all trait items implemented, missing: `bar` + --> $DIR/impl-wrong-item-for-trait.rs:44:1 + | +16 | fn bar(&self); + | -------------- `bar` from trait +... +44 | impl Foo for FooTypeForMethod { + | _^ starting here... +45 | | //~^ ERROR E0046 +46 | | //~| NOTE missing `bar` in implementation +47 | | type bar = u64; +48 | | //~^ ERROR E0325 +49 | | //~| NOTE does not match trait +50 | | const MY_CONST: u32 = 1; +51 | | } + | |_^ ...ending here: missing `bar` in implementation + +error[E0046]: not all trait items implemented, missing: `fmt` + --> $DIR/impl-wrong-item-for-trait.rs:53:1 + | +53 | impl Debug for FooTypeForMethod { + | _^ starting here... +54 | | } + | |_^ ...ending here: missing `fmt` in implementation + | + = note: `fmt` from trait: `fn(&Self, &mut std::fmt::Formatter<'_>) -> std::result::Result<(), std::fmt::Error>` + +error: aborting due to 7 previous errors + diff --git a/src/test/compile-fail/issue-15480.rs b/src/test/ui/span/issue-15480.rs similarity index 91% rename from src/test/compile-fail/issue-15480.rs rename to src/test/ui/span/issue-15480.rs index 30f58f909a..ea5f4d3fe6 100644 --- a/src/test/compile-fail/issue-15480.rs +++ b/src/test/ui/span/issue-15480.rs @@ -11,7 +11,6 @@ fn main() { let v = vec![ &3 -//~^ ERROR borrowed value does not live long enough ]; for &&x in &v { diff --git a/src/test/ui/span/issue-15480.stderr b/src/test/ui/span/issue-15480.stderr new file mode 100644 index 0000000000..85f6c41c36 --- /dev/null +++ b/src/test/ui/span/issue-15480.stderr @@ -0,0 +1,15 @@ +error: borrowed value does not live long enough + --> $DIR/issue-15480.rs:14:6 + | +13 | &3 + | - temporary value created here +14 | ]; + | ^ temporary value dropped here while still borrowed +... 
+19 | } + | - temporary value needs to live until here + | + = note: consider using a `let` binding to increase its lifetime + +error: aborting due to previous error + diff --git a/src/test/compile-fail/issue-23729.rs b/src/test/ui/span/issue-23729.rs similarity index 96% rename from src/test/compile-fail/issue-23729.rs rename to src/test/ui/span/issue-23729.rs index b1047ce18c..66134a03ba 100644 --- a/src/test/compile-fail/issue-23729.rs +++ b/src/test/ui/span/issue-23729.rs @@ -20,6 +20,7 @@ fn main() { impl Iterator for Recurrence { //~^ ERROR E0046 //~| NOTE missing `Item` in implementation + //~| NOTE `Item` from trait: `type Item;` #[inline] fn next(&mut self) -> Option { if self.pos < 2 { diff --git a/src/test/ui/span/issue-23729.stderr b/src/test/ui/span/issue-23729.stderr new file mode 100644 index 0000000000..493ca01778 --- /dev/null +++ b/src/test/ui/span/issue-23729.stderr @@ -0,0 +1,10 @@ +error[E0046]: not all trait items implemented, missing: `Item` + --> $DIR/issue-23729.rs:20:9 + | +20 | impl Iterator for Recurrence { + | ^ missing `Item` in implementation + | + = note: `Item` from trait: `type Item;` + +error: aborting due to previous error + diff --git a/src/test/compile-fail/issue-23827.rs b/src/test/ui/span/issue-23827.rs similarity index 92% rename from src/test/compile-fail/issue-23827.rs rename to src/test/ui/span/issue-23827.rs index 2062e23731..01269714c1 100644 --- a/src/test/compile-fail/issue-23827.rs +++ b/src/test/ui/span/issue-23827.rs @@ -10,7 +10,7 @@ // Regression test for #23827 -#![feature(core, unboxed_closures)] +#![feature(core, fn_traits, unboxed_closures)] pub struct Prototype { pub target: u32 @@ -36,6 +36,7 @@ impl FnMut<(C,)> for Prototype { impl FnOnce<(C,)> for Prototype { //~^ ERROR E0046 //~| NOTE missing `Output` in implementation + //~| NOTE `Output` from trait: `type Output;` extern "rust-call" fn call_once(self, (comp,): (C,)) -> Prototype { Fn::call(&self, (comp,)) } diff --git a/src/test/ui/span/issue-23827.stderr b/src/test/ui/span/issue-23827.stderr new file mode 100644 index 0000000000..6c1c246753 --- /dev/null +++ b/src/test/ui/span/issue-23827.stderr @@ -0,0 +1,18 @@ +error[E0046]: not all trait items implemented, missing: `Output` + --> $DIR/issue-23827.rs:36:1 + | +36 | impl FnOnce<(C,)> for Prototype { + | _^ starting here... 
+37 | | //~^ ERROR E0046 +38 | | //~| NOTE missing `Output` in implementation +39 | | //~| NOTE `Output` from trait: `type Output;` +40 | | extern "rust-call" fn call_once(self, (comp,): (C,)) -> Prototype { +41 | | Fn::call(&self, (comp,)) +42 | | } +43 | | } + | |_^ ...ending here: missing `Output` in implementation + | + = note: `Output` from trait: `type Output;` + +error: aborting due to previous error + diff --git a/src/test/compile-fail/issue-24356.rs b/src/test/ui/span/issue-24356.rs similarity index 94% rename from src/test/compile-fail/issue-24356.rs rename to src/test/ui/span/issue-24356.rs index d39fd539dc..0997dc802f 100644 --- a/src/test/compile-fail/issue-24356.rs +++ b/src/test/ui/span/issue-24356.rs @@ -30,6 +30,7 @@ fn main() { impl Deref for Thing { //~^ ERROR E0046 //~| NOTE missing `Target` in implementation + //~| NOTE `Target` from trait: `type Target;` fn deref(&self) -> i8 { self.0 } } diff --git a/src/test/ui/span/issue-24356.stderr b/src/test/ui/span/issue-24356.stderr new file mode 100644 index 0000000000..963f4bd9bb --- /dev/null +++ b/src/test/ui/span/issue-24356.stderr @@ -0,0 +1,16 @@ +error[E0046]: not all trait items implemented, missing: `Target` + --> $DIR/issue-24356.rs:30:9 + | +30 | impl Deref for Thing { + | _________^ starting here... +31 | | //~^ ERROR E0046 +32 | | //~| NOTE missing `Target` in implementation +33 | | //~| NOTE `Target` from trait: `type Target;` +34 | | fn deref(&self) -> i8 { self.0 } +35 | | } + | |_________^ ...ending here: missing `Target` in implementation + | + = note: `Target` from trait: `type Target;` + +error: aborting due to previous error + diff --git a/src/test/ui/span/issue-35987.rs b/src/test/ui/span/issue-35987.rs new file mode 100644 index 0000000000..8ff5f3b839 --- /dev/null +++ b/src/test/ui/span/issue-35987.rs @@ -0,0 +1,21 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +struct Foo(T); + +use std::ops::Add; + +impl Add for Foo { + type Output = usize; + + fn add(self, rhs: Self) -> Self::Output { + unimplemented!(); + } +} diff --git a/src/test/ui/span/issue-35987.stderr b/src/test/ui/span/issue-35987.stderr new file mode 100644 index 0000000000..2370b3d6c6 --- /dev/null +++ b/src/test/ui/span/issue-35987.stderr @@ -0,0 +1,12 @@ +error[E0404]: `Add` is not a trait + --> $DIR/issue-35987.rs:15:21 + | +15 | impl Add for Foo { + | --- ^^^ expected trait, found type parameter + | | + | type parameter defined here + +error: main function not found + +error: cannot continue compilation due to previous error + diff --git a/src/test/ui/span/multiline-span-simple.rs b/src/test/ui/span/multiline-span-simple.rs new file mode 100644 index 0000000000..451492ba69 --- /dev/null +++ b/src/test/ui/span/multiline-span-simple.rs @@ -0,0 +1,30 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. 
+ +fn foo(a: u32, b: u32) { + a + b; +} + +fn bar(a: u32, b: u32) { + a + b; +} + +fn main() { + let x = 1; + let y = 2; + let z = 3; + foo(1 as u32 + + + bar(x, + + y), + + z) +} diff --git a/src/test/ui/span/multiline-span-simple.stderr b/src/test/ui/span/multiline-span-simple.stderr new file mode 100644 index 0000000000..b801325114 --- /dev/null +++ b/src/test/ui/span/multiline-span-simple.stderr @@ -0,0 +1,19 @@ +error[E0277]: the trait bound `u32: std::ops::Add<()>` is not satisfied + --> $DIR/multiline-span-simple.rs:23:9 + | +23 | foo(1 as u32 + + | _________^ starting here... +24 | | +25 | | bar(x, +26 | | +27 | | y), + | |______________^ ...ending here: the trait `std::ops::Add<()>` is not implemented for `u32` + | + = help: the following implementations were found: + = help: + = help: <&'a u32 as std::ops::Add> + = help: > + = help: <&'b u32 as std::ops::Add<&'a u32>> + +error: aborting due to previous error + diff --git a/src/test/ui/span/multispan-import-lint.rs b/src/test/ui/span/multispan-import-lint.rs new file mode 100644 index 0000000000..43b6cd8f85 --- /dev/null +++ b/src/test/ui/span/multispan-import-lint.rs @@ -0,0 +1,15 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::cmp::{Eq, Ord, min, PartialEq, PartialOrd}; + +fn main() { + let _ = min(1, 2); +} diff --git a/src/test/ui/span/multispan-import-lint.stderr b/src/test/ui/span/multispan-import-lint.stderr new file mode 100644 index 0000000000..b581584eee --- /dev/null +++ b/src/test/ui/span/multispan-import-lint.stderr @@ -0,0 +1,6 @@ +warning: unused imports: `Eq`, `Ord`, `PartialEq`, `PartialOrd`, #[warn(unused_imports)] on by default + --> $DIR/multispan-import-lint.rs:11:16 + | +11 | use std::cmp::{Eq, Ord, min, PartialEq, PartialOrd}; + | ^^ ^^^ ^^^^^^^^^ ^^^^^^^^^^ + diff --git a/src/test/compile-fail/import-shadow-2.rs b/src/test/ui/span/mut-arg-hint.rs similarity index 53% rename from src/test/compile-fail/import-shadow-2.rs rename to src/test/ui/span/mut-arg-hint.rs index 0c107cf27f..296ee6ca10 100644 --- a/src/test/compile-fail/import-shadow-2.rs +++ b/src/test/ui/span/mut-arg-hint.rs @@ -1,4 +1,4 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -8,23 +8,25 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. 
-// Test that import shadowing using globs causes errors - -#![no_implicit_prelude] - -use foo::*; -use foo::*; //~ERROR a type named `Baz` has already been imported in this module - -mod foo { - pub type Baz = isize; +trait B { + fn foo(mut a: &String) { + a.push_str("bar"); + } } -mod bar { - pub type Baz = isize; +pub fn foo<'a>(mut a: &'a String) { + a.push_str("foo"); } -mod qux { - pub use bar::Baz; +struct A {} + +impl A { + pub fn foo(mut a: &String) { + a.push_str("foo"); + } } -fn main() {} +fn main() { + foo(&"a".to_string()); + A::foo(&"a".to_string()); +} diff --git a/src/test/ui/span/mut-arg-hint.stderr b/src/test/ui/span/mut-arg-hint.stderr new file mode 100644 index 0000000000..5e9a0b9150 --- /dev/null +++ b/src/test/ui/span/mut-arg-hint.stderr @@ -0,0 +1,26 @@ +error: cannot borrow immutable borrowed content `*a` as mutable + --> $DIR/mut-arg-hint.rs:13:9 + | +12 | fn foo(mut a: &String) { + | ------- use `&mut String` here to make mutable +13 | a.push_str("bar"); + | ^ + +error: cannot borrow immutable borrowed content `*a` as mutable + --> $DIR/mut-arg-hint.rs:18:5 + | +17 | pub fn foo<'a>(mut a: &'a String) { + | ---------- use `&'a mut String` here to make mutable +18 | a.push_str("foo"); + | ^ + +error: cannot borrow immutable borrowed content `*a` as mutable + --> $DIR/mut-arg-hint.rs:25:9 + | +24 | pub fn foo(mut a: &String) { + | ------- use `&mut String` here to make mutable +25 | a.push_str("foo"); + | ^ + +error: aborting due to 3 previous errors + diff --git a/src/test/compile-fail/regions-close-over-borrowed-ref-in-obj.rs b/src/test/ui/span/regions-close-over-borrowed-ref-in-obj.rs similarity index 88% rename from src/test/compile-fail/regions-close-over-borrowed-ref-in-obj.rs rename to src/test/ui/span/regions-close-over-borrowed-ref-in-obj.rs index 25b8137d29..a524562f2d 100644 --- a/src/test/compile-fail/regions-close-over-borrowed-ref-in-obj.rs +++ b/src/test/ui/span/regions-close-over-borrowed-ref-in-obj.rs @@ -17,7 +17,7 @@ impl<'a> Foo for &'a isize { } fn main() { let blah; { - let ss: &isize = &1; //~ ERROR borrowed value does not live long enough + let ss: &isize = &1; blah = box ss as Box; } } diff --git a/src/test/ui/span/regions-close-over-borrowed-ref-in-obj.stderr b/src/test/ui/span/regions-close-over-borrowed-ref-in-obj.stderr new file mode 100644 index 0000000000..205734c25e --- /dev/null +++ b/src/test/ui/span/regions-close-over-borrowed-ref-in-obj.stderr @@ -0,0 +1,13 @@ +error: borrowed value does not live long enough + --> $DIR/regions-close-over-borrowed-ref-in-obj.rs:22:5 + | +20 | let ss: &isize = &1; + | - temporary value created here +21 | blah = box ss as Box; +22 | } + | ^ temporary value dropped here while still borrowed +23 | } + | - temporary value needs to live until here + +error: aborting due to previous error + diff --git a/src/test/compile-fail/slice-borrow.rs b/src/test/ui/span/slice-borrow.rs similarity index 86% rename from src/test/compile-fail/slice-borrow.rs rename to src/test/ui/span/slice-borrow.rs index 0062f66ae2..4ca0ccaa73 100644 --- a/src/test/compile-fail/slice-borrow.rs +++ b/src/test/ui/span/slice-borrow.rs @@ -13,7 +13,7 @@ fn main() { let y; { - let x: &[isize] = &[1, 2, 3, 4, 5]; //~ ERROR borrowed value does not live long enough + let x: &[isize] = &[1, 2, 3, 4, 5]; y = &x[1..]; } } diff --git a/src/test/ui/span/slice-borrow.stderr b/src/test/ui/span/slice-borrow.stderr new file mode 100644 index 0000000000..efe81fd00b --- /dev/null +++ b/src/test/ui/span/slice-borrow.stderr @@ -0,0 +1,13 @@ +error: 
borrowed value does not live long enough + --> $DIR/slice-borrow.rs:18:5 + | +16 | let x: &[isize] = &[1, 2, 3, 4, 5]; + | --------------- temporary value created here +17 | y = &x[1..]; +18 | } + | ^ temporary value dropped here while still borrowed +19 | } + | - temporary value needs to live until here + +error: aborting due to previous error + diff --git a/src/tools/cargotest/main.rs b/src/tools/cargotest/main.rs index 800186a926..26a2e96f57 100644 --- a/src/tools/cargotest/main.rs +++ b/src/tools/cargotest/main.rs @@ -24,7 +24,7 @@ struct Test { const TEST_REPOS: &'static [Test] = &[Test { name: "cargo", repo: "https://github.com/rust-lang/cargo", - sha: "806e3c368a15f618244a3b4e918bf77f9c403fd0", + sha: "b7be4f2ef2cf743492edc6dfb55d087ed88f2d76", lock: None, }, Test { diff --git a/src/tools/compiletest/Cargo.toml b/src/tools/compiletest/Cargo.toml index e05d57365f..227b695635 100644 --- a/src/tools/compiletest/Cargo.toml +++ b/src/tools/compiletest/Cargo.toml @@ -4,12 +4,6 @@ name = "compiletest" version = "0.0.0" build = "build.rs" -# Curiously, this will segfault if compiled with opt-level=3 on 64-bit MSVC when -# running the compile-fail test suite when a should-fail test panics. But hey if -# this is removed and it gets past the bots, sounds good to me. -[profile.release] -opt-level = 2 - [dependencies] log = "0.3" env_logger = { version = "0.3.5", default-features = false } diff --git a/src/tools/compiletest/src/common.rs b/src/tools/compiletest/src/common.rs index 34f3837d8b..1aeb76c0a0 100644 --- a/src/tools/compiletest/src/common.rs +++ b/src/tools/compiletest/src/common.rs @@ -127,6 +127,9 @@ pub struct Config { // Only run tests that match this filter pub filter: Option, + // Exactly match the filter, rather than a substring + pub filter_exact: bool, + // Write out a parseable log of tests that were run pub logfile: Option, diff --git a/src/tools/compiletest/src/main.rs b/src/tools/compiletest/src/main.rs index 806363679d..e4d9836c56 100644 --- a/src/tools/compiletest/src/main.rs +++ b/src/tools/compiletest/src/main.rs @@ -16,8 +16,6 @@ #![feature(test)] #![feature(libc)] -#![cfg_attr(stage0, feature(question_mark))] - #![deny(warnings)] extern crate libc; @@ -91,6 +89,7 @@ pub fn parse_config(args: Vec ) -> Config { "(compile-fail|parse-fail|run-fail|run-pass|\ run-pass-valgrind|pretty|debug-info|incremental|mir-opt)"), optflag("", "ignored", "run tests marked as ignored"), + optflag("", "exact", "filters match exactly"), optopt("", "runtool", "supervisor program to run tests under \ (eg. 
emulator, valgrind)", "PROGRAM"), optopt("", "host-rustcflags", "flags to pass to rustc for host", "FLAGS"), @@ -169,6 +168,7 @@ pub fn parse_config(args: Vec ) -> Config { mode: matches.opt_str("mode").unwrap().parse().ok().expect("invalid mode"), run_ignored: matches.opt_present("ignored"), filter: matches.free.first().cloned(), + filter_exact: matches.opt_present("exact"), logfile: matches.opt_str("logfile").map(|s| PathBuf::from(&s)), runtool: matches.opt_str("runtool"), host_rustcflags: matches.opt_str("host-rustcflags"), @@ -218,6 +218,7 @@ pub fn log_config(config: &Config) { opt_str(&config.filter .as_ref() .map(|re| re.to_owned())))); + logv(c, format!("filter_exact: {}", config.filter_exact)); logv(c, format!("runtool: {}", opt_str(&config.runtool))); logv(c, format!("host-rustcflags: {}", opt_str(&config.host_rustcflags))); @@ -311,6 +312,7 @@ pub fn run_tests(config: &Config) { pub fn test_opts(config: &Config) -> test::TestOpts { test::TestOpts { filter: config.filter.clone(), + filter_exact: config.filter_exact, run_ignored: config.run_ignored, quiet: config.quiet, logfile: config.logfile.clone(), @@ -323,6 +325,7 @@ pub fn test_opts(config: &Config) -> test::TestOpts { color: test::AutoColor, test_threads: None, skip: vec![], + list: false, } } diff --git a/src/tools/compiletest/src/runtest.rs b/src/tools/compiletest/src/runtest.rs index 8cb2e3b1c2..94461cd8e0 100644 --- a/src/tools/compiletest/src/runtest.rs +++ b/src/tools/compiletest/src/runtest.rs @@ -1334,8 +1334,18 @@ actual:\n\ // FIXME (#9639): This needs to handle non-utf8 paths let mut args = vec![input_file.to_str().unwrap().to_owned(), "-L".to_owned(), - self.config.build_base.to_str().unwrap().to_owned(), - format!("--target={}", target)]; + self.config.build_base.to_str().unwrap().to_owned()]; + + // Optionally prevent default --target if specified in test compile-flags. 
+ let custom_target = self.props.compile_flags + .iter() + .fold(false, |acc, ref x| acc || x.starts_with("--target")); + + if !custom_target { + args.extend(vec![ + format!("--target={}", target), + ]); + } if let Some(revision) = self.revision { args.extend(vec![ @@ -1456,8 +1466,11 @@ actual:\n\ // If this is emscripten, then run tests under nodejs if self.config.target.contains("emscripten") { - let nodejs = self.config.nodejs.clone().unwrap_or("nodejs".to_string()); - args.push(nodejs); + if let Some(ref p) = self.config.nodejs { + args.push(p.clone()); + } else { + self.fatal("no NodeJS binary found (--nodejs)"); + } } let exe_file = self.make_exe_name(); @@ -2095,7 +2108,16 @@ actual:\n\ } self.create_dir_racy(&tmpdir); - let mut cmd = Command::new("make"); + let host = &self.config.host; + let make = if host.contains("bitrig") || host.contains("dragonfly") || + host.contains("freebsd") || host.contains("netbsd") || + host.contains("openbsd") { + "gmake" + } else { + "make" + }; + + let mut cmd = Command::new(make); cmd.current_dir(&self.testpaths.file) .env("TARGET", &self.config.target) .env("PYTHON", &self.config.docck_python) diff --git a/src/tools/linkchecker/Cargo.toml b/src/tools/linkchecker/Cargo.toml index 415b6f0567..d6b7dafea4 100644 --- a/src/tools/linkchecker/Cargo.toml +++ b/src/tools/linkchecker/Cargo.toml @@ -3,9 +3,6 @@ name = "linkchecker" version = "0.1.0" authors = ["Alex Crichton "] -[dependencies] -url = "1.2" - [[bin]] name = "linkchecker" path = "main.rs" diff --git a/src/tools/linkchecker/main.rs b/src/tools/linkchecker/main.rs index f79cc76e67..0e70c2b432 100644 --- a/src/tools/linkchecker/main.rs +++ b/src/tools/linkchecker/main.rs @@ -24,17 +24,13 @@ //! A few whitelisted exceptions are allowed as there's known bugs in rustdoc, //! but this should catch the majority of "broken link" cases. -extern crate url; - use std::env; use std::fs::File; use std::io::prelude::*; -use std::path::{Path, PathBuf}; +use std::path::{Path, PathBuf, Component}; use std::collections::{HashMap, HashSet}; use std::collections::hash_map::Entry; -use url::Url; - use Redirect::*; macro_rules! t { @@ -47,9 +43,8 @@ macro_rules! 
t { fn main() { let docs = env::args().nth(1).unwrap(); let docs = env::current_dir().unwrap().join(docs); - let mut url = Url::from_file_path(&docs).unwrap(); let mut errors = false; - walk(&mut HashMap::new(), &docs, &docs, &mut url, &mut errors); + walk(&mut HashMap::new(), &docs, &docs, &mut errors); if errors { panic!("found some broken links"); } @@ -88,15 +83,14 @@ impl FileEntry { } } -fn walk(cache: &mut Cache, root: &Path, dir: &Path, url: &mut Url, errors: &mut bool) { +fn walk(cache: &mut Cache, root: &Path, dir: &Path, errors: &mut bool) { for entry in t!(dir.read_dir()).map(|e| t!(e)) { let path = entry.path(); let kind = t!(entry.file_type()); - url.path_segments_mut().unwrap().push(entry.file_name().to_str().unwrap()); if kind.is_dir() { - walk(cache, root, &path, url, errors); + walk(cache, root, &path, errors); } else { - let pretty_path = check(cache, root, &path, url, errors); + let pretty_path = check(cache, root, &path, errors); if let Some(pretty_path) = pretty_path { let entry = cache.get_mut(&pretty_path).unwrap(); // we don't need the source anymore, @@ -104,14 +98,12 @@ fn walk(cache: &mut Cache, root: &Path, dir: &Path, url: &mut Url, errors: &mut entry.source = String::new(); } } - url.path_segments_mut().unwrap().pop(); } } fn check(cache: &mut Cache, root: &Path, file: &Path, - base: &Url, errors: &mut bool) -> Option { // ignore js files as they are not prone to errors as the rest of the @@ -157,19 +149,28 @@ fn check(cache: &mut Cache, url.starts_with("irc:") || url.starts_with("data:") { return; } + let mut parts = url.splitn(2, "#"); + let url = parts.next().unwrap(); + if url.is_empty() { + return + } + let fragment = parts.next(); + let mut parts = url.splitn(2, "?"); + let url = parts.next().unwrap(); + // Once we've plucked out the URL, parse it using our base url and // then try to extract a file path. - let (parsed_url, path) = match url_to_file_path(&base, url) { - Some((url, path)) => (url, PathBuf::from(path)), - None => { - *errors = true; - println!("{}:{}: invalid link - {}", - pretty_file.display(), - i + 1, - url); - return; + let mut path = file.to_path_buf(); + path.pop(); + for part in Path::new(url).components() { + match part { + Component::Prefix(_) | + Component::RootDir => panic!(), + Component::CurDir => {} + Component::ParentDir => { path.pop(); } + Component::Normal(s) => { path.push(s); } } - }; + } // Alright, if we've found a file name then this file had better // exist! If it doesn't then we register and print an error. 
@@ -200,7 +201,7 @@ fn check(cache: &mut Cache, Err(LoadError::IsRedirect) => unreachable!(), }; - if let Some(ref fragment) = parsed_url.fragment() { + if let Some(ref fragment) = fragment { // Fragments like `#1-6` are most likely line numbers to be // interpreted by javascript, so we're ignoring these if fragment.splitn(2, '-') @@ -231,7 +232,7 @@ fn check(cache: &mut Cache, fn load_file(cache: &mut Cache, root: &Path, - file: PathBuf, + mut file: PathBuf, redirect: Redirect) -> Result<(PathBuf, String), LoadError> { let mut contents = String::new(); @@ -266,10 +267,9 @@ fn load_file(cache: &mut Cache, maybe } }; - let base = Url::from_file_path(&file).unwrap(); - - match maybe_redirect.and_then(|url| url_to_file_path(&base, &url)) { - Some((_, redirect_file)) => { + file.pop(); + match maybe_redirect.map(|url| file.join(url)) { + Some(redirect_file) => { let path = PathBuf::from(redirect_file); load_file(cache, root, path, FromRedirect(true)) } @@ -293,12 +293,6 @@ fn maybe_redirect(source: &str) -> Option { }) } -fn url_to_file_path(parser: &Url, url: &str) -> Option<(Url, PathBuf)> { - parser.join(url) - .ok() - .and_then(|parsed_url| parsed_url.to_file_path().ok().map(|f| (parsed_url, f))) -} - fn with_attrs_in_source(contents: &str, attr: &str, mut f: F) { for (i, mut line) in contents.lines().enumerate() { while let Some(j) = line.find(attr) { diff --git a/src/tools/tidy/src/cargo.rs b/src/tools/tidy/src/cargo.rs index a7784e65c5..11acb64743 100644 --- a/src/tools/tidy/src/cargo.rs +++ b/src/tools/tidy/src/cargo.rs @@ -20,6 +20,9 @@ use std::fs::File; use std::path::Path; pub fn check(path: &Path, bad: &mut bool) { + if path.ends_with("vendor") { + return + } for entry in t!(path.read_dir(), path).map(|e| t!(e)) { // Look for `Cargo.toml` with a sibling `src/lib.rs` or `lib.rs` if entry.file_name().to_str() == Some("Cargo.toml") { diff --git a/src/tools/tidy/src/cargo_lock.rs b/src/tools/tidy/src/cargo_lock.rs deleted file mode 100644 index 165dd52758..0000000000 --- a/src/tools/tidy/src/cargo_lock.rs +++ /dev/null @@ -1,45 +0,0 @@ -// Copyright 2016 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. 
- -use std::path::Path; -use std::ffi::OsStr; - -const CARGO_LOCK: &'static str = "Cargo.lock"; - -pub fn check(path: &Path, bad: &mut bool) { - use std::process::Command; - - super::walk(path, - &mut |path| super::filter_dirs(path) || path.ends_with("src/test"), - &mut |file| { - if let Some(CARGO_LOCK) = file.file_name().and_then(OsStr::to_str) { - let rel_path = file.strip_prefix(path).unwrap(); - let git_friendly_path = rel_path.to_str().unwrap().replace("\\", "/"); - let ret_code = Command::new("git") - .arg("diff") - .arg("--exit-code") - .arg("--patch") - .arg("HEAD") - .arg(&git_friendly_path) - .current_dir(path) - .status() - .unwrap_or_else(|e| { - panic!("could not run git diff-index: {}", e); - }); - if !ret_code.success() { - let parent_path = file.parent().unwrap().join("Cargo.toml"); - print!("dirty lock file found at {} ", rel_path.display()); - println!("please commit your changes or update the lock file by running:"); - println!("\n\tcargo update --manifest-path {}", parent_path.display()); - *bad = true; - } - } - }); -} diff --git a/src/tools/tidy/src/deps.rs b/src/tools/tidy/src/deps.rs new file mode 100644 index 0000000000..7592c09a91 --- /dev/null +++ b/src/tools/tidy/src/deps.rs @@ -0,0 +1,73 @@ +// Copyright 2016 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! Check license of third-party deps by inspecting src/vendor + +use std::fs::File; +use std::io::Read; +use std::path::Path; + +static LICENSES: &'static [&'static str] = &[ + "MIT/Apache-2.0" +]; + +pub fn check(path: &Path, bad: &mut bool) { + let path = path.join("vendor"); + assert!(path.exists(), "vendor directory missing"); + let mut saw_dir = false; + for dir in t!(path.read_dir()) { + saw_dir = true; + let dir = t!(dir); + let toml = dir.path().join("Cargo.toml"); + if !check_license(&toml) { + *bad = true; + } + } + assert!(saw_dir, "no vendored source"); +} + +fn check_license(path: &Path) -> bool { + if !path.exists() { + panic!("{} does not exist", path.display()); + } + let mut contents = String::new(); + t!(t!(File::open(path)).read_to_string(&mut contents)); + + let mut found_license = false; + for line in contents.lines() { + if !line.starts_with("license") { + continue; + } + let license = extract_license(line); + if !LICENSES.contains(&&*license) { + println!("invalid license {} in {}", license, path.display()); + return false; + } + found_license = true; + break; + } + if !found_license { + println!("no license in {}", path.display()); + return false; + } + + true +} + +fn extract_license(line: &str) -> String { + let first_quote = line.find('"'); + let last_quote = line.rfind('"'); + if let (Some(f), Some(l)) = (first_quote, last_quote) { + let license = &line[f + 1 .. 
l]; + license.into() + } else { + "bad-license-parse".into() + } +} diff --git a/src/tools/tidy/src/features.rs b/src/tools/tidy/src/features.rs index 4ef07f7e4b..ac5dff0980 100644 --- a/src/tools/tidy/src/features.rs +++ b/src/tools/tidy/src/features.rs @@ -67,7 +67,7 @@ pub fn check(path: &Path, bad: &mut bool) { } contents.truncate(0); - t!(t!(File::open(file)).read_to_string(&mut contents)); + t!(t!(File::open(&file), &file).read_to_string(&mut contents)); for (i, line) in contents.lines().enumerate() { let mut err = |msg: &str| { diff --git a/src/tools/tidy/src/main.rs b/src/tools/tidy/src/main.rs index cabaee5d06..7566580b1a 100644 --- a/src/tools/tidy/src/main.rs +++ b/src/tools/tidy/src/main.rs @@ -35,8 +35,8 @@ mod style; mod errors; mod features; mod cargo; -mod cargo_lock; mod pal; +mod deps; fn main() { let path = env::args_os().skip(1).next().expect("need an argument"); @@ -48,8 +48,8 @@ fn main() { errors::check(&path, &mut bad); cargo::check(&path, &mut bad); features::check(&path, &mut bad); - cargo_lock::check(&path, &mut bad); pal::check(&path, &mut bad); + deps::check(&path, &mut bad); if bad { panic!("some tidy checks failed"); @@ -66,6 +66,7 @@ fn filter_dirs(path: &Path) -> bool { "src/rustllvm", "src/rust-installer", "src/liblibc", + "src/vendor", ]; skip.iter().any(|p| path.ends_with(p)) } diff --git a/src/vendor/cmake/.cargo-checksum.json b/src/vendor/cmake/.cargo-checksum.json new file mode 100644 index 0000000000..b81d7d2fa0 --- /dev/null +++ b/src/vendor/cmake/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".gitignore":"c1e953ee360e77de57f7b02f1b7880bd6a3dc22d1a69e953c2ac2c52cc52d247",".travis.yml":"5d83ed1ae0b80cd6cebfc6a25b1fdb58c893ead400f0f84cd0ebf08d9ad48b28","Cargo.toml":"2266412ecb4504137a90d378ebdbf3a41f0e8b7188858cfb149da54792f7f8d9","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"378f5840b258e2779c39418f3f2d7b2ba96f1c7917dd6be0713f88305dbda397","README.md":"8ca528d20639506546044c676ff9069e3e850937b02bff4194dcf9e5c3c50d64","src/lib.rs":"dae5d93c005bf8d16427e29eb3bfb50c5527a1ec7c39a383d0694a8e8e38af90","src/registry.rs":"ca16433f51b5e3aedb0560bba41370b0c42de9238926a5118d1c0a3a072b64b2"},"package":"0e5bcf27e097a184c1df4437654ed98df3d7a516e8508a6ba45d8b092bbdf283"} \ No newline at end of file diff --git a/src/vendor/cmake/.cargo-ok b/src/vendor/cmake/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/cmake/.gitignore b/src/vendor/cmake/.gitignore new file mode 100644 index 0000000000..4fffb2f89c --- /dev/null +++ b/src/vendor/cmake/.gitignore @@ -0,0 +1,2 @@ +/target +/Cargo.lock diff --git a/src/vendor/cmake/.travis.yml b/src/vendor/cmake/.travis.yml new file mode 100644 index 0000000000..3ac040c5c0 --- /dev/null +++ b/src/vendor/cmake/.travis.yml @@ -0,0 +1,19 @@ +language: rust +rust: + - stable + - beta + - nightly +sudo: false +before_script: + - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH +script: + - cargo test --verbose + - cargo doc --no-deps +after_success: + - travis-cargo --only nightly doc-upload +env: + global: + secure: WSQJRyheeMf7eRdivHextSEQzyFnTIw2yeemO2+ZkHVftp0XYsTXQVca3RGlQNsVmjI0RP8lbDVe7HG23uwbTMeRgm+9hzSwNMa0ndJZ06TNMpPM6nqcXFUaNGeuf7EqU370xcgVBO+ZA0cSh55pJkOBg5ALd9bfRWbjEAjHkx8= +notifications: + email: + on_success: never diff --git a/src/vendor/cmake/Cargo.toml b/src/vendor/cmake/Cargo.toml new file mode 100644 index 
0000000000..c17bbff922 --- /dev/null +++ b/src/vendor/cmake/Cargo.toml @@ -0,0 +1,17 @@ +[package] + +name = "cmake" +version = "0.1.18" +authors = ["Alex Crichton "] +license = "MIT/Apache-2.0" +readme = "README.md" +keywords = ["build-dependencies"] +repository = "https://github.com/alexcrichton/cmake-rs" +homepage = "https://github.com/alexcrichton/cmake-rs" +documentation = "http://alexcrichton.com/cmake-rs" +description = """ +A build dependency for running `cmake` to build a native library +""" + +[dependencies] +gcc = "0.3.17" diff --git a/src/vendor/cmake/LICENSE-APACHE b/src/vendor/cmake/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/src/vendor/cmake/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/src/vendor/cmake/LICENSE-MIT b/src/vendor/cmake/LICENSE-MIT new file mode 100644 index 0000000000..39e0ed6602 --- /dev/null +++ b/src/vendor/cmake/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2014 Alex Crichton + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/src/vendor/cmake/README.md b/src/vendor/cmake/README.md new file mode 100644 index 0000000000..8b2586eb01 --- /dev/null +++ b/src/vendor/cmake/README.md @@ -0,0 +1,22 @@ +# cmake + +[![Build Status](https://travis-ci.org/alexcrichton/cmake-rs.svg?branch=master)](https://travis-ci.org/alexcrichton/cmake-rs) + +[Documentation](http://alexcrichton.com/cmake-rs) + +A build dependency for running the `cmake` build tool to compile a native +library. + +```toml +# Cargo.toml +[build-dependencies] +cmake = "0.2" +``` + +# License + +`cmake-rs` is primarily distributed under the terms of both the MIT license and +the Apache License (Version 2.0), with portions covered by various BSD-like +licenses. + +See LICENSE-APACHE, and LICENSE-MIT for details. diff --git a/src/vendor/cmake/src/lib.rs b/src/vendor/cmake/src/lib.rs new file mode 100644 index 0000000000..3607d29026 --- /dev/null +++ b/src/vendor/cmake/src/lib.rs @@ -0,0 +1,522 @@ +//! A build dependency for running `cmake` to build a native library +//! +//! This crate provides some necessary boilerplate and shim support for running +//! 
the system `cmake` command to build a native library. It will add +//! appropriate cflags for building code to link into Rust, handle cross +//! compilation, and use the necessary generator for the platform being +//! targeted. +//! +//! The builder-style configuration allows for various variables and such to be +//! passed down into the build as well. +//! +//! ## Installation +//! +//! Add this to your `Cargo.toml`: +//! +//! ```toml +//! [build-dependencies] +//! cmake = "0.1" +//! ``` +//! +//! ## Examples +//! +//! ```no_run +//! use cmake; +//! +//! // Builds the project in the directory located in `libfoo`, installing it +//! // into $OUT_DIR +//! let dst = cmake::build("libfoo"); +//! +//! println!("cargo:rustc-link-search=native={}", dst.display()); +//! println!("cargo:rustc-link-lib=static=foo"); +//! ``` +//! +//! ```no_run +//! use cmake::Config; +//! +//! let dst = Config::new("libfoo") +//! .define("FOO", "BAR") +//! .cflag("-foo") +//! .build(); +//! println!("cargo:rustc-link-search=native={}", dst.display()); +//! println!("cargo:rustc-link-lib=static=foo"); +//! ``` + +#![deny(missing_docs)] + +extern crate gcc; + +use std::env; +use std::ffi::{OsString, OsStr}; +use std::fs::{self, File}; +use std::io::ErrorKind; +use std::io::prelude::*; +use std::path::{Path, PathBuf}; +use std::process::Command; + +#[cfg(windows)] +mod registry; + +/// Builder style configuration for a pending CMake build. +pub struct Config { + path: PathBuf, + generator: Option, + cflags: OsString, + cxxflags: OsString, + defines: Vec<(OsString, OsString)>, + deps: Vec, + target: Option, + host: Option, + out_dir: Option, + profile: Option, + build_args: Vec, + cmake_target: Option, +} + +/// Builds the native library rooted at `path` with the default cmake options. +/// This will return the directory in which the library was installed. +/// +/// # Examples +/// +/// ```no_run +/// use cmake; +/// +/// // Builds the project in the directory located in `libfoo`, installing it +/// // into $OUT_DIR +/// let dst = cmake::build("libfoo"); +/// +/// println!("cargo:rustc-link-search=native={}", dst.display()); +/// println!("cargo:rustc-link-lib=static=foo"); +/// ``` +/// +pub fn build>(path: P) -> PathBuf { + Config::new(path.as_ref()).build() +} + +impl Config { + /// Creates a new blank set of configuration to build the project specified + /// at the path `path`. + pub fn new>(path: P) -> Config { + Config { + path: env::current_dir().unwrap().join(path), + generator: None, + cflags: OsString::new(), + cxxflags: OsString::new(), + defines: Vec::new(), + deps: Vec::new(), + profile: None, + out_dir: None, + target: None, + host: None, + build_args: Vec::new(), + cmake_target: None, + } + } + + /// Sets the build-tool generator (`-G`) for this compilation. + pub fn generator>(&mut self, generator: T) -> &mut Config { + self.generator = Some(generator.as_ref().to_owned()); + self + } + + /// Adds a custom flag to pass down to the C compiler, supplementing those + /// that this library already passes. + pub fn cflag>(&mut self, flag: P) -> &mut Config { + self.cflags.push(" "); + self.cflags.push(flag.as_ref()); + self + } + + /// Adds a custom flag to pass down to the C++ compiler, supplementing those + /// that this library already passes. + pub fn cxxflag>(&mut self, flag: P) -> &mut Config { + self.cxxflags.push(" "); + self.cxxflags.push(flag.as_ref()); + self + } + + /// Adds a new `-D` flag to pass to cmake during the generation step. 
+ pub fn define(&mut self, k: K, v: V) -> &mut Config + where K: AsRef, V: AsRef + { + self.defines.push((k.as_ref().to_owned(), v.as_ref().to_owned())); + self + } + + /// Registers a dependency for this compilation on the native library built + /// by Cargo previously. + /// + /// This registration will modify the `CMAKE_PREFIX_PATH` environment + /// variable for the build system generation step. + pub fn register_dep(&mut self, dep: &str) -> &mut Config { + self.deps.push(dep.to_string()); + self + } + + /// Sets the target triple for this compilation. + /// + /// This is automatically scraped from `$TARGET` which is set for Cargo + /// build scripts so it's not necessary to call this from a build script. + pub fn target(&mut self, target: &str) -> &mut Config { + self.target = Some(target.to_string()); + self + } + + /// Sets the host triple for this compilation. + /// + /// This is automatically scraped from `$HOST` which is set for Cargo + /// build scripts so it's not necessary to call this from a build script. + pub fn host(&mut self, host: &str) -> &mut Config { + self.host = Some(host.to_string()); + self + } + + /// Sets the output directory for this compilation. + /// + /// This is automatically scraped from `$OUT_DIR` which is set for Cargo + /// build scripts so it's not necessary to call this from a build script. + pub fn out_dir>(&mut self, out: P) -> &mut Config { + self.out_dir = Some(out.as_ref().to_path_buf()); + self + } + + /// Sets the profile for this compilation. + /// + /// This is automatically scraped from `$PROFILE` which is set for Cargo + /// build scripts so it's not necessary to call this from a build script. + pub fn profile(&mut self, profile: &str) -> &mut Config { + self.profile = Some(profile.to_string()); + self + } + + /// Add an argument to the final `cmake` build step + pub fn build_arg>(&mut self, arg: A) -> &mut Config { + self.build_args.push(arg.as_ref().to_owned()); + self + } + + /// Sets the build target for the final `cmake` build step, this will + /// default to "install" if not specified. + pub fn build_target(&mut self, target: &str) -> &mut Config { + self.cmake_target = Some(target.to_string()); + self + } + + /// Run this configuration, compiling the library with all the configured + /// options. + /// + /// This will run both the build system generator command as well as the + /// command to build the library. 
+ pub fn build(&mut self) -> PathBuf { + let target = self.target.clone().unwrap_or_else(|| { + getenv_unwrap("TARGET") + }); + let host = self.host.clone().unwrap_or_else(|| { + getenv_unwrap("HOST") + }); + let msvc = target.contains("msvc"); + let c_compiler = gcc::Config::new().cargo_metadata(false) + .opt_level(0) + .debug(false) + .target(&target) + .host(&host) + .get_compiler(); + let cxx_compiler = gcc::Config::new().cargo_metadata(false) + .cpp(true) + .opt_level(0) + .debug(false) + .target(&target) + .host(&host) + .get_compiler(); + + let dst = self.out_dir.clone().unwrap_or_else(|| { + PathBuf::from(getenv_unwrap("OUT_DIR")) + }); + let build = dst.join("build"); + self.maybe_clear(&build); + let _ = fs::create_dir(&build); + + // Add all our dependencies to our cmake paths + let mut cmake_prefix_path = Vec::new(); + for dep in &self.deps { + if let Some(root) = env::var_os(&format!("DEP_{}_ROOT", dep)) { + cmake_prefix_path.push(PathBuf::from(root)); + } + } + let system_prefix = env::var_os("CMAKE_PREFIX_PATH") + .unwrap_or(OsString::new()); + cmake_prefix_path.extend(env::split_paths(&system_prefix) + .map(|s| s.to_owned())); + let cmake_prefix_path = env::join_paths(&cmake_prefix_path).unwrap(); + + // Build up the first cmake command to build the build system. + let mut cmd = Command::new("cmake"); + cmd.arg(&self.path) + .current_dir(&build); + if target.contains("windows-gnu") { + if host.contains("windows") { + // On MinGW we need to coerce cmake to not generate a visual + // studio build system but instead use makefiles that MinGW can + // use to build. + if self.generator.is_none() { + cmd.arg("-G").arg("MSYS Makefiles"); + } + } else { + // If we're cross compiling onto windows, then set some + // variables which will hopefully get things to succeed. Some + // systems may need the `windres` or `dlltool` variables set, so + // set them if possible. + if !self.defined("CMAKE_SYSTEM_NAME") { + cmd.arg("-DCMAKE_SYSTEM_NAME=Windows"); + } + if !self.defined("CMAKE_RC_COMPILER") { + let exe = find_exe(c_compiler.path()); + if let Some(name) = exe.file_name().unwrap().to_str() { + let name = name.replace("gcc", "windres"); + let windres = exe.with_file_name(name); + if windres.is_file() { + let mut arg = OsString::from("-DCMAKE_RC_COMPILER="); + arg.push(&windres); + cmd.arg(arg); + } + } + } + } + } else if msvc { + // If we're on MSVC we need to be sure to use the right generator or + // otherwise we won't get 32/64 bit correct automatically. + if self.generator.is_none() { + cmd.arg("-G").arg(self.visual_studio_generator(&target)); + } + } + if let Some(ref generator) = self.generator { + cmd.arg("-G").arg(generator); + } + let profile = self.profile.clone().unwrap_or_else(|| { + match &getenv_unwrap("PROFILE")[..] 
{ + "bench" | "release" => "Release", + // currently we need to always use the same CRT for MSVC + _ if msvc => "Release", + _ => "Debug", + }.to_string() + }); + for &(ref k, ref v) in &self.defines { + let mut os = OsString::from("-D"); + os.push(k); + os.push("="); + os.push(v); + cmd.arg(os); + } + + if !self.defined("CMAKE_INSTALL_PREFIX") { + let mut dstflag = OsString::from("-DCMAKE_INSTALL_PREFIX="); + dstflag.push(&dst); + cmd.arg(dstflag); + } + + { + let mut set_compiler = |kind: &str, + compiler: &gcc::Tool, + extra: &OsString| { + let flag_var = format!("CMAKE_{}_FLAGS", kind); + let tool_var = format!("CMAKE_{}_COMPILER", kind); + if !self.defined(&flag_var) { + let mut flagsflag = OsString::from("-D"); + flagsflag.push(&flag_var); + flagsflag.push("="); + flagsflag.push(extra); + for arg in compiler.args() { + flagsflag.push(" "); + flagsflag.push(arg); + } + cmd.arg(flagsflag); + } + + // Apparently cmake likes to have an absolute path to the + // compiler as otherwise it sometimes thinks that this variable + // changed as it thinks the found compiler, /usr/bin/cc, + // differs from the specified compiler, cc. Not entirely sure + // what's up, but at least this means cmake doesn't get + // confused? + // + // Also don't specify this on Windows as it's not needed for + // MSVC and for MinGW it doesn't really vary. + if !self.defined("CMAKE_TOOLCHAIN_FILE") + && !self.defined(&tool_var) + && env::consts::FAMILY != "windows" { + let mut ccompiler = OsString::from("-D"); + ccompiler.push(&tool_var); + ccompiler.push("="); + ccompiler.push(find_exe(compiler.path())); + cmd.arg(ccompiler); + } + }; + + set_compiler("C", &c_compiler, &self.cflags); + set_compiler("CXX", &cxx_compiler, &self.cxxflags); + } + + if !self.defined("CMAKE_BUILD_TYPE") { + cmd.arg(&format!("-DCMAKE_BUILD_TYPE={}", profile)); + } + + if !self.defined("CMAKE_TOOLCHAIN_FILE") { + if let Ok(s) = env::var("CMAKE_TOOLCHAIN_FILE") { + cmd.arg(&format!("-DCMAKE_TOOLCHAIN_FILE={}", s)); + } + } + + run(cmd.env("CMAKE_PREFIX_PATH", cmake_prefix_path), "cmake"); + + let mut parallel_args = Vec::new(); + if fs::metadata(&dst.join("build/Makefile")).is_ok() { + if let Ok(s) = env::var("NUM_JOBS") { + parallel_args.push(format!("-j{}", s)); + } + } + + // And build! + let target = self.cmake_target.clone().unwrap_or("install".to_string()); + run(Command::new("cmake") + .arg("--build").arg(".") + .arg("--target").arg(target) + .arg("--config").arg(profile) + .arg("--").args(&self.build_args) + .args(¶llel_args) + .current_dir(&build), "cmake"); + + println!("cargo:root={}", dst.display()); + return dst + } + + fn visual_studio_generator(&self, target: &str) -> String { + let base = match std::env::var("VisualStudioVersion") { + Ok(version) => { + match &version[..] { + "15.0" => "Visual Studio 15", + "14.0" => "Visual Studio 14 2015", + "12.0" => "Visual Studio 12 2013", + vers => panic!("\n\n\ + unsupported or unknown VisualStudio version: {}\n\ + if another version is installed consider running \ + the appropriate vcvars script before building this \ + crate\n\ + ", vers), + } + } + _ => { + // Check for the presense of a specific registry key + // that indicates visual studio is installed. 
+ if self.has_msbuild_version("15.0") { + "Visual Studio 15" + } else if self.has_msbuild_version("14.0") { + "Visual Studio 14 2015" + } else if self.has_msbuild_version("12.0") { + "Visual Studio 12 2013" + } else { + panic!("\n\n\ + couldn't determine visual studio generator\n\ + if VisualStudio is installed, however, consider \ + running the appropriate vcvars script before building \ + this crate\n\ + "); + } + } + }; + + if target.contains("i686") { + base.to_string() + } else if target.contains("x86_64") { + format!("{} Win64", base) + } else { + panic!("unsupported msvc target: {}", target); + } + } + + #[cfg(not(windows))] + fn has_msbuild_version(&self, _version: &str) -> bool { + false + } + + #[cfg(windows)] + fn has_msbuild_version(&self, version: &str) -> bool { + let key = format!("SOFTWARE\\Microsoft\\MSBuild\\ToolsVersions\\{}", + version); + registry::LOCAL_MACHINE.open(key.as_ref()).is_ok() + } + + fn defined(&self, var: &str) -> bool { + self.defines.iter().any(|&(ref a, _)| a == var) + } + + // If a cmake project has previously been built (e.g. CMakeCache.txt already + // exists), then cmake will choke if the source directory for the original + // project being built has changed. Detect this situation through the + // `CMAKE_HOME_DIRECTORY` variable that cmake emits and if it doesn't match + // we blow away the build directory and start from scratch (the recommended + // solution apparently [1]). + // + // [1]: https://cmake.org/pipermail/cmake/2012-August/051545.html + fn maybe_clear(&self, dir: &Path) { + // CMake will apparently store canonicalized paths which normally + // isn't relevant to us but we canonicalize it here to ensure + // we're both checking the same thing. + let path = fs::canonicalize(&self.path).unwrap_or(self.path.clone()); + let src = match path.to_str() { + Some(src) => src, + None => return, + }; + let mut f = match File::open(dir.join("CMakeCache.txt")) { + Ok(f) => f, + Err(..) => return, + }; + let mut u8contents = Vec::new(); + match f.read_to_end(&mut u8contents) { + Ok(f) => f, + Err(..) => return, + }; + let contents = String::from_utf8_lossy(&u8contents); + drop(f); + for line in contents.lines() { + if line.contains("CMAKE_HOME_DIRECTORY") && !line.contains(src) { + println!("detected home dir change, cleaning out entire build \ + directory"); + fs::remove_dir_all(dir).unwrap(); + break + } + } + } +} + +fn run(cmd: &mut Command, program: &str) { + println!("running: {:?}", cmd); + let status = match cmd.status() { + Ok(status) => status, + Err(ref e) if e.kind() == ErrorKind::NotFound => { + fail(&format!("failed to execute command: {}\nis `{}` not installed?", + e, program)); + } + Err(e) => fail(&format!("failed to execute command: {}", e)), + }; + if !status.success() { + fail(&format!("command did not execute successfully, got: {}", status)); + } +} + +fn find_exe(path: &Path) -> PathBuf { + env::split_paths(&env::var_os("PATH").unwrap_or(OsString::new())) + .map(|p| p.join(path)) + .find(|p| fs::metadata(p).is_ok()) + .unwrap_or(path.to_owned()) +} + +fn getenv_unwrap(v: &str) -> String { + match env::var(v) { + Ok(s) => s, + Err(..) => fail(&format!("environment variable `{}` not defined", v)), + } +} + +fn fail(s: &str) -> ! 
{ + panic!("\n{}\n\nbuild script failed, must exit now", s) +} diff --git a/src/vendor/cmake/src/registry.rs b/src/vendor/cmake/src/registry.rs new file mode 100644 index 0000000000..8819b09415 --- /dev/null +++ b/src/vendor/cmake/src/registry.rs @@ -0,0 +1,84 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::ffi::OsStr; +use std::io; +use std::os::raw; +use std::os::windows::prelude::*; + +pub struct RegistryKey(Repr); + +type HKEY = *mut u8; +type DWORD = u32; +type LPDWORD = *mut DWORD; +type LPCWSTR = *const u16; +type LPWSTR = *mut u16; +type LONG = raw::c_long; +type PHKEY = *mut HKEY; +type PFILETIME = *mut u8; +type LPBYTE = *mut u8; +type REGSAM = u32; + +const ERROR_SUCCESS: DWORD = 0; +const HKEY_LOCAL_MACHINE: HKEY = 0x80000002 as HKEY; +const KEY_READ: DWORD = 0x20019; +const KEY_WOW64_32KEY: DWORD = 0x200; + +#[link(name = "advapi32")] +extern "system" { + fn RegOpenKeyExW(key: HKEY, + lpSubKey: LPCWSTR, + ulOptions: DWORD, + samDesired: REGSAM, + phkResult: PHKEY) -> LONG; + fn RegCloseKey(hKey: HKEY) -> LONG; +} + +struct OwnedKey(HKEY); + +enum Repr { + Const(HKEY), + Owned(OwnedKey), +} + +unsafe impl Sync for Repr {} +unsafe impl Send for Repr {} + +pub static LOCAL_MACHINE: RegistryKey = + RegistryKey(Repr::Const(HKEY_LOCAL_MACHINE)); + +impl RegistryKey { + fn raw(&self) -> HKEY { + match self.0 { + Repr::Const(val) => val, + Repr::Owned(ref val) => val.0, + } + } + + pub fn open(&self, key: &OsStr) -> io::Result { + let key = key.encode_wide().chain(Some(0)).collect::>(); + let mut ret = 0 as *mut _; + let err = unsafe { + RegOpenKeyExW(self.raw(), key.as_ptr(), 0, + KEY_READ | KEY_WOW64_32KEY, &mut ret) + }; + if err == ERROR_SUCCESS as LONG { + Ok(RegistryKey(Repr::Owned(OwnedKey(ret)))) + } else { + Err(io::Error::from_raw_os_error(err as i32)) + } + } +} + +impl Drop for OwnedKey { + fn drop(&mut self) { + unsafe { RegCloseKey(self.0); } + } +} diff --git a/src/vendor/env_logger/.cargo-checksum.json b/src/vendor/env_logger/.cargo-checksum.json new file mode 100644 index 0000000000..e3d83501ad --- /dev/null +++ b/src/vendor/env_logger/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","Cargo.toml":"4af0565a97a599bba727315d9aff1f57a350dcfee7d9f00986c851e54a24b4ca","src/lib.rs":"484cec14a5f18a25b71d7b1842f7b184f0530165021b71b36dde9fc57b7fc15a","src/regex.rs":"d8e2a6958d4ed8084867063aae4b5c77ffc5d271dc2e17909d56c5a5e1552034","src/string.rs":"26ede9ab41a2673c3ad6001bc1802c005ce9a4f190f55860a24aa66b6b71bbc7","tests/regexp_filter.rs":"a3f9c01623e90e54b247a62c53b25caf5f502d054f28c0bdf92abbea486a95b5"},"package":"15abd780e45b3ea4f76b4e9a26ff4843258dd8a3eed2775a0e7368c2e7936c2f"} \ No newline at end of file diff --git a/src/vendor/env_logger/.cargo-ok b/src/vendor/env_logger/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/env_logger/Cargo.toml b/src/vendor/env_logger/Cargo.toml new file mode 100644 index 0000000000..5efadbf0d6 --- /dev/null +++ b/src/vendor/env_logger/Cargo.toml @@ -0,0 +1,23 @@ +[package] +name = "env_logger" +version = "0.3.5" +authors = ["The Rust Project Developers"] +license = "MIT/Apache-2.0" +repository = 
"https://github.com/rust-lang/log" +documentation = "http://doc.rust-lang.org/log/env_logger" +homepage = "https://github.com/rust-lang/log" +description = """ +An logging implementation for `log` which is configured via an environment +variable. +""" + +[dependencies] +log = { version = "0.3", path = ".." } +regex = { version = "0.1", optional = true } + +[[test]] +name = "regexp_filter" +harness = false + +[features] +default = ["regex"] diff --git a/src/vendor/env_logger/src/lib.rs b/src/vendor/env_logger/src/lib.rs new file mode 100644 index 0000000000..9105c19c65 --- /dev/null +++ b/src/vendor/env_logger/src/lib.rs @@ -0,0 +1,623 @@ +// Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! A logger configured via an environment variable which writes to standard +//! error. +//! +//! ## Example +//! +//! ``` +//! #[macro_use] extern crate log; +//! extern crate env_logger; +//! +//! use log::LogLevel; +//! +//! fn main() { +//! env_logger::init().unwrap(); +//! +//! debug!("this is a debug {}", "message"); +//! error!("this is printed by default"); +//! +//! if log_enabled!(LogLevel::Info) { +//! let x = 3 * 4; // expensive computation +//! info!("the answer was: {}", x); +//! } +//! } +//! ``` +//! +//! Assumes the binary is `main`: +//! +//! ```{.bash} +//! $ RUST_LOG=error ./main +//! ERROR:main: this is printed by default +//! ``` +//! +//! ```{.bash} +//! $ RUST_LOG=info ./main +//! ERROR:main: this is printed by default +//! INFO:main: the answer was: 12 +//! ``` +//! +//! ```{.bash} +//! $ RUST_LOG=debug ./main +//! DEBUG:main: this is a debug message +//! ERROR:main: this is printed by default +//! INFO:main: the answer was: 12 +//! ``` +//! +//! You can also set the log level on a per module basis: +//! +//! ```{.bash} +//! $ RUST_LOG=main=info ./main +//! ERROR:main: this is printed by default +//! INFO:main: the answer was: 12 +//! ``` +//! +//! And enable all logging: +//! +//! ```{.bash} +//! $ RUST_LOG=main ./main +//! DEBUG:main: this is a debug message +//! ERROR:main: this is printed by default +//! INFO:main: the answer was: 12 +//! ``` +//! +//! See the documentation for the log crate for more information about its API. +//! +//! ## Enabling logging +//! +//! Log levels are controlled on a per-module basis, and by default all logging +//! is disabled except for `error!`. Logging is controlled via the `RUST_LOG` +//! environment variable. The value of this environment variable is a +//! comma-separated list of logging directives. A logging directive is of the +//! form: +//! +//! ```text +//! path::to::module=log_level +//! ``` +//! +//! The path to the module is rooted in the name of the crate it was compiled +//! for, so if your program is contained in a file `hello.rs`, for example, to +//! turn on logging for this file you would use a value of `RUST_LOG=hello`. +//! Furthermore, this path is a prefix-search, so all modules nested in the +//! specified module will also have logging enabled. +//! +//! The actual `log_level` is optional to specify. If omitted, all logging will +//! be enabled. If specified, it must be one of the strings `debug`, `error`, +//! `info`, `warn`, or `trace`. +//! +//! 
As the log level for a module is optional, the module to enable logging for +//! is also optional. If only a `log_level` is provided, then the global log +//! level for all modules is set to this value. +//! +//! Some examples of valid values of `RUST_LOG` are: +//! +//! * `hello` turns on all logging for the 'hello' module +//! * `info` turns on all info logging +//! * `hello=debug` turns on debug logging for 'hello' +//! * `hello,std::option` turns on hello, and std's option logging +//! * `error,hello=warn` turn on global error logging and also warn for hello +//! +//! ## Filtering results +//! +//! A RUST_LOG directive may include a regex filter. The syntax is to append `/` +//! followed by a regex. Each message is checked against the regex, and is only +//! logged if it matches. Note that the matching is done after formatting the +//! log string but before adding any logging meta-data. There is a single filter +//! for all modules. +//! +//! Some examples: +//! +//! * `hello/foo` turns on all logging for the 'hello' module where the log +//! message includes 'foo'. +//! * `info/f.o` turns on all info logging where the log message includes 'foo', +//! 'f1o', 'fao', etc. +//! * `hello=debug/foo*foo` turns on debug logging for 'hello' where the log +//! message includes 'foofoo' or 'fofoo' or 'fooooooofoo', etc. +//! * `error,hello=warn/[0-9] scopes` turn on global error logging and also +//! warn for hello. In both cases the log message must include a single digit +//! number followed by 'scopes'. + +#![doc(html_logo_url = "http://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png", + html_favicon_url = "http://www.rust-lang.org/favicon.ico", + html_root_url = "http://doc.rust-lang.org/env_logger/")] +#![cfg_attr(test, deny(warnings))] + +extern crate log; + +use std::env; +use std::io::prelude::*; +use std::io; +use std::mem; + +use log::{Log, LogLevel, LogLevelFilter, LogRecord, SetLoggerError, LogMetadata}; + +#[cfg(feature = "regex")] +#[path = "regex.rs"] +mod filter; + +#[cfg(not(feature = "regex"))] +#[path = "string.rs"] +mod filter; + +/// The logger. +pub struct Logger { + directives: Vec, + filter: Option, + format: Box String + Sync + Send>, +} + +/// LogBuilder acts as builder for initializing the Logger. +/// It can be used to customize the log format, change the enviromental variable used +/// to provide the logging directives and also set the default log level filter. 
+/// +/// ## Example +/// +/// ``` +/// #[macro_use] +/// extern crate log; +/// extern crate env_logger; +/// +/// use std::env; +/// use log::{LogRecord, LogLevelFilter}; +/// use env_logger::LogBuilder; +/// +/// fn main() { +/// let format = |record: &LogRecord| { +/// format!("{} - {}", record.level(), record.args()) +/// }; +/// +/// let mut builder = LogBuilder::new(); +/// builder.format(format).filter(None, LogLevelFilter::Info); +/// +/// if env::var("RUST_LOG").is_ok() { +/// builder.parse(&env::var("RUST_LOG").unwrap()); +/// } +/// +/// builder.init().unwrap(); +/// +/// error!("error message"); +/// info!("info message"); +/// } +/// ``` +pub struct LogBuilder { + directives: Vec, + filter: Option, + format: Box String + Sync + Send>, +} + +impl LogBuilder { + /// Initializes the log builder with defaults + pub fn new() -> LogBuilder { + LogBuilder { + directives: Vec::new(), + filter: None, + format: Box::new(|record: &LogRecord| { + format!("{}:{}: {}", record.level(), + record.location().module_path(), record.args()) + }), + } + } + + /// Adds filters to the logger + /// + /// The given module (if any) will log at most the specified level provided. + /// If no module is provided then the filter will apply to all log messages. + pub fn filter(&mut self, + module: Option<&str>, + level: LogLevelFilter) -> &mut Self { + self.directives.push(LogDirective { + name: module.map(|s| s.to_string()), + level: level, + }); + self + } + + /// Sets the format function for formatting the log output. + /// + /// This function is called on each record logged to produce a string which + /// is actually printed out. + pub fn format(&mut self, format: F) -> &mut Self + where F: Fn(&LogRecord) -> String + Sync + Send + { + self.format = Box::new(format); + self + } + + /// Parses the directives string in the same form as the RUST_LOG + /// environment variable. + /// + /// See the module documentation for more details. + pub fn parse(&mut self, filters: &str) -> &mut Self { + let (directives, filter) = parse_logging_spec(filters); + + self.filter = filter; + + for directive in directives { + self.directives.push(directive); + } + self + } + + /// Initializes the global logger with an env logger. + /// + /// This should be called early in the execution of a Rust program, and the + /// global logger may only be initialized once. Future initialization + /// attempts will return an error. + pub fn init(&mut self) -> Result<(), SetLoggerError> { + log::set_logger(|max_level| { + let logger = self.build(); + max_level.set(logger.filter()); + Box::new(logger) + }) + } + + /// Build an env logger. + pub fn build(&mut self) -> Logger { + if self.directives.is_empty() { + // Adds the default filter if none exist + self.directives.push(LogDirective { + name: None, + level: LogLevelFilter::Error, + }); + } else { + // Sort the directives by length of their name, this allows a + // little more efficient lookup at runtime. 
+ self.directives.sort_by(|a, b| { + let alen = a.name.as_ref().map(|a| a.len()).unwrap_or(0); + let blen = b.name.as_ref().map(|b| b.len()).unwrap_or(0); + alen.cmp(&blen) + }); + } + + Logger { + directives: mem::replace(&mut self.directives, Vec::new()), + filter: mem::replace(&mut self.filter, None), + format: mem::replace(&mut self.format, Box::new(|_| String::new())), + } + } +} + +impl Logger { + pub fn new() -> Logger { + let mut builder = LogBuilder::new(); + + if let Ok(s) = env::var("RUST_LOG") { + builder.parse(&s); + } + + builder.build() + } + + pub fn filter(&self) -> LogLevelFilter { + self.directives.iter() + .map(|d| d.level).max() + .unwrap_or(LogLevelFilter::Off) + } + + fn enabled(&self, level: LogLevel, target: &str) -> bool { + // Search for the longest match, the vector is assumed to be pre-sorted. + for directive in self.directives.iter().rev() { + match directive.name { + Some(ref name) if !target.starts_with(&**name) => {}, + Some(..) | None => { + return level <= directive.level + } + } + } + false + } +} + +impl Log for Logger { + fn enabled(&self, metadata: &LogMetadata) -> bool { + self.enabled(metadata.level(), metadata.target()) + } + + fn log(&self, record: &LogRecord) { + if !Log::enabled(self, record.metadata()) { + return; + } + + if let Some(filter) = self.filter.as_ref() { + if !filter.is_match(&*record.args().to_string()) { + return; + } + } + + let _ = writeln!(&mut io::stderr(), "{}", (self.format)(record)); + } +} + +struct LogDirective { + name: Option, + level: LogLevelFilter, +} + +/// Initializes the global logger with an env logger. +/// +/// This should be called early in the execution of a Rust program, and the +/// global logger may only be initialized once. Future initialization attempts +/// will return an error. +pub fn init() -> Result<(), SetLoggerError> { + let mut builder = LogBuilder::new(); + + if let Ok(s) = env::var("RUST_LOG") { + builder.parse(&s); + } + + builder.init() +} + +/// Parse a logging specification string (e.g: "crate1,crate2::mod3,crate3::x=error/foo") +/// and return a vector with log directives. 
+fn parse_logging_spec(spec: &str) -> (Vec, Option) { + let mut dirs = Vec::new(); + + let mut parts = spec.split('/'); + let mods = parts.next(); + let filter = parts.next(); + if parts.next().is_some() { + println!("warning: invalid logging spec '{}', \ + ignoring it (too many '/'s)", spec); + return (dirs, None); + } + mods.map(|m| { for s in m.split(',') { + if s.len() == 0 { continue } + let mut parts = s.split('='); + let (log_level, name) = match (parts.next(), parts.next().map(|s| s.trim()), parts.next()) { + (Some(part0), None, None) => { + // if the single argument is a log-level string or number, + // treat that as a global fallback + match part0.parse() { + Ok(num) => (num, None), + Err(_) => (LogLevelFilter::max(), Some(part0)), + } + } + (Some(part0), Some(""), None) => (LogLevelFilter::max(), Some(part0)), + (Some(part0), Some(part1), None) => { + match part1.parse() { + Ok(num) => (num, Some(part0)), + _ => { + println!("warning: invalid logging spec '{}', \ + ignoring it", part1); + continue + } + } + }, + _ => { + println!("warning: invalid logging spec '{}', \ + ignoring it", s); + continue + } + }; + dirs.push(LogDirective { + name: name.map(|s| s.to_string()), + level: log_level, + }); + }}); + + let filter = filter.map_or(None, |filter| { + match filter::Filter::new(filter) { + Ok(re) => Some(re), + Err(e) => { + println!("warning: invalid regex filter - {}", e); + None + } + } + }); + + return (dirs, filter); +} + +#[cfg(test)] +mod tests { + use log::{LogLevel, LogLevelFilter}; + + use super::{LogBuilder, Logger, LogDirective, parse_logging_spec}; + + fn make_logger(dirs: Vec) -> Logger { + let mut logger = LogBuilder::new().build(); + logger.directives = dirs; + logger + } + + #[test] + fn filter_info() { + let logger = LogBuilder::new().filter(None, LogLevelFilter::Info).build(); + assert!(logger.enabled(LogLevel::Info, "crate1")); + assert!(!logger.enabled(LogLevel::Debug, "crate1")); + } + + #[test] + fn filter_beginning_longest_match() { + let logger = LogBuilder::new() + .filter(Some("crate2"), LogLevelFilter::Info) + .filter(Some("crate2::mod"), LogLevelFilter::Debug) + .filter(Some("crate1::mod1"), LogLevelFilter::Warn) + .build(); + assert!(logger.enabled(LogLevel::Debug, "crate2::mod1")); + assert!(!logger.enabled(LogLevel::Debug, "crate2")); + } + + #[test] + fn parse_default() { + let logger = LogBuilder::new().parse("info,crate1::mod1=warn").build(); + assert!(logger.enabled(LogLevel::Warn, "crate1::mod1")); + assert!(logger.enabled(LogLevel::Info, "crate2::mod2")); + } + + #[test] + fn match_full_path() { + let logger = make_logger(vec![ + LogDirective { + name: Some("crate2".to_string()), + level: LogLevelFilter::Info + }, + LogDirective { + name: Some("crate1::mod1".to_string()), + level: LogLevelFilter::Warn + } + ]); + assert!(logger.enabled(LogLevel::Warn, "crate1::mod1")); + assert!(!logger.enabled(LogLevel::Info, "crate1::mod1")); + assert!(logger.enabled(LogLevel::Info, "crate2")); + assert!(!logger.enabled(LogLevel::Debug, "crate2")); + } + + #[test] + fn no_match() { + let logger = make_logger(vec![ + LogDirective { name: Some("crate2".to_string()), level: LogLevelFilter::Info }, + LogDirective { name: Some("crate1::mod1".to_string()), level: LogLevelFilter::Warn } + ]); + assert!(!logger.enabled(LogLevel::Warn, "crate3")); + } + + #[test] + fn match_beginning() { + let logger = make_logger(vec![ + LogDirective { name: Some("crate2".to_string()), level: LogLevelFilter::Info }, + LogDirective { name: Some("crate1::mod1".to_string()), level: 
LogLevelFilter::Warn } + ]); + assert!(logger.enabled(LogLevel::Info, "crate2::mod1")); + } + + #[test] + fn match_beginning_longest_match() { + let logger = make_logger(vec![ + LogDirective { name: Some("crate2".to_string()), level: LogLevelFilter::Info }, + LogDirective { name: Some("crate2::mod".to_string()), level: LogLevelFilter::Debug }, + LogDirective { name: Some("crate1::mod1".to_string()), level: LogLevelFilter::Warn } + ]); + assert!(logger.enabled(LogLevel::Debug, "crate2::mod1")); + assert!(!logger.enabled(LogLevel::Debug, "crate2")); + } + + #[test] + fn match_default() { + let logger = make_logger(vec![ + LogDirective { name: None, level: LogLevelFilter::Info }, + LogDirective { name: Some("crate1::mod1".to_string()), level: LogLevelFilter::Warn } + ]); + assert!(logger.enabled(LogLevel::Warn, "crate1::mod1")); + assert!(logger.enabled(LogLevel::Info, "crate2::mod2")); + } + + #[test] + fn zero_level() { + let logger = make_logger(vec![ + LogDirective { name: None, level: LogLevelFilter::Info }, + LogDirective { name: Some("crate1::mod1".to_string()), level: LogLevelFilter::Off } + ]); + assert!(!logger.enabled(LogLevel::Error, "crate1::mod1")); + assert!(logger.enabled(LogLevel::Info, "crate2::mod2")); + } + + #[test] + fn parse_logging_spec_valid() { + let (dirs, filter) = parse_logging_spec("crate1::mod1=error,crate1::mod2,crate2=debug"); + assert_eq!(dirs.len(), 3); + assert_eq!(dirs[0].name, Some("crate1::mod1".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Error); + + assert_eq!(dirs[1].name, Some("crate1::mod2".to_string())); + assert_eq!(dirs[1].level, LogLevelFilter::max()); + + assert_eq!(dirs[2].name, Some("crate2".to_string())); + assert_eq!(dirs[2].level, LogLevelFilter::Debug); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_invalid_crate() { + // test parse_logging_spec with multiple = in specification + let (dirs, filter) = parse_logging_spec("crate1::mod1=warn=info,crate2=debug"); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate2".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Debug); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_invalid_log_level() { + // test parse_logging_spec with 'noNumber' as log level + let (dirs, filter) = parse_logging_spec("crate1::mod1=noNumber,crate2=debug"); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate2".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Debug); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_string_log_level() { + // test parse_logging_spec with 'warn' as log level + let (dirs, filter) = parse_logging_spec("crate1::mod1=wrong,crate2=warn"); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate2".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Warn); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_empty_log_level() { + // test parse_logging_spec with '' as log level + let (dirs, filter) = parse_logging_spec("crate1::mod1=wrong,crate2="); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate2".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::max()); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_global() { + // test parse_logging_spec with no crate + let (dirs, filter) = parse_logging_spec("warn,crate2=debug"); + assert_eq!(dirs.len(), 2); + assert_eq!(dirs[0].name, None); + assert_eq!(dirs[0].level, LogLevelFilter::Warn); + assert_eq!(dirs[1].name, Some("crate2".to_string())); + 
assert_eq!(dirs[1].level, LogLevelFilter::Debug); + assert!(filter.is_none()); + } + + #[test] + fn parse_logging_spec_valid_filter() { + let (dirs, filter) = parse_logging_spec("crate1::mod1=error,crate1::mod2,crate2=debug/abc"); + assert_eq!(dirs.len(), 3); + assert_eq!(dirs[0].name, Some("crate1::mod1".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Error); + + assert_eq!(dirs[1].name, Some("crate1::mod2".to_string())); + assert_eq!(dirs[1].level, LogLevelFilter::max()); + + assert_eq!(dirs[2].name, Some("crate2".to_string())); + assert_eq!(dirs[2].level, LogLevelFilter::Debug); + assert!(filter.is_some() && filter.unwrap().to_string() == "abc"); + } + + #[test] + fn parse_logging_spec_invalid_crate_filter() { + let (dirs, filter) = parse_logging_spec("crate1::mod1=error=warn,crate2=debug/a.c"); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate2".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::Debug); + assert!(filter.is_some() && filter.unwrap().to_string() == "a.c"); + } + + #[test] + fn parse_logging_spec_empty_with_filter() { + let (dirs, filter) = parse_logging_spec("crate1/a*c"); + assert_eq!(dirs.len(), 1); + assert_eq!(dirs[0].name, Some("crate1".to_string())); + assert_eq!(dirs[0].level, LogLevelFilter::max()); + assert!(filter.is_some() && filter.unwrap().to_string() == "a*c"); + } +} diff --git a/src/vendor/env_logger/src/regex.rs b/src/vendor/env_logger/src/regex.rs new file mode 100644 index 0000000000..0df03e6733 --- /dev/null +++ b/src/vendor/env_logger/src/regex.rs @@ -0,0 +1,28 @@ +extern crate regex; + +use std::fmt; + +use self::regex::Regex; + +pub struct Filter { + inner: Regex, +} + +impl Filter { + pub fn new(spec: &str) -> Result { + match Regex::new(spec){ + Ok(r) => Ok(Filter { inner: r }), + Err(e) => Err(e.to_string()), + } + } + + pub fn is_match(&self, s: &str) -> bool { + self.inner.is_match(s) + } +} + +impl fmt::Display for Filter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + self.inner.fmt(f) + } +} diff --git a/src/vendor/env_logger/src/string.rs b/src/vendor/env_logger/src/string.rs new file mode 100644 index 0000000000..74d0e04dbd --- /dev/null +++ b/src/vendor/env_logger/src/string.rs @@ -0,0 +1,21 @@ +use std::fmt; + +pub struct Filter { + inner: String, +} + +impl Filter { + pub fn new(spec: &str) -> Result { + Ok(Filter { inner: spec.to_string() }) + } + + pub fn is_match(&self, s: &str) -> bool { + s.contains(&self.inner) + } +} + +impl fmt::Display for Filter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + self.inner.fmt(f) + } +} diff --git a/src/vendor/env_logger/tests/regexp_filter.rs b/src/vendor/env_logger/tests/regexp_filter.rs new file mode 100644 index 0000000000..5036fb8e3c --- /dev/null +++ b/src/vendor/env_logger/tests/regexp_filter.rs @@ -0,0 +1,51 @@ +#[macro_use] extern crate log; +extern crate env_logger; + +use std::process; +use std::env; +use std::str; + +fn main() { + if env::var("LOG_REGEXP_TEST").ok() == Some(String::from("1")) { + child_main(); + } else { + parent_main() + } +} + +fn child_main() { + env_logger::init().unwrap(); + info!("XYZ Message"); +} + +fn run_child(rust_log: String) -> bool { + let exe = env::current_exe().unwrap(); + let out = process::Command::new(exe) + .env("LOG_REGEXP_TEST", "1") + .env("RUST_LOG", rust_log) + .output() + .unwrap_or_else(|e| panic!("Unable to start child process: {}", e)); + str::from_utf8(out.stderr.as_ref()).unwrap().contains("XYZ Message") +} + +fn assert_message_printed(rust_log: &str) { + if 
!run_child(rust_log.to_string()) { + panic!("RUST_LOG={} should allow the test log message", rust_log) + } +} + +fn assert_message_not_printed(rust_log: &str) { + if run_child(rust_log.to_string()) { + panic!("RUST_LOG={} should not allow the test log message", rust_log) + } +} + +fn parent_main() { + // test normal log severity levels + assert_message_printed("info"); + assert_message_not_printed("warn"); + + // test of regular expression filters + assert_message_printed("info/XYZ"); + assert_message_not_printed("info/XXX"); +} diff --git a/src/vendor/filetime/.cargo-checksum.json b/src/vendor/filetime/.cargo-checksum.json new file mode 100644 index 0000000000..674ae31b29 --- /dev/null +++ b/src/vendor/filetime/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".gitignore":"f9b1ca6ae27d1c18215265024629a8960c31379f206d9ed20f64e0b2dcf79805",".travis.yml":"c8cfe2c700e7b1d6500d0ad8084694be7009095e9572aaf54bf695c1fe7822d6","Cargo.toml":"4e414fe72ef2afcae81fb5a89f39e59ec40844272b589381746623f612333305","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"378f5840b258e2779c39418f3f2d7b2ba96f1c7917dd6be0713f88305dbda397","README.md":"fef1998633eb2f460e6b12bc1133a21f5674e0b53ae5914ba1e53f1b63a185c3","appveyor.yml":"da991211b72fa6f231af7adb84c9fb72f5a9131d1c0a3d47b8ceffe5a82c8542","src/lib.rs":"8fa03e69ab113e5a30c742f60b6beddc0b77ef41a1eb45e82f9df867c9265815"},"package":"5363ab8e4139b8568a6237db5248646e5a8a2f89bd5ccb02092182b11fd3e922"} \ No newline at end of file diff --git a/src/vendor/filetime/.cargo-ok b/src/vendor/filetime/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/filetime/.gitignore b/src/vendor/filetime/.gitignore new file mode 100644 index 0000000000..a9d37c560c --- /dev/null +++ b/src/vendor/filetime/.gitignore @@ -0,0 +1,2 @@ +target +Cargo.lock diff --git a/src/vendor/filetime/.travis.yml b/src/vendor/filetime/.travis.yml new file mode 100644 index 0000000000..001cdd259e --- /dev/null +++ b/src/vendor/filetime/.travis.yml @@ -0,0 +1,26 @@ +language: rust +rust: + - stable + - beta + - nightly +sudo: false +script: + - cargo build --verbose + - cargo test --verbose + - cargo doc --no-deps +after_success: | + [ $TRAVIS_BRANCH = master ] && + [ $TRAVIS_PULL_REQUEST = false ] && + echo '' > target/doc/index.html && + pip install ghp-import --user $USER && + $HOME/.local/bin/ghp-import -n target/doc && + git push -qf https://${TOKEN}@github.com/${TRAVIS_REPO_SLUG}.git gh-pages +notifications: + email: + on_success: never +env: + global: + secure: dsIj09BQvGF872zKmqzG+WwCl7gfqwsnxcm3GZlAMgyLYm4juvHOwCRhIERCN3BCxPvdlSRKhe9Rwmp1RkiKuqTK3ITUTAy29Maf2vuL1T+zcdpZE0t6JSCU1gbEwzCA2foB1jzgy7Q47EzeJusmGNwibscjYmXKlH6JCFwTobM= +os: + - linux + - osx diff --git a/src/vendor/filetime/Cargo.toml b/src/vendor/filetime/Cargo.toml new file mode 100644 index 0000000000..971eaf6014 --- /dev/null +++ b/src/vendor/filetime/Cargo.toml @@ -0,0 +1,19 @@ +[package] +name = "filetime" +authors = ["Alex Crichton "] +version = "0.1.10" +license = "MIT/Apache-2.0" +readme = "README.md" +keywords = ["timestamp", "mtime"] +repository = "https://github.com/alexcrichton/filetime" +homepage = "https://github.com/alexcrichton/filetime" +documentation = "http://alexcrichton.com/filetime" +description = """ +Platform-agnostic accessors of timestamps in File metadata +""" + +[dependencies] +libc = "0.2" + +[dev-dependencies] +tempdir = "0.3" diff --git 
a/src/vendor/filetime/LICENSE-APACHE b/src/vendor/filetime/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/src/vendor/filetime/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/src/vendor/filetime/LICENSE-MIT b/src/vendor/filetime/LICENSE-MIT new file mode 100644 index 0000000000..39e0ed6602 --- /dev/null +++ b/src/vendor/filetime/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2014 Alex Crichton + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/src/vendor/filetime/README.md b/src/vendor/filetime/README.md new file mode 100644 index 0000000000..0422084e7e --- /dev/null +++ b/src/vendor/filetime/README.md @@ -0,0 +1,25 @@ +# filetime + +[![Build Status](https://travis-ci.org/alexcrichton/filetime.svg?branch=master)](https://travis-ci.org/alexcrichton/filetime) +[![Build status](https://ci.appveyor.com/api/projects/status/9tatexq47i3ee13k?svg=true)](https://ci.appveyor.com/project/alexcrichton/filetime) + +[Documentation](http://alexcrichton.com/filetime/filetime/index.html) + +A helper library for inspecting the various timestamps of files in Rust. This +library takes into account cross-platform differences in terms of where the +timestamps are located, what they are called, and how to convert them into a +platform-independent representation. + +```toml +# Cargo.toml +[dependencies] +filetime = "0.1" +``` + +# License + +`filetime` is primarily distributed under the terms of both the MIT license and +the Apache License (Version 2.0), with portions covered by various BSD-like +licenses. + +See LICENSE-APACHE, and LICENSE-MIT for details. 
diff --git a/src/vendor/filetime/appveyor.yml b/src/vendor/filetime/appveyor.yml new file mode 100644 index 0000000000..6a1b8dc19c --- /dev/null +++ b/src/vendor/filetime/appveyor.yml @@ -0,0 +1,17 @@ +environment: + matrix: + - TARGET: x86_64-pc-windows-msvc + - TARGET: i686-pc-windows-msvc + - TARGET: i686-pc-windows-gnu +install: + - ps: Start-FileDownload "https://static.rust-lang.org/dist/rust-nightly-${env:TARGET}.exe" + - rust-nightly-%TARGET%.exe /VERYSILENT /NORESTART /DIR="C:\Program Files (x86)\Rust" + - SET PATH=%PATH%;C:\Program Files (x86)\Rust\bin + - SET PATH=%PATH%;C:\MinGW\bin + - rustc -V + - cargo -V + +build: false + +test_script: + - cargo test --verbose diff --git a/src/vendor/filetime/src/lib.rs b/src/vendor/filetime/src/lib.rs new file mode 100644 index 0000000000..aa6bec1dfe --- /dev/null +++ b/src/vendor/filetime/src/lib.rs @@ -0,0 +1,305 @@ +//! Timestamps for files in Rust +//! +//! This library provides platform-agnostic inspection of the various timestamps +//! present in the standard `fs::Metadata` structure. +//! +//! # Installation +//! +//! Add this to you `Cargo.toml`: +//! +//! ```toml +//! [dependencies] +//! filetime = "0.1" +//! ``` +//! +//! # Usage +//! +//! ```no_run +//! use std::fs; +//! use filetime::FileTime; +//! +//! let metadata = fs::metadata("foo.txt").unwrap(); +//! +//! let mtime = FileTime::from_last_modification_time(&metadata); +//! println!("{}", mtime); +//! +//! let atime = FileTime::from_last_access_time(&metadata); +//! assert!(mtime < atime); +//! +//! // Inspect values that can be interpreted across platforms +//! println!("{}", mtime.seconds_relative_to_1970()); +//! println!("{}", mtime.nanoseconds()); +//! +//! // Print the platform-specific value of seconds +//! println!("{}", mtime.seconds()); +//! ``` + +extern crate libc; + +#[cfg(unix)] use std::os::unix::prelude::*; +#[cfg(windows)] use std::os::windows::prelude::*; + +use std::fmt; +use std::fs; +use std::io; +use std::path::Path; + +/// A helper structure to represent a timestamp for a file. +/// +/// The actual value contined within is platform-specific and does not have the +/// same meaning across platforms, but comparisons and stringification can be +/// significant among the same platform. +#[derive(Eq, PartialEq, Ord, PartialOrd, Debug, Copy, Clone, Hash)] +pub struct FileTime { + seconds: u64, + nanos: u32, +} + +impl FileTime { + /// Creates a new timestamp representing a 0 time. + /// + /// Useful for creating the base of a cmp::max chain of times. + pub fn zero() -> FileTime { + FileTime { seconds: 0, nanos: 0 } + } + + /// Creates a new instance of `FileTime` with a number of seconds and + /// nanoseconds relative to January 1, 1970. + /// + /// Note that this is typically the relative point that Unix time stamps are + /// from, but on Windows the native time stamp is relative to January 1, + /// 1601 so the return value of `seconds` from the returned `FileTime` + /// instance may not be the same as that passed in. + pub fn from_seconds_since_1970(seconds: u64, nanos: u32) -> FileTime { + FileTime { + seconds: seconds + if cfg!(windows) {11644473600} else {0}, + nanos: nanos, + } + } + + /// Creates a new timestamp from the last modification time listed in the + /// specified metadata. + /// + /// The returned value corresponds to the `mtime` field of `stat` on Unix + /// platforms and the `ftLastWriteTime` field on Windows platforms. 
+ pub fn from_last_modification_time(meta: &fs::Metadata) -> FileTime { + #[cfg(unix)] + fn imp(meta: &fs::Metadata) -> FileTime { + FileTime::from_os_repr(meta.mtime() as u64, meta.mtime_nsec() as u32) + } + #[cfg(windows)] + fn imp(meta: &fs::Metadata) -> FileTime { + FileTime::from_os_repr(meta.last_write_time()) + } + imp(meta) + } + + /// Creates a new timestamp from the last access time listed in the + /// specified metadata. + /// + /// The returned value corresponds to the `atime` field of `stat` on Unix + /// platforms and the `ftLastAccessTime` field on Windows platforms. + pub fn from_last_access_time(meta: &fs::Metadata) -> FileTime { + #[cfg(unix)] + fn imp(meta: &fs::Metadata) -> FileTime { + FileTime::from_os_repr(meta.atime() as u64, meta.atime_nsec() as u32) + } + #[cfg(windows)] + fn imp(meta: &fs::Metadata) -> FileTime { + FileTime::from_os_repr(meta.last_access_time()) + } + imp(meta) + } + + /// Creates a new timestamp from the creation time listed in the specified + /// metadata. + /// + /// The returned value corresponds to the `birthtime` field of `stat` on + /// Unix platforms and the `ftCreationTime` field on Windows platforms. Note + /// that not all Unix platforms have this field available and may return + /// `None` in some circumstances. + pub fn from_creation_time(meta: &fs::Metadata) -> Option { + macro_rules! birthtim { + ($(($e:expr, $i:ident)),*) => { + #[cfg(any($(target_os = $e),*))] + fn imp(meta: &fs::Metadata) -> Option { + $( + #[cfg(target_os = $e)] + use std::os::$i::fs::MetadataExt; + )* + let raw = meta.as_raw_stat(); + Some(FileTime::from_os_repr(raw.st_birthtime as u64, + raw.st_birthtime_nsec as u32)) + } + + #[cfg(all(not(windows), + $(not(target_os = $e)),*))] + fn imp(_meta: &fs::Metadata) -> Option { + None + } + } + } + + birthtim! { + ("bitrig", bitrig), + ("freebsd", freebsd), + ("ios", ios), + ("macos", macos), + ("openbsd", openbsd) + } + + #[cfg(windows)] + fn imp(meta: &fs::Metadata) -> Option { + Some(FileTime::from_os_repr(meta.last_access_time())) + } + imp(meta) + } + + #[cfg(windows)] + fn from_os_repr(time: u64) -> FileTime { + // Windows write times are in 100ns intervals, so do a little math to + // get it into the right representation. + FileTime { + seconds: time / (1_000_000_000 / 100), + nanos: ((time % (1_000_000_000 / 100)) * 100) as u32, + } + } + + #[cfg(unix)] + fn from_os_repr(seconds: u64, nanos: u32) -> FileTime { + FileTime { seconds: seconds, nanos: nanos } + } + + /// Returns the whole number of seconds represented by this timestamp. + /// + /// Note that this value's meaning is **platform specific**. On Unix + /// platform time stamps are typically relative to January 1, 1970, but on + /// Windows platforms time stamps are relative to January 1, 1601. + pub fn seconds(&self) -> u64 { self.seconds } + + /// Returns the whole number of seconds represented by this timestamp, + /// relative to the Unix epoch start of January 1, 1970. + /// + /// Note that this does not return the same value as `seconds` for Windows + /// platforms as seconds are relative to a different date there. + pub fn seconds_relative_to_1970(&self) -> u64 { + self.seconds - if cfg!(windows) {11644473600} else {0} + } + + /// Returns the nanosecond precision of this timestamp. + /// + /// The returned value is always less than one billion and represents a + /// portion of a second forward from the seconds returned by the `seconds` + /// method. 
+ pub fn nanoseconds(&self) -> u32 { self.nanos } +} + +impl fmt::Display for FileTime { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "{}.{:09}s", self.seconds, self.nanos) + } +} + +/// Set the last access and modification times for a file on the filesystem. +/// +/// This function will set the `atime` and `mtime` metadata fields for a file +/// on the local filesystem, returning any error encountered. +pub fn set_file_times
<P>
(p: P, atime: FileTime, mtime: FileTime) + -> io::Result<()> where P: AsRef { + set_file_times_(p.as_ref(), atime, mtime) +} + +#[cfg(unix)] +fn set_file_times_(p: &Path, atime: FileTime, mtime: FileTime) -> io::Result<()> { + use std::ffi::CString; + use libc::{timeval, time_t, suseconds_t, utimes}; + + let times = [to_timeval(&atime), to_timeval(&mtime)]; + let p = try!(CString::new(p.as_os_str().as_bytes())); + return unsafe { + if utimes(p.as_ptr() as *const _, times.as_ptr()) == 0 { + Ok(()) + } else { + Err(io::Error::last_os_error()) + } + }; + + fn to_timeval(ft: &FileTime) -> timeval { + timeval { + tv_sec: ft.seconds() as time_t, + tv_usec: (ft.nanoseconds() / 1000) as suseconds_t, + } + } +} + +#[cfg(windows)] +#[allow(bad_style)] +fn set_file_times_(p: &Path, atime: FileTime, mtime: FileTime) -> io::Result<()> { + use std::fs::OpenOptions; + + type BOOL = i32; + type HANDLE = *mut u8; + type DWORD = u32; + #[repr(C)] + struct FILETIME { + dwLowDateTime: u32, + dwHighDateTime: u32, + } + extern "system" { + fn SetFileTime(hFile: HANDLE, + lpCreationTime: *const FILETIME, + lpLastAccessTime: *const FILETIME, + lpLastWriteTime: *const FILETIME) -> BOOL; + } + + let f = try!(OpenOptions::new().write(true).open(p)); + let atime = to_filetime(&atime); + let mtime = to_filetime(&mtime); + return unsafe { + let ret = SetFileTime(f.as_raw_handle() as *mut _, + 0 as *const _, + &atime, &mtime); + if ret != 0 { + Ok(()) + } else { + Err(io::Error::last_os_error()) + } + }; + + fn to_filetime(ft: &FileTime) -> FILETIME { + let intervals = ft.seconds() * (1_000_000_000 / 100) + + ((ft.nanoseconds() as u64) / 100); + FILETIME { + dwLowDateTime: intervals as DWORD, + dwHighDateTime: (intervals >> 32) as DWORD, + } + } +} + +#[cfg(test)] +mod tests { + extern crate tempdir; + + use std::fs::{self, File}; + use self::tempdir::TempDir; + use super::{FileTime, set_file_times}; + + #[test] + fn set_file_times_test() { + let td = TempDir::new("filetime").unwrap(); + let path = td.path().join("foo.txt"); + File::create(&path).unwrap(); + + let metadata = fs::metadata(&path).unwrap(); + let mtime = FileTime::from_last_modification_time(&metadata); + let atime = FileTime::from_last_access_time(&metadata); + set_file_times(&path, atime, mtime).unwrap(); + + let new_mtime = FileTime::from_seconds_since_1970(10_000, 0); + set_file_times(&path, atime, new_mtime).unwrap(); + + let metadata = fs::metadata(&path).unwrap(); + let mtime = FileTime::from_last_modification_time(&metadata); + assert_eq!(mtime, new_mtime); + } +} diff --git a/src/vendor/gcc/.cargo-checksum.json b/src/vendor/gcc/.cargo-checksum.json new file mode 100644 index 0000000000..e85f4b2181 --- /dev/null +++ b/src/vendor/gcc/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".gitignore":"f9b1ca6ae27d1c18215265024629a8960c31379f206d9ed20f64e0b2dcf79805",".travis.yml":"675ffe583db77282d010306f29e6d81e5070ab081deddd0300137dfbd2cb83de","Cargo.toml":"19bb617b74de761515ef5d087fd0e30912fda1d7c22fd04fa211236dab99a509","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"378f5840b258e2779c39418f3f2d7b2ba96f1c7917dd6be0713f88305dbda397","README.md":"ecb2d93f4c81edbd48d8742ff7887dc0a4530a5890967839090bbc972d49bebe","appveyor.yml":"46c77d913eaa45871296942c2cd96ef092c9dcaf19201cb5c500a5107faeb06f","src/bin/gcc-shim.rs":"11edfe1fc6f932bd42ffffda5145833302bc163e0b87dc0d54f4bd0997ad4708","src/lib.rs":"7e7c60beccfdd145e876da81bb07dd09c5248dab0b26d93190bab4242799d51a","src/registry.rs":"3e2a42581ebb82e325dd5600c6571cef937b35003b2927dc618967f5238a2058","src/windows_registry.rs":"1f4211caec5a192b5f05c8a47efb27aa6a0ab976c659b9318a0cf603a28d6746","tests/cc_env.rs":"d92c5e3d3d43ac244e63b2cd2c93a521fcf124bf1ccf8d4c6bfa7f8333d88976","tests/support/mod.rs":"f4dad5a8133c3dd6678d9a3de057b82e624ef547b9b3e4ac9508a48962fc387b","tests/test.rs":"164220f11be2eebc20315826513999970660a82feff8cc4b15b4e9d73d98324e"},"package":"872db9e59486ef2b14f8e8c10e9ef02de2bccef6363d7f34835dedb386b3d950"} \ No newline at end of file diff --git a/src/vendor/gcc/.cargo-ok b/src/vendor/gcc/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/gcc/.gitignore b/src/vendor/gcc/.gitignore new file mode 100644 index 0000000000..a9d37c560c --- /dev/null +++ b/src/vendor/gcc/.gitignore @@ -0,0 +1,2 @@ +target +Cargo.lock diff --git a/src/vendor/gcc/.travis.yml b/src/vendor/gcc/.travis.yml new file mode 100644 index 0000000000..bf55f49173 --- /dev/null +++ b/src/vendor/gcc/.travis.yml @@ -0,0 +1,42 @@ +language: rust +rust: + - stable + - beta + - nightly +sudo: false +install: + - if [ "$TRAVIS_OS_NAME" = "linux" ]; then OS=unknown-linux-gnu; else OS=apple-darwin; fi + - export TARGET=$ARCH-$OS + - curl https://static.rust-lang.org/rustup.sh | + sh -s -- --add-target=$TARGET --disable-sudo -y --prefix=`rustc --print sysroot` +before_script: + - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH +script: + - cargo build --verbose + - cargo test --verbose + - cargo test --verbose --features parallel + - cargo test --manifest-path gcc-test/Cargo.toml --target $TARGET + - cargo test --manifest-path gcc-test/Cargo.toml --target $TARGET --features parallel + - cargo test --manifest-path gcc-test/Cargo.toml --target $TARGET --release + - cargo doc + - cargo clean && cargo build + - rustdoc --test README.md -L target/debug -L target/debug/deps +after_success: + - travis-cargo --only nightly doc-upload +env: + global: + secure: "CBtqrudgE0PS8x3kTr44jKbC2D4nfnmdYVecooNm0qnER4B4TSvZpZSQoCgKK6k4BYQuOSyFTOwYx6M79w39ZMOgyCP9ytB+tyMWL0/+ZuUQL04yVg4M5vd3oJMkOaXbvG56ncgPyFrseY+FPDg+mXAzvJk/nily37YXjkQj2D0=" + + matrix: + - ARCH=x86_64 + - ARCH=i686 +notifications: + email: + on_success: never +os: + - linux + - osx +addons: + apt: + packages: + - g++-multilib diff --git a/src/vendor/gcc/Cargo.toml b/src/vendor/gcc/Cargo.toml new file mode 100644 index 0000000000..7efdbf9b4b --- /dev/null +++ b/src/vendor/gcc/Cargo.toml @@ -0,0 +1,23 @@ +[package] + +name = "gcc" +version = "0.3.40" +authors = ["Alex Crichton "] +license = "MIT/Apache-2.0" +repository = "https://github.com/alexcrichton/gcc-rs" +documentation = "http://alexcrichton.com/gcc-rs" 
+description = """ +A build-time dependency for Cargo build scripts to assist in invoking the native +C compiler to compile native C code into a static archive to be linked into Rust +code. +""" +keywords = ["build-dependencies"] + +[dependencies] +rayon = { version = "0.4", optional = true } + +[features] +parallel = ["rayon"] + +[dev-dependencies] +tempdir = "0.3" diff --git a/src/vendor/gcc/LICENSE-APACHE b/src/vendor/gcc/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/src/vendor/gcc/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/src/vendor/gcc/LICENSE-MIT b/src/vendor/gcc/LICENSE-MIT new file mode 100644 index 0000000000..39e0ed6602 --- /dev/null +++ b/src/vendor/gcc/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2014 Alex Crichton + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/src/vendor/gcc/README.md b/src/vendor/gcc/README.md new file mode 100644 index 0000000000..ecc79c6735 --- /dev/null +++ b/src/vendor/gcc/README.md @@ -0,0 +1,161 @@ +# gcc-rs + +A library to compile C/C++ code into a Rust library/application. + +[![Build Status](https://travis-ci.org/alexcrichton/gcc-rs.svg?branch=master)](https://travis-ci.org/alexcrichton/gcc-rs) +[![Build status](https://ci.appveyor.com/api/projects/status/onu270iw98h81nwv?svg=true)](https://ci.appveyor.com/project/alexcrichton/gcc-rs) + +[Documentation](http://alexcrichton.com/gcc-rs) + +A simple library meant to be used as a build dependency with Cargo packages in +order to build a set of C/C++ files into a static archive. Note that while this +crate is called "gcc", it actually calls out to the most relevant compile for +a platform, for example using `cl` on MSVC. That is, this crate does indeed work +on MSVC! + +## Using gcc-rs + +First, you'll want to both add a build script for your crate (`build.rs`) and +also add this crate to your `Cargo.toml` via: + +```toml +[package] +# ... 
+build = "build.rs" + +[build-dependencies] +gcc = "0.3" +``` + +Next up, you'll want to write a build script like so: + +```rust,no_run +// build.rs + +extern crate gcc; + +fn main() { + gcc::compile_library("libfoo.a", &["foo.c", "bar.c"]); +} +``` + +And that's it! Running `cargo build` should take care of the rest and your Rust +application will now have the C files `foo.c` and `bar.c` compiled into it. You +can call the functions in Rust by declaring functions in your Rust code like so: + +``` +extern { + fn foo_function(); + fn bar_function(); +} + +pub fn call() { + unsafe { + foo_function(); + bar_function(); + } +} + +fn main() { + // ... +} +``` + +## External configuration via environment variables + +To control the programs and flags used for building, the builder can set a +number of different environment variables. + +* `CFLAGS` - a series of space separated flags passed to "gcc". Note that + individual flags cannot currently contain spaces, so doing + something like: "-L=foo\ bar" is not possible. +* `CC` - the actual C compiler used. Note that this is used as an exact + executable name, so (for example) no extra flags can be passed inside + this variable, and the builder must ensure that there aren't any + trailing spaces. This compiler must understand the `-c` flag. For + certain `TARGET`s, it also is assumed to know about other flags (most + common is `-fPIC`). +* `AR` - the `ar` (archiver) executable to use to build the static library. + +Each of these variables can also be supplied with certain prefixes and suffixes, +in the following prioritized order: + +1. `_` - for example, `CC_x86_64-unknown-linux-gnu` +2. `_` - for example, `CC_x86_64_unknown_linux_gnu` +3. `_` - for example, `HOST_CC` or `TARGET_CFLAGS` +4. `` - a plain `CC`, `AR` as above. + +If none of these variables exist, gcc-rs uses built-in defaults + +In addition to the the above optional environment variables, `gcc-rs` has some +functions with hard requirements on some variables supplied by [cargo's +build-script driver][cargo] that it has the `TARGET`, `OUT_DIR`, `OPT_LEVEL`, +and `HOST` variables. + +[cargo]: http://doc.crates.io/build-script.html#inputs-to-the-build-script + +## Optional features + +Currently gcc-rs supports parallel compilation (think `make -jN`) but this +feature is turned off by default. To enable gcc-rs to compile C/C++ in parallel, +you can change your dependency to: + +```toml +[build-dependencies] +gcc = { version = "0.3", features = ["parallel"] } +``` + +By default gcc-rs will limit parallelism to `$NUM_JOBS`, or if not present it +will limit it to the number of cpus on the machine. + +## Compile-time Requirements + +To work properly this crate needs access to a C compiler when the build script +is being run. This crate does not ship a C compiler with it. The compiler +required varies per platform, but there are three broad categories: + +* Unix platforms require `cc` to be the C compiler. This can be found by + installing gcc/clang on Linux distributions and Xcode on OSX, for example. +* Windows platforms targeting MSVC (e.g. your target triple ends in `-msvc`) + require `cl.exe` to be available and in `PATH`. This is typically found in + standard Visual Studio installations and the `PATH` can be set up by running + the appropriate developer tools shell. +* Windows platforms targeting MinGW (e.g. your target triple ends in `-gnu`) + require `gcc` to be available in `PATH`. 
We recommend the + [MinGW-w64](http://mingw-w64.org) distribution, which is using the + [Win-builds](http://win-builds.org) installation system. + You may also acquire it via + [MSYS2](http://msys2.github.io), as explained [here][msys2-help]. Make sure + to install the appropriate architecture corresponding to your installation of + rustc. GCC from older [MinGW](http://www.mingw.org) project is compatible + only with 32-bit rust compiler. + +[msys2-help]: http://github.com/rust-lang/rust#building-on-windows + +## C++ support + +`gcc-rs` supports C++ libraries compilation by using the `cpp` method on +`Config`: + +```rust,no_run +extern crate gcc; + +fn main() { + gcc::Config::new() + .cpp(true) // Switch to C++ library compilation. + .file("foo.cpp") + .compile("libfoo.a"); +} +``` + +When using C++ library compilation switch, the `CXX` and `CXXFLAGS` env +variables are used instead of `CC` and `CFLAGS` and the C++ standard library is +linked to the crate target. + +## License + +`gcc-rs` is primarily distributed under the terms of both the MIT license and +the Apache License (Version 2.0), with portions covered by various BSD-like +licenses. + +See LICENSE-APACHE, and LICENSE-MIT for details. diff --git a/src/vendor/gcc/appveyor.yml b/src/vendor/gcc/appveyor.yml new file mode 100644 index 0000000000..f6108c6651 --- /dev/null +++ b/src/vendor/gcc/appveyor.yml @@ -0,0 +1,35 @@ +environment: + matrix: + - TARGET: x86_64-pc-windows-msvc + ARCH: amd64 + VS: C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat + - TARGET: x86_64-pc-windows-msvc + ARCH: amd64 + VS: C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat + - TARGET: i686-pc-windows-msvc + ARCH: x86 + VS: C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat + - TARGET: i686-pc-windows-msvc + ARCH: x86 + VS: C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat + - TARGET: x86_64-pc-windows-gnu + MSYS_BITS: 64 + - TARGET: i686-pc-windows-gnu + MSYS_BITS: 32 +install: + - ps: Start-FileDownload "https://static.rust-lang.org/dist/rust-nightly-${env:TARGET}.exe" + - rust-nightly-%TARGET%.exe /VERYSILENT /NORESTART /DIR="C:\Program Files (x86)\Rust" + - if defined VS call "%VS%" %ARCH% + - set PATH=%PATH%;C:\Program Files (x86)\Rust\bin + - if defined MSYS_BITS set PATH=%PATH%;C:\msys64\mingw%MSYS_BITS%\bin + - rustc -V + - cargo -V + +build: false + +test_script: + - cargo test --target %TARGET% + - cargo test --features parallel --target %TARGET% + - cargo test --manifest-path gcc-test/Cargo.toml --target %TARGET% + - cargo test --manifest-path gcc-test/Cargo.toml --features parallel --target %TARGET% + - cargo test --manifest-path gcc-test/Cargo.toml --release --target %TARGET% diff --git a/src/vendor/gcc/src/bin/gcc-shim.rs b/src/vendor/gcc/src/bin/gcc-shim.rs new file mode 100644 index 0000000000..43fd811d36 --- /dev/null +++ b/src/vendor/gcc/src/bin/gcc-shim.rs @@ -0,0 +1,23 @@ +#![cfg_attr(test, allow(dead_code))] + +use std::env; +use std::fs::File; +use std::io::prelude::*; +use std::path::PathBuf; + +fn main() { + let out_dir = PathBuf::from(env::var_os("GCCTEST_OUT_DIR").unwrap()); + for i in 0.. 
{ + let candidate = out_dir.join(format!("out{}", i)); + if candidate.exists() { + continue + } + let mut f = File::create(candidate).unwrap(); + for arg in env::args().skip(1) { + writeln!(f, "{}", arg).unwrap(); + } + + File::create(out_dir.join("libfoo.a")).unwrap(); + break + } +} diff --git a/src/vendor/gcc/src/lib.rs b/src/vendor/gcc/src/lib.rs new file mode 100644 index 0000000000..43cc371117 --- /dev/null +++ b/src/vendor/gcc/src/lib.rs @@ -0,0 +1,1007 @@ +//! A library for build scripts to compile custom C code +//! +//! This library is intended to be used as a `build-dependencies` entry in +//! `Cargo.toml`: +//! +//! ```toml +//! [build-dependencies] +//! gcc = "0.3" +//! ``` +//! +//! The purpose of this crate is to provide the utility functions necessary to +//! compile C code into a static archive which is then linked into a Rust crate. +//! The top-level `compile_library` function serves as a convenience and more +//! advanced configuration is available through the `Config` builder. +//! +//! This crate will automatically detect situations such as cross compilation or +//! other environment variables set by Cargo and will build code appropriately. +//! +//! # Examples +//! +//! Use the default configuration: +//! +//! ```no_run +//! extern crate gcc; +//! +//! fn main() { +//! gcc::compile_library("libfoo.a", &["src/foo.c"]); +//! } +//! ``` +//! +//! Use more advanced configuration: +//! +//! ```no_run +//! extern crate gcc; +//! +//! fn main() { +//! gcc::Config::new() +//! .file("src/foo.c") +//! .define("FOO", Some("bar")) +//! .include("src") +//! .compile("libfoo.a"); +//! } +//! ``` + +#![doc(html_root_url = "http://alexcrichton.com/gcc-rs")] +#![cfg_attr(test, deny(warnings))] +#![deny(missing_docs)] + +#[cfg(feature = "parallel")] +extern crate rayon; + +use std::env; +use std::ffi::{OsString, OsStr}; +use std::fs; +use std::io; +use std::path::{PathBuf, Path}; +use std::process::{Command, Stdio}; +use std::io::{BufReader, BufRead, Write}; + +#[cfg(windows)] +mod registry; +pub mod windows_registry; + +/// Extra configuration to pass to gcc. +pub struct Config { + include_directories: Vec, + definitions: Vec<(String, Option)>, + objects: Vec, + flags: Vec, + files: Vec, + cpp: bool, + cpp_link_stdlib: Option>, + cpp_set_stdlib: Option, + target: Option, + host: Option, + out_dir: Option, + opt_level: Option, + debug: Option, + env: Vec<(OsString, OsString)>, + compiler: Option, + archiver: Option, + cargo_metadata: bool, + pic: Option, +} + +/// Configuration used to represent an invocation of a C compiler. +/// +/// This can be used to figure out what compiler is in use, what the arguments +/// to it are, and what the environment variables look like for the compiler. +/// This can be used to further configure other build systems (e.g. forward +/// along CC and/or CFLAGS) or the `to_command` method can be used to run the +/// compiler itself. +pub struct Tool { + path: PathBuf, + args: Vec, + env: Vec<(OsString, OsString)>, +} + +/// Compile a library from the given set of input C files. +/// +/// This will simply compile all files into object files and then assemble them +/// into the output. This will read the standard environment variables to detect +/// cross compilations and such. +/// +/// This function will also print all metadata on standard output for Cargo. 
+/// +/// # Example +/// +/// ```no_run +/// gcc::compile_library("libfoo.a", &["foo.c", "bar.c"]); +/// ``` +pub fn compile_library(output: &str, files: &[&str]) { + let mut c = Config::new(); + for f in files.iter() { + c.file(*f); + } + c.compile(output) +} + +impl Config { + /// Construct a new instance of a blank set of configuration. + /// + /// This builder is finished with the `compile` function. + pub fn new() -> Config { + Config { + include_directories: Vec::new(), + definitions: Vec::new(), + objects: Vec::new(), + flags: Vec::new(), + files: Vec::new(), + cpp: false, + cpp_link_stdlib: None, + cpp_set_stdlib: None, + target: None, + host: None, + out_dir: None, + opt_level: None, + debug: None, + env: Vec::new(), + compiler: None, + archiver: None, + cargo_metadata: true, + pic: None, + } + } + + /// Add a directory to the `-I` or include path for headers + pub fn include>(&mut self, dir: P) -> &mut Config { + self.include_directories.push(dir.as_ref().to_path_buf()); + self + } + + /// Specify a `-D` variable with an optional value. + pub fn define(&mut self, var: &str, val: Option<&str>) -> &mut Config { + self.definitions.push((var.to_string(), val.map(|s| s.to_string()))); + self + } + + /// Add an arbitrary object file to link in + pub fn object>(&mut self, obj: P) -> &mut Config { + self.objects.push(obj.as_ref().to_path_buf()); + self + } + + /// Add an arbitrary flag to the invocation of the compiler + pub fn flag(&mut self, flag: &str) -> &mut Config { + self.flags.push(flag.to_string()); + self + } + + /// Add a file which will be compiled + pub fn file>(&mut self, p: P) -> &mut Config { + self.files.push(p.as_ref().to_path_buf()); + self + } + + /// Set C++ support. + /// + /// The other `cpp_*` options will only become active if this is set to + /// `true`. + pub fn cpp(&mut self, cpp: bool) -> &mut Config { + self.cpp = cpp; + self + } + + /// Set the standard library to link against when compiling with C++ + /// support. + /// + /// The default value of this property depends on the current target: On + /// OS X `Some("c++")` is used, when compiling for a Visual Studio based + /// target `None` is used and for other targets `Some("stdc++")` is used. + /// + /// A value of `None` indicates that no automatic linking should happen, + /// otherwise cargo will link against the specified library. + /// + /// The given library name must not contain the `lib` prefix. + pub fn cpp_link_stdlib(&mut self, cpp_link_stdlib: Option<&str>) + -> &mut Config { + self.cpp_link_stdlib = Some(cpp_link_stdlib.map(|s| s.into())); + self + } + + /// Force the C++ compiler to use the specified standard library. + /// + /// Setting this option will automatically set `cpp_link_stdlib` to the same + /// value. + /// + /// The default value of this option is always `None`. + /// + /// This option has no effect when compiling for a Visual Studio based + /// target. + /// + /// This option sets the `-stdlib` flag, which is only supported by some + /// compilers (clang, icc) but not by others (gcc). The library will not + /// detect which compiler is used, as such it is the responsibility of the + /// caller to ensure that this option is only used in conjuction with a + /// compiler which supports the `-stdlib` flag. + /// + /// A value of `None` indicates that no specific C++ standard library should + /// be used, otherwise `-stdlib` is added to the compile invocation. + /// + /// The given library name must not contain the `lib` prefix. 
+ pub fn cpp_set_stdlib(&mut self, cpp_set_stdlib: Option<&str>) + -> &mut Config { + self.cpp_set_stdlib = cpp_set_stdlib.map(|s| s.into()); + self.cpp_link_stdlib(cpp_set_stdlib); + self + } + + /// Configures the target this configuration will be compiling for. + /// + /// This option is automatically scraped from the `TARGET` environment + /// variable by build scripts, so it's not required to call this function. + pub fn target(&mut self, target: &str) -> &mut Config { + self.target = Some(target.to_string()); + self + } + + /// Configures the host assumed by this configuration. + /// + /// This option is automatically scraped from the `HOST` environment + /// variable by build scripts, so it's not required to call this function. + pub fn host(&mut self, host: &str) -> &mut Config { + self.host = Some(host.to_string()); + self + } + + /// Configures the optimization level of the generated object files. + /// + /// This option is automatically scraped from the `OPT_LEVEL` environment + /// variable by build scripts, so it's not required to call this function. + pub fn opt_level(&mut self, opt_level: u32) -> &mut Config { + self.opt_level = Some(opt_level.to_string()); + self + } + + /// Configures the optimization level of the generated object files. + /// + /// This option is automatically scraped from the `OPT_LEVEL` environment + /// variable by build scripts, so it's not required to call this function. + pub fn opt_level_str(&mut self, opt_level: &str) -> &mut Config { + self.opt_level = Some(opt_level.to_string()); + self + } + + /// Configures whether the compiler will emit debug information when + /// generating object files. + /// + /// This option is automatically scraped from the `PROFILE` environment + /// variable by build scripts (only enabled when the profile is "debug"), so + /// it's not required to call this function. + pub fn debug(&mut self, debug: bool) -> &mut Config { + self.debug = Some(debug); + self + } + + /// Configures the output directory where all object files and static + /// libraries will be located. + /// + /// This option is automatically scraped from the `OUT_DIR` environment + /// variable by build scripts, so it's not required to call this function. + pub fn out_dir>(&mut self, out_dir: P) -> &mut Config { + self.out_dir = Some(out_dir.as_ref().to_owned()); + self + } + + /// Configures the compiler to be used to produce output. + /// + /// This option is automatically determined from the target platform or a + /// number of environment variables, so it's not required to call this + /// function. + pub fn compiler>(&mut self, compiler: P) -> &mut Config { + self.compiler = Some(compiler.as_ref().to_owned()); + self + } + + /// Configures the tool used to assemble archives. + /// + /// This option is automatically determined from the target platform or a + /// number of environment variables, so it's not required to call this + /// function. + pub fn archiver>(&mut self, archiver: P) -> &mut Config { + self.archiver = Some(archiver.as_ref().to_owned()); + self + } + /// Define whether metadata should be emitted for cargo allowing it to + /// automatically link the binary. Defaults to `true`. + pub fn cargo_metadata(&mut self, cargo_metadata: bool) -> &mut Config { + self.cargo_metadata = cargo_metadata; + self + } + + /// Configures whether the compiler will emit position independent code. + /// + /// This option defaults to `false` for `i686` and `windows-gnu` targets and to `true` for all + /// other targets. 
+ pub fn pic(&mut self, pic: bool) -> &mut Config { + self.pic = Some(pic); + self + } + + + #[doc(hidden)] + pub fn __set_env(&mut self, a: A, b: B) -> &mut Config + where A: AsRef, B: AsRef + { + self.env.push((a.as_ref().to_owned(), b.as_ref().to_owned())); + self + } + + /// Run the compiler, generating the file `output` + /// + /// The name `output` must begin with `lib` and end with `.a` + pub fn compile(&self, output: &str) { + assert!(output.starts_with("lib")); + assert!(output.ends_with(".a")); + let lib_name = &output[3..output.len() - 2]; + let dst = self.get_out_dir(); + + let mut objects = Vec::new(); + let mut src_dst = Vec::new(); + for file in self.files.iter() { + let obj = dst.join(file).with_extension("o"); + let obj = if !obj.starts_with(&dst) { + dst.join(obj.file_name().unwrap()) + } else { + obj + }; + fs::create_dir_all(&obj.parent().unwrap()).unwrap(); + src_dst.push((file.to_path_buf(), obj.clone())); + objects.push(obj); + } + self.compile_objects(&src_dst); + self.assemble(lib_name, &dst.join(output), &objects); + + if self.get_target().contains("msvc") { + let compiler = self.get_base_compiler(); + let atlmfc_lib = compiler.env().iter().find(|&&(ref var, _)| { + var == OsStr::new("LIB") + }).and_then(|&(_, ref lib_paths)| { + env::split_paths(lib_paths).find(|path| { + let sub = Path::new("atlmfc/lib"); + path.ends_with(sub) || path.parent().map_or(false, |p| p.ends_with(sub)) + }) + }); + + if let Some(atlmfc_lib) = atlmfc_lib { + self.print(&format!("cargo:rustc-link-search=native={}", + atlmfc_lib.display())); + } + } + + self.print(&format!("cargo:rustc-link-lib=static={}", + &output[3..output.len() - 2])); + self.print(&format!("cargo:rustc-link-search=native={}", dst.display())); + + // Add specific C++ libraries, if enabled. + if self.cpp { + if let Some(stdlib) = self.get_cpp_link_stdlib() { + self.print(&format!("cargo:rustc-link-lib={}", stdlib)); + } + } + } + + #[cfg(feature = "parallel")] + fn compile_objects(&self, objs: &[(PathBuf, PathBuf)]) { + use self::rayon::prelude::*; + + let mut cfg = rayon::Configuration::new(); + if let Ok(amt) = env::var("NUM_JOBS") { + if let Ok(amt) = amt.parse() { + cfg = cfg.set_num_threads(amt); + } + } + drop(rayon::initialize(cfg)); + + objs.par_iter().weight_max().for_each(|&(ref src, ref dst)| { + self.compile_object(src, dst) + }) + } + + #[cfg(not(feature = "parallel"))] + fn compile_objects(&self, objs: &[(PathBuf, PathBuf)]) { + for &(ref src, ref dst) in objs { + self.compile_object(src, dst); + } + } + + fn compile_object(&self, file: &Path, dst: &Path) { + let is_asm = file.extension().and_then(|s| s.to_str()) == Some("asm"); + let msvc = self.get_target().contains("msvc"); + let (mut cmd, name) = if msvc && is_asm { + self.msvc_macro_assembler() + } else { + let compiler = self.get_compiler(); + let mut cmd = compiler.to_command(); + for &(ref a, ref b) in self.env.iter() { + cmd.env(a, b); + } + (cmd, compiler.path.file_name().unwrap() + .to_string_lossy().into_owned()) + }; + if msvc && is_asm { + cmd.arg("/Fo").arg(dst); + } else if msvc { + let mut s = OsString::from("/Fo"); + s.push(&dst); + cmd.arg(s); + } else { + cmd.arg("-o").arg(&dst); + } + cmd.arg(if msvc {"/c"} else {"-c"}); + cmd.arg(file); + + run(&mut cmd, &name); + } + + /// Get the compiler that's in use for this configuration. + /// + /// This function will return a `Tool` which represents the culmination + /// of this configuration at a snapshot in time. The returned compiler can + /// be inspected (e.g. 
the path, arguments, environment) to forward along to + /// other tools, or the `to_command` method can be used to invoke the + /// compiler itself. + /// + /// This method will take into account all configuration such as debug + /// information, optimization level, include directories, defines, etc. + /// Additionally, the compiler binary in use follows the standard + /// conventions for this path, e.g. looking at the explicitly set compiler, + /// environment variables (a number of which are inspected here), and then + /// falling back to the default configuration. + pub fn get_compiler(&self) -> Tool { + let opt_level = self.get_opt_level(); + let debug = self.get_debug(); + let target = self.get_target(); + let msvc = target.contains("msvc"); + self.print(&format!("debug={} opt-level={}", debug, opt_level)); + + let mut cmd = self.get_base_compiler(); + let nvcc = cmd.path.to_str() + .map(|path| path.contains("nvcc")) + .unwrap_or(false); + + if msvc { + cmd.args.push("/nologo".into()); + let features = env::var("CARGO_CFG_TARGET_FEATURE") + .unwrap_or(String::new()); + if features.contains("crt-static") { + cmd.args.push("/MT".into()); + } else { + cmd.args.push("/MD".into()); + } + match &opt_level[..] { + "z" | "s" => cmd.args.push("/Os".into()), + "2" => cmd.args.push("/O2".into()), + "1" => cmd.args.push("/O1".into()), + _ => {} + } + if target.contains("i586") { + cmd.args.push("/ARCH:IA32".into()); + } + } else if nvcc { + cmd.args.push(format!("-O{}", opt_level).into()); + } else { + cmd.args.push(format!("-O{}", opt_level).into()); + cmd.args.push("-ffunction-sections".into()); + cmd.args.push("-fdata-sections".into()); + } + for arg in self.envflags(if self.cpp {"CXXFLAGS"} else {"CFLAGS"}) { + cmd.args.push(arg.into()); + } + + if debug { + cmd.args.push(if msvc {"/Z7"} else {"-g"}.into()); + } + + if target.contains("-ios") { + self.ios_flags(&mut cmd); + } else if !msvc { + if target.contains("i686") || target.contains("i586") { + cmd.args.push("-m32".into()); + } else if target.contains("x86_64") || target.contains("powerpc64") { + cmd.args.push("-m64".into()); + } + + if !nvcc && self.pic.unwrap_or(!target.contains("i686") && !target.contains("windows-gnu")) { + cmd.args.push("-fPIC".into()); + } else if nvcc && self.pic.unwrap_or(false) { + cmd.args.push("-Xcompiler".into()); + cmd.args.push("\'-fPIC\'".into()); + } + + if target.contains("musl") { + cmd.args.push("-static".into()); + } + + // armv7 targets get to use armv7 instructions + if target.starts_with("armv7-unknown-linux-") { + cmd.args.push("-march=armv7-a".into()); + } + + // On android we can guarantee some extra float instructions + // (specified in the android spec online) + if target.starts_with("armv7-linux-androideabi") { + cmd.args.push("-march=armv7-a".into()); + cmd.args.push("-mfpu=vfpv3-d16".into()); + } + + // For us arm == armv6 by default + if target.starts_with("arm-unknown-linux-") { + cmd.args.push("-march=armv6".into()); + cmd.args.push("-marm".into()); + } + + // Turn codegen down on i586 to avoid some instructions. + if target.starts_with("i586-unknown-linux-") { + cmd.args.push("-march=pentium".into()); + } + + // Set codegen level for i686 correctly + if target.starts_with("i686-unknown-linux-") { + cmd.args.push("-march=i686".into()); + } + + // Looks like `musl-gcc` makes is hard for `-m32` to make its way + // all the way to the linker, so we need to actually instruct the + // linker that we're generating 32-bit executables as well. 
This'll + // typically only be used for build scripts which transitively use + // these flags that try to compile executables. + if target == "i686-unknown-linux-musl" { + cmd.args.push("-Wl,-melf_i386".into()); + } + + if target.starts_with("thumb") { + cmd.args.push("-mthumb".into()); + + if target.ends_with("eabihf") { + cmd.args.push("-mfloat-abi=hard".into()) + } + } + if target.starts_with("thumbv6m") { + cmd.args.push("-march=armv6s-m".into()); + } + if target.starts_with("thumbv7em") { + cmd.args.push("-march=armv7e-m".into()); + + if target.ends_with("eabihf") { + cmd.args.push("-mfpu=fpv4-sp-d16".into()) + } + } + if target.starts_with("thumbv7m") { + cmd.args.push("-march=armv7-m".into()); + } + } + + if self.cpp && !msvc { + if let Some(ref stdlib) = self.cpp_set_stdlib { + cmd.args.push(format!("-stdlib=lib{}", stdlib).into()); + } + } + + for directory in self.include_directories.iter() { + cmd.args.push(if msvc {"/I"} else {"-I"}.into()); + cmd.args.push(directory.into()); + } + + for flag in self.flags.iter() { + cmd.args.push(flag.into()); + } + + for &(ref key, ref value) in self.definitions.iter() { + let lead = if msvc {"/"} else {"-"}; + if let &Some(ref value) = value { + cmd.args.push(format!("{}D{}={}", lead, key, value).into()); + } else { + cmd.args.push(format!("{}D{}", lead, key).into()); + } + } + cmd + } + + fn msvc_macro_assembler(&self) -> (Command, String) { + let target = self.get_target(); + let tool = if target.contains("x86_64") {"ml64.exe"} else {"ml.exe"}; + let mut cmd = windows_registry::find(&target, tool).unwrap_or_else(|| { + self.cmd(tool) + }); + for directory in self.include_directories.iter() { + cmd.arg("/I").arg(directory); + } + for &(ref key, ref value) in self.definitions.iter() { + if let &Some(ref value) = value { + cmd.arg(&format!("/D{}={}", key, value)); + } else { + cmd.arg(&format!("/D{}", key)); + } + } + + if target.contains("i686") || target.contains("i586") { + cmd.arg("/safeseh"); + } + for flag in self.flags.iter() { + cmd.arg(flag); + } + + (cmd, tool.to_string()) + } + + fn assemble(&self, lib_name: &str, dst: &Path, objects: &[PathBuf]) { + // Delete the destination if it exists as the `ar` tool at least on Unix + // appends to it, which we don't want. + let _ = fs::remove_file(&dst); + + let target = self.get_target(); + if target.contains("msvc") { + let mut cmd = match self.archiver { + Some(ref s) => self.cmd(s), + None => windows_registry::find(&target, "lib.exe") + .unwrap_or(self.cmd("lib.exe")), + }; + let mut out = OsString::from("/OUT:"); + out.push(dst); + run(cmd.arg(out).arg("/nologo") + .args(objects) + .args(&self.objects), "lib.exe"); + + // The Rust compiler will look for libfoo.a and foo.lib, but the + // MSVC linker will also be passed foo.lib, so be sure that both + // exist for now. 
+ let lib_dst = dst.with_file_name(format!("{}.lib", lib_name)); + let _ = fs::remove_file(&lib_dst); + fs::hard_link(&dst, &lib_dst).or_else(|_| { + //if hard-link fails, just copy (ignoring the number of bytes written) + fs::copy(&dst, &lib_dst).map(|_| ()) + }).ok().expect("Copying from {:?} to {:?} failed.");; + } else { + let ar = self.get_ar(); + let cmd = ar.file_name().unwrap().to_string_lossy(); + run(self.cmd(&ar).arg("crs") + .arg(dst) + .args(objects) + .args(&self.objects), &cmd); + } + } + + fn ios_flags(&self, cmd: &mut Tool) { + enum ArchSpec { + Device(&'static str), + Simulator(&'static str), + } + + let target = self.get_target(); + let arch = target.split('-').nth(0).unwrap(); + let arch = match arch { + "arm" | "armv7" | "thumbv7" => ArchSpec::Device("armv7"), + "armv7s" | "thumbv7s" => ArchSpec::Device("armv7s"), + "arm64" | "aarch64" => ArchSpec::Device("arm64"), + "i386" | "i686" => ArchSpec::Simulator("-m32"), + "x86_64" => ArchSpec::Simulator("-m64"), + _ => fail("Unknown arch for iOS target") + }; + + let sdk = match arch { + ArchSpec::Device(arch) => { + cmd.args.push("-arch".into()); + cmd.args.push(arch.into()); + cmd.args.push("-miphoneos-version-min=7.0".into()); + "iphoneos" + }, + ArchSpec::Simulator(arch) => { + cmd.args.push(arch.into()); + cmd.args.push("-mios-simulator-version-min=7.0".into()); + "iphonesimulator" + } + }; + + self.print(&format!("Detecting iOS SDK path for {}", sdk)); + let sdk_path = self.cmd("xcrun") + .arg("--show-sdk-path") + .arg("--sdk") + .arg(sdk) + .stderr(Stdio::inherit()) + .output() + .unwrap() + .stdout; + + let sdk_path = String::from_utf8(sdk_path).unwrap(); + + cmd.args.push("-isysroot".into()); + cmd.args.push(sdk_path.trim().into()); + } + + fn cmd>(&self, prog: P) -> Command { + let mut cmd = Command::new(prog); + for &(ref a, ref b) in self.env.iter() { + cmd.env(a, b); + } + return cmd + } + + fn get_base_compiler(&self) -> Tool { + if let Some(ref c) = self.compiler { + return Tool::new(c.clone()) + } + let host = self.get_host(); + let target = self.get_target(); + let (env, msvc, gnu, default) = if self.cpp { + ("CXX", "cl.exe", "g++", "c++") + } else { + ("CC", "cl.exe", "gcc", "cc") + }; + self.env_tool(env).map(|(tool, args)| { + let mut t = Tool::new(PathBuf::from(tool)); + for arg in args { + t.args.push(arg.into()); + } + return t + }).or_else(|| { + if target.contains("emscripten") { + if self.cpp { + Some(Tool::new(PathBuf::from("em++"))) + } else { + Some(Tool::new(PathBuf::from("emcc"))) + } + } else { + None + } + }).or_else(|| { + windows_registry::find_tool(&target, "cl.exe") + }).unwrap_or_else(|| { + let compiler = if host.contains("windows") && + target.contains("windows") { + if target.contains("msvc") { + msvc.to_string() + } else { + format!("{}.exe", gnu) + } + } else if target.contains("android") { + format!("{}-{}", target, gnu) + } else if self.get_host() != target { + // CROSS_COMPILE is of the form: "arm-linux-gnueabi-" + let cc_env = self.getenv("CROSS_COMPILE"); + let cross_compile = cc_env.as_ref().map(|s| s.trim_right_matches('-')); + let prefix = cross_compile.or(match &target[..] 
{ + "aarch64-unknown-linux-gnu" => Some("aarch64-linux-gnu"), + "arm-unknown-linux-gnueabi" => Some("arm-linux-gnueabi"), + "arm-unknown-linux-gnueabihf" => Some("arm-linux-gnueabihf"), + "arm-unknown-linux-musleabi" => Some("arm-linux-musleabi"), + "arm-unknown-linux-musleabihf" => Some("arm-linux-musleabihf"), + "arm-unknown-netbsdelf-eabi" => Some("arm--netbsdelf-eabi"), + "armv6-unknown-netbsdelf-eabihf" => Some("armv6--netbsdelf-eabihf"), + "armv7-unknown-linux-gnueabihf" => Some("arm-linux-gnueabihf"), + "armv7-unknown-linux-musleabihf" => Some("arm-linux-musleabihf"), + "armv7-unknown-netbsdelf-eabihf" => Some("armv7--netbsdelf-eabihf"), + "i686-pc-windows-gnu" => Some("i686-w64-mingw32"), + "i686-unknown-linux-musl" => Some("musl"), + "i686-unknown-netbsdelf" => Some("i486--netbsdelf"), + "mips-unknown-linux-gnu" => Some("mips-linux-gnu"), + "mipsel-unknown-linux-gnu" => Some("mipsel-linux-gnu"), + "mips64-unknown-linux-gnuabi64" => Some("mips64-linux-gnuabi64"), + "mips64el-unknown-linux-gnuabi64" => Some("mips64el-linux-gnuabi64"), + "powerpc-unknown-linux-gnu" => Some("powerpc-linux-gnu"), + "powerpc-unknown-netbsd" => Some("powerpc--netbsd"), + "powerpc64-unknown-linux-gnu" => Some("powerpc-linux-gnu"), + "powerpc64le-unknown-linux-gnu" => Some("powerpc64le-linux-gnu"), + "s390x-unknown-linux-gnu" => Some("s390x-linux-gnu"), + "sparc64-unknown-netbsd" => Some("sparc64--netbsd"), + "thumbv6m-none-eabi" => Some("arm-none-eabi"), + "thumbv7em-none-eabi" => Some("arm-none-eabi"), + "thumbv7em-none-eabihf" => Some("arm-none-eabi"), + "thumbv7m-none-eabi" => Some("arm-none-eabi"), + "x86_64-pc-windows-gnu" => Some("x86_64-w64-mingw32"), + "x86_64-rumprun-netbsd" => Some("x86_64-rumprun-netbsd"), + "x86_64-unknown-linux-musl" => Some("musl"), + "x86_64-unknown-netbsd" => Some("x86_64--netbsd"), + _ => None, + }); + match prefix { + Some(prefix) => format!("{}-{}", prefix, gnu), + None => default.to_string(), + } + } else { + default.to_string() + }; + Tool::new(PathBuf::from(compiler)) + }) + } + + fn get_var(&self, var_base: &str) -> Result { + let target = self.get_target(); + let host = self.get_host(); + let kind = if host == target {"HOST"} else {"TARGET"}; + let target_u = target.replace("-", "_"); + let res = self.getenv(&format!("{}_{}", var_base, target)) + .or_else(|| self.getenv(&format!("{}_{}", var_base, target_u))) + .or_else(|| self.getenv(&format!("{}_{}", kind, var_base))) + .or_else(|| self.getenv(var_base)); + + match res { + Some(res) => Ok(res), + None => Err("could not get environment variable".to_string()), + } + } + + fn envflags(&self, name: &str) -> Vec { + self.get_var(name).unwrap_or(String::new()) + .split(|c: char| c.is_whitespace()).filter(|s| !s.is_empty()) + .map(|s| s.to_string()) + .collect() + } + + fn env_tool(&self, name: &str) -> Option<(String, Vec)> { + self.get_var(name).ok().map(|tool| { + let whitelist = ["ccache", "distcc"]; + for t in whitelist.iter() { + if tool.starts_with(t) && tool[t.len()..].starts_with(" ") { + return (t.to_string(), + vec![tool[t.len()..].trim_left().to_string()]) + } + } + (tool, Vec::new()) + }) + } + + /// Returns the default C++ standard library for the current target: `libc++` + /// for OS X and `libstdc++` for anything else. 
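The `get_var` helper above defines the precedence used for every environment-based setting (`CC`, `CXX`, `AR`, `CFLAGS`, ...): a variable qualified with the full target triple wins over the underscore form, which wins over the `HOST_`/`TARGET_`-prefixed form, which wins over the bare name. A small illustrative re-statement of that lookup order, assuming a cross build for `x86_64-unknown-linux-musl` (the variable values are invented):

```rust
use std::env;

// Illustrative only: mirrors the search order of `get_var` above.
fn get_var(var_base: &str, target: &str, host: &str) -> Option<String> {
    let kind = if host == target { "HOST" } else { "TARGET" };
    let target_u = target.replace("-", "_");
    env::var(&format!("{}_{}", var_base, target)).ok()                    // e.g. CC_x86_64-unknown-linux-musl
        .or_else(|| env::var(&format!("{}_{}", var_base, target_u)).ok()) // e.g. CC_x86_64_unknown_linux_musl
        .or_else(|| env::var(&format!("{}_{}", kind, var_base)).ok())     // e.g. TARGET_CC (or HOST_CC)
        .or_else(|| env::var(var_base).ok())                              // e.g. CC
}

fn main() {
    env::set_var("TARGET_CC", "musl-gcc");
    let cc = get_var("CC", "x86_64-unknown-linux-musl", "x86_64-unknown-linux-gnu");
    // Prints Some("musl-gcc") unless a more specific variable is also set.
    println!("selected compiler: {:?}", cc);
}
```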
+ fn get_cpp_link_stdlib(&self) -> Option { + self.cpp_link_stdlib.clone().unwrap_or_else(|| { + let target = self.get_target(); + if target.contains("msvc") { + None + } else if target.contains("darwin") { + Some("c++".to_string()) + } else if target.contains("freebsd") { + Some("c++".to_string()) + } else { + Some("stdc++".to_string()) + } + }) + } + + fn get_ar(&self) -> PathBuf { + self.archiver.clone().or_else(|| { + self.get_var("AR").map(PathBuf::from).ok() + }).unwrap_or_else(|| { + if self.get_target().contains("android") { + PathBuf::from(format!("{}-ar", self.get_target())) + } else if self.get_target().contains("emscripten") { + PathBuf::from("emar") + } else { + PathBuf::from("ar") + } + }) + } + + fn get_target(&self) -> String { + self.target.clone().unwrap_or_else(|| self.getenv_unwrap("TARGET")) + } + + fn get_host(&self) -> String { + self.host.clone().unwrap_or_else(|| self.getenv_unwrap("HOST")) + } + + fn get_opt_level(&self) -> String { + self.opt_level.as_ref().cloned().unwrap_or_else(|| { + self.getenv_unwrap("OPT_LEVEL") + }) + } + + fn get_debug(&self) -> bool { + self.debug.unwrap_or_else(|| self.getenv_unwrap("PROFILE") == "debug") + } + + fn get_out_dir(&self) -> PathBuf { + self.out_dir.clone().unwrap_or_else(|| { + env::var_os("OUT_DIR").map(PathBuf::from).unwrap() + }) + } + + fn getenv(&self, v: &str) -> Option { + let r = env::var(v).ok(); + self.print(&format!("{} = {:?}", v, r)); + r + } + + fn getenv_unwrap(&self, v: &str) -> String { + match self.getenv(v) { + Some(s) => s, + None => fail(&format!("environment variable `{}` not defined", v)), + } + } + + fn print(&self, s: &str) { + if self.cargo_metadata { + println!("{}", s); + } + } +} + +impl Tool { + fn new(path: PathBuf) -> Tool { + Tool { + path: path, + args: Vec::new(), + env: Vec::new(), + } + } + + /// Converts this compiler into a `Command` that's ready to be run. + /// + /// This is useful for when the compiler needs to be executed and the + /// command returned will already have the initial arguments and environment + /// variables configured. + pub fn to_command(&self) -> Command { + let mut cmd = Command::new(&self.path); + cmd.args(&self.args); + for &(ref k, ref v) in self.env.iter() { + cmd.env(k, v); + } + return cmd + } + + /// Returns the path for this compiler. + /// + /// Note that this may not be a path to a file on the filesystem, e.g. "cc", + /// but rather something which will be resolved when a process is spawned. + pub fn path(&self) -> &Path { + &self.path + } + + /// Returns the default set of arguments to the compiler needed to produce + /// executables for the target this compiler generates. + pub fn args(&self) -> &[OsString] { + &self.args + } + + /// Returns the set of environment variables needed for this compiler to + /// operate. + /// + /// This is typically only used for MSVC compilers currently. + pub fn env(&self) -> &[(OsString, OsString)] { + &self.env + } +} + +fn run(cmd: &mut Command, program: &str) { + println!("running: {:?}", cmd); + // Capture the standard error coming from these programs, and write it out + // with cargo:warning= prefixes. Note that this is a bit wonky to avoid + // requiring the output to be UTF-8, we instead just ship bytes from one + // location to another. 
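The comment above relies on Cargo's build-script protocol: any line a build script prints to stdout in the form `cargo:warning=MESSAGE` is reported back to the user as a warning, which is how the compiler diagnostics captured from stderr below get surfaced. A tiny hypothetical `build.rs` showing just that convention (the diagnostic text is invented):

```rust
// Hypothetical build.rs: every `cargo:warning=` line printed to stdout is
// shown by Cargo as a warning for the crate being built.
fn main() {
    let compiler_stderr = "foo.c:12: warning: unused variable 'x'\n\
                           foo.c:40: warning: 'bar' defined but not used";
    for line in compiler_stderr.lines() {
        println!("cargo:warning={}", line);
    }
}
```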
+ let spawn_result = match cmd.stderr(Stdio::piped()).spawn() { + Ok(mut child) => { + let stderr = BufReader::new(child.stderr.take().unwrap()); + for line in stderr.split(b'\n').filter_map(|l| l.ok()) { + print!("cargo:warning="); + std::io::stdout().write_all(&line).unwrap(); + println!(""); + } + child.wait() + } + Err(e) => Err(e), + }; + let status = match spawn_result { + Ok(status) => status, + Err(ref e) if e.kind() == io::ErrorKind::NotFound => { + let extra = if cfg!(windows) { + " (see https://github.com/alexcrichton/gcc-rs#compile-time-requirements \ + for help)" + } else { + "" + }; + fail(&format!("failed to execute command: {}\nIs `{}` \ + not installed?{}", e, program, extra)); + } + Err(e) => fail(&format!("failed to execute command: {}", e)), + }; + println!("{:?}", status); + if !status.success() { + fail(&format!("command did not execute successfully, got: {}", status)); + } +} + +fn fail(s: &str) -> ! { + println!("\n\n{}\n\n", s); + panic!() +} diff --git a/src/vendor/gcc/src/registry.rs b/src/vendor/gcc/src/registry.rs new file mode 100644 index 0000000000..d871cd21f3 --- /dev/null +++ b/src/vendor/gcc/src/registry.rs @@ -0,0 +1,169 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +use std::ffi::{OsString, OsStr}; +use std::io; +use std::ops::RangeFrom; +use std::os::raw; +use std::os::windows::prelude::*; + +pub struct RegistryKey(Repr); + +type HKEY = *mut u8; +type DWORD = u32; +type LPDWORD = *mut DWORD; +type LPCWSTR = *const u16; +type LPWSTR = *mut u16; +type LONG = raw::c_long; +type PHKEY = *mut HKEY; +type PFILETIME = *mut u8; +type LPBYTE = *mut u8; +type REGSAM = u32; + +const ERROR_SUCCESS: DWORD = 0; +const ERROR_NO_MORE_ITEMS: DWORD = 259; +const HKEY_LOCAL_MACHINE: HKEY = 0x80000002 as HKEY; +const REG_SZ: DWORD = 1; +const KEY_READ: DWORD = 0x20019; +const KEY_WOW64_32KEY: DWORD = 0x200; + +#[link(name = "advapi32")] +extern "system" { + fn RegOpenKeyExW(key: HKEY, + lpSubKey: LPCWSTR, + ulOptions: DWORD, + samDesired: REGSAM, + phkResult: PHKEY) -> LONG; + fn RegEnumKeyExW(key: HKEY, + dwIndex: DWORD, + lpName: LPWSTR, + lpcName: LPDWORD, + lpReserved: LPDWORD, + lpClass: LPWSTR, + lpcClass: LPDWORD, + lpftLastWriteTime: PFILETIME) -> LONG; + fn RegQueryValueExW(hKey: HKEY, + lpValueName: LPCWSTR, + lpReserved: LPDWORD, + lpType: LPDWORD, + lpData: LPBYTE, + lpcbData: LPDWORD) -> LONG; + fn RegCloseKey(hKey: HKEY) -> LONG; +} + +struct OwnedKey(HKEY); + +enum Repr { + Const(HKEY), + Owned(OwnedKey), +} + +pub struct Iter<'a> { + idx: RangeFrom, + key: &'a RegistryKey, +} + +unsafe impl Sync for Repr {} +unsafe impl Send for Repr {} + +pub static LOCAL_MACHINE: RegistryKey = + RegistryKey(Repr::Const(HKEY_LOCAL_MACHINE)); + +impl RegistryKey { + fn raw(&self) -> HKEY { + match self.0 { + Repr::Const(val) => val, + Repr::Owned(ref val) => val.0, + } + } + + pub fn open(&self, key: &OsStr) -> io::Result { + let key = key.encode_wide().chain(Some(0)).collect::>(); + let mut ret = 0 as *mut _; + let err = unsafe { + RegOpenKeyExW(self.raw(), key.as_ptr(), 0, + KEY_READ | KEY_WOW64_32KEY, &mut ret) + }; + if err == ERROR_SUCCESS as LONG { + Ok(RegistryKey(Repr::Owned(OwnedKey(ret)))) + } else { + Err(io::Error::from_raw_os_error(err as i32)) + } + 
} + + pub fn iter(&self) -> Iter { + Iter { idx: 0.., key: self } + } + + pub fn query_str(&self, name: &str) -> io::Result { + let name: &OsStr = name.as_ref(); + let name = name.encode_wide().chain(Some(0)).collect::>(); + let mut len = 0; + let mut kind = 0; + unsafe { + let err = RegQueryValueExW(self.raw(), name.as_ptr(), 0 as *mut _, + &mut kind, 0 as *mut _, &mut len); + if err != ERROR_SUCCESS as LONG { + return Err(io::Error::from_raw_os_error(err as i32)) + } + if kind != REG_SZ { + return Err(io::Error::new(io::ErrorKind::Other, + "registry key wasn't a string")) + } + + // The length here is the length in bytes, but we're using wide + // characters so we need to be sure to halve it for the capacity + // passed in. + let mut v = Vec::with_capacity(len as usize / 2); + let err = RegQueryValueExW(self.raw(), name.as_ptr(), 0 as *mut _, + 0 as *mut _, v.as_mut_ptr() as *mut _, + &mut len); + if err != ERROR_SUCCESS as LONG { + return Err(io::Error::from_raw_os_error(err as i32)) + } + v.set_len(len as usize / 2); + + // Some registry keys may have a terminating nul character, but + // we're not interested in that, so chop it off if it's there. + if v[v.len() - 1] == 0 { + v.pop(); + } + Ok(OsString::from_wide(&v)) + } + } +} + +impl Drop for OwnedKey { + fn drop(&mut self) { + unsafe { RegCloseKey(self.0); } + } +} + +impl<'a> Iterator for Iter<'a> { + type Item = io::Result; + + fn next(&mut self) -> Option> { + self.idx.next().and_then(|i| unsafe { + let mut v = Vec::with_capacity(256); + let mut len = v.capacity() as DWORD; + let ret = RegEnumKeyExW(self.key.raw(), i, v.as_mut_ptr(), &mut len, + 0 as *mut _, 0 as *mut _, 0 as *mut _, + 0 as *mut _); + if ret == ERROR_NO_MORE_ITEMS as LONG { + None + } else if ret != ERROR_SUCCESS as LONG { + Some(Err(io::Error::from_raw_os_error(ret as i32))) + } else { + v.set_len(len as usize); + Some(Ok(OsString::from_wide(&v))) + } + }) + } +} diff --git a/src/vendor/gcc/src/windows_registry.rs b/src/vendor/gcc/src/windows_registry.rs new file mode 100644 index 0000000000..e16a33f246 --- /dev/null +++ b/src/vendor/gcc/src/windows_registry.rs @@ -0,0 +1,430 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +//! A helper module to probe the Windows Registry when looking for +//! windows-specific tools. + +use std::process::Command; + +use Tool; + +macro_rules! otry { + ($expr:expr) => (match $expr { + Some(val) => val, + None => return None, + }) +} + +/// Attempts to find a tool within an MSVC installation using the Windows +/// registry as a point to search from. +/// +/// The `target` argument is the target that the tool should work for (e.g. +/// compile or link for) and the `tool` argument is the tool to find (e.g. +/// `cl.exe` or `link.exe`). +/// +/// This function will return `None` if the tool could not be found, or it will +/// return `Some(cmd)` which represents a command that's ready to execute the +/// tool with the appropriate environment variables set. +/// +/// Note that this function always returns `None` for non-MSVC targets. 
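The `otry!` macro defined a few lines up is the `Option` counterpart of `try!`: it unwraps a `Some` value or early-returns `None` from the surrounding function. A stand-alone sketch of the same pattern, separate from the vendored crate:

```rust
// Stand-alone illustration (not part of the vendored crate): `otry!` is an
// early-return helper for `Option`, analogous to `try!` for `Result`.
macro_rules! otry {
    ($expr:expr) => (match $expr {
        Some(val) => val,
        None => return None,
    })
}

fn first_even(xs: &[u32]) -> Option<u32> {
    // Returns `None` from `first_even` as soon as the lookup fails.
    let x = otry!(xs.iter().find(|&&x| x % 2 == 0));
    Some(*x)
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None);
}
```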
+pub fn find(target: &str, tool: &str) -> Option { + find_tool(target, tool).map(|c| c.to_command()) +} + +/// Similar to the `find` function above, this function will attempt the same +/// operation (finding a MSVC tool in a local install) but instead returns a +/// `Tool` which may be introspected. +#[cfg(not(windows))] +pub fn find_tool(_target: &str, _tool: &str) -> Option { + None +} + +/// Documented above. +#[cfg(windows)] +pub fn find_tool(target: &str, tool: &str) -> Option { + use std::env; + use std::ffi::OsString; + use std::mem; + use std::path::{Path, PathBuf}; + use registry::{RegistryKey, LOCAL_MACHINE}; + + struct MsvcTool { + tool: PathBuf, + libs: Vec, + path: Vec, + include: Vec, + } + + impl MsvcTool { + fn new(tool: PathBuf) -> MsvcTool { + MsvcTool { + tool: tool, + libs: Vec::new(), + path: Vec::new(), + include: Vec::new(), + } + } + + fn into_tool(self) -> Tool { + let MsvcTool { tool, libs, path, include } = self; + let mut tool = Tool::new(tool.into()); + add_env(&mut tool, "LIB", libs); + add_env(&mut tool, "PATH", path); + add_env(&mut tool, "INCLUDE", include); + return tool + } + } + + // This logic is all tailored for MSVC, if we're not that then bail out + // early. + if !target.contains("msvc") { + return None + } + + // Looks like msbuild isn't located in the same location as other tools like + // cl.exe and lib.exe. To handle this we probe for it manually with + // dedicated registry keys. + if tool.contains("msbuild") { + return find_msbuild(target) + } + + // If VCINSTALLDIR is set, then someone's probably already run vcvars and we + // should just find whatever that indicates. + if env::var_os("VCINSTALLDIR").is_some() { + return env::var_os("PATH").and_then(|path| { + env::split_paths(&path).map(|p| p.join(tool)).find(|p| p.exists()) + }).map(|path| { + Tool::new(path.into()) + }) + } + + // Ok, if we're here, now comes the fun part of the probing. Default shells + // or shells like MSYS aren't really configured to execute `cl.exe` and the + // various compiler tools shipped as part of Visual Studio. Here we try to + // first find the relevant tool, then we also have to be sure to fill in + // environment variables like `LIB`, `INCLUDE`, and `PATH` to ensure that + // the tool is actually usable. + + return find_msvc_latest(tool, target, "15.0").or_else(|| { + find_msvc_latest(tool, target, "14.0") + }).or_else(|| { + find_msvc_12(tool, target) + }).or_else(|| { + find_msvc_11(tool, target) + }); + + // For MSVC 14 or newer we need to find the Universal CRT as well as either + // the Windows 10 SDK or Windows 8.1 SDK. 
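One detail worth calling out from the probing logic above: when `VCINSTALLDIR` is already set (i.e. the build is running inside a `vcvars` environment), no registry work is done at all; the requested tool is simply searched for on `PATH`. A minimal cross-platform sketch of that search (the tool name below is only an example):

```rust
use std::env;
use std::path::PathBuf;

/// Return the first `PATH` entry that contains `tool`, joined with it.
fn find_on_path(tool: &str) -> Option<PathBuf> {
    env::var_os("PATH").and_then(|path| {
        env::split_paths(&path)
            .map(|dir| dir.join(tool))
            .find(|candidate| candidate.exists())
    })
}

fn main() {
    // "sh" is just a convenient tool to look for on Unix hosts; in the MSVC
    // case the name would be e.g. "cl.exe" or "link.exe".
    println!("{:?}", find_on_path("sh"));
}
```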
+ fn find_msvc_latest(tool: &str, target: &str, ver: &str) -> Option { + let vcdir = otry!(get_vc_dir(ver)); + let mut tool = otry!(get_tool(tool, &vcdir, target)); + let sub = otry!(lib_subdir(target)); + let (ucrt, ucrt_version) = otry!(get_ucrt_dir()); + + let ucrt_include = ucrt.join("include").join(&ucrt_version); + tool.include.push(ucrt_include.join("ucrt")); + + let ucrt_lib = ucrt.join("lib").join(&ucrt_version); + tool.libs.push(ucrt_lib.join("ucrt").join(sub)); + + if let Some((sdk, version)) = get_sdk10_dir() { + tool.path.push(sdk.join("bin").join(sub)); + let sdk_lib = sdk.join("lib").join(&version); + tool.libs.push(sdk_lib.join("um").join(sub)); + let sdk_include = sdk.join("include").join(&version); + tool.include.push(sdk_include.join("um")); + tool.include.push(sdk_include.join("winrt")); + tool.include.push(sdk_include.join("shared")); + } else if let Some(sdk) = get_sdk81_dir() { + tool.path.push(sdk.join("bin").join(sub)); + let sdk_lib = sdk.join("lib").join("winv6.3"); + tool.libs.push(sdk_lib.join("um").join(sub)); + let sdk_include = sdk.join("include"); + tool.include.push(sdk_include.join("um")); + tool.include.push(sdk_include.join("winrt")); + tool.include.push(sdk_include.join("shared")); + } else { + return None + } + Some(tool.into_tool()) + } + + // For MSVC 12 we need to find the Windows 8.1 SDK. + fn find_msvc_12(tool: &str, target: &str) -> Option { + let vcdir = otry!(get_vc_dir("12.0")); + let mut tool = otry!(get_tool(tool, &vcdir, target)); + let sub = otry!(lib_subdir(target)); + let sdk81 = otry!(get_sdk81_dir()); + tool.path.push(sdk81.join("bin").join(sub)); + let sdk_lib = sdk81.join("lib").join("winv6.3"); + tool.libs.push(sdk_lib.join("um").join(sub)); + let sdk_include = sdk81.join("include"); + tool.include.push(sdk_include.join("shared")); + tool.include.push(sdk_include.join("um")); + tool.include.push(sdk_include.join("winrt")); + Some(tool.into_tool()) + } + + // For MSVC 11 we need to find the Windows 8 SDK. + fn find_msvc_11(tool: &str, target: &str) -> Option { + let vcdir = otry!(get_vc_dir("11.0")); + let mut tool = otry!(get_tool(tool, &vcdir, target)); + let sub = otry!(lib_subdir(target)); + let sdk8 = otry!(get_sdk8_dir()); + tool.path.push(sdk8.join("bin").join(sub)); + let sdk_lib = sdk8.join("lib").join("win8"); + tool.libs.push(sdk_lib.join("um").join(sub)); + let sdk_include = sdk8.join("include"); + tool.include.push(sdk_include.join("shared")); + tool.include.push(sdk_include.join("um")); + tool.include.push(sdk_include.join("winrt")); + Some(tool.into_tool()) + } + + fn add_env(tool: &mut Tool, env: &str, paths: Vec) { + let prev = env::var_os(env).unwrap_or(OsString::new()); + let prev = env::split_paths(&prev); + let new = paths.into_iter().chain(prev); + tool.env.push((env.to_string().into(), env::join_paths(new).unwrap())); + } + + // Given a possible MSVC installation directory, we look for the linker and + // then add the MSVC library path. 
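`add_env` above merges the directories discovered during probing into the tool's `LIB`, `PATH`, and `INCLUDE` variables by prepending them to whatever the environment already contains, letting `env::split_paths`/`env::join_paths` handle the platform separator. A self-contained sketch of that prepend-and-rejoin step (the variable name and directory are invented):

```rust
use std::env;
use std::ffi::OsString;
use std::path::PathBuf;

/// Prepend `extra` to the current value of a path-list variable such as
/// `LIB`, `INCLUDE`, or `PATH`, using the platform's separator.
fn prepend_paths(name: &str, extra: Vec<PathBuf>) -> OsString {
    let prev = env::var_os(name).unwrap_or(OsString::new());
    let combined = extra.into_iter().chain(env::split_paths(&prev));
    // `join_paths` only fails if an entry itself contains the separator.
    env::join_paths(combined).unwrap()
}

fn main() {
    let value = prepend_paths("LIB", vec![PathBuf::from("/opt/sdk/lib")]);
    println!("LIB would become {:?}", value);
}
```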
+ fn get_tool(tool: &str, path: &Path, target: &str) -> Option { + bin_subdir(target).into_iter().map(|(sub, host)| { + (path.join("bin").join(sub).join(tool), + path.join("bin").join(host)) + }).filter(|&(ref path, _)| { + path.is_file() + }).map(|(path, host)| { + let mut tool = MsvcTool::new(path); + tool.path.push(host); + tool + }).filter_map(|mut tool| { + let sub = otry!(vc_lib_subdir(target)); + tool.libs.push(path.join("lib").join(sub)); + tool.include.push(path.join("include")); + let atlmfc_path = path.join("atlmfc"); + if atlmfc_path.exists() { + tool.libs.push(atlmfc_path.join("lib").join(sub)); + tool.include.push(atlmfc_path.join("include")); + } + Some(tool) + }).next() + } + + // To find MSVC we look in a specific registry key for the version we are + // trying to find. + fn get_vc_dir(ver: &str) -> Option { + let key = r"SOFTWARE\Microsoft\VisualStudio\SxS\VC7"; + let key = otry!(LOCAL_MACHINE.open(key.as_ref()).ok()); + let path = otry!(key.query_str(ver).ok()); + Some(path.into()) + } + + // To find the Universal CRT we look in a specific registry key for where + // all the Universal CRTs are located and then sort them asciibetically to + // find the newest version. While this sort of sorting isn't ideal, it is + // what vcvars does so that's good enough for us. + // + // Returns a pair of (root, version) for the ucrt dir if found + fn get_ucrt_dir() -> Option<(PathBuf, String)> { + let key = r"SOFTWARE\Microsoft\Windows Kits\Installed Roots"; + let key = otry!(LOCAL_MACHINE.open(key.as_ref()).ok()); + let root = otry!(key.query_str("KitsRoot10").ok()); + let readdir = otry!(Path::new(&root).join("lib").read_dir().ok()); + let max_libdir = otry!(readdir.filter_map(|dir| { + dir.ok() + }).map(|dir| { + dir.path() + }).filter(|dir| { + dir.components().last().and_then(|c| { + c.as_os_str().to_str() + }).map(|c| { + c.starts_with("10.") && dir.join("ucrt").is_dir() + }).unwrap_or(false) + }).max()); + let version = max_libdir.components().last().unwrap(); + let version = version.as_os_str().to_str().unwrap().to_string(); + Some((root.into(), version)) + } + + // Vcvars finds the correct version of the Windows 10 SDK by looking + // for the include `um\Windows.h` because sometimes a given version will + // only have UCRT bits without the rest of the SDK. Since we only care about + // libraries and not includes, we instead look for `um\x64\kernel32.lib`. + // Since the 32-bit and 64-bit libraries are always installed together we + // only need to bother checking x64, making this code a tiny bit simpler. + // Like we do for the Universal CRT, we sort the possibilities + // asciibetically to find the newest one as that is what vcvars does. + fn get_sdk10_dir() -> Option<(PathBuf, String)> { + let key = r"SOFTWARE\Microsoft\Microsoft SDKs\Windows\v10.0"; + let key = otry!(LOCAL_MACHINE.open(key.as_ref()).ok()); + let root = otry!(key.query_str("InstallationFolder").ok()); + let readdir = otry!(Path::new(&root).join("lib").read_dir().ok()); + let mut dirs = readdir.filter_map(|dir| dir.ok()) + .map(|dir| dir.path()) + .collect::>(); + dirs.sort(); + let dir = otry!(dirs.into_iter().rev().filter(|dir| { + dir.join("um").join("x64").join("kernel32.lib").is_file() + }).next()); + let version = dir.components().last().unwrap(); + let version = version.as_os_str().to_str().unwrap().to_string(); + Some((root.into(), version)) + } + + // Interestingly there are several subdirectories, `win7` `win8` and + // `winv6.3`. 
Vcvars seems to only care about `winv6.3` though, so the same + // applies to us. Note that if we were targetting kernel mode drivers + // instead of user mode applications, we would care. + fn get_sdk81_dir() -> Option { + let key = r"SOFTWARE\Microsoft\Microsoft SDKs\Windows\v8.1"; + let key = otry!(LOCAL_MACHINE.open(key.as_ref()).ok()); + let root = otry!(key.query_str("InstallationFolder").ok()); + Some(root.into()) + } + + fn get_sdk8_dir() -> Option { + let key = r"SOFTWARE\Microsoft\Microsoft SDKs\Windows\v8.0"; + let key = otry!(LOCAL_MACHINE.open(key.as_ref()).ok()); + let root = otry!(key.query_str("InstallationFolder").ok()); + Some(root.into()) + } + + const PROCESSOR_ARCHITECTURE_INTEL: u16 = 0; + const PROCESSOR_ARCHITECTURE_AMD64: u16 = 9; + const X86: u16 = PROCESSOR_ARCHITECTURE_INTEL; + const X86_64: u16 = PROCESSOR_ARCHITECTURE_AMD64; + + // When choosing the tool to use, we have to choose the one which matches + // the target architecture. Otherwise we end up in situations where someone + // on 32-bit Windows is trying to cross compile to 64-bit and it tries to + // invoke the native 64-bit compiler which won't work. + // + // For the return value of this function, the first member of the tuple is + // the folder of the tool we will be invoking, while the second member is + // the folder of the host toolchain for that tool which is essential when + // using a cross linker. We return a Vec since on x64 there are often two + // linkers that can target the architecture we desire. The 64-bit host + // linker is preferred, and hence first, due to 64-bit allowing it more + // address space to work with and potentially being faster. + fn bin_subdir(target: &str) -> Vec<(&'static str, &'static str)> { + let arch = target.split('-').next().unwrap(); + match (arch, host_arch()) { + ("i586", X86) | + ("i686", X86) => vec![("", "")], + ("i586", X86_64) | + ("i686", X86_64) => vec![("amd64_x86", "amd64"), ("", "")], + ("x86_64", X86) => vec![("x86_amd64", "")], + ("x86_64", X86_64) => vec![("amd64", "amd64"), ("x86_amd64", "")], + ("arm", X86) => vec![("x86_arm", "")], + ("arm", X86_64) => vec![("amd64_arm", "amd64"), ("x86_arm", "")], + _ => vec![], + } + } + + fn lib_subdir(target: &str) -> Option<&'static str> { + let arch = target.split('-').next().unwrap(); + match arch { + "i586" | "i686" => Some("x86"), + "x86_64" => Some("x64"), + "arm" => Some("arm"), + _ => None, + } + } + + // MSVC's x86 libraries are not in a subfolder + fn vc_lib_subdir(target: &str) -> Option<&'static str> { + let arch = target.split('-').next().unwrap(); + match arch { + "i586" | "i686" => Some(""), + "x86_64" => Some("amd64"), + "arm" => Some("arm"), + _ => None, + } + } + + #[allow(bad_style)] + fn host_arch() -> u16 { + type DWORD = u32; + type WORD = u16; + type LPVOID = *mut u8; + type DWORD_PTR = usize; + + #[repr(C)] + struct SYSTEM_INFO { + wProcessorArchitecture: WORD, + _wReserved: WORD, + _dwPageSize: DWORD, + _lpMinimumApplicationAddress: LPVOID, + _lpMaximumApplicationAddress: LPVOID, + _dwActiveProcessorMask: DWORD_PTR, + _dwNumberOfProcessors: DWORD, + _dwProcessorType: DWORD, + _dwAllocationGranularity: DWORD, + _wProcessorLevel: WORD, + _wProcessorRevision: WORD, + } + + extern "system" { + fn GetNativeSystemInfo(lpSystemInfo: *mut SYSTEM_INFO); + } + + unsafe { + let mut info = mem::zeroed(); + GetNativeSystemInfo(&mut info); + info.wProcessorArchitecture + } + } + + // Given a registry key, look at all the sub keys and find the one which has + // the maximal numeric value. 
+ // + // Returns the name of the maximal key as well as the opened maximal key. + fn max_version(key: &RegistryKey) -> Option<(OsString, RegistryKey)> { + let mut max_vers = 0; + let mut max_key = None; + for subkey in key.iter().filter_map(|k| k.ok()) { + let val = subkey.to_str().and_then(|s| { + s.trim_left_matches("v").replace(".", "").parse().ok() + }); + let val = match val { + Some(s) => s, + None => continue, + }; + if val > max_vers { + if let Ok(k) = key.open(&subkey) { + max_vers = val; + max_key = Some((subkey, k)); + } + } + } + return max_key + } + + // see http://stackoverflow.com/questions/328017/path-to-msbuild + fn find_msbuild(target: &str) -> Option { + let key = r"SOFTWARE\Microsoft\MSBuild\ToolsVersions"; + LOCAL_MACHINE.open(key.as_ref()).ok().and_then(|key| { + max_version(&key).and_then(|(_vers, key)| { + key.query_str("MSBuildToolsPath").ok() + }) + }).map(|path| { + let mut path = PathBuf::from(path); + path.push("MSBuild.exe"); + let mut tool = Tool::new(path); + if target.contains("x86_64") { + tool.env.push(("Platform".into(), "X64".into())); + } + tool + }) + } +} diff --git a/src/vendor/gcc/tests/cc_env.rs b/src/vendor/gcc/tests/cc_env.rs new file mode 100644 index 0000000000..559dbe8ad4 --- /dev/null +++ b/src/vendor/gcc/tests/cc_env.rs @@ -0,0 +1,49 @@ +extern crate tempdir; +extern crate gcc; + +use std::env; + +mod support; +use support::Test; + +#[test] +fn main() { + ccache(); + distcc(); + ccache_spaces(); +} + +fn ccache() { + let test = Test::gnu(); + test.shim("ccache"); + + env::set_var("CC", "ccache lol-this-is-not-a-compiler foo"); + test.gcc().file("foo.c").compile("libfoo.a"); + + test.cmd(0) + .must_have("lol-this-is-not-a-compiler foo") + .must_have("foo.c") + .must_not_have("ccache"); +} + +fn ccache_spaces() { + let test = Test::gnu(); + test.shim("ccache"); + + env::set_var("CC", "ccache lol-this-is-not-a-compiler foo"); + test.gcc().file("foo.c").compile("libfoo.a"); + test.cmd(0).must_have("lol-this-is-not-a-compiler foo"); +} + +fn distcc() { + let test = Test::gnu(); + test.shim("distcc"); + + env::set_var("CC", "distcc lol-this-is-not-a-compiler foo"); + test.gcc().file("foo.c").compile("libfoo.a"); + + test.cmd(0) + .must_have("lol-this-is-not-a-compiler foo") + .must_have("foo.c") + .must_not_have("distcc"); +} diff --git a/src/vendor/gcc/tests/support/mod.rs b/src/vendor/gcc/tests/support/mod.rs new file mode 100644 index 0000000000..5c40984eb6 --- /dev/null +++ b/src/vendor/gcc/tests/support/mod.rs @@ -0,0 +1,114 @@ +#![allow(dead_code)] + +use std::env; +use std::ffi::OsStr; +use std::fs::{self, File}; +use std::io::prelude::*; +use std::path::PathBuf; + +use gcc; +use tempdir::TempDir; + +pub struct Test { + pub td: TempDir, + pub gcc: PathBuf, + pub msvc: bool, +} + +pub struct Execution { + args: Vec, +} + +impl Test { + pub fn new() -> Test { + let mut gcc = PathBuf::from(env::current_exe().unwrap()); + gcc.pop(); + if gcc.ends_with("deps") { + gcc.pop(); + } + gcc.push(format!("gcc-shim{}", env::consts::EXE_SUFFIX)); + Test { + td: TempDir::new("gcc-test").unwrap(), + gcc: gcc, + msvc: false, + } + } + + pub fn gnu() -> Test { + let t = Test::new(); + t.shim("cc").shim("ar"); + return t + } + + pub fn msvc() -> Test { + let mut t = Test::new(); + t.shim("cl").shim("lib.exe"); + t.msvc = true; + return t + } + + pub fn shim(&self, name: &str) -> &Test { + let fname = format!("{}{}", name, env::consts::EXE_SUFFIX); + fs::hard_link(&self.gcc, self.td.path().join(&fname)).or_else(|_| { + fs::copy(&self.gcc, 
self.td.path().join(&fname)).map(|_| ()) + }).unwrap(); + self + } + + pub fn gcc(&self) -> gcc::Config { + let mut cfg = gcc::Config::new(); + let mut path = env::split_paths(&env::var_os("PATH").unwrap()) + .collect::>(); + path.insert(0, self.td.path().to_owned()); + let target = if self.msvc { + "x86_64-pc-windows-msvc" + } else { + "x86_64-unknown-linux-gnu" + }; + + cfg.target(target).host(target) + .opt_level(2) + .debug(false) + .out_dir(self.td.path()) + .__set_env("PATH", env::join_paths(path).unwrap()) + .__set_env("GCCTEST_OUT_DIR", self.td.path()); + if self.msvc { + cfg.compiler(self.td.path().join("cl")); + cfg.archiver(self.td.path().join("lib.exe")); + } + return cfg + } + + pub fn cmd(&self, i: u32) -> Execution { + let mut s = String::new(); + File::open(self.td.path().join(format!("out{}", i))).unwrap() + .read_to_string(&mut s).unwrap(); + Execution { + args: s.lines().map(|s| s.to_string()).collect(), + } + } +} + +impl Execution { + pub fn must_have>(&self, p: P) -> &Execution { + if !self.has(p.as_ref()) { + panic!("didn't find {:?} in {:?}", p.as_ref(), self.args); + } else { + self + } + } + + pub fn must_not_have>(&self, p: P) -> &Execution { + if self.has(p.as_ref()) { + panic!("found {:?}", p.as_ref()); + } else { + self + } + } + + pub fn has(&self, p: &OsStr) -> bool { + self.args.iter().any(|arg| { + OsStr::new(arg) == p + }) + } +} diff --git a/src/vendor/gcc/tests/test.rs b/src/vendor/gcc/tests/test.rs new file mode 100644 index 0000000000..1b6a0bd0d1 --- /dev/null +++ b/src/vendor/gcc/tests/test.rs @@ -0,0 +1,207 @@ +extern crate gcc; +extern crate tempdir; + +use support::Test; + +mod support; + +#[test] +fn gnu_smoke() { + let test = Test::gnu(); + test.gcc() + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-O2") + .must_have("foo.c") + .must_not_have("-g") + .must_have("-c") + .must_have("-ffunction-sections") + .must_have("-fdata-sections"); + test.cmd(1).must_have(test.td.path().join("foo.o")); +} + +#[test] +fn gnu_opt_level_1() { + let test = Test::gnu(); + test.gcc() + .opt_level(1) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-O1") + .must_not_have("-O2"); +} + +#[test] +fn gnu_opt_level_s() { + let test = Test::gnu(); + test.gcc() + .opt_level_str("s") + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-Os") + .must_not_have("-O1") + .must_not_have("-O2") + .must_not_have("-O3") + .must_not_have("-Oz"); +} + +#[test] +fn gnu_debug() { + let test = Test::gnu(); + test.gcc() + .debug(true) + .file("foo.c").compile("libfoo.a"); + test.cmd(0).must_have("-g"); +} + +#[test] +fn gnu_x86_64() { + for vendor in &["unknown-linux-gnu", "apple-darwin"] { + let target = format!("x86_64-{}", vendor); + let test = Test::gnu(); + test.gcc() + .target(&target) + .host(&target) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-fPIC") + .must_have("-m64"); + } +} + +#[test] +fn gnu_x86_64_no_pic() { + for vendor in &["unknown-linux-gnu", "apple-darwin"] { + let target = format!("x86_64-{}", vendor); + let test = Test::gnu(); + test.gcc() + .pic(false) + .target(&target) + .host(&target) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_not_have("-fPIC"); + } +} + +#[test] +fn gnu_i686() { + for vendor in &["unknown-linux-gnu", "apple-darwin"] { + let target = format!("i686-{}", vendor); + let test = Test::gnu(); + test.gcc() + .target(&target) + .host(&target) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_not_have("-fPIC") + .must_have("-m32"); + } +} + +#[test] +fn 
gnu_i686_pic() { + for vendor in &["unknown-linux-gnu", "apple-darwin"] { + let target = format!("i686-{}", vendor); + let test = Test::gnu(); + test.gcc() + .pic(true) + .target(&target) + .host(&target) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-fPIC"); + } +} + +#[test] +fn gnu_set_stdlib() { + let test = Test::gnu(); + test.gcc() + .cpp_set_stdlib(Some("foo")) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_not_have("-stdlib=foo"); +} + +#[test] +fn gnu_include() { + let test = Test::gnu(); + test.gcc() + .include("foo/bar") + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-I").must_have("foo/bar"); +} + +#[test] +fn gnu_define() { + let test = Test::gnu(); + test.gcc() + .define("FOO", Some("bar")) + .define("BAR", None) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("-DFOO=bar").must_have("-DBAR"); +} + +#[test] +fn gnu_compile_assembly() { + let test = Test::gnu(); + test.gcc() + .file("foo.S").compile("libfoo.a"); + test.cmd(0).must_have("foo.S"); +} + +#[test] +fn msvc_smoke() { + let test = Test::msvc(); + test.gcc() + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("/O2") + .must_have("foo.c") + .must_not_have("/Z7") + .must_have("/c"); + test.cmd(1).must_have(test.td.path().join("foo.o")); +} + +#[test] +fn msvc_opt_level_0() { + let test = Test::msvc(); + test.gcc() + .opt_level(0) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_not_have("/O2"); +} + +#[test] +fn msvc_debug() { + let test = Test::msvc(); + test.gcc() + .debug(true) + .file("foo.c").compile("libfoo.a"); + test.cmd(0).must_have("/Z7"); +} + +#[test] +fn msvc_include() { + let test = Test::msvc(); + test.gcc() + .include("foo/bar") + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("/I").must_have("foo/bar"); +} + +#[test] +fn msvc_define() { + let test = Test::msvc(); + test.gcc() + .define("FOO", Some("bar")) + .define("BAR", None) + .file("foo.c").compile("libfoo.a"); + + test.cmd(0).must_have("/DFOO=bar").must_have("/DBAR"); +} diff --git a/src/vendor/getopts/.cargo-checksum.json b/src/vendor/getopts/.cargo-checksum.json new file mode 100644 index 0000000000..0c13fda1c1 --- /dev/null +++ b/src/vendor/getopts/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".gitignore":"c1e953ee360e77de57f7b02f1b7880bd6a3dc22d1a69e953c2ac2c52cc52d247",".travis.yml":"f01015154ac55bebd8ff25742496135c40395959f772005bdf7c63bc9b373c12","Cargo.toml":"a027aa6d21622b42c545707ba04f78341cc28079b46da775827ab1ec37fe3ca7","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"6485b8ed310d3f0340bf1ad1f47645069ce4069dcc6bb46c7d5c6faf41de1fdb","README.md":"4002d78e71c4e1fb82c77590eddb999371f40dce037d895f96e6d6df42c728d3","appveyor.yml":"da991211b72fa6f231af7adb84c9fb72f5a9131d1c0a3d47b8ceffe5a82c8542","src/lib.rs":"9512dd4ec1053c9fc61f630d869053ca50c55e0839e3ab7091246a8654423bf0","tests/smoke.rs":"26a95ac42e42b766ae752fe8531fb740fd147d5cdff352dec0763d175ce91806"},"package":"d9047cfbd08a437050b363d35ef160452c5fe8ea5187ae0a624708c91581d685"} \ No newline at end of file diff --git a/src/vendor/getopts/.cargo-ok b/src/vendor/getopts/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/getopts/.gitignore b/src/vendor/getopts/.gitignore new file mode 100644 index 0000000000..4fffb2f89c --- /dev/null +++ b/src/vendor/getopts/.gitignore @@ -0,0 +1,2 @@ +/target +/Cargo.lock diff --git 
a/src/vendor/getopts/.travis.yml b/src/vendor/getopts/.travis.yml new file mode 100644 index 0000000000..d7e3f4787a --- /dev/null +++ b/src/vendor/getopts/.travis.yml @@ -0,0 +1,20 @@ +language: rust +rust: + - 1.0.0 + - beta + - nightly +sudo: false +before_script: + - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH +script: + - cargo build --verbose + - cargo test --verbose + - cargo doc --no-deps +after_success: + - travis-cargo --only nightly doc-upload +env: + global: + secure: by+Jo/boBPbcF5c1N6RNCA008oJm2aRFE5T0SUc3OIfTXxY08dZc0WCBJCHrplp44VjpeKRp/89Y+k1CKncIeU8LiS6ZgsKqaQcCglE2O1KS90B6FYB7+rBqT3ib25taq1nW38clnBHYHV9nz4gOElSdKGRxCcBy+efQ5ZXr2tY= +notifications: + email: + on_success: never diff --git a/src/vendor/getopts/Cargo.toml b/src/vendor/getopts/Cargo.toml new file mode 100644 index 0000000000..f84899fe81 --- /dev/null +++ b/src/vendor/getopts/Cargo.toml @@ -0,0 +1,16 @@ +[package] + +name = "getopts" +version = "0.2.14" +authors = ["The Rust Project Developers"] +license = "MIT/Apache-2.0" +readme = "README.md" +repository = "https://github.com/rust-lang/getopts" +documentation = "http://doc.rust-lang.org/getopts" +homepage = "https://github.com/rust-lang/getopts" +description = """ +getopts-like option parsing. +""" + +[dev-dependencies] +log = "0.3" diff --git a/src/vendor/getopts/LICENSE-APACHE b/src/vendor/getopts/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/src/vendor/getopts/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/src/vendor/getopts/LICENSE-MIT b/src/vendor/getopts/LICENSE-MIT new file mode 100644 index 0000000000..39d4bdb5ac --- /dev/null +++ b/src/vendor/getopts/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2014 The Rust Project Developers + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/src/vendor/getopts/README.md b/src/vendor/getopts/README.md new file mode 100644 index 0000000000..c19f48fb06 --- /dev/null +++ b/src/vendor/getopts/README.md @@ -0,0 +1,23 @@ +getopts +=== + +A Rust library for option parsing for CLI utilities. + +[![Build Status](https://travis-ci.org/rust-lang/getopts.svg?branch=master)](https://travis-ci.org/rust-lang/getopts) + +[Documentation](http://doc.rust-lang.org/getopts) + +## Usage + +Add this to your `Cargo.toml`: + +```toml +[dependencies] +getopts = "0.2.4" +``` + +and this to your crate root: + +```rust +extern crate getopts; +``` diff --git a/src/vendor/getopts/appveyor.yml b/src/vendor/getopts/appveyor.yml new file mode 100644 index 0000000000..6a1b8dc19c --- /dev/null +++ b/src/vendor/getopts/appveyor.yml @@ -0,0 +1,17 @@ +environment: + matrix: + - TARGET: x86_64-pc-windows-msvc + - TARGET: i686-pc-windows-msvc + - TARGET: i686-pc-windows-gnu +install: + - ps: Start-FileDownload "https://static.rust-lang.org/dist/rust-nightly-${env:TARGET}.exe" + - rust-nightly-%TARGET%.exe /VERYSILENT /NORESTART /DIR="C:\Program Files (x86)\Rust" + - SET PATH=%PATH%;C:\Program Files (x86)\Rust\bin + - SET PATH=%PATH%;C:\MinGW\bin + - rustc -V + - cargo -V + +build: false + +test_script: + - cargo test --verbose diff --git a/src/vendor/getopts/src/lib.rs b/src/vendor/getopts/src/lib.rs new file mode 100644 index 0000000000..8f0c866fae --- /dev/null +++ b/src/vendor/getopts/src/lib.rs @@ -0,0 +1,1831 @@ +// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 or the MIT license +// , at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. +// +// ignore-lexer-test FIXME #15677 + +//! Simple getopt alternative. +//! +//! Construct a vector of options, either by using `reqopt`, `optopt`, and +//! `optflag` or by building them from components yourself, and pass them to +//! `getopts`, along with a vector of actual arguments (not including +//! `argv[0]`). You'll either get a failure code back, or a match. You'll have +//! to verify whether the amount of 'free' arguments in the match is what you +//! expect. Use `opt_*` accessors to get argument values out of the matches +//! object. +//! +//! Single-character options are expected to appear on the command line with a +//! single preceding dash; multiple-character options are expected to be +//! proceeded by two dashes. Options that expect an argument accept their +//! argument following either a space or an equals sign. Single-character +//! options don't require the space. +//! +//! # Usage +//! +//! This crate is [on crates.io](https://crates.io/crates/getopts) and can be +//! used by adding `getopts` to the dependencies in your project's `Cargo.toml`. +//! +//! ```toml +//! [dependencies] +//! getopts = "0.2" +//! ``` +//! +//! and this to your crate root: +//! +//! ```rust +//! extern crate getopts; +//! 
``` +//! +//! # Example +//! +//! The following example shows simple command line parsing for an application +//! that requires an input file to be specified, accepts an optional output file +//! name following `-o`, and accepts both `-h` and `--help` as optional flags. +//! +//! ```{.rust} +//! extern crate getopts; +//! use getopts::Options; +//! use std::env; +//! +//! fn do_work(inp: &str, out: Option) { +//! println!("{}", inp); +//! match out { +//! Some(x) => println!("{}", x), +//! None => println!("No Output"), +//! } +//! } +//! +//! fn print_usage(program: &str, opts: Options) { +//! let brief = format!("Usage: {} FILE [options]", program); +//! print!("{}", opts.usage(&brief)); +//! } +//! +//! fn main() { +//! let args: Vec = env::args().collect(); +//! let program = args[0].clone(); +//! +//! let mut opts = Options::new(); +//! opts.optopt("o", "", "set output file name", "NAME"); +//! opts.optflag("h", "help", "print this help menu"); +//! let matches = match opts.parse(&args[1..]) { +//! Ok(m) => { m } +//! Err(f) => { panic!(f.to_string()) } +//! }; +//! if matches.opt_present("h") { +//! print_usage(&program, opts); +//! return; +//! } +//! let output = matches.opt_str("o"); +//! let input = if !matches.free.is_empty() { +//! matches.free[0].clone() +//! } else { +//! print_usage(&program, opts); +//! return; +//! }; +//! do_work(&input, output); +//! } +//! ``` + +#![doc(html_logo_url = "http://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png", + html_favicon_url = "http://www.rust-lang.org/favicon.ico", + html_root_url = "http://doc.rust-lang.org/getopts/")] +#![deny(missing_docs)] +#![cfg_attr(test, deny(warnings))] +#![cfg_attr(rust_build, feature(staged_api))] +#![cfg_attr(rust_build, staged_api)] +#![cfg_attr(rust_build, + unstable(feature = "rustc_private", + reason = "use the crates.io `getopts` library instead"))] + +#[cfg(test)] #[macro_use] extern crate log; + +use self::Name::*; +use self::HasArg::*; +use self::Occur::*; +use self::Fail::*; +use self::Optval::*; +use self::SplitWithinState::*; +use self::Whitespace::*; +use self::LengthLimit::*; + +use std::error::Error; +use std::ffi::OsStr; +use std::fmt; +use std::iter::{repeat, IntoIterator}; +use std::result; + +/// A description of the options that a program can handle. +pub struct Options { + grps: Vec, + parsing_style : ParsingStyle +} + +impl Options { + /// Create a blank set of options. + pub fn new() -> Options { + Options { + grps: Vec::new(), + parsing_style: ParsingStyle::FloatingFrees + } + } + + /// Set the parsing style. + pub fn parsing_style(&mut self, style: ParsingStyle) -> &mut Options { + self.parsing_style = style; + self + } + + /// Create a generic option group, stating all parameters explicitly. + pub fn opt(&mut self, short_name: &str, long_name: &str, desc: &str, + hint: &str, hasarg: HasArg, occur: Occur) -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: hint.to_string(), + desc: desc.to_string(), + hasarg: hasarg, + occur: occur + }); + self + } + + /// Create a long option that is optional and does not take an argument. + /// + /// * `short_name` - e.g. `"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. 
`"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + pub fn optflag(&mut self, short_name: &str, long_name: &str, desc: &str) + -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: "".to_string(), + desc: desc.to_string(), + hasarg: No, + occur: Optional + }); + self + } + + /// Create a long option that can occur more than once and does not + /// take an argument. + /// + /// * `short_name` - e.g. `"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. `"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + pub fn optflagmulti(&mut self, short_name: &str, long_name: &str, desc: &str) + -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: "".to_string(), + desc: desc.to_string(), + hasarg: No, + occur: Multi + }); + self + } + + /// Create a long option that is optional and takes an optional argument. + /// + /// * `short_name` - e.g. `"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. `"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + /// * `hint` - Hint that is used in place of the argument in the usage help, + /// e.g. `"FILE"` for a `-o FILE` option + pub fn optflagopt(&mut self, short_name: &str, long_name: &str, desc: &str, + hint: &str) -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: hint.to_string(), + desc: desc.to_string(), + hasarg: Maybe, + occur: Optional + }); + self + } + + /// Create a long option that is optional, takes an argument, and may occur + /// multiple times. + /// + /// * `short_name` - e.g. `"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. `"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + /// * `hint` - Hint that is used in place of the argument in the usage help, + /// e.g. `"FILE"` for a `-o FILE` option + pub fn optmulti(&mut self, short_name: &str, long_name: &str, desc: &str, hint: &str) + -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: hint.to_string(), + desc: desc.to_string(), + hasarg: Yes, + occur: Multi + }); + self + } + + /// Create a long option that is optional and takes an argument. + /// + /// * `short_name` - e.g. `"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. `"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + /// * `hint` - Hint that is used in place of the argument in the usage help, + /// e.g. `"FILE"` for a `-o FILE` option + pub fn optopt(&mut self, short_name: &str, long_name: &str, desc: &str, hint: &str) + -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: hint.to_string(), + desc: desc.to_string(), + hasarg: Yes, + occur: Optional + }); + self + } + + /// Create a long option that is required and takes an argument. + /// + /// * `short_name` - e.g. 
`"h"` for a `-h` option, or `""` for none + /// * `long_name` - e.g. `"help"` for a `--help` option, or `""` for none + /// * `desc` - Description for usage help + /// * `hint` - Hint that is used in place of the argument in the usage help, + /// e.g. `"FILE"` for a `-o FILE` option + pub fn reqopt(&mut self, short_name: &str, long_name: &str, desc: &str, hint: &str) + -> &mut Options { + let len = short_name.len(); + assert!(len == 1 || len == 0); + self.grps.push(OptGroup { + short_name: short_name.to_string(), + long_name: long_name.to_string(), + hint: hint.to_string(), + desc: desc.to_string(), + hasarg: Yes, + occur: Req + }); + self + } + + /// Parse command line arguments according to the provided options. + /// + /// On success returns `Ok(Matches)`. Use methods such as `opt_present` + /// `opt_str`, etc. to interrogate results. + /// # Panics + /// + /// Returns `Err(Fail)` on failure: use the `Debug` implementation of `Fail` + /// to display information about it. + pub fn parse(&self, args: C) -> Result + where C::Item: AsRef + { + let opts: Vec = self.grps.iter().map(|x| x.long_to_short()).collect(); + let n_opts = opts.len(); + + fn f(_x: usize) -> Vec { return Vec::new(); } + + let mut vals = (0 .. n_opts).map(f).collect::>(); + let mut free: Vec = Vec::new(); + let args = try!(args.into_iter().map(|i| { + i.as_ref().to_str().ok_or_else(|| { + Fail::UnrecognizedOption(format!("{:?}", i.as_ref())) + }).map(|s| s.to_owned()) + }).collect::<::std::result::Result, _>>()); + let l = args.len(); + let mut i = 0; + while i < l { + let cur = args[i].clone(); + let curlen = cur.len(); + if !is_arg(&cur) { + match self.parsing_style { + ParsingStyle::FloatingFrees => free.push(cur), + ParsingStyle::StopAtFirstFree => { + while i < l { + free.push(args[i].clone()); + i += 1; + } + break; + } + } + } else if cur == "--" { + let mut j = i + 1; + while j < l { free.push(args[j].clone()); j += 1; } + break; + } else { + let mut names; + let mut i_arg = None; + if cur.as_bytes()[1] == b'-' { + let tail = &cur[2..curlen]; + let tail_eq: Vec<&str> = tail.splitn(2, '=').collect(); + if tail_eq.len() <= 1 { + names = vec!(Long(tail.to_string())); + } else { + names = + vec!(Long(tail_eq[0].to_string())); + i_arg = Some(tail_eq[1].to_string()); + } + } else { + names = Vec::new(); + for (j, ch) in cur.char_indices().skip(1) { + let opt = Short(ch); + + /* In a series of potential options (eg. -aheJ), if we + see one which takes an argument, we assume all + subsequent characters make up the argument. 
This + allows options such as -L/usr/local/lib/foo to be + interpreted correctly + */ + + let opt_id = match find_opt(&opts, opt.clone()) { + Some(id) => id, + None => return Err(UnrecognizedOption(opt.to_string())) + }; + + names.push(opt); + + let arg_follows = match opts[opt_id].hasarg { + Yes | Maybe => true, + No => false + }; + + if arg_follows { + let next = j + ch.len_utf8(); + if next < curlen { + i_arg = Some(cur[next..curlen].to_string()); + break; + } + } + } + } + let mut name_pos = 0; + for nm in names.iter() { + name_pos += 1; + let optid = match find_opt(&opts, (*nm).clone()) { + Some(id) => id, + None => return Err(UnrecognizedOption(nm.to_string())) + }; + match opts[optid].hasarg { + No => { + if name_pos == names.len() && !i_arg.is_none() { + return Err(UnexpectedArgument(nm.to_string())); + } + vals[optid].push(Given); + } + Maybe => { + if !i_arg.is_none() { + vals[optid] + .push(Val((i_arg.clone()) + .unwrap())); + } else if name_pos < names.len() || i + 1 == l || + is_arg(&args[i + 1]) { + vals[optid].push(Given); + } else { + i += 1; + vals[optid].push(Val(args[i].clone())); + } + } + Yes => { + if !i_arg.is_none() { + vals[optid].push(Val(i_arg.clone().unwrap())); + } else if i + 1 == l { + return Err(ArgumentMissing(nm.to_string())); + } else { + i += 1; + vals[optid].push(Val(args[i].clone())); + } + } + } + } + } + i += 1; + } + for i in 0 .. n_opts { + let n = vals[i].len(); + let occ = opts[i].occur; + if occ == Req && n == 0 { + return Err(OptionMissing(opts[i].name.to_string())); + } + if occ != Multi && n > 1 { + return Err(OptionDuplicated(opts[i].name.to_string())); + } + } + Ok(Matches { + opts: opts, + vals: vals, + free: free + }) + } + + /// Derive a short one-line usage summary from a set of long options. + #[allow(deprecated)] // connect => join in 1.3 + pub fn short_usage(&self, program_name: &str) -> String { + let mut line = format!("Usage: {} ", program_name); + line.push_str(&self.grps.iter() + .map(format_option) + .collect::>() + .connect(" ")); + line + } + + /// Derive a usage message from a set of options. + #[allow(deprecated)] // connect => join in 1.3 + pub fn usage(&self, brief: &str) -> String { + let desc_sep = format!("\n{}", repeat(" ").take(24).collect::()); + + let any_short = self.grps.iter().any(|optref| { + optref.short_name.len() > 0 + }); + + let rows = self.grps.iter().map(|optref| { + let OptGroup{short_name, + long_name, + hint, + desc, + hasarg, + ..} = (*optref).clone(); + + let mut row = " ".to_string(); + + // short option + match short_name.len() { + 0 => { + if any_short { + row.push_str(" "); + } + } + 1 => { + row.push('-'); + row.push_str(&short_name); + if long_name.len() > 0 { + row.push_str(", "); + } else { + // Only a single space here, so that any + // argument is printed in the correct spot. + row.push(' '); + } + } + _ => panic!("the short name should only be 1 ascii char long"), + } + + // long option + match long_name.len() { + 0 => {} + _ => { + row.push_str("--"); + row.push_str(&long_name); + row.push(' '); + } + } + + // arg + match hasarg { + No => {} + Yes => row.push_str(&hint), + Maybe => { + row.push('['); + row.push_str(&hint); + row.push(']'); + } + } + + // FIXME: #5516 should be graphemes not codepoints + // here we just need to indent the start of the description + let rowlen = row.chars().count(); + if rowlen < 24 { + for _ in 0 .. 
24 - rowlen { + row.push(' '); + } + } else { + row.push_str(&desc_sep) + } + + // Normalize desc to contain words separated by one space character + let mut desc_normalized_whitespace = String::new(); + for word in desc.split(|c: char| c.is_whitespace()) + .filter(|s| !s.is_empty()) { + desc_normalized_whitespace.push_str(word); + desc_normalized_whitespace.push(' '); + } + + // FIXME: #5516 should be graphemes not codepoints + let mut desc_rows = Vec::new(); + each_split_within(&desc_normalized_whitespace, + 54, + |substr| { + desc_rows.push(substr.to_string()); + true + }); + + // FIXME: #5516 should be graphemes not codepoints + // wrapped description + row.push_str(&desc_rows.connect(&desc_sep)); + + row + }); + + format!("{}\n\nOptions:\n{}\n", brief, + rows.collect::>().connect("\n")) + } +} + +/// What parsing style to use when parsing arguments. +#[derive(Clone, Copy, PartialEq, Eq)] +pub enum ParsingStyle { + /// Flags and "free" arguments can be freely inter-mixed. + FloatingFrees, + /// As soon as a "free" argument (i.e. non-flag) is encountered, stop + /// considering any remaining arguments as flags. + StopAtFirstFree +} + +/// Name of an option. Either a string or a single char. +#[derive(Clone, PartialEq, Eq)] +enum Name { + /// A string representing the long name of an option. + /// For example: "help" + Long(String), + /// A char representing the short name of an option. + /// For example: 'h' + Short(char), +} + +/// Describes whether an option has an argument. +#[derive(Clone, Copy, PartialEq, Eq)] +pub enum HasArg { + /// The option requires an argument. + Yes, + /// The option takes no argument. + No, + /// The option argument is optional. + Maybe, +} + +/// Describes how often an option may occur. +#[derive(Clone, Copy, PartialEq, Eq)] +pub enum Occur { + /// The option occurs once. + Req, + /// The option occurs at most once. + Optional, + /// The option occurs zero or more times. + Multi, +} + +/// A description of a possible option. +#[derive(Clone, PartialEq, Eq)] +struct Opt { + /// Name of the option + name: Name, + /// Whether it has an argument + hasarg: HasArg, + /// How often it can occur + occur: Occur, + /// Which options it aliases + aliases: Vec, +} + +/// One group of options, e.g., both `-h` and `--help`, along with +/// their shared description and properties. +#[derive(Clone, PartialEq, Eq)] +struct OptGroup { + /// Short name of the option, e.g. `h` for a `-h` option + short_name: String, + /// Long name of the option, e.g. `help` for a `--help` option + long_name: String, + /// Hint for argument, e.g. `FILE` for a `-o FILE` option + hint: String, + /// Description for usage help text + desc: String, + /// Whether option has an argument + hasarg: HasArg, + /// How often it can occur + occur: Occur +} + +/// Describes whether an option is given at all or has a value. +#[derive(Clone, PartialEq, Eq)] +enum Optval { + Val(String), + Given, +} + +/// The result of checking command line arguments. Contains a vector +/// of matches and a vector of free strings. +#[derive(Clone, PartialEq, Eq)] +pub struct Matches { + /// Options that matched + opts: Vec, + /// Values of the Options that matched + vals: Vec>, + /// Free string fragments + pub free: Vec, +} + +/// The type returned when the command line does not conform to the +/// expected format. Use the `Debug` implementation to output detailed +/// information. +#[derive(Clone, Debug, PartialEq, Eq)] +pub enum Fail { + /// The option requires an argument but none was passed. 
+ ArgumentMissing(String), + /// The passed option is not declared among the possible options. + UnrecognizedOption(String), + /// A required option is not present. + OptionMissing(String), + /// A single occurrence option is being used multiple times. + OptionDuplicated(String), + /// There's an argument being passed to a non-argument option. + UnexpectedArgument(String), +} + +impl Error for Fail { + fn description(&self) -> &str { + match *self { + ArgumentMissing(_) => "missing argument", + UnrecognizedOption(_) => "unrecognized option", + OptionMissing(_) => "missing option", + OptionDuplicated(_) => "duplicated option", + UnexpectedArgument(_) => "unexpected argument", + } + } +} + +/// The type of failure that occurred. +#[derive(Clone, Copy, PartialEq, Eq)] +#[allow(missing_docs)] +pub enum FailType { + ArgumentMissing_, + UnrecognizedOption_, + OptionMissing_, + OptionDuplicated_, + UnexpectedArgument_, +} + +/// The result of parsing a command line with a set of options. +pub type Result = result::Result; + +impl Name { + fn from_str(nm: &str) -> Name { + if nm.len() == 1 { + Short(nm.as_bytes()[0] as char) + } else { + Long(nm.to_string()) + } + } + + fn to_string(&self) -> String { + match *self { + Short(ch) => ch.to_string(), + Long(ref s) => s.to_string() + } + } +} + +impl OptGroup { + /// Translate OptGroup into Opt. + /// (Both short and long names correspond to different Opts). + fn long_to_short(&self) -> Opt { + let OptGroup { + short_name, + long_name, + hasarg, + occur, + .. + } = (*self).clone(); + + match (short_name.len(), long_name.len()) { + (0,0) => panic!("this long-format option was given no name"), + (0,_) => Opt { + name: Long((long_name)), + hasarg: hasarg, + occur: occur, + aliases: Vec::new() + }, + (1,0) => Opt { + name: Short(short_name.as_bytes()[0] as char), + hasarg: hasarg, + occur: occur, + aliases: Vec::new() + }, + (1,_) => Opt { + name: Long((long_name)), + hasarg: hasarg, + occur: occur, + aliases: vec!( + Opt { + name: Short(short_name.as_bytes()[0] as char), + hasarg: hasarg, + occur: occur, + aliases: Vec::new() + } + ) + }, + (_,_) => panic!("something is wrong with the long-form opt") + } + } +} + +impl Matches { + fn opt_vals(&self, nm: &str) -> Vec { + match find_opt(&self.opts, Name::from_str(nm)) { + Some(id) => self.vals[id].clone(), + None => panic!("No option '{}' defined", nm) + } + } + + fn opt_val(&self, nm: &str) -> Option { + self.opt_vals(nm).into_iter().next() + } + + /// Returns true if an option was matched. + pub fn opt_present(&self, nm: &str) -> bool { + !self.opt_vals(nm).is_empty() + } + + /// Returns the number of times an option was matched. + pub fn opt_count(&self, nm: &str) -> usize { + self.opt_vals(nm).len() + } + + /// Returns true if any of several options were matched. + pub fn opts_present(&self, names: &[String]) -> bool { + names.iter().any(|nm| { + match find_opt(&self.opts, Name::from_str(&nm)) { + Some(id) if !self.vals[id].is_empty() => true, + _ => false, + } + }) + } + + /// Returns the string argument supplied to one of several matching options or `None`. + pub fn opts_str(&self, names: &[String]) -> Option { + names.iter().filter_map(|nm| { + match self.opt_val(&nm) { + Some(Val(s)) => Some(s), + _ => None, + } + }).next() + } + + /// Returns a vector of the arguments provided to all matches of the given + /// option. + /// + /// Used when an option accepts multiple values. 
+ pub fn opt_strs(&self, nm: &str) -> Vec { + self.opt_vals(nm).into_iter().filter_map(|v| { + match v { + Val(s) => Some(s), + _ => None, + } + }).collect() + } + + /// Returns the string argument supplied to a matching option or `None`. + pub fn opt_str(&self, nm: &str) -> Option { + match self.opt_val(nm) { + Some(Val(s)) => Some(s), + _ => None, + } + } + + + /// Returns the matching string, a default, or `None`. + /// + /// Returns `None` if the option was not present, `def` if the option was + /// present but no argument was provided, and the argument if the option was + /// present and an argument was provided. + pub fn opt_default(&self, nm: &str, def: &str) -> Option { + match self.opt_val(nm) { + Some(Val(s)) => Some(s), + Some(_) => Some(def.to_string()), + None => None, + } + } + +} + +fn is_arg(arg: &str) -> bool { + arg.as_bytes().get(0) == Some(&b'-') && arg.len() > 1 +} + +fn find_opt(opts: &[Opt], nm: Name) -> Option { + // Search main options. + let pos = opts.iter().position(|opt| opt.name == nm); + if pos.is_some() { + return pos + } + + // Search in aliases. + for candidate in opts.iter() { + if candidate.aliases.iter().position(|opt| opt.name == nm).is_some() { + return opts.iter().position(|opt| opt.name == candidate.name); + } + } + + None +} + +impl fmt::Display for Fail { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match *self { + ArgumentMissing(ref nm) => { + write!(f, "Argument to option '{}' missing.", *nm) + } + UnrecognizedOption(ref nm) => { + write!(f, "Unrecognized option: '{}'.", *nm) + } + OptionMissing(ref nm) => { + write!(f, "Required option '{}' missing.", *nm) + } + OptionDuplicated(ref nm) => { + write!(f, "Option '{}' given more than once.", *nm) + } + UnexpectedArgument(ref nm) => { + write!(f, "Option '{}' does not take an argument.", *nm) + } + } + } +} + +fn format_option(opt: &OptGroup) -> String { + let mut line = String::new(); + + if opt.occur != Req { + line.push('['); + } + + // Use short_name if possible, but fall back to long_name. + if opt.short_name.len() > 0 { + line.push('-'); + line.push_str(&opt.short_name); + } else { + line.push_str("--"); + line.push_str(&opt.long_name); + } + + if opt.hasarg != No { + line.push(' '); + if opt.hasarg == Maybe { + line.push('['); + } + line.push_str(&opt.hint); + if opt.hasarg == Maybe { + line.push(']'); + } + } + + if opt.occur != Req { + line.push(']'); + } + if opt.occur == Multi { + line.push_str(".."); + } + + line +} + +#[derive(Clone, Copy)] +enum SplitWithinState { + A, // leading whitespace, initial state + B, // words + C, // internal and trailing whitespace +} + +#[derive(Clone, Copy)] +enum Whitespace { + Ws, // current char is whitespace + Cr // current char is not whitespace +} + +#[derive(Clone, Copy)] +enum LengthLimit { + UnderLim, // current char makes current substring still fit in limit + OverLim // current char makes current substring no longer fit in limit +} + + +/// Splits a string into substrings with possibly internal whitespace, +/// each of them at most `lim` bytes long. The substrings have leading and trailing +/// whitespace removed, and are only cut at whitespace boundaries. +/// +/// Note: Function was moved here from `std::str` because this module is the only place that +/// uses it, and because it was too specific for a general string function. +/// +/// # Panics +/// +/// Panics during iteration if the string contains a non-whitespace +/// sequence longer than the limit. 
+fn each_split_within<'a, F>(ss: &'a str, lim: usize, mut it: F) + -> bool where F: FnMut(&'a str) -> bool { + // Just for fun, let's write this as a state machine: + + let mut slice_start = 0; + let mut last_start = 0; + let mut last_end = 0; + let mut state = A; + let mut fake_i = ss.len(); + let mut lim = lim; + + let mut cont = true; + + // if the limit is larger than the string, lower it to save cycles + if lim >= fake_i { + lim = fake_i; + } + + let mut machine = |cont: &mut bool, (i, c): (usize, char)| { + let whitespace = if c.is_whitespace() { Ws } else { Cr }; + let limit = if (i - slice_start + 1) <= lim { UnderLim } else { OverLim }; + + state = match (state, whitespace, limit) { + (A, Ws, _) => { A } + (A, Cr, _) => { slice_start = i; last_start = i; B } + + (B, Cr, UnderLim) => { B } + (B, Cr, OverLim) if (i - last_start + 1) > lim + => panic!("word starting with {} longer than limit!", + &ss[last_start..i + 1]), + (B, Cr, OverLim) => { + *cont = it(&ss[slice_start..last_end]); + slice_start = last_start; + B + } + (B, Ws, UnderLim) => { + last_end = i; + C + } + (B, Ws, OverLim) => { + last_end = i; + *cont = it(&ss[slice_start..last_end]); + A + } + + (C, Cr, UnderLim) => { + last_start = i; + B + } + (C, Cr, OverLim) => { + *cont = it(&ss[slice_start..last_end]); + slice_start = i; + last_start = i; + last_end = i; + B + } + (C, Ws, OverLim) => { + *cont = it(&ss[slice_start..last_end]); + A + } + (C, Ws, UnderLim) => { + C + } + }; + + *cont + }; + + ss.char_indices().all(|x| machine(&mut cont, x)); + + // Let the automaton 'run out' by supplying trailing whitespace + while cont && match state { B | C => true, A => false } { + machine(&mut cont, (fake_i, ' ')); + fake_i += 1; + } + return cont; +} + +#[test] +fn test_split_within() { + fn t(s: &str, i: usize, u: &[String]) { + let mut v = Vec::new(); + each_split_within(s, i, |s| { v.push(s.to_string()); true }); + assert!(v.iter().zip(u.iter()).all(|(a,b)| a == b)); + } + t("", 0, &[]); + t("", 15, &[]); + t("hello", 15, &["hello".to_string()]); + t("\nMary had a little lamb\nLittle lamb\n", 15, &[ + "Mary had a".to_string(), + "little lamb".to_string(), + "Little lamb".to_string() + ]); + t("\nMary had a little lamb\nLittle lamb\n", ::std::usize::MAX, + &["Mary had a little lamb\nLittle lamb".to_string()]); +} + +#[cfg(test)] +mod tests { + use super::{HasArg, Name, Occur, Opt, Options, ParsingStyle}; + use super::Fail::*; + + // Tests for reqopt + #[test] + fn test_reqopt() { + let long_args = vec!("--test=20".to_string()); + let mut opts = Options::new(); + opts.reqopt("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!(m.opt_present("t")); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => { panic!("test_reqopt failed (long arg)"); } + } + let short_args = vec!("-t".to_string(), "20".to_string()); + match opts.parse(&short_args) { + Ok(ref m) => { + assert!((m.opt_present("test"))); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!((m.opt_present("t"))); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => { panic!("test_reqopt failed (short arg)"); } + } + } + + #[test] + fn test_reqopt_missing() { + let args = vec!("blah".to_string()); + match Options::new() + .reqopt("t", "test", "testing", "TEST") + .parse(&args) { + Err(OptionMissing(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_reqopt_no_arg() { + let long_args = vec!("--test".to_string()); + let mut opts = 
Options::new(); + opts.reqopt("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + let short_args = vec!("-t".to_string()); + match opts.parse(&short_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_reqopt_multi() { + let args = vec!("--test=20".to_string(), "-t".to_string(), "30".to_string()); + match Options::new() + .reqopt("t", "test", "testing", "TEST") + .parse(&args) { + Err(OptionDuplicated(_)) => {}, + _ => panic!() + } + } + + // Tests for optopt + #[test] + fn test_optopt() { + let long_args = vec!("--test=20".to_string()); + let mut opts = Options::new(); + opts.optopt("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!((m.opt_present("t"))); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => panic!() + } + let short_args = vec!("-t".to_string(), "20".to_string()); + match opts.parse(&short_args) { + Ok(ref m) => { + assert!((m.opt_present("test"))); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!((m.opt_present("t"))); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => panic!() + } + } + + #[test] + fn test_optopt_missing() { + let args = vec!("blah".to_string()); + match Options::new() + .optopt("t", "test", "testing", "TEST") + .parse(&args) { + Ok(ref m) => { + assert!(!m.opt_present("test")); + assert!(!m.opt_present("t")); + } + _ => panic!() + } + } + + #[test] + fn test_optopt_no_arg() { + let long_args = vec!("--test".to_string()); + let mut opts = Options::new(); + opts.optopt("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + let short_args = vec!("-t".to_string()); + match opts.parse(&short_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_optopt_multi() { + let args = vec!("--test=20".to_string(), "-t".to_string(), "30".to_string()); + match Options::new() + .optopt("t", "test", "testing", "TEST") + .parse(&args) { + Err(OptionDuplicated(_)) => {}, + _ => panic!() + } + } + + // Tests for optflag + #[test] + fn test_optflag() { + let long_args = vec!("--test".to_string()); + let mut opts = Options::new(); + opts.optflag("t", "test", "testing"); + match opts.parse(&long_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert!(m.opt_present("t")); + } + _ => panic!() + } + let short_args = vec!("-t".to_string()); + match opts.parse(&short_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert!(m.opt_present("t")); + } + _ => panic!() + } + } + + #[test] + fn test_optflag_missing() { + let args = vec!("blah".to_string()); + match Options::new() + .optflag("t", "test", "testing") + .parse(&args) { + Ok(ref m) => { + assert!(!m.opt_present("test")); + assert!(!m.opt_present("t")); + } + _ => panic!() + } + } + + #[test] + fn test_optflag_long_arg() { + let args = vec!("--test=20".to_string()); + match Options::new() + .optflag("t", "test", "testing") + .parse(&args) { + Err(UnexpectedArgument(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_optflag_multi() { + let args = vec!("--test".to_string(), "-t".to_string()); + match Options::new() + .optflag("t", "test", "testing") + .parse(&args) { + Err(OptionDuplicated(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_optflag_short_arg() { + let args = vec!("-t".to_string(), "20".to_string()); + match Options::new() + .optflag("t", 
"test", "testing") + .parse(&args) { + Ok(ref m) => { + // The next variable after the flag is just a free argument + + assert!(m.free[0] == "20"); + } + _ => panic!() + } + } + + // Tests for optflagmulti + #[test] + fn test_optflagmulti_short1() { + let args = vec!("-v".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("v"), 1); + } + _ => panic!() + } + } + + #[test] + fn test_optflagmulti_short2a() { + let args = vec!("-v".to_string(), "-v".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("v"), 2); + } + _ => panic!() + } + } + + #[test] + fn test_optflagmulti_short2b() { + let args = vec!("-vv".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("v"), 2); + } + _ => panic!() + } + } + + #[test] + fn test_optflagmulti_long1() { + let args = vec!("--verbose".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("verbose"), 1); + } + _ => panic!() + } + } + + #[test] + fn test_optflagmulti_long2() { + let args = vec!("--verbose".to_string(), "--verbose".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("verbose"), 2); + } + _ => panic!() + } + } + + #[test] + fn test_optflagmulti_mix() { + let args = vec!("--verbose".to_string(), "-v".to_string(), + "-vv".to_string(), "verbose".to_string()); + match Options::new() + .optflagmulti("v", "verbose", "verbosity") + .parse(&args) { + Ok(ref m) => { + assert_eq!(m.opt_count("verbose"), 4); + assert_eq!(m.opt_count("v"), 4); + } + _ => panic!() + } + } + + // Tests for optflagopt + #[test] + fn test_optflagopt() { + let long_args = vec!("--test".to_string()); + let mut opts = Options::new(); + opts.optflag("t", "test", "testing"); + match opts.parse(&long_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert!(m.opt_present("t")); + } + _ => panic!() + } + let short_args = vec!("-t".to_string()); + match opts.parse(&short_args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert!(m.opt_present("t")); + } + _ => panic!() + } + let no_args: Vec = vec!(); + match opts.parse(&no_args) { + Ok(ref m) => { + assert!(!m.opt_present("test")); + assert!(!m.opt_present("t")); + } + _ => panic!() + } + } + + // Tests for optmulti + #[test] + fn test_optmulti() { + let long_args = vec!("--test=20".to_string()); + let mut opts = Options::new(); + opts.optmulti("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Ok(ref m) => { + assert!((m.opt_present("test"))); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!((m.opt_present("t"))); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => panic!() + } + let short_args = vec!("-t".to_string(), "20".to_string()); + match opts.parse(&short_args) { + Ok(ref m) => { + assert!((m.opt_present("test"))); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!((m.opt_present("t"))); + assert_eq!(m.opt_str("t").unwrap(), "20"); + } + _ => panic!() + } + } + + #[test] + fn test_optmulti_missing() { + let args = vec!("blah".to_string()); + match Options::new() + .optmulti("t", "test", "testing", "TEST") + .parse(&args) { + Ok(ref m) => { + assert!(!m.opt_present("test")); + assert!(!m.opt_present("t")); + } + _ => panic!() + } 
+ } + + #[test] + fn test_optmulti_no_arg() { + let long_args = vec!("--test".to_string()); + let mut opts = Options::new(); + opts.optmulti("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + let short_args = vec!("-t".to_string()); + match opts.parse(&short_args) { + Err(ArgumentMissing(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_optmulti_multi() { + let args = vec!("--test=20".to_string(), "-t".to_string(), "30".to_string()); + match Options::new() + .optmulti("t", "test", "testing", "TEST") + .parse(&args) { + Ok(ref m) => { + assert!(m.opt_present("test")); + assert_eq!(m.opt_str("test").unwrap(), "20"); + assert!(m.opt_present("t")); + assert_eq!(m.opt_str("t").unwrap(), "20"); + let pair = m.opt_strs("test"); + assert!(pair[0] == "20"); + assert!(pair[1] == "30"); + } + _ => panic!() + } + } + + #[test] + fn test_free_argument_is_hyphen() { + let args = vec!("-".to_string()); + match Options::new().parse(&args) { + Ok(ref m) => { + assert_eq!(m.free.len(), 1); + assert_eq!(m.free[0], "-"); + } + _ => panic!() + } + } + + #[test] + fn test_unrecognized_option() { + let long_args = vec!("--untest".to_string()); + let mut opts = Options::new(); + opts.optmulti("t", "test", "testing", "TEST"); + match opts.parse(&long_args) { + Err(UnrecognizedOption(_)) => {}, + _ => panic!() + } + let short_args = vec!("-u".to_string()); + match opts.parse(&short_args) { + Err(UnrecognizedOption(_)) => {}, + _ => panic!() + } + } + + #[test] + fn test_combined() { + let args = + vec!("prog".to_string(), + "free1".to_string(), + "-s".to_string(), + "20".to_string(), + "free2".to_string(), + "--flag".to_string(), + "--long=30".to_string(), + "-f".to_string(), + "-m".to_string(), + "40".to_string(), + "-m".to_string(), + "50".to_string(), + "-n".to_string(), + "-A B".to_string(), + "-n".to_string(), + "-60 70".to_string()); + match Options::new() + .optopt("s", "something", "something", "SOMETHING") + .optflag("", "flag", "a flag") + .reqopt("", "long", "hi", "LONG") + .optflag("f", "", "another flag") + .optmulti("m", "", "mmmmmm", "YUM") + .optmulti("n", "", "nothing", "NOTHING") + .optopt("", "notpresent", "nothing to see here", "NOPE") + .parse(&args) { + Ok(ref m) => { + assert!(m.free[0] == "prog"); + assert!(m.free[1] == "free1"); + assert_eq!(m.opt_str("s").unwrap(), "20"); + assert!(m.free[2] == "free2"); + assert!((m.opt_present("flag"))); + assert_eq!(m.opt_str("long").unwrap(), "30"); + assert!((m.opt_present("f"))); + let pair = m.opt_strs("m"); + assert!(pair[0] == "40"); + assert!(pair[1] == "50"); + let pair = m.opt_strs("n"); + assert!(pair[0] == "-A B"); + assert!(pair[1] == "-60 70"); + assert!((!m.opt_present("notpresent"))); + } + _ => panic!() + } + } + + #[test] + fn test_mixed_stop() { + let args = + vec!("-a".to_string(), + "b".to_string(), + "-c".to_string(), + "d".to_string()); + match Options::new() + .parsing_style(ParsingStyle::StopAtFirstFree) + .optflag("a", "", "") + .optopt("c", "", "", "") + .parse(&args) { + Ok(ref m) => { + println!("{}", m.opt_present("c")); + assert!(m.opt_present("a")); + assert!(!m.opt_present("c")); + assert_eq!(m.free.len(), 3); + assert_eq!(m.free[0], "b"); + assert_eq!(m.free[1], "-c"); + assert_eq!(m.free[2], "d"); + } + _ => panic!() + } + } + + #[test] + fn test_mixed_stop_hyphen() { + let args = + vec!("-a".to_string(), + "-".to_string(), + "-c".to_string(), + "d".to_string()); + match Options::new() + .parsing_style(ParsingStyle::StopAtFirstFree) + 
.optflag("a", "", "") + .optopt("c", "", "", "") + .parse(&args) { + Ok(ref m) => { + println!("{}", m.opt_present("c")); + assert!(m.opt_present("a")); + assert!(!m.opt_present("c")); + assert_eq!(m.free.len(), 3); + assert_eq!(m.free[0], "-"); + assert_eq!(m.free[1], "-c"); + assert_eq!(m.free[2], "d"); + } + _ => panic!() + } + } + + #[test] + fn test_multi() { + let mut opts = Options::new(); + opts.optopt("e", "", "encrypt", "ENCRYPT"); + opts.optopt("", "encrypt", "encrypt", "ENCRYPT"); + opts.optopt("f", "", "flag", "FLAG"); + + let args_single = vec!("-e".to_string(), "foo".to_string()); + let matches_single = &match opts.parse(&args_single) { + Ok(m) => m, + Err(_) => panic!() + }; + assert!(matches_single.opts_present(&["e".to_string()])); + assert!(matches_single.opts_present(&["encrypt".to_string(), "e".to_string()])); + assert!(matches_single.opts_present(&["e".to_string(), "encrypt".to_string()])); + assert!(!matches_single.opts_present(&["encrypt".to_string()])); + assert!(!matches_single.opts_present(&["thing".to_string()])); + assert!(!matches_single.opts_present(&[])); + + assert_eq!(matches_single.opts_str(&["e".to_string()]).unwrap(), "foo"); + assert_eq!(matches_single.opts_str(&["e".to_string(), "encrypt".to_string()]).unwrap(), + "foo"); + assert_eq!(matches_single.opts_str(&["encrypt".to_string(), "e".to_string()]).unwrap(), + "foo"); + + let args_both = vec!("-e".to_string(), "foo".to_string(), "--encrypt".to_string(), + "foo".to_string()); + let matches_both = &match opts.parse(&args_both) { + Ok(m) => m, + Err(_) => panic!() + }; + assert!(matches_both.opts_present(&["e".to_string()])); + assert!(matches_both.opts_present(&["encrypt".to_string()])); + assert!(matches_both.opts_present(&["encrypt".to_string(), "e".to_string()])); + assert!(matches_both.opts_present(&["e".to_string(), "encrypt".to_string()])); + assert!(!matches_both.opts_present(&["f".to_string()])); + assert!(!matches_both.opts_present(&["thing".to_string()])); + assert!(!matches_both.opts_present(&[])); + + assert_eq!(matches_both.opts_str(&["e".to_string()]).unwrap(), "foo"); + assert_eq!(matches_both.opts_str(&["encrypt".to_string()]).unwrap(), "foo"); + assert_eq!(matches_both.opts_str(&["e".to_string(), "encrypt".to_string()]).unwrap(), + "foo"); + assert_eq!(matches_both.opts_str(&["encrypt".to_string(), "e".to_string()]).unwrap(), + "foo"); + } + + #[test] + fn test_nospace() { + let args = vec!("-Lfoo".to_string(), "-M.".to_string()); + let matches = &match Options::new() + .optmulti("L", "", "library directory", "LIB") + .optmulti("M", "", "something", "MMMM") + .parse(&args) { + Ok(m) => m, + Err(_) => panic!() + }; + assert!(matches.opts_present(&["L".to_string()])); + assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "foo"); + assert!(matches.opts_present(&["M".to_string()])); + assert_eq!(matches.opts_str(&["M".to_string()]).unwrap(), "."); + + } + + #[test] + fn test_nospace_conflict() { + let args = vec!("-vvLverbose".to_string(), "-v".to_string() ); + let matches = &match Options::new() + .optmulti("L", "", "library directory", "LIB") + .optflagmulti("v", "verbose", "Verbose") + .parse(&args) { + Ok(m) => m, + Err(e) => panic!( "{}", e ) + }; + assert!(matches.opts_present(&["L".to_string()])); + assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "verbose"); + assert!(matches.opts_present(&["v".to_string()])); + assert_eq!(3, matches.opt_count("v")); + } + + #[test] + fn test_long_to_short() { + let mut short = Opt { + name: Name::Long("banana".to_string()), + 
hasarg: HasArg::Yes, + occur: Occur::Req, + aliases: Vec::new(), + }; + short.aliases = vec!(Opt { name: Name::Short('b'), + hasarg: HasArg::Yes, + occur: Occur::Req, + aliases: Vec::new() }); + let mut opts = Options::new(); + opts.reqopt("b", "banana", "some bananas", "VAL"); + let ref verbose = opts.grps[0]; + assert!(verbose.long_to_short() == short); + } + + #[test] + fn test_aliases_long_and_short() { + let args = vec!("-a".to_string(), "--apple".to_string(), "-a".to_string()); + + let matches = Options::new() + .optflagmulti("a", "apple", "Desc") + .parse(&args) + .unwrap(); + assert_eq!(3, matches.opt_count("a")); + assert_eq!(3, matches.opt_count("apple")); + } + + #[test] + fn test_usage() { + let mut opts = Options::new(); + opts.reqopt("b", "banana", "Desc", "VAL"); + opts.optopt("a", "012345678901234567890123456789", + "Desc", "VAL"); + opts.optflag("k", "kiwi", "Desc"); + opts.optflagopt("p", "", "Desc", "VAL"); + opts.optmulti("l", "", "Desc", "VAL"); + opts.optflag("", "starfruit", "Starfruit"); + + let expected = +"Usage: fruits + +Options: + -b, --banana VAL Desc + -a, --012345678901234567890123456789 VAL + Desc + -k, --kiwi Desc + -p [VAL] Desc + -l VAL Desc + --starfruit Starfruit +"; + + let generated_usage = opts.usage("Usage: fruits"); + + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", generated_usage); + assert_eq!(generated_usage, expected); + } + + #[test] + fn test_usage_description_wrapping() { + // indentation should be 24 spaces + // lines wrap after 78: or rather descriptions wrap after 54 + + let mut opts = Options::new(); + opts.optflag("k", "kiwi", + "This is a long description which won't be wrapped..+.."); // 54 + opts.optflag("a", "apple", + "This is a long description which _will_ be wrapped..+.."); + + let expected = +"Usage: fruits + +Options: + -k, --kiwi This is a long description which won't be wrapped..+.. + -a, --apple This is a long description which _will_ be + wrapped..+.. +"; + + let usage = opts.usage("Usage: fruits"); + + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", usage); + assert!(usage == expected) + } + + #[test] + fn test_usage_description_multibyte_handling() { + let mut opts = Options::new(); + opts.optflag("k", "k\u{2013}w\u{2013}", + "The word kiwi is normally spelled with two i's"); + opts.optflag("a", "apple", + "This \u{201C}description\u{201D} has some characters that could \ +confuse the line wrapping; an apple costs 0.51€ in some parts of Europe."); + + let expected = +"Usage: fruits + +Options: + -k, --k–w– The word kiwi is normally spelled with two i's + -a, --apple This “description” has some characters that could + confuse the line wrapping; an apple costs 0.51€ in + some parts of Europe. 
+"; + + let usage = opts.usage("Usage: fruits"); + + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", usage); + assert!(usage == expected) + } + + #[test] + fn test_usage_short_only() { + let mut opts = Options::new(); + opts.optopt("k", "", "Kiwi", "VAL"); + opts.optflag("s", "", "Starfruit"); + opts.optflagopt("a", "", "Apple", "TYPE"); + + let expected = +"Usage: fruits + +Options: + -k VAL Kiwi + -s Starfruit + -a [TYPE] Apple +"; + + let usage = opts.usage("Usage: fruits"); + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", usage); + assert!(usage == expected) + } + + #[test] + fn test_usage_long_only() { + let mut opts = Options::new(); + opts.optopt("", "kiwi", "Kiwi", "VAL"); + opts.optflag("", "starfruit", "Starfruit"); + opts.optflagopt("", "apple", "Apple", "TYPE"); + + let expected = +"Usage: fruits + +Options: + --kiwi VAL Kiwi + --starfruit Starfruit + --apple [TYPE] Apple +"; + + let usage = opts.usage("Usage: fruits"); + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", usage); + assert!(usage == expected) + } + + #[test] + fn test_short_usage() { + let mut opts = Options::new(); + opts.reqopt("b", "banana", "Desc", "VAL"); + opts.optopt("a", "012345678901234567890123456789", + "Desc", "VAL"); + opts.optflag("k", "kiwi", "Desc"); + opts.optflagopt("p", "", "Desc", "VAL"); + opts.optmulti("l", "", "Desc", "VAL"); + + let expected = "Usage: fruits -b VAL [-a VAL] [-k] [-p [VAL]] [-l VAL]..".to_string(); + let generated_usage = opts.short_usage("fruits"); + + debug!("expected: <<{}>>", expected); + debug!("generated: <<{}>>", generated_usage); + assert_eq!(generated_usage, expected); + } + + #[test] + fn test_args_with_equals() { + let mut opts = Options::new(); + opts.optopt("o", "one", "One", "INFO"); + opts.optopt("t", "two", "Two", "INFO"); + + let args = vec!("--one".to_string(), "A=B".to_string(), + "--two=C=D".to_string()); + let matches = &match opts.parse(&args) { + Ok(m) => m, + Err(e) => panic!("{}", e) + }; + assert_eq!(matches.opts_str(&["o".to_string()]).unwrap(), "A=B"); + assert_eq!(matches.opts_str(&["t".to_string()]).unwrap(), "C=D"); + } +} diff --git a/src/vendor/getopts/tests/smoke.rs b/src/vendor/getopts/tests/smoke.rs new file mode 100644 index 0000000000..a46f9c0167 --- /dev/null +++ b/src/vendor/getopts/tests/smoke.rs @@ -0,0 +1,8 @@ +extern crate getopts; + +use std::env; + +#[test] +fn main() { + getopts::Options::new().parse(env::args()).unwrap(); +} diff --git a/src/vendor/libc/.cargo-checksum.json b/src/vendor/libc/.cargo-checksum.json new file mode 100644 index 0000000000..56c0bb8d25 --- /dev/null +++ b/src/vendor/libc/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".gitignore":"7150ee9391a955b2ef7e0762fc61c0c1aab167620ca36d88d78062d93b8334ba",".travis.yml":"ca5e05b688a8c9a3215de3b38f22f4b468f73d26738a80bd939af503ddb222e1","Cargo.toml":"4b1f0d59b5fb939877a639d1d4cac5a12440c6e2d366edf2abcb45c46e3dcd3e","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"6485b8ed310d3f0340bf1ad1f47645069ce4069dcc6bb46c7d5c6faf41de1fdb","README.md":"c1f46480074340f17f1c3ea989b28e6b632b9d324e57792293a60399b90bfda0","appveyor.yml":"c0d70c650b6231e6ff78a352224f1a522a9be69d9da4251adbaddb3f0393294d","ci/README.md":"be804f15e2128e5fd4b160cb0b13cff5f19e7d77b55ec5254aa6fd8731c84f0d","ci/docker/aarch64-unknown-linux-gnu/Dockerfile":"62ca7317439f9c303990e897450a91cd467be05eb75dfc01456d417932ac8672","ci/docker/arm-linux-androideabi/Dockerfile":"c3d60f2ba389e60e59cb6973542751c66a0e7bd484e11589c8ee7346e9ff2bab","ci/docker/arm-unknown-linux-gnueabihf/Dockerfile":"e349f7caa463adbde8d6ec4d2b9f7720ed81c77f48d75bbfb78c89751f55c2dc","ci/docker/i686-unknown-linux-gnu/Dockerfile":"07e9df6ba91025cbec7ae81ade63f8cfb8a54c5e1e5a8f8def0617e17bd59db0","ci/docker/i686-unknown-linux-musl/Dockerfile":"1a4d064adff4a8f58773305567cfe5d915bcd0762bcb0e101cf6f4ca628a96da","ci/docker/mips-unknown-linux-gnu/Dockerfile":"860299d96ee50ebdbd788e65eb6ba1f561ef66107647bddffcb2567ac350896b","ci/docker/mips-unknown-linux-musl/Dockerfile":"b5917a15c0998adb79ebfdb8aff9ab0e5c4098c4bd5ca78e90ee05859dcfbda3","ci/docker/mips64-unknown-linux-gnuabi64/Dockerfile":"163776e0fd38f66df7415421202ac29efc7d345a628947434e573c3885594ab5","ci/docker/mipsel-unknown-linux-musl/Dockerfile":"b2dd4c26890c1070228df9694adde8fdb1fe78d7d5a71a8cb5c1b54835f93c46","ci/docker/powerpc-unknown-linux-gnu/Dockerfile":"08b846a338c2ee70100f4e80db812668dc58bfb536c44a95cd1cf004d965186b","ci/docker/powerpc64-unknown-linux-gnu/Dockerfile":"4da285ffd035d16f5da9e3701841eb86049c8cfa417fa81e53da4ef74152eac0","ci/docker/x86_64-rumprun-netbsd/Dockerfile":"44c3107fb30380785aaed6ff73fa334017a5bb4e3b5c7d4876154f09023a2b99","ci/docker/x86_64-unknown-freebsd/Dockerfile":"56fce89ceb70792be9005425f3e896361f5ba8a0553db659da87daced93f9785","ci/docker/x86_64-unknown-linux-gnu/Dockerfile":"67fabbc8c6ac02376cf9344251ad49ecdac396b71accb572fd1ae65225325bc0","ci/docker/x86_64-unknown-linux-musl/Dockerfile":"f71019fed5204b950843ef5e56144161fda7e27fad68ed0e8bc4353c388c7bcf","ci/docker/x86_64-unknown-openbsd/Dockerfile":"4a5583797a613056d87f6ae0b1d7a3d3a55552efa7c30e1e0aa67e34d69b4d9c","ci/dox.sh":"2161cb17ee0d6a2279a64149c6b7c73a5b2eab344f248ea1fa0e6c8f6335ec5f","ci/landing-page-footer.html":"b70b3112c2147f5c967e7481061ef38bc2d79a28dd55a16fb916d9c9426da2c4","ci/landing-page-head.html":"ad69663fac7924f27d0209bc519d55838e86edfc4133713a6fd08caadac1b142","ci/run-docker.sh":"325648a92ff4d74f18fdf3d190a5cd483306ed2a98479c0742ca7284acd6b948","ci/run-qemu.sh":"bb859421170871ef23a8940c5e150efec0c01b95e32d2ce2d37b79a45d9d346c","ci/run.sh":"3bb839c2d28986c6915b8f11ed820ff6c62e755fb96bd921a18899ee5f7efd32","ci/style.rs":"60564abc1d5197ed1598426dd0d6ee9939a16d2875b03373538f58843bb616c4","src/dox.rs":"eb6fbcc0b8b59430271bb71ee023961fd165337fc5fd6ca433882457a3c735bd","src/lib.rs":"4cece0e880ec8731913e5110b58d1b134148b0a43e72d6b990c1d999916fc706","src/macros.rs":"bd9802772b0e5c8b3c550d1c24307f06c0d1e4ce656b4ae1cf092142bbe5412c","src/unix/bsd/apple/b32.rs":"110ecff78da0e8d405d861447904da403d8b3f6da1f0f9dc9987633f3f04fe46","src/unix/bsd/apple/b64.rs":"e6808081c
0b276cca3189628716f507c7c0d00b62417cd44addbdaefe848cec7","src/unix/bsd/apple/mod.rs":"6691f81221d455b882d68d1102de049d5b9729bb4b59050c1d62c835dcaddafb","src/unix/bsd/freebsdlike/dragonfly/mod.rs":"d87f02c64649ce63367d9f0e39de7213bd30366bbd5e497f7d88f0dc3c319294","src/unix/bsd/freebsdlike/freebsd/mod.rs":"0a675c4b7f54b410547e10e433503487eb1e738394ab81cac82112a96d275bdc","src/unix/bsd/freebsdlike/freebsd/x86.rs":"54311d3ebf2bb091ab22361e377e6ef9224aec2ecfe459fbfcedde4932db9c58","src/unix/bsd/freebsdlike/freebsd/x86_64.rs":"c7f46b9ae23fde5a9e245a28ed1380066e67f081323b4d253a18e9da3b97b860","src/unix/bsd/freebsdlike/mod.rs":"574f7a1368058fad551cdebea4f576fe672f9bbe95a85468c91f9ff5661908c3","src/unix/bsd/mod.rs":"bd422d4bca87a3e8ea4bd78b9ae019643399807d036913f42fdd7476f260297d","src/unix/bsd/netbsdlike/mod.rs":"7b62b89c6ba0d5a8e0cf0937587a81e0314f9c5dabb0c9a9164106b677cf4dd8","src/unix/bsd/netbsdlike/netbsd/mod.rs":"d62a02a78275ed705b2080cae452eb8954ef0f66ac9acb0f44c819d453904c5c","src/unix/bsd/netbsdlike/netbsd/other/b32/mod.rs":"bd251a102bed65d5cb3459275f6ec3310fe5803ff4c9651212115548f86256d0","src/unix/bsd/netbsdlike/netbsd/other/b64/mod.rs":"927eeccaf3269d299db4c2a55f8010807bf43dfa894aea6a783215f5d3560baa","src/unix/bsd/netbsdlike/netbsd/other/mod.rs":"8ce39030f3e4fb45a3d676ade97da8f6d1b3d5f6d8d141224d341c993c57e090","src/unix/bsd/netbsdlike/openbsdlike/bitrig.rs":"f8cd05dacd3a3136c58da5a2fbe26f703767823b28e74fe8a2b57a7bd98d6d5c","src/unix/bsd/netbsdlike/openbsdlike/mod.rs":"769647209be7b8fc5b7e5c1970f16d5cf9cc3fba04bb456c9584f19a5c406e08","src/unix/bsd/netbsdlike/openbsdlike/openbsd.rs":"b1b9cf7be9f0e4d294a57092594074ad03a65fe0eeac9d1104fa874c313e7900","src/unix/haiku/b32.rs":"bd251a102bed65d5cb3459275f6ec3310fe5803ff4c9651212115548f86256d0","src/unix/haiku/b64.rs":"b422430c550c0ba833c9206d1350861e344e3a2eb33d7d58693efb35044be1cc","src/unix/haiku/mod.rs":"d14c45d536f24cd9cd8d5170b9829026da4c782ff2d5855644cc217553e309cf","src/unix/mod.rs":"82952d405742b8b21bfbc29648115b3909d9c64422ad04fb6aca443c16ddaa99","src/unix/notbsd/android/b32.rs":"148e1b4ed8b4f700d5aa24178af925164176e1c18b54db877ced4b55ba9f03d4","src/unix/notbsd/android/b64.rs":"302caf0aa95fa022030717c58de17d85d814b04350eca081a722ec435bc4f217","src/unix/notbsd/android/mod.rs":"f7c0145110a406c5cb14243dc71b98af8971674aa7620e5f55dabfa5c8b344c8","src/unix/notbsd/linux/mips.rs":"7736e565499b04560bc7e6f8636fd39c74f4a588c671ece931d27de8ca263963","src/unix/notbsd/linux/mips64.rs":"f269d516e0f5203fbfd18ff6b22ff33f206be1584d9df03c35743f5e80127d8b","src/unix/notbsd/linux/mod.rs":"81dbebd7dd798dc57e5b5b84cec69af2b6027a415262f4ad07b8c609ad2c95ee","src/unix/notbsd/linux/musl/b32/arm.rs":"a8416bc6e36460f3c60e2f7730dad7c43466790d11214441ef227ffb05ea450f","src/unix/notbsd/linux/musl/b32/asmjs.rs":"c660c5eef21a5f7580e9258eb44881014d2aeba5928af431dfc782b6c4393f33","src/unix/notbsd/linux/musl/b32/mips.rs":"76d835acd06c7bcd07a293a6f141b715ac88b959b633df9af3610e8d6eeb1ab4","src/unix/notbsd/linux/musl/b32/mod.rs":"bd29a02c67b69791e7cabd7666503c35ed5322d244a005b9cc7fd0cb28b552a8","src/unix/notbsd/linux/musl/b32/x86.rs":"da2e557a6afa9d15649d8862a5d17032597c924cd8bb290105500905fe975133","src/unix/notbsd/linux/musl/b64/aarch64.rs":"4009c7eaf703472daef2a70bdac910d9fc395a33689ef2e8cf1c4e692445d3f0","src/unix/notbsd/linux/musl/b64/mod.rs":"20f34e48124d8ca2a08cc0d28353b310238d37a345dfa0d58993e2e930a1ae23","src/unix/notbsd/linux/musl/b64/powerpc64.rs":"dc28f5b7284235d6cf5519053cac59a1c16dc39223b71cca0871e4880755f852","src/unix/notbsd/linux/musl/b64/x86_64.rs":"43291a
cc0dfc92c2fec8ba6ce77ee9ca3c20bcdccec18e149f95ba911cee704b","src/unix/notbsd/linux/musl/mod.rs":"c195e04167d26f82885f9157e32a28caccfd4eabe807af683708f33e28562021","src/unix/notbsd/linux/other/b32/arm.rs":"f5cb989075fa3b5f997e7101495532c8d5c9f3577412d4c07e4c8c1a16f7b43c","src/unix/notbsd/linux/other/b32/mod.rs":"8b774feb5510b963ed031db7ab3d7e24f1ba5524a6396db0b851d237ccc16fd3","src/unix/notbsd/linux/other/b32/powerpc.rs":"3b62052bb9741afa5349098e6e9c675b60e822e41fed6b5e1b694be1872097b1","src/unix/notbsd/linux/other/b32/x86.rs":"1eda37736f5966c7968b594f74f5018f56b6b8c67bbdeb31fc3db1b6e4ac31b4","src/unix/notbsd/linux/other/b64/aarch64.rs":"a978e82d037a9c8127b2f704323864aff42ac910e721ecc69c255671ca96b950","src/unix/notbsd/linux/other/b64/mod.rs":"efb7740c2fb925ea98977a6a3ff52bc0b72205c1f88a9ba281a939b66b7f0efe","src/unix/notbsd/linux/other/b64/powerpc64.rs":"06a795bca8e91a0143ef1787b034201ed7a21d01960ce9fe869d18c274d5bdb4","src/unix/notbsd/linux/other/b64/x86_64.rs":"0ed128e93f212c0d65660bd95e29190a2dae7c9d15d6fa0d3c4c6656f89e9bdc","src/unix/notbsd/linux/other/mod.rs":"0f7b29425273101ce90a9565637e5f7f61905db2a1e8f5360b285c73b1287da1","src/unix/notbsd/linux/s390x.rs":"6eddef139e18191bc3894f759ca8bd83c59b547bc572ad8938dc61fb5a97d2e9","src/unix/notbsd/mod.rs":"6ba17e2e9a6d05d4470ba595fd38dc55f70fea874a46425a4733ae52d93ee8ff","src/unix/solaris/mod.rs":"6d1f023b637467fe26385d23b32219dbb4573ea177d159e32dad75e4a6ff95de","src/windows.rs":"08f351462388566dcdc6566fb183a467942db63a1caa1bc97f85284fb7a74063"},"package":"044d1360593a78f5c8e5e710beccdc24ab71d1f01bc19a29bcacdba22e8475d8"} \ No newline at end of file diff --git a/src/vendor/libc/.cargo-ok b/src/vendor/libc/.cargo-ok new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/vendor/libc/.gitignore b/src/vendor/libc/.gitignore new file mode 100644 index 0000000000..f0ff2599d0 --- /dev/null +++ b/src/vendor/libc/.gitignore @@ -0,0 +1,3 @@ +target +Cargo.lock +*~ diff --git a/src/vendor/libc/.travis.yml b/src/vendor/libc/.travis.yml new file mode 100644 index 0000000000..703329b705 --- /dev/null +++ b/src/vendor/libc/.travis.yml @@ -0,0 +1,125 @@ +language: rust +sudo: required +dist: trusty +services: + - docker +install: + - curl https://static.rust-lang.org/rustup.sh | + sh -s -- --add-target=$TARGET --disable-sudo -y --prefix=`rustc --print sysroot` +script: + - cargo build + - cargo build --no-default-features + - cargo generate-lockfile --manifest-path libc-test/Cargo.toml + - if [[ $TRAVIS_OS_NAME = "linux" ]]; then + sh ci/run-docker.sh $TARGET; + else + export CARGO_TARGET_DIR=`pwd`/target; + sh ci/run.sh $TARGET; + fi + - rustc ci/style.rs && ./style src +osx_image: xcode7.3 +env: + global: + secure: eIDEoQdTyglcsTD13zSGotAX2HDhRSXIaaTnVZTThqLSrySOc3/6KY3qmOc2Msf7XaBqfFy9QA+alk7OwfePp253eiy1Kced67ffjjFOytEcRT7FlQiYpcYQD6WNHZEj62/bJBO4LTM9sGtWNCTJVEDKW0WM8mUK7qNuC+honPM= +matrix: + include: + # 1.0.0 compat + - os: linux + env: TARGET=x86_64-unknown-linux-gnu + rust: 1.0.0 + script: cargo build + install: + + # build documentation + - os: linux + env: TARGET=x86_64-unknown-linux-gnu + rust: stable + script: sh ci/dox.sh + + # stable compat + - os: linux + env: TARGET=x86_64-unknown-linux-gnu + rust: stable + - os: linux + env: TARGET=i686-unknown-linux-gnu + rust: stable + - os: osx + env: TARGET=x86_64-apple-darwin + rust: stable + - os: osx + env: TARGET=i686-apple-darwin + rust: stable + - os: linux + env: TARGET=arm-linux-androideabi + rust: stable + - os: linux + env: TARGET=x86_64-unknown-linux-musl + rust: stable + - os: 
linux + env: TARGET=i686-unknown-linux-musl + rust: stable + - os: linux + env: TARGET=arm-unknown-linux-gnueabihf + rust: stable + - os: linux + env: TARGET=aarch64-unknown-linux-gnu + rust: stable + - os: osx + env: TARGET=i386-apple-ios + rust: stable + - os: osx + env: TARGET=x86_64-apple-ios + rust: stable + - os: linux + env: TARGET=x86_64-rumprun-netbsd + rust: stable + - os: linux + env: TARGET=powerpc-unknown-linux-gnu + rust: stable + - os: linux + env: TARGET=powerpc64-unknown-linux-gnu + rust: stable + - os: linux + env: TARGET=mips-unknown-linux-musl + rust: stable + - os: linux + env: TARGET=mipsel-unknown-linux-musl + rust: stable + - os: linux + env: TARGET=mips64-unknown-linux-gnuabi64 + rust: nightly + + # beta + - os: linux + env: TARGET=x86_64-unknown-linux-gnu + rust: beta + - os: osx + env: TARGET=x86_64-apple-darwin + rust: beta + + # nightly + - os: linux + env: TARGET=x86_64-unknown-linux-gnu + rust: nightly + - os: osx + env: TARGET=x86_64-apple-darwin + rust: nightly + - os: linux + env: TARGET=mips-unknown-linux-gnu + # not sure why this has to be nightly... + rust: nightly + + # QEMU based targets that compile in an emulator + - os: linux + env: TARGET=x86_64-unknown-freebsd + rust: stable + - os: linux + env: TARGET=x86_64-unknown-openbsd QEMU=openbsd.qcow2 + rust: stable + script: sh ci/run-docker.sh $TARGET + install: + +notifications: + email: + on_success: never + webhooks: https://buildbot.rust-lang.org/homu/travis diff --git a/src/vendor/libc/Cargo.toml b/src/vendor/libc/Cargo.toml new file mode 100644 index 0000000000..c08ab3aab9 --- /dev/null +++ b/src/vendor/libc/Cargo.toml @@ -0,0 +1,21 @@ +[package] + +name = "libc" +version = "0.2.17" +authors = ["The Rust Project Developers"] +license = "MIT/Apache-2.0" +readme = "README.md" +repository = "https://github.com/rust-lang/libc" +homepage = "https://github.com/rust-lang/libc" +documentation = "http://doc.rust-lang.org/libc" +description = """ +A library for types and bindings to native C functions often found in libc or +other common platform libraries. +""" + +[features] +default = ["use_std"] +use_std = [] + +[workspace] +members = ["libc-test", "libc-test/generate-files"] diff --git a/src/vendor/libc/LICENSE-APACHE b/src/vendor/libc/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/src/vendor/libc/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
diff --git a/src/vendor/libc/LICENSE-MIT b/src/vendor/libc/LICENSE-MIT new file mode 100644 index 0000000000..39d4bdb5ac --- /dev/null +++ b/src/vendor/libc/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2014 The Rust Project Developers + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/src/vendor/libc/README.md b/src/vendor/libc/README.md new file mode 100644 index 0000000000..5ea812320f --- /dev/null +++ b/src/vendor/libc/README.md @@ -0,0 +1,137 @@ +libc +==== + +A Rust library with native bindings to the types and functions commonly found on +various systems, including libc. + +[![Build Status](https://travis-ci.org/rust-lang/libc.svg?branch=master)](https://travis-ci.org/rust-lang/libc) +[![Build status](https://ci.appveyor.com/api/projects/status/34csq3uurnw7c0rl?svg=true)](https://ci.appveyor.com/project/alexcrichton/libc) + +[Documentation](#platforms-and-documentation) + +## Usage + +First, add the following to your `Cargo.toml`: + +```toml +[dependencies] +libc = "0.2" +``` + +Next, add this to your crate root: + +```rust +extern crate libc; +``` + +Currently libc by default links to the standard library, but if you would +instead like to use libc in a `#![no_std]` situation or crate you can request +this via: + +```toml +[dependencies] +libc = { version = "0.2", default-features = false } +``` + +## What is libc? + +The primary purpose of this crate is to provide all of the definitions necessary +to easily interoperate with C code (or "C-like" code) on each of the platforms +that Rust supports. This includes type definitions (e.g. `c_int`), constants +(e.g. `EINVAL`) as well as function headers (e.g. `malloc`). + +This crate does not strive to have any form of compatibility across platforms, +but rather it is simply a straight binding to the system libraries on the +platform in question. + +## Public API + +This crate exports all underlying platform types, functions, and constants under +the crate root, so all items are accessible as `libc::foo`. The types and values +of all the exported APIs match the platform that libc is compiled for. + +More detailed information about the design of this library can be found in its +[associated RFC][rfc]. + +[rfc]: https://github.com/rust-lang/rfcs/blob/master/text/1291-promote-libc.md + +## Adding an API + +Want to use an API which currently isn't bound in `libc`? It's quite easy to add +one! 
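Before diving in, here is a rough editorial sketch of what a finished binding tends to look like. The items below already exist in the crate and are shown purely to illustrate the shape of an addition (a type alias, a constant, or an `extern` function declaration); where such lines actually go is explained next.

```rust
// Illustrative fragment of a platform module (not a standalone program).
// Inside the crate, other items are referenced by crate-root paths like ::c_int,
// and exact definitions can differ from platform to platform.

// A type alias:
pub type nfds_t = ::c_ulong;

// A constant:
pub const POLLIN: ::c_short = 0x1;

// A function declaration:
extern {
    pub fn poll(fds: *mut ::pollfd, nfds: ::nfds_t, timeout: ::c_int) -> ::c_int;
}
```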
+ +The internal structure of this crate is designed to minimize the number of +`#[cfg]` attributes in order to easily be able to add new items which apply +to all platforms in the future. As a result, the crate is organized +hierarchically based on platform. Each module has a number of `#[cfg]`'d +children, but only one is ever actually compiled. Each module then reexports all +the contents of its children. + +This means that for each platform that libc supports, the path from a +leaf module to the root will contain all bindings for the platform in question. +Consequently, this indicates where an API should be added! Adding an API at a +particular level in the hierarchy means that it is supported on all the child +platforms of that level. For example, when adding a Unix API it should be added +to `src/unix/mod.rs`, but when adding a Linux-only API it should be added to +`src/unix/notbsd/linux/mod.rs`. + +If you're not 100% sure at what level of the hierarchy an API should be added +at, fear not! This crate has CI support which tests any binding against all +platforms supported, so you'll see failures if an API is added at the wrong +level or has different signatures across platforms. + +With that in mind, the steps for adding a new API are: + +1. Determine where in the module hierarchy your API should be added. +2. Add the API. +3. Send a PR to this repo. +4. Wait for CI to pass, fixing errors. +5. Wait for a merge! + +### Test before you commit + +We have two automated tests running on [Travis](https://travis-ci.org/rust-lang/libc): + +1. [`libc-test`](https://github.com/alexcrichton/ctest) + - `cd libc-test && cargo run` + - Use the `skip_*()` functions in `build.rs` if you really need a workaround. +2. Style checker + - `rustc ci/style.rs && ./style src` + +## Platforms and Documentation + +The following platforms are currently tested and have documentation available: + +Tested: + * [`i686-pc-windows-msvc`](https://doc.rust-lang.org/libc/i686-pc-windows-msvc/libc/) + * [`x86_64-pc-windows-msvc`](https://doc.rust-lang.org/libc/x86_64-pc-windows-msvc/libc/) + (Windows) + * [`i686-pc-windows-gnu`](https://doc.rust-lang.org/libc/i686-pc-windows-gnu/libc/) + * [`x86_64-pc-windows-gnu`](https://doc.rust-lang.org/libc/x86_64-pc-windows-gnu/libc/) + * [`i686-apple-darwin`](https://doc.rust-lang.org/libc/i686-apple-darwin/libc/) + * [`x86_64-apple-darwin`](https://doc.rust-lang.org/libc/x86_64-apple-darwin/libc/) + (OSX) + * `i686-apple-ios` + * `x86_64-apple-ios` + * [`i686-unknown-linux-gnu`](https://doc.rust-lang.org/libc/i686-unknown-linux-gnu/libc/) + * [`x86_64-unknown-linux-gnu`](https://doc.rust-lang.org/libc/x86_64-unknown-linux-gnu/libc/) + (Linux) + * [`x86_64-unknown-linux-musl`](https://doc.rust-lang.org/libc/x86_64-unknown-linux-musl/libc/) + (Linux MUSL) + * [`aarch64-unknown-linux-gnu`](https://doc.rust-lang.org/libc/aarch64-unknown-linux-gnu/libc/) + * [`mips-unknown-linux-gnu`](https://doc.rust-lang.org/libc/mips-unknown-linux-gnu/libc/) + * [`arm-unknown-linux-gnueabihf`](https://doc.rust-lang.org/libc/arm-unknown-linux-gnueabihf/libc/) + * [`arm-linux-androideabi`](https://doc.rust-lang.org/libc/arm-linux-androideabi/libc/) + (Android) + * [`x86_64-unknown-freebsd`](https://doc.rust-lang.org/libc/x86_64-unknown-freebsd/libc/) + * [`x86_64-unknown-openbsd`](https://doc.rust-lang.org/libc/x86_64-unknown-openbsd/libc/) + * [`x86_64-rumprun-netbsd`](https://doc.rust-lang.org/libc/x86_64-unknown-netbsd/libc/) + +The following may be supported, but are not guaranteed to always 
work: + + * `i686-unknown-freebsd` + * [`x86_64-unknown-bitrig`](https://doc.rust-lang.org/libc/x86_64-unknown-bitrig/libc/) + * [`x86_64-unknown-dragonfly`](https://doc.rust-lang.org/libc/x86_64-unknown-dragonfly/libc/) + * `i686-unknown-haiku` + * `x86_64-unknown-haiku` + * [`x86_64-unknown-netbsd`](https://doc.rust-lang.org/libc/x86_64-unknown-netbsd/libc/) diff --git a/src/vendor/libc/appveyor.yml b/src/vendor/libc/appveyor.yml new file mode 100644 index 0000000000..a851bb87b6 --- /dev/null +++ b/src/vendor/libc/appveyor.yml @@ -0,0 +1,25 @@ +environment: + matrix: + - TARGET: x86_64-pc-windows-gnu + MSYS2_BITS: 64 + - TARGET: i686-pc-windows-gnu + MSYS2_BITS: 32 + - TARGET: x86_64-pc-windows-msvc + - TARGET: i686-pc-windows-msvc +install: + - curl -sSf -o rustup-init.exe https://win.rustup.rs/ + - rustup-init.exe -y --default-host %TARGET% + - set PATH=%PATH%;C:\Users\appveyor\.cargo\bin + - if defined MSYS2_BITS set PATH=%PATH%;C:\msys64\mingw%MSYS2_BITS%\bin + - rustc -V + - cargo -V + +build: false + +test_script: + - cargo test --target %TARGET% + - cargo run --manifest-path libc-test/Cargo.toml --target %TARGET% + +cache: + - target + - C:\Users\appveyor\.cargo\registry diff --git a/src/vendor/libc/ci/README.md b/src/vendor/libc/ci/README.md new file mode 100644 index 0000000000..13c7c8da52 --- /dev/null +++ b/src/vendor/libc/ci/README.md @@ -0,0 +1,203 @@ +The goal of the libc crate is to have CI running everywhere to have the +strongest guarantees about the definitions that this library contains, and as a +result the CI is pretty complicated and also pretty large! Hopefully this can +serve as a guide through the sea of scripts in this directory and elsewhere in +this project. + +# Files + +First up, let's talk about the files in this directory: + +* `run-travis.sh` - a shell script run by all Travis builders, this is + responsible for setting up the rest of the environment such as installing new + packages, downloading Rust target libraries, etc. + +* `run.sh` - the actual script which runs tests for a particular architecture. + Called from the `run-travis.sh` script this will run all tests for the target + specified. + +* `cargo-config` - Cargo configuration of linkers to use copied into place by + the `run-travis.sh` script before builds are run. + +* `dox.sh` - script called from `run-travis.sh` on only the linux 64-bit nightly + Travis bots to build documentation for this crate. + +* `landing-page-*.html` - used by `dox.sh` to generate a landing page for all + architectures' documentation. + +* `run-qemu.sh` - see discussion about QEMU below + +* `mips`, `rumprun` - instructions to build the docker image for each respective + CI target + +# CI Systems + +Currently this repository leverages a combination of Travis CI and AppVeyor for +running tests. The triples tested are: + +* AppVeyor + * `{i686,x86_64}-pc-windows-{msvc,gnu}` +* Travis + * `{i686,x86_64,mips,aarch64}-unknown-linux-gnu` + * `x86_64-unknown-linux-musl` + * `arm-unknown-linux-gnueabihf` + * `arm-linux-androideabi` + * `{i686,x86_64}-apple-{darwin,ios}` + * `x86_64-rumprun-netbsd` + * `x86_64-unknown-freebsd` + * `x86_64-unknown-openbsd` + +The Windows triples are all pretty standard, they just set up their environment +then run tests, no need for downloading any extra target libs (we just download +the right installer). The Intel Linux/OSX builds are similar in that we just +download the right target libs and run tests. 
Note that the Intel Linux/OSX +builds are run on stable/beta/nightly, but are the only ones that do so. + +The remaining architectures look like: + +* Android runs in a [docker image][android-docker] with an emulator, the NDK, + and the SDK already set up. The entire build happens within the docker image. +* The MIPS, ARM, and AArch64 builds all use the QEMU userspace emulator to run + the generated binary to actually verify the tests pass. +* The MUSL build just has to download a MUSL compiler and target libraries and + then otherwise runs tests normally. +* iOS builds need an extra linker flag currently, but beyond that they're built + as standard as everything else. +* The rumprun target builds an entire kernel from the test suite and then runs + it inside QEMU using the serial console to test whether it succeeded or + failed. +* The BSD builds, currently OpenBSD and FreeBSD, use QEMU to boot up a system + and compile/run tests. More information on that below. + +[android-docker]: https://github.com/rust-lang/rust-buildbot/blob/master/slaves/android/Dockerfile + +## QEMU + +Lots of the architectures tested here use QEMU in the tests, so it's worth going +over all the crazy capabilities QEMU has and the various flavors in which we use +it! + +First up, QEMU has userspace emulation where it doesn't boot a full kernel, it +just runs a binary from another architecture (using the `qemu-` wrappers). +We provide it the runtime path for the dynamically loaded system libraries, +however. This strategy is used for all Linux architectures that aren't intel. +Note that one downside of this QEMU system is that threads are barely +implemented, so we're careful to not spawn many threads. + +For the rumprun target the only output is a kernel image, so we just use that +plus the `rumpbake` command to create a full kernel image which is then run from +within QEMU. + +Finally, the fun part, the BSDs. Quite a few hoops are jumped through to get CI +working for these platforms, but the gist of it looks like: + +* Cross compiling from Linux to any of the BSDs seems to be quite non-standard. + We may be able to get it working but it might be difficult at that point to + ensure that the libc definitions align with what you'd get on the BSD itself. + As a result, we try to do compiles within the BSD distro. +* On Travis we can't run a VM-in-a-VM, so we resort to userspace emulation + (QEMU). +* Unfortunately on Travis we also can't use KVM, so the emulation is super slow. + +With all that in mind, the way BSD is tested looks like: + +1. Download a pre-prepared image for the OS being tested. +2. Generate the tests for the OS being tested. This involves running the `ctest` + library over libc to generate a Rust file and a C file which will then be + compiled into the final test. +3. Generate a disk image which will later be mounted by the OS being tested. + This image is mostly just the libc directory, but some modifications are made + to compile the generated files from step 2. +4. The kernel is booted in QEMU, and it is configured to detect the libc-test + image being available, run the test script, and then shut down afterwards. +5. Look for whether the tests passed in the serial console output of the kernel. 
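As a hedged illustration only (the exact invocations live in the CI scripts and differ between the two OSes and image versions), steps 3 and 4 boil down to something along these lines:

```sh
# Pack the test sources and the files generated in step 2 into an ext2 image (step 3).
# The directory name, block count, and image names here are illustrative, not the CI's values.
genext2fs -d libc-test -b 65536 libc-test.img

# Boot the pre-prepared OS image with the test image attached as a second drive (step 4).
qemu-system-x86_64 -nographic \
  -drive if=virtio,file=freebsd.qcow2 \
  -drive if=virtio,file=libc-test.img
```

The guest image is configured (as described below) to notice that second drive, mount it, run the `run.sh` script it contains, and power off, so the host side only has to watch the serial console output for the result (step 5).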
+ +There's some pretty specific instructions for setting up each image (detailed +below), but the main gist of this is that we must avoid a vanilla `cargo run` +inside of the `libc-test` directory (which is what it's intended for) because +that would compile `syntex_syntax`, a large library, with userspace emulation. +This invariably times out on Travis, so we can't do that. + +Once all those hoops are jumped through, however, we can be happy that we're +testing almost everything! + +Below are some details of how to set up the initial OS images which are +downloaded. Each image must be enabled have input/output over the serial +console, log in automatically at the serial console, detect if a second drive in +QEMU is available, and if so mount it, run a script (it'll specifically be +`run-qemu.sh` in this folder which is copied into the generated image talked +about above), and then shut down. + +### QEMU setup - FreeBSD + +1. Download CD installer (most minimal is fine) +2. `qemu-img create -f qcow2 foo.qcow2 2G` +3. `qemu -cdrom foo.iso -drive if=virtio,file=foo.qcow2 -net nic,model=virtio -net user` +4. run installer +5. `echo 'console="comconsole"' >> /boot/loader.conf` +6. `echo 'autoboot_delay="0"' >> /boot/loader.conf` +7. look at /etc/ttys, see what getty argument is for ttyu0 +8. edit /etc/gettytab, look for ttyu0 argument, prepend `:al=root` to line + beneath + +(note that the current image has a `freebsd` user, but this isn't really +necessary) + +Once that's done, arrange for this script to run at login: + +``` +#!/bin/sh + +sudo kldload ext2fs +[ -e /dev/vtbd1 ] || exit 0 +sudo mount -t ext2fs /dev/vtbd1 /mnt +sh /mnt/run.sh /mnt +sudo poweroff +``` + +Helpful links + +* https://en.wikibooks.org/wiki/QEMU/Images +* https://blog.nekoconeko.nl/blog/2015/06/04/creating-an-openstack-freebsd-image.html +* https://www.freebsd.org/doc/handbook/serialconsole-setup.html + + +### QEMU setup - OpenBSD + +1. Download CD installer +2. `qemu-img create -f qcow2 foo.qcow2 2G` +3. `qemu -cdrom foo.iso -drive if=virtio,file=foo.qcow2 -net nic,model=virtio -net user` +4. run installer +5. `echo 'set tty com0' >> /etc/boot.conf` +6. `echo 'boot' >> /etc/boot.conf` +7. Modify /etc/ttys, change the `tty00` at the end from 'unknown off' to + 'vt220 on secure' +8. Modify same line in /etc/ttys to have `"/root/foo.sh"` as the shell +9. Add this script to `/root/foo.sh` + +``` +#!/bin/sh +exec 1>/dev/tty00 +exec 2>&1 + +if mount -t ext2fs /dev/sd1c /mnt; then + sh /mnt/run.sh /mnt + shutdown -ph now +fi + +# limited shell... +exec /bin/sh < /dev/tty00 +``` + +10. `chmod +x /root/foo.sh` + +Helpful links: + +* https://en.wikibooks.org/wiki/QEMU/Images +* http://www.openbsd.org/faq/faq7.html#SerCon + +# Questions? + +Hopefully that's at least somewhat of an introduction to everything going on +here, and feel free to ping @alexcrichton with questions! 
+ diff --git a/src/vendor/libc/ci/docker/aarch64-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/aarch64-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..2ba69e1544 --- /dev/null +++ b/src/vendor/libc/ci/docker/aarch64-unknown-linux-gnu/Dockerfile @@ -0,0 +1,7 @@ +FROM ubuntu:16.10 +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev ca-certificates \ + gcc-aarch64-linux-gnu libc6-dev-arm64-cross qemu-user +ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/arm-linux-androideabi/Dockerfile b/src/vendor/libc/ci/docker/arm-linux-androideabi/Dockerfile new file mode 100644 index 0000000000..0e41ba6dbe --- /dev/null +++ b/src/vendor/libc/ci/docker/arm-linux-androideabi/Dockerfile @@ -0,0 +1,4 @@ +FROM alexcrichton/rust-slave-android:2015-11-22 +ENV CARGO_TARGET_ARM_LINUX_ANDROIDEABI_LINKER=arm-linux-androideabi-gcc \ + PATH=$PATH:/rust/bin +ENTRYPOINT ["sh"] diff --git a/src/vendor/libc/ci/docker/arm-unknown-linux-gnueabihf/Dockerfile b/src/vendor/libc/ci/docker/arm-unknown-linux-gnueabihf/Dockerfile new file mode 100644 index 0000000000..3824c04664 --- /dev/null +++ b/src/vendor/libc/ci/docker/arm-unknown-linux-gnueabihf/Dockerfile @@ -0,0 +1,7 @@ +FROM ubuntu:16.10 +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev ca-certificates \ + gcc-arm-linux-gnueabihf libc6-dev-armhf-cross qemu-user +ENV CARGO_TARGET_ARM_UNKNOWN_LINUX_GNUEABIHF_LINKER=arm-linux-gnueabihf-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/i686-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/i686-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..c149d84072 --- /dev/null +++ b/src/vendor/libc/ci/docker/i686-unknown-linux-gnu/Dockerfile @@ -0,0 +1,5 @@ +FROM ubuntu:16.10 +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc-multilib libc6-dev ca-certificates +ENV PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/i686-unknown-linux-musl/Dockerfile b/src/vendor/libc/ci/docker/i686-unknown-linux-musl/Dockerfile new file mode 100644 index 0000000000..87459a1672 --- /dev/null +++ b/src/vendor/libc/ci/docker/i686-unknown-linux-musl/Dockerfile @@ -0,0 +1,22 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc make libc6-dev git curl ca-certificates +# Below we're cross-compiling musl for i686 using the system compiler on an +# x86_64 system. This is an awkward thing to be doing and so we have to jump +# through a couple hoops to get musl to be happy. In particular: +# +# * We specifically pass -m32 in CFLAGS and override CC when running ./configure, +# since otherwise the script will fail to find a compiler. +# * We manually unset CROSS_COMPILE when running make; otherwise the makefile +# will call the non-existent binary 'i686-ar'. +RUN curl https://www.musl-libc.org/releases/musl-1.1.15.tar.gz | \ + tar xzf - && \ + cd musl-1.1.15 && \ + CC=gcc CFLAGS=-m32 ./configure --prefix=/musl-i686 --disable-shared --target=i686 && \ + make CROSS_COMPILE= install -j4 && \ + cd .. 
&& \ + rm -rf musl-1.1.15 +ENV PATH=$PATH:/musl-i686/bin:/rust/bin \ + CC_i686_unknown_linux_musl=musl-gcc diff --git a/src/vendor/libc/ci/docker/mips-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/mips-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..eea1f652c3 --- /dev/null +++ b/src/vendor/libc/ci/docker/mips-unknown-linux-gnu/Dockerfile @@ -0,0 +1,10 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates \ + gcc-mips-linux-gnu libc6-dev-mips-cross \ + qemu-system-mips + +ENV CARGO_TARGET_MIPS_UNKNOWN_LINUX_GNU_LINKER=mips-linux-gnu-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/mips-unknown-linux-musl/Dockerfile b/src/vendor/libc/ci/docker/mips-unknown-linux-musl/Dockerfile new file mode 100644 index 0000000000..77c6adb435 --- /dev/null +++ b/src/vendor/libc/ci/docker/mips-unknown-linux-musl/Dockerfile @@ -0,0 +1,14 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates qemu-system-mips curl \ + bzip2 + +RUN mkdir /toolchain +RUN curl -L https://downloads.openwrt.org/snapshots/trunk/ar71xx/generic/OpenWrt-SDK-ar71xx-generic_gcc-5.3.0_musl-1.1.15.Linux-x86_64.tar.bz2 | \ + tar xjf - -C /toolchain --strip-components=1 + +ENV PATH=$PATH:/rust/bin:/toolchain/staging_dir/toolchain-mips_34kc_gcc-5.3.0_musl-1.1.15/bin \ + CC_mips_unknown_linux_musl=mips-openwrt-linux-gcc \ + CARGO_TARGET_MIPS_UNKNOWN_LINUX_MUSL_LINKER=mips-openwrt-linux-gcc diff --git a/src/vendor/libc/ci/docker/mips64-unknown-linux-gnuabi64/Dockerfile b/src/vendor/libc/ci/docker/mips64-unknown-linux-gnuabi64/Dockerfile new file mode 100644 index 0000000000..2eb5de2453 --- /dev/null +++ b/src/vendor/libc/ci/docker/mips64-unknown-linux-gnuabi64/Dockerfile @@ -0,0 +1,11 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates \ + gcc-mips64-linux-gnuabi64 libc6-dev-mips64-cross \ + qemu-system-mips64 + +ENV CARGO_TARGET_MIPS64_UNKNOWN_LINUX_GNUABI64_LINKER=mips64-linux-gnuabi64-gcc \ + CC_mips64_unknown_linux_gnuabi64=mips64-linux-gnuabi64-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/mipsel-unknown-linux-musl/Dockerfile b/src/vendor/libc/ci/docker/mipsel-unknown-linux-musl/Dockerfile new file mode 100644 index 0000000000..36c4d90ef6 --- /dev/null +++ b/src/vendor/libc/ci/docker/mipsel-unknown-linux-musl/Dockerfile @@ -0,0 +1,14 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates qemu-system-mips curl \ + bzip2 + +RUN mkdir /toolchain +RUN curl -L https://downloads.openwrt.org/snapshots/trunk/malta/generic/OpenWrt-Toolchain-malta-le_gcc-5.3.0_musl-1.1.15.Linux-x86_64.tar.bz2 | \ + tar xjf - -C /toolchain --strip-components=2 + +ENV PATH=$PATH:/rust/bin:/toolchain/bin \ + CC_mipsel_unknown_linux_musl=mipsel-openwrt-linux-gcc \ + CARGO_TARGET_MIPSEL_UNKNOWN_LINUX_MUSL_LINKER=mipsel-openwrt-linux-gcc diff --git a/src/vendor/libc/ci/docker/powerpc-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/powerpc-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..d9d7db0f41 --- /dev/null +++ b/src/vendor/libc/ci/docker/powerpc-unknown-linux-gnu/Dockerfile @@ -0,0 +1,10 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates \ + gcc-powerpc-linux-gnu 
libc6-dev-powerpc-cross \ + qemu-system-ppc + +ENV CARGO_TARGET_POWERPC_UNKNOWN_LINUX_GNU_LINKER=powerpc-linux-gnu-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/powerpc64-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/powerpc64-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..df0e6057b4 --- /dev/null +++ b/src/vendor/libc/ci/docker/powerpc64-unknown-linux-gnu/Dockerfile @@ -0,0 +1,11 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu-user ca-certificates \ + gcc-powerpc64-linux-gnu libc6-dev-ppc64-cross \ + qemu-system-ppc + +ENV CARGO_TARGET_POWERPC64_UNKNOWN_LINUX_GNU_LINKER=powerpc64-linux-gnu-gcc \ + CC=powerpc64-linux-gnu-gcc \ + PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/x86_64-rumprun-netbsd/Dockerfile b/src/vendor/libc/ci/docker/x86_64-rumprun-netbsd/Dockerfile new file mode 100644 index 0000000000..129771e76b --- /dev/null +++ b/src/vendor/libc/ci/docker/x86_64-rumprun-netbsd/Dockerfile @@ -0,0 +1,6 @@ +FROM mato/rumprun-toolchain-hw-x86_64 +USER root +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + qemu +ENV PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/x86_64-unknown-freebsd/Dockerfile b/src/vendor/libc/ci/docker/x86_64-unknown-freebsd/Dockerfile new file mode 100644 index 0000000000..b127338222 --- /dev/null +++ b/src/vendor/libc/ci/docker/x86_64-unknown-freebsd/Dockerfile @@ -0,0 +1,13 @@ +FROM alexcrichton/rust-slave-linux-cross:2016-04-15 +USER root + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + qemu genext2fs + +ENTRYPOINT ["sh"] + +ENV PATH=$PATH:/rust/bin \ + QEMU=freebsd.qcow2.gz \ + CAN_CROSS=1 \ + CARGO_TARGET_X86_64_UNKNOWN_FREEBSD_LINKER=x86_64-unknown-freebsd10-gcc diff --git a/src/vendor/libc/ci/docker/x86_64-unknown-linux-gnu/Dockerfile b/src/vendor/libc/ci/docker/x86_64-unknown-linux-gnu/Dockerfile new file mode 100644 index 0000000000..4af3f834cb --- /dev/null +++ b/src/vendor/libc/ci/docker/x86_64-unknown-linux-gnu/Dockerfile @@ -0,0 +1,5 @@ +FROM ubuntu:16.10 +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev ca-certificates +ENV PATH=$PATH:/rust/bin diff --git a/src/vendor/libc/ci/docker/x86_64-unknown-linux-musl/Dockerfile b/src/vendor/libc/ci/docker/x86_64-unknown-linux-musl/Dockerfile new file mode 100644 index 0000000000..9c2499948a --- /dev/null +++ b/src/vendor/libc/ci/docker/x86_64-unknown-linux-musl/Dockerfile @@ -0,0 +1,13 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc make libc6-dev git curl ca-certificates +RUN curl https://www.musl-libc.org/releases/musl-1.1.15.tar.gz | \ + tar xzf - && \ + cd musl-1.1.15 && \ + ./configure --prefix=/musl-x86_64 && \ + make install -j4 && \ + cd .. 
&& \ + rm -rf musl-1.1.15 +ENV PATH=$PATH:/musl-x86_64/bin:/rust/bin diff --git a/src/vendor/libc/ci/docker/x86_64-unknown-openbsd/Dockerfile b/src/vendor/libc/ci/docker/x86_64-unknown-openbsd/Dockerfile new file mode 100644 index 0000000000..26340a5ed1 --- /dev/null +++ b/src/vendor/libc/ci/docker/x86_64-unknown-openbsd/Dockerfile @@ -0,0 +1,8 @@ +FROM ubuntu:16.10 + +RUN apt-get update +RUN apt-get install -y --no-install-recommends \ + gcc libc6-dev qemu curl ca-certificates \ + genext2fs +ENV PATH=$PATH:/rust/bin \ + QEMU=2016-09-07/openbsd-6.0-without-pkgs.qcow2 diff --git a/src/vendor/libc/ci/dox.sh b/src/vendor/libc/ci/dox.sh new file mode 100644 index 0000000000..88d882dcac --- /dev/null +++ b/src/vendor/libc/ci/dox.sh @@ -0,0 +1,33 @@ +#!/bin/sh + +# Builds documentation for all target triples that we have a registered URL for +# in liblibc. This scrapes the list of triples to document from `src/lib.rs` +# which has a bunch of `html_root_url` directives we pick up. + +set -e + +TARGETS=`grep html_root_url src/lib.rs | sed 's/.*".*\/\(.*\)"/\1/'` + +rm -rf target/doc +mkdir -p target/doc + +cp ci/landing-page-head.html target/doc/index.html + +for target in $TARGETS; do + echo documenting $target + + rustdoc -o target/doc/$target --target $target src/lib.rs --cfg dox \ + --crate-name libc + + echo "
<li><a href=\"/libc/$target/libc/index.html\">$target</a></li>
" \ + >> target/doc/index.html +done + +cat ci/landing-page-footer.html >> target/doc/index.html + +# If we're on travis, not a PR, and on the right branch, publish! +if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then + pip install ghp-import --user $USER + $HOME/.local/bin/ghp-import -n target/doc + git push -qf https://${GH_TOKEN}@github.com/${TRAVIS_REPO_SLUG}.git gh-pages +fi diff --git a/src/vendor/libc/ci/landing-page-footer.html b/src/vendor/libc/ci/landing-page-footer.html new file mode 100644 index 0000000000..941cc8d2b4 --- /dev/null +++ b/src/vendor/libc/ci/landing-page-footer.html @@ -0,0 +1,3 @@ + + + diff --git a/src/vendor/libc/ci/landing-page-head.html b/src/vendor/libc/ci/landing-page-head.html new file mode 100644 index 0000000000..fc69fa88eb --- /dev/null +++ b/src/vendor/libc/ci/landing-page-head.html @@ -0,0 +1,7 @@ + + + + + + +