rust-lang/rust: Tracking Issue for strict_provenance

Gankra opened this issue · 161 comments

Feature gate: #![feature(strict_provenance)]

  • Read the docs
  • Get the stable polyfill


This is a tracking issue for the strict_provenance feature. This is a standard library feature that governs the APIs listed under "Public API" below.

IMPORTANT: This is purely a set of library APIs to make your code more clear/reliable, so that we can better understand what Rust code is actually trying to do and what it actually needs help with. It is overwhelmingly framed as a memory model because we are doing a bit of Roleplay here. We are roleplaying that this is a real memory model and seeing what code doesn't conform to it already. Then we are seeing how trivial it is to make that code "conform".

This cannot and will not "break your code" because the lang and compiler teams are wholly uninvolved with this. Your code cannot be "run under strict provenance" because there isn't a compiler flag for "enabling" it. Although it would be nice to have a lint to make it easier to quickly migrate code that wants to play along.

This is an unofficial experiment to see How Bad it would be if Rust had extremely strict pointer provenance rules that require you to always dynamically preserve provenance information. Which is to say if you ever want to treat something as a Real Pointer that can be Offset and Dereferenced, there must be an unbroken chain of custody from that pointer to the original allocation you are trying to access using only pointer->pointer operations. If at any point you turn a pointer into an integer, that integer cannot be turned back into a pointer. This includes usize as ptr, transmute, type punning with raw pointer reads/writes, whatever. Just assume the memory "knows" it contains a pointer and that writing to it as a non-pointer makes it forget (because this is quite literally true on CHERI and miri, which are immediate beneficiaries of doing this).

A secondary goal of this project is to try to disambiguate the many meanings of ptr as usize, in the hopes that it might make it plausible/tolerable to allow usize to be redefined to be an address-sized integer instead of a pointer-sized integer. This would allow for Rust to more natively support platforms where sizeof(size_t) < sizeof(intptr_t), and effectively redefine usize from intptr_t to size_t/ptrdiff_t/ptraddr_t (it would still generally conflate those concepts, absent a motivation to do otherwise). To the best of my knowledge this would not have a practical effect on any currently supported platforms, and just allow for more platforms to be supported (certainly true for our tier 1 platforms).

A tertiary goal of this project is to more clearly answer the question "hey what's the deal with Rust on architectures that are pretty harvard-y like AVR and WASM (platforms which treat function pointers and data pointers non-uniformly)". There is... weirdness in the language because it's difficult to talk about "some" function pointer generically/opaquely and that encourages you to turn them into data pointers and then maybe that does Wrong Things.

The mission statement of this experiment is: assume it will and must work, try to make code conform to it, smash face-first into really nasty problems that need special consideration, and try to actually figure out how to handle those situations. We want the evil shit you do with pointers to work but the current situation leads to incredibly broken results, so something has to give.

Public API

This design is roughly based on the article Rust's Unsafe Pointer Types Need An Overhaul, which is itself based on the APIs that CHERI exposes for dynamically maintaining provenance information even under Fun Bit Tricks.

The core piece that makes this at all plausible is pointer::with_addr(self, usize) -> Self which dynamically re-establishes the provenance chain of custody. Everything else introduced is sugar or alternatives to as casts that better express intent.

More APIs may be introduced as we explore the feature space.

// core::ptr
pub fn invalid<T>(addr: usize) -> *const T;
pub fn invalid_mut<T>(addr: usize) -> *mut T;

// core::pointer
pub fn addr(self) -> usize;
pub fn with_addr(self, addr: usize) -> Self;
pub fn map_addr(self, f: impl FnOnce(usize) -> usize) -> Self;
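
As a rough sketch of how these compose (nightly-only, and the function here is made up purely for illustration): addr() gives you a bare number, invalid/invalid_mut give you a "pointer" that is only an address, and with_addr/map_addr are the blessed ways to get a usable pointer back by borrowing the provenance of a pointer you still have.

#![feature(strict_provenance)]
use core::ptr;

fn demo(base: *mut u8) {
    // A "pointer" made from a bare address: it carries no provenance and is
    // only useful as a sentinel or for address comparisons, never for loads/stores.
    let dangling: *mut u8 = ptr::invalid_mut(0xDEAD);
    assert_eq!(dangling.addr(), 0xDEAD);

    // The blessed way to go usize -> pointer: keep the provenance of a pointer
    // you still have (`base`) and swap in the new address.
    let next_byte = base.with_addr(base.addr() + 1);

    // Pure address manipulation without ever leaving pointer-land.
    let masked = base.map_addr(|a| a & !0xFF);
    let _ = (next_byte, masked);
}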

Steps / History

  • Implementation: #95241
  • Final comment period (FCP)
  • Stabilization PR

Unresolved Questions

  • How Bad Is This?

  • How Good Is This?

  • What's Problematic (And Should Work)?

    • Hardcoded MMIO address stuff
      • We should define a platform-specific way to do this, possibly requiring that you only use volatile access
    • Opaque Function Pointers - architectures like AVR and WASM treat function pointers special; they're not normal data pointers.
      • We should really define a #[repr(transparent)] OpaqueFnPtr(fn() -> ()) type in std; we need a way to talk about e.g. dlopen.
    • libc interop for bad APIs that pun integers and pointers
      • Use a union to make the pun explicit?
    • passing shared pointers over IPC?
      • At worst you can rederive from your SHMEM?
    • downcasting to subclasses?
      • Would be nice if you could create a reference without shrinking its provenance to allow for ergonomic references to a baseclass that can be (unsafely) cast to a reference to a subclass.
    • memcpy operations conceptually say "all this memory is just u8's" which would trash provenance
      • it's pretty standard to carve out exceptions for memcpy, but it would be good to know if this can be done more rigorously
        with something like llvm's proposed byte type
    • AtomicPtr - AtomicPtr has a very limited API, so lots of people use AtomicUsize to do the equivalent of wrapping_add
      • Morally this is fine, unclear if the right compiler intrinsics exist to express this without "dropping" provenance.
  • What's Problematic (And Might Be Impossible)?

    • High-bit Tagging - rustc::ty does this because it makes common addressing modes Free Untagging Realestate
      • Technically this is "fine" but CHERI might get upset about it, needs investigation.
    • Pointer Compression - V8 and JVM like compressing pointers, involving massive truncations.
      • Can a Sufficiently Smart Union handle this?
    • Unrestricted XOR-list - XORing pointers to make an even more jacked up linked list
      • You must allocate all your nodes in a Vec/Arena to be able to reconstitute ptrs. At that point, use indices.
  • APIs We Want To Add/Change?

    • A lot of uses of .addr() are for alignment checks, .is_aligned(), .is_aligned_to(usize)?
    • An API to make ZST alloc forging explicit, exists_zst(usize)?
    • .addr() should arguably work on a DST, if you use .addr() you are ostensibly saying "I know this doesn't roundtrip"
    • Explicit conveniences for low-bit tagging? .with_tag(TAG)? (see the sketch after this list)
    • expose_addr/from_exposed_addr are slightly unfortunate names since it's not the address that gets exposed, it's the provenance. What would be better names? Please discuss on Zulip.
    • It is somewhat unfortunate that addr is the short and easy name for the operation that programmers likely expect less. (Many will expect expose_addr semantics.) Maybe it should have a different name. But which name?
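
On the low-bit tagging item above, a minimal sketch of what such conveniences could look like if built on map_addr/addr (nightly-only; with_tag/tag/untagged are hypothetical names, not proposed APIs):

/// Stash a small tag in the alignment bits of `p` without dropping provenance.
/// Only meaningful if `T`'s alignment leaves spare low bits.
fn with_tag<T>(p: *mut T, tag: usize) -> *mut T {
    let mask = core::mem::align_of::<T>() - 1;
    debug_assert!(tag <= mask, "tag must fit in the alignment bits");
    p.map_addr(|a| (a & !mask) | tag)
}

/// Read the tag back out; this only inspects the address.
fn tag<T>(p: *mut T) -> usize {
    p.addr() & (core::mem::align_of::<T>() - 1)
}

/// Clear the tag bits, yielding a pointer that is safe to offset/deref again
/// (provided it was valid before tagging).
fn untagged<T>(p: *mut T) -> *mut T {
    let mask = core::mem::align_of::<T>() - 1;
    p.map_addr(|a| a & !mask)
}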

FAQ

Why Is Rust So Broken? This Clearly Isn't A Problem For C!

It is absolutely a problem for C. Rust, C, C++, Swift, etc. are all fundamentally built on the same tools and principles once you start dropping down to the level of memory models (i.e. Rust literally just punts on atomics by saying "it's the C11 model" because that's the model for atomics). Compiler backends currently do not consistently model pointers in the face of things like "but pointers are just integers, right?"

Why Doesn't Rust Use C's Solution?

Folks who work on C's semantics are still trying to solve this issue, with the leading solution being PNVI-ae-udi. It's an interesting and reasonable approach under the assumption of "we can't possibly get people to fix their code, and we can't completely change compiler backends". At a high-level, the proposal is to (in the abstract machine's semantics):

  • Maintain a global list of all "exposed" ("maybe aliased") allocations.
  • Allocations come into the world un-exposed (unaliased).
  • Whenever a pointer is cast to an integer (or otherwise escapes opaquely), mark its allocation as "exposed".
  • Whenever an integer is cast to a pointer, do a global hit-test with that address on all exposed allocations.
  • If it hits an exposed allocation, great, it gets that allocation's provenance.
  • If it hits two exposed allocations (due to "one-past-the-end" shenanigans) be sad and just tell the programmer to "be consistent" and only access one of the two.
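
In terms of Rust's (also unstable) expose_addr/from_exposed_addr APIs mentioned earlier under "APIs We Want To Add/Change?", the roundtrip that this model blesses looks roughly like this (a sketch, not a recommendation):

fn exposed_roundtrip() {
    let x = Box::new(42u32);
    let p: *const u32 = &*x;

    // The ptr-to-int cast has a side-effect in this model: it marks the
    // allocation behind `p` as exposed.
    let addr: usize = p.expose_addr();

    // The int-to-ptr cast does the "global hit-test": the address is matched
    // against all exposed allocations, and on a hit the new pointer picks up
    // that allocation's provenance.
    let q: *const u32 = core::ptr::from_exposed_addr(addr);
    unsafe { assert_eq!(*q, 42) };
}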

The strength of this model is that from a compiler's perspective this can mostly just be understood as "business as usual" because you can still let programmers do ~whatever and reason locally along the lines of: "I have kept perfect track of these pointers, and therefore know exactly how they are/aren't aliased. These other pointers over here have escaped or come from something I don't know about, so I will assume they all alias each other and compile their accesses very conservatively."

What they do need to change under this model is to admit that a ptr-to-int cast has a "side-effect" (exposing the allocation) and that optimizing it away is unsound, because then you will forget the pointer was exposed and do bad things.

The weakness of this model is that it essentially implies that an int-to-ptr cast has permission to access whatever memory you want (all "exposed" memory). This makes it very hard for dynamic checkers to be useful, because anytime ptr-int-ptr things happen, even implicitly/transiently, the checker has to throw up its hands and say "I guess you know what you're doing" and won't actually be able to help you catch bugs. For instance if you use-after-free, the checker cannot notice if you get "lucky" and start accessing a new allocation in the same place if the pointer was cast from an integer after the reallocation. This is Sad.

It is perhaps an inevitable fate that Rust will adopt C's model, or at least have to interoperate with it, but it would be nice if we could do better given how much time and energy we put into having more rigorous and safe concepts for borrows and lifetimes! Rust is uniquely positioned to explore stricter semantics, and has established precedent for just saying "hey actually this idea we've had since time immemorial is busted, let's migrate everyone off of it so the language can make sense".

I am not saying we are going to break the world right now, but we should explore how bad breaking the world is, and at worst, making code conform to strict provenance in more places will make our existing tools work more reliably, which is just Good.

But CHERI Runs C Code Fine?

Yes! Sort of.

Pointer-integer shenanigans are not actually that common, and so most code actually already trivially has dynamic/strict provenance in both Rust and C. Most of the time people are just comparing addresses, checking alignment, or doing tagged-pointer shenanigans. These things are totally fine under strict provenance!

The remaining places where people are actually committing pointer-integer crimes are handled by a hack that mostly works: defining intptr_t to just actually be a pointer still and having the compiler handle/codegen it as such. This is one of the genuine successes of C's model of "define a million different integer types that sure sound like they should be the same thing but get to be different if an implementation says they are". In particular, making this tolerable requires CHERI to say intptr_t is 128-bit and size_t/ptrdiff_t and friends are 64-bit. That said this hack has its limitations and Sufficiently Evil code will still break and just needs to be reworked. Or the code needs to be compiled to a significantly less strict model that looks A Lot like PNVI-ae-udi and removes most of the value of the checker.

The intptr_t hack was explored for Rust but it doesn't work very well, because Rust doesn't make a distinction between size_t and intptr_t. This meant every array size and index was 128-bit and handled by CHERI's more expensive pointer registers/instructions on the paranoid assumption that it could all be Secret Pointers We Need To Track. For Rust to properly support CHERI it needs to decouple the notions of size_t and intptr_t, which means we need everyone to be more clear on what they mean when they convert between pointers and integers.

It would also be nice if Rust, the systems language that Really Cares About Memory-Safety was a first-class citizen on CHERI, the architecture that Really Cares About Memory-Safety. We are pursuing the same goals, and Rust's design is seemingly very friendly to CHERI... except pointer<->integer shenanigans make everything into a muddy mess.

Isn't It A Big Deal That CHERI "Breaks" wrapping_offset?

CHERI's pointer compression scheme falls over if you wrapping_offset too far out-of-bounds (like, kilobytes out of bounds), and will silently mark your pointer as "corrupt" (while still faithfully preserving the actual address). Once this happens, offsetting that pointer back in bounds and trying to load/store will fault.

It's annoying but it's not really a problem. It's a system limitation, and if you run afoul of it you will get a deterministic fault. This is much the same as targeting Rust to some little embedded system, having to disable all of libstd, and then still crashing because you were too sloppy with your memory footprint. Or how random parts of std have to be cfg'd out when targeting WASM. Some code just isn't as portable as you'd like, because anything more exotic than x64 has quirky little limitations. Rust is not the intersection of all platform limitations, because that intersection is terrible.

Also I should clarify something because it seems to have been lost to history: wrapping_offset was never the "good" offset. At least in my time as a standard library team member, it was always intended that all Rust code should always be attempting to use offset. This is what the rustonomicon advocates, and how libstd was written. offset is the semantics the language uses for things like borrowing fields. If you access the contents of a slice or collection or anything else in std, that will overwhelmingly be done with offset, because that's The Right Way To Do It.

wrapping_offset is for "I am doing something really bad and can't do things Right". It's useful! I have many times wanted to use it to avoid dealing with some weird case in thin-vec or whatever other horrible unsafe code I'm writing. I generally don't because years of working on std burned Offset Is Right, Do It Right into my brain. It's ok for you to want a bit of "slop" to simplify some nasty unsafe code, but in this case that slop comes with the possibility of the code crashing on a relatively exotic platform.

Yes I know offset says:

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

I wrote those docs, so this is my fault, mea culpa. The fact that you "wanted" to use offset was completely burned into my brain, so I didn't even think about mentioning/clarifying that. Like whenever I look at this line my brain is implicitly putting "☠️ IF YOUR CODE IS TERRIBLE, AND YOU ABSOLUTELY MUST ☠️" in front of it, because that was the conventional understanding of these two methods when this was written.

Your code isn't terrible for using wrapping_offset, I just should have made it more clear that it should be regarded as a Last Resort and not "the chill offset for everyone". 😿

Isn't This Model WAY Too Strict?

Probably! My goal is not to define The One True Memory Model for Rust, but to define a painfully simple one that is easy for Rust programmers to understand and make their code conform to by default. The "idea" with strict-provenance is that it's so strict that any coherent model must surely be a weakening of it (i.e. says strictly more code is allowed). In this way, code that happens to conform to strict-provenance (which is most code, as CHERI has demonstrated!) is essentially guaranteed to be correct under any model and to not be miscompiled (barring compiler bugs).

One can imagine ending up with this "tower of weakenings":

  • strictest: (stacked-borrows-with-)strict-provenance, a model Rust programmers try their best to conform to.
  • strict-ish: "real" stacked-borrows, an actually functional memory-model for the crimes Rust Programmers Crave.
  • shrug-emoji: the actual primitives that compiler backends emit, and the optimizations they perform with them.

If your code works higher up the tower, it will definitely work against anything lower down the tower, and the bottom of the tower is the one that "matters", because that's the thing that actually compiles your code.

It is frustrating as a programmer to know that there is this vague memory-model stuff going on, and that compilers are vaguely broken because they don't really have coherent models. By making it easier for code to conform to strict-provenance, we are making it more robust in the face of inconsistent and buggy semantics AND future-proofing that code against any possible "real" model.

Mega Link Pile: the work on this API, prior art for this API, CHERI resources, provenance resources, and the strict provenance Zulip threads.

The proposed lints in #95199 should be something a user messing with these APIs can opt into to quickly find sketchy places in their code. What's the "right" way to expose an unstable lint? Is it sufficient to make it allow and users can opt in with normal linting stuff, or do we also need a special feature/-Z to opt into the lint existing at all?

Hardcoded MMIO address stuff

We should define a platform-specific way to do this, possibly requiring that you only use volatile access

This should probably be more like core::arch::asm!: it's under arch in module terms but it actually is platform-generic, because it has the same factors in play: it's "architecturally specific in terms of invocation but almost universal because it appears almost everywhere".

The proposed lints in #95199 should be something a user messing with these APIs can opt into to quickly find sketchy places in their code. What's the "right" way to expose an unstable lint? Is it sufficient to make it allow and users can opt in with normal linting stuff, or do we also need a special feature/-Z to opt into the lint existing at all?

I... hm. I think the allow-by-default lint is conservative enough? It shouldn't need a -Z feature unless we get something wild going on like turning miri on to find misbehaving pointers.

At least in the bootstrap, the compiler will complain if you allow() a lint in your code that doesn't exist. This potentially just means:

  • We need to keep the experimental lint around forever even when the experiment is over
  • Users can only "safely" invoke it from the command line manually, which is slightly unfortunate for anything like what I did where I used it as a FIXME/WONTFIX marker for the file.

Also due to the "Opaque Function Pointers" / "Harvard Architecture" / "AVR is cursed" issue:

// HACK: The intermediate cast as usize is required for AVR

I think we want the lint broken up into parts:

  • #[fuzzy_provenance_casts] - int-to-ptr, totally evil
  • #[lossy_provenance_casts] - ptr-to-int, sketchy but valid as long as you actually want .addr() semantics
  • #[oxford_casts] - casts that make harvard architectures sad -- fn<->ptr (name is a joke... unless...)

I can't justify discouraging fn <-> int, absent better ways to talk about fn ptrs properly.

At least in the bootstrap, the compiler will complain if you allow() a lint in your code that doesn't exist. This potentially just means:

Hm... I think we can make a lint conditional on #![feature(strict_provenance)] being enabled? I remember seeing at least one lint that is like that.

Overall I like this idea for Rust, but it seems incompatible for C interop. It's pretty common in C APIs to use an integer as a "user-data" field intended to store arbitrary values (primarily pointers). For example, the winapi SetWindowLongPtr function accepts a pointer-sized integer.

I could imagine having some kind of built-in "pointer map" type, which behaves like a HashMap<usize, *mut T>: this would allow storing and then later "recovering" the provenance of a pointer based on its address. On CHERI it could be an actual map, but on other architectures it could just be a no-op (at least at the machine level, not in the abstract machine).

I feel like it's very plausible to define some sort of pointer-int union without messing with ABI for "this is a pointer, the API is lying" since in general, afaict, it's always sound to say something that is actually just an integer is a pointer "for fun" (as ptr::invalid shows) as long as you only deref/offset it when it's a Real Pointer.
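
A minimal sketch of what that union could look like (PtrOrInt and its methods are made-up names, not a proposed API):

use core::ffi::c_void;

/// Pointer-sized value for C APIs whose signature says "integer" but whose
/// contract is really "pointer". Keeping a pointer-typed view means provenance
/// can be carried through even though the foreign signature lies.
#[repr(C)]
#[derive(Clone, Copy)]
union PtrOrInt {
    ptr: *mut c_void,
    int: usize,
}

impl PtrOrInt {
    fn from_ptr(ptr: *mut c_void) -> Self { PtrOrInt { ptr } }
    fn from_int(int: usize) -> Self { PtrOrInt { int } }

    /// Only meaningful to deref/offset the result if this was built from a
    /// real pointer (otherwise it's just an address wearing a pointer type).
    fn as_ptr(self) -> *mut c_void { unsafe { self.ptr } }

    /// Reading the integer view of a pointer is itself a ptr-to-int, so this
    /// is where any "lossiness" lives.
    fn as_int(self) -> usize { unsafe { self.int } }
}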

it's always sound to say something that is actually just an integer is a pointer "for fun"

Yes indeed, pointers are a superset of (equally-sized) integers.

I feel like it's very plausible to define some sort of pointer-int union without messing with ABI for "this is a pointer, the API is lying"

But then you can't mechanically map C signatures to Rust, because you can't know whether an integer should be treated as a pointer or not.

Today, basically everything you can do in C, you can also do in unsafe Rust, which makes it possible to call APIs designed to be called from C, but if Rust can't just "do what C does", then it makes that interop a lot harder.

I mean, yes, but if you actually use the API and it expects you to pass it a dereferenceable pointer where it says the arg is an integer, then at that point you can go "ah, this API is lying" and do whatever needs to be done about that. Blindly charging forward, slamming into problems, and forcing ourselves to figure out what "do whatever needs to be done" should look like is the primary mission statement of this experiment.

So, for that specific API, SetWindowLongPtrW, the official signature looks like this:

LONG_PTR SetWindowLongPtrW(
  [in] HWND     hWnd,
  [in] int      nIndex,
  [in] LONG_PTR dwNewLong
);

where HWND is a *mut c_void (or other opaque pointer of choice), and LONG_PTR is isize (more specifically documented as "an integer the size of a pointer").

So in "real programs" you're expected to:

  • during window creation: allocate your user data on the heap, then set that pointer on your HWND (via SetWindowLongPtrW)
  • during painting or whatever: you can get your userdata pointer (GetWindowLongPtrW) and change some stuff in that allocation. This is largely necessary because the window event handler system is callback based, and the callback fn doesn't otherwise have access to any of your "main program" data.
  • during window destruction: you get the pointer and then free that allocation

Also, while it's sometimes UB to declare the wrong foreign signature, that's because of cross-lang LTO, and we never normally compile user32, so we can just declare the wrong signature and type the function as:

extern "system" {
  pub fn SetWindowLongPtrW(hwnd: *mut c_void, index: c_int, new_long: *mut c_void) -> *mut c_void;
}

And now we "don't have to" perform pointer to int casting ourselves, it just silently happens during the foreign interfacing.

All that said, when we get the pointer back from GetWindowLongPtrW, we've still lost our provenance info. So we're still hosed. We still need to be able to send a pointer to foreign-land, get it back from foreign-land later, and have the pointer continue to be usable by Rust.

Could you elaborate on why provenance "must" be lost?

If you're operating on a system that actually dynamically maintains provenance, the information must be maintained by the callee anyway or the OS literally doesn't function.

If you're operating under a model where the rust abstract machine just needs to be self-consistent, then presumably having the API signature reflect "pointer goes in, pointer goes out" is sufficient? Like yes the compiler doesn't "know" where those pointers go or come from, but the compiler also doesn't "do" global analysis and therefore must be able to cope with calling a native rust function with a (*mut) -> *mut signature, right?

Well if there's no global analysis then that's fine, sure.

Agreed with @Gankra.

  • Either those FFI calls go to 'outside' the Abstract Machine, in which case their effect on the state of the Abstract Machine basically has to be axiomatized (similar to how Miri implements 'shims' for calling system functions). We can just axiomatize that the provenance of the user data pointer is preserved. The compiler doesn't know what the right axiom is for this function so it has to be correct no matter what.

    (This assumes GetWindowLongPtrW returns a type that can carry provenance, like *mut c_void.)

  • Or the other side of the call is conceptually "inside" the Abstract Machine (think: cross-lang inlining); then that code has to be written to preserve provenance.

(This assumes GetWindowLongPtrW returns a type that can carry provenance, like *mut c_void.)

It does not, it returns an integer, but presumably you are suggesting redefining the function such that the return type preserves provenance.

Yes, @Lokathor suggested above to adjust the signature of SetWindowLongPtrW so I assume the same is done to the other related functions.
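
Concretely, the "redeclare both sides as pointers" approach being discussed might look something like this (GWLP_USERDATA and the state type are just for illustration, and this leans on pointers and LONG_PTR sharing a call ABI on the targets in question):

use core::ffi::{c_int, c_void};

extern "system" {
    fn SetWindowLongPtrW(hwnd: *mut c_void, index: c_int, new_long: *mut c_void) -> *mut c_void;
    fn GetWindowLongPtrW(hwnd: *mut c_void, index: c_int) -> *mut c_void;
}

const GWLP_USERDATA: c_int = -21;

struct MyWindowState {
    clicks: u64,
}

unsafe fn stash_state(hwnd: *mut c_void, state: Box<MyWindowState>) {
    // The pointer crosses FFI *as a pointer*, so its provenance can be
    // axiomatized to survive the round trip (per the discussion above).
    SetWindowLongPtrW(hwnd, GWLP_USERDATA, Box::into_raw(state).cast());
}

unsafe fn fetch_state(hwnd: *mut c_void) -> *mut MyWindowState {
    GetWindowLongPtrW(hwnd, GWLP_USERDATA).cast()
}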

So, for that specific API, SetWindowLongPtrW, the official signature looks like this:

LONG_PTR SetWindowLongPtrW(
  [in] HWND     hWnd,
  [in] int      nIndex,
  [in] LONG_PTR dwNewLong
);
extern "system" {
  pub fn SetWindowLongPtrW(hwnd: *mut c_void, index: c_int, new_long: *mut c_void) -> *mut c_void;
}

Just as a note, this is in fact what the LLVM IR function signature looks like when building C/C++ with CHERI LLVM (https://cheri-compiler-explorer.cl.cam.ac.uk/z/KG5oqq).

intptr_t is lowered to i8 addrspace(200)*, so this retains provenance information and the signature would also be correct for cross-language LTO in the CHERI case.

@arichardson presumably that's because intptr_t is typedef'd to some special intrinsic type that is lowered to a pointer only for CHERI? That's an additional incompatibility then, because Rust's isize is how we map intptr_t, but isize does not store provenance.

@Diggsey yes, see "A secondary goal" in the top comment and "But CHERI Runs C Code Fine?" in the FAQ.

I think my concerns with C interop basically boil down to this: Rust might need a way to interoperate with the PNVI-ae-udi model (assuming that is what C goes with) and I'm not convinced that the boundary between Rust's provenance model and the C provenance model can lie precisely on the FFI boundary. I think there will always be cases where we need unsafe Rust to be able to "act like C" for that interop to be practical.

If that concern turns out to be well-founded, it doesn't mean we can't still do better than C though. For example, we could use the same model, but have the "act of exposing a pointer" be an explicit compiler intrinsic intended for unsafe FFI code, rather than happening on all pointer-to-int casts.

Yes it's probable there will need to be a way to say "I give up" and use a Ptr16/Ptr32/Ptr64/Ptr128 "integer type" that is exactly like how CHERI handles intptr_t. Solutions like this will be considered more seriously as we attempt to "migrate" more code to stricter semantics and run into limitations with the MVP.

In terms of things like interoperating with PNVI-ae-udi at the level of cross-language LTO -- it is wholly premature to at all think about that. Like, 5-10 years premature, realistically. Having a proposed experimental memory model is in a completely different galaxy from actually emitting aliasing metadata into LLVM.

If we don't consider cross-language LTO, then the FFI boundary is where such interop happens and I don't think we need to worry much about PNVI. Interactions all happen on the 'target machine' level, where most of the time there is no provenance (and when there is, like on CHERI, it is very explicit and propagates through integer and pointer types equally).

If we do consider cross-language LTO, then really the interactions are described by a "shared abstract machine" that the optimizations happen on -- probably the LLVM IR Abstract Machine. It is anyone's guess how that one will account for PNVI, but given that the trade-offs are very different between surface languages and IRs (and given that LLVM IR does not enforce TBAA on all accesses) I doubt they will just copy PNVI. So it is rather futile IMO to try and prepare for this future. (And @Gankra wrote basically the same thing at the same time.^^)

comex commented

Unrestricted XOR-list - XORing pointers to make an even more jacked up linked list

  • You must allocate all your nodes in a Vec/Arena to be able to reconstitute ptrs. At that point, use indices.

Is the "you must" meant to refer to strict provenance or the status quo? I don't see why under PNVI you wouldn't be able to allocate the nodes wherever you want, modulo aliasing concerns. Of course, this would not work on CHERI.

All discussion is with respect to strict provenance, because that is the thing we are experimenting with.

For XOR lists also see this horrible hack ;)

There are several different PNVI models with distinct implications, including ones that the paper did not fully explore, and allusions to other possibilities. Please do not truncate them to merely "PNVI" unless genuinely referring to all possible ones, as XorLinkedList by the usual methods does not work under PNVI-roll-a-die, which in the case of any question regarding provenance simply rolls a die and assigns a random provenance.

@RalfJung surely if one were determined to make XOR lists work, it would be better to define an XOR operation on pointers (i.e. one that XORs the address part and performs some operation with the same operator axioms as XOR (but not necessarily XOR) on the provenance part). Theoretically that could also work on CHERI if a suitable operator could be created, and if it can't then this XOR would just not be available for pointers on that architecture.

some operation with the same operator axioms as XOR (but not necessarily XOR) on the provenance part

I don't think such an operation necessarily exists, and even if it does then I would not agree. Rather we should have better operations to explicitly and separately manipulate the provenance and the address bytes of a pointer.

But anyway this is veering wildly into off-topic terrain. Let's continue on Zulip, if we must. :)

Ralf Discussing this API Concept Long Ago

For the record, I did not invent this API, though it's also a bit hard to track its history now. I posted a very similar API in the UCG stream in May 2020, but that was just copied from... somewhere. I think I first learned about the idea to kill int2ptr and replace it by an API with explicit provenance from @digama0.

  • What's Problematic (And Might Be Impossible)?

    • High-bit Tagging - rustc::ty does this because it makes common addressing modes Free Untagging Realestate

From a CHERI perspective, this can be made to work so long as the capability encoding knows what the useable address space is and ignores high bits appropriately with cheri_address_set. I can't remember if Morello supports TBI or not. The big downside is that it really has to be architectural (if potentially mode-dependent) since you're sacrificing addressable virtual address space.

  • APIs We Want To Add/Change?

    • A lot of uses of .addr() are for alignment checks, .is_aligned(), .is_aligned_to(usize)?

In CHERI C we added __builtin_is_aligned, __builtin_align_up, and __builtin_align_down and upstreamed them to LLVM. We also have/had some macros that could be builtins for dealing with low tags, but currently don't need them much.
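
For reference, the Rust-side equivalents could be written today on top of addr/map_addr, roughly like this (nightly-only sketch; the helper names are made up, and `align` is assumed to be a power of two):

fn is_aligned_to<T>(p: *const T, align: usize) -> bool {
    debug_assert!(align.is_power_of_two());
    p.addr() & (align - 1) == 0
}

fn align_down<T>(p: *mut T, align: usize) -> *mut T {
    debug_assert!(align.is_power_of_two());
    p.map_addr(|a| a & !(align - 1))
}

fn align_up<T>(p: *mut T, align: usize) -> *mut T {
    debug_assert!(align.is_power_of_two());
    // Wraps to 0 if rounding up overflows the address space; real code would check.
    p.map_addr(|a| a.wrapping_add(align - 1) & !(align - 1))
}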

Problem that CHERI people might have some good answers for: AtomicPtr has a very limited API and currently people use AtomicUsize in its place for even basic stuff like the moral equivalent of wrapping_offset. Do you have APIs for that / llvm plumbing for that? In particular it would be ideal if all platforms could do some more complex ops on AtomicPtrs without "dropping" provenance.

YESSSSS IT LANDED

ok i will properly post a public announcement for this when it hits nightly and I can link the docs

Another "problematic but should work" thing I would like to bring attention to is hardware APIs that provide the programmer with opaque memory addresses to refer to. Examples being AArch64 TTBR_EL registers like this one.

I assume such cases will be naturally addressed by the idea of pointer::with_addr, but I figured it's worth pointing it out anyway.

Here's a strongly simplified, currently problematic example of how a programmer might go about implementing paging with it:

pub struct PageTable {
    l1_tables: [PhysAddr; 2],
    num_blocks: [usize; 2],
}

impl PageTable {
    pub fn new() -> Self {
        // Read L1 table locations and their sizes from system registers...
        Self {
            l1_tables: [
                PhysAddr::new(align_down(unsafe { TTBR0_EL1.get() }, PAGE_SIZE)),
                PhysAddr::new(align_down(unsafe { TTBR1_EL1.get() }, PAGE_SIZE)),
            ],
            num_blocks: [
                unsafe { TCR_EL1.read(TCR_EL1::T0SZ) } / L1_BLOCK_SIZE,
                unsafe { TCR_EL1.read(TCR_EL1::T1SZ) } / L1_BLOCK_SIZE,
            ],
        }
    }

    // Public APIs will be built on top of this.
    // `L1PageTableDescriptor` is a set of information for a page table entry.
    fn get_l1_descriptor(&mut self, address: VirtAddr) -> &mut L1PageTableDescriptor {
        let idx = (address.as_usize() >> 63) & 1;
        unsafe {
            // SAFETY: Even with re-established provenance for pointers to L1 tables,
            // what would be considered the "allocated object" to make `.add` and the
            // pointer deref safe to do here?
            &mut *self.l1_tables[idx]
                .as_mut_ptr::<L1PageTableDescriptor>()
                .add((address.as_usize() / L1_BLOCK_SIZE) & (self.num_blocks[idx] - 1))
        }
    }
}

Problem that CHERI people might have some good answers for: AtomicPtr has a very limited API and currently people use AtomicUsize in its place for even basic stuff like the moral equivalent of wrapping_offset. Do you have APIs for that / llvm plumbing for that? In particular it would be ideal if all platforms could do some more complex ops on AtomicPtrs without "dropping" provenance.

We have the full set of RMW atomics; they're implemented as you would imagine, with the same rules as normal uintptr_t arithmetic in terms of the requirements for maintaining provenance (i.e. stay within representable bounds). Like C(++)11's atomics, the first operand is the pointer and the second operand is a plain integer (though I think we're lazy and shove the integer in a pointer type whose metadata is ignored as that way we don't need to update everywhere that deals with atomics to separate the types, but that's an implementation detail and not part of the model).

Ok so basically all the machinery for doing arbitrary atomics with pointers (cross platform?) is already in LLVM, and rustc/std just needs to wire it up and expose it? (I am... incredibly fuzzy on what it means for a compiler to "preserve" provenance in a way that llvm and the CHERI backend will "get it".)

We support all the C11 atomics, yes; for example: https://cheri-compiler-explorer.cl.cam.ac.uk/z/ezess6

Sadly, it also needs fetch_or and fetch_and equivalents, or a lot of atomic code (https://github.com/Amanieu/parking_lot/blob/master/core/src/word_lock.rs for example) will not be able to be ported to AtomicPtr. That seems unsupported on pointers on compiler explorer, and it also brings up design questions.

It works at the IR level, we're just missing a bit of plumbing for the Clang frontend for anything other than add/sub I guess: https://cheri-compiler-explorer.cl.cam.ac.uk/z/G7GcK7

Oh, right, because C doesn't let you do that on "real" pointers at all, you need to use uintptr_t and then everything works: https://cheri-compiler-explorer.cl.cam.ac.uk/z/9MrKWP
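
For what it's worth, until that plumbing exists on the Rust side, a fetch_or on AtomicPtr can be emulated with a compare-exchange loop that only touches the address via map_addr, so provenance is never routed through a bare integer (nightly-only sketch; note this is a CAS loop rather than a single RMW, which is exactly why native support would still be nicer):

use std::sync::atomic::{AtomicPtr, Ordering};

/// Emulated `fetch_or`: OR some low bits into the stored address while keeping
/// the stored pointer's provenance intact. Returns the previous pointer.
fn fetch_or<T>(atom: &AtomicPtr<T>, bits: usize, order: Ordering) -> *mut T {
    let mut cur = atom.load(Ordering::Relaxed);
    loop {
        let new = cur.map_addr(|a| a | bits);
        match atom.compare_exchange_weak(cur, new, order, Ordering::Relaxed) {
            Ok(prev) => return prev,
            Err(actual) => cur = actual,
        }
    }
}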

Just filed a bunch of issues for stuff off the top of my head under the A-strict-provenance label ("Area: Strict provenance for raw pointers").

Today I learned that the m68k has the dubious honor of having an actual ABI where "pointer-sized-integer" and "pointer" actually have different ABIs just to keep you on your toes, so anyone who thinks m68k is Cool needs to stop all pointer-int-punning crimes immediately and report to the Type Correctness Ward.

(if you're gonna pun them, which is still largely super reasonable, you should still prefer a pointer over an integer for the pun, because that will still work better everywhere it can work)

Sorry to be this way, but I'm pretty worried about adding more undefined behavior to Rust to support a niche project like CHERI. Rust UB is already very hard to avoid. If this is an experiment, fine, but if an RFC gets filed I would probably be opposed at this point.

Please see "Isn't This Model WAY Too Strict?" in the FAQ

this is pretty cool. thank you for putting together such an understandable writeup 🙏

To address that FAQ issue, I don't think that it's frustrating at all for programmers that provenance is vaguely broken. I think it's frustrating for compiler writers and people who work on memory models. But most programmers happily go about their day not thinking about provenance. To that end, I'm much more interested in fixing the notion of provenance to comply with the unsafe code that people have written in the wild than in overhauling our pointer types and declaring large swathes of code UB. Yes, that will be "ugly"; it's our job to deal with that.

A valid reaction to what I've written above might be "but you'd be equally skeptical of stacked borrows then, right?" And yes, I am also skeptical of stacked borrows.

To be more concrete, I'd prefer we just use something like PNVI-ae-udi. Maybe we could tighten things up a little, but that seems like a good place to start.

Yeah. I get that PNVI-ae-udi is a bit of a messy model semantically, but it has the property that code that does ptr->int->ptr merely misses out on optimizations, rather than being UB. That is a really nice property.

FWIW while Stacked Borrows is a lot easier to define on top of strict provenance, it is possible to define it in a way that ptr-int-ptr roundtrips always work and just pessimize the optimizer. However, the resulting model is... ghastly. About as ghastly as PNVI-ae-udi would be if it supported restrict (which it doesn't). Not something you'd want to implement in Miri. If you thought Stacked Borrows as-is is complicated, you really won't like this model. ;)

But Stacked Borrows vs no aliasing rules is in first approximation orthogonal to strict provenance vs allowing ptr-int-ptr roundtrips.

Also PNVI-ae-udi relies on C things like strict aliasing, which we don't want in Rust. This creates further problems and makes it a non-solution for us IMO. (If you remove strict aliasing, then I am fairly sure PNVI-ae-udi means that dead load elimination -- as in, removing loads whose result is unused -- is not a legal optimization. I promise I will write this up properly some day...)

@Gankra Would you consider adding a FAQ entry for "Why don't compiler backends like LLVM just stop doing this provenance thing entirely so we don't have to track it?" I'd expect that to be a common question, right after "what is provenance and why is it a thing?".

As far as I can tell, the answer to the former question is something roughly like "We don't know a good way to do that. C isn't going to drive it, so it won't happen, unless someone comes along wanting to do a couple of PhDs in compilers and memory models. Meanwhile, this approach allows us to coexist with what compiler backends currently do.".

(I'd love an answer to the latter question as well that's more satisfying than "here's this weird C example involving pointer comparisons and out-of-bounds accesses that doesn't seem comparable to anything real-world code should ever actually do", which is how the int y, x, *p = &x + 1 example always seems to me.)

Clarifying something here: I'm actually genuinely asking for these FAQ entries, regardless of what path we go. Separately from that, I wish that there were another path we could go, but I don't actually know what that path would be or whether it's possible. So I really do want to have a clear answer for why we need this, but I'm not attempting to argue that we don't or that there's a better option.

fu5ha commented

(I'd love an answer to the latter question as well that's more satisfying than "here's this weird C example involving pointer comparisons and out-of-bounds accesses that doesn't seem comparable to anything real-world code should ever actually do", which is how the int y, x, *p = &x + 1 example always seems to me.)

I think that's basically this post, which is linked in the OP, though admittedly not prominently https://www.ralfj.de/blog/2020/12/14/provenance.html

Well, to be honest, the problem is that pointer provenance bugs are empirically very rare. Which is why it hasn't been properly solved in C and C++ to begin with.

My worry is very practical: we have a lot (I'm not permitted to say how much, but a lot) of unsafe Rust code that does not conform to Stacked Borrows, much less to strict provenance, that we are not going to be able to rewrite. I can't imagine that we are the only ones in that situation. Regardless of what the 1.0 compatibility promise says, if you break unsafe code too much you are effectively creating a Rust 2.

(I'd love an answer to the latter question as well that's more satisfying than "here's this weird C example involving pointer comparisons and out-of-bounds accesses that doesn't seem comparable to anything real-world code should ever actually do", which is how the int y, x, *p = &x + 1 example always seems to me.)

Part of the problem is that real-world code does a lot of the pieces of this (one-past-the-end pointers are fairly common), and if we want a compiler that's always correct (modulo bugs) we can't just discount such counterexamples merely on the basis that real-world code doesn't look like this. Real-world streets also don't look like the Moose test and yet we better make sure our everyday cars pass it.

I wish I knew how many of the weird segfaults in C/C++ where people just shrug and move on, doing random code mutation until it works, are caused by issues like this... if a provenance bug affects real-world code, I have serious doubts anyone would be able to tell. Compiler devs would just assume the code has UB, and developers would assume the compiler has a bug, and everyone moves on.

I guess what I'm arguing here is that a lot of thought needs to be given to how this can become opt-in, because it seems to me that strict provenance (and probably stacked borrows) needs to be opt-in, with full compatibility with opt-out code. Much like strict-aliasing effectively is in C. If this is a purely opt-in semantics, then I don't have any objections to this work.

Patrick, as someone who actually teaches the language to people I know for a fact that MANY "normal" programmers are constantly frustrated with unsafe Rust code because it is literally impossible for us to tell them how to write it correctly. I tried! I wrote a book on it! Two books! And documented the shit out of the standard library! And wrote tons of articles explaining the design of the language! And people are still struggling to deal with unsafe!

The raison-d'etre of Rust is that it helps you write correct and safe code. The ability to drop down to unsafe code and extend the language is a HUGE part of that! But if you go into it with a desire to write correct code, you know, the core mindset of Rust... we just send you a shrug emoji and chill vibes.

To be blunt, the design of Rust 1.0 completely ignored unsafe Rust and left it in a complete disaster state. I have repeatedly tried to make it more ergonomic and coherent at the language or compiler level and every time I do I have gotten rebuffed with "no we should do something magic instead" and then that never materializes because the lang team has no interest in unsafe rust.

So honestly I am just deeply frustrated by of course, a former lang and compiler dev responding with "no we should just do something magic instead" even when I am actively avoiding specifying any language or compiler changes. I am literally just adding new stdlib APIs that make it easier to write unsafe rust code that expresses intent and helps ensure your code is obviously correct. Literally! Optional APIs! That are unstable! With tons of documentation explaining how they help! And I am still! Being told! To fuck off! And wait for magic! That you will not work on!

fu5ha commented

I tried! I wrote a book on it! Two books! And documented the shit out of the standard library! And wrote tons of articles explaining the design of the language! And people are still struggling to deal with unsafe!

The raison-d'etre of Rust is that it helps you write correct and safe code. The ability to drop down to unsafe code and extend the language is a HUGE part of that! But if you go into it with a desire to write correct code, you know, the core mindset of Rust... we just send you a shrug emoji and chill vibes.

Just want to echo this. I consider myself a fairly competent Rust programmer, and until recently have used unsafe moderately competently at a surface level. As I got into the depths of ""real"" unsafe code recently with the intent of verifying and testing safety assumptions in our biggest Rust project at Embark, I've run into several of the roadblocks described above, and a fair amount of the confusion is root-caused by the stuff this initiative is trying to address. As of now,

programmers are constantly frustrated with unsafe Rust code because it is literally impossible for us to tell them how to write it correctly

could not be more true, and I echo the feelings of the disembodied 'programmer'. This is quite sad I think, and it's absolutely worth the effort to try to improve this. I now write unsafe code with the resigned feeling that the code I'm writing is a best-effort works, but is very much not "correct" to the same standard that we try to write Rust code in general with.

Making it opt-in basically means having two Rust dialects, and having a subset of libraries not compatible with one of the dialects. That's an ecosystem split. It sounds pretty terrible...

That said, I could totally imagine a compromise where code that violates whatever provenance ends up being is treated as technically non-conformant, but there is still some best-effort attempt to keep that code working -- in a dialect between programmer and compiler devs, possibly adjusting the code or the compiler or both. No formal spec will cover it and it'll never be certified, but we don't have to treat all UB equal, and we don't have to close all reports of "code with UB misbehaves" as wont-fix; we can try to work together to find solutions that work for everyone.
EDIT: Actually I think a better way to describe the situation is what I wrote here.

I for one would be very curious if you think that your large codebase could have been written to be conformant with strict provenance if the developers had been aware of it at the time. IOW, is the issue just one of legacy code or is there an expressivity gap?

I'm not telling anyone to go away and wait for magic. I'm just saying that you can't break the tons of unsafe code that exists in the wild. There's plenty of empirical evidence that almost nobody is writing unsafe code properly. Our choices are (1) do a Rust 2.0 to fix it; (2) opt-in changes to address things and messy fixes that are not what we would like. There is no in-between. Breaking changes are effectively a Rust 2.0.

The lead-up to Rust 1.0 was fine and wasn't a complete disaster. Not any more than any other software 1.0 project is, anyway. There were things that had to be sacrificed; a complete and coherent memory model was one of them. Delaying Rust 1.0 until the invention of CHERI wasn't realistic.

There's plenty of empirical evidence that almost nobody is writing unsafe code properly.

Dunno, I have to say overall Miri results are pretty encouraging to me. I think Rust is in a much better spot than C/C++ to actually tackle this issue, and we should use that chance rather than throw up our hands and give up. Sure, mistakes were made, but if we work together we can find ways to overcome them.

Computers don't have to be terrible, if we don't make them terrible.

Making it opt-in basically means having two Rust dialects, and having a subset of libraries not compatible with one of the dialects. That's an ecosystem split. It sounds pretty terrible...

I don't see it as worse than editions. And besides, you already have an ecosystem split, if you introduce changes that break unsafe code.

I for one would be very curious if you think that your large codebase could have been written to be conformant with strict provenance if the developers had been aware of it at the time. IOW, is the issue just one of legacy code or is there an expressivity gap?

The feedback I have heard is that stacked borrows is too complex for developers to grasp on a large scale. I don't personally have enough information to evaluate that claim, though.

The feedback I have heard is that stacked borrows is too complex for developers to grasp on a large scale. I don't personally have enough information to evaluate that claim, though.

Yeah that's why I was asking about strict provenance, which is rather simple (I think).

I mean, what I can tell you is that if you break unsafe code, then I'm going to be on the hook for maintaining internal changes to our local fork of rustc that reverts your changes. Which is a thing I can totally do, I don't mind doing it. But I question whether having major organizations maintain their own internal forks of the language is the road that the project wants to go down.

What part of this proposes to break existing unsafe code? I'm not seeing anything here that proposes to break code that currently works.

Breaking the assumption that usize can be cast to and from pointers breaks unsafe code.

I mean, what I can tell you is that if you break unsafe code, then I'm going to be on the hook for maintaining internal changes to our local fork of rustc that reverts your changes. Which is a thing I can totally do, I don't mind doing it. But I question whether this is the road that the project wants to go down.

Not sure if it was a reply to my question, but it doesn't really answer it... we know there is code out there that does ptr-int-ptr roundtrips. We also know a lot of it does that for lack of an alternative API to do the thing it wants to do. So we want to figure out how much of that code could be written with better APIs instead.

It would be really valuable feedback from your organization if we would learn about whether your code can be expressed in terms of the strict provenance APIs or not. This will help define a memory model for Rust that actually is reliable and works for as many people as possible. Feedback from people that excessively use ptr-int-ptr roundtrips is particularly helpful.

Just refusing to even consider whether any alternative API might cover the needs of unsafe Rust code just as well, however, is not very helpful I am afraid.

View this as a data-gathering experiment. It is even quite explicitly described as such at the top of this very thread. I am rather surprised that gathering data is met with fierce opposition. No decisions have been made yet! But I hope we all agree that some decision needs to be made at some point, and that if we gather more data we'll probably make a better decision.

It sounds like you do have some interesting data, we'd be happy to hear about it. :)

If there's a checking tool via miri or something, I'd be happy to provide feedback as to how the porting process of some of our internal tools goes.

I'm just saying that it's not feasible to port all of it. It's just an unfortunate reality.

comex commented

So honestly I am just deeply frustrated by of course, a former lang and compiler dev responding with "no we should just do something magic instead" even when I am actively avoiding specifying any language or compiler changes. I am literally just adding new stdlib APIs that make it easier to write unsafe rust code that expresses intent and helps ensure your code is obviously correct. Literally! Optional APIs! That are unstable!

You're not specifying any specific language or compiler changes, but you're explicitly opening the door to a future where the compiler optimizes based on some hypothetical model that is at least somewhat stricter than what we have today. To that extent, the APIs are not truly optional.

If there's a checking tool via miri or something, I'd be happy to provide feedback as to how the porting process of some of our internal tools goes.

Miri with -Zmiri-tag-raw-pointers will (effectively) implement strict provenance, but also still enforces the rest of Stacked Borrows. I can look into having a flag that enforces strict provenance without Stacked Borrows.

I was thinking maybe it is possible to at least roughly evaluate this without actually doing even a partial port (by considering whether the ptr-int-ptr roundtrips fit the patterns mentioned in the docs, like tagged pointers), but maybe that is not feasible.

I think the concern some folks (TBH, myself included) have is that the FAQ states:

I am not saying we are going to break the world right now, but we should explore how bad breaking the world is

But the evaluation criterion is based on how hard (or possible) the code is to rewrite to this model, which assumes the code will be rewritten, but in many cases it won't be.

eddyb commented

@joshtriplett wrote, in #95228 (comment):

a FAQ entry for "Why don't compiler backends like LLVM just stop doing this provenance thing entirely so we don't have to track it?" I'd expect that to be a common question, right after "what is provenance and why is it a thing?".

As far as I can tell, the answer to the former question is something roughly like "We don't know a good way to do that. [...]"

My understanding is that you have something similar to wasm then (flat address space, unoptimizable memory-wise).

(EDIT: I've hidden some wasm details in the aside below; it was too confusing before.)

Wasm is even a bit of an extreme, with how deterministic it is (though I do not think you could soundly make it optimizable just by e.g. adding ASLR).

The way wasm is defined means you can e.g. cryptographically hash the entire address space, heap and stack included, and assert a specific behavior, before and after executing the wasm code itself. (Okay, maybe some implementations will do funny things with floats, but at least the integer side should behave like this.)

In wasm, you can optimize the SSA values, but not what ends up in memory. All of memory is a deterministic array.
The stack of local variables with addresses taken to them? You can't change anything about their layout or what values are written. So if they weren't optimized before emitting wasm, they're frozen forever.


"Provenance" as a whole isn't some arbitrary choice of a model, it's what we call the concept of having "disjoint memory allocations" and pointers that are too dynamic for us to know that they point into one specific allocation (I suppose "alias analysis" is also brought up but that can confuse matters, esp. with C's TBAA and whatnot).

Lately I've come to consider "provenance" near-synonymous with "pointer".
What is a pointer but a dynamic name for a location in memory? Which you must still reason about, abstractly?

"Which allocation could a (dynamic) write touch" is a fundamental question in optimizing memory accesses.
If a pointer you got from someone else, when written to, could touch your own stack variables, how could those stack variables ever be optimized?

The UB is simply a matter of the "negative space" viewpoint of the invariants in question: you can only touch a memory allocation if there's a "chain of custody" of pointers to it.
This is the simplest way to have meaningfully disjoint "memory allocations".
(With the additional stuff like ptr2int2ptr as extensions which can be e.g. inefficiently emulated)
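To make "chain of custody" concrete, here is a tiny sketch in plain Rust (no new APIs involved, just the distinction itself):

    let mut x = [0u8; 16];
    let base: *mut u8 = x.as_mut_ptr();

    // Unbroken chain of custody: every step is a pointer -> pointer operation,
    // so the result demonstrably points into `x`.
    let p = base.wrapping_add(3);
    unsafe { *p = 1 };

    // Broken chain of custody: once the value is a plain usize, nothing ties it
    // back to `x` anymore -- turning it back into a usable pointer is exactly
    // the ptr2int2ptr case that needs an extension (or emulation).
    let addr = base as usize + 3;
    let q = addr as *mut u8;
    // whether writing through `q` is allowed is the contested question
    let _ = q;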

If you start from "disjoint memory allocations", then the fundamental operation of accessing memory takes a "memory allocation" and an offset - a dynamic pointer is then logically (alloc: MemoryAlloc, offset: usize), and the alloc.base + offset flat address is merely a coincidence of the "machine-level" representation on some targets.
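Spelled out as Rust types (purely illustrative, not a real or proposed API), that view is something like:

    // Illustrative only: what a pointer "is", logically, under this view.
    struct MemoryAlloc { base: usize, len: usize }

    struct LogicalPtr<'a> {
        alloc: &'a MemoryAlloc, // which disjoint allocation we are allowed to touch
        offset: usize,          // where inside that allocation we point
    }

    impl LogicalPtr<'_> {
        // The flat address is just a projection of the pair, not the pointer's identity.
        fn machine_address(&self) -> usize {
            self.alloc.base + self.offset
        }
    }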

Both miri and CHERI do an excellent job at materializing this, and though they differ in certain aspects (CHERI has some granularity that Rust only has with Stacked Borrows, but only miri hides "metadata" etc.), they overlap in one important way: you can't pretend the "memory allocation" part isn't there without emulation (i.e. ptr2int2ptr requires a global map of "leaked as integer" pointers).


That wasn't as compact or smooth as I'd hoped, but those are at least my thoughts anyway, having been exposed to CHERI in the past few months. The main thing I wanted to push back on is "we don't know".

In fewer words: I'm not aware of any room for there to even be anything between "one allocation" (like wasm) and "arbitrarily many allocations", that can describe e.g. stack variables (as opposed to coarser segmentation).
"Provenance" is merely the "word that lost a bet" and ended up used as a blanket term for the consequences.

Miri with -Zmiri-tag-raw-pointers will (effectively) implement strict provenance, but also still enforces the rest of Stacked Borrows. I can look into having a flag that enforces strict provenance without Stacked Borrows.

Thanks, that's helpful. I'll bring up the possibility of testing some of our code with miri internally and let y'all know.

A lot of this conversation is about Strict Provenance as it may be in a final state. Gankra is being careful to make sure this is an iterative exploration. I'd like to encourage everyone to not get too far ahead of the state of the issue.

You're not specifying any specific language or compiler changes, but you're explicitly opening the door to a future where the compiler optimizes based on some hypothetical model that is at least somewhat stricter than what we have today. To that extent, the APIs are not truly optional.

They are definitely 100% optional right now.

The lingering concern here seems to be that they might become mandatory in the future. That's fair. But I think gathering this data is still valuable; even if ptr-int-ptr roundtrips end up still working (which was never officially blessed AFAIK but always understood by everyone -- including myself -- to be okay[1]) we'll end up with better APIs for many cases, I think.

I would totally understand the outrage if someone made a first move towards actually mandating anything like this. And maybe some people think this is such a move. It is not. I assure you, the intent of this experiment is not to "slowly boil the frog" and sneakily introduce extra rules without anyone noticing.

These APIs will not become mandatory without a huge amount of discussion that y'all will definitely hear about. And I don't think even starting those discussions is anywhere close to being on the table for quite a long time.

[1]: Heck I spent a lot of effort making them work well in Miri!

Another gentle reminder I hope everyone can keep in mind, this is not something that is going to change overnight. We have time to talk about this and ensure all concerns are given twice the consideration they deserve.

In fewer words: I'm not aware of any room for there to even be anything between "one allocation" (like wasm) and "arbitrarily many allocations", that can describe e.g. stack variables (as opposed to coarser segmentation).

But there is something in between: what existing C, C++, and Rust compilers do. Yes, that memory model is formally inconsistent. But it has also been extremely successful--so successful that it isn't practically possible to go back to one of the two extremes. I'm much more interested in figuring out how to bring the model that we have to some sort of more correct state, which will undoubtedly be messy and inconsistent, because we're stuck with that model no matter what.

After thinking about this some more, I think I found a good way to describe this effort and why it is not a threat to existing unsafe Rust code.

I imagine that we might one day have a "spec for Rust that conforms to strict provenance". That fragment of Rust is a lot easier to specify than "full" Rust with ptr-int-ptr roundtrips. I would argue it is better to have a precise spec for parts of Rust than to have no precise spec at all because it is held back by this one nasty issue. Whether that is true depends on how much Rust "out there" falls into the fragment of "conforms with strict provenance", which is a question this experiment aims at answering.

Like all actual experiments, some of the next steps depend on its results!

  • If it turns out that yes, a lot of Rust out there can be described with strict provenance, then IMO going ahead with a precise spec for that fragment would be a very viable plan. This would not give the compiler license to treat ptr-int-ptr roundtrips as UB (nothing would change about their relationship status -- "it's complicated"), but it would greatly help folks that can stay within that fragment by letting them work against an actual bona fide precise spec for their unsafe code.
  • If it turns out that strict provenance is untenable in practice, then we all learned something and we know a spec for just the strict provenance fragment of Rust wouldn't be that helpful. That's good to know!

In other words, I imagine a future situation where ptr-int-ptr roundtrips are not UB, but they are "outside of the fragment of the language that we understand properly", and the UCG WG will use a lot of hedging if you ask overly probing questions about what you can and cannot do with them. That actually means they are literally no worse off than today! However, everyone who can avoid them (e.g. through these new shiny APIs) is much better off than today, as they actually have a precise spec.

The goal is not to make things any worse than they currently are for existing unsafe code. The goal is to make things better for the fragment of unsafe code that doesn't need the most wild aspects of unsafe programming, and to bring more and more unsafe code into that fragment. And then who knows maybe one day we'll crack ptr-int-ptr roundtrips and all of unsafe will live in the glorious land of "having a precise spec". :D

I hope this puts people with large existing unsafe Rust codebases at ease. :)

@Gankra

that never materializes because the lang team has no interest in unsafe rust

FWIW, several of us on the lang team care a great deal about unsafe Rust, myself included. This specific issue has been and continues to be a massive challenge to get right. It has come up numerous times, and it's still not clear what the best solution is. Every path has massive tradeoffs.

With my lang team hat on: thank you for trying this experiment, and I'm very interested to see it.

eddyb commented

But there is something in between: what existing C, C++, and Rust compilers do. Yes, that memory model is formally inconsistent. But it has also been extremely successful--so successful that it isn't practically possible to go back to one of the two extremes. I'm much more interested in figuring out how to bring the model that we have to some sort of more correct state, which will undoubtedly be messy and inconsistent, because we're stuck with that model no matter what.

I'm sorry, but I feel like my comment didn't properly get across.
I was replying to "are there alternatives to provenance" with "no" (with only minor caveats).
Not strict provenance, but any provenance. You're talking about a specific provenance model.

I was replying to "are there alternatives to provenance" with "no" (with only minor caveats).

FWIW I disagree. You won't get restrict or anywhere close to Stacked Borrows without provenance (i.e., no fancy aliasing rules, and no competing with Fortran performance), but there is a huge gap between wasm and what you can do by exploiting allocator non-determinism. (However, this does require accepting a fully non-deterministic allocator, which low-level people also find hard to stomach sometimes. In particular those that implement allocators. 😂 )

But that takes us too far for this thread. I created a topic on Zulip.

eddyb commented

I was replying to "are there alternatives to provenance" with "no" (with only minor caveats).

FWIW I disagree. You won't get restrict or anywhere close to Stacked Borrows without provenance (i.e., no fancy aliasing rules, and no competing with Fortran performance), but there is a huge gap between wasm and what you can do by exploiting allocator non-determinism. (However, this does require accepting a fully non-deterministic allocator, which low-level people also find hard to stomach sometimes. In particular those that implement allocators. 😂 )

But that takes us too far for this thread. I created a topic on Zulip.

I replied more on Zulip to that specific concern, but overall, I likely caused two kinds of confusion, sadly:

  • the wasm determinism stuff was just a distraction, a curiosity I like bringing up
    • simply adding non-determinism, with no other provisions, doesn't soundly allow more optimizations AFAIK
    • it's just that wasm is so deterministic that it's (IMO) a great case study in IR design, and what to avoid if you do want optimizations
  • I didn't consider @joshtriplett's FAQ suggestions to be talking about the subset of provenance approaches in use/that we know of, as opposed to "provenance as a whole"
    • in that case, it's probably safe to say that code obeying "strict provenance" rules doesn't run into issues with any model (i.e. "maximally portable" wrt provenance)
    • even if LLVM adopted something interestingly different from today, I seriously doubt it would include no provenance(-like) reasoning (probably not even as a mode), mainly because it's an optimizing compiler, it's part of its job to understand memory
      • i.e. if you want truly no memory UB (so that you can e.g. modify a stack variable through a pointer to another variable, offset out of bounds), you likely need dedicated codegen that bypasses any "optimizing compiler" (e.g. rustc_codegen_wasm, but also you'd need to make sure MIR doesn't do any similar optimizations)
skade commented

I want to share a couple of observations and views on this discussion:

  1. I think it's inappropriate to dismiss features like CHERI as "niche" for the sake of a discussion. Everyone is informed by the space they work in. In the space I am currently in - high assurance - CHERI is a deciding factor for choosing new hardware platforms. Can I show you the documents? No, so it's claim against claim. Also, CHERI is new, and we - still a niche language ourselves! - should know how large niches can be and how quickly they can grow.
  2. The same goes for vague comments about internal codebases and the complexity of applying certain patterns to them. If you want to discuss them, minimise them to an example and extract them for public discussion. This is expected behaviour in many engineering circles. I expect this of everyone who wants to sit at this table.
  3. I'm worried about ascribing intent and interests to certain teams. I'm sure there's a lot of frustration that a team doesn't cover the particular need one cares about. I have a laundry list of those things. But I am aware that "the team cares" and "it currently has different priorities and only limited time" can both be true at the same time and as long as I'm not on the team, I should be very careful to casually throw opinions around.

Finally, I want to remind some in this discussion that both you and your employers are large names in the Rust ecosystem. Casually threatening certain actions is inappropriate and will reflect on either your or your employer's name. We already have the meme "corporations own Rust" out there, and anyone willing to casually use their employer's size gives this meme more weight. You earned your name and your jobs, but with that comes great responsibility.

That being said, from the perspective of someone currently qualifying Rust for high assurance, who has been teaching Rust for 7 years now: a stricter pointer model is very much desirable, because the current one is indeed extremely messy, to the point where validation by looking at assembly is a recommended chore. The current one is not easy to apply and review.
The ease of applying a new model is a concern, but from my perspective a lesser one, because those implementations will be costly anyways. A stricter model also allows for better automated tooling.
The maintenance of existing codebases is, from our perspective, of little concern, as the ecosystem is not yet built - so having such a model sooner rather than later is of interest to us.

CHERI support is indeed something we'd like to see covered, it's a major gap.

One thing to maybe add to the list of stuff MMIO pointers should be able to do: use the MMIO address space as backing memory for a memory allocator. E.g. some embedded platforms have special SRAM blocks at specific addresses; also, most modern GPUs can map their video memory into specific regions of CPU-visible address space, e.g. via a PCIe BAR. I think there are definitely cases where video memory reads/writes shouldn't need volatile, e.g. you pass the pointer to the GPU, so it's more like passing the pointer to another CPU thread than it is like writing to an MMIO serial port's transmit register.
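For concreteness, the kind of code I mean looks roughly like this today (the address and size are completely made up, and whether this should go through an expose-style API like ptr::from_exposed_addr_mut or a dedicated "this memory exists outside of Rust" API under strict provenance is exactly the open question):

    // Hypothetical SRAM block at a fixed, target-defined address (made-up numbers).
    const SRAM_BASE: usize = 0x2000_0000;
    const SRAM_LEN: usize = 64 * 1024;

    // The address range an allocator might want to hand out chunks of:
    let _heap_region = SRAM_BASE..SRAM_BASE + SRAM_LEN;

    // Today this is a plain int-to-ptr cast; there is no Rust allocation this
    // pointer can be derived from, so strict provenance needs some answer here
    // (expose-style APIs, or something MMIO/external-memory specific).
    let sram: *mut u8 = SRAM_BASE as *mut u8;
    unsafe { core::ptr::write_volatile(sram, 0xAA) };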

Leaving aside the memory model discussion, I read the PR and I truly think the newly introduced APIs are better in that they are clearer to read, easier to document and less error prone (since you won't silently be shrinking some pointers to a smaller integer etc).

Even if the strict provenance experiment were to fail in some way, I would very much like to see those APIs stabilized anyway (and the old casts deprecated), since they allow you to do the same thing, just in a better and more controlled way.
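To make the "silently shrinking" point concrete, a rough before/after (addr() is the nightly strict_provenance method; the stable polyfill crate provides the same name):

    let p: *const u8 = &42u8;

    // Today: `as` happily casts a pointer straight to a narrower integer type,
    // silently dropping the high bits of the address on 64-bit targets.
    let _tag_old = p as u32;

    // With the new APIs: you first get the address as a full usize, and any
    // narrowing is a separate, visible step at the use site.
    let addr: usize = p.addr();
    let _tag_new = addr as u32;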

Crazy idea: what if we defined optimization levels according to the model which the corresponding optimization passes were compatible with?

So for example, the highest optimization level would run all optimization passes which are valid against the strict provenance model presented here. There could be a lower optimization level which simply excluded all passes which are invalid under the weaker model of PNVI-ae-udi. There could be a lower level still which is basically WASM (no provenance at all).

The compiler could limit the optimization level based on the crates in your crate graph (eg. based on editions). This would motivate the adoption of the new model through the performance and validation benefits possible under the new model without breaking old code, and it would still be possible for users to open miscompilation bug issues under the lower optimization level.

That's a wonderful idea!

To elaborate: can we opt-in to strict provenance per-crate in Cargo.toml?

# default: true = allow = PNVI-whatever
ptr2int2ptr = true
  • ptr2int2ptr = false crates can be well-optimized by themselves.
  • LTO can do PNVI optimization — or better if all crates are ptr2int2ptr = false
  • Miri does not have to be accurate for a mix of ptr2int2ptr = false and ptr2int2ptr = true in the same run.
  • Miri may have false negatives for ptr2int2ptr = true — it does not have to detect PNVI-whatever violations.
  • Miri may not even need to support ptr2int2ptr = true at all.

This would be nice for legacy codebases, sloppy FFI, and other use cases for unrestricted ptr2int2ptr, while limiting the damage.

In my opinion, we can regress a little in performance of ptr2int2ptr = true code if it means we can simplify rustc. But ideally ptr2int2ptr = true code doesn't break at all.

edit: made the flag a boolean

Strict provenance is a memory model question, which means it is a global choice.

Put differently: if any crate opts out (or does not opt in), the entire program has to be compiled without strict provenance.

Right, I think we'd still want to be clear that (assuming this all goes through) we think the strict provenance model is the future - that it's something that all crates should aspire to support, but it at least gives us a way to guarantee that existing code can continue to work.

Beyond performance and validation, there's another benefit to this approach: there are certain areas of the Rust abstract machine that aren't fully figured out yet (eg. how FFI interaction is modelled in the Rust abstract machine), mostly relating to the expectations lower-level code can have when we map the abstract machine onto a concrete architecture. These questions are a lot easier to answer in the weaker memory models, so this would provide a path to begin using Rust in those areas before we've fully answered them (at a performance cost), and that usage could actually help us figure out what's important when answering those questions in the stricter memory models (again with the goal of eventually supporting every use-case in the strict provenance model).

Finally, I think it would be nice to have a more formal definition for what constitutes an optimization level, rather than individually toggling passes as is frequently done in C/C++.

Please note that tracking issues are not for general discussion. Github issues are infamously, notoriously poor at hosting long and branching discussions. I ask that people please use the t-lang / unsafe-code-guidelines Zulip channels for further discussion. For ease of reference, here again are the existing topics relevant to this discussion; please make a new one if none of these suffice:

As a favor to all of our future selves I will be presumptuously limiting further discussion here to collaborators; please keep further comments focused on issues directly relevant to the tracking issue. Open new issues here in the issue tracker if you have a specific issue that needs to be addressed.

I have updated the initial comment to clarify that this is a set of library APIs and that there cannot be any implications for lang/compiler semantics because those teams are wholly uninvolved. To the extent that a memory model is "proposed" it is to provide a Narrative Framing for why you want to use these APIs. The experiment is voluntary "memory model roleplay" to encourage users to make their intent clear, so that we can understand what they are having trouble expressing with "proper" pointer APIs.

Also I will no longer be helping with this experiment.

I wrote the APIs. I wrote the docs. I made a stable polyfill so everyone can experiment with them right away. I migrated the stdlib/compiler to mostly comply to these "rules" to prove that they're relatively easy to comply with (and that most code does already). I listened to everyone's concerns and facilitated discussion, figured out the biggest issues, and filed subtasks for them.

I have done everything I can, it is now up to the larger rust ecosystem and domain-specific stakeholders to figure out to what extent this is "useful" and what needs to be done to fill in the semantics gaps.

Relevant to this issue is a new proposed option in Miri for executing Rust code under strict provenance: rust-lang/miri#2045

#95588 expands the APIs and docs to also better explain what happens with non-Strict-Provenance code that does want to do pointer-usize-pointer roundtrips.

Initially, just reading the documentation, I found expose_addr very confusing, and it took me some time to spot the subtle difference from addr. "Expose" in the name made me think it was somehow connected to the return type/value. Some alternatives:

  • leak_provenance + fabricate_provenance or fake_provenance. Would tie into the existing Box leak, where you leak the box but the 'content' remains. Danger: confusing meaning of leak.
  • leak_origin + invent_origin. As a non-native speaker I had to look up provenance; origin is a more common word. Danger: provenance is more easily searched for.
  • escape_sandbox + from_escaped. Would tie into the language used to describe allocations as a sandbox. Danger: sandbox and escaped could be confused to mean something else.

Thinking a bit more about it,

  • leak_origin + guess_origin. Similar to the above, but it clearly states that guess_origin is an imperfect operation that might not guess correctly. And it could be defined as implementation-defined. For example, on CHERI it would guess wrong all the time.

Using a different word than "provenance" is not a good idea. This is a term of art: the fact that it does not have a common use in English is a strength, since it makes it easier to find references to the technical concept online. Using a synonym will only confuse matters, since once you start digging into the concept you will soon have to consult documentation in other sources like LLVM or the C specification, and the word "provenance" is used there.

The same thing applies to "expose": I first used the term "broadcast" for this operation, but I switched to "expose" to match usage of the word in C. It's a technical term so there is no getting around the fact that users will have to read the documentation to understand the difference, especially since it's not observably different from addr() except in exotic optimization scenarios.

I think if we come up with new terms of art, that's fine. But I say new because they would have to not also risk semantic collision: leak implies it is the same as other leak operations, like Box::leak, which leak allocations, and it is very not. We cannot afford confusing these matters here, so I think leak is right out.

guess implies the wrong semantics, also, because "guess", in human terms, refers to applying an intelligent heuristic. However, here, provenance-selection can be a very "simple" heuristic: my understanding is that if there is only one exposed provenance, that is the provenance used. It doesn't matter whether that "makes sense".

In a sense this "guessing" is actually extremely intelligent -- if there is any correct guess, it will be taken! Here, "correct" is defined as "avoiding UB".

60% joking: recover_provenance_by_magic(). (I like that this will make people really look askance at any uses of the function in code review.)

My 2c for an expose_addr replacement: pluck_addr, as in "plucking from thin air".