88. Vec::push_mut — Push and Modify in One Step

Tired of pushing a default into a Vec and then grabbing a mutable reference to fill it in? Rust 1.95 stabilizes push_mut, which returns &mut T to the element it just inserted.

The old dance

A common pattern is to push a placeholder value and then immediately mutate it. Before push_mut, you had to do an awkward two-step:

let mut names = Vec::new();

// Push, then index back in to mutate
names.push(String::new());
let last = names.last_mut().unwrap();
last.push_str("hello");
last.push_str(", world");

assert_eq!(names[0], "hello, world");

That last_mut().unwrap() is boilerplate — you know the element is there because you just pushed it. Worse, in more complex code the compiler can’t always prove the reference is safe, forcing you into index gymnastics.

Enter push_mut

push_mut pushes the value and hands you back a mutable reference in one shot:

let mut scores = Vec::new();

let entry = scores.push_mut(0);
*entry += 10;
*entry += 25;

assert_eq!(scores[0], 35);

No unwraps, no indexing, no second lookup. The borrow checker is happy because there's a single, clearly scoped mutable borrow handed straight back from the call.

Building structs in-place

This really shines when you’re constructing complex values piece by piece:

#[derive(Debug, Default)]
struct Player {
    name: String,
    score: u32,
    active: bool,
}

let mut roster = Vec::new();

let p = roster.push_mut(Player::default());
p.name = String::from("Ferris");
p.score = 100;
p.active = true;

assert_eq!(roster[0].name, "Ferris");
assert_eq!(roster[0].score, 100);
assert!(roster[0].active);

Instead of building the struct fully before pushing, you push a default and fill it in. This can be handy when the final field values depend on context you only have after insertion.

Also works for insert_mut

Need to insert at a specific index? insert_mut follows the same pattern:

let mut v = vec![1, 3, 4];
let inserted = v.insert_mut(1, 2);
*inserted *= 10;

assert_eq!(v, [1, 20, 3, 4]);

Both methods are also available on VecDeque (push_front_mut, push_back_mut, insert_mut) and LinkedList (push_front_mut, push_back_mut).

When to reach for it

Use push_mut whenever you’d otherwise write push followed by last_mut().unwrap(). It’s one less unwrap, one fewer line, and clearer intent: push this, then let me tweak it.
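
If your toolchain doesn't have push_mut yet, a tiny stand-in sketches the same shape on stable Rust (push_mut_polyfill is a hypothetical helper, not the std API):

```rust
// Hypothetical stand-in for Vec::push_mut on older toolchains.
fn push_mut_polyfill<T>(v: &mut Vec<T>, value: T) -> &mut T {
    v.push(value);
    // The expect can never fire: we pushed one line above.
    v.last_mut().expect("vec is non-empty after push")
}

fn main() {
    let mut names: Vec<String> = Vec::new();
    let s = push_mut_polyfill(&mut names, String::new());
    s.push_str("hello");
    assert_eq!(names[0], "hello");
}
```

Wrapping the push-then-last_mut dance in a function keeps the unwrap in one audited place instead of scattered through your code.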

Stabilized in Rust 1.95 (April 2026) — update your toolchain and give it a spin.

#087 Apr 2026

87. Atomic update — Kill the Compare-and-Swap Loop

Every Rust developer who’s written lock-free code has written the same compare_exchange loop. Rust 1.95 finally gives atomics an update method that does it for you.

The old way

Atomically doubling a counter used to mean writing a retry loop yourself:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

loop {
    let current = counter.load(Ordering::Relaxed);
    let new_val = current * 2;
    match counter.compare_exchange(
        current, new_val,
        Ordering::SeqCst, Ordering::Relaxed,
    ) {
        Ok(_) => break,
        Err(_) => continue,
    }
}
// counter is now 20

It works, but it’s boilerplate — and easy to get wrong (use the wrong ordering, forget to retry, etc.).

The new way: update

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x * 2);
// counter is now 20

One line. No loop. No chance of forgetting to retry on contention.

The method takes two orderings (one for the store on success, one for the load on failure) and a closure that transforms the current value. It handles the compare-and-swap retry loop internally.

It returns the previous value

Just like fetch_add and friends, update returns the value before the update:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(5);

let prev = counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 3);
assert_eq!(prev, 5);  // was 5
assert_eq!(counter.load(Ordering::SeqCst), 8);  // now 8

This makes it perfect for “fetch-and-modify” patterns where you need the old value.

Works on all atomic types

update isn’t just for AtomicUsize — it’s available on AtomicBool, AtomicIsize, AtomicUsize, and AtomicPtr too:

use std::sync::atomic::{AtomicBool, Ordering};

let flag = AtomicBool::new(false);
flag.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x);
assert_eq!(flag.load(Ordering::SeqCst), true);

When to use update vs fetch_add

If your operation is a simple add, sub, or bitwise op, the specialized fetch_* methods are still better — they compile down to a single atomic instruction on most architectures.

Use update when your transformation is more complex: clamping, toggling state machines, applying arbitrary functions. Anywhere you’d previously hand-roll a CAS loop.
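
Until update reaches your toolchain, its long-stable relative fetch_update covers the same ground: the closure returns Option, with Some(new) attempting the swap and None aborting. A sketch of the clamping case mentioned above:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(10);

    // Double, but clamp to 100. The CAS retry loop is handled internally;
    // Err is only returned if the closure ever returns None.
    let prev = counter
        .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some((x * 2).min(100)))
        .unwrap();

    assert_eq!(prev, 10);
    assert_eq!(counter.load(Ordering::SeqCst), 20);
}
```

Like update, fetch_update returns the previous value on success, so the fetch-and-modify pattern works identically.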

Summary

Method                       Use when
fetch_add, fetch_or, etc.    Simple arithmetic/bitwise ops
update                       Arbitrary transformations (Rust 1.95+)
Manual CAS loop              Never again (mostly)

Available on stable since Rust 1.95.0 for AtomicBool, AtomicIsize, AtomicUsize, and AtomicPtr.

86. cfg_select! — Compile-Time Match on Platform and Features

Stop stacking #[cfg] blocks that contradict each other — cfg_select! gives you a match-like syntax for conditional compilation, right in the standard library.

The old way: a wall of #[cfg]

When you need platform-specific code, the traditional approach is a series of #[cfg(...)] attributes. It works, but nothing ties the branches together — you can easily miss a platform or accidentally define the same function twice.

#[cfg(unix)]
fn default_shell() -> &'static str { "/bin/sh" }

#[cfg(windows)]
fn default_shell() -> &'static str { "cmd.exe" }

// Forgot wasm? Forgot the fallback? The compiler won't tell you
// until someone tries to build on an unsupported target.

The fix: cfg_select!

Stabilized in Rust 1.95, cfg_select! works like a compile-time match. The compiler evaluates each cfg predicate top to bottom and emits only the first matching arm. Add a _ wildcard for a catch-all.

cfg_select! {
    unix => {
        fn default_shell() -> &'static str { "/bin/sh" }
    }
    windows => {
        fn default_shell() -> &'static str { "cmd.exe" }
    }
    _ => {
        fn default_shell() -> &'static str { "sh" }
    }
}

The branches are mutually exclusive by design — exactly one fires. No more worrying about overlapping #[cfg] blocks or missing targets.

It works in expression position too

Need a quick platform-dependent value? The same arms work in expression position:

let path_sep = cfg_select! {
    windows => { '\\' }
    _ => { '/' }
};

assert_eq!(path_sep, '/'); // on unix

This is far cleaner than wrapping a let in two separate #[cfg] attributes or pulling in the cfg-if crate.
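
For the simple value case, the long-stable cfg! macro is a reasonable stopgap on older toolchains; note that unlike cfg_select!, both arms are type-checked and compiled on every target:

```rust
fn main() {
    // cfg!(windows) expands to a plain bool, so this is an ordinary if —
    // both branches must compile everywhere, even the one never taken.
    let path_sep = if cfg!(windows) { '\\' } else { '/' };
    assert!(path_sep == '/' || path_sep == '\\');
}
```

That "both arms must compile" constraint is exactly what cfg_select! lifts: its unselected arms are never emitted at all.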

Why this matters

Before 1.95, the community relied on the cfg-if crate for this exact pattern — it has over 300 million downloads. Now the same functionality ships in std, with one less dependency to track and a syntax the compiler can verify directly.

85. cast_signed & cast_unsigned — Explicit Sign Casting for Integers

Stop using as to flip between signed and unsigned integers — cast_signed() and cast_unsigned() say exactly what you mean.

The problem with as

When you write value as u32 or value as i64, the as keyword does too many things at once: it can change the sign, widen, truncate, or even convert floats. Readers have to mentally verify which conversion is actually happening.

let x: i32 = -1;
let y = x as u32;  // Sign cast? Truncation? Widening? All of the above?

The fix: cast_signed() and cast_unsigned()

Stabilized in Rust 1.87, these methods only reinterpret the sign of an integer — same bit width, same bits, just a different type. If you accidentally try to change the size, it won’t compile.

let signed: i32 = -1;
let unsigned: u32 = signed.cast_unsigned();
assert_eq!(unsigned, u32::MAX); // Same bits, different interpretation

let back: i32 = unsigned.cast_signed();
assert_eq!(back, -1); // Round-trips perfectly

The key constraint: these methods only exist between same-sized pairs (i32 ↔ u32, i64 ↔ u64, etc.). There’s no i32::cast_unsigned() returning a u64 — that would silently widen, which is exactly the kind of ambiguity these methods eliminate.

Where this shines

Bit manipulation is the classic use case. When you need to treat an unsigned value as signed for arithmetic and then go back, the intent is crystal clear:

fn wrapping_distance(a: u32, b: u32) -> i32 {
    a.wrapping_sub(b).cast_signed()
}

assert_eq!(wrapping_distance(10, 3), 7);
assert_eq!(wrapping_distance(3, 10), -7);

Compare that to the as version — a.wrapping_sub(b) as i32 — and you can see why reviewers love the explicit method. It’s one less thing to second-guess in a code review.
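
One more small illustration, with a hypothetical wire format: a protocol byte whose bits are really a two's-complement signed reading, reinterpreted explicitly:

```rust
// Hypothetical example: a temperature transmitted as a raw byte whose
// bits encode a signed value. cast_signed names exactly that reinterpretation.
fn temperature_from_wire(raw: u8) -> i8 {
    raw.cast_signed()
}

fn main() {
    assert_eq!(temperature_from_wire(0xFF), -1);   // all bits set: -1
    assert_eq!(temperature_from_wire(0x7F), 127);  // high bit clear: unchanged
}
```

A reader sees immediately that no widening or truncation can be hiding here — the types guarantee it.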

84. Result::flatten — Unwrap Nested Results in One Call

You have a Result<Result<T, E>, E> and just want the inner Result<T, E>. Before Rust 1.89, that meant a clunky and_then(|r| r). Now there’s Result::flatten.

The problem: nested Results

Nested Results crop up naturally — call a function that returns Result, then map over the success with another fallible operation:

fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>().map_err(|e| e.to_string())
}

fn validate_port(port: u16) -> Result<u16, String> {
    if port >= 1024 {
        Ok(port)
    } else {
        Err(format!("port {} is privileged", port))
    }
}

let input = "8080";
let nested: Result<Result<u16, String>, String> =
    parse_port(input).map(|p| validate_port(p));
// nested is Ok(Ok(8080)) — awkward to work with

You end up with Ok(Ok(8080)) when you really want Ok(8080).

The old workaround

The standard trick was and_then with an identity closure:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
let flat = parse_port("8080").map(|p| validate_port(p)).and_then(|r| r);
assert_eq!(flat, Ok(8080));

It works, but .and_then(|r| r) is a puzzler if you haven’t seen the pattern before.

The fix: flatten

Stabilized in Rust 1.89, Result::flatten does exactly what you’d expect:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
let result = parse_port("8080").map(|p| validate_port(p)).flatten();
assert_eq!(result, Ok(8080));

If the outer is Err, you get that Err. If the outer is Ok(Err(e)), you get Err(e). Only Ok(Ok(v)) becomes Ok(v).

Error propagation still works

Both layers must share the same error type. The flattening preserves whichever error came first:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
// Outer error: parse fails
let r1 = parse_port("abc").map(|p| validate_port(p)).flatten();
assert!(r1.is_err());

// Inner error: parse succeeds, validation fails
let r2 = parse_port("80").map(|p| validate_port(p)).flatten();
assert_eq!(r2, Err("port 80 is privileged".to_string()));

// Both succeed
let r3 = parse_port("3000").map(|p| validate_port(p)).flatten();
assert_eq!(r3, Ok(3000));

When to use flatten vs and_then

If you’re writing .map(f).flatten(), you probably want .and_then(f) — it’s the same thing, one call shorter. flatten shines when you already have a nested Result and just need to collapse it — say, from a generic API, a deserialized value, or a collection of results mapped over a fallible function.

83. Arc::unwrap_or_clone — Take Ownership Without the Dance

You need to own a T but all you have is an Arc<T>. The old pattern is a six-line fumble with try_unwrap. Arc::unwrap_or_clone collapses it into one call — and skips the clone entirely when it can.

The old dance

Arc::try_unwrap hands you the inner value — but only if you’re the last reference. Otherwise it gives your Arc back, and you have to clone.

use std::sync::Arc;

let arc = Arc::new(String::from("hello"));
let owned: String = match Arc::try_unwrap(arc) {
    Ok(inner) => inner,
    Err(still_shared) => (*still_shared).clone(),
};
assert_eq!(owned, "hello");

Every place that wanted an owned T from an Arc<T> wrote this same pattern, often subtly wrong.

The fix: unwrap_or_clone

Stabilized in Rust 1.76, Arc::unwrap_or_clone does exactly the right thing: move the inner value out if we’re the last owner, clone it otherwise.

use std::sync::Arc;

let arc = Arc::new(String::from("hello"));
let owned: String = Arc::unwrap_or_clone(arc);
assert_eq!(owned, "hello");

One call. No match. No deref gymnastics.

It actually skips the clone

The key win isn’t just ergonomics — it’s performance. When the refcount is 1, no clone happens; the T is moved out of the allocation directly.

use std::sync::Arc;

let solo = Arc::new(vec![1, 2, 3, 4, 5]);
let v: Vec<i32> = Arc::unwrap_or_clone(solo); // no allocation, just a move
assert_eq!(v, [1, 2, 3, 4, 5]);

let shared = Arc::new(vec![1, 2, 3]);
let _other = Arc::clone(&shared);
let v2: Vec<i32> = Arc::unwrap_or_clone(shared); // clones, because _other still holds a ref
assert_eq!(v2, [1, 2, 3]);

Also on Rc

The same method exists on Rc for single-threaded code — identical semantics, identical ergonomics:

use std::rc::Rc;

let rc = Rc::new(42);
let n: i32 = Rc::unwrap_or_clone(rc);
assert_eq!(n, 42);

Anywhere you were reaching for try_unwrap().unwrap_or_else(|a| (*a).clone()), reach for unwrap_or_clone instead. Shorter, clearer, and it avoids the clone when it can.
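
The typical shape, sketched with a hypothetical personalize helper: consume the Arc, get ownership as cheaply as possible, then mutate freely:

```rust
use std::sync::Arc;

// Hypothetical helper: moves the String out if this was the last
// reference, clones it otherwise — the caller never has to care which.
fn personalize(greeting: Arc<String>, name: &str) -> String {
    let mut owned = Arc::unwrap_or_clone(greeting);
    owned.push_str(name);
    owned
}

fn main() {
    // Sole owner: the inner String is moved out, no clone.
    let solo = Arc::new(String::from("hello, "));
    assert_eq!(personalize(solo, "Ferris"), "hello, Ferris");

    // Still shared: the String is cloned and the original survives.
    let shared = Arc::new(String::from("hi, "));
    let keep = Arc::clone(&shared);
    assert_eq!(personalize(shared, "crab"), "hi, crab");
    assert_eq!(*keep, "hi, ");
}
```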

#082 Apr 2026

82. isqrt — Integer Square Root Without Floating Point

(n as f64).sqrt() as u64 is the classic hack — and it silently gives the wrong answer for large values. Rust 1.84 stabilized isqrt on every integer type: exact, float-free, no precision traps.

The floating-point trap

Converting to f64, calling .sqrt(), and casting back is the go-to pattern. It looks fine. It isn’t.

let n = u64::MAX;
let bad = (n as f64).sqrt() as u64;
// bad == 4_294_967_296, but the true floor square root is 4_294_967_295.
// The f64 round-trip is off by one here — and for many other large u64 values.

f64 only has 53 bits of mantissa, so for u64 values above 2^53 the conversion loses precision before you even take the square root.

The fix: isqrt

let n: u64 = 10_000_000_000_000_000_000;
let root = n.isqrt();
assert_eq!(root, 3_162_277_660);
assert!(root * root <= n);
assert!((root + 1).checked_mul(root + 1).map_or(true, |sq| sq > n));

It’s defined on every integer type — u8, u16, u32, u64, u128, usize, and their signed counterparts — and always returns the exact floor of the square root. No casts, no rounding, no surprises.

Signed integers too

let x: i32 = 42;
assert_eq!(x.isqrt(), 6); // 6*6 = 36, 7*7 = 49

// Negative values would panic, so check first:
let maybe_neg: i32 = -4;
assert_eq!(maybe_neg.checked_isqrt(), None);

Use checked_isqrt on signed types when the input might be negative — it returns Option<T> instead of panicking.

When you’d reach for it

Perfect-square checks, tight loops over divisors, hash table sizing, geometry on integer grids — anywhere you were reaching for f64::sqrt purely to round down, reach for isqrt instead. It’s exact, cast-free, and says what it means.
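
The perfect-square check mentioned above becomes a two-liner, exact at every magnitude:

```rust
fn is_perfect_square(n: u64) -> bool {
    let r = n.isqrt();
    // r is floor(sqrt(n)), so n is a perfect square iff r*r == n.
    // r is at most u32::MAX, so r * r cannot overflow a u64.
    r * r == n
}

fn main() {
    assert!(is_perfect_square(0));
    assert!(is_perfect_square(144));
    assert!(!is_perfect_square(145));
    // Exact even near the top of the u64 range, where the f64 trick breaks.
    assert!(is_perfect_square((u32::MAX as u64) * (u32::MAX as u64)));
}
```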

81. checked_sub_signed — Subtract a Signed Delta From an Unsigned Without Casts

checked_add_signed has been around for years. Its missing sibling finally landed: as of Rust 1.91, u64::checked_sub_signed (and the whole {checked, overflowing, saturating, wrapping}_sub_signed family) lets you subtract an i64 from a u64 without casting, unsafe, or hand-rolled overflow checks.

The problem

You’ve got an unsigned counter — a file offset, a buffer index, a frame number — and you want to apply a signed delta. The delta is negative, so subtracting it should increase the counter. But Rust won’t let you subtract an i64 from a u64:

let pos: u64 = 100;
let delta: i64 = -5;

// error[E0277]: cannot subtract `i64` from `u64`
// let new_pos = pos - delta;

The usual workarounds are all awkward. Cast to i64 and hope nothing overflows. Branch on the sign of the delta and call either checked_sub or checked_add depending. Convert via as and pray.

The fix

checked_sub_signed takes an i64 directly and returns Option<u64>:

let pos: u64 = 100;

assert_eq!(pos.checked_sub_signed(30),  Some(70));   // normal subtraction
assert_eq!(pos.checked_sub_signed(-5),  Some(105));  // subtracting negative adds
assert_eq!(pos.checked_sub_signed(200), None);       // underflow → None

Subtracting a negative number “wraps around” to addition, exactly as the math says it should. Underflow (going below zero) returns None instead of panicking or silently wrapping.

The whole family

Pick your overflow semantics, same as every other integer op:

let pos: u64 = 10;

// Checked: returns Option.
assert_eq!(pos.checked_sub_signed(-5),  Some(15));
assert_eq!(pos.checked_sub_signed(100), None);

// Saturating: clamps to 0 or u64::MAX.
assert_eq!(pos.saturating_sub_signed(100), 0);
assert_eq!((u64::MAX - 5).saturating_sub_signed(-100), u64::MAX);

// Wrapping: modular arithmetic, never panics.
assert_eq!(pos.wrapping_sub_signed(20), u64::MAX - 9);

// Overflowing: returns (value, did_overflow).
assert_eq!(pos.overflowing_sub_signed(20), (u64::MAX - 9, true));
assert_eq!(pos.overflowing_sub_signed(5),  (5, false));

Same convention as checked_sub, saturating_sub, etc. — you already know the shape.

Why it matters

The signed-from-unsigned case comes up more than you’d think. Scrubbing back and forth in a timeline. Applying a velocity to a position. Rebasing a byte offset. Any time the delta can be negative, you need this method — and now you have it without touching as.

It pairs nicely with its long-stable sibling checked_add_signed, which has been around since Rust 1.66. Between the two, signed deltas on unsigned counters are a one-liner in any direction.
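
The add direction has been a one-liner for a while; a seek-style sketch using the long-stable checked_add_signed (seek is a hypothetical helper):

```rust
// Hypothetical helper: apply a signed delta to an unsigned position,
// refusing to go below zero or wrap past u64::MAX.
fn seek(pos: u64, delta: i64) -> Option<u64> {
    pos.checked_add_signed(delta)
}

fn main() {
    assert_eq!(seek(100, 30), Some(130));  // forward
    assert_eq!(seek(100, -30), Some(70));  // backward
    assert_eq!(seek(10, -20), None);       // would underflow
    assert_eq!(seek(u64::MAX, 1), None);   // would overflow
}
```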

Available on every unsigned primitive (u8, u16, u32, u64, u128, usize) as of Rust 1.91.

#080 Apr 2026

80. VecDeque::pop_front_if and pop_back_if — Conditional Pops on Both Ends

Vec::pop_if got a deque-shaped sibling. As of Rust 1.93, VecDeque has pop_front_if and pop_back_if — conditional pops on either end without the peek-then-pop dance.

The problem

You want to remove an element from a VecDeque only when it matches a predicate. Before 1.93, you’d peek, match, then pop:

use std::collections::VecDeque;

let mut queue: VecDeque<i32> = VecDeque::from([1, 2, 3, 4]);

if queue.front().is_some_and(|&x| x == 1) {
    queue.pop_front();
}

Two lookups, two branches, one opportunity to desynchronize the check from the pop if you refactor the predicate later.

The fix

pop_front_if takes a closure, checks the front element against it, and pops it only if it matches. pop_back_if does the same on the other end.

use std::collections::VecDeque;

let mut queue: VecDeque<i32> = VecDeque::from([1, 2, 3, 4]);

let popped = queue.pop_front_if(|x| *x == 1);
assert_eq!(popped, Some(1));
assert_eq!(queue, VecDeque::from([2, 3, 4]));

// Predicate doesn't match — nothing is popped.
let not_popped = queue.pop_front_if(|x| *x > 100);
assert_eq!(not_popped, None);
assert_eq!(queue, VecDeque::from([2, 3, 4]));

The return type is Option<T>: Some(value) if the predicate matched and the element was removed, None otherwise (including when the deque is empty).

One subtle detail worth noting — the closure receives &mut T, not &T. That means |&x| won’t type-check; use |x| *x == ... or destructure with |&mut x|. The extra flexibility lets you mutate the element in-place before deciding to pop it.

Draining a prefix

The pattern clicks when you pair it with a while let loop. Drain everything at the front that matches a condition, stop the moment it doesn’t:

use std::collections::VecDeque;

let mut events: VecDeque<i32> = VecDeque::from([1, 2, 3, 10, 11, 12]);

// Drain the "small" prefix only.
while let Some(_) = events.pop_front_if(|x| *x < 10) {}

assert_eq!(events, VecDeque::from([10, 11, 12]));

No index tracking, no split_off, no collecting into a new deque.

Why both ends?

VecDeque is a double-ended ring buffer, so it’s natural to support the same idiom on both sides. Processing a priority queue from the back, trimming expired entries from the front, popping a sentinel only when it’s still there — all one method call each.

use std::collections::VecDeque;

let mut log: VecDeque<&str> = VecDeque::from(["START", "a", "b", "c", "END"]);

let end = log.pop_back_if(|s| *s == "END");
let start = log.pop_front_if(|s| *s == "START");

assert_eq!(start, Some("START"));
assert_eq!(end, Some("END"));
assert_eq!(log, VecDeque::from(["a", "b", "c"]));

When to reach for it

Whenever the shape of your code is “peek, compare, pop.” That’s the tell. pop_front_if / pop_back_if collapse three steps into one atomic operation, and the Option<T> return makes it composable with while let, ?, and the rest of the Option toolbox.

Stabilized in Rust 1.93 — if your MSRV is recent enough, this is a free readability win.

79. #[diagnostic::on_unimplemented] — Custom Error Messages for Your Traits

Trait errors are notoriously cryptic. #[diagnostic::on_unimplemented] lets you replace the compiler’s default “trait bound not satisfied” with a message that actually tells the user what went wrong.

The problem

You define a trait, someone forgets to implement it, and the compiler spits out a wall of generics and trait bounds that even experienced Rustaceans have to squint at:

trait Storable {
    fn store(&self, path: &str);
}

fn save<T: Storable>(item: &T) {
    item.store("/data/output");
}

fn main() {
    save(&42_i32);
    // error[E0277]: the trait bound `i32: Storable` is not satisfied
}

For your own code that’s fine — but if you’re writing a library, your users deserve better.

The fix

Annotate your trait with #[diagnostic::on_unimplemented] and the compiler will use your message instead:

#[diagnostic::on_unimplemented(
    message = "`{Self}` cannot be stored — implement `Storable` for it",
    label = "this type doesn't implement Storable",
    note = "all types passed to `save()` must implement the `Storable` trait"
)]
trait Storable {
    fn store(&self, path: &str);
}

Now the error reads like documentation, not like a stack trace.

It works with generics too

The placeholder {Self} and the names of your trait’s generic parameters (like {F} below) let you generate targeted messages:

#[diagnostic::on_unimplemented(
    message = "cannot serialize `{Self}` into format `{F}`",
    note = "see docs for supported format/type combinations"
)]
trait Serialize<F> {
    fn serialize(&self) -> Vec<u8>;
}

If someone tries to serialize a type for an unsupported format, they get a message that names both the type and the format — no guessing required.

Multiple notes

You can attach several note entries, and each one becomes a separate note in the compiler output:

#[diagnostic::on_unimplemented(
    message = "`{Self}` is not a valid handler",
    note = "handlers must implement `Handler` with the appropriate request type",
    note = "see https://docs.example.com/handlers for the full list"
)]
trait Handler<Req> {
    fn handle(&self, req: Req);
}

When to use it

This is a library-author tool. If you expose a public trait and expect users to implement it (or pass types that satisfy it), adding on_unimplemented is a small investment that saves your users real debugging time. Crates like bevy, axum, and diesel already use it to turn walls of trait errors into actionable guidance.

Stabilized in Rust 1.78, it’s part of the #[diagnostic] namespace — the compiler treats unrecognized diagnostic hints as soft warnings rather than hard errors, so it’s forward-compatible by design.