90. black_box — Stop the Compiler From Erasing Your Benchmarks

Your benchmark ran in 0 nanoseconds? Congratulations — the compiler optimised away the code you were trying to measure. std::hint::black_box prevents that by hiding values from the optimiser.

The problem: the optimiser is too smart

The Rust compiler aggressively eliminates dead code. If it can prove a result is never used, it simply removes the computation:

fn sum_range(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    let start = std::time::Instant::now();
    let result = sum_range(10_000_000);
    let elapsed = start.elapsed();

    // Without using `result`, the compiler may skip the entire computation
    println!("took: {elapsed:?}");

    // Force the result to be "used" so the above isn't optimised out
    assert!(result > 0);
}

In release mode, the compiler can see through this and may still optimise the loop away — or even compute the answer at compile time. Your benchmark reports near-zero time, and you learn nothing.

Enter black_box

std::hint::black_box takes a value and returns it unchanged, but the compiler treats it as an opaque barrier — it can’t see through it, so it can’t optimise away whatever produced that value:

use std::hint::black_box;

fn sum_range(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    let start = std::time::Instant::now();
    let result = sum_range(black_box(10_000_000));
    let _ = black_box(result);
    let elapsed = start.elapsed();

    println!("sum = {result}, took: {elapsed:?}");
}

Two black_box calls do the trick:

  1. Wrap the input — prevents the compiler from constant-folding the argument
  2. Wrap the output — prevents dead-code elimination of the computation

Before and after

Without black_box (release mode):

sum = 49999995000000, took: 83ns     ← suspiciously fast

With black_box (release mode):

sum = 49999995000000, took: 5.612ms  ← actual work

It works on any type

black_box is generic — it works on integers, strings, structs, references, whatever:

use std::hint::black_box;

fn main() {
    // Hide a vector from the optimiser
    let data: Vec<i32> = black_box(vec![1, 2, 3, 4, 5]);
    let total: i32 = data.iter().sum();
    let total = black_box(total);

    assert_eq!(total, 15);
}

Micro-benchmark recipe

Here’s a minimal pattern for quick-and-dirty micro-benchmarks:

use std::hint::black_box;
use std::time::Instant;

fn fibonacci(n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let iterations = 100;
    let start = Instant::now();
    for _ in 0..iterations {
        black_box(fibonacci(black_box(30)));
    }
    let elapsed = start.elapsed();
    println!("{iterations} runs in {elapsed:?} ({:?}/iter)", elapsed / iterations);
}

Without black_box, the compiler could hoist the pure function call out of the loop or eliminate it entirely. With it, each iteration does real work.

When to use it

Reach for black_box whenever you’re timing code and the results look suspiciously fast. It’s also what benchmarking frameworks like criterion (and the nightly-only built-in #[bench]) rely on under the hood.

It’s not a full benchmarking harness — for serious measurement you still want warmup, statistics, and outlier detection. But when you need a quick sanity check, black_box + Instant gets the job done.
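That recipe folds neatly into a reusable helper. A minimal sketch (the `time_it` name is ours, not std's); note `iters` must be non-zero since we divide by it:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Run `f` `iters` times, hiding each result from the optimiser,
// and return the average wall-clock time per iteration.
fn time_it<T>(iters: u32, mut f: impl FnMut() -> T) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        black_box(f());
    }
    start.elapsed() / iters
}

fn main() {
    let per_iter = time_it(10, || black_box((0..1_000_000u64).sum::<u64>()));
    println!("~{per_iter:?} per iteration");
}
```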

Available since Rust 1.66 on stable.

89. cold_path — Tell the Compiler Which Branch Won't Happen

Your error-handling branch fires once in a million calls, but the compiler doesn’t know that. core::hint::cold_path lets you mark unlikely branches so the optimiser can focus on the hot path.

Why branch layout matters

Modern CPUs predict which way a branch will go and speculatively execute instructions ahead of time. When the prediction is right, execution flies. When it’s wrong, the pipeline stalls.

Compilers already try to guess which branches are hot, but they don’t always get it right — especially when both sides of an if look equally plausible from a static analysis perspective. That’s where cold_path comes in.

The basics

Call cold_path() at the start of a branch that is rarely taken:

use std::hint::cold_path;

fn process(value: Option<u64>) -> u64 {
    if let Some(v) = value {
        v * 2
    } else {
        cold_path();
        log_miss();
        0
    }
}

fn log_miss() {
    // imagine logging or metrics here
}

The compiler now knows the else arm is unlikely. It can lay out the machine code so the hot path (the Some arm) has no jumps, keeping it in the instruction cache and the branch predictor happy.
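If your toolchain predates cold_path, a coarser version of the same hint has been stable for years: the #[cold] attribute on a function marks every call to it as unlikely at the call site. A minimal sketch (function names are ours):

```rust
// #[cold] tells the optimiser calls to this function are rare;
// #[inline(never)] keeps its body out of the hot path entirely.
#[cold]
#[inline(never)]
fn handle_rare(code: u16) -> &'static str {
    // imagine logging the unexpected status here
    let _ = code;
    "unexpected"
}

fn lookup(code: u16) -> &'static str {
    if code == 200 { "ok" } else { handle_rare(code) }
}

fn main() {
    assert_eq!(lookup(200), "ok");
    assert_eq!(lookup(404), "unexpected");
}
```

The trade-off: #[cold] applies to the whole function at every call site, while cold_path marks one specific branch.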

In match expressions

cold_path works well in match arms too — mark the rare variants:

use std::hint::cold_path;

fn handle_status(code: u16) -> &'static str {
    match code {
        200 => "ok",
        301 => "moved",
        404 => { cold_path(); "not found" }
        500 => { cold_path(); "server error" }
        _   => { cold_path(); "unknown" }
    }
}

Only the branches you expect to be common stay on the fast track.

Building likely and unlikely helpers

If you’ve used C/C++, you might miss __builtin_expect. With cold_path you can build the same thing:

use std::hint::cold_path;

#[inline(always)]
const fn likely(b: bool) -> bool {
    if !b { cold_path(); }
    b
}

#[inline(always)]
const fn unlikely(b: bool) -> bool {
    if b { cold_path(); }
    b
}

fn check_bounds(index: usize, len: usize) -> bool {
    if unlikely(index >= len) {
        panic!("out of bounds: {} >= {}", index, len);
    }
    true
}

Now you can annotate conditions directly instead of marking individual branches.

A word of caution

Misusing cold_path on a branch that actually runs often can hurt performance — the compiler will deprioritise it, and you’ll get more pipeline stalls, not fewer. Always benchmark before sprinkling hints around. Profile first, hint second.

The bottom line

cold_path is a zero-cost, zero-argument function that tells the optimiser what you already know: this branch is the exception, not the rule. It’s a small tool, but in hot loops and latency-sensitive code, it can make a measurable difference.

Stabilised in Rust 1.95 (April 2026).

88. Vec::push_mut — Push and Modify in One Step

Tired of pushing a default into a Vec and then grabbing a mutable reference to fill it in? Rust 1.95 stabilises push_mut, which returns &mut T to the element it just inserted.

The old dance

A common pattern is to push a placeholder value and then immediately mutate it. Before push_mut, you had to do an awkward two-step:

let mut names = Vec::new();

// Push, then index back in to mutate
names.push(String::new());
let last = names.last_mut().unwrap();
last.push_str("hello");
last.push_str(", world");

assert_eq!(names[0], "hello, world");

That last_mut().unwrap() is boilerplate — you know the element is there because you just pushed it. Worse, in more complex code the compiler can’t always prove the reference is safe, forcing you into index gymnastics.

Enter push_mut

push_mut pushes the value and hands you back a mutable reference in one shot:

let mut scores = Vec::new();

let entry = scores.push_mut(0);
*entry += 10;
*entry += 25;

assert_eq!(scores[0], 35);

No unwraps, no indexing, no second lookup. The borrow checker is happy because there’s a clear chain of ownership.

Building structs in-place

This really shines when you’re constructing complex values piece by piece:

#[derive(Debug, Default)]
struct Player {
    name: String,
    score: u32,
    active: bool,
}

let mut roster = Vec::new();

let p = roster.push_mut(Player::default());
p.name = String::from("Ferris");
p.score = 100;
p.active = true;

assert_eq!(roster[0].name, "Ferris");
assert_eq!(roster[0].score, 100);
assert!(roster[0].active);

Instead of building the struct fully before pushing, you push a default and fill it in. This can be handy when the final field values depend on context you only have after insertion.

Also works for insert_mut

Need to insert at a specific index? insert_mut follows the same pattern:

let mut v = vec![1, 3, 4];
let inserted = v.insert_mut(1, 2);
*inserted *= 10;

assert_eq!(v, [1, 20, 3, 4]);

Both methods are also available on VecDeque (push_front_mut, push_back_mut, insert_mut) and LinkedList (push_front_mut, push_back_mut).

When to reach for it

Use push_mut whenever you’d otherwise write push followed by last_mut().unwrap(). It’s one less unwrap, one fewer line, and clearer intent: push this, then let me tweak it.
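Until your toolchain has it, the same shape is easy to sketch on stable Rust as an extension trait (the trait and method names here are ours, not std's):

```rust
// Stable-Rust stand-in for push_mut: push, then hand back a
// mutable reference to the element we just added.
trait PushAndGet<T> {
    fn push_and_get(&mut self, value: T) -> &mut T;
}

impl<T> PushAndGet<T> for Vec<T> {
    fn push_and_get(&mut self, value: T) -> &mut T {
        self.push(value);
        // We just pushed, so the vec cannot be empty.
        self.last_mut().unwrap()
    }
}

fn main() {
    let mut names: Vec<String> = Vec::new();
    let s = names.push_and_get(String::new());
    s.push_str("hello");
    assert_eq!(names[0], "hello");
}
```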

Stabilised in Rust 1.95 (April 2026) — update your toolchain and give it a spin.


87. Atomic update — Kill the Compare-and-Swap Loop

Every Rust developer who’s written lock-free code has written the same compare_exchange loop. Rust 1.95 finally gives atomics an update method that does it for you.

The old way

Atomically doubling a counter used to mean writing a retry loop yourself:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

loop {
    let current = counter.load(Ordering::Relaxed);
    let new_val = current * 2;
    match counter.compare_exchange(
        current, new_val,
        Ordering::SeqCst, Ordering::Relaxed,
    ) {
        Ok(_) => break,
        Err(_) => continue,
    }
}
// counter is now 20

It works, but it’s boilerplate — and easy to get wrong (use the wrong ordering, forget to retry, etc.).

The new way: update

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x * 2);
// counter is now 20

One line. No loop. No chance of forgetting to retry on contention.

The method takes two orderings (one for the store on success, one for the load on failure) and a closure that transforms the current value. It handles the compare-and-swap retry loop internally.

It returns the previous value

Just like fetch_add and friends, update returns the value before the update:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(5);

let prev = counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 3);
assert_eq!(prev, 5);  // was 5
assert_eq!(counter.load(Ordering::SeqCst), 8);  // now 8

This makes it perfect for “fetch-and-modify” patterns where you need the old value.

Works on all atomic types

update isn’t just for AtomicUsize — it’s available on AtomicBool, AtomicIsize, AtomicUsize, and AtomicPtr too:

use std::sync::atomic::{AtomicBool, Ordering};

let flag = AtomicBool::new(false);
flag.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x);
assert_eq!(flag.load(Ordering::SeqCst), true);

When to use update vs fetch_add

If your operation is a simple add, sub, or bitwise op, the specialized fetch_* methods are still better — they compile down to a single atomic instruction on most architectures.

Use update when your transformation is more complex: clamping, toggling state machines, applying arbitrary functions. Anywhere you’d previously hand-roll a CAS loop.
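If you are on a toolchain without update, the long-stable fetch_update (Rust 1.45+) covers the same ground; its closure returns an Option, so it can also abort the operation by returning None:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(10);

    // fetch_update runs the CAS retry loop for you and returns
    // Ok(previous value) once the store succeeds.
    let prev = counter
        .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x * 2))
        .expect("closure never returns None");

    assert_eq!(prev, 10);
    assert_eq!(counter.load(Ordering::SeqCst), 20);
}
```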

Summary

Method                    | Use when
--------------------------|----------------------------------------
fetch_add, fetch_or, etc. | Simple arithmetic/bitwise ops
update                    | Arbitrary transformations (Rust 1.95+)
Manual CAS loop           | Never again (mostly)

Available on stable since Rust 1.95.0 for AtomicBool, AtomicIsize, AtomicUsize, and AtomicPtr.

86. cfg_select! — Compile-Time Match on Platform and Features

Stop stacking #[cfg] blocks that contradict each other — cfg_select! gives you a match-like syntax for conditional compilation, right in the standard library.

The old way: a wall of #[cfg]

When you need platform-specific code, the traditional approach is a series of #[cfg(...)] attributes. It works, but nothing ties the branches together — you can easily miss a platform or accidentally define the same function twice.

#[cfg(unix)]
fn default_shell() -> &'static str { "/bin/sh" }

#[cfg(windows)]
fn default_shell() -> &'static str { "cmd.exe" }

// Forgot wasm? Forgot the fallback? The compiler won't tell you
// until someone tries to build on an unsupported target.

The fix: cfg_select!

Stabilized in Rust 1.95, cfg_select! works like a compile-time match. The compiler evaluates each cfg predicate top to bottom and emits only the first matching arm. Add a _ wildcard for a catch-all.

cfg_select! {
    unix => {
        fn default_shell() -> &'static str { "/bin/sh" }
    }
    windows => {
        fn default_shell() -> &'static str { "cmd.exe" }
    }
    _ => {
        fn default_shell() -> &'static str { "sh" }
    }
}

The branches are mutually exclusive by design — exactly one fires. No more worrying about overlapping #[cfg] blocks or missing targets.

It works in expression position too

Need a quick platform-dependent value? Each arm still wraps its body in braces, but the whole expression evaluates to a value:

let path_sep = cfg_select! {
    windows => { '\\' }
    _ => { '/' }
};

assert_eq!(path_sep, '/'); // on unix

This is far cleaner than wrapping a let in two separate #[cfg] attributes or pulling in the cfg-if crate.
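For simple values, today's stable toolchains can get close with the cfg! macro, with the caveat that every branch must still type-check and compile on every target:

```rust
fn main() {
    // cfg!(windows) is a compile-time boolean; unlike cfg_select!,
    // both arms of this `if` are compiled for every target.
    let path_sep = if cfg!(windows) { '\\' } else { '/' };
    assert!(path_sep == '/' || path_sep == '\\');
    println!("separator: {path_sep}");
}
```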

Why this matters

Before 1.95, the community relied on the cfg-if crate for this exact pattern — it has over 300 million downloads. Now the same functionality ships in std, with one less dependency to track and a syntax the compiler can verify directly.

85. cast_signed & cast_unsigned — Explicit Sign Casting for Integers

Stop using as to flip between signed and unsigned integers — cast_signed() and cast_unsigned() say exactly what you mean.

The problem with as

When you write value as u32 or value as i64, the as keyword does too many things at once: it can change the sign, widen, truncate, or even convert floats. Readers have to mentally verify which conversion is actually happening.

let x: i32 = -1;
let y = x as u32;  // Sign cast? Truncation? Widening? All of the above?

The fix: cast_signed() and cast_unsigned()

Stabilized in Rust 1.87, these methods only reinterpret the sign of an integer — same bit width, same bits, just a different type. If you accidentally try to change the size, it won’t compile.

let signed: i32 = -1;
let unsigned: u32 = signed.cast_unsigned();
assert_eq!(unsigned, u32::MAX); // Same bits, different interpretation

let back: i32 = unsigned.cast_signed();
assert_eq!(back, -1); // Round-trips perfectly

The key constraint: these methods only exist between same-sized pairs (i32 ↔ u32, i64 ↔ u64, etc.). There’s no i32::cast_unsigned() returning a u64 — that would silently widen, which is exactly the kind of ambiguity these methods eliminate.

Where this shines

Bit manipulation is the classic use case. When you need to treat an unsigned value as signed for arithmetic and then go back, the intent is crystal clear:

fn wrapping_distance(a: u32, b: u32) -> i32 {
    a.wrapping_sub(b).cast_signed()
}

assert_eq!(wrapping_distance(10, 3), 7);
assert_eq!(wrapping_distance(3, 10), -7);

Compare that to the as version — a.wrapping_sub(b) as i32 — and you can see why reviewers love the explicit method. It’s one less thing to second-guess in a code review.
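One contrast worth keeping straight: cast_unsigned reinterprets bits and always succeeds, while TryFrom preserves the value and fails for out-of-range inputs. A quick side-by-side:

```rust
fn main() {
    let x: i32 = -1;

    // Bit reinterpretation: always succeeds, meaning changes.
    assert_eq!(x.cast_unsigned(), u32::MAX);

    // Value-preserving conversion: out-of-range inputs fail.
    assert!(u32::try_from(x).is_err());
    assert_eq!(u32::try_from(42i32), Ok(42u32));
}
```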

84. Result::flatten — Unwrap Nested Results in One Call

You have a Result<Result<T, E>, E> and just want the inner Result<T, E>. Before Rust 1.89, that meant a clunky and_then(|r| r). Now there’s Result::flatten.

The problem: nested Results

Nested Results crop up naturally — call a function that returns Result, then map over the success with another fallible operation:

fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>().map_err(|e| e.to_string())
}

fn validate_port(port: u16) -> Result<u16, String> {
    if port >= 1024 {
        Ok(port)
    } else {
        Err(format!("port {} is privileged", port))
    }
}

let input = "8080";
let nested: Result<Result<u16, String>, String> =
    parse_port(input).map(|p| validate_port(p));
// nested is Ok(Ok(8080)) — awkward to work with

You end up with Ok(Ok(8080)) when you really want Ok(8080).

The old workaround

The standard trick was and_then with an identity closure:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
let flat = parse_port("8080").map(|p| validate_port(p)).and_then(|r| r);
assert_eq!(flat, Ok(8080));

It works, but .and_then(|r| r) is a puzzler if you haven’t seen the pattern before.

The fix: flatten

Stabilized in Rust 1.89, Result::flatten does exactly what you’d expect:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
let result = parse_port("8080").map(|p| validate_port(p)).flatten();
assert_eq!(result, Ok(8080));

If the outer is Err, you get that Err. If the outer is Ok(Err(e)), you get Err(e). Only Ok(Ok(v)) becomes Ok(v).

Error propagation still works

Both layers must share the same error type. The flattening preserves whichever error came first:

# fn parse_port(s: &str) -> Result<u16, String> {
#     s.parse::<u16>().map_err(|e| e.to_string())
# }
# fn validate_port(port: u16) -> Result<u16, String> {
#     if port >= 1024 { Ok(port) } else { Err(format!("port {} is privileged", port)) }
# }
// Outer error: parse fails
let r1 = parse_port("abc").map(|p| validate_port(p)).flatten();
assert!(r1.is_err());

// Inner error: parse succeeds, validation fails
let r2 = parse_port("80").map(|p| validate_port(p)).flatten();
assert_eq!(r2, Err("port 80 is privileged".to_string()));

// Both succeed
let r3 = parse_port("3000").map(|p| validate_port(p)).flatten();
assert_eq!(r3, Ok(3000));

When to use flatten vs and_then

If you’re writing .map(f).flatten(), you probably want .and_then(f) — it’s the same thing, one call shorter. flatten shines when you already have a nested Result and just need to collapse it — say, from a generic API, a deserialized value, or a collection of results mapped over a fallible function.
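If you want the mental model side by side: Option has had the same method since Rust 1.40, with the matching collapse rule.

```rust
fn main() {
    // Only Some(Some(v)) survives as Some(v); everything else is None.
    let x: Option<Option<u32>> = Some(Some(6));
    assert_eq!(x.flatten(), Some(6));

    let y: Option<Option<u32>> = Some(None);
    assert_eq!(y.flatten(), None);

    let z: Option<Option<u32>> = None;
    assert_eq!(z.flatten(), None);
}
```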

83. Arc::unwrap_or_clone — Take Ownership Without the Dance

You need to own a T but all you have is an Arc<T>. The old pattern is a six-line fumble with try_unwrap. Arc::unwrap_or_clone collapses it into one call — and skips the clone entirely when it can.

The old dance

Arc::try_unwrap hands you the inner value — but only if you’re the last reference. Otherwise it gives your Arc back, and you have to clone.

use std::sync::Arc;

let arc = Arc::new(String::from("hello"));
let owned: String = match Arc::try_unwrap(arc) {
    Ok(inner) => inner,
    Err(still_shared) => (*still_shared).clone(),
};
assert_eq!(owned, "hello");

Every place that wanted an owned T from an Arc<T> wrote this same pattern, often subtly wrong.

The fix: unwrap_or_clone

Stabilized in Rust 1.76, Arc::unwrap_or_clone does exactly the right thing: move the inner value out if we’re the last owner, clone it otherwise.

use std::sync::Arc;

let arc = Arc::new(String::from("hello"));
let owned: String = Arc::unwrap_or_clone(arc);
assert_eq!(owned, "hello");

One call. No match. No deref gymnastics.

It actually skips the clone

The key win isn’t just ergonomics — it’s performance. When the refcount is 1, no clone happens; the T is moved out of the allocation directly.

use std::sync::Arc;

let solo = Arc::new(vec![1, 2, 3, 4, 5]);
let v: Vec<i32> = Arc::unwrap_or_clone(solo); // no allocation, just a move
assert_eq!(v, [1, 2, 3, 4, 5]);

let shared = Arc::new(vec![1, 2, 3]);
let _other = Arc::clone(&shared);
let v2: Vec<i32> = Arc::unwrap_or_clone(shared); // clones, because _other still holds a ref
assert_eq!(v2, [1, 2, 3]);

Also on Rc

The same method exists on Rc for single-threaded code — identical semantics, identical ergonomics:

use std::rc::Rc;

let rc = Rc::new(42);
let n: i32 = Rc::unwrap_or_clone(rc);
assert_eq!(n, 42);

Anywhere you were reaching for try_unwrap().unwrap_or_else(|a| (*a).clone()), reach for unwrap_or_clone instead. Shorter, clearer, and it avoids the clone when it can.
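A related tool worth knowing: when you want to mutate through the Arc rather than take the value out, Arc::make_mut applies the same clone-only-if-shared logic while keeping the Arc alive:

```rust
use std::sync::Arc;

fn main() {
    let mut a = Arc::new(vec![1, 2, 3]);
    let b = Arc::clone(&a);

    // make_mut clones behind the scenes here because `b` still shares
    // the allocation; a sole owner would mutate in place, clone-free.
    Arc::make_mut(&mut a).push(4);

    assert_eq!(*a, [1, 2, 3, 4]);
    assert_eq!(*b, [1, 2, 3]); // the other handle is untouched
}
```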


82. isqrt — Integer Square Root Without Floating Point

(n as f64).sqrt() as u64 is the classic hack — and it silently gives the wrong answer for large values. Rust 1.84 stabilized isqrt on every integer type: exact, float-free, no precision traps.

The floating-point trap

Converting to f64, calling .sqrt(), and casting back is the go-to pattern. It looks fine. It isn’t.

let n: u64 = 10_000_000_000_000_000_000;
let bad = (n as f64).sqrt() as u64;
// bad == 3_162_277_660, and that happens to be correct for this n.
// For many other large u64 values, the f64 round-trip is off by one.

f64 only has 53 bits of mantissa, so for u64 values above 2^53 the conversion loses precision before you even take the square root.

The fix: isqrt

let n: u64 = 10_000_000_000_000_000_000;
let root = n.isqrt();
assert!(root * root <= n);            // root never overshoots...
assert!((root + 1) * (root + 1) > n); // ...so it is the exact floor

It’s defined on every integer type — u8, u16, u32, u64, u128, usize, and their signed counterparts — and always returns the exact floor of the square root. No casts, no rounding, no surprises.

Signed integers too

let x: i32 = 42;
assert_eq!(x.isqrt(), 6); // 6*6 = 36, 7*7 = 49

// Negative values would panic, so check first:
let maybe_neg: i32 = -4;
assert_eq!(maybe_neg.checked_isqrt(), None);

Use checked_isqrt on signed types when the input might be negative — it returns Option<T> instead of panicking.

When you’d reach for it

Perfect-square checks, tight loops over divisors, hash table sizing, geometry on integer grids — anywhere you were reaching for f64::sqrt purely to round down, reach for isqrt instead. It’s exact, skips the float round-trip entirely, and is often faster.
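For instance, a perfect-square check collapses to two lines (a sketch; the helper name is ours):

```rust
// r * r cannot overflow: u64::isqrt is at most 2^32 - 1.
fn is_perfect_square(n: u64) -> bool {
    let r = n.isqrt();
    r * r == n
}

fn main() {
    assert!(is_perfect_square(144));
    assert!(!is_perfect_square(145));
    assert!(is_perfect_square(0));
}
```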

81. checked_sub_signed — Subtract a Signed Delta From an Unsigned Without Casts

checked_add_signed has been around for years. Its missing sibling finally landed: as of Rust 1.91, u64::checked_sub_signed (and the whole {checked, overflowing, saturating, wrapping}_sub_signed family) lets you subtract an i64 from a u64 without casting, unsafe, or hand-rolled overflow checks.

The problem

You’ve got an unsigned counter — a file offset, a buffer index, a frame number — and you want to apply a signed delta. The delta is negative, so subtracting it should increase the counter. But Rust won’t let you subtract an i64 from a u64:

let pos: u64 = 100;
let delta: i64 = -5;

// error[E0277]: cannot subtract `i64` from `u64`
// let new_pos = pos - delta;

The usual workarounds are all awkward. Cast to i64 and hope nothing overflows. Branch on the sign of the delta and call either checked_sub or checked_add depending. Convert via as and pray.
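For reference, the branch-on-sign version looks like this on any stable toolchain (the helper name is ours):

```rust
// unsigned_abs gives the delta's magnitude without overflowing on
// i64::MIN, and checked_{sub,add} handle the range checks.
fn sub_signed(pos: u64, delta: i64) -> Option<u64> {
    if delta >= 0 {
        pos.checked_sub(delta as u64)
    } else {
        pos.checked_add(delta.unsigned_abs())
    }
}

fn main() {
    assert_eq!(sub_signed(100, 30), Some(70));
    assert_eq!(sub_signed(100, -5), Some(105));
    assert_eq!(sub_signed(100, 200), None);
}
```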

The fix

checked_sub_signed takes an i64 directly and returns Option<u64>:

let pos: u64 = 100;

assert_eq!(pos.checked_sub_signed(30),  Some(70));   // normal subtraction
assert_eq!(pos.checked_sub_signed(-5),  Some(105));  // subtracting negative adds
assert_eq!(pos.checked_sub_signed(200), None);       // underflow → None

Subtracting a negative number “wraps around” to addition, exactly as the math says it should. Underflow (going below zero) returns None instead of panicking or silently wrapping.

The whole family

Pick your overflow semantics, same as every other integer op:

let pos: u64 = 10;

// Checked: returns Option.
assert_eq!(pos.checked_sub_signed(-5),  Some(15));
assert_eq!(pos.checked_sub_signed(100), None);

// Saturating: clamps to 0 or u64::MAX.
assert_eq!(pos.saturating_sub_signed(100), 0);
assert_eq!((u64::MAX - 5).saturating_sub_signed(-100), u64::MAX);

// Wrapping: modular arithmetic, never panics.
assert_eq!(pos.wrapping_sub_signed(20), u64::MAX - 9);

// Overflowing: returns (value, did_overflow).
assert_eq!(pos.overflowing_sub_signed(20), (u64::MAX - 9, true));
assert_eq!(pos.overflowing_sub_signed(5),  (5, false));

Same convention as checked_sub, saturating_sub, etc. — you already know the shape.

Why it matters

The signed-from-unsigned case comes up more than you’d think. Scrubbing back and forth in a timeline. Applying a velocity to a position. Rebasing a byte offset. Any time the delta can be negative, you need this method — and now you have it without touching as.

It pairs nicely with its long-stable sibling checked_add_signed, which has been around since Rust 1.66. Between the two, signed deltas on unsigned counters are a one-liner in any direction.

Available on every unsigned primitive (u8, u16, u32, u64, u128, usize) as of Rust 1.91.