#102 Apr 2026

102. slice::partition_point — Binary Search That Just Returns the Index

Reaching for binary_search on a sorted Vec and unwrapping Ok(i) | Err(i) because you only ever wanted the index? slice::partition_point skips the Result ceremony and hands you the position directly.

The binary_search annoyance

binary_search is great when you care whether the value was actually found. But often you don’t — you just want the spot where it would go to keep the slice sorted:

let nums = vec![1, 3, 5, 7, 9, 11];
let target = 6;

// Awkward: collapse Ok and Err to a single index.
let pos = match nums.binary_search(&target) {
    Ok(i) | Err(i) => i,
};

assert_eq!(pos, 3);

The Ok | Err pattern works, but it’s noisy and obscures the intent. Worse, it doesn’t generalise — what if you want the insertion point for a predicate, not an exact value?

partition_point to the rescue

partition_point takes a predicate and returns the index of the first element for which the predicate is false — the point where the slice flips from all-true to all-false. On a sorted slice, that’s the insertion point — no Result, no match arms:

let nums = vec![1, 3, 5, 7, 9, 11];

let pos = nums.partition_point(|&x| x < 6);

assert_eq!(pos, 3); // 6 would slot between 5 and 7

The slice still has to be partitioned (all trues before all falses), but for a sorted slice with a < predicate that’s automatic. Internally it’s still O(log n) binary search — same complexity as binary_search, friendlier API.

Insert while keeping sorted

A common use: keep a Vec sorted as you add to it.

let mut leaderboard = vec![10, 25, 40, 70];
let new_score = 33;

let pos = leaderboard.partition_point(|&x| x < new_score);
leaderboard.insert(pos, new_score);

assert_eq!(leaderboard, [10, 25, 33, 40, 70]);

Compare that to binary_search(&new_score).unwrap_or_else(|i| i) — same result, more ceremony.

Beyond simple ordering

Because it takes any predicate, partition_point works on any slice partitioned by a property — not just sorted-by-Ord. Sorted by a derived key? Filter by a threshold? Same call:

struct Event { day: u32, name: &'static str }

let log = vec![
    Event { day: 1, name: "boot"   },
    Event { day: 2, name: "login"  },
    Event { day: 5, name: "deploy" },
    Event { day: 7, name: "alert"  },
    Event { day: 9, name: "reboot" },
];

// First event on or after day 5.
let i = log.partition_point(|e| e.day < 5);
assert_eq!(log[i].name, "deploy");

// Number of events strictly before day 5.
assert_eq!(log.partition_point(|e| e.day < 5), 2);

That second line is a slick trick: partition_point doubles as “count how many elements satisfy the prefix predicate” in O(log n).

When to reach for it

Any time you find yourself writing binary_search(...).unwrap_or_else(|i| i) or match ... { Ok(i) | Err(i) => i }, swap in partition_point. Stable since Rust 1.52 — old enough to use everywhere, fresh enough that plenty of code still does it the noisy way.

#101 Apr 2026

101. f64::total_cmp — Sort Floats Without the NaN Panic

Tried v.sort() on a Vec<f64> and hit “the trait Ord is not implemented for f64”? Then reached for .sort_by(|a, b| a.partial_cmp(b).unwrap()) and now a stray NaN is about to panic your service at 3am? f64::total_cmp is the one-liner that makes both problems disappear.

Why f64 doesn’t implement Ord

Floats form a partial order because NaN is not equal to anything — not even itself. So f64: PartialOrd but not Ord, which means sort() flat out refuses to compile:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
// temps.sort(); // ❌ the trait `Ord` is not implemented for `f64`

The classic workaround is partial_cmp().unwrap():

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(|a, b| a.partial_cmp(b).unwrap());

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Works — until a NaN sneaks in. Then partial_cmp returns None, the unwrap fires, and your sort becomes a panic.

Enter total_cmp

f64::total_cmp implements the IEEE 754 totalOrder predicate: a real total ordering on every f64 bit pattern, including all the NaNs. It returns Ordering directly — no Option, no panic:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(f64::total_cmp);

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Same result for well-behaved input, but now NaN won’t take the process down:

let mut values: Vec<f64> = vec![3.0, f64::NAN, 1.0, f64::NEG_INFINITY, 2.0];
values.sort_by(f64::total_cmp);

// Finite values in order, -∞ at the front, NaN at the back.
assert_eq!(values[0], f64::NEG_INFINITY);
assert_eq!(values[1], 1.0);
assert_eq!(values[2], 2.0);
assert_eq!(values[3], 3.0);
assert!(values[4].is_nan());

min and max too

partial_cmp poisons more than just sort. Any time you reach for iter().max_by(|a, b| a.partial_cmp(b).unwrap()), you’ve written the same latent panic. total_cmp fits there too:

let readings = [3.2_f64, 1.4, 4.8, 2.1];

let peak = readings.iter().copied().max_by(f64::total_cmp).unwrap();
let low  = readings.iter().copied().min_by(f64::total_cmp).unwrap();

assert_eq!(peak, 4.8);
assert_eq!(low, 1.4);

Sorting structs by a float field

Because total_cmp takes two &f64s and returns Ordering, it slots straight into sort_by:

struct Trade { symbol: &'static str, price: f64 }

let mut book = vec![
    Trade { symbol: "AAPL", price: 172.40 },
    Trade { symbol: "NVDA", price: 915.10 },
    Trade { symbol: "MSFT", price: 419.80 },
];

book.sort_by(|a, b| a.price.total_cmp(&b.price));

assert_eq!(book[0].symbol, "AAPL");
assert_eq!(book[2].symbol, "NVDA");

When to reach for it

Any time you’re about to type partial_cmp(...).unwrap() for a float, stop and use total_cmp instead. f32::total_cmp works the same way. Available since Rust 1.62 — the fix has been hiding in plain sight for years.

#100 Apr 2026

100. std::cmp::Reverse — Sort Descending Without Writing a Closure

Want to sort a Vec in descending order? The old trick is v.sort_by(|a, b| b.cmp(a)), and every time you write it you flip a and b and pray you got it right. std::cmp::Reverse is a one-word replacement.

The closure swap trick

To sort descending, you reverse the comparison. The closure is short but easy to mess up:

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by(|a, b| b.cmp(a));

assert_eq!(scores, [50, 40, 30, 20, 10]);

Swap the a and b and you silently get ascending order again. Not a bug the compiler can help with.

Enter Reverse

std::cmp::Reverse<T> is a tuple wrapper whose Ord impl is the reverse of T’s. Hand it to sort_by_key and you’re done:

use std::cmp::Reverse;

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by_key(|&s| Reverse(s));

assert_eq!(scores, [50, 40, 30, 20, 10]);

No chance of flipping the comparison the wrong way — the intent is right there in the name.

Sorting by a field, descending

It really shines when you’re sorting structs by a specific field:

use std::cmp::Reverse;

#[derive(Debug)]
struct Player {
    name: &'static str,
    score: u32,
}

let mut roster = vec![
    Player { name: "Ferris", score: 42 },
    Player { name: "Corro",  score: 88 },
    Player { name: "Rusty",  score: 65 },
];

roster.sort_by_key(|p| Reverse(p.score));

assert_eq!(roster[0].name, "Corro");
assert_eq!(roster[1].name, "Rusty");
assert_eq!(roster[2].name, "Ferris");

Mixing ascending and descending

Sort by one field ascending and another descending? Pack them in a tuple — Reverse composes cleanly:

use std::cmp::Reverse;

let mut items = vec![
    ("apple",  3),
    ("banana", 1),
    ("apple",  1),
    ("banana", 3),
];

// Name ascending, then count descending.
items.sort_by_key(|&(name, count)| (name, Reverse(count)));

assert_eq!(items, [
    ("apple",  3),
    ("apple",  1),
    ("banana", 3),
    ("banana", 1),
]);

It works anywhere Ord does

Because Reverse<T> is an Ord type, you can use it with BinaryHeap to turn a max-heap into a min-heap, or with BTreeSet to iterate in reverse order — no extra wrapper types needed.

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// BinaryHeap is max-heap by default. Wrap in Reverse for min-heap behaviour.
let mut heap = BinaryHeap::new();
heap.push(Reverse(3));
heap.push(Reverse(1));
heap.push(Reverse(2));

assert_eq!(heap.pop(), Some(Reverse(1)));
assert_eq!(heap.pop(), Some(Reverse(2)));
assert_eq!(heap.pop(), Some(Reverse(3)));

When to reach for it

Any time you’d write b.cmp(a) — reach for Reverse instead. The code reads top-to-bottom, the intent is obvious, and there’s no comparator to accidentally flip.

99. std::mem::replace — Swap a Value and Keep the Old One

mem::take is great until your type doesn’t have a sensible Default. That’s where mem::replace steps in — you pick what gets left behind, and you still get the old value out of a &mut.

The shape of the problem

You can’t move a value out of a &mut T. The borrow checker rightly refuses. mem::take fixes this by swapping in T::default(), but an enum with no obvious default, or a type that deliberately doesn’t implement Default, leaves you stuck.

mem::replace(dest, src) is the escape hatch: it writes src into *dest and hands you back the old value.

use std::mem;

let mut greeting = String::from("Hello");
let old = mem::replace(&mut greeting, String::from("Howdy"));

assert_eq!(old, "Hello");
assert_eq!(greeting, "Howdy");

No clones, no unsafe, no Default required.

State machines without a default variant

This is where replace earns its keep. Picture a connection type where none of the variants makes a natural default — Disconnected is fine here, but it might be Error(e) somewhere else, and #[derive(Default)] would be a lie:

use std::mem;

enum Connection {
    Disconnected,
    Connecting(u32),
    Connected { session: String },
}

fn finalize(conn: &mut Connection) -> Option<String> {
    match mem::replace(conn, Connection::Disconnected) {
        Connection::Connected { session } => Some(session),
        _ => None,
    }
}

let mut c = Connection::Connected { session: String::from("abc123") };
let session = finalize(&mut c);

assert_eq!(session.as_deref(), Some("abc123"));
assert!(matches!(c, Connection::Disconnected));

You get the owned String out of the Connected variant — no cloning the session, no Option<Connection> gymnastics, no unsafe.

Flushing a buffer with a fresh one

mem::take would leave behind an empty Vec with zero capacity. mem::replace lets you pre-size the replacement, which matters if you’re about to refill it:

use std::mem;

struct Batch {
    items: Vec<u32>,
}

impl Batch {
    fn flush(&mut self) -> Vec<u32> {
        mem::replace(&mut self.items, Vec::with_capacity(16))
    }
}

let mut b = Batch { items: vec![1, 2, 3] };
let drained = b.flush();

assert_eq!(drained, vec![1, 2, 3]);
assert!(b.items.is_empty());
assert!(b.items.capacity() >= 16); // with_capacity reserves at least 16

Same trick works for swapping in a String::with_capacity(...), a pre-allocated HashMap, or anything where the replacement’s shape is tuned for what comes next.

When to reach for which

mem::take when the type has a cheap, meaningful Default and you don’t care about the leftover. mem::replace when you need to control the replacement — an enum variant, a pre-sized collection, a sentinel value. Both are safe, both are O(1), and both read more clearly than the Option::take / unwrap dance.

98. sort_by_cached_key — Stop Recomputing Expensive Sort Keys

sort_by_key sounds like it computes the key once per element. It doesn’t — it calls your closure at every comparison, so an n-element sort can pay for that key O(n log n) times. If the key is expensive, sort_by_cached_key is the fix you’ve been looking for.

The trap

The signature reads nicely: “sort by this key.” The implementation, less so — the closure fires on every comparison, not once per element:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_key(|s| {
    calls.set(calls.get() + 1);
    // Pretend this is a heavy computation: allocating, hashing,
    // parsing, calling a regex, opening a file, etc.
    s.to_string()
});

// 5 elements, but the key ran way more than 5 times.
assert!(calls.get() > items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

For identity keys that cost a pointer-deref, nobody cares. For anything that allocates (.to_string(), .to_lowercase(), format!(...), a regex capture, a trimmed-and-lowered filename), the cost compounds quickly. I’ve seen a profile where 80% of total runtime was the key closure being called 40,000 times to sort 2,000 items.

The fix

slice::sort_by_cached_key runs your closure exactly once per element, stashes the results in a scratch buffer, then sorts against the cache. This is the Schwartzian transform, wrapped up in a method call:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_cached_key(|s| {
    calls.set(calls.get() + 1);
    s.to_string()
});

// Exactly one call per element — no matter how big the slice is.
assert_eq!(calls.get(), items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

Same result, linear key-function calls. The memory trade is a Vec<(K, usize)> the size of the slice — cheap next to the cost of re-running an allocating closure on every compare.

When to reach for which

The rule is about where your time goes, not how fancy the key looks:

let mut nums = vec![5u32, 2, 8, 1, 9, 3];

// Trivial key: sort_by_key is fine (and avoids the scratch alloc).
nums.sort_by_key(|n| *n);
assert_eq!(nums, [1, 2, 3, 5, 8, 9]);

// Expensive key: sort_by_cached_key wins.
let mut files = vec!["Cargo.TOML", "src/MAIN.rs", "README.md", "build.RS"];
files.sort_by_cached_key(|path| path.to_lowercase());
assert_eq!(files, ["build.RS", "Cargo.TOML", "README.md", "src/MAIN.rs"]);

Use sort_by_key for cheap, Copy-ish keys. Use sort_by_cached_key the moment your closure allocates, hashes, parses, or otherwise does real work — it’s the difference between O(n log n) and O(n) calls to that closure.

#097 Apr 2026

97. Option::take_if — Take the Value Out Only When You Want To

You want to pull a value out of an Option — but only if it meets some condition. Before take_if, that little sentence turned into a five-line dance with as_ref, is_some_and, and a separate take(). Now it’s one call.

The old dance

Say you hold an Option<Session> and you want to evict it if it’s expired, otherwise leave it alone. The naive version keeps ownership juggling in your face:

#[derive(Debug, PartialEq)]
struct Session { id: u32, age: u32 }

let mut slot: Option<Session> = Some(Session { id: 1, age: 45 });

// Old way: peek, decide, then take.
let evicted = if slot.as_ref().is_some_and(|s| s.age > 30) {
    slot.take()
} else {
    None
};

assert_eq!(evicted, Some(Session { id: 1, age: 45 }));
assert!(slot.is_none());

Three lines of control flow just to express “take it if it’s stale.” And if you ever need to inspect the value more deeply, you’re one borrow-checker nudge away from rewriting the whole thing.

Enter take_if

Option::take_if bakes the whole pattern into one method — it runs your predicate on the inner value, and if the predicate returns true, it take()s it for you:

let mut slot: Option<Session> = Some(Session { id: 2, age: 12 });

// Young session — predicate is false, nothing happens.
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted, None);
assert!(slot.is_some());

// Age it and try again.
if let Some(s) = slot.as_mut() { s.age = 99; }
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted.unwrap().id, 2);
assert!(slot.is_none());

One line, one branch, one name for the pattern.

The mutable twist

Here’s the detail that trips people up: the predicate receives &mut T, not &T. You can mutate the inner value before deciding whether to yank it:

let mut counter: Option<u32> = Some(0);

// Bump on every call; take it once it hits the threshold.
for _ in 0..5 {
    let taken = counter.take_if(|n| {
        *n += 1;
        *n >= 3
    });
    if let Some(n) = taken {
        assert_eq!(n, 3);
    }
}

assert!(counter.is_none());

Useful for cache eviction with hit counters, retry-until-exhausted slots, or anywhere “inspect, maybe mutate, maybe remove” shows up. Reach for take_if whenever you find yourself writing if condition { x.take() } else { None }.

96. Result::inspect_err — Log Errors Without Breaking the ? Chain

You want to log an error but still bubble it up with ?. The usual trick is .map_err with a closure that sneaks in a eprintln! and returns the error unchanged. Result::inspect_err does that for you — and reads like what you meant.

The problem: logging mid-chain

You’re happily propagating errors with ?, but somewhere in the pipeline you want to peek at the failure — log it, bump a metric, attach context — without otherwise touching the value:

fn load_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}

fn run(path: &str) -> Result<String, std::io::Error> {
    // We want to log the error here, but still return it.
    let cfg = load_config(path)?;
    Ok(cfg)
}

Adding a println! means breaking the chain, introducing a match, or writing a no-op map_err:

# fn load_config(path: &str) -> Result<String, std::io::Error> {
#     std::fs::read_to_string(path)
# }
fn run(path: &str) -> Result<String, std::io::Error> {
    let cfg = load_config(path).map_err(|e| {
        eprintln!("failed to read {}: {}", path, e);
        e  // have to return the error unchanged — easy to mess up
    })?;
    Ok(cfg)
}

That closure always has to end with e. Miss it and you’re silently changing the error type — or worse, swallowing it.

The fix: inspect_err

Stabilised in Rust 1.76, Result::inspect_err runs a closure on the error by reference and passes the Result through untouched:

# fn load_config(path: &str) -> Result<String, std::io::Error> {
#     std::fs::read_to_string(path)
# }
fn run(path: &str) -> Result<String, std::io::Error> {
    let cfg = load_config(path)
        .inspect_err(|e| eprintln!("failed to read config: {e}"))?;
    Ok(cfg)
}

The closure takes &E, so there’s nothing to return and nothing to get wrong. The value flows straight through to ?.

The Ok side too

There’s a mirror image for the happy path: Result::inspect. Same idea — &T in, nothing out, value preserved:

let parsed: Result<u16, _> = "8080"
    .parse::<u16>()
    .inspect(|port| println!("parsed port: {port}"))
    .inspect_err(|e| eprintln!("parse failed: {e}"));

assert_eq!(parsed, Ok(8080));

Option gets the same treatment with Option::inspect — handy when you want to trace a Some without consuming it.

Why it’s nicer than map_err

map_err is for transforming errors. inspect_err is for observing them. Using the right tool means:

  • No accidental error swallowing — the closure can’t return the wrong type.
  • The intent is obvious at a glance: this is a side effect, not a transformation.
  • It composes cleanly with ?, and_then, and the rest of the Result toolbox.
# fn load_config(path: &str) -> Result<String, std::io::Error> {
#     std::fs::read_to_string(path)
# }
// Chain several observations together without ceremony.
let _ = load_config("/does/not/exist")
    .inspect(|cfg| println!("loaded {} bytes", cfg.len()))
    .inspect_err(|e| eprintln!("load failed: {e}"));

Reach for inspect_err any time you’d otherwise write map_err(|e| { log(&e); e }) — you’ll have one less footgun and one less line.

95. LazyLock::get — Peek at a Lazy Value Without Initializing It

Wanted to know whether a LazyLock has been initialized yet — without causing the initialization by touching it? Rust 1.94 stabilises LazyLock::get and LazyCell::get, which return Option<&T> and leave the closure untouched if it hasn’t fired.

The old pain

Any access that goes through Deref forces the closure to run. So the moment you do *CONFIG — or anything that implicitly derefs — the lazy becomes eager. Before 1.94 there was no way to ask “has this been initialized?” without tripping that wire.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    // Imagine this is slow: reads a file, hits the network, etc.
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // No way to peek without paying the init cost
    let _ = CONFIG.len(); // forces init
    assert_eq!(CONFIG.len(), 2);
}

Deref is convenient enough, but if you want a metric like “was the config ever read?” you’d have to wrap the whole thing in your own AtomicBool.
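That DIY wrapper might be sketched like this (the flag name is invented here):

```rust
use std::sync::LazyLock;
use std::sync::atomic::{AtomicBool, Ordering};

// The DIY flag: set it from inside the init closure, check it later.
static CONFIG_LOADED: AtomicBool = AtomicBool::new(false);
static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    CONFIG_LOADED.store(true, Ordering::Relaxed);
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    assert!(!CONFIG_LOADED.load(Ordering::Relaxed));
    let _ = CONFIG.len(); // forces init, flips the flag
    assert!(CONFIG_LOADED.load(Ordering::Relaxed));
}
```

It works, but every lazy that wants observability needs its own flag and its own store call inside the closure.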

The fix: LazyLock::get

Called as an associated function (LazyLock::get(&lock)), it returns Option<&T> without touching the closure. None means the lazy is still pending. Some(&value) means someone already forced it.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // Not yet initialised — get returns None
    assert!(LazyLock::get(&CONFIG).is_none());

    // Force init via normal deref
    assert_eq!(CONFIG.len(), 2);

    // Now get returns Some(&value)
    let peek = LazyLock::get(&CONFIG).unwrap();
    assert_eq!(peek.len(), 2);
}

Note the call style: LazyLock::get(&CONFIG), not CONFIG.get(). That’s deliberate — method-lookup on the lock itself would go through Deref, which is exactly the thing we’re trying to avoid.

Same story for LazyCell

LazyCell is the single-threaded cousin and gets the same treatment:

use std::cell::LazyCell;

fn main() {
    let greeting = LazyCell::new(|| "Hello, Rust!".to_uppercase());

    // Not forced yet
    assert!(LazyCell::get(&greeting).is_none());

    // Deref triggers the closure
    assert_eq!(*greeting, "HELLO, RUST!");

    // Now we can peek without re-running anything
    assert_eq!(LazyCell::get(&greeting), Some(&"HELLO, RUST!".to_string()));
}

Where this shines

Two patterns fall out naturally.

Metrics and diagnostics. You want to log “did we ever load the config?” at shutdown without accidentally loading it just to find out:

use std::sync::LazyLock;

static CACHE: LazyLock<Vec<u64>> = LazyLock::new(|| (0..1000).collect());

fn was_cache_used() -> bool {
    LazyLock::get(&CACHE).is_some()
}

fn main() {
    assert!(!was_cache_used());
    let _ = CACHE.first(); // touch it
    assert!(was_cache_used());
}

Tests. Assert that the lazy init didn’t happen on a code path that shouldn’t need it — a guarantee that was basically impossible to write before.

When to reach for it

Use LazyLock::get / LazyCell::get whenever you need to ask about initialization state without causing it. For everything else, just deref as usual — that’s still the one-liner that Just Works.

Stabilised in Rust 1.94 (March 2026).

94. ControlFlow::is_break and is_continue — Ask the Flow Which Way It Went

Got a ControlFlow back from try_fold or a visitor and just want to know which variant you’re holding? Before 1.95 you either pattern-matched or reached for matches!. Rust 1.95 adds straightforward .is_break() and .is_continue() methods.

The old pain

ControlFlow<B, C> is the enum that powers short-circuiting iterator methods like try_for_each and try_fold. Once you had one, checking which arm it was took more ceremony than you’d expect:

use std::ops::ControlFlow;

fn first_over(v: &[i32], n: i32) -> ControlFlow<i32> {
    v.iter().try_for_each(|&x| {
        if x > n { ControlFlow::Break(x) } else { ControlFlow::Continue(()) }
    })
}

fn main() {
    let flow = first_over(&[1, 2, 3, 10, 4], 5);
    // Want a bool? Reach for matches!
    let found = matches!(flow, ControlFlow::Break(_));
    assert!(found);
}

The name ControlFlow::Break also collides visually with loop break, so code like this reads a bit awkwardly.

The fix: inherent is_break / is_continue

Rust 1.95 stabilises two one-line accessors, mirroring Result::is_ok / is_err and Option::is_some / is_none:

use std::ops::ControlFlow;

fn first_over(v: &[i32], n: i32) -> ControlFlow<i32> {
    v.iter().try_for_each(|&x| {
        if x > n { ControlFlow::Break(x) } else { ControlFlow::Continue(()) }
    })
}

fn main() {
    let hit = first_over(&[1, 2, 3, 10, 4], 5);
    let miss = first_over(&[1, 2, 3], 5);

    assert!(hit.is_break());
    assert!(!hit.is_continue());

    assert!(miss.is_continue());
    assert!(!miss.is_break());
}

No pattern, no import of a macro, no wildcards.

Handy in counting and filtering

Because the methods take &self, you can pipe results straight through iterator adapters:

use std::ops::ControlFlow;

fn main() {
    let flows: Vec<ControlFlow<&'static str, i32>> = vec![
        ControlFlow::Continue(1),
        ControlFlow::Break("boom"),
        ControlFlow::Continue(2),
        ControlFlow::Break("fire"),
        ControlFlow::Continue(3),
    ];

    let breaks = flows.iter().filter(|f| f.is_break()).count();
    let continues = flows.iter().filter(|f| f.is_continue()).count();

    assert_eq!(breaks, 2);
    assert_eq!(continues, 3);
}

Previously you’d write |f| matches!(f, ControlFlow::Break(_)) — shorter, but noisier and requires you to name the variant (which means a use or a fully-qualified path).

It’s just a bool — but it’s the right bool

Nothing here is groundbreaking: you could always write a matches!. But when Result has is_ok, Option has is_some, and Poll has is_ready, not having the same accessors on ControlFlow meant reaching for a different idiom for no good reason. 1.95 closes that small gap.

Stabilised in Rust 1.95 (April 2026).

93. MaybeUninit Array Conversions — Build Fixed Arrays Without transmute

Ever tried to build a [T; N] element-by-element and ended up reaching for mem::transmute because [MaybeUninit<T>; N] refused to convert? Rust 1.95 stabilises safe conversions between [MaybeUninit<T>; N] and MaybeUninit<[T; N]> — no transmute, no tricks.

The old pain

You want a fully-initialised [T; N], but T isn’t Default (or the init is fallible, or expensive). The canonical pattern is:

  1. Allocate [MaybeUninit<T>; N] uninitialised.
  2. Fill each slot.
  3. Get a [T; N] out the other end.

Step 3 is where it got ugly. MaybeUninit::assume_init only works on MaybeUninit<T>, not [MaybeUninit<T>; N]. To flip the array-of-uninits into uninit-of-array, you reached for mem::transmute — which works, but leans on layout assumptions and carries a big “here be dragons” vibe.
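For reference, the transmute-based step 3 looked roughly like this, shown only to make clear what the new conversions replace, not as a recommendation:

```rust
use std::mem::{self, MaybeUninit};

fn main() {
    // MaybeUninit<u32> is Copy, so the array-repeat syntax works here.
    let mut arr = [MaybeUninit::<u32>::uninit(); 4];
    for (i, slot) in arr.iter_mut().enumerate() {
        slot.write(i as u32 + 1);
    }

    // The old escape hatch: you assert the layout equivalence yourself.
    // Safety: every element was written above.
    let init: [u32; 4] = unsafe { mem::transmute(arr) };
    assert_eq!(init, [1, 2, 3, 4]);
}
```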

The fix: From conversions

Rust 1.95 stabilises both directions of the conversion, so no transmute is needed:

use std::mem::MaybeUninit;

fn main() {
    let arr: [MaybeUninit<u32>; 4] = [
        MaybeUninit::new(1),
        MaybeUninit::new(2),
        MaybeUninit::new(3),
        MaybeUninit::new(4),
    ];

    // [MaybeUninit<T>; N] -> MaybeUninit<[T; N]>
    let packed: MaybeUninit<[u32; 4]> = MaybeUninit::from(arr);
    let init: [u32; 4] = unsafe { packed.assume_init() };

    assert_eq!(init, [1, 2, 3, 4]);
}

MaybeUninit::from takes the array-of-uninits and gives you an uninit-of-array ready to assume_init. Safe, obvious, no layout assumption on your part.

The reverse direction

You can also go the other way — from MaybeUninit<[T; N]> back to [MaybeUninit<T>; N] — useful when you want to touch elements individually:

use std::mem::MaybeUninit;

fn main() {
    let packed: MaybeUninit<[u32; 3]> = MaybeUninit::new([10, 20, 30]);

    // MaybeUninit<[T; N]> -> [MaybeUninit<T>; N]
    let unpacked: [MaybeUninit<u32>; 3] = <[MaybeUninit<u32>; 3]>::from(packed);

    let values: [u32; 3] = unsafe {
        [
            unpacked[0].assume_init(),
            unpacked[1].assume_init(),
            unpacked[2].assume_init(),
        ]
    };
    assert_eq!(values, [10, 20, 30]);
}

Plus AsRef / AsMut for free

The same release adds AsRef<[MaybeUninit<T>; N]> and AsMut<[MaybeUninit<T>; N]> (plus slice versions) for MaybeUninit<[T; N]>. That means you can borrow the uninit array as a slice without any conversion ceremony:

use std::mem::MaybeUninit;

fn fill_evens(buf: &mut [MaybeUninit<u32>]) {
    for (i, slot) in buf.iter_mut().enumerate() {
        slot.write(i as u32 * 2);
    }
}

fn main() {
    let mut packed: MaybeUninit<[u32; 4]> = MaybeUninit::uninit();

    // AsMut<[MaybeUninit<T>]> — borrow as a mutable slice of slots
    fill_evens(packed.as_mut());

    let init: [u32; 4] = unsafe { packed.assume_init() };
    assert_eq!(init, [0, 2, 4, 6]);
}

The helper writes through the AsMut slice view; the caller gets a fully-initialised [u32; 4] after assume_init. No transmute, no pointer casting.

When to reach for it

This is niche — you don’t need it until you’re writing generic collection code, FFI wrappers, or allocator-like APIs that build arrays without Default. But when you do, the new conversions delete an uncomfortable transmute from your codebase and make the intent explicit.

Stabilised in Rust 1.95 (April 2026).