#103 Apr 2026

103. bool::then_some — Turn a Condition Into an Option Without an if

Writing if condition { Some(value) } else { None } for the hundredth time? bool::then_some collapses that whole pattern into a single chainable call.

The familiar boilerplate

You have a bool and you want to lift it into an Option. The naive form works, but it’s noisy and breaks chains:

fn even_double(n: i32) -> Option<i32> {
    if n % 2 == 0 {
        Some(n * 2)
    } else {
        None
    }
}

assert_eq!(even_double(4), Some(8));
assert_eq!(even_double(5), None);

A whole if/else for what is conceptually “if true, wrap it.” It also doesn’t slot into iterator chains — you’d have to wrap it in a closure.

then_some: the eager form

bool::then_some(value) returns Some(value) if the bool is true, None otherwise:

fn even_double(n: i32) -> Option<i32> {
    (n % 2 == 0).then_some(n * 2)
}

assert_eq!(even_double(4), Some(8));
assert_eq!(even_double(5), None);

One line, no branches in sight. The intent reads exactly like the English: if even, then some n * 2.

then: the lazy form

then_some always evaluates its argument — fine for cheap values, wasteful for expensive ones. Use then(|| ...) when the value should only be computed when the condition holds:

fn parse_if_digits(s: &str) -> Option<u64> {
    s.chars().all(|c| c.is_ascii_digit())
        .then(|| s.parse().unwrap())
}

assert_eq!(parse_if_digits("42"), Some(42));
assert_eq!(parse_if_digits("4a2"), None);

The parse only runs when the string is all digits. With then_some(s.parse().unwrap()) you’d unwrap on every call — instant panic on bad input.

Where it really shines: filter_map chains

The big payoff is in iterator pipelines. Filter and transform in one step, no closure-returning-Option boilerplate:

let nums = [1, 2, 3, 4, 5, 6];

let doubled_evens: Vec<i32> = nums
    .iter()
    .filter_map(|&n| (n % 2 == 0).then_some(n * 2))
    .collect();

assert_eq!(doubled_evens, [4, 8, 12]);

Compare to the if/else version inside the closure — it works, but it’s twice as wide and the else None arm adds zero information.

Watch the eager trap

then_some evaluates its argument unconditionally — which means an “obvious” guard like this is actually a panic:

fn first<T: Copy>(items: &[T]) -> Option<T> {
    // BAD: items[0] runs even when the slice is empty.
    // (!items.is_empty()).then_some(items[0])

    // GOOD: defer the indexing into the closure.
    (!items.is_empty()).then(|| items[0])
}

assert_eq!(first(&[10, 20, 30]), Some(10));
assert_eq!(first::<i32>(&[]), None);

The bool short-circuits, the closure doesn’t. Whenever the value can panic, unwrap, or just costs real work, reach for then, not then_some.

When to reach for which

Use then_some(value) when the value is already computed or trivially cheap — a literal, a copy, a field access. Use then(|| value) whenever building the value costs something or could panic. Both are stable (then since Rust 1.50, then_some since 1.62) and both read cleaner than the if/else they replace.

#104 Apr 2026

104. Vec::swap_remove — O(1) Removal When Order Doesn't Matter

Calling Vec::remove(i) shifts every element after i left by one — that’s O(n) per call. If you don’t care about order, swap_remove does the same job in O(1).

The hidden cost of Vec::remove

remove takes the value out of the vector, but to keep the rest in order it has to shift every following element left by one slot. On a long vector, removing near the front is a giant memcpy:

let mut v = vec![1, 2, 3, 4, 5];
let removed = v.remove(1);

assert_eq!(removed, 2);
assert_eq!(v, [1, 3, 4, 5]);

Inside a loop — “remove every item that matches X” — that O(n) cost compounds into O(n²). A tight loop that should be linear silently becomes quadratic.

swap_remove: trade order for speed

swap_remove(i) removes the element at index i by swapping it with the last element, then popping the tail. No shifting, no memcpy chain — just one swap and a pop:

let mut v = vec![1, 2, 3, 4, 5];
let removed = v.swap_remove(1);

assert_eq!(removed, 2);
// 5 took 2's spot — order is gone, but the op was O(1).
assert_eq!(v, [1, 5, 3, 4]);

If your Vec is really a set — entities in a game world, pending jobs in a worker pool, items keyed by id elsewhere — order rarely matters. swap_remove is free.

Removing many items in place

The classic mistake: iterate forward calling remove. swap_remove lets you do it without the O(n²) blow-up. Walk by index and don’t bump the cursor when you remove:

let mut nums = vec![1, 2, 3, 4, 5, 6, 7, 8];

let mut i = 0;
while i < nums.len() {
    if nums[i] % 2 == 0 {
        nums.swap_remove(i);
        // Don't increment — a new element is now at position i.
    } else {
        i += 1;
    }
}

nums.sort(); // canonicalize for the assert
assert_eq!(nums, [1, 3, 5, 7]);

For bulk predicate-based filtering, Vec::retain stays cleaner (and is also O(n)). Reach for swap_remove when you have a specific index you want gone in O(1).

With structs

swap_remove returns the removed value, so you can hand it off to another collection on the way out:

#[derive(PartialEq, Debug)]
struct Job { id: u32 }

let mut queue = vec![
    Job { id: 1 },
    Job { id: 2 },
    Job { id: 3 },
    Job { id: 4 },
];

let pulled = queue.swap_remove(0);

assert_eq!(pulled, Job { id: 1 });
assert_eq!(queue.len(), 3);

When to reach for it

Use swap_remove whenever you’d write remove(i) and the surrounding code doesn’t depend on element order. Order-preserving removal still needs remove, but most “remove one item from this Vec” code is order-agnostic — and swap_remove turns an O(n) pothole into a one-line O(1) win. Stable since Rust 1.0, one of those quietly-essential APIs that’s easy to forget exists.

#102 Apr 2026

102. slice::partition_point — Binary Search That Just Returns the Index

Reaching for binary_search on a sorted Vec and unwrapping Ok(i) | Err(i) because you only ever wanted the index? slice::partition_point skips the Result ceremony and hands you the position directly.

The binary_search annoyance

binary_search is great when you care whether the value was actually found. But often you don’t — you just want the spot where it would go to keep the slice sorted:

let nums = vec![1, 3, 5, 7, 9, 11];
let target = 6;

// Awkward: collapse Ok and Err to a single index.
let pos = match nums.binary_search(&target) {
    Ok(i) | Err(i) => i,
};

assert_eq!(pos, 3);

The Ok | Err pattern works, but it’s noisy and obscures the intent. Worse, it doesn’t generalise — what if you want the insertion point for a predicate, not an exact value?

partition_point to the rescue

partition_point takes a predicate and returns the first index where the predicate flips from true to false. On a sorted slice, that’s the insertion point — no Result, no match arms:

let nums = vec![1, 3, 5, 7, 9, 11];

let pos = nums.partition_point(|&x| x < 6);

assert_eq!(pos, 3); // 6 would slot between 5 and 7

The slice still has to be partitioned (all trues before all falses), but for a sorted slice with a < predicate that’s automatic. Internally it’s still O(log n) binary search — same complexity as binary_search, friendlier API.

Insert while keeping sorted

A common use: keep a Vec sorted as you add to it.

let mut leaderboard = vec![10, 25, 40, 70];
let new_score = 33;

let pos = leaderboard.partition_point(|&x| x < new_score);
leaderboard.insert(pos, new_score);

assert_eq!(leaderboard, [10, 25, 33, 40, 70]);

Compare that to binary_search(&new_score).unwrap_or_else(|i| i) — same result, more ceremony.

Beyond simple ordering

Because it takes any predicate, partition_point works on any slice partitioned by a property — not just sorted-by-Ord. Sorted by a derived key? Filter by a threshold? Same call:

struct Event { day: u32, name: &'static str }

let log = vec![
    Event { day: 1, name: "boot"   },
    Event { day: 2, name: "login"  },
    Event { day: 5, name: "deploy" },
    Event { day: 7, name: "alert"  },
    Event { day: 9, name: "reboot" },
];

// First event on or after day 5.
let i = log.partition_point(|e| e.day < 5);
assert_eq!(log[i].name, "deploy");

// Number of events strictly before day 5.
assert_eq!(log.partition_point(|e| e.day < 5), 2);

That second line is a slick trick: partition_point doubles as “count how many elements satisfy the prefix predicate” in O(log n).

When to reach for it

Any time you find yourself writing binary_search(...).unwrap_or_else(|i| i) or match ... { Ok(i) | Err(i) => i }, swap in partition_point. Stable since Rust 1.52 — old enough to use everywhere, fresh enough that plenty of code still does it the noisy way.

#101 Apr 2026

101. f64::total_cmp — Sort Floats Without the NaN Panic

Tried v.sort() on a Vec<f64> and hit “the trait Ord is not implemented for f64”? Then reached for .sort_by(|a, b| a.partial_cmp(b).unwrap()) and now a stray NaN is about to panic your service at 3am? f64::total_cmp is the one-liner that makes both problems disappear.

Why f64 doesn’t implement Ord

Floats form a partial order because NaN is not equal to anything — not even itself. So f64: PartialOrd but not Ord, which means sort() flat out refuses to compile:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
// temps.sort(); // ❌ the trait `Ord` is not implemented for `f64`

The classic workaround is partial_cmp().unwrap():

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(|a, b| a.partial_cmp(b).unwrap());

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Works — until a NaN sneaks in. Then partial_cmp returns None, the unwrap fires, and your sort becomes a panic.

Enter total_cmp

f64::total_cmp implements the IEEE 754 totalOrder predicate: a real total ordering on every f64 bit pattern, including all the NaNs. It returns Ordering directly — no Option, no panic:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(f64::total_cmp);

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Same result for well-behaved input, but now NaN won’t take the process down:

let mut values: Vec<f64> = vec![3.0, f64::NAN, 1.0, f64::NEG_INFINITY, 2.0];
values.sort_by(f64::total_cmp);

// Finite values in order, -∞ at the front, NaN at the back.
assert_eq!(values[0], f64::NEG_INFINITY);
assert_eq!(values[1], 1.0);
assert_eq!(values[2], 2.0);
assert_eq!(values[3], 3.0);
assert!(values[4].is_nan());

min and max too

partial_cmp poisons more than just sort. Any time you reach for iter().max_by(|a, b| a.partial_cmp(b).unwrap()), you’ve written the same latent panic. total_cmp fits there too:

let readings = [3.2_f64, 1.4, 4.8, 2.1];

let peak = readings.iter().copied().max_by(f64::total_cmp).unwrap();
let low  = readings.iter().copied().min_by(f64::total_cmp).unwrap();

assert_eq!(peak, 4.8);
assert_eq!(low, 1.4);

Sorting structs by a float field

Because total_cmp takes two &f64s and returns Ordering, it slots straight into sort_by:

struct Trade { symbol: &'static str, price: f64 }

let mut book = vec![
    Trade { symbol: "AAPL", price: 172.40 },
    Trade { symbol: "NVDA", price: 915.10 },
    Trade { symbol: "MSFT", price: 419.80 },
];

book.sort_by(|a, b| a.price.total_cmp(&b.price));

assert_eq!(book[0].symbol, "AAPL");
assert_eq!(book[2].symbol, "NVDA");

When to reach for it

Any time you’re about to type partial_cmp(...).unwrap() for a float, stop and use total_cmp instead. f32::total_cmp works the same way. Available since Rust 1.62 — the fix has been hiding in plain sight for years.

#100 Apr 2026

100. std::cmp::Reverse — Sort Descending Without Writing a Closure

Want to sort a Vec in descending order? The old trick is v.sort_by(|a, b| b.cmp(a)), and every time you write it you flip a and b and pray you got it right. std::cmp::Reverse is a one-word replacement.

The closure swap trick

To sort descending, you reverse the comparison. The closure is short but easy to mess up:

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by(|a, b| b.cmp(a));

assert_eq!(scores, [50, 40, 30, 20, 10]);

Swap the a and b and you silently get ascending order again. Not a bug the compiler can help with.

Enter Reverse

std::cmp::Reverse<T> is a tuple wrapper whose Ord impl is the reverse of T’s. Hand it to sort_by_key and you’re done:

use std::cmp::Reverse;

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by_key(|&s| Reverse(s));

assert_eq!(scores, [50, 40, 30, 20, 10]);

No chance of flipping the comparison the wrong way — the intent is right there in the name.

Sorting by a field, descending

It really shines when you’re sorting structs by a specific field:

use std::cmp::Reverse;

#[derive(Debug)]
struct Player {
    name: &'static str,
    score: u32,
}

let mut roster = vec![
    Player { name: "Ferris", score: 42 },
    Player { name: "Corro",  score: 88 },
    Player { name: "Rusty",  score: 65 },
];

roster.sort_by_key(|p| Reverse(p.score));

assert_eq!(roster[0].name, "Corro");
assert_eq!(roster[1].name, "Rusty");
assert_eq!(roster[2].name, "Ferris");

Mixing ascending and descending

Sort by one field ascending and another descending? Pack them in a tuple — Reverse composes cleanly:

use std::cmp::Reverse;

let mut items = vec![
    ("apple",  3),
    ("banana", 1),
    ("apple",  1),
    ("banana", 3),
];

// Name ascending, then count descending.
items.sort_by_key(|&(name, count)| (name, Reverse(count)));

assert_eq!(items, [
    ("apple",  3),
    ("apple",  1),
    ("banana", 3),
    ("banana", 1),
]);

It works anywhere Ord does

Because Reverse<T> is an Ord type, you can use it with BinaryHeap to turn a max-heap into a min-heap, or with BTreeSet to iterate in reverse order — no extra wrapper types needed.

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// BinaryHeap is max-heap by default. Wrap in Reverse for min-heap behaviour.
let mut heap = BinaryHeap::new();
heap.push(Reverse(3));
heap.push(Reverse(1));
heap.push(Reverse(2));

assert_eq!(heap.pop(), Some(Reverse(1)));
assert_eq!(heap.pop(), Some(Reverse(2)));
assert_eq!(heap.pop(), Some(Reverse(3)));

When to reach for it

Any time you’d write b.cmp(a) — reach for Reverse instead. The code reads top-to-bottom, the intent is obvious, and there’s no comparator to accidentally flip.

99. std::mem::replace — Swap a Value and Keep the Old One

mem::take is great until your type doesn’t have a sensible Default. That’s where mem::replace steps in — you pick what gets left behind, and you still get the old value out of a &mut.

The shape of the problem

You can’t move a value out of a &mut T. The borrow checker rightly refuses. mem::take fixes this by swapping in T::default(), but an enum with no obvious default, or a type that deliberately doesn’t implement Default, leaves you stuck.

mem::replace(dest, src) is the escape hatch: it writes src into *dest and hands you back the old value.

use std::mem;

let mut greeting = String::from("Hello");
let old = mem::replace(&mut greeting, String::from("Howdy"));

assert_eq!(old, "Hello");
assert_eq!(greeting, "Howdy");

No clones, no unsafe, no Default required.

State machines without a default variant

This is where replace earns its keep. Picture a connection type where none of the variants makes a natural default — Disconnected is fine here, but it might be Error(e) somewhere else, and #[derive(Default)] would be a lie:

use std::mem;

enum Connection {
    Disconnected,
    Connecting(u32),
    Connected { session: String },
}

fn finalize(conn: &mut Connection) -> Option<String> {
    match mem::replace(conn, Connection::Disconnected) {
        Connection::Connected { session } => Some(session),
        _ => None,
    }
}

let mut c = Connection::Connected { session: String::from("abc123") };
let session = finalize(&mut c);

assert_eq!(session.as_deref(), Some("abc123"));
assert!(matches!(c, Connection::Disconnected));

You get the owned String out of the Connected variant — no cloning the session, no Option<Connection> gymnastics, no unsafe.

Flushing a buffer with a fresh one

mem::take would leave behind an empty Vec with zero capacity. mem::replace lets you pre-size the replacement, which matters if you’re about to refill it:

use std::mem;

struct Batch {
    items: Vec<u32>,
}

impl Batch {
    fn flush(&mut self) -> Vec<u32> {
        mem::replace(&mut self.items, Vec::with_capacity(16))
    }
}

let mut b = Batch { items: vec![1, 2, 3] };
let drained = b.flush();

assert_eq!(drained, vec![1, 2, 3]);
assert!(b.items.is_empty());
assert_eq!(b.items.capacity(), 16);

Same trick works for swapping in a String::with_capacity(...), a pre-allocated HashMap, or anything where the replacement’s shape is tuned for what comes next.

When to reach for which

mem::take when the type has a cheap, meaningful Default and you don’t care about the leftover. mem::replace when you need to control the replacement — an enum variant, a pre-sized collection, a sentinel value. Both are safe, both are O(1), and both read more clearly than the Option::take / unwrap dance.

98. sort_by_cached_key — Stop Recomputing Expensive Sort Keys

sort_by_key sounds like it computes the key once per element. It doesn’t — it calls your closure at every comparison, so an n-element sort can pay for that key O(n log n) times. If the key is expensive, sort_by_cached_key is the fix you’ve been looking for.

The trap

The signature reads nicely: “sort by this key.” The implementation, less so — the closure fires on every comparison, not once per element:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_key(|s| {
    calls.set(calls.get() + 1);
    // Pretend this is a heavy computation: allocating, hashing,
    // parsing, calling a regex, opening a file, etc.
    s.to_string()
});

// 5 elements, but the key ran way more than 5 times.
assert!(calls.get() > items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

For identity keys that cost a pointer-deref, nobody cares. For anything that allocates — .to_string(), .to_lowercase(), format!(...), a regex capture, a trimmed-and-lowered filename — the cost compounds quickly. I’ve seen a profile where 80% of total runtime was the key closure being called 40,000 times to sort 2,000 items.

The fix

slice::sort_by_cached_key runs your closure exactly once per element, stashes the results in a scratch buffer, then sorts against the cache. This is the Schwartzian transform, wrapped up in a method call:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_cached_key(|s| {
    calls.set(calls.get() + 1);
    s.to_string()
});

// Exactly one call per element — no matter how big the slice is.
assert_eq!(calls.get(), items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

Same result, linear key-function calls. The memory trade is a Vec<(K, usize)> the size of the slice — cheap next to the cost of re-running an allocating closure on every compare.

When to reach for which

The rule is about where your time goes, not how fancy the key looks:

let mut nums = vec![5u32, 2, 8, 1, 9, 3];

// Trivial key: sort_by_key is fine (and avoids the scratch alloc).
nums.sort_by_key(|n| *n);
assert_eq!(nums, [1, 2, 3, 5, 8, 9]);

// Expensive key: sort_by_cached_key wins.
let mut files = vec!["Cargo.TOML", "src/MAIN.rs", "README.md", "build.RS"];
files.sort_by_cached_key(|path| path.to_lowercase());
assert_eq!(files, ["build.RS", "Cargo.TOML", "README.md", "src/MAIN.rs"]);

Use sort_by_key for cheap, Copy-ish keys. Use sort_by_cached_key the moment your closure allocates, hashes, parses, or otherwise does real work — it’s the difference between O(n log n) and O(n) calls to that closure.

#097 Apr 2026

97. Option::take_if — Take the Value Out Only When You Want To

You want to pull a value out of an Option — but only if it meets some condition. Before take_if, that little sentence turned into a five-line dance with as_ref, is_some_and, and a separate take(). Now it’s one call.

The old dance

Say you hold an Option<Session> and you want to evict it if it’s expired, otherwise leave it alone. The naive version keeps ownership juggling in your face:

#[derive(Debug, PartialEq)]
struct Session { id: u32, age: u32 }

let mut slot: Option<Session> = Some(Session { id: 1, age: 45 });

// Old way: peek, decide, then take.
let evicted = if slot.as_ref().is_some_and(|s| s.age > 30) {
    slot.take()
} else {
    None
};

assert_eq!(evicted, Some(Session { id: 1, age: 45 }));
assert!(slot.is_none());

Three lines of control flow just to express “take it if it’s stale.” And if you ever need to inspect the value more deeply, you’re one borrow-checker nudge away from rewriting the whole thing.

Enter take_if

Option::take_if bakes the whole pattern into one method — it runs your predicate on the inner value, and if the predicate returns true, it take()s it for you:

let mut slot: Option<Session> = Some(Session { id: 2, age: 12 });

// Young session — predicate is false, nothing happens.
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted, None);
assert!(slot.is_some());

// Age it and try again.
if let Some(s) = slot.as_mut() { s.age = 99; }
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted.unwrap().id, 2);
assert!(slot.is_none());

One line, one branch, one name for the pattern.

The mutable twist

Here’s the detail that trips people up: the predicate receives &mut T, not &T. You can mutate the inner value before deciding whether to yank it:

let mut counter: Option<u32> = Some(0);

// Bump on every call; take it once it hits the threshold.
for _ in 0..5 {
    let taken = counter.take_if(|n| {
        *n += 1;
        *n >= 3
    });
    if let Some(n) = taken {
        assert_eq!(n, 3);
    }
}

assert!(counter.is_none());

Useful for cache eviction with hit counters, retry-until-exhausted slots, or anywhere “inspect, maybe mutate, maybe remove” shows up. Reach for take_if whenever you find yourself writing if condition { x.take() } else { None }.

96. Result::inspect_err — Log Errors Without Breaking the ? Chain

You want to log an error but still bubble it up with ?. The usual trick is .map_err with a closure that sneaks in an eprintln! and returns the error unchanged. Result::inspect_err does that for you — and reads like what you meant.

The problem: logging mid-chain

You’re happily propagating errors with ?, but somewhere in the pipeline you want to peek at the failure — log it, bump a metric, attach context — without otherwise touching the value:

fn load_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}

fn run(path: &str) -> Result<String, std::io::Error> {
    // We want to log the error here, but still return it.
    let cfg = load_config(path)?;
    Ok(cfg)
}

Adding a println! means breaking the chain, introducing a match, or writing a no-op map_err:

fn load_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}
fn run(path: &str) -> Result<String, std::io::Error> {
    let cfg = load_config(path).map_err(|e| {
        eprintln!("failed to read {}: {}", path, e);
        e  // have to return the error unchanged — easy to mess up
    })?;
    Ok(cfg)
}

That closure always has to end with e. Miss it and you’re silently changing the error type — or worse, swallowing it.

The fix: inspect_err

Stabilized in Rust 1.76, Result::inspect_err runs a closure on the error by reference and passes the Result through untouched:

fn load_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}
fn run(path: &str) -> Result<String, std::io::Error> {
    let cfg = load_config(path)
        .inspect_err(|e| eprintln!("failed to read config: {e}"))?;
    Ok(cfg)
}

The closure takes &E, so there’s nothing to return and nothing to get wrong. The value flows straight through to ?.

The Ok side too

There’s a mirror image for the happy path: Result::inspect. Same idea — &T in, nothing out, value preserved:

let parsed: Result<u16, _> = "8080"
    .parse::<u16>()
    .inspect(|port| println!("parsed port: {port}"))
    .inspect_err(|e| eprintln!("parse failed: {e}"));

assert_eq!(parsed, Ok(8080));

Both methods exist on Option too (Option::inspect) — handy when you want to trace a Some without consuming it.

Why it’s nicer than map_err

map_err is for transforming errors. inspect_err is for observing them. Using the right tool means:

  • No accidental error swallowing — the closure can’t return the wrong type.
  • The intent is obvious at a glance: this is a side effect, not a transformation.
  • It composes cleanly with ?, and_then, and the rest of the Result toolbox.
fn load_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}
// Chain several observations together without ceremony.
let _ = load_config("/does/not/exist")
    .inspect(|cfg| println!("loaded {} bytes", cfg.len()))
    .inspect_err(|e| eprintln!("load failed: {e}"));

Reach for inspect_err any time you’d otherwise write map_err(|e| { log(&e); e }) — you’ll have one less footgun and one less line.

95. LazyLock::get — Peek at a Lazy Value Without Initializing It

Wanted to know whether a LazyLock has been initialized yet — without causing the initialization by touching it? Rust 1.94 stabilises LazyLock::get and LazyCell::get, which return Option<&T> and leave the closure untouched if it hasn’t fired.

The old pain

Any access that goes through Deref forces the closure to run. So the moment you do *CONFIG — or anything that implicitly derefs — the lazy becomes eager. Before 1.94 there was no way to ask “has this been initialized?” without tripping that wire.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    // Imagine this is slow: reads a file, hits the network, etc.
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // No way to peek without paying the init cost
    let _ = CONFIG.len(); // forces init
    assert_eq!(CONFIG.len(), 2);
}

Handy enough, but if you want a metric like “was the config ever read?” you’d have to wrap the whole thing in your own AtomicBool.

The fix: LazyLock::get

Called as an associated function (LazyLock::get(&lock)), it returns Option<&T> without touching the closure. None means the lazy is still pending. Some(&value) means someone already forced it.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // Not yet initialised — get returns None
    assert!(LazyLock::get(&CONFIG).is_none());

    // Force init via normal deref
    assert_eq!(CONFIG.len(), 2);

    // Now get returns Some(&value)
    let peek = LazyLock::get(&CONFIG).unwrap();
    assert_eq!(peek.len(), 2);
}

Note the call style: LazyLock::get(&CONFIG), not CONFIG.get(). That’s deliberate — method-lookup on the lock itself would go through Deref, which is exactly the thing we’re trying to avoid.

Same story for LazyCell

LazyCell is the single-threaded cousin and gets the same treatment:

use std::cell::LazyCell;

fn main() {
    let greeting = LazyCell::new(|| "Hello, Rust!".to_uppercase());

    // Not forced yet
    assert!(LazyCell::get(&greeting).is_none());

    // Deref triggers the closure
    assert_eq!(*greeting, "HELLO, RUST!");

    // Now we can peek without re-running anything
    assert_eq!(LazyCell::get(&greeting), Some(&"HELLO, RUST!".to_string()));
}

Where this shines

Two patterns fall out naturally.

Metrics and diagnostics. You want to log “did we ever load the config?” at shutdown without accidentally loading it just to find out:

use std::sync::LazyLock;

static CACHE: LazyLock<Vec<u64>> = LazyLock::new(|| (0..1000).collect());

fn was_cache_used() -> bool {
    LazyLock::get(&CACHE).is_some()
}

fn main() {
    assert!(!was_cache_used());
    let _ = CACHE.first(); // touch it
    assert!(was_cache_used());
}

Tests. Assert that the lazy init didn’t happen on a code path that shouldn’t need it — a guarantee that was basically impossible to write before.

When to reach for it

Use LazyLock::get / LazyCell::get whenever you need to ask about initialization state without causing it. For everything else, just deref as usual — that’s still the one-liner that Just Works.

Stabilised in Rust 1.94 (March 2026).