Std

#111 Apr 2026

111. Vec::insert_mut — Splice In and Edit Without Reindexing

You insert a placeholder, then index back to fix it up. Two lookups, two bounds checks, one wobble. Vec::insert_mut — stable in 1.95 — hands you the &mut T directly.

The classic dance:

let mut v = vec![1, 2, 4, 5];
v.insert(2, 0);
v[2] = 3; // recompute the index, second bounds check
assert_eq!(v, [1, 2, 3, 4, 5]);

insert_mut returns the slot:

let mut v = vec![1, 2, 4, 5];
let slot = v.insert_mut(2, 0);
*slot = 3;
assert_eq!(v, [1, 2, 3, 4, 5]);

This shines when the value is built in pieces — push a default, then fill it in based on where it landed:

#[derive(Default, Debug, PartialEq)]
struct Job { id: u32, label: String }

let mut jobs: Vec<Job> = vec![Job { id: 1, label: "build".into() }];
let j = jobs.insert_mut(0, Job::default());
j.id = 0;
j.label = "setup".into();

assert_eq!(jobs[0], Job { id: 0, label: String::from("setup") });
assert_eq!(jobs[1].label, "build");

Rust 1.95 grew the whole _mut family alongside Vec::push_mut: Vec::insert_mut, VecDeque::push_front_mut, VecDeque::push_back_mut, VecDeque::insert_mut, LinkedList::push_front_mut, and LinkedList::push_back_mut. Same idea everywhere — the place-it method now returns a mutable reference to the slot it just placed.

use std::collections::VecDeque;

let mut q: VecDeque<i32> = VecDeque::from([2, 3]);
let head = q.push_front_mut(0);
*head += 1; // now 1
assert_eq!(q, VecDeque::from([1, 2, 3]));

Quietly useful, no churn — just fewer indices floating around.

#110 Apr 2026

110. slice::split_at_checked — Split Without the Panic

slice.split_at(i) panics the second i > len. The usual fix is a length check wrapped around the call so you don’t blow up on a bad index. split_at_checked does the same job in one call and hands you an Option.

The classic trap — a single bad index away from a panic:

let xs = [1, 2, 3, 4];
let (head, tail) = xs.split_at(10); // panics: index 10 is out of bounds (len is 4)

The defensive version everyone writes:

let xs = [1, 2, 3, 4];
let i = 10;

if i <= xs.len() {
    let (head, tail) = xs.split_at(i);
    // ...use head and tail
} else {
    // handle out-of-bounds
}

Two reads of i, one easy off-by-one (< vs <=), and a panic waiting if you ever drop the guard.

Rust 1.80 stabilised split_at_checked (and split_at_mut_checked), which folds the bounds check into the return type:

let xs = [1, 2, 3, 4];

assert_eq!(xs.split_at_checked(2), Some((&xs[..2], &xs[2..])));
assert_eq!(xs.split_at_checked(4), Some((&xs[..], &[][..]))); // boundary is fine
assert_eq!(xs.split_at_checked(5), None);                     // would have panicked

Now the bounds check is the API. You get an Option<(&[T], &[T])> and the compiler nudges you to handle the None case:

fn take_prefix(buf: &[u8], n: usize) -> Option<&[u8]> {
    let (head, _rest) = buf.split_at_checked(n)?;
    Some(head)
}

assert_eq!(take_prefix(b"hello", 3), Some(&b"hel"[..]));
assert_eq!(take_prefix(b"hi", 3), None);

? does the bailout, no manual length check, no panic path. This works on &str too, where the index has to land on a UTF-8 boundary — and it returns None if it doesn’t, instead of panicking.
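
A quick check on a str with a multi-byte character ("é" is two bytes, so index 2 lands inside it):

let s = "héllo";

assert_eq!(s.split_at_checked(3), Some(("hé", "llo")));
assert_eq!(s.split_at_checked(2), None); // not a char boundary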

109. BinaryHeap::peek_mut — Edit the Top Without Pop-and-Push

Updating the largest element in a BinaryHeap shouldn’t take two heap operations. peek_mut lets you mutate it in place and re-heapifies on drop — one sift instead of two.

The pop-and-push dance

Say you keep tasks in a max-heap by priority and you need to bump the top one down. The naive recipe is to pop, change it, and push it back:

use std::collections::BinaryHeap;

let mut heap = BinaryHeap::from([3, 1, 5, 2, 4]);

// Pull the max out, edit it, push it back.
let top = heap.pop().unwrap();
heap.push(top - 10);

assert_eq!(heap.peek(), Some(&4));

That’s two O(log n) heap operations — a sift-down for pop, a sift-up for push — plus an awkward two-step where the heap briefly forgets it ever had a max.

peek_mut does it in one shot

peek_mut returns a PeekMut guard that derefs to the top element. You mutate it in place, and the heap fixes itself with a single sift-down when the guard is dropped:

use std::collections::BinaryHeap;

let mut heap = BinaryHeap::from([3, 1, 5, 2, 4]);

if let Some(mut top) = heap.peek_mut() {
    *top -= 10; // 5 becomes -5
}
// PeekMut dropped here — heap re-heapifies once.

assert_eq!(heap.peek(), Some(&4));

One traversal, no temporary owned value, and the heap invariant is restored before the next line of code runs.

Conditionally pop without a re-peek

PeekMut also has an associated pop function. Look at the top, decide whether to keep or remove it, all without peeking twice:

use std::collections::{BinaryHeap, binary_heap::PeekMut};

let mut heap = BinaryHeap::from([10, 7, 4, 1]);

if let Some(top) = heap.peek_mut() {
    if *top > 5 {
        // Pop only when the top passes a check.
        PeekMut::pop(top);
    }
}

assert_eq!(heap.peek(), Some(&7));

Same shape as Vec::pop_if, but for the heap’s max — pull the top out only when it actually meets your condition.

When to reach for it

Any time you’d write pop().push() to edit the max, peek_mut is the better tool: one heap fixup instead of two, and the borrow stays inside the heap so you don’t shuffle ownership for nothing. Great fit for priority queues that adjust the top entry — schedulers, event loops, top-k trackers.

#108 Apr 2026

108. Iterator::max_by_key — Find the Biggest Without Sorting the Whole Thing

Need the longest string in a Vec? Don’t sort the whole list to grab the last element — max_by_key walks once, allocates nothing, and returns it directly.

The wasteful way

You want the longest word from a list. The first instinct is often to sort and pick:

let mut words = vec!["fig", "apple", "kiwi", "blackberry", "pear"];

// Sort the full vec by length, then grab the last element.
words.sort_by_key(|w| w.len());
let longest = words.last().unwrap();

assert_eq!(*longest, "blackberry");

That’s O(n log n) work — and a side effect, since your Vec is now reordered — just to answer “which one is biggest?”.

Enter max_by_key

max_by_key walks the iterator once, tracks the running maximum, and hands you the element it picked:

let words = vec!["fig", "apple", "kiwi", "blackberry", "pear"];

let longest = words.iter().max_by_key(|w| w.len()).unwrap();

assert_eq!(*longest, "blackberry");

O(n), no allocation, no mutation. The original Vec is untouched and you get a &&str back without cloning anything.

Same energy: min_by_key

The mirror method is just as useful — find the shortest word in one pass:

let words = vec!["fig", "apple", "kiwi", "blackberry", "pear"];

let shortest = words.iter().min_by_key(|w| w.len()).unwrap();

assert_eq!(*shortest, "fig");

Picking by computed value

The closure can do real work — compute distance, score, age — anything that returns something Ord:

struct Player { name: &'static str, score: u32 }

let roster = [
    Player { name: "Ferris", score: 80 },
    Player { name: "Corro",  score: 95 },
    Player { name: "Crusty", score: 60 },
];

let top = roster.iter().max_by_key(|p| p.score).unwrap();

assert_eq!(top.name, "Corro");

No need to sort, no need to implement Ord on the struct — just point at the field that decides.

Tiebreaks: last one wins

When two elements share the same key, max_by_key returns the last one and min_by_key returns the first:

let v = vec![(1, 'a'), (3, 'b'), (3, 'c'), (2, 'd')];

let max = v.iter().max_by_key(|(k, _)| *k).unwrap();
let min = v.iter().min_by_key(|(k, _)| *k).unwrap();

assert_eq!(*max, (3, 'c')); // last 3
assert_eq!(*min, (1, 'a')); // first 1

Worth knowing if your data has duplicates — pick the iteration direction so the right element wins.

When to reach for it

If you catch yourself sorting and then grabbing first(), last(), or [0], stop. min_by_key and max_by_key are the targeted tools: one pass, zero allocations, and the result type is exactly the element you wanted.

#107 Apr 2026

107. Vec::splice — Replace a Range and Keep the Old Values

Need to swap a chunk of a Vec for a different number of items? Most people reach for drain plus extend and a bunch of bookkeeping. Vec::splice does the whole thing in one call — and even hands back what it removed.

The clunky way

You want to replace v[1..4] with two different values. Without splice, you end up juggling indices:

let mut v = vec![1, 2, 3, 4, 5];

// Remove the middle, save the tail, then stitch it back together.
let tail: Vec<_> = v.drain(4..).collect();
v.truncate(1);
v.extend([10, 20]);
v.extend(tail);

assert_eq!(v, [1, 10, 20, 5]);

Three steps, one temporary Vec, and you didn’t even capture what was removed. Easy to get an off-by-one, easy to get wrong.

Enter splice

splice takes a range and an iterator and swaps them in place — the replacement can be any length:

let mut v = vec![1, 2, 3, 4, 5];

v.splice(1..4, [10, 20]);

assert_eq!(v, [1, 10, 20, 5]);

One call. No tail buffer. Works whether the replacement is shorter, longer, or the same size as the range.

It returns the removed items too

splice is a draining iterator — collect it and you get the elements that were taken out:

let mut v = vec![10, 20, 30, 40, 50];

let removed: Vec<_> = v.splice(1..4, [99]).collect();

assert_eq!(removed, [20, 30, 40]);
assert_eq!(v, [10, 99, 50]);

Perfect for “swap this section, but undo it later” patterns.
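
A sketch of that swap-then-restore shape:

let mut v = vec![1, 2, 3, 4, 5];

// Swap out the middle...
let saved: Vec<_> = v.splice(1..4, [0]).collect();
assert_eq!(v, [1, 0, 5]);

// ...and splice it back later.
v.splice(1..2, saved);
assert_eq!(v, [1, 2, 3, 4, 5]);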

Insert without removing

Pass an empty range and splice becomes a multi‑item insert:

let mut v = vec![1, 5];

v.splice(1..1, [2, 3, 4]);

assert_eq!(v, [1, 2, 3, 4, 5]);

That’s far nicer than calling insert in a loop and watching every push shift the tail again.

When to reach for it

Any time you’d write drain(..) followed by extend(..) or a tower of insert calls, try splice first. One method, one allocation pattern, and the removed items come along for free.

#106 Apr 2026

106. Iterator::by_ref — Take Part of an Iterator and Keep the Rest

Called iter.take(n).collect() and watched the borrow checker swallow the whole iterator? by_ref lets you consume a chunk and keep iterating from where you stopped.

The trap: adapters consume their iterator

Most iterator adapters take self, not &mut self. The moment you write iter.take(2), you have moved iter into the new Take adapter. The original binding is gone:

let v = vec![1, 2, 3, 4, 5];
let mut iter = v.into_iter();

let _first_two: Vec<i32> = iter.take(2).collect();
// let rest: Vec<i32> = iter.collect();
// ^^ error: use of moved value: `iter`

So even though there are clearly elements left, the variable that points at them is no longer usable.

The fix: iter.by_ref()

Iterator::by_ref returns a &mut Self — and &mut I itself implements Iterator whenever I: Iterator. Adapters chained off by_ref() consume from the underlying iterator without moving it:

let v = vec![1, 2, 3, 4, 5];
let mut iter = v.into_iter();

let first_two: Vec<i32> = iter.by_ref().take(2).collect();
let rest: Vec<i32> = iter.collect();

assert_eq!(first_two, [1, 2]);
assert_eq!(rest, [3, 4, 5]);

Same iterator, two phases, no clones, no indexing.

A real use: parse a header, then a body

The pattern shines when the front of a stream uses different rules than the rest. Pull lines until you hit a blank one, then hand the same iterator to the body parser:

let input = "Subject: hi\nFrom: a@b\n\nbody line 1\nbody line 2";
let mut lines = input.lines();

let headers: Vec<&str> = lines
    .by_ref()
    .take_while(|l| !l.is_empty())
    .collect();

let body: Vec<&str> = lines.collect();

assert_eq!(headers, ["Subject: hi", "From: a@b"]);
assert_eq!(body, ["body line 1", "body line 2"]);

Without by_ref, take_while would have consumed lines whole and the body would be unreachable.

Gotcha: take_while peeks one past

take_while reads the first non-matching element to know when to stop — and that element is gone from the underlying iterator. With by_ref you’ll see this directly:

let mut nums = 1..;

let small: Vec<i32> = nums.by_ref().take_while(|&n| n < 3).collect();
let next = nums.next();

assert_eq!(small, [1, 2]);
assert_eq!(next, Some(4)); // 3 was eaten by take_while

If you need to keep the boundary element, reach for Peekable instead.

When to use it

Any time you want to apply an adapter to part of an iterator and continue afterwards: chunked parsing, header/body splits, “take the first N then process the rest”, consuming until a sentinel. by_ref() turns “I moved my iterator and now I can’t use it” into a single extra method call.

#105 Apr 2026

105. Vec::extend_from_within — Duplicate a Range Without Fighting the Borrow Checker

Trying to copy a slice of a Vec back onto the end of itself? v.extend_from_slice(&v[..k]) won’t compile — &v and &mut v collide. extend_from_within is the one-call answer.

The borrow checker says no

It looks innocent: take the first few bytes of a buffer and append them to the end. The naïve write doesn’t even reach the type checker without complaint:

let mut buf: Vec<u8> = vec![0xCA, 0xFE, 0xBA, 0xBE];
// buf.extend_from_slice(&buf[..2]);
// ^^ error[E0502]: cannot borrow `buf` as mutable because it is also borrowed as immutable

extend_from_slice needs &mut self, and &buf[..2] is an immutable borrow of the same Vec. Two borrows, one of them mutable, same value — instant rejection.

The usual workarounds bloat the call site. Either make a temporary copy, or reach for unsafe and a raw pointer dance:

let mut buf: Vec<u8> = vec![0xCA, 0xFE, 0xBA, 0xBE];

let head: Vec<u8> = buf[..2].to_vec();   // extra allocation
buf.extend_from_slice(&head);

assert_eq!(buf, [0xCA, 0xFE, 0xBA, 0xBE, 0xCA, 0xFE]);

A whole heap allocation just to copy two bytes inside a vector you already own. There’s a better way.

extend_from_within — one call, no temp

Vec::extend_from_within takes a range into self and appends those elements. No second buffer, no unsafe, no fight with the borrow checker:

let mut buf: Vec<u8> = vec![0xCA, 0xFE, 0xBA, 0xBE];
buf.extend_from_within(..2);

assert_eq!(buf, [0xCA, 0xFE, 0xBA, 0xBE, 0xCA, 0xFE]);

It accepts any RangeBounds<usize> (..k, i..j, i..=j, ..), so you can grab whatever slice you want, including the whole Vec:

let mut v = vec![1, 2, 3];
v.extend_from_within(..);   // double the contents

assert_eq!(v, [1, 2, 3, 1, 2, 3]);

It requires T: Clone, and clones each element exactly once. For Copy types like primitives it lowers to a single memcpy.
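
A quick sketch with a non-Copy type, where each element is cloned once:

let mut tags = vec![String::from("a"), String::from("b")];
tags.extend_from_within(..1); // clones "a"

assert_eq!(tags, ["a", "b", "a"]);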

A real use: building patterns

It’s especially nice for building repeating or growing patterns where each step depends on the previous one — think DSP buffers, simple test fixtures, or doubling tricks:

let mut data = vec![1u32];
for _ in 0..4 {
    data.extend_from_within(..); // double in place
}

assert_eq!(data.len(), 16);
assert!(data.iter().all(|&x| x == 1));

When to reach for it

Any time you’d write v.extend_from_slice(&v[range]) and the borrow checker stops you, swap in v.extend_from_within(range). Stable since Rust 1.53, exists for both Vec<T> and VecDeque<T>, and quietly turns a frustrating compile error into a one-liner.

#103 Apr 2026

103. bool::then_some — Turn a Condition Into an Option Without an if

Writing if condition { Some(value) } else { None } for the hundredth time? bool::then_some collapses that whole pattern into a single chainable call.

The familiar boilerplate

You have a bool and you want to lift it into an Option. The naive form works, but it’s noisy and breaks chains:

fn even_double(n: i32) -> Option<i32> {
    if n % 2 == 0 {
        Some(n * 2)
    } else {
        None
    }
}

assert_eq!(even_double(4), Some(8));
assert_eq!(even_double(5), None);

Three lines and an if/else for what is conceptually “if true, wrap it.” It also doesn’t slot into iterator chains — you’d have to wrap it in a closure.

then_some: the eager form

bool::then_some(value) returns Some(value) if the bool is true, None otherwise:

fn even_double(n: i32) -> Option<i32> {
    (n % 2 == 0).then_some(n * 2)
}

assert_eq!(even_double(4), Some(8));
assert_eq!(even_double(5), None);

One line, no branches in sight. The intent reads exactly like the English: if even, then some n * 2.

then: the lazy form

then_some always evaluates its argument — fine for cheap values, wasteful for expensive ones. Use then(|| ...) when the value should only be computed when the condition holds:

fn parse_if_digits(s: &str) -> Option<u64> {
    s.chars().all(|c| c.is_ascii_digit())
        .then(|| s.parse().unwrap())
}

assert_eq!(parse_if_digits("42"), Some(42));
assert_eq!(parse_if_digits("4a2"), None);

The parse only runs when the string is all digits. With then_some(s.parse().unwrap()) you’d unwrap on every call — instant panic on bad input.

Where it really shines: filter_map chains

The big payoff is in iterator pipelines. Filter and transform in one step, no closure-returning-Option boilerplate:

let nums = [1, 2, 3, 4, 5, 6];

let doubled_evens: Vec<i32> = nums
    .iter()
    .filter_map(|&n| (n % 2 == 0).then_some(n * 2))
    .collect();

assert_eq!(doubled_evens, [4, 8, 12]);

Compare to the if/else version inside the closure — it works, but it’s twice as wide and the else None arm adds zero information.

Watch the eager trap

then_some evaluates its argument unconditionally — which means an “obvious” guard like this is actually a panic:

fn first<T: Copy>(items: &[T]) -> Option<T> {
    // BAD: items[0] runs even when the slice is empty.
    // (!items.is_empty()).then_some(items[0])

    // GOOD: defer the indexing into the closure.
    (!items.is_empty()).then(|| items[0])
}

assert_eq!(first(&[10, 20, 30]), Some(10));
assert_eq!(first::<i32>(&[]), None);

The bool short-circuits, the closure doesn’t. Whenever the value can panic, unwrap, or just costs real work, reach for then, not then_some.

When to reach for which

Use then_some(value) when the value is already computed or trivially cheap — a literal, a copy, a field access. Use then(|| value) whenever building the value costs something or could panic. Both are stable (then since Rust 1.50, then_some since 1.62) and both read cleaner than the if/else they replace.

#104 Apr 2026

104. Vec::swap_remove — O(1) Removal When Order Doesn't Matter

Calling Vec::remove(i) shifts every element after i left by one — that’s O(n) per call. If you don’t care about order, swap_remove does the same job in O(1).

The hidden cost of Vec::remove

remove takes the value out of the vector, but to keep the rest in order it has to shift every following element left by one slot. On a long vector, removing near the front is a giant memcpy:

let mut v = vec![1, 2, 3, 4, 5];
let removed = v.remove(1);

assert_eq!(removed, 2);
assert_eq!(v, [1, 3, 4, 5]);

Inside a loop — “remove every item that matches X” — that O(n) cost compounds into O(n²). A tight loop that should be linear silently becomes quadratic.

swap_remove: trade order for speed

swap_remove(i) removes the element at index i by swapping it with the last element, then popping the tail. No shifting, no memcpy chain — just one swap and a pop:

let mut v = vec![1, 2, 3, 4, 5];
let removed = v.swap_remove(1);

assert_eq!(removed, 2);
// 5 took 2's spot — order is gone, but the op was O(1).
assert_eq!(v, [1, 5, 3, 4]);

If your Vec is really a set — entities in a game world, pending jobs in a worker pool, items keyed by id elsewhere — order rarely matters. swap_remove is free.

Removing many items in place

The classic mistake: iterate forward calling remove. swap_remove lets you do it without the O(n²) blow-up. Walk by index and don’t bump the cursor when you remove:

let mut nums = vec![1, 2, 3, 4, 5, 6, 7, 8];

let mut i = 0;
while i < nums.len() {
    if nums[i] % 2 == 0 {
        nums.swap_remove(i);
        // Don't increment — a new element is now at position i.
    } else {
        i += 1;
    }
}

nums.sort(); // canonicalize for the assert
assert_eq!(nums, [1, 3, 5, 7]);

For bulk predicate-based filtering, Vec::retain stays cleaner (and is also O(n)). Reach for swap_remove when you have a specific index you want gone in O(1).
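
For comparison, the retain version of the same filter:

let mut nums = vec![1, 2, 3, 4, 5, 6, 7, 8];
nums.retain(|n| n % 2 != 0); // keep the odds; one pass, order preserved

assert_eq!(nums, [1, 3, 5, 7]);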

With structs

swap_remove returns the removed value, so you can hand it off to another collection on the way out:

#[derive(PartialEq, Debug)]
struct Job { id: u32 }

let mut queue = vec![
    Job { id: 1 },
    Job { id: 2 },
    Job { id: 3 },
    Job { id: 4 },
];

let pulled = queue.swap_remove(0);

assert_eq!(pulled, Job { id: 1 });
assert_eq!(queue.len(), 3);

When to reach for it

Use swap_remove whenever you’d write remove(i) and the surrounding code doesn’t depend on element order. Order-preserving removal still needs remove, but most “remove one item from this Vec” code is order-agnostic — and swap_remove turns an O(n) pothole into a one-line O(1) win. Stable since Rust 1.0, one of those quietly-essential APIs that’s easy to forget exists.

#102 Apr 2026

102. slice::partition_point — Binary Search That Just Returns the Index

Reaching for binary_search on a sorted Vec and unwrapping Ok(i) | Err(i) because you only ever wanted the index? slice::partition_point skips the Result ceremony and hands you the position directly.

The binary_search annoyance

binary_search is great when you care whether the value was actually found. But often you don’t — you just want the spot where it would go to keep the slice sorted:

let nums = vec![1, 3, 5, 7, 9, 11];
let target = 6;

// Awkward: collapse Ok and Err to a single index.
let pos = match nums.binary_search(&target) {
    Ok(i) | Err(i) => i,
};

assert_eq!(pos, 3);

The Ok | Err pattern works, but it’s noisy and obscures the intent. Worse, it doesn’t generalise — what if you want the insertion point for a predicate, not an exact value?

partition_point to the rescue

partition_point takes a predicate and returns the first index where the predicate flips from true to false. On a sorted slice, that’s the insertion point — no Result, no match arms:

let nums = vec![1, 3, 5, 7, 9, 11];

let pos = nums.partition_point(|&x| x < 6);

assert_eq!(pos, 3); // 6 would slot between 5 and 7

The slice still has to be partitioned (all trues before all falses), but for a sorted slice with a < predicate that’s automatic. Internally it’s still O(log n) binary search — same complexity as binary_search, friendlier API.
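
Sorted order is sufficient but not necessary; any partitioned slice works:

let v = [2, 4, 6, 1, 3]; // not sorted, but all evens before all odds
let boundary = v.partition_point(|&x| x % 2 == 0);

assert_eq!(boundary, 3);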

Insert while keeping sorted

A common use: keep a Vec sorted as you add to it.

let mut leaderboard = vec![10, 25, 40, 70];
let new_score = 33;

let pos = leaderboard.partition_point(|&x| x < new_score);
leaderboard.insert(pos, new_score);

assert_eq!(leaderboard, [10, 25, 33, 40, 70]);

Compare that to binary_search(&new_score).unwrap_or_else(|i| i) — same result, more ceremony.

Beyond simple ordering

Because it takes any predicate, partition_point works on any slice partitioned by a property — not just sorted-by-Ord. Sorted by a derived key? Filter by a threshold? Same call:

struct Event { day: u32, name: &'static str }

let log = vec![
    Event { day: 1, name: "boot"   },
    Event { day: 2, name: "login"  },
    Event { day: 5, name: "deploy" },
    Event { day: 7, name: "alert"  },
    Event { day: 9, name: "reboot" },
];

// First event on or after day 5.
let i = log.partition_point(|e| e.day < 5);
assert_eq!(log[i].name, "deploy");

// Number of events strictly before day 5.
assert_eq!(log.partition_point(|e| e.day < 5), 2);

That second line is a slick trick: partition_point doubles as “count how many elements satisfy the prefix predicate” in O(log n).

When to reach for it

Any time you find yourself writing binary_search(...).unwrap_or_else(|i| i) or match ... { Ok(i) | Err(i) => i }, swap in partition_point. Stable since Rust 1.52 — old enough to use everywhere, fresh enough that plenty of code still does it the noisy way.

#101 Apr 2026

101. f64::total_cmp — Sort Floats Without the NaN Panic

Tried v.sort() on a Vec<f64> and hit “the trait Ord is not implemented for f64”? Then reached for .sort_by(|a, b| a.partial_cmp(b).unwrap()) and now a stray NaN is about to panic your service at 3am? f64::total_cmp is the one-liner that makes both problems disappear.

Why f64 doesn’t implement Ord

Floats form a partial order because NaN is not equal to anything — not even itself. So f64: PartialOrd but not Ord, which means sort() flat out refuses to compile:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
// temps.sort(); // ❌ the trait `Ord` is not implemented for `f64`

The classic workaround is partial_cmp().unwrap():

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(|a, b| a.partial_cmp(b).unwrap());

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Works — until a NaN sneaks in. Then partial_cmp returns None, the unwrap fires, and your sort becomes a panic.
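
The failure mode, spelled out:

let mut values = vec![3.0, f64::NAN, 1.0];
// values.sort_by(|a, b| a.partial_cmp(b).unwrap());
// ^^ panics the first time a comparison involves the NaN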

Enter total_cmp

f64::total_cmp implements the IEEE 754 totalOrder predicate: a real total ordering on every f64 bit pattern, including all the NaNs. It returns Ordering directly — no Option, no panic:

let mut temps: Vec<f64> = vec![21.5, 18.0, 23.0, 19.5];
temps.sort_by(f64::total_cmp);

assert_eq!(temps, [18.0, 19.5, 21.5, 23.0]);

Same result for well-behaved input, but now NaN won’t take the process down:

let mut values: Vec<f64> = vec![3.0, f64::NAN, 1.0, f64::NEG_INFINITY, 2.0];
values.sort_by(f64::total_cmp);

// Finite values in order, -∞ at the front, NaN at the back.
assert_eq!(values[0], f64::NEG_INFINITY);
assert_eq!(values[1], 1.0);
assert_eq!(values[2], 2.0);
assert_eq!(values[3], 3.0);
assert!(values[4].is_nan());

min and max too

partial_cmp poisons more than just sort. Any time you reach for iter().max_by(|a, b| a.partial_cmp(b).unwrap()), you’ve written the same latent panic. total_cmp fits there too:

let readings = [3.2_f64, 1.4, 4.8, 2.1];

let peak = readings.iter().copied().max_by(f64::total_cmp).unwrap();
let low  = readings.iter().copied().min_by(f64::total_cmp).unwrap();

assert_eq!(peak, 4.8);
assert_eq!(low, 1.4);

Sorting structs by a float field

Because total_cmp takes two &f64s and returns Ordering, it slots straight into sort_by:

struct Trade { symbol: &'static str, price: f64 }

let mut book = vec![
    Trade { symbol: "AAPL", price: 172.40 },
    Trade { symbol: "NVDA", price: 915.10 },
    Trade { symbol: "MSFT", price: 419.80 },
];

book.sort_by(|a, b| a.price.total_cmp(&b.price));

assert_eq!(book[0].symbol, "AAPL");
assert_eq!(book[2].symbol, "NVDA");

When to reach for it

Any time you’re about to type partial_cmp(...).unwrap() for a float, stop and use total_cmp instead. f32::total_cmp works the same way. Available since Rust 1.62 — the fix has been hiding in plain sight for years.

#100 Apr 2026

100. std::cmp::Reverse — Sort Descending Without Writing a Closure

Want to sort a Vec in descending order? The old trick is v.sort_by(|a, b| b.cmp(a)), and every time you write it you flip a and b and pray you got it right. std::cmp::Reverse is a one-word replacement.

The closure swap trick

To sort descending, you reverse the comparison. The closure is short but easy to mess up:

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by(|a, b| b.cmp(a));

assert_eq!(scores, [50, 40, 30, 20, 10]);

Swap the a and b and you silently get ascending order again. Not a bug the compiler can help with.

Enter Reverse

std::cmp::Reverse<T> is a tuple wrapper whose Ord impl is the reverse of T’s. Hand it to sort_by_key and you’re done:

use std::cmp::Reverse;

let mut scores = vec![30, 10, 50, 20, 40];
scores.sort_by_key(|&s| Reverse(s));

assert_eq!(scores, [50, 40, 30, 20, 10]);

No chance of flipping the comparison the wrong way — the intent is right there in the name.

Sorting by a field, descending

It really shines when you’re sorting structs by a specific field:

use std::cmp::Reverse;

#[derive(Debug)]
struct Player {
    name: &'static str,
    score: u32,
}

let mut roster = vec![
    Player { name: "Ferris", score: 42 },
    Player { name: "Corro",  score: 88 },
    Player { name: "Rusty",  score: 65 },
];

roster.sort_by_key(|p| Reverse(p.score));

assert_eq!(roster[0].name, "Corro");
assert_eq!(roster[1].name, "Rusty");
assert_eq!(roster[2].name, "Ferris");

Mixing ascending and descending

Sort by one field ascending and another descending? Pack them in a tuple — Reverse composes cleanly:

use std::cmp::Reverse;

let mut items = vec![
    ("apple",  3),
    ("banana", 1),
    ("apple",  1),
    ("banana", 3),
];

// Name ascending, then count descending.
items.sort_by_key(|&(name, count)| (name, Reverse(count)));

assert_eq!(items, [
    ("apple",  3),
    ("apple",  1),
    ("banana", 3),
    ("banana", 1),
]);

It works anywhere Ord does

Because Reverse<T> is an Ord type, you can use it with BinaryHeap to turn a max-heap into a min-heap, or with BTreeSet to iterate in reverse order — no extra wrapper types needed.

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// BinaryHeap is max-heap by default. Wrap in Reverse for min-heap behaviour.
let mut heap = BinaryHeap::new();
heap.push(Reverse(3));
heap.push(Reverse(1));
heap.push(Reverse(2));

assert_eq!(heap.pop(), Some(Reverse(1)));
assert_eq!(heap.pop(), Some(Reverse(2)));
assert_eq!(heap.pop(), Some(Reverse(3)));
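
And the BTreeSet trick mentioned above, as a small sketch:

use std::cmp::Reverse;
use std::collections::BTreeSet;

let set: BTreeSet<Reverse<i32>> = [3, 1, 2].into_iter().map(Reverse).collect();
let descending: Vec<i32> = set.into_iter().map(|Reverse(x)| x).collect();

assert_eq!(descending, [3, 2, 1]); // largest first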

When to reach for it

Any time you’d write b.cmp(a) — reach for Reverse instead. The code reads top-to-bottom, the intent is obvious, and there’s no comparator to accidentally flip.

98. sort_by_cached_key — Stop Recomputing Expensive Sort Keys

sort_by_key sounds like it computes the key once per element. It doesn’t — it calls your closure at every comparison, so an n-element sort can pay for that key O(n log n) times. If the key is expensive, sort_by_cached_key is the fix you’ve been looking for.

The trap

The signature reads nicely: “sort by this key.” The implementation, less so — the closure fires on every comparison, not once per element:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_key(|s| {
    calls.set(calls.get() + 1);
    // Pretend this is a heavy computation: allocating, hashing,
    // parsing, calling a regex, opening a file, etc.
    s.to_string()
});

// 5 elements, but the key ran way more than 5 times.
assert!(calls.get() > items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

For identity keys that cost a pointer-deref, nobody cares. For anything that allocates (.to_string(), .to_lowercase(), format!(...), a regex capture, a trimmed-and-lowered filename), the cost compounds quickly. I’ve seen a profile where 80% of total runtime was the key closure being called 40,000 times to sort 2,000 items.

The fix

slice::sort_by_cached_key runs your closure exactly once per element, stashes the results in a scratch buffer, then sorts against the cache. This is the Schwartzian transform, wrapped up in a method call:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_cached_key(|s| {
    calls.set(calls.get() + 1);
    s.to_string()
});

// Exactly one call per element — no matter how big the slice is.
assert_eq!(calls.get(), items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

Same result, linear key-function calls. The memory trade is a Vec<(K, usize)> the size of the slice — cheap next to the cost of re-running an allocating closure on every compare.

When to reach for which

The rule is about where your time goes, not how fancy the key looks:

let mut nums = vec![5u32, 2, 8, 1, 9, 3];

// Trivial key: sort_by_key is fine (and avoids the scratch alloc).
nums.sort_by_key(|n| *n);
assert_eq!(nums, [1, 2, 3, 5, 8, 9]);

// Expensive key: sort_by_cached_key wins.
let mut files = vec!["Cargo.TOML", "src/MAIN.rs", "README.md", "build.RS"];
files.sort_by_cached_key(|path| path.to_lowercase());
assert_eq!(files, ["build.RS", "Cargo.TOML", "README.md", "src/MAIN.rs"]);

Use sort_by_key for cheap, Copy-ish keys. Use sort_by_cached_key the moment your closure allocates, hashes, parses, or otherwise does real work — it’s the difference between O(n log n) and O(n) calls to that closure.

#097 Apr 2026

97. Option::take_if — Take the Value Out Only When You Want To

You want to pull a value out of an Option — but only if it meets some condition. Before take_if, that little sentence turned into a five-line dance with as_ref, is_some_and, and a separate take(). Now it’s one call.

The old dance

Say you hold an Option<Session> and you want to evict it if it’s expired, otherwise leave it alone. The naive version keeps ownership juggling in your face:

#[derive(Debug, PartialEq)]
struct Session { id: u32, age: u32 }

let mut slot: Option<Session> = Some(Session { id: 1, age: 45 });

// Old way: peek, decide, then take.
let evicted = if slot.as_ref().is_some_and(|s| s.age > 30) {
    slot.take()
} else {
    None
};

assert_eq!(evicted, Some(Session { id: 1, age: 45 }));
assert!(slot.is_none());

Three lines of control flow just to express “take it if it’s stale.” And if you ever need to inspect the value more deeply, you’re one borrow-checker nudge away from rewriting the whole thing.

Enter take_if

Option::take_if bakes the whole pattern into one method — it runs your predicate on the inner value, and if the predicate returns true, it take()s it for you:

let mut slot: Option<Session> = Some(Session { id: 2, age: 12 });

// Young session — predicate is false, nothing happens.
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted, None);
assert!(slot.is_some());

// Age it and try again.
if let Some(s) = slot.as_mut() { s.age = 99; }
let evicted = slot.take_if(|s| s.age > 30);
assert_eq!(evicted.unwrap().id, 2);
assert!(slot.is_none());

One line, one branch, one name for the pattern.

The mutable twist

Here’s the detail that trips people up: the predicate receives &mut T, not &T. You can mutate the inner value before deciding whether to yank it:

let mut counter: Option<u32> = Some(0);

// Bump on every call; take it once it hits the threshold.
for _ in 0..5 {
    let taken = counter.take_if(|n| {
        *n += 1;
        *n >= 3
    });
    if let Some(n) = taken {
        assert_eq!(n, 3);
    }
}

assert!(counter.is_none());

Useful for cache eviction with hit counters, retry-until-exhausted slots, or anywhere “inspect, maybe mutate, maybe remove” shows up. Reach for take_if whenever you find yourself writing if condition { x.take() } else { None }.

95. LazyLock::get — Peek at a Lazy Value Without Initializing It

Wanted to know whether a LazyLock has been initialized yet — without causing the initialization by touching it? Rust 1.94 stabilises LazyLock::get and LazyCell::get, which return Option<&T> and leave the closure untouched if it hasn’t fired.

The old pain

Any access that goes through Deref forces the closure to run. So the moment you do *CONFIG — or anything that implicitly derefs — the lazy becomes eager. Before 1.94 there was no way to ask “has this been initialized?” without tripping that wire.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    // Imagine this is slow: reads a file, hits the network, etc.
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // No way to peek without paying the init cost
    let _ = CONFIG.len(); // forces init
    assert_eq!(CONFIG.len(), 2);
}

Handy enough, but if you want a metric like “was the config ever read?” you’d have to wrap the whole thing in your own AtomicBool.

The fix: LazyLock::get

Called as an associated function (LazyLock::get(&lock)), it returns Option<&T> without touching the closure. None means the lazy is still pending. Some(&value) means someone already forced it.

use std::sync::LazyLock;

static CONFIG: LazyLock<Vec<String>> = LazyLock::new(|| {
    vec!["debug".into(), "verbose".into()]
});

fn main() {
    // Not yet initialised — get returns None
    assert!(LazyLock::get(&CONFIG).is_none());

    // Force init via normal deref
    assert_eq!(CONFIG.len(), 2);

    // Now get returns Some(&value)
    let peek = LazyLock::get(&CONFIG).unwrap();
    assert_eq!(peek.len(), 2);
}

Note the call style: LazyLock::get(&CONFIG), not CONFIG.get(). That’s deliberate — method-lookup on the lock itself would go through Deref, which is exactly the thing we’re trying to avoid.

Same story for LazyCell

LazyCell is the single-threaded cousin and gets the same treatment:

use std::cell::LazyCell;

fn main() {
    let greeting = LazyCell::new(|| "Hello, Rust!".to_uppercase());

    // Not forced yet
    assert!(LazyCell::get(&greeting).is_none());

    // Deref triggers the closure
    assert_eq!(*greeting, "HELLO, RUST!");

    // Now we can peek without re-running anything
    assert_eq!(LazyCell::get(&greeting), Some(&"HELLO, RUST!".to_string()));
}

Where this shines

Two patterns fall out naturally.

Metrics and diagnostics. You want to log “did we ever load the config?” at shutdown without accidentally loading it just to find out:

use std::sync::LazyLock;

static CACHE: LazyLock<Vec<u64>> = LazyLock::new(|| (0..1000).collect());

fn was_cache_used() -> bool {
    LazyLock::get(&CACHE).is_some()
}

fn main() {
    assert!(!was_cache_used());
    let _ = CACHE.first(); // touch it
    assert!(was_cache_used());
}

Tests. Assert that the lazy init didn’t happen on a code path that shouldn’t need it — a guarantee that was basically impossible to write before.
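
A minimal sketch of such a test (HEAVY and cheap_path are hypothetical stand-ins):

use std::sync::LazyLock;

static HEAVY: LazyLock<String> = LazyLock::new(|| "x".repeat(1_000_000));

fn cheap_path() -> usize {
    2 + 2 // must never touch HEAVY
}

#[test]
fn cheap_path_stays_lazy() {
    let _ = cheap_path();
    assert!(LazyLock::get(&HEAVY).is_none());
}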

When to reach for it

Use LazyLock::get / LazyCell::get whenever you need to ask about initialization state without causing it. For everything else, just deref as usual — that’s still the one-liner that Just Works.

Stabilised in Rust 1.94 (March 2026).

92. core::range — Range Types You Can Actually Copy

Ever tried to reuse a 0..=10 range and hit “use of moved value”? Rust 1.95 stabilises core::range, a new module of range types that implement Copy.

The old pain

The classic range types in std::ops (Range, RangeInclusive) are iterators themselves. That means iterating them consumes them, and they can’t be Copy:

let r = 0..=5;
let a: Vec<_> = r.clone().collect();
let b: Vec<_> = r.collect(); // consumes r
// r is gone here

Every time you want to reuse a range you reach for .clone() or rebuild it. Annoying for something that looks like a pair of integers.

The fix: core::range

Rust 1.95 adds new range types in core::range (re-exported as std::range). They’re plain data — Copy, no iterator state baked in — and you turn them into an iterator on demand with .into_iter():

use core::range::RangeInclusive;

fn main() {
    let r: RangeInclusive<i32> = (0..=5).into();

    let a: Vec<i32> = r.into_iter().collect();
    let b: Vec<i32> = r.into_iter().collect(); // r is Copy, still usable

    assert_eq!(a, vec![0, 1, 2, 3, 4, 5]);
    assert_eq!(a, b);
}

No .clone(), no rebuild. r is just two numbers behind the scenes, so copying it is free.

Passing ranges around

Because the new types are Copy, you can pass them into helpers without worrying about move semantics:

use core::range::RangeInclusive;

fn sum_range(r: RangeInclusive<i32>) -> i32 {
    r.into_iter().sum()
}

fn main() {
    let r: RangeInclusive<i32> = (1..=4).into();

    assert_eq!(sum_range(r), 10);
    assert_eq!(sum_range(r), 10); // reuse: r is Copy
}

The old std::ops::RangeInclusive would have been moved by the first call.

Field access, not method calls

The new types expose their bounds as public fields — no more .start() / .end() accessors. For RangeInclusive, the upper bound is called last (emphasising that it’s included):

use core::range::RangeInclusive;

fn main() {
    let r: RangeInclusive<i32> = (10..=20).into();

    assert_eq!(r.start, 10);
    assert_eq!(r.last, 20);
}

The exclusive Range has start and end in the same way. Either way it’s plain old data that you can pattern-match, destructure, or read directly.
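
Destructuring works the way you’d hope (a sketch, assuming the same .into() conversion the inclusive examples use):

use core::range::Range;

fn main() {
    let r: Range<u32> = (3..9).into();

    let Range { start, end } = r;
    assert_eq!((start, end), (3, 9));
}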

When to reach for it

Use core::range whenever you want to store, pass, or reuse a range as a value. The old std::ops ranges are still everywhere (literal syntax, slice indexing, for loops), so there’s no rush to migrate — but for library APIs that take ranges as parameters, the new types are the friendlier choice.

Stabilised in Rust 1.95 (April 2026).

90. black_box — Stop the Compiler From Erasing Your Benchmarks

Your benchmark ran in 0 nanoseconds? Congratulations — the compiler optimised away the code you were trying to measure. std::hint::black_box prevents that by hiding values from the optimiser.

The problem: the optimiser is too smart

The Rust compiler aggressively eliminates dead code. If it can prove a result is never used, it simply removes the computation:

fn sum_range(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    let start = std::time::Instant::now();
    let result = sum_range(10_000_000);
    let elapsed = start.elapsed();

    // Without using `result`, the compiler may skip the entire computation
    println!("took: {elapsed:?}");

    // Force the result to be "used" so the above isn't optimised out
    assert!(result > 0);
}

In release mode, the compiler can see through this and may still optimise the loop away — or even compute the answer at compile time. Your benchmark reports near-zero time, and you learn nothing.

Enter black_box

std::hint::black_box takes a value and returns it unchanged, but the compiler treats it as an opaque barrier — it can’t see through it, so it can’t optimise away whatever produced that value:

use std::hint::black_box;

fn sum_range(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    let start = std::time::Instant::now();
    let result = sum_range(black_box(10_000_000));
    let _ = black_box(result);
    let elapsed = start.elapsed();

    println!("sum = {result}, took: {elapsed:?}");
}

Two black_box calls do the trick:

  1. Wrap the input — prevents the compiler from constant-folding the argument
  2. Wrap the output — prevents dead-code elimination of the computation

Before and after

Without black_box (release mode):

sum = 49999995000000, took: 83ns     ← suspiciously fast

With black_box (release mode):

sum = 49999995000000, took: 5.612ms  ← actual work

It works on any type

black_box is generic — it works on integers, strings, structs, references, whatever:

use std::hint::black_box;

fn main() {
    // Hide a vector from the optimiser
    let data: Vec<i32> = black_box(vec![1, 2, 3, 4, 5]);
    let total: i32 = data.iter().sum();
    let total = black_box(total);

    assert_eq!(total, 15);
}

Micro-benchmark recipe

Here’s a minimal pattern for quick-and-dirty micro-benchmarks:

use std::hint::black_box;
use std::time::Instant;

fn fibonacci(n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let iterations = 100;
    let start = Instant::now();
    for _ in 0..iterations {
        black_box(fibonacci(black_box(30)));
    }
    let elapsed = start.elapsed();
    println!("{iterations} runs in {elapsed:?} ({:?}/iter)", elapsed / iterations);
}

Without black_box, the compiler could hoist the pure function call out of the loop or eliminate it entirely. With it, each iteration does real work.

When to use it

Reach for black_box whenever you’re timing code and the results look suspiciously fast. It’s also the foundation that benchmarking frameworks like criterion and the built-in #[bench] use under the hood.

It’s not a full benchmarking harness — for serious measurement you still want warmup, statistics, and outlier detection. But when you need a quick sanity check, black_box + Instant gets the job done.

Available since Rust 1.66 on stable.

89. cold_path — Tell the Compiler Which Branch Won't Happen

Your error-handling branch fires once in a million calls, but the compiler doesn’t know that. core::hint::cold_path lets you mark unlikely branches so the optimiser can focus on the hot path.

Why branch layout matters

Modern CPUs predict which way a branch will go and speculatively execute instructions ahead of time. When the prediction is right, execution flies. When it’s wrong, the pipeline stalls.

Compilers already try to guess which branches are hot, but they don’t always get it right — especially when both sides of an if look equally plausible from a static analysis perspective. That’s where cold_path comes in.

The basics

Call cold_path() at the start of a branch that is rarely taken:

use std::hint::cold_path;

fn process(value: Option<u64>) -> u64 {
    if let Some(v) = value {
        v * 2
    } else {
        cold_path();
        log_miss();
        0
    }
}

fn log_miss() {
    // imagine logging or metrics here
}

The compiler now knows the else arm is unlikely. It can lay out the machine code so the hot path (the Some arm) has no jumps, keeping it in the instruction cache and the branch predictor happy.

In match expressions

cold_path works well in match arms too — mark the rare variants:

use std::hint::cold_path;

fn handle_status(code: u16) -> &'static str {
    match code {
        200 => "ok",
        301 => "moved",
        404 => { cold_path(); "not found" }
        500 => { cold_path(); "server error" }
        _   => { cold_path(); "unknown" }
    }
}

Only the branches you expect to be common stay on the fast track.

Building likely and unlikely helpers

If you’ve used C/C++, you might miss __builtin_expect. With cold_path you can build the same thing:

use std::hint::cold_path;

#[inline(always)]
const fn likely(b: bool) -> bool {
    if !b { cold_path(); }
    b
}

#[inline(always)]
const fn unlikely(b: bool) -> bool {
    if b { cold_path(); }
    b
}

fn check_bounds(index: usize, len: usize) -> bool {
    if unlikely(index >= len) {
        panic!("out of bounds: {} >= {}", index, len);
    }
    true
}

Now you can annotate conditions directly instead of marking individual branches.

A word of caution

Misusing cold_path on a branch that actually runs often can hurt performance — the compiler will deprioritise it, and you’ll get more pipeline stalls, not fewer. Always benchmark before sprinkling hints around. Profile first, hint second.

The bottom line

cold_path is a zero-cost, zero-argument function that tells the optimiser what you already know: this branch is the exception, not the rule. It’s a small tool, but in hot loops and latency-sensitive code, it can make a measurable difference.

Stabilised in Rust 1.95 (April 2026).

88. Vec::push_mut — Push and Modify in One Step

Tired of pushing a default into a Vec and then grabbing a mutable reference to fill it in? Rust 1.95 stabilises push_mut, which returns &mut T to the element it just inserted.

The old dance

A common pattern is to push a placeholder value and then immediately mutate it. Before push_mut, you had to do an awkward two-step:

let mut names = Vec::new();

// Push, then index back in to mutate
names.push(String::new());
let last = names.last_mut().unwrap();
last.push_str("hello");
last.push_str(", world");

assert_eq!(names[0], "hello, world");

That last_mut().unwrap() is boilerplate — you know the element is there because you just pushed it. Worse, in more complex code the compiler can’t always prove the reference is safe, forcing you into index gymnastics.

Enter push_mut

push_mut pushes the value and hands you back a mutable reference in one shot:

let mut scores = Vec::new();

let entry = scores.push_mut(0);
*entry += 10;
*entry += 25;

assert_eq!(scores[0], 35);

No unwraps, no indexing, no second lookup. The borrow checker is happy because there’s a clear chain of ownership.

Building structs in-place

This really shines when you’re constructing complex values piece by piece:

#[derive(Debug, Default)]
struct Player {
    name: String,
    score: u32,
    active: bool,
}

let mut roster = Vec::new();

let p = roster.push_mut(Player::default());
p.name = String::from("Ferris");
p.score = 100;
p.active = true;

assert_eq!(roster[0].name, "Ferris");
assert_eq!(roster[0].score, 100);
assert!(roster[0].active);

Instead of building the struct fully before pushing, you push a default and fill it in. This can be handy when the final field values depend on context you only have after insertion.

Also works for insert_mut

Need to insert at a specific index? insert_mut follows the same pattern:

let mut v = vec![1, 3, 4];
let inserted = v.insert_mut(1, 2);
*inserted *= 10;

assert_eq!(v, [1, 20, 3, 4]);

Both methods are also available on VecDeque (push_front_mut, push_back_mut, insert_mut) and LinkedList (push_front_mut, push_back_mut).

When to reach for it

Use push_mut whenever you’d otherwise write push followed by last_mut().unwrap(). It’s one less unwrap, one fewer line, and clearer intent: push this, then let me tweak it.

Stabilised in Rust 1.95 (April 2026) — update your toolchain and give it a spin.

#087 Apr 2026

87. Atomic update — Kill the Compare-and-Swap Loop

Every Rust developer who’s written lock-free code has written the same compare_exchange loop. Rust 1.95 finally gives atomics an update method that does it for you.

The old way

Atomically doubling a counter used to mean writing a retry loop yourself:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

loop {
    let current = counter.load(Ordering::Relaxed);
    let new_val = current * 2;
    match counter.compare_exchange(
        current, new_val,
        Ordering::SeqCst, Ordering::Relaxed,
    ) {
        Ok(_) => break,
        Err(_) => continue,
    }
}
// counter is now 20

It works, but it’s boilerplate — and easy to get wrong (use the wrong ordering, forget to retry, etc.).

The new way: update

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(10);

counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x * 2);
// counter is now 20

One line. No loop. No chance of forgetting to retry on contention.

The method takes two orderings (one for the store on success, one for the load on failure) and a closure that transforms the current value. It handles the compare-and-swap retry loop internally.

It returns the previous value

Just like fetch_add and friends, update returns the value before the update:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(5);

let prev = counter.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 3);
assert_eq!(prev, 5);  // was 5
assert_eq!(counter.load(Ordering::SeqCst), 8);  // now 8

This makes it perfect for “fetch-and-modify” patterns where you need the old value.

Works on all atomic types

update isn’t just for AtomicUsize — it’s also available on AtomicBool, AtomicIsize, and AtomicPtr:

use std::sync::atomic::{AtomicBool, Ordering};

let flag = AtomicBool::new(false);
flag.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x);
assert_eq!(flag.load(Ordering::SeqCst), true);

When to use update vs fetch_add

If your operation is a simple add, sub, or bitwise op, the specialized fetch_* methods are still better — they compile down to a single atomic instruction on most architectures.

Use update when your transformation is more complex: clamping, toggling state machines, applying arbitrary functions. Anywhere you’d previously hand-roll a CAS loop.
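
For instance, a clamped increment (a sketch reusing the same update shape as above):

use std::sync::atomic::{AtomicUsize, Ordering};

let budget = AtomicUsize::new(95);

// Add 10, but never exceed 100.
budget.update(Ordering::SeqCst, Ordering::SeqCst, |x| (x + 10).min(100));

assert_eq!(budget.load(Ordering::SeqCst), 100);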

Summary

Method                      Use when
fetch_add, fetch_or, etc.   Simple arithmetic/bitwise ops
update                      Arbitrary transformations (Rust 1.95+)
Manual CAS loop             Never again (mostly)

Available on stable since Rust 1.95.0 for AtomicBool, AtomicIsize, AtomicUsize, and AtomicPtr.

81. checked_sub_signed — Subtract a Signed Delta From an Unsigned Without Casts

checked_add_signed has been around for years. Its missing sibling finally landed: as of Rust 1.91, u64::checked_sub_signed (and the whole {checked, overflowing, saturating, wrapping}_sub_signed family) lets you subtract an i64 from a u64 without casting, unsafe, or hand-rolled overflow checks.

The problem

You’ve got an unsigned counter — a file offset, a buffer index, a frame number — and you want to apply a signed delta. The delta is negative, so subtracting it should increase the counter. But Rust won’t let you subtract an i64 from a u64:

let pos: u64 = 100;
let delta: i64 = -5;

// error[E0277]: cannot subtract `i64` from `u64`
// let new_pos = pos - delta;

The usual workarounds are all awkward. Cast to i64 and hope nothing overflows. Branch on the sign of the delta and call either checked_sub or checked_add depending. Convert via as and pray.
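
Spelled out, the sign-branching workaround looks like this (a sketch of what the new method replaces):

let pos: u64 = 100;
let delta: i64 = -5;

let new_pos = if delta >= 0 {
    pos.checked_sub(delta as u64)
} else {
    pos.checked_add(delta.unsigned_abs())
};

assert_eq!(new_pos, Some(105));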

The fix

checked_sub_signed takes an i64 directly and returns Option<u64>:

let pos: u64 = 100;

assert_eq!(pos.checked_sub_signed(30),  Some(70));   // normal subtraction
assert_eq!(pos.checked_sub_signed(-5),  Some(105));  // subtracting negative adds
assert_eq!(pos.checked_sub_signed(200), None);       // underflow → None

Subtracting a negative number “wraps around” to addition, exactly as the math says it should. Underflow (going below zero) returns None instead of panicking or silently wrapping.

The whole family

Pick your overflow semantics, same as every other integer op:

let pos: u64 = 10;

// Checked: returns Option.
assert_eq!(pos.checked_sub_signed(-5),  Some(15));
assert_eq!(pos.checked_sub_signed(100), None);

// Saturating: clamps to 0 or u64::MAX.
assert_eq!(pos.saturating_sub_signed(100), 0);
assert_eq!((u64::MAX - 5).saturating_sub_signed(-100), u64::MAX);

// Wrapping: modular arithmetic, never panics.
assert_eq!(pos.wrapping_sub_signed(20), u64::MAX - 9);

// Overflowing: returns (value, did_overflow).
assert_eq!(pos.overflowing_sub_signed(20), (u64::MAX - 9, true));
assert_eq!(pos.overflowing_sub_signed(5),  (5, false));

Same convention as checked_sub, saturating_sub, etc. — you already know the shape.

Why it matters

The signed-from-unsigned case comes up more than you’d think. Scrubbing back and forth in a timeline. Applying a velocity to a position. Rebasing a byte offset. Any time the delta can be negative, you need this method — and now you have it without touching as.

It pairs nicely with its long-stable sibling checked_add_signed, which has been around since Rust 1.66. Between the two, signed deltas on unsigned counters are a one-liner in any direction.

Available on every unsigned primitive (u8, u16, u32, u64, u128, usize) as of Rust 1.91.

#080 Apr 2026

80. VecDeque::pop_front_if and pop_back_if — Conditional Pops on Both Ends

Vec::pop_if got a deque-shaped sibling. As of Rust 1.93, VecDeque has pop_front_if and pop_back_if — conditional pops on either end without the peek-then-pop dance.

The problem

You want to remove an element from a VecDeque only when it matches a predicate. Before 1.93, you’d peek, match, then pop:

use std::collections::VecDeque;

let mut queue: VecDeque<i32> = VecDeque::from([1, 2, 3, 4]);

if queue.front().is_some_and(|&x| x == 1) {
    queue.pop_front();
}

Two lookups, two branches, one opportunity to desynchronize the check from the pop if you refactor the predicate later.

The fix

pop_front_if takes a closure, checks the front element against it, and pops it only if it matches. pop_back_if does the same on the other end.

use std::collections::VecDeque;

let mut queue: VecDeque<i32> = VecDeque::from([1, 2, 3, 4]);

let popped = queue.pop_front_if(|x| *x == 1);
assert_eq!(popped, Some(1));
assert_eq!(queue, VecDeque::from([2, 3, 4]));

// Predicate doesn't match — nothing is popped.
let not_popped = queue.pop_front_if(|x| *x > 100);
assert_eq!(not_popped, None);
assert_eq!(queue, VecDeque::from([2, 3, 4]));

The return type is Option<T>: Some(value) if the predicate matched and the element was removed, None otherwise (including when the deque is empty).

One subtle detail worth noting — the closure receives &mut T, not &T. That means |&x| won’t type-check; use |x| *x == ... or destructure with |&mut x|. The extra flexibility lets you mutate the element in-place before deciding to pop it.
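
A small sketch of that in-place mutation, trimming an entry before deciding whether to pop it:

use std::collections::VecDeque;

let mut q: VecDeque<String> = VecDeque::from([String::from("  padded  ")]);

let cleaned = q.pop_front_if(|s| {
    *s = s.trim().to_string(); // mutate in place first...
    !s.is_empty()              // ...then decide whether to pop
});
assert_eq!(cleaned.as_deref(), Some("padded"));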

Draining a prefix

The pattern clicks when you pair it with a while let loop. Drain everything at the front that matches a condition, stop the moment it doesn’t:

use std::collections::VecDeque;

let mut events: VecDeque<i32> = VecDeque::from([1, 2, 3, 10, 11, 12]);

// Drain the "small" prefix only.
while let Some(_) = events.pop_front_if(|x| *x < 10) {}

assert_eq!(events, VecDeque::from([10, 11, 12]));

No index tracking, no split_off, no collecting into a new deque.

Why both ends?

VecDeque is a double-ended ring buffer, so it’s natural to support the same idiom on both sides. Processing a priority queue from the back, trimming expired entries from the front, popping a sentinel only when it’s still there — all one method call each.

use std::collections::VecDeque;

let mut log: VecDeque<&str> = VecDeque::from(["START", "a", "b", "c", "END"]);

let end = log.pop_back_if(|s| *s == "END");
let start = log.pop_front_if(|s| *s == "START");

assert_eq!(start, Some("START"));
assert_eq!(end, Some("END"));
assert_eq!(log, VecDeque::from(["a", "b", "c"]));

When to reach for it

Whenever the shape of your code is “peek, compare, pop.” That’s the tell. pop_front_if / pop_back_if collapse three steps into one atomic operation, and the Option<T> return makes it composable with while let, ?, and the rest of the Option toolbox.

Stabilized in Rust 1.93 — if your MSRV is recent enough, this is a free readability win.

#078 Apr 2026

78. div_ceil — Divide and Round Up Without the Overflow Bug

Need to split items into fixed-size pages or chunks? The classic (n + size - 1) / size trick silently overflows. div_ceil does it correctly.

The classic footgun

Paging, chunking, allocating — any time you divide and need to round up, this pattern shows up:

fn pages_needed(total: u64, per_page: u64) -> u64 {
    (total + per_page - 1) / per_page // ⚠️ overflows when total is large
}

It works until total + per_page - 1 wraps around. With u64::MAX items and a page size of 10, the addition overflows: a panic in debug builds, a silently wrong answer in release.
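
To make the wrap visible without waiting for u64::MAX, here's the same bug scaled down to u8, with wrapping_add spelling out the release-mode behavior:

let total: u8 = 250;
let per_page: u8 = 10;

let wrong = total.wrapping_add(per_page - 1) / per_page; // 250 + 9 wraps to 3
assert_eq!(wrong, 0);                                    // 0 pages for 250 items?!
assert_eq!(total.div_ceil(per_page), 25);                // the real answer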

The fix: div_ceil

Stabilized in Rust 1.73, div_ceil handles the rounding without intermediate overflow:

fn pages_needed(total: u64, per_page: u64) -> u64 {
    total.div_ceil(per_page)
}

One method call, no overflow risk, intent crystal clear.

Real-world examples

Allocating pixel rows for a tiled renderer:

let image_height: u32 = 1080;
let tile_size: u32 = 64;
let tile_rows = image_height.div_ceil(tile_size);
assert_eq!(tile_rows, 17); // 16 full tiles + 1 partial

Splitting work across threads:

let items: usize = 1000;
let threads: usize = 6;
let chunk_size = items.div_ceil(threads);
assert_eq!(chunk_size, 167); // each thread handles at most 167 items

It works on all unsigned integers

div_ceil is available on u8, u16, u32, u64, u128, and usize. Signed integers have it too, but watch out — it rounds toward positive infinity. For a negative quotient that's the same direction plain / truncates, so the two only disagree when the quotient is positive and inexact.

let signed: i32 = -7;
assert_eq!(signed.div_ceil(2), -3); // ceil(-3.5) = -3, same as -7 / 2
assert_eq!(7_i32.div_ceil(2), 4);   // ceil(3.5) = 4, while 7 / 2 == 3

Next time you reach for (a + b - 1) / b, stop — div_ceil already exists and it won’t betray you at the boundaries.

#077 Apr 2026

77. repeat_n — Repeat a Value Exactly N Times

Stop writing repeat(x).take(n) — there’s a dedicated function that’s both cleaner and more efficient.

The old way

If you wanted an iterator that yields a value a fixed number of times, you’d chain repeat with take:

use std::iter;

let greetings: Vec<_> = iter::repeat("hello").take(3).collect();
assert_eq!(greetings, vec!["hello", "hello", "hello"]);

This works, but repeat requires Clone and clones the value for every element it yields, even the last one, where a move would do instead.

Enter repeat_n

std::iter::repeat_n does exactly what the name says — repeats a value n times:

use std::iter;

let greetings: Vec<_> = iter::repeat_n("hello", 3).collect();
assert_eq!(greetings, vec!["hello", "hello", "hello"]);

Cleaner, more readable, and it comes with a hidden superpower.

The efficiency win

repeat_n moves the value on the last iteration instead of cloning it. This matters when cloning is expensive:

use std::iter;

let original = vec![1, 2, 3]; // expensive to clone
let copies: Vec<Vec<i32>> = iter::repeat_n(original, 3).collect();

assert_eq!(copies.len(), 3);
assert_eq!(copies[0], vec![1, 2, 3]);
assert_eq!(copies[1], vec![1, 2, 3]);
assert_eq!(copies[2], vec![1, 2, 3]);
// The first two are clones, but the third one is a move — one fewer allocation!

With repeat(x).take(n), you’d clone all n times. With repeat_n, you save one clone. For large buffers or complex types, that’s a meaningful win.
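
There's a structural bonus too: repeat(x) is an infinite iterator, so repeat(x).take(n) never implements ExactSizeIterator, while repeat_n knows its length up front and is double-ended as well:

use std::iter;

let it = iter::repeat_n('x', 4);
assert_eq!(it.len(), 4);            // ExactSizeIterator: length known up front
let s: String = it.rev().collect(); // DoubleEndedIterator works too
assert_eq!(s, "xxxx");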

repeat_n with zero

Passing n = 0 yields an empty iterator, and the value is simply dropped — no clones happen at all:

use std::iter;

let items: Vec<String> = iter::repeat_n(String::from("unused"), 0).collect();
assert!(items.is_empty());

When to reach for it

Use repeat_n whenever you need a fixed number of identical values. Common patterns:

use std::iter;

// Initialize a grid row
let row: Vec<f64> = iter::repeat_n(0.0, 10).collect();
assert_eq!(row.len(), 10);

// Pad a sequence
let mut data = vec![1, 2, 3];
data.extend(iter::repeat_n(0, 5));
assert_eq!(data, vec![1, 2, 3, 0, 0, 0, 0, 0]);

Small change, but it makes intent crystal clear: you want exactly n copies, no more, no less.

#068 Apr 2026

68. f64::next_up — Walk the Floating Point Number Line

Ever wondered what the next representable floating point number after 1.0 is? Since Rust 1.86, f64::next_up and f64::next_down let you step through the number line one float at a time.

The problem

Floating point numbers aren’t evenly spaced — the gap between representable values grows as the magnitude increases. Before next_up / next_down, figuring out the next neighbor required bit-level manipulation of the IEEE 754 representation:

fn main() {
    // The hard way (before 1.86): manually decode bits
    let x: f64 = 1.0;
    let bits = x.to_bits();
    let next_bits = bits + 1;
    let next = f64::from_bits(next_bits);

    assert!(next > x);
    assert_eq!(next, 1.0000000000000002);
}

Error-prone, unreadable, and doesn’t handle edge cases like negative numbers, zero, or special values.

The clean way

next_up returns the smallest f64 greater than self. next_down returns the largest f64 less than self:

fn main() {
    let x: f64 = 1.0;

    let up = x.next_up();
    let down = x.next_down();

    assert!(up > x);
    assert!(down < x);
    assert_eq!(up, 1.0000000000000002);
    assert_eq!(down, 0.9999999999999998);

    // There's no float between x and its neighbors
    assert_eq!(up.next_down(), x);
    assert_eq!(down.next_up(), x);
}

They handle all the edge cases you’d rather not think about — negative numbers, subnormals, and infinity:

fn main() {
    // Works across zero
    assert_eq!(0.0_f64.next_up(), 5e-324);   // smallest positive f64
    assert_eq!(0.0_f64.next_down(), -5e-324); // largest negative f64

    // Infinity is the boundary
    assert_eq!(f64::MAX.next_up(), f64::INFINITY);
    assert_eq!(f64::INFINITY.next_up(), f64::INFINITY);

    // NaN stays NaN
    assert!(f64::NAN.next_up().is_nan());
}

Practical use: precision-aware comparisons

The gap between adjacent floats is called an ULP (unit in the last place). next_up lets you build tolerance-aware comparisons without guessing at epsilon values:

fn almost_equal(a: f64, b: f64, max_ulps: u32) -> bool {
    if a == b { return true; }

    let mut current = a;
    for _ in 0..max_ulps {
        current = if a < b { current.next_up() } else { current.next_down() };
        if current == b { return true; }
    }
    false
}

fn main() {
    let a = 0.1 + 0.2;
    let b = 0.3;

    // They're not equal...
    assert_ne!(a, b);

    // ...but they're within 1 ULP of each other
    assert!(almost_equal(a, b, 1));
}

Also available on f32 with the same API. These methods are const fn, so you can use them in const contexts too.
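
And since they're const fn, ULP-adjacent values can live in const items; a quick sketch:

// Evaluated at compile time, per the const-fn note above:
const ONE_PLUS_ULP: f64 = 1.0_f64.next_up();
assert_eq!(ONE_PLUS_ULP, 1.0000000000000002);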

63. fmt::from_fn — Display Anything With a Closure

Need a quick Display impl without defining a whole new type? std::fmt::from_fn turns any closure into a formattable value — ad-hoc Display in one line.

The problem: Display needs a type

Normally, implementing Display means creating a wrapper struct just to control how something gets formatted:

use std::fmt;

struct CommaSeparated<'a>(&'a [i32]);

impl fmt::Display for CommaSeparated<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for (i, val) in self.0.iter().enumerate() {
            if i > 0 { write!(f, ", ")?; }
            write!(f, "{val}")?;
        }
        Ok(())
    }
}

fn main() {
    let nums = vec![1, 2, 3];
    let display = CommaSeparated(&nums);
    assert_eq!(display.to_string(), "1, 2, 3");
}

That’s a lot of boilerplate for “join with commas.”

After: fmt::from_fn

Stabilized in Rust 1.93, std::fmt::from_fn takes a closure Fn(&mut Formatter) -> fmt::Result and returns a value that implements Display (and Debug):

use std::fmt;

fn main() {
    let nums = vec![1, 2, 3];

    let display = fmt::from_fn(|f| {
        for (i, val) in nums.iter().enumerate() {
            if i > 0 { write!(f, ", ")?; }
            write!(f, "{val}")?;
        }
        Ok(())
    });

    assert_eq!(display.to_string(), "1, 2, 3");
    // Works directly in format strings too
    assert_eq!(format!("numbers: {display}"), "numbers: 1, 2, 3");
}

No wrapper type, no trait impl — just a closure that writes to a formatter.

Building reusable formatters

Wrap from_fn in a function and you have a reusable formatter without the boilerplate:

use std::fmt;

fn join_with<'a, I>(iter: I, sep: &'a str) -> impl fmt::Display + 'a
where
    I: IntoIterator + Clone + 'a,
    I::Item: fmt::Display,
{
    fmt::from_fn(move |f| {
        // from_fn takes a Fn, which may run more than once (both Display
        // and Debug route here), so clone the iterator per call instead
        // of consuming it.
        let mut first = true;
        for item in iter.clone() {
            if !first { write!(f, "{sep}")?; }
            first = false;
            write!(f, "{item}")?;
        }
        Ok(())
    })
}

fn main() {
    let tags = vec!["rust", "formatting", "closures"];
    assert_eq!(join_with(tags, " | ").to_string(), "rust | formatting | closures");

    let nums = vec![10, 20, 30];
    assert_eq!(format!("sum of [{}]", join_with(nums, " + ")), "sum of [10 + 20 + 30]");
}

Lazy formatting — no allocation until needed

from_fn is lazy — the closure only runs when the value is actually formatted. This makes it perfect for logging where most messages might be filtered out:

use std::fmt;

fn debug_summary(data: &[u8]) -> impl fmt::Display + '_ {
    fmt::from_fn(move |f| {
        write!(f, "[{} bytes: ", data.len())?;
        for (i, byte) in data.iter().take(4).enumerate() {
            if i > 0 { write!(f, " ")?; }
            write!(f, "{byte:02x}")?;
        }
        if data.len() > 4 { write!(f, " ...")?; }
        write!(f, "]")
    })
}

fn main() {
    let payload = vec![0xDE, 0xAD, 0xBE, 0xEF, 0x42, 0x00];
    let summary = debug_summary(&payload);

    // The closure hasn't run yet — zero work done
    // It only formats when you actually use it:
    assert_eq!(summary.to_string(), "[6 bytes: de ad be ef ...]");
}

No String is allocated unless something actually calls Display::fmt. Compare that to eagerly building a debug string just in case.

from_fn also implements Debug

The returned value implements both Display and Debug, so it works with {:?} too:

use std::fmt;

fn main() {
    let val = fmt::from_fn(|f| write!(f, "custom"));
    assert_eq!(format!("{val}"),   "custom");
    assert_eq!(format!("{val:?}"), "custom");
}

fmt::from_fn is a tiny addition to std that removes a surprisingly common friction point — any time you’d reach for a newtype just to implement Display, try a closure instead.

#062 Apr 2026

62. Iterator::flat_map — Map and Flatten in One Step

Need to transform each element into multiple items and collect them all into a flat sequence? flat_map combines map and flatten into a single, expressive call.

The nested iterator problem

Say you have a list of sentences and want all individual words. The naive approach with map gives you an iterator of iterators:

let sentences = vec!["hello world", "foo bar baz"];

// map + collect gives us nested Vecs — not the flat list we want
let nested: Vec<Vec<&str>> = sentences.iter()
    .map(|s| s.split_whitespace().collect())
    .collect();

assert_eq!(nested, vec![vec!["hello", "world"], vec!["foo", "bar", "baz"]]);

You could chain .map().flatten(), but flat_map does both at once:

let sentences = vec!["hello world", "foo bar baz"];

let words: Vec<&str> = sentences.iter()
    .flat_map(|s| s.split_whitespace())
    .collect();

assert_eq!(words, vec!["hello", "world", "foo", "bar", "baz"]);

Expanding one-to-many relationships

flat_map shines when each input element maps to zero or more outputs. Think of it as a one-to-many transform:

let numbers = vec![1, 2, 3];

// Each number expands to itself and its double
let expanded: Vec<i32> = numbers.iter()
    .flat_map(|&n| vec![n, n * 2])
    .collect();

assert_eq!(expanded, vec![1, 2, 2, 4, 3, 6]);

Filtering and transforming at once

Since flat_map’s closure can return an empty iterator, it naturally combines filtering and mapping — just return None or Some:

let inputs = vec!["42", "not_a_number", "7", "oops", "13"];

let parsed: Vec<i32> = inputs.iter()
    .flat_map(|s| s.parse::<i32>().ok())
    .collect();

assert_eq!(parsed, vec![42, 7, 13]);

This works because Option implements IntoIterator: Some(x) yields one item, None yields zero. It’s equivalent to filter_map, but flat_map generalizes to any iterator, not just Option.
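
If that feels like magic, here's Option acting as an iterator on its own:

assert_eq!(Some(5).into_iter().count(), 1);     // Some yields one item
assert_eq!(None::<i32>.into_iter().count(), 0); // None yields zero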

Traversing nested structures

Got a tree-like structure? flat_map lets you drill into children naturally:

struct Team {
    name: &'static str,
    members: Vec<&'static str>,
}

let teams = vec![
    Team { name: "backend", members: vec!["Alice", "Bob"] },
    Team { name: "frontend", members: vec!["Carol"] },
    Team { name: "devops", members: vec!["Dave", "Eve", "Frank"] },
];

let all_members: Vec<&str> = teams.iter()
    .flat_map(|team| team.members.iter().copied())
    .collect();

assert_eq!(all_members, vec!["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]);

flat_map vs map + flatten

They’re semantically identical — flat_map(f) is just map(f).flatten(). But flat_map reads better and signals your intent: “each element produces multiple items, and I want them all in one sequence.”

let data = vec![vec![1, 2], vec![3], vec![4, 5, 6]];

// These are equivalent:
let a: Vec<i32> = data.iter().flat_map(|v| v.iter().copied()).collect();
let b: Vec<i32> = data.iter().map(|v| v.iter().copied()).flatten().collect();

assert_eq!(a, b);
assert_eq!(a, vec![1, 2, 3, 4, 5, 6]);

flat_map has been stable since Rust 1.0 — it’s a fundamental iterator combinator that replaces nested loops with clean, composable pipelines.

#061 Apr 2026

61. Iterator::reduce — Fold Without an Initial Value

Using fold but your accumulator starts as the first element anyway? Iterator::reduce cuts out the boilerplate and handles empty iterators gracefully.

The fold pattern you keep writing

When finding the longest string, maximum value, or combining elements, fold forces you to pick an initial value — often awkwardly:

let words = vec!["rust", "is", "awesome"];

let longest = words.iter().fold("", |acc, &w| {
    if w.len() > acc.len() { w } else { acc }
});

assert_eq!(longest, "awesome");

That empty string "" is a code smell — it’s not a real element, it’s just satisfying fold’s signature. And if the input is empty, you silently get "" back instead of knowing there was nothing to reduce.

Enter reduce

Iterator::reduce uses the first element as the initial accumulator. No seed value needed, and it returns Option<T>: None for empty iterators:

let words = vec!["rust", "is", "awesome"];

let longest = words.iter().reduce(|acc, w| {
    if w.len() > acc.len() { w } else { acc }
});

assert_eq!(longest, Some(&"awesome"));

The Option return makes the empty case explicit — no more silent defaults.
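
An empty input makes the difference obvious:

let empty: Vec<&str> = Vec::new();
let longest = empty.iter().reduce(|a, b| if b.len() > a.len() { b } else { a });
assert_eq!(longest, None); // nothing to reduce, and the type says so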

Finding extremes without max_by

reduce is perfect for custom comparisons where max_by feels heavy:

let scores = vec![("Alice", 92), ("Bob", 87), ("Carol", 95), ("Dave", 88)];

let top_scorer = scores.iter().reduce(|best, current| {
    if current.1 > best.1 { current } else { best }
});

assert_eq!(top_scorer, Some(&("Carol", 95)));

Concatenating without an allocator seed

Building a combined result from parts? reduce avoids allocating an empty starter:

let parts = vec![
    String::from("hello"),
    String::from(" "),
    String::from("world"),
];

let combined = parts.into_iter().reduce(|mut acc, s| {
    acc.push_str(&s);
    acc
});

assert_eq!(combined, Some(String::from("hello world")));

Compare this to fold(String::new(), ...) — with reduce, the first String becomes the accumulator directly, saving one allocation.

reduce vs fold — when to use which

Use reduce when the accumulator is the same type as the elements and there’s no meaningful “zero” value. Use fold when you need a different return type or a specific starting value:

// reduce: same type in, same type out
let sum = vec![1, 2, 3, 4].into_iter().reduce(|a, b| a + b);
assert_eq!(sum, Some(10));

// fold: different return type (counting into a HashMap)
use std::collections::HashMap;
let counts = vec!["a", "b", "a", "c", "b", "a"]
    .into_iter()
    .fold(HashMap::new(), |mut map, item| {
        *map.entry(item).or_insert(0) += 1;
        map
    });
assert_eq!(counts["a"], 3);

reduce has been stable since Rust 1.51 — it’s the functional programmer’s best friend for collapsing iterators when the first element is your natural starting point.

60. Iterator::partition — Split a Collection in Two

Need to split a collection into two groups based on a condition? Skip the manual loop — Iterator::partition does it in one call.

The manual way

Without partition, you’d loop and push into two separate vectors:

let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8];

let mut evens = Vec::new();
let mut odds = Vec::new();

for n in numbers {
    if n % 2 == 0 {
        evens.push(n);
    } else {
        odds.push(n);
    }
}

assert_eq!(evens, vec![2, 4, 6, 8]);
assert_eq!(odds, vec![1, 3, 5, 7]);

It works, but it’s a lot of ceremony for a simple split.

Enter partition

Iterator::partition collects into two collections in a single pass. Items where the predicate returns true go into the first collection; the rest go into the second:

let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8];

let (evens, odds): (Vec<_>, Vec<_>) = numbers
    .iter()
    .partition(|&&n| n % 2 == 0);

assert_eq!(evens, vec![&2, &4, &6, &8]);
assert_eq!(odds, vec![&1, &3, &5, &7]);

The type annotation (Vec<_>, Vec<_>) is required — Rust needs to know what collections to build. You can partition into any type that implements Default + Extend, not just Vec.
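
That flexibility means the two halves don't have to be Vecs. For example, splitting a string's chars into two Strings:

// String is Default + Extend<char>, so it's a valid partition target:
let (digits, rest): (String, String) = "a1b2c3".chars().partition(|c| c.is_ascii_digit());
assert_eq!(digits, "123");
assert_eq!(rest, "abc");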

Owned values with into_iter

Use into_iter() when you want owned values instead of references:

let files = vec![
    "main.rs", "lib.rs", "test_utils.rs",
    "README.md", "CHANGELOG.md",
];

let (rust_files, other_files): (Vec<_>, Vec<_>) = files
    .into_iter()
    .partition(|f| f.ends_with(".rs"));

assert_eq!(rust_files, vec!["main.rs", "lib.rs", "test_utils.rs"]);
assert_eq!(other_files, vec!["README.md", "CHANGELOG.md"]);

A practical use: triaging results

partition pairs beautifully with Result to separate successes from failures:

let inputs = vec!["42", "not_a_number", "7", "oops", "13"];

let (oks, errs): (Vec<_>, Vec<_>) = inputs
    .iter()
    .map(|s| s.parse::<i32>())
    .partition(Result::is_ok);

let values: Vec<i32> = oks.into_iter().map(Result::unwrap).collect();
let failures: Vec<_> = errs.into_iter().map(Result::unwrap_err).collect();

assert_eq!(values, vec![42, 7, 13]);
assert_eq!(failures.len(), 2);

partition has been stable since Rust 1.0 — one of those hidden gems that’s been there all along. Anytime you reach for a loop to split items into two buckets, reach for partition instead.

#057 Apr 2026

57. New Math Constants — GOLDEN_RATIO and EULER_GAMMA in std

Tired of defining your own golden ratio or Euler-Mascheroni constant? As of Rust 1.94, std ships them out of the box — no more copy-pasting magic numbers.

Before: Roll Your Own

If you needed the golden ratio or the Euler-Mascheroni constant before Rust 1.94, you had to define them yourself:

const PHI: f64 = 1.618033988749895;
const EULER_GAMMA: f64 = 0.5772156649015329;

This works, but it’s error-prone. One wrong digit and your calculations drift. And every project that needs these ends up with its own slightly-different copy.

After: Just Use std

Rust 1.94 added GOLDEN_RATIO and EULER_GAMMA to the standard consts modules for both f32 and f64:

use std::f64::consts::{GOLDEN_RATIO, EULER_GAMMA};

fn main() {
    // Golden ratio: (1 + √5) / 2
    let phi = GOLDEN_RATIO;
    assert!((phi * phi - phi - 1.0).abs() < 1e-10);

    // Euler-Mascheroni constant
    let gamma = EULER_GAMMA;
    assert!((gamma - 0.5772156649015329).abs() < 1e-10);

    println!("φ = {phi}");
    println!("γ = {gamma}");
}

These sit right alongside the constants you already know — PI, TAU, E, SQRT_2, and friends.

Where You’d Actually Use Them

Golden ratio shows up in algorithm design (Fibonacci heaps, golden-section search), generative art, and UI layout proportions:

use std::f64::consts::GOLDEN_RATIO;

fn golden_section_dimensions(width: f64) -> (f64, f64) {
    let height = width / GOLDEN_RATIO;
    (width, height)
}

fn main() {
    let (w, h) = golden_section_dimensions(800.0);
    assert!((w / h - GOLDEN_RATIO).abs() < 1e-10);
    println!("Width: {w}, Height: {h:.2}");
}

Euler-Mascheroni constant appears in number theory, harmonic series approximations, and probability:

use std::f64::consts::EULER_GAMMA;

/// Approximate the N-th harmonic number using the
/// asymptotic expansion: H_n ≈ ln(n) + γ + 1/(2n)
fn harmonic_approx(n: u64) -> f64 {
    let nf = n as f64;
    nf.ln() + EULER_GAMMA + 1.0 / (2.0 * nf)
}

fn main() {
    // Exact H_10 = 1 + 1/2 + 1/3 + ... + 1/10
    let exact: f64 = (1..=10).map(|i| 1.0 / i as f64).sum();
    let approx = harmonic_approx(10);

    println!("Exact H_10:  {exact:.6}");
    println!("Approx H_10: {approx:.6}");
    assert!((exact - approx).abs() < 0.01);
}

The Full Lineup

With these additions, std::f64::consts now includes: PI, TAU, E, SQRT_2, SQRT_3, LN_2, LN_10, LOG2_E, LOG2_10, LOG10_2, LOG10_E, FRAC_1_PI, FRAC_1_SQRT_2, FRAC_1_SQRT_2PI, FRAC_2_PI, FRAC_2_SQRT_PI, FRAC_PI_2, FRAC_PI_3, FRAC_PI_4, FRAC_PI_6, FRAC_PI_8, GOLDEN_RATIO, and EULER_GAMMA. That’s a pretty complete toolkit for numerical work — all with full f64 precision, available at compile time.

#055 Apr 2026

55. floor_char_boundary — Truncate Strings Without Breaking UTF-8

Ever tried to truncate a string to a byte limit and got a panic because you sliced in the middle of a multi-byte character? floor_char_boundary fixes that.

The Problem

Slicing a string at an arbitrary byte index panics if that index lands inside a multi-byte UTF-8 character:

let s = "Héllo 🦀 world";
// This panics at runtime!
// let truncated = &s[..5]; // 'é' spans bytes 1..3, index 5 is fine here
// but what if we don't know the content?
let s = "🦀🦀🦀"; // each crab is 4 bytes
// &s[..5] would panic — byte 5 is inside the second crab!

You could scan backward byte-by-byte checking is_char_boundary(), but that’s tedious and easy to get wrong.

The Fix: floor_char_boundary

str::floor_char_boundary(index) returns the largest byte position at or before index that sits on a valid character boundary. Its counterpart ceil_char_boundary gives you the smallest position at or after the index.

fn main() {
    let s = "🦀🦀🦀"; // each 🦀 is 4 bytes, total 12 bytes

    // We want ~6 bytes, but byte 6 is inside the second crab
    let i = s.floor_char_boundary(6);
    assert_eq!(i, 4); // rounds down to end of first 🦀
    assert_eq!(&s[..i], "🦀");

    // ceil_char_boundary rounds up instead
    let j = s.ceil_char_boundary(6);
    assert_eq!(j, 8); // rounds up to end of second 🦀
    assert_eq!(&s[..j], "🦀🦀");
}

Real-World Use: Safe Truncation

Here’s a practical helper that truncates a string to fit a byte budget, adding an ellipsis if it was shortened:

fn truncate(s: &str, max_bytes: usize) -> String {
    if s.len() <= max_bytes {
        return s.to_string();
    }
    let end = s.floor_char_boundary(max_bytes.saturating_sub(3));
    format!("{}...", &s[..end])
}

fn main() {
    let bio = "I love Rust 🦀 and crabs!";
    let short = truncate(bio, 16);
    assert_eq!(short, "I love Rust ...");
    // Budget 16 minus 3 for "..." leaves 13, but byte 13 is inside the 🦀
    // (bytes 12..16), so floor_char_boundary rounds down to 12.
    // "I love Rust " (12 bytes) + "..." = 15 bytes total. Safe: no panics,
    // no broken characters.

    // Short strings pass through unchanged
    assert_eq!(truncate("hi", 10), "hi");
}

No more manual boundary scanning — these two methods handle the UTF-8 dance for you.

54. Cell::update — Modify Interior Values Without the Gymnastics

Tired of writing cell.set(cell.get() + 1) every time you want to tweak a Cell value? Rust 1.88 added Cell::update — one call to read, transform, and write back.

The old way

Cell<T> gives you interior mutability for Copy types, but updating a value always felt clunky:

use std::cell::Cell;

fn main() {
    let counter = Cell::new(0u32);

    // Read, modify, write back — three steps for one logical operation
    counter.set(counter.get() + 1);
    counter.set(counter.get() + 1);
    counter.set(counter.get() + 1);

    assert_eq!(counter.get(), 3);
    println!("Counter: {}", counter.get());
}

You’re calling .get() and .set() in the same expression, which is repetitive and visually noisy — especially when the transformation is more complex than + 1.

Enter Cell::update

Stabilized in Rust 1.88, update takes a closure that receives the current value and returns the new one:

use std::cell::Cell;

fn main() {
    let counter = Cell::new(0u32);

    counter.update(|n| n + 1);
    counter.update(|n| n + 1);
    counter.update(|n| n + 1);

    assert_eq!(counter.get(), 3);
    println!("Counter: {}", counter.get());
}

One call. No repetition of the cell name. The intent — “increment this value” — is immediately clear.

Beyond simple increments

update shines when the transformation is more involved:

use std::cell::Cell;

fn main() {
    let flags = Cell::new(0b0000_1010u8);

    // Toggle bit 0
    flags.update(|f| f ^ 0b0000_0001);
    assert_eq!(flags.get(), 0b0000_1011);

    // Clear the top nibble
    flags.update(|f| f & 0b0000_1111);
    assert_eq!(flags.get(), 0b0000_1011);

    // Saturating shift left
    flags.update(|f| f.saturating_mul(2));
    assert_eq!(flags.get(), 22);

    println!("Flags: {:#010b}", flags.get());
}

Compare that to flags.set(flags.get() ^ 0b0000_0001) — the update version reads like a pipeline of transformations.

A practical example: tracking state in callbacks

Cell::update is especially handy inside closures where you need shared mutable state without reaching for RefCell:

use std::cell::Cell;

fn main() {
    let total = Cell::new(0i64);

    let prices = [199, 450, 85, 320, 1200];
    let discounted: Vec<i64> = prices.iter().map(|&price| {
        let final_price = if price > 500 { price * 9 / 10 } else { price };
        total.update(|t| t + final_price);
        final_price
    }).collect();

    assert_eq!(discounted, vec![199, 450, 85, 320, 1080]);
    assert_eq!(total.get(), 2134);
    println!("Prices: {:?}, Total: {}", discounted, total.get());
}

No RefCell, no runtime borrow checks, no panics — just a clean in-place update.

The signature

impl<T: Copy> Cell<T> {
    pub fn update(&self, f: impl FnOnce(T) -> T);
}

Note the T: Copy bound — this works because Cell copies the value out, passes it to your closure, and copies the result back in. If you need this for non-Copy types, you’ll still want RefCell.
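
If all you need for a non-Copy type is to swap the whole value rather than edit it in place, Cell already covers that with its move-based methods, no RefCell required:

use std::cell::Cell;

let name = Cell::new(String::from("old"));
let previous = name.replace(String::from("new")); // move in, get the old value back
assert_eq!(previous, "old");
assert_eq!(name.take(), "new"); // take() leaves Default::default() behind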

Simple, ergonomic, and long overdue. Available since Rust 1.88.0.

#053 Mar 2026

53. element_offset — Find an Element's Index by Reference

Ever had a reference to an element inside a slice but needed its index? Before Rust 1.94, you’d reach for .position() with value equality or resort to pointer math. Now there’s a cleaner way.

The problem

Imagine you’re scanning a slice and a helper function hands you back a reference to the element it found. You know the reference points somewhere inside your slice, but you need the index — not a value-based search.

fn first_long_word<'a>(words: &'a [&str]) -> Option<&'a &'a str> {
    words.iter().find(|w| w.len() > 5)
}

You could call .position() with value comparison, but that re-scans the slice and compares by value — which is wasteful when you already hold the exact reference.

The solution: element_offset

<[T]>::element_offset takes a reference to an element and returns its index as an Option<usize>, comparing pointers rather than values. If the reference points into the slice, you get Some(index). If it doesn’t, you get None.

fn main() {
    let words = ["hi", "hello", "rustacean", "world"];

    // A helper hands us a reference into the slice
    let found: &&str = words.iter().find(|w| w.len() > 5).unwrap();

    // Get the index by reference identity — no value scan needed
    let index = words.element_offset(found).unwrap();

    assert_eq!(index, 2);
    assert_eq!(words[index], "rustacean");

    println!("Found '{}' at index {}", found, index);
}

Why not .position()?

.position() compares by value and has to walk the slice from the start. element_offset is an O(1) pointer comparison — it checks whether your reference falls within the slice’s memory range and computes the offset directly.

fn main() {
    let values = [10, 20, 10, 30];

    let third = &values[2]; // points at the second '10'

    // position() finds the FIRST 10 (index 0) — wrong!
    let by_value = values.iter().position(|v| v == third);
    assert_eq!(by_value, Some(0));

    // element_offset() finds THIS exact element (index 2) — correct!
    let by_ref = values.element_offset(third);
    assert_eq!(by_ref, Some(2));

    println!("By value: {:?}, By reference: {:?}", by_value, by_ref);
}

This distinction matters whenever your slice has duplicate values.
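
Under the hood, that O(1) claim is plain pointer arithmetic. Conceptually it's something like this sketch (element_offset_sketch is made up, and std's real implementation also handles zero-sized types properly):

use std::mem::size_of;

fn element_offset_sketch<T>(slice: &[T], elem: &T) -> Option<usize> {
    if size_of::<T>() == 0 {
        return None; // ZSTs need special handling; this sketch punts
    }
    let start = slice.as_ptr() as usize;
    let end = start + slice.len() * size_of::<T>();
    let p = elem as *const T as usize;
    if p < start || p >= end {
        return None; // reference lies outside the slice's memory range
    }
    let byte_offset = p - start;
    // Only a multiple of the element size is a valid element address.
    (byte_offset % size_of::<T>() == 0).then(|| byte_offset / size_of::<T>())
}

let values = [10, 20, 10, 30];
assert_eq!(element_offset_sketch(&values, &values[2]), Some(2));
assert_eq!(element_offset_sketch(&values, &42), None);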

When the reference is outside the slice

If the reference doesn’t point into the slice, you get None:

fn main() {
    let a = [1, 2, 3];
    let outside = &42;

    assert_eq!(a.element_offset(outside), None);

    println!("Outside reference: {:?}", a.element_offset(outside));
}

Clean, safe, and no unsafe pointer arithmetic required. Available since Rust 1.94.0.

51. File::lock — File Locking in the Standard Library

Multiple processes writing to the same file? That’s a recipe for corruption. Since Rust 1.89, File::lock gives you OS-backed file locking without external crates.

The problem

You have a CLI tool that appends to a shared log file. Two instances run at the same time, and suddenly your log entries are garbled — half a line from one process interleaved with another. Before 1.89, you’d reach for the fslock or file-lock crate. Now it’s built in.

Exclusive locking

File::lock() acquires an exclusive (write) lock. Only one handle can hold an exclusive lock at a time — all other attempts block until the lock is released:

use std::fs::File;
use std::io::{self, Write};

fn main() -> io::Result<()> {
    let mut file = File::options()
        .write(true)
        .create(true)
        .open("/tmp/rustbites_lock_demo.txt")?;

    // Blocks until the lock is acquired
    file.lock()?;

    writeln!(file, "safe write from process {}", std::process::id())?;

    // Lock is released when the file is closed (dropped)
    Ok(())
}

When the File is dropped, the lock is automatically released. No manual unlock() needed — though you can call file.unlock() explicitly if you want to release it early.

Shared (read) locking

Sometimes you want to allow multiple readers but block writers. That’s what lock_shared() is for:

use std::fs::File;
use std::io::{self, Read};

fn main() -> io::Result<()> {
    let mut file = File::open("/tmp/rustbites_lock_demo.txt")?;

    // Multiple processes can hold a shared lock simultaneously
    file.lock_shared()?;

    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    println!("Read: {contents}");

    file.unlock()?; // explicit release
    Ok(())
}

Shared locks coexist with other shared locks, but block exclusive lock attempts. Classic reader-writer pattern, enforced at the OS level.

Non-blocking with try_lock

Don’t want to wait? try_lock() and try_lock_shared() return immediately instead of blocking:

use std::fs::{File, TryLockError};

fn main() -> std::io::Result<()> {
    let file = File::options()
        .write(true)
        .create(true)
        .open("/tmp/rustbites_trylock.txt")?;

    match file.try_lock() {
        Ok(()) => println!("Lock acquired!"),
        Err(TryLockError::WouldBlock) => println!("File is busy, try later"),
        Err(TryLockError::Error(e)) => return Err(e),
    }

    Ok(())
}

If another process holds the lock, you get TryLockError::WouldBlock instead of hanging. Perfect for tools that should fail fast rather than block when another instance is already running.
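
Put together, try_lock makes a tidy single-instance guard. A sketch, with an arbitrary lock path; keep the returned File alive for as long as you want to hold the lock:

use std::fs::{File, TryLockError};
use std::io;

fn acquire_instance_lock(path: &str) -> io::Result<Option<File>> {
    let file = File::options().write(true).create(true).open(path)?;
    match file.try_lock() {
        Ok(()) => Ok(Some(file)),                  // we hold the lock now
        Err(TryLockError::WouldBlock) => Ok(None), // another instance is running
        Err(TryLockError::Error(e)) => Err(e),
    }
}

// let _guard = acquire_instance_lock("/tmp/my_tool.lock")?;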

Key details

  • Advisory locks: these locks are advisory on most platforms — they don’t prevent other processes from reading/writing the file unless those processes also use locking
  • Automatic release: locks are released when the File handle is dropped
  • Cross-platform: works on Linux, macOS, and Windows (uses flock on Unix, LockFileEx on Windows)
  • Stable since Rust 1.89

#049 Mar 2026

49. std::io::pipe — Anonymous Pipes in the Standard Library

Need to wire up stdout and stderr from a child process, or stream data between threads? Since Rust 1.87, std::io::pipe() gives you OS-backed anonymous pipes without reaching for external crates.

What’s an anonymous pipe?

A pipe is a one-way data channel: one end writes, the other reads. Before 1.87, you needed the os_pipe crate or platform-specific code to get one. Now it’s a single function call:

use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let (mut reader, mut writer) = io::pipe()?;

    writer.write_all(b"hello from the pipe")?;
    drop(writer); // close the write end so reads hit EOF

    let mut buf = String::new();
    reader.read_to_string(&mut buf)?;
    assert_eq!(buf, "hello from the pipe");

    println!("Received: {buf}");
    Ok(())
}

pipe() returns a (PipeReader, PipeWriter) pair. PipeReader implements Read, PipeWriter implements Write — they plug into any generic I/O code you already have.

Merge stdout and stderr from a child process

The killer use case: capture both output streams from a subprocess as a single interleaved stream:

use std::io::{self, Read};
use std::process::Command;

fn main() -> io::Result<()> {
    let (mut recv, send) = io::pipe()?;

    let mut child = Command::new("echo")
        .arg("hello world")
        .stdout(send.try_clone()?)
        .stderr(send)
        .spawn()?;

    child.wait()?;

    let mut output = String::new();
    recv.read_to_string(&mut output)?;
    assert!(output.contains("hello world"));

    println!("Combined output: {output}");
    Ok(())
}

The try_clone() on the writer lets both stdout and stderr write to the same pipe. When both copies of the PipeWriter are dropped (one moved into stdout, one into stderr), reads on the PipeReader return EOF.

Why not just use Command::output()?

Command::output() captures stdout and stderr separately into Vec<u8> — you get two blobs, no interleaving, and everything is buffered in memory. With pipes, you can stream the output as it arrives, merge the two streams, or fan data into multiple consumers. Pipes give you the plumbing; output() gives you the convenience.

Key behavior

A read on PipeReader blocks until data is available or all writers are closed. A write on PipeWriter blocks when the OS pipe buffer is full. This is the same behavior as Unix pipes under the hood — because that’s exactly what they are.
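
The same semantics make pipes just as useful between threads as between processes. A small sketch:

use std::io::{self, BufRead, BufReader, Write};
use std::thread;

fn main() -> io::Result<()> {
    let (reader, mut writer) = io::pipe()?;

    let producer = thread::spawn(move || {
        for i in 0..3 {
            writeln!(writer, "line {i}").unwrap();
        }
        // writer is dropped here, so the reader will see EOF
    });

    for line in BufReader::new(reader).lines() {
        println!("got: {}", line?);
    }
    producer.join().unwrap();
    Ok(())
}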

#047 Mar 2026

47. Vec::pop_if — Conditionally Pop the Last Element

Need to remove the last element of a Vec only when it meets a condition? Vec::pop_if does exactly that — no index juggling, no separate check-then-pop.

The old way

Before pop_if, you’d write something like this:

let mut stack = vec![1, 2, 3, 4];

if stack.last().is_some_and(|x| *x > 3) {
    let val = stack.pop().unwrap();
    println!("Popped: {val}");
}

Two separate calls, plus an unwrap() that’s only valid while the check above it stays in sync — last() verifies one thing, then pop() acts on an assumption.

Enter pop_if

Stabilized in Rust 1.86, pop_if combines the check and the removal into one atomic operation:

let mut stack = vec![1, 2, 3, 4];

// Pop the last element only if it's greater than 3
let popped = stack.pop_if(|x| *x > 3);
assert_eq!(popped, Some(4));
assert_eq!(stack, [1, 2, 3]);

// Now the last element is 3 — doesn't match, so nothing happens
let stayed = stack.pop_if(|x| *x > 3);
assert_eq!(stayed, None);
assert_eq!(stack, [1, 2, 3]);

The closure receives a &mut T reference to the last element. If it returns true, the element is removed and returned as Some(T). If false (or the vec is empty), you get None.

Mutable access inside the predicate

Because the closure gets &mut T, you can even modify the element before deciding whether to pop it:

let mut tasks = vec![
    String::from("buy milk"),
    String::from("URGENT: deploy fix"),
];

let urgent = tasks.pop_if(|task| {
    if task.starts_with("URGENT:") {
        *task = task.replacen("URGENT: ", "", 1);
        true
    } else {
        false
    }
});

assert_eq!(urgent.as_deref(), Some("deploy fix"));
assert_eq!(tasks, vec!["buy milk"]);

A practical use: draining from the back

pop_if is handy for processing a sorted vec from the tail. Think of a priority queue backed by a sorted vec where you only want to process items above a threshold:

let mut scores = vec![10, 25, 50, 75, 90];

let mut high_scores = Vec::new();
while let Some(score) = scores.pop_if(|s| *s >= 70) {
    high_scores.push(score);
}

assert_eq!(high_scores, vec![90, 75]);
assert_eq!(scores, vec![10, 25, 50]);

Clean, expressive, and no off-by-one errors. Another small addition to Vec that makes everyday Rust just a bit nicer.

#042 Mar 2026

42. array::from_fn — Build Arrays from a Function

Need a fixed-size array where each element depends on its index? Skip the vec![..].try_into().unwrap() dance — std::array::from_fn builds it in place with zero allocation.

The problem

Rust arrays [T; N] have a fixed size known at compile time, but initializing them with computed values used to be awkward:

// Clunky: build a Vec, then convert
let squares: [u64; 5] = (0..5)
    .map(|i| i * i)
    .collect::<Vec<_>>()
    .try_into()
    .unwrap();

That allocates a Vec on the heap just to get a stack array. For Copy types you could use [0u64; 5] and a loop, but that doesn’t work for non-Copy types and is verbose either way.

The fix: array::from_fn

let squares: [u64; 5] = std::array::from_fn(|i| (i as u64) * (i as u64));
assert_eq!(squares, [0, 1, 4, 9, 16]);

The closure receives the index (0..N) and returns the element. No heap allocation, no unwrapping — just a clean array on the stack.

Non-Copy types? No problem

from_fn shines when your elements don’t implement Copy:

let labels: [String; 4] = std::array::from_fn(|i| format!("item_{i}"));
assert_eq!(labels[0], "item_0");
assert_eq!(labels[3], "item_3");

Try doing that with [String::new(); 4] — the compiler won’t let you because String isn’t Copy.

Stateful initialization

The closure can capture mutable state. Elements are produced left to right, index 0 first:

let mut acc = 1u64;
let powers_of_two: [u64; 6] = std::array::from_fn(|_| {
    let val = acc;
    acc *= 2;
    val
});
assert_eq!(powers_of_two, [1, 2, 4, 8, 16, 32]);

A practical example: lookup tables

Build a lookup table of which ASCII bytes are uppercase letters:

let is_upper: [bool; 128] = std::array::from_fn(|i| {
    (i as u8).is_ascii_uppercase()
});
assert!(is_upper[b'A' as usize]);
assert!(!is_upper[b'a' as usize]);
assert!(!is_upper[b'0' as usize]);

Why it matters

std::array::from_fn has been stable since Rust 1.63. It avoids heap allocation, works with any type (no Copy or Default bound), and keeps your code readable. Anytime you reach for Vec just to build a fixed-size array — stop, and use from_fn instead.

#032 Mar 2026

32. iter::successors

Need to generate a sequence where each element depends on the previous one? std::iter::successors turns any “next from previous” logic into a lazy iterator.

use std::iter::successors;

// Powers of 10 that fit in a u32
let powers: Vec<u32> = successors(Some(1u32), |&n| n.checked_mul(10))
    .collect();

// [1, 10, 100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000, 100_000_000, 1_000_000_000]
assert_eq!(powers.len(), 10);

You give it a starting value and a closure that computes the next element from a reference to the current one. Return None to stop — here checked_mul naturally returns None on overflow, so the iterator terminates on its own.

It works great for any recurrence:

use std::iter::successors;

// Collatz sequence starting from 12
let collatz: Vec<u64> = successors(Some(12u64), |&n| match n {
    1 => None,
    n if n % 2 == 0 => Some(n / 2),
    n => Some(3 * n + 1),
}).collect();

assert_eq!(collatz, vec![12, 6, 3, 10, 5, 16, 8, 4, 2, 1]);

Think of it as unfold for when your state is the yielded value. Simple, lazy, and zero-allocation until you collect.
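
One more shape it handles well: walking a parent chain. Here, repeatedly stripping the last path component (a quick sketch):

use std::iter::successors;

let ancestors: Vec<&str> = successors(Some("a/b/c"), |&p| {
    p.rfind('/').map(|i| &p[..i]) // stop once there's no '/' left
})
.collect();

assert_eq!(ancestors, vec!["a/b/c", "a/b", "a"]);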