#118 May 2026

118. [T; N]::map — Transform an Array Without Allocating a Vec

[1, 2, 3].iter().map(|n| n * 2).collect::<Vec<_>>() works, but you’ve thrown the length away in the type and paid for a heap allocation. Arrays have their own map — same shape in, same shape out, no Vec in sight.

The reflex for transforming an array is the iterator chain:

let nums = [1, 2, 3, 4];
let doubled: Vec<i32> = nums.iter().map(|n| n * 2).collect();
assert_eq!(doubled, vec![2, 4, 6, 8]);

That gives you a Vec<i32>. The compiler no longer knows the length, and you allocated on the heap to find that out. If you want the array shape back, you’re stuck with try_into and an unwrap you don’t want.

[T; N]::map skips all of it. The output is [U; N] — same N, brand-new element type:

let nums = [1, 2, 3, 4];
let doubled: [i32; 4] = nums.map(|n| n * 2);
assert_eq!(doubled, [2, 4, 6, 8]);

No heap, no length erased, no try_into. Just an array on the stack with a different element type.

It takes each element by value, so it works fine with non-Copy types — no clone dance:

let names = [String::from("a"), String::from("bb"), String::from("ccc")];
let lens: [usize; 3] = names.map(|s| s.len());
assert_eq!(lens, [1, 2, 3]);

The closure consumes the String, the array is moved, and you get a fresh [usize; 3] back. Compare to the iterator version, which would need .into_iter() plus a try_into to recover the array type.

It’s also a clean way to build initialized arrays from one you already have — RGB to RGBA, raw bytes to parsed records, anything fixed-width:

let rgb: [u8; 3] = [200, 100, 50];
let rgba: [u8; 4] = {
    let [r, g, b] = rgb.map(|c| c.saturating_add(5));
    [r, g, b, 255]
};
assert_eq!(rgba, [205, 105, 55, 255]);

When you genuinely want a Vec, .iter().map().collect() still wins. But when the length is part of the design — config slots, fixed-N pipelines, embedded buffers, no_std code — [T; N]::map keeps that fact in the type system instead of throwing it away.

#117 May 2026

117. Iterator::step_by — Every Nth Element Without filter + enumerate

Want every 3rd value from a series? The reflex is enumerate().filter(|(i, _)| i % 3 == 0) — three combinators, one modulo, and you’ve thrown away the indices anyway. step_by(3) does the same thing in one call.

The classic shape: keep every Nth item, drop the rest. Most people reach for enumerate plus a modulo filter:

let xs = [10, 20, 30, 40, 50, 60, 70];

let evens_by_index: Vec<_> = xs
    .iter()
    .enumerate()
    .filter(|(i, _)| i % 2 == 0)
    .map(|(_, x)| *x)
    .collect();

assert_eq!(evens_by_index, [10, 30, 50, 70]);

That works, but you’re indexing just to throw the index away, and the filter runs once per element even though the iterator already knows where to land.

Iterator::step_by(n) yields the first item, then skips n - 1 items before each subsequent one. Same result, no bookkeeping:

let xs = [10, 20, 30, 40, 50, 60, 70];

let stepped: Vec<_> = xs.iter().step_by(2).copied().collect();

assert_eq!(stepped, [10, 30, 50, 70]);

The first element is always included — step_by(n) starts at index 0, then jumps. If you want to skip the first one, chain with skip:

let xs = [10, 20, 30, 40, 50, 60, 70];

let from_second: Vec<_> = xs.iter().skip(1).step_by(2).copied().collect();

assert_eq!(from_second, [20, 40, 60]);

It composes nicely with ranges, which is where it really shines — multiples, downsampling, every-other-frame logic without writing the loop yourself:

// All multiples of 5 up to 30 (inclusive)
let multiples: Vec<i32> = (0..=30).step_by(5).collect();
assert_eq!(multiples, [0, 5, 10, 15, 20, 25, 30]);

// Downsample a buffer to one in four
let signal: Vec<f32> = (0..16).map(|i| i as f32).collect();
let downsampled: Vec<f32> = signal.iter().step_by(4).copied().collect();
assert_eq!(downsampled, [0.0, 4.0, 8.0, 12.0]);

One footgun: step_by(0) panics. The step has to be at least 1, which makes sense — you can’t “advance by zero” and make progress — but it’s a runtime panic, not a compile error, so don’t pass a step you computed at runtime without checking.

// This would panic: (0..10).step_by(0)
fn safe_step(xs: &[i32], n: usize) -> Vec<i32> {
    if n == 0 { return Vec::new(); }
    xs.iter().step_by(n).copied().collect()
}

assert_eq!(safe_step(&[1, 2, 3, 4], 0), Vec::<i32>::new());
assert_eq!(safe_step(&[1, 2, 3, 4], 2), vec![1, 3]);

Reach for step_by whenever you’d otherwise write enumerate().filter(|(i, _)| i % n == 0) — same behavior, half the code, and the iterator can actually skip elements instead of inspecting every one.

#116 May 2026

116. Path::file_prefix — Get the Real Stem of archive.tar.gz

Path::file_stem strips the last extension, so archive.tar.gz comes back as archive.tar. That’s almost never what you want for double-extension files. file_prefix strips from the first dot instead — archive, finally.

The classic confusion. You ask for the “stem” of a tarball and get something with .tar still glued on:

use std::path::Path;

let p = Path::new("backups/archive.tar.gz");

assert_eq!(p.file_stem(),   Some("archive.tar".as_ref()));
assert_eq!(p.extension(),   Some("gz".as_ref()));

file_stem takes the file name and drops everything from the last . onwards. For a single extension that’s fine. For .tar.gz, .min.js, .d.ts, .spec.ts, you end up doing the second strip yourself:

use std::path::Path;

fn real_stem_old(p: &Path) -> Option<&str> {
    let stem = p.file_stem()?.to_str()?;
    Some(stem.split('.').next().unwrap_or(stem))
}

assert_eq!(real_stem_old(Path::new("archive.tar.gz")), Some("archive"));
assert_eq!(real_stem_old(Path::new("bundle.min.js")),  Some("bundle"));

Works, but you’ve left OsStr land just to do a string split, and you’ve quietly made the function lossy on non-UTF-8 paths.

Rust 1.91 stabilised Path::file_prefix. It returns the file name up to the first . — staying in OsStr the whole time:

use std::path::Path;

assert_eq!(Path::new("archive.tar.gz").file_prefix(), Some("archive".as_ref()));
assert_eq!(Path::new("bundle.min.js").file_prefix(),  Some("bundle".as_ref()));
assert_eq!(Path::new("notes.md").file_prefix(),       Some("notes".as_ref()));
assert_eq!(Path::new("README").file_prefix(),         Some("README".as_ref()));

Leading dots on dotfiles are kept — exactly like file_stem already does — so you don’t accidentally turn .bashrc into an empty string:

use std::path::Path;

assert_eq!(Path::new(".bashrc").file_prefix(),     Some(".bashrc".as_ref()));
assert_eq!(Path::new(".config.toml").file_prefix(), Some(".config".as_ref()));

Pair it with file_stem when you want both halves of a multi-extension name in one place:

use std::path::Path;

let p = Path::new("logs/app.2026-05-03.log.gz");
let prefix = p.file_prefix().and_then(|s| s.to_str()).unwrap_or("");
let stem   = p.file_stem().and_then(|s| s.to_str()).unwrap_or("");

assert_eq!(prefix, "app");                     // the real name
assert_eq!(stem,   "app.2026-05-03.log");      // everything except the final ext

Reach for file_prefix whenever a filename has more than one dot and you want the part a human would call “the name”.

#115 May 2026

115. Vec::resize_with — Grow a Vec With a Closure, Not a Clone

Vec::resize makes every new slot a clone of the same value. When you need fresh values per slot — counters, allocations, defaults — resize_with calls a closure for each new element instead.

Vec::resize(n, value) is fine when the filler is cheap and identical, but it has two annoyances. It needs T: Clone, and every new slot is a clone of the same value. That sounds like it should misbehave here:

// Every new slot is a clone of the same Vec — shared storage?
let mut grid: Vec<Vec<u8>> = Vec::new();
grid.resize(3, Vec::new());
grid[0].push(42);
assert_eq!(grid[1], vec![]); // fine — Vec::new() clones to a new empty Vec

That one happens to be safe because Vec::clone actually allocates. But the moment your T is Rc<RefCell<…>>, every slot points at the same cell. And if T isn’t Clone at all, you can’t call resize in the first place.

resize_with takes a closure and calls it once per new slot:

let mut counter = 0;
let mut v = vec![10, 20];
v.resize_with(5, || {
    counter += 1;
    counter
});
assert_eq!(v, vec![10, 20, 1, 2, 3]);

The closure can capture mutable state, so each call is fresh. Generating IDs, pulling from an RNG, allocating independent buffers — all easy:

let mut next_id = 100;
let mut buffers: Vec<(usize, Vec<u8>)> = Vec::new();
buffers.resize_with(3, || {
    let id = next_id;
    next_id += 1;
    (id, Vec::with_capacity(1024))
});
assert_eq!(buffers[0].0, 100);
assert_eq!(buffers[2].0, 102);

For non-Clone types, Default::default is the usual filler:

#[derive(Default, Debug, PartialEq)]
struct Slot {
    open: bool,
    payload: Vec<u8>,
}

let mut slots: Vec<Slot> = Vec::new();
slots.resize_with(2, Default::default);
assert_eq!(slots, vec![Slot::default(), Slot::default()]);

Shrinking still works, and the closure is never called when the new length is smaller:

let mut v = vec![1, 2, 3, 4, 5];
v.resize_with(2, || unreachable!());
assert_eq!(v, vec![1, 2]);

Reach for resize_with whenever the filler isn’t a single static value — and especially when T doesn’t (or shouldn’t) implement Clone.

#114 May 2026

114. Option::transpose — Use ? on an Optional Result

Got an Option<Result<T, E>> and want to ? the error out? You can’t — ? doesn’t reach inside the Option. transpose flips it to Result<Option<T>, E>, and the rest takes care of itself.

The classic case: a config field that’s optional, but if it’s there, it has to parse. The old dance is three match arms just to thread the error out of the Option:

use std::num::ParseIntError;

fn parse_port_old(raw: Option<&str>) -> Result<Option<u16>, ParseIntError> {
    match raw.map(str::parse::<u16>) {
        Some(Ok(p))  => Ok(Some(p)),
        Some(Err(e)) => Err(e),
        None         => Ok(None),
    }
}

transpose collapses it:

use std::num::ParseIntError;

fn parse_port(raw: Option<&str>) -> Result<Option<u16>, ParseIntError> {
    raw.map(str::parse::<u16>).transpose()
}

Option<Result<T, E>>::transpose() returns Result<Option<T>, E> — exactly the shape ? wants. Now you can chain it inline:

use std::collections::HashMap;
use std::num::ParseIntError;

fn read_port(config: &HashMap<&str, &str>) -> Result<Option<u16>, ParseIntError> {
    let port = config.get("port").copied().map(str::parse::<u16>).transpose()?;
    Ok(port)
}

It works the other way too: Result<Option<T>, E>::transpose() returns Option<Result<T, E>>. Handy when an iterator chain wants the Option on the outside.

All three cases, one call:

assert_eq!(parse_port(Some("8080")).unwrap(), Some(8080));
assert_eq!(parse_port(None).unwrap(), None);
assert!(parse_port(Some("nope")).is_err());

Reach for transpose any time Option and Result get nested and you wish ? could see through both.

#113 May 2026

113. Arc::make_mut — Mutate Inside an Arc Without the Dance

You have an Arc<T>, you want a &mut T. Arc only hands out &T, so the usual workaround is clone-the-inner, mutate, rewrap. Arc::make_mut does that for you — and skips the clone when no one else is watching.

The manual version everyone writes once and then copies forever:

use std::sync::Arc;

let mut shared = Arc::new(vec![1, 2, 3]);

let mut owned: Vec<i32> = (*shared).clone(); // always clones
owned.push(4);
shared = Arc::new(owned);                    // always reallocates the Arc

assert_eq!(*shared, vec![1, 2, 3, 4]);

It works, but it clones the Vec and reallocates the Arc every single time — even when this Arc is the only one pointing at the data.

Arc::make_mut takes &mut Arc<T> and hands you &mut T:

use std::sync::Arc;

let mut shared = Arc::new(vec![1, 2, 3]);

Arc::make_mut(&mut shared).push(4);

assert_eq!(*shared, vec![1, 2, 3, 4]);

One call, one borrow, and — crucially — no clone when this Arc is unique:

use std::sync::Arc;

let mut solo = Arc::new(vec![1, 2, 3]);
Arc::make_mut(&mut solo).push(99); // strong_count == 1, mutates in place
assert_eq!(*solo, vec![1, 2, 3, 99]);

When the Arc is shared, make_mut quietly clones the inner value into a fresh allocation and detaches your handle from the rest. The other handles keep seeing the old data — clone-on-write, exactly like you’d want:

use std::sync::Arc;

let mut a = Arc::new(vec![1, 2, 3]);
let b = Arc::clone(&a);            // strong_count == 2

Arc::make_mut(&mut a).push(99);    // clones, then mutates the clone

assert_eq!(*a, vec![1, 2, 3, 99]); // a moved to its own allocation
assert_eq!(*b, vec![1, 2, 3]);     // b still sees the original

The same method exists on Rc for single-threaded code, with identical semantics. Reach for make_mut whenever you find yourself cloning the inside of an Arc just to change one field — you’ll skip the allocation in the common case and get an honest &mut T in return.

#112 May 2026

112. Iterator::scan — Fold That Yields Every Step

fold keeps the running state but only hands you the final answer. So you reach for a mut variable plus map, or you give up and collect first. scan is the missing middle: a fold that yields each intermediate step.

The setup is familiar — you want a running total, not just the sum:

let nums = [1, 2, 3, 4, 5];

let mut sum = 0;
let totals: Vec<i32> = nums.iter()
    .map(|&x| { sum += x; sum })
    .collect();
assert_eq!(totals, vec![1, 3, 6, 10, 15]);

It works, but the state lives outside the chain. map is supposed to be pure — leaning on a captured mut makes the iterator harder to refactor and awkward to compose, since the closure borrows local state.

scan puts the state inside the chain. You hand it an initial value and a closure that mutates the state and returns each yielded item:

let nums = [1, 2, 3, 4, 5];

let totals: Vec<i32> = nums.iter()
    .scan(0, |sum, &x| {
        *sum += x;
        Some(*sum)
    })
    .collect();
assert_eq!(totals, vec![1, 3, 6, 10, 15]);

No captured state, no second pass. The closure returns Option<U>, so returning None ends the iteration early — handy for “stop when the running total crosses a threshold”:

let nums = [3, 4, 5, 6, 7];

let until_ten: Vec<i32> = nums.iter()
    .scan(0, |sum, &x| {
        *sum += x;
        (*sum <= 10).then_some(*sum)
    })
    .collect();
assert_eq!(until_ten, vec![3, 7]);

The state isn’t limited to numbers either — seed it with None to remember the previous element and you’ve got differences in one pass:

let readings = [10, 13, 12, 18, 20];

let deltas: Vec<i32> = readings.iter()
    .scan(None, |prev, &x| {
        let d = prev.map(|p| x - p);
        *prev = Some(x);
        Some(d)
    })
    .flatten()
    .collect();
assert_eq!(deltas, vec![3, -1, 6, 2]);

Reach for scan whenever you’d otherwise write let mut acc = … outside a map. Same shape, no state escaping the chain.

#111 Apr 2026

111. Vec::insert_mut — Splice In and Edit Without Reindexing

You insert a placeholder, then index back to fix it up. Two lookups, two bounds checks, one wobble. Vec::insert_mut — stable in 1.95 — hands you the &mut T directly.

The classic dance:

let mut v = vec![1, 2, 4, 5];
v.insert(2, 0);
v[2] = 3; // recompute the index, second bounds check
assert_eq!(v, [1, 2, 3, 4, 5]);

insert_mut returns the slot:

let mut v = vec![1, 2, 4, 5];
let slot = v.insert_mut(2, 0);
*slot = 3;
assert_eq!(v, [1, 2, 3, 4, 5]);

This shines when the value is built in pieces — push a default, then fill it in based on where it landed:

#[derive(Default, Debug, PartialEq)]
struct Job { id: u32, label: String }

let mut jobs: Vec<Job> = vec![Job { id: 1, label: "build".into() }];
let j = jobs.insert_mut(0, Job::default());
j.id = 0;
j.label = "setup".into();

assert_eq!(jobs[0], Job { id: 0, label: String::from("setup") });
assert_eq!(jobs[1].label, "build");

Rust 1.95 grew the whole _mut family alongside Vec::push_mut: Vec::insert_mut, VecDeque::push_front_mut, VecDeque::push_back_mut, VecDeque::insert_mut, LinkedList::push_front_mut, and LinkedList::push_back_mut. Same idea everywhere — the place-it method now returns a mutable reference to the slot it just placed.

use std::collections::VecDeque;

let mut q: VecDeque<i32> = VecDeque::from([2, 3]);
let head = q.push_front_mut(0);
*head += 1; // now 1
assert_eq!(q, VecDeque::from([1, 2, 3]));

Quietly useful, no churn — just fewer indices floating around.

#110 Apr 2026

110. slice::split_at_checked — Split Without the Panic

slice.split_at(i) panics the second i > len. The usual fix is a length check wrapped around the call so you don’t blow up on a bad index. split_at_checked does the same job in one call and hands you an Option.

The classic trap — a single bad index away from a panic:

let xs = [1, 2, 3, 4];
let (head, tail) = xs.split_at(10); // panics: 10 > xs.len()

The defensive version everyone writes:

let xs = [1, 2, 3, 4];
let i = 10;

if i <= xs.len() {
    let (head, tail) = xs.split_at(i);
    // ...use head and tail
} else {
    // handle out-of-bounds
}

Two reads of i, one easy off-by-one (< vs <=), and a panic waiting if you ever drop the guard.

Rust 1.80 stabilised split_at_checked (and split_at_mut_checked), which folds the bounds check into the return type:

let xs = [1, 2, 3, 4];

assert_eq!(xs.split_at_checked(2), Some((&xs[..2], &xs[2..])));
assert_eq!(xs.split_at_checked(4), Some((&xs[..], &[][..]))); // boundary is fine
assert_eq!(xs.split_at_checked(5), None);                     // would have panicked

Now the bounds check is the API. You get an Option<(&[T], &[T])> and the compiler nudges you to handle the None case:

fn take_prefix(buf: &[u8], n: usize) -> Option<&[u8]> {
    let (head, _rest) = buf.split_at_checked(n)?;
    Some(head)
}

assert_eq!(take_prefix(b"hello", 3), Some(&b"hel"[..]));
assert_eq!(take_prefix(b"hi", 3), None);

? does the bailout, no manual length check, no panic path. This works on &str too, where the index has to land on a UTF-8 boundary — and it returns None if it doesn’t, instead of panicking.

109. BinaryHeap::peek_mut — Edit the Top Without Pop-and-Push

Updating the largest element in a BinaryHeap shouldn’t take two heap operations. peek_mut lets you mutate it in place and re-heapifies on drop — one sift instead of two.

The pop-and-push dance

Say you keep tasks in a max-heap by priority and you need to bump the top one down. The naive recipe is to pop, change it, and push it back:

use std::collections::BinaryHeap;

let mut heap = BinaryHeap::from([3, 1, 5, 2, 4]);

// Pull the max out, edit it, push it back.
let top = heap.pop().unwrap();
heap.push(top - 10);

assert_eq!(heap.peek(), Some(&4));

That’s two O(log n) heap operations — a sift-down for pop, a sift-up for push — plus an awkward two-step where the heap briefly forgets it ever had a max.

peek_mut does it in one shot

peek_mut returns a PeekMut guard that derefs to the top element. You mutate it in place, and the heap fixes itself with a single sift-down when the guard is dropped:

use std::collections::BinaryHeap;

let mut heap = BinaryHeap::from([3, 1, 5, 2, 4]);

if let Some(mut top) = heap.peek_mut() {
    *top -= 10; // 5 becomes -5
}
// PeekMut dropped here — heap re-heapifies once.

assert_eq!(heap.peek(), Some(&4));

One traversal, no temporary owned value, and the heap invariant is restored before the next line of code runs.

Conditionally pop without a re-peek

PeekMut also has an associated pop function. Look at the top, decide whether to keep or remove it, all without peeking twice:

use std::collections::{BinaryHeap, binary_heap::PeekMut};

let mut heap = BinaryHeap::from([10, 7, 4, 1]);

if let Some(top) = heap.peek_mut() {
    if *top > 5 {
        // Pop only when the top passes a check.
        PeekMut::pop(top);
    }
}

assert_eq!(heap.peek(), Some(&7));

Same shape as Vec::pop_if, but for the heap’s max — pull the top out only when it actually meets your condition.

When to reach for it

Any time you’d write pop().push() to edit the max, peek_mut is the better tool: one heap fixup instead of two, and the borrow stays inside the heap so you don’t shuffle ownership for nothing. Great fit for priority queues that adjust the top entry — schedulers, event loops, top-k trackers.