#132 May 2026

132. abs_diff — Subtract Without Caring Which Side Is Bigger

Subtracting two unsigned integers and the smaller one comes first? Instant panic. a.abs_diff(b) returns the gap as a u* regardless of which side is bigger — no branching, no overflow.

The Problem

Unsigned subtraction in Rust panics in debug and wraps in release the moment the result would go negative. You end up writing the same branch over and over:

fn gap(a: u32, b: u32) -> u32 {
    if a > b { a - b } else { b - a }
}

assert_eq!(gap(10, 3), 7);
assert_eq!(gap(3, 10), 7);

It works, but it’s noise. And the same trick on signed integers has a sneakier bug: with a = i32::MAX and b = i32::MIN, a - b overflows — the gap between the two doesn’t fit in an i32.

After: abs_diff

Every integer type carries an abs_diff method that returns the unsigned gap directly. Signed inputs come back as the matching unsigned type, so the result always fits:

assert_eq!(10u32.abs_diff(3), 7);
assert_eq!(3u32.abs_diff(10), 7);

// Signed → unsigned, no overflow at the extremes
assert_eq!((-5i32).abs_diff(5), 10u32);
assert_eq!(i32::MIN.abs_diff(i32::MAX), u32::MAX);

No if, no checked_sub, no casting through i64 to dodge overflow. One call, one number.

Where It Earns Its Keep

Distance-style calculations are the obvious fit — anywhere “how far apart are these” is the real question and the sign is incidental:

fn manhattan(a: (i32, i32), b: (i32, i32)) -> u32 {
    a.0.abs_diff(b.0) + a.1.abs_diff(b.1)
}

assert_eq!(manhattan((1, 2), (4, 6)), 7);
assert_eq!(manhattan((-3, -3), (3, 3)), 12);

It also cleans up timestamp deltas, where one side is “now” and the other could be in the past or the future:

let scheduled: u64 = 1_700_000_000;
let actual:    u64 = 1_699_999_995;

let drift = scheduled.abs_diff(actual);
assert_eq!(drift, 5);

Whenever you catch yourself writing if a > b { a - b } else { b - a }, reach for abs_diff instead.

#131 May 2026

131. mem::offset_of! — Byte Offsets Without the memoffset Crate

You need the byte offset of a field — for FFI, custom serialization, or talking to a C struct. The old answer was unsafe pointer arithmetic on a MaybeUninit, or pulling in the memoffset crate. std::mem::offset_of! is the safe, one-liner replacement.

The problem

Say you’re matching a C layout and need to know exactly where each field lives in memory:

#[repr(C)]
struct Header {
    magic: u32,
    version: u16,
    flags: u16,
    payload_len: u64,
}

The pre-1.77 way meant either an external crate or hand-rolled unsafe:

use std::mem::MaybeUninit;
use std::ptr::addr_of;

fn payload_len_offset_old() -> usize {
    let uninit = MaybeUninit::<Header>::uninit();
    let base = uninit.as_ptr() as usize;
    let field = unsafe { addr_of!((*uninit.as_ptr()).payload_len) } as usize;
    field - base
}

It works, but unsafe, raw pointers, and a MaybeUninit are a lot of ceremony for “where does this field start?”

The fix: mem::offset_of!

use std::mem::offset_of;

let magic_off       = offset_of!(Header, magic);
let version_off     = offset_of!(Header, version);
let flags_off       = offset_of!(Header, flags);
let payload_len_off = offset_of!(Header, payload_len);

assert_eq!(magic_off, 0);
assert_eq!(version_off, 4);
assert_eq!(flags_off, 6);
assert_eq!(payload_len_off, 8);

No unsafe. No allocation. No instance of Header ever exists. The macro expands to a const-evaluable usize — usable inside const fn and static items.

Nested fields work too

Dot through a path of named fields and offset_of! keeps walking:

#[repr(C)]
struct Inner {
    a: u32,
    b: u32,
}

#[repr(C)]
struct Outer {
    tag: u8,
    _pad: [u8; 3],
    inner: Inner,
}

assert_eq!(offset_of!(Outer, inner), 4);
assert_eq!(offset_of!(Outer, inner.b), 8);

Tuples and tuple structs use numeric indices:

#[repr(C)]
struct Pair(u8, u32);

assert_eq!(offset_of!(Pair, 0), 0);
assert_eq!(offset_of!(Pair, 1), 4);

When it earns its keep

FFI bindings, custom binary parsers, kernel-style intrusive data structures, and anywhere you’d otherwise reach for memoffset. The macro is in core, so it works in no_std. Reach for it whenever you find yourself writing as *const _ as usize math.

#130 May 2026

130. mem::discriminant — Compare Enum Variants, Ignore the Payload

You want to know “is this another Click?” — not whether the coordinates match. Hand-rolling a match for every variant gets old fast. std::mem::discriminant answers that question in one call.

The problem

Say you have an event enum with payload data on every variant:

#[derive(Debug)]
enum Event {
    Click { x: i32, y: i32 },
    KeyPress(char),
    Scroll(i32),
}

Two Clicks with different coordinates are still both clicks. Deriving PartialEq won’t help — that compares the inner data too. The usual workaround is a tedious match:

fn same_kind_match(a: &Event, b: &Event) -> bool {
    match (a, b) {
        (Event::Click { .. }, Event::Click { .. }) => true,
        (Event::KeyPress(_), Event::KeyPress(_)) => true,
        (Event::Scroll(_), Event::Scroll(_)) => true,
        _ => false,
    }
}

Every new variant means another arm. Forget one and you’ve got a silent bug.

The fix: mem::discriminant

use std::mem::discriminant;

fn same_kind(a: &Event, b: &Event) -> bool {
    discriminant(a) == discriminant(b)
}

discriminant(&value) returns an opaque Discriminant<T> representing which variant the value is — nothing more. Two Clicks with wildly different coordinates have the same discriminant; a Click and a KeyPress don’t.

No match, no missing-arm bugs, nothing to update when you add a new variant.

Bonus: Discriminant<T> is Hash + Eq + Copy

That makes it a perfectly good HashMap key, which is great for counting events by variant without writing a tag enum:

use std::collections::HashMap;
use std::mem::discriminant;

let events = [
    Event::Click { x: 0, y: 0 },
    Event::KeyPress('a'),
    Event::Click { x: 1, y: 1 },
    Event::Scroll(5),
    Event::KeyPress('b'),
];

let mut counts: HashMap<_, usize> = HashMap::new();
for ev in &events {
    *counts.entry(discriminant(ev)).or_insert(0) += 1;
}
// counts now holds: Click -> 2, KeyPress -> 2, Scroll -> 1

Reach for discriminant whenever you find yourself writing a kind() method or an “is this the same variant?” match. The std lib already has it.

#129 May 2026

129. BTreeMap::extract_if — Range-Scoped Filter-and-Remove in One Pass

Vec::extract_if was great. The BTreeMap version, stable since Rust 1.91, adds a trick the Vec variant can’t match — its range is over keys, not positions, so you can scope the scan to part of the keyspace.

The collect-keys-then-remove dance

You want to remove every entry matching a predicate from a BTreeMap, and keep the removed entries. The borrow checker won’t let you mutate the map while you’re iterating it, so the textbook workaround is a two-pass clone-the-keys shuffle:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, String> = BTreeMap::from([
    (1, "boot".into()),
    (2, "login".into()),
    (3, "click".into()),
    (4, "logout".into()),
]);

let cutoff = 3;

let to_remove: Vec<u64> = events
    .iter()
    .filter(|(k, _)| **k < cutoff)
    .map(|(k, _)| *k)            // requires K: Copy or Clone
    .collect();

let mut expired = Vec::new();
for k in to_remove {
    if let Some(v) = events.remove(&k) {
        expired.push((k, v));
    }
}

assert_eq!(expired.len(), 2);

It works, but the costs add up. You walk the map twice, you require K: Clone (or Copy), and you allocate a throwaway Vec<K> just to break the self-borrow.

extract_if is one pass — and it takes a range

BTreeMap::extract_if(range, pred) returns an iterator that visits keys in ascending order inside the given range, and yields (K, V) for every entry whose predicate returns true. The map’s structure is updated as the iterator advances:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, &str> = BTreeMap::from([
    (1, "boot"),
    (2, "login"),
    (3, "click"),
    (4, "logout"),
    (5, "shutdown"),
]);

let expired: BTreeMap<u64, &str> =
    events.extract_if(..3, |_k, _v| true).collect();

assert_eq!(expired.into_iter().collect::<Vec<_>>(), [(1, "boot"), (2, "login")]);
assert_eq!(events.keys().copied().collect::<Vec<_>>(), [3, 4, 5]);

No Clone bound. No second pass. No temp Vec<K>. The range ..3 does double duty as a prune — keys outside it aren’t even visited, which matters when the map is large and the range is small.

The closure can mutate survivors too

The predicate signature is FnMut(&K, &mut V) -> bool. Returning true removes and yields; returning false keeps the entry in the map — but you’ve still got &mut V, so you can edit it on the way past:

use std::collections::BTreeMap;

let mut scores: BTreeMap<&str, i32> = BTreeMap::from([
    ("alice", 85),
    ("bob",   42),
    ("carol", 91),
    ("dan",   30),
]);

let failing: Vec<(&str, i32)> = scores
    .extract_if(.., |_k, v| {
        if *v < 50 {
            true            // pull this one out
        } else {
            *v += 5;        // bump the survivors by 5
            false           // keep it
        }
    })
    .collect();

assert_eq!(failing, [("bob", 42), ("dan", 30)]);
assert_eq!(scores[&"alice"], 90);
assert_eq!(scores[&"carol"], 96);

One traversal, three behaviors at once: filter, remove, and update.

Constraints

The range is RangeBounds<K>, so anything from .. to start..=end works. extract_if panics if the range’s start is greater than its end, or if the two are equal with both bounds excluded. BTreeSet::extract_if is the same idea minus the value — the predicate signature is FnMut(&T) -> bool.

When to reach for it

Whenever the operation is “walk a sorted map, peel off the entries matching a condition, optionally bounded to a key range”. Expiring old timestamped events. Draining tasks below a priority watermark. Splitting a map at a tenant boundary into “keep” and “ship”. The key-scoped range is the part Vec::extract_if (bite 43) can’t give you — its range is positional — and on a large map, scoping the scan is the whole point.

#128 May 2026

128. slice::copy_within — Shift Bytes In-Place Without Fighting the Borrow Checker

You want to copy buf[2..5] over buf[0..3] — same buffer, no allocation. Reach for copy_from_slice and the borrow checker says no. copy_within is the one-call answer.

The two-borrow trap

The obvious code can’t compile — buf would be borrowed mutably and immutably at the same time:

let mut buf = [1, 2, 3, 4, 5];

// error[E0502]: cannot borrow `buf` as immutable
//               because it is also borrowed as mutable
// buf[0..3].copy_from_slice(&buf[2..5]);

The usual workarounds are noisy. split_at_mut to carve the slice into two non-overlapping halves:

let mut buf = [1, 2, 3, 4, 5];
let (left, right) = buf.split_at_mut(2);
left[0..2].copy_from_slice(&right[0..2]);
assert_eq!(buf, [3, 4, 3, 4, 5]);

…which only works when source and destination land on opposite sides of the split. Otherwise you allocate a throwaway Vec just to break the borrow:

let mut buf = [1, 2, 3, 4, 5];
let tmp: Vec<_> = buf[2..5].to_vec();   // allocation just to copy
buf[0..3].copy_from_slice(&tmp);

copy_within is the primitive

<[T]>::copy_within(src, dest) copies a range of elements to a destination index inside the same slice — one call, no allocation, no split:

let mut buf = [1, 2, 3, 4, 5];
buf.copy_within(2..5, 0);
assert_eq!(buf, [3, 4, 5, 4, 5]);

It’s memmove semantics, so overlapping source and destination just work — the elements that get overwritten don’t matter, the surviving order does:

let mut buf = [1, 2, 3, 4, 5];
// shift everything one slot to the right
buf.copy_within(0..4, 1);
assert_eq!(buf, [1, 1, 2, 3, 4]);

Try writing that with split_at_mut — you can’t, the source and destination overlap.

A real shape: drop the first N from a Vec

Removing the first n elements without reallocating is copy_within plus a truncate:

let mut data: Vec<u8> = vec![10, 20, 30, 40, 50];
let n = 2;
let len = data.len();

data.copy_within(n..len, 0);
data.truncate(len - n);

assert_eq!(data, [30, 40, 50]);

Same allocation, same backing buffer — the values just shift down. Vec::drain(..n) reads cleaner for one-offs, but copy_within is what you want when you’re already holding &mut [T] and can’t reach for Vec methods (think ring buffers, fixed-size scratch arrays, no_std crates).

Constraints

T: Copy is required — the method does a memmove, it doesn’t run destructors or call Clone. Source and destination ranges must both fit inside the slice; otherwise it panics. The destination is a single index (where the copy starts), not a range — the length is taken from the source range.

When to reach for it

Any time you’d otherwise write split_at_mut just to satisfy the borrow checker, or allocate a temporary buffer to break a self-borrow. copy_within reads as what you actually meant: move these bytes over there, in place.

Stable since Rust 1.37. Works on [T], Vec<T>, and any DerefMut<Target = [T]>.

#127 May 2026

127. std::mem::swap — Trade Two Values Through Their &mut

The textbook let tmp = a; a = b; b = tmp; falls over the moment a and b are &mut T — you can’t move out of a reference. mem::swap is the generic, safe, two-line answer.

Why the temp-variable dance breaks

If a and b are owned locals, you can shuffle through a temporary just fine. As soon as they’re behind &mut, the borrow checker stops you:

fn naive<T>(a: &mut T, b: &mut T) {
    // let tmp = *a;   // E0507: cannot move out of `*a`
    // *a = *b;        // (also moves out of *b)
    // *b = tmp;
}

You’d have to require T: Copy (loses generality), T: Clone (extra work), or reach for unsafe { ptr::swap(...) }. None of those is the right answer.

mem::swap is the primitive

std::mem::swap(&mut a, &mut b) swaps the bits behind two mutable references. No traits required, no allocation, no unsafe:

use std::mem;

let mut a = String::from("left");
let mut b = String::from("right");

mem::swap(&mut a, &mut b);

assert_eq!(a, "right");
assert_eq!(b, "left");

Works for any T. The two memory locations exchange contents in-place — one memcpy-sized swap, no clones.

A real shape: front/back double buffering

The pattern that earns mem::swap its keep — flip a “current” and “next” buffer at the end of each frame, reuse both allocations:

use std::mem;

struct DoubleBuffer<T> {
    front: Vec<T>,
    back: Vec<T>,
}

impl<T> DoubleBuffer<T> {
    fn flip(&mut self) {
        mem::swap(&mut self.front, &mut self.back);
    }
}

let mut buf = DoubleBuffer {
    front: vec![1, 2, 3],
    back:  vec![9, 9, 9],
};

buf.flip();

assert_eq!(buf.front, vec![9, 9, 9]);
assert_eq!(buf.back,  vec![1, 2, 3]);

Two fields of the same struct, both behind &mut self — exactly the case the temp-variable dance can’t reach. mem::swap doesn’t care that the references come from the same parent borrow.

Why not slice::swap or Vec::swap?

v.swap(i, j) is the right tool when both values live in the same slice — it does the index trick under the hood so the borrow checker stays happy. mem::swap is the broader primitive: any two &mut T, regardless of whether they share a container. They’re the same idea at different scopes.

When to reach for it

Whenever you need to exchange two owned values through &mut: flipping buffers, rotating state in a tree node, splicing nodes in a linked list, swapping fields during a state transition. mem::take is swap with T::default() on one side; mem::replace is swap with src on one side and the old value returned. Same family — pick the one whose shape matches what you actually want back.

#126 May 2026

126. Vec::split_off — Cut a Vec in Two and Keep Both Halves

You want the tail of a Vec as its own owned collection — the head stays put, the tail walks away. Cloning a slice works for Clone types, but breaks the moment your elements aren’t cloneable. Vec::split_off doesn’t care.

The clone-and-truncate dance

The textbook split: copy the tail with to_vec(), then truncate the original.

let mut all = vec![1, 2, 3, 4, 5, 6];

let tail: Vec<i32> = all[3..].to_vec();
all.truncate(3);

assert_eq!(all,  vec![1, 2, 3]);
assert_eq!(tail, vec![4, 5, 6]);

It works, but it clones every element of the tail. Fine for i32, wasteful for String, and a hard error for any type that isn’t Clone.

split_off moves, doesn’t clone

Vec::split_off(at) consumes the elements at at.. out of the original Vec and returns them as a new Vec. The elements are moved, not copied — so it works for any T, Clone or not:

let mut tasks: Vec<Box<dyn FnOnce()>> = vec![
    Box::new(|| println!("a")),
    Box::new(|| println!("b")),
    Box::new(|| println!("c")),
    Box::new(|| println!("d")),
];

// Box<dyn FnOnce()> isn't Clone — `tasks[2..].to_vec()` won't compile.
let later = tasks.split_off(2);

assert_eq!(tasks.len(), 2);
assert_eq!(later.len(), 2);

The original keeps [0..at), the returned Vec gets [at..len), and not a single element is duplicated. at == 0 gives you the whole thing in the new Vec (the original ends up empty); at == len gives you an empty new Vec. Anything past the length panics.

A real shape: page-by-page draining

split_off shines when you want to peel a batch off the front of a queue and hand it to a worker, keeping the rest for next time:

fn next_batch<T>(queue: &mut Vec<T>, size: usize) -> Vec<T> {
    let take = size.min(queue.len());
    let rest = queue.split_off(take);
    // queue now holds the first `take` items — that's our batch.
    // `rest` holds everything else — put it back as the new queue.
    std::mem::replace(queue, rest)
}

let mut q = vec!["a", "b", "c", "d", "e"];
let first  = next_batch(&mut q, 2);
let second = next_batch(&mut q, 2);

assert_eq!(first,  vec!["a", "b"]);
assert_eq!(second, vec!["c", "d"]);
assert_eq!(q,      vec!["e"]);

No clone, no temporary Vec, no fighting the borrow checker over slice ranges. Two memcpys and you’re done.

When to reach for it

Use split_off whenever you need both halves of a Vec as owned collections — batching, chunked processing, splitting state between threads. If you only want to iterate the tail and throw it away, drain(at..) is better; if you want to keep it, split_off is the move.

#125 May 2026

125. RwLockWriteGuard::downgrade — Hand a Write Lock Off as a Read, Atomically

You took a write lock, updated the data, and now you only want to read. Dropping the write guard and re-acquiring as a reader leaves a window where another writer can slip in. downgrade closes that window.

The gap between releasing and re-acquiring

A common shape in read-heavy systems: a worker takes a write lock to refresh a cache, then wants to keep reading the value it just wrote. The straightforward version drops the writer and grabs a reader:

use std::sync::RwLock;

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;
drop(w); // <-- another writer can grab the lock here

let r = cache.read().unwrap();
assert_eq!(*r, 42);

Between drop(w) and cache.read() the lock is released. On a busy system, another writer can land in that hole and replace your 42 with something else before your reader sees it.

downgrade is atomic

Stabilized in Rust 1.92, RwLockWriteGuard::downgrade consumes the write guard and returns a read guard — no release, no reacquire. The transition is atomic, so no other writer can sneak in:

use std::sync::{RwLock, RwLockWriteGuard};

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;

// Atomically: write lock -> read lock. No window.
let r = RwLockWriteGuard::downgrade(w);
assert_eq!(*r, 42);

Other readers waiting on the lock can wake up immediately, while the value you just published is guaranteed to still be 42 when you read it back.

A real shape: refresh-then-publish

The pattern shows up wherever one thread mutates state and then turns into a long-lived reader of the same state — config reloads, cache refreshes, snapshot publishers:

use std::sync::{Arc, RwLock, RwLockWriteGuard};
use std::thread;

let snapshot: Arc<RwLock<Vec<u32>>> = Arc::new(RwLock::new(vec![]));

let writer = {
    let snapshot = Arc::clone(&snapshot);
    thread::spawn(move || {
        let mut w = snapshot.write().unwrap();
        w.extend([10, 20, 30]); // expensive build

        // Downgrade so readers can fan in immediately,
        // and so we keep reading the value we just wrote.
        let r = RwLockWriteGuard::downgrade(w);
        r.iter().sum::<u32>()
    })
};

assert_eq!(writer.join().unwrap(), 60);

Without downgrade, you’d either hold the write lock longer than necessary (blocking every reader) or release it and risk reading stale-or-clobbered data.

When to reach for it

Use downgrade whenever a thread finishes writing and immediately wants to read the same RwLock — especially in read-heavy workloads where you want other readers to fan in as soon as possible without losing the consistency of “I’m reading what I just wrote.” If you don’t need the read afterwards, plain drop is fine; if you do, downgrade is the only way to get there without a race.

#124 May 2026

124. Iterator::cycle — Round-Robin Without the Modulo Math

Round-robin assignment usually shows up as things[i % workers.len()] — fine until the index gets clever, the slice gets reordered, or the source isn’t even indexable. Iterator::cycle turns any Clone iterator into an infinite one, and the modulo dance disappears.

The textbook version: distribute jobs across a fixed pool of workers. Indexing works, but you’re carrying the index and the modulo around just to walk in a circle:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

// Index-and-modulo: works, but the math is doing the iterating for you.
let mut assigned: Vec<(&str, &str)> = Vec::new();
for (i, job) in jobs.iter().copied().enumerate() {
    assigned.push((job, workers[i % workers.len()]));
}

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

Swap the modulo for cycle and the loop tells you exactly what it’s doing — pull the next worker, forever:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

let assigned: Vec<(&str, &str)> = jobs
    .iter()
    .copied()
    .zip(workers.iter().copied().cycle())
    .collect();

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

The trick is zip — zipping a finite iterator with an infinite one stops as soon as the finite side runs out, so you never have to bound cycle yourself. No off-by-one, no bookkeeping for “did I already use this worker?”.

It also composes with take when you want a fixed-length output and the source is the short one:

let pattern = [1, 2, 3];

let twelve: Vec<i32> = pattern.iter().copied().cycle().take(12).collect();

assert_eq!(twelve, vec![1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]);

A handy companion is enumerate — when you want the round-robin and the original index together:

let colors = ["red", "green", "blue"];
let rows = ["one", "two", "three", "four", "five"];

let painted: Vec<(usize, &str, &str)> = rows
    .iter()
    .copied()
    .zip(colors.iter().copied().cycle())
    .enumerate()
    .map(|(i, (row, color))| (i, row, color))
    .collect();

assert_eq!(
    painted,
    vec![
        (0, "one",   "red"),
        (1, "two",   "green"),
        (2, "three", "blue"),
        (3, "four",  "red"),
        (4, "five",  "green"),
    ],
);

Two things worth knowing. cycle requires the underlying iterator to be Clone — it remembers the original and starts over each time it’s exhausted. An empty source is harmless, though: the cycle is simply empty, so a bare .next() returns None forever and a zip against it ends immediately. And it’s lazy: nothing repeats until the consumer pulls another item, so pairing it with a finite iterator costs nothing extra.

Stable since Rust 1.0 — one of those iterator adapters that makes the modulo operator feel like the wrong tool the moment you remember it exists.

#123 May 2026

123. BTreeMap::pop_first — A Sorted Map That Doubles as a Priority Queue

BinaryHeap only goes one way — biggest first. When you want to pull the smallest or the largest from the same collection, reach for BTreeMap and let pop_first / pop_last do the work.

The classic shape: a queue of jobs keyed by priority where you sometimes need the most-urgent job and sometimes the least-urgent one. With BinaryHeap you’d pick a direction and stick with it (or wrap things in Reverse to flip it). With BTreeMap you get both ends for free, because the keys are already sorted:

use std::collections::BTreeMap;

let mut jobs: BTreeMap<u32, &str> = BTreeMap::new();
jobs.insert(5, "rebuild index");
jobs.insert(1, "send heartbeat");
jobs.insert(9, "page oncall");
jobs.insert(3, "rotate logs");

// Smallest key first — drain by priority.
assert_eq!(jobs.pop_first(), Some((1, "send heartbeat")));
assert_eq!(jobs.pop_first(), Some((3, "rotate logs")));

// Or grab the most urgent from the other end.
assert_eq!(jobs.pop_last(), Some((9, "page oncall")));

// Empty? You get None — same shape as Vec::pop.
let mut empty: BTreeMap<u32, &str> = BTreeMap::new();
assert_eq!(empty.pop_first(), None);

Both methods return Option<(K, V)> and remove the entry from the map. No second lookup, no .remove(key) follow-up after .first_key_value().

Where this really earns its keep is the “drain-in-order” loop — the kind of thing you’d otherwise write with a heap plus a sidecar map:

use std::collections::BTreeMap;

let mut tasks: BTreeMap<u32, String> = BTreeMap::new();
tasks.insert(20, "compact".into());
tasks.insert(10, "vacuum".into());
tasks.insert(30, "snapshot".into());

let mut order = Vec::new();
while let Some((priority, name)) = tasks.pop_first() {
    order.push((priority, name));
}

assert_eq!(
    order,
    vec![
        (10, "vacuum".into()),
        (20, "compact".into()),
        (30, "snapshot".into()),
    ],
);

Same loop, swap pop_first for pop_last and you drain in reverse order — no Reverse wrapper, no second collection.

BTreeSet got the same pair (pop_first / pop_last) at the same time, so a sorted set behaves like a deque you can pop from either end:

use std::collections::BTreeSet;

let mut ids: BTreeSet<u32> = BTreeSet::from([7, 2, 9, 4]);
assert_eq!(ids.pop_first(), Some(2));
assert_eq!(ids.pop_last(),  Some(9));
assert_eq!(ids.len(), 2);

A few things worth knowing. BTreeMap insertion is O(log n) — heavier than a BinaryHeap push, which amortizes to O(1). If you genuinely only ever pop from one side and throughput matters, a heap still wins. The moment you need ordered iteration, range queries, or popping from both ends, BTreeMap is the better fit and pop_first / pop_last make that fit feel native.

Stable since Rust 1.66 — and one of those methods that quietly replaces a fistful of match arms once you remember it exists.