#129 May 2026

129. BTreeMap::extract_if — Range-Scoped Filter-and-Remove in One Pass

Vec::extract_if was great. The BTreeMap version, stable since Rust 1.91, adds a trick the slice variant can’t pull off — a range bound that scopes the scan to part of the keyspace.

The collect-keys-then-remove dance

You want to remove every entry matching a predicate from a BTreeMap, and keep the removed entries. The borrow checker won’t let you mutate the map while you’re iterating it, so the textbook workaround is a two-pass clone-the-keys shuffle:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, String> = BTreeMap::from([
    (1, "boot".into()),
    (2, "login".into()),
    (3, "click".into()),
    (4, "logout".into()),
]);

let cutoff = 3;

let to_remove: Vec<u64> = events
    .iter()
    .filter(|(k, _)| **k < cutoff)
    .map(|(k, _)| *k)            // requires K: Copy or Clone
    .collect();

let mut expired = Vec::new();
for k in to_remove {
    if let Some(v) = events.remove(&k) {
        expired.push((k, v));
    }
}

assert_eq!(expired.len(), 2);

It works, but the costs add up. You walk the map twice, you require K: Clone (or Copy), and you allocate a throwaway Vec<K> just to break the self-borrow.

extract_if is one pass — and it takes a range

BTreeMap::extract_if(range, pred) returns an iterator that visits keys in ascending order inside the given range, and yields (K, V) for every entry whose predicate returns true. The map’s structure is updated as the iterator advances:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, &str> = BTreeMap::from([
    (1, "boot"),
    (2, "login"),
    (3, "click"),
    (4, "logout"),
    (5, "shutdown"),
]);

let expired: BTreeMap<u64, &str> =
    events.extract_if(..3, |_k, _v| true).collect();

assert_eq!(expired.into_iter().collect::<Vec<_>>(), [(1, "boot"), (2, "login")]);
assert_eq!(events.keys().copied().collect::<Vec<_>>(), [3, 4, 5]);

No Clone bound. No second pass. No temp Vec<K>. The range ..3 does double duty as a prune — keys outside it aren’t even visited, which matters when the map is large and the range is small.

The closure can mutate survivors too

The predicate signature is FnMut(&K, &mut V) -> bool. Returning true removes and yields; returning false keeps the entry in the map — but you’ve still got &mut V, so you can edit it on the way past:

use std::collections::BTreeMap;

let mut scores: BTreeMap<&str, i32> = BTreeMap::from([
    ("alice", 85),
    ("bob",   42),
    ("carol", 91),
    ("dan",   30),
]);

let failing: Vec<(&str, i32)> = scores
    .extract_if(.., |_k, v| {
        if *v < 50 {
            true            // pull this one out
        } else {
            *v += 5;        // bump the survivors by 5
            false           // keep it
        }
    })
    .collect();

assert_eq!(failing, [("bob", 42), ("dan", 30)]);
assert_eq!(scores[&"alice"], 90);
assert_eq!(scores[&"carol"], 96);

One traversal, three behaviors at once: filter, remove, and update.

Constraints

The range is any RangeBounds<K>, so anything from .. to start..=end works. extract_if panics if the range’s start is greater than its end, or if start equals end with both bounds excluded. BTreeSet::extract_if is the same idea minus the value — the predicate signature is FnMut(&T) -> bool.

When to reach for it

Whenever the operation is “walk a sorted map, peel off the entries matching a condition, optionally bounded to a key range”. Expiring old timestamped events. Draining tasks below a priority watermark. Splitting a map at a tenant boundary into “keep” and “ship”. The range bound is the part Vec::extract_if (bite 43) can’t give you — and on a large map, scoping the scan is the whole point.

#128 May 2026

128. slice::copy_within — Shift Bytes In-Place Without Fighting the Borrow Checker

You want to copy buf[2..5] over buf[0..3] — same buffer, no allocation. Reach for copy_from_slice and the borrow checker says no. copy_within is the one-call answer.

The two-borrow trap

The obvious code can’t compile — buf would be borrowed mutably and immutably at the same time:

let mut buf = [1, 2, 3, 4, 5];

// error[E0502]: cannot borrow `buf` as immutable
//               while also borrowed as mutable
// buf[0..3].copy_from_slice(&buf[2..5]);

The usual workarounds are noisy. split_at_mut to carve the slice into two non-overlapping halves:

let mut buf = [1, 2, 3, 4, 5];
let (left, right) = buf.split_at_mut(2);
left[0..2].copy_from_slice(&right[0..2]);
assert_eq!(buf, [3, 4, 3, 4, 5]);

…which only works when source and destination land on opposite sides of the split. Otherwise you allocate a throwaway Vec just to break the borrow:

let mut buf = [1, 2, 3, 4, 5];
let tmp: Vec<_> = buf[2..5].to_vec();   // allocation just to copy
buf[0..3].copy_from_slice(&tmp);

copy_within is the primitive

<[T]>::copy_within(src, dest) copies a range of elements to a destination index inside the same slice — one call, no allocation, no split:

let mut buf = [1, 2, 3, 4, 5];
buf.copy_within(2..5, 0);
assert_eq!(buf, [3, 4, 5, 4, 5]);

It’s memmove semantics, so overlapping source and destination just work — the elements that get overwritten don’t matter, the surviving order does:

let mut buf = [1, 2, 3, 4, 5];
// shift everything one slot to the right
buf.copy_within(0..4, 1);
assert_eq!(buf, [1, 1, 2, 3, 4]);

Try writing that with split_at_mut — you can’t, the source and destination overlap.

A real shape: drop the first N from a Vec

Removing the first n elements without reallocating is copy_within plus a truncate:

let mut data: Vec<u8> = vec![10, 20, 30, 40, 50];
let n = 2;
let len = data.len();

data.copy_within(n..len, 0);
data.truncate(len - n);

assert_eq!(data, [30, 40, 50]);

Same allocation, same backing buffer — the values just shift down. Vec::drain(..n) reads cleaner for one-offs, but copy_within is what you want when you’re already holding &mut [T] and can’t reach for Vec methods (think ring buffers, fixed-size scratch arrays, no_std crates).

Constraints

T: Copy is required — the method does a memmove, it doesn’t run destructors or call Clone. Source and destination ranges must both fit inside the slice; otherwise it panics. The destination is a single index (where the copy starts), not a range — the length is taken from the source range.

When to reach for it

Any time you’d otherwise write split_at_mut just to satisfy the borrow checker, or allocate a temporary buffer to break a self-borrow. copy_within reads as what you actually meant: move these bytes over there, in place.

Stable since Rust 1.37. Works on [T], Vec<T>, and any DerefMut<Target = [T]>.

#127 May 2026

127. std::mem::swap — Trade Two Values Through Their &mut

The textbook let tmp = a; a = b; b = tmp; falls over the moment a and b are &mut T — you can’t move out of a reference. mem::swap is the generic, safe, two-line answer.

Why the temp-variable dance breaks

If a and b are owned locals, you can shuffle through a temporary just fine. As soon as they’re behind &mut, the borrow checker stops you:

fn naive<T>(a: &mut T, b: &mut T) {
    // let tmp = *a;   // E0507: cannot move out of `*a`
    // *a = *b;        // (also moves out of *b)
    // *b = tmp;
}

You’d have to require T: Copy (loses generality), T: Clone (extra work), or reach for unsafe { ptr::swap(...) }. None of those is the right answer.

mem::swap is the primitive

std::mem::swap(&mut a, &mut b) swaps the bits behind two mutable references. No traits required, no allocation, no unsafe:

use std::mem;

let mut a = String::from("left");
let mut b = String::from("right");

mem::swap(&mut a, &mut b);

assert_eq!(a, "right");
assert_eq!(b, "left");

Works for any T. The two memory locations exchange contents in-place — one memcpy-sized swap, no clones.

A real shape: front/back double buffering

The pattern that earns mem::swap its keep — flip a “current” and “next” buffer at the end of each frame, reuse both allocations:

use std::mem;

struct DoubleBuffer<T> {
    front: Vec<T>,
    back: Vec<T>,
}

impl<T> DoubleBuffer<T> {
    fn flip(&mut self) {
        mem::swap(&mut self.front, &mut self.back);
    }
}

let mut buf = DoubleBuffer {
    front: vec![1, 2, 3],
    back:  vec![9, 9, 9],
};

buf.flip();

assert_eq!(buf.front, vec![9, 9, 9]);
assert_eq!(buf.back,  vec![1, 2, 3]);

Two fields of the same struct, both behind &mut self — exactly the case the temp-variable dance can’t reach. mem::swap doesn’t care that the references come from the same parent borrow.

Why not slice::swap or Vec::swap?

v.swap(i, j) is the right tool when both values live in the same slice — it does the index trick under the hood so the borrow checker stays happy. mem::swap is the broader primitive: any two &mut T, regardless of whether they share a container. They’re the same idea at different scopes.

When to reach for it

Whenever you need to exchange two owned values through &mut: flipping buffers, rotating state in a tree node, splicing nodes in a linked list, swapping fields during a state transition. mem::take is swap with T::default() on one side; mem::replace is swap with src on one side and the old value returned. Same family — pick the one whose shape matches what you actually want back.

#126 May 2026

126. Vec::split_off — Cut a Vec in Two and Keep Both Halves

You want the tail of a Vec as its own owned collection — the head stays put, the tail walks away. Cloning a slice works for Clone types, but breaks the moment your elements aren’t cloneable. Vec::split_off doesn’t care.

The clone-and-truncate dance

The textbook split: copy the tail with to_vec(), then truncate the original.

let mut all = vec![1, 2, 3, 4, 5, 6];

let tail: Vec<i32> = all[3..].to_vec();
all.truncate(3);

assert_eq!(all,  vec![1, 2, 3]);
assert_eq!(tail, vec![4, 5, 6]);

It works, but it clones every element of the tail. Fine for i32, wasteful for String, and a hard error for any type that isn’t Clone.

split_off moves, doesn’t clone

Vec::split_off(at) consumes the elements at at.. out of the original Vec and returns them as a new Vec. The elements are moved, not copied — so it works for any T, Clone or not:

let mut tasks: Vec<Box<dyn FnOnce()>> = vec![
    Box::new(|| println!("a")),
    Box::new(|| println!("b")),
    Box::new(|| println!("c")),
    Box::new(|| println!("d")),
];

// Box<dyn FnOnce()> isn't Clone — `tasks[2..].to_vec()` won't compile.
let later = tasks.split_off(2);

assert_eq!(tasks.len(), 2);
assert_eq!(later.len(), 2);

The original keeps [0..at), the returned Vec gets [at..len), and not a single element is duplicated. at == 0 gives you the whole thing in the new Vec (the original ends up empty); at == len gives you an empty new Vec. Anything past the length panics.

A real shape: page-by-page draining

split_off shines when you want to peel a batch off the front of a queue and hand it to a worker, keeping the rest for next time:

fn next_batch<T>(queue: &mut Vec<T>, size: usize) -> Vec<T> {
    let take = size.min(queue.len());
    let rest = queue.split_off(take);
    // queue now holds the first `take` items — that's our batch.
    // `rest` holds everything else — put it back as the new queue.
    let batch = std::mem::replace(queue, rest);
    batch
}

let mut q = vec!["a", "b", "c", "d", "e"];
let first  = next_batch(&mut q, 2);
let second = next_batch(&mut q, 2);

assert_eq!(first,  vec!["a", "b"]);
assert_eq!(second, vec!["c", "d"]);
assert_eq!(q,      vec!["e"]);

No clone, no temporary Vec, no fighting the borrow checker over slice ranges. Two memcpys and you’re done.

When to reach for it

Use split_off whenever you need both halves of a Vec as owned collections — batching, chunked processing, splitting state between threads. If you only want to iterate the tail and throw it away, drain(at..) is better; if you want to keep it, split_off is the move.

#125 May 2026

125. RwLockWriteGuard::downgrade — Hand a Write Lock Off as a Read, Atomically

You took a write lock, updated the data, and now you only want to read. Dropping the write guard and re-acquiring as a reader leaves a window where another writer can slip in. downgrade closes that window.

The gap between releasing and re-acquiring

A common shape in read-heavy systems: a worker takes a write lock to refresh a cache, then wants to keep reading the value it just wrote. The straightforward version drops the writer and grabs a reader:

use std::sync::RwLock;

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;
drop(w); // <-- another writer can grab the lock here

let r = cache.read().unwrap();
assert_eq!(*r, 42);

Between drop(w) and cache.read() the lock is released. On a busy system, another writer can land in that hole and replace your 42 with something else before your reader sees it.

downgrade is atomic

Stabilized in Rust 1.92, RwLockWriteGuard::downgrade consumes the write guard and returns a read guard — no release, no reacquire. The transition is atomic, so no other writer can sneak in:

use std::sync::{RwLock, RwLockWriteGuard};

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;

// Atomically: write lock -> read lock. No window.
let r = RwLockWriteGuard::downgrade(w);
assert_eq!(*r, 42);

Other readers waiting on the lock can wake up immediately, while the value you just published is guaranteed to still be 42 when you read it back.

A real shape: refresh-then-publish

The pattern shows up wherever one thread mutates state and then turns into a long-lived reader of the same state — config reloads, cache refreshes, snapshot publishers:

use std::sync::{Arc, RwLock, RwLockWriteGuard};
use std::thread;

let snapshot: Arc<RwLock<Vec<u32>>> = Arc::new(RwLock::new(vec![]));

let writer = {
    let snapshot = Arc::clone(&snapshot);
    thread::spawn(move || {
        let mut w = snapshot.write().unwrap();
        w.extend([10, 20, 30]); // expensive build

        // Downgrade so readers can fan in immediately,
        // and so we keep reading the value we just wrote.
        let r = RwLockWriteGuard::downgrade(w);
        r.iter().sum::<u32>()
    })
};

assert_eq!(writer.join().unwrap(), 60);

Without downgrade, you’d either hold the write lock longer than necessary (blocking every reader) or release it and risk reading stale-or-clobbered data.

When to reach for it

Use downgrade whenever a thread finishes writing and immediately wants to read the same RwLock — especially in read-heavy workloads where you want other readers to fan in as soon as possible without losing the consistency of “I’m reading what I just wrote.” If you don’t need the read afterwards, plain drop is fine; if you do, downgrade is the only way to get there without a race.

#124 May 2026

124. Iterator::cycle — Round-Robin Without the Modulo Math

Round-robin assignment usually shows up as things[i % workers.len()] — fine until the index gets clever, the slice gets reordered, or the source isn’t even indexable. Iterator::cycle turns any Clone iterator into an infinite one, and the modulo dance disappears.

The textbook version: distribute jobs across a fixed pool of workers. Indexing works, but you’re carrying the index and the modulo around just to walk in a circle:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

// Index-and-modulo: works, but the math is doing the iterating for you.
let mut assigned: Vec<(&str, &str)> = Vec::new();
for (i, job) in jobs.iter().copied().enumerate() {
    assigned.push((job, workers[i % workers.len()]));
}

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

Swap the modulo for cycle and the loop tells you exactly what it’s doing — pull the next worker, forever:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

let assigned: Vec<(&str, &str)> = jobs
    .iter()
    .copied()
    .zip(workers.iter().copied().cycle())
    .collect();

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

The trick is zip — zipping a finite iterator with an infinite one stops as soon as the finite side runs out, so you never have to bound cycle yourself. No off-by-one, no bookkeeping for “did I already use this worker?”.

It also composes with take when you want a fixed-length output and the source is the short one:

let pattern = [1, 2, 3];

let twelve: Vec<i32> = pattern.iter().copied().cycle().take(12).collect();

assert_eq!(twelve, vec![1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]);

A handy companion is enumerate — when you want the round-robin and the original index together:

let colors = ["red", "green", "blue"];
let rows = ["one", "two", "three", "four", "five"];

let painted: Vec<(usize, &str, &str)> = rows
    .iter()
    .copied()
    .zip(colors.iter().copied().cycle())
    .enumerate()
    .map(|(i, (row, color))| (i, row, color))
    .collect();

assert_eq!(
    painted,
    vec![
        (0, "one",   "red"),
        (1, "two",   "green"),
        (2, "three", "blue"),
        (3, "four",  "red"),
        (4, "five",  "green"),
    ],
);

Two things worth knowing. cycle requires the underlying iterator to be Clone — it remembers the original and restarts from a fresh clone each time it’s exhausted. An empty iterator stays empty: cycle’s next() just returns None forever, zip against it yields nothing, and a for loop over it exits immediately — no panic, no infinite spin. And it’s lazy: nothing repeats until the consumer pulls another item, so pairing it with a finite iterator costs nothing extra.

Stable since Rust 1.0 — one of those iterator adapters that makes the modulo operator feel like the wrong tool the moment you remember it exists.

#123 May 2026

123. BTreeMap::pop_first — A Sorted Map That Doubles as a Priority Queue

BinaryHeap only goes one way — biggest first. When you want to pull the smallest or the largest from the same collection, reach for BTreeMap and let pop_first / pop_last do the work.

The classic shape: a queue of jobs keyed by priority where you sometimes need the most-urgent job and sometimes the least-urgent one. With BinaryHeap you’d pick a direction and stick with it (or wrap things in Reverse to flip it). With BTreeMap you get both ends for free, because the keys are already sorted:

use std::collections::BTreeMap;

let mut jobs: BTreeMap<u32, &str> = BTreeMap::new();
jobs.insert(5, "rebuild index");
jobs.insert(1, "send heartbeat");
jobs.insert(9, "page oncall");
jobs.insert(3, "rotate logs");

// Smallest key first — drain by priority.
assert_eq!(jobs.pop_first(), Some((1, "send heartbeat")));
assert_eq!(jobs.pop_first(), Some((3, "rotate logs")));

// Or grab the most urgent from the other end.
assert_eq!(jobs.pop_last(), Some((9, "page oncall")));

// Empty? You get None — same shape as Vec::pop.
let mut empty: BTreeMap<u32, &str> = BTreeMap::new();
assert_eq!(empty.pop_first(), None);

Both methods return Option<(K, V)> and remove the entry from the map. No second lookup, no .remove(key) follow-up after .first_key_value().

Where this really earns its keep is the “drain-in-order” loop — the kind of thing you’d otherwise write with a heap plus a sidecar map:

use std::collections::BTreeMap;

let mut tasks: BTreeMap<u32, String> = BTreeMap::new();
tasks.insert(20, "compact".into());
tasks.insert(10, "vacuum".into());
tasks.insert(30, "snapshot".into());

let mut order = Vec::new();
while let Some((priority, name)) = tasks.pop_first() {
    order.push((priority, name));
}

assert_eq!(
    order,
    vec![
        (10, "vacuum".into()),
        (20, "compact".into()),
        (30, "snapshot".into()),
    ],
);

Same loop, swap pop_first for pop_last and you drain in reverse order — no Reverse wrapper, no second collection.

BTreeSet got the same pair (pop_first / pop_last) at the same time, so a sorted set behaves like a deque you can pop from either end:

use std::collections::BTreeSet;

let mut ids: BTreeSet<u32> = BTreeSet::from([7, 2, 9, 4]);
assert_eq!(ids.pop_first(), Some(2));
assert_eq!(ids.pop_last(),  Some(9));
assert_eq!(ids.len(), 2);

A few things worth knowing. BTreeMap insertion is O(log n) — heavier than a BinaryHeap push, which amortises to O(1). If you genuinely only ever pop from one side and throughput matters, a heap still wins. The moment you need ordered iteration, range queries, or popping from both ends, BTreeMap is the better fit and pop_first / pop_last make that fit feel native.

Stable since Rust 1.66 — and one of those methods that quietly replaces a fistful of match arms once you remember it exists.

#122 May 2026

122. Option::filter — Keep Some Only When the Value Passes

You’ve got an Option<T>, but you only want to keep the Some if the value passes a test. The match-with-guard version works — Option::filter says the same thing in one call.

The shape that keeps showing up: parse something into an Option, then validate it. The naive version stacks an if on top of the unwrap:

fn parse_port(raw: Option<&str>) -> Option<u16> {
    let s = raw?;
    let n: u16 = s.parse().ok()?;
    if n > 0 { Some(n) } else { None }
}

assert_eq!(parse_port(Some("8080")), Some(8080));
assert_eq!(parse_port(Some("0")),    None);
assert_eq!(parse_port(Some("nope")), None);
assert_eq!(parse_port(None),         None);

Option::filter collapses that trailing if into the chain:

fn parse_port(raw: Option<&str>) -> Option<u16> {
    raw.and_then(|s| s.parse().ok())
       .filter(|&n| n > 0)
}

assert_eq!(parse_port(Some("8080")), Some(8080));
assert_eq!(parse_port(Some("0")),    None);

Same story for the match-with-guard pattern — when the predicate is the only thing the arm checks, filter reads better:

let name: Option<&str> = Some("");

// Before: pattern match with a guard
let valid = match name {
    Some(s) if !s.is_empty() => Some(s),
    _ => None,
};
assert_eq!(valid, None);

// After: just say what you mean
let valid = name.filter(|s| !s.is_empty());
assert_eq!(valid, None);

Two things worth knowing. First, the closure receives &T, not T — same as Iterator::filter. So for Option<i32> you write |&n| n > 0 or |n| *n > 0. For Option<String> auto-deref makes |s| !s.is_empty() just work.

Second, filter only keeps or drops — it never transforms. If the predicate returns true you get the original Some back, untouched. To transform, chain .map() after:

let trimmed = Some("  hello  ")
    .map(str::trim)
    .filter(|s| !s.is_empty());
assert_eq!(trimmed, Some("hello"));

Stable since Rust 1.27, and the kind of method that quietly disappears the boilerplate once you know it’s there.

#121 May 2026

121. rem_euclid — The Modulo That Doesn't Go Negative

-1 % 7 in Rust is -1, not 6. That’s a math gotcha lurking in every wraparound index, every piece of clock arithmetic, every “what day of the week” calculation. rem_euclid is the modulo you actually wanted.

Rust’s % operator follows the same rule as C: the sign of the result matches the sign of the dividend. Useful sometimes, surprising the rest of the time:

assert_eq!(-1_i32 % 7, -1);
assert_eq!(-8_i32 % 7, -1);
assert_eq!( 8_i32 % 7,  1);

Try indexing a circular buffer with that and you get a panic the first time you step backwards across zero. The fix is rem_euclid, which always returns a value in [0, |divisor|):

assert_eq!((-1_i32).rem_euclid(7), 6);
assert_eq!((-8_i32).rem_euclid(7), 6);
assert_eq!(( 8_i32).rem_euclid(7), 1);

A real-world shape — wrap an index around a slice in either direction, no if ladder, no manual + len trick:

let days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"];

fn day_after(today: i32, delta: i32) -> i32 {
    (today + delta).rem_euclid(7)
}

assert_eq!(days[day_after(0, -1) as usize], "Sun"); // Mon - 1 = Sun
assert_eq!(days[day_after(2,  4) as usize], "Sun"); // Wed + 4 = Sun
assert_eq!(days[day_after(0, -8) as usize], "Sun"); // wraps past week boundary

div_euclid is the partner that pairs with it: a.div_euclid(b) * b + a.rem_euclid(b) == a always holds, even for negative a. Plain / and % only satisfy that identity for non-negative inputs.

let a = -7_i32;
let b =  3_i32;
assert_eq!(a.div_euclid(b) * b + a.rem_euclid(b), a);

Both are available on every signed integer type (and on floats), and the integer versions are const fn. The rule of thumb: if your code can ever see a negative operand and you want the mathematician’s modulo — not the hardware’s — reach for rem_euclid.

#120 May 2026

120. OnceLock — Lazy Statics That Initialize on Your Schedule

LazyLock runs its initializer the first time anyone touches the value — fine when the inputs are baked in at compile time, useless when you only learn them at runtime. OnceLock is the same idea, but you decide when (and with what data) initialization happens.

The classic case: you want a global that’s expensive to build, and the data only exists after main starts — CLI args, env vars, a parsed config file. LazyLock’s initializer is fixed at the definition site and takes no arguments, so runtime inputs can only reach it by re-reading the environment (or some other global) inside the closure.

OnceLock solves it by separating creation from initialization:

use std::sync::OnceLock;

static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    // Initialize from real runtime data, exactly once.
    let cfg = std::env::var("APP_CONFIG").unwrap_or_else(|_| "default".into());
    CONFIG.set(cfg).unwrap();

    assert_eq!(config(), "default"); // assumes APP_CONFIG is unset
}

fn config() -> &'static str {
    CONFIG.get().expect("config not initialized")
}

set returns Err if the cell was already filled — you get to decide whether that’s a panic, a log line, or a no-op.

For the read-mostly path, get_or_init combines “is it set?” and “set it” into a single thread-safe call. Concurrent callers race; the winner runs the closure, everyone else waits and reads the result:

use std::sync::OnceLock;

static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING.get_or_init(|| format!("hello, {}", "world"))
}

assert_eq!(greeting(), "hello, world");
assert_eq!(greeting(), "hello, world"); // cached, closure does not run again

When to reach for which: pick LazyLock when the initializer is self-contained and you’re happy with it firing on first touch. Pick OnceLock when you need to feed in runtime data — or when you want the option to ask “has this been set yet?” before triggering the work.