#126 May 2026

126. Vec::split_off — Cut a Vec in Two and Keep Both Halves

You want the tail of a Vec as its own owned collection — the head stays put, the tail walks away. Cloning a slice works for Clone types, but breaks the moment your elements aren’t cloneable. Vec::split_off doesn’t care.

The clone-and-truncate dance

The textbook split: copy the tail with to_vec(), then truncate the original.

let mut all = vec![1, 2, 3, 4, 5, 6];

let tail: Vec<i32> = all[3..].to_vec();
all.truncate(3);

assert_eq!(all,  vec![1, 2, 3]);
assert_eq!(tail, vec![4, 5, 6]);

It works, but it clones every element of the tail. Fine for i32, wasteful for String, and a hard error for any type that isn’t Clone.

split_off moves, doesn’t clone

Vec::split_off(at) consumes the elements at at.. out of the original Vec and returns them as a new Vec. The elements are moved, not copied — so it works for any T, Clone or not:

let mut tasks: Vec<Box<dyn FnOnce()>> = vec![
    Box::new(|| println!("a")),
    Box::new(|| println!("b")),
    Box::new(|| println!("c")),
    Box::new(|| println!("d")),
];

// Box<dyn FnOnce()> isn't Clone — `tasks[2..].to_vec()` won't compile.
let later = tasks.split_off(2);

assert_eq!(tasks.len(), 2);
assert_eq!(later.len(), 2);

The original keeps [0..at), the returned Vec gets [at..len), and not a single element is duplicated. at == 0 gives you the whole thing in the new Vec (the original ends up empty); at == len gives you an empty new Vec. Anything past the length panics.

A real shape: page-by-page draining

split_off shines when you want to peel a batch off the front of a queue and hand it to a worker, keeping the rest for next time:

fn next_batch<T>(queue: &mut Vec<T>, size: usize) -> Vec<T> {
    let take = size.min(queue.len());
    let rest = queue.split_off(take);
    // queue now holds the first `take` items — that's our batch.
    // `rest` holds everything else — put it back as the new queue.
    let batch = std::mem::replace(queue, rest);
    batch
}

let mut q = vec!["a", "b", "c", "d", "e"];
let first  = next_batch(&mut q, 2);
let second = next_batch(&mut q, 2);

assert_eq!(first,  vec!["a", "b"]);
assert_eq!(second, vec!["c", "d"]);
assert_eq!(q,      vec!["e"]);

No clone, no fighting the borrow checker over slice ranges — one memcpy moves the tail into `rest`, and mem::replace is just a swap of two Vec headers. Not a single element is touched individually.

When to reach for it

Use split_off whenever you need both halves of a Vec as owned collections — batching, chunked processing, splitting state between threads. If you only want to iterate the tail and throw it away, drain(at..) is better; if you want to keep it, split_off is the move.

#125 May 2026

125. RwLockWriteGuard::downgrade — Hand a Write Lock Off as a Read, Atomically

You took a write lock, updated the data, and now you only want to read. Dropping the write guard and re-acquiring as a reader leaves a window where another writer can slip in. downgrade closes that window.

The gap between releasing and re-acquiring

A common shape in read-heavy systems: a worker takes a write lock to refresh a cache, then wants to keep reading the value it just wrote. The straightforward version drops the writer and grabs a reader:

use std::sync::RwLock;

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;
drop(w); // <-- another writer can grab the lock here

let r = cache.read().unwrap();
assert_eq!(*r, 42);

Between drop(w) and cache.read() the lock is released. On a busy system, another writer can land in that hole and replace your 42 with something else before your reader sees it.

downgrade is atomic

Stabilized in Rust 1.92, RwLockWriteGuard::downgrade consumes the write guard and returns a read guard — no release, no reacquire. The transition is atomic, so no other writer can sneak in:

use std::sync::{RwLock, RwLockWriteGuard};

let cache = RwLock::new(0);

let mut w = cache.write().unwrap();
*w = 42;

// Atomically: write lock -> read lock. No window.
let r = RwLockWriteGuard::downgrade(w);
assert_eq!(*r, 42);

Other readers waiting on the lock can wake up immediately, while the value you just published is guaranteed to still be 42 when you read it back.

A real shape: refresh-then-publish

The pattern shows up wherever one thread mutates state and then turns into a long-lived reader of the same state — config reloads, cache refreshes, snapshot publishers:

use std::sync::{Arc, RwLock, RwLockWriteGuard};
use std::thread;

let snapshot: Arc<RwLock<Vec<u32>>> = Arc::new(RwLock::new(vec![]));

let writer = {
    let snapshot = Arc::clone(&snapshot);
    thread::spawn(move || {
        let mut w = snapshot.write().unwrap();
        w.extend([10, 20, 30]); // expensive build

        // Downgrade so readers can fan in immediately,
        // and so we keep reading the value we just wrote.
        let r = RwLockWriteGuard::downgrade(w);
        r.iter().sum::<u32>()
    })
};

assert_eq!(writer.join().unwrap(), 60);

Without downgrade, you’d either hold the write lock longer than necessary (blocking every reader) or release it and risk reading stale-or-clobbered data.

When to reach for it

Use downgrade whenever a thread finishes writing and immediately wants to read the same RwLock — especially in read-heavy workloads where you want other readers to fan in as soon as possible without losing the consistency of “I’m reading what I just wrote.” If you don’t need the read afterwards, plain drop is fine; if you do, downgrade is the only way to get there without a race.

#124 May 2026

124. Iterator::cycle — Round-Robin Without the Modulo Math

Round-robin assignment usually shows up as things[i % workers.len()] — fine until the index gets clever, the slice gets reordered, or the source isn’t even indexable. Iterator::cycle turns any Clone iterator into an infinite one, and the modulo dance disappears.

The textbook version: distribute jobs across a fixed pool of workers. Indexing works, but you’re carrying the index and the modulo around just to walk in a circle:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

// Index-and-modulo: works, but the math is doing the iterating for you.
let mut assigned: Vec<(&str, &str)> = Vec::new();
for (i, job) in jobs.iter().copied().enumerate() {
    assigned.push((job, workers[i % workers.len()]));
}

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

Swap the modulo for cycle and the loop tells you exactly what it’s doing — pull the next worker, forever:

let workers = ["alice", "bob", "carol"];
let jobs = ["build", "test", "lint", "deploy", "notify"];

let assigned: Vec<(&str, &str)> = jobs
    .iter()
    .copied()
    .zip(workers.iter().copied().cycle())
    .collect();

assert_eq!(
    assigned,
    vec![
        ("build",  "alice"),
        ("test",   "bob"),
        ("lint",   "carol"),
        ("deploy", "alice"),
        ("notify", "bob"),
    ],
);

The trick is zip — zipping a finite iterator with an infinite one stops as soon as the finite side runs out, so you never have to bound cycle yourself. No off-by-one, no bookkeeping for “did I already use this worker?”.

It also composes with take when you want a fixed-length output and the source is the short one:

let pattern = [1, 2, 3];

let twelve: Vec<i32> = pattern.iter().copied().cycle().take(12).collect();

assert_eq!(twelve, vec![1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]);

A handy companion is enumerate — when you want the round-robin and the original index together:

let colors = ["red", "green", "blue"];
let rows = ["one", "two", "three", "four", "five"];

let painted: Vec<(usize, &str, &str)> = rows
    .iter()
    .copied()
    .zip(colors.iter().copied().cycle())
    .enumerate()
    .map(|(i, (row, color))| (i, row, color))
    .collect();

assert_eq!(
    painted,
    vec![
        (0, "one",   "red"),
        (1, "two",   "green"),
        (2, "three", "blue"),
        (3, "four",  "red"),
        (4, "five",  "green"),
    ],
);

Two things worth knowing. cycle requires the underlying iterator to be Clone — it keeps a copy of the original and restarts from that copy each time the current pass is exhausted. An empty iterator is handled gracefully: cycle just keeps returning None, so a bare .next() never hangs and zip against it simply produces nothing. And it’s lazy: nothing repeats until the consumer pulls another item, so pairing it with a finite iterator costs nothing extra.

Stable since Rust 1.0 — one of those iterator adapters that makes the modulo operator feel like the wrong tool the moment you remember it exists.

#123 May 2026

123. BTreeMap::pop_first — A Sorted Map That Doubles as a Priority Queue

BinaryHeap only goes one way — biggest first. When you want to pull the smallest or the largest from the same collection, reach for BTreeMap and let pop_first / pop_last do the work.

The classic shape: a queue of jobs keyed by priority where you sometimes need the most-urgent job and sometimes the least-urgent one. With BinaryHeap you’d pick a direction and stick with it (or wrap things in Reverse to flip it). With BTreeMap you get both ends for free, because the keys are already sorted:

use std::collections::BTreeMap;

let mut jobs: BTreeMap<u32, &str> = BTreeMap::new();
jobs.insert(5, "rebuild index");
jobs.insert(1, "send heartbeat");
jobs.insert(9, "page oncall");
jobs.insert(3, "rotate logs");

// Smallest key first — drain by priority.
assert_eq!(jobs.pop_first(), Some((1, "send heartbeat")));
assert_eq!(jobs.pop_first(), Some((3, "rotate logs")));

// Or grab the most urgent from the other end.
assert_eq!(jobs.pop_last(), Some((9, "page oncall")));

// Empty? You get None — same shape as Vec::pop.
let mut empty: BTreeMap<u32, &str> = BTreeMap::new();
assert_eq!(empty.pop_first(), None);

Both methods return Option<(K, V)> and remove the entry from the map. No second lookup, no .remove(key) follow-up after .first_key_value().

Where this really earns its keep is the “drain-in-order” loop — the kind of thing you’d otherwise write with a heap plus a sidecar map:

use std::collections::BTreeMap;

let mut tasks: BTreeMap<u32, String> = BTreeMap::new();
tasks.insert(20, "compact".into());
tasks.insert(10, "vacuum".into());
tasks.insert(30, "snapshot".into());

let mut order = Vec::new();
while let Some((priority, name)) = tasks.pop_first() {
    order.push((priority, name));
}

assert_eq!(
    order,
    vec![
        (10, "vacuum".into()),
        (20, "compact".into()),
        (30, "snapshot".into()),
    ],
);

Same loop, swap pop_first for pop_last and you drain in reverse order — no Reverse wrapper, no second collection.

BTreeSet got the same pair (pop_first / pop_last) at the same time, so a sorted set behaves like a deque you can pop from either end:

use std::collections::BTreeSet;

let mut ids: BTreeSet<u32> = BTreeSet::from([7, 2, 9, 4]);
assert_eq!(ids.pop_first(), Some(2));
assert_eq!(ids.pop_last(),  Some(9));
assert_eq!(ids.len(), 2);

A few things worth knowing. BTreeMap insertion is O(log n) — heavier than a BinaryHeap push, which is O(1) on average (O(log n) worst case). If you genuinely only ever pop from one side and throughput matters, a heap still wins. The moment you need ordered iteration, range queries, or popping from both ends, BTreeMap is the better fit and pop_first / pop_last make that fit feel native.

Stable since Rust 1.66 — and one of those methods that quietly replaces a fistful of match arms once you remember it exists.

#122 May 2026

122. Option::filter — Keep Some Only When the Value Passes

You’ve got an Option<T>, but you only want to keep the Some if the value passes a test. The match-with-guard version works — Option::filter says the same thing in one call.

The shape that keeps showing up: parse something into an Option, then validate it. The naive version stacks an if on top of the unwrap:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
fn parse_port(raw: Option<&str>) -> Option<u16> {
    let s = raw?;
    let n: u16 = s.parse().ok()?;
    if n > 0 { Some(n) } else { None }
}

assert_eq!(parse_port(Some("8080")), Some(8080));
assert_eq!(parse_port(Some("0")),    None);
assert_eq!(parse_port(Some("nope")), None);
assert_eq!(parse_port(None),         None);

Option::filter collapses that trailing if into the chain:

fn parse_port(raw: Option<&str>) -> Option<u16> {
    raw.and_then(|s| s.parse().ok())
       .filter(|&n| n > 0)
}

assert_eq!(parse_port(Some("8080")), Some(8080));
assert_eq!(parse_port(Some("0")),    None);

Same story for the match-with-guard pattern — when the predicate is the only thing the arm checks, filter reads better:

let name: Option<&str> = Some("");

// Before: pattern match with a guard
let valid = match name {
    Some(s) if !s.is_empty() => Some(s),
    _ => None,
};
assert_eq!(valid, None);

// After: just say what you mean
let valid = name.filter(|s| !s.is_empty());
assert_eq!(valid, None);

Two things worth knowing. First, the closure receives &T, not T — same as Iterator::filter. So for Option<i32> you write |&n| n > 0 or |n| *n > 0. For Option<String> auto-deref makes |s| !s.is_empty() just work.

Second, filter only keeps or drops — it never transforms. If the predicate returns true you get the original Some back, untouched. To transform, chain .map() after:

let trimmed = Some("  hello  ")
    .map(str::trim)
    .filter(|s| !s.is_empty());
assert_eq!(trimmed, Some("hello"));

Stable since Rust 1.27, and the kind of method that quietly disappears the boilerplate once you know it’s there.

#121 May 2026

121. rem_euclid — The Modulo That Doesn't Go Negative

-1 % 7 in Rust is -1, not 6. That’s a math gotcha lurking in every wraparound index, every piece of clock arithmetic, every “what day of the week” calculation. rem_euclid is the modulo you actually wanted.

Rust’s % operator follows the same rule as C: the sign of the result matches the sign of the dividend. Useful sometimes, surprising the rest of the time:

assert_eq!(-1_i32 % 7, -1);
assert_eq!(-8_i32 % 7, -1);
assert_eq!( 8_i32 % 7,  1);

Try indexing a circular buffer with that and you get a panic the first time you step backwards across zero. The fix is rem_euclid, which always returns a value in [0, |divisor|):

assert_eq!((-1_i32).rem_euclid(7), 6);
assert_eq!((-8_i32).rem_euclid(7), 6);
assert_eq!(( 8_i32).rem_euclid(7), 1);

A real-world shape — wrap an index around a slice in either direction, no if ladder, no manual + len trick:

let days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"];

fn day_after(today: i32, delta: i32) -> i32 {
    (today + delta).rem_euclid(7)
}

assert_eq!(days[day_after(0, -1) as usize], "Sun"); // Mon - 1 = Sun
assert_eq!(days[day_after(2,  4) as usize], "Sun"); // Wed + 4 = Sun
assert_eq!(days[day_after(0, -8) as usize], "Sun"); // wraps past week boundary

div_euclid is the partner that pairs with it: a.div_euclid(b) * b + a.rem_euclid(b) == a always holds, even for negative a. Plain / and % only satisfy that identity for non-negative inputs.

let a = -7_i32;
let b =  3_i32;
assert_eq!(a.div_euclid(b) * b + a.rem_euclid(b), a);

Both are available on every signed integer type (and on floats), and the integer versions are const fn. The rule of thumb: if your code can ever see a negative operand and you want the mathematician’s modulo — not the hardware’s — reach for rem_euclid.

#120 May 2026

120. OnceLock — Lazy Statics That Initialize on Your Schedule

LazyLock runs its initializer the first time anyone touches the value — fine when the inputs are baked in at compile time, useless when you only learn them at runtime. OnceLock is the same idea, but you decide when (and with what data) initialization happens.

The classic case: you want a global that’s expensive to build, and the data only exists after main starts — CLI args, env vars, a parsed config file. LazyLock can’t help here: its initializer closure is fixed where the static is defined, so there’s no way to feed it runtime inputs.

OnceLock solves it by separating creation from initialization:

use std::sync::OnceLock;

static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    // Initialize from real runtime data, exactly once.
    let cfg = std::env::var("APP_CONFIG").unwrap_or_else(|_| "default".into());
    CONFIG.set(cfg).unwrap();

    assert_eq!(config(), "default"); // assumes APP_CONFIG is unset
}

fn config() -> &'static str {
    CONFIG.get().expect("config not initialized")
}

set returns Err if the cell was already filled — you get to decide whether that’s a panic, a log line, or a no-op.

For the read-mostly path, get_or_init combines “is it set?” and “set it” into a single thread-safe call. Concurrent callers race; the winner runs the closure, everyone else waits and reads the result:

use std::sync::OnceLock;

static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING.get_or_init(|| format!("hello, {}", "world"))
}

assert_eq!(greeting(), "hello, world");
assert_eq!(greeting(), "hello, world"); // cached, closure does not run again

When to reach for which: pick LazyLock when the initializer is self-contained and you’re happy with it firing on first touch. Pick OnceLock when you need to feed in runtime data — or when you want the option to ask “has this been set yet?” before triggering the work.

#119 May 2026

119. iter::from_fn — Build a Custom Iterator Without the Struct

Need a custom iterator that carries a bit of state? Skip the struct plus impl Iterator boilerplate. iter::from_fn turns any closure that returns Option<T> into a real iterator.

The problem

You want a custom iterator with some captured state — a counter, a parser cursor, a lazy generator. The textbook approach is a struct plus a manual Iterator impl:

struct Counter { n: u32, max: u32 }

impl Iterator for Counter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.n < self.max {
            self.n += 1;
            Some(self.n)
        } else {
            None
        }
    }
}

let counter = Counter { n: 0, max: 5 };
let v: Vec<u32> = counter.collect();
assert_eq!(v, vec![1, 2, 3, 4, 5]);

Three lines of logic, ten lines of scaffolding.

The fix

std::iter::from_fn takes a FnMut() -> Option<T> and returns an iterator. Yield Some(x) for each item, None to stop. The closure’s captured variables become your iterator state:

use std::iter;

let mut n = 0;
let v: Vec<u32> = iter::from_fn(|| {
    n += 1;
    (n <= 5).then_some(n)
}).collect();
assert_eq!(v, vec![1, 2, 3, 4, 5]);

The closure captures n mutably and updates it on every call. No struct, no impl, no type to name.

Where it shines: tiny tokenizers

from_fn really earns its keep when paired with Peekable::next_if. Pull characters until a condition fails and you have a one-liner tokenizer:

use std::iter;

let mut chars = "123abc".chars().peekable();
let digits: String =
    iter::from_fn(|| chars.next_if(|c| c.is_ascii_digit())).collect();
assert_eq!(digits, "123");

// chars still has "abc" left to consume
let rest: String = chars.collect();
assert_eq!(rest, "abc");

Reach for from_fn whenever the iterator’s state would just be a couple of local variables. Reach for a manual impl Iterator when the iterator is a public type, needs to implement other traits, or you want a size hint and specialization.

#118 May 2026

118. [T; N]::map — Transform an Array Without Allocating a Vec

[1, 2, 3].iter().map(|n| n * 2).collect::<Vec<_>>() works, but you’ve thrown the length away in the type and paid for a heap allocation. Arrays have their own map — same shape in, same shape out, no Vec in sight.

The reflex for transforming an array is the iterator chain:

let nums = [1, 2, 3, 4];
let doubled: Vec<i32> = nums.iter().map(|n| n * 2).collect();
assert_eq!(doubled, vec![2, 4, 6, 8]);

That gives you a Vec<i32>. The compiler no longer knows the length, and you paid a heap allocation for the privilege. If you want the array shape back, you’re stuck with try_into and an unwrap you don’t want.

[T; N]::map skips all of it. The output is [U; N] — same N, brand-new element type:

let nums = [1, 2, 3, 4];
let doubled: [i32; 4] = nums.map(|n| n * 2);
assert_eq!(doubled, [2, 4, 6, 8]);

No heap, no length erased, no try_into. Just an array on the stack with a different element type.

It takes each element by value, so it works fine with non-Copy types — no clone dance:

let names = [String::from("a"), String::from("bb"), String::from("ccc")];
let lens: [usize; 3] = names.map(|s| s.len());
assert_eq!(lens, [1, 2, 3]);

The closure consumes the String, the array is moved, and you get a fresh [usize; 3] back. Compare to the iterator version, which would need .into_iter() plus a try_into to recover the array type.

It’s also a clean way to build initialized arrays from one you already have — RGB to RGBA, raw bytes to parsed records, anything fixed-width:

let rgb: [u8; 3] = [200, 100, 50];
let rgba: [u8; 4] = {
    let [r, g, b] = rgb.map(|c| c.saturating_add(5));
    [r, g, b, 255]
};
assert_eq!(rgba, [205, 105, 55, 255]);

When you genuinely want a Vec, .iter().map().collect() still wins. But when the length is part of the design — config slots, fixed-N pipelines, embedded buffers, no_std code — [T; N]::map keeps that fact in the type system instead of throwing it away.

#117 May 2026

117. Iterator::step_by — Every Nth Element Without filter + enumerate

Want every 3rd value from a series? The reflex is enumerate().filter(|(i, _)| i % 3 == 0) — three combinators, one modulo, and you’ve thrown away the indices anyway. step_by(3) does the same thing in one call.

The classic shape: keep every Nth item, drop the rest. Most people reach for enumerate plus a modulo filter:

let xs = [10, 20, 30, 40, 50, 60, 70];

let evens_by_index: Vec<_> = xs
    .iter()
    .enumerate()
    .filter(|(i, _)| i % 2 == 0)
    .map(|(_, x)| *x)
    .collect();

assert_eq!(evens_by_index, [10, 30, 50, 70]);

That works, but you’re indexing just to throw the index away, and the filter runs once per element even though the iterator already knows where to land.

Iterator::step_by(n) yields the first item, then skips n - 1 elements before each subsequent one. Same result, no bookkeeping:

let xs = [10, 20, 30, 40, 50, 60, 70];

let stepped: Vec<_> = xs.iter().step_by(2).copied().collect();

assert_eq!(stepped, [10, 30, 50, 70]);

The first element is always included — step_by(n) starts at index 0, then jumps. If you want to skip the first one, chain with skip:

let xs = [10, 20, 30, 40, 50, 60, 70];

let from_second: Vec<_> = xs.iter().skip(1).step_by(2).copied().collect();

assert_eq!(from_second, [20, 40, 60]);

It composes nicely with ranges, which is where it really shines — multiples, downsampling, every-other-frame logic without writing the loop yourself:

// All multiples of 5 up to 30 (inclusive)
let multiples: Vec<i32> = (0..=30).step_by(5).collect();
assert_eq!(multiples, [0, 5, 10, 15, 20, 25, 30]);

// Downsample a buffer to one in four
let signal: Vec<f32> = (0..16).map(|i| i as f32).collect();
let downsampled: Vec<f32> = signal.iter().step_by(4).copied().collect();
assert_eq!(downsampled, [0.0, 4.0, 8.0, 12.0]);

One footgun: step_by(0) panics. The step has to be at least 1, which makes sense — you can’t “advance by zero” and make progress — but it’s a runtime panic, not a compile error, so don’t pass a step you computed at runtime without checking.

// This would panic: (0..10).step_by(0)
fn safe_step(xs: &[i32], n: usize) -> Vec<i32> {
    if n == 0 { return Vec::new(); }
    xs.iter().step_by(n).copied().collect()
}

assert_eq!(safe_step(&[1, 2, 3, 4], 0), Vec::<i32>::new());
assert_eq!(safe_step(&[1, 2, 3, 4], 2), vec![1, 3]);

Reach for step_by whenever you’d otherwise write enumerate().filter(|(i, _)| i % n == 0) — same behavior, half the code, and the iterator can actually skip elements instead of inspecting every one.