65. Precise Capturing — Stop impl Trait From Borrowing Too Much

Ever had the compiler refuse to let you use a value after calling a function — even though the return type shouldn’t borrow it? use<> bounds give you precise control over what impl Trait actually captures.

The overcapturing problem

In Rust 2024, impl Trait in return position captures all in-scope generic parameters by default — including lifetimes you never intended to hold onto.

Here’s where it bites:

fn make_greeting(name: &str) -> impl std::fmt::Display {
    format!("Hello, {name}!")
}

Without the use<> bound, the returned impl Display would capture the &str lifetime — even though format! produces an owned String that doesn’t borrow name at all. That means the caller can’t drop or reuse name while the return value is alive, for no good reason.

Enter use<> bounds

Stabilized in Rust 1.82, the use<> syntax lets you explicitly declare which generic parameters the opaque type is allowed to capture:

fn make_greeting(name: &str) -> impl std::fmt::Display + use<> {
    format!("Hello, {name}!")
}

fn main() {
    let mut name = String::from("world");
    let greeting = make_greeting(&name);

    // This works! The greeting doesn't capture the lifetime of `name`
    name.push_str("!!!");

    println!("{greeting}"); // Hello, world!
    println!("{name}");     // world!!!
}

use<> means “capture nothing” — the return type is fully owned and independent of any input lifetimes.

Selective capturing

You can also pick exactly which lifetimes to capture. Consider a function that takes two references but only needs to hold onto one:

fn pick_label<'a, 'b>(
    label: &'a str,
    _config: &'b str,
) -> impl std::fmt::Display + use<'a> {
    // We use `_config` to decide formatting, but
    // the return value only borrows from `label`
    format!("[{label}]")
}

fn main() {
    let label = String::from("status");
    let mut config = String::from("uppercase");

    let display = pick_label(&label, &config);

    // We can mutate `config` — the return value doesn't borrow it
    config.push_str(":bold");

    assert_eq!(format!("{display}"), "[status]");
    assert_eq!(config, "uppercase:bold");
}

use<'a> says “this return type borrows from 'a but is independent of 'b.” Without it, the compiler assumes the opaque type captures both lifetimes, preventing you from reusing config.

When you need this

The use<> bound shines when you’re returning iterators, closures, or other impl Trait types from functions that take references they don’t actually need to hold:

fn make_counter(start: &str) -> impl Iterator<Item = usize> + use<> {
    let n: usize = start.parse().unwrap_or(0);
    (n..).take(5)
}

fn main() {
    let mut input = String::from("3");
    let counter = make_counter(&input);

    // We can mutate `input` because the iterator doesn't borrow it
    input.clear();

    let values: Vec<usize> = counter.collect();
    assert_eq!(values, vec![3, 4, 5, 6, 7]);
}

A small annotation that unlocks flexibility the compiler couldn’t infer on its own. If you’ve upgraded to Rust 2024 and hit mysterious “borrowed value does not live long enough” errors on impl Trait returns, use<> is likely the fix.
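The same trick applies to returned closures. A minimal sketch (the `make_adder` helper is hypothetical): the closure parses its input eagerly and owns the result, so `use<>` lets the caller release the borrow immediately:

```rust
// Hypothetical helper: the closure owns `n`, so `use<>` declares
// that the borrow of `config` ends at the call site.
fn make_adder(config: &str) -> impl Fn(i32) -> i32 + use<> {
    let n: i32 = config.parse().unwrap_or(0);
    move |x| x + n
}

fn main() {
    let mut config = String::from("5");
    let add = make_adder(&config);
    config.clear(); // fine: the closure doesn't borrow `config`
    assert_eq!(add(2), 7);
}
```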

64. Iterator::unzip — Split Pairs into Separate Collections

Got an iterator of tuples and need two separate collections? Stop looping and pushing manually — unzip splits pairs into two collections in a single pass.

The problem

You have an iterator that yields pairs — maybe key-value tuples from a computation, or results from enumerate. You need two separate Vecs. The manual approach works, but it’s noisy:

let pairs = vec![("Alice", 95), ("Bob", 87), ("Carol", 92)];

let mut names = Vec::new();
let mut scores = Vec::new();
for (name, score) in &pairs {
    names.push(*name);
    scores.push(*score);
}

assert_eq!(names, vec!["Alice", "Bob", "Carol"]);
assert_eq!(scores, vec![95, 87, 92]);

Two mutable Vecs, a loop, manual pushes — all to split pairs apart.

The fix

Iterator::unzip does exactly this in one line:

let pairs = vec![("Alice", 95), ("Bob", 87), ("Carol", 92)];

let (names, scores): (Vec<&str>, Vec<i32>) = pairs.into_iter().unzip();

assert_eq!(names, vec!["Alice", "Bob", "Carol"]);
assert_eq!(scores, vec![95, 87, 92]);

The type annotation on the left tells Rust which collections to build. It works with any types that implement Default + Extend — so Vec, String, HashSet, and more.

Works great with enumerate

Need indices and values in separate collections?

let fruits = vec!["apple", "banana", "cherry"];

let (indices, items): (Vec<usize>, Vec<&&str>) = fruits.iter().enumerate().unzip();

assert_eq!(indices, vec![0, 1, 2]);
assert_eq!(items, vec![&"apple", &"banana", &"cherry"]);

Combine with map for transforms

Chain map before unzip to transform on the fly:

let data = vec![("temp_c", 20.0), ("temp_c", 35.0), ("temp_c", 0.0)];

let (labels, fahrenheit): (Vec<&str>, Vec<f64>) = data
    .into_iter()
    .map(|(label, c)| (label, c * 9.0 / 5.0 + 32.0))
    .unzip();

assert_eq!(labels, vec!["temp_c", "temp_c", "temp_c"]);
assert_eq!(fahrenheit, vec![68.0, 95.0, 32.0]);

Unzip into different collection types

Since unzip works with any Default + Extend types, you can collect into mixed collections:

use std::collections::HashSet;

let entries = vec![("admin", "read"), ("admin", "write"), ("user", "read")];

let (roles, perms): (Vec<&str>, HashSet<&str>) = entries.into_iter().unzip();

assert_eq!(roles, vec!["admin", "admin", "user"]);
assert_eq!(perms.len(), 2); // "read" and "write", deduplicated
assert!(perms.contains("read"));
assert!(perms.contains("write"));

One side is a Vec, the other is a HashSet; unzip doesn’t care, as long as both sides can extend themselves.

Whenever you’re about to write a loop that pushes into two collections, reach for unzip instead.

63. fmt::from_fn — Display Anything With a Closure

Need a quick Display impl without defining a whole new type? std::fmt::from_fn turns any closure into a formattable value — ad-hoc Display in one line.

The problem: Display needs a type

Normally, implementing Display means creating a wrapper struct just to control how something gets formatted:

use std::fmt;

struct CommaSeparated<'a>(&'a [i32]);

impl fmt::Display for CommaSeparated<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for (i, val) in self.0.iter().enumerate() {
            if i > 0 { write!(f, ", ")?; }
            write!(f, "{val}")?;
        }
        Ok(())
    }
}

fn main() {
    let nums = vec![1, 2, 3];
    let display = CommaSeparated(&nums);
    assert_eq!(display.to_string(), "1, 2, 3");
}

That’s a lot of boilerplate for “join with commas.”

After: fmt::from_fn

Stabilized in Rust 1.93, std::fmt::from_fn takes a closure Fn(&mut Formatter) -> fmt::Result and returns a value that implements Display (and Debug):

use std::fmt;

fn main() {
    let nums = vec![1, 2, 3];

    let display = fmt::from_fn(|f| {
        for (i, val) in nums.iter().enumerate() {
            if i > 0 { write!(f, ", ")?; }
            write!(f, "{val}")?;
        }
        Ok(())
    });

    assert_eq!(display.to_string(), "1, 2, 3");
    // Works directly in format strings too
    assert_eq!(format!("numbers: {display}"), "numbers: 1, 2, 3");
}

No wrapper type, no trait impl — just a closure that writes to a formatter.

Building reusable formatters

Wrap from_fn in a function and you have a reusable formatter without the boilerplate:

use std::fmt;

fn join_with<'a, I, T>(iter: I, sep: &'a str) -> impl fmt::Display + 'a
where
    I: IntoIterator<Item = T> + Clone + 'a,
    T: fmt::Display,
{
    fmt::from_fn(move |f| {
        // `from_fn` takes a `Fn` closure that may run every time the value
        // is formatted, so clone the iterator instead of consuming it.
        let mut first = true;
        for item in iter.clone() {
            if !first { write!(f, "{sep}")?; }
            first = false;
            write!(f, "{item}")?;
        }
        Ok(())
    })
}

fn main() {
    let tags = vec!["rust", "formatting", "closures"];
    assert_eq!(join_with(tags, " | ").to_string(), "rust | formatting | closures");

    let nums = vec![10, 20, 30];
    assert_eq!(format!("sum of [{}]", join_with(nums, " + ")), "sum of [10 + 20 + 30]");
}

Lazy formatting — no allocation until needed

from_fn is lazy — the closure only runs when the value is actually formatted. This makes it perfect for logging where most messages might be filtered out:

use std::fmt;

fn debug_summary(data: &[u8]) -> impl fmt::Display + '_ {
    fmt::from_fn(move |f| {
        write!(f, "[{} bytes: ", data.len())?;
        for (i, byte) in data.iter().take(4).enumerate() {
            if i > 0 { write!(f, " ")?; }
            write!(f, "{byte:02x}")?;
        }
        if data.len() > 4 { write!(f, " ...")?; }
        write!(f, "]")
    })
}

fn main() {
    let payload = vec![0xDE, 0xAD, 0xBE, 0xEF, 0x42, 0x00];
    let summary = debug_summary(&payload);

    // The closure hasn't run yet — zero work done
    // It only formats when you actually use it:
    assert_eq!(summary.to_string(), "[6 bytes: de ad be ef ...]");
}

No String is allocated unless something actually calls Display::fmt. Compare that to eagerly building a debug string just in case.

from_fn also implements Debug

The returned value implements both Display and Debug, so it works with {:?} too:

use std::fmt;

fn main() {
    let val = fmt::from_fn(|f| write!(f, "custom"));
    assert_eq!(format!("{val}"),   "custom");
    assert_eq!(format!("{val:?}"), "custom");
}

fmt::from_fn is a tiny addition to std that removes a surprisingly common friction point — any time you’d reach for a newtype just to implement Display, try a closure instead.

#062 Apr 2026

62. Iterator::flat_map — Map and Flatten in One Step

Need to transform each element into multiple items and collect them all into a flat sequence? flat_map combines map and flatten into a single, expressive call.

The nested iterator problem

Say you have a list of sentences and want all individual words. The naive approach with map gives you an iterator of iterators:

let sentences = vec!["hello world", "foo bar baz"];

// map gives us one Vec per sentence — not the flat list we want
let nested: Vec<Vec<&str>> = sentences.iter()
    .map(|s| s.split_whitespace().collect())
    .collect();

assert_eq!(nested, vec![vec!["hello", "world"], vec!["foo", "bar", "baz"]]);

You could chain .map().flatten(), but flat_map does both at once:

let sentences = vec!["hello world", "foo bar baz"];

let words: Vec<&str> = sentences.iter()
    .flat_map(|s| s.split_whitespace())
    .collect();

assert_eq!(words, vec!["hello", "world", "foo", "bar", "baz"]);

Expanding one-to-many relationships

flat_map shines when each input element maps to zero or more outputs. Think of it as a one-to-many transform:

let numbers = vec![1, 2, 3];

// Each number expands to itself and its double
let expanded: Vec<i32> = numbers.iter()
    .flat_map(|&n| vec![n, n * 2])
    .collect();

assert_eq!(expanded, vec![1, 2, 2, 4, 3, 6]);

Filtering and transforming at once

Since flat_map’s closure can return an empty iterator, it naturally combines filtering and mapping — just return None or Some:

let inputs = vec!["42", "not_a_number", "7", "oops", "13"];

let parsed: Vec<i32> = inputs.iter()
    .flat_map(|s| s.parse::<i32>().ok())
    .collect();

assert_eq!(parsed, vec![42, 7, 13]);

This works because Option implements IntoIterator: Some(x) yields one item, None yields zero. It’s equivalent to filter_map, but flat_map generalizes to any iterator, not just Option.

Traversing nested structures

Got a tree-like structure? flat_map lets you drill into children naturally:

struct Team {
    name: &'static str,
    members: Vec<&'static str>,
}

let teams = vec![
    Team { name: "backend", members: vec!["Alice", "Bob"] },
    Team { name: "frontend", members: vec!["Carol"] },
    Team { name: "devops", members: vec!["Dave", "Eve", "Frank"] },
];

let all_members: Vec<&str> = teams.iter()
    .flat_map(|team| team.members.iter().copied())
    .collect();

assert_eq!(all_members, vec!["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]);

flat_map vs map + flatten

They’re semantically identical — flat_map(f) is just map(f).flatten(). But flat_map reads better and signals your intent: “each element produces multiple items, and I want them all in one sequence.”

let data = vec![vec![1, 2], vec![3], vec![4, 5, 6]];

// These are equivalent:
let a: Vec<i32> = data.iter().flat_map(|v| v.iter().copied()).collect();
let b: Vec<i32> = data.iter().map(|v| v.iter().copied()).flatten().collect();

assert_eq!(a, b);
assert_eq!(a, vec![1, 2, 3, 4, 5, 6]);

flat_map has been stable since Rust 1.0 — it’s a fundamental iterator combinator that replaces nested loops with clean, composable pipelines.

#061 Apr 2026

61. Iterator::reduce — Fold Without an Initial Value

Using fold but your accumulator starts as the first element anyway? Iterator::reduce cuts out the boilerplate and handles empty iterators gracefully.

The fold pattern you keep writing

When finding the longest string, maximum value, or combining elements, fold forces you to pick an initial value — often awkwardly:

let words = vec!["rust", "is", "awesome"];

let longest = words.iter().fold("", |acc, &w| {
    if w.len() > acc.len() { w } else { acc }
});

assert_eq!(longest, "awesome");

That empty string "" is a code smell — it’s not a real element, it’s just satisfying fold’s signature. And if the input is empty, you silently get "" back instead of knowing there was nothing to reduce.

Enter reduce

Iterator::reduce uses the first element as the initial accumulator. No seed value needed, and it returns Option<T>, with None for empty iterators:

let words = vec!["rust", "is", "awesome"];

let longest = words.iter().reduce(|acc, w| {
    if w.len() > acc.len() { w } else { acc }
});

assert_eq!(longest, Some(&"awesome"));

The Option return makes the empty case explicit — no more silent defaults.

Finding extremes without max_by

reduce is perfect for custom comparisons where max_by feels heavy:

let scores = vec![("Alice", 92), ("Bob", 87), ("Carol", 95), ("Dave", 88)];

let top_scorer = scores.iter().reduce(|best, current| {
    if current.1 > best.1 { current } else { best }
});

assert_eq!(top_scorer, Some(&("Carol", 95)));

Concatenating without an allocator seed

Building a combined result from parts? reduce avoids allocating an empty starter:

let parts = vec![
    String::from("hello"),
    String::from(" "),
    String::from("world"),
];

let combined = parts.into_iter().reduce(|mut acc, s| {
    acc.push_str(&s);
    acc
});

assert_eq!(combined, Some(String::from("hello world")));

Compare this to fold(String::new(), ...) — with reduce, the first String becomes the accumulator directly, saving one allocation.

reduce vs fold — when to use which

Use reduce when the accumulator is the same type as the elements and there’s no meaningful “zero” value. Use fold when you need a different return type or a specific starting value:

// reduce: same type in, same type out
let sum = vec![1, 2, 3, 4].into_iter().reduce(|a, b| a + b);
assert_eq!(sum, Some(10));

// fold: different return type (counting into a HashMap)
use std::collections::HashMap;
let counts = vec!["a", "b", "a", "c", "b", "a"]
    .into_iter()
    .fold(HashMap::new(), |mut map, item| {
        *map.entry(item).or_insert(0) += 1;
        map
    });
assert_eq!(counts["a"], 3);

reduce has been stable since Rust 1.51 — it’s the functional programmer’s best friend for collapsing iterators when the first element is your natural starting point.

60. Iterator::partition — Split a Collection in Two

Need to split a collection into two groups based on a condition? Skip the manual loop — Iterator::partition does it in one call.

The manual way

Without partition, you’d loop and push into two separate vectors:

let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8];

let mut evens = Vec::new();
let mut odds = Vec::new();

for n in numbers {
    if n % 2 == 0 {
        evens.push(n);
    } else {
        odds.push(n);
    }
}

assert_eq!(evens, vec![2, 4, 6, 8]);
assert_eq!(odds, vec![1, 3, 5, 7]);

It works, but it’s a lot of ceremony for a simple split.

Enter partition

Iterator::partition collects into two collections in a single pass. Items where the predicate returns true go left, false goes right:

let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8];

let (evens, odds): (Vec<_>, Vec<_>) = numbers
    .iter()
    .partition(|&&n| n % 2 == 0);

assert_eq!(evens, vec![&2, &4, &6, &8]);
assert_eq!(odds, vec![&1, &3, &5, &7]);

The type annotation (Vec<_>, Vec<_>) is required — Rust needs to know what collections to build. You can partition into any type that implements Default + Extend, not just Vec.

Owned values with into_iter

Use into_iter() when you want owned values instead of references:

let files = vec![
    "main.rs", "lib.rs", "test_utils.rs",
    "README.md", "CHANGELOG.md",
];

let (rust_files, other_files): (Vec<_>, Vec<_>) = files
    .into_iter()
    .partition(|f| f.ends_with(".rs"));

assert_eq!(rust_files, vec!["main.rs", "lib.rs", "test_utils.rs"]);
assert_eq!(other_files, vec!["README.md", "CHANGELOG.md"]);

A practical use: triaging results

partition pairs beautifully with Result to separate successes from failures:

let inputs = vec!["42", "not_a_number", "7", "oops", "13"];

let (oks, errs): (Vec<_>, Vec<_>) = inputs
    .iter()
    .map(|s| s.parse::<i32>())
    .partition(Result::is_ok);

let values: Vec<i32> = oks.into_iter().map(Result::unwrap).collect();
let failures: Vec<_> = errs.into_iter().map(Result::unwrap_err).collect();

assert_eq!(values, vec![42, 7, 13]);
assert_eq!(failures.len(), 2);

partition has been stable since Rust 1.0 — one of those hidden gems that’s been there all along. Anytime you reach for a loop to split items into two buckets, reach for partition instead.

#059 Apr 2026

59. split_first_chunk — Destructure Slices into Arrays

Parsing a header from a byte buffer? Extracting the first N elements of a slice? split_first_chunk hands you a fixed-size array and the remainder in one call — no manual indexing, no panics.

The Problem

You have a byte slice and need to pull out a fixed-size prefix — say a 4-byte magic number or a 2-byte length field. The manual approach is fragile:

fn parse_header(data: &[u8]) -> Option<([u8; 4], &[u8])> {
    if data.len() < 4 {
        return None;
    }
    let header: [u8; 4] = data[..4].try_into().unwrap();
    let rest = &data[4..];
    Some((header, rest))
}

fn main() {
    let packet = b"RUST is awesome";
    let (header, rest) = parse_header(packet).unwrap();
    assert_eq!(&header, b"RUST");
    assert_eq!(rest, b" is awesome");
}

That try_into().unwrap() is ugly, and if you get the index arithmetic wrong, you get a panic at runtime.

After: split_first_chunk

Stabilized in Rust 1.77, split_first_chunk splits a slice into a &[T; N] array reference and the remaining slice — returning None if the slice is too short:

fn parse_header(data: &[u8]) -> Option<(&[u8; 4], &[u8])> {
    data.split_first_chunk::<4>()
}

fn main() {
    let packet = b"RUST is awesome";
    let (magic, rest) = parse_header(packet).unwrap();
    assert_eq!(magic, b"RUST");
    assert_eq!(rest, b" is awesome");

    // Too short — returns None instead of panicking
    let tiny = b"RS";
    assert!(tiny.split_first_chunk::<4>().is_none());
}

One method call. No manual slicing, no try_into, and the const generic N ensures the compiler knows the exact array size.

Chaining Chunks for Protocol Parsing

Real protocols have multiple fields. Chain split_first_chunk calls to peel them off one at a time:

fn parse_packet(data: &[u8]) -> Option<([u8; 2], [u8; 4], &[u8])> {
    let (version, rest) = data.split_first_chunk::<2>()?;
    let (length, payload) = rest.split_first_chunk::<4>()?;
    Some((*version, *length, payload))
}

fn main() {
    let raw = b"\x01\x02\x00\x00\x00\x05hello";
    let (version, length, payload) = parse_packet(raw).unwrap();

    assert_eq!(version, [0x01, 0x02]);
    assert_eq!(length, [0x00, 0x00, 0x00, 0x05]);
    assert_eq!(payload, b"hello");
}

Each ? short-circuits if the remaining data is too short. No bounds checks scattered across your code.
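The length field is still raw bytes at that point. A small sketch (the `read_len` helper is hypothetical) showing how the fixed-size array feeds straight into u32::from_be_bytes, which takes exactly a [u8; 4]:

```rust
// Hypothetical helper: peel a big-endian u32 length prefix off a buffer.
fn read_len(data: &[u8]) -> Option<(u32, &[u8])> {
    let (len, rest) = data.split_first_chunk::<4>()?;
    Some((u32::from_be_bytes(*len), rest))
}

fn main() {
    let raw = b"\x00\x00\x00\x05hello";
    let (len, rest) = read_len(raw).unwrap();
    assert_eq!(len, 5);
    assert_eq!(rest, b"hello");
}
```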

From the Other End: split_last_chunk

Need to grab a suffix instead — like a trailing checksum? split_last_chunk mirrors the API from the back:

fn strip_checksum(data: &[u8]) -> Option<(&[u8], &[u8; 2])> {
    data.split_last_chunk::<2>()
}

fn main() {
    let msg = b"payload\xAB\xCD";
    let (body, checksum) = strip_checksum(msg).unwrap();
    assert_eq!(body, b"payload");
    assert_eq!(checksum, &[0xAB, 0xCD]);

    let short = b"\x01";
    assert!(strip_checksum(short).is_none());
}

Same safety, same ergonomics — just peeling from the tail.

The Full Family

These methods come in mutable variants too, all stabilized in 1.77:

fn main() {
    // Immutable — borrow array refs from a slice
    let data: &[u8] = &[1, 2, 3, 4, 5];
    let (head, tail) = data.split_first_chunk::<2>().unwrap();
    assert_eq!(head, &[1, 2]);
    assert_eq!(tail, &[3, 4, 5]);

    // split_last_chunk — from the back
    let (init, last) = data.split_last_chunk::<2>().unwrap();
    assert_eq!(init, &[1, 2, 3]);
    assert_eq!(last, &[4, 5]);

    // first_chunk / last_chunk — just the array, no remainder
    let first: &[u8; 3] = data.first_chunk::<3>().unwrap();
    assert_eq!(first, &[1, 2, 3]);

    let last: &[u8; 3] = data.last_chunk::<3>().unwrap();
    assert_eq!(last, &[3, 4, 5]);
}

Wherever you reach for &data[..N] and a try_into(), there’s probably a chunk method that does it better. Type-safe, bounds-checked, and zero-cost.
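The mutable variants deserve a quick look too. A sketch with split_first_chunk_mut, patching a fixed-size prefix in place:

```rust
fn main() {
    let mut buf = [0u8; 6];

    // Borrow the first two bytes as a &mut [u8; 2] and the rest as &mut [u8].
    if let Some((header, body)) = buf.split_first_chunk_mut::<2>() {
        *header = [0xCA, 0xFE];
        body.fill(0x01);
    }

    assert_eq!(buf, [0xCA, 0xFE, 0x01, 0x01, 0x01, 0x01]);
}
```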

#058 Apr 2026

58. Option::is_none_or — The Guard Clause You Were Missing

You know is_some_and — it checks if the Option holds a value matching a predicate. But what about “it’s fine if it’s missing, just validate it when it’s there”? That’s is_none_or.

The Problem

You have an optional field and want to validate it only when present. Without is_none_or, this gets awkward fast:

struct Query {
    max_results: Option<usize>,
}

fn is_valid(q: &Query) -> bool {
    // "None is fine, but if set, must be ≤ 100"
    match q.max_results {
        None => true,
        Some(n) => n <= 100,
    }
}

fn main() {
    assert!(is_valid(&Query { max_results: None }));
    assert!(is_valid(&Query { max_results: Some(50) }));
    assert!(!is_valid(&Query { max_results: Some(200) }));
}

That match is fine once, but when you’re validating five optional fields, it bloats your code.

After: is_none_or

Stabilized in Rust 1.82, is_none_or collapses the pattern into a single call — returns true if the Option is None, or applies the predicate if it’s Some:

struct Query {
    max_results: Option<usize>,
    page: Option<usize>,
    prefix: Option<String>,
}

fn is_valid(q: &Query) -> bool {
    q.max_results.is_none_or(|n| n <= 100)
        && q.page.is_none_or(|p| p >= 1)
        && q.prefix.as_deref().is_none_or(|s| !s.is_empty())
}

fn main() {
    let good = Query {
        max_results: Some(50),
        page: Some(1),
        prefix: None,
    };
    assert!(is_valid(&good));

    let bad = Query {
        max_results: Some(200),
        page: Some(0),
        prefix: Some(String::new()),
    };
    assert!(!is_valid(&bad));

    let empty = Query {
        max_results: None,
        page: None,
        prefix: None,
    };
    assert!(is_valid(&empty));
}

Three fields validated in three clean lines. Every None passes, every Some must satisfy the predicate.

is_some_and vs is_none_or — Pick the Right Default

Think of it this way — what happens when the value is None?

fn main() {
    let val: Option<i32> = None;

    // is_some_and: None → false (strict: must be present AND valid)
    assert_eq!(val.is_some_and(|x| x > 0), false);

    // is_none_or: None → true (lenient: absence is acceptable)
    assert_eq!(val.is_none_or(|x| x > 0), true);

    let val = Some(5);

    // Both agree when value is present and valid
    assert_eq!(val.is_some_and(|x| x > 0), true);
    assert_eq!(val.is_none_or(|x| x > 0), true);

    let val = Some(-1);

    // Both agree when value is present and invalid
    assert_eq!(val.is_some_and(|x| x > 0), false);
    assert_eq!(val.is_none_or(|x| x > 0), false);
}

Use is_some_and when a value must be present. Use is_none_or when absence is fine — optional config, nullable API fields, or any “default-if-missing” scenario.
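The two are also logical duals, which is a handy sanity check when deciding between them. A sketch: is_none_or(p) always equals the negation of is_some_and with the negated predicate:

```rust
fn main() {
    // For every Option, is_none_or(p) == !is_some_and(!p).
    let cases = [None, Some(5), Some(-1)];
    for v in cases {
        assert_eq!(v.is_none_or(|x| x > 0), !v.is_some_and(|x| x <= 0));
    }
}
```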

Real-World: Filtering with Optional Criteria

This pattern is perfect for search filters where users only set the criteria they care about:

struct Filter {
    min_price: Option<f64>,
    category: Option<String>,
}

struct Product {
    name: String,
    price: f64,
    category: String,
}

fn matches(product: &Product, filter: &Filter) -> bool {
    filter.min_price.is_none_or(|min| product.price >= min)
        && filter.category.as_deref().is_none_or(|cat| product.category == cat)
}

fn main() {
    let products = vec![
        Product { name: "Laptop".into(), price: 999.0, category: "Electronics".into() },
        Product { name: "Book".into(), price: 15.0, category: "Books".into() },
        Product { name: "Phone".into(), price: 699.0, category: "Electronics".into() },
    ];

    let filter = Filter {
        min_price: Some(100.0),
        category: Some("Electronics".into()),
    };

    let results: Vec<_> = products.iter().filter(|p| matches(p, &filter)).collect();
    assert_eq!(results.len(), 2);
    assert_eq!(results[0].name, "Laptop");
    assert_eq!(results[1].name, "Phone");
}

Every None filter lets everything through. Every Some filter narrows the results. No match statements, no nested ifs — just composable guards.

#057 Apr 2026

57. New Math Constants — GOLDEN_RATIO and EULER_GAMMA in std

Tired of defining your own golden ratio or Euler-Mascheroni constant? As of Rust 1.94, std ships them out of the box — no more copy-pasting magic numbers.

Before: Roll Your Own

If you needed the golden ratio or the Euler-Mascheroni constant before Rust 1.94, you had to define them yourself:

const PHI: f64 = 1.618033988749895;
const EULER_GAMMA: f64 = 0.5772156649015329;

This works, but it’s error-prone. One wrong digit and your calculations drift. And every project that needs these ends up with its own slightly-different copy.

After: Just Use std

Rust 1.94 added GOLDEN_RATIO and EULER_GAMMA to the standard consts modules for both f32 and f64:

use std::f64::consts::{GOLDEN_RATIO, EULER_GAMMA};

fn main() {
    // Golden ratio: (1 + √5) / 2
    let phi = GOLDEN_RATIO;
    assert!((phi * phi - phi - 1.0).abs() < 1e-10);

    // Euler-Mascheroni constant
    let gamma = EULER_GAMMA;
    assert!((gamma - 0.5772156649015329).abs() < 1e-10);

    println!("φ = {phi}");
    println!("γ = {gamma}");
}

These sit right alongside the constants you already know — PI, TAU, E, SQRT_2, and friends.

Where You’d Actually Use Them

Golden ratio shows up in algorithm design (Fibonacci heaps, golden-section search), generative art, and UI layout proportions:

use std::f64::consts::GOLDEN_RATIO;

fn golden_section_dimensions(width: f64) -> (f64, f64) {
    let height = width / GOLDEN_RATIO;
    (width, height)
}

fn main() {
    let (w, h) = golden_section_dimensions(800.0);
    assert!((w / h - GOLDEN_RATIO).abs() < 1e-10);
    println!("Width: {w}, Height: {h:.2}");
}

Euler-Mascheroni constant appears in number theory, harmonic series approximations, and probability:

use std::f64::consts::EULER_GAMMA;

/// Approximate the N-th harmonic number using the
/// asymptotic expansion: H_n ≈ ln(n) + γ + 1/(2n)
fn harmonic_approx(n: u64) -> f64 {
    let nf = n as f64;
    nf.ln() + EULER_GAMMA + 1.0 / (2.0 * nf)
}

fn main() {
    // Exact H_10 = 1 + 1/2 + 1/3 + ... + 1/10
    let exact: f64 = (1..=10).map(|i| 1.0 / i as f64).sum();
    let approx = harmonic_approx(10);

    println!("Exact H_10:  {exact:.6}");
    println!("Approx H_10: {approx:.6}");
    assert!((exact - approx).abs() < 0.01);
}

The Full Lineup

With these additions, std::f64::consts now includes: PI, TAU, E, SQRT_2, SQRT_3, LN_2, LN_10, LOG2_E, LOG2_10, LOG10_2, LOG10_E, FRAC_1_PI, FRAC_1_SQRT_2, FRAC_1_SQRT_2PI, FRAC_2_PI, FRAC_2_SQRT_PI, FRAC_PI_2, FRAC_PI_3, FRAC_PI_4, FRAC_PI_6, FRAC_PI_8, GOLDEN_RATIO, and EULER_GAMMA. That’s a pretty complete toolkit for numerical work — all with full f64 precision, available at compile time.

56. Iterator::map_while — Take While Transforming

Need to take elements from an iterator while a condition holds and transform them at the same time? map_while does both in one step — no awkward take_while + map chains needed.

The Problem

Imagine you’re parsing leading digits from a string. With take_while and map, you’d write something like this:

let input = "42abc";

let digits: Vec<u32> = input
    .chars()
    .take_while(|c| c.is_ascii_digit())
    .map(|c| c.to_digit(10).unwrap())
    .collect();

assert_eq!(digits, vec![4, 2]);

This works, but the condition and the transformation are split across two combinators. The unwrap() in map is also a code smell — you know the char is a digit because take_while checked, but the compiler doesn’t.

The Solution

map_while combines both steps. Your closure returns Some(value) to keep going or None to stop:

let input = "42abc";

let digits: Vec<u32> = input
    .chars()
    .map_while(|c| c.to_digit(10))
    .collect();

assert_eq!(digits, vec![4, 2]);

char::to_digit already returns Option<u32> — it’s Some(n) for digits and None otherwise. That’s a perfect fit for map_while. No separate condition, no unwrap.

A More Practical Example

Parse key-value config lines until you hit a blank line:

let lines = vec![
    "host=localhost",
    "port=8080",
    "",
    "ignored=true",
];

let config: Vec<(&str, &str)> = lines
    .iter()
    .map_while(|line| line.split_once('='))
    .collect();

assert_eq!(config, vec![("host", "localhost"), ("port", "8080")]);

When split_once('=') hits the empty line "", it returns None — and the iterator stops. Everything after the blank line is skipped, no extra logic required.

map_while vs take_while + map

The key difference: map_while fuses the predicate and the transformation into one closure, which means:

  • No redundant checks — you don’t test a condition in take_while and then repeat similar logic in map.
  • No unwrap — since the closure returns Option, you never need to unwrap inside a subsequent map.
  • Cleaner intent — one combinator says “transform elements until you can’t.”

Reach for map_while whenever your stopping condition and your transformation are two sides of the same coin.