#136 May 2026

136. LazyLock::force_mut — Mutate a Lazy Value Without Wrapping It in a Mutex

Got a LazyLock you own outright? With force_mut (stable since Rust 1.83), you can initialize and mutate it through &mut LazyLock — no Mutex, no RwLock, no locking dance.

The Problem

LazyLock is perfect for one-time initialization, but its Deref only hands out a shared reference. If you want to mutate the inner value, the textbook move is to wrap it in Mutex<T>:

use std::sync::{LazyLock, Mutex};

static CACHE: LazyLock<Mutex<Vec<String>>> = LazyLock::new(|| Mutex::new(Vec::new()));

fn add(s: String) {
    CACHE.lock().unwrap().push(s);
}

That’s the right answer for shared global state. But when you actually have exclusive ownership — a struct field, a builder, a test fixture — the Mutex is pure ceremony.

force_mut: Init and Mutate in One Step

If you have &mut LazyLock<T>, you already have exclusive access. Synchronization is moot. force_mut exploits that: it triggers initialization if needed, then hands you a plain &mut T.

use std::sync::LazyLock;

let mut tags = LazyLock::new(|| vec!["rust"]);

let v: &mut Vec<&'static str> = LazyLock::force_mut(&mut tags);
v.push("std");
v.push("lazy");

assert_eq!(*v, vec!["rust", "std", "lazy"]);

No Mutex, no .lock().unwrap(), no poisoning to handle. The init closure runs at most once, and from then on you can mutate freely through &mut.

get_mut: Mutate Only If Already Initialized

The sibling, LazyLock::get_mut, returns Option<&mut T> and won’t trigger init:

use std::sync::LazyLock;

let mut counts: LazyLock<Vec<u32>> = LazyLock::new(|| vec![0u32; 8]);

// Hasn't been touched yet — get_mut returns None.
assert!(LazyLock::get_mut(&mut counts).is_none());

// Force initialization explicitly:
LazyLock::force_mut(&mut counts)[0] = 42;

// Now it's available without re-running the closure:
let slot = LazyLock::get_mut(&mut counts).unwrap();
slot[1] = 99;

assert_eq!(*slot, [42, 99, 0, 0, 0, 0, 0, 0]);

Useful when you’d rather skip work entirely if it never happened — “flush the cache on shutdown, but only if anyone built it.”

When to Reach for It

Pick force_mut whenever you own the LazyLock outright and would otherwise wrap it in Mutex<T> just to get mutation. It’s perfect for struct fields, test fixtures, builders, and anything else where you already have &mut to the container.

LazyCell::force_mut and LazyCell::get_mut ship the same shape for the single-thread cell — pick whichever matches your sync story.

#135 May 2026

135. str::strip_prefix — Trim a Prefix Without Slicing by Hand

Reaching for if s.starts_with("foo") { &s[3..] } to drop a prefix? That’s an off-by-one waiting to happen — and a panic the day the hardcoded offset drifts out of sync with the prefix and lands mid-codepoint. str::strip_prefix returns Option<&str> and gets it right by construction.

The Problem

You want the part of a string after a known prefix:

let s = "Bearer abc123";

let token = if s.starts_with("Bearer ") {
    &s[7..]
} else {
    s
};
assert_eq!(token, "abc123");

Two things wrong here: the literal 7 has to stay in sync with the literal "Bearer ", and if they ever drift apart the slice can land mid-codepoint and panic. Switching to prefix.len() keeps the numbers in sync, but the check and the slice are still two separate steps that have to agree.

The Fix: strip_prefix

let s = "Bearer abc123";

let token = s.strip_prefix("Bearer ").unwrap_or(s);
assert_eq!(token, "abc123");

strip_prefix returns Some(&str) if the prefix matched (giving you the rest), or None if it didn’t. No magic numbers, no slicing, no UTF-8 footguns — the prefix length comes from the prefix itself.

Pattern Matching, Not Just Strings

The argument is anything implementing Pattern, so a char, a closure, or even an array of chars all work:

assert_eq!("-x".strip_prefix('-'), Some("x"));
assert_eq!("x".strip_prefix('-'), None);

// Trim any leading whitespace character
let s = "\t  hello".strip_prefix(|c: char| c.is_whitespace());
assert_eq!(s, Some("  hello"));

Note this only strips one match — the char form doesn’t loop. For “strip every leading space,” reach for trim_start_matches.

The Twin: strip_suffix

Same shape, other end:

let filename = "report.tar.gz";

let stem = filename.strip_suffix(".gz").unwrap_or(filename);
assert_eq!(stem, "report.tar");

Together they replace half the manual &s[..s.len() - 3] arithmetic you’d otherwise write — and the Option return makes “did it actually have the prefix?” a value, not a separate starts_with call.

#134 May 2026

134. Iterator::find_map — Find and Transform in One Pass

Looking for the first element that matches and needs to come back as something else? Skip the filter_map(...).next() two-step — find_map says it in one call.

The Problem

You have an iterator and want the first item that satisfies a condition plus the value derived from it. The hand-rolled version is a for loop with a mutable binding and a break:

let inputs = ["abc", "12", "def", "34"];

let mut first: Option<i32> = None;
for s in &inputs {
    if let Ok(n) = s.parse::<i32>() {
        first = Some(n);
        break;
    }
}
assert_eq!(first, Some(12));

You can compress it with filter_map(...).next():

let first: Option<i32> = inputs
    .iter()
    .filter_map(|s| s.parse::<i32>().ok())
    .next();
assert_eq!(first, Some(12));

It’s shorter, but what you actually mean — find the first one — is buried inside “filter everything, then take one.”

The Fix: find_map

Iterator::find_map takes a closure returning Option<U> and returns the first Some(U) it produces — short-circuiting as soon as the closure says yes:

let first: Option<i32> = inputs
    .iter()
    .find_map(|s| s.parse::<i32>().ok());
assert_eq!(first, Some(12));

Same short-circuit behavior as find, but the closure does the transformation too — no separate map step on the result.

Where It Earns Its Keep

Looking up the first input that’s in a small table:

fn level(name: &str) -> Option<u8> {
    match name {
        "trace" => Some(0),
        "debug" => Some(1),
        "info"  => Some(2),
        _ => None,
    }
}

let inputs = ["??", "huh", "info", "debug"];
let first = inputs.iter().find_map(|s| level(s));
assert_eq!(first, Some(2));

Pulling the first error out of a batch of Results without losing the message:

let results: Vec<Result<i32, &str>> = vec![Ok(1), Ok(2), Err("bad row"), Ok(3)];

let first_err = results.iter().find_map(|r| r.as_ref().err().copied());
assert_eq!(first_err, Some("bad row"));

Anywhere you catch yourself writing .filter_map(...).next() or a manual loop with break, find_map says the same thing with less noise.

#133 May 2026

133. slice::rotate_left — Cycle Elements Through a Slice Without a Second Buffer

Need the first few elements to wrap around to the back? slice.rotate_left(n) cycles them through in place — no scratch Vec, no clever indexing, no borrow checker drama.

The Problem

Rotating a slice “the obvious way” means allocating a temporary, copying things twice, and being very careful about ranges:

fn rotate_left_manual(v: &mut Vec<i32>, n: usize) {
    let mut tmp: Vec<i32> = v[..n].to_vec();
    v.drain(..n);
    v.append(&mut tmp);
}

let mut data = vec![1, 2, 3, 4, 5];
rotate_left_manual(&mut data, 2);
assert_eq!(data, vec![3, 4, 5, 1, 2]);

It works, but it allocates and only runs on Vec. The moment you only have a &mut [T] — a window inside a larger buffer, say — to_vec/drain aren’t options at all.

After: rotate_left and rotate_right

Every slice already knows how to rotate itself in place:

let mut data = [1, 2, 3, 4, 5];
data.rotate_left(2);
assert_eq!(data, [3, 4, 5, 1, 2]);

let mut data = [1, 2, 3, 4, 5];
data.rotate_right(2);
assert_eq!(data, [4, 5, 1, 2, 3]);

rotate_left(n) moves the first n elements to the end; rotate_right(n) moves the last n to the front. Both run in O(len) with zero allocations, and they work on any &mut [T] — arrays, vector slices, sub-ranges of bigger buffers.

Where It Earns Its Keep

Round-robin scheduling: the first runner takes a turn, then moves to the back of the line.

let mut queue = ["alice", "bob", "carol", "dave"];

for _ in 0..queue.len() {
    println!("now serving: {}", queue[0]);
    queue.rotate_left(1);
}
// queue is back to its original order
assert_eq!(queue, ["alice", "bob", "carol", "dave"]);

Scrolling a fixed-size display buffer — drop the oldest row, leave a slot at the end for the newest:

let mut rows = [10, 20, 30, 40];
rows.rotate_left(1);
rows[rows.len() - 1] = 99;
assert_eq!(rows, [20, 30, 40, 99]);

And because it operates on &mut [T], you can rotate a window inside a larger buffer without splitting it:

let mut buf = [0, 1, 2, 3, 4, 5, 6, 7];
buf[2..6].rotate_left(1);
assert_eq!(buf, [0, 1, 3, 4, 5, 2, 6, 7]);

Anywhere you’d reach for “shift everything left and stick the front on the end,” rotate_left does it in one line with no allocation.

#132 May 2026

132. abs_diff — Subtract Without Caring Which Side Is Bigger

Subtracting two unsigned integers and the smaller one comes first? Instant panic. a.abs_diff(b) returns the gap as a u* regardless of which side is bigger — no branching, no overflow.

The Problem

Unsigned subtraction in Rust panics in debug and wraps in release the moment the result would go negative. You end up writing the same branch over and over:

fn gap(a: u32, b: u32) -> u32 {
    if a > b { a - b } else { b - a }
}

assert_eq!(gap(10, 3), 7);
assert_eq!(gap(3, 10), 7);

It works, but it’s noise. And the same branch on signed integers has a sneakier bug: the gap between i32::MIN and i32::MAX is u32::MAX, which doesn’t fit in an i32 — the subtraction itself overflows.

After: abs_diff

Every integer type carries an abs_diff method that returns the unsigned gap directly. Signed inputs come back as the matching unsigned type, so the result always fits:

assert_eq!(10u32.abs_diff(3), 7);
assert_eq!(3u32.abs_diff(10), 7);

// Signed → unsigned, no overflow at the extremes
assert_eq!((-5i32).abs_diff(5), 10u32);
assert_eq!(i32::MIN.abs_diff(i32::MAX), u32::MAX);

No if, no checked_sub, no casting through i64 to dodge overflow. One call, one number.

Where It Earns Its Keep

Distance-style calculations are the obvious fit — anywhere “how far apart are these” is the real question and the sign is incidental:

fn manhattan(a: (i32, i32), b: (i32, i32)) -> u32 {
    a.0.abs_diff(b.0) + a.1.abs_diff(b.1)
}

assert_eq!(manhattan((1, 2), (4, 6)), 7);
assert_eq!(manhattan((-3, -3), (3, 3)), 12);

It also cleans up timestamp deltas, where one side is “now” and the other could be in the past or the future:

let scheduled: u64 = 1_700_000_000;
let actual:    u64 = 1_699_999_995;

let drift = scheduled.abs_diff(actual);
assert_eq!(drift, 5);

Whenever you catch yourself writing if a > b { a - b } else { b - a }, reach for abs_diff instead.

#131 May 2026

131. mem::offset_of! — Byte Offsets Without the memoffset Crate

You need the byte offset of a field — for FFI, custom serialization, or talking to a C struct. The old answer was unsafe pointer arithmetic on a MaybeUninit, or pulling in the memoffset crate. std::mem::offset_of! is the safe, one-liner replacement.

The problem

Say you’re matching a C layout and need to know exactly where each field lives in memory:

#[repr(C)]
struct Header {
    magic: u32,
    version: u16,
    flags: u16,
    payload_len: u64,
}

The pre-1.77 way meant either an external crate or hand-rolled unsafe:

use std::mem::MaybeUninit;
use std::ptr::addr_of;

fn payload_len_offset_old() -> usize {
    let uninit = MaybeUninit::<Header>::uninit();
    let base = uninit.as_ptr() as usize;
    let field = unsafe { addr_of!((*uninit.as_ptr()).payload_len) } as usize;
    field - base
}

It works, but an unsafe block, raw pointers, and a MaybeUninit are a lot of ceremony for “where does this field start?”

The fix: mem::offset_of!

use std::mem::offset_of;

let magic_off       = offset_of!(Header, magic);
let version_off     = offset_of!(Header, version);
let flags_off       = offset_of!(Header, flags);
let payload_len_off = offset_of!(Header, payload_len);

assert_eq!(magic_off, 0);
assert_eq!(version_off, 4);
assert_eq!(flags_off, 6);
assert_eq!(payload_len_off, 8);

No unsafe. No allocation. No instance of Header ever exists. The macro expands to a const-evaluable usize — usable inside const fn and static items.

Nested fields work too

Dot through a path of named fields and offset_of! keeps walking:

#[repr(C)]
struct Inner {
    a: u32,
    b: u32,
}

#[repr(C)]
struct Outer {
    tag: u8,
    _pad: [u8; 3],
    inner: Inner,
}

assert_eq!(offset_of!(Outer, inner), 4);
assert_eq!(offset_of!(Outer, inner.b), 8);

Tuples and tuple structs use numeric indices:

#[repr(C)]
struct Pair(u8, u32);

assert_eq!(offset_of!(Pair, 0), 0);
assert_eq!(offset_of!(Pair, 1), 4);

When it earns its keep

FFI bindings, custom binary parsers, kernel-style intrusive data structures, and anywhere you’d otherwise reach for memoffset. The macro is in core, so it works in no_std. Reach for it whenever you find yourself writing as *const _ as usize math.

#130 May 2026

130. mem::discriminant — Compare Enum Variants, Ignore the Payload

You want to know “is this another Click?” — not whether the coordinates match. Hand-rolling a match for every variant gets old fast. std::mem::discriminant answers that question in one call.

The problem

Say you have an event enum with payload data on every variant:

#[derive(Debug)]
enum Event {
    Click { x: i32, y: i32 },
    KeyPress(char),
    Scroll(i32),
}

Two Clicks with different coordinates are still both clicks. Deriving PartialEq won’t help — that compares the inner data too. The usual workaround is a tedious match:

fn same_kind_match(a: &Event, b: &Event) -> bool {
    match (a, b) {
        (Event::Click { .. }, Event::Click { .. }) => true,
        (Event::KeyPress(_), Event::KeyPress(_)) => true,
        (Event::Scroll(_), Event::Scroll(_)) => true,
        _ => false,
    }
}

Every new variant means another arm. Forget one and you’ve got a silent bug.

The fix: mem::discriminant

use std::mem::discriminant;

fn same_kind(a: &Event, b: &Event) -> bool {
    discriminant(a) == discriminant(b)
}

discriminant(&value) returns an opaque Discriminant<T> representing which variant the value is — nothing more. Two Clicks with wildly different coordinates have the same discriminant; a Click and a KeyPress don’t.

No match, no missing-arm bugs, nothing to update when you add a new variant.

Bonus: Discriminant<T> is Hash + Eq + Copy

That makes it a perfectly good HashMap key, which is great for counting events by variant without writing a tag enum:

use std::collections::HashMap;
use std::mem::discriminant;

let events = [
    Event::Click { x: 0, y: 0 },
    Event::KeyPress('a'),
    Event::Click { x: 1, y: 1 },
    Event::Scroll(5),
    Event::KeyPress('b'),
];

let mut counts: HashMap<_, usize> = HashMap::new();
for ev in &events {
    *counts.entry(discriminant(ev)).or_insert(0) += 1;
}
// counts now holds: Click -> 2, KeyPress -> 2, Scroll -> 1

Reach for discriminant whenever you find yourself writing a kind() method or an “is this the same variant?” match. The std lib already has it.

#129 May 2026

129. BTreeMap::extract_if — Range-Scoped Filter-and-Remove in One Pass

Vec::extract_if was great. The BTreeMap version, stable since Rust 1.91, adds a trick the slice variant can’t pull off — a range bound that scopes the scan to part of the keyspace.

The collect-keys-then-remove dance

You want to remove every entry matching a predicate from a BTreeMap, and keep the removed entries. The borrow checker won’t let you mutate the map while you’re iterating it, so the textbook workaround is a two-pass clone-the-keys shuffle:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, String> = BTreeMap::from([
    (1, "boot".into()),
    (2, "login".into()),
    (3, "click".into()),
    (4, "logout".into()),
]);

let cutoff = 3;

let to_remove: Vec<u64> = events
    .iter()
    .filter(|(k, _)| **k < cutoff)
    .map(|(k, _)| *k)            // requires K: Copy or Clone
    .collect();

let mut expired = Vec::new();
for k in to_remove {
    if let Some(v) = events.remove(&k) {
        expired.push((k, v));
    }
}

assert_eq!(expired.len(), 2);

It works, but the costs add up. You walk the map twice, you require K: Clone (or Copy), and you allocate a throwaway Vec<K> just to break the self-borrow.

extract_if is one pass — and it takes a range

BTreeMap::extract_if(range, pred) returns an iterator that visits keys in ascending order inside the given range, and yields (K, V) for every entry whose predicate returns true. The map’s structure is updated as the iterator advances:

use std::collections::BTreeMap;

let mut events: BTreeMap<u64, &str> = BTreeMap::from([
    (1, "boot"),
    (2, "login"),
    (3, "click"),
    (4, "logout"),
    (5, "shutdown"),
]);

let expired: BTreeMap<u64, &str> =
    events.extract_if(..3, |_k, _v| true).collect();

assert_eq!(expired.into_iter().collect::<Vec<_>>(), [(1, "boot"), (2, "login")]);
assert_eq!(events.keys().copied().collect::<Vec<_>>(), [3, 4, 5]);

No Clone bound. No second pass. No temp Vec<K>. The range ..3 does double duty as a prune — keys outside it aren’t even visited, which matters when the map is large and the range is small.

The closure can mutate survivors too

The predicate signature is FnMut(&K, &mut V) -> bool. Returning true removes and yields; returning false keeps the entry in the map — but you’ve still got &mut V, so you can edit it on the way past:

use std::collections::BTreeMap;

let mut scores: BTreeMap<&str, i32> = BTreeMap::from([
    ("alice", 85),
    ("bob",   42),
    ("carol", 91),
    ("dan",   30),
]);

let failing: Vec<(&str, i32)> = scores
    .extract_if(.., |_k, v| {
        if *v < 50 {
            true            // pull this one out
        } else {
            *v += 5;        // bump the survivors by 5
            false           // keep it
        }
    })
    .collect();

assert_eq!(failing, [("bob", 42), ("dan", 30)]);
assert_eq!(scores[&"alice"], 90);
assert_eq!(scores[&"carol"], 96);

One traversal, three behaviors at once: filter, remove, and update.

Constraints

The range is any RangeBounds<K>, so everything from .. to start..=end works. extract_if panics if the range’s start is greater than its end, or if both bounds are exclusive and equal. BTreeSet::extract_if is the same idea minus the value — the predicate signature is FnMut(&T) -> bool.

When to reach for it

Whenever the operation is “walk a sorted map, peel off the entries matching a condition, optionally bounded to a key range”. Expiring old timestamped events. Draining tasks below a priority watermark. Splitting a map at a tenant boundary into “keep” and “ship”. The key-based range is the part Vec::extract_if (bite 43) can’t give you — its range is positional, not keyed — and on a large map, scoping the scan is the whole point.

#128 May 2026

128. slice::copy_within — Shift Bytes In-Place Without Fighting the Borrow Checker

You want to copy buf[2..5] over buf[0..3] — same buffer, no allocation. Reach for copy_from_slice and the borrow checker says no. copy_within is the one-call answer.

The two-borrow trap

The obvious code can’t compile — buf would be borrowed mutably and immutably at the same time:

let mut buf = [1, 2, 3, 4, 5];

// error[E0502]: cannot borrow `buf` as immutable
//               while also borrowed as mutable
// buf[0..3].copy_from_slice(&buf[2..5]);

The usual workarounds are noisy. split_at_mut to carve the slice into two non-overlapping halves:

let mut buf = [1, 2, 3, 4, 5];
let (left, right) = buf.split_at_mut(2);
left[0..2].copy_from_slice(&right[0..2]);
assert_eq!(buf, [3, 4, 3, 4, 5]);

…which only works when source and destination land on opposite sides of the split. Otherwise you allocate a throwaway Vec just to break the borrow:

let mut buf = [1, 2, 3, 4, 5];
let tmp: Vec<_> = buf[2..5].to_vec();   // allocation just to copy
buf[0..3].copy_from_slice(&tmp);

copy_within is the primitive

<[T]>::copy_within(src, dest) copies a range of elements to a destination index inside the same slice — one call, no allocation, no split:

let mut buf = [1, 2, 3, 4, 5];
buf.copy_within(2..5, 0);
assert_eq!(buf, [3, 4, 5, 4, 5]);

It’s memmove semantics, so overlapping source and destination just work — the elements that get overwritten don’t matter, the surviving order does:

let mut buf = [1, 2, 3, 4, 5];
// shift everything one slot to the right
buf.copy_within(0..4, 1);
assert_eq!(buf, [1, 1, 2, 3, 4]);

Try writing that with split_at_mut — you can’t, the source and destination overlap.

A real shape: drop the first N from a Vec

Removing the first n elements without reallocating is copy_within plus a truncate:

let mut data: Vec<u8> = vec![10, 20, 30, 40, 50];
let n = 2;
let len = data.len();

data.copy_within(n..len, 0);
data.truncate(len - n);

assert_eq!(data, [30, 40, 50]);

Same allocation, same backing buffer — the values just shift down. Vec::drain(..n) reads cleaner for one-offs, but copy_within is what you want when you’re already holding &mut [T] and can’t reach for Vec methods (think ring buffers, fixed-size scratch arrays, no_std crates).

Constraints

T: Copy is required — the method does a memmove, it doesn’t run destructors or call Clone. Source and destination ranges must both fit inside the slice; otherwise it panics. The destination is a single index (where the copy starts), not a range — the length is taken from the source range.

When to reach for it

Any time you’d otherwise write split_at_mut just to satisfy the borrow checker, or allocate a temporary buffer to break a self-borrow. copy_within reads as what you actually meant: move these bytes over there, in place.

Stable since Rust 1.37. Works on [T], Vec<T>, and any DerefMut<Target = [T]>.

#127 May 2026

127. std::mem::swap — Trade Two Values Through Their &mut

The textbook let tmp = a; a = b; b = tmp; falls over the moment a and b are &mut T — you can’t move out of a reference. mem::swap is the generic, safe, two-line answer.

Why the temp-variable dance breaks

If a and b are owned locals, you can shuffle through a temporary just fine. As soon as they’re behind &mut, the borrow checker stops you:

fn naive<T>(a: &mut T, b: &mut T) {
    // let tmp = *a;   // E0507: cannot move out of `*a`
    // *a = *b;        // (also moves out of *b)
    // *b = tmp;
}

You’d have to require T: Copy (loses generality), T: Clone (extra work), or reach for unsafe { ptr::swap(...) }. None of those is the right answer.

mem::swap is the primitive

std::mem::swap(&mut a, &mut b) swaps the bits behind two mutable references. No traits required, no allocation, no unsafe:

use std::mem;

let mut a = String::from("left");
let mut b = String::from("right");

mem::swap(&mut a, &mut b);

assert_eq!(a, "right");
assert_eq!(b, "left");

Works for any T. The two memory locations exchange contents in-place — one memcpy-sized swap, no clones.

A real shape: front/back double buffering

The pattern that earns mem::swap its keep — flip a “current” and “next” buffer at the end of each frame, reuse both allocations:

use std::mem;

struct DoubleBuffer<T> {
    front: Vec<T>,
    back: Vec<T>,
}

impl<T> DoubleBuffer<T> {
    fn flip(&mut self) {
        mem::swap(&mut self.front, &mut self.back);
    }
}

let mut buf = DoubleBuffer {
    front: vec![1, 2, 3],
    back:  vec![9, 9, 9],
};

buf.flip();

assert_eq!(buf.front, vec![9, 9, 9]);
assert_eq!(buf.back,  vec![1, 2, 3]);

Two fields of the same struct, both behind &mut self — exactly the case the temp-variable dance can’t reach. mem::swap doesn’t care that the references come from the same parent borrow.

Why not slice::swap or Vec::swap?

v.swap(i, j) is the right tool when both values live in the same slice — it does the index trick under the hood so the borrow checker stays happy. mem::swap is the broader primitive: any two &mut T, regardless of whether they share a container. They’re the same idea at different scopes.

When to reach for it

Whenever you need to exchange two owned values through &mut: flipping buffers, rotating state in a tree node, splicing nodes in a linked list, swapping fields during a state transition. mem::take is swap with T::default() on one side; mem::replace is swap with src on one side and the old value returned. Same family — pick the one whose shape matches what you actually want back.