Slice

#110 Apr 2026

110. slice::split_at_checked — Split Without the Panic

slice.split_at(i) panics the second i > len. The usual fix is a length check wrapped around the call so you don’t blow up on a bad index. split_at_checked does the same job in one call and hands you an Option.

The classic trap — a single bad index away from a panic:

let xs = [1, 2, 3, 4];
let (head, tail) = xs.split_at(10); // panics: 10 is past the end of a 4-element slice

The defensive version everyone writes:

let xs = [1, 2, 3, 4];
let i = 10;

if i <= xs.len() {
    let (head, tail) = xs.split_at(i);
    // ...use head and tail
} else {
    // handle out-of-bounds
}

Two reads of i, one easy off-by-one (< vs <=), and a panic waiting if you ever drop the guard.

Rust 1.80 stabilised split_at_checked (and split_at_mut_checked), which folds the bounds check into the return type:

let xs = [1, 2, 3, 4];

assert_eq!(xs.split_at_checked(2), Some((&xs[..2], &xs[2..])));
assert_eq!(xs.split_at_checked(4), Some((&xs[..], &[][..]))); // boundary is fine
assert_eq!(xs.split_at_checked(5), None);                     // would have panicked

Now the bounds check is the API. You get an Option<(&[T], &[T])> and the compiler nudges you to handle the None case:

fn take_prefix(buf: &[u8], n: usize) -> Option<&[u8]> {
    let (head, _rest) = buf.split_at_checked(n)?;
    Some(head)
}

assert_eq!(take_prefix(b"hello", 3), Some(&b"hel"[..]));
assert_eq!(take_prefix(b"hi", 3), None);

The ? does the bailout: no manual length check, no panic path. This works on &str too, where the index has to land on a UTF-8 boundary — and it returns None if it doesn’t, instead of panicking.
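
A quick &str illustration (the string here is made up; the None-on-bad-boundary behaviour is the documented one):

let s = "héllo"; // 'é' is two bytes, occupying byte indices 1 and 2

assert_eq!(s.split_at_checked(3), Some(("hé", "llo"))); // on a char boundary
assert_eq!(s.split_at_checked(2), None);                // mid-'é': None, not a panic
assert_eq!(s.split_at_checked(9), None);                // past the end: also None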

#102 Apr 2026

102. slice::partition_point — Binary Search That Just Returns the Index

Reaching for binary_search on a sorted Vec and unwrapping Ok(i) | Err(i) because you only ever wanted the index? slice::partition_point skips the Result ceremony and hands you the position directly.

The binary_search annoyance

binary_search is great when you care whether the value was actually found. But often you don’t — you just want the spot where it would go to keep the slice sorted:

let nums = vec![1, 3, 5, 7, 9, 11];
let target = 6;

// Awkward: collapse Ok and Err to a single index.
let pos = match nums.binary_search(&target) {
    Ok(i) | Err(i) => i,
};

assert_eq!(pos, 3);

The Ok | Err pattern works, but it’s noisy and obscures the intent. Worse, it doesn’t generalise — what if you want the insertion point for a predicate, not an exact value?

partition_point to the rescue

partition_point takes a predicate and returns the first index where the predicate flips from true to false. On a sorted slice, that’s the insertion point — no Result, no match arms:

let nums = vec![1, 3, 5, 7, 9, 11];

let pos = nums.partition_point(|&x| x < 6);

assert_eq!(pos, 3); // 6 would slot between 5 and 7

The slice still has to be partitioned (all trues before all falses), but for a sorted slice with a < predicate that’s automatic. Internally it’s still O(log n) binary search — same complexity as binary_search, friendlier API.
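
For instance, a slice can satisfy that requirement without being sorted at all. A made-up example with evens before odds:

let xs = [4, 2, 8, 6, 3, 7, 1];

// Every even sits before every odd, so the predicate is partitioned.
let first_odd = xs.partition_point(|&x| x % 2 == 0);

assert_eq!(first_odd, 4);
assert_eq!(xs[first_odd], 3);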

Insert while keeping sorted

A common use: keep a Vec sorted as you add to it.

let mut leaderboard = vec![10, 25, 40, 70];
let new_score = 33;

let pos = leaderboard.partition_point(|&x| x < new_score);
leaderboard.insert(pos, new_score);

assert_eq!(leaderboard, [10, 25, 33, 40, 70]);

Compare that to binary_search(&new_score).unwrap_or_else(|i| i) — same result, more ceremony.
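
For reference, here is that binary_search spelling of the same insert, shown only for contrast:

let mut leaderboard = vec![10, 25, 40, 70];
let new_score = 33;

// Err(i) carries the insertion point, so collapse it back out by hand.
let pos = leaderboard.binary_search(&new_score).unwrap_or_else(|i| i);
leaderboard.insert(pos, new_score);

assert_eq!(leaderboard, [10, 25, 33, 40, 70]);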

Beyond simple ordering

Because it takes any predicate, partition_point works on any slice partitioned by a property — not just sorted-by-Ord. Sorted by a derived key? Filter by a threshold? Same call:

struct Event { day: u32, name: &'static str }

let log = vec![
    Event { day: 1, name: "boot"   },
    Event { day: 2, name: "login"  },
    Event { day: 5, name: "deploy" },
    Event { day: 7, name: "alert"  },
    Event { day: 9, name: "reboot" },
];

// First event on or after day 5.
let i = log.partition_point(|e| e.day < 5);
assert_eq!(log[i].name, "deploy");

// Number of events strictly before day 5.
assert_eq!(log.partition_point(|e| e.day < 5), 2);

That second assert is a slick trick: partition_point doubles as “count how many elements satisfy the prefix predicate” in O(log n).

When to reach for it

Any time you find yourself writing binary_search(...).unwrap_or_else(|i| i) or match ... { Ok(i) | Err(i) => i }, swap in partition_point. Stable since Rust 1.52 — old enough to use everywhere, fresh enough that plenty of code still does it the noisy way.

98. sort_by_cached_key — Stop Recomputing Expensive Sort Keys

sort_by_key sounds like it computes the key once per element. It doesn’t — it calls your closure at every comparison, so an n-element sort can pay for that key O(n log n) times. If the key is expensive, sort_by_cached_key is the fix you’ve been looking for.

The trap

The signature reads nicely: “sort by this key.” The implementation, less so — the closure fires on every comparison, not once per element:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_key(|s| {
    calls.set(calls.get() + 1);
    // Pretend this is a heavy computation: allocating, hashing,
    // parsing, calling a regex, opening a file, etc.
    s.to_string()
});

// 5 elements, but the key ran way more than 5 times.
assert!(calls.get() > items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

For identity keys that cost a pointer-deref, nobody cares. For anything that allocates (.to_string(), .to_lowercase(), format!(...), a regex capture, a trimmed-and-lowered filename), the cost compounds quickly. I’ve seen a profile where 80% of total runtime was the key closure being called 40,000 times to sort 2,000 items.

The fix

slice::sort_by_cached_key runs your closure exactly once per element, stashes the results in a scratch buffer, then sorts against the cache. This is the Schwartzian transform, wrapped up in a method call:

use std::cell::Cell;

let mut items = vec!["banana", "fig", "apple", "cherry", "date"];
let calls = Cell::new(0);

items.sort_by_cached_key(|s| {
    calls.set(calls.get() + 1);
    s.to_string()
});

// Exactly one call per element — no matter how big the slice is.
assert_eq!(calls.get(), items.len());
assert_eq!(items, ["apple", "banana", "cherry", "date", "fig"]);

Same result, linear key-function calls. The memory trade is a Vec<(K, usize)> the size of the slice — cheap next to the cost of re-running an allocating closure on every compare.

When to reach for which

The rule is about where your time goes, not how fancy the key looks:

let mut nums = vec![5u32, 2, 8, 1, 9, 3];

// Trivial key: sort_by_key is fine (and avoids the scratch alloc).
nums.sort_by_key(|n| *n);
assert_eq!(nums, [1, 2, 3, 5, 8, 9]);

// Expensive key: sort_by_cached_key wins.
let mut files = vec!["Cargo.TOML", "src/MAIN.rs", "README.md", "build.RS"];
files.sort_by_cached_key(|path| path.to_lowercase());
assert_eq!(files, ["build.RS", "Cargo.TOML", "README.md", "src/MAIN.rs"]);

Use sort_by_key for cheap, Copy-ish keys. Use sort_by_cached_key the moment your closure allocates, hashes, parses, or otherwise does real work — it’s the difference between O(n log n) and O(n) calls to that closure.

45. get_disjoint_mut — Multiple Mutable References at Once

The borrow checker won’t let you hold two &mut refs into the same collection — even when you know they don’t overlap. get_disjoint_mut fixes that without unsafe.

The problem

You want to update two elements of the same Vec together, but the compiler won’t allow two mutable borrows at once:

let mut scores = vec![10u32, 20, 30, 40];
let a = &mut scores[0];
let b = &mut scores[2]; // ❌ cannot borrow `scores` as mutable more than once
*a += *b;

The borrow checker doesn’t know indices 0 and 2 are different slots — it just sees two &mut to the same Vec. The classic escape hatches (split_at_mut, unsafe, RefCell) all feel like workarounds for something that should just work.
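
For comparison, a split_at_mut version of the same update (the split index is just whatever separates the two positions):

let mut scores = vec![10u32, 20, 30, 40];

// Split so index 0 lands in the left half and index 2 in the right half.
let (left, right) = scores.split_at_mut(2);
let a = &mut left[0];
let b = &mut right[0]; // this is scores[2]
*a += *b;

assert_eq!(scores, [40, 20, 30, 40]);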

get_disjoint_mut to the rescue

Stabilized in Rust 1.86, get_disjoint_mut accepts an array of indices and returns multiple mutable references — verified at runtime to be non-overlapping:

let mut scores = vec![10u32, 20, 30, 40];

if let Ok([a, b]) = scores.get_disjoint_mut([0, 2]) {
    *a += *b; // 10 + 30 = 40
}

assert_eq!(scores, [40, 20, 30, 40]); // ✅

The Result is Err only if an index is out of bounds or indices overlap. Duplicate indices are caught at runtime and return Err — no silent aliasing bugs.
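
A minimal sketch of both failure modes:

let mut xs = vec![1, 2, 3];

assert!(xs.get_disjoint_mut([0, 5]).is_err()); // out of bounds
assert!(xs.get_disjoint_mut([1, 1]).is_err()); // overlapping indices
assert!(xs.get_disjoint_mut([0, 2]).is_ok());  // disjoint and in range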

Works on HashMap as well

HashMap gets the same treatment. The return type is [Option<&mut V>; N] — one Option per key, since keys can be missing:

use std::collections::HashMap;

let mut accounts: HashMap<&str, u32> = HashMap::from([
    ("alice", 100),
    ("bob", 200),
]);

// Transfer 50 from bob to alice
let [alice, bob] = accounts.get_disjoint_mut(["alice", "bob"]);
if let (Some(a), Some(b)) = (alice, bob) {
    *a += 50;
    *b -= 50;
}

assert_eq!(accounts["alice"], 150);
assert_eq!(accounts["bob"], 150); // ✅

Passing duplicate keys to the HashMap version panics — the right tradeoff for a bug that would otherwise silently produce undefined behavior.

When to reach for it

  • Swapping or combining two elements in a Vec without split_at_mut gymnastics
  • Updating multiple HashMap entries in one pass
  • Any place you’d have used unsafe or RefCell just to hold two &mut into the same container

If your indices or keys are known not to overlap, get_disjoint_mut is the clean, safe answer.