Concurrency

51. File::lock — File Locking in the Standard Library

Multiple processes writing to the same file? That’s a recipe for corruption. Since Rust 1.89, File::lock gives you OS-backed file locking without external crates.

The problem

You have a CLI tool that appends to a shared log file. Two instances run at the same time, and suddenly your log entries are garbled — half a line from one process interleaved with another. Before 1.89, you’d reach for the fslock or file-lock crate. Now it’s built in.

Exclusive locking

File::lock() acquires an exclusive (write) lock. Only one handle can hold an exclusive lock at a time — all other attempts block until the lock is released:

use std::fs::File;
use std::io::{self, Write};

fn main() -> io::Result<()> {
    let mut file = File::options()
        .write(true)
        .create(true)
        .open("/tmp/rustbites_lock_demo.txt")?;

    // Blocks until the lock is acquired
    file.lock()?;

    writeln!(file, "safe write from process {}", std::process::id())?;

    // Lock is released when the file is closed (dropped)
    Ok(())
}

When the File is dropped, the lock is automatically released. No manual unlock() needed — though you can call file.unlock() explicitly if you want to release it early.

Shared (read) locking

Sometimes you want to allow multiple readers but block writers. That’s what lock_shared() is for:

use std::fs::File;
use std::io::{self, Read};

fn main() -> io::Result<()> {
    let mut file = File::open("/tmp/rustbites_lock_demo.txt")?;

    // Multiple processes can hold a shared lock simultaneously
    file.lock_shared()?;

    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    println!("Read: {contents}");

    file.unlock()?; // explicit release
    Ok(())
}

Shared locks coexist with other shared locks, but block exclusive lock attempts. Classic reader-writer pattern, enforced at the OS level.

Non-blocking with try_lock

Don’t want to wait? try_lock() and try_lock_shared() return immediately instead of blocking:

use std::fs::{File, TryLockError};

fn main() -> std::io::Result<()> {
    let file = File::options()
        .write(true)
        .create(true)
        .open("/tmp/rustbites_trylock.txt")?;

    match file.try_lock() {
        Ok(()) => println!("Lock acquired!"),
        Err(TryLockError::WouldBlock) => println!("File is busy, try later"),
        Err(TryLockError::Error(e)) => return Err(e),
    }

    Ok(())
}

If another process holds the lock, you get TryLockError::WouldBlock instead of hanging. Perfect for tools that should fail fast rather than block when another instance is already running.

Key details

  • Advisory locks: these locks are advisory on most platforms — they don’t prevent other processes from reading/writing the file unless those processes also use locking
  • Automatic release: locks are released when the File handle is dropped
  • Cross-platform: works on Linux, macOS, and Windows (uses flock on Unix, LockFileEx on Windows)
  • Stable since Rust 1.89

40. Scoped Threads — Borrow Across Threads Without Arc

Need to share stack data with spawned threads? std::thread::scope lets you borrow local variables across threads — no Arc, no .clone().

The problem

With std::thread::spawn, you can’t borrow local data because the thread might outlive the data:

let data = vec![1, 2, 3];

// This won't compile — `data` might be dropped
// while the thread is still running
// std::thread::spawn(|| {
//     println!("{:?}", data);
// });

The classic workaround is wrapping everything in Arc:

use std::sync::Arc;

let data = Arc::new(vec![1, 2, 3]);
let data_clone = Arc::clone(&data);

let handle = std::thread::spawn(move || {
    println!("{:?}", data_clone);
});
handle.join().unwrap();

It works, but it’s noisy — especially when you just want to read some data in parallel.

The fix: std::thread::scope

Scoped threads guarantee that all spawned threads finish before the scope exits, so borrowing is safe:

let data = vec![1, 2, 3];

std::thread::scope(|s| {
    s.spawn(|| {
        // Borrowing `data` directly — no Arc needed
        println!("Thread sees: {:?}", data);
    });

    s.spawn(|| {
        let sum: i32 = data.iter().sum();
        println!("Sum: {sum}");
    });
});

// All threads have joined here — guaranteed
println!("Done! data is still ours: {:?}", data);

Mutable access works too

Since the scope enforces proper lifetimes, threads can even take mutable borrows, as long as each thread borrows a disjoint piece of the data:

let mut counts = [0u32; 3];

std::thread::scope(|s| {
    for (i, count) in counts.iter_mut().enumerate() {
        s.spawn(move || {
            *count = (i as u32 + 1) * 10;
        });
    }
});

assert_eq!(counts, [10, 20, 30]);

Each thread gets exclusive access to its own element — the borrow checker is happy, no Mutex required.

When to reach for scoped threads

Use std::thread::scope when you need parallel work on local data and don’t want the overhead or ceremony of Arc/Mutex. It’s perfect for fork-join parallelism: spin up threads, borrow what you need, collect results when they’re done. Stable since Rust 1.63.
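As a closing sketch of that fork-join shape (the function name and chunking strategy are just for illustration): each scoped thread sums one chunk of a borrowed slice, and the partial sums come back through the join handles, so no channels, Arc, or Mutex are needed.

```rust
use std::thread;

// Fork-join demo (hypothetical helper): split the slice into two
// chunks, sum each chunk on its own scoped thread, and combine the
// partial sums returned from the join handles.
fn parallel_sum(data: &[i32]) -> i32 {
    let chunk_size = data.len().div_ceil(2).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i32>()))
            .collect();
        // join() hands back each closure's return value
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data = vec![1, 2, 3, 4, 5, 6];
    println!("total = {}", parallel_sum(&data)); // prints "total = 21"
}
```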