One mount.
Many disks.
Explicit rules.

Tired of your apps waking up all your drives just to list a directory? Common examples are media library scans (Plex/Jellyfin) and NVR retention workflows. PolicyFS unifies multiple disks under one mountpoint, routes reads and writes by explicit rules, and keeps archive disk metadata in a local index so scans don't needlessly spin up sleeping drives.

[Diagram: SSDs (ssd1, ssd2) and HDDs (hdd1, hdd2, hdd3) feed the pfs router (write policies, optional metadata index, queued maintenance changes), exposed as a unified view at /mnt/pfs/media.]

Storage behavior you can reason about

Policy-based routing

Route reads and writes by path pattern — one rule for library/**, another for everything else. Write policies (first_found, most_free, least_free) choose a write target. Path-preserving mode prefers targets where the parent directory exists to reduce scattering.
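As a sketch, a pair of rules like this expresses that split. The field names follow the `routing_rules` schema from the full config example further down the page; the `library/**` pattern and the target assignments are illustrative, not a recommended layout:

```yaml
routing_rules:
  # Library content is read-heavy and mostly static: keep it on the archive tier,
  # balancing new files onto whichever disk has the most free space.
  - match:        "library/**"
    targets:      [hdd1, hdd2]
    write_policy: most_free

  # Everything else (new imports, temporary files) lands on the fast tier.
  - match:        "**"
    targets:      [ssd1]
    write_policy: first_found
```

Rules are matched in order, so the catch-all `**` rule goes last.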

Fewer unnecessary wakeups (optional)

For archive disks, PolicyFS can keep metadata-heavy scans from touching sleeping drives unnecessarily. You can run maintenance jobs during a scheduled window (for example, overnight) when waking disks is acceptable.

Built-in maintenance cycle

PolicyFS includes maintenance jobs for tiered storage: move colder files to an archive tier, apply queued changes, and refresh metadata. This works well with systemd timers and a simple “maintenance window” model.
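A nightly run driven by a systemd timer might look like the following sketch. The `pfs maintain` subcommand and the `media` mount name are assumptions for illustration; check the units shipped in the package for the actual command:

```ini
# /etc/systemd/system/pfs-maintenance.service  (hypothetical unit)
[Unit]
Description=PolicyFS maintenance cycle

[Service]
Type=oneshot
# 'pfs maintain' is an assumed CLI entry point, not a documented flag.
ExecStart=/usr/bin/pfs maintain media
```

```ini
# /etc/systemd/system/pfs-maintenance.timer
[Unit]
Description=Run PolicyFS maintenance nightly

[Timer]
# 03:00 is the "maintenance window" when waking archive disks is acceptable.
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```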

Write fast, archive later

1. One path for your apps. Expose a single mountpoint and keep your directory layout stable as your storage grows.

2. Put new writes on the fast tier. Use explicit routing rules so new files land on SSDs (or your preferred fast tier).

3. Run maintenance on your schedule. Move colder files to archive disks and apply queued changes during a maintenance window.

4. Keep archive disks quieter (optional). When configured, metadata-heavy scans can avoid touching sleeping archive disks unnecessarily.

A good fit for mostly-static libraries and archive tiers

Media servers are a common example, but PolicyFS is also useful anywhere you want one merged path across many disks, explicit placement rules, and a predictable maintenance window.

Media libraries

Keep a single path for your library while scaling storage over time. Reduce the “scan wakes every disk” problem by separating normal access from scheduled maintenance.

NVR / CCTV retention

Write new footage to a fast tier, then migrate older recordings to an archive tier on a schedule without changing paths.
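As a sketch (same `routing_rules` schema as the config example below; the `footage/**` path is illustrative), camera writes can be pinned to the fast tier and left for the maintenance mover to age out:

```yaml
routing_rules:
  # All new recordings hit the SSD first; the maintenance cycle later
  # migrates older footage to the archive disks without changing its path.
  - match:        "footage/**"
    targets:      [ssd1]
    write_policy: first_found
```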

See configuration examples →

pfs vs mergerfs

mergerfs is a mature FUSE union filesystem with broad POSIX coverage and a large install base. pfs is narrower and focuses on explicit storage placement and metadata behavior. If you need maximum compatibility or complex workloads, mergerfs is likely a better fit.

| Feature | pfs | mergerfs |
| --- | --- | --- |
| FUSE-based storage pooling | ✓ | ✓ |
| Path-pattern routing rules | ✓ | — |
| Multiple write target policies | ✓ | ✓ |
| Optional metadata index (reduced HDD wakeups) | ✓ | — |
| Queued changes (applied later) | ✓ | — |
| Built-in tiered storage mover | ✓ | — |
| POSIX feature coverage | partial | broad |
| Maturity & ecosystem | newer | established |

pfs trades POSIX breadth for explicit, inspectable behavior. mergerfs covers more edge cases and has a longer track record.

One YAML file, multiple mounts

A minimal two-tier setup: SSD as a write cache, HDDs as indexed archive storage.

# /etc/pfs/pfs.yaml
mounts:
  media:
    mountpoint: /mnt/pfs/media  # single path your apps use

    storage_paths:
      - id:   ssd1
        path: /mnt/nvme/cache  # new files land here first

      - id:      hdd1
        path:    /mnt/hdd1/media
        indexed: true  # metadata cached in SQLite; disk stays asleep

      - id:      hdd2
        path:    /mnt/hdd2/media
        indexed: true

    routing_rules:
      - match:         "**"
        targets:       [ssd1, hdd1, hdd2]
        write_policy:  most_free  # new files go to the disk with most free space

Install on Debian / Ubuntu

Install the .deb package from GitHub Releases; it ships with systemd units.