One mount.
Many disks.
Explicit rules.

PolicyFS unifies multiple storage paths under a single FUSE mountpoint. Routing rules control where reads and writes go. An optional SQLite index reduces HDD wakeups during metadata-heavy workloads.

[Diagram: SSD (ssd1, ssd2) and HDD (hdd1, hdd2, hdd3) storage behind the pfs router with write policies, SQLite metadata index, and deferred event log, exposed as one unified view at /mnt/pfs/media]

Storage behavior you can reason about

Policy-based routing

Route reads and writes by path pattern — one rule for photos/**, another for everything else. Write policies (first_found, most_free, least_free) choose a write target. Path-preserving mode prefers targets where the parent directory exists to reduce scattering.
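The three write policies amount to a selection function over candidate targets. A minimal sketch, assuming a `free_bytes` map (real pfs would presumably consult statvfs per storage path) and treating first_found as "first target in rule order" — both assumptions, since the exact semantics aren't documented here:

```python
def pick_target(path, targets, free_bytes, policy="most_free"):
    """Choose a write target for `path`. `free_bytes` maps target id to
    free space in bytes -- an illustrative stand-in, not pfs internals."""
    if policy == "first_found":
        return targets[0]  # assumed: first target in rule order
    if policy == "most_free":
        return max(targets, key=lambda t: free_bytes[t])
    if policy == "least_free":
        return min(targets, key=lambda t: free_bytes[t])
    raise ValueError(f"unknown write_policy: {policy}")

free = {"ssd1": 50 * 2**30, "hdd1": 900 * 2**30, "hdd2": 400 * 2**30}
print(pick_target("photos/2024/cat.jpg", ["ssd1", "hdd1", "hdd2"], free))  # hdd1
```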

SQLite metadata index

Mark a storage path indexed: true and readdir/getattr can be served from a local SQLite index, reducing HDD wakeups for metadata-heavy operations. Metadata mutations (delete, rename, permission/timestamp changes) are recorded to an event log and applied later by pfs prune.
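The deferral idea — update the index and the log now, touch the disk later — can be sketched for a delete. The table schema and event fields below are illustrative, not pfs's actual index.db or events.ndjson format:

```python
import json, os, sqlite3, tempfile, time

def record_delete(db, log_path, path):
    """Defer a delete: append an event to the log and drop the row from
    the metadata index; the physical file stays untouched until prune.
    Schema and event fields are illustrative, not pfs's actual format."""
    event = {"op": "DELETE", "path": path, "ts": time.time()}
    with open(log_path, "a") as log:
        log.write(json.dumps(event) + "\n")
    db.execute("DELETE FROM entries WHERE path = ?", (path,))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entries (path TEXT PRIMARY KEY, size INTEGER)")
db.execute("INSERT INTO entries VALUES ('photos/a.jpg', 1024)")
log_path = os.path.join(tempfile.mkdtemp(), "events.ndjson")
record_delete(db, log_path, "photos/a.jpg")
print(db.execute("SELECT COUNT(*) FROM entries").fetchone()[0])  # 0
```

After this, a readdir served from the index no longer shows the file, even though the bytes still sit on the HDD until prune runs.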

Built-in maintenance cycle

pfs move tiers files from fast SSD to bulk HDD. pfs prune applies deferred mutations. pfs index refreshes metadata. pfs maint runs move → prune → index under one lock — ideal for a daily systemd timer.
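Running the three phases "under one lock" can be sketched with an exclusive, non-blocking file lock so an overlapping timer run fails fast instead of interleaving. The lock-file mechanism and the step callables are assumptions standing in for pfs's move/prune/index:

```python
import fcntl, os, tempfile

def maint(lock_path, steps):
    """Run maintenance phases sequentially under one exclusive lock.
    A concurrent run raises BlockingIOError instead of interleaving."""
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)  # fail fast if busy
        try:
            return [step() for step in steps]
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

lock_path = os.path.join(tempfile.mkdtemp(), "maint.lock")
print(maint(lock_path, [lambda: "move", lambda: "prune", lambda: "index"]))
```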

How pfs handles a filesystem call

1. POSIX syscall: Your application issues a normal POSIX syscall — pfs is transparent to userspace.

2. FUSE dispatch: The kernel forwards the call over FUSE to the pfs daemon.

3. Routing & index: The router matches the path to a rule. Metadata may be served from SQLite to reduce HDD wakeups.

4. Physical I/O: File I/O hits physical storage through pfs, typically via a cached file descriptor.
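The routing step — match the path against rules, first match wins — can be sketched with glob matching. `fnmatch` lets `*` cross `/`, which is close enough for patterns like `photos/**`, but the actual glob dialect pfs uses is an assumption here:

```python
from fnmatch import fnmatch

def match_rule(path, rules):
    """Return the first rule whose pattern matches the path (first match
    wins). fnmatch semantics approximate pfs's actual glob dialect."""
    for rule in rules:
        if fnmatch(path, rule["match"]):
            return rule
    return None

rules = [
    {"match": "photos/**", "targets": ["hdd1", "hdd2"]},
    {"match": "**",        "targets": ["ssd1", "hdd1", "hdd2"]},
]
print(match_rule("photos/2024/cat.jpg", rules)["targets"])  # ['hdd1', 'hdd2']
```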

How pfs index reduces HDD wakeups

With indexed: true, pfs index scans physical storage and upserts metadata into index.db. readdir and getattr for indexed storage can be served from the database, reducing HDD wakeups. File content I/O still hits the underlying storage. Non-indexed storage paths (indexed: false) are accessed via the filesystem directly (no metadata index).
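The scan-and-upsert cycle plus index-served listings can be sketched in a few lines. The table layout is illustrative, not pfs's actual index.db schema:

```python
import os, sqlite3, tempfile

def index_scan(root, db):
    """Walk physical storage once and upsert each file's metadata, so
    readdir/getattr can later be answered without waking the disk."""
    db.execute("CREATE TABLE IF NOT EXISTS entries"
               " (path TEXT PRIMARY KEY, size INTEGER, mtime REAL)")
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            db.execute("INSERT INTO entries VALUES (?, ?, ?)"
                       " ON CONFLICT(path) DO UPDATE SET"
                       " size = excluded.size, mtime = excluded.mtime",
                       (os.path.relpath(full, root), st.st_size, st.st_mtime))
    db.commit()

def readdir_from_index(db, prefix=""):
    """List indexed paths from the database -- no HDD wakeup."""
    rows = db.execute("SELECT path FROM entries WHERE path LIKE ?",
                      (prefix + "%",))
    return sorted(r[0] for r in rows)

root = tempfile.mkdtemp()
with open(os.path.join(root, "a.mkv"), "w") as f:
    f.write("x")
db = sqlite3.connect(":memory:")
index_scan(root, db)
print(readdir_from_index(db))  # ['a.mkv']
```

The upsert makes repeated scans idempotent: rescanning unchanged storage just refreshes size and mtime in place.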

[Diagram: indexed HDDs (hdd1, hdd2, hdd3) are scanned periodically by pfs index (systemd timer), which upserts into index.db (SQLite); the pfs daemon serves readdir/getattr from index.db while open/read/write file content I/O goes to the HDDs directly. Non-indexed SSDs (ssd1, ssd2) get direct filesystem I/O, no index.]

How pfs prune replays deferred operations

On indexed storage, delete/rename/setattr operations are recorded to events.ndjson and reflected in the index without immediately touching the physical file. pfs prune replays the log and applies pending operations to physical storage, freeing space and making changes persistent.
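The replay loop can be sketched as: read the NDJSON log line by line, apply each op to physical storage, then truncate the fully-applied log. The event shape (op/path/to/mode fields) is illustrative, not the real events.ndjson schema:

```python
import json, os, tempfile

def prune(log_path, root):
    """Replay deferred ops against physical storage, then empty the log."""
    with open(log_path) as log:
        for line in log:
            ev = json.loads(line)
            target = os.path.join(root, ev["path"])
            if ev["op"] == "DELETE" and os.path.exists(target):
                os.remove(target)  # frees space on the HDD
            elif ev["op"] == "RENAME":
                os.rename(target, os.path.join(root, ev["to"]))
            elif ev["op"] == "SETATTR":
                os.chmod(target, ev["mode"])
    os.truncate(log_path, 0)  # every pending op is now applied

root = tempfile.mkdtemp()
open(os.path.join(root, "old.mkv"), "w").close()
log_path = os.path.join(root, "events.ndjson")
with open(log_path, "w") as log:
    log.write(json.dumps({"op": "DELETE", "path": "old.mkv"}) + "\n")
prune(log_path, root)
print(os.path.exists(os.path.join(root, "old.mkv")))  # False
```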

[Diagram: the pfs daemon defers DELETE/RENAME/SETATTR ops into the events.ndjson event log; pfs prune (periodic, systemd timer) reads the log and applies the pending operations to hdd1, hdd2, hdd3.]

How pfs move migrates files

As new files land on SSD storage, the write tier fills up. pfs move copies eligible files (age/size/usage) from the write tier to destination storage (often an indexed HDD archive). If the daemon control socket is available, open files can be skipped.
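The eligibility walk can be sketched as copy-then-delete over files meeting the thresholds. The age/size cutoffs are stand-ins for pfs's age/size/usage criteria, and the daemon-socket check for open files is omitted:

```python
import os, shutil, tempfile, time

def move_eligible(src_root, dst_root, min_age_s=86400, min_size=0):
    """Migrate eligible files from the write tier to the archive tier.
    Thresholds are illustrative; real pfs also checks usage and can
    skip files held open (via the daemon control socket)."""
    moved, now = [], time.time()
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            st = os.stat(src)
            if now - st.st_mtime < min_age_s or st.st_size < min_size:
                continue  # too young or too small to tier out
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copy first, delete after: never lose data
            os.remove(src)
            moved.append(rel)
    return moved

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, "a.mkv"), "w") as f:
    f.write("data")
moved = move_eligible(src, dst, min_age_s=0)
print(moved)  # ['a.mkv']
```

Copy-then-delete keeps the file readable from one tier or the other at every point; the ordering matters more than it looks.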

[Diagram: pfs move migrates files from the SSD write tier (ssd1, ssd2) to the HDD archive tier (hdd1, hdd2, hdd3) by age, size, and usage, consulting daemon.sock to skip open files.]

pfs vs mergerfs

mergerfs is a mature FUSE union filesystem with broad POSIX coverage and a large install base. pfs is narrower and focuses on explicit storage placement and metadata behavior. If you need maximum POSIX compatibility or run complex workloads, mergerfs is likely the better fit.

Feature                                        pfs        mergerfs
FUSE-based storage pooling                     yes        yes
Path-pattern routing rules                     yes        no
Multiple write target policies                 yes        yes
SQLite metadata index (reduced HDD wakeups)    yes        no
Deferred metadata mutations                    yes        no
Built-in tiered storage mover                  yes        no
POSIX feature coverage                         partial    broad
Maturity & ecosystem                           newer      established

pfs trades POSIX breadth for explicit, inspectable behavior. mergerfs covers more edge cases and has a longer track record.

One YAML file, multiple mounts

A minimal two-tier setup: SSD as a write cache, HDDs as indexed archive storage.

# /etc/pfs/pfs.yaml
mounts:
  media:
    mountpoint: /mnt/pfs/media

    storage_paths:
      - id:   ssd1
        path: /mnt/nvme/cache

      - id:      hdd1
        path:    /mnt/hdd1/media
        indexed: true

      - id:      hdd2
        path:    /mnt/hdd2/media
        indexed: true

    routing_rules:
      - match:         "**"
        targets:       [ssd1, hdd1, hdd2]
        write_policy:  most_free

Install on Debian / Ubuntu

Download a .deb from GitHub Releases. Includes systemd units.