PolicyFS unifies multiple storage paths under a single FUSE mountpoint. Routing rules control where reads and writes go. An optional SQLite index reduces HDD wakeups during metadata-heavy workloads.
## Features
- Route reads and writes by path pattern: one rule for `photos/**`, another for everything else.
- Write policies (`first_found`, `most_free`, `least_free`) choose a write target. Path-preserving mode prefers targets where the parent directory already exists, to reduce scattering.
- Mark a storage path `indexed: true` and `readdir`/`getattr` can be served from a local SQLite index, reducing HDD wakeups for metadata-heavy operations. Metadata mutations (delete, rename, permission/timestamp changes) are recorded to an event log and applied later by `pfs prune`.
- `pfs move` tiers files from fast SSD to bulk HDD. `pfs prune` applies deferred mutations. `pfs index` refreshes metadata. `pfs maint` runs move → prune → index under one lock, ideal for a daily systemd timer.
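The write-policy selection described above can be sketched in a few lines. This is an illustrative model, not pfs source; the function name `pick_target` and its signature are assumptions.

```python
import shutil
from pathlib import Path

def pick_target(candidates, policy, rel_path):
    """Illustrative write-target selection (not the actual pfs code).

    candidates: storage-root paths, in config order.
    policy: 'first_found', 'most_free', or 'least_free'.
    rel_path: path relative to the mountpoint, used by
    path-preserving mode.
    """
    # Path-preserving mode: prefer roots where the parent directory
    # already exists, so related files stay together.
    parents = [c for c in candidates
               if (Path(c) / rel_path).parent.is_dir()]
    pool = parents or candidates

    if policy == "first_found":
        return pool[0]
    free = {c: shutil.disk_usage(c).free for c in pool}
    if policy == "most_free":
        return max(pool, key=free.__getitem__)
    if policy == "least_free":
        return min(pool, key=free.__getitem__)
    raise ValueError(f"unknown policy: {policy}")
```

Note how path preservation narrows the candidate pool before the policy is applied, so `most_free` only compares roots that already hold the parent directory.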
## Architecture

1. Your application issues a normal POSIX syscall; pfs is transparent to userspace.
2. The kernel forwards the call over FUSE to the pfs daemon.
3. The router matches the path to a routing rule. Metadata may be served from SQLite to reduce HDD wakeups.
4. File I/O hits physical storage through pfs, typically via a cached file descriptor.
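First-match-wins routing over path patterns can be modeled with Python's `fnmatch`. This is a toy model, not pfs's actual matcher; note that in `fnmatch`, `*` and `**` behave identically (both match across `/`), which real glob engines may not.

```python
from fnmatch import fnmatch

def route(path, rules):
    """Toy first-match-wins router (illustrative only).

    rules: list of (pattern, targets) tuples in config order.
    Returns the target list of the first matching rule.
    """
    for pattern, targets in rules:
        if fnmatch(path, pattern):
            return targets
    raise LookupError(f"no rule matches {path!r}")
```

With a catch-all `**` rule last, every path resolves to some target list, mirroring the "one rule for `photos/**`, another for everything else" setup.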
## Indexed storage

*`pfs index` reduces HDD wakeups.*
With `indexed: true`, `pfs index` scans physical storage and upserts metadata into `index.db`. `readdir` and `getattr` for indexed storage can be served from the database, reducing HDD wakeups. File content I/O still hits the underlying storage. Non-indexed storage paths (`indexed: false`) are accessed via the filesystem directly, with no metadata index.
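The scan-and-serve pattern can be sketched with Python's built-in `sqlite3`. The schema and function names below are assumptions for illustration, not the actual `index.db` layout.

```python
import sqlite3

def make_index(db_path=":memory:"):
    # Hypothetical schema: one row per file, keyed by path.
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS entries (
        path   TEXT PRIMARY KEY,
        parent TEXT,
        size   INTEGER,
        mtime  REAL)""")
    return con

def upsert(con, path, parent, size, mtime):
    # Scan pass: insert new entries, refresh existing ones.
    con.execute(
        "INSERT INTO entries VALUES (?,?,?,?) "
        "ON CONFLICT(path) DO UPDATE SET size=excluded.size, "
        "mtime=excluded.mtime",
        (path, parent, size, mtime))

def readdir(con, parent):
    # Directory listing served from the index; the HDD stays asleep.
    return [row[0] for row in con.execute(
        "SELECT path FROM entries WHERE parent=? ORDER BY path",
        (parent,))]
```

A `getattr` served the same way would read `size` and `mtime` from the row instead of calling `stat()` on the physical disk.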
## Deferred operations

*`pfs prune` replays deferred operations.*
On indexed storage, delete/rename/setattr operations are recorded to `events.ndjson` and reflected in the index without immediately touching the physical file. `pfs prune` replays the log and applies pending operations to physical storage, freeing space and making changes persistent.
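A record-then-replay loop over an NDJSON log might look like the following. This is a toy version covering only delete and rename; the event field names are assumptions, and real pfs handles more cases.

```python
import json
import os

def record(log_path, op, **fields):
    # Append one JSON object per line (NDJSON), one per deferred mutation.
    with open(log_path, "a") as f:
        f.write(json.dumps({"op": op, **fields}) + "\n")

def prune(log_path, root):
    """Replay the event log against physical storage (toy sketch)."""
    applied = 0
    with open(log_path) as f:
        for line in f:
            ev = json.loads(line)
            if ev["op"] == "delete":
                target = os.path.join(root, ev["path"])
                if os.path.exists(target):
                    os.remove(target)
                    applied += 1
            elif ev["op"] == "rename":
                src = os.path.join(root, ev["src"])
                if os.path.exists(src):
                    os.rename(src, os.path.join(root, ev["dst"]))
                    applied += 1
    os.remove(log_path)  # all events applied; discard the log
    return applied
```

Between `record` and `prune`, the index already reflects the mutation, so readers see the new state while the HDD holding the file stays untouched.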
## Tiered storage

*`pfs move` migrates files.*
As new files land on SSD storage, the write tier fills up. `pfs move` copies eligible files (by age, size, and usage) from the write tier to destination storage, often an indexed HDD archive. If the daemon control socket is available, open files can be skipped.
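A minimal move pass, reduced to age and size checks, can be sketched like this. It is illustrative only; real pfs also considers usage and can skip open files via the daemon.

```python
import os
import shutil
import time

def move_eligible(src_root, dst_root, min_age_s, min_size):
    """Copy old-enough, big-enough files from the write tier to the
    archive tier, then delete the originals (toy sketch)."""
    moved = []
    for dirpath, _, names in os.walk(src_root):
        for name in names:
            src = os.path.join(dirpath, name)
            st = os.stat(src)
            too_young = time.time() - st.st_mtime < min_age_s
            too_small = st.st_size < min_size
            if too_young or too_small:
                continue
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copy first, preserving timestamps
            os.remove(src)          # remove source only after copy succeeds
            moved.append(rel)
    return moved
```

Copying before deleting keeps the pass crash-safe: an interruption leaves at worst a duplicate, never a lost file.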
## Comparison
mergerfs is a mature FUSE union filesystem with broad POSIX coverage and a large install base. pfs is narrower and focuses on explicit storage placement and metadata behavior. If you need maximum compatibility or complex workloads, mergerfs is likely a better fit.
| Feature | pfs | mergerfs |
|---|---|---|
| FUSE-based storage pooling | ✓ | ✓ |
| Path-pattern routing rules | ✓ | — |
| Multiple write target policies | ✓ | ✓ |
| SQLite metadata index (reduced HDD wakeups) | ✓ | — |
| Deferred metadata mutations | ✓ | — |
| Built-in tiered storage mover | ✓ | — |
| POSIX feature coverage | partial | broad |
| Maturity & ecosystem | newer | established |
pfs trades POSIX breadth for explicit, inspectable behavior. mergerfs covers more edge cases and has a longer track record.
## Configuration
A minimal two-tier setup: SSD as a write cache, HDDs as indexed archive storage.
```yaml
# /etc/pfs/pfs.yaml
mounts:
  media:
    mountpoint: /mnt/pfs/media
    storage_paths:
      - id: ssd1
        path: /mnt/nvme/cache
      - id: hdd1
        path: /mnt/hdd1/media
        indexed: true
      - id: hdd2
        path: /mnt/hdd2/media
        indexed: true
    routing_rules:
      - match: "**"
        targets: [ssd1, hdd1, hdd2]
        write_policy: most_free
```
## Installation

Download a `.deb` from GitHub Releases; the package includes systemd units.
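For the daily `pfs maint` run mentioned above, a timer pair along these lines would work. These units are illustrative; the packaged units and paths may differ.

```ini
# /etc/systemd/system/pfs-maint.service (hypothetical example)
[Unit]
Description=pfs daily maintenance (move, prune, index)

[Service]
Type=oneshot
ExecStart=/usr/bin/pfs maint

# /etc/systemd/system/pfs-maint.timer (hypothetical example)
[Unit]
Description=Run pfs maint daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

`Persistent=true` makes systemd run a missed pass at the next boot, which matters for machines that are not always on.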