I have been running a homelab for a while now, and one of the core services I rely on is OpenMediaVault (OMV) for managing my NAS storage. While OMV has a great web GUI for configuration, I wanted to take things a step further and automate the entire setup with Ansible. The goal was to have a reproducible, declarative configuration that I could run to set up my NAS from scratch — from partitioning drives to configuring SnapRAID parity protection. In this post, I will walk through how I approached this automation and share the challenges I encountered along the way.
What is OpenMediaVault
OpenMediaVault is an open-source, Debian-based network-attached storage (NAS) solution. It provides a web-based management interface that makes it easy to manage storage, users, shared folders, and services like NFS, SMB, and SSH. Think of it as a free alternative to commercial NAS operating systems like Synology DSM or TrueNAS.
OpenMediaVault is primarily designed for small office and home environments, but it is not limited to those scenarios. It is a simple, out-of-the-box solution that lets anyone install and administer network-attached storage without deep Linux knowledge.
Some key features of OMV include:
- Plugin system — extend functionality with community plugins (MergerFS, SnapRAID, Docker, etc.)
- Web-based GUI — manage everything from your browser
- Debian-based — access to the full Debian ecosystem and CLI tools
- Flexible storage — supports ext4, XFS, Btrfs, and pool management via MergerFS
For my setup, I am running OMV on Debian 13 (Trixie) inside a Proxmox VM, with physical drives passed through to the VM.
Why automate with Ansible
You might be wondering — why automate something that already has a perfectly good web GUI? A few reasons:
- Reproducibility — if my VM ever breaks or I need to migrate to new hardware, I can re-run a single playbook and have everything configured exactly as before.
- Documentation as code — the Ansible roles serve as living documentation of my entire NAS setup. No need to remember which settings I clicked through in the GUI months ago.
- It's fun — I wanted to push my Ansible skills and see how far I could automate a GUI-driven application through its API.
The core challenge is that OMV is designed to be configured through its web
interface. Under the hood, it uses an internal database
(/etc/openmediavault/config.xml) and a set of RPC services to manage
everything. To automate OMV, I had to figure out how to interact with these
internals directly.
Key OMV tools for automation
Before diving into the tasks, it is important to understand the two main CLI tools that OMV provides for interacting with its configuration:
omv-confdbadm
omv-confdbadm
is a tool for reading and writing to OMV's configuration database. It is useful
for querying the current state of the system:
# List all available config keys
sudo omv-confdbadm list-ids
# Read all filesystem mount points
sudo omv-confdbadm read conf.system.filesystem.mountpoint
# Read all shared folders
sudo omv-confdbadm read conf.system.sharedfolder
I primarily used omv-confdbadm for reading the current configuration — for
example, checking if a filesystem is already registered or finding the UUID of a
MergerFS pool.
omv-rpc
omv-rpc
is the more powerful tool — it lets you invoke the same RPC calls that the web
GUI uses. This is how we actually create and configure resources:
# Create a shared folder
sudo omv-rpc -u admin 'ShareMgmt' 'set' '{"uuid": "...", "name": "myshare", ...}'
# Get NFS settings
sudo omv-rpc -u admin 'NFS' 'getSettings'
# Create an NFS share
sudo omv-rpc -u admin 'NFS' 'setShare' '{"uuid": "...", ...}'
The tricky part is that the RPC services and their expected parameters are not well-documented. I had to dig through the OMV source code on GitHub to figure out the correct service names, method names, and JSON payloads.
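Because the payloads are raw JSON strings handed to a CLI, I found it less error-prone to assemble the argument list programmatically instead of hand-quoting shell strings. A minimal Python sketch of that idea (`build_omv_rpc_cmd` is my own illustrative helper, not part of OMV):

```python
import json

def build_omv_rpc_cmd(service, method, params=None):
    """Build an argv list for omv-rpc; passing a list avoids shell-quoting mistakes."""
    cmd = ["omv-rpc", "-u", "admin", service, method]
    if params is not None:
        # omv-rpc expects the parameters as a single JSON string argument
        cmd.append(json.dumps(params))
    return cmd

# Equivalent of: omv-rpc -u admin 'NFS' 'getSettings'
print(build_omv_rpc_cmd("NFS", "getSettings"))
```

Handing this list to `subprocess.run` (or templating it into an Ansible `command` task) sidesteps an entire class of quoting bugs when the JSON contains spaces or quotes.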
Project structure
I organized the automation as an Ansible role with separate task files for each concern. The main orchestration file includes each service in the correct order:
---
- name: Initialize list of services
  ansible.builtin.set_fact:
    services:
      - users
      - omv_plugins
      - hdparm
      - hd_idle
      - filesystems
    post_mergerfs_services:
      - shared_folders
      - nfs
      - snapraid

- name: Setup OMV services (pre-mergerfs)
  ansible.builtin.include_tasks:
    file: 'setup_{{ service }}.yml'
  loop: '{{ services }}'
  loop_control:
    loop_var: service

- name: Setup OMV services (post-mergerfs)
  block:
    - name: Gather filesystem data
      ansible.builtin.import_tasks:
        file: 'gather_filesystem_data.yml'

    - name: Run setup tasks
      ansible.builtin.include_tasks:
        file: 'setup_{{ service }}.yml'
      loop: '{{ post_mergerfs_services }}'
      loop_control:
        loop_var: service
The services are split into two phases — pre-mergerfs and post-mergerfs. This is because MergerFS pool creation currently needs to be done through the OMV web GUI (more on that later), and services like shared folders and NFS depend on the MergerFS pool being available.
The disk configuration is defined in the group variables, making it easy to adjust for different hardware:
omv_disks:
  - name: '{{ disks.das_disk1.name }}'
    serial: '{{ disks.das_disk1.serial }}'
    path: /dev/sdb
    mnt_path: /mnt/parity1
    type: parity
    label: parity1
  - name: '{{ disks.das_disk2.name }}'
    serial: '{{ disks.das_disk2.serial }}'
    path: /dev/sdc
    mnt_path: /mnt/disk1
    type: data
    label: disk1
Each disk is tagged with a type — either data or parity — which is used
throughout the tasks to apply different formatting options and mount
configurations.
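The `selectattr`-style filtering the tasks rely on is easy to picture in plain Python (the disk dicts below are simplified stand-ins for my `omv_disks` variable):

```python
# Simplified stand-ins for the omv_disks group variable
omv_disks = [
    {"path": "/dev/sdb", "mnt_path": "/mnt/parity1", "type": "parity", "label": "parity1"},
    {"path": "/dev/sdc", "mnt_path": "/mnt/disk1", "type": "data", "label": "disk1"},
]

def disks_of_type(disks, disk_type):
    """Mirrors Ansible's selectattr('type', 'equalto', disk_type) | list."""
    return [d for d in disks if d["type"] == disk_type]

print([d["label"] for d in disks_of_type(omv_disks, "data")])    # data disks get -m 2
print([d["label"] for d in disks_of_type(omv_disks, "parity")])  # parity disks get -m 0
```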
Automating filesystem setup
The filesystem setup task handles the full lifecycle: partitioning, formatting, mounting, and registering the drives with OMV's internal database so they appear in the web GUI.
Step 1: Partition the drives
First, we create a GPT partition table on each drive with a single partition spanning the entire disk:
- name: Partition drives with GPT label
  community.general.parted:
    device: '{{ disk.path }}'
    label: gpt
    number: 1
    part_start: 1MiB
    part_end: 100%
    align: optimal
    state: present
  loop: '{{ omv_disks }}'
  loop_control:
    loop_var: disk
The 1MiB offset for part_start ensures proper alignment on modern drives,
which is important for performance on both SSDs and HDDs with 4K sectors.
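The arithmetic behind that convention is simple: 1 MiB is a multiple of both the legacy 512-byte sector size and the 4K physical sector size, so the partition always starts on a clean boundary:

```python
# part_start: 1MiB, expressed in bytes
start_bytes = 1 * 1024 * 1024

# A 1 MiB offset divides evenly by every common sector size,
# so partition 1 never straddles a physical sector boundary.
assert start_bytes % 512 == 0    # legacy 512-byte logical sectors
assert start_bytes % 4096 == 0   # 4K-sector (Advanced Format) drives

print(start_bytes // 512)  # the familiar default starting sector: 2048
```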
Step 2: Format with ext4
Data and parity drives are formatted differently. Data drives get 2% reserved
space (-m 2), while parity drives get 0% (-m 0) since they do not need
reserved blocks for root:
- name: Format & create ext4 filesystem on Data drives
  community.general.filesystem:
    fstype: ext4
    dev: '{{ data_disk.path }}1'
    opts: -m 2 -T largefile4 -L {{ data_disk.label }}
  loop: "{{ omv_disks | selectattr('type', 'equalto', 'data') | list }}"
  loop_control:
    loop_var: data_disk

- name: Format & create ext4 filesystem on Parity drives
  community.general.filesystem:
    fstype: ext4
    dev: '{{ parity_disk.path }}1'
    opts: -m 0 -T largefile4 -L {{ parity_disk.label }}
  loop: "{{ omv_disks | selectattr('type', 'equalto', 'parity') | list }}"
  loop_control:
    loop_var: parity_disk
A few notes on the format options:
- -m 2 / -m 0 — percentage of disk reserved for the super-user. For parity drives, we do not need any reserved space.
- -T largefile4 — optimizes the filesystem for storing large files (fewer inodes, larger block groups), which is perfect for media storage.
- -L <label> — sets a human-readable label, making it easier to identify drives.
The community.general.filesystem module is idempotent — it will skip
formatting if the partition already has a filesystem, which is great for
re-running the playbook safely.
Step 3: Mount the drives
Next, we create mount points, look up each partition's UUID via blkid, and
mount them:
- name: Create mount points
  ansible.builtin.file:
    path: '{{ dir_path }}'
    state: directory
    owner: root
    group: root
    mode: '0755'
  loop: "{{ (omv_disks | map(attribute='mnt_path') | list) + ['/storage'] }}"
  loop_control:
    loop_var: dir_path

- name: Get partition UUID
  ansible.builtin.command: blkid -s UUID -o value {{ partition.path }}1
  loop: '{{ omv_disks }}'
  loop_control:
    loop_var: partition
  register: disk_uuids
  changed_when: false

- name: Mount drives
  ansible.posix.mount:
    path: '{{ disk_info.partition.mnt_path }}'
    src: 'UUID={{ disk_info.stdout }}'
    fstype: ext4
    opts: defaults,noatime
    passno: 2
    state: ephemeral
  loop: '{{ disk_uuids.results }}'
  loop_control:
    loop_var: disk_info
  failed_when: false
An important detail here is the use of state: ephemeral for the mount. This
mounts the drive without adding an entry to /etc/fstab. This is intentional
because OMV manages its own fstab entries — if we write to fstab directly, it
would conflict with OMV's configuration management.
Step 4: Register with OMV
This is where things get interesting. Even though the drives are mounted, OMV's
web GUI does not know about them yet. We need to register each filesystem in
OMV's configuration database (/etc/openmediavault/config.xml):
- name: Check if Disk/Parity filesystem already registered in OMV
  community.general.xml:
    path: /etc/openmediavault/config.xml
    xpath: >-
      /config/system/fstab/mntent[dir='{{ disk_info.partition.mnt_path }}']
    count: true
  register: mntent_check
  loop: '{{ disk_uuids.results }}'
  loop_control:
    loop_var: disk_info

- name: Register Disk/Parity filesystem in OMV
  vars:
    mntent_xml: >-
      <mntent>
        <uuid>{{ lookup('pipe', 'uuidgen') }}</uuid>
        <fsname>/dev/disk/by-uuid/{{ disk_info.stdout }}</fsname>
        <dir>{{ disk_info.partition.mnt_path }}</dir>
        <type>ext4</type>
        <opts>defaults,noatime</opts>
        <freq>0</freq>
        <passno>2</passno>
        <hidden>0</hidden>
      </mntent>
  community.general.xml:
    path: /etc/openmediavault/config.xml
    xpath: /config/system/fstab
    pretty_print: true
    input_type: xml
    add_children:
      - '{{ mntent_xml }}'
  loop: '{{ disk_uuids.results }}'
  loop_control:
    loop_var: disk_info
    index_var: idx
  when: mntent_check.results[idx].count == 0
The task first checks if a mount entry already exists (for idempotency), and
only adds a new <mntent> XML element if it does not. Each entry gets a fresh
UUID generated by uuidgen, which OMV uses internally to reference filesystems.
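To make the check-then-add dance concrete, here is the same logic sketched in Python with `xml.etree` (the `register_mntent` helper is my own illustration; the element names follow the tasks above):

```python
import uuid
import xml.etree.ElementTree as ET

def register_mntent(root, fs_uuid, mnt_dir):
    """Add an <mntent> under /config/system/fstab unless one exists for mnt_dir."""
    fstab = root.find("./system/fstab")
    # Idempotency check: is this mount directory already registered?
    for mntent in fstab.findall("mntent"):
        if mntent.findtext("dir") == mnt_dir:
            return False  # already present, nothing to do
    mntent = ET.SubElement(fstab, "mntent")
    for tag, text in [
        ("uuid", str(uuid.uuid4())),  # fresh UUID that OMV uses as its internal reference
        ("fsname", f"/dev/disk/by-uuid/{fs_uuid}"),
        ("dir", mnt_dir),
        ("type", "ext4"),
        ("opts", "defaults,noatime"),
        ("freq", "0"),
        ("passno", "2"),
        ("hidden", "0"),
    ]:
        ET.SubElement(mntent, tag).text = text
    return True

root = ET.fromstring("<config><system><fstab/></system></config>")
print(register_mntent(root, "abcd-1234", "/mnt/disk1"))  # True: entry added
print(register_mntent(root, "abcd-1234", "/mnt/disk1"))  # False: already registered
```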
Finally, we run omv-salt to synchronize the database changes with the system:
- name: Run omv-salt to resync database changes with OMV webgui
  ansible.builtin.command: omv-salt deploy run fstab
  register: rescan_fs
  changed_when: rescan_fs.changed
After this runs, the drives appear in the OMV web GUI under Storage > File Systems as if you had configured them through the interface.
Setting up MergerFS
MergerFS is a union filesystem that
pools multiple drives into a single logical mount point. Instead of dealing with
individual drives, you get a unified /storage directory that spans all your
data disks.
Why MergerFS over RAID
Unlike traditional RAID, MergerFS does not stripe data across drives. Each file lives on exactly one physical disk. This means:
- No rebuild times — if a drive fails, the other drives are still fully readable.
- Mix and match drive sizes — you can pool a 4TB and an 8TB drive together.
- Simple recovery — individual drives are just ext4, so you can mount them on any Linux system.
Combined with SnapRAID for parity protection, you get the benefits of redundancy without the downsides of traditional RAID.
The MergerFS automation challenge
This is one area where I hit a wall. I initially tried to automate the entire MergerFS setup through Ansible — mounting the pool via fuse and registering it in OMV's config. Here is what that looked like:
- name: Set mergerfs options
  ansible.builtin.set_fact:
    base_opts: >-
      cache.files=off,moveonenospc=true,category.create=pfrd,func.getattr=newest,dropcacheonclose=false,minfreespace=20G,fsname=mergerfsPool
    systemd_deps: >-
      {{ omv_disks
      | selectattr('type', 'equalto', 'data')
      | map(attribute='mnt_path')
      | map('regex_replace', '^(.*)$',
      'x-systemd.requires-mounts-for=\\1')
      | join(',') }}

- name: Mount mergerfs
  ansible.posix.mount:
    path: /storage
    src: /mnt/disk*
    fstype: fuse.mergerfs
    opts: '{{ base_opts }},{{ systemd_deps }}'
    passno: 0
    dump: 0
    state: mounted
A few things to note about the MergerFS options:
- category.create=pfrd — the create policy that determines which underlying drive a new file is written to. pfrd stands for "percentage free random distribution": it picks a branch at random, weighted by each drive's free space.
- moveonenospc=true — if a write fails due to disk full, MergerFS automatically moves the file to another drive with space.
- func.getattr=newest — returns the most recent metadata when multiple drives have the same path.
- minfreespace=20G — minimum free space required before MergerFS considers a drive for new writes.
- The x-systemd.requires-mounts-for= options ensure that the individual data disks are mounted before MergerFS starts.
However, I discovered that the MergerFS pool must be created through the
openmediavault-mergerfs plugin for it to display and function properly in
the OMV web GUI. When mounted purely through CLI and config.xml edits, the pool
would not show up correctly in the filesystem list, and shared folder creation
would fail.
So the current approach is a hybrid:
- Ansible handles partitioning, formatting, and mounting the individual data and parity drives.
- OMV web GUI is used (once) to create the MergerFS pool through the mergerfs plugin with these options: cache.files=off,moveonenospc=true,func.getattr=newest,dropcacheonclose=false
- Ansible then picks up from there — gathering the pool's filesystem UUID and configuring shared folders, NFS shares, and SnapRAID on top.
The plugin installation itself is automated:
- name: Install openmediavault plugins
  become: true
  ansible.builtin.apt:
    name:
      - openmediavault-mergerfs
    state: present
Along with the prerequisite omv-extras repository, which provides access to
community plugins:
- name: Install omv-extras
  when: omv_extras_check.rc != 0
  block:
    - name: Download omv-extras installation script
      ansible.builtin.get_url:
        url: https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install
        dest: /tmp/omv-extras-install.sh
        mode: '0755'

    - name: Install omv-extras
      ansible.builtin.shell: |
        bash /tmp/omv-extras-install.sh
Gathering MergerFS filesystem data
Once the MergerFS pool exists, subsequent tasks need its OMV filesystem UUID to
create shared folders and NFS shares. This is where omv-confdbadm comes in:
- name: Get mergerfs filesystem UUID
  ansible.builtin.shell: >-
    set -o pipefail &&
    omv-confdbadm read conf.system.filesystem.mountpoint
    | jq -r '.[] | select(.dir=="{{ target_shared_dir }}") | .uuid'
  register: fs_uuid_result
  changed_when: false
  failed_when: fs_uuid_result.stdout == ""

- name: Parse mergerfs filesystem UUID
  ansible.builtin.set_fact:
    filesystem_uuid: '{{ fs_uuid_result.stdout }}'
This queries OMV's config database for all filesystem mount points, filters for
the MergerFS pool path (/srv/mergerfs/mergerfsPool), and extracts its UUID.
This UUID is then used as a reference when creating shared folders and NFS
exports.
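The jq filter boils down to a one-line selection, shown here in Python against a trimmed, made-up sample of what `omv-confdbadm` returns (the records are illustrative, not real output):

```python
import json

# Illustrative stand-in for `omv-confdbadm read conf.system.filesystem.mountpoint`
raw = json.dumps([
    {"uuid": "1111-aaaa", "dir": "/mnt/disk1", "type": "ext4"},
    {"uuid": "2222-bbbb", "dir": "/srv/mergerfs/mergerfsPool", "type": "fuse.mergerfs"},
])

def find_fs_uuid(output, target_dir):
    """Same selection as: jq -r '.[] | select(.dir==TARGET) | .uuid'"""
    return next((m["uuid"] for m in json.loads(output) if m["dir"] == target_dir), "")

print(find_fs_uuid(raw, "/srv/mergerfs/mergerfsPool"))  # 2222-bbbb
```

Returning `""` when nothing matches mirrors the empty-stdout case that the `failed_when` guard in the task catches.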
Automating shared folders and NFS
With the MergerFS pool available and its UUID captured, we can automate the
creation of shared folders and NFS shares entirely through omv-rpc.
Creating shared folders
Shared folders are created via the ShareMgmt RPC service. The task first
queries existing shares to avoid duplicates:
- name: Get existing file shares from omv-confdbadm
  ansible.builtin.shell: |
    omv-confdbadm read conf.system.sharedfolder
  register: existing_sharedfolders_data
  changed_when: false

- name: Parse existing share names
  ansible.builtin.set_fact:
    existing_share_names: >-
      {{ (existing_sharedfolders_data.stdout | default('[]') | from_json)
      | map(attribute='name') | list }}

- name: Parse folders to be created
  ansible.builtin.set_fact:
    folders_to_create: >-
      {{ omv_shared_folders
      | rejectattr('name', 'in', existing_share_names) | list }}
Then creates only the folders that do not already exist:
- name: Create new shared folders
  when: folders_to_create | length > 0
  vars:
    folder_json:
      uuid: '{{ omv_special_uuid }}'
      name: '{{ folder.name }}'
      reldirpath: '{{ folder.path }}'
      mntentref: '{{ filesystem_uuid }}'
      mode: '775'
      comment: "{{ folder.comment | default('') }}"
  ansible.builtin.command:
    cmd: >-
      omv-rpc -u admin 'ShareMgmt' 'set' '{{ folder_json | to_json }}'
  loop: '{{ folders_to_create }}'
  loop_control:
    loop_var: folder
  changed_when: true
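Stripped of Ansible syntax, the query-diff-create flow is just a set difference (the folder dicts below mirror my `omv_shared_folders` variable; the names are examples):

```python
# Names already present in OMV, parsed from omv-confdbadm output
existing_share_names = ["media", "backups"]

# Desired state, mirroring the omv_shared_folders group variable
omv_shared_folders = [
    {"name": "media", "path": "media/"},
    {"name": "immich", "path": "immich/"},
]

# Mirrors: omv_shared_folders | rejectattr('name', 'in', existing_share_names) | list
folders_to_create = [
    f for f in omv_shared_folders if f["name"] not in existing_share_names
]
print([f["name"] for f in folders_to_create])  # only the missing share remains
```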
The omv_special_uuid (fa4b1c66-ef79-11e5-87a0-0002b3a176b4) is a special
UUID used by OMV to signal that a new resource should be created rather than
updating an existing one. I found this through a
forum post
after spending a while trying to figure out why my RPC calls were failing — this
is not documented anywhere in the official docs.
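In other words, the same `set` RPC method handles both create and update, and the UUID field selects the behavior. A small sketch of the payload convention (`new_shared_folder_payload` is my own helper name, not an OMV API):

```python
import json

# OMV's sentinel UUID: "create a new object" instead of "update an existing one"
OMV_NEW_OBJECT_UUID = "fa4b1c66-ef79-11e5-87a0-0002b3a176b4"

def new_shared_folder_payload(name, reldirpath, mntentref, comment=""):
    """Assemble the JSON body for a ShareMgmt 'set' call that creates a folder."""
    return {
        "uuid": OMV_NEW_OBJECT_UUID,  # sentinel -> create; a real UUID -> update
        "name": name,
        "reldirpath": reldirpath,
        "mntentref": mntentref,  # the registered filesystem this folder lives on
        "mode": "775",
        "comment": comment,
    }

print(json.dumps(new_shared_folder_payload("immich", "immich/", "2222-bbbb")))
```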
Setting up NFS shares
NFS share creation follows a similar pattern — query existing shares, compare with desired state, create what is missing:
- name: Create NFS shares
  when: nfs_to_create | length > 0
  vars:
    nfs_json:
      uuid: '{{ omv_special_uuid }}'
      sharedfolderref: '{{ sharedfolder_uuids[nfs_spec.name] }}'
      mntentref: '{{ filesystem_uuid }}'
      client: '{{ nfs_spec.client }}'
      options: '{{ nfs_spec.options }}'
      extraoptions: '{{ nfs_spec.extraoptions }}'
      comment: '{{ nfs_spec.comment }}'
  ansible.builtin.command: >-
    omv-rpc -u admin 'NFS' 'setShare' '{{ nfs_json | to_json }}'
  loop: '{{ nfs_to_create }}'
  loop_control:
    loop_var: nfs_spec
  changed_when: true
The NFS configuration is defined in the group variables, making it easy to add new exports:
nfs_specs:
  - name: immich
    client: '192.168.0.242'
    options: 'rw'
    extraoptions: >-
      all_squash,anonuid={{ proxmox_immich_id }},anongid={{ proxmox_immich_id }},insecure,no_subtree_check,sync
    comment: 'Immich NFS share'
In my case, I use NFS to share the immich folder with another Proxmox VM
running Immich — a self-hosted Google Photos alternative,
which was one of the main motivations for this whole homelab project.
Automating SnapRAID setup
SnapRAID provides snapshot-based parity protection for your data. Unlike real-time parity (like RAID 5/6), SnapRAID runs periodically to sync parity data. This makes it ideal for media storage where files are mostly written once and rarely modified.
How SnapRAID works
SnapRAID computes parity across your data disks and stores it on a dedicated parity disk. If a data disk fails, you can recover its contents using the parity data from the remaining disks. With my setup of 1 parity disk and 1 data disk, I can survive the failure of any single drive.
The key commands are:
- snapraid sync — updates the parity data to reflect current file state
- snapraid scrub — verifies data integrity by checking parity
- snapraid status — shows the current state of the array
- snapraid fix — recovers data from a failed disk
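In practice these run on a schedule: sync first so parity reflects the latest files, then scrub to verify it. A sketch of how I would order them from a wrapper script (the helper and its dry-run behavior are my own, not part of SnapRAID):

```python
import subprocess

def snapraid_maintenance(dry_run=True):
    """Typical maintenance order: sync parity first, then scrub to verify it."""
    plan = [["snapraid", "sync"], ["snapraid", "scrub"]]
    if dry_run:
        return plan  # just report what would run, without touching the array
    for cmd in plan:
        subprocess.run(cmd, check=True)  # abort before scrubbing if sync fails
    return plan

print(snapraid_maintenance())  # the dry-run plan, in execution order
```

`check=True` matters here: scrubbing against parity that failed to sync would just report phantom errors.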
Ansible tasks for SnapRAID
The SnapRAID setup is straightforward — install the package and deploy the configuration:
---
- name: Install snapraid
  become: true
  ansible.builtin.apt:
    name:
      - snapraid
    state: present

- name: Copy snapraid config
  become: true
  ansible.builtin.template:
    src: snapraid-config.j2
    dest: /etc/snapraid.conf
    owner: root
    group: root
    mode: '0644'
The configuration template defines the parity file location, data disk mappings, content file locations, and exclusion patterns:
# Where SnapRAID stores content files (metadata) for each disk.
# Put ONE on each data disk, plus one on parity
content /mnt/disk1/snapraid.content
content /mnt/parity1/snapraid.content
# Parity file(s) live on parity disk(s)
parity /mnt/parity1/snapraid.parity
# Data disks (these are the real disks, not /storage)
data d1 /mnt/disk1/
# Optional: exclude junk you don't want hashed/parity protected
exclude *.unrecoverable
exclude *.tmp
exclude *.temp
exclude *.swp
exclude .DS_Store
exclude Thumbs.db
exclude @eaDir
exclude .Trash-*
exclude lost+found/
exclude /tmp/
exclude /var/tmp/
# Immich exclusions
exclude /immich/thumbs/
exclude /immich/encoded-video/
autosave 100
A few things worth noting about the SnapRAID config:
- Content files should be spread across different disks. If a content file is lost along with a data disk, recovery becomes harder. Having a copy on the parity disk ensures you always have at least one intact copy.
- Data disk paths point to the individual mount points (/mnt/disk1/), not the MergerFS pool (/storage/). SnapRAID needs direct access to each physical disk.
- Immich exclusions — I exclude Immich's generated thumbnails and transcoded videos since these can be regenerated. No point in wasting parity space on derived data.
- autosave 100 — automatically saves progress every 100GB during a sync, so if the process is interrupted, it can resume from the last checkpoint.
The OMV API discovery challenge
The biggest challenge in this entire project was figuring out the OMV RPC API. Unlike well-documented REST APIs, OMV's internal RPC system has minimal documentation. Here is how I approached it:
- Read the source code — the RPC service implementations live in the OMV GitHub repository. Each PHP file corresponds to an RPC service (e.g., ShareMgmt.inc, NFS.inc).
- Browser DevTools — I would perform an action in the OMV web GUI while watching the Network tab to see which RPC calls were being made and what payloads were sent.
- omv-confdbadm list-ids — this command lists all available config database keys, which helped me discover what data was available to query.
- Community forums — the OMV forum was helpful for edge cases like the special UUID for resource creation.
If you are planning to automate OMV, I strongly recommend starting with the browser DevTools approach. Perform the action manually in the GUI, capture the RPC call, and then replicate it in Ansible.
Conclusion
Automating OpenMediaVault with Ansible has been a rewarding project. While it required some detective work to figure out the internal APIs, the result is a setup that I can rebuild from scratch with a single command. The hybrid approach — where Ansible handles most of the configuration but MergerFS pool creation is done through the GUI — is a pragmatic compromise that works well in practice.
Here is a summary of the tools and techniques I used:
| Tool | Purpose |
|---|---|
| omv-confdbadm | Query OMV's configuration database |
| omv-rpc | Invoke OMV's RPC services (same as web GUI) |
| omv-salt | Sync database changes to the running system |
| community.general.xml | Edit /etc/openmediavault/config.xml directly |
| community.general.parted | Partition drives with GPT |
| community.general.filesystem | Format drives with ext4 |
If you are running a homelab with OpenMediaVault and are familiar with Ansible, I would encourage you to give this approach a try. Even automating a few services saves significant time when you inevitably need to rebuild or migrate your setup. The Ansible playbook becomes your single source of truth for your entire NAS configuration.
Here are some helpful resources that I referenced along the way: