Kubernetes

What is a Persistent Volume (PV)?

A Kubernetes Persistent Volume (PV) is a cluster-level storage resource that exists independently of pods, allowing data to survive container restarts and pod rescheduling — backed by storage systems like NVMe/TCP, NFS, or cloud block storage.

Technical Overview

Kubernetes separates storage provisioning from storage consumption through two objects: PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). A PersistentVolume is a cluster-scoped resource that represents a piece of storage — a specific NFS export, a cloud block volume, or an NVMe namespace — with defined capacity, access modes, and reclaim policy. A PersistentVolumeClaim is a namespace-scoped request for storage that specifies desired capacity, access mode, and optionally a StorageClass. Kubernetes binds a PVC to a PV that satisfies its requirements.
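The split between provisioning and consumption can be illustrated with a minimal static pairing; the object names, NFS server address, and export path below are placeholders:

```yaml
# Cluster-scoped PV: an administrator-defined piece of storage (static provisioning).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                       # backing store; could equally be a CSI-provisioned volume
    server: 192.0.2.10       # placeholder address
    path: /exports/data      # placeholder export
---
# Namespace-scoped PVC: a request that Kubernetes binds to a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: ""       # empty class: bind to a pre-created PV, skip dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Once bound, the claim and the volume reference each other one-to-one; deleting the pod that uses the claim leaves both objects (and the data) in place.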

Access modes define how a volume may be mounted: ReadWriteOnce (RWO) allows mounting as read-write by a single node — appropriate for block storage including NVMe/TCP and iSCSI; ReadOnlyMany (ROX) allows read-only mounts from multiple nodes; and ReadWriteMany (RWX) allows read-write mounts from multiple nodes simultaneously — appropriate for NFS and distributed filesystems. NVMe/TCP block volumes are RWO: a namespace can be connected to only one initiator at a time (unless namespace sharing and a cluster filesystem like OCFS2 or GFS2 are configured explicitly).
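In a claim, the access mode is simply a spec field; a sketch of an RWX request for shared file storage (the StorageClass name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config
spec:
  accessModes:
    - ReadWriteMany          # RWX: read-write from many nodes — needs NFS or a distributed FS
                             # for NVMe/TCP or iSCSI block volumes, use ReadWriteOnce instead
  storageClassName: nfs-shared   # hypothetical class backed by a file-storage driver
  resources:
    requests:
      storage: 1Gi
```

A claim requesting a mode the backing driver cannot serve will simply never bind, so the access mode must match the protocol's capabilities from the table further below.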

Dynamic provisioning removes the need for administrators to pre-create PVs. When a PVC references a StorageClass backed by a CSI driver, Kubernetes calls the driver's CreateVolume RPC to provision a new volume, then creates the PV and binds it to the PVC automatically. This automation is essential for Kubernetes-native workloads like StatefulSets, where each replica requires its own storage volume and new volumes must be provisioned on demand as the StatefulSet scales.
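A sketch of that flow end to end; the provisioner string and class name are placeholders for whatever CSI driver is installed in the cluster:

```yaml
# StorageClass: names the CSI driver that provisions volumes on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-nvme
provisioner: csi.example.com          # placeholder CSI driver name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# StatefulSet: the volumeClaimTemplate stamps out one PVC per replica
# (data-db-0, data-db-1, data-db-2), each dynamically provisioned.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: fast-nvme
        resources:
          requests:
            storage: 20Gi
```

Scaling the StatefulSet up creates new claims from the template; scaling it down leaves the existing PVCs (and their data) in place for the replicas to reattach later.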

How It Relates to NVMe/TCP

NVMe/TCP is increasingly the backing storage for Kubernetes PersistentVolumes in performance-sensitive deployments. When a StatefulSet pod is scheduled to a node, the NVMe/TCP CSI driver's node plugin connects the corresponding NVMe namespace to the node using nvme connect, stages it (formats if new, runs fsck if existing), and mounts it into the pod. The pod sees a standard POSIX filesystem backed by a high-performance NVMe block device over the network. simplyblock.io provisions Persistent Volumes backed by NVMe/TCP, providing Kubernetes workloads with high-performance, low-latency block storage through dynamic PVC provisioning.
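From the workload's side, all of that staging is invisible; the pod just mounts a bound claim (the claim name below is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - name: data
          mountPath: /data         # the pod sees an ordinary POSIX filesystem here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc     # placeholder: a bound RWO claim
```

Whether the claim is backed by NVMe/TCP, iSCSI, or cloud block storage changes only latency and throughput, not this pod spec.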

Key Characteristics

  • Scope: Cluster-level resource (not namespace-scoped)
  • Access modes: RWO (block storage), ROX, RWX (file/distributed storage)
  • Reclaim policies: Retain, Delete, Recycle (deprecated)
  • Volume modes: Filesystem (default) or Block (raw device for databases)
  • Storage Classes: Enable dynamic provisioning via CSI drivers
  • Lifecycle: Independent of pod — survives pod deletion and rescheduling
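For databases that manage their own on-disk layout, volumeMode: Block hands the pod a raw device instead of a staged filesystem; a sketch (names and device path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block              # raw device — the CSI driver skips mkfs/fsck/mount
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-consumer
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "infinity"]
      volumeDevices:             # note: volumeDevices, not volumeMounts
        - name: disk
          devicePath: /dev/xvda  # device path inside the container (placeholder)
  volumes:
    - name: disk
      persistentVolumeClaim:
        claimName: raw-block
```

Block mode avoids filesystem overhead entirely, which is why it pairs well with the low-latency NVMe/TCP figures in the comparison below.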

PV Access Mode Comparison for Storage Protocols

Storage Backend        Access Mode   Latency Profile   Best Use Case
NVMe/TCP               RWO           25–40 µs          Databases, StatefulSets
NFS                    RWX           500 µs–5 ms       Shared config, CMS
iSCSI                  RWO           100–200 µs        Legacy block workloads
Cloud block (EBS/PD)   RWO           200–500 µs        Cloud-native deployments

Stateful vs Stateless Applications