Internet Small Computer Systems Interface (iSCSI) is a block-level storage protocol that transports SCSI commands over TCP/IP networks, enabling access to centralized storage arrays over standard Ethernet.
iSCSI was standardized by the IETF in RFC 3720 (2004), later consolidated in RFC 7143, and operates by encapsulating SCSI commands in iSCSI Protocol Data Units (PDUs) carried over TCP. An iSCSI session consists of an initiator (the client) and a target (the storage server). The initiator identifies storage resources using iSCSI Qualified Names (IQNs) in the format iqn.yyyy-mm.reverse-domain:identifier. Targets are discovered through the iSNS (Internet Storage Name Service) protocol, through SendTargets discovery, or via static configuration on the initiator.
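The IQN naming convention above can be checked mechanically. The sketch below validates names against a simplified version of the iqn.yyyy-mm.reverse-domain:identifier form; the full RFC 3720 grammar is stricter, so treat this as illustrative rather than a conformance checker.

```python
import re

# Simplified IQN pattern: iqn. + yyyy-mm. + reversed domain + optional :identifier.
IQN_RE = re.compile(
    r"^iqn\."          # literal type prefix
    r"\d{4}-\d{2}\."   # year-month the naming authority acquired its domain
    r"[a-z0-9.\-]+"    # reversed domain name of the naming authority
    r"(:[^\s]+)?$"     # optional colon-separated unique identifier
)

def is_valid_iqn(name: str) -> bool:
    """Return True if `name` matches the simplified IQN pattern."""
    return IQN_RE.match(name) is not None

print(is_valid_iqn("iqn.2004-04.com.example:storage.disk1"))  # True
print(is_valid_iqn("eui.02004567A425678D"))                   # False: EUI-64 format, not an IQN
```

Note that iSCSI also permits EUI-64 (`eui.`) and, in later revisions, NAA (`naa.`) naming formats; this sketch deliberately accepts only IQNs.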
Each iSCSI session maps to a single TCP connection (or optionally several, via MC/S, Multiple Connections per Session). The command queue model is inherited from the underlying SCSI architecture: one queue per session, with an outstanding-command window (the CmdSN window) that is negotiated per session and in practice commonly sits around 128 commands. This is a fundamental architectural limitation compared to NVMe, which supports up to 65,535 I/O queues, each with up to 65,535 outstanding commands. The per-command SCSI overhead (6- to 16-byte CDBs, status responses, and data-in/data-out sequences) adds latency on every I/O path.
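Little's law makes the queue-depth limitation concrete: sustainable IOPS on one queue cannot exceed outstanding commands divided by per-command latency. The figures below are the illustrative numbers from this section, not measurements, and the results are theoretical ceilings; real throughput lands well below them due to CPU and protocol overhead.

```python
# Back-of-envelope ceiling from Little's law: IOPS <= outstanding commands / latency.

def max_iops(queue_depth: int, latency_s: float) -> float:
    """Upper bound on IOPS for one queue at the given per-command latency."""
    return queue_depth / latency_s

# One iSCSI session: a 128-command window at ~150 us round-trip latency.
iscsi_ceiling = max_iops(128, 150e-6)

# NVMe/TCP: just 4 queues of depth 128 at ~30 us already dwarfs that,
# and the protocol allows thousands of queues.
nvme_ceiling = 4 * max_iops(128, 30e-6)

print(f"iSCSI single-session ceiling: {iscsi_ceiling:,.0f} IOPS")
print(f"NVMe/TCP 4-queue ceiling:     {nvme_ceiling:,.0f} IOPS")
```

The ceiling also shows why adding latency hurts iSCSI disproportionately: with only one queue to fill, every microsecond of round-trip time directly caps throughput, whereas NVMe/TCP can hide latency behind many parallel queues.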
iSCSI supports optional IPsec encryption for data in flight and the Challenge-Handshake Authentication Protocol (CHAP) for initiator/target authentication. While iSCSI offered a compelling low-cost alternative to Fibre Channel, its SCSI underpinnings, originally designed for parallel bus interfaces, have not aged as well as protocols built natively for networked, high-concurrency storage access.
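The core of CHAP is a challenge-response digest defined in RFC 1994: MD5 over the one-byte identifier, the shared secret, and the challenge, so the secret itself never crosses the wire. This is a minimal sketch of that digest only; iSCSI's login negotiation wraps it in text keys (CHAP_A, CHAP_I, CHAP_C, CHAP_R) not shown here.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target issues a random challenge; the initiator proves knowledge of the
# shared secret by returning the digest.
challenge = os.urandom(16)
resp = chap_response(0x01, b"initiator-secret", challenge)

# The target computes the same digest with its stored secret and compares.
assert resp == chap_response(0x01, b"initiator-secret", challenge)
print(resp.hex())
```

Because CHAP relies on MD5 and a static shared secret, deployments that need stronger guarantees typically layer IPsec underneath rather than relying on CHAP alone.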
iSCSI is the primary incumbent protocol that NVMe/TCP is displacing in modern infrastructure. Both protocols run over standard TCP/IP Ethernet, making them superficially similar from an infrastructure perspective. The critical difference is the command set: iSCSI's SCSI heritage limits it to a single queue per session and imposes per-command overhead that compounds at high IOPS. NVMe/TCP eliminates this bottleneck with its native multi-queue design, typically delivering 3–5× higher IOPS in published benchmarks and significantly lower latency over the same network hardware.
| Feature | iSCSI | NVMe/TCP |
|---|---|---|
| Command set | SCSI | NVMe |
| Queues × queue depth | 1 × 128 | 64K × 64K |
| Typical latency | 100–200 µs | 25–40 µs |
| Max IOPS (practical) | ~400K | ~1.5M+ |