

I posted a few months back about ESXi queue depth limits and how they affect performance. Just recently, Pure Storage announced our upcoming support for vSphere Virtual Volumes. So this begs the question: what changes with VVols when it comes to queuing? In a certain view, a lot.

So to review, there are a few important queue depth limits:

- The storage array queue depth limit. This might be on a volume basis, a target (port) basis, or an array basis. Or there might not be one at all; in the case of the FlashArray, we do not have a volume or target limit.
- The HBA queue depth limit. This indicates how many I/Os can be in-flight to all of the devices on a single HBA at once.
- The HBA device queue depth limit. This indicates how many I/Os can be in-flight to a given single device at once.
- The virtual SCSI adapter queue depth limit. This indicates how many I/Os can be in-flight to all of the disks on a single virtual SCSI adapter at once.
- The virtual disk queue depth limit. This indicates how many I/Os can be in-flight to a given single virtual disk (or RDM) from a guest at once.

So what about protocol endpoints? Let's review quickly what they are.

ESXi has device and path limitations (256 devices and 1,024 logical paths in ESXi 6.0 and earlier; those numbers doubled in 6.5). Presenting every VVol to a host in the normal way would very quickly exhaust the available paths and devices, so it needed to be done in a different way.

Virtual Volumes therefore use the concept of an administrative logical unit (ALU) and a subsidiary logical unit (SLU) (more info can be found in the SAM-5 draft). An ALU is a device that has other devices accessed through it as sub-LUNs. A SLU is one of those sub-LUNs; it is only accessible via an ALU. A protocol endpoint is an ALU.

So a PE would have a LUN ID of, let's say, 244, and a SLU "bound" to it would have a LUN ID of 244:4, for instance. Unlike with standard volumes, VMware supports much higher LUN addressing for SLUs (up to 16,383, or 0x3FFF). What this means is that you can have up to 16,383 VVols bound to a single PE. So a lot of VVols!

Understanding PE Queue Depth Handling

All of the I/O for those VVols goes to the PE and is then dealt with by the array. The multipathing and queue depth limits are therefore set on the PE, which begs the question: will my PE be a performance bottleneck? Let's take a look at how queue depth limits work with PEs.
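By the way, if you want to see which devices on a host are actually PEs, you can usually pull them out of the standard device list. Treat this as a quick sketch rather than gospel: I am assuming the --pe-only filter that I believe was added to esxcli for VVols, and the naa name below is just a placeholder.

```
# List only the devices that are VVol protocol endpoints (PEs).
# (--pe-only is assumed here; it filters the normal device list down to PEs.)
esxcli storage core device list --pe-only

# Show the paths (and therefore the LUN IDs) for one specific PE.
# The naa identifier is a placeholder--use one from the list above.
esxcli storage core path list -d naa.624a9370000000000000000000000001
```

Notice that the VVols (the SLUs) bound behind a PE do not show up as their own devices in that list, which is the whole point of the design.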

First off, the storage array queue depth limit depends on the array, so ask your vendor. The FlashArray doesn't have one, so any possible bottleneck on that front would be in ESXi (or eventually the array itself).

From the guest perspective, nothing changes. PVSCSI settings and queues are identical to what happens with VMs on VMFS.

For standard volumes (VMFS or RDMs), the actual queue depth limit (called DQLEN) is calculated as the minimum of two values: the HBA device queue depth limit and Disk.SchedNumReqOutstanding. Refer to my earlier post on that.

NOTE: DSNRO doesn't really exist any more; the host-wide setting of that name was retired in ESXi 5.5. It is now a per-device setting called "No of outstanding IOs with competing worlds". I am just going to keep using the term DSNRO because it is easier to type 🙂

The DSNRO setting for a standard device defaults to 32. In ESXi 6.0 and earlier it could be increased up to 256; in ESXi 6.5 it can be set to a maximum of whatever the current HBA device queue depth limit is.

PEs are different: they default to 128, and this is controlled with a new setting called Scsi.ScsiVVolPESNRO, which is a host-wide setting. Note that if you change it, the PEs will not alter their queue depth limit.
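For reference, here is how I would look at and change that setting from the command line. This is a sketch under a couple of assumptions: that the advanced option lives at the path /Scsi/ScsiVVolPESNRO and that the usual esxcli advanced-settings syntax applies.

```
# View the current PE queue depth limit (defaults to 128).
esxcli system settings advanced list -o /Scsi/ScsiVVolPESNRO

# Change it host-wide (256 is just an example value). Per the note above,
# don't expect existing PEs to alter their queue depth limit just because you set this.
esxcli system settings advanced set -o /Scsi/ScsiVVolPESNRO -i 256
```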

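And since DSNRO keeps coming up, here is the per-device side of things for a standard volume, for completeness. Again a sketch with a placeholder device name; the -O flag maps to the "No of outstanding IOs with competing worlds" value discussed above.

```
# Show device details, including "No of outstanding IOs with competing worlds",
# for a given device (works for standard volumes and PEs alike).
esxcli storage core device list -d naa.624a9370000000000000000000000002

# Raise the per-device value on a standard VMFS/RDM device (default is 32).
esxcli storage core device set -d naa.624a9370000000000000000000000002 -O 64
```

The resulting DQLEN for any device, PE or otherwise, is easiest to confirm in the disk device view of esxtop.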