Format G: /FS:NTFS /V:DATA /Q /A:64K is the command used here, and the resultant allocation unit size is 64K. If we go back and use fsutil.exe to verify, we can see we are now formatted at 64K. This is a great starting point, and don't forget that you can define the block size of your backups to match and speed them up: Blocksize is the backup parameter you will be looking for.
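For reference, here is the whole sequence in one console session (G: and the DATA label are just the example values used above; the fsutil output is abbreviated and the exact fields and spacing vary by Windows version):

    format G: /FS:NTFS /V:DATA /Q /A:64K
    fsutil fsinfo ntfsinfo G:
    Bytes Per Sector  :               512
    Bytes Per Physical Sector :       4096
    Bytes Per Cluster :               65536
    ...

The Bytes Per Cluster line is the allocation unit size, and 65536 bytes confirms the 64K formatting. Note the Bytes Per Sector / Bytes Per Physical Sector pair on this example volume: 512 logical over 4096 physical is exactly the 512e layout discussed next.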
While digging into this, I read about performance impacts caused by the Read-Modify-Write process that applies to all 4096B disks emulating a 512B sector size (512e disks). Even though the physical sector size is 4K, the controller emulates it as 8 sectors of 512B and presents it as such to the OS. The idea is to provide compatibility for software that cannot understand 4K physical sectors (and there's a lot of such software; native 4K support only started with Server 2012/R2, I believe).

Anyway, the point is that during a Read operation there's little to no performance hit. If the OS wants to read a specific 512B sector, the firmware on the HDD locates the whole 4K sector where that 512B chunk resides, reads the whole 4K into memory and presents the requested 512B chunk to the OS. One read request results in one read operation.

However, problems arise during a Write request, which can degrade performance by 30%-80%. The HDD controller cannot just write into the middle of its 4K physical sector; it has to rewrite the whole 4K. So it reads the 4K sector into memory, modifies the required 512B chunk provided by the OS, and finally overwrites the 4K physical sector on the disk with the new version of itself from memory, hence Read-Modify-Write: one small write request becomes one read plus one full-sector write.

Now going back to 64K cluster size in the Host and 4K cluster size in the Guest: the scenario seems very similar even though we are virtualized. The guest OS is presented with a 4K cluster while the actual size of the cluster underneath is 64K, so a small write from the guest can again force a read and rewrite of the larger unit below it.
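If you want to check whether a given disk is actually 512e before worrying about any of this, fsutil can report the logical and physical sector sizes on Windows 8/Server 2012 and later (output abbreviated, field names quoted from memory, so verify on your own box):

    fsutil fsinfo sectorinfo G:
    LogicalBytesPerSector :                 512
    PhysicalBytesPerSectorForAtomicity :    4096
    PhysicalBytesPerSectorForPerformance :  4096
    ...

A 512 logical / 4096 physical pairing is the 512e signature; 4096/4096 means a native 4K disk, and 512/512 a legacy native 512B disk, neither of which is subject to this flavor of Read-Modify-Write.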