The utilization of the disk drives is influenced by various factors. The number of physical IOs and the effort required for each IO are decisive. While the former can be reduced by an adequately dimensioned cache, the load caused by the accumulated IOs depends largely on the RAID level.
RAID 1
Two identical disk drives in each case, original and mirror disk, form a RAID 1 group (referred to below as “RAID 1(1+1)” for short).
Both disks contain identical data. Consequently only 50% of the disk capacity is available to the user.
Mirroring does not discernibly affect the write performance. For read jobs the storage system attempts to optimize by reading from the more convenient disk, so the load is distributed over both disks. A single user task with synchronous reads does not profit from this, but under parallel loads a significant benefit can be achieved.
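The read optimization can be sketched as a shortest-queue choice between the two mirror halves (a minimal illustrative model; the helper name and the queue representation are assumptions, real controllers use more elaborate heuristics):

```python
def choose_mirror_disk(queue_depths: list[int]) -> int:
    """Return the index of the mirror half with the shortest IO queue,
    i.e. the 'more convenient' disk for the next read."""
    return min(range(len(queue_depths)), key=lambda disk: queue_depths[disk])
```

With queue depths [3, 1] the second disk (index 1) is chosen, so parallel reads spread over both halves of the mirror.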
RAID 0
With RAID 0 the data is distributed over multiple disks by means of "data striping", so the maximum throughput scales almost linearly with the number of disks. In contrast to RAID levels with parity information, high write performance is also achieved.
CAUTION!
RAID 0 offers no protection against disk failure and is therefore not recommended outside test scenarios. If high write performance is needed, RAID 1/0 should be used instead.
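The striping of logical blocks over the disks can be sketched as follows (hypothetical helper; real controllers stripe in larger chunk sizes, not single blocks):

```python
def stripe_location(lba: int, num_disks: int) -> tuple[int, int]:
    """Map a logical block address to (disk index, block offset on that
    disk) for a RAID 0 layout with one block per stripe unit."""
    return lba % num_disks, lba // num_disks
```

Consecutive logical blocks land on consecutive disks, which is why sequential throughput scales with the number of disks.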
RAID 1/0
RAID 1/0 combines the better performance of striping (RAID 0) with the redundancy of mirroring (RAID 1). A RAID 1/0 group therefore consists of multiple (generally 2 to 4) disks over which the original data is distributed by means of “data striping”, plus the same number of disks containing the mirrored data (referred to below, for example, as “RAID 1/0(4+4)” for short). As with RAID 1, only 50% of the installed disk capacity is available to the user.
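The resulting per-disk load can be estimated with a simple sketch (illustrative model that spreads reads evenly over both copies and ignores caching; the function name is an assumption):

```python
def raid10_per_disk(read_ios: int, write_ios: int, data_disks: int) -> tuple[float, float]:
    """Per-disk (reads, writes) in a RAID 1/0(n+n) group: a read can be
    served by either copy, a write must go to both disk and mirror."""
    disks = 2 * data_disks
    return read_ios / disks, 2 * write_ios / disks
```

For 40 read and 40 write IOs on RAID 1/0(4+4) this yields 5 reads and 10 writes per disk.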
RAID 5
RAID 5 offers a good compromise between performance and capacity. Its write performance is lower than that of RAID 1/0; depending on the application scenario, however, RAID 5 can offer sufficient performance provided all write IOs can be handled via the cache. RAID 1/0 should be preferred for write-intensive loads or for data with high availability requirements.
RAID 5 implements common parity checking for a number of disk drives. As with RAID 1/0, the data is distributed in blocks over the disk drives of the RAID 5 group using “data striping” and is saved together with parity information, which is likewise distributed over all the drives (“rotating parity”). Parity checking reduces the usable capacity of the group by the size of one disk drive. With the frequently used configuration of 4 drives, 75% of the installed disk capacity can consequently be used (referred to below as “RAID 5(3+1)” for short).
As with RAID 1/0, “data striping” gives RAID 5 advantages in the event of parallel access. While a disk has failed and during the subsequent rebuild, however, only degraded performance can be expected.
When writing, the parity information must also be calculated and written. For this purpose the previous values of the data block to be modified and of the associated parity block are read before the actual write, i.e. additional read IOs are incurred. Consequently writing is more complex with RAID 5, and write IO times are longer than with RAID 1/0.
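This read-modify-write can be sketched with XOR parity (simplified to single-byte blocks; the function name is an assumption, real stripes use far larger blocks):

```python
def raid5_parity_update(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """New parity for a small write: XOR out the old data block, XOR in
    the new one. The two reads (old data, old parity) precede the two
    writes (new data, new parity)."""
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))
```

Because only the modified data block and the parity block are touched, the remaining disks of the group stay idle for this write.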
RAID 6
As with RAID 5, the data stream is divided into stripes. However, not just one but two error correction values are calculated. Data and parity information is thus distributed over the disks in such a way that both sets of parity information are contained on different disks (“dual rotating parity”).
With RAID 6, up to two disks can fail without data loss. The effort for resynchronization, particularly when two disks have failed, is considerably higher than with RAID 5, as is the effort for write IOs.
Compared to RAID 5, RAID 6 thus offers higher fault tolerance at the cost of lower write performance.
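The per-write effort of the levels can be summarized as (reads, writes) on disk per small logical write — an illustrative read-modify-write model; a write-back cache can reduce the effective counts:

```python
# Physical disk operations caused by one small logical write (reads, writes).
WRITE_PENALTY = {
    "RAID 1":   (0, 2),  # write the data block and its mirror copy
    "RAID 1/0": (0, 2),  # same, within the striped mirror group
    "RAID 5":   (2, 2),  # read old data + old parity, write both anew
    "RAID 6":   (3, 3),  # read old data + both parities, write all three
}
```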
Examples
The examples below compare the effects of the various RAID levels on the load of the individual disk drives, given either the same usable disk capacity or the same number of disks.
RAID level | Capacity | HDDs | Load per disk with 40 read IOs | Load per disk with 40 write IOs
Single disk | K | 1 | 40x Read | 40x Write
RAID 1(1+1) | K | 2 | 20x Read | 40x Write
RAID 1/0(2+2) | 2*K | 4 | 10x Read | 20x Write
RAID 1/0(3+3) | 3*K | 6 | 7x Read | 13x Write
RAID 1/0(4+4) | 4*K | 8 | 5x Read | 10x Write
RAID 5(3+1) | 3*K | 4 | 10x Read | 20x Read + 20x Write
RAID 5(4+1) | 4*K | 5 | 8x Read | 16x Read + 16x Write
RAID 6(3+2) | 3*K | 5 | 8x Read | 24x Read + 24x Write
RAID 6(4+2) | 4*K | 6 | 7x Read | 20x Read + 20x Write
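Per-disk loads of this kind can be estimated with a simple model (hypothetical helper; it spreads reads over all disks, doubles writes for mirrored levels, counts a read-modify-write of the data block plus the parity block(s) for parity levels, and rounds to whole IOs):

```python
def per_disk_load(level: str, data_disks: int, redundant_disks: int,
                  ios: int = 40) -> tuple[int, int, int]:
    """Return (reads per disk for `ios` read IOs, then reads and writes
    per disk for `ios` write IOs), spread evenly over the whole group."""
    disks = data_disks + redundant_disks
    reads = round(ios / disks)                   # any copy/stripe serves reads
    if level in ("RAID 1", "RAID 1/0"):
        return reads, 0, round(2 * ios / disks)  # data block + mirror copy
    blocks = 1 + redundant_disks                 # data block + parity block(s)
    rw = round(blocks * ios / disks)             # old values read, new written
    return reads, rw, rw
```

For RAID 1/0(4+4) this gives (5, 0, 10): 5 reads per disk, and 10 writes per disk with no extra reads.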