
Measurements with HSMS/ARCHIVE and ETERNUS CS8200 VTL


Data backup performance with the backup file located on the Logical Volumes of an ETERNUS CS8200 VTL was measured on an SE710 server. The results also apply to an SU730. The storage system volumes used with the HSMS statements BACKUP-FILES and RESTORE-FILES contained files of varying size (given in PP, PAM pages; a conversion sketch follows the list):

  • Small files: 35510 * 621 PP (1.2 MB)
  • Medium-sized files: 3630 * 6075 PP (12 MB)
  • Large files: 36 * 621600 PP (1.2 GB)
  • System mix: 11872 files of different sizes
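
The nominal file sizes follow from the BS2000 PAM page size of 2 KB (2048 bytes). A minimal conversion sketch in Python, assuming the binary units that match the values quoted above:

```python
PAM_PAGE = 2048            # bytes per BS2000 PAM page (2 KB)
MB, GB = 1024**2, 1024**3  # binary units, matching the sizes quoted above

test_sets = {              # (number of files, PAM pages per file)
    "small":  (35510, 621),
    "medium": (3630, 6075),
    "large":  (36, 621600),
}

for name, (count, pp) in test_sets.items():
    per_file = pp * PAM_PAGE   # bytes per file
    total = count * per_file   # total data volume of the test set
    print(f"{name:6}: {per_file / MB:7.1f} MB per file, {total / GB:5.1f} GB in total")
```

Notably, all three size classes add up to roughly the same total volume (about 42 GB each), so the throughput figures below compare workloads of similar size.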

The following hardware configuration was used: 

  • Server: SE710-20D (1220 RPF)
  • Storage system: ETERNUS DX8700, 16 Gbit/s FC, 2.5" SSD-M drives
    Volumes: NK2 format, in 2 RAID groups (RAID1), 4-way connection
  • ETERNUS CS8200 VTL: FC connection at 16 Gbit/s via 2 channels, Logical Drives IBM 03590E1A (BS2000: DEVICE-TYPE=TAPE-C4)


Backup throughput with ETERNUS CS

The backups were performed both on one Pubset with 1 task on 1 Logical Volume and, in parallel, on 2 Pubsets with 2 tasks on 2 Logical Volumes. With FastDPAV the following average throughputs were measured:



HSMS/ARCHIVE throughput (MB/s):

| file size    | BACKUP, 1 task | RESTORE, 1 task | BACKUP, 2 tasks | RESTORE, 2 tasks |
|--------------|----------------|-----------------|-----------------|------------------|
| small files  | 317            | 198             | 620             | 282 + 105        |
| medium files | 317            | 253             |                 |                  |
| large files  | 317            | 254             | 633             | 473              |

The throughputs achieved with RESTORE are lower than those achieved with BACKUP. The Cache Hit Rate Write of the storage system occasionally dropped far below 100% during RESTORE operations. During the RESTORE of small files with 2 tasks, subtask 1 took about double the time of subtask 0; the table above therefore lists the throughputs of the two parallel subtasks separately. The many access operations on the BS2000 catalog cause a significant overhead.
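
A small sketch of what the two separate figures imply, taking the stated "about double the run time" at face value: averaged over the full elapsed time, the effective 2-task RESTORE rate for small files stays well below the sum of 282 + 105.

```python
# From the text above: subtask 0 ran for a time t at 282 MB/s,
# subtask 1 for about 2*t at 105 MB/s (values from the table).
t = 1.0                          # arbitrary time unit
data = 282 * t + 105 * (2 * t)   # total data moved within 2*t
elapsed = 2 * t                  # the restore ends when subtask 1 finishes
print(f"effective 2-task RESTORE rate: {data / elapsed:.0f} MB/s")  # ~246 MB/s
```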


Comparison to ETERNUS DX

Comparison measurements were conducted with the HSMS backup file located on a Pubset of the storage system. That Pubset was connected via a different 4-way connection than the Pubsets being backed up.
The throughputs achieved with FastDPAV as well as without PAV were as follows:


HSMS/ARCHIVE throughput (MB/s), ARCHIVE with 1 task:

| file size    | BACKUP FastDPAV | RESTORE FastDPAV | BACKUP without PAV | RESTORE without PAV |
|--------------|-----------------|------------------|--------------------|---------------------|
| small files  | 290             | 158              | 170                | 111                 |
| medium files | 471             | 515              | 176                | 187                 |
| large files  | 463             | 477              | 177                | 197                 |

In this scenario with FastDPAV, the BACKUP throughput for medium-sized and large files is about 50% higher than with Logical Volumes; the RESTORE throughput is about twice as high as before. The parallel read and write I/Os on the backup file strongly influence the results.
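
These ratios can be cross-checked against the two tables (FastDPAV, 1 task); a minimal sketch:

```python
# (BACKUP, RESTORE) throughput in MB/s, taken from the tables above
vtl = {"medium": (317, 253), "large": (317, 254)}  # backup file on CS8200 VTL
dx  = {"medium": (471, 515), "large": (463, 477)}  # backup file on DX Pubset

for size in vtl:
    backup_gain = dx[size][0] / vtl[size][0] - 1   # relative BACKUP gain
    restore_gain = dx[size][1] / vtl[size][1]      # RESTORE speedup factor
    print(f"{size}: BACKUP +{backup_gain:.0%}, RESTORE x{restore_gain:.2f}")
```

This yields roughly +46-49% for BACKUP and a factor of 1.9-2.0 for RESTORE, consistent with the figures quoted above.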


Scaling with several tasks

When the Logical Drives are connected to the server via 1 channel, BACKUP operations with several parallel tasks scale well. For RESTORE, the storage system is the limiting factor for throughput. HSMS backups (FastDPAV, volumes with large files) with a single task and with 2/3/4 parallel tasks resulted in the following BACKUP and RESTORE throughputs:


HSMS/ARCHIVE throughput (MB/s):

| ARCHIVE | BACKUP TAPE-C4 | RESTORE TAPE-C4 | BACKUP TAPE-U4 | RESTORE TAPE-U4 |
|---------|----------------|-----------------|----------------|-----------------|
| 1 task  | 317            | 254             | 316            | 253             |
| 2 tasks | 625            | 467             | 621            | 465             |
| 3 tasks | 917            | 493             | 911            | 491             |
| 4 tasks | 1089           | 464             | 1104           | 464             |

The Logical Drives were configured both as IBM 03590E1A (BS2000: DEVICE-TYPE=TAPE-C4) and as LTO Ultrium 4 (BS2000: DEVICE-TYPE=TAPE-U4). There was no significant difference between the two device types.
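
Speedup and parallel efficiency derived from the TAPE-C4 columns above make the contrast visible: BACKUP scales almost linearly up to 3 tasks, while RESTORE saturates after 2 tasks at the limit of the storage system. A minimal sketch:

```python
# Throughput (MB/s) with TAPE-C4, from the table above
backup  = {1: 317, 2: 625, 3: 917, 4: 1089}
restore = {1: 254, 2: 467, 3: 493, 4: 464}

for label, series in (("BACKUP", backup), ("RESTORE", restore)):
    base = series[1]                      # single-task throughput as baseline
    for tasks, mbps in series.items():
        speedup = mbps / base
        print(f"{label} {tasks} task(s): speedup x{speedup:.2f}, "
              f"efficiency {speedup / tasks:.0%}")
```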


Comparison of K- and NK-Pubsets

Further measurements examined the difference between backing up K-Pubsets and NK-Pubsets, with the backup file located on Logical Volumes. Each Pubset contained files of different sizes (system mix). Both BACKUP and RESTORE were run several times. The following table shows the average throughputs for the different scenarios:


HSMS/ARCHIVE throughput (MB/s):

| Pubset (BACKUP - RESTORE) | BACKUP FastDPAV | RESTORE FastDPAV | BACKUP without PAV | RESTORE without PAV |
|---------------------------|-----------------|------------------|--------------------|---------------------|
| K - K                     | 287             | 175              | 124                | 93                  |
| K - NK                    | 287             | 11               | 124                | 11                  |
| NK - NK                   | 291             | 176              | 254                | 162                 |
| NK - K                    | 290             | 178              | 249                | 109                 |


The results varied significantly between runs, most of all with RESTORE. With a BACKUP of a K-Pubset followed by a RESTORE to an NK-Pubset, all files were converted using PAMINT. This led to a significant throughput decrease, down to 11 MB/s. This drop in performance is unrelated to PAV, but it is the reason for the increased RESTORE run-times.
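
For scale, a back-of-the-envelope estimate of the run-time cost of this conversion, based on the FastDPAV RESTORE values in the table above (for the same data volume, run time scales inversely with throughput):

```python
# RESTORE throughput (MB/s) with FastDPAV, from the table above
k_to_nk = 11     # K -> NK: every file converted via PAMINT
nk_to_nk = 176   # NK -> NK: no conversion needed

# Same data volume => run time is inversely proportional to throughput
print(f"run-time factor of the converted RESTORE: x{nk_to_nk / k_to_nk:.0f}")  # ~x16
```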