
Measurements with HSMS/ARCHIVE

The table below shows the backup throughputs that can actually be achieved with LTO-6 tape devices under HSMS/ARCHIVE on /390 and x86 systems. In each case one volume containing files with average compressibility (factor 2-3) was backed up.

The throughput rates specified refer to one task during the actual backup and restoration phase, in other words without mounting the tape, without starting and ending the programs, and without creating the report files.

SAVE and RESTORE scenarios with different file sizes were measured:

  • Large files: 1.2 GB (1200 MB)

  • Medium-sized files: 12 MB

  • Small files: 1.2 MB

The throughputs were measured using a number of configurations, with the following hardware components being used:

  • Servers: SE700-20 with 1,120 RPF and SE300-80F with 1,200 RPF

  • Storage systems: ETERNUS DX600 S3 and ETERNUS DX440 S2

  • SAN connection of the storage systems: 2-3 Fibre Channels operating at 8 Gbit/s

  • Volumes: Different-sized NK2 volumes on RAID 1/0 (3+3)

  • Tape devices: IBM ULTRIUM-TD6 connected to a Fibre Channel operating at 8 Gbit/s

In practice, the achievable throughputs depend on the interaction of all the components involved. In the case of small files, for example, the throughput is limited by the accesses to the volumes. Throughputs in specific individual cases therefore cannot be generalized; instead, the range of the measured throughputs is given.

Servers with /390 and x86 architecture achieve throughputs at a comparable level when the same performance is available.

File size                  | Throughput with ARCHIVE in Mbyte/s (SAVE) | Throughput with ARCHIVE in Mbyte/s (RESTORE)
---------------------------|-------------------------------------------|---------------------------------------------
Small files (1.2 MB)       | 160 - 190                                 | 80 - 110
Medium-sized files (12 MB) | 270 - 350                                 | 170 - 220
Large files (1.2 GB)       | 340 - 370                                 | 310 - 320

The measurement results in this section assume favorable conditions, e.g. no parallel load on the measured configuration and sufficient CPU performance.

In order to express the CPU requirement on a comparable basis, the measured CPU utilizations were converted to RPF per Mbyte/s. The CPU requirement for backing up and restoring large and medium-sized files is between 0.2 and 0.7 RPF per Mbyte/s. Consequently, a fictitious system with 500 RPF, for instance, would run at a workload of 10-35% with a data throughput of 250 Mbyte/s.
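The following minimal sketch illustrates this conversion (the helper and its names are hypothetical and not part of HSMS/ARCHIVE; the figures are the ones quoted above):

```python
def cpu_workload_percent(system_rpf: float, throughput_mb_s: float,
                         rpf_per_mb_s: float) -> float:
    """Workload in % = throughput * CPU requirement per Mbyte/s / system RPF."""
    return throughput_mb_s * rpf_per_mb_s / system_rpf * 100

# Fictitious system with 500 RPF, a data throughput of 250 Mbyte/s, and a
# measured CPU requirement of 0.2-0.7 RPF per Mbyte/s:
low = cpu_workload_percent(system_rpf=500, throughput_mb_s=250, rpf_per_mb_s=0.2)
high = cpu_workload_percent(system_rpf=500, throughput_mb_s=250, rpf_per_mb_s=0.7)
print(f"Workload: {low:.0f}% - {high:.0f}%")   # Workload: 10% - 35%
```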

When small files are backed up and restored, the overhead increases because of the growing number of catalog operations required.

When a large number of small files is backed up, the CPU requirement rises by a factor of 2; when they are restored with default settings, it rises by a factor of 7.
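As a purely illustrative continuation of the sketch above, and assuming these factors scale the 0.2-0.7 RPF per Mbyte/s figure quoted for large and medium-sized files (this interpretation is an assumption, not a documented formula), the small-file CPU requirement can be estimated as follows:

```python
# Base CPU requirement measured for large and medium-sized files (RPF per Mbyte/s).
base_rpf_per_mb_s = (0.2, 0.7)

# Factors quoted for a large number of small files (assumed to scale the base figure).
save_factor, restore_factor = 2, 7

print("SAVE:   ", [round(v * save_factor, 1) for v in base_rpf_per_mb_s])     # [0.4, 1.4]
print("RESTORE:", [round(v * restore_factor, 1) for v in base_rpf_per_mb_s])  # [1.4, 4.9]
```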

In such cases you should take into account the optimization tips in section "Performance recommendations for HSMS/ARCHIVE".

Particularly in multiprocessor systems, it must be borne in mind that the CPU performance required for a backup task has to be provided by a single BS2000 CPU.

The I/O processors of BS2000 servers do not present a bottleneck.