Creating user files

User files are generally created and processed by users in the course of their session. It must be noted that each execution of the following operations increases the workload on the file catalog: creating and deleting a file, opening and closing a file, changing the size of a file.

The main criteria for choosing the data volume and the logical operating mode are the size of the file, the access frequency and the frequency of accesses to the catalog during file processing.

Since BS2000/OSD V8.0, the input/output size when copying files using /COPY-FILE has been 128 KB (previously: 64 KB). When large files are copied, this reduces the CPU requirement by up to 20% and the runtime by up to 15%.
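
As a minimal sketch, a copy of a large file that benefits from the larger input/output size could look as follows (the file names are placeholder assumptions):

    /COPY-FILE FROM-FILE=APP.DATA.2023,TO-FILE=APP.DATA.2023.COPY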

With small files the runtime can increase. When SCA is used (see "Accelerating catalog accesses with SCA"), the runtime improves by approx. 5%.

File size and access frequency

When data resources are planned, the frequency with which files are accessed has a considerable influence on system performance.

  • To keep the load on the volumes as low and uniform as possible, (large) files which are not accessed frequently should be placed on volumes on which frequently accessed files have already been installed and vice versa.

  • Frequently accessed files should always be placed on separate volumes (especially in the case of write-intensive applications). This is only practical if a number of tasks are involved.

  • Input/output files which are accessed alternately within the same processing run should be on separate volumes.

  • It makes sense to use frequently accessed files belonging to time-sensitive applications as HIPERFILEs (see section "Working with HIPERFILEs and DAB").

  • As a rule loads with normal to high write rates are managed efficiently in the disk controllers using familiar caching resources. However, bottlenecks can occur (write delay) in the case of loads with very high write rates to particular hotspots/files.

    To counteract this effect, it is recommended to distribute the hotspots across several logical volumes or physical devices:

    1. Hardware measures:
      Use of RAID 1/0 or RAID 5 (see section "RAID levels and their performance") with additional software support through the use of PAV (see "PAV (Parallel Access Volume) on /390 servers").

    2. Software measures:
      Distributing the hotspot or file over multiple files, extents or volumes. An example of this is the distribution of KDCFILE from UTM to multiple files (see "Optimizing the various phases").

Frequency of catalog access

As already mentioned in section "Logical operating modes of disks", the internal outlay for processing calls with catalog access is at its smallest for public volumes or pubsets, followed by private volumes. Throughput for the individual logical operating modes therefore depends on the ratio of management I/O operations to “productive” I/O operations.

Public volumes or pubsets are most suited to files subject to frequent catalog access (procedures, ENTER files, a large number of OPEN/CLOSE operations in quick succession).

Files with a low frequency of catalog access can be stored on private volumes.

For shareable public volume sets, the high frequency of catalog accesses is acceptable only when several servers access them jointly.

In order to reduce the number of management I/O operations, the following points should be borne in mind:

  • select the primary and secondary space allocation in /CREATE-FILE to suit the expected file size (see the sketch after this list)

  • avoid erasing and creating a file unnecessarily (job command sequence: /CREATE-FILE, /DELETE-FILE, /CREATE-FILE for the same file name)

  • use OPEN/CLOSE operations sparingly (the commands /COPY-FILE, /CALL-PROCEDURE, /INCLUDE-PROCEDURE, /ENTER-JOB, /ENTER-PROCEDURE, /START-(EXECUTABLE-)PROGRAM, /LOAD-(EXECUTABLE-)PROGRAM and /ASSIGN-SYSDTA also trigger OPEN/CLOSE operations)
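
As a minimal sketch of the first point, the following /CREATE-FILE call reserves a primary allocation roughly matching the expected file size and a moderate secondary allocation; the file name and the allocation values (given in 2-KB units, i.e. PAM pages) are placeholder assumptions:

    /CREATE-FILE FILE-NAME=APP.WORK.FILE,SUPPORT=*PUBLIC-DISK(SPACE=*RELATIVE(PRIMARY-ALLOCATION=3000,SECONDARY-ALLOCATION=300))

Choosing the primary allocation close to the final file size avoids repeated secondary allocations, each of which requires an additional catalog update; avoiding the delete-and-recreate sequence mentioned above likewise keeps the number of management I/O operations low.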

Reorganization

With larger systems, it is advisable to reorganize all files on a regular basis (e.g. every three weeks). This reorganization counteracts the splitting of files into a number of small extents, which causes an increase in positioning times. Reorganization is considerably simplified by the software product SPACEOPT.

SPACEOPT eliminates fragmentation by relocating the file extents optimally on the volumes of a pubset. Files can also be reorganized when the free space available is smaller than the file itself, i.e. SPACEOPT also permits local improvements on the basis of the free disk space available.

SPACEOPT can adjust the size of BS2000 volumes to the size of the LUNs (Logical Units) with which they are implemented in ETERNUS DX/AF and Symmetrix storage systems. Such an adjustment can be required after disk migration using DRV, in which the source and target disks must have the same capacity. Information on disk migration, e.g. from D3475 to D3435, is provided in the “DRV” manual [7 (Related publications)].

A further application is the expansion of a BS2000 volume following the expansion of a LUN through the inclusion of additional disks in the ETERNUS DX/AF or Symmetrix storage system. This function is only available for the disk format D3435 (FBA data format).

SPACEOPT works on a volume basis, i.e. tasks can be specified for individual or all volumes of a pubset. SPACEOPT offers options for assessing the occupancy and fragmentation status and can be used during operation.

SPACEOPT can also move files while they are open.

It is advisable to use SPACEOPT in off-peak times, as it can incur a high I/O workload depending on the degree of fragmentation of the volumes.

Reorganization with SPACEOPT can negatively affect the performance of other BS2000 guest systems or of other servers which execute I/O operations to/from the same physical disk.

You will find detailed information on SPACEOPT in the “SPACEOPT” manual [31 (Related publications)].