Preparing Snapset mode


Starting Snapset mode for a pubset means that copies (backups) of the pubset are created on Snapsets, which are then available for restoring individual files and job variables or the entire pubset.

Terms

Snap unit

The local replication functions of the storage systems create a "Snapshot" of a logical unit (or of several, if required). The Snapshot, called a snap unit, is a logical copy of the original unit at a particular time ("point-in-time copy"): while the data on the original unit changes, the snap unit retains the status of the data at the time the Snapshot was created.

In ETERNUS DX/AF systems, Thin Devices (TDEV) or Flex Volumes (FDEV) are used as snap units. For Snapset mode, the planned snap units must be initialized on Thin Devices (TDEV) or Flex Volumes (FDEV) in advance, using the VOLIN utility routine (see the “Utility Routines” manual [15]). The special notation S#<mn> is expected as the VSN for these volumes, e.g. S#5234, where <mn> is the device mnemonic. Because of its flexibility, this type of snap unit is strongly recommended.

Alternatively, Snapset mode can use specially configured units called snap device volumes (SDVs). In most cases, these are set up directly on the storage system by a qualified technician. The original unit and its snap unit must reside in the same storage system.

In BS2000, the snap units are generated like normal disk devices (type D3435), with a capacity equal to the respective original unit.
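As an aside, the S#<mn> naming convention described above can be checked mechanically. The following sketch is not part of BS2000; the mnemonic pattern (2 to 4 alphanumeric characters) is an assumption about typical device mnemonics, not a rule taken from the manual:

```python
import re

# Hypothetical helper (not a BS2000 interface): checks the S#<mn> VSN
# notation described above, e.g. S#5234. The mnemonic pattern of 2-4
# alphanumeric characters is an assumption.
VSN_PATTERN = re.compile(r"^S#[0-9A-Z]{2,4}$")

def is_snap_unit_vsn(vsn: str) -> bool:
    """Return True if the VSN follows the S#<mn> notation for snap units."""
    return VSN_PATTERN.fullmatch(vsn) is not None
```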

Save Pool

The storage space actually required for the original data to be saved is, if necessary, provided from a so-called save pool. Save pools are configured in the storage system with a suitable capacity.

The implementation of save pools depends on the storage system:

ETERNUS DX/AF Systems

  • in S#-Volume mode
    In this case, there is no separate save pool.
    The required storage space is provided by the respective snap units, which are subject to the overflow control of the respective “Thin Provisioning Pool”.
  • in SDV mode
    In this case, there is only one single save pool for all Snapsets.
    The required storage space is initially provided by the respective snap units. Only when their capacity (about 10 % of the original unit) is exhausted is storage space from the save pool used.
    In the case of a save pool overflow, all Snapsets occupying storage space in the save pool are corrupted. The SHC-OSD functions for monitoring save pools should therefore always be used.

The capacity required for a save pool depends on the following factors:

  • Data volume of the applications for which Snapsets have been generated
  • Number of snap sessions per original
  • Volume of changes on the original (and the associated snap units)

When a Snapset is deleted, the area concerned in the save pool is released again.
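The interplay of the three factors above can be illustrated with a rough capacity estimate. This is a sketch for illustration only; the formula and the example change rate are assumptions, not sizing rules from the manual:

```python
# Illustrative-only save pool estimate based on the three factors listed
# above: data volume, number of snap sessions, and change rate.
# The multiplicative model and the 10% change rate are assumptions.
def estimate_save_pool_gb(original_gb: float,
                          sessions_per_original: int,
                          change_rate: float) -> float:
    """Copy-on-write space grows with the data volume, the number of
    snap sessions per original, and the fraction of data that changes."""
    return original_gb * sessions_per_original * change_rate

# Example: 800 GB of original data, 12 Snapsets, ~10% changed per session.
needed = estimate_save_pool_gb(800, 12, 0.10)  # roughly 960 GB
```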

Save Pool / Thin Pool Monitoring

SHC-OSD provides functions for monitoring the utilization level of a save pool or thin pool. As soon as a defined threshold value is reached or exceeded, warning messages are output to the console.
In any of these cases, system administration should react appropriately and free up storage in the save pool or, if necessary, extend the save pool. The threshold values are set in the storage system or in the SHC-OSD parameter file.


CAUTION!
An overflow will lead to loss of the existing Snapsets.
Specifically, the snap sessions concerned enter the “FAILED” state; after that, they can only be deleted.
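The threshold-warning behaviour described above can be sketched as follows. The threshold levels here are example values, not SHC-OSD defaults, and the function is a hypothetical illustration rather than an SHC-OSD interface:

```python
# Sketch of threshold-based fill-level monitoring as described above.
# The warning levels are assumed example values, not SHC-OSD defaults.
WARN_THRESHOLDS = (0.70, 0.85, 0.95)

def check_utilization(used_gb: float, capacity_gb: float) -> list:
    """Return one warning message per threshold the fill level exceeds."""
    level = used_gb / capacity_gb
    return [f"WARNING: save pool {round(t * 100)}% threshold exceeded"
            for t in WARN_THRESHOLDS if level >= t]
```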

Example of preparations

Before Snapset mode can start for a pubset, corresponding preparatory measures are required. The procedure is illustrated using the example below.

  • Initial situation and requirements planning

    Pubset A consists of 8 disks (PUBA00 through PUBA07).
    It is to be saved twice each workday (e.g. at midday and in the evening).
    Access to the backups of the last 6 workdays should be ensured.

    12 Snapsets must then be available for 2 x 6 pubset backups. Each Snapset requires as many snap units as pubset A has disks. In total, 8 x 12 = 96 snap units are required.

  • Providing snap units in the storage system

    The 96 snap units required must be provided in the same storage system as the original disks of pubset A with the same type and the same size.
    If it is already clear that a pubset will be extended, additional snap units should be included for this purpose in sufficient number.

    ETERNUS DX/AF systems can be used with pre-configured snap units or S#<mn> volumes. See "Terms".
  • Configuring snap units in BS2000

    All 96 snap units must be included in the hardware generation of the BS2000 systems involved like normal disks (type D3435), in other words in practically all hardware generations in which the original disks of pubset A are also entered (see the “System Installation” manual [55]).

    On SUs /390, disks can also be included in the active I/O configuration while operation is in progress.
    The inclusion of emulated D3435 disks in the X2000 configuration of an SU x86 is described in the “Operation and Administration” manual [57].

    Note for VM2000

    Snap units created in the storage system are also recognized as snap units by VM2000. They are automatically attached when a pubset is imported and, under VM2000, are implicitly assigned to the VM if the VM has the AUTO-SNAP-ASSIGNMENT privilege. When a VM is initialized with the VM2000 command CREATE-VM, it is assigned this privilege by default. The privilege permits the guest system on the VM to implicitly assign itself the snap units of a Snapset without the VM and the devices being prepared for implicit device assignment (i.e. the VM privilege and the device attribute ASSIGN-BY-GUEST are not required in this case).

    Exception

    Volumes S#<mn> used as snap units are not recognized as snap units by VM2000 and are not displayed as such. Nevertheless, they can be implicitly assigned to a VM with the AUTO-SNAP-ASSIGNMENT privilege.

  • Setting the Snapset limit and, if necessary, assigning the save pool (in BS2000)

    The Snapset limit is entered in the SVL of the Pubres. It defines the maximum number of Snapsets permissible for the pubset. For a pubset without Snapset mode the Snapset limit is 0.
    As a maximum of 12 Snapsets are to be created for pubset A, systems support sets the Snapset limit to 12:
    /SET-PUBSET-ATTRIBUTES PUBSET=A,SNAPSET-LIMIT=12
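The sizing arithmetic of this example can be summarized in a short sketch. The function name is ours, purely for illustration, and is not a BS2000 interface:

```python
# Planning arithmetic from the example above: the Snapset limit is the
# number of backups retained, and each Snapset needs one snap unit per
# disk of the pubset.
def required_snap_units(disks: int, backups_per_day: int,
                        retained_days: int) -> tuple:
    """Return (snapset_limit, total_snap_units) for a pubset."""
    snapsets = backups_per_day * retained_days  # one Snapset per backup
    return snapsets, disks * snapsets

# Pubset A: 8 disks, saved twice per workday, 6 workdays retained.
limit, units = required_snap_units(8, 2, 6)  # -> (12, 96)
```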

Snapset mode in the case of disaster protection with remote replication

If, because of disaster protection, pubset A is operated in a local storage system and in a remote storage system, the following must be borne in mind for Snapset mode:

  • Normally the Snapsets are configured only in the local storage system. If the local storage system fails, they are no longer available.

  • If the Snapsets are also to be available in the remote storage system should a disaster occur, they must be created in both storage systems. To permit this, the same number of snap units must be provided in the remote storage system as in the local storage system and included in the hardware generation of the BS2000 systems involved.

    In addition, systems support must set the processing environment accordingly for the Snapsets:
    /SET-SNAPSET-PARAMETER PUBSET=A,REMOTE-COPY=*YES

    When disk access is switched between the source and target controllers, access to the Snapsets assigned to the pubset is not also switched automatically.
    The ADAPT-SNAPSET-ACCESS command ensures that the assigned Snapsets are still available after such a switchover without the pubset having to be exported:
    /ADAPT-SNAPSET-ACCESS PUBSET=A