Synchronization and serialization

The basic mechanisms for synchronization and serialization in an XCS network are DLM locks. From the point of view of the DLM, a lock is identified by a name and a scope of validity (restricted to one processor or spanning the entire network). It is up to the user to assign a name to the resource to be locked (e.g. a device or a block of a file) and to ensure that this name is unique within the XCS network.
A lock request can be made synchronously or asynchronously. Deadlocks are detected by means of timers.
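
A synchronous request blocks until the lock is granted; an asynchronous request returns immediately and the caller is notified later. The sketch below is purely illustrative: it mimics a synchronous request whose deadlock timer expires, using POSIX threads as a local stand-in. The actual DLM interface consists of the Executive Macros described in [12]; none of the names below belong to it.

/* Illustrative only: a pthread mutex stands in for a DLM lock; the
 * timed request models deadlock detection by means of a timer. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Synchronous request: block until the lock is granted or the timer expires. */
static void *requester(void *arg)
{
    (void)arg;
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;                    /* 1-second "deadlock timer" */
    if (pthread_mutex_timedlock(&lock, &deadline) != 0)
        puts("request timed out: treated as a deadlock, give up or retry");
    else
        pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_mutex_lock(&lock);               /* another holder owns the lock */
    pthread_create(&t, NULL, requester, NULL);
    pthread_join(t, NULL);                   /* the requester's timer expires */
    pthread_mutex_unlock(&lock);
    return 0;
}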

The lock mode specifies the access permissions of the lock holder. A total of six lock modes are supported:

  • Null mode
    The holder of this lock is not permitted to access the assigned resource; all other lock requests are compatible with this mode.

  • Concurrent read mode
    The holder of this lock is entitled to read the assigned resource but must take into account the fact that other processors may be reading or writing to it at the same time.

  • Concurrent write mode
    The holder of this lock may modify the assigned resource but must take into account the fact that other processors may be reading or writing to it at the same time.

  • Protected read mode
    The holder of this lock is entitled to read the assigned resource. No other task can modify the resource at the same time.

  • Protected write mode
    The holder of this lock is entitled to modify the assigned resource. No other task may modify the resource at the same time; other tasks can, however, read it in concurrent read mode.

  • Exclusive mode
    Only the holder of this lock can access the resource. Other tasks may neither modify nor read the resource at the same time.

Which lock mode to select ultimately depends on the extent to which a resource must be protected in order to carry out a given processing stage.
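
The mode descriptions above imply a compatibility matrix: for each pair of modes, the matrix states whether a request can be granted while another holder owns the lock in the other mode. The following C sketch is illustrative only; the mode abbreviations and the table are derived from the descriptions above, not taken from the DLM interface.

/* Compatibility of the six lock modes, derived from the descriptions above. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { NL, CR, CW, PR, PW, EX } dlm_mode;  /* null, concurrent read/write,
                                                      protected read/write, exclusive */

/* compat[held][requested]: may 'requested' be granted while another
 * holder owns the lock in mode 'held'? (The matrix is symmetric.) */
static const bool compat[6][6] = {
    /*           NL     CR     CW     PR     PW     EX   */
    /* NL */ { true,  true,  true,  true,  true,  true  },
    /* CR */ { true,  true,  true,  true,  true,  false },
    /* CW */ { true,  true,  true,  false, false, false },
    /* PR */ { true,  true,  false, true,  false, false },
    /* PW */ { true,  true,  false, false, false, false },
    /* EX */ { true,  false, false, false, false, false },
};

int main(void)
{
    /* A protected write excludes other writers but allows concurrent readers. */
    printf("PW vs CR: %s\n", compat[PW][CR] ? "compatible" : "incompatible");
    printf("PW vs PW: %s\n", compat[PW][PW] ? "compatible" : "incompatible");
    return 0;
}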

Optionally, a storage area called the "lock value block" is available for each lock; with its aid the lock holders can exchange data. The contents of the lock value block can be modified by owners of a write lock and read by owners of a read lock. In this sense, the lock value block can be regarded as a storage facility distributed across all the processors.
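
The following hypothetical sketch models these access rules: the value block can be written only under one of the write modes (CW, PW, EX) and read under any mode other than null mode. The type names, the functions, and the assumed block size of 16 bytes are inventions for this example; the actual interface is described in [12].

/* Hypothetical model of lock value block access rules; not the DLM interface. */
#include <stdbool.h>
#include <string.h>

typedef enum { NL, CR, CW, PR, PW, EX } dlm_mode;

#define LVB_SIZE 16                               /* assumed size, for illustration */
typedef struct { unsigned char data[LVB_SIZE]; } lock_value_block;

static bool allows_write(dlm_mode m) { return m == CW || m == PW || m == EX; }
static bool allows_read(dlm_mode m)  { return m != NL; }

/* Copy new contents into the value block; permitted for write-lock holders. */
static int lvb_write(lock_value_block *lvb, dlm_mode held, const void *src)
{
    if (!allows_write(held))
        return -1;                                /* not held in a write mode */
    memcpy(lvb->data, src, LVB_SIZE);
    return 0;
}

/* Copy the value block out; permitted for read-lock holders. */
static int lvb_read(const lock_value_block *lvb, dlm_mode held, void *dst)
{
    if (!allows_read(held))
        return -1;                                /* null mode may not access */
    memcpy(dst, lvb->data, LVB_SIZE);
    return 0;
}

int main(void)
{
    lock_value_block lvb = { { 0 } };
    unsigned char out[LVB_SIZE];
    lvb_write(&lvb, PW, "progress-data 1");        /* ok: protected write */
    lvb_read(&lvb, CR, out);                       /* ok: concurrent read */
    return lvb_write(&lvb, CR, out) == -1 ? 0 : 1; /* rejected: read mode */
}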

If a processor belonging to the XCS network crashes, or if MSCF is terminated on the processor, the recovery that is initiated takes into account the reconfiguration measures required for the locks held on that processor. When a task is terminated, lock recovery is performed as part of the processing steps required for the task termination.

The global DLM locks are managed by a HIPLEX MSCF application. The availability of the global DLM locks on an XCS-capable processor therefore presupposes an existing MSCF session. It is consequently sensible to use DLM locks only to protect resources whose availability is likewise linked to an MSCF session.

The interfaces of the DLM are implemented by two DSSM subsystems. The DLMUSER subsystem is responsible for lock handling on the individual processor and receives jobs via the user interface. For lock handling that spans all processors, DLMUSER passes the jobs on, as required, to a second DSSM subsystem, the Node Synchronization Manager (NSM). The NSM is controlled by the XCM and, in addition to the cross-processor lock handling, also carries out lock-related reconfigurations. The NSM is therefore a reconfiguration unit registered with the XCM, i.e. a "registered function".

A detailed description of the DLM is given in the "Executive Macros" manual [12].

Examples

  • Partner monitoring
    Each processor in an XCS network can be assigned an exclusive lock. The "locked" resource in this case is the processor itself, and the lock name can be, for example, the unique processor name. If the other processors in the XCS network then attempt to obtain this lock exclusively, they can assume that the lock-holding (i.e. monitored) processor is active for as long as they neither obtain the lock exclusively nor are informed that a change has occurred. The reason for this is that in the event of a crash of the monitored processor, the DLM releases the lock and assigns it exclusively to another processor. The processor that receives it can then take over the tasks of the crashed processor.

    The processor that receives the partner's exclusive lock can take over the tasks that the two processors are to perform jointly. However, the assignment of the partner lock does not necessarily mean that the partner has failed; it may only have lost its multiprocessor capability. In this case the other processors can use the shared resources of the partner.
    This mechanism must not be used for switchover in conjunction with exclusively assigned resources (e.g. exclusively imported pubsets), as these resources could be destroyed in the event of a communication error with subsequent unloading of MSCF.

  • Synchronization of file accesses
    PAM files can be written to and read from several processors at the same time. Generally speaking, the write accesses must be synchronized. With the help of the DLM, appropriate locks can be set on all the processors so that, for example, only one processor at a time has write access to the file.

  • Parallel batch runs
    In an XCS network with two processors, let us assume that a file is to be created with UPAM on processor 1 in a batch run. A sequential evaluation is to be executed on processor 2 in a second batch run, also with UPAM (i.e. processor 2 performs read processing only). The file creation and the evaluation are to overlap, which is why the evaluating batch run on processor 2 must be informed about the progress of the file creation on processor 1.

    As soon as an appropriate number of blocks has been written on processor 1, the block counter can be passed from processor 1 to processor 2 with the aid of a global lock, using the lock value block. The counter is modified on processor 1 in lock mode "Exclusive" or "Protected write" and read on processor 2 in lock mode "Protected read" or "Concurrent read". The end of this process can be indicated by depositing a special marker in the lock value block. If processor 2 reads a lock value block marked as invalid, it interprets this as an abnormal termination of processing on processor 1.
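
The following sketch simulates this exchange locally, under clearly labeled assumptions: a pthread mutex stands in for the global DLM lock, a shared structure for the lock value block, and two threads for the batch runs on processors 1 and 2. All names are invented for the illustration; the real mechanism uses the DLM interfaces described in [12].

/* Local simulation of the block-counter exchange via a lock value block. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define LVB_END_MARKER (-1L)            /* special marker: creation finished */

typedef struct { long block_count; } lock_value_block;

static lock_value_block lvb = { 0 };
static pthread_mutex_t dlm_lock = PTHREAD_MUTEX_INITIALIZER; /* the global lock */

/* Batch run on processor 1: creates the file and publishes its progress. */
static void *creator(void *arg)
{
    (void)arg;
    for (long n = 1; n <= 5; n++) {
        /* ... write the next batch of PAM blocks with UPAM ... */
        pthread_mutex_lock(&dlm_lock);   /* "Exclusive" or "Protected write" */
        lvb.block_count = n * 100;
        pthread_mutex_unlock(&dlm_lock);
        usleep(1000);
    }
    pthread_mutex_lock(&dlm_lock);
    lvb.block_count = LVB_END_MARKER;    /* deposit the end marker */
    pthread_mutex_unlock(&dlm_lock);
    return NULL;
}

/* Batch run on processor 2: evaluates the blocks written so far. */
static void *evaluator(void *arg)
{
    (void)arg;
    long seen = 0;
    for (;;) {
        pthread_mutex_lock(&dlm_lock);   /* "Protected read" or "Concurrent read" */
        long count = lvb.block_count;
        pthread_mutex_unlock(&dlm_lock);
        if (count == LVB_END_MARKER)
            break;                       /* file creation has finished */
        if (count > seen) {
            printf("evaluating blocks %ld to %ld\n", seen + 1, count);
            seen = count;
        }
        usleep(500);
    }
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, creator, NULL);
    pthread_create(&p2, NULL, evaluator, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}

In the real network, processor 2 would additionally treat a lock value block marked as invalid as an abnormal termination of processing on processor 1, as described above.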