The Mainframe: how does it work?
Batch processing
Mainframe systems clearly separate real-time applications from batch ones, and it is definitely in batch mode that the Mainframe gives its best (mainframes were not designed as real-time platforms; to handle real-time tasks, somewhat "artificial" mechanisms such as CICS had to be introduced).
Batch applications are sequences of processes (or JOBs) controlled by a procedure written in the JCL language. Once launched, a procedure runs until completion. Commonly, several procedures are run one after another.
Typically these transactions, through the sequence of JOBs (1, n, m, …), transform the database from the initial situation (A) to the final one (Z). Even if the structure looks simple, consider that during these activities an elevated number of MIPS is consumed (once identified and isolated from other system activities, this consumption can be measured accurately).
Partial process migration
"Partial process migration" is a simple strategy that sharply reduces MIPS consumption while having a negligible impact on the IT organization.
In real cases, reductions of up to 50% of MIPS consumption have been achieved in a short time, by means of well-controlled and limited actions. The key point is to address processes that are well defined and can therefore be isolated from the rest of the system. Depending on the architecture, these processes can be found in different parts of the system. During an analysis phase using Caravel Insight, it is easy to identify these isolable subsystem components. Once identified, they can be analyzed in depth to verify their interfaces (if any) with the rest of the system (interfaces with data or with other processes), their characteristics, and how to deal with them.
Typically, among the batch applications, there is a well-identified group of subsystems with these characteristics: procedures whose transactions run during low-activity hours, usually unattended. In most cases, they interface with the rest of the system only through the data.
All the processes involved in the database transformation (from situation A to Z) can then be performed on this external platform, without modifying the Mainframe contents or adding to its MIPS consumption. Once the processes are deployed on the external platform, the mechanism is simple: we copy the Mainframe database in a certain situation (let's say A) to the external server. The processes are then executed on the external server, transforming the data from situation A to Z, and lastly the external database is copied back to the Mainframe. The result in the Mainframe database is the same as we would obtain by using the Mainframe CPU to perform these processes, but without any Mainframe MIPS consumption.
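The copy / transform / copy-back cycle described above can be sketched in Java. This is a minimal, hypothetical illustration: the record format (plain integers) and the two jobs are stand-ins, not real Mainframe artifacts.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the copy / transform / copy-back cycle.
public class OffloadCycle {

    // Runs the job sequence (JOB 1 .. JOB m) on a copy of the data,
    // as the external server would, and returns the final situation Z.
    static List<Integer> runJobs(List<Integer> situationA) {
        // Step 1: copy the Mainframe data to the external platform.
        List<Integer> externalDb = new ArrayList<>(situationA);

        // Step 2: execute the jobs one after another, A -> ... -> Z.
        List<UnaryOperator<Integer>> jobs = List.of(
                x -> x * 10,   // stand-in for JOB 1
                x -> x + 1     // stand-in for JOB n
        );
        for (UnaryOperator<Integer> job : jobs) {
            externalDb.replaceAll(job);
        }
        return externalDb;  // Step 3: this copy goes back to the Mainframe
    }

    public static void main(String[] args) {
        List<Integer> situationA = List.of(1, 2, 3);
        System.out.println(runJobs(situationA));  // [11, 21, 31]
    }
}
```

The essential property is that `situationA` is never modified: the Mainframe side only provides the initial copy and receives the final result, exactly as in the strategy above.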
This strategy offers several advantages:
- Reduction of MIPS consumption
JOB 1, JOB n, JOB m, … are now executed on the Java platform, so the corresponding MIPS are not consumed on the Mainframe CPU.
- Minimally invasive intervention
The general structure is unchanged and nothing on the Mainframe must be modified. In fact, all other processes continue to be executed in the same way.
- Many verification points guarantee an exact conversion and a precise execution
The test and verification phase can be performed by comparing sets of data at different stages. The size and the stage can be established according to test requirements.
- Easily extensible
Once one batch application has been extracted, the same mechanism allows extracting additional ones in the same way.
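The comparison-based verification mentioned in the list above can be sketched as a simple diff over the output data sets. This is a hypothetical illustration: in practice the records would come from files or database extracts, not in-memory lists.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of stage-by-stage verification: the records produced
// by the original Mainframe run and by the converted Java run are compared,
// and the positions of any mismatches are reported.
public class StageComparison {

    // Returns the 1-based positions at which the two data sets differ.
    static List<Integer> diff(List<String> mainframeOut, List<String> javaOut) {
        List<Integer> mismatches = new ArrayList<>();
        int n = Math.max(mainframeOut.size(), javaOut.size());
        for (int i = 0; i < n; i++) {
            String a = i < mainframeOut.size() ? mainframeOut.get(i) : null;
            String b = i < javaOut.size() ? javaOut.get(i) : null;
            if (a == null || !a.equals(b)) {
                mismatches.add(i + 1);
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        List<String> reference = List.of("REC-001", "REC-002", "REC-003");
        List<String> converted = List.of("REC-001", "REC-00X", "REC-003");
        System.out.println(diff(reference, converted));  // [2]
    }
}
```

An empty result after each stage confirms that the converted JOB reproduces the Mainframe output exactly; the size of the data sets and the stages compared can be chosen to match the test requirements.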
Testing the converted system
Extracting processes to be deployed on an external Java system first requires some consideration of performance. In many cases, these processes have a limited time window in which to execute completely on the Mainframe, so the new platform must meet the same requirement.
At first sight, moving from a compiled language such as COBOL to a language running on a virtual machine, such as Java, may seem a step backward that could erode execution speed. Using an Open platform, in principle less powerful than the Mainframe, can also be a cause for concern.
Open platforms, a deeper look
Open platforms now offer an improved cost-to-power ratio. Small, inexpensive systems can benefit from multicore, multithreaded CPUs with sophisticated L1, L2 and L3 cache architectures, large amounts of fast memory and the most efficient disks, all packed at the lowest prices ever. Different techniques can profit from these powerful hardware features.
A first technique to speed up execution is a massive use of caching: everything that can be kept in a cache, avoiding disk access, should be. Open platforms offer huge amounts of fast memory for this purpose.
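The caching idea can be sketched with a plain in-memory map. This is a minimal, hypothetical example; the `diskLookup` function stands in for whatever slow access (disk, database) the real batch process performs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the caching technique: reference data is looked up
// on disk only once, then served from memory on every further access.
public class ReferenceCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> diskLookup;
    int diskReads = 0;  // counts the (slow) disk accesses actually performed

    ReferenceCache(Function<String, String> diskLookup) {
        this.diskLookup = diskLookup;
    }

    String get(String key) {
        // computeIfAbsent invokes the disk lookup only on a cache miss.
        return cache.computeIfAbsent(key, k -> {
            diskReads++;
            return diskLookup.apply(k);
        });
    }

    public static void main(String[] args) {
        ReferenceCache c = new ReferenceCache(k -> "value-of-" + k);
        c.get("CUST-01");
        c.get("CUST-01");  // served from memory, no disk access
        c.get("CUST-02");
        System.out.println(c.diskReads);  // 2
    }
}
```

In a batch iteration that reads the same reference records millions of times, this pattern trades abundant Open-platform memory for a large reduction in I/O.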
Open architectures can also provide huge leverage for gaining speed when adequate programming techniques are used. The ability to run several processes simultaneously opens a path to effective parallelization of converted systems, and Java offers a whole set of facilities to control multiple threads efficiently.
Batch processes, which usually include many iterative steps, can be parallelized. Consider splitting a process into several sub-processes (say, an iteration of 1,000 steps can be reorganized as 10 iterations of 100 steps each). Every small sub-process is then executed in a separate thread (on a different CPU core), multiplying the total throughput of the platform.
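The splitting idea above can be sketched with a standard Java thread pool. This is a hypothetical illustration: the per-step "work" is just a sum, standing in for the real job logic, which would need to be independent across steps for this to apply.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: an iteration of 1,000 steps reorganized as 10
// sub-processes of 100 steps each, executed on a pool of worker threads.
public class ParallelBatch {

    static long run(int totalSteps, int chunks) throws Exception {
        int chunkSize = totalSteps / chunks;
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        List<Future<Long>> parts = new ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            final int start = c * chunkSize;
            parts.add(pool.submit(() -> {
                long partial = 0;
                for (int step = start; step < start + chunkSize; step++) {
                    partial += step;  // stand-in for the real per-step work
                }
                return partial;
            }));
        }
        long total = 0;
        for (Future<Long> part : parts) {
            total += part.get();  // combine the partial results
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(1_000, 10));  // 0 + 1 + ... + 999 = 499500
    }
}
```

Each `Future` carries one sub-process's partial result, so the combining step is trivial; on a multicore CPU the ten chunks run concurrently, which is exactly the throughput multiplication described above.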