Legacy Application Modernization


The concept of ‘Application Modernization,’ in one of its possible definitions, is intimately related to ‘migration,’ that is, taking operational software and changing it to a different platform.

On the other hand, it is important to note that the practice of software development has evolved not only at conceptual or methodological levels but also in entirely pragmatic areas such as improving the user experience, which enhances customer retention and satisfaction.

Consequently, in a way, the ultimate goal in application modernization is to harness all the innovation from the new practices of software development and the power of the target technological platform in the service of legacy applications.

Ultimately, the idea is to extend the lifespan of the software, along with all the knowledge that was poured into the development of the application, leveraging the perspective of the latest technical innovations and platforms.

Why do we need to modernize legacy applications?

We start from an undeniable fact: organizations have made significant investments in software and IT infrastructure. That investment cannot be ignored, and some way must be found to protect it.

So, updating legacy applications, leveraging current tools, infrastructure, and languages, is one way to preserve that investment.

An appropriate application modernization strategy will reduce costs by optimizing necessary resources and increasing the reliability of the installation.

It is evident that companies cannot afford to exclude application modernization strategies from their overall IT management strategy.

The reasons keep piling up, and many are subjective, but if we had to name one objective, measurable motive, it would be MIPS consumption.

The reduction of MIPS comes into focus for IT managers when consumption spirals into a situation where high costs and the peculiarities of contracting mechanisms push them to seek alternatives.

Ultimately, the main obstacles in addressing this challenge are the need to balance innovation with stability, and the urgency to adopt emerging technologies without disrupting business operations.

The Example: Managing MIPS

We know that if CPU consumption is allowed to rise significantly, we are forced into tough decisions: evaluating hardware upgrades, either because response times become unacceptable or because resource exhaustion leads to processing errors and we are cornered by abnormal application terminations.

There comes a point where it seems like we are constantly battling increasingly voracious fires, and it becomes difficult to envision scenarios of calm.

In summary, whatever the cause, it is essential to implement a good MIPS management policy.

At this point, a significant part of the organization’s IT department begins to analyze MIPS consumption.

This is the first line of defense. The idea is to stop being reactive, to stop putting out fires, and to start preventing them.

Our MIPS management policy will point out some items to control. According to Tony Shediak’s 2007 conference presentation ‘Performance Tuning Mainframe Applications: It’s Not So Hard,’ the most common problems to manage are:

  • Inefficient use of programming language data types
  • Data conversions caused by unnecessarily mixing data types
  • Inefficient compilations
  • Inadequate VSAM or QSAM buffers
  • Inefficient SQL statements
  • etc.
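The first two items translate directly into any target language. As a minimal sketch in Java (the data, class, and method names here are purely illustrative), the cost of an unnecessary data conversion repeated inside a loop, versus converting once:

```java
// Illustrative only: the same principle the list describes, shown in Java
// rather than COBOL. Re-parsing a value inside the loop repeats the same
// data conversion once per record; hoisting it out does the work once.
public class ConversionCost {
    // Inefficient: the rate arrives as text and is re-parsed on every iteration.
    static double totalSlow(long[] amounts, String rateText) {
        double total = 0;
        for (long amount : amounts) {
            total += amount * Double.parseDouble(rateText); // conversion per record
        }
        return total;
    }

    // Efficient: convert once, reuse the numeric value.
    static double totalFast(long[] amounts, String rateText) {
        double rate = Double.parseDouble(rateText); // single conversion
        double total = 0;
        for (long amount : amounts) {
            total += amount * rate;
        }
        return total;
    }

    public static void main(String[] args) {
        long[] amounts = {100, 200, 300};
        System.out.println(totalSlow(amounts, "1.5")); // 900.0
        System.out.println(totalFast(amounts, "1.5")); // 900.0
    }
}
```

Both methods return the same result; only the number of conversions differs, which is exactly the kind of hidden MIPS cost the list points at.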

The point is that there will come a time when any MIPS management policy faces the wall of money.

What happens when the policy we have adopted indicates that the next step in upgrade costs is no longer viable?

What happens when a management policy is no longer sufficient?

Let’s delve deeper into this concept.

A Roadmap: Let’s Break It Down

When considering and preparing for application migration, it’s possible to start saving on MIPS consumption through low-impact, risk-free strategies.

Here’s a tip: looking inside the Mainframe, in batch processes, there are many opportunities for this.

The general idea is simple:

  1. Move the processes currently performed on the Mainframe to an external system, and then
  2. Restore the processed data back into the Mainframe.
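The two steps above can be sketched in miniature. In this sketch, `exportFromMainframe`, `runConvertedBatch`, and `restoreToMainframe` are hypothetical placeholder names for illustration, not Caravel APIs; real extraction and restoration would move datasets, not in-memory lists:

```java
// A minimal sketch of the 'external processing device' idea:
// export data, process it off the Mainframe, restore the result.
import java.util.List;
import java.util.stream.Collectors;

public class ExternalBatchSketch {
    // Step 1: data extracted from the Mainframe (an in-memory stand-in here).
    static List<String> exportFromMainframe() {
        return List.of("REC1;100", "REC2;200");
    }

    // The converted batch process runs on the external server,
    // consuming no Mainframe CPU.
    static List<String> runConvertedBatch(List<String> records) {
        return records.stream()
                .map(r -> r + ";PROCESSED")
                .collect(Collectors.toList());
    }

    // Step 2: the processed data is restored to the Mainframe
    // (printed here as a stand-in).
    static void restoreToMainframe(List<String> records) {
        records.forEach(System.out::println);
    }

    public static void main(String[] args) {
        restoreToMainframe(runConvertedBatch(exportFromMainframe()));
    }
}
```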

The concept can be described as ‘plugging in an external processing device’ into the Mainframe system.

The most effective way is to convert Mainframe processes to Java, thus having the freedom to run them anywhere.

BASE100’s Caravel technology offers a complete suite for Mainframe to Java conversion, including specialized tools for these ‘low-impact’ approaches, covering everything from analysis and discovery to conversion and the testing phase.

The methodology proposed by Caravel provides a specific certification tool that will ensure that the Mainframe process and the process on the external system produce exactly the same results.
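To make the certification idea concrete, here is a minimal sketch of what ‘exactly the same results’ means in practice: the record stream produced on the Mainframe and the one produced by the converted process must match record for record. This is only an illustration of the concept, not the Caravel certification tool itself:

```java
// Certification sketch: two batch outputs are certified equivalent only
// when every record matches, in the same order, with nothing extra.
import java.util.List;

public class CertifySketch {
    // Returns -1 when both outputs are identical; otherwise the index of
    // the first differing record (or the length of the shorter output).
    static int firstMismatch(List<String> mainframeOut, List<String> externalOut) {
        int n = Math.min(mainframeOut.size(), externalOut.size());
        for (int i = 0; i < n; i++) {
            if (!mainframeOut.get(i).equals(externalOut.get(i))) {
                return i;
            }
        }
        return mainframeOut.size() == externalOut.size() ? -1 : n;
    }

    public static void main(String[] args) {
        List<String> host = List.of("CUST001;1500.00", "CUST002;320.50");
        List<String> converted = List.of("CUST001;1500.00", "CUST002;320.50");
        System.out.println(firstMismatch(host, converted)); // -1: outputs identical
    }
}
```

Reporting the first mismatch, rather than a simple true/false, is what makes such a check useful during the testing phase: it points straight at the record that needs investigation.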

Batch Processing

Mainframe systems have clearly separated real-time applications from batch applications. And, undoubtedly, it’s in batch mode where the Mainframe truly excels.

Batch applications are sequences of processes controlled by a procedure written in JCL language.

Once initiated, the procedure will run until its completion. Typically, several procedures are run in sequence, one after the other.

This processing activity consumes a significant amount of MIPS, and one of the advantages of this separation into independent procedures is that it makes it easy for us to accurately measure the consumption of each of them.

Partial Process Migration

Within application modernization, the possibility of ‘partial process migration’ emerges. It’s a simple strategy that will drastically reduce MIPS consumption while having a minimal impact on the IT organization.

Real cases have shown that a reduction of up to 50% in MIPS consumption can be achieved in a short time, through well-controlled and limited actions.

The key point is to address processes that are well-defined and, therefore, can be isolated from the rest of the system.

Depending on the architecture, these processes can be found in different parts of the system. They are located during an analysis phase using Caravel technology, which facilitates the identification of isolatable subsystem components.

Once identified, a detailed analysis continues to verify the interfaces (if any) with the rest of the systems (interfaces with data or other processes), their characteristics, and how to handle them.

Among batch applications, there is generally a well-identified group of subsystems with these characteristics: procedures that run transactions during low-activity hours, usually unattended.

In most cases, these processes only interact with the rest of the system through data.

MIPS Saving

Once identified, the workload needs to be extracted from the Mainframe.

All these processes offer an opportunity to save MIPS because they can be converted to Java and deployed on an external server (‘the processing device’) without interfering with the rest of the subsystems.

Conversion to Java can be achieved precisely and effectively using Caravel technology.

BASE100’s Caravel offers two independent or combined options:

  • Automatic Conversion and/or
  • Rewriting.

Automatic conversion is a quick and low-risk process, performed using the Caravel Express tool, while rewriting is done with the Caravel Converter tool.

Rewriting has the advantage of producing well-structured 100% pure Java code and the ability to include various enhancements.

Batch Java Processes

All processes involved in transforming the database (from the initial state to the final one) can be performed on this external platform without modifying the Mainframe’s content or incurring Mainframe MIPS consumption.

Once the processes are deployed on the external platform, the mechanism is simple: we copy the Mainframe database in a certain state (let’s call it A) to the external server.

Then, the processes run on the external server, transforming the data from state A to state Z, and finally, the external database is copied back to the Mainframe.

The result in the Mainframe database is the same as what we would achieve using the Mainframe CPU to perform these processes. But without any Mainframe MIPS consumption.
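The A-to-Z mechanism can be sketched as a chain of batch steps, each consuming the state left by the previous one, just as sequenced JCL procedures do, but running on the external server. The step contents below are purely illustrative:

```java
// Sketch of the state-A-to-state-Z transformation chain: each converted
// batch step receives the database state produced by the previous step.
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class BatchChainSketch {
    static List<String> applyChain(List<String> stateA,
                                   List<UnaryOperator<List<String>>> steps) {
        List<String> state = stateA;
        for (UnaryOperator<List<String>> step : steps) {
            state = step.apply(state); // state A -> B -> ... -> Z
        }
        return state;
    }

    public static void main(String[] args) {
        // Two illustrative steps: normalize records, then append a status flag.
        List<UnaryOperator<List<String>>> steps = List.of(
            recs -> recs.stream().map(String::toUpperCase).collect(Collectors.toList()),
            recs -> recs.stream().map(r -> r + ";OK").collect(Collectors.toList())
        );
        System.out.println(applyChain(List.of("rec1", "rec2"), steps));
        // [REC1;OK, REC2;OK]
    }
}
```

Only the final state Z is copied back to the Mainframe; everything in between happens off-host.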

Legacy Application Modernization Strategies

As we often emphasize in this blog, the starting point is a thorough analysis of the candidate software.

The evaluation and inventory of legacy applications should include not only their technical features but also the ROI of applying the modernization itself.

It’s important not to lose sight of a global perspective of the process, as it allows us to detect interdependencies between different departments.

Additionally, determining whether the process can be applied in a single stage or whether a certain gradual approach should be followed is vital to avoid catastrophic failure.

In the case of a gradual approach, for example, studying how module-to-module decoupling affects the overall system’s performance is essential. This enables us to take corrective measures in a timely manner and manage resources and service availability effectively.

In a nutshell, we could identify the following points as critical:

  • Analyze the Mainframe system.
  • Identify isolatable subsystems or processes. The Caravel Insight tool for iSeries or z/OS helps detect isolatable subsystems.
  • Verify the type and number of interfaces. Once identified, isolatable subsystems must be analyzed to locate and describe each interface and its impact on the rest of the system.
  • Convert the selected subsystems to Java. Conversion to Java can be achieved through Caravel Converter or Caravel Express.
  • Implement on an open platform. The resulting 100% pure Java classes can be deployed on any standard platform with Tomcat or JBoss.
  • Complete a testing and verification phase.
  • Implement in production.

The advantages are obvious since batch conversion is one of the safest ways to implement an effective MIPS reduction.

This translates to actual cost savings and establishes a foothold in a world that opens new development and feature possibilities.

The well-planned and coordinated use of different platforms offers a synergy that reveals many virtues previously hidden from Mainframe users.

Key Technologies in Application Modernization

Up until now, we’ve briefly addressed the topic of modernizing legacy applications to leverage the advantage provided by MIPS savings. However, it’s important to highlight that each installation has its own reasons, every team has its specific motivations to initiate an application modernization process.

This is why investigating what the market offers is crucial. ‘What is the best solution for my installation?’ is the essential question.

At BASE100, we help provide a satisfactory answer to that question.

Returning to examples, the menu of technologies upon which we base the modernization of applications is varied. We can mention:

  • Cloud Computing: This is the star destination, and what most application modernization processes refer to. There are public clouds, private clouds, and hybrid clouds (usually local environments interacting with public or private clouds, or both).
  • Containers: This term refers to the method of packaging, deploying, and operating applications and workloads in the cloud. The technology promotes scalability and portability, enhancing the operational efficiency of cloud computing.
  • Microservices: More than a technology in itself, this is an application architecture style. Instead of operating software monolithically, it is decoupled into smaller components, favoring deployments, updates, and interoperation.
  • Automation and Orchestration: Orchestration refers to the interoperation of containers in the network, as well as to software deployments and scaling. Automation is the principle by which orchestration tasks are freed from supervised update processes. It is the recognition that ensuring the independence, security, and sustainability of agile and scalable development teams is increasingly necessary.

Is application modernization just a trend?

Companies invest immense resources in their mission-critical applications. Therefore, the sometimes negative connotation that the term ‘legacy application’ carries needs to be dispelled. Application modernization, far from being a trend, is a way to guarantee and preserve the investment made in a company’s existing application portfolio.

Simply retiring legacy software and starting anew from scratch is not a viable path for most companies. Cost analyses and productivity loss assessments usually reveal risks that are not manageable in most cases.

Application modernization is the most suitable way for most companies to approach migrating to new technologies and leverage innovative tools, architectures, and/or frameworks.

Where is legacy application modernization headed?

It’s evident that the current trend is the utilization of more than one service in the public cloud, primarily aiming for cost optimization, flexibility, and continuous availability.

New computing paradigms of distribution and scalability are more easily tackled from modern applications than from legacy, monolithic, centralized ones.

The trend also points us toward the strategic choice of containers and automation of orchestration as steps to alleviate and optimize workloads.

Companies are investing most heavily in containerizing legacy applications and pushing their architectures toward microservices.


We provide a specific set of tools and products designed to ensure the success of mixed modernization solutions for Mainframe and proprietary midrange hardware.

Interested in this proposal?

Let’s keep moving forward; let’s talk about modernization!

Contact Us
