
Summer 2024: Mainframe Modernization WG White Paper

Written By: Swathi Rao, Student at PES University

Hi, I’m Swathi! I’m currently a student studying Computer Science and Engineering at PES University in Bengaluru, India, and I’m excited to share my blog post documenting my experience as an LFX Mentee under the Open Mainframe Project’s Mainframe Modernization Working Group!

I have had the privilege of working under Dr. Vinu Russell N. Viswasadhas, Associate Director of Data Consulting at Kyndryl, who also holds a Doctorate in Computer Science with a concentration in Big Data Analytics from Colorado Technical University. Under his mentorship, I researched artificial intelligence (AI) and machine learning (ML) workflows on mainframe data with the aim of providing an unbiased perspective to decision makers, mainframers and business professionals who are looking to leverage AI/ML for their mainframe data.

What is the Mainframe Modernization Working Group? 

In 2022, the Open Mainframe Project launched the Mainframe Modernization Working Group with the aim of creating a common definition of “mainframe modernization.” The group’s objectives also include developing educational materials catered to diverse audiences and serving as a go-to hub for information on mainframe modernization. 

My Journey from Applicant to Mentee 

My Summer 2024 LFX Mentorship journey started when I applied for the Open Mainframe Project’s “Mainframe Modernization White Paper” Mentorship in early May 2024. The problem statement immediately resonated with my interests, so I applied without a second thought. After being shortlisted and completing a round of interviews, I received my letter of acceptance for the Mentorship program on 29th May 2024!

Watch my final presentation video here:


Redefining Legacy: Mainframe Data meets AI 

Posing the research question: 

“Data is the sword of the 21st century, those who wield it well, the samurai.” – Jonathan Rosenberg, former SVP of products at Google.

In the 2000s, improvements in computer hardware and the availability of high-quality data facilitated the shift of machine learning (ML) algorithms from theory to practical applications. Today, data-driven decision-making, powered by Artificial Intelligence (AI) and ML, relies heavily on the accessibility of quality data. Mainframe computers hold decades’ worth of valuable data and are a goldmine for AI/ML and analytics. However, leveraging this data is challenging due to the distinct architecture of mainframes compared to modern commodity servers.

Thus, we pose the question: how can AI workloads effectively use mainframe data, and what tools are available to facilitate the execution of AI workloads on mainframes or to transfer the data to environments where additional AI tools are accessible?

Approaches to run AI workloads on mainframe data: 

Through meticulous research across several articles, white papers and websites, we were able to find multiple tools that can be used to run AI workloads on mainframe data. They can be broadly classified into three categories:

  • Running AI directly on mainframes. 
  • Running AI outside mainframes without migrating mainframe data (see the query sketch after this list).
  • Running AI workloads on mainframe data outside mainframes. 
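
To make the second category a little more concrete, here is a minimal sketch of reading mainframe-resident data in place over a SQL interface from an external Python process, so that only the result set leaves the platform. It assumes an ODBC driver for Db2 for z/OS is already configured on the external system; the DSN, credentials, table and column names are hypothetical placeholders rather than anything prescribed by the white paper.

    # Minimal sketch: query Db2 for z/OS data in place from an external system.
    # The DSN, credentials, table and column names are hypothetical placeholders.
    import pandas as pd
    import pyodbc

    # Connect through a pre-configured ODBC data source pointing at the mainframe.
    conn = pyodbc.connect("DSN=MAINFRAME_DB2;UID=appuser;PWD=secret")

    # Only the rows needed for scoring cross the network; the data itself
    # stays resident on the mainframe.
    query = (
        "SELECT account_id, amount, merchant_code "
        "FROM transactions WHERE txn_date = CURRENT DATE"
    )
    features = pd.read_sql(query, conn)
    conn.close()

    print(features.head())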

Key Takeaways: 

Running AI on mainframes leverages their strong RAS (Reliability, Availability, and Serviceability) properties, but older systems often lack GPUs, limiting effective inferencing. However, newer mainframes have AI-enabled chips such as the IBM Telum and IBM Telum II. In addition, the s390x architecture’s compatibility with certain ML libraries such as PyTorch shows scope for further improvement.
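
To give a flavour of what on-platform inferencing can look like, below is a minimal sketch of CPU-based PyTorch inference of the kind that could run on a Linux on IBM Z (s390x) environment where PyTorch is available. The model, feature count and batch of inputs are hypothetical placeholders, not tooling prescribed by the white paper.

    # Minimal sketch: CPU-based PyTorch inference, assuming PyTorch is
    # installed for s390x. The model and inputs are hypothetical placeholders.
    import torch
    import torch.nn as nn

    class FraudScorer(nn.Module):
        """A tiny feed-forward network standing in for a real scoring model."""
        def __init__(self, n_features: int = 16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 32),
                nn.ReLU(),
                nn.Linear(32, 1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    model = FraudScorer()
    model.eval()  # inference only; with no GPU available, this runs on CPU

    batch = torch.randn(8, 16)  # a batch of hypothetical transaction features
    with torch.no_grad():
        scores = model(batch)
    print(scores.squeeze().tolist())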

Many organizations transfer data off mainframes or maintain copies for external ML processing, striking a balance between modern AI capabilities and mainframes’ RAS properties. However, this approach can introduce security risks, the risk of stale data and latency, particularly with large datasets.
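
As an illustration of this off-platform pattern, here is a minimal sketch of training a model on data that has already been copied off the mainframe, for example a table unloaded to CSV and transferred to a distributed system. The file name and column names are hypothetical placeholders.

    # Minimal sketch: off-platform training on an exported copy of mainframe data.
    # The CSV file and its columns are hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Data previously unloaded from the mainframe and transferred off-platform;
    # keeping such copies fresh is one of the risks noted above.
    df = pd.read_csv("transactions_export.csv")
    X = df.drop(columns=["is_fraud"])
    y = df["is_fraud"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print(f"Hold-out accuracy: {model.score(X_test, y_test):.3f}")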

Each of these approaches has its strengths and weaknesses. It is up to companies to thoroughly review their requirements and conduct analyses to ensure that they select the most suitable option.

Challenges: 

In the early stages of my research, I had to upskill myself on mainframes, a topic completely new to me. I waded through a lot of research material, white papers and webpages to fully understand the various facets of mainframes. It’s an incredibly vast domain, and I had only three months to complete my research. Hence, I started classifying topics using the MoSCoW (Must have, Should have, Could have, Won’t have) method to prioritize what I needed so that I wouldn’t go off track. Regular meetings with my mentors, Vinu and Bruno, also helped me discuss what I had learnt, understand how those topics fit into the broader mainframe ecosystem and estimate their relevance to the paper.

During my research, I explored various tools for running AI workloads on mainframes but struggled to understand how they all fit together. At this point, I would also like to thank Mr. Ramesh Vishveshwar, Client Architect at IBM, for shedding light on the AI landscape on mainframes, focusing on the various tools and workflows that are used commercially.

Acknowledgements: 

The past three months have been filled with research, learning, scheduled meetings, networking and, of course, writing and rewriting. It feels like just yesterday that we all gathered for the kick-off meeting introducing mentors and mentees. Before I knew it, I was drafting my mid-term blog post (which you can read here) reflecting on what I had learnt, and now I find myself penning my final blog post as an LFX mentee on the successful completion of my mentorship.

I would like to express my gratitude to my mentors, Vinu, Bruno, and Misty, for their exceptional guidance and support throughout the mentorship! Their expertise greatly helped me understand the mainframe domain and pull out the relevant information from all the material I had gathered.

I would also like to thank Aditi for being a wonderful co-mentee. Working together with her in the mentorship working group has been an absolute pleasure. As a team we have gained a deeper understanding of the various facets of mainframe modernization. 

I would also like to thank Mr. Ramesh Vishveshwar, Client Architect at IBM, for giving much-needed clarity with respect to machine learning on mainframes.

I’m also grateful to the Open Mainframe Project Managers, Yarille and Tom, who ensured our smooth onboarding and promptly handled any issues on the non-technical side.

Learning about mainframes has been an eye-opening experience for me. I hope that people who read our papers will appreciate the legacy of these powerful machines. Even as technologies change, these machines remain relevant, and organizations can use the available tools to harness the power of the data residing on them.

Stay tuned here and on the Open Mainframe Project social channels for more mentee blogs and videos.