
Summer Mentorship 2024: Zebra Plugin for Hitachi Mainframe Storage

September 26, 2024

Written by Krishi Jain, student at DJ Sanghvi College of Engineering

Introduction

Greetings viewers! I’m Krishi Jain, a CS undergrad and musician from Mumbai, India. I’ll do my best to keep this as interesting as I can! I’d love to start by saying that I’ve had a wonderful experience under the Open Mainframe Project’s mentorship with LFX. The most wonderful thing about this mentorship is the invaluable opportunity to actively work and interact with mentors who have over 35 years of experience under their belts. The coolest part is that all of my mentors are as hungry to learn about new technologies and practices as I am, which makes the work we do together as a team 100 times more interesting and enjoyable.

Speaking of the project and my journey, I started my work early, on the 28th of May, a week after I received my acceptance letter from the Open Mainframe Project. That early start was thanks to my primary mentor Joe Carlisle, Master Solution Architect at Hitachi Vantara, who was kind enough to set up the entire project development environment for me and made sure I had all the resources I needed to get comfortable with my project, “Open Mainframe Project – Zebra Plugin for Hitachi Mainframe Storage.” Since I was completely new to the mainframe world, Joe helped me get a huge head start on the “whys” and “hows” of what we were to work on. He gave me exposure to production data and systems and explained how mainframe performance analytics is done at Hitachi Vantara as well as at IBM. Project ZEBRA under Zowe is the first of its kind in the open source world: a parsing engine for raw RMF data that converts raw RMF XML into JSON, which can then easily be consumed by third-party software for mainframe performance analytics. ZEBRA uses Prometheus/MongoDB as a time series database and then routes that data to Grafana to visualize performance metrics.
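To give a flavor of that parsing step, here is a minimal sketch in Python of turning an RMF-style XML report into JSON. This is not ZEBRA’s actual parser, and the element names used here are hypothetical placeholders; real RMF reports are far richer:

```python
# Minimal sketch of the RMF-XML-to-JSON idea, NOT ZEBRA's actual parser.
# The <report>/<row> element names are hypothetical placeholders.
import json
import xml.etree.ElementTree as ET

def rmf_xml_to_json(xml_text: str) -> str:
    """Flatten an RMF-style XML report into a list of JSON records."""
    root = ET.fromstring(xml_text)
    records = []
    for row in root.iter("row"):  # one record per report row
        records.append({child.tag: child.text for child in row})
    return json.dumps(records, indent=2)

sample = "<report><row><metric>CPU</metric><value>87.5</value></row></report>"
print(rmf_xml_to_json(sample))
```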

Although my expected outcome was simply to create the HMAI plugin, I’m happy to work on more features and fine-tuning to make ZEBRA an even more amazing open-source product, as mentioned in my future plans at the end of this blog. I’d love to thank Salisu Ali, one of the creators of ZEBRA, who initially helped me understand the project and guided me before I was accepted into the programme. My secondary mentors, Vincent Terrone, Len Santalucia, and Fernando Zangari, have been a great support throughout the project, always encouraging my work and positively reinforcing my efforts. Special mentions to Yarille Ortiz, Tom Slanda, and Maemalynn Meanor for their encouragement and for making sure mentees like myself are comfortable and have the smoothest experience as part of this wonderful mentorship cohort!

My Project

My project was to create a plugin for Hitachi Vantara’s HMAI (Hitachi Mainframe Analytics Interpreter) data, so that ZEBRA can gather HMAI metric data and offer the same kind of performance analysis for it as it does for RMF data on-prem. With the help of Joe and one of my secondary mentors, Fernando Zangari, I was able to create a prototype of this plugin within just two weeks of solid work; 3,000 lines of code later, here we are. I used MySQL as the on-prem database for HMAI data, from which the data can be fed to Grafana directly through the user’s MySQL database. This also opens the door to adding an archiving feature for data in ZEBRA. Here is how it works: Hitachi Mainframe Analytics Interpreter is a z/OS (mainframe) operating system enablement utility that interprets SMF records produced by the Hitachi mainframe analytics recorder. The mainframe-related Hitachi storage metrics interpreted from those SMF records are used to construct corresponding CSV records. There are six such CSV reports, corresponding to six metrics: CLPR, PORT, MPB, PGRP, MPRANK20, and LDEV. These CSV files are generated and bundled in a directory, and many such directories are generated each day depending on the configuration. My code logs into the FTP server of the specified LPAR using the user’s credentials defined in Zconfig.json, scrapes these directories, and streams the contents of the CSV files directly into the user’s MySQL database, where there are six metric tables corresponding to the six CSV reports. After many iterations, the optimized implementation can load a month’s worth of data, consisting of millions of rows, in a matter of 7-8 minutes. I also developed prototype code using PostgreSQL as the database, which can load the same data in roughly half the time, ~2-3 minutes.
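To make that flow concrete, here is a heavily simplified Python sketch of the scrape-and-stream idea. It is not the plugin’s actual code (the real implementation is far more optimized), and the Zconfig.json key names, FTP paths, and table layout are all assumptions for illustration:

```python
# Simplified sketch of the FTP-scrape-and-stream flow; key names, paths,
# and table layout are hypothetical. The real plugin reads the user's
# credentials from Zconfig.json, as described above.
import csv
import io
import json
from ftplib import FTP

import mysql.connector  # pip install mysql-connector-python

METRICS = ["CLPR", "PORT", "MPB", "PGRP", "MPRANK20", "LDEV"]

with open("Zconfig.json") as f:
    cfg = json.load(f)  # hypothetical key names below

ftp = FTP(cfg["lpar_host"])
ftp.login(cfg["user"], cfg["password"])

db = mysql.connector.connect(host="localhost", user=cfg["db_user"],
                             password=cfg["db_password"], database="hmai")
cur = db.cursor()

# One bundle directory per collection interval (assumes nlst returns
# full paths; some FTP servers return bare names instead).
for directory in ftp.nlst("/hmai/reports"):
    for metric in METRICS:
        buf = io.BytesIO()
        ftp.retrbinary(f"RETR {directory}/{metric}.csv", buf.write)
        rows = list(csv.reader(io.StringIO(buf.getvalue().decode())))
        header, data = rows[0], rows[1:]
        placeholders = ", ".join(["%s"] * len(header))
        # executemany batches the inserts, which is what makes bulk loads fast
        cur.executemany(f"INSERT INTO {metric} VALUES ({placeholders})", data)
    db.commit()

ftp.quit()
db.close()
```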

This is the frontend of the new HMAI plugin.

This feature allows users to quickly view one day’s worth of data and download the individual CSVs to view with other tools.

This is what the new DDS Configuration page looks like with the new HMAI Configuration section.

Sample Grafana dashboards to view data!

Future Plans

While working on this plugin, I also figured out a way to store RMF data on-prem using PostgreSQL. With this system, users can visualize as well as archive their RMF or HMAI data. This matters because the current MongoDB storage setup relies on a third-party workaround plugin to feed data to Grafana, since Grafana’s native MongoDB support requires an enterprise license. With PostgreSQL, the system is faster and 100% in-house, with the added ability to archive months’ worth of data on-prem and automated customisation of which data to keep and for how long. This is super important because Prometheus and MongoDB both have memory constraints. With PostgreSQL on-prem, it’s completely up to the user how much data to hold and archive, giving them full freedom and no memory constraints whatsoever; the infrastructure can easily scale with their available storage. Bundling a few standard Grafana analytics dashboards would make ZEBRA simply plug and play for users, making it even more appealing. I’ve already developed a prototype that works perfectly for the RMF III CACHE report, and I plan to implement the features above soon!
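As a taste of what that configurable retention could look like, here is a small Python sketch against a hypothetical rmf_cache table with a collected_at timestamp column; the table name, column, and 90-day window are all illustrative assumptions, not the prototype’s actual schema:

```python
# Sketch of a user-configurable retention policy; table name, column,
# and retention window are hypothetical assumptions.
import psycopg2  # pip install psycopg2-binary

RETENTION_DAYS = 90  # how long the user chooses to keep data on-prem

conn = psycopg2.connect(host="localhost", dbname="zebra",
                        user="zebra", password="secret")
with conn, conn.cursor() as cur:
    # Drop anything older than the configured window; everything newer
    # stays on-prem and remains queryable from Grafana's PostgreSQL source.
    cur.execute(
        "DELETE FROM rmf_cache "
        "WHERE collected_at < now() - %s * interval '1 day'",
        (RETENTION_DAYS,),
    )
conn.close()
```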


Stay tuned here and to the Open Mainframe Project for more summer mentee blogs and videos.