
I am a Mainframer: Gerald Mitchell

February 26, 2025

In this episode of the Mainframe Connect podcast’s I Am A Mainframer series, Gerald Mitchell, Chief Testing Tools Architect and Senior Technical Staff Member at IBM, shares his journey in the mainframe industry. From his early days as an IBM intern in 1997 to his current role reshaping testing tools, Gerald offers unique insights into mainframe modernization. He challenges common misconceptions about COBOL and other mainframe technologies, emphasizing that modernization isn’t about abandoning platforms but about optimizing how they are used. Gerald discusses his work on watsonx Code Assistant and the critical importance of validation in the development process. A highlight of the conversation is Gerald’s vision for the future of mainframe technology, focusing on sustainability, AI integration, and bringing testing closer to the code.

Watch Full Episode here:


Transcript:

[Intro Voice]: This is the Mainframe Connect Podcast, brought to you by the Linux Foundation’s Open Mainframe Project. Mainframe Connect includes the I am a Mainframer Series, the Riveting Mainframe Voices Series, and other content exploring relevant topics with Mainframe professionals and offering insights into the industry and technology. This episode is another in the I am a Mainframer Series, exploring the career journeys of Mainframe professionals.

**Steven Dickens**: Hello and welcome. My name is Steven Dickens and you’re joining us here on another episode of the I am a Mainframer Podcast. I’m joined today by Gerald Mitchell from IBM. Hey Gerald, welcome to the show.

**Gerald Mitchell**: Thank you. Thank you for having me.

**Steven Dickens**: So let’s start straight in. Let’s get the listeners and viewers orientated. Tell us a little bit about your role and what you do for IBM.

**Gerald Mitchell**: Sure. So I’m the Chief Testing Tools Architect and a Senior Technical Staff Member at IBM. I work on the IBM Z developer experience for testing: shifting testing left and giving developers a better overall experience working on their applications and programs.

**Steven Dickens**: So the show’s called I am a Mainframer. We on the show get people to sort of explain their story arc and their career arc. I’ll probably ask a question that gives us a lead off there. How long have you worked for IBM?

**Gerald Mitchell**: So I’ve worked at IBM since I was an intern; I believe it was June of 1997. And I actually signed up to be an intern at IBM in 1996, two weeks into my freshman year at Virginia Polytechnic Institute and State University. Go Hokies. So I have been with IBM pretty much since I was able to have a job.

It’s been a great opportunity for me, lots of different experiences over the years. I signed up to be an intern to work on cable modems, and funnily enough, that whole part of the business was sold between the time I signed up in 1996 and the time I actually started. They decided to vacate the building and move everything over just as everybody was arriving.

But it actually turned out to be a great thing for me, because when that happened I had to find a different internship. And that was actually my first real exposure to working with mainframes, because I started working in the communications server group. At the time there was Windows NT and AIX, and of course the mainframe in the portfolio.

So I was working as an intern doing testing, working in build. I also worked on translation: I worked in the translation lab automating things. That was actually my first exposure to REXX. I needed something to automate across a couple of different operating systems, and REXX was the one that worked on everything.

There was a REXX for NT and for OS/2 Warp, if people remember that, as well as for i and Z. So I was able to write REXX programs to change the languages for the applications that were installed for language testing. Because we were testing new versions, I had to roll that out.

So that was actually my first exposure to automated testing, because of the volume of what I was doing: I don’t speak most of the languages that were in the test lab. So I got the chance to work with automation, using REXX, and NetREXX if anybody has ever heard of that, building the automation to run these installs and then switch the languages on each of the machines.

**Steven Dickens**: So that’s interesting. We were talking a little bit off camera about your role. You probably don’t get much opportunity to think back to that first role, but I see a line between what you were doing then and what you’re doing now. Maybe we’ll use that as a way to sort of join the dots between the two.

**Gerald Mitchell**: I’ll say that the general experience was so great because, as an intern, I got to see so many different aspects. I was working in the test lab, so I got to actually see what the testers were doing. But I was also working with the development team, because I needed to get these builds, and with the build team, because if a build wasn’t there or something was wrong, we needed to rebuild, and I would help with that.

Building out the test systems for the translation lab. I actually built the client machines from spare parts. So I really got to do everything.

**Steven Dickens**: Isn’t that what they’d call a DevSecOps platform now?

**Gerald Mitchell**: Yes. Everything. It didn’t have the name at the time, but it was absolutely everything. And that kind of molded what I considered a job, what I considered I was able to do. I didn’t fit myself into one small niche, so I was just able to take opportunities as they came. I was very flexible. I had learned all of these different systems enough to at least log in and do an install, so I could do that on every system you could name. Great experience.

Especially as now I work on building test tools. I’ve got to think like a tester; well, I’ve been a tester. How do I do these installs? Well, I’ve done install work as a job. This was all formative. So even for the things that I learned in school, I had a practical understanding of what I was learning.

So when you learn about computers and memory and where things go, well, I had actually had to fit things into the space allocated on each of these machines, which differed in how they load. So I understood file systems and memory and storage as I was learning about them in college, after I had my internship.

**Steven Dickens**: I think so many students fail to see the applicability of what they’re going to learn. And I have this conversation on the show with so many comp-sci majors who’ve been able to apply those fundamentals and ended up in careers, sort of 40 years later, that still draw on the key fundamentals they were learning in college. So it’s interesting that you say the same.

**Gerald Mitchell**: So actually, I had an interesting application of what I learned at IBM about mainframes when I was in college. Virginia Tech had this class, or program, called Virtual Corporation. They gave you real-world problems, and your job and your grades were to solve those problems. I liked this program; I actually did it more than once.

But one of the ones that I worked on was actually a modernization play for a mainframe. In this case, it was a VTAM system in the veterinary medicine department. I had a lot of real-world experience from IBM, and then I went to this Virtual Corporation and got the external aspects of what it takes to do mainframe modernization.

Which, again, rolls right into what I’ve been doing for a living, where part of what we do with mainframes is getting people to DevOps: having a DevOps mindset, having pipelines, having testing, having the tools. And so I can go back to, well, when I was doing this, what was I missing? When I was doing these evaluations, what was swaying me?

And, you know, a lot of times, even having conversations with people trying to do these modernization plays, I have some understanding of what it is to be on the other side of the desk, where I had these goals that were handed down from on high to do this modernization, and that was it. I wasn’t enabled to know how to get there; I had to figure that out.

And some aspects aren’t necessarily obvious. Like, what do I currently have that I could use? What do I need to go acquire? Right. So I had some tools in place in this case. It was a long time ago; there wasn’t a cloud at the time. So it was, oh, we want to modernize, so we want to get off of this system.

Well, is that really what you want to do? Or do you want to actually look at what the current software on the system is?

So I had a great experience, through IBM and through Virginia Tech, of understanding what everybody has to do in their day-to-day jobs, and I’m able to apply that to this day. I have those conversations constantly, every time we look at how we can do modernization.

And the one thing that is very important is that modernization doesn’t mean getting off the mainframe. A lot of people hear it and think that. But when you look at the hardware capabilities, the level of capability and complexity you already have, it’s not necessarily a matter of modernizing by moving something or adding something. It’s using it in a way that makes sense.

I use a great example for modernization. Say that originally, and I’m going to date myself here, you wrote a program in COBOL. It’s a batch program and it’s got the reporting built in. Do I actually need the reporting built in? No, but that’s how I had to write it, because I had no other tools.

If I wrote it today, I’m not going to have it build a report in COBOL and use my MIPS for that part, because I don’t need that report. I only need a report when I’m asking for a report. I have all the data; I’ll leave the data, and I’ll look for a report if I need it later. I’ll save lots of time and energy and MIPS, and I don’t have a bunch of reports taking up storage. I’ve done the run. I have the data.

**Steven Dickens**: And it might be that you create an open API to the data, or an ETL task where you put it into a data lake.

**Gerald Mitchell**: It might be MQ and something else that reads it off the queue and throws it into Elasticsearch or something. I don’t need that information in hand when I run my batch. I don’t need that as part of my MIPS. And sometimes it’s the majority, right?

I’m doing a lot of work to do the actual calculations I need, and then I spend even more formatting strings for an output that I don’t need right now. It’s important to look at how I’m doing things when I talk about modernization. I don’t want to get rid of a programming language just because it’s so many years old, right? I’ve heard that a lot.
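The batch refactor Gerald describes, doing the calculation and keeping the data while deferring report formatting until somebody actually asks, can be sketched roughly like this. This is a minimal illustration in Python rather than COBOL, and the function names and file layout are invented for the example:

```python
import json
from pathlib import Path

def run_batch(transactions, out_path):
    """Nightly batch: do the calculations and persist the raw results.
    No report formatting happens here; that work is deferred."""
    totals = {}
    for account, amount in transactions:
        totals[account] = round(totals.get(account, 0.0) + amount, 2)
    Path(out_path).write_text(json.dumps(totals))
    return totals

def report_on_demand(data_path):
    """Format a report only when somebody asks, from the saved data."""
    totals = json.loads(Path(data_path).read_text())
    lines = [f"{account}: {amount:12.2f}"
             for account, amount in sorted(totals.items())]
    return "\n".join(lines)
```

The batch run pays only for the calculation; the string formatting is incurred only on the occasions a report is actually requested, which is the saving Gerald describes.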

COBOL, PL/I: they’re still viable. They can still do things. There are data science toolkits and AI that you can use with these languages. They’re very much still vibrant. The problem was that I had code that was 20 years old, not a programming language that was 20 years old. And honestly, if you look at my favorite languages, how long has C been around? How long has Java been around now?

You know, you talk to other people and it’s like, oh, well, these are new languages. No, they’re not. I was in college when Java was having full releases; I actually had a class in Java before 2000, right? So if you look now, yeah, that’s getting on for 30 years old, too. These languages are around because they have specific capabilities and strengths, and they’re not going away, because they have capabilities and strengths that we want to use.

And so it’s important to understand the utilization and the capabilities you have. COBOL batch processing is extremely fast and gives you throughput that you don’t get in a lot of other programming languages, right? And it’s quick at that because of the structure of the language, which is a business-oriented language; you can tell these parts were built for that from the start, right?

**Steven Dickens**: Gerald, I get the feeling I’ve kind of tapped into a rich vein of passion here.

**Gerald Mitchell**: So because we have all of these capabilities in the systems and in the languages, the modernization is in my utilization. How am I using things effectively, right? The parallel processing is unparalleled, right? I can do work on data while it’s encrypted, right? Some of these capabilities just exist nowhere else. Why would I want to move off of it, or move to something that can’t utilize these effectively, because somebody didn’t learn it in school? It’s faster to teach somebody. Two weeks.

**Steven Dickens**: I mean, I think the interesting thing that you mentioned there for me, there’s a couple of things, and I’ll drill down on them. But the interesting thing for me is that you were doing a modernization project 25, 30 years ago. Yeah. There’s a bridge in the UK called the Forth Bridge, and it takes so long to paint that by the time they’ve painted it, they go back and start again. So they’re continually painting this bridge.

And I think the piece that comes to mind whilst you were talking is the same thought. Modernizing your mainframe isn’t a one-time thing. You should always be modernizing your mainframe. You should always be looking at that code, always be evaluating that code. And we’ve got new tools that make that a lot easier: watsonx, tools from other vendors, code explanation.

I think the other thing that was interesting, and I agree with your perspective, is we only talk about the age of COBOL. We never talk about the age of other languages. We never have a conversation in our daily lives to say English is this old and French is that old and Spanish is that old, and therefore we should rethink those languages. We never have that conversation.

The age of COBOL, what is it, 65 years old now, predates System/360 machines. It’s a conversation that has no value for me. You look at Windows: it’s 40 years old. Linux is 30 years old. You talked about Java and C++. We shouldn’t be having a conversation about that. We should be having a conversation about, as you rightly mentioned, how old is that application? Let’s look at it and break it apart; maybe create a microservices architecture for that COBOL application.

If it’s a monolithic application, like you mentioned, and somebody’s still got reports sitting in there that just don’t fit into our modern data architecture, yes, absolutely go modernize that COBOL application. But that doesn’t mean you have to get it off into Java or Python or C++ or something else. It just means getting into your head the methodology of a continual migration and a continual modernization of that code base to something more modern. That’s how we should think.

**Gerald Mitchell**: I’ll use watsonx Code Assistant as an example. It’s broken down into these different phases: understand, refactor, transform, optimize, explanation, validation. I actually work on the validation, so a quick plug for that.

**Steven Dickens**: The validation is absolutely essential.

**Gerald Mitchell**: I’ll go on a rant for a second. Validation is absolutely essential. Anytime you change something, you want to make sure that what you changed was what you expected to change, and no more. This is where testing and shifting left come in. That’s also the modernization play.

**Gerald Mitchell**: If you do nothing else but build unit tests and functional tests and integration tests, you’ve still modernized. Now you have a way to move forward. You have a built-in system to manage your code changes and show that you’ve succeeded. That’s modernization. I now have a way to do repeatability, and I can add on from there.
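The idea that writing tests is itself a modernization step can be sketched with a small characterization test: a test that pins down what existing code does today, so a later refactor or transformation can prove it changed nothing else. The `net_payment` function here is a hypothetical stand-in for a legacy routine, not anything from IBM’s tooling:

```python
import unittest

def net_payment(principal, annual_rate_pct, months):
    """Hypothetical stand-in for a legacy calculation (a loan payment)."""
    monthly = annual_rate_pct / 100.0 / 12.0
    return round(principal * monthly / (1.0 - (1.0 + monthly) ** -months), 2)

class CharacterizationTests(unittest.TestCase):
    """Record what the code does today, including its edge cases, so that
    future changes can be validated against the current behavior."""

    def test_typical_loan(self):
        self.assertEqual(net_payment(10_000, 6.0, 12), 860.66)

    def test_zero_rate_currently_fails(self):
        # Pinning down today's edge-case behavior counts too:
        # a zero rate makes the denominator zero and raises.
        with self.assertRaises(ZeroDivisionError):
            net_payment(10_000, 0.0, 12)

if __name__ == "__main__":
    unittest.main()
```

Once tests like these are checked in alongside the code, every subsequent change, manual or AI-assisted, has a built-in way to show it succeeded.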

It’s the same if I’m doing a plug for Wazi Deploy, doing deployments, or doing dependency-based builds, other plugs. Those are modernization plays. I may have had a way to do this before, but now I have a way to do it systematically. I have automation in it. I can add smarts in.

I’ll actually give a great plug for another part of watsonx Code Assistant that helps all of this and is definitely a modernization play, which is explanation. I want to have an understanding of this code. I have my understand phase in watsonx Code Assistant. But say I’m looking at a piece of code and I don’t know what it does. I don’t want to change it till I know what it does. I can quickly just have it explained: what does this do?

And this is not just for me. Well, it is kind of for me: I’m skilling myself up as part of the modernization play. But I’m also enabling new people to come along, people that haven’t seen this before, to quickly understand things. And I can take that explanation and build documentation out of it.

**Steven Dickens**: I think that’s the key thing for me in this. One of the things you hear bandied around is that nobody understands COBOL, that the people who do understand it are aging out, that we need to move off because people don’t understand COBOL and there’s more Java developers. One of the key things about these code explanation pieces is what they do for a relatively new COBOL programmer. And, you know, Rocket Software do a course where you can learn COBOL in a day.

So I just don’t believe it, and I’ve had multiple college kids on this podcast over the years who’ve learned COBOL. I just don’t see the same barrier that others see. I think you can pick this language up. If you’re a curious, enthusiastic 21-year-old who wants to go pick up a coding language over the summer, there’s as much chance you’ll learn COBOL as you’ll learn Java or Python or Rust or whatever else it is.

**Gerald Mitchell**: Just to that point, during the pandemic a lot of people actually learned COBOL over the time frame people were at home. There were some things, I won’t go into it, that made the news about these COBOL programs. It turned out, you know, to be just a modernization play again, and mostly on the people side, right?

And there were a lot of people taking various COBOL classes. IBM has some; there’s the Open Mainframe Project; Rocket, as you mentioned. Just a lot of material out there, classes out there. There’s lots of education and capability and learning. And the other side of it is that COBOL is not actually that hard to pick up.

**Gerald Mitchell**: Once you understand that you define all your data first and then you’ve got your paragraphs of source, it’s fairly fast to pick up if you’ve learned other programming languages. People forget, again, that this was a business-oriented language. So you have that thought process when you’re writing the code, instead of maybe trying to do object-oriented, though there are ways to do that in COBOL too. It’s just a fast way to do a business process.

So if I think about it as, okay, I need to do these financial calculations and put the result in this data store at the end, it’s move this, do this calculation, and move it into the store, and I’m done. It’s fairly short to do that operation. And it’s no more difficult than any other programming language.

**Steven Dickens**: Yeah, I mean, as I say, I don’t subscribe to that sort of narrative. Yes, we need to be thinking about these tools, but I think people hold this opinion of the mainframe that’s kind of sepia-toned, kind of 1970s, kind of computers the size of rooms. We’re not there anymore. We’re bringing AI tools like watsonx and the BMC AMI DevX tools, and there’s others, to be able to explain the code and move a code base forward.

Well, Gerald, I’ve just noticed the time. We could keep talking about this topic; I think we’re both passionate about it. We talked a little bit about the arc of your career, and we’ve got a lot of younger listeners that tune into this show. One of the questions that I always ask is: what advice would you give to that intern starting at IBM all those years ago? What advice, based on the experience you’ve had over the last, what is it, two, three decades now, would you give to your younger self?

**Gerald Mitchell**: Okay, so there are a couple of threads that go together. The advice is to just take advantage of every opportunity as you’re learning and growing and trying to decide what you want to do in the future. And this is especially true with the mainframe. Go learn things. It doesn’t matter what. Don’t try to box it in: I’m a Node.js programmer, and I’m going to go down that path. Don’t box yourself in, because things change and evolve.

So with what we’re doing now, a lot of the work will be prompt engineering. I’m going to use code generation, and I’m going to use things like watsonx Code Assistant or Ansible Lightspeed to do the work, to do the deployment, to do my code. And I’m going to have to have that understanding. And it doesn’t matter that I’m on the mainframe.

**Gerald Mitchell**: If you look at what we do with watsonx Assistant for Z, where I can orchestrate and tell the system what to do through a chat, I want to have an understanding of what that’s doing so I can give it good instructions. That’s going to be a skill that will carry forward when we talk about the future. So learn that now.

And don’t be afraid to take a look even at things where, oh, well, this doesn’t apply. It might not apply now, but it may in the future. I would say, right now, if I were coming in as an intern, I would make sure I understood how quantum worked. There’s a quantum toolkit; it’s free, it’s out there, you can learn it and use it. It’s one of the technologies everybody’s looking at for the future of computing.

I’d want to understand cryptography, because with everything else going on, I need to make sure everything is secure, including from quantum computers; which, by the way, you know, IBM Z has quantum-safe processing. And then, you know, AI, right? These are technologies that are going to exist, and they may take a different form, but if I have a basic understanding of how they work right now, I can then just quickly learn and update.

And it’s the same with programming languages, and with how the operating system works. Again, z/OS is a really great operating system, so learn how it works. Learn how storage works. Learn how memory works. Because those things will keep applying as you move forward.

**Steven Dickens**: Exactly. That’s great advice. I think the consistent theme, and I’ve been doing this podcast now six to seven years, is be curious, be on a journey, and take on all those opportunities. The other question I ask, Gerald, and I’m really interested given the modernization conversation we’ve been having for the last, what is it, 25 minutes, is: where do you see the platform three, four, five years out? We’ve got a new box coming this year, but think beyond that, think further out. You mentioned quantum, you mentioned some of the other areas we’re innovating on. Where do you see the box and the mainframe platform maybe five years from now?

**Gerald Mitchell**: So I’ll do it like I would do anything else: from the hardware on out, right? So the hardware: if you pay attention to how chips work, everything keeps being able to get smaller and smaller and smaller, right? Obviously that will keep happening. As we understand more of the physics and how we can shrink that, it reduces your energy footprint.

That energy footprint, I think, is going to be key, especially if you look at how much energy AI takes. So having optimizations for energy so that you can have better, smarter AI operating in a way that is sustainable, right? That’s going to be super important.

So I see the future for the hardware going in that direction: making sure that, for the processing power, we have better sustainability, and we’re continuing that approach, right? Make things smarter, put things in hardware where it makes sense, where we have a way to optimize it to actually make it more sustainable, right? And if you look at what IBM is doing now with some of the chips, you can see that that’s where things are headed.

Obviously, throughput improvements will be constant; we’ve done that the whole time, right? Same thing with storage. How does storage work? How fast can storage be for reads and writes, and networks? How can we reduce… well, it goes back to sustainability. How do we reduce the energy the box uses? And we’ve seen the box actually get smaller, right? You can now get it in a rack as opposed to a room.

So I see that trend continuing, being able to make it ever more sustainable. And on the software side, I think you’ll see more of what we have now. We talk about z/OS being so old, but it’s also brand new, right? We put out a new release that has all sorts of new capabilities, technologies that didn’t exist a year ago, right?

**Gerald Mitchell**: So I see that continuing, obviously: making things simpler, right? Easier to work with, more self-explanatory as we add in AI, as we add in smarts, as we work on better ways to do installation and configuration management, and plug in the software we’re working on now through our Z DevOps and AI: again, dependency-based builds, Ansible for Z, and Wazi Deploy, as well as the IDEs. Even being able to do the work and push not only the code, but also the behavior I want with the code for the build, and how it’s supposed to behave on the system.

And then, of course, I’m actually also the serviceability architect, so I see a lot more of the responsiveness of what I’m doing on the system coming back to the forefront. I want to make sure I have my metrics. I want to make sure that when I build software and run the software, it does what I expect it to do, and optimize; which, by the way, watsonx Code Assistant now has an optimize phase. I see more of that coming, right? We have the capabilities to take what you want to do with the system and turn that into what it’s actually doing.

And, again, I work on test. We have Test Accelerator for Z. In the future, I see more test automation moving closer to the code, right? So I treat my tests as code; I have my tests with the code. Following DevOps processes, when I check stuff in, I need to make sure that my unit tests passed, and my integration tests, and so on.

So I want all of those things enabled for the user. Where I see that going is that we’re going to tie all of it to the person that was asking for the work, right? If I’m asking for something, I have my use case. I’m asking my developers to build it and my testers to test it. So all of that has to go through the entire process together.

And so, with the future of AI and what we’re doing on the system, I see Test Accelerator for Z moving forward in testing environments where we’re able to test and give you results quicker. Five years out, right? I see more things working with automation and AI so that we can bring all of those things to somebody brand new; and not only somebody brand new, but even the more advanced people, where they can do these operations and have certainty that things were done correctly.

And that kind of ties serviceability and testing all the way back to everything I learned as an intern. I see that moving into the future: how can we do all of those things in a compact manner that’s sustainable and easy for a person to use and comprehend and move forward with?

**Steven Dickens**: So, I’ve asked that question a bunch of times. That’s probably the most comprehensive answer. I love the way you brought it back.

You’ve been listening to Gerald Mitchell on the I am a Mainframer podcast. I’m your host as always Steven Dickens. Thank you for joining us on the show today. Do all the things to click subscribe and turn on the notifications on whatever your podcast platform is because that’s really good for us. And we’ll see you next time. Thank you very much for watching.

[Outro Voice]: Thank you for tuning in to the Mainframe Connect podcast. And this episode in the I am a Mainframer series. Like what you heard? Subscribe to get every episode. Or watch us online at openmainframeproject.org. Until next time, this is the Mainframe Connect podcast.