I've just received a query about my current opinion of the future of the mainframe, and (even today) it is a more interesting question than you might think.
At one level, my opinion is coloured by my memory of all those strategic plans in the 1990s for replacing the mainframe with a few networked PCs. One of my first editors refused to believe that my previous employer (a very large bank with a very strange view of both work/life balance and job satisfaction) ran on a mainframe; he assured me that a Pentium processor was more powerful than any mainframe processor. Obviously, ideas of throughput, job scheduling and effective multiprocessing hadn't filtered through to personal-computer magazine editors, but they were in good company. Lots of companies announced strategic plans for replacing their mainframes, but quite a few of those strategic journeys were never completed once the sheer work throughput mainframes could process reliably sank in. One of the less reliable vendors in the mainframe-replacement business was quite keen on announcing that it had achieved "5 nines availability, apart from planned downtime", which turned out to be rather different from what 5 nines availability meant in the mainframe world, where you could change operating systems without bringing the system down, you didn't need to patch the operating system every week, and unplanned downtime of a few minutes a year worried your vendor. So, my immediate reaction is that the mainframe is very healthy, thank you very much; it has already outlived its predicted death by many years (although I think that a modern System z mainframe is better thought of as an "enterprise cloud server" rather than simply as a "mainframe"); its technology has evolved in line with IT progress generally; and it still runs some of the very biggest workloads (IBM says that 92 of the top 100 worldwide banks, 21 of the top 25 insurance organisations and 23 of the top 25 retailers still rely on their mainframes).
However, this doesn't mean that mainframe replacement is always a mistake (and having a choice is usually welcome). Mainframes are all about high utilisation and efficient use of resources: you need to be running them at 80% or higher utilisation, not at the 5% that might be acceptable for a PC server. If you are only using a small part of a mainframe for a few old applications, you need to think about moving everything off the mainframe (or retiring the old applications) and getting rid of it. Or, possibly, about moving more workload onto the mainframe (see z Linux, below), so you can use it efficiently.
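The utilisation point is really just arithmetic. As a minimal sketch (all cost and capacity figures below are invented purely for illustration, not real pricing), the effective cost of each unit of useful work depends heavily on how hot you run the box:

```python
# Hypothetical illustration: effective cost per unit of useful work
# falls sharply as utilisation rises. All numbers are made up.

def cost_per_useful_unit(annual_cost: float,
                         capacity_units: float,
                         utilisation: float) -> float:
    """Cost of each unit of work actually done, given how busy the box is."""
    return annual_cost / (capacity_units * utilisation)

# A big box run hot vs a small box run cold (illustrative figures only):
mainframe = cost_per_useful_unit(1_000_000, 100_000, 0.80)  # 80% busy
pc_server = cost_per_useful_unit(5_000, 1_000, 0.05)        # 5% busy

print(mainframe)  # 12.5 per unit of useful work
print(pc_server)  # 100.0 per unit of useful work
```

The point is not the specific numbers but the shape of the sum: the nominally cheap server running at 5% utilisation can cost far more per unit of work actually delivered than the expensive box running at 80%.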
There are plenty of tools to help with mainframe migration (you can easily compile, maintain and run COBOL programs on PCs, for example, with Micro Focus tools). One interesting option comes from a South Korean company called TmaxSoft, which claims, with its OpenFrame platform, to provide complete emulations of mainframe CICS transaction processing and the DB2 DBMS on distributed servers. It has plenty of documented customer stories and a pretty impressive technology story but, as with all mainframe-replacement technologies, don't overlook the regression-testing issues. It isn't simply a matter of porting code to a different but compatible platform; if you are moving important (or regulated, or financial) applications, you may need to be able to show, with some level of confidence, that their behaviour is exactly the same as it was on the old platform, and you may not have the test cases and requirements documentation for the mainframe system needed to recreate the required tests and verifications. This may not be an insuperable problem, but don't underestimate the resources you may need for verifying correct behaviour (or for regression testing of previously fixed problems). If you don't have effective configuration management processes, it is always possible that the source code you migrate doesn't correspond exactly with the compiled code being used in production; and at the scale many mainframe applications operate, even a change in rounding behaviour on a new platform may represent considerable sums of money after a few years of operation.
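To make the rounding point concrete, here is a small Python sketch (transaction volumes and the affected fraction are entirely hypothetical) showing how the same nominal amount can round differently depending on whether the arithmetic is done in binary floating point or in exact decimals, and how even one cent of drift per affected transaction scales:

```python
# Hypothetical illustration: the "same" rounding applied via binary floats
# vs exact decimals can disagree by a cent on boundary amounts.
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def decimal_round(value: str) -> Decimal:
    """Round an exact decimal amount to the nearest cent, half up."""
    return Decimal(value).quantize(CENT, rounding=ROUND_HALF_UP)

def float_round(value: str) -> Decimal:
    # The binary float cannot represent 2.675 exactly (it stores ~2.67499...),
    # and Python's round() on floats uses round-half-to-even as well.
    return Decimal(str(round(float(value), 2)))

# One transaction amount that sits on a rounding boundary:
print(decimal_round("2.675"))  # 2.68
print(float_round("2.675"))    # 2.67 -- a cent lower

# A cent of drift, millions of times a day (all figures assumed):
transactions_per_day = 5_000_000
affected_fraction = 0.001      # assume 0.1% of amounts hit a boundary
drift_per_year = transactions_per_day * affected_fraction * 0.01 * 365
print(f"~${drift_per_year:,.0f} per year")  # ~$18,250
```

This is exactly the kind of behavioural difference that regression testing on the new platform needs to catch, and why "the code compiles and runs" is not the same as "the migration is correct".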
Mainframe ownership economics is largely about negotiating a good deal with your vendor (so being aware of mainframe-replacement options may help with your negotiations), which implies that you have a very good knowledge of your actual and expected usage, and of technologies that might let you smooth out peak demands. You don't want to pay for unused mainframe capacity for 11 months of the year just to cope with the Xmas rush; so, if you do run a mainframe, make sure that you really understand the management tools available for it today, its current capabilities and its specialised co-processor options.
Current mainframes have "free" specialised processors you can offload particular tasks to, and they also have the fastest processors available today. Although the mainframe is probably one of the most effective multiprocessor platforms available, it can run single-processor applications (and there are still many applications which don't take full advantage of multiple processors) faster than anything else. It can also run distributed applications natively on built-in blade servers. Don't simply assume that the mainframe is the expensive option; look at the workload that you need to run, and its power and cooling demands, and do the sums. It is also important to compare like with like: it is possible to buy distributed systems with mainframe levels of throughput and reliability these days, but these don't usually have a total cost of ownership (including management and support) comparable with that of the usual commodity Intel server.
One area where I think the latest mainframes may have real traction is in emerging markets that may have to scale up to extremely large throughputs and huge numbers of concurrent users very quickly (think of China). It is interesting that IBM announced the world's first dedicated System z Linux and Cloud Center of Competency in Beijing in June 2014. Linux on System z is the great success story of the modern z mainframe: over 50% of new mainframe accounts run Linux, and there are specialised processors for Linux workloads on z. z can support a lot of Linux virtual machines extremely efficiently (that is, the virtual machines work reliably at machine utilisations far higher than most distributed systems can cope with). IBM has been developing Linux on System z for more than a decade, and today there are over 3,000 certified applications for Linux on System z. IBM is investing heavily in open-source development for the platform, and in the use of mainframes to support "enterprise-strength" cloud platforms, in order to exploit new opportunities in China and other markets; it talks about "Engines of Progress".
So, back to my original query. My considered opinion today is that the mainframe is still very healthy, for the very largest workloads. But it now has new applications too, running Linux on z and supporting cloud platforms. It is really time that we stopped thinking of the mainframe as something fundamentally different from other enterprise-capable servers. It is just another enterprise-strength server, albeit one with levels of reliability, security and throughput that other servers aspire to.
I think the mainframe message should be about freedom; that is, about the availability of technology choices. I think z is a super platform if properly managed and highly utilised (partly because mainframe professionals have an extremely mature attitude to "mainframes in the service of the business", rather than treating them as "toys for techies"), but no-one wants to be at the mercy of a vendor, with no place else to go. So (and I haven't asked), I wouldn't be surprised if IBM sees vendors like TmaxSoft as a key part of its mainframe marketing strategy. People are far more likely to sign up to a mainframe if there's a get-out option should they need it (and, of course, a modern z mainframe can run native AIX on blade servers anyway; I must ask whether TmaxSoft sells "OpenFrame for z" sometime).