Saturday, May 15, 2010
Forty Years of Computing
The computer that we now call the mainframe, the IBM System/360, was born more than forty years ago. So, what’s the big deal? Why would anyone care? What the heck is a mainframe, anyway? The year was 1970, and I had just come out of University after earning a Math degree, hooked on being a Computer Whizkid at the age of 22 – very much to the annoyance of my parents, who believed it was all Sci-Fi with no real value as a profession. So bear with me on a somewhat nostalgic journey that I still remember well and can still recount.
This story really begins more than 40 years ago, in what must seem like the dark ages of computing. Yes, it was called computing back then, a verb turned into a noun. Computing’s origins were largely mathematical, replacing scores of humans doing computations at bulky electromechanical calculators. Data processing was a phrase just coming into vogue; it employed computing to replace what were largely clerical tasks – bookkeeping, accounting, and report generation.
Computing and data processing were largely seen as separate and incompatible kinds of applications. Each was separately conceived, and the procurement of a computer was justified based on that singular need. There were many computers of this singular type being sold as the 1960s unfolded. IBM had at least four, with additional ones for the military. There were different systems for large and small enterprises. Sri Lanka had just three IBM System/360 Model 20s, used by the Central Bank, the Insurance Corporation, and the University of Peradeniya. It was the same with computers from many other manufacturers. None used the same architecture – there seemed to be no reason to do so at the time; these were single-purpose computers.
The idea of sharing components between computer lines seemed impractical or impossible, given the different missions of the responsible organizations. These were semi-custom computers, designed, configured, built, and tuned to the customers’ needs. Any hope for portability or extensibility was at the FORTRAN or COBOL compiler level and, because of the differing word sizes (number of bits in a byte and bytes in a word) being used, programs often were not portable without recoding.
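To make the word-size problem concrete, here is a minimal, hypothetical sketch in C (a modern language used purely for illustration – the machine, sizes, and packing scheme are examples, not any specific 1960s system). A program written around one machine’s word size bakes that assumption into its constants and arithmetic, and it silently breaks when moved to a machine with a different word size:

```c
#include <stdio.h>
#include <limits.h>

/* Hypothetical illustration: code written for a 36-bit-word machine
 * often packed six 6-bit characters into each word. The constants
 * below are correct ONLY for that word size; move the same source to
 * a 32-bit-word machine and the packing silently breaks, which is
 * why programs were recoded rather than ported. */
int main(void) {
    const int WORD_BITS = 36;                     /* baked-in assumption   */
    const int CHAR_BITS = 6;                      /* 6-bit character codes */
    int chars_per_word = WORD_BITS / CHAR_BITS;   /* 6 on a 36-bit box     */

    int host_bits = (int)(sizeof(long) * CHAR_BIT);  /* what we really have */
    printf("code assumes %d chars per %d-bit word; "
           "this host's long is %d bits\n",
           chars_per_word, WORD_BITS, host_bits);
    return 0;
}
```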
While there was some off-purpose use of these computers, i.e., for other than the intended original application (even then, no cycle was to go unused), these programs were run serially (one at a time). The idea of, much less the need for, a generalized multipurpose computer didn’t exist.
Because computers were specialized, so were many of the peripherals and interfaces. What we take for granted today – such as USB and PCI connectivity – just didn’t exist. If you bought a new computer, you had to buy new consoles (mostly hard copy, but later CRTs), tape drives, printers, and even card readers. While there was some standardization in punch card formats and tape media, the coding schemes were often unique.
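As one concrete example of those incompatible coding schemes: the same letter was represented by different bit patterns on different equipment, so data written on one vendor’s tape or cards had to be translated before another machine could use it. The sketch below (in C, for illustration) contrasts the well-known ASCII and EBCDIC codes for a few letters; the table is a toy, not a complete converter:

```c
#include <stdio.h>

/* Minimal illustration: the same letter carried different codes under
 * different schemes, so data moving between machines needed translation
 * tables. The values below are the standard ASCII and EBCDIC codes for
 * the letters 'A' through 'C'. */
int main(void) {
    unsigned char ascii[]  = {0x41, 0x42, 0x43};  /* 'A' 'B' 'C' in ASCII  */
    unsigned char ebcdic[] = {0xC1, 0xC2, 0xC3};  /* 'A' 'B' 'C' in EBCDIC */

    for (int i = 0; i < 3; i++)
        printf("letter %c: ASCII 0x%02X vs EBCDIC 0x%02X\n",
               'A' + i, ascii[i], ebcdic[i]);
    return 0;
}
```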
Worse yet were the control programs (operating systems). Each was different, and closely tied to the underlying hardware architecture. There was no reason for them to be the same across a range of computers and different geographies. Operators were specially trained for the computer that they would keep running. In many ways, they were the heroes of the day, using their intelligence to overcome all of the hardware and software shortcomings and programming errors. There were no standard rules of the road, so to speak. Best practices were developed on the job for each computer.
In retrospect, the era prior to System/360 was the Primitive Age of Computing. Computer development was a “tribal affair”, and there was little interaction between tribal communities. To summarize, computers were:
1. Special purpose, for either scientific or commercial computing,
2. Designed for a 4-5 year product sales life, maybe including a mid-life kicker,
3. With no concept of upgrading to a larger system within the “family”,
4. With one-of-a-kind architectures and technologies,
5. With parts that were not interchangeable,
6. With proprietary peripherals and interfaces (even from the same vendor),
7. With unique operating systems,
8. Capable of running only one program at a time,
9. Requiring a lot of tending by specially trained operators and vendor support personnel, who had to be quite knowledgeable,
10. Built with no standards for computing or principles of operations, and
11. Prone to grinding to a halt at any minor failure.
This is the backdrop for a new era of computing that began at IBM and resulted in the System/360 in 1964.
Hindsight is wonderful. You get to look at what happened, what evolved, and, equally important, what didn’t, and you try to analyze what wisdom you would have brought to that prior time. It’s easy to jump to the “right” conclusion.
On April 7, 1964, the System/360 was announced, with much fanfare, and to the total astonishment of IBM’s competitors, who were struggling to keep up with IBM’s prior offerings. The first Model 40 was shipped in April of the following year.
It’s too easy just to think of a mainframe as a box, and of current state-of-the-art servers as the great-great-grandchildren of the original mainframe box, the System/360. It is very hard for most “real folks” who exist outside of the data center, maybe somewhere in the application or end-users’ world, to see the intelligence that resides in the box – the operating systems, middleware, scheduling and management software, and other utilities that bring the hardware to life. It’s easier to personify the box – and love it or hate it – as in “I really hate my PC”. It’s probably not the PC that you hate, but rather the operating system (such as a version of Windows), or the middleware (such as a browser), or an application (such as a specialized search engine), or a utility (such as anti-virus protection), or some combination of these, that brings out your ire. When it comes to a server, especially one capable of running many workloads simultaneously, the less you see of it, the better. And the more automated it is, in terms of self-management and optimization, the better.
So it is not surprising that most people think they know what a mainframe is, but are unable to name a mainframe operating system or hypervisor (a controller for virtual machine images). That’s too bad, because these behind-the-curtain software stacks (collections) really put the capabilities onto the hardware, and not vice versa. You might ask, then: why can’t a mainframe operating system, with all of its goodness, run on a PC? In fact, it can, and many of the other traditional mainframe vendors have chosen to emulate the original instruction set of their mainframe on a commodity-chip-based server. IBM, too, has done this in the past, but the performance was just too constrained for the on-demand workloads of the 21st century. It was easier, and more efficient, to put Linux or Java onto the mainframe hardware than to put the mainframe operating environment on commodity hardware. The fact that the oh-so-modern Java and Linux run so well on a traditional IBM mainframe (i.e., many sessions running under VM as a hypervisor) points to the inherent value of this relatively unknown class of mainframe control software.
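For a feel of why software emulation of one instruction set on another chip is so costly, here is a toy fetch-decode-execute loop in C. The two-register machine and its opcodes are invented for illustration; the point is that every single guest instruction costs the host a fetch, a decode branch, and an execute step – many host instructions apiece:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy fetch-decode-execute loop for an imaginary 2-register machine.
 * Opcodes and the machine itself are invented for illustration. */
enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void) {
    uint8_t program[] = { OP_LOADI, 0, 40,   /* r0 = 40      */
                          OP_LOADI, 1, 2,    /* r1 = 2       */
                          OP_ADD,            /* r0 = r0 + r1 */
                          OP_PRINT,          /* print r0     */
                          OP_HALT };
    int32_t reg[2] = {0, 0};
    size_t pc = 0;

    for (;;) {
        uint8_t op = program[pc++];          /* fetch  */
        switch (op) {                        /* decode */
        case OP_LOADI: { uint8_t r = program[pc++];   /* execute */
                         reg[r] = program[pc++]; break; }
        case OP_ADD:   reg[0] += reg[1]; break;
        case OP_PRINT: printf("r0 = %d\n", reg[0]); break;
        case OP_HALT:  return 0;
        }
    }
}
```

Real emulators are far more sophisticated than this sketch, but the per-guest-instruction overhead is inherent to the approach, which is why native Linux and Java on mainframe hardware proved the more efficient path.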
There have been a number of lines of operating systems for IBM’s mainframes – from the original OS/360 (with its MFT and MVT variants) through OS/VS1 and OS/VS2 to MVS to OS/390 to z/OS. This lineage carries the crown jewels of IBM’s Mainframe Empire. Almost all of the black magic that other platforms’ operating systems hope to deliver under the banner of “mainframe-like” descends from this lineage. But there are other family lines in IBM’s royal family that have given us virtual machines and hypervisor control (VM and z/VM), scaled-down functionality with greater operational simplicity (the original DOS to VSE to z/VSE), time-sharing (TSS), large-scale transaction processing (TPF – primarily for airline reservation systems), and several others developed outside of IBM (including MTS – the Michigan Terminal System – from my college and early professional days).
When you add IBM’s mainframe-originated database products (IMS and DB2) and transaction processing system (CICS) – all still in heavy, mission-critical use today by the largest enterprises – to the rich history of mainframe operating systems, then you really begin to understand why the last forty years are not just about mainframe hardware. It’s the whole mainframe offering, now including open-systems middleware like WebSphere and MQSeries, plus a strong heritage of customer support and service, that has moved the mainframe to its royal status.