Sunday, September 12, 2010

GENERATIONS OF COMPUTERS


The PC sitting on your desk is, in many respects, a direct descendant of ENIAC-inspired research, including the stored-program concept. Of course, your computer is thousands of times faster and thousands of times less expensive than its room-filling, electricity-guzzling predecessors. In a PC, the "computer" is the microprocessor chip, which is about the size of a postage stamp and consumes less energy than one of the desk lamps in ENIAC's operating room. How was this amazing transformation achieved? Today's computers did not emerge from a gradual, evolutionary process, but from a series of technological leaps, each made possible by major new developments in both hardware and software. To describe the stage-by-stage development of modern computing, computer scientists and historians speak of computer generations. Each generation is characterized by a certain level of technological development. Some treatments of this subject assign precise dates to each generation, but this practice overstates the clarity of the boundary between one generation and the next.

The First Generation (1950s)
Until 1951, electronic computers were the exclusive possessions of scientists, engineers, and the military. No one had tried to create an electronic digital computer for business, and it wasn't much fun for Eckert and Mauchly, the first to try. With Remington Rand's financial assistance, Eckert and Mauchly delivered the first UNIVAC to the U.S. Census Bureau in 1951. UNIVAC gained fame when it correctly predicted the winner of the 1952 U.S. presidential election, Dwight Eisenhower, and computers have been used to predict the winners in every presidential election since.

From today's perspective, first-generation computers are almost laughably primitive. Input came from punched cards, although UNIVAC could also accept input on magnetic tape. Power-hungry vacuum tubes provided the processing circuitry. The problem with vacuum tubes was that they failed frequently, so first-generation computers were down (not working) much of the time. For all the limitations of first-generation technology, UNIVAC was a much more modern machine than ENIAC. Because it used fewer vacuum tubes than ENIAC, it was far more reliable. It employed the stored-program concept, provided a supervisory typewriter for controlling the computer, and used magnetic tapes for virtually unlimited storage. Because the stored-program feature enabled users to run different programs, UNIVAC is considered the first successful general-purpose computer. A general-purpose computer can be used for scientific or business purposes, depending on how it is programmed.


Although the stored-program concept made first-generation computers easier to use, they had to be programmed in machine language, which consists entirely of 0s and 1s because electronic computers use the binary numbering system. People often find binary numbers difficult to read. Moreover, each type of computer has a unique machine language, which communicates directly with the processor's instruction set, the list of operations the processor is designed to carry out. Because machine language was so difficult to work with, only a few specialists understood how to program these early computers.
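To see why binary notation is so hard for people to read, it helps to look at a few values written in it. The short Python snippet below (a modern language used purely for illustration; these bit patterns are ordinary numbers, not the machine instructions of any particular processor) prints some familiar decimal values in binary form:

```python
# Illustration only: how familiar decimal values look in binary notation.
# These bit patterns are plain numbers, not real instructions for any processor.
for value in [5, 9, 42, 255]:
    print(f"decimal {value:3d}  =  binary {value:08b}")

# Output:
# decimal   5  =  binary 00000101
# decimal   9  =  binary 00001001
# decimal  42  =  binary 00101010
# decimal 255  =  binary 11111111
```

An entire program written this way is nothing but page after page of such digit strings, which is why a single mistyped 0 or 1 was so easy to make and so hard to find.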

In 1953, IBM announced its first commercial computer, the 701, but it wasn't popular because it didn't work with IBM's own punched-card equipment. The 701 was quickly followed by the highly successful (and more user-friendly) IBM 650, which interfaced with the most widely used punched-card technology in the world. Thanks to IBM's aggressive sales staff, the company sold over a thousand 650s in the first year of the computer's availability.


The Second Generation (Early 1960s)
First-generation computers were notoriously unreliable, largely because the vacuum tubes kept burning out. To keep the ENIAC running, for example, students with grocery carts full of tubes were on hand to change the dozens that would fail during an average session. But a 1947 Bell Laboratories invention, the transistor, changed the way computers were built, leading to the second generation of computer technology. A transistor is a small electronic device that, like the vacuum tube, can be used to control the flow of electricity in an electronic circuit, but at a tiny fraction of the weight, power consumption, and heat output of vacuum tubes. Because second-generation computers were built with transistors instead of vacuum tubes, they were faster, smaller, and more reliable than their first-generation predecessors.

Second-generation computers looked much more like the computers we use today. Although they still used punched cards for input, they had printers, tape storage, and disk storage. In contrast to the first generation's reliance on cumbersome machine language, the second generation saw the development of the first high-level programming languages, which are much easier for people to understand and work with than machine languages. A high-level programming language enables the programmer to write program instructions using English-sounding commands and Arabic numerals. Also, unlike assembly language, a high-level language is not machine-specific.

This makes it possible to use the same program on computers produced by different manufacturers. The two programming languages introduced during the second generation, Common Business-Oriented Language (COBOL) and Formula Translation (FORTRAN), remain among the most widely used programming languages even today. COBOL is preferred by businesses, while FORTRAN is used by scientists and engineers.

A leading second-generation computer was IBM's fully transistorized 1401, which brought the mainframe computer to an increasing number of businesses. (A mainframe computer is a large, expensive computer designed to meet all of an organization's computing needs.) The company shipped more than 12,000 of these computers. A sibling, the 1620, was developed for scientific computing and became the computer of choice for university research labs.

In business computing, an important 1959 development was General Electric Corporation's Electronic Recording Machine Accounting (ERMA), the first technology that could read special characters. Banks needed this system to handle the growing deluge of checks. Because ERMA digitized checking account information, it helped lay the foundation for electronic commerce (e-commerce).

In 1963, another important development was the American Standard Code for Information Interchange (ASCII), a character set that enables computers to exchange information and the first computer industry standard. Although ASCII didn't have much of an impact for 15 years, it would later help to demonstrate the importance of standardization to industry executives (a short illustration of the idea appears at the end of this section).

In 1964, IBM announced a new line of computers, called System/360, that changed the way people thought about computers. An entire line of compatible computers (computers that could use the same programs and peripherals), System/360 eliminated the distinction between computers designed primarily for business and those designed primarily for science. The computer's instruction set was big enough to encompass both uses.
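To make the idea of a shared character code concrete, here is a minimal sketch in Python (a modern language used purely for illustration; Python's built-in ord and chr functions report standard code points, which for these characters match the ASCII values):

```python
# ASCII assigns a standard number to each character, so any computer
# that follows the standard interprets the same codes as the same text.
message = "IBM 360"
codes = [ord(ch) for ch in message]      # characters -> ASCII code numbers
print(codes)                             # [73, 66, 77, 32, 51, 54, 48]
print("".join(chr(c) for c in codes))    # numbers back to the text "IBM 360"
```

Because every vendor that adopted the standard agreed that, say, code 73 means "I", text prepared on one manufacturer's machine could be read on another's, which is exactly the kind of interoperability the early industry lacked.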

The Third Generation (Mid-1960s to Mid-1970s)
It's possible to separate the first and second computer generations on neat, clean technological grounds: the transition from the vacuum tube to the transistor. The transition to the third generation isn't quite so clear-cut because many key innovations were involved.

One key innovation was timesharing. Early second-generation computers were frustrating to use because they could run only one job at a time. Users had to give their punched cards to computer operators, who would run their program and then give the results back to the user. This technique, called batch processing, was time-consuming and inefficient. In timesharing, however, the computer is designed so that it can be used by many people simultaneously. They access the computer remotely by means of terminals, control devices equipped with a video display and keyboard. In a properly designed timesharing system, users have the illusion that no one else is using the computer.

In the third generation, the key technological event was the development of computers based on the integrated circuit (IC), which incorporated many transistors and electronic circuits on a single wafer or chip of silicon. Invented by Jack St. Clair Kilby in 1958 (and, independently, by Robert Noyce shortly afterward), integrated circuits promised to cut the cost of computer production significantly because ICs could duplicate the functions of transistors at a tiny fraction of a transistor's cost. The earliest ICs, using a technology now called small-scale integration (SSI), could pack 10 to 20 transistors on a chip. By the late 1960s, engineers had achieved medium-scale integration (MSI), which placed between 20 and 200 transistors on a chip. In the early 1970s, large-scale integration (LSI) was achieved, in which a single chip could hold up to 5,000 transistors.
Integrated circuit technology unleashed a period of innovation in the computer industry that is without parallel in history. By the second generation, scientists knew that more powerful computers could be created by building more complex circuits. But because these circuits had to be wired by hand, these computers were too complex and expensive to build. With integrated circuits, new and innovative designs became possible for the first time.
With ICs on the scene, it was possible to create smaller, inexpensive computers that more organizations could afford to buy. Mainframe computer manufacturers such as IBM, however, did not perceive that this market existed. In the first of two key events that demonstrated the inability of large companies to see new markets, the mainframe computer manufacturers left the market for smaller computers open to new, innovative firms. The first of these was Digital Equipment Corporation (DEC), which launched the minicomputer industry. (A minicomputer is smaller than a mainframe and is designed to meet the computing needs of a small- to mid-sized organization or a department within a larger organization.) DEC's pioneering minicomputers used integrated circuits to cut down costs. Capable of fitting in the corner of a room, the PDP-8 (a 1965 model) did not require the attention of a full-time computer operator. In addition, users could access the computer from different locations in the same building by means of timesharing. This minicomputer's price tag was about one-fourth the cost of a traditional mainframe. For the first time, medium-sized companies (as well as smaller colleges and universities) could afford computers.
By 1969, so many different programming languages were in use that IBM decided to unbundle its systems and sell software and hardware separately. Before that time, computer buyers received software that was "bundled" (provided) with the purchased hardware. Now buyers could obtain software from sources other than the hardware manufacturer, if they wished. This freedom launched the software industry. The minicomputer industry strongly promoted standards, chiefly as a means of distinguishing its business practices from those of mainframe manufacturers. In the mainframe industry, it was a common practice to create a proprietary architecture (also called a closed architecture) for connecting computer devices. In a proprietary architecture, the company uses a secret technique to define how the various computer components connect.

Translation? If you want a printer, you have to get it from the same company that sold you the computer. In contrast, most minicomputer companies stressed open architecture. In open architecture designs, the various components connect according to nonproprietary, published standards. Examples of such standards are the RS-232C and Centronics standards for connecting devices such as printers.

The Fourth Generation (1975 to the Present)
As the integrated circuit revolution developed, engineers learned how to build increasingly more complex circuits on a single chip of silicon. With very-large-scale integration (VLSI) technology, they could place the equivalent of more than 5,000 transistors on a single chip—enough for a processing unit. Inevitably, it would occur to someone to try to create a chip that contained the core processing circuits of a computer.
In the early 1970s, an Intel Corporation engineer, Dr. Ted Hoff, was given the task of designing an integrated circuit to power a calculator. Previously, such circuits had to be redesigned every time a new model appeared. Hoff decided that he could avoid costly redesigns by creating a tiny computer on a chip. The result was the Intel 4004, the world's first microprocessor. A microprocessor chip holds the entire control unit and arithmetic-logic unit of a computer. Compared with today's microprocessors, the 4004 was a simple device (it had about 2,300 transistors). The 4004 was soon followed by the 8080, and the first microcomputers, computers that used microprocessors for their central processing unit (CPU), soon appeared. (The central processing unit processes data.) Repeating the pattern in which established companies did not see a market for smaller and less expensive computers, the large computer companies considered the microcomputer nothing but a toy. They left the market to a host of startup companies. The first of these was MITS, a New Mexico-based company that marketed a microcomputer kit. This microcomputer, called the Altair, used Intel's 8080 chip.
In the mid-1970s, computer hobbyists assembled microcomputers from kits or from secondhand parts purchased from electronics suppliers. However, two young entrepreneurs, Steve Jobs and Steve Wozniak, dreamed of creating an "appliance computer." They wanted a microcomputer so simple that you could take it out of the box, plug it in, and use it, just as you would use a toaster oven. Jobs and Wozniak set up shop in a garage after selling a Volkswagen for $1,300 to raise the needed capital.

They founded Apple Computer, Inc., in April 1976. Its first product, the Apple I, was a processor board intended for hobbyists, but the experience the company gained in building the Apple I led to the Apple II computer system.

The Apple II was a huge success. With a keyboard, monitor, floppy disk drive, and operating system, the Apple II was a complete microcomputer system, based on the MOS Technology 6502 microprocessor. Apple Computer, Inc., soon became one of the leading forces in the microcomputer market, making millionaires out of Jobs, Wozniak, and other early investors. The introduction of the first electronic spreadsheet software, VisiCalc, in 1979 helped convince the world that these little microcomputers were more than toys. Still, the Apple II found its greatest market in schools and homes, rather than in businesses.

In 1980, IBM decided that the microcomputer market was too promising to ignore and contracted with Microsoft Corporation to write an operating system for a new microcomputer based on the Intel 8088. (An operating system is a program that integrates and controls the computer's internal functions.) The IBM Personal Computer (PC), with a microprocessor chip made by Intel Corporation and a Microsoft operating system called MS-DOS, was released in 1981. Based on the lessons learned in the minicomputer market, IBM adopted an open architecture model for the PC (only a small portion of the computer's built-in startup code was copyrighted). IBM expressly invited third-party suppliers to create accessory devices for the IBM PC, and the company did not challenge competitors who created IBM-compatible computers (also called clones), which could run any software developed for the IBM PC. The result was a flourishing market, to which many hardware and software companies made major commitments.

IBM's share of the PC market soon declined. The decline was partly due to stiff competition from clone makers, but it was also due to IBM management's insistence on viewing the PC as something of a toy, used chiefly as a means of introducing buyers to IBM's larger computer systems. Ironically, thanks to IBM's reputation among businesses, the IBM PC helped to establish the idea that a PC wasn't just a toy or an educational computer, but could play an important role in a business.
The Apple II and IBM PC created the personal computer industry, but they also introduced a division that continues to this day. Because software must be tailored to a given processor's instruction set, software written for one type of machine cannot be directly run on another type. Apple chose the 6502 and, later, Motorola processors for its computers, while IBM chose Intel. Today's PCs use advanced Intel microprocessors; the Apple II's successor, the Macintosh, uses PowerPC chips developed by IBM and Motorola.
Why were the Apple II and IBM PC so successful? Part of the reason lay in the lessons taught by the minicomputer industry. Computer buyers don't like it when manufacturers use proprietary protocols in an attempt to force them to buy the same brand's accessories. Both the Apple II and the IBM PC were open architecture systems that enabled users to buy printers, monitors, and other accessories made by third-party companies. Although an open-architecture strategy loses some business initially, in the end it benefits a company because it promotes the growth of an entire industry focused around that company's computer system. As more software and accessories become available, the number of users grows, and so do the profits.

The first microcomputers weren't easy to use. To operate them, users had to cope with the computer's command-line user interface. (A user interface is the means provided to enable users to control the computer.) In a command-line interface, you must type commands to perform such actions as formatting a disk or starting a program. Although the Apple II and IBM PC were popular, computers would have to become easier to use if they were to become a common fixture in homes and offices. That's why the graphical user interface (GUI) was such an important innovation.
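To get a feel for what a command-line interface demands of the user, consider this toy sketch in Python. The command names and responses here are invented purely for illustration; they are not the commands of any real operating system:

```python
# Toy command-line interface: the user must remember and type exact
# command words; there are no menus or icons to click.
def run_command(line: str) -> str:
    command, _, argument = line.strip().partition(" ")
    if command == "list":
        return "report.txt  budget.dat  letter.txt"   # pretend directory listing
    elif command == "type" and argument:
        return f"(contents of {argument} would appear here)"
    elif command == "quit":
        return "goodbye"
    return f"Bad command or file name: {command!r}"

print(run_command("list"))
print(run_command("type report.txt"))
print(run_command("copy a.txt b.txt"))   # unknown command -> error message
```

The point is that the burden of remembering the exact command words falls entirely on the user, which is precisely the problem the graphical user interface was designed to solve.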
The first GUI was developed at Xerox Corporation's Palo Alto Research Center (PARC) in the 1970s. In a graphical user interface, users interact with programs that run in their own sizeable windows. Using a mouse (also developed at PARC), they choose program options by clicking symbols (called icons) that represent program functions. Within the program's workspace, users see their document just as it would appear when printed on a graphics-capable printer. To print these documents, PARC scientists also developed the laser printer.
It's difficult to overstate the contribution that PARC scientists made to computing. Just about every key technology that we use today, including Ethernet local area networks, stems from PARC research. But Xerox Corporation never succeeded in capitalizing on PARC technology, repeating a theme that you've seen throughout this module: big companies sometimes have difficulty perceiving important new markets.
The potential of PARC technology wasn't lost on a late-1970s visitor, Apple Computer's Steve Jobs. Grasping instantly what the PARC technology could mean, the brilliant young entrepreneur returned to Apple and bet the company's future on a new, PARC-influenced computer called the Macintosh. In 1984, Apple Computer released the first Macintosh, which offered all the key PARC innovations, including on-screen fonts, icons, windows, mouse control, and pull-down menus. Apple Computer retained its technological leadership in this area until Microsoft released an improved version of Microsoft Windows in the early 1990s.

Windows is designed to run on IBM-compatible computers, which are far more numerous and generally less expensive than Macintoshes. Also showing the influence of PARC innovations, Windows is now the most widely used computer user interface program in the world.


Apple Computer's Macintosh was the first commercially successful personal computer to offer a PARC-influenced graphical user interface.


Although fourth-generation hardware has improved at a dizzying pace, the same cannot be said for software. Throughout the fourth generation, programmers have continued to use high-level programming languages. In fact, COBOL, which dates to the dawn of the second generation, is still one of the most widely used programming languages in the world. Even with high-level languages, software development remains inefficient, time-consuming, and prone to error. In short, software (not hardware) has slowed the development of the computer industry, at least until very recently.

A Fifth Generation of Computers?
If there is a fifth generation, it has been slow in coming. After all, the last one began in 1975. For years, experts have forecast that the trademark of the next generation will be artificial intelligence (AI), in which computers exhibit some of the characteristics of human intelligence. But progress towards that goal has been disappointing.

Technologically, we're still in the fourth generation, in which engineers are pushing to see how many transistors they can pack on a chip. This effort alone will bring some of the trappings of AI, such as a computer's capability to recognize and transcribe human speech. Although fourth-generation technology will inevitably run into physical barriers, engineers do not expect to encounter these for many years (perhaps decades).

What appears to truly differentiate the late 1990s from previous years is the rocket-like ascent of computer networking, at both the LAN and WAN levels. Many new homes now include local area networks (LANs) to link the family's several computers and provide all of them with Internet access. At the WAN level, the Internet's meteoric growth is creating a massive public computer network of global proportions, and it has already reached close to 50 percent of U.S. households.

Computer networking itself dates back to the third generation, when the first networking standards were developed. Since the late 1960s, the U.S. Advanced Research Projects Agency (ARPA) had supported a project to develop a wide area network (WAN), a computer network capable of spanning continents. This project created a test network, called the ARPANET, that connected several universities holding Defense Department research contracts; Vinton Cerf, who later co-developed the Internet's core protocols, was among its key contributors. The outcome of this project, the Internet, would later rock the world. Interestingly, the ARPANET proved a point that's been seen throughout the history of computing: innovators often cannot guess how people will use the systems they create. ARPANET was designed to enable scientists to access distant supercomputers. Most users, however, viewed it as a communications medium. They developed real-time chatting, electronic mail, and newsgroups. The Internet continues to play an important social role for users.

In 1973, work began on the Internet protocols (also called TCP/IP), the standards that enable the Internet to work; the ARPANET did not fully adopt them until 1983. Coincidentally, also in 1973, Bob Metcalfe and other researchers at Xerox Corporation's Palo Alto Research Center (PARC) developed the standards for a local area network (LAN), a direct-cable network that could tie together all the computers in a building. Called Ethernet, these standards are now the most widely used in the world.

The Five Generations of Computers
The history of computer development is often discussed in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. Read about each generation and the developments that led to the devices we use today.
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and they were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First-generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercially available computer; the first unit was delivered to a client, the U.S. Census Bureau, in 1951.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat, which could damage the computer, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. Memory technology also advanced in this generation, moving from magnetic drums to magnetic core memory.
The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on chips of silicon, a semiconductor material, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization. 


Source:  Computers in Your Future, Bryan Pfaffenberger, Student Resource Guide (SRG), Module 1b: History of Computers


