IN THE BEGINNING...
The history of computers starts about 2000 years ago with the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around, according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention from around the same time was the astrolabe, used for navigation.

Blaise Pascal is usually credited with building the first digital computer, in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz designed a computer that was finally built in 1694. It could add and, after some rearrangement, multiply. Leibniz invented a special stepped-gear mechanism for introducing the addend digits, and this mechanism is still being used.

The prototypes made by Pascal and Leibniz were not used in many places, and were considered weird until a little more than a century later, when Thomas of Colmar (a.k.a. Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. Many improved desktop calculators by many inventors followed, so that by about 1890 the range of improvements included accumulation of partial results, storage and automatic re-entry of past results (a memory function), and printing of the results, each of which required manual installation. These improvements were made mainly for commercial users, and not for the needs of science.
BABBAGE
While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (after whom the computer store "Babbages" was named), a mathematics professor. In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do them automatically.

He began to design an automatic mechanical calculating machine, which he called a difference engine. By 1822 he had a working model to demonstrate. With financial help from the British government, Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although of limited adaptability and applicability, was really a great advance. Babbage continued to work on it for the next ten years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The design showed great foresight, although this could not be appreciated until a full century later.

The plans for this engine called for a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits. The built-in operations were to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed. The Analytical Engine was to use punched cards (similar to those used in a Jacquard loom), which would be read into the machine from several different reading stations. The machine was supposed to operate automatically, by steam power, and require only one attendant.

Babbage's computers were never finished. Various reasons are cited for his failure, the most common being the lack of precision machining techniques at the time. Another speculation is that Babbage was working on the solution of a problem that few people in 1840 really needed to solve. After Babbage, there was a temporary loss of interest in automatic digital computers.
----------------------------------------------------------------------
The computers that you see and use today did not come from any single inventor at one go. Rather, it took centuries of rigorous research work to reach the present stage. And scientists are still working hard to make them better and better. But that is a different story.

First, let us see when the very idea of computing with a machine or device, as against conventional manual calculation, was given a shape. Though experiments were going on even earlier, it dates back to the 17th century, when the first such successful device came into being. Edmund Gunter, an English mathematician, is credited with its development in 1620. Yet it was too primitive to be recognized even as a forefather of computers. The first mechanical digital calculating machine was built in 1642 by the French scientist-philosopher Blaise Pascal. Since then, the ideas and inventions of many mathematicians, scientists, and engineers have paved the way for the development of the modern computer.

But the world had to wait another couple of centuries for the next milestone in developing a computer. It was the English mathematician and inventor Charles Babbage who worked this wonder, with his designs of the 1830s. He was the first to work on a machine that could use and store the values of large mathematical tables. The most important feature of this machine was its recording of electric impulses, coded in the very simple binary system, with the help of only two kinds of symbols. This was quite a big leap closer to the basics on which computers work today. However, there was still a long way to go, and, compared with present-day computers, Babbage's machines could be regarded more as high-speed counting devices, for they could work on numbers alone.

The Boolean algebra developed in the 19th century removed the numbers-only limitation of these counting devices. This branch of mathematics, invented by George Boole, helped correlate binary digits with our language of logic: for instance, the value 0 is related to false statements and 1 to true ones.
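To make the correspondence concrete, here is a minimal sketch in Python (the choice of language and the three small function names are ours, purely for illustration) showing that ordinary arithmetic on the digits 0 and 1 reproduces Boole's logical operations.

# Boole's idea in miniature: 0 and 1 double as false and true,
# and simple arithmetic on them mirrors the logical connectives.
def AND(a, b):
    return a * b          # 1 only when both inputs are 1

def OR(a, b):
    return a + b - a * b  # 0 only when both inputs are 0

def NOT(a):
    return 1 - a          # flips 0 to 1 and 1 to 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))

Run over all four input pairs, the printout reproduces the familiar truth tables for AND, OR, and NOT.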
British mathematician Alan Turing made further progress with his theory of a computing model. Meanwhile, the technological advances of the 1930s did much to further the development of computing devices. But the direct forerunners of present-day computer systems evolved in the 1940s. The Harvard Mark I, designed by Howard Aiken and developed jointly by International Business Machines (IBM) and Harvard University in 1944, was the world's first digital computer to make use of electromechanical devices. But the real breakthrough was the concept of the stored-program computer, which came when the Hungarian-American mathematician John von Neumann introduced the Electronic Discrete Variable Automatic Computer (EDVAC). The idea that instructions as well as data should be stored in the computer's memory made this device fundamentally different from its counting-device forerunners. Since then, computers have become increasingly fast and powerful.

Still, compared with the present day's personal computers, those early machines had the simplest form of design. It was based on a single CPU performing operations such as addition and multiplication, and these operations would be performed following an ordered list of instructions, called a program, to produce the desired result. This form of design was followed, with little change, even in the more advanced computers developed later. The changed version saw a division of the CPU into memory and arithmetic logic unit (ALU) parts, with separate input and output sections.
In fact, the first four generations of computers followed this as their basic form of design. It was basically the type of hardware used that made the difference between generations. For instance, the first generation was based on vacuum tube technology. This was upgraded with the arrival of transistors and printed circuit board technology in the second generation. It was upgraded further by the arrival of integrated circuit chip technology, in which little chips replaced a large number of components. Thus the size of the computer was greatly reduced in the third generation, while it became more powerful. But the real marvel came during the 1970s, with the introduction of very large scale integration (VLSI) technology in the fourth generation. Aided by this technology, a tiny microprocessor can hold millions of pieces of data. Based on this technology, IBM introduced its famous Personal Computer. Since then IBM itself, and other makers including Apple, Sinclair, and so forth, have kept developing more and more advanced versions of personal computers, along with bigger and more powerful ones, such as mainframes and supercomputers, for more complicated work. Meanwhile, tinier versions such as laptops and even palmtops have come up with more advanced technologies over the past couple of decades.

But the advancement of technology alone cannot take full credit for the amazing progress of computers over the past few decades. Software, the built-in logic that runs the computer the way you like, has been developed at an equal pace. The arrival of famous software makers such as Microsoft, Oracle, and Sun has helped speed up that development. The result of it all is that we can solve complex problems at lightning speed with an ever handier device called the computer.
----------------------------------------------------------------------
A Brief History of Computer Technology
A complete history of computing
would include a multitude of diverse devices such as the ancient Chinese
abacus, the Jacquard loom (1805) and Charles Babbage's ``analytical engine''
(1834). It would also include discussion of mechanical, analog and digital
computing architectures. As late as the 1960s, mechanical devices, such
as the Marchant calculator, still found widespread application in science
and engineering. During the early days of electronic computing devices,
there was much discussion about the relative merits of analog vs. digital
computers. In fact, as late as the 1960s, analog computers were routinely
used to solve systems of finite difference equations arising in oil reservoir
modeling. In the end, digital computing devices proved to have the power,
economics and scalability necessary to deal with large scale computations.
Digital computers now dominate the computing world in all areas ranging
from the hand calculator to the supercomputer and are pervasive throughout
society. Therefore, this brief sketch of the development of scientific
computing is limited to the area of digital, electronic computers. The
evolution of digital computing is often divided into generations. Each
generation is characterized by dramatic improvements over the previous
generation in the technology used to build computers, the internal organization
of computer systems, and programming languages. Although not usually associated
with computer generations, there has been a steady improvement in algorithms,
including algorithms used in computational science. The following history
has been organized using these widely recognized generations as mileposts.
The Mechanical Era (1623-1945)
The idea of using machines to solve
mathematical problems can be traced at least as far as the early 17th century.
Mathematicians who designed and implemented calculators that were capable
of addition, subtraction, multiplication, and division included Wilhelm
Schickard, Blaise Pascal, and Gottfried Leibniz. The first multi-purpose,
i.e. programmable, computing device was probably Charles Babbage's Difference
Engine, which was begun in 1823 but never completed. A more ambitious machine
was the Analytical Engine. It was designed in 1842, but unfortunately it
also was only partially completed by Babbage. Babbage was truly a man ahead
of his time: many historians think the major reason he was unable to complete
these projects was the fact that the technology of the day was not reliable
enough. In spite of never building a complete working machine, Babbage
and his colleagues, most notably Ada, Countess of Lovelace, recognized
several important programming techniques, including conditional branches,
iterative loops and index variables. A machine inspired by Babbage's design
was arguably the first to be used in computational science. George Scheutz
read of the difference engine in 1833, and along with his son Edvard Scheutz
began work on a smaller version. By 1853 they had constructed a machine
that could process 15-digit numbers and calculate fourth-order differences.
Their machine won a gold medal at the Exhibition of Paris in 1855, and
later they sold it to the Dudley Observatory in Albany, New York, which
used it to calculate the orbit of Mars. One of the first commercial uses
of mechanical computers was by the US Census Bureau, which used punch-card
equipment designed by Herman Hollerith to tabulate data for the 1890 census.
In 1911 Hollerith's company merged with a competitor to found the corporation
which in 1924 became International Business Machines.
First Generation Electronic Computers (1937-1953)
Three machines have been promoted at various
times as the first electronic computers. These machines used electronic
switches, in the form of vacuum tubes, instead of electromechanical relays.
In principle the electronic switches would be more reliable, since they
would have no moving parts that would wear out, but the technology was
still new at that time and the tubes were comparable to relays in reliability.
Electronic components had one major benefit, however: they could ``open''
and ``close'' about 1,000 times faster than mechanical switches. The earliest
attempt to build an electronic computer was by J. V. Atanasoff, a professor
of physics and mathematics at Iowa State, in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of
partial differential equations. By 1941 he and graduate student Clifford
Berry had succeeded in building a machine that could solve 29 simultaneous
equations with 29 unknowns. However, the machine was not programmable,
and was more of an electronic calculator. A second early electronic machine
was Colossus, designed by Alan Turing for the British military in 1943.
This machine played an important role in breaking codes used by the German
army in World War II. Turing's main contribution to the field of computer
science was the idea of the Turing machine, a mathematical formalism widely
used in the study of computable functions. The existence of Colossus was
kept secret until long after the war ended, and the credit due to Turing
and his colleagues for designing one of the first working electronic computers
was slow in coming. The first general purpose programmable electronic computer
was the Electronic Numerical Integrator and Computer (ENIAC), built by
J. Presper Eckert and John V. Mauchly at the University of Pennsylvania.
Work began in 1943, funded by the Army Ordnance Department, which needed
a way to compute ballistics during World War II. The machine wasn't completed
until 1945, but then it was used extensively for calculations during the
design of the hydrogen bomb. By the time it was decommissioned in 1955
it had been used for research on the design of wind tunnels, random number
generators, and weather prediction. Eckert, Mauchly, and John von Neumann,
a consultant to the ENIAC project, began work on a new machine before ENIAC
was finished. The main contribution of EDVAC, their new project, was the
notion of a stored program. There is some controversy over who deserves
the credit for this idea, but none over how important the idea was to the
future of general purpose computers. ENIAC was controlled by a set of external
switches and dials; to change the program required physically altering
the settings on these controls. These controls also limited the speed of
the internal electronic operations. Through the use of a memory that was
large enough to hold both instructions and data, and using the program
stored in memory to control the order of arithmetic operations, EDVAC was
able to run orders of magnitude faster than ENIAC. By storing instructions
in the same medium as data, designers could concentrate on improving the
internal structure of the machine without worrying about matching it to
the speed of an external control. Regardless of who deserves the credit
for the stored program idea, the EDVAC project is significant as an example
of the power of interdisciplinary projects that characterize modern computational
science. By recognizing that functions, in the form of a sequence of instructions
for a computer, can be encoded as numbers, the EDVAC group knew the instructions
could be stored in the computer's memory along with numerical data. The
notion of using numbers to represent functions was a key step used by Goedel
in his incompleteness theorem in 1931, work which von Neumann, as a logician,
was quite familiar with. Von Neumann's background in logic, combined with
Eckert and Mauchly's electrical engineering skills, formed a very powerful
interdisciplinary team.
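To make the stored-program idea concrete, here is a minimal sketch in Python of a toy accumulator machine. The word format (opcode * 100 + address) and the opcode numbering are invented for this illustration and are not EDVAC's actual instruction set; the point is only that the instructions are themselves numbers held in the same memory as the data they operate on.

# Toy stored-program machine (illustrative only).
# Instructions and data share one memory; an instruction is simply
# the number opcode * 100 + address.
memory = [
    104,   # address 0: LOAD  4   (accumulator = memory[4])
    205,   # address 1: ADD   5   (accumulator = accumulator + memory[5])
    306,   # address 2: STORE 6   (memory[6] = accumulator)
    0,     # address 3: HALT
    7,     # address 4: data
    35,    # address 5: data
    0,     # address 6: the result is stored here
]

acc = 0                      # accumulator register
pc = 0                       # program counter
while True:
    opcode, address = divmod(memory[pc], 100)
    pc += 1
    if opcode == 0:          # HALT
        break
    elif opcode == 1:        # LOAD
        acc = memory[address]
    elif opcode == 2:        # ADD
        acc += memory[address]
    elif opcode == 3:        # STORE
        memory[address] = acc

print(memory[6])             # prints 42

Changing the program means nothing more than writing different numbers into memory, exactly the flexibility that ENIAC's external switches and dials lacked.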
Software technology during this period was very primitive. The first programs were written out in machine code, i.e. programmers
directly wrote down the numbers that corresponded to the instructions they
wanted to store in memory. By the 1950s programmers were using a symbolic
notation, known as assembly language, then hand-translating the symbolic
notation into machine code. Later programs known as assemblers performed
the translation task.
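A toy assembler for the same invented machine suggests what this translation step amounted to; the mnemonics below are our own labels for the made-up numeric opcodes in the sketch above, not any historical machine's assembly language.

# Toy assembler (illustrative only): translate symbolic mnemonics into
# the numeric machine words that were once written out by hand.
OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2, "STORE": 3}

def assemble(source):
    """Turn lines like 'LOAD 4' into numeric words like 104."""
    words = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        address = int(parts[1]) if len(parts) > 1 else 0
        words.append(OPCODES[mnemonic] * 100 + address)
    return words

print(assemble("LOAD 4\nADD 5\nSTORE 6\nHALT"))   # [104, 205, 306, 0]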
As primitive as they were, these first electronic machines were quite useful in applied science and engineering. Atanasoff
estimated that it would take eight hours to solve a set of equations with
eight unknowns using a Marchant calculator, and 381 hours to solve 29 equations
for 29 unknowns. The Atanasoff-Berry computer was able to complete the
task in under an hour. The first problem run on the ENIAC, a numerical
simulation used in the design of the hydrogen bomb, required 20 seconds,
as opposed to forty hours using mechanical calculators. Eckert and Mauchly
later developed what was arguably the first commercially successful computer,
the UNIVAC; in 1952, 45 minutes after the polls closed and with 7% of the
vote counted, UNIVAC predicted Eisenhower would defeat Stevenson with 438
electoral votes (he ended up with 442).
Second Generation (1954-1962)
The second generation saw several important developments at all levels
of computer system design, from the technology used to build the basic
circuits to the programming languages used to write scientific applications.
Electronic switches in this era were based on discrete diode and transistor
technology with a switching time of approximately 0.3 microseconds. The
first machines to be built with this technology include TRADIC at Bell
Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology
was based on magnetic cores which could be accessed in random order, as
opposed to mercury delay lines, in which data was stored as an acoustic
wave that passed sequentially through the medium and could be accessed
only when the data moved by the I/O interface. Important innovations in
computer architecture included index registers for controlling loops and
floating point units for calculations based on real numbers. Prior to this
accessing successive elements in an array was quite tedious and often involved
writing self-modifying code (programs which modified themselves as they
ran; at the time viewed as a powerful application of the principle that
programs and data were fundamentally the same, this practice is now frowned
upon as extremely hard to debug and is impossible in most high level languages).
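The flavor of such self-modifying code can be suggested with the same toy machine used in the sketches above (again, the encoding is invented purely for illustration, and two more made-up opcodes are added: 4 for SUB and 5 for BRNZ, branch if the accumulator is nonzero). To walk through an array, the program repeatedly adds one to the address field of its own ADD instruction.

# Self-modifying code on the toy machine (illustrative only).
# The program sums the four numbers at addresses 20-23 by bumping
# the address field of its own ADD instruction at address 1.
memory = [0] * 27
memory[0:11] = [
    124,   #  0: LOAD  24  accumulator = running total
    220,   #  1: ADD   20  add the current array element (this word gets modified!)
    324,   #  2: STORE 24  store the running total
    101,   #  3: LOAD  1   fetch the ADD instruction itself, as data
    226,   #  4: ADD   26  add 1, bumping its address field to the next element
    301,   #  5: STORE 1   write the modified instruction back
    125,   #  6: LOAD  25  load the loop counter
    426,   #  7: SUB   26  subtract 1
    325,   #  8: STORE 25  store the counter back
    500,   #  9: BRNZ  0   if the counter is not yet zero, start another pass
    0,     # 10: HALT
]
memory[20:27] = [3, 1, 4, 1,   # 20-23: the array to be summed
                 0,            # 24: running total
                 4,            # 25: loop counter (number of elements)
                 1]            # 26: the constant 1

acc, pc = 0, 0
while True:
    opcode, address = divmod(memory[pc], 100)
    pc += 1
    if opcode == 0:                    # HALT
        break
    elif opcode == 1:                  # LOAD
        acc = memory[address]
    elif opcode == 2:                  # ADD
        acc += memory[address]
    elif opcode == 3:                  # STORE
        memory[address] = acc
    elif opcode == 4:                  # SUB
        acc -= memory[address]
    elif opcode == 5 and acc != 0:     # BRNZ
        pc = address

print(memory[24])                      # prints 9, the sum 3 + 1 + 4 + 1

With an index register, the loop would instead keep the element address in a register and increment that register, leaving the instructions themselves untouched and the program far easier to debug.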
Floating point operations were performed by libraries of software routines
in early computers, but were done in hardware in second generation machines.
During this second generation many high level programming languages were
introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important
commercial machines of this era include the IBM 704 and its successors,
the 709 and 7094. The latter introduced I/O processors for better throughput
between I/O devices and main memory. The second generation also saw the
first two supercomputers designed specifically for numeric processing in
scientific applications. The term ``supercomputer'' is generally reserved
for a machine that is an order of magnitude more powerful than other machines
of its era. Two machines of the 1950s deserve this title. The Livermore
Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early
examples of machines that overlapped memory operations with processor operations
and had primitive forms of parallel processing.
Third Generation (1963-1972)
The third generation brought huge gains in computational power. Innovations
in this era include the use of integrated circuits, or ICs (semiconductor
devices with several transistors built into one physical component), semiconductor
memories starting to be used instead of magnetic cores, microprogramming
as a technique for efficiently designing complex processors, the coming
of age of pipelining and other forms of parallel processing, and the introduction of operating systems and
time-sharing. The first ICs were based on small-scale integration (SSI)
circuits, which had around 10 devices per circuit (or ``chip''), and evolved
to the use of medium-scale integrated (MSI) circuits, which had up to 100
devices per chip. Multilayered printed circuits were developed and core
memory was replaced by faster, solid state memories. Computer designers
began to take advantage of parallelism by using multiple functional units,
overlapping CPU and I/O operations, and pipelining (internal parallelism)
in both the instruction stream and the data stream. In 1964, Seymour Cray
developed the CDC 6600, which was the first architecture to use functional
parallelism. By using 10 separate functional units that could operate simultaneously
and 32 independent memory banks, the CDC 6600 was able to attain a computation
rate of 1 million floating point operations per second (1 Mflops). Five
years later CDC released the 7600, also developed by Seymour Cray. The
CDC 7600, with its pipelined functional units, is considered to be the
first vector processor and was capable of executing at 10 Mflops. The IBM
360/91, released during the same period, was roughly twice as fast as the
CDC 6600. It employed instruction look-ahead, separate floating point and
integer functional units, and a pipelined instruction stream. The IBM 360/195
was comparable to the CDC 7600, deriving much of its performance from a
very fast cache memory. The SOLOMON computer, developed by Westinghouse
Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department
of Defense and the University of Illinois, were representative of the first
parallel computers. The Texas Instruments Advanced Scientific Computer (TI-ASC)
and the STAR-100 of CDC were pipelined vector processors that demonstrated
the viability of that design and set the standards for subsequent vector
processors. Early in this third generation, Cambridge and the University
of London cooperated in the development of CPL (Combined Programming Language,
1963). CPL was, according to its authors, an attempt to capture only the
important features of the complicated and sophisticated ALGOL. However,
like ALGOL, CPL was large with many features that were hard to learn. In
an attempt at further simplification, Martin Richards of Cambridge developed
a subset of CPL called BCPL (Basic Combined Programming Language, 1967).
In 1970 Ken Thompson of Bell Labs developed yet another simplification
of CPL called simply B, in connection with an early implementation of the
UNIX operating system.
Fourth Generation (1972-1984)
The
next generation of computer systems saw the use of large scale integration
(LSI - 1000 devices per chip) and very large scale integration (VLSI -
100,000 devices per chip) in the construction of computing elements. At
this scale entire processors will fit onto a single chip, and for simple
systems the entire computer (processor, main memory, and I/O controllers)
can fit on one chip. Gate delays dropped to about 1ns per gate. Semiconductor
memories replaced core memories as the main memory in most systems; until
this time the use of semiconductor memory in most systems was limited to
registers and cache. During this period, high speed vector processors,
such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high performance
computing scene. Computers with large main memory, such as the CRAY 2,
began to emerge. A variety of parallel architectures began to appear; however,
during this period the parallel computing efforts were of a mostly experimental
nature and most computational science was carried out on vector processors.
Microcomputers and workstations were introduced and saw wide use as alternatives
to time-shared mainframe computers. Developments in software include very
high level languages such as FP (functional programming) and Prolog (programming
in logic). These languages tend to use a declarative programming style
as opposed to the imperative style of Pascal, C, FORTRAN, et al. In a declarative
style, a programmer gives a mathematical specification of what should be
computed, leaving many details of how it should be computed to the compiler
and/or runtime system. These languages are not yet in wide use, but are
very promising as notations for programs that will run on massively parallel
computers (systems with over 1,000 processors).
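The contrast can be suggested even in Python (used here only for illustration; FP and Prolog carry the declarative idea much further). The first version below spells out how the result is accumulated step by step, while the second simply states what the result is and leaves the mechanics to the language.

# Imperative style: explicit loop, explicit accumulator, explicit control flow.
data = [3, 1, 4, 1, 5, 9, 2, 6]
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Declarative flavor: a single expression describing the result,
# "the sum of the squares of the even elements".
total_declarative = sum(x * x for x in data if x % 2 == 0)

print(total, total_declarative)   # both print 56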
Compilers for established languages started to use sophisticated optimization techniques to improve
code, and compilers for vector processors we