History of Software Engineering
The term programming was commonly used through the mid-1960s and referred essentially to the task of coding a computer. The term software engineering, referring to the highly disciplined, systematic approach to software development and maintenance, came into existence after a NATO-sponsored conference in 1968. At that conference, the difficulties and pitfalls of designing complex systems were explored in depth, and a search for solutions began that concentrated on better methodologies and tools. The most prominent of these tools were languages reflecting procedural, modular, and object-oriented styles of programming. Since 1968, the development of software engineering has been intimately tied to the emergence and improvement of these tools, as well as to efforts to systematize or automate program documentation and testing. Ultimately, analytic verification and correctness proofs were supposed to replace testing, but that has not happened.
The 1960s and the origin of software engineering
It is unfortunate that people dealing with computers often have little interest in the history of their subject. As a result, many concepts and ideas are propagated and advertised as being new when, in fact, they existed decades ago, perhaps under a different name. I believe it worthwhile to occasionally consider the past and to investigate how computing terms and concepts originated.

I regard the late 1950s as an essential period in the era of computing. At that time, large computers became available to research institutions and universities. Computers were then used primarily in engineering and the natural sciences, but they soon became indispensable in business as well. The time when they were accessible only to a few insiders in laboratories, and when they frequently broke down whenever one wanted to use them, belonged to the past. Computers' emergence from the closed laboratory of electrical engineers into the public domain meant that their use, in particular their programming, became an activity of many. As a result, a new profession was born, and companies of all kinds began to hire programmers. The computers themselves, however, remained hidden, enclosed within the special rooms those same companies built to house them. Programmers would write their code and bring their programs to a counter, where a dispatcher would pick them up and queue them for processing.
Programming as a mathematical discipline
In 1967, Robert W. Floyd suggested the idea of assertions of states, of truths always valid at given points in a program.10 This led to Hoare's seminal paper titled "An Axiomatic Basis for Computer Programming," postulating the so-called Hoare logic.11 A few years later, in 1975, Dijkstra deduced from it the calculus of predicate transformers.12 Programming was acquiring a mathematical basis. Programs were no longer just code for controlling computers, but static texts that could be subjected to mathematical reasoning.
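To give the flavor of such reasoning (the example is mine, not from the article): a Hoare triple {P} S {Q} states that if assertion P holds before statement S is executed, then assertion Q holds afterwards. Hoare's assignment axiom yields, for instance,

\[ \{\, x + 1 \le N \,\} \quad x := x + 1 \quad \{\, x \le N \,\} \]

and Dijkstra's predicate transformer wp(S, Q) generalizes this by asking for the weakest precondition under which statement S is guaranteed to establish Q. In both cases it is the program text itself, not a running machine, that is the object of reasoning.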
Although these developments were recognized at some universities, they passed virtually unnoticed in industry. Indeed, Hoare's logic and Dijkstra's predicate transformers were demonstrated on interesting but small algorithms such as multiplication of integers, binary search, and the greatest common divisor. But industry was plagued by large, even gargantuan, systems. It was not obvious that mathematical theories would ever solve real problems when the analysis of even simple algorithms was demanding enough. A sketch of what such a small exercise looks like follows below.
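As an illustration (my own, in Python rather than the notations of the period), Euclid's algorithm for the greatest common divisor can be annotated with a precondition, a loop invariant, and a postcondition in the Hoare style; here the assertions are written as executable checks, and the library function math.gcd is used only to state the invariant compactly.

    from math import gcd  # library gcd, used here only to express the assertions

    def euclid(a: int, b: int) -> int:
        # Precondition: both arguments are positive.
        assert a > 0 and b > 0
        x, y = a, b
        while y != 0:
            # Loop invariant: gcd(x, y) equals gcd(a, b) on every iteration.
            assert gcd(x, y) == gcd(a, b)
            x, y = y, x % y
        # Postcondition: y = 0 on exit, hence x is the greatest common divisor.
        assert x == gcd(a, b)
        return x

Even for an algorithm this small, finding and maintaining the invariant takes care, which is precisely the gap, noted above, between such exercises and the gargantuan systems industry had to build.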
Era of the personal workstation
However, another development influenced the computing field more profoundly than all programming languages: the workstation, whose first incarnation, the Alto, was built in 1975 by the Xerox PARC lab.16 In contrast to the aforementioned microcomputers, the workstation was powerful enough to allow serious software development, complex computations, and the use of a compiler for an advanced programming language. Most important, the Alto pioneered the bit-mapped, high-resolution display and the pointing device called a mouse, which together brought about a revolutionary change in computer usage. Along with the Alto came the concept of a local area network, as well as that of central servers for (laser) printing, large-scale file storage, and email service.
It is no exaggeration at all to claim that the modern computing era started in 1975 with the Alto. The Alto caused nothing less than a revolution, albeit a slow one, and as a result many people today have no idea how computing was done before 1975, without personal, highly interactive workstations. The influence of these developments on software engineering cannot be overestimated.
Abundance of computer power
The period since 1985 has, until a few years ago, chiefly been characterized by enormous advances in hardware technology. Today, even tiny computers, such as mobile telephones, have 100 times more power and capacity than the biggest machines of 20 years ago. It is fair to say that semiconductor and disk technologies have recently determined all advances. Who, for example, would have dreamed in 1990 of memory sticks holding several gigabytes of data, of tiny disks with capacities of dozens of gigabytes, or of processors with clock rates of several gigahertz?
Such speedy development has vastly widened the area of computer applications. This has happened particularly in connection with communications technology. It is now hard to believe that before 1975 computer and communications technologies were considered separate fields.
Wasteful software
Whereas the incredible increase in the power of hardware was very beneficial for a wide spectrum of applications (think of administration, banking, railways, airlines, guidance systems, engineering, and science), the same cannot be claimed for software engineering. Surely, software engineering has also profited from the many sophisticated development tools, but the quality of its products hardly shows signs of great progress. No wonder: after all, the increase in power was itself the reason for the terrifying growth of complexity. Whatever progress was made in software methodology was quickly offset by the still higher complexity of the software tasks. This is reflected in Martin Reiser's "law": "Software is getting slower faster than hardware is getting faster."17 Indeed, new problems have been tackled that are so difficult that engineers often have to be admired more for their optimism and courage than for their success.
Personal reflections
What can we do to release this logjam? There is little point in reading history unless we are willing to learn from it. Therefore, I dare to reflect on the past and will try to draw some conclusions. A primary effort must be education toward a sense of quality. Programmers must become engaged crusaders against homemade complexity. The cancerous growth of complexity is not a thing to be admired; it must be fought wherever possible.18 Programmers must be given the time, and afforded the respect, to do work of high quality. This is crucial, and ultimately more effective than more and better tools and rules. Let us embark on a global effort to prevent software from becoming known as softwaste!
Recently I have become acquainted with a few projects where large, commercial operating systems were discarded in favor of the Oberon system, whose principal design objective had been perspicuity and concentration on the essentials.