To understand the need for software engineering, we must pause briefly to look back at the recent history of computing. This story helps explain the problems that became apparent in the late 1960s and early 1970s, and the solutions that led to the creation of the field of software engineering. Some referred to these problems as “The Software Crisis,” a name drawn from the symptoms of the problem. The situation could also be called “The Complexity Barrier,” a name drawn from the root cause. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that now fall under the heading of software engineering, we have made progress and continue to make progress.

In the early days of computing, the main concern was building or acquiring the hardware. The software was almost expected to take care of itself. The consensus held that “hardware” is “hard” to change, while “software” is “soft” or easy to change. Accordingly, most people in the industry carefully planned the development of the hardware but gave much less thought to the software. If the software didn’t work, they believed, it would be easy enough to change it until it did. In that case, why go to the effort of planning?

The cost of the software was such a small fraction of the cost of the hardware that no one considered it very important to manage its development. However, everyone saw the importance of producing programs that were efficient and ran fast, because this saved time on expensive hardware. It was considered worth people’s time to save machine time; making the people process efficient was given a low priority.

This approach proved successful in the early days of computing, when the software was simple. As computing matured, however, programs became more complex and projects grew larger. Whereas programs had previously been routinely specified, written, operated, and maintained by the same person, they now began to be developed by teams of programmers to meet someone else’s expectations.

Individual effort gave way to team effort. Communication and coordination that once happened inside one person’s head had to happen between the heads of many people, making the whole process much more complicated. As a result, communication, management, planning and documentation became essential.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself with no more than a general concept of a plan, figuring things out or making adjustments as the job progressed. This is how the first programs were written. But if the house is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house will be built, and the plans must be reviewed with the prospective owner before construction begins. And if the house is to be built by many carpenters, the entire project certainly needs to be planned before the work begins, so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element, so that the cement contractors pour the basement walls before the carpenters start framing. As the house becomes more complex and the work of more people must be coordinated, blueprints and management plans are required.

As programs became more complex, the earlier methods used to make plans (flowcharts) were no longer satisfactory for representing this increased complexity. It became difficult for a person who needed a program written to convey to another person, the programmer, exactly what was wanted, or for programmers to convey to each other what they were doing. In fact, without better representation methods it became difficult for even a single programmer to keep track of what he or she was doing.

The time required to write programs, and their cost, began to exceed all estimates. It was not uncommon for systems to cost more than double what had been estimated and to take weeks, months, or years longer than expected to complete. Systems delivered to customers frequently did not work correctly because the money or time had run out before the programs could be made to work as originally intended. Or the program was so complex that every attempt to fix a problem produced more problems than it fixed. When customers finally saw what they were getting, they often changed their minds about what they wanted. At least one very large military software project costing several hundred million dollars was abandoned because it could never be made to work properly.

The quality of the programs also became a major concern. As computers and their software were used for more vital tasks, such as monitoring life support equipment, software quality took on a new meaning. As our dependence on computers increased and, in many cases, we could no longer get along without them, we discovered how important it is that they work properly.

Making a change within a complex program turned out to be very expensive. Often even getting the program to do something slightly different was so difficult that it was easier to scrap the old program and start over. This, of course, was expensive. Part of the evolution in the software engineering approach was learning to develop systems that are built well enough the first time that simple changes can be easily made.

At the same time, hardware was becoming less and less expensive. Tubes were replaced by transistors, and transistors were replaced by integrated circuits, until microcomputers costing less than three thousand dollars could do work that had once required machines costing millions. As an indication of how quickly change was occurring, the cost of a given amount of computing was halved roughly every two years. Given this realignment, the time and cost of developing software were no longer small enough, compared to the hardware, to be ignored.
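As a rough illustration (assuming a constant two-year halving period, which is an idealization rather than an exact law), the cost of a given amount of computing after t years is

$$C(t) = C_0 \cdot 2^{-t/2},$$

so over a single decade it falls to $2^{-5} \approx 3\%$ of its starting value.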

As the cost of hardware plummeted, software continued to be written by humans, whose salaries rose. The savings from productivity improvements in software development through the use of assemblers, compilers, and database management systems did not come as rapidly as the savings in hardware costs. Indeed, today software costs can no longer be ignored; they have become greater than hardware costs. Some current developments, such as non-procedural (4th generation) languages and the use of artificial intelligence (5th generation), promise to increase the productivity of software development, but we are only beginning to see their potential.

Another problem was that, in the past, programmers often did not fully understand what the client needed the program to do. Only once the program had been written did the client begin to express dissatisfaction, and if the customer was dissatisfied, ultimately the producer was dissatisfied too. Over time, software developers learned to lay out with pencil and paper exactly what they intended to do before they started. They could then review the plans with the client to see whether they met the client’s expectations. It is simpler and less expensive to make changes to this pencil-and-paper version than after the system has been built. Good planning makes it less likely that changes will have to be made once the program is finished.

Unfortunately, until several years ago there was no good representation method to satisfactorily describe systems as complex as those being developed today. The only good representation of what the product would look like was the finished product itself. Developers could not show customers what they were planning, and customers could not see whether the software was what they wanted until it was finally built. By then it was too expensive to change.

Once again, consider the analogy of building construction. An architect can draw a floor plan. Usually the client can gain some understanding of what the architect has planned and give feedback on whether it is appropriate. Floor plans are reasonably easy for the layman to understand because most people are familiar with drawings that represent geometric objects. The architect and the client share common concepts about space and geometry. But the software engineer must represent to the client a system that involves logic and information processing. Since they do not yet have a language of common concepts, the software engineer must teach a new language to the client before they can communicate.

In addition, it is important that this language be simple, so that it can be learned quickly.
