Applying information technology to organization design

In the past, organization theory has been mainly concerned with what might be called “organization for production.” The theory traditionally paid special attention to two problems: how to divide up the work for its efficient performance and in such a way as to keep the needs for coordination of the parts within manageable bounds; and how to construct and maintain mechanisms for coordinating the several organizational parts.

Research on human relations in organizations, beginning on a substantial scale in the 1930s, turned attention in organizational design to the linkage between the individual as organization member and the total pattern of organizational activity. The principal normative concern here was to create organizational environments in which employees would be motivated to join the organization, to remain in it, and to contribute vigorously and effectively to its goals.

With the introduction of highly automated machinery, and particularly with the introduction of mechanized information-processing equipment, the assembly line becomes a rather rare form of organization of production, as does the repetitive unautomated clerical process. The human operative or clerk is more and more an observer, moderator, maintenance and repair person for a nearly autonomous process that can carry on for significant intervals of time without direct human intervention. More and more of the human work becomes work of thought and communication, and as a consequence, the design of organizations becomes a central topic in the study and application of information technology, and vice versa.

1. The Post-Industrial Society

Peter Drucker used the phrase “post-industrial society” to describe the emerging world in which manufacturing and the activities associated with it play a much less central role than they did in the world of the past century. Providing services tends to pose different organizational problems from producing tangible goods. It is usually more difficult to define appropriate output measures for service organizations than for organizations that produce commodities. Whatever problems are present in measuring the quality of goods are magnified greatly in measuring the quality of services. The point can be illustrated by comparing two versions of the same economic activity, first viewed as a goods-producing activity, then as a service-producing activity: that is, producing houses and housing respectively.

A house is a tangible commodity that can be manufactured and distributed through the usual market mechanisms; housing is a bundle of services provided by a dwelling in the context of a neighborhood, with schools, streets, shopping facilities, and a pattern of social interaction among the inhabitants. However complex it may be to define the qualities of a house, narrowly conceived as a structure, it is far more complex to define the qualities of housing, conceived as a situation that creates and supports a pattern of social activity, the life of a family, say.

Related to the tendency of organizations in our society to broaden the definition of their goals from the production of tangible commodities to the production of bundles of services that may or may not be associated with tangible commodities, is a tendency to broaden their concern for the externalities associated with their activities. Externalities are those consequences of action that are not charged, through the existing market mechanisms, to the actors. The classical example is the factory smoke whose social costs have not generally been paid by the consumers of the factory’s product.

It may be that organizations producing services usually have more and larger externalities associated with their activities than organizations producing goods; or it may be that we are simply becoming more sensitive in our society to the indirect consequences of organizational activity directed toward specialized goals; or it may be that, with the growth of population and technology, the actual interdependencies of organizations, and hence the externalities they cause, are becoming more extensive and significant. Whatever the reasons—and all three of those mentioned probably contribute to the trend—organizational decision-making in the organizations of the post-industrial world shows every sign of becoming a great deal more complex than the decision-making of the past.

2. Organizing the Information-Processing Structure

In the post-industrial society, then, the key problem is how to organize to make decisions—that is, to process information. Until recent years, decision-making was exclusively a human activity; it involved processes going on inside the human head and symbolic communication among humans. In our present world, decision-making is shared between the human and mechanized components of man-machine systems. The division of labor between the human and computer components in these systems has changed steadily over the past forty years, and it will continue to change as the sophistication of computer technology—and particularly computer programming or software technology—grows.

The anatomy of an organization viewed as a decision-making and information-processing system may look very different from the anatomy of the same organization viewed as a collection of people. The latter viewpoint, which is the traditional one, focuses attention on the groupings of human beings—that is, departmentalization. The former viewpoint, on the other hand, focuses on the decision-making process itself—that is, upon the flows and transformations of symbols. If we carve an organization, conceptually, into subsystems on the basis of the principal components into which the decision-making process divides, we may, and probably will, arrive at a very different dissection than if we carve it into its departmental and subdepartmental components. Moreover, the greater the interdependencies among departmental components, the greater will be the difference in these two ways of conceptualizing the organization.

Both of these viewpoints are useful and even essential in arriving at sound designs for organizations. In this analysis, I shall emphasize the less conventional point of view and discuss the decision-making process disembodied, so to speak, from the flesh and blood (or glass and metal, as the case may be) decision-makers who actually carry out this process. Instead of watching a person or computer as information arrives and is processed, and new information transmitted in its turn, we will follow information as it flows from one person or computer to another and is transformed in the course of flow. This approach, apart from any other advantages, will give us a fresh look at the design of organizations.

3. Factorization of Decisions and Allocation of Attention

From the information-processing point of view, division of labor means factoring the total system of decisions that need to be made into relatively independent subsystems, each of which can be designed with only minimal concern for its interactions with the others. Such factorization is forced on us by the bounded rationality of both humans and computers. The number of alternatives that can be considered, the intricacy of the chains of consequences that can be traced—all these are severely restricted by the limited capacities of the available processors.

Any division of labor among decisional subsystems must take account of the interdependencies that the factorization ignores. What is wanted is a factorization that minimizes these interdependencies and consequently permits a maximum degree of decentralization of final decision to the subsystems, and a maximum use of relatively simple and cheap coordinating devices to relate each of the decisional subsystems with the others.

Not only must the size of decision problems handled by organizations be reduced to manageable proportions by factorization, but the number of decisions to be processed must be limited by applying good principles of attention management. Attention management for an organization means exactly what it means for an individual human being: Processing capacity must be allocated to specific decision tasks, and if the total capacity is not adequate to the totality of tasks, then priorities must be set so that the most important or critical tasks are attended to.
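Attention management, so described, amounts to allocating a fixed processing capacity to the most important tasks first. A minimal sketch, with invented task names, importance ratings, and costs:

```python
import heapq

def allocate_attention(tasks, capacity):
    """Given (name, importance, cost) decision tasks and a fixed
    processing capacity, attend to tasks in order of importance,
    deferring any task whose cost exceeds the remaining capacity."""
    heap = [(-importance, cost, name) for name, importance, cost in tasks]
    heapq.heapify(heap)                       # highest importance first
    attended, deferred = [], []
    while heap:
        neg_imp, cost, name = heapq.heappop(heap)
        if cost <= capacity:
            capacity -= cost
            attended.append(name)
        else:
            deferred.append(name)
    return attended, deferred

tasks = [("budget crisis", 9, 4), ("routine report", 2, 1),
         ("merger review", 8, 5), ("memo backlog", 1, 2)]
print(allocate_attention(tasks, capacity=6))
# → (['budget crisis', 'routine report'], ['merger review', 'memo backlog'])
```

With a capacity of 6, the most important task is attended to first; the merger review, though important, no longer fits and is deferred, exactly the kind of priority-setting the text describes.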

The bottleneck of attention becomes narrower and narrower as we move to the tops of organizations, where parallel processing capacity becomes less easy to provide without damaging the coordinating function that is a prime responsibility of these levels. Only a few items can simultaneously be on the active agenda at the top.

The difficulty of coping with an information-rich environment is compounded by the fact that most information relevant to top-level and long-run organizational decisions typically originates outside the organization, and hence in forms and quantities that are beyond its control. This means that the organization must have an “interface” for identifying, obtaining, and ingesting such information selectively and for translating it into formats that are compatible with its internal information flows and systems.

Second, if attention is the scarce resource, then it becomes particularly important to distinguish between problems for decision that come with deadlines attached (real-time decisions) and problems that have relatively flexible deadlines. Rather different system designs are called for to handle these different kinds of decisions.

In summary, the inherent capacity limits of information-processing systems impose two requirements on organizational design: that the totality of decision problems be factored in such a way as to minimize the interdependence of the components; and that the entire system be so designed as to conserve the scarce resource of attention. In addition, much of the information relevant to major decisions originates outside the organization, and special provision must be made for real-time decisions that have deadlines.

Applying these basic design requirements makes it easy to see the fallacy lurking in some standard but more or less abortive approaches to the improvement of information systems: for example, municipal data banks and management information systems. There was a great enthusiasm, when computers first became available to municipal organizations, for developing comprehensive data banks for metropolitan areas—these data banks to incorporate in a single system all the myriad pieces of information about land and its uses, and about people and their activities, that are generated by the operations of urban government.

As the result of several attempts to construct such systems, the enthusiasm has been much moderated, and several incipient undertakings of this kind have been abandoned. There were several reasons for the disenchantment that followed the initial attempts at construction. First, the data processing and data storage tasks proved much larger and more complex than had been imagined. Perhaps more crucial, it became less and less clear just how the data were to enter into the decision-making process, or indeed to just what decisions they were relevant.

There is no magic in comprehensiveness. The mere existence of a mass of data is not a sufficient reason for collecting it into a single, comprehensive information system. Indeed, the problem is quite the opposite: finding a way of factoring decision problems so as to relate each component to its relevant data sources. Analysis of the decision-making system and its data requirements must come first; only then can a reasonable approach be made to defining the data systems that will support the decision-making process.

The history of management information systems has been nearly the same as the history of municipal data banks. In the enthusiasm to make use of the enormous power of computers, there was a tendency, in designing such systems, to take the existing finance and production records as a starting point and to try to give top management access to all this information. The question was not asked, or not asked with sufficient seriousness, whether top or middle management either wanted or needed such information, nor whether the information that management at various levels needed and should want could in fact be derived from these particular source records. The systems were not designed to conserve the critical scarce resource—the attention of managers—and they tended to ignore the fact that the information most important to top managers comes mainly from external sources and not from the internal records on which these systems were built. The designers assumed that “more information is better.” They took over, implicitly, the assumptions of a past society where information rather than attention was the scarce resource.

4. Components of the New Information Technology

In designing decision-making organizations, we must understand not only the structure of the decisions to be made, but also the decision-making tools at our disposal, both human and mechanical—men and computers.

The Human Components. In our fascination with the new capabilities that computers offer us, we must not forget that our human decision-makers have some pretty remarkable qualities too. Each person is provided with a sizable memory that is stocked cumulatively over a long period of years with various kinds of relevant and irrelevant information and skills. Each can recover relevant portions of that memory by recognition of audible or visible cues in the current situation. Each is able to communicate in natural language with others, either in direct face-to-face settings or by remote devices like the telephone or fax or e-mail.

Suppose, for example, that we were interested in designing an organization that would lead us to the most expert source of information in the United States about any particular question that happened to arise. Our first impulse, today, might be to turn to the World Wide Web. Should that impulse be encouraged?

The information we are seeking is stored both in human heads and in books and data banks. Moreover, the information in books is also indexed in human heads, so that usually the most expeditious way to find the right book is to ask a human who is an expert on the subject of interest. Not only is information available from books indexed in human heads, but information about people is also. Taking these resources into account, the most powerful information-processing system for carrying out a search for the best expert in the United States is still the aggregate of memory that is distributed among 250 million human heads, together with the telephone system that links these distributed memories.

On receipt of an inquiry, I pick up the phone and call the person, among my acquaintances, whose field of expertness is as close as possible to the target (it need not be very close at all). I ask my respondent, not for the answer to the question, but for the name of the person in his or her circle of acquaintance who is closest to being an expert on the topic. I repeat the process until I have the information I want. It will be a rare occasion when more than three or four calls are required.
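The referral procedure can be written down as a small greedy search. In this toy sketch the people, the acquaintance network, and the 0-to-100 “topic scale” for expertise are all invented for illustration:

```python
def find_expert(network, expertise, start, target, max_calls=10):
    """Greedy referral search: each call asks the current contact for
    the acquaintance whose expertise is closest to the target topic.
    `expertise` maps person -> position on a crude 0-100 topic scale."""
    current, calls = start, []
    for _ in range(max_calls):
        calls.append(current)
        # the respondent names the closest acquaintance, themselves included
        candidates = network[current] + [current]
        best = min(candidates, key=lambda p: abs(expertise[p] - target))
        if best == current:           # no acquaintance is closer: stop
            return current, calls
        current = best
    return current, calls

expertise = {"me": 10, "ann": 35, "bo": 60, "cy": 82, "dee": 90}
network = {"me": ["ann"], "ann": ["me", "bo"], "bo": ["ann", "cy"],
           "cy": ["bo", "dee"], "dee": ["cy"]}
print(find_expert(network, expertise, start="me", target=88))
# → ('dee', ['me', 'ann', 'bo', 'cy', 'dee'])
```

Each loop iteration is one telephone call; the search stops when the current respondent knows no one closer to the topic than themselves, and in well-connected networks it terminates after only a few calls, as the text suggests.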

I do not mean to propose that we junk all our other information systems and place sole reliance on the telephone and the vast distributed memory with which it connects us. However, this is a useful thought experiment on how we must regard information-processing systems— including both electronic and human systems—their components and interconnections. We must learn to characterize them in terms of the sizes of their memories, the ways in which those memories are indexed, their processing rates, and the rapidity with which they can respond. The human components of information systems are just as describable as the machine components, and today we know a great deal through psychological research about the parameters of the human system.

Our new and growing understanding of information processing enables us to look at familiar processing systems—man and telephone—in new ways. It also introduces us to new kinds of systems, under the general rubric of “computers,” that have capabilities of the most varied kinds.

The Computer as Memory. The computer is, first of all, a memory. I have already expressed my qualms about confusing the design of an information-collecting system with the design of an information-processing system. The fault, of course, is not in collecting information (although that may be costly in itself); it is in demanding the scarce attention of decision-makers to process the information that has been collected. Memories, as components of information-processing systems, need to be viewed as stores of potential information, which, if indexed effectively, can become available at a reasonable cost whenever it is needed as input to a decision-making process.

Even reading one book a day—a pretty good clip—a person who has collected a library of 30,000 books will take over eighty years to read through all of them. We may even consider it a bit ostentatious of people to collect more books than they can possibly read—as though they were trying to impress us with their learning. However, we must not be too hasty in judging them. If their libraries are properly indexed, then each of our collectors has potential access to any of the information in the 30,000 volumes. They are quite justified in collecting more volumes than they can read if they cannot predict in advance what particular information they will need in the future, and if they have a good indexing system for finding, on demand, what they want to see.

Except for the Web and a few specialized data banks, the computer memories that are employed today are not, in general, large compared with such a library. One of the important directions of technological progress since the computer appeared on the scene has been our growing understanding of the indexing and information-retrieval processes, and our ability to carry these out mechanically.

The Computer as Processor. In addition to being a memory, the computer is also a processor that has quite general capabilities for handling symbols of all kinds, numerical and non-numerical. This is its most novel feature. Non-human memories have been familiar to mankind since the invention of writing. Non-human symbol manipulation is something quite new, and even after forty years, we are just beginning to glimpse its potential.

Up to the present time, perhaps the most important use of the computer in decision-making (though not the use that accounts for the bulk of computer time in organizations) is to model complex situations and to infer the consequences of alternative decisions. Some of this modeling makes use of mathematical techniques, like linear programming, that permit the calculation of optimal courses of action, hence serving as direct decision- making tools. In other forms of modeling, the computer serves as a simulator, calculating the alternative paths a system would follow in response to different decision strategies.
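The computer-as-simulator role can be illustrated with a deliberately tiny model. The inventory system, demand figures, and cost weights below are invented; the point is only that one model can trace the consequences of alternative decision strategies so they can be compared:

```python
def simulate(strategy, demand):
    """Trace the stock level of a simple inventory system under a given
    ordering strategy, reporting total holding plus shortage cost."""
    stock, holding_cost, shortage_cost = 20, 0, 0
    for d in demand:
        stock += strategy(stock)          # decision: how much to order
        stock -= d
        if stock >= 0:
            holding_cost += stock         # cost of carrying inventory
        else:
            shortage_cost += -stock * 5   # unmet demand is costlier
            stock = 0
    return holding_cost + shortage_cost

demand = [12, 8, 15, 10, 9]
fixed = lambda stock: 10                       # order 10 every period
to_level = lambda stock: max(0, 15 - stock)    # order up to a stock of 15
print({"fixed": simulate(fixed, demand),
       "order-up-to": simulate(to_level, demand)})
# → {'fixed': 84, 'order-up-to': 26}
```

The simulator does not prescribe an optimum, as a linear-programming model would; it simply plays out each strategy so the decision-maker can see which path the system follows under each.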

The term “management information system” has generally been construed narrowly and has been applied to large information storage and retrieval systems, like those mentioned earlier, in which the computer does only very simple processing of the information. The term would be better applied to the optimizing and simulation models that are increasingly used to illuminate various areas of management decision—models that are usually referred to as “operations research” and “strategic planning,” or sometimes, “management decision aids.” Such models, however they are labeled, probably give us a better preview of the future uses of computers in organizational decision systems than do the explicitly named management information systems.

Let me cite one example of an area of application for a strategic planning model. In the next decades, our society faces some important and difficult policy decisions with respect to the production and use of energy. In the past, the national energy problem was perceived mostly as a resource problem, and it was left in considerable part to private management through market mechanisms. Today, we see that the use of energy has important indirect consequences for the environment, and we see also that the adequacy of fuel resources for producing energy will depend on such broader trends as the rates of development of industrializing countries and the decisions we make with respect to R&D for energy technology.

The number of important variables involved in the energy picture is so large, and the interconnections among variables so intricate, that common sense and everyday reasoning no longer provide adequate guides to energy policies—if, indeed, they ever did. Nor is there a simple organizational solution of a traditional kind: establishing a federal agency with comprehensive jurisdiction over energy problems, or, alternatively, tinkering with the market mechanism.

Agency reorganization is no solution for at least two reasons. First, energy problems cannot be separated neatly from other problems. What would be the relation of a comprehensive energy agency to environmental problems? The fragmentation of responsibility for energy policy in the federal government today ignores the intertwining of those problems with others. Second, even if there were such an agency, it too would need a systematic framework within which to take up its decision problems. Tinkering with market mechanisms raises the same difficulty—without a decision framework, we do not know how to tinker.

Hence, the most important organizational requirement for handling energy policy in an intelligent way is the creation of one or more models— either of an optimizing or simulation type—to provide coherence to the decision-making process. No doubt, it is of some importance to locate the responsibility for developing and exploiting such models in appropriate places in the governmental and industrial structure. But the mere existence of the models, wherever located, cannot but have a major impact on energy policy decisions. Surprisingly, comprehensive models of the energy system are still not common, although the need for them has been fairly obvious for some years. The tardiness of response to the need is evidence both of the novelty of the modeling technology and the novelty of looking at organization as a collection of decision systems rather than a collection of agencies and departments.

Computer Access to External Information. A third point must be made about the characteristics of the computer as a component of the organization’s information-processing system. I have mentioned as one limitation of management information systems up to the present time their great reliance on information that is generated within the organization itself—for example, production and accounting information. A major reason for the emphasis on internal information was that, as the organization controls the production of this information, it was not hard to produce it in machine-readable form. No costly step was involved in getting it inside the computer.

If we examine the kinds of external information that executives use, we find that a large proportion of it is simply natural language text—the pages of newspapers, trade magazines, technical journals, and so on. Natural language text can, of course, be stored in computer memory after it is translated into some machine-readable form—punched cards, magnetic tape, or the like. Once stored in memory, computer programs can be written to index it automatically and to retrieve information from it in response to inquiries of a variety of kinds.
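Automatic indexing and retrieval of stored text can be sketched with an inverted index. The sample documents and queries below are invented for illustration:

```python
import re
from collections import defaultdict

def build_index(documents):
    """Automatically index stored text: map each word to the set of
    documents that contain it (a simple inverted index)."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(doc_id)
    return index

def retrieve(index, query):
    """Answer an inquiry: return the documents containing every query word."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

docs = {"memo1": "Fuel prices rose sharply this quarter.",
        "memo2": "Energy policy and fuel reserves were reviewed.",
        "news3": "New energy technology attracts investment."}
index = build_index(docs)
print(retrieve(index, "energy fuel"))
# → ['memo2']
```

Once text is in machine-readable form, indexing of this kind costs essentially nothing per inquiry, which is what makes mechanized retrieval from natural-language sources feasible.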

The only barrier, therefore, to making available to the mechanized components of organizational information systems the same kind of external information that executives now rely upon is the cost of putting the information into machine-readable form. Technologically, and even economically, there is no longer an obstacle; we have low-cost devices (scanners combined with optical character recognizers) that translate printed text into computer files, cheaply and accurately.

But for new materials, we do not even have to incur a cost to obtain them in machine-readable form. Every word that is now printed in a newspaper, journal, or book passes through a machine at some time during its prior history (as these words are doing while I write them)—a typewriter or typesetting machine—that can produce a machine-readable version of the text at the same time that it produces the human-readable version. Hence, the written word is becoming almost universally available in both machine-readable and human-readable editions. Personal computers and electronic networks created the market for the machine-readable versions, and the conversion process is now going very rapidly. It is a little like the telephone—the more people who have them, the more worthwhile it is to get one.

This development has opened up a whole new range of applications of computers to organizations’ information systems. It enables computers to serve as initial filters for most of the information that enters the organization from outside, and thereby can reduce the attentional demands on executives. A recent example is the information system installed by the TIAA, the principal manager of university professors’ retirement funds. Letters and other communications from owners of policies are typically typed or handwritten. When received, they are immediately passed through a scanner and an optical character reader so that they can be stored in computer-readable form in the TIAA computer system. An employee determines where the communication should be routed, and if it needs attention at more than one point in the organization, the system automatically prepares and distributes copies. The ability to work on the task in parallel at several places reduces turnaround time considerably. Once the communication is inside the company’s information system, it can be used to call up automatically the records in the files that are needed for handling it.

Matching Techniques to Requirements. These comments will serve to indicate what is involved in fitting together the requirements of organization information systems with the characteristics of the information technology that is already available, and that which is emerging. The key to the successful design of information systems lies in matching the technology to the limits of the attentional resources. From this general principle, we can derive several rules of thumb to guide us when we are considering adding a component to an existing information-processing system.

In general, an additional component (man or machine) for an information-processing system will improve the system’s performance only if:

  1. Its output is small in comparison with its input, so that it conserves attention instead of making additional demands on attention;
  2. It incorporates effective indexes of both passive and active kinds (active indexes are processes that automatically select and filter information for subsequent transmission);
  3. It incorporates analytic and synthetic models that are capable not merely of storing and retrieving information, but of solving problems, evaluating solutions, and making decisions.
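Rule 1 and the “active index” of rule 2 can be combined in one toy component. This sketch (the item texts and interest weights are invented) scores incoming items against an interest profile and forwards only a few, so that its output stays small relative to its input:

```python
def active_index(items, interests, max_out=3):
    """An 'active index' component: scores incoming items against the
    organization's interest profile and forwards only the few most
    relevant ones, so its output is small compared with its input."""
    def score(item):
        return sum(weight for topic, weight in interests.items()
                   if topic in item.lower())
    ranked = sorted(items, key=score, reverse=True)
    return [i for i in ranked if score(i) > 0][:max_out]

inbox = ["Quarterly fuel price survey", "Office picnic schedule",
         "New energy regulation draft", "Printer maintenance notice",
         "Energy technology R&D results"]
interests = {"energy": 2, "fuel": 1, "regulation": 1}
out = active_index(inbox, interests)
print(out, "conserves attention:", len(out) < len(inbox))
```

A component that merely stored and forwarded everything would fail the first rule; this one selects and filters before transmission, which is precisely what makes it worth adding.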

Source: Simon Herbert A. (1997), Administrative Behavior, Free Press; Subsequent edition.
