We turn next to our attempt to replicate these findings using data on the mortality of firms in the semiconductor industry.
1. Measurement
We use the same definition of mortality reported in previous chapters: exits from the market, as tracked through listings in the Electronics Buyer’s Guide. For each firm, we recorded by year whether it was listed in each of the 85 product categories that appeared at one time or another in the Guide. We counted the number of categories existing in each year and, for each firm, the number of categories in which it marketed products in each year. We measure generalism as the percentage of the categories existing in a given year in which the firm in question sells products. This measure varies among firms at a point in time and over time for each firm.
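As a minimal sketch, the generalism measure can be computed directly from yearly listings; the firm names and categories below are hypothetical, not drawn from the Guide data:

```python
# Hypothetical yearly listings: which categories each firm markets in each year.
listings = {
    1975: {"FirmA": {"transistors", "diodes"}, "FirmB": {"transistors"}},
    1976: {"FirmA": {"transistors", "diodes", "mos_ic"}, "FirmB": {"diodes"}},
}

def generalism(year):
    """Share of the categories existing in `year` in which each firm sells."""
    active = set().union(*listings[year].values())  # categories listed by anyone
    return {firm: len(cats) / len(active) for firm, cats in listings[year].items()}

print(generalism(1975))  # FirmA sells in all active categories, FirmB in half
```

The measure varies across firms within a year (FirmA vs. FirmB) and within a firm over years, as the text requires.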
We measured variability and coarseness of grain using data on yearly North American sales in nine aggregate categories. We conducted spectral analysis of the yearly sales series for each category using the SAS procedure PROC SPECTRA (SAS Institute 1984). The first step in this analysis was to detrend the sales series. We tried various specifications of a regression model to accomplish this, using R² as the criterion for choosing among them. We found that logarithmic transformations of the series improved the fits substantially. We then regressed each value of the log-transformed series on the previous year’s value and its square; this quadratic specification takes the accelerating growth in sales into account. The resulting R² values ranged from 0.92 to 0.98. The residuals from these time-series regressions are the bases of our measures of both variability and coarseness of grain.
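The detrending step can be sketched as follows; the sales series here is synthetic (an accelerating trend plus noise), standing in for one of the nine aggregate categories, and the code is an illustrative reimplementation rather than the SAS procedure itself:

```python
import numpy as np

# Synthetic accelerating sales series (not the actual data).
rng = np.random.default_rng(0)
sales = np.exp(np.linspace(2.0, 5.0, 30)) * rng.lognormal(0.0, 0.1, 30)

# Regress each log-transformed value on the prior year's value and its square,
# then keep the residuals as input to the spectral analysis.
y = np.log(sales)
lag = y[:-1]
X = np.column_stack([np.ones_like(lag), lag, lag**2])  # quadratic in lagged value
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
residuals = y[1:] - X @ beta

r2 = 1 - residuals.var() / y[1:].var()
print(f"R^2 = {r2:.3f}")  # high, as in the reported 0.92-0.98 range
```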
The decision to focus on seasonality in the restaurant study was a substantive one. It seemed reasonable to treat the market in a unitary way, rather than trying to estimate fluctuation in demand by kind of food. In the semiconductor study, we knew a priori that the families of products do not appear at the same time and that each has its own product life cycle. Transistors, for example, are one of the older product families; over time the technology for producing these products has become highly standardized. Others, such as MOS integrated circuits, are newer (and, therefore, have shorter time series). At the time of the study, these products were still accelerating in sales and the technology was still evolving. Finally, semiconductor products are integral parts of larger systems; they are rarely sold directly to end users. Consequently, the business cycles for each product family are driven by the business cycles for the systems in which they are inserted. In order to accommodate these differences, we analyzed each product group separately.
To measure coarseness of grain, we used spectral analysis to estimate the strength of cycles of different frequencies. The time series of any variable may have high-frequency cycles nested within low-frequency cycles. Spectral analysis provides estimates of the contribution to variance in the series of cycles of varying lengths (frequencies). The spectral density associated with each frequency is analogous to an R²: it reflects the share of total variation in the detrended time series that can be attributed to that cycle. For each sales vector, we noted the frequency of the cycle with the highest spectral density. This allowed us to distinguish product groups in which deviations from the growth trend were concentrated in short-term cycles from those in which deviations followed longer-term fluctuations.
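A minimal sketch of this step, using a synthetic detrended series with a built-in four-year cycle in place of an actual product group's residuals:

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic detrended residuals with a 4-year cycle plus noise (hypothetical).
t = np.arange(40)
resid = np.sin(2 * np.pi * t / 4) + 0.2 * np.random.default_rng(1).normal(size=40)

# The periodogram estimates spectral density at each frequency; the frequency
# with the highest density identifies the dominant cycle length.
freqs, density = periodogram(resid)
peak = freqs[np.argmax(density[1:]) + 1]  # skip the zero-frequency ordinate
print(f"dominant frequency {peak:.3f} cycles/year (~{1/peak:.1f}-year cycle)")
```

A high peak frequency marks a fine-grained (short-cycle) product group; a low one marks a coarse-grained group.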
We computed a measure of the coarseness of the environmental grain of each firm using information on the cyclical behavior of the product groups in which the firm participated. We noted the product families in which the firm sold products each year. We also noted for each product group the frequency that had the highest spectral density. Then we computed the mean of the frequencies for the product families in which the firm was active in each year as a measure of coarseness of grain (C) for each firm.
Of course, the vectors of yearly sales may exhibit highly cyclical patterns of variation, or there may be virtually no cycles in the data. Variation composed of cycles of all frequencies in equal proportion, once long-run trends are removed, is called “white noise.” The relative amount of white noise in the various sales vectors is the basis of our measure of variability (V). Fisher’s Kappa is used to measure the white noise in the time series of residuals (Fuller 1976). The mean of Kappa across the product families in which a firm produces devices is the measure of V used here. It varies among firms and over time for individual firms as they enter and leave product families. Kappa ranges from 1.97 to 3.65.
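Fisher’s Kappa is the largest periodogram ordinate divided by the mean ordinate: under white noise the ordinates are roughly comparable in size, while a strong cycle concentrates power in one ordinate and drives Kappa up. A sketch with synthetic series (an illustrative reimplementation, not the SAS PROC SPECTRA output):

```python
import numpy as np
from scipy.signal import periodogram

def fishers_kappa(series):
    """Max periodogram ordinate over mean ordinate (zero frequency dropped)."""
    _, density = periodogram(series)
    ordinates = density[1:]
    return ordinates.max() / ordinates.mean()

rng = np.random.default_rng(2)
noise = rng.normal(size=200)                       # near-white-noise residuals
t = np.arange(200)
cyclic = np.sin(2 * np.pi * t / 10) + 0.3 * rng.normal(size=200)  # strong cycle

print(fishers_kappa(noise), fishers_kappa(cyclic))  # cyclic series scores higher
```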
In the analysis of restaurants we defined variability in terms of the temporal variation in circumstances affecting the life chances of a given form of organization. Variability was described in terms of variation about a mean. In this study, that mean is a shifting average defined in least-squares terms. This is tantamount to assuming that semiconductor firms are not exposed to uncertainty by the orderly expansion of the markets in which they operate. Their managers know what this expansion is likely to be; what they know much less well is when cycles will reach turning points. Being well adapted to all the circumstances that the pronounced semiconductor business cycle represents is virtually impossible. Firms must counterbalance the long-run advantage of continuing to invest in the most current technology against the short-run risks attending financial losses in a period in which orders fall. Firms that invest little when times are lean may survive the current conditions only to fall behind in either product design or manufacturing technique. This suggests that no single form of organization can dominate the others in all circumstances. If this is true, fitness sets are concave. We proceed on the assumption that they are, and note that if we are wrong, our model should not fit the data well.
2. Results
We begin by reporting estimates of a model that parallels the one whose estimates are reported in the first column of Table 12.1. Column 1 of Table 12.2 reports a simple model with just the effects of the three components (generalism, variability, and coarseness of grain) and the interaction effect of generalism and variability. (Since we are using partial likelihood estimation, there is no intercept.) Column 2 adds the other two interaction effects, which allows estimation of δ and ζ and a joint test of hypothesis 1, that fine-grained and coarse-grained selection environments differ. Adding the two interactions in column 2 increases the fit significantly. Moreover, both δ and ζ differ significantly from zero at the .01 level. This result supports the first hypothesis.
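The parenthetical point about the intercept can be made concrete: in a Cox-style partial likelihood, the baseline hazard, and with it any constant shift in the linear predictor, cancels from every risk-set ratio. A minimal numeric sketch with hypothetical exit data (not the authors’ estimation code):

```python
import numpy as np

def cox_partial_loglik(beta, times, events, X):
    """Breslow partial log-likelihood for distinct exit times (no ties)."""
    eta = X @ beta
    ll = 0.0
    for i in range(len(times)):
        if events[i]:                    # an observed exit
            at_risk = times >= times[i]  # firms still at risk at this time
            ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return ll

times = np.array([2.0, 3.0, 5.0, 7.0])
events = np.array([1, 0, 1, 1])          # 1 = exit observed, 0 = censored
X = np.array([[0.5], [1.0], [0.2], [0.8]])  # e.g., a generalism covariate

ll = cox_partial_loglik(np.array([0.3]), times, events, X)
# Adding an intercept column changes nothing: the constant cancels from each
# numerator and denominator, so no intercept is identified.
ll_const = cox_partial_loglik(
    np.array([0.3, 5.0]), times, events, np.column_stack([X, np.ones(4)])
)
print(ll, ll_const)
```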
Column 3 adds the effect of log time since entry. We expected this variable to have a negative effect and it does, as we discussed in previous chapters. None of the other variables changes its sign when we control for time since entry, and none loses statistical significance. So we evaluate the hypotheses involving point estimates using the estimates in column 3.
Hypotheses 2a and 2b pertain to fine-grained environments. They concern the claim that specialist organizations fare better than generalists over the range of variability in such environments. The first subhypothesis, 2a, pertains to fine-grained environments with low variability. It holds that specialists are favored in these conditions, which in our notation means β > 0. In the case of the semiconductor population, as for the restaurant population, this hypothesis fails. In fact, βˆ is negative and significantly different from zero at the .01 level. So in our data, fine-grained environments with low variability do not favor specialists; they favor generalists rather strongly.
Hypothesis 2b concerns the relative mortality rates of specialists and generalists in fine-grained environments with moderate or high levels of variation. Specialists are favored over the full range of variation if γ > 0. This is the case in all three columns in Table 12.2. However, the estimate of γ does not differ significantly from zero in the second and third columns. Thus we conclude that this hypothesis also fails.
Our third hypothesis pertains to coarse-grained environments. It states that when variability is low, the mortality rate of generalists should exceed that of specialists, but when variability is high, specialists should have the advantage and their mortality rate should be lower than that of generalists. Our data strongly support this hypothesis. The estimated ratio of the mortality rate of generalists to that of specialists is
The expression in parentheses changes sign when V equals 3.14. The range of V is from 1.97 to 3.65. So when grain is coarse, generalists maintain their advantage except when V is near its maximum.
For continuity with the previous chapter, we present Table 12.3, which shows how the niche width model adds to the fit provided by the models developed in the previous chapters. The model of exit rates from Chapter 11 includes density dependence, dependence on prior rates of entry and exit, dependence on time since entry to the industry, the subsidiary/independent organizational difference, and the effects of historical periods and business conditions. To aid comparison, we reproduce the results from Table 11.4, column 4. Column 2 adds the variables and interactions relevant to niche width dynamics. This addition improves the fit of the model in column 1 significantly. The log-likelihood rises from -7,474.2 to -7,427.7, a difference that is significant at the .01 level with six degrees of freedom.
In fact, although we made no predictions about the first-order effects, all three columns in Table 12.2 show a negative first-order effect of coarseness of grain, and all three are statistically significant. However, the interaction effects in the niche width model no longer differ significantly from zero individually. We should not be greatly surprised by this, however, since the model we are supplementing already has twelve parameters. If we ignore the lack of individual statistical significance and examine the hypotheses of the niche width model, we find the same pattern of results reported in Table 12.2: hypothesis 2 is not supported, but hypotheses 1 and 3 are supported.
The failure of hypothesis 2 reflects the fact that generalist firms appear to have an overall advantage. Why would generalism offer such persistent advantages? One answer lies in the fact that we have been unable to measure the size of the organizations in the population. Generalism, as we have measured it, is very likely to be confounded with large size. A big firm, such as Texas Instruments, is likely to offer a broad range of products, while a very small firm could not do so. Large size may very well operate like generalism. A big firm has greater reserves of resources with which to ride out difficult conditions.
We believe that we have exposed our theory to an unusually stringent test. Although it was not fully supported, we do think the data support it well enough to encourage further research.
Source: Hannan, Michael T., and John Freeman (1993), Organizational Ecology, Harvard University Press, reprint edition.