We have defined, partly functionally and partly explicitly, the model, the quantities of interest, and the belief specifications that we need to carry out the adjustment. In particular, we have organised matters so that variances and covariances pertaining to the underlying mean components are stored in variance-covariance store 2, and those pertaining to the mean-plus-residual components in store 1. It remains only to define a range of values for the stability parameter (we choose three values initially), to construct the corresponding belief specifications for each such value, and then to perform the desired belief adjustments. The fragment of code in Figure 8 contains three parts: calls to the main analysis subroutine with different values of the stability parameter, the subroutine itself, and the data. We concentrate only on describing the subroutine.
For each value of the stability parameter, the INDEX: and COBUILD: commands are used to construct the thirteen observable quantities and the corresponding thirteen underlying mean components. These have expectations, variances, and underlying mean-component variances computed from their definitions: the former functionally, the latter as linear combinations of the intercept, slope, and error quantities. (Much computation is involved at this point, so the program may take some time to generate the quantities, depending on the computer platform.)
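To illustrate how such specifications follow from the definitions, here is a minimal sketch in Python (not [B/D] code). The simple fixed-slope form Y_t = a + b*t + e_t and all prior values are our own assumptions for illustration; the actual model allows the slope to change over time, governed by the stability parameter. Variances and covariances of defined quantities follow by bilinearity from the coefficient vectors:

```python
import numpy as np

T = 13                                             # thirteen indexed quantities
var_a, var_b, cov_ab, var_e = 1.0, 0.5, 0.2, 0.25  # hypothetical prior values

# Coefficient vector of Y_t on the underlying quantities (a, b, e_1..e_T).
def coeffs(t):
    c = np.zeros(2 + T)
    c[0], c[1], c[1 + t] = 1.0, float(t), 1.0
    return c

# Prior variance matrix of (a, b, e_1, ..., e_T), errors uncorrelated.
V = np.zeros((2 + T, 2 + T))
V[:2, :2] = [[var_a, cov_ab], [cov_ab, var_b]]
V[2:, 2:] = var_e * np.eye(T)

C = np.array([coeffs(t) for t in range(1, T + 1)])
var_Y = C @ V @ C.T                           # overall variances (store 1)
var_M = C[:, :2] @ V[:2, :2] @ C[:, :2].T     # mean-component variances (store 2)
```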
Once constructed, the regression parameters are collected into one named base and the observables into another. Observations from three experiments are then attached; these are shown at the foot of the listing.
For the analysis, we exploit the notion of Bayes linear sufficiency via exchangeability: here the vector of averages of the observables over the three experiments is Bayes linear sufficient for adjusting the regression parameters. To perform the Bayes linear adjustment of the regression parameters by the observables, it is necessary to set various belief source controls. These indicate to [B/D] the belief stores in which the overall and the underlying mean-component variance matrices are held (stores 1 and 2 respectively, as arranged above).
The exchangeable and usedata controls specify that [B/D] should take into account data on the observables, and should use the internal routines that exploit exchangeability.
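In our notation (the symbols here are ours, not taken from the listing), the property these controls exploit is that adjustment by the vector of sample averages is equivalent to adjustment by the full sample: if $D = (D_1, \dots, D_n)$ is a second-order exchangeable sample of the observable vector and $\bar D$ its vector of averages, then
\[
E_{\bar D}(B) = E_{D}(B), \qquad \mathrm{Var}_{\bar D}(B) = \mathrm{Var}_{D}(B),
\]
for any collection $B$, here the regression parameters, for which $\bar D$ is Bayes linear sufficient.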
We next make two kinds of adjustment. First, we adjust the regression parameters by the data (three observations of the vector of observables) and display their adjusted expectations using the SHOW: command. (As the averages of the observables are Bayes linear sufficient for the regression parameters, [B/D] automatically obtains the general adjustment via the sample means alone.) The adjusted expectations for the regression parameters are shown in Table 3 for the three values of the stability parameter. This output indicates that although the model and observations are consistent with changes in expectation for these parameters (the adjusted expectation for the intercept rises slightly, and so on), changing the stability parameter makes little difference, so that the model is not particularly sensitive to the choice of stability parameter for predicting individual values.
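Computationally, the adjusted expectation displayed by SHOW: is the standard Bayes linear formula applied to the sample means. A minimal sketch in Python (our notation; [B/D] performs this internally):

```python
import numpy as np

def adjusted_expectation(E_B, E_D, cov_BD, var_D, d_bar):
    """Bayes linear adjusted expectation of a collection B by data D:
    E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (d - E(D)).  By sufficiency,
    d may be taken to be the vector of sample averages."""
    return E_B + cov_BD @ np.linalg.solve(var_D, d_bar - E_D)

def adjusted_variance(var_B, cov_BD, var_D):
    """Adjusted variance: Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B)."""
    return var_B - cov_BD @ np.linalg.solve(var_D, cov_BD.T)
```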
Our second adjustment assesses variance sensitivities in relation to changes in sample size. It is a property of such second-order exchangeable adjustments that the analysis for a sample of size n can be deduced, with almost no additional computation, from the same analysis performed for a sample of size one. The analysis is particularly simple for adjustments in which the observables are Bayes linear sufficient for the collection to be adjusted, as is the case here. Consequently, we tend to make an initial analysis assuming a sample of size one, from which the analysis for a general sample size n is easily deduced. We therefore use the usedata and obs controls to indicate that the analysis should assume an initial sample size of one and should ignore the actual observations available, and we then perform the adjustment of the underlying mean-component vector by the observables, exploiting Bayes linear sufficiency automatically. [B/D] deduces what we mean by Y from the settings of the belief source controls described above.
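The deduction rests on the fact that, for such exchangeable adjustments, the canonical directions do not depend on the sample size, while each size-one canonical resolution $\lambda_i$ scales as
\[
\lambda_i(n) = \frac{n\,\lambda_i}{1+(n-1)\,\lambda_i},
\]
so that the whole analysis for a sample of size n follows directly from the size-one analysis.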
Much of the output from an adjustment is also available interactively as further input to the program: adjusted expectations, variances, and covariances; the resolution transform and its canonical structure; and so forth. Various other functions are available to exploit exchangeability as appropriate. For the sensitivity study here we output, for each value of the stability parameter, the canonical resolutions together with the sample sizes needed to achieve given reductions in uncertainty.
For the analysis of sensitivity over the model we examine both the canonical resolutions and some implications of changing the sample size. For each of the three values of the stability parameter, Table 4 shows two sample sizes and the thirteen canonical resolutions. The first sample size is that needed to achieve a 50% reduction in uncertainty over the collection overall, as measured by the trace of the resolution transform. The second is the sample size needed to guarantee a variance reduction of at least 50% in every linear combination of the quantities. The canonical resolutions indicate the effective dimension of the model and the speed of variance reduction as the sample size increases.
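Both sample sizes in Table 4 can be deduced from the size-one canonical resolutions using the scaling identity above. A sketch in Python (our notation, not [B/D] code; the numerical checks use resolutions consistent with the figures quoted below):

```python
import numpy as np

def resolutions_at_n(lam, n):
    """Canonical resolutions for a sample of size n, from size-one values."""
    lam = np.asarray(lam)
    return n * lam / (1.0 + (n - 1) * lam)

def n_guarantee(lam_min, target=0.5):
    """Smallest n guaranteeing at least `target` resolution in every linear
    combination: solve n*lam/(1+(n-1)*lam) >= target for the smallest
    size-one canonical resolution lam_min."""
    return int(np.ceil(target * (1.0 - lam_min) / (lam_min * (1.0 - target)) - 1e-9))

def n_overall(lam, target=0.5):
    """Smallest n achieving `target` reduction over the collection overall,
    as measured by the (normalised) trace of the resolution transform."""
    n = 1
    while resolutions_at_n(lam, n).mean() < target:
        n += 1
    return n

# Consistency checks against the text: a smallest resolution of 1/1002
# (about 0.1%) needs n = 1001 for a guaranteed 50% reduction, while
# 1/80 (= 0.0125) needs only n = 79.
assert n_guarantee(1.0 / 1002) == 1001
assert n_guarantee(1.0 / 80) == 79
```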
We discover that these values are highly sensitive to the choice of stability parameter. For one extreme choice the smallest canonical resolution is roughly 0.001, so that for a sample of size n=1 we can guarantee a reduction in uncertainty of only 0.1% in every linear combination of the mean components. This guaranteed reduction rises to 50% only if we take a sample of size n=1001, whereas a sample of size n=79 suffices for the same reduction under another choice of the parameter. For uncertainty in the collection overall the picture is similar. Examining the canonical resolutions for the first choice, the model is dominated by two canonical quantities, with resolutions of 0.60 and 0.31 respectively. The remaining canonical quantities have small resolutions, so that large sample sizes will be needed to reduce their variances appreciably. The learning process for the model under this choice is therefore essentially two-dimensional. (This should not surprise us: this choice of stability parameter forces (4) to become a simple regression model with two parameters, intercept and common slope.)
As we reduce the stability parameter, the dynamics of adjustment change: the number of canonical quantities showing appreciable variance reductions for small sample sizes increases, and we learn more quickly about the collection overall, so that at the smallest value considered we can learn about almost all combinations of the Y values. The model is therefore very sensitive to the choice of stability parameter for learning about changes in Y over time.