Weighting

The weights for the British Social Attitudes survey correct for the unequal selection of addresses, dwelling units (DUs) and individuals, and for biases caused by differential non-response. The different stages of the weighting scheme are outlined in detail below.

Selection weights

Selection weights are required because not all the units covered in the survey had the same probability of selection. The weighting reflects the relative selection probabilities of the individual at the three main stages of selection: address, DU and individual. First, because addresses in Scotland were selected using the Multiple Occupancy Indicator (MOI), weights were needed to compensate for the greater probability of selection of an address with an MOI of more than one, compared with an address with an MOI of one (this stage was omitted for the English and Welsh data). Second, the data were weighted to compensate for the fact that a DU at an address containing a large number of DUs was less likely to be selected for inclusion in the survey than a DU at an address containing fewer DUs (this procedure was used because, in most cases where the MOI is greater than one, the first two stages cancel each other out, resulting in more efficient weights). Third, the data were weighted to compensate for the lower selection probabilities of adults living in large households compared with those in small households.
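
By way of illustration only, a minimal sketch of how the three stages might combine multiplicatively into a single selection weight (the inverse of the overall selection probability, up to a constant). The argument names are hypothetical stand-ins, not the survey's actual variables:

    def selection_weight(moi, n_dus_at_address, n_adults_in_du, in_scotland):
        # Each stage contributes the inverse of its selection probability.
        # Argument names are illustrative, not the survey's actual variables.
        weight = 1.0
        if in_scotland:
            weight /= moi            # stage 1: addresses selected with probability
                                     # proportional to the MOI (Scotland only)
        weight *= n_dus_at_address   # stage 2: one DU sampled per address
        weight *= n_adults_in_du     # stage 3: one adult sampled per DU
        return weight                # where the MOI equals the DU count, stages
                                     # 1 and 2 cancel, as noted above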

At each stage the selection weights were trimmed to avoid a small number of very high or very low weights in the sample; such weights would inflate standard errors, reducing the precision of the survey estimates and causing the weighted sample to be less efficient. Less than one per cent of the selection weights were trimmed at each stage. 
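
A minimal sketch of percentile-based trimming of this kind; the cut-points shown are assumptions for illustration, as the section does not state the thresholds actually used:

    import numpy as np

    def trim_weights(w, lower_pct=0.5, upper_pct=99.5):
        # Cap extreme weights at chosen percentiles; the percentiles here
        # are illustrative, not the survey's actual trimming thresholds.
        lo, hi = np.percentile(w, [lower_pct, upper_pct])
        return np.clip(w, lo, hi)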

Non-response model

It is known that certain subgroups in the population are more likely to respond to surveys than others. These groups can end up over-represented in the sample, which can bias the survey estimates. Where information is available about non-responding households, the response behaviour of the sample members can be modelled and the results used to generate a non-response weight. This non-response weight is intended to reduce bias in the sample resulting from differential response to the survey. 

The data were modelled using logistic regression, with the dependent variable indicating whether or not the selected individual responded to the survey. Ineligible households[2] were not included in the non-response modelling. A number of area-level and interviewer-observation variables were used to model response. Not all the variables examined were retained in the final model: variables not strongly related to a household’s propensity to respond were dropped from the analysis.
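
A hedged sketch of a response-propensity model of this kind, using scikit-learn; the column names are invented stand-ins for the area-level and interviewer-observation variables described below, and the exact specification used by the survey is not published here:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def fit_response_model(sample: pd.DataFrame) -> pd.Series:
        # 'responded' is 1 if the selected individual responded, else 0.
        # Predictor names are hypothetical stand-ins for the survey variables.
        predictors = ["gor", "area_condition", "address_condition",
                      "population_density", "dwelling_type", "entry_barriers"]
        X = pd.get_dummies(sample[predictors], drop_first=True)
        model = LogisticRegression(max_iter=1000).fit(X, sample["responded"])
        # Return each case's predicted probability of responding
        return pd.Series(model.predict_proba(X)[:, 1], index=sample.index)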

The variables found to be related to response were: Government Office Region (GOR), the relative condition of the immediate local area, the relative condition of the address, population density, dwelling type, and whether there were entry barriers to the selected address. Full details of the response model are available on request.

The non-response weight was calculated as the inverse of the predicted response probabilities saved from the logistic regression model. It was then combined with the selection weights to create the final non-response weight. The top one per cent of the weights were trimmed before the weights were scaled to the achieved sample size (standardising them around an average of one).
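
Continuing the sketch, the derivation of the final non-response weight from the saved probabilities might look like this (the one per cent trimming point is taken from the text; everything else is illustrative):

    import numpy as np

    def final_nonresponse_weight(p_response, selection_weight):
        # Inverse of the predicted response probability, combined with
        # the selection weight; an illustrative sketch of the steps above.
        w = np.asarray(selection_weight, dtype=float) / np.asarray(p_response)
        w = np.minimum(w, np.percentile(w, 99))  # trim the top one per cent
        return w * len(w) / w.sum()              # scale to an average of one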

Calibration weighting

The final stage of weighting was to adjust the final non-response weight so that the weighted sample matched the population in terms of age, sex and region. 

Only adults aged 18 or over are eligible to take part in the survey; the data have therefore been weighted to the British population aged 18 and over, based on 2011 Census data from the Office for National Statistics/General Register Office for Scotland.

The survey data were weighted to the marginal age/sex and GOR distributions using raking-ratio (or rim) weighting. As a result, the weighted data should exactly match the population across these three dimensions.  
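
A minimal sketch of raking-ratio (iterative proportional fitting) weighting, assuming the population margins are supplied as category shares; the variable names and structure are assumptions for illustration:

    import numpy as np
    import pandas as pd

    def rake(df, start_weight, margins, n_iter=100, tol=1e-8):
        # margins maps each raking variable to its population shares, e.g.
        # {"age_band": {...}, "sex": {...}, "gor": {...}} (hypothetical names).
        w = df[start_weight].to_numpy(dtype=float).copy()
        total = w.sum()
        for _ in range(n_iter):
            max_shift = 0.0
            for col, target in margins.items():
                for cat, share in target.items():
                    mask = (df[col] == cat).to_numpy()
                    current = w[mask].sum()
                    if current > 0:
                        # Scale this category so its weighted share hits the margin
                        factor = share * total / current
                        w[mask] *= factor
                        max_shift = max(max_shift, abs(factor - 1.0))
            if max_shift < tol:   # stop once all margins match
                break
        return w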

The calibration weight is the final weight to be used in the analysis of the 2013 survey; it has been scaled to the responding sample size.

Effective sample size

The effect of the sample design on the precision of survey estimates is indicated by the effective sample size (neff). The effective sample size measures the size of an (unweighted) simple random sample that would achieve the same precision (standard error) as the design actually implemented. If the effective sample size is close to the actual sample size, the design is efficient, with a good level of precision; the lower the effective sample size, the lower the level of precision. The efficiency of a sample is given by the ratio of the effective sample size to the actual sample size. Samples that select one person per household tend to be less efficient than samples that select all household members. The final calibrated non-response weights have an effective sample size of 2,575 and an efficiency of 75 per cent.
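
The effective sample size under unequal weighting is conventionally approximated by Kish's formula, neff = (sum of weights)^2 / (sum of squared weights); a one-line check:

    import numpy as np

    def effective_sample_size(w):
        # Kish's approximation: neff = (sum w)^2 / sum(w^2)
        w = np.asarray(w, dtype=float)
        return w.sum() ** 2 / (w ** 2).sum()

    # efficiency = effective_sample_size(w) / len(w); on the figures quoted
    # above (neff = 2,575 at 75 per cent efficiency), this implies a
    # responding sample of roughly 3,430.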

All the percentages presented in this report are based on weighted data.

Notes
  1. Until 1991 all British Social Attitudes samples were drawn from the Electoral Register (ER). However, following concern that this sampling frame might be deficient in its coverage of certain population subgroups, a ‘splicing’ experiment was conducted in 1991. We are grateful to the Market Research Development Fund for contributing towards the costs of this experiment. Its purpose was to investigate whether a switch to PAF would disrupt the time-series – for instance, by lowering response rates or affecting the distribution of responses to particular questions. In the event, it was concluded that the change from ER to PAF was unlikely to affect time trends in any noticeable way, and that no adjustment factors were necessary. Since significant differences in efficiency exist between PAF and ER, and because we considered it untenable to continue to use a frame that is known to be biased, we decided to adopt PAF as the sampling frame for future British Social Attitudes surveys. For details of the PAF/ER ‘splicing’ experiment, see Lynn and Taylor (1995).
  2. This includes households not containing any adults aged 18 or over, vacant dwelling units, derelict dwelling units, non-resident addresses and other deadwood.
  3. In 1993 it was decided to mount a split-sample experiment designed to test the applicability of Computer-Assisted Personal Interviewing (CAPI) to the British Social Attitudes survey series. CAPI has been used increasingly over the past decade as an alternative to traditional interviewing techniques. As the name implies, CAPI involves the use of a laptop computer during the interview, with the interviewer entering responses directly into the computer. One of the advantages of CAPI is that it significantly reduces both the amount of time spent on data processing and the number of coding and editing errors. There was, however, concern that a different interviewing technique might alter the distribution of responses and so affect the year-on-year consistency of British Social Attitudes data.

    Following the experiment, it was decided to change over to CAPI completely in 1994 (the self-completion questionnaire still being administered in the conventional way). The results of the experiment are discussed in the British Social Attitudes 11th Report (Lynn and Purdon, 1994).
  4. Interview times recorded as less than 20 minutes were excluded, as these timings were likely to be errors.
  5. An experiment was conducted on the 1991 British Social Attitudes survey (Jowell et al., 1992) which showed that sending advance letters to sampled addresses before fieldwork begins has very little impact on response rates. However, interviewers do find that an advance letter helps them to introduce the survey on the doorstep, and a majority of respondents have said that they preferred some advance notice. For these reasons, advance letters have been used on British Social Attitudes surveys since 1991.
  6. Because of methodological experiments on scale development, the exact items detailed in this section have not been asked on all versions of the questionnaire each year. 
  7. In 1994 only, this item was replaced by: Ordinary people get their fair share of the nation’s wealth. [Wealth1]
  8. In constructing the scales, a decision had to be taken on how to treat missing values (“Don’t know” and “Not answered”). Respondents with more than two missing values on the left–right scale, or more than three on the libertarian–authoritarian or welfarism scales, were excluded from the scale in question. For respondents with fewer missing values, “Don’t know” was recoded to the mid-point of the scale and “Not answered” was recoded to the scale mean for that respondent on their valid items.