We now live longer and healthier lives than at any time in history. Yet we are more preoccupied today than ever before with risks to health, safety and the environment. We want higher levels of health and safety and expect the environment to be safeguarded for future generations. This is apparent from people's willingness to pay more for goods alleged to be healthier and safer for man and the environment, and from preoccupation with issues such as global warming and holes in the ozone layer. Moreover, recent decades have seen precipitous drops in the rate and social acceptability of smoking, a widespread shift towards low-fat, high-fibre diets and dramatic improvements in road safety.
In short, though it is a chimera, there is a general hankering after zero risk and an expectation that more will be done about health, safety and the environment.
In an ideal world, there would be no need for the Government to intervene in all this. Market forces operating properly would automatically adjust prices of goods or services according to the level of risk they entail. For example, the price of products would reflect the costs to workers and to society more generally of the risks imposed by production processes; and would also reflect consumers' willingness to pay more for products which were safer to use.
In the real world, though, a variety of market failures occur which prevent the market from operating fully and can create a case, or generate pressure, for Government intervention. For instance, the social costs of a product may not be reflected in the business costs of the firm that produces it. The classic example is air pollution: more often than not, its burden falls largely on those who do not purchase the firm's product. Another example occurs where consumers have no way of knowing that a particular product contains a highly carcinogenic contaminant, or conversely believe that a harmless contaminant is a highly hazardous one.
The net effect of these examples is that the messages which a market operating properly should have sent to those creating risks - such as "stop undertaking this activity, because we will not purchase the products that you manufacture or the services that you offer" - are at times either incomplete, not received at all, or sent to the wrong people. Hence the need at times for, or the call on, the Government to intervene.
It is not easy to strike a balance on how far the Government should regulate to correct market failures. On the one hand, regulations do bring benefits. Pressure for greater safety is exerted by the public, the media, pressure groups, industry, trade unions, Parliament and governmental institutions, and it is proper that regulators should be responsive to these forces. Moreover, besides reducing risks, regulations create a level playing field and give the public assurance that industry is taking advantage of technological advances.
On the other hand, experience has shown that introducing regulations slavishly in response to pressures can result in over-regulation of some hazards, under-regulation of others and a reduction in the credibility of the Government and/or its agencies. Furthermore, poor regulation can entail excessive costs, impose burdens on industry and may needlessly restrict the freedom of individuals. Where the costs of regulation to business cannot be offset by benefits (eg by securing greater reliability and operability of plant and encouraging firms to have in place good management systems), the remainder will ultimately have to be borne by consumers or the taxpayer. If the overall costs of regulation exceed the social benefits they engender, the result will be a reduction in overall well-being.
The resolution of these conflicting pressures often involves political judgements, but the process of risk assessment is nowadays an essential component of the exercise. Indeed, the Government wants the practice of risk assessment to epitomise the process of policy making in the field of Government regulation. The Department of Trade and Industry (DTI), for example, under the Government's Deregulation Initiative, which is intended to reduce the burden of Government regulations on business, published a booklet, in November 1993, about the techniques of risk assessment and risk management and how these can be incorporated into the decision-making process.
In particular, the DTI booklet stresses that it is important to examine each issue or problem carefully and consider all the alternatives so that when advice is given on whether regulation is appropriate, the action recommended is in proportion to the risks involved. The booklet also emphasises the need for an analysis of the costs and benefits of each option to make sure that it does not impose any unnecessary burdens on business in relation to the likely benefits.
However, the process of risk assessment has wider application too. For example, it is being increasingly used as a mechanism in legislation, for placing duties on those creating risks to man and the environment to introduce appropriate measures to eliminate or control such risks; for deciding what goods should be allowed on the market; and by inspectors in assessing compliance with regulations.
Assessing risks is simple in principle. It involves identifying hazards or examining what in a particular situation could cause harm or damage and then assessing the likelihood that harm will actually be experienced by a specified population and what the consequences would be (ie the risk). The overall objective is to obtain a view on how to manage the risk or to compare the risk with other risks.
At a conceptual level, it has proved useful to make a distinction between an assessment of the risks (the evaluation of the likelihood of harm and its consequences for populations or individuals as described above) and risk control (the prioritisation of risks and the introduction of measures that might be put in place to reduce, if not prevent, the harm from occurring). This stems from the belief that an assessment of risks by and large can be scientific and objective but risk control less so because it integrates the results of an assessment of risks with other critical inputs, such as economic analysis, perception of risks, need for and availability of alternative technologies, concerns about equity, consistency across a range of risks, and many other matters, where there is more room for subjective elements to creep in.
In practice, such a strict demarcation is not possible. It is often difficult to say where an assessment of risks ends and risk control begins, or to assess risks without making a number of assumptions. In other words, a risk assessment is invariably a mixture of science and policy, with one of the two predominating and often no consensus on where the boundary between them lies.
As such, risk assessment is essentially a tool for extrapolating, from statistical and scientific data, a value which people will accept as an estimate of the risk attached to a particular activity or event. Though there are many techniques for arriving at such a value or number, tailored to different applications and covering a wide range of sophistication, risk assessment nowadays is more often than not a composite of established disciplines, including toxicology, engineering, statistics, economics, demography and psychology.
However, integrating the results of a risk assessment into the decision-making process is not without its problems. Some of these problems and their implications are examined below. This makes it all the more important that risk assessments submitted to decision makers, eg Ministers, should give full information on: the science and assumptions underpinning the risk assessment; the criteria used for deciding what risks are unacceptable, tolerable, or negligible; and any other critical inputs that have been taken into account and contributed to the overall conclusions and recommendations.
All those concerned with risk assessment quickly find out that uncertainty permeates the whole process. This uncertainty is of two types. The first is due to lack of knowledge, either about relevant data or about whether a particular effect will occur for postulated circumstances. An example is the study of the threat posed by a substance to humans and/or the environment.
The starting point for undertaking such a study is to look at the intrinsic ability of the substance to cause adverse effects such as cancer, irritation, liver dysfunction or environmental damage. Information on toxicity can be obtained from in-vitro (test-tube) studies and/or animal experiments and at times comparisons with the toxicity of substances with a similar chemical structure can be helpful. It is then necessary to have or deduce a dose-response relationship for each adverse effect, ie to what extent the harm to humans and the environment depends on the amount (or dose) to which they are exposed. The next step is to determine the degree to which the population might be exposed. The risks can then be estimated from these measurements and the dose-response relationships.
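As an illustration of the final estimation step, the sketch below combines a hypothetical exposure estimate with an assumed linear dose-response relationship. The slope factor, the exposure figure and the linearity itself are invented for illustration; as the text goes on to note, real dose-response relationships are often far less simple.

```python
# Illustrative sketch only: a linear no-threshold dose-response model with
# hypothetical numbers, not data for any real substance.

def lifetime_risk(exposure_mg_per_kg_day, slope_factor):
    """Excess lifetime risk as slope factor x average daily dose (linear model)."""
    return slope_factor * exposure_mg_per_kg_day

slope = 0.05          # hypothetical potency: risk per (mg/kg/day) of dose
exposure = 0.0004     # hypothetical average dose, mg per kg body weight per day

risk = lifetime_risk(exposure, slope)
print(f"Estimated excess lifetime risk: {risk:.1e}")  # 2.0e-05, ie 2 in 100,000
```

The point of the sketch is only the structure of the calculation: a toxicity measure and an exposure measure are combined through an assumed dose-response relationship to yield a risk estimate.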
Uncertainty in the above process occurs at different stages. It can be caused by the usual variance that accompanies scientific measurements (eg measurement of exposure), absence of data, individual variability and imperfect understanding of the underlying biological processes involved. For instance, even when it has been established that a particular substance can cause ill-health, there is often not enough data to decide whether its toxicity is directly proportional to the dose or whether the relationship is much more complex. Moreover, if the toxicity data was obtained from animal experiments, the problem arises of how to extrapolate such results to estimate the effects on man.
Indeed, sometimes there is so little data on the hazardous properties of physical, biological or chemical agents that attention is focused on the potential of the agent to be released into an environment accessible to people, plants or animals or other things that people value. The adoption of the principle that it is better to be safe than sorry means that the release of these risk agents must be controlled, even in the absence of information on the extent of the threats posed by the agents. This is in line with the UK Sustainable Development Strategy1 which makes clear that
"where there are significant risks of damage to the environment, the Government will be prepared to take precautionary action to limit the use of potentially dangerous materials or the spread of potentially dangerous pollutants, even where scientific knowledge is not conclusive, if the likely balance of costs and benefits justifies it".
The second type of uncertainty derives from the random nature of the events, which itself depends on natural circumstances and their timespan. The frequency of past events can provide a good indication of the probability of repeats in the future, if circumstances do not change. For example, insurance companies can predict with great precision the average probability of individuals in a particular population dying in any one year. Risks for events for which there is no previous accident data or which occur infrequently need to be estimated by modelling or by the exercise of judgement, inevitably subjective and uncertain.
It is therefore not surprising that a large proportion of effort devoted to improving risk assessment methodologies has concentrated on the development of methods for tackling problems posed by uncertainty. Several techniques are now available. For instance, sensitivity testing is often used to check the validity and robustness of the assumptions made to fill the gaps where data is missing. Small changes are made, within plausible limits, in the data used and assumptions made, and their effects on the result of the assessment are then examined. Large changes in the results are undesirable.
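A minimal sketch of such sensitivity testing, using a toy multiplicative risk model with invented parameter values (a real assessment would substitute its own model and its own plausible ranges):

```python
# One-at-a-time sensitivity testing on a toy risk model.
# All parameter names and values here are illustrative assumptions.

def assessed_risk(exposure, potency, uptake_fraction):
    return exposure * potency * uptake_fraction

baseline = {"exposure": 0.001, "potency": 0.5, "uptake_fraction": 0.8}
base_risk = assessed_risk(**baseline)

# Vary each uncertain input by +/-20% within plausible limits and record
# the effect on the result; large swings would signal a fragile assessment.
for name in baseline:
    for factor in (0.8, 1.2):
        varied = dict(baseline, **{name: baseline[name] * factor})
        change = (assessed_risk(**varied) - base_risk) / base_risk
        print(f"{name} x{factor}: risk changes by {change:+.0%}")
```

In this toy model every input enters multiplicatively, so each 20% perturbation moves the result by exactly 20%; in a realistic model the same procedure reveals which assumptions the conclusion actually hinges on.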
Methods are also continually being developed for finding ways of determining the extent of risks for processes for which there is little historical data. One such technique - fault tree analysis - allows experts to analyse the combination of events that could cause a system to break down and how these events could arise from failure of components of the system and the chance of those failures occurring. A fault tree is then drawn by depicting graphically how the components of the system depend on each other and how failure of one component affects key operations. Once the tree is available it is possible to estimate the chance of the entire system failing from a knowledge of the failure rates of the individual commonly-used components. These failure rates can be obtained from reliability databanks.
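The arithmetic of combining component failure probabilities through such a tree can be sketched as follows. The component names and probabilities are invented, and the failures are assumed to be independent:

```python
# Minimal fault tree sketch with made-up failure probabilities per demand.
# AND gate: all inputs must fail; OR gate: any one input failing suffices.
# Independence of failures is assumed throughout.

def and_gate(*probs):
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    none_fail = 1.0
    for p in probs:
        none_fail *= (1.0 - p)
    return 1.0 - none_fail

pump_fails = 1e-3          # hypothetical failure probabilities,
backup_pump_fails = 1e-3   # of the kind taken from reliability databanks
valve_sticks = 5e-4

# Top event: the system fails if the valve sticks, or both pumps fail together.
top_event = or_gate(valve_sticks, and_gate(pump_fails, backup_pump_fails))
print(f"Estimated system failure probability: {top_event:.2e}")
```

Here the redundant pumps contribute only 1e-6 to the top event, so the single valve dominates; identifying such dominant paths is one of the main practical uses of the technique.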
However, the problem of uncertainty that occurs in the modelling of systems subject to random failure of components, is not confined to the quality of data on failure rates. It could stem from the methodology adopted as well. For example, is the list of initial hazards complete? Are the fault trees complete or has a fault sequence been missed out? Could a common cause trigger more than one failure sequence at a time? Engineers guard against common cause failures by principles of diversity plus defence in depth to provide insurance against the unexpected.
Erring on the side of safety to account for uncertainty is very common. Indeed risk assessments are very often deliberately designed to be over-prudent. Rather than attempting to produce an explicit quantification of uncertainty, overcautious risk estimates are adopted that minimise the likelihood that additional information will reveal that they were in fact too low. Such risk assessments are in effect upper-bound estimates of risk. In other words, if more accurate risk assessments were available, they would very likely be much lower than the overcautious estimates and would be most unlikely to be any higher.
Adopting overcautious estimates has many drawbacks. It can produce results that are highly distorted, particularly if caution is introduced at more than one stage in the assessment. Each individual margin may appear sensible, but together they may have a multiplicative effect that produces results so grossly over-cautious as to be misleading. The approach can also assign incorrect priorities to risks and thereby distort the pattern of regulations. If different Departments, or different parts of the same Department, adopt different procedures and different degrees of caution, the net result will be that some risks are regulated too stringently and others too laxly.
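The multiplicative effect of stacked safety margins can be seen with invented numbers: three margins that each look modest on their own combine into a large overall factor.

```python
# Toy illustration only: the margins and the best-estimate risk are invented.
cautious_factors = [10, 3, 5]   # eg hypothetical margins for interspecies
                                # extrapolation, human variability, data gaps
best_estimate_risk = 1e-6

combined_margin = 1
for factor in cautious_factors:
    combined_margin *= factor

upper_bound_risk = best_estimate_risk * combined_margin
print(combined_margin)            # 150: far larger than any single margin
print(f"{upper_bound_risk:.1e}")  # 1.5e-04
```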
There is a strong (and controversial) school of thought that since decisions on the degree to which one should err on the side of safety are essentially policy judgements and not scientific ones, they should be introduced by decision-makers at the policy formulation stage and not by assessors undertaking risk assessment. However, this is not always easy. As has been pointed out before, risk assessment in practice is a mixture of science and policy. If it is the level of risk attached to a particular activity which is very uncertain, it might be reasonable for the judgement about the degree of caution necessary to err on the side of safety to be made by assessors. On the other hand, where the uncertainty is a matter of public or political acceptability, it may be a judgement which decision-makers wish to make for themselves.
As if estimating the risk associated with a particular hazard were not difficult enough, the concern which the public or pressure groups feel about the risk depends in practice upon many other factors as well. Social scientists have studied this phenomenon intensively. Fischhoff, Slovic and Lichtenstein2 have shown that people rate risks according to "how well the process (giving rise to the hazard) is understood, how equitably the danger is distributed and how well individuals can control their exposure and whether risk is assumed voluntarily". Their results are summarised in Figure 1. Risks in the upper right quadrant of the diagram are most likely to provoke calls for Government regulation; the opposite is true for risks in the bottom left quadrant.
More recently, Kasperson et al.3 have produced a model - which is still being verified - for explaining why minor risks (as defined by technical experts) at times produce such massive reactions while major risks are often ignored. Their social amplification of risk model suggests that the impact of a particular risk begins with the initial victims and diffuses outward to society at large. In that process, public response to the risk can be amplified or attenuated depending on how the risk interacts with psychological, social, cultural and institutional processes.
For example, awareness of the risk of air travel following an airline crash can be amplified by a large volume of information, scientific experts challenging one another, dramatisation of the issue and use of value-laden terminology and images by the media. This perception can then be further amplified or attenuated depending on its effects on the community and society as a whole.
When looking at what could cause harm and the potential severity of that harm, regulators have to examine whether the assessments should be confined to looking at those directly at risk (ie individual risk) or whether this should be extended to encompass an examination of the effects on society as a whole.
As a rule, regulators tend to focus on those directly at risk. But even this is not as straightforward as it seems. What is a typical individual? Not all individuals perceive risks in the same way. To any individual the most important question is: what is the risk to me and my family from the hazard under consideration? To answer this, regulators have to construct a hypothetical type of individual, or hypothetical person, who is in some fixed relation to the hazard, eg the person most exposed to it, or a person living at some fixed point or with some assumed pattern of life. Other individuals can then see how their circumstances relate to this hypothetical person and judge whether they or their family incur a greater or smaller risk.
As such, hypothetical persons may be regarded as persons (including groups) assumed to have homogeneous characteristics who are deliberately posited for the purpose of risk assessment. They may for example be persons hypothetically constantly present at the perimeter fence of a nuclear power station, or a group of workers presumed to be exposed to a given risk for exactly forty hours per week etc. Very often, for the reasons explained above, control measures or standards are determined in relation to hypothetical persons.
However, for certain hazards, regulators may, in addition to looking at those directly affected by the risks, also consider the wider social impact of the risks. One frequent feature of such hazards is that the risks tend to be shifted around, so that some people bear more and others less of them. The benefits are also likely to be redistributed, not only between groups of people, but also through time so that less risk may be borne now, and more by some future generation. Typical examples include hazards associated with a nuclear power station at a particular site, chemicals affecting the ozone layer, or a dam. The redistribution and balancing of such major costs and benefits are of course a classic function of Government, subject to public discussion and debate.
All this shows clearly that the degree to which risks should be controlled is fundamentally a question of values and has strong political dimensions. Questions then arise regarding the criteria that should be adopted for making such judgements.
Different regulators have adopted different criteria: cost-benefit-based, equity-based, or a mixture of the two.
As mentioned above, cost-benefit analyses (CBAs) offer a framework for balancing the benefits of reducing risks against the costs incurred. Often this can be done without explicit valuation of the benefits, on the basis of common-sense judgement. But there are also many situations where the benefits of reducing risk need to be valued explicitly. This is far from easy: the health and safety of people and the protection of the environment are not things that are bought or sold, so it is difficult to use market transactions to gauge their value.
In some cases, though, their worth can be estimated indirectly. For example, by comparing property values, one might estimate how much people actually pay for cleaner air or attractive scenery. Or again, by looking at how much more people are prepared to pay for a car fitted with an automatic braking system than for one without it, one might gauge what people will pay for a marginal reduction in the risk of death. Similarly, by examining how much extra workers are paid for undertaking particularly dangerous jobs, one can estimate what monetary value people place on a marginal increase in the risk of death. Such results can then be extrapolated to derive a figure, known as the value of statistical life, for the benefit of averting a death.
But these techniques have often been found to fall short. It is often difficult to isolate from other factors how much extra is really being paid for a particular safety or environmental improvement. Moreover, one cannot always be confident that the people concerned know how much of an improvement in their safety or environment they are purchasing. They are, therefore, frequently supplemented by surveys where carefully devised questionnaires are used to gauge people's preferences. These too are not without controversy. Many believe that, at times, the survey responses have nothing to do with reality.
Despite these shortfalls, the above techniques - which are constantly being refined - have been used to put a monetary value on risks to human life and health. This value is not compensation for loss of life, since it is obviously impossible to compensate someone for the loss of their own life. It is simply the value to be attached to a small reduction in the risk faced by an unknown individual which, spread across a population, would be expected to avert one death. For example, a value of statistical life of £1m is just another way of saying that a reduction in risk of death of 1 in 100,000 per year has a value of £10 per year.
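The arithmetic behind that example is elementary and can be checked directly:

```python
# Check of the worked example in the text: a £1m value of statistical life
# implies a 1-in-100,000 annual risk reduction is worth £10 per person per year.
value_of_statistical_life = 1_000_000   # pounds
risk_reduction = 1 / 100_000            # reduction in annual risk of death

value_per_person_per_year = value_of_statistical_life * risk_reduction
print(value_per_person_per_year)        # 10.0
```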
The concept does make sense. It ties in well with the fact that in everyday life, people do accept additional risks of death, or other harm to themselves and their family, in return for financial benefits or other satisfactions - provided that the additional risks are kept as low as reasonably practicable. For example, most parents are aware that even the best maintained ride at fairgrounds does entail a very slight risk, but this does not prevent them taking their children to fairgrounds. The concept is also useful. Any health and safety regulation embodies valuations of risk reduction and placing a value on statistical life does provide a way of valuing the benefits of the regulation. The question is whether such valuations should be made explicit. Often this can improve the efficiency and consistency of regulation.
Finally, regulators must also acknowledge that there is an ethical dimension to the public's attitude to acceptance of risk. There are some people who judge that certain hazards should not be entertained at all, no matter how low the risks, eg in relation to nuclear power. Others feel that the use of risk assessment is morally wrong because, however small the risks may be, it legitimises the concept that it is acceptable for some people to be exposed to man-made risks, for the environment to be polluted or for experiments to be done on animals, so that others may benefit.
They are also sceptical about the meaningfulness of the low failure probabilities estimated from analysis, citing major incidents such as that at Three Mile Island in America and major accidents such as Chernobyl in the USSR to support their fears. This, of course, does not contradict the fact that in many such instances the probabilities are indeed low.
From a different perspective, there is a strong belief from some parts of industry and elsewhere that risk assessment may systematically overestimate risks and therefore cause undue alarm and despondency among the public, particularly those risk assessments representing worst case scenarios. On the other hand many pressure groups believe that risk assessment may often inherently underestimate the true magnitude of the problem, by ignoring, for example, salient factors such as synergies among exposures and vast variations in susceptibility among humans.
Perhaps all this is but a recognition that assessing risks is full of uncertainties; that the science underlying most risk assessment assumptions is often inconclusive or untestable. In short risk assessment in its present form can only be used to inform a decision. It should not be used blindly to dictate it.
Risk assessment provides a necessary structure for rationalising decisions on the risks that society is prepared to accept. Such acceptance has to take account of values established by political debate and public willingness to tolerate risks in return for benefits. However, for the tool to work, those who rely on it for informing their decisions have to understand its nature and its limitations; that it relates to hypothetical rather than real persons; and that the process is assumption and value-laden and subject to uncertainties which need to be exposed.
Above all, decision-makers must adopt criteria for establishing how far it is reasonable to go, having regard to established good practice and the impact of technological change, especially where the risk estimates have attendant uncertainties, and must be able to show, for all to see, how their decisions were reached.