Actuaries and Artificial Intelligence

There has been much discussion recently about the future of the profession. It is clear that if we are to have a future, we must have an identity; and that the description "insurance professionals" is far too restrictive for the activities in which actuaries are currently engaged, let alone those in which they may be engaged in the future. What really distinguishes us from other professions is our skill and expertise in analysing and reasoning about uncertainty. If the profession is to stay healthy it must be at the forefront of developments in this area.

Actuarial skills are founded on quantitative methods of analysing the effects of uncertainty, whether in the area of mortality, sickness, motor accidents, investment performance or any of the multitudinous areas in which actuaries work. Actuaries are, or should be, constantly looking both for new problems to which they can apply the techniques they have developed and for new techniques that they can apply to the problems they are currently addressing. Interestingly, there is another group of people who are also concerned with uncertainty: researchers in Artificial Intelligence (AI). In this article I shall discuss some of the problems with which AI researchers are concerned and the techniques they use to address them. The range of issues covered by AI is huge: my discussion will be confined to the areas in which I see the most potential for cross-fertilisation between the two fields.

Probabilistic approaches to heuristic reasoning

One of the main interests of AI researchers is the investigation of intelligence: they come up with theories of cognition and reasoning and then model them on computers. In order to do this they need to be able to represent knowledge and ideas, and to devise algorithms that manipulate them. In particular, they need to be able to represent heuristic reasoning, such as "if Tweety is a bird then Tweety can fly" or "if you've got a red rash and a fever then you've got measles." These are examples of heuristic reasoning because the conclusions, while reasonable, are not infallible: penguins and ostriches don't fly, and there are other diseases that produce a red rash and a fever. There are a number of different types of heuristic reasoning, and there is no universally effective technique. However, many of the techniques that are used are closely allied to traditional areas of actuarial expertise, especially probability theory.

A particularly active area is that of research into Bayesian inference methods. These methods are used to reason about partial beliefs under uncertainty. For example, if you know the probability of having a rash and fever given that you have measles, and the prior probabilities of having a rash, fever and measles, you can work out the probability of having measles given that you have a rash and a fever. Various techniques have been developed for handling uncertain evidence (you haven't taken your temperature so you are not sure whether you have a fever or not) and multiple causes (a red rash might also be caused by an allergy to shellfish), and efficient algorithms have been devised for evaluating long and complex chains of inference.
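
By way of illustration, here is a minimal sketch, in Python, of the calculation just described. The probabilities are invented for the purpose of the example; they are not medical data.

    # Bayes' theorem applied to the measles example. All numbers are
    # invented for illustration.

    p_measles = 0.001                 # prior probability of measles
    p_symptoms_given_measles = 0.9    # P(rash and fever | measles)
    p_symptoms = 0.05                 # prior probability of rash and fever

    # P(measles | symptoms)
    #   = P(symptoms | measles) * P(measles) / P(symptoms)
    p_measles_given_symptoms = p_symptoms_given_measles * p_measles / p_symptoms

    print(p_measles_given_symptoms)   # 0.018: the symptoms raise the
                                      # probability considerably, but measles
                                      # remains unlikely because it is so rare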

A related approach is Dempster-Shafer theory, which distinguishes between uncertainty and ignorance: for example, the difference between knowing the probability of a fair coin coming up heads and not knowing whether the coin is fair or not. Just as probabilities range between 0 and 1, so does the belief function. However, although P(Heads) = 1 - P(not Heads), the same relationship does not necessarily hold between the belief that the coin in question will come up heads, written Bel(Heads), and the belief that it will not. Scepticism as to the fairness of the coin is represented by setting Bel(Heads) = Bel(not Heads) = 0. There are methods of combining belief functions with probabilities, so that, for example, if you are 90% sure that the coin is fair, Bel(Heads) = Bel(not Heads) = 0.45. Dempster-Shafer theory can thus be interpreted as specifying a probability interval, in this case [0.45, 0.55]. The size of the interval indicates the reliance that can be placed on the estimate.
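
The coin example translates directly into a few lines of Python. This sketch covers only the simple discounting case described above; it does not implement Dempster's general rule of combination.

    # Belief interval for the coin: we are 90% sure the coin is fair,
    # and otherwise entirely ignorant.

    confidence_fair = 0.9                  # weight on the "fair coin" hypothesis

    # Under the fair-coin hypothesis P(Heads) = 0.5; the remaining 10%
    # of the belief mass is committed to neither outcome.
    bel_heads = confidence_fair * 0.5      # Bel(Heads) = 0.45
    bel_not_heads = confidence_fair * 0.5  # Bel(not Heads) = 0.45

    # The plausibility of heads is 1 - Bel(not Heads), giving an interval
    lower, upper = bel_heads, 1 - bel_not_heads
    print(lower, upper)                    # 0.45 0.55: the width reflects ignorance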

Another approach to probabilistic reasoning is the incidence calculus. Given upper and lower bounds on the probabilities of pieces of evidence, it formalises the derivation of upper and lower bounds on the inferences that can be drawn from them.
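
The incidence calculus proper manipulates sets of possible worlds rather than bare numbers; the sketch below conveys only the flavour of interval propagation, using the standard Fréchet bounds for a conjunction.

    # Given bounds on P(a) and P(b), derive bounds on P(a and b).
    # The Frechet inequalities hold whatever the dependence between
    # a and b, so the derived bounds are always sound.

    def conjunction_bounds(a_lo, a_hi, b_lo, b_hi):
        lo = max(0.0, a_lo + b_lo - 1.0)
        hi = min(a_hi, b_hi)
        return lo, hi

    # Example: P(rash) lies in [0.6, 0.8] and P(fever) in [0.7, 0.9]
    print(conjunction_bounds(0.6, 0.8, 0.7, 0.9))   # (0.3, 0.8)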

Other logics that are used for reasoning under uncertainty eschew probabilistic reasoning altogether. Instead of measuring in some way the likelihood of a particular fact being true, they use mechanisms that decide what conclusions to draw from the available information. For example, if you only know that Tweety is a bird, you would probably jump to the conclusion that Tweety can fly. If you then learn that Tweety is in fact a penguin, you would withdraw that conclusion. Logics that describe this type of reasoning are termed non-monotonic logics: the set of propositions that are believed to be true does not always increase when new evidence is brought to light.
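
A toy implementation makes the behaviour concrete. The following is a naive sketch in which a default applies unless an exception is known; it is not a faithful rendering of any of the formal non-monotonic logics.

    # Default reasoning, naively: birds fly unless known to be a
    # penguin or an ostrich. Note how a conclusion is withdrawn when
    # new evidence arrives, which no monotonic logic allows.

    def can_fly(facts):
        return ("bird" in facts
                and "penguin" not in facts
                and "ostrich" not in facts)

    facts = {"bird"}
    print(can_fly(facts))    # True: we jump to the conclusion

    facts.add("penguin")
    print(can_fly(facts))    # False: the conclusion is withdrawn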

Other researchers have concentrated on the investigation of qualitative methods for probabilistic reasoning about uncertainty. It is often argued that numerical probabilities are hard to come by, especially all the conditionals and priors that are required for Bayesian methods. Often, all that may be known is the direction in which one factor influences another and possibly a rough idea of the strength of the influence. Under these circumstances the use of quantitative methods is fraught with difficulties.
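
One such formalisation records only the sign of each influence, as in qualitative probabilistic networks. The sketch below shows how signs combine along a chain of influences and across parallel paths, with "?" marking an ambiguous result; the combination rules are the standard ones, but the example itself is invented.

    # Combining qualitative influences: only the sign of each
    # influence (+, - or ?) is known, not its magnitude.

    def chain(s1, s2):
        # Sign transmitted along a chain of influences a -> b -> c.
        if "?" in (s1, s2):
            return "?"
        return "+" if s1 == s2 else "-"

    def parallel(s1, s2):
        # Two parallel paths of influence: conflicting signs are ambiguous.
        return s1 if s1 == s2 else "?"

    # Smoking increases illness (+) and illness decreases survival (-),
    # so along the chain smoking decreases survival:
    print(chain("+", "-"))      # -
    # If two paths between the same factors disagree, nothing follows:
    print(parallel("+", "-"))   # ?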

Machine learning

Statistical techniques are becoming increasingly influential in the world of AI. They are especially prevalent in the field of machine learning. Very broadly speaking, the aim in machine learning is to generalise from data. For instance, you may want to predict the type of product that a potential customer might buy; if you have records of past sales to individual customers you may be able to spot relevant patterns that you can use.

One of the most publicised techniques in the whole of AI is the use of neural networks. Neural networks are often touted as a solution to the general AI problem (whatever that is) and are described as being based on the operation of the brain. There are indeed some similarities between neural networks and brain neurons but there are many more differences. A neural network consists of a number of nodes (often called units) connected by weighted links. The signals that each unit receives from its input links determine its activation level, which in turn determines the signal it sends via its output links. The signal along each link is adjusted according to the link's weight. A neural network is trained by adjusting the weights using a learning algorithm applied to a set of training examples for which the correct outputs are known. Once it has been trained, it can be used to find the correct outputs for new input data. The accuracy of a network's performance depends on the choice of an appropriate configuration and learning algorithm from the many possibilities.
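
A complete, if tiny, example may make this concrete. The sketch below trains a network with one hidden layer, by gradient descent, to compute the exclusive-or function; it is a minimal illustration, and any real application would require careful choices of architecture, learning rate and stopping criterion.

    # A minimal feedforward network learning XOR by gradient descent.

    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(3, 4))    # input (plus bias) -> hidden weights
    W2 = rng.normal(size=(5, 1))    # hidden (plus bias) -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def with_bias(a):
        return np.hstack([a, np.ones((a.shape[0], 1))])

    for _ in range(10000):
        # Forward pass: each unit's activation is a squashed weighted
        # sum of the signals arriving on its input links.
        xb = with_bias(X)
        h = sigmoid(xb @ W1)
        hb = with_bias(h)
        out = sigmoid(hb @ W2)

        # Backward pass: adjust the weights to reduce the squared error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T)[:, :-1] * h * (1 - h)
        W2 -= 0.5 * hb.T @ grad_out
        W1 -= 0.5 * xb.T @ grad_h

    print(out.round(2).ravel())     # should be close to [0, 1, 1, 0]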

Neural networks have proved to be highly effective in a number of applications, as well as well-nigh useless in others. Although training a neural network is often extremely computationally intensive, using a ready-trained network is not. Work on the mathematical analysis and characterisation of various topologies and learning algorithms continues apace.

Other machine learning techniques are also highly effective in some circumstances. It is possible to use training examples to learn rules that can be used to classify new data, to learn decision trees, or to learn various other explicit representations of the classification procedure. A variety of techniques drawn from several areas of statistics and information theory are used.
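
To give a flavour of these methods, the sketch below computes the information gain used by many decision-tree learners (Quinlan's ID3, for example) to choose an attribute to split on. The miniature data set is invented.

    # Choosing a split attribute by information gain.

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        n = len(labels)
        gain = entropy(labels)
        for value in set(row[attr] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
            gain -= (len(subset) / n) * entropy(subset)
        return gain

    rows = [{"age": "young", "smoker": "yes"},
            {"age": "young", "smoker": "no"},
            {"age": "old",   "smoker": "yes"},
            {"age": "old",   "smoker": "no"}]
    labels = ["claim", "no claim", "claim", "no claim"]

    for attr in ("age", "smoker"):
        print(attr, information_gain(rows, labels, attr))
    # smoker (gain 1.0) separates the classes perfectly; age (gain 0.0)
    # tells us nothing, so a tree learner would split on smoker first.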

Similar problems

So far in this article I have concentrated on brief descriptions of some of the techniques used by AI researchers in the hope that actuaries may find them interesting and possibly relevant to their own problems. Indeed, I know of ongoing research into the application of some of these techniques to actuarial problems: the use of neural networks in deriving risk premiums for motor insurance, for instance. However, communication is a two-way process and there are actuarial techniques that may prove useful for AI, too. For example, I am currently involved in the investigation of the use of credibility theory in robotics.

Imagine a robot operating in a dynamic world in which changes occur randomly and frequently; it cannot know what is going on and what changes are occurring without observing the world. Observation is expensive: it takes time and uses resources that may also be required for other purposes. The robot will want to look around (or whatever the relevant observation action is) more frequently the more dynamic the world or the more drastic the changes that occur. However, because the changes are random the number of changes occurring in a short period may not be a good measure of the overall level of change. There are obvious parallels to be drawn with insurance: the cost of observation is like a premium, the changes that occur are like claims and the interval between observations is like the policy period. Credibility theory is used to find an appropriate premium for a fixed policy period; we are in effect investigating its use in finding an appropriate policy period for a fixed premium.
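
The credibility formula at the heart of this is simple. The sketch below uses the classical Bühlmann form Z = n/(n + k) to blend observed experience with a prior estimate; the final step, turning the blended rate of change into an observation interval, is a purely hypothetical illustration rather than a description of the research itself.

    # Credibility weighting applied to the robot's problem (sketch).
    # Z = n / (n + k) is the standard Buhlmann credibility factor; the
    # observation-interval rule at the end is invented for illustration.

    def credibility_estimate(observed_rate, n, prior_rate, k):
        # n is the volume of experience; k controls how much experience
        # is needed before the data outweigh the prior.
        z = n / (n + k)
        return z * observed_rate + (1 - z) * prior_rate

    # Prior belief: about 2 relevant changes per minute. After only 5
    # observation periods we have seen a rate of 6, but with so little
    # data the estimate should not swing all the way to 6.
    rate = credibility_estimate(observed_rate=6.0, n=5, prior_rate=2.0, k=10)
    print(rate)          # 3.33...: a blend of data and prior

    # Hypothetical rule: observe often enough to expect at most one
    # change between observations.
    print(1.0 / rate)    # suggested minutes between observations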

It seems likely that there are many other problems in AI and other fields that are essentially similar to problems encountered in insurance and that might prove amenable to actuarial techniques. Actuaries should be involved in investigating and solving these problems.

Further reading

I am not aware of any comprehensive survey of AI written for the general reader. Probably the best (although very expensive) reference is the second edition of the Encyclopedia of Artificial Intelligence, edited by S Shapiro, published by John Wiley & Sons; an excellent introductory textbook is Artificial Intelligence: A Modern Approach, by S Russell and P Norvig, published by Prentice-Hall. On reasoning under uncertainty it is hard to beat Readings in Uncertain Reasoning, edited by G Shafer and J Pearl, published by Morgan Kaufmann.

The future

I hope I have succeeded in hinting at some of the connections between actuarial work and artificial intelligence. In my view, actuaries should be at the forefront of developments in reasoning under uncertainty. Although the emphases of actuaries and AI researchers are inevitably different, the two groups have many areas of interest in common and can, I believe, learn much from each other. The alert reader will have noticed that I have not mentioned ways in which actuaries can use AI in their work. This is not because there are no potential actuarial applications of AI, but because they deserve a whole article to themselves.


Louise Pryor <louisep@aisb.ed.ac.uk>