In my last article I argued that there are many common interests between actuaries and researchers into Artificial Intelligence (AI). However, I omitted possibly the most significant common interest of all: how AI researchers can help actuaries in their work. In this article I consider the possibility of building AI tools for actuaries. The tools I shall describe have not, as far as I am aware, been built; they are potential tools that I believe to be feasible. All of them would involve some original research on the AI side. They should be seen as examples arising out of my own experience of the possibilities that there are; I'm sure many readers will be able to come up with other ideas of their own in different areas of actuarial work.
Before I describe the proposed tools, I think it is important to discuss their role. I do not believe that AI will replace actuaries (or indeed any other skilled professionals) in the foreseeable future. People are just too intelligent and too good at what they do. AI tools are only effective at simplified versions of the tasks that humans perform: a medical diagnosis system, for example, relies on someone to type the information in, thus avoiding all the complexities of deciding on the exact severity of the symptoms and so on. One of the big challenges in AI is how to express subtle concepts so that fine distinctions can be drawn. Although progress has been made and is being made in this direction, it is not a trivial task. When you analyse how people make judgements, it turns out that they use a lot of background information and so-called common sense. Moreover, a very broad range of common sense information is used, increasing the size of the problem faced by the builder of an AI system.
To my mind, AI tools can be useful in taking some of the drudgery out of tasks and in concentrating the actuary's attention on the really interesting borderline issues. It is much easier to build a system that can cope with the easy cases and point out the borderline ones than it is to build one that can handle all the cases. An actuary using such a tool would be able to spend more time on the difficult decisions, thus making the most of their expertise.
Many AI tools are designed so that they can give explanations of the reasoning they use in performing their task. If the actuary using a tool disagrees with its answer, they can find out the reasons behind the disagreement. This might prompt them to re-evaluate their own decision, for example by reminding them of a factor that they had overlooked, or it might point out a weakness in the AI system. Good AI tools are built in such a way that they can easily be modified to take advantage of new information about how to perform the task at hand. Following on from this, AI systems have great potential as educational tools to facilitate learning on the job. A student actuary using an AI tool with a good explanation facility effectively has an expert actuary looking over their shoulder all the time, offering advice on what to do and explaining the reasons behind that advice.
Imagine, if you will, that you are an actuarial student working for one of the big consultancy firms in their life insurance division. You spend much of your time building financial models of life offices. Part of this task involves choosing a set of model points (product/age/term combinations) that, taken together, will represent the whole life office and profit-testing each model point to produce an individual financial profile for it. As you have limited experience, you find both of these processes challenging. You might, for example, run a particular profit test and think that it looks all right, but when you take it to the senior actuary on the project they run through the printout at high speed, point at a figure somewhere in the middle and say "That doesn't look right." "I wish I could do that," you think.
This scenario provides opportunities for two linked AI tools: one to help select model points; and one to help check the reasonableness of profit-test output.
When selecting model points, there are a number of factors that must be taken into account. The more model points you have, the more time-consuming and complex the calculations that must be performed, but the more accurate the model will be. In particular, you must trade off the accuracy gained by modelling a very small line of business against the overall error caused by not doing so, in the context of the likely errors in the model as a whole. It may be worth modelling a line of business with a very small number of policies separately if there is something very unusual about it that has significant financial effects. Within a line of business, how many age/term combinations should be modelled? The distribution of the policies is obviously significant here, as are the financial characteristics of the product in question. As in so many other areas, some decisions will be obvious and others will be borderline. A system that could spot the obvious decisions and indicate factors that should be taken into account in the borderline cases would save time for actuaries at all levels.
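The screening such a system might perform can be sketched in a few lines of code. Everything here is illustrative: the field names, the banding widths and the materiality thresholds are assumptions made for the example, not established practice.

```python
from collections import defaultdict

def select_model_points(policies, age_band=10, term_band=5,
                        auto_threshold=0.05, borderline_threshold=0.01):
    """Group policies into candidate model-point cells and classify each.

    `policies` is a list of dicts with keys 'product', 'age', 'term' and
    'sum_assured' (field names are illustrative). Cells holding at least
    `auto_threshold` of total sum assured are clearly worth modelling;
    those between the thresholds are referred to the actuary.
    """
    cells = defaultdict(float)
    total = 0.0
    for p in policies:
        key = (p["product"],
               p["age"] // age_band * age_band,    # crude age banding
               p["term"] // term_band * term_band)  # crude term banding
        cells[key] += p["sum_assured"]
        total += p["sum_assured"]

    decisions = {}
    for key, amount in cells.items():
        share = amount / total
        if share >= auto_threshold:
            decisions[key] = "model"        # obvious: material cell
        elif share >= borderline_threshold:
            decisions[key] = "borderline"   # refer to the actuary
        else:
            decisions[key] = "merge"        # fold into a nearby cell
    return decisions
```

A real system would of course weigh financial characteristics as well as volume, but even this crude split by materiality illustrates the obvious/borderline distinction.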
Checking the reasonableness of profit-test output is probably a more difficult task for which to build an AI tool, but it would certainly be possible at some level. One approach would be to reverse-engineer the results: construct a description of the product from the profit test output. The actuary checking the profit test could then compare this description with their own understanding of the product. Any discrepancies should raise questions in the actuary's mind. The greater the level of detail in the descriptions the more helpful they would be, but the more difficult the system would be to build. The system would also indicate when there was some doubt as to the appropriate description: this information too would help the checking process.
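As a toy instance of this reverse-engineering idea, a system might infer a description of the premium pattern from the projected premium cashflows in the output. The function and tolerance below are my own assumptions for illustration:

```python
def describe_premiums(premiums, tol=1e-6):
    """Infer a simple description of the premium pattern from a list of
    projected premium cashflows -- one tiny piece of the product
    description a reverse-engineering tool might build up."""
    diffs = [b - a for a, b in zip(premiums, premiums[1:])]
    if all(abs(d) <= tol for d in diffs):
        return "level premiums"
    if all(d > tol for d in diffs):
        return "increasing premiums"
    if all(d < -tol for d in diffs):
        return "decreasing premiums"
    # Mixed signs: flag the doubt rather than guess a description
    return "irregular premium pattern (check against product design)"
```

The final branch shows the point made above: where the description is in doubt, saying so is itself useful to the checking actuary.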
Possibly the most common type of AI system is based on rules: for example, if you have a red rash then you have measles. A system of this type has many such rules, which between them encode all the reasoning that the system performs. These rule-based systems work well in many areas, but are difficult to build for problem areas in which the experts find it hard to come up with generally applicable rules. Although it is often claimed that experts themselves base their reasoning on rules, there is evidence that this is not always so. Instead, experts use their memories of past problems that they have encountered; rather than working out an answer from first principles, they compare the current problem to similar ones that they have solved successfully, and adapt the old answer to the new situation. This mode of reasoning has inspired the development of case-based AI systems, which have a library of past problems with their solutions, together with methods of similarity measurement and adaptation techniques.
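A minimal rule-based system of the kind described can be sketched as a forward-chaining loop over condition/conclusion pairs; recording which conditions fired also provides the explanation facility mentioned earlier. The rules shown are toy examples:

```python
def forward_chain(facts, rules):
    """Minimal forward chaining: repeatedly fire any rule whose
    conditions are all satisfied, recording which conditions produced
    each conclusion so the system can explain its reasoning."""
    facts = set(facts)
    explanations = {}
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                explanations[conclusion] = conditions
                changed = True
    return facts, explanations

rules = [
    ({"red rash"}, "measles"),              # the toy rule from the text
    ({"measles", "child"}, "notify school"),  # rules can chain together
]
facts, why = forward_chain({"red rash", "child"}, rules)
```

After the call, `facts` contains the derived conclusions and `why` maps each conclusion back to the conditions that justified it.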
Estimating individual case reserves in some areas of general insurance is a notoriously difficult problem, and is viewed as something of a black art by outsiders. There appear to be few universally applicable principles, thus making it difficult to build a conventional rule-based system. However, an interactive case-based AI tool could be highly effective. It would assist the reserver by looking for and drawing attention to similarities between the case for which a reserve is being set and past cases. It could also point out significant differences, and suggest how these similarities and differences would affect the reserve to be set. It would thus act as an extended memory for the reserver, giving access to a far wider selection of cases than had been encountered in person.
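The retrieve-and-adapt cycle of such a case-based tool might look like this in outline. The case features, the similarity weights and the crude averaging adaptation are all illustrative assumptions; a real reserving system would need far richer case descriptions and adaptation rules:

```python
def retrieve_similar(new_case, case_library, weights, k=3):
    """Rank past cases by weighted feature similarity to the new claim
    and return the k closest matches."""
    def similarity(a, b):
        return sum(w for feature, w in weights.items()
                   if a.get(feature) == b.get(feature))
    ranked = sorted(case_library,
                    key=lambda c: similarity(new_case, c), reverse=True)
    return ranked[:k]

def suggest_reserve(new_case, case_library, weights, k=3):
    """Adapt an answer from the retrieved cases -- here just an average
    of their reserves, a crude stand-in for real adaptation."""
    similar = retrieve_similar(new_case, case_library, weights, k)
    return sum(c["reserve"] for c in similar) / len(similar)
```

An interactive version would display the retrieved cases and their differences rather than simply emitting a figure, leaving the final judgement with the reserver.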
Some actuaries spend much of their time performing complex analyses of large data sets. The operations to be performed on the data are often predictable in outline but the details, and sometimes even the course the analysis takes, depend on the results that appear as the earlier steps are performed. An obvious example of this type of work is the setting of loss reserves. There are many techniques, both sophisticated and simple, that can be used. The results from these techniques may be ignored, combined with other results or used on their own depending on a number of factors. The lines of business may be analysed separately or grouped together in various combinations. The effects of reinsurance must be allowed for, as must individual large losses.
An actuary who is setting loss reserves usually has an overall plan of the analyses to be performed. It may be fairly detailed, if this is a body of business that the actuary knows well and in which no surprises are expected, or it may be very sketchy, if the actuary has only the vaguest notion of its characteristics. In either case, there are some details that can only be fleshed out as the analysis is performed; and it is always possible that unforeseen results will mean the abandonment of the actuary's initial ideas as to the form the analysis will take. There is a growing body of AI research into the problems of combining predetermined, planned behaviour with appropriate responses to unforeseen circumstances, and of effective methods of planning in advance when the situations that will be encountered cannot be predicted in any detail. It would be possible to use this research to build an interactive tool to help an actuary through the loss reserving process. The tool would in effect sit on top of the software usually used in loss reserving, and would supply suggestions as to possible analyses to perform and results to look out for. It could be designed to be as interactive as desired, ranging all the way from asking the actuary what to do at every stage to asking only in borderline cases. A tool like this would use the knowledge of experts in the field of loss reserving, and if provided with a good explanation facility would not only assist less experienced actuaries in their tasks but also help them to learn from those with more experience.
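The control loop of such an interactive tool can be sketched very simply: follow the plan, and hand control to the actuary only when a step's reasonableness check flags its result. The step structure and the check functions are assumptions made for the example:

```python
def run_analysis(steps, ask_actuary):
    """Walk through a reserving plan step by step. Each step is a
    (name, compute, is_borderline) triple: `compute` sees the results
    so far, and `is_borderline` decides whether to pause and hand the
    figure to the actuary before recording it."""
    results = {}
    for name, compute, is_borderline in steps:
        result = compute(results)
        if is_borderline(result):
            result = ask_actuary(name, result)  # interactive override
        results[name] = result
    return results
```

Varying how often `is_borderline` returns true is exactly the interactivity dial described above, from consulting the actuary at every stage down to only the doubtful cases.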
In general, AI techniques can help actuaries (as in other professions) by assisting in the spread of expertise through the organisation. A good AI tool can reduce reliance on a single person who is the acknowledged expert in a particular field. However, such tools should be used with caution: like any other system (or indeed person), they are not foolproof and their results should be checked before undue reliance is placed on them. Although I have drawn my examples from only a narrow range of the tasks undertaken by actuaries, many actuarial tasks share significant characteristics and could benefit from the development of AI tools such as those I have suggested.