
Argumentation schemes for clinical decision support

Abstract

This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.

1. Introduction

1.1. Motivation

The process of deciding what is the best treatment for a patient involves integrating data from multiple data sources with one or more clinical guidelines. This is already a difficult task for hard-pressed clinicians to accomplish in the short time that they are allotted per patient, and it is only getting harder as resources are further squeezed. There are ever increasing numbers of people living with multiple chronic conditions, with treatment governed by different, sometimes conflicting, guidelines. Moreover, there is a proliferation of additional relevant data (e.g., from well-being sensors). It is thus ever more important to efficiently provide personalised advice, such as advice on medication or lifestyle changes, in ways that take proper account of all the information. Furthermore, it is also necessary to ensure consistency in the interpretation of patient data and guidelines [8]. The use of decision support tools provides one way to integrate these disparate sources of information in a consistent way.

The suggestion that decision support systems can help in a clinical setting is not new [16]. There is evidence that the use of decision support improves GP performance [30], and yet adoption is limited [50]. A survey [22] that aims to outline the reasons for the lack of adoption of clinical decision support systems (CDS) in clinical practice lists, among other reasons, inadequate patient specificity, particularly in multi-morbidity situations, and the difficulties in sharing knowledge between different systems. In [39], the authors note the lack of integration into clinical work as a barrier to adoption of CDS in general practice. In the UK, the National Health Service (NHS) standard length for a consultation is 10 minutes. Feedback reported through focus groups that we held with patients, carers and clinicians to discuss barriers to the adoption of CDS [40] included the need to condense all the different sources of data into meaningful summaries and personalised recommendations. The concern raised was that there is not enough time within a 10 minute conversation to make sense of the data and its implications for treatment, and to discuss options effectively. We interpret this as saying that CDS need to provide more support for integrating data from different sources, for understanding treatment decisions, and for presenting alternative treatment options and the trade-offs between them. In particular, CDS need the ability to provide justifications for any recommendations made [22, 39].

Argumentation for medical decision making has a long and established history, for example [15, 18]. There are several reasons why argumentation is helpful in this context. First, it provides an abstract formalisation of logic-based reasoning over conflicting information. Second, the exchange of arguments enables one to rationally deliberate and critique possible actions (or treatments) and to reason over different pieces of information in a modular manner. The advantage of modularising such reasoning in the form of interacting arguments is that one can readily integrate information from independent sources [21, 52]. Third, argumentation-based formalisations of logical reasoning appeal to principles familiar in everyday reasoning and debate, and so render the reasoning process more readily understandable. This in turn means that the integration of human input into the reasoning process is more readily facilitated, and the resulting audit trails of exchanged arguments provide evidence of due diligence with regard to the decisions made. Finally, argumentation can also accommodate reasoning with and about possibly conflicting preferences (e.g., over arguments supporting alternative treatments) that may arise from different knowledge sources and the perspectives of different clinicians.

The advantages of argumentation-based approaches have never been more relevant given multiple co-morbidities and hence possibly conflicting guidelines, requiring explanations of decisions that can be readily communicated to physicians, GPs, and patients, and moreover requiring input from physicians/GPs as well as patients in order to account for their preferences/values. Indeed, such requirements are particularly pressing given the increasing use of black box machine learning approaches whose decisions need to be rendered explainable and parameterised by human inputs. Furthermore, the models upon which the decisions are made need to be trustworthy, ensure transparency and provide evidence of due diligence [46]. There is also scope for argumentation-based approaches in reasoning with conflicts between different guidelines and in incorporating preferences/values as part of the design [9]; however, these are not in scope for this paper.

1.2. Approach

The approach we present in this paper builds upon Walton’s work on argumentation schemes (AS) [57]. Argumentation schemes can be used as inference licences, as dialogical devices, or, as in our case, they can be employed as a way of expressing reasoning templates [42]. In other words, they are a mechanism that supports the generation of arguments that align with common patterns of reasoning. For example, the scheme “Argument from Expert Opinion” [56] captures the stereotypical form of argument whereby if someone is an expert in some domain, and they make an assertion in that domain, then one might reasonably assume that the assertion is true. Defining argumentation schemes is thus a form of knowledge elicitation, specific to the use of argumentation. Moreover, since computational argumentation is concerned with the evaluation of arguments and counter-arguments for candidate options, argument schemes are associated with scheme specific critical questions (CQs). While argumentation schemes effectively provide templates for capturing the rationale for some claim, the scheme’s critical questions help identify possible counter-arguments and elicit further rationales (i.e. arguments) supporting the scheme’s premises. For example, critical questions associated with the ‘Argument from Expert Opinion’ scheme include querying whether the person in question is indeed an expert, and whether other experts agree on the assertion being made [56]. Posing the first question may involve constructing a (counter) argument claiming that the person is not an expert, or placing the burden of proof on the agent constructing the argument from expert opinion to provide some further argument justifying the claim that the person in question is indeed an expert. A practical clinical application of argumentation schemes is studied in [53].

Argumentation schemes provide generic templates for constructing arguments, via the provision of concrete values for the variables in these templates; a process referred to as instantiation of the scheme. As outlined above, the CQs associated with each scheme can be used to identify possible counter-arguments or to elicit further rationales for the premises of an argument. In this paper we focus on the former: the instantiation of a CQ generates a counter-argument to the source argument obtained by instantiating the scheme. Such a counter-argument can challenge either a premise or the conclusion of the source argument.

The advantage of using argumentation schemes and critical questions is that they provide a bridging formalism between data and knowledge in logic-based or relational languages and human understandable representations of dialectical reasoning. They also motivate exploration of the full space of dialectical reasoning w.r.t. a topic, so fulfilling the due diligence requirement, which is especially important in safety critical decision support such as clinical decision support. The use of schemes and CQs has been explored for clinical decision making [4, 21, 53], but practical deployment suggests that existing schemes are too generic for use by clinicians [53]. A response, explored in [53], is to propose domain specific specialisations of generic schemes and CQs, so that they can effectively be implemented to leverage the above mentioned advantages for medical decision making.

In this paper, we are similarly motivated to develop specialised schemes and CQs for deliberation over medical treatment proposals, and moreover to operationalise their use in the ways described above. To be clear, this paper presents the first step in a more ambitious long term program of fully mapping out and operationalising reasoning about medical treatments in the form of specialised schemes and CQs.

1.3. Contribution

There has been previous work on the application of argumentation schemes to various clinical and decision support situations. For example, specialisation of schemes for decision support within the clinical domain was outlined in [53] in the context of organ donation. However, the specialisation of argumentation schemes to the process of supporting decisions on treatments using clinical guidelines is novel. The work outlined in this paper builds on previous work that introduced the specialised scheme ASPT [28]. Our proposed specialisation of the schemes enables us to differentiate between different clinical rationales that would preclude an action, for example, the difference between a contraindication due to demographics (e.g. ethnicity or age) as opposed to one due to a past bad reaction.

Such argumentation schemes can be used by a decision support system or learning health system to pre-process and narrow down the treatment options for a given patient. The advantage of using computational argumentation for this is that a rationale can be provided both for recommending that a specific treatment be considered and for recommending that it not be. The schemes identified here are a comprehensive set and can be directly implemented to create a clinical decision support system that integrates the relevant guidelines and patient observations. Such a system has the potential to improve patient consultations by allowing more time to be spent in discussing treatment options, as opposed to finding them [41].

The paper also articulates an approach to operationalise the specialised schemes, and illustrates the application of the schemes using case studies related to hypertension treatment as part of secondary stroke prevention [25, 26]. We also share details of our implementation which includes the formalisation of the clinical guideline for hypertension, and demonstrates the applicability of the argumentation schemes that we propose.

1.4. Methodology

Finally, a few words about how we arrived at the argument schemes and the implementation presented here. The development of the argumentation schemes was carried out through knowledge elicitation from a general practitioner (GP).

This was time-consuming, as is often the case with knowledge elicitation processes. However, there are two aspects that suggest that building systems that make use of argumentation schemes is not as gravely affected by the “knowledge acquisition bottleneck” as the construction of other systems is. First, because argumentation schemes are, as the name suggests, schemas for generating arguments, they formalise knowledge at a more abstract level than the rules or Bayesian network relations that are typically yielded by knowledge elicitation. Hence, schemes and their associated critical questions are modular and reusable representations of reasoning patterns, playing a similar role in the structuring of inference mechanisms as ontologies do in the structuring of symbolic knowledge bases. Thus, we anticipate that future work on argumentation scheme-based clinical decision making will be able to reuse much of what we present here. Second, the domain knowledge that is used along with the argumentation schemes is derived from clinical guidelines. These guidelines are already highly structured representations, distilled from clinical knowledge and medical evidence by experts. Such information can then be represented in a formal language that supports automated reasoning, as we demonstrate in this paper.

A final issue is the validation and evaluation of the schemes. Validation was performed in the context of a set of case studies relating to the management of hypertension.¹ We are in the process of carrying out this evaluation exercise. We are developing a new set of anonymised case studies for presentation to a panel of GPs and will then compare the outcomes reached by the argument schemes and those reached by the GPs. The same study will also evaluate whether the use of argumentation schemes in itself promotes trust and encourages usability in our target user base of clinicians.

2. Background

The relevant background for this paper covers clinical decision making, specifically for decisions on treating hypertension. We also provide background on Argumentation Schemes, a short introduction to computational argumentation, and a brief discussion of reasoning with the schemes in order to support recommendations.

2.1. Clinical background

When a clinician deliberates about what treatment to prescribe in order to attain a specific goal for a patient, they rely on clinical guidelines. For example, in the UK, NICE (the National Institute for Health and Care Excellence) issues guidelines, distilled from medical evidence collected in the literature, that are intended to guide clinicians’ decisions and provide standardised approaches to handling conditions. For instance, there is a NICE guideline for reducing a patient’s blood pressure (BP) when they are suffering from hypertension.² Guidelines also cover the diagnostics and monitoring of conditions, but in this paper we focus on treatment aspects, and so assume that diagnostic and monitoring aspects are complied with. Depending on the specific patient circumstances (or facts), the hypertension guidelines may not recommend a single treatment option; in some cases multiple options will be available. There are often situations where known information or characteristics of the patient will either preclude or favour one treatment over another. The decision as to which to prescribe is then made by the GP, in consultation with the patient.

The process of narrowing down and deciding on a specific treatment involves an initial generic inference that in effect provides a starting list of treatment options, given the patient’s goal. This list then needs to be challenged by ascertaining that there are no contraindications for a specific treatment currently in the options list. Another consideration is whether there is treatment-specific experience from the patient’s clinical record that may impact its suitability. The latter could be a previous allergic reaction, a previous side effect or previous ineffectiveness. Obviously the last consideration is relevant only if the patient has direct experience of the treatment under consideration.

The process of determining the possible treatment options involves querying the clinical guidelines and asking questions to determine whether the proposed treatment is suitable for the given clinical objective, and for the specific patient. The questions include:

  • Which treatments in the guidelines achieve the clinical goal?

  • Are there any contraindications given the patient’s situation (facts)?

  • Is the patient allergic to, or has the patient suffered from side effects from, any of the treatments being considered?

  • Has the patient taken any of these treatments in the past and observed no benefit?

  • What are the alternatives and what are the advantages or disadvantages of each?

It is desirable that the specialisation of the argumentation schemes and associated critical questions address these questions.
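To make this narrowing-down process concrete, the following is a minimal sketch of how the questions above might be posed against guideline and patient data. The data structures and field names here are our own hypothetical illustrations, not a format used by any actual guideline or system.

```python
# Hypothetical sketch of narrowing down treatment options by posing the
# questions listed above. The guideline/patient structures are illustrative.

def candidate_treatments(guidelines, goal, patient):
    """Return names of treatments that achieve the goal and survive each challenge."""
    # Q1: which treatments in the guidelines achieve the clinical goal?
    options = [t for t in guidelines if goal in t["achieves"]]
    survivors = []
    for t in options:
        # Q2: any contraindications given the patient's facts?
        if t["contraindications"] & patient["facts"]:
            continue
        # Q3: allergy to, or past side effect from, this treatment?
        if t["name"] in patient["allergies_or_side_effects"]:
            continue
        # Q4: tried in the past with no observed benefit?
        if t["name"] in patient["tried_without_benefit"]:
            continue
        survivors.append(t["name"])
    # Q5: the surviving alternatives are then compared by the clinician
    return survivors

guidelines = [
    {"name": "drug_A", "achieves": {"lower_bp"}, "contraindications": {"pregnancy"}},
    {"name": "drug_B", "achieves": {"lower_bp"}, "contraindications": set()},
]
patient = {
    "facts": {"pregnancy"},
    "allergies_or_side_effects": set(),
    "tried_without_benefit": set(),
}
print(candidate_treatments(guidelines, "lower_bp", patient))  # ['drug_B']
```

In the approach developed in this paper, each of these filtering steps instead becomes an argument or counter-argument, so that the rationale for excluding an option is preserved rather than silently discarded.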

2.2. Argumentation schemes

The idea of an argumentation (‘argument’) scheme (AS) originally emerged from the domain of informal logic, in particular from the work of Doug Walton [57, 58]. An AS is a semi-formal reasoning template that matches common reasoning patterns in real life. More formally [26], an AS = ⟨S, c, V⟩ comprises a set of support premises S which support the conclusion c, where the variables V in the AS appear in the premises (S.V) or the conclusion (c.V). These variables can then be instantiated by ground terms to yield a specific argument. Consider the “sufficient condition” version of the Argument Scheme for Practical Reasoning, from [58]:

premise – a has a goal G
premise – Carrying out action A is sufficient for a to realise G
therefore: a should carry out action A

We might instantiate G with the goal “get from London to Edinburgh” and a with the individual “Amy”. In our example the support premises are: “Getting to Edinburgh” and “Taking a train is sufficient for Amy to get to Edinburgh”. The conclusion of the scheme is that “Amy should take the train”.
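The instantiation step can be pictured as filling in a template. The sketch below is a minimal illustration under our own assumptions: the scheme text and variable names paraphrase the example above and are not taken from a formal specification.

```python
# Illustrative sketch: an argumentation scheme as a template whose variables
# are replaced by ground terms to yield a concrete argument. The scheme text
# paraphrases the sufficient condition scheme discussed above.

SUFFICIENT_CONDITION_SCHEME = {
    "premises": [
        "{a} has the goal: {G}",
        "{action} is sufficient for {a} to realise: {G}",
    ],
    "conclusion": "{a} should carry out: {action}",
}

def instantiate(scheme, bindings):
    """Substitute ground terms for the scheme's variables."""
    return {
        "premises": [p.format(**bindings) for p in scheme["premises"]],
        "conclusion": scheme["conclusion"].format(**bindings),
    }

arg = instantiate(
    SUFFICIENT_CONDITION_SCHEME,
    {"a": "Amy", "G": "get from London to Edinburgh", "action": "taking a train"},
)
print(arg["conclusion"])  # Amy should carry out: taking a train
```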

One of the key features of argumentation schemes is the list of associated critical questions (CQs). The premises and the claim (or conclusion) supported by the scheme are presumptive, and a premise or claim is withdrawn unless the CQs posed have been answered successfully. In general, posing CQs can also be used as pointers to counter-arguments that themselves instantiate argument schemes. In this paper, we adopt this latter use, and thus represent the CQs as argument schemes that can be used to construct counter-arguments. Of course, these argument schemes have their own critical questions, and in this way the instantiation of schemes yields graphs of attacking arguments. A similar use of critical questions was formalised in the modelling of legal reasoning with preferences in [44]. Whenever CQs are posed, they can be answered in two different ways. First, a knowledge base can be used to generate arguments automatically. Second, a human can construct arguments. We will assume both options in this paper. The instantiation of the appropriate argumentation scheme and its associated CQs is a way of generating arguments that can then be evaluated when organised into an argumentation framework, which we introduce in Section 2.3.

The argument schemes we propose as good candidates to use as a basis for the clinical specialisation in support of reasoning about clinical treatment options are: the Argument Scheme for Practical Reasoning (ASPR) in Table 1 [3] (this is an extended version of the scheme introduced above), the Argument Scheme from Negative Consequences (ASNC) in Table 2 [57], the Abductive Argument Scheme for Argument from Effect to Cause (ASEtoC) in Table 3 [57], the Argument Scheme from Analogy (ASA) in Table 4 [57] and the Argument Scheme from Danger Appeal (ASDA) in Table 5 [57]. ASPR is a general scheme that can assist in reasoning about what actions to take in given circumstances [3, 44], and it is more comprehensive than the version introduced in our simple travel example. In the clinical setting, the actions being considered are possible treatments. ASNC can help when reasoning about side effects, and in turn ASEtoC can assist in reasoning about whether a side effect can indeed be attributed to the treatment being considered. ASDA can be used to generate arguments related to contraindications, when a particular treatment is known to be harmful in patients with specific characteristics. When specialising the schemes, we make a distinction between a side effect and a contraindication. The former is experienced after trying a treatment, and may not rule out the use of a treatment. The latter is determined once the patient’s details are known and will rule out the use of the treatment considered. In the case of a contraindication, an alternative will need to be considered. ASA can support reasoning about whether a treatment has achieved the desired goal in past similar situations on the same patient.

Table 1

Argument scheme for practical reasoning [3], and its associated critical questions

ASPR
premise – In the current circumstances R
premise – We should perform action A
premise – Which will result in new circumstances S
premise – Which will realise Goal G
premise – Which will promote some value V
therefore: We should perform action A
Critical questions
CQ1: Are the believed circumstances true?
CQ2: Assuming the circumstances, does the action have the stated consequences?
CQ3: Assuming the circumstances and that the action has the stated consequences, will the action bring about the desired goal?
CQ4: Does the goal realise the value stated?
CQ5: Are there alternative ways of realising the same consequences?
CQ6: Are there alternative ways of realising the same goal?
CQ7: Are there alternative ways of promoting the same value?
CQ8: Does doing the action have a side effect which demotes the value?
CQ9: Does doing the action have a side effect which demotes some other value?
CQ10: Does doing the action promote some other value?
CQ11: Does doing the action preclude some other action which would promote some other value?
CQ12: Are the circumstances as described possible?
CQ13: Is the action possible?
CQ14: Are the consequences as described possible?
CQ15: Can the desired goal be realised?
CQ16: Is the value indeed a legitimate value?
Table 2

Argument scheme from negative consequences, based on the argument scheme from consequences [57], and its associated critical questions

ASNC
premise – if I brings about A, then B will occur.
premise – B is a bad outcome (vs my stated goals).
therefore: I should not bring about A.
Critical questions
CQ1: How strong is the likelihood that the cited consequences will (may, must) occur?
CQ2: What evidence supports the claim that the cited consequences will (may, must) occur, is it sufficient to support the strength of the claim adequately?
CQ3: Are there other opposite consequences (bad as opposed to good, for example) that should be taken into account?

ASPR (as outlined in Table 1) has 16 critical questions associated with it. The purpose of these CQs is to identify the ways in which the premises may be challenged, and arguments grounding negative answers to the CQs can be seen as attacks on the original argument. For example, if ASPR is used to reason about action when a person has a temperature (fever), then CQ1 queries the presence of the fever. A negative response would invalidate the argument to take a treatment to lower temperature, as the circumstances are not true. In previous work [28], we introduced a specialisation of this scheme, the Argument Scheme for Possible Treatment (ASPT), and alongside it we gave applicable critical questions in the context of clinical treatments. We will not revisit the creation of this scheme as a specialisation of ASPR; we will leverage the reduced list of applicable critical questions (relative to those for ASPR) as defined in [28].

Table 3

Abductive argument scheme from effect to cause [55] and its associated critical questions

ASEtoC
premise – F is a finding or a given set of facts in the form of some event that has occurred
premise – E is a satisfactory causal explanation of F
premise – no alternative causal explanation E′ given so far is as satisfactory as E
therefore: E is plausible, as a hypothesis
Critical questions
CQ1: How satisfactory is E as an explanation of F, apart from the alternative explanations available so far in the dialogue?
CQ2: How much better an explanation is E than the alternative explanations available so far in the dialogue?
CQ3: How far has the dialogue progressed? If the dialogue is an inquiry, how thorough has the investigation of the case been?
CQ4: Would it be better to continue the dialogue further, instead of drawing a conclusion at this point?
Table 4

Argument scheme from analogy [57]

ASA
premise – Generally, case C1 is similar to case C2
premise – Proposition A is true (false) in case C1
therefore: Proposition A is true (false) in case C2
Critical questions
CQ1: Is A true (false) in C1?
CQ2: Are C1 and C2 similar in the respects cited?
CQ3: Are there important differences (dissimilarities) between C1 and C2?
CQ4: Is there some other case C3 that is also similar to C1 except that A is false (true) in C3?
Table 5

Argument scheme from danger appeal [57]

ASDA
premise – If you bring about A, then B will occur
premise – B is a danger to you
therefore: you should not bring about A

2.3. Computational argumentation

Computational argumentation is an established method for reasoning with incomplete, and at times conflicting, information or knowledge [45]. An argument is structured so that it has a conclusion or a claim, and the support for the claim. One way to define such arguments is through the instantiation of argument schemes. When argumentation is employed in the context of decision support, its structure supports a more human-like reasoning process where arguments’ conclusions, their support and their relations can be modelled. Argumentation has been extensively explored within the multi-agent systems community [3, 29, 51], and in Section 5 we discuss previous work on using argumentation in clinical decision support systems (DSS).

While there are several different formal models of argumentation, for example [2, 17, 37], all models include the notion of an argument, which we denote Arg = ⟨S, c⟩: a set of premises S, defined in some language L, which support the conclusion c. An Argumentation Framework (AF) [12] represents a set of arguments A and the relationships between the members of the set. Formally, an AF is a pair ⟨A, R⟩, where A is a set of arguments and R ⊆ A × A is a binary attack (counter-argument) relation between arguments. When using argumentation schemes, we suggest that attack relations come from the instantiation of the CQs associated with a scheme. We explain in detail how this can be done in Section 3.3, but consider the argumentation framework depicted in Fig. 1, where the boxes represent the arguments and the arrows represent the attack relation between them. For example, Arg1 and Arg2 are two arguments; the arrow from Arg2 to Arg1 indicates that Arg2 attacks Arg1, i.e., (Arg2, Arg1) ∈ R. If Arg1 is an argument instantiated from a scheme AS1, and AS1 is associated with a critical question that is represented as another scheme AS2, then Arg2, an argument instantiating AS2, will attack Arg1. Then, if there is a CQ associated with AS2 that is represented as another scheme AS3, the instantiation of AS3 will yield another argument Arg3 which will attack Arg2. In this simple example, the argumentation framework can be formally represented as ⟨{Arg1, Arg2, Arg3}, {(Arg2, Arg1), (Arg3, Arg2)}⟩.

Fig. 1.

An example of an argument framework where Arg1 is an argument instantiated from a scheme AS1. The critical question to AS1 is instantiated using another scheme AS2, and as such Arg2, an argument instantiated from AS2, will attack Arg1. The critical question to AS2 is instantiated using another scheme AS3, and an argument generated from AS3 will attack Arg2.


Given an AF ⟨A, R⟩, one can identify subsets of arguments that are coherently justified [12] (these coherent sets then represent views that we might consider as rational conclusions from the AF). Let S ⊆ A be a set of arguments. We say S is conflict-free (cf) iff (S × S) ∩ R = ∅; i.e., no two arguments in S attack each other. We say an argument Arg ∈ A is acceptable w.r.t. S iff all attackers of Arg are in turn attacked by some argument in S. For any S, let d(S) ⊆ A denote the set of arguments acceptable w.r.t. S. We say S is self-defending iff S ⊆ d(S). We say S is admissible iff it is conflict-free and self-defending. Intuitively, admissible sets of arguments represent justified sets of arguments that are collectively consistent and can respond to all their counter-arguments. Thus admissibility is typically taken as the minimal requirement for a set of arguments to be considered reasonable. There are a range of more stringent requirements, identifying more restricted sets of arguments that can be accepted. Such sets of arguments are called extensions, and the approaches for determining which arguments are accepted are called semantics, such as the grounded and preferred semantics. An admissible set S satisfying S = d(S) is a complete extension. The grounded extension is the ⊆-smallest complete extension, and S is a preferred extension iff S is a ⊆-maximal complete extension. Given the framework in Fig. 1:

  • There are multiple conflict-free sets: {Arg1, Arg3}, {Arg2}, {Arg1}, {Arg3} and {} (the empty set)

  • The admissible sets are {Arg1, Arg3}, {Arg3} and {}

  • The grounded extension is {Arg1, Arg3}

  • The (unique) preferred extension is {Arg1, Arg3}
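For a framework this small, these sets can be checked by brute-force enumeration. The following sketch recomputes them directly from the definitions given above; it is an illustration only, not an optimised argumentation solver, and for the grounded extension it simply takes the smallest complete extension.

```python
from itertools import combinations

# Brute-force evaluation of the Fig. 1 framework (Arg2 attacks Arg1,
# Arg3 attacks Arg2), directly from the definitions in Section 2.3.
A = {"Arg1", "Arg2", "Arg3"}
R = {("Arg2", "Arg1"), ("Arg3", "Arg2")}

def conflict_free(S):
    return not any((x, y) in R for x in S for y in S)

def acceptable(arg, S):
    # every attacker of arg must itself be attacked by some member of S
    attackers = {x for (x, y) in R if y == arg}
    return all(any((d, x) in R for d in S) for x in attackers)

def admissible(S):
    return conflict_free(S) and all(acceptable(arg, S) for arg in S)

subsets = [set(c) for n in range(len(A) + 1) for c in combinations(sorted(A), n)]
adm = [S for S in subsets if admissible(S)]
# a complete extension contains exactly the arguments acceptable w.r.t. it
complete = [S for S in adm if {a for a in A if acceptable(a, S)} == S]
grounded = min(complete, key=len)  # the unique smallest complete extension
preferred = [S for S in complete if not any(S < T for T in complete)]

print([sorted(S) for S in adm])        # [[], ['Arg3'], ['Arg1', 'Arg3']]
print(sorted(grounded))                # ['Arg1', 'Arg3']
print([sorted(S) for S in preferred])  # [['Arg1', 'Arg3']]
```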

In Section 4, we provide case studies where we consider the preferred semantics that give possible sets of acceptable arguments; then a clinician decides on a treatment option accordingly.

2.4. Reasoning with the schemes to support recommendations

As already discussed, argumentation schemes are instantiated to construct arguments. Attack relations between arguments are then added, based on several different kinds of knowledge. One type, as discussed above, takes the form of critical questions for argumentation schemes. Another type of knowledge that leads to attack relations is domain knowledge. For example, in the case of clinical decision support, this domain knowledge could be about two treatments that are alternatives, so that an argument for one is an argument against another. A final kind of knowledge leading to attack relations is domain independent. An example of this is that an argument for something being true can be an argument against it being false. The sets of arguments and attacks are then used to construct an argumentation framework, and that can be evaluated as articulated in Section 2.3. Acceptable arguments and extensions can be computed in order to identify the possible treatment options, and reasons for any treatment not to be considered can be provided. There are many approaches to reasoning and supporting recommendations given an argumentation framework and the choice of approach or method is independent of the fact these arguments were defined using a specific set of schemes.
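As a small illustration of the latter two attack sources, the sketch below derives attacks from a domain-knowledge table of alternative treatments and from a domain-independent conflict rule. All argument contents and names here are hypothetical.

```python
# Hypothetical sketch: deriving attacks from domain knowledge (alternative
# treatments) and a domain-independent rule (an argument to avoid a treatment
# attacks an argument recommending it). All names are illustrative.

arguments = {
    "A1": ("recommend", "drug_A"),
    "A2": ("recommend", "drug_B"),
    "A3": ("avoid", "drug_A"),  # e.g. obtained by instantiating a critical question
}
# domain knowledge: drug_A and drug_B are alternatives for the same goal
alternatives = {frozenset({"drug_A", "drug_B"})}

def derive_attacks(args):
    R = set()
    for x, (stance_x, treat_x) in args.items():
        for y, (stance_y, treat_y) in args.items():
            if x == y:
                continue
            # domain-independent: avoid(T) attacks recommend(T)
            if stance_x == "avoid" and stance_y == "recommend" and treat_x == treat_y:
                R.add((x, y))
            # domain knowledge: recommendations for alternatives attack each other
            if stance_x == stance_y == "recommend" and frozenset({treat_x, treat_y}) in alternatives:
                R.add((x, y))
    return R

print(sorted(derive_attacks(arguments)))
# [('A1', 'A2'), ('A2', 'A1'), ('A3', 'A1')]
```

The resulting set of arguments and attacks can then be evaluated under a chosen semantics, as described in Section 2.3.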

The use of argument schemes to generate argumentation frameworks can be carried out in several ways. ASPIC+ is a framework for structured computational argumentation that supports the generation of arguments and attacks by using a knowledge base including axioms and rules [37]. For example, argumentation schemes can be represented as defeasible rules. Another approach, which allows for a broad range of domain-dependent attacks between arguments, is that described in [26], in which attack schemes can be instantiated so as to identify attack relations. We have also explored the use of metalevel argumentation frameworks (MAFs) [36] to support medical decisions [25, 27]. MAFs allow the rationales for attacks to be provided as meta-arguments; therefore, challenging the rationale for any given attack becomes possible. Moreover, meta-arguments providing rationales for preferences (i.e., preferences between other ‘object-level’ arguments) can be constructed, so enabling reasoning about possibly conflicting preferences (e.g., when clinicians’ preferences for treatment options may conflict). These approaches are not mutually exclusive since, for example, attack schemes can also be represented in MAFs.

3.Method: Modelling decision support using argument schemes

In this section we introduce a set of specialised schemes to support reasoning about treatments in the clinical setting. These schemes were obtained by specialising existing generic schemes, such as the Argument Scheme for Practical Reasoning, so as to yield versions suitable for specific clinical contexts. Table 6 lists these specialised schemes, the generic schemes they are derived from, and their roles; Fig. 2 illustrates how the schemes are related to each other. In this section, we also outline the requirements for a clinical treatment knowledge base, and make some assumptions about the state, availability and format of patient-related findings. The role of the specialised schemes is to provide a structure for considering all the treatment options for a given clinical objective. The use of the specialised schemes and critical questions will ensure that all possible treatment options (as per the applicable guidelines) are considered, and that any objection to their use is evaluated and documented. The Argumentation Framework that results from instantiating the specialised schemes will map all the treatments considered, the objections raised and their rationale. This Argumentation Framework will form the basis for recommending the possible treatment options to medical experts such as General Practitioners (GPs). It will also support the task of explaining why a particular treatment is not an option, through the rationale provided by the counter-arguments that attack it.

Table 6

The list of specialised schemes, the schemes they are derived from and their CQ role

Original scheme | Clinical specialisation | CQ role
ASPR (AS Practical Reasoning) | ASPT (AS Proposed Treatment) | ASPT.CQ4, ASnegSE.CQ1
ASNC (AS Negative Consequences) | ASnegSE (AS Negative Side Effect) | ASPT.CQ2
ASEtoC (AS Effect to Cause) | AS_eff_cause (AS Side Effect to Cause) | ASnegSE.CQ2
ASA (AS Analogy) | ASPatMedHist (AS Patient Medical History) | ASPT.CQ1
ASDA (AS Danger Appeal) | ASContraInd (AS Contraindication) | ASPT.CQ3
Fig. 2.

The relationship between the proposed specialised clinical argument schemes. Solid arrows point to the schemes from the critical questions for those schemes and on to the schemes which instantiate the critical questions. The bidirectional arrows between ASPT and ASPT.CQ4 indicate that ASPT.CQ4 is itself instantiated using ASPT. The dotted arrows identify when a critical question is answered by the creation of a goal, which then leads to another instantiation of ASPT.


3.1.Schemes for clinical decision making

The specialised schemes we propose will form part of the decision making process that occurs after a clinician has made a diagnosis and is deciding on treatments. The diagnosis will be associated with a clinical goal; for example, if there is a diagnosis of hypertension (high blood pressure) then the associated clinical goal would be to lower the patient’s blood pressure. The reasoning process that needs to be captured by the specialised schemes should include proposing treatments from the applicable clinical guidelines, ensuring that there are no contraindications to the treatment for the patient, and ascertaining whether the treatment has been used by the patient in the past without being effective, or has caused side effects that made it intolerable. Furthermore, as there are typically multiple treatment options, this reasoning process also needs to ensure that all the alternative options relevant at the current stage of treatment are considered.

To capture this reasoning process, we introduce five new argument schemes, which are specialisations of the schemes introduced in Section 2.3. Table 6 contains the relation between the original schemes and the clinically specialised ones that we propose. These schemes are: the argument scheme for proposed treatment (ASPT) [28], the argument scheme from negative side effect (ASnegSE), the argument scheme from side effect to cause (AS_eff_cause), the argument scheme for contraindication (ASContraInd) and the argument scheme from patient medical history (ASPatMedHist).

The relationships between the specialised schemes are illustrated in Fig. 2, where the arrows point from the attacking argument (instantiating the scheme representing the critical question) to the attacked argument (instantiating the argument scheme whose CQ is posed in the form of the attacking argument). Schemes are denoted by boxes, critical questions by ovals. Thus the scheme ASPT at the top of the figure has four critical questions, represented by the ovals connected to ASPT by arrows. The four critical questions (including the alternative action) all represent possible ways to generate additional arguments relevant to the applicability of ASPT. We can also see which argument scheme is applicable for each of these critical questions; for example, the Argument Scheme from Patient Medical History’s role is as the first critical question of ASPT. The notation ASXX.CQy denotes the yth critical question of the argumentation scheme ASXX; therefore ASPatMedHist is ASPT.CQ1.

As is clear from its position in Fig. 2, the key scheme here is the Argument Scheme for Proposed Treatment (ASPT) [28], which we derived from the Argument Scheme for Practical Reasoning [3] in order to reason specifically about clinical treatments and actions. ASPT instantiates an argument in support of each possible treatment, given the known facts F about the patient and the treatment goal G to be realised, e.g. lowering the patient’s BP. The arguments instantiated by this scheme are all subject to CQs: before proposing the use of a specific treatment, all possible objections to its use, as well as alternative approaches, should be explored. In this case the critical questions are used to instantiate counterarguments to the arguments instantiated by ASPT.

ASPT as proposed in [28], with the addition of a new critical question (CQ4), is shown in Table 7. This additional critical question is derived from ASPR’s CQ6 (“Are there alternative ways of realising the same goal?”). The patient facts F include (amongst possibly other information) the patient’s age, BP (including stage), ethnicity, previous treatments and relevant clinical history (such as allergies).

Table 7

Argument scheme for a proposed treatment

ASPT
premise – Given the patient Facts F
premise – In order to realise the goal G
premise – Treatment T promotes the goal G
therefore: Treatment T should be considered
Critical questions
CQ1: Has treatment T been unsuccessfully used on the patient in the past?
CQ2: Has treatment T caused side effects on the patient?
CQ3: Given patient facts F, are there any contraindications to treatment T at step i?
CQ4: Are there alternative Actions to achieve the same goal G?

By taking each of the critical questions of ASPT in turn, we can assess which existing argument scheme template best suits each of them. We start with ASPT.CQ1, which asks “Has treatment T been unsuccessfully used on this patient in the past?”. In other words, ASPT.CQ1 seeks confirmation that, if this treatment was used in the past, there is no evidence that it was ineffective. If a treatment was not effective on a specific patient in the past, then this provides a counter-argument to the repeat use of the same treatment. The Argument Scheme from Analogy [57] provides a starting template for ASPT.CQ1, and the specialised scheme we propose is the Argument Scheme from Patient Medical History (ASPatMedHist). This scheme is outlined with its own critical questions in Table 8. The critical questions of ASPatMedHist can also be specialised; however, we leave this to future work.

Table 8

Argument scheme from patient medical history

ASPatMedHist
premise – Treatment T is similar or identical to treatment T′
premise – T′ was not effective in achieving goal G
therefore: T is not effective for achieving goal G
Critical questions
CQ1: Was T′ not effective in achieving goal G?
CQ2: Are T and T′ similar in the respects cited?
CQ3: Are there important differences (dissimilarities) between T and T′?
CQ4: Is there some other case T″ that is also similar to T except that T″ was effective in achieving goal G?

ASPT.CQ2 can be addressed by an argument instantiating a specialisation of the Argument Scheme from Negative Consequences, as it queries the past occurrence of undesirable side effects that have caused the patient not to tolerate the treatment. The difference between ASPT.CQ1 and ASPT.CQ2 is that the latter does not consider whether the treatment was effective, only whether it caused some undesirable side effect; we assume that treatments can be effective, yet not tolerable, due to their side effects. The Argument Scheme from Negative Consequences in Table 2 needs to be modified to support treatment recommendation; its specialised form, the Argument Scheme from Negative Side Effect (ASnegSE), is shown in Table 9. This scheme has two critical questions: ASnegSE.CQ1 is concerned with the possibility of ameliorating the side effects observed, while ASnegSE.CQ2 probes whether the side effect could have an alternative explanation (or cause).

Table 9

Argument scheme from negative side effect

ASnegSE
premise – if SE is an effect of T
premiseSE is a harmful outcome (vs stated health goals and facts).
therefore: T should not be considered.
Critical questions
CQ1: Are there ways of ameliorating the side effects observed?
CQ2: Could the observed side effect have an alternative explanation?

The instantiation of the first CQ of the AS from Negative Side Effect (ASnegSE.CQ1) generates a new clinical objective or goal, which can then lead to the re-use of ASPT with this modified goal.33 For example, suppose the initial goal is Blood Pressure Reduction and treatment T is proposed through the instantiation of ASPT. If a patient experiences a headache as a side effect of treatment T, then the new goal would be Pain Reduction. The instantiation of the second CQ of the AS from Negative Side Effect (ASnegSE.CQ2) involves the use of the Argument Scheme from Effect to Cause [57], derived from the Abductive Argumentation Scheme, which relies on a set of premises to be confirmed or validated by a clinician. This specialised scheme is shown in Table 10. For example, if while taking treatment T a patient suffers from headaches, and headache is a listed possible side effect of treatment T, this constitutes a satisfactory justification. Additionally, there is room for clinical judgment as to whether there is an alternative causal justification. This can be achieved as a critical question that requires an answer from a human expert (in this case a clinician). A similar set-up, where some critical questions are queries on knowledge bases and some require a human, was outlined in [49].

Table 10

Argument scheme for argument from effect to cause

AS_eff_cause
premiseObs is an observation in the form of some event that has occurred or a clinical measurement
premise – E is a satisfactory justification for obs
premise – no alternative causal explanation E, given so far is as satisfactory as E
therefore: E is plausible as a hypothesis for the cause of obs
Critical questions
CQ1: How satisfactory is E as an explanation of F, apart from the alternative explanations available?
CQ2: How much better an explanation is E than the alternative explanations available (in medical literature)?
CQ3: How far has the diagnosis progressed?
CQ4: Would it be better to run more diagnostics the instead of drawing a conclusion at this point?
Table 11

Argument scheme for contraindication

ASContraInd
premise – if Treatment T is used when F is true then C will occur
premiseC is a danger
therefore: Treatment T should not be used

ASPT.CQ3 is concerned with known contraindications for a treatment given specific circumstances or patient facts. These sets of contraindications are extracted as rules from the clinical guidelines. For example, if a specific type of treatment is not to be used above a specific age (for example, Ibuprofen can cause gastrointestinal (GI) bleeds in patients above a certain age), then this will generate a counter-argument to the use of that treatment for the patient. The Argument Scheme from Danger Appeal is used as the basis for the specialised Argument Scheme for Contraindication, which is outlined in Table 11. The Argument Scheme from Danger Appeal does not have critical questions; therefore the Argument Scheme for Contraindication does not have any either. In cases where a contraindication is identified, ASPT should be instantiated on alternative treatments for the same goal.

Finally ASPT.CQ4 expands the possible arguments by asking if there are alternative actions (or treatments) available to achieve the same goal. If alternatives are available, then this results in additional instantiations of ASPT with the alternative treatment options.
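As an illustrative sketch only (this is not the generation algorithm of [26], and the fact lists and the `contraindicated` input are hypothetical), the CQ-driven construction of arguments and attacks described above might look as follows in Python:

```python
# Simplified sketch of CQ-driven argument/attack generation.
# All facts and names here are hypothetical, for illustration only.

def generate(goal, promotes, contraindicated):
    args, attacks = [], []
    # ASPT and ASPT.CQ4: one argument per treatment promoting the goal,
    # with mutual attacks between alternatives for the same goal.
    treatments = [t for t, g in promotes if g == goal]
    for t in treatments:
        args.append(("aspt", t))
    for t in treatments:
        for u in treatments:
            if t != u:
                attacks.append((("aspt", t), ("aspt", u)))
    # ASPT.CQ3: a known contraindication yields an ASContraInd
    # counter-argument against the corresponding ASPT argument.
    for t in treatments:
        if t in contraindicated:
            args.append(("ascontraind", t))
            attacks.append((("ascontraind", t), ("aspt", t)))
    return args, attacks

args, attacks = generate(
    goal="rbp",
    promotes=[("ace", "rbp"), ("arb", "rbp")],
    contraindicated={"ace"},
)
print(args)  # [('aspt', 'ace'), ('aspt', 'arb'), ('ascontraind', 'ace')]
```

The real implementation expresses this generation declaratively, as ASP rules of the kind shown later in Table 13.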

3.2.Clinical guidelines

As mentioned above, our work, being UK-based, made use of NICE guidelines, although we see no reason why guidelines from other countries could not also be used. Below we will illustrate the use of the argumentation schemes introduced above using examples based on the treatment of hypertension. Within the NICE guidelines, the treatment of hypertension involves a patient progressing along a pathway involving the administration of a number of medications. A partial view (it only covers the first step) of the hypertension treatment guideline CG127 published by NICE is shown in Table 12. As shown, there are four possible treatment medications: ACE Inhibitor or low-cost Angiotensin II receptor blocker (ARB), Beta-blocker, calcium-channel blocker (CCB) and thiazide-like Diuretic. In [27], we introduced a formal representation of this guideline in terms of logic-based rules. We will use a subset of these rules here.

Table 12

Step 1 of NICE guideline CG127

Step (1.a):
“Offer people aged under 55 years step 1 antihypertensive treatment with an ACE inhibitor or a low-cost ARB.
If an ACE inhibitor is prescribed and is not tolerated (for example, because of cough), offer a low-cost ARB.
Beta-blockers are not a preferred initial therapy for hypertension. However, beta-blockers may be considered in younger people, particularly: (1) those with an intolerance or contraindication to ACE inhibitors and angiotensin II receptor antagonists, or (2) women of child-bearing potential, or (3) people with evidence of increased sympathetic drive.”
Step (1.b):
“Offer step 1 antihypertensive treatment with a CCB to people aged over 55 years and to black people of African or Caribbean family origin of any age.
If a CCB is not suitable, for example because of oedema or intolerance, or if there is evidence of heart failure or a high risk of heart failure, offer a thiazide-like diuretic.”

3.3.Implementation

We have implemented the argumentation schemes introduced above, along with knowledge from the hypertension guideline. Our code can be accessed from our repository.44 The basis of our implementation is ASPARTIX [14]. Thus we use an Answer Set Programming (ASP) representation for the knowledge base, and ASPARTIX provides the mechanism for computing the justified arguments of an argumentation framework (AF), providing us with the choice of different argumentation semantics. On top of this, we developed software that can: (1) combine various inputs (specialised schemes, clinical guidelines, patient facts and so on) represented as a set of ASP facts and rules (Table 13), (2) invoke ASPARTIX to compute the justified arguments, and (3) post-process the results to generate visual representations for the resulting argumentation frameworks. The algorithm that generates an argumentation framework, on which ASPARTIX is run, from the knowledge in the guidelines and the argumentation schemes, was presented in [26]. The algorithm only considers relevant schemes instead of considering all the schemes available, where the relevance of the schemes is decided according to the relations between schemes (i.e. if a scheme is a critical question of another). Hence, the algorithm can process relationships as described in Fig. 2 to compute an argumentation framework.

Table 13

A partial view of the knowledge base, which covers the case studies. Lines beginning with # are comments

NHS Guideline
# If the age of the patient is below 55, ace or arb can be recommended at Step 1.
action(ace) :- age(Patient, A), A < 55, step(1).
action(arb) :- age(Patient, A), A < 55, step(1).
# If the age of the patient is above 55, ccb can be recommended at Step 1.
action(ccb) :- age(Patient, A), A > 55, step(1).
# If the age of the patient is above 55 and ccb is not tolerated, thiazide can be recommended at Step 1.
action(thiazide) :- age(Patient, A), A > 55, nottolerated(ccb), step(1).
NHS Guideline – Alternative Actions
alter(arb, ace).
alter(ccb, thiazide).
Argumentation Scheme ASPT
# If there is an action that promotes a goal, an ASPT argument is constructed.
arg(aspt([goal(G), action(A), promotes(A, G)], action(A))) :- goal(G), action(A), promotes(A, G).
Hypertension – Goal Mapping
# If the patient suffers from hypertension, the goal is reduce blood pressure (rbp).
goal(rbp) :- suffers_from(Patient, hypertension).
Action – Goal Mapping
# ace, arb and ccb are actions that promote the goal rbp.
promotes(ace, rbp).
promotes(arb, rbp).
promotes(ccb, rbp).
promotes(thiazide, rbp).
CQ4 related rules:
# Two alternative actions promoting the same goal is an argument.
arg(aspt([goal(G), action(X), promotes(X, G)], action(X))) :- alternative(X, Y), promotes(X, G), promotes(Y, G).
# Alternative actions cannot be prescribed together.
att(aspt([goal(G), action(X), promotes(X, G)], action(X)),
aspt([goal(G), action(Y), promotes(Y, G)], action(Y))) :- alternative(X, Y), promotes(X, G), promotes(Y, G).
Other knowledge:
# The predicate alternative is a symmetric relation.
alternative(X, Y) :- alter(X, Y).
alternative(X, Y) :- alter(Y, X).
# If an action X may cause a side-effect reported as an observation, X is not tolerated.
nottolerated(X) :- observation(O), may_cause(X, O).
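Purely as an illustration, the age- and step-based derivations encoded by the ASP rules above can be re-expressed as a Python function (the actual knowledge base is the ASP program shown in Table 13):

```python
# Illustrative Python re-expression of the Table 13 action rules.
# The real implementation uses the ASP rules shown above.

def candidate_actions(age, step, not_tolerated=frozenset()):
    """Return the candidate Step 1 treatments for a patient."""
    actions = set()
    if step == 1:
        if age < 55:
            actions |= {"ace", "arb"}   # under-55 branch of the guideline
        if age > 55:
            actions.add("ccb")          # over-55 branch
            if "ccb" in not_tolerated:  # fall back to thiazide
                actions.add("thiazide")
    return actions

print(candidate_actions(age=50, step=1))                         # {'ace', 'arb'}
print(candidate_actions(age=60, step=1))                         # {'ccb'}
print(candidate_actions(age=60, step=1, not_tolerated={"ccb"}))  # {'ccb', 'thiazide'}
```

Note that, as in the ASP rules, a patient aged exactly 55 matches neither branch; the full guideline covers such boundary cases.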

We will show the use of the specialised schemes introduced in this paper, and the algorithm from [26], through two case studies described in Section 4. This approach works for those examples, but we acknowledge that further work will be necessary before substantially more complex examples can be tackled, specifically those with larger sets of argument schemes and critical questions. In particular, as we further develop a dialogical formalisation enabling use of the schemes and CQs, attention will need to be paid to defining a strategy function that prioritises the choice of which schemes and CQs to use at any point in the unfolding dialogue; such a choice is typically encoded in a dialogue protocol’s strategy function. Indeed, such a function could well be informed by doctors, who could help rank the schemes and CQ choices based on their experience.

The knowledge base includes all the domain-specific information in terms of ASP facts and rules. In the medical context, we consider clinical guidelines to be part of the knowledge base. Moreover, each argumentation scheme is translated into a domain-specific rule that generates arguments and attacks automatically, as described in [26]. The information in the knowledge base is static, but can be updated at any time to reflect the most up-to-date information, such as new clinical guidelines or argumentation schemes. A partial view of the knowledge base, as implemented, is shown in Table 13. For simplicity, we only display a formal representation that covers the case studies. The whole knowledge base can be accessed from our repository.

A domain-specific rule is of the form Q :- P, which means that if the antecedent P holds, then the consequent Q also holds. In this work, Q is a single atom, while P is a conjunction of atoms separated by commas. The domain-specific rules are grouped together, and each rule has a description above it. There are two special predicates, arg() and att(); the argumentation engine uses these two predicates to construct the argumentation framework, including the arguments and the attacks respectively. The patient facts are a set of unary and binary predicates about the patient. This component is dynamic in the sense that the information is extracted from the medical records of the patient, or during the interactions between a patient and a GP. We show an example of this in Section 4.

3.4.Preferences

The argumentation schemes proposed herein were developed to support the GP and the patient when deciding on what treatments to consider. In such a situation the decision on what treatment to ultimately prescribe is made in a dialogue between the GP and the patient, external to the argumentation schemes but supported by the argumentation framework instantiated by the specialised schemes and the resulting extensions. In some cases the attacks amongst arguments per se may not definitively decide a course of action. In such cases, one can preferentially select amongst arguments on the basis of information that is typically assumed exogenous to the domain of reasoning. This is illustrated in Section 4.1, in which two arguments for alternative treatments symmetrically attack each other, and hence yield two preferred extensions, each of which contains one of the two arguments (in such a case, the grounded extension would of course be empty). One could then appeal to valuations of these arguments, e.g., in terms of cost or effectiveness, to decide the issue, or indeed to patient and/or GP preferences. However, such preference information may itself be conflicting (e.g., the patient’s and GP’s preferences may conflict), and one would want to incorporate reasoning about preference information in the framework. Indeed, this has been explored in work orthogonal to that presented in this paper. On the one hand, in [28] we outline an approach which enables argumentation-based reasoning about preferences by extending Dung frameworks to include arguments expressing preferences over other arguments, i.e., Extended Argumentation Frameworks [34]. On the other hand, in [27] we explore an alternative means of incorporating argumentation-based reasoning about possibly conflicting preferences, through the use of Metalevel Argumentation Frameworks [36], which are essentially Dung frameworks that accommodate ‘meta’ arguments expressing preferences over other arguments.

4.Case studies

We present two clinical case studies to illustrate the application of our approach. Both case studies are within the hypertension domain. The first one focuses on the recommendations to be made at the start of the treatment, i.e. when a patient is first diagnosed with Hypertension. The second one focuses on a patient that requires decisions to be made further along the treatment. These case studies were selected and developed in conjunction with a General Practitioner (GP) and reflect common situations in which treatment options are discussed. In Fig. 2, we provide the full list of argumentation schemes; the two case studies below will consider three schemes (ASPT, ASnegSE, AS_eff_cause) and the relationships between them to show the applicability of the specialised schemes. We use our previously developed algorithm in order to compute the argumentation frameworks based on relevant argumentation schemes as described in Section 3.3. The argumentation frameworks generated by the algorithm are available in our repository.

4.1.Case study 1

In this scenario, Joey, a 50-year-old male patient, has suffered a stroke and has hypertension. For Joey, the goal is to keep his blood pressure under control, and for this Joey is meeting and discussing treatment options with his GP. This is a simple application of the proposed approach, and it leverages the hypertension guideline rules relevant to Step 1.a from Table 13, as this is the first iteration of treatment. For this particular scenario, we represent the patient facts extracted from Joey’s medical record as: suffers_from(joey, hypertension), age(joey, 50) and step(1). The argumentation engine will make recommendations by instantiating the argumentation schemes outlined in this paper, considering the domain-specific rules and the patient facts as described in Section 3.2.

Instantiating ASPT with a goal of reducing blood pressure (rbp), given the known patient facts and clinical guidelines, will result in an initial argument in support of the use of ace. This argument is aspt([goal(rbp), action(ace), promotes(ace, rbp)], action(ace)) and is generated as an instantiation of ASPT. The critical questions ASPT.CQ1, ASPT.CQ2 and ASPT.CQ3 do not result in arguments attacking ace. ASPT.CQ4 results in the instantiation of ASPT for an alternative treatment, the use of arb. This argument is aspt([goal(rbp), action(arb), promotes(arb, rbp)], action(arb)). In Table 13, we show the two CQ4-related rules. The first states how an alternative treatment promoting the same goal becomes an argument. According to the guideline, ace and arb are two alternative actions that promote the same goal (rbp); hence, arb becomes an argument. This argument is not attacked by any of its critical questions, for the same reasons as ace. The second rule states that alternative actions should not be prescribed together. In our example, there are two ASPT arguments in support of alternative treatments; hence, an attack relation is created between these two arguments.

The resulting argumentation framework (AF) is shown in Fig. 3. The resulting AF supports two possible courses of action: taking ace or arb, but not both at the same time, as they are alternative actions. At this stage, we can adopt different semantics to make a recommendation. For example, under preferred semantics [12] this AF will result in two different extensions: the first one promoting the use of ace, and the second one promoting the use of arb. The GP will then make a decision together with Joey as to which of these two treatments to consider. In [28], we discuss how instantiations of argument schemes, preferences and other considerations such as cost can be incorporated into the reasoning.
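The two preferred extensions can be checked with a small brute-force sketch (illustrative only; ASPARTIX computes extensions via dedicated ASP encodings), enumerating the admissible sets and keeping the maximal ones:

```python
from itertools import combinations

# Brute-force preferred extensions (maximal admissible sets) for the
# case study 1 framework, in which ace and arb attack each other.
# Illustrative sketch; ASPARTIX computes this declaratively.

args = ["ace", "arb"]
attacks = {("ace", "arb"), ("arb", "ace")}

def admissible(s):
    if any((a, b) in attacks for a in s for b in s):
        return False  # not conflict-free
    for a in s:       # every attacker of s must be counter-attacked by s
        for b, c in attacks:
            if c == a and not any((d, b) in attacks for d in s):
                return False
    return True

sets = [frozenset(c) for n in range(len(args) + 1)
        for c in combinations(args, n) if admissible(frozenset(c))]
preferred = [s for s in sets if not any(s < t for t in sets)]
print(sorted(sorted(s) for s in preferred))  # [['ace'], ['arb']]
```

As expected, there are exactly two preferred extensions, one per treatment, while the grounded extension (the least complete extension) is empty.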

Fig. 3.

Argumentation framework for Case study 1.


4.2.Case study 2

In this scenario, Rachel, a 60-year-old female patient, has suffered a stroke and has hypertension. Rachel needs to discuss treatment options with her GP. The known facts about Rachel are: suffers_from(rachel, hypertension), age(rachel, 60) and step(1). The argumentation engine will recommend treatments according to the information in its knowledge base and the patient facts, as in the previous case study.

Visit 1. According to the guideline information (Table 12), not all Step 1 treatments are suitable for Rachel because of the age restrictions; in other words, the use of ace or arb cannot be recommended. However, according to Step (1.b), ccb is a possible treatment for Rachel. Hence, the generated AF includes a single ASPT argument in support of ccb: aspt([goal(rbp), action(ccb), promotes(ccb, rbp)], action(ccb)). Therefore, Rachel is prescribed ccb.

Visit 2. Rachel returns to her GP a few days later complaining of a headache. Rachel and her GP now need to reason about whether the treatment needs to be modified. The argument schemes are instantiated again, now making use of a new observation about Rachel, observation(headache), and a new fact, harmful(headache). This observation results in a new inference from the knowledge base: nottolerated(ccb) (see Table 13). The resulting AF is shown in Fig. 4. There is a new ASPT argument that supports the use of thiazide. Moreover, the new observation leads to the instantiation of the argumentation scheme ASnegSE. This new argument attacks the ASPT argument in support of the use of ccb. The preferred extension includes the argument based on clinical judgement, as the clinician in this case confirmed that the headache was indeed a side effect of ccb (e.g. a new fact in the KB, may_cause(ccb, headache)). In such a case, thiazide can be recommended.
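The acceptability status in this visit 2 framework can be checked by iterating Dung's characteristic function to its least fixpoint, as in the following sketch (argument names abbreviate the instantiated schemes; the framework mirrors Fig. 4, where the ASnegSE argument attacks ccb and the two alternative treatments attack each other):

```python
# Grounded-extension fixpoint for the visit 2 framework (sketch).
# asnegse attacks ccb; ccb and thiazide, being alternatives, attack
# each other.

args = {"asnegse", "ccb", "thiazide"}
attacks = {("asnegse", "ccb"), ("ccb", "thiazide"), ("thiazide", "ccb")}

def defended(a, s):
    """a is defended by s if s attacks every attacker of a."""
    attackers = {b for (b, c) in attacks if c == a}
    return all(any((d, b) in attacks for d in s) for b in attackers)

grounded, prev = set(), None
while grounded != prev:  # iterate the characteristic function to a fixpoint
    prev = set(grounded)
    grounded = {a for a in args if defended(a, grounded)}
print(sorted(grounded))  # ['asnegse', 'thiazide']
```

The unattacked ASnegSE argument defeats ccb, which in turn reinstates thiazide, matching the recommendation of thiazide in this case.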

Fig. 4.

Second AF for the case study: clinician judgement: headache is a side effect.


Contrast this with the case where the clinician does not think that the headache is a side effect of ccb. This input from the clinician instantiates the argument scheme AS_eff_cause. Since the clinician thinks that the headache is not caused by Rachel’s use of ccb, the justification derived from the knowledge base is no longer plausible. Therefore, this new argument attacks the argument constructed by the instantiation of the argumentation scheme ASnegSE. This alternative case is depicted in Fig. 5. Here there are two extensions under the preferred semantics, because we have two ASPT arguments that attack each other. The GP can recommend thiazide or ccb in this case, based on reasoning similar to that shown for the previous case study. Moreover, such recommendations can be used to display explanations to the GP (e.g., ccb is a possible treatment option since the headache may not be caused by the use of ccb) [26].

Fig. 5.

Second AF for the case study: clinician judgement: headache is not a side effect.


4.3.Towards explainability

The benefit of argumentation in this case is that the rationale for recommending each specific treatment is present, and the process of posing critical questions ensures that any reasons not to recommend a treatment are checked. The rationale for each of the two arguments is contained within the argument structure; for example, in the case of arb, the rationale supporting action(arb) is promotes(arb, rbp). Further explanation templates [26, 48] may be used to exploit the structure of the argument schemes in order to generate a more natural, personalised explanation of the rationale. Applying the approach proposed in [48] to an argument instantiated by ASPT, such as aspt([goal(rbp), action(ace), promotes(ace, rbp)], action(ace)), results in the explanation in Table 14. As part of our future GP evaluation study we plan to evaluate the templates and layouts of the arguments with clinicians.

Table 14

An ASPT argument’s explanation template [48]

Argument scheme | Explanation template
ASPT: aspt([goal(rbp), action(ace), promotes(ace, rbp)], action(ace)) | Treatment ace should be considered as it promotes goal reduced blood pressure (rbp), given patient facts of Joey
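A sketch of how such a template might be filled from the components of an ASPT argument is given below (the mapping from rbp to its textual description is a hypothetical example for illustration; the actual templates are defined in [48]):

```python
# Sketch of rendering the Table 14 explanation template from the
# components of an ASPT argument. The goal-description mapping is a
# hypothetical illustration, not the mapping used in [48].

GOAL_TEXT = {"rbp": "reduced blood pressure (rbp)"}

def explain_aspt(action, goal, patient):
    """Fill the ASPT explanation template for one argument."""
    return (f"Treatment {action} should be considered as it promotes "
            f"goal {GOAL_TEXT[goal]}, given patient facts of {patient}")

print(explain_aspt("ace", "rbp", "Joey"))
# Treatment ace should be considered as it promotes goal reduced
# blood pressure (rbp), given patient facts of Joey
```

Because the premises of each instantiated scheme are carried inside the argument structure, the same approach extends to the counter-argument schemes, yielding explanations of why a treatment was rejected.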

5.Related work

This paper builds on two main lines of previous work: the use of argumentation schemes, particularly where they have been applied to decision support, and the use of argumentation in medical decision making. These are the two lines of research on which we focus.

As noted above, the notion of argumentation schemes came originally from work on informal logic, particularly the work of Doug Walton [56–58]. Since they were first introduced, argumentation schemes have been increasingly applied, with legal reasoning [19, 44, 54] a major area of application. In [19] the authors explore how schemes can be used together to reason about a domain. Their domain is German family law, and, unlike in our work, there is no specialisation of the schemes in order to map the domain. In [44] the authors outline a formal account of reasoning with legal cases in terms of argumentation schemes.

Other work applying argumentation schemes includes [47, 49], where a set of specialised schemes is proposed and implemented to support reasoning about the most suitable statistical model to apply, given the data and the research objective. Argument mining [31] and e-participation systems [6] have also seen applications of argumentation schemes, and there is previous work on argumentation schemes for clinical decision making. This includes the work on CARREL+ [52, 53], which leverages argumentation by proposing specialisations of generic schemes in order to support deliberation on organ transplant allocation. DRAMA [4] applies the Argument Scheme for Practical Reasoning to disparate knowledge bases in order to assist in deciding which treatments are applicable for a specific patient. There is some additional similarity between DRAMA and our approach, as both use argumentation schemes and knowledge bases, and aim to recommend treatments. In contrast to our approach, DRAMA does not specialise the scheme, propose argument schemes for the critical questions, or propose multiple schemes. ArguEIRA [21], an argumentation-based hypothesis generation platform, also uses argumentation schemes in decision support, aiming to highlight anomalous reactions to medications in the Intensive Care Unit of a hospital. ArguEIRA makes use of argumentation schemes as reasoning templates derived from ICU scenarios, but does not propose the use of critical questions.

Also worth noting is [32], which provides an implementation of schemes that does include the use of critical questions. Here the knowledge in clinical guidelines is modelled as schemes in an argumentation framework. The system provides the physician with an overview of the evidence supporting a dementia diagnosis in a patient case interpreted within different guideline contexts, represented as a set of argument schemes. This approach makes use of a specialised set of critical questions for the scheme; in contrast, we specialise a set of schemes as well as the critical questions for each scheme. Finally, the application of argumentation schemes in clinical decision making is considered in [1], in which the schemes are used to model existing conversations that are part of clinical discussions amongst multidisciplinary teams.

Turning to previous work on argumentation-based (but not argumentation scheme-based) medical decision making, we find systems such as StAR [15], which reasoned about the carcinogenicity of chemicals based on their molecular structure, and demonstrated the value of argumentation when reasoning with different types of evidence. RAGs [7] was designed to help GPs with initial genetic counselling for patients with specific conditions. Based on patient characteristics and family history, RAGs determines a patient’s risk by aggregating the pro and con arguments. REACT [18] uses argumentation to enable patients and clinicians to map out a treatment plan and simulate the implications of different interventions. In REACT, information is processed as logical arguments for or against a plan, allowing for both qualitative and quantitative inputs. In contrast to our proposed approach, StAR, RAGs and REACT do not make use of argumentation schemes, and do not utilise Dung’s argumentation semantics [12] to support decision making.

Another related approach is the line of work documented in [9–11, 38], in which argumentation is used to support treatment recommendations and explainability. This line of work can be seen to begin in [38], which suggests, much as we do in [28], that argumentation’s ability to resolve conflicts makes it suitable for handling clashing ideas about treatment. The major difference between these two approaches is that [28] suggests a meta-level approach to resolving conflicts (allowing argumentation about which arguments for treatment should be preferred), whereas [38] resolves conflicts on the basis of which goals are associated with the arguments for treatment. [38] also proposes a version of ASPIC+ [37], called ASPIC-G, which extends ASPIC+ with the representation of goals and preferences between them. Next, [10] proposes combining ASPIC-G with assumption-based argumentation (ABA) [13], while [9] shows how the TMR (transition-based medical recommendation) model [59] and patient-specific data from the electronic health record (EHR) may be used in conjunction with ABA to reason with the applicable clinical guidelines. These different strands are then brought together in [11], which shows how ABA, ABA-G (which is to ABA as ASPIC-G is to ASPIC+) and TMR can be combined to provide a means of identifying and resolving conflicting guidelines as they apply to an individual patient.

A different but related challenge to reasoning about the most suitable treatment from existing clinical guidelines is reasoning about the merits of different treatments for a specific condition, when no guidelines are applicable and such knowledge is instead drawn from research (clinical trials, case studies) rather than official guidelines. To support the extraction of, and reasoning with, such evidence, [20] and [24] propose an approach that generates arguments from the evidence available about a specific condition (clinical trials, case studies and clinical guidelines) and introduce preferences into the process to take into account the relative benefits of the treatments being considered. This is done through the use of preference-based argumentation frameworks (PAFs) [2], and by defining preference relations over sets of benefits (from the different treatment options). In the context of primary care in the UK (for example, for blood pressure reduction), GPs rely on NICE guidelines. The approach outlined in [24] would in that context be best suited to refining, or informing the expansion or development of, guidelines, rather than being used alongside existing guidelines by GPs.

Finally, we consider [33], which compares the application of a machine learning (ML) model with the use of argumentation on the breast cancer recurrence problem. ML can be considered an alternative approach to reasoning about treatments given specific patient data, with the aim of providing a recommendation. However, unlike argumentation, ML methods do not typically provide the rationale for a given recommendation [5]. [33] shows that argumentation and ML have similar predictive power, and demonstrates the advantages that argumentation provides when it comes to explaining a prediction.

6.Further work

During the modelling process we identified some reasoning steps that are not part of the core decision making we consider in this paper, such as reasoning with stakeholder preferences, values and trust. As part of our ongoing work, we will consider improving the reasoning process by incorporating stakeholder (patient and clinician) preferences, the effect of trust in findings and observations (for example, if these are taken with a smartphone), and the use of case studies when proposing actions outside the remit of the NICE guidelines. In particular, in Section 3.4 we mentioned work orthogonal to the present paper in which we explore the integration of metalevel reasoning about preferences over arguments [27, 28]. A key direction for future work is to specify argument schemes and critical questions that can be instantiated to define such meta-arguments (cf. the attack schemes identified in [26]). Secondly, in Section 5 we mentioned the substantial body of work on argumentation-based medical decision support developed by Fox et al. [7, 15, 18]. Rather than adopting Dung’s theory when evaluating arguments, these approaches rely on the aggregation of arguments and the weighing up of arguments for and against a decision option (typically referred to as ‘accrual’). Accrual of arguments within the Dung paradigm has been addressed by a number of works (notably [23, 35, 43]), and while challenges remain, accounting for accrual in our Dung-based approach will be an important topic going forward.

There are two further steps in the evaluation of the work in this paper. The first focuses on validating the reasoning process that results from the application of the specialised schemes. To do so, we will compare the system’s recommendations (using the specialised schemes and a suitable knowledge base) with the treatments that GPs would recommend given the same patient information and guidelines. This evaluative study is currently underway. The study presents 10 GPs with a set of scenarios in the form of clinical vignettes, all in the hypertension domain. Each vignette includes information about a patient in the form of facts such as their blood pressure, age, past treatments and/or adverse reactions. Each GP is given the same set of vignettes and, in a one-to-one interview, explains what treatment they would recommend and why. The findings from these interviews will enable an initial comparison of the reasoning GPs apply in practice with the reasoning this set of argument schemes supports, and will enable us to ascertain whether additional schemes and/or critical questions are required.

The second step in the evaluation of this work will explore qualitative measures of the benefits provided by argumentation-based recommendations that use specialised schemes. A similar evaluation was carried out in [21] with a small group of clinicians in secondary care. In our case this can be assessed by sharing the textual and graphical output resulting from our approach with a focus group of GPs, in order to gauge if and how the approach can support their decision-making process.

7.Summary

This paper has introduced a comprehensive set of argumentation schemes for clinical decision making. These schemes were designed in consultation with a General Practitioner, and the case studies used in this development are representative of commonly occurring clinical scenarios. This gives us confidence that the set of schemes both accurately reflects common patterns of reasoning used by clinicians and is applicable in everyday medical decision making. However, it is still necessary to evaluate the schemes on a larger set of scenarios and to elicit feedback from a larger group of General Practitioners, and this evaluation is currently being planned.

This work was carried out as part of a project with the aim of establishing the feasibility of an argumentation-based mobile decision support system that would allow patients to self-manage their treatments. This project has built a system that uses the implementation described here along with electronic health record data, and patient-collected sensor and mood data. The system is being piloted in a user study involving stroke survivors that is separate from the evaluation of the schemes mentioned above.

Notes

1 We used the case studies described in [25, 26, 28], and involved the GP who was working with us in reviewing the results of applying the schemes to the case studies, ensuring that the results were correct. That process gives us confidence that the schemes are an accurate summary of our domain expert’s approach to decision making. To fully evaluate the work, we need to test the reasoning possible using the argument schemes with a larger set of experts.

3 The creation of this new goal is denoted in Fig. 2 by the dotted line from ASnegSE.CQ1 to the “new goal” diamond. This goal can then (given a suitable knowledge base (KB) extracted from the respective guidelines) be used to instantiate ASPT with the new goal.

Acknowledgements

This research was partially funded by the UK Engineering & Physical Sciences Research Council (EPSRC) under grant #EP/P010105/1. The authors are grateful for the help and advice from Dr Mark Ashworth, who provided invaluable input on the clinical aspects of this work.

References

[1] M. Al Qassas, D. Fogli, M. Giacomin and G. Guida, Analysis of clinical discussions based on argumentation schemes, Procedia Computer Science 64 (2015), 282–289. doi:10.1016/j.procs.2015.08.491.

[2] L. Amgoud and C. Cayrol, A reasoning model based on the production of acceptable arguments, Annals of Mathematics and Artificial Intelligence 34(3) (2002), 197–215. doi:10.1023/A:1014490210693.

[3] K. Atkinson and T. Bench-Capon, Practical reasoning as presumptive argumentation using action based alternating transition systems, Artificial Intelligence 171(10–15) (2007), 855–874. doi:10.1016/j.artint.2007.04.009.

[4] K. Atkinson, T. Bench-Capon and S. Modgil, Argumentation for decision support, in: Database and Expert Systems Applications, Springer, 2006, pp. 822–831. doi:10.1007/11827405_80.

[5] O. Biran and C. Cotton, Explanation and justification in machine learning: A survey, in: IJCAI-17 Workshop on Explainable AI (XAI), Vol. 8, 2017, pp. 8–13.

[6] D. Cartwright and K. Atkinson, Using computational argumentation to support e-participation, IEEE Intelligent Systems 24(5) (2009), 42–52. doi:10.1109/MIS.2009.104.

[7] A. Coulson, D. Glasspool, J. Fox and J. Emery, RAGs: A novel approach to computerized genetic risk assessment and decision support from pedigrees, Methods of Information in Medicine 40(4) (2001), 315–322. doi:10.1055/s-0038-1634427.

[8] L.E. Craig, L. Churilov, L. Olenko, D.A. Cadilhac, R. Grimley, S. Dale, C. Martinez-Garduno, E. McInnes, J. Considine, J.M. Grimshaw et al., Testing a systematic approach to identify and prioritise barriers to successful implementation of a complex healthcare intervention, BMC Medical Research Methodology 17(1) (2017), 24. doi:10.1186/s12874-017-0298-4.

[9] K. Cyras, B. Delaney, D. Prociuk, F. Toni, M. Chapman, J. Dominguez and V. Curcin, Argumentation for explainable reasoning with conflicting medical recommendations, in: Proceedings of the Workshop on Reasoning with Ambiguous and Conflicting Evidence and Recommendations in Medicine, Tempe, AZ, 2018.

[10] K. Cyras and T. Oliveira, Argumentation for reasoning with conflicting clinical guidelines and preferences, in: Sixteenth International Conference on Principles of Knowledge Representation and Reasoning, 2018.

[11] K. Čyras and T. Oliveira, Resolving conflicts in clinical guidelines using argumentation, in: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2019.

[12] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77(2) (1995), 321–357. doi:10.1016/0004-3702(94)00041-X.

[13] P.M. Dung, R.A. Kowalski and F. Toni, Assumption-based argumentation, in: Argumentation in Artificial Intelligence, Springer, 2009, pp. 199–218. doi:10.1007/978-0-387-98197-0_10.

[14] U. Egly, S.A. Gaggl and S. Woltran, ASPARTIX: Implementing argumentation frameworks using answer-set programming, in: Logic Programming, 2008, pp. 734–738. doi:10.1007/978-3-540-89982-2_67.

[15] J. Fox, D. Glasspool, D. Grecu, S. Modgil, M. South and V. Patkar, Argumentation-based inference and decision making – A medical perspective, IEEE Intelligent Systems 22(6) (2007), 34–41. doi:10.1109/MIS.2007.102.

[16] J. Fox, D. Glasspool, V. Patkar, M. Austin, L. Black, M. South, D. Robertson and C. Vincent, Delivering clinical decision support services: There is nothing as practical as a good theory, Journal of Biomedical Informatics 43(5) (2010), 831–843. doi:10.1016/j.jbi.2010.06.002.

[17] A.J. García and G. Simari, Defeasible logic programming: An argumentative approach, Theory and Practice of Logic Programming 4(1) (2004), 95–138. doi:10.1017/S1471068403001674.

[18] D.W. Glasspool, J. Fox, F.D. Castillo and V.E. Monaghan, Interactive decision support for medical planning, in: Conference on Artificial Intelligence in Medicine in Europe, Springer, 2003, pp. 335–339. doi:10.1007/978-3-540-39907-0_45.

[19] T.F. Gordon and D. Walton, Legal reasoning with argumentation schemes, in: Proceedings of the 12th International Conference on Artificial Intelligence and Law, ACM, 2009, pp. 137–146. doi:10.1145/1568234.1568250.

[20] N. Gorogiannis, A. Hunter and M. Williams, An argument-based approach to reasoning with clinical knowledge, International Journal of Approximate Reasoning 51(1) (2009), 1–22. doi:10.1016/j.ijar.2009.06.015.

[21] M.A. Grando, L. Moss, D. Sleeman and J. Kinsella, Argumentation-logic for creating and explaining medical hypotheses, Artificial Intelligence in Medicine 58(1) (2013), 1–13. doi:10.1016/j.artmed.2013.02.003.

[22] R.A. Greenes, D.W. Bates, K. Kawamoto, B. Middleton, J. Osheroff and Y. Shahar, Clinical decision support models and frameworks: Seeking to address research issues underlying implementation successes and failures, Journal of Biomedical Informatics 78 (2018), 134–143. doi:10.1016/j.jbi.2017.12.005.

[23] D. Grossi and S. Modgil, On the graded acceptability of arguments in abstract and instantiated argumentation, Artificial Intelligence 275 (2019), 138–173. doi:10.1016/j.artint.2019.05.001.

[24] A. Hunter and M. Williams, Aggregating evidence about the positive and negative effects of treatments, Artificial Intelligence in Medicine 56(3) (2012), 173–190. doi:10.1016/j.artmed.2012.09.004.

[25] N. Kökciyan, M. Chapman, P. Balatsoukas, I. Sassoon, K. Essers, M. Ashworth, V. Curcin, S. Modgil, S. Parsons and E. Sklar, A collaborative decision support tool for managing chronic conditions, in: MEDINFO 2019: Health and Wellbeing e-Networks for All, Vol. 264, 2019, pp. 644–648.

[26] N. Kökciyan, S. Parsons, I. Sassoon, E. Sklar and S. Modgil, An argumentation-based approach to generate domain-specific explanations, in: European Conference on Multiagent Systems (EUMAS), Springer International Publishing, 2020, pp. 319–337.

[27] N. Kökciyan, I. Sassoon, E. Sklar, S. Modgil and S. Parsons, Applying metalevel argumentation frameworks to support medical decision making, IEEE Intelligent Systems 36(2) (2021), 64–71. doi:10.1109/MIS.2021.3051420.

[28] N. Kökciyan, I. Sassoon, A. Young, M. Chapman, T. Porat, M. Ashworth, V. Curcin, S. Modgil, S. Parsons and E. Sklar, Towards an argumentation system for supporting patients in self-managing their chronic conditions, in: AAAI Joint Workshop on Health Intelligence, 2018.

[29] N. Kökciyan, N. Yaglikci and P. Yolum, An argumentation approach for resolving privacy disputes in online social networks, ACM Transactions on Internet Technology 17(3) (2017), 27:1–27:22.

[30] O. Kostopoulou, T. Porat, D. Corrigan, S. Mahmoud and B.C. Delaney, Diagnostic accuracy of GPs when using an early-intervention decision support system: A high-fidelity simulation, British Journal of General Practice 67(656) (2017), e201–e208. doi:10.3399/bjgp16X688417.

[31] J. Lawrence and C. Reed, Argument mining using argumentation scheme structures, in: Proceedings of the Conference on Computational Models of Argument, 2016, pp. 379–390.

[32] H. Lindgren, Towards using argumentation schemes and critical questions for supporting diagnostic reasoning in the dementia domain, in: Proc. Computational Models of Natural Arguments (CMNA ’09), Pasadena, CA, 2009, pp. 10–14.

[33] L. Longo, B. Kane and L. Hederman, Argumentation theory in health care, in: International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2012, pp. 1–6.

[34] S. Modgil, Reasoning about preferences in argumentation frameworks, Artificial Intelligence 173(9–10) (2009), 901–1040. doi:10.1016/j.artint.2009.02.001.

[35] S. Modgil and T. Bench-Capon, Integrating dialectical and accrual modes of argumentation, in: Proceedings of the 3rd Conference on Computational Models of Argument, 2010, pp. 335–346.

[36] S. Modgil and T. Bench-Capon, Metalevel argumentation, Journal of Logic and Computation 21(6) (2011), 959–1003. doi:10.1093/logcom/exq054.

[37] S. Modgil and H. Prakken, A general account of argumentation with preferences, Artificial Intelligence 195 (2013), 361–397. doi:10.1016/j.artint.2012.10.008.

[38] T. Oliveira, J. Dauphin, K. Satoh, S. Tsumoto and P. Novais, Argumentation with goals for clinical decision support in multimorbidity, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018.

[39] T. Porat, B. Delaney and O. Kostopoulou, The impact of a diagnostic decision support system on the consultation: Perceptions of GPs and patients, BMC Medical Informatics and Decision Making 17(1) (2017), 79. doi:10.1186/s12911-017-0477-6.

[40] T. Porat, N. Kökciyan, I. Sassoon, A. Young, M. Chapman, M. Ashworth, S. Modgil, S. Parsons, E. Sklar and V. Curcin, Stakeholders’ views on a collaborative decision support system to promote multimorbidity self-management: Barriers, facilitators and design implications, in: American Medical Informatics Association Symposium, Vol. 6, 2018.

[41] T. Porat, O. Kostopoulou, A. Woolley and B.C. Delaney, Eliciting user decision requirements for designing computerized diagnostic support for family physicians, Journal of Cognitive Engineering and Decision Making 10(1) (2016), 57–73. doi:10.1177/1555343415608973.

[42] H. Prakken, On the nature of argument schemes, in: Dialectics, Dialogue and Argumentation. An Examination of Douglas Walton’s Theories of Reasoning and Argument, 2010, pp. 167–185.

[43] H. Prakken, Modelling accrual of arguments in ASPIC+, in: Proceedings of the 17th International Conference on Artificial Intelligence and Law, 2019, pp. 103–112.

[44] H. Prakken, A. Wyner, T. Bench-Capon and K. Atkinson, A formalization of argumentation schemes for legal case-based reasoning in ASPIC+, Journal of Logic and Computation 25(5) (2015), 1141–1166. doi:10.1093/logcom/ext010.

[45] I. Rahwan and G.R. Simari, Argumentation in Artificial Intelligence, Vol. 47, Springer, 2009.

[46] M.J. Rigby, Ethical dimensions of using artificial intelligence in health care, AMA Journal of Ethics 21(2) (2019), 121–124. doi:10.1001/amajethics.2019.121.

[47] I. Sassoon, Argumentation for statistical model selection, PhD thesis, King’s College London, 2018.

[48] I. Sassoon, N. Kökciyan, E. Sklar and S. Parsons, Explainable argumentation for wellness consultation, in: Explainable, Transparent Autonomous Agents and Multi-Agent Systems, D. Calvaresi, A. Najjar, M. Schumacher and K. Främling, eds, Springer International Publishing, Cham, 2019, pp. 186–202. doi:10.1007/978-3-030-30391-4_11.

[49] I. Sassoon, S. Zillessen, J. Keppens and P. McBurney, A formalisation and prototype implementation of argumentation for statistical model selection, Argument & Computation 10(1) (2018), 83–103. doi:10.3233/AAC-181002.

[50] R. Shibl, M. Lawley and J. Debuse, Factors influencing decision support system acceptance, Decision Support Systems 54(2) (2013), 953–961. doi:10.1016/j.dss.2012.09.018.

[51] E. Sklar, S. Parsons, Z. Li, J. Salvit, H. Wall and J. Mangels, Evaluation of a trust-modulated argumentation-based interactive decision-making tool, Autonomous Agents and Multi-Agent Systems 30(1) (2016), 136–173. doi:10.1007/s10458-015-9289-1.

[52] P. Tolchinsky, U. Cortes, S. Modgil, F. Caballero and A. Lopez-Navidad, Increasing human-organ transplant availability: Argumentation-based agent deliberation, IEEE Intelligent Systems 21(6) (2006), 30–37. doi:10.1109/MIS.2006.116.

[53] P. Tolchinsky, S. Modgil, K. Atkinson, P. McBurney and U. Cortés, Deliberation dialogues for reasoning about safety critical actions, Autonomous Agents and Multi-Agent Systems 25(2) (2012), 209–259. doi:10.1007/s10458-011-9174-5.

[54] B. Verheij, Dialectical argumentation with argumentation schemes: An approach to legal logic, Artificial Intelligence and Law 11(2) (2003), 167–195. doi:10.1023/B:ARTI.0000046008.49443.36.

[55] J.H. Wagemans, The assessment of argumentation based on abduction, in: Proceedings of the Ontario Society for the Study of Argumentation Conference, Vol. 10, 2013.

[56] D. Walton, Justification of argumentation schemes, The Australasian Journal of Logic 3 (2005), 1–13. doi:10.26686/ajl.v3i0.1769.

[57] D. Walton, C. Reed and F. Macagno, Argumentation Schemes, Cambridge University Press, 2008.

[58] D.N. Walton, Argumentation Schemes for Presumptive Reasoning, Psychology Press, 1996.

[59] V. Zamborlini, M. Da Silveira, C. Pruski, A. ten Teije, E. Geleijn, M. van der Leeden, M. Stuiver and F. van Harmelen, Analyzing interactions on combining multiple clinical guidelines, Artificial Intelligence in Medicine 81 (2017), 78–93. doi:10.1016/j.artmed.2017.03.012.