
Abstract argumentation with conditional preferences

Abstract

In this paper, we study conditional preferences in abstract argumentation by introducing a new generalization of Dung-style argumentation frameworks (AFs) called Conditional Preference-based AFs (CPAFs). Each subset of arguments in a CPAF can be associated with its own preference relation. This generalizes existing approaches for preference-handling in abstract argumentation, and allows us to reason about conditional preferences in a general way. We conduct a principle-based analysis of CPAFs and compare them to related generalizations of AFs. Specifically, we highlight similarities and differences to Modgil’s Extended AFs and show that our formalism can capture Value-based AFs. Moreover, we show that in some cases the introduction of conditional preferences leads to an increase in computational complexity.

1. Introduction

Preferences are of great importance in computer science and artificial intelligence [16,25,39], where research areas such as recommender systems [18], computational social choice [31], and non-monotonic reasoning [15,40] are concerned with the representation and processing of preferences. Moreover, many situations require the use of conditional preferences, where a choice between two options (e.g. whether to drink tea or coffee) is dependent on other factors (e.g. the time of day). This has led to the introduction of formalisms such as CP-nets [19], which are explicitly defined to deal with conditional preferences.

In formal argumentation theory, preferences have been studied from various points of view, be it in terms of argument strength [3,4,6,32,33], preferences between values [8,12], or weighted arguments/attacks [17]. Despite this, conditional preferences have received only limited attention in the field of argumentation. Dung et al. investigated conditional preferences in the setting of structured argumentation [27]. There, argumentation frameworks (AFs) are built from defeasible knowledge bases containing preference rules of the form a1, …, an ⇒ d0 ≻ d1, where d0 and d1 are defeasible rules. Similarly, there is only one recent paper we are aware of that deals with conditional preferences on the abstract level [2]. This is in contrast to unconditional preferences, which are extensively studied both in structured [36–38] and abstract [1,8,13,17,32] argumentation in the literature.

Outside of argumentation, conditional preferences appear in many situations and formalisms. One example for this are CP-nets [19], which use graphs for preference representation. Another instance is logic programming, where conditional preferences may occur in the head of rules [21,23,24] or as dedicated preference statements [20]. To demonstrate the importance of conditional preferences in common reasoning tasks, we now adapt an example given by Dung et al. [27]:

Example 1.

Sherlock Holmes is investigating a murder. There are two suspects, Person 1 and Person 2. After analyzing the crime scene, Sherlock is sure:

  • I1: Person 1 or Person 2 is the culprit, but not both.

Moreover, Sherlock adheres to the following rules:
  • R1: If Person i has a motive but Person j, with j ≠ i, does not, then this supports the case that Person i is the culprit.

  • R2: If Person i has an alibi but Person j, with j ≠ i, does not, then this supports the case that Person j is the culprit.

  • R3: Alibis have more importance than motives.

After interrogating the suspects, Sherlock concludes that:
  • C1: Person 1 has a motive but Person 2 does not.

  • C2: Person 1 has an alibi but Person 2 does not.

If C1 is trusted, but C2 is not, then this supports that Person 1 is the culprit. If C2 is trusted then this supports that Person 2 is the culprit, regardless of our stance on C1.

In this paper, we aim to capture conditional preferences in argumentation on the abstract level rather than the structured level. Doing so will generalize existing formalisms for unconditional preferences in abstract argumentation and provide a more direct target formalism for structured approaches. To this end, we introduce Conditional Preference-based AFs (CPAFs), where each subset of arguments S can be associated with its own preference relation ≻S. Preferences are then resolved via so-called preference-reductions [32], which modify the attack relation based on the given preferences. As a consequence, S must be justified in view of its own preferences, i.e., S must be an extension in view of ≻S. We investigate the following topics relevant to CPAFs:

  • We show that CPAFs generalize Preference-based AFs (PAFs), and demonstrate that they are capable of dealing with conditional preferences in a general manner.

  • We conduct a principle-based analysis of CPAF-semantics and show that especially complete and stable semantics preserve properties that hold on PAFs. This analysis is helpful when aiming to understand the behavior of CPAF-semantics in a general manner, and lets us pinpoint differences to AFs/PAFs formally.

  • We analyze the computational complexity of CPAFs in detail, showing that for some semantics (naive, complete, grounded, preferred) the introduction of conditional preferences can cause a rise in complexity compared to AFs. This gives insights into the expressiveness of CPAFs, and differentiates them further from AFs/PAFs.

  • Lastly, we compare CPAFs to related formalisms. Specifically, we show that CPAFs can capture other generalizations of AFs such as Value-based AFs (VAFs) [8,12] in a straightforward way, and compare CPAFs to Extended AFs (EAFs) [10,28,34] in order to highlight similarities and differences. Moreover, we discuss a recently introduced alternative approach to conditional preferences in abstract argumentation [2] and compare it to our CPAFs.

The remainder of this paper is structured as follows: Section 2 covers the necessary preliminaries on abstract argumentation. In Section 3 we introduce CPAFs and investigate them with respect to some basic properties. Section 4 contains our principle-based analysis, and in Section 5 we analyze the computational complexity of CPAFs. We discuss related formalisms in Section 6 and conclude in Section 7.

This paper is an extended version of [14]. In Section 4 we now generalize and investigate all ten principles of Kaci et al. [32] instead of just the initial six [33]. The complexity analysis featured in Section 5 is entirely new. In Section 6.1 we now provide a second translation from VAFs to CPAFs. The comparison of our formalism with lifting-based CPAFs in Section 6.3 is new as well. Moreover, this version contains additional figures and explanations to improve readability.

2. Preliminaries

We first define (abstract) argumentation frameworks [26].

Definition 1.

An argumentation framework (AF) is a tuple F = (A, R) where A is a finite set of arguments and R ⊆ A × A is an attack relation between arguments. Let S ⊆ A. We say S attacks b (in F) if (a, b) ∈ R for some a ∈ S; S_F^+ = {b ∈ A | ∃a ∈ S: (a, b) ∈ R} denotes the set of arguments attacked by S. An argument a ∈ A is defended (in F) by S if b ∈ S_F^+ for each b with (b, a) ∈ R.

Semantics for AFs are defined as functions σ which assign to each AF F = (A, R) a set σ(F) ⊆ 2^A of extensions [9]. We consider for σ the functions cf (conflict-free), naive, adm (admissible), com (complete), grd (grounded), prf (preferred), and stb (stable).

Definition 2.

Let F = (A, R) be an AF. A set S ⊆ A is conflict-free (in F), written as S ∈ cf(F), if there are no a, b ∈ S such that (a, b) ∈ R. For S ∈ cf(F) it holds that

  • S ∈ naive(F) iff there is no T ∈ cf(F) with S ⊂ T;

  • S ∈ adm(F) iff each a ∈ S is defended by S in F;

  • S ∈ com(F) iff S ∈ adm(F) and each a ∈ A defended by S in F is contained in S;

  • S ∈ grd(F) iff S ∈ com(F) and there is no T ∈ com(F) with T ⊂ S;

  • S ∈ prf(F) iff S ∈ adm(F) and there is no T ∈ adm(F) with S ⊂ T;

  • S ∈ stb(F) iff each a ∈ A∖S is attacked by S in F.
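For readers who prefer executable definitions, the following Python sketch (our own illustration, not part of the formal development; all function names are hypothetical) implements the checks of Definitions 1 and 2 for small AFs.

```python
# Illustrative sketch of Definitions 1 and 2 (not from the paper).
# An AF is a pair (A, R): A a set of arguments, R a set of attack pairs.

def attacks(R, S, b):
    """True if some argument in S attacks b."""
    return any((a, b) in R for a in S)

def conflict_free(A, R, S):
    return not any((a, b) in R for a in S for b in S)

def defended(A, R, S, a):
    """a is defended by S: every attacker of a is attacked by S."""
    return all(attacks(R, S, b) for (b, t) in R if t == a)

def admissible(A, R, S):
    return conflict_free(A, R, S) and all(defended(A, R, S, a) for a in S)

def complete(A, R, S):
    return admissible(A, R, S) and all(a in S for a in A if defended(A, R, S, a))

def stable(A, R, S):
    return conflict_free(A, R, S) and all(attacks(R, S, a) for a in A - S)

# Mutual attack between a and b: both {"a"} and {"b"} are stable.
A, R = {"a", "b"}, {("a", "b"), ("b", "a")}
print(stable(A, R, {"a"}))  # True
```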

The computational complexity of AFs has been extensively studied in the literature [29], with the three central problems being those of credulous acceptance, skeptical acceptance, and verification.

Definition 3.

Given an AF-semantics σ we define the following decision problems:

  • Credulous Acceptance (CredσAF): given an AF F and an argument x, is x ∈ S for some S ∈ σ(F)?

  • Skeptical Acceptance (SkeptσAF): given an AF F and an argument x, is x ∈ S for all S ∈ σ(F)?

  • Verification (VerσAF): given an AF F and a set of arguments S, is S ∈ σ(F)?

Table 1

Complexity of AFs [29]

σ | CredσAF | SkeptσAF | VerσAF
cf | in P | trivial | in P
naive | in P | in P | in P
adm | NP-c | trivial | in P
com | NP-c | P-c | in P
grd | P-c | P-c | P-c
stb | NP-c | coNP-c | in P
prf | NP-c | Π2P-c | coNP-c

Table 1 shows the complexity of these problems [29]. We assume familiarity with the complexity classes P, NP, and coNP. Furthermore, the class Σ2P contains exactly those problems that can be solved in nondeterministic polynomial time with access to an NP-oracle, and Π2P contains the complementary problems of Σ2P [7].

Preference-based AFs enrich standard AFs with preferences between arguments [3,4,6,32,33].

Definition 4.

A preference-based AF (PAF) is a triple F = (A, R, ≻) where (A, R) is an AF and ≻ is an asymmetric binary relation over A.

If a and b are arguments and a ≻ b holds then we say that a is stronger than b. An established method of resolving preferences in PAFs are so-called preference reductions, of which there exist four in the literature [32]. If in a PAF (A, R, ≻) there is an attack (a, b) ∈ R and a preference b ≻ a then (a, b) is called a critical attack. In other words, critical attacks are from weak to strong arguments. The preference-reductions deal with these critical attacks, e.g., by removing or reverting them.

Definition 5.

Given a PAF F = (A, R, ≻), a corresponding AF Ri(F) = (A, R′) is constructed via Reduction i, where i ∈ {1, 2, 3, 4}, as follows:

  • i = 1: ∀a, b ∈ A: (a, b) ∈ R′ ⇔ (a, b) ∈ R, b ⊁ a

  • i = 2: ∀a, b ∈ A: (a, b) ∈ R′ ⇔ ((a, b) ∈ R, b ⊁ a) or ((b, a) ∈ R, (a, b) ∉ R, a ≻ b)

  • i = 3: ∀a, b ∈ A: (a, b) ∈ R′ ⇔ ((a, b) ∈ R, b ⊁ a) or ((a, b) ∈ R, (b, a) ∉ R)

  • i = 4: ∀a, b ∈ A: (a, b) ∈ R′ ⇔ ((a, b) ∈ R, b ⊁ a) or ((b, a) ∈ R, (a, b) ∉ R, a ≻ b) or ((a, b) ∈ R, (b, a) ∉ R)

For each AF-semantics σ we define the PAF-semantics σpi as follows: σpi(F) = σ(Ri(F)).

Intuitively, Reduction 1 removes critical attacks while Reduction 2 reverts them. Reduction 3 removes critical attacks, but only if the stronger argument also attacks the weaker one. Reduction 4 can be seen as a combination of Reduction 2 and 3: if the weaker argument attacks the stronger argument, but not vice versa, then we retain the critical attack (as in Reduction 3) but also add a reverse attack (as in Reduction 2). Note that on symmetric attacks, all four reductions function in the same way. The following example demonstrates the reductions and PAF-semantics.
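The case distinctions of Definition 5 translate directly into code. The sketch below (again our own illustration; `reduce_paf` is a hypothetical helper name) computes the attack relation of Ri(F) and reproduces the PAF of the upcoming Example 2.

```python
# Illustrative sketch of the four preference reductions (Definition 5).
# pref is a set of pairs (x, y) meaning x ≻ y.

def reduce_paf(A, R, pref, i):
    """Return the attack relation R' of the AF Ri(F) for a PAF F = (A, R, pref)."""
    def stronger(x, y):  # x ≻ y
        return (x, y) in pref
    Rp = set()
    for (a, b) in ((a, b) for a in A for b in A):
        normal = (a, b) in R and not stronger(b, a)                   # non-critical attack
        reverse = (b, a) in R and (a, b) not in R and stronger(a, b)  # reversed critical attack
        keep_asym = (a, b) in R and (b, a) not in R                   # keep asymmetric attacks
        if (i == 1 and normal) or \
           (i == 2 and (normal or reverse)) or \
           (i == 3 and (normal or keep_asym)) or \
           (i == 4 and (normal or reverse or keep_asym)):
            Rp.add((a, b))
    return Rp

# The PAF of Example 2: b ≻ a and b ≻ c.
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("c", "b")}
pref = {("b", "a"), ("b", "c")}
print(sorted(reduce_paf(A, R, pref, 1)))  # [('b', 'a')]
print(sorted(reduce_paf(A, R, pref, 2)))  # [('b', 'a'), ('b', 'c')]
```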

Example 2.

Consider the PAF F = ({a, b, c}, {(a, b), (b, a), (c, b)}, ≻) with b ≻ a and b ≻ c. Figure 1 depicts F as well as Ri(F), i ∈ {1, 2, 3, 4}. It can be checked that, for Reduction 1, admp1(F) = adm(R1(F)) = {∅, {b}, {c}, {b, c}} and therefore comp1(F) = prfp1(F) = stbp1(F) = {{b, c}}. If we use Reduction 2, for example, we get different extensions, namely admp2(F) = {∅, {b}} and comp2(F) = prfp2(F) = stbp2(F) = {{b}}.

Fig. 1. PAF F and its preference reducts from Example 2.

PAFs have the same complexity as AFs with respect to the decision problems of Definition 3: hardness results follow from the fact that PAFs generalize AFs, and membership results from the fact that the four preference reductions can be carried out in polynomial time.

A principle-based analysis of the four preference reductions was conducted for complete, grounded, preferred, and stable semantics [32,33]. To this end, ten PAF-properties were laid out and investigated. We now recall them in Definitions 6, 7, 9, 11 according to [32].

Definition 6.

Let σpi be a PAF-semantics. Let ≻, ≻′ ⊆ (A × A) such that ≻ ∪ ≻′ is asymmetric.

  • P1 (conflict-freeness): If (x, y) ∈ R there is no S ∈ σpi(A, R, ≻) such that {x, y} ⊆ S.

  • P2 (preference selects extensions 1): σpi(A, R, ≻ ∪ ≻′) ⊆ σpi(A, R, ≻).

  • P3 (preference selects extensions 2): σpi(A, R, ≻) ⊆ σpi(A, R, ∅).

Intuitively, P1 states that if there is an attack between two arguments, then there is no extension containing both of them. P2 expresses that adding more preferences to a PAF can exclude extensions, but not introduce them. P3 states that this is also true if we add preferences to a framework without any preferences, i.e., P3 is a special case of P2.

Definition 7.

Let σpi be a PAF-semantics. Let ≻, ≻′ ⊆ (A × A) such that ≻ ∪ ≻′ is asymmetric.

  • P4 (extension refinement): for all S ∈ σpi(A, R, ≻ ∪ ≻′) there is S′ ∈ σpi(A, R, ≻) such that S′ ⊆ S.

  • P5 (extension growth): ⋂(σpi(A, R, ≻)) ⊆ ⋂(σpi(A, R, ≻ ∪ ≻′)).

  • P6 (number of extensions): |σpi(A, R, ≻ ∪ ≻′)| ≤ |σpi(A, R, ≻)|.

P4 states that adding preferences means extensions will be supersets of extensions in the original PAF. P5 says that adding preferences will preserve skeptically accepted arguments, and might cause new arguments to be skeptically accepted. P6 expresses that the number of extensions will not grow if new preferences are added.

For the next two principles, we need to define the notion of an argument’s status.

Definition 8.

Let F = (A, R, ≻) be a PAF and x ∈ A. We write

  • status(x, F) = sk-cr iff x is skeptically and credulously accepted in F;

  • status(x, F) = cr iff x is credulously but not skeptically accepted in F;

  • status(x, F) = rej iff status(x, F) ∉ {sk-cr, cr}.

We define the order over these statuses as follows: sk-cr > cr > rej.

Note that under stable semantics an argument is not always credulously accepted if it is skeptically accepted, since there are AFs without stable extensions. Thus, some argument x might be skeptically accepted with respect to stable semantics, yet we still might have status(x, F) = rej.

Definition 9.

Let σpi be a PAF-semantics.

  • P7 (status conservation): status(x, (A, R, ≻ ∪ {(x, y)})) ≥ status(x, (A, R, ≻)).

  • P8 (preference-based immunity): if (x, x) ∉ R and x ≻ y for all y ∈ A∖{x} then status(x, (A, R, ≻)) ≠ rej.

If a semantics satisfies P7 then the status of an argument x can not be lowered by adding a preference x ≻ y where x is the preferred (stronger) argument. P8 states that if an argument x is not self-attacking and also stronger than all other arguments, then x can not be rejected.

For principles P9 and P10 we need the concept of paths between two arguments, by which we mean a path in the underlying undirected graph of a PAF.

Definition 10.

Let F = (A, R, ≻) be a PAF. Let R⁻ = {(x, y) | (y, x) ∈ R}. There is a path between x ∈ A and y ∈ A iff there is a sequence of arguments z1, …, zn ∈ A such that z1 = x, zn = y, and (zk, zk+1) ∈ R ∪ R⁻ for all 1 ≤ k < n.

Definition 11.

Let σpi be a PAF-semantics.

  • P9 (path preference influence 1): if there is no path from x ∈ A to y ∈ A in (A, R, ≻) then σpi(A, R, ≻) = σpi(A, R, ≻ ∪ {(x, y)}).

  • P10 (path preference influence 2): if (x, y) ∉ R and (y, x) ∉ R then σpi(A, R, ≻) = σpi(A, R, ≻ ∪ {(x, y)}).

If P9 is satisfied then adding a preference between two arguments x and y that do not occur in the same weakly connected component does not change the extensions of a PAF. P10 is similar to P9, but only requires that there is no direct connection between x and y.

Table 2

Satisfaction of various PAF-principles [32,33]. C stands for complete, G for grounded, P for preferred, and S for stable. × indicates that none of those four semantics satisfy this principle

Principle | R1 | R2 | R3 | R4
P1 (conflict-freeness) | × | CGPS | CGPS | CGPS
P2 (preference selects extensions) | × | × | CS | ×
P3 (preference selects extensions 2) | × | × | CS | ×
P4 (extension refinement) | × | × | CGS | ×
P5 (extension growth) | × | × | CG | ×
P6 (number of extensions) | G | G | CGPS | G
P7 (status conservation) | CGPS | CGPS | CGPS | CGPS
P8 (preference-based immunity) | CGP | CGP | × | CPS
P9 (path preference influence 1) | CGPS | CGPS | CGPS | CGPS
P10 (path preference influence 2) | CGPS | CGPS | CGPS | CGPS

Table 2 shows which semantics satisfy which principle, as investigated in [32,33].

3. Conditional preference-based argumentation frameworks

As argued in the introduction, our aim is to provide a framework for reasoning with conditional preferences in abstract argumentation. This means that arguments themselves must be capable of expressing preferences, and that those argument-bound preferences are relevant only if the corresponding arguments are themselves accepted. How this is implemented must be considered carefully, as Example 1 demonstrates. There, the fact that Person 1 has a motive (let us refer to this as m1) and the fact that Person 1 has an alibi (a1) result in opposing preferences. When accepting both m1 and a1 it seems natural to combine these opposing preferences, i.e., to cancel them out. But this does not allow us to express that alibis are more important than motives, as required in Example 1. Therefore, we need to define our formalism in such a general way that the joint acceptance of arguments does not necessarily result in the combination of their associated preferences. We solve this by mapping each subset S of arguments to a separate preference relation ≻S.

Definition 12.

A Conditional PAF (CPAF) is a triple F = (A, R, cond), where (A, R) is an AF and cond: 2^A → 2^(A×A) is a function that maps each set of arguments S ⊆ A to an irreflexive and asymmetric binary relation ≻S over A.

We set no restriction on how exactly conditional preferences are represented. This is deliberate, as we wish to stay as general as possible. In practice, succinct representations could be achieved, e.g., by expressing the cond-function via rules of the form φ ⇒ x ≻ y where φ is a propositional formula over the arguments. Indeed, we will use this representation in Section 5 where we analyze the complexity of CPAFs.

Just as in PAFs, preferences in CPAFs are resolved with the help of the four preference-reductions (cf. Definition 5). A set of arguments S is an extension of some CPAF if it is an extension relative to its associated preference relation cond(S).

Definition 13.

Let F = (A, R, cond) be a CPAF and let S ⊆ A. The S-reduct of F with respect to a preference reduction i ∈ {1, 2, 3, 4} is defined as RiS(F) = Ri(A, R, cond(S)). Given an AF-semantics σ we define the CPAF-semantics σcpi as follows: S ∈ σcpi(F) iff S ∈ σ(RiS(F)).
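By Definition 13, checking whether S is a σcpi-extension amounts to computing the S-reduct and then running an ordinary AF check on it. A minimal sketch, reusing the hypothetical helpers from the earlier sketches (`reduce_paf`, `admissible`, `stable`, …):

```python
# Illustrative sketch of Definition 13: S ∈ σcpi(F) iff S ∈ σ(RiS(F)).
# cond maps a frozenset of arguments to a set of preference pairs.

def s_reduct(A, R, cond, S, i):
    """Attack relation of the S-reduct RiS(F) = Ri(A, R, cond(S))."""
    return reduce_paf(A, R, cond(frozenset(S)), i)

def in_cpaf_semantics(A, R, cond, S, i, sigma):
    """sigma is an AF check such as admissible or stable from the earlier sketch."""
    return sigma(A, s_reduct(A, R, cond, S, i), set(S))
```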

Using CPAFs we can easily formalize our Sherlock Holmes example.

Example 3.

We continue Example 1 and introduce two arguments c1 and c2 expressing that Person 1 (resp. Person 2) is the culprit. Moreover, we introduce m1 and a1 to express that Person 1 has a motive (resp. alibi) but Person 2 does not. c1 and c2 attack each other while m1 and a1 have no incoming or outgoing attacks, but rather express preferences. Formally, we model this via the CPAF F = ({c1, c2, m1, a1}, {(c1, c2), (c2, c1)}, cond) with cond such that c1 ≻S c2 iff m1 ∈ S but a1 ∉ S, c2 ≻S c1 iff a1 ∈ S, and cond(S) = ∅ for all other S ⊆ A. Figure 2 depicts F and Fig. 3 shows the S-reducts of F. Note that m1 and a1 are unattacked in all S-reducts of F. Therefore, both arguments must be part of any σcpi-extension for σ ∈ {grd, com, prf, stb} and we can conclude that σcpi(F) = {{m1, a1, c2}}.

Fig. 2. The CPAF F from Example 3.

Fig. 3. The preference-reducts of the CPAF F from Fig. 2/Example 3.
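As a usage example, Example 3 can be encoded and checked directly with the sketches above (our own illustrative encoding of the cond-function):

```python
# Illustrative encoding of Example 3 (Sherlock Holmes).
A = {"c1", "c2", "m1", "a1"}
R = {("c1", "c2"), ("c2", "c1")}

def cond(S):
    """c1 ≻S c2 iff m1 ∈ S but a1 ∉ S; c2 ≻S c1 iff a1 ∈ S; empty otherwise."""
    if "a1" in S:
        return {("c2", "c1")}
    if "m1" in S:
        return {("c1", "c2")}
    return set()

# {m1, a1, c2} is stable in its own S-reduct (here under Reduction 2):
print(in_cpaf_semantics(A, R, cond, {"m1", "a1", "c2"}, 2, stable))  # True
```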

Note that, according to Definition 13, preferred semantics do not maximize over all admissible sets of a CPAF, but rather over all admissible sets in the given S-reduct. This means that if there is a set S that is admissible in the S-reduct of F, but there is also some T ⊃ S that is admissible in the S-reduct of F, then S is not preferred in the S-reduct of F (and therefore S ∉ prfcpi(F)). But this T does not have to be admissible in F, since it might not be admissible in the T-reduct of F. The situation is analogous for naive semantics. The following alternative semantics may be considered more natural:

Definition 14.

Let F = (A, R, cond) be a CPAF and let S ⊆ A. Then

  • S ∈ naive-glbcpi(F) iff S ∈ cfcpi(F) and there is no T such that S ⊂ T and T ∈ cfcpi(F);

  • S ∈ prf-glbcpi(F) iff S ∈ admcpi(F) and there is no T such that S ⊂ T and T ∈ admcpi(F).

Intuitively, naive-glbcpi and prf-glbcpi maximize globally over all conflict-free (resp. admissible) sets of a CPAF, while naivecpi and prfcpi maximize locally over the conflict-free (resp. admissible) sets of the given S-reduct.
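A sketch of the globally maximizing variant, again building on the helpers above (exhaustive enumeration of subsets, so only meant for illustration on small CPAFs):

```python
# Illustrative sketch of Definition 14: prf-glb maximizes over all of admcpi(F).
from itertools import chain, combinations

def all_subsets(A):
    return (frozenset(c) for c in chain.from_iterable(
        combinations(sorted(A), k) for k in range(len(A) + 1)))

def prf_glb(A, R, cond, i):
    """Subset-maximal elements of admcpi(F), maximized over the whole CPAF."""
    adm = [S for S in all_subsets(A) if in_cpaf_semantics(A, R, cond, S, i, admissible)]
    return [S for S in adm if not any(S < T for T in adm)]

# On Example 3 this yields {m1, a1, c2} and {m1, c1}, matching Example 4.
```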

Example 4.

Let F be the CPAF from Example 3 and recall that prfcpi(F)={{m1,a1,c2}}. Observe that {m1,c1} is not preferred in the {m1,c1}-reduct of F, but it is a subset-maximal admissible set in F. Thus, prf-glbcpi(F)={{m1,a1,c2},{m1,c1}}.

The difference between local and global maximization is not only philosophical, but impacts fundamental properties for maximization-based semantics such as I-maximality [11]. A semantics σcpi is I-maximal if and only if S ⊆ T implies S = T for all CPAFs F and all S, T ∈ σcpi(F).

Proposition 1.

prf-glbcpi is I-maximal, but prfcpi is not, where i ∈ {1, 2, 3, 4}.

Proof.

I-maximality of prf-glbcpi follows from Definition 14. Regarding counter examples for prfcpi we consider the preference-reductions separately. Reduction 1: consider the CPAF depicted in Fig. 4a, i.e., F = ({a, b}, {(a, b)}, cond) with cond such that b ≻{a,b} a. Then {a} ∈ prfcp1(F) and {a, b} ∈ prfcp1(F). Reductions 2 and 4: consider the CPAF depicted in Fig. 4b, i.e., F = ({a, b, c}, {(a, b), (b, c), (c, a)}, cond) with cond such that a ≻{a} c. Then ∅ ∈ prfcpi(F) and {a} ∈ prfcpi(F). Reduction 3: consider the CPAF depicted in Fig. 4c, i.e., F = ({a, b, c}, {(a, b), (b, a), (b, c), (c, a)}, cond) with cond such that a ≻∅ b. Then ∅ ∈ prfcp3(F) and {b} ∈ prfcp3(F). □

Fig. 4. Counterexamples for I-maximality (cf. Propositions 1, 2, 3).

One may be tempted to deduce from the above proposition that prf-glbcpi is more suitable as a default preferred semantics than prfcpi. However, we will see in Section 6.1 that prfcpi allows us to capture the problems of subjective/objective acceptance in VAFs in a natural way. In our subsequent analysis of CPAFs we consider both local and global subset maximization. Like preferred semantics, naive and stable semantics satisfy I-maximality on AFs. Interestingly, on CPAFs, this depends on the preference-reduction.

Proposition 2.

naive-glbcpi is I-maximal for i ∈ {1, 2, 3, 4}. Moreover, naivecpj is I-maximal for j ∈ {2, 3, 4} but not for j = 1.

Proof.

I-maximality of naive-glbcpi follows from Definition 14. I-maximality of naivecpj with j ∈ {2, 3, 4} follows from the fact that Reductions 2, 3, and 4 do not remove conflicts between arguments, and therefore conflict-free sets are the same across all S-reducts. For naivecp1 we can use the same counter-example as for prfcp1 (cf. Proposition 1 and Fig. 4a). □

Proposition 3.

stbcpj is I-maximal for j ∈ {2, 3, 4} but not for j = 1.

Proof.

For stbcp1 we can use the same counter-example as for prfcp1 (cf. Proposition 1 and Fig. 4a). For stbcpj with j ∈ {2, 3, 4} we proceed by contradiction: assume there is a CPAF F = (A, R, cond) with S, T ∈ stbcpj(F) such that S ⊂ T. Then there is x ∈ T such that x ∉ S. Since S ∈ stbcpj(F) there is y ∈ S such that (y, x) ∈ RjS(F). Reductions 2, 3, and 4 do not remove conflicts between arguments, and thus either (y, x) ∈ R or (x, y) ∈ R. Therefore, (y, x) ∈ RjT(F) or (x, y) ∈ RjT(F). But y ∈ S implies y ∈ T, i.e., T ∉ cfcpj(F). □

A further well-known property of AFs is that if an argument set S is stable in a framework F, then S is also preferred in F [26]. The same is true for CPAFs, with the exception of preferred semantics utilizing global maximization and Reduction 1.

Proposition 4.

If S ∈ stbcpi(F) then S ∈ prfcpi(F) for i ∈ {1, 2, 3, 4}. Moreover, if S ∈ stbcpj(F) then S ∈ prf-glbcpj(F) for j ∈ {2, 3, 4}. However, S ∈ stbcp1(F) does not necessarily imply S ∈ prf-glbcp1(F).

Proof.

Let F = (A, R, cond) be a CPAF, and let S ∈ stbcpi(F), where i ∈ {1, 2, 3, 4}. Then S ∈ stb(RiS(F)). Since RiS(F) is an AF this implies S ∈ prf(RiS(F)), which means that S ∈ prfcpi(F).

Now let j ∈ {2, 3, 4}. If S ∈ stbcpj(F) then every argument in RjS(F) is either in S or attacked by it. Towards a contradiction, assume there is T ∈ admcpj(F) such that T ⊃ S. Then there is some x ∈ T∖S. Since S attacks x in RjS(F) there is a conflict between some y ∈ S and x in the underlying AF (A, R) of F. Note that y ∈ T. But Reductions 2, 3, 4 can not remove conflicts between arguments, i.e., T ∉ cf(RjT(F)). Contradiction.

For prf-glbcp1, let F be the CPAF from Fig. 4a. Then {a} ∈ stbcp1(F) but {a} ∉ prf-glbcp1(F). □

Another interesting point is that grounded extensions are not necessarily unique in CPAFs: consider F = ({a, b}, {(a, b)}, cond) with cond such that b ≻{b} a. Then {a} ∈ grdcp2(F) and {b} ∈ grdcp2(F). We stress that each grounded extension S is still unique in the S-reduct of the given CPAF and thus unique with respect to its own preferences.

Lastly, the following proposition expresses that every CPAF-semantics considered here generalizes its corresponding PAF-semantics, i.e., that CPAFs generalize PAFs.

Proposition 5.

Let F = (A, R, cond) be a CPAF such that the preference function cond maps every set of arguments to the same binary relation, i.e., there is some ≻ such that cond(S) = ≻ for all S ⊆ A. Let σ ∈ {cf, naive, adm, com, grd, prf, stb}. Then σcpi(F) = σpi(A, R, ≻). Furthermore, naive-glbcpi(F) = naivepi(A, R, ≻) and prf-glbcpi(F) = prfpi(A, R, ≻).

4. Principle-based analysis

Principles play an important role in argumentation theory, as they allow us to examine the vast amount of semantics defined for AFs in a general way [11,30,41]. In this section, we generalize the principles of Kaci et al. [32] for PAFs (cf. Definitions 6, 7, 9, 11) to account for conditional preferences. We then investigate by which semantics these principles are satisfied, and show that there are differences to PAFs.

In the case of PAFs, adding more preferences to a framework (A, R, ≻) means that we now deal with the PAF (A, R, ≻ ∪ ≻′). In the case of CPAFs, if we want (A, R, cond′) to have at least the same preferences as (A, R, cond), we must require that cond(S) ⊆ cond′(S) for all S ⊆ A. However, if we only want to add a single preference x ≻ y to a CPAF, then we add x ≻S y for only a single subset S and leave the preferences associated with other subsets unchanged. Given the above considerations, generalizing the PAF-principles to CPAF-principles is quite straightforward. The notions of an argument's status and paths between two arguments in a CPAF are defined analogously to PAFs (cf. Definitions 8, 10), e.g., status(x, F) = cr iff x is contained in some but not all extensions of the CPAF F.
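In code terms (continuing our earlier hypothetical helpers), adding a single preference x ≻ y for one specific subset S0, as described above and formalized as cond(x,y) in the upcoming Definition 15, can be expressed as a wrapper around cond:

```python
# Illustrative sketch: a cond-function that adds the single preference x ≻ y
# for the subset S0 and leaves all other subsets unchanged.
def add_single_preference(cond, S0, x, y):
    S0 = frozenset(S0)
    def cond_xy(S):
        prefs = set(cond(S))
        if frozenset(S) == S0:
            prefs.add((x, y))
        return prefs
    return cond_xy
```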

Definition 15.

Let σcpi be a CPAF-semantics. In the following, given a CPAF (A, R, cond), we denote by cond′ an arbitrary function such that cond(S) ⊆ cond′(S) for all S ⊆ A. Moreover, cond(x,y) is the same as cond except that for some S ⊆ A we have (x, y) ∈ cond(x,y)(S) but (x, y), (y, x) ∉ cond(S). Lastly, cond∅ denotes the function with cond∅(S) = ∅ for all S ⊆ A.

  • P1 (conflict-freeness): If (x, y) ∈ R there is no S ∈ σcpi(A, R, cond) such that {x, y} ⊆ S.

  • P2 (preference selects extensions): σcpi(A, R, cond′) ⊆ σcpi(A, R, cond).

  • P3 (preference selects extensions 2): σcpi(A, R, cond) ⊆ σcpi(A, R, cond∅).

  • P4 (extension refinement): for all S ∈ σcpi(A, R, cond′) there is S′ ∈ σcpi(A, R, cond) s.t. S′ ⊆ S.

  • P5 (extension growth): ⋂(σcpi(A, R, cond)) ⊆ ⋂(σcpi(A, R, cond′)).

  • P6 (number of extensions): |σcpi(A, R, cond′)| ≤ |σcpi(A, R, cond)|.

  • P7 (status conservation): status(x, (A, R, cond(x,y))) ≥ status(x, (A, R, cond)).

  • P8 (preference-based immunity): if (x, x) ∉ R and if cond is defined such that for all S ⊆ A and all y ∈ A∖{x} we have x ≻S y, then status(x, (A, R, cond)) ≠ rej.

  • P9 (path preference influence 1): if there is no path from x ∈ A to y ∈ A in (A, R, cond) then σcpi(A, R, cond) = σcpi(A, R, cond(x,y)).

  • P10 (path preference influence 2): if (x, y) ∉ R and (y, x) ∉ R then σcpi(A, R, cond) = σcpi(A, R, cond(x,y)).

The following lemma establishes some relationships between the CPAF-principles and is a generalization of known relationships between PAF-principles [32].

Lemma 6.

If σcpi satisfies P2 then it also satisfies P3, P4, and P6. If σcpi always returns at least one extension, and if it satisfies P2, then it also satisfies P5.

Proof.

For P3, P4, and P6 this is easy to see. For P5 we argue this in detail: by contrapositive, let σcpi be a semantics that always returns at least one extension, but does not satisfy P5. Thus, there must be A, R, cond, cond′ with cond(S) ⊆ cond′(S) for all S ⊆ A, such that ⋂(σcpi(A, R, cond)) ⊈ ⋂(σcpi(A, R, cond′)). Then there is x ∈ A such that x ∈ ⋂(σcpi(A, R, cond)) but x ∉ ⋂(σcpi(A, R, cond′)), i.e., there is E ⊆ A such that x ∉ E and E ∈ σcpi(A, R, cond′). Of course, E ∉ σcpi(A, R, cond), otherwise x ∉ ⋂(σcpi(A, R, cond)). But then σcpi(A, R, cond′) ⊈ σcpi(A, R, cond), i.e., σcpi does not satisfy P2. □

Observe that, since CPAFs are a generalization of PAFs (cf. Proposition 5), a CPAF-semantics σcpi can not satisfy Pj if the corresponding PAF-semantics σpi does not satisfy Pj. Moreover, it is obvious that P1 is still satisfied under Reductions 2, 3, and 4, as conflicts are not removed by these reductions even if we consider conditional preferences. We can also show that satisfaction of P2 carries over from PAFs to CPAFs.

Lemma 7.

If σpi satisfies P2 then σcpi satisfies P2.

Proof.

By contrapositive, assume σcpi does not satisfy P2. Then there is a CPAF F = (A, R, cond) and cond′ with cond(S) ⊆ cond′(S) for all S ⊆ A such that σcpi(A, R, cond′) ⊈ σcpi(A, R, cond). Thus, there is E ⊆ A such that E ∈ σcpi(A, R, cond′) but E ∉ σcpi(A, R, cond). Then E ∈ σ(Ri(A, R, cond′(E))) but E ∉ σ(Ri(A, R, cond(E))), i.e., σpi does not satisfy P2. □

Lemma 7 implies that complete and stable semantics satisfy P2 on CPAFs under Reduction 3. By Lemma 6 these semantics also satisfy P3, P4, and P6. However, we can not use Lemma 6 to show that complete semantics satisfy P5, since conditional preferences allow for frameworks without complete extensions. Indeed, we can find a counter-example in this case. Counterexamples for the satisfaction of various principles can also be found for grounded semantics, both variants of the preferred semantics, and even stable semantics in the case of P8.

Lemma 8.

The following holds:

  • grdcpi, where i ∈ {1, 2, 3, 4}, does not satisfy any of P4, P5, or P6;

  • comcp3 does not satisfy P5;

    Fig. 5. Counterexamples used in Lemma 8.

  • prfcp3 and prf-glbcp3 do not satisfy P6;

  • σcpi, where σ ∈ {com, grd, prf, stb} and i ∈ {1, 2, 3, 4}, does not satisfy P8.

Proof.

We provide counterexamples for all cases.

  • For grdcpi and P4, P5, P6, consider A = {a, b}, R = {(a, b), (b, a)}, and cond/cond′ as shown in Fig. 5a. Then grdcpi(A, R, cond) = {{a}} while grdcpi(A, R, cond′) = {{a}, {b}}.

  • For comcp3 and P5, consider A = {a, b}, R = {(a, b), (b, a)}, and cond/cond′ as shown in Fig. 5b. Then comcp3(A, R, cond) = {{a}} while comcp3(A, R, cond′) = ∅.

  • For prfcp3, prf-glbcp3 and P6, consider A = {a, b, c, d}, R = {(a, c), (c, a), (b, d), (d, b), (c, c), (d, d)}, and cond/cond′ as shown in Fig. 5c. Then prfcp3(A, R, cond) = prf-glbcp3(A, R, cond) = {{a, b}} while prfcp3(A, R, cond′) = prf-glbcp3(A, R, cond′) = {{a}, {b}}.

  • Regarding P8, consider the CPAF F = (A, R, cond) shown in Fig. 5d. Note that (a, a) ∉ R and a ≻S y for all S ⊆ A and all y ∈ A∖{a}. Observe that b is unattacked in Ri{a}(F). Thus, {a} ∉ σ(Ri{a}(F)) for σ ∈ {com, grd, prf, stb}. Moreover, b is not defended against c in Ri{a,b}(F). Analogously for c in Ri{a,c}(F). Thus, status(a, F) = rej. □

We have now fully investigated the first six principles. It remains to examine principles 7-10, of which we so far only know that P8 is not satisfied by most semantics. It turns out that P8 is retained when using preferred semantics with global maximization.

Lemma 9.

prf-glbcpi satisfies P8 for i ∈ {1, 2, 4}.

Proof.

Let F = (A, R, cond) be a CPAF containing an argument x ∈ A such that (x, x) ∉ R and for all S ⊆ A and all y ∈ A∖{x} we have x ≻S y. Specifically, this means that x ≻{x} y for all y ∈ A∖{x}. Then, by definition of Reduction i ∈ {1, 2, 4}, x defends itself against all attacks in Ri{x}(F). Thus, {x} ∈ admcpi(F). By this and the definition of prf-glb, there is some E ∈ prf-glbcpi(F) such that x ∈ E. □

Now we turn our attention to P7, where, analogously to P2 (cf. Lemma 7), it turns out that satisfaction carries over from PAFs to CPAFs.

Lemma 10.

If σpi satisfies P7 then σcpi satisfies P7.

Proof.

By contrapositive, assume σcpi does not satisfy P7. Then there is a CPAF F = (A, R, cond) such that status(x, (A, R, cond(x,y))) < status(x, (A, R, cond)). This means there is some S ⊆ A∖{x} for which S ∉ σcpi(A, R, cond) but S ∈ σcpi(A, R, cond(x,y)). By the definition of CPAF-semantics this means that S ∉ σpi(A, R, cond(S)) but S ∈ σpi(A, R, cond(S) ∪ {(x, y)}), i.e., σpi does not satisfy P7. □

However, unlike in the case of P2, the above lemma does not constitute an exhaustive investigation of P7. The reason for this is that P7, in contrast to P2, is satisfied by preferred semantics on PAFs (cf. Table 2). Lemma 10 only allows us to conclude that prfcpi satisfies P7, but it says nothing about the satisfaction of prf-glbcpi. We find that P7 is satisfied also when maximizing admissible sets globally.

Lemma 11.

prf-glbcpi satisfies P7.

Proof.

Let F = (A, R, cond) be an arbitrary CPAF and x ∈ A. Let F(x,y) = (A, R, cond(x,y)) as specified in Definition 15. There are three possible cases:

  • (1) status(x, F) = rej. Then status(x, F(x,y)) ≥ status(x, F) trivially holds.

  • (2) status(x, F) = cr. Then there is some S ∈ admcpi(F) with x ∈ S. We distinguish two cases:

    • (a) cond(S) = cond(x,y)(S). Then clearly S ∈ admcpi(F(x,y)).

    • (b) cond(S) ≠ cond(x,y)(S). Then cond(x,y) is the same as cond except that (x, y) ∈ cond(x,y)(S) but (x, y), (y, x) ∉ cond(S) for some y ∈ A∖{x}. Adding the preference x ≻ y via cond(x,y) does not introduce any new attacks against S in RiS(F(x,y)), no matter which of the preference reductions we consider. Thus, S ∈ admcpi(F(x,y)).

    In both cases we have S ∈ admcpi(F(x,y)) and therefore T ∈ prf-glbcpi(F(x,y)) for some T ⊇ S. Thus, status(x, F(x,y)) ≥ cr.

  • (3) status(x, F) = sk-cr. By the same line of reasoning as in case (2) we have status(x, F(x,y)) ≥ cr. Towards a contradiction, assume that status(x, F(x,y)) ≠ sk-cr. Then there is some S ∈ prf-glbcpi(F(x,y)) such that x ∉ S. Since status(x, F) = sk-cr we know that S ∉ prf-glbcpi(F). One of the following must be the case:

    • (a) S ∉ cfcpi(F). Since S ∈ prf-glbcpi(F(x,y)) it must be that S ∈ cfcpi(F(x,y)). Then it must be that the additional preference in cond(x,y) removes a conflict between two arguments in S. But this preference is x ≻ y for some y ∈ A∖{x}. Thus, x ∈ S. Contradiction.

    • (b) S ∈ cfcpi(F) but S ∉ admcpi(F). Since S ∈ prf-glbcpi(F(x,y)) it must be that S ∈ admcpi(F(x,y)). However, the additional preference x ≻ y added via cond(x,y) at most adds an attack (x, y). Since x ∉ S this means that S is still not defended in RiS(F(x,y)). Contradiction.

    • (c) S ∈ admcpi(F) but there is T ⊃ S such that T ∈ prf-glbcpi(F). Since status(x, F) = sk-cr we know that x ∈ T. Since T ∈ prf-glbcpi(F) we have T ∈ admcpi(F). By the same line of reasoning as in case (2) we can conclude that T ∈ admcpi(F(x,y)) and therefore S ∉ prf-glbcpi(F(x,y)). Contradiction.

    In all three cases we arrive at a contradiction. Thus, status(x, F(x,y)) = sk-cr. □

Lastly, we must consider principles 9 and 10. Below, we show that their satisfaction carries over from PAFs to CPAFs under all considered semantics.

Lemma 12.

σcpi satisfies P9 and P10 for σ ∈ {com, grd, prf, prf-glb, stb}.

Proof.

Let F = (A, R, cond) be an arbitrary CPAF and x ∈ A. Let F(x,y) = (A, R, cond(x,y)) as specified in Definition 15. Note that the premise of P9 (there is no path from x ∈ A to y ∈ A) implies the premise of P10 ((x, y) ∉ R and (y, x) ∉ R). If (x, y) ∉ R and (y, x) ∉ R then the additional preference x ≻ y added via cond(x,y) does not delete or add any attacks, regardless of which preference reduction we consider. This means that RiS(F) = RiS(F(x,y)) for all S ⊆ A and all i ∈ {1, 2, 3, 4} and therefore σcpi(A, R, cond) = σcpi(A, R, cond(x,y)) for all i ∈ {1, 2, 3, 4}. □

Table 3

Satisfaction of CPAF-principles. C stands for complete, G for grounded, P for preferred (both prfcpi and prf-glbcpi), and S for stable. Pg indicates that prf-glbcpi satisfies the principle but prfcpi does not. If a cell is marked with × then none of the investigated semantics satisfy this principle

Principle | R1 | R2 | R3 | R4
P1 (conflict-freeness) | × | CGPS | CGPS | CGPS
P2 (preference selects extensions) | × | × | CS | ×
P3 (preference selects extensions 2) | × | × | CS | ×
P4 (extension refinement) | × | × | CS | ×
P5 (extension growth) | × | × | × | ×
P6 (number of extensions) | × | × | CS | ×
P7 (status conservation) | CGPS | CGPS | CGPS | CGPS
P8 (preference-based immunity) | Pg | Pg | × | Pg
P9 (path preference influence 1) | CGPS | CGPS | CGPS | CGPS
P10 (path preference influence 2) | CGPS | CGPS | CGPS | CGPS

The above results constitute an exhaustive investigation of the ten CPAF-principles for all semantics considered in this paper. Thus, we can conclude:

Theorem 13.

The satisfaction of CPAF-principles depicted in Table 3 holds.

To summarize, complete and stable semantics preserve the satisfaction of PAF-principles in most cases. Grounded semantics no longer satisfies any of the principles 1-6 on CPAFs except P1 (conflict-freeness) since grounded extensions are not unique on CPAFs, and since there are even CPAFs without a grounded extension (cf. Lemma 8). Unlike on PAFs, complete semantics does not satisfy P5 (extension growth) under Reduction 3. Furthermore, neither variant of preferred semantics satisfies P6 (number of extensions) under Reduction 3. As for principles 7-10, we note that only P8 is no longer satisfied by all semantics.

5. Computational complexity

The computational complexity of Dung-style AFs and various generalizations thereof has received considerable attention in the literature [29]. Indeed, complexity results give insights into the expressiveness of specific argumentation formalisms and help to find appropriate methods for solving a given problem. As discussed in Section 2, AFs and PAFs have the same properties with regards to complexity, i.e., none of the four preference reductions result in a higher complexity when considering unconditional preferences in the setting of Dung-AFs. The situation is not as clear when dealing with conditional preferences. As we have seen in previous sections, CPAFs do not necessarily have unique grounded extensions, there are CPAFs without any complete extensions, and there is more than one way of dealing with subset maximization (recall the naive/naive-glb and prf/prf-glb semantics). In this section, we show that these differences between CPAFs and AFs/PAFs have an impact on complexity.

We define Verσ,iCPAF, Credσ,iCPAF, and Skeptσ,iCPAF analogously to VerσAF, CredσAF, and SkeptσAF (cf. Definition 3), with the difference that the framework in question is now a CPAF instead of an AF and that we appeal to the σcpi semantics of Definitions 13 and 14 rather than the AF-semantics of Definition 2:

Definition 16.

Given a CPAF-semantics σcpi we define the following decision problems:

  • Credulous Acceptance (Credσ,iCPAF): given a CPAF F and an argument x, is x ∈ S for some S ∈ σcpi(F)?

  • Skeptical Acceptance (Skeptσ,iCPAF): given a CPAF F and an argument x, is x ∈ S for all S ∈ σcpi(F)?

  • Verification (Verσ,iCPAF): given a CPAF F and a set of arguments S, is S ∈ σcpi(F)?

In the interest of generality, we did not impose a specific method to represent conditional preferences in the previous sections. However, when analyzing the computational complexity of CPAFs, it is necessary to decide on more specific representations if tight bounds are to be found. Therefore, we will assume conditional preferences to be expressed succinctly as arbitrary propositional formulas. Note that if preferences were stored explicitly for each possible set of arguments, the input size of our problems would always be exponentially larger than the underlying AF itself, and thus some decision problems for CPAFs would be in lower complexity classes than their counterparts for AFs.

Specifically, given the set of arguments A in a framework F = (A, R, cond) we allow a finite number of rules φ ⇒ x ≻ y where x, y ∈ A and φ is a propositional formula built from atoms in A and the usual connectives (¬, ∧, ∨). As for the meaning of these rules, we define that for some S ⊆ A we have x ≻S y iff there is a rule φ ⇒ x ≻ y such that S ⊨ φ and there is no rule φ′ ⇒ y ≻ x such that S ⊨ φ′.¹ Observe that, given S ⊆ A, it is possible to compute ≻S in polynomial time with respect to the size of the given framework F since S ⊨ φ can be decided in polynomial time for each rule φ ⇒ x ≻ y.²
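The following sketch illustrates this rule-based representation. For simplicity, the propositional formula of a rule is modelled as a Boolean function over the set of accepted arguments rather than as parsed syntax; this is our own simplification, not part of the formal definition.

```python
# Illustrative sketch: rule-based conditional preferences (Section 5).
# A rule is a pair (phi, (x, y)) standing for "phi ⇒ x ≻ y".

def pref_relation(rules, S):
    """Compute ≻S: x ≻S y iff some rule phi ⇒ x ≻ y fires under S
    and no rule phi' ⇒ y ≻ x fires under S."""
    fired = {(x, y) for (phi, (x, y)) in rules if phi(S)}
    return {(x, y) for (x, y) in fired if (y, x) not in fired}

# Example 3 expressed with two rules:
rules = [
    (lambda S: "m1" in S and "a1" not in S, ("c1", "c2")),
    (lambda S: "a1" in S, ("c2", "c1")),
]
print(pref_relation(rules, {"m1", "a1", "c2"}))  # {('c2', 'c1')}
```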

Table 4

Complexity of CPAFs with conditional preferences represented via finitely many rules of the form φ ⇒ x ≻ y. Entries of the form "X / Y" give the complexity under Reduction 1 and under Reductions 2–4, respectively; a single entry applies to all four reductions. Entries marked with (*) indicate a rise in complexity compared to AFs

σ | Credσ,iCPAF | Skeptσ,iCPAF | Verσ,iCPAF
cf | in P | trivial | in P
naive | NP-c (*) / in P | coNP-c (*) / in P | in P
naive-glb | in P | Π2P-c (*) / in P | coNP-c (*) / in P
adm | NP-c | trivial | in P
com | NP-c | coNP-c (*) | in P
grd | NP-c (*) | coNP-c (*) | P-c
stb | NP-c | coNP-c | in P
prf | Σ2P-c (*) | Π2P-c | coNP-c
prf-glb | NP-c | Π2P-c | coNP-c

Our complexity results are summarized in Table 4. Note that problems for naive/naive-glb semantics become harder only under Reduction 1. Intuitively, this is because Reduction 1 can remove conflicts between arguments altogether, unlike Reductions 2-4. Observe that naive-glb under Reduction 1 is the only semantics for which the verification problem becomes harder (coNP-complete) compared to AFs (in P). As a result, skeptical acceptance for naive-glb is Π2P-complete, i.e., the complexity rises by two levels in the polynomial hierarchy compared to the case of AFs. For complete semantics, skeptical acceptance is now coNP-complete regardless of which preference reduction is used. With respect to grounded semantics we see an increase in complexity for both credulous acceptance (NP-complete) and skeptical acceptance (coNP-complete). Lastly, for preferred semantics with local maximization, credulous acceptance rises by one level in the polynomial hierarchy compared to AFs.

Theorem 14.

The complexity results for CPAFs depicted in Table 4 hold.

The remainder of this section is dedicated to proving Theorem 14. We first consider the verification problem, for which most semantics have the same complexity as in the case of AFs.

Lemma 15.

Verσ,iCPAF, where i ∈ {1, 2, 3, 4}, has the same complexity properties with regards to membership and hardness as VerσAF for σ ∈ {cf, naive, adm, com, grd, stb, prf}. Moreover, Verprf-glb,iCPAF is coNP-complete for i ∈ {1, 2, 3, 4}.

Proof.

Hardness follows from the fact that CPAFs are a generalization of AFs. Membership for σ ∈ {cf, naive, adm, com, grd, stb, prf}: given a CPAF F and a set of arguments S, we can determine ≻S and therefore also RiS(F) in polynomial time. It then suffices to check whether S ∈ σ(RiS(F)). Membership for σ = prf-glb: let (F, S) be an arbitrary instance of Verprf-glb,iCPAF, i.e., F = (A, R, cond) is a CPAF and S ⊆ A is a set of arguments. First, check in polynomial time whether S ∈ admcpi(F). Then, in coNP-time, check that for all T we have either T ⊅ S or T ∉ admcpi(F). □

For naive semantics with global maximization (naive-glb) we see a rise in complexity, but only when using Reduction 1. The following proof makes use of Reduction 1’s ability to delete conflicts between arguments. By Sat we denote the NP-complete satisfiability problem for propositional formulas, and by Unsat we denote its complementary problem which is coNP-complete.

Lemma 16.

Vernaive-glb,jCPAF is in P for j ∈ {2, 3, 4}. Vernaive-glb,1CPAF is coNP-complete.

Proof.

Let (F, S) be an arbitrary instance of Vernaive-glb,iCPAF, i.e., F = (A, R, cond) is a CPAF and S ⊆ A is a set of arguments. For Reductions 2, 3, and 4 it suffices to check whether S ∈ naive(A, R) since these reductions can not remove or add conflicts.

We now turn our attention to Reduction 1. coNP-membership: first check whether S ∈ cfcp1(F). Then, in coNP-time, check that for all T we have either T ⊅ S or T ∉ cfcp1(F). coNP-hardness by reduction from Unsat: let φ be an arbitrary propositional formula over variables X. Let a be a fresh variable, i.e., a ∉ X. We construct an instance (F, {a}) of Vernaive-glb,1CPAF as follows: F = (A, R, cond) with A = X ∪ {a}, R = {(x, a) | x ∈ X}, and cond defined by the rules φ ⇒ a ≻ x for x ∈ X, i.e., a ≻S x iff S ⊨ φ. Figure 6 depicts the above construction. We now show that φ is unsatisfiable iff {a} ∈ naive-glbcp1(F) (i.e., (F, {a}) is a yes-instance of Vernaive-glb,1CPAF).

  • Suppose φ is unsatisfiable. This means that for all x ∈ X and all S ⊆ A we have a ⊁S x, i.e., for each x ∈ X the attack (x, a) is present in R1S(F). Thus, there is no conflict-free set containing a other than {a}, which implies {a} ∈ naive-glbcp1(F).

  • Suppose φ is satisfiable. Then there is an interpretation I ⊆ X such that I ⊨ φ. We assume that I ≠ ∅. This is permissible since we can check in polynomial time whether ∅ satisfies φ, and if this is the case, return a trivial no-instance of Vernaive-glb,1CPAF. Consider S = I ∪ {a}. Since a does not appear in φ we have S ⊨ φ and therefore a ≻S x for all x ∈ X. Thus, S ∈ cfcp1(F) and, since {a} ⊂ S, {a} ∉ naive-glbcp1(F). □

Fig. 6. Construction used in the proof of Lemma 16. Given a formula φ over variables X = {x1, x2, …, xn}, a CPAF F is constructed such that φ is unsatisfiable iff {a} ∈ naive-glbcp1(F).

We now consider credulous and skeptical acceptance, starting with semantics based solely on conflict-freeness. Let us first cover the cases in which there is no rise in complexity.

Lemma 17.

Credσ,iCPAF is in P for σ ∈ {cf, naive-glb}, i ∈ {1, 2, 3, 4}. Crednaive,jCPAF is in P for j ∈ {2, 3, 4}.

Proof.

Let (F, x) be an instance of Credσ,iCPAF. For σ = cf it suffices to check whether x is self-attacking in the underlying AF of F, since self-attacks are not removed by any of the four reductions. For σ = naive-glb it suffices to test whether (F, x) is a yes-instance of Credcf,iCPAF. For σ = naive and Reductions 2, 3, and 4 it is enough to check whether x appears in a naive set of the underlying AF, since these reductions can not remove conflicts. □

Lemma 18.

Skeptcf,iCPAF is trivial for i ∈ {1, 2, 3, 4}. Skeptσ,jCPAF is in P for σ ∈ {naive, naive-glb} and j ∈ {2, 3, 4}.

Proof.

Let (F, x) be an arbitrary instance of Skeptcf,iCPAF, i.e., F = (A, R, cond) is a CPAF and x ∈ A is an argument. Note that ∅ is always conflict-free in Ri∅(F), i.e., (F, x) is trivially a no-instance. For σ ∈ {naive, naive-glb} and Reductions 2, 3, and 4 it is enough to solve the problem on the underlying AF, since these reductions can not remove conflicts. □

For naive semantics with local maximization the complexity rises by one level in the polynomial hierarchy under Reduction 1.

Lemma 19.

Crednaive,1CPAF is NP-complete and Skeptnaive,1CPAF is coNP-complete.

Proof.

We will consider the complementary problem of Skeptnaive,1CPAF and show that it is NP-complete since this allows us to prove both results simultaneously.

NP-Membership: given a CPAF F = (A, R, cond) and an argument x ∈ A, guess a set S ⊆ A and, in polynomial time, check whether S ∈ naivecp1(F) and x ∈ S (resp. S ∈ naivecp1(F) and x ∉ S).

NP-hardness by reduction from Sat: let φ be an arbitrary propositional formula over a set of variables X. Let a and b be fresh atoms, i.e., a, b ∉ X. We construct an instance (F, a) of Crednaive,1CPAF as follows: F = (A, R, cond) with A = X ∪ {x̄ | x ∈ X} ∪ {a, b}, R = {(x, x̄), (x̄, x) | x ∈ X} ∪ {(a, b)}, and cond defined by the rule ¬φ ∧ a ∧ ¬b ⇒ b ≻ a, i.e., b ≻S a iff S ⊨ ¬φ ∧ a ∧ ¬b. The above construction is visualized in Fig. 7. We show that φ is satisfiable iff (F, a) is a yes-instance of Crednaive,1CPAF iff (F, b) is a no-instance of Skeptnaive,1CPAF:

  • Assume φ is satisfiable. Then there is an interpretation I ⊆ X such that I ⊨ φ. Then also I ∪ {a} ⊨ φ and I ∪ {a} ⊭ ¬φ ∧ a ∧ ¬b. Let S = I ∪ {x̄ | x ∈ X∖I} ∪ {a}. Note that b ⊁S a, which means that a and b are in conflict in R1S(F). Furthermore, for all x ∈ X, we have either x ∈ S or x̄ ∈ S, but not both. Thus, S ∈ naive(R1S(F)). Note that a ∈ S but b ∉ S, i.e., (F, a) is a yes-instance of Crednaive,1CPAF but (F, b) is a no-instance of Skeptnaive,1CPAF.

    Fig. 7. Construction used in the proof of Lemma 19. Given a formula φ over variables X = {x1, x2, …, xn}, a CPAF F is constructed such that φ is satisfiable iff (F, a) is a yes-instance of Crednaive,1CPAF iff (F, b) is a no-instance of Skeptnaive,1CPAF.

  • Assume φ is unsatisfiable. Consider some S ⊆ A. If there is some x ∈ X such that neither x ∈ S nor x̄ ∈ S, then S ∉ naive(R1S(F)). Likewise, if there is some x ∈ X such that both x ∈ S and x̄ ∈ S then S ∉ cf(R1S(F)) and therefore S ∉ naive(R1S(F)). It remains to consider sets S in which for all x ∈ X we have either x ∈ S or x̄ ∈ S but not both. Given such a set, we consider four cases:

    • (1) a ∉ S and b ∉ S. Then S ∉ naive(R1S(F)) since S ∪ {a} ∈ cf(R1S(F)).

    • (2) a ∉ S and b ∈ S. Then S ⊭ ¬φ ∧ a ∧ ¬b, i.e., b ⊁S a. This means that the attack (a, b) is present in R1S(F). Note that every argument is either in S or in conflict with S in R1S(F). Moreover, S ∈ cf(R1S(F)). We can conclude that S ∈ naive(R1S(F)).

    • (3) a ∈ S and b ∉ S. Then S ⊨ ¬φ ∧ a ∧ ¬b, i.e., b ≻S a. This means that the attack (a, b) is deleted in R1S(F), which further implies that S ∪ {b} ∈ cf(R1S(F)). Thus, S ∉ naive(R1S(F)).

    • (4) a ∈ S and b ∈ S. Then S ⊭ ¬φ ∧ a ∧ ¬b, i.e., b ⊁S a. This means that the attack (a, b) is present in R1S(F). Thus, S ∉ cf(R1S(F)) and therefore S ∉ naive(R1S(F)).

    In conclusion, if S ∈ naivecp1(F) then a ∉ S and b ∈ S. Thus, (F, a) is a no-instance of Crednaive,1CPAF but (F, b) is a yes-instance of Skeptnaive,1CPAF. □

For naive semantics with global maximization, skeptical acceptance even rises by two levels in the polynomial hierarchy under Reduction 1. The reason for this is the increased complexity of the verification problem in this case (cf. Lemma 16). By QBF2 we denote the Π2P-complete problem of deciding whether a quantified Boolean formula of the form ∀Y∃Zφ, where φ is a formula over Y ∪ Z, is true.

Lemma 20.

Skeptnaive-glb,1CPAF is Π2P-complete.

Proof.

Σ2P-membership for the complementary problem of Skeptnaive-glb,1CPAF: given a CPAF F = (A, R, cond) and an argument x, guess a set S ⊆ A and check that x ∉ S and, in coNP-time, that S ∈ naive-glbcp1(F).

Π2P-hardness: let ∀Y∃Zφ be an arbitrary instance of QBF2 over variables Y = {y1, …, yn} and Z = {z1, …, zm}. Let X = Y ∪ Z. Using fresh variables a and zm+1 we construct an instance (F, a) of Skeptnaive-glb,1CPAF where F = (A, R, cond) with

  • A = X ∪ {ȳ | y ∈ Y} ∪ {a, zm+1},

  • R = {(y, ȳ), (ȳ, y) | y ∈ Y} ∪ {(zi, zj) | 1 ≤ i < j ≤ m+1} ∪ {(zi, a) | 1 ≤ i ≤ m+1},

  • and cond defined by the following rules:

    • φ ∧ zm+1 ⇒ zj ≻ zi for all 1 ≤ i < j ≤ m+1,

    • φ ∧ zm+1 ⇒ a ≻ zi for all 1 ≤ i ≤ m+1,

    • zi ∧ ⋀_{1 ≤ j ≤ m+1, j ≠ i} ¬zj ⇒ a ≻ zi for all 1 ≤ i ≤ m.

    Expressed in natural language, the first two rules remove all conflicts between z1, …, zm+1, a if φ ∧ zm+1 is satisfied, and the third rule removes the conflict between some z ∈ Z and a if this z is the only element from Z that is part of the extension, and if zm+1 is also not part of the extension.

The above construction is visualized in Fig. 8. Note that the resulting CPAF is polynomial in the size of φ as we employ O(m²) rules, each linear in the size of φ. It remains to show that ∀Y∃Zφ is true iff (F, a) is a yes-instance of Skeptnaive-glb,1CPAF.

  • Assume that ∀Y∃Zφ is true. We want to show that for all S ∈ naive-glbcp1(F) we have a ∈ S. Towards a contradiction assume this is not the case, i.e., there is some S ∈ naive-glbcp1(F) such that a ∉ S. There are two possibilities:

    • (1) S ⊨ φ. Then for S′ = S ∪ {a, zm+1} we also have S′ ⊨ φ since a and zm+1 are fresh variables. Moreover, S′ ∈ cfcp1(F) since S′ ⊨ φ ∧ zm+1 and thus all conflicts between the arguments z1, …, zm+1, a are removed. But S ⊂ S′, i.e., S ∉ naive-glbcp1(F). Contradiction.

      Fig. 8. Construction used in the proof of Lemma 20. Given a quantified Boolean formula ∀Y∃Zφ over variables Y = {y1, …, yn} and Z = {z1, …, zm}, a CPAF F is constructed such that ∀Y∃Zφ is true iff (F, a) is a yes-instance of Skeptnaive-glb,1CPAF.

    • (2) S ⊭ φ. Then S ⊭ φ ∧ zm+1 and therefore the conflicts between z1, …, zm+1, a are not removed. This means that at most one of z1, …, zm+1, a is in S since we require S ∈ cfcp1(F). Indeed, exactly one argument from z1, …, zm+1, a has to be in S, since if none of them were in S then we could add any of these arguments to S and the resulting set would still be conflict-free regardless of preferences. By our assumption, a ∉ S. Again, we distinguish two cases:

      • (a) zi ∈ S with 1 ≤ i ≤ m. But then S ∪ {a} ∈ cfcp1(F) because the following rule would apply: zi ∧ ⋀_{1 ≤ j ≤ m+1, j ≠ i} ¬zj ⇒ a ≻ zi. Thus, S ∉ naive-glbcp1(F). Contradiction.

      • (b) zm+1 ∈ S. Let IY = Y ∩ S. Since ∀Y∃Zφ is true there is some IZ ⊆ Z such that IY ∪ IZ ⊨ φ. Therefore, S′ ⊨ φ for S′ = IY ∪ {ȳ | y ∈ Y∖IY} ∪ IZ ∪ {a, zm+1}. Moreover, S′ ∈ cfcp1(F) since S′ ⊨ φ ∧ zm+1 and thus all conflicts between z1, …, zm+1, a are removed. Since S ⊂ S′ by construction we have that S ∉ naive-glbcp1(F). Contradiction.

    In all cases we arrive at a contradiction, and we can conclude that a ∈ S for all S ∈ naive-glbcp1(F).

  • Assume that ∀Y∃Zφ is not true. Then there is some IY ⊆ Y such that IY ∪ IZ ⊭ φ for all IZ ⊆ Z. Let S = IY ∪ {ȳ | y ∈ Y∖IY} ∪ {zm+1}. Clearly, S ∈ cfcp1(F). Moreover, there can be no S′ ⊃ S such that S′ ∈ cfcp1(F) since we would need to add at least one argument from z1, …, zm, a to S. But these arguments are all in conflict with zm+1 unless S′ ⊨ φ ∧ zm+1, which we know to be impossible. □

We now turn our attention to admissibility-based semantics, where, in contrast to semantics based only on conflict-freeness, the choice of preference reduction makes no difference with regards to complexity. Again, let us first consider the cases in which there is no rise in complexity compared to AFs.

Lemma 21.

Credσ,iCPAF is NP-complete for σ ∈ {adm, com, stb, prf-glb} and i ∈ {1, 2, 3, 4}.

Proof.

Hardness follows from hardness for AFs. Regarding membership for σ ∈ {adm, com, stb}, given a CPAF F and an argument x we can simply guess a set of arguments S containing x and, by Lemma 15, check whether S ∈ σcpi(F) in polynomial time. Regarding membership for prf-glb, it suffices to test whether (F, x) is a yes-instance of Credadm,iCPAF. □

Lemma 22.

Let i ∈ {1, 2, 3, 4}. Skeptσ,iCPAF is trivial for σ = adm, coNP-complete for σ = stb, and Π2P-complete for σ ∈ {prf, prf-glb}.

Proof.

Hardness follows from hardness for AFs. Regarding membership, let F = (A, R, cond) be a CPAF and x ∈ A. Concerning σ = adm, note that ∅ is always admissible in Ri∅(F), i.e., (F, x) is trivially a no-instance. Regarding σ ∈ {stb, prf, prf-glb} we consider the complementary problem: guess a set S ⊆ A and check that x ∉ S and that S ∈ σcpi(F) with σ ∈ {stb, prf, prf-glb}. Checking S ∈ σcpi(F) can be done in polynomial time in the case of σ = stb and in coNP-time in the case of σ ∈ {prf, prf-glb} (cf. Lemma 15). □

In the case of grounded semantics, both credulous and skeptical acceptance are located one level higher on the polynomial hierarchy compared to AFs. For complete semantics, the same is true for skeptical acceptance.

Lemma 23.

Let i ∈ {1, 2, 3, 4}. Credgrd,iCPAF is NP-complete. Skeptσ,iCPAF is coNP-complete for σ ∈ {grd, com}.

Proof.

We will consider the complementary problems of Skeptgrd,iCPAF/Skeptcom,iCPAF and show that they are NP-complete since this allows us to prove all results simultaneously.

NP-membership: given a CPAF F = (A, R, cond) and an argument x ∈ A, guess a set S ⊆ A and, in polynomial time, check whether S ∈ grdcpi(F) and x ∈ S (resp. S ∈ grdcpi(F) and x ∉ S or S ∈ comcpi(F) and x ∉ S).

NP-hardness by reduction from Sat: let φ be an arbitrary propositional formula over variables X. Let a and b be fresh variables, i.e., a, b ∉ X. We construct an instance (F, b) of Credgrd,iCPAF as follows: F = (A, R, cond) with A = X ∪ {x̄ | x ∈ X} ∪ {a, b}, R = {(x, a), (a, x), (x, x̄), (x̄, x) | x ∈ X} ∪ {(a, b)}, and cond defined by the rules φ ⇒ x ≻ a, ¬φ ⇒ a ≻ x, and x ⇒ x ≻ x̄ for x ∈ X, i.e., x ≻S a iff S ⊨ φ, a ≻S x iff S ⊨ ¬φ, and x ≻S x̄ iff S ⊨ x. Figure 9 depicts the above construction. In fact, this construction also works for the complementary problem of skeptical acceptance with respect to grounded and complete semantics, except that we will ask for the acceptance of the argument a instead of b. In this spirit, we now show that φ is satisfiable iff (F, b) is a yes-instance of Credgrd,iCPAF iff (F, a) is a no-instance of Skeptgrd,iCPAF/Skeptcom,iCPAF.

  • Suppose φ is satisfiable. Then there is an interpretation I ⊆ X such that I ⊨ φ. We assume that I ≠ ∅. This is permissible since we can check in polynomial time whether ∅ satisfies φ, and if this is the case, return a trivial yes-instance of Credgrd,iCPAF (or a trivial no-instance of Skeptgrd,iCPAF/Skeptcom,iCPAF). Consider S = I ∪ {b}. Then S ⊨ φ since I ⊨ φ and b does not occur in φ. We then have x ≻S a and x ≻S x̄ for all x ∈ I, but x ⊁S x̄ for x ∈ X∖I. Thus, the unattacked arguments in RiS(F) are exactly those in I. Since I ≠ ∅, b is defended in RiS(F) against a by the arguments in I. Thus, S is the minimal complete extension of RiS(F) and therefore also grounded in RiS(F). This implies that (F, b) is a yes-instance of Credgrd,iCPAF while (F, a) is a no-instance of both Skeptgrd,iCPAF and Skeptcom,iCPAF.

    Fig. 9. Construction used in the proof of Lemma 23. Given a formula φ over variables X = {x1, x2, …, xn}, a CPAF F is constructed such that φ is satisfiable iff (F, b) is a yes-instance of Credgrd,iCPAF iff (F, a) is a no-instance of Skeptgrd,iCPAF/Skeptcom,iCPAF.

  • Suppose φ is unsatisfiable. Then, for every S ⊆ A and x ∈ X, we have a ≻S x. Thus, the argument a is unattacked in every RiS(F), i.e., every complete extension (and therefore also every grounded extension) in F must contain a. Since a and b are in conflict, b is contained in no complete or grounded extension. Thus, (F, b) is a no-instance of Credgrd,iCPAF while (F, a) is a yes-instance of both Skeptgrd,iCPAF and Skeptcom,iCPAF. □

The last problem to consider is credulous acceptance for preferred semantics with local maximization. The following proof is the only one in this section that makes use of one of the well-known standard reductions for AFs [29]. Conditional preferences enter the construction in a very limited way: a single preference rule, consisting of a simple propositional formula, suffices.

Lemma 24.

$\mathit{Cred}^{\mathit{CPAF}}_{\mathit{prf},i}$ is $\Sigma_2^P$-complete for $i \in \{1,2,3,4\}$.

Proof.

$\Sigma_2^P$-membership: given a CPAF $F=(A,R,\mathit{cond})$ and an argument $x$, guess a set $S \subseteq A$ and check that $x \in S$ and, in coNP, that $S \in \mathit{prf}_{cp}^{i}(F)$.

Fig. 10.

Construction used in the proof of Lemma 24. Given the quantified Boolean formula $\forall y_1 y_2 \exists z_1 z_2\, \varphi$, with $\varphi$ consisting of clauses $c_1=(y_1 \lor \lnot y_2 \lor z_1)$, $c_2=(\lnot y_1 \lor \lnot z_1 \lor z_2)$, and $c_3=(y_2 \lor z_1 \lor \lnot z_2)$, a CPAF $F$ is constructed such that $\forall y_1 y_2 \exists z_1 z_2\, \varphi$ is true iff $(F,a)$ is a no-instance of $\mathit{Cred}^{\mathit{CPAF}}_{\mathit{prf},i}$.

$\Pi_2^P$-hardness of the complementary problem: let $\forall Y \exists Z\, \varphi$ be an arbitrary instance of $\mathit{QBF}_2^{\forall}$ in 3-CNF over variables $Y=\{y_1,\dots,y_n\}$ and $Z=\{z_1,\dots,z_m\}$ with clauses $C=\{c_1,\dots,c_k\}$. Let $X = Y \cup Z$. Using fresh variables $a$ and $b$ we construct an instance $(F,a)$ of co-$\mathit{Cred}^{\mathit{CPAF}}_{\mathit{prf},i}$ where $F=(A,R,\mathit{cond})$,

  • $A = X \cup \{\bar{x} \mid x \in X\} \cup C \cup \{a,b,\top,\bot\}$,

  • $R = \{(x,\bar{x}),(\bar{x},x) \mid x \in X\}$

    $\cup\ \{(x,c) \mid c \in C,\ x \in c\} \cup \{(\bar{x},c) \mid c \in C,\ \lnot x \in c\}$

    $\cup\ \{(c,c),(c,\top) \mid c \in C\}$

    $\cup\ \{(\bot,z),(\bot,\bar{z}) \mid z \in Z\}$

    $\cup\ \{(\bot,\bot),(\top,\bot)\} \cup \{(a,b),(b,a)\}$,

  • and $\mathit{cond}$ defined by the following rule: $\bigvee_{z \in Z}(z \lor \bar{z}) \Rightarrow b \succ a$.

This construction is exemplified in Fig. 10; a code sketch is given after the proof. It remains to show that $\forall Y \exists Z\, \varphi$ is true iff $a \notin S$ for all $S \in \mathit{prf}_{cp}^{i}(F)$.
  • Assume that $\forall Y \exists Z\, \varphi$ is true. Towards a contradiction, assume that there is some $S \in \mathit{prf}_{cp}^{i}(F)$ such that $a \in S$. Then it must be that $b \not\succ_S a$, otherwise $a$ is undefended in $R_i^S(F)$. Thus, for all $z \in Z$ we have $z \notin S$ and $\bar{z} \notin S$. Let $I_Y = S \cap Y$. Since $\forall Y \exists Z\, \varphi$ is true there is some $I_Z \subseteq Z$ such that $I_Y \cup I_Z \models \varphi$. Let $S' = I_Y \cup \{\bar{y} \mid y \in Y, y \notin I_Y\} \cup I_Z \cup \{\bar{z} \mid z \in Z, z \notin I_Z\} \cup \{\top, a\}$. Clearly, $S \subsetneq S'$. Moreover, $S'$ is admissible in $R_i^S(F)$: since $I_Y \cup I_Z \models \varphi$ all clause-arguments $c \in C$ are attacked by arguments in $S'$, and therefore $\top$ is defended by $S'$. This further implies that all arguments $z$, $\bar{z}$ are defended by $S'$ against $\bot$. We can conclude that $S$ is not preferred in $R_i^S(F)$. Contradiction.

  • Assume that $\forall Y \exists Z\, \varphi$ is not true. Then there is $I_Y \subseteq Y$ such that $I_Y \cup I_Z \not\models \varphi$ for all $I_Z \subseteq Z$. Let $S = I_Y \cup \{\bar{y} \mid y \in Y, y \notin I_Y\} \cup \{a\}$. Note that $b \not\succ_S a$ and that all arguments $y$, $\bar{y}$ defend themselves. Thus, $S$ is admissible in $R_i^S(F)$. Towards a contradiction, assume there is $S' \supsetneq S$ such that $S'$ is admissible in $R_i^S(F)$. This means one of the following must be the case:

    • $\top \in S'$. Then $\top$ needs to be defended by $S'$ against the clause arguments $c \in C$. But this means that $I = (S' \cap Y) \cup (S' \cap Z)$ satisfies all clauses in $\varphi$, i.e., $I \models \varphi$. Note that $S'$ contains exactly one of $y$, $\bar{y}$ for every $y \in Y$. Thus, $S' \cap Y = S \cap Y = I_Y$. The fact that $I_Y \cup (S' \cap Z) \models \varphi$ contradicts $I_Y \cup I_Z \not\models \varphi$ for all $I_Z \subseteq Z$.

    • $z \in S'$ for some $z \in Z$. Then $z$ needs to be defended by $S'$ against $\bot$. This is only possible if $\top \in S'$, which we have already shown not to be the case. Contradiction.

    • $\bar{z} \in S'$ for some $z \in Z$. Analogous to the case that $z \in S'$.

    Since we arrive at a contradiction in all cases, $S \in \mathit{prf}_{cp}^{i}(F)$. Moreover, note that $a \in S$. □
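
As with Lemma 23, the construction can be written down compactly. The sketch below follows our reading of the reduction above and reuses the same hypothetical (body, preference) representation for the single conditional rule; the argument names "T" and "F" for $\top$ and $\bot$ are our own.

def lemma24_cpaf(Y, Z, clauses):
    """Y, Z: lists of variable names; clauses: list of clauses, each a list of
    signed literals such as ("y2", False) for the literal ~y2."""
    X = list(Y) + list(Z)
    bar = {x: f"~{x}" for x in X}
    C = [f"c{j}" for j in range(1, len(clauses) + 1)]
    A = set(X) | set(bar.values()) | set(C) | {"a", "b", "T", "F"}  # T = top, F = bottom
    R = {(x, bar[x]) for x in X} | {(bar[x], x) for x in X}
    for cj, clause in zip(C, clauses):
        for var, positive in clause:
            R.add((var if positive else bar[var], cj))              # literal argument attacks its clause
        R |= {(cj, cj), (cj, "T")}                                  # clause self-attacks and attacks top
    R |= {("F", z) for z in Z} | {("F", bar[z]) for z in Z}         # bottom attacks all z-literals
    R |= {("F", "F"), ("T", "F"), ("a", "b"), ("b", "a")}
    # the single rule (z1 | ~z1 | z2 | ~z2 | ...) => b > a
    cond = [(lambda S: any(l in S for z in Z for l in (z, bar[z])), ("b", "a"))]
    return A, R, cond

# The instance of Fig. 10: forall y1 y2 exists z1 z2 (c1 & c2 & c3)
A, R, cond = lemma24_cpaf(
    ["y1", "y2"], ["z1", "z2"],
    [[("y1", True), ("y2", False), ("z1", True)],
     [("y1", False), ("z1", False), ("z2", True)],
     [("y2", True), ("z1", True), ("z2", False)]])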

6.Related formalisms

We now investigate the connection between CPAFs and related formalisms. First, we show that Value-based Argumentation Frameworks (VAFs) [8,12] can be captured by CPAFs in a straightforward way. Second, we consider Extended Argumentation Frameworks (EAFs) [10,28,34] and highlight similarities and differences to CPAFs. Finally, we compare our CPAFs with a recently introduced alternative approach to conditional preferences in abstract argumentation [2].

6.1.Capturing value-based argumentation

VAFs, similarly to CPAFs, are capable of dealing with multiple preference relations. But, in contrast to CPAFs, these preferences are not over individual arguments but over values associated with arguments. Which values are preferred depends on the audience. A set of arguments may then be accepted in view of one audience, but not in view of another.

Fig. 11.

Example VAF with two audiences p1 (v1 ≻ v2) and p2 (v2 ≻ v1).

More formally, a VAF is a quintuple $(A,R,V,\mathit{val},P)$ such that $(A,R)$ is an AF, $V$ is a set of values, $\mathit{val}: A \to V$ is a mapping from arguments to values, and $P$ is a finite set of audiences. Each audience $p \in P$ is associated with a preference relation $\succ_p$ over values, and $F_p=(A,R,V,\mathit{val},\succ_p)$ is called an audience-specific VAF (AVAF). The extensions of a VAF are determined for each audience separately. Specifically, an argument $x$ successfully attacks $y$ in $F_p$ iff $(x,y) \in R$ and $\mathit{val}(y) \not\succ_p \mathit{val}(x)$. Conflict-freeness and admissibility are then defined over these successful attacks. In essence, this boils down to using Reduction 1 on $F_p$, i.e., deleting attacks that contradict the preference ordering.
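
As a small illustration (ours, not taken from [8,12]), the successful attacks of an audience-specific VAF can be computed with a single pass over the attack relation:

def successful_attacks(R, val, pref_p):
    """pref_p: the audience's value preferences as a set of pairs (v, w), meaning v is preferred to w.
    An attack (x, y) succeeds unless the value of y is strictly preferred to the value of x."""
    return {(x, y) for (x, y) in R if (val[y], val[x]) not in pref_p}

Evaluating this function once per audience yields the AF of successful attacks on which conflict-freeness and admissibility are then evaluated.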

Figure 11a shows a VAF with two values v1 and v2. Let us say there are two audiences in this VAF, p1 with the preference v1 ≻ v2 and p2 with v2 ≻ v1. The AFs associated with p1 and p2, i.e., the AFs containing only the successful attacks in the AVAFs of p1 and p2, are depicted in Figs 11b and 11c.

Fig. 12.

CPAF obtained by translating the VAF of Fig. 11 according to Definition 17.


The reasoning tasks typically associated with VAFs are those of subjective and objective acceptance. Let $F=(A,R,V,\mathit{val},P)$ be a VAF and $x \in A$. Then $x$ is subjectively accepted in $F$ iff there is $p \in P$ such that $x$ is in a preferred extension of the AVAF $(A,R,V,\mathit{val},\succ_p)$. Similarly, $x$ is objectively accepted in $F$ iff for all $p \in P$ we have that $x$ is in all preferred extensions of the AVAF $(A,R,V,\mathit{val},\succ_p)$.

We now provide a translation where an arbitrary VAF $F=(A,R,V,\mathit{val},P)$ is transformed into a CPAF $\mathit{Tr}(F)=(A',R',\mathit{cond})$ such that the subjectively and objectively accepted arguments in $F$ correspond to the credulously and skeptically preferred arguments in $\mathit{Tr}(F)$, respectively.

Definition 17.

Let $F=(A,R,V,\mathit{val},P)$ be a VAF. Then $\mathit{Tr}(F)=(A',R',\mathit{cond})$ is the CPAF such that

  • $A' = A \cup P$,

  • $R' = R \cup \{(p,p'),(p',p) \mid p,p' \in P, p \neq p'\}$,

  • for every $S \subseteq A'$, $a \succ_S b$ iff there is $p \in P$ with $S \cap P = \{p\}$ and $\mathit{val}(a) \succ_p \mathit{val}(b)$.

Intuitively, each audience in the initial VAF is added as its own argument in our CPAF. The attacks of the VAF are preserved and symmetric attacks are added between all audience-arguments. Lastly, the preferences in our CPAF correspond to the preferences of each audience and are controlled by the newly introduced audience-arguments. Figure 12 shows the CPAF that results if the above translation is applied to the VAF of Fig. 11.
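
The translation of Definition 17 is simple enough to state directly in code. The following sketch is ours and makes two representational assumptions: audiences are passed as a mapping from each audience to its set of value preferences, and $\mathit{cond}$ is returned as a function mapping a set $S$ to the induced preference relation $\succ_S$.

def Tr(A, R, val, audiences):
    """Definition 17 (sketch). audiences: dict mapping p to a set of pairs (v, w) with v >_p w."""
    A2 = set(A) | set(audiences)
    R2 = set(R) | {(p, q) for p in audiences for q in audiences if p != q}
    def cond(S):
        chosen = set(S) & set(audiences)
        if len(chosen) != 1:          # preferences only apply if exactly one audience-argument is in S
            return set()
        (p,) = chosen
        return {(x, y) for x in A for y in A if (val[x], val[y]) in audiences[p]}
    return A2, R2, cond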

Observe that the successful attacks in some AVAF $F_p=(A,R,V,\mathit{val},\succ_p)$ are also attacks in $R_1^{S \cup \{p\}}(\mathit{Tr}(F))$, where $S \subseteq A$, and vice versa. Thus, the admissible sets in the initial VAF $F$ stand in direct relationship to the admissible sets in our constructed CPAF.

Lemma 25.

Let $F=(A,R,V,\mathit{val},P)$ be a VAF, $S \subseteq A$, and $p \in P$. Then $S$ is admissible in the AVAF $F_p=(A,R,V,\mathit{val},\succ_p)$ iff $S \cup \{p\} \in \mathit{adm}_{cp}^{1}(\mathit{Tr}(F))$.

Furthermore, note that all audience-arguments in $\mathit{Tr}(F)$ attack each other, i.e., an admissible set in $\mathit{Tr}(F)$ contains at most one audience-argument. In fact, each audience-argument defends itself, and thus every preferred extension in $\mathit{Tr}(F)$ must contain exactly one audience-argument $p \in P$ if we appeal to the $\mathit{prf}_{cp}^{1}$-semantics. Therefore, the direct correspondence between admissible sets observed in Lemma 25 carries over to preferred extensions.

Proposition 26.

Given a VAF $F=(A,R,V,\mathit{val},P)$, $x \in A$ is subjectively (resp. objectively) accepted in $F$ iff $x$ is credulously (resp. skeptically) preferred in $\mathit{Tr}(F)$ w.r.t. Reduction 1.

It must be pointed out that the translation provided in Definition 17 was designed for VAFs in which each audience is given explicitly. However, VAFs can also be defined with preferences given implicitly as the set of all possible audiences [8], where each audience corresponds to a linear ordering over all values. In this case, the translation of Definition 17 is not polynomial, as the number of audience-arguments would be factorial in the number of values. We now provide an alternative translation that can handle this implicit definition of audiences and where we only need $|V|^2$ additional arguments.

Definition 18.

Let $F=(A,R,V,\mathit{val},P)$ be a VAF, with $P$ implicitly given as the set of all possible linear orderings over $V$. Then $\mathit{Tr}_2(F)=(A',R',\mathit{cond})$ is the CPAF such that

  • $A' = A \cup \{v^k \mid v \in V, 1 \leq k \leq |V|\}$,

  • $R' = R \cup \{(v^k,w^k),(w^k,v^k) \mid v \in V, w \in V, 1 \leq k \leq |V|, v \neq w\} \cup \{(v^k,v^l) \mid v \in V, k \neq l\}$,

  • $\mathit{cond}$ is defined as follows: for every $a,b \in A$ such that $\mathit{val}(a) \neq \mathit{val}(b)$ and every $1 \leq k < |V|$ we introduce the rule $\mathit{val}(a)^k \land \bigl(\bigvee_{l=k+1}^{|V|} \mathit{val}(b)^l\bigr) \Rightarrow a \succ b$.

Fig. 13.

CPAF obtained by translating the VAF of Fig. 11 according to Definition 18.


Figure 13 shows the CPAF that results if the above translation is applied to the VAF of Fig. 11. As with our first translation (cf. Definition 17), there is a direct semantic correspondence between the initial VAF and the constructed CPAF. The idea is the following: along with the arguments and attacks of the original VAF, we introduce arguments $v^1,\dots,v^{|V|}$ for each value $v \in V$. If $v^k$ is accepted, this means that $v$ is considered the $k$-th best value. Since $v^k$ attacks all other value-arguments $w^k$ with $w \neq v$, we know that no other value is simultaneously ascribed the $k$-th best position. Moreover, $v^k$ attacks all $v^l$ with $l \neq k$, i.e., $v$ is ascribed the $k$-th best position and no other. Then, we prefer an argument $a$ to another argument $b$ if the value of $a$ is preferred to that of $b$, i.e., appears at an earlier position in the linear ordering. In this way, each extension corresponds to a linear ordering over all values, i.e., each extension corresponds to an audience. This further implies that each $S$-reduct of the constructed CPAF has exactly the same attacks between the arguments of the initial VAF as the AVAF corresponding to the value-ordering encoded in $S$. This gives us a result analogous to Proposition 26.
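
Under the same representational assumptions as before, and writing the position-arguments $v^k$ as pairs (v, k), the preference relation induced by Definition 18 for a given extension can be sketched as follows; this is our illustration, not part of the formal translation.

def tr2_pref(A, val, positions):
    """positions: the position-arguments contained in the extension, as pairs (v, k)
    stating that value v is the k-th best value (assumed to encode a linear ordering)."""
    rank = dict(positions)
    return {(a, b) for a in A for b in A
            if val[a] in rank and val[b] in rank and rank[val[a]] < rank[val[b]]}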

Proposition 27.

Given a VAF $F=(A,R,V,\mathit{val},P)$ where $P$ is implicitly given as the set of all possible linear orderings over $V$, $x \in A$ is subjectively (resp. objectively) accepted in $F$ iff $x$ is credulously (resp. skeptically) preferred in $\mathit{Tr}_2(F)$ w.r.t. Reduction 1.

Our translations highlight the versatility of our formalism. On the one hand, conditional preferences can be tied to dedicated arguments (in this case the audience-arguments). On the other hand, these dedicated arguments themselves may be part of the argumentation process. Note that we used CPAFs with Reduction 1 since preferences in VAFs are usually handled by deleting attacks. However, our approach also allows for the use of other preference-reductions in VAF-settings.

Moreover, note that the problem of subjective acceptance in VAFs is NP-complete [12], even if the set of all audiences is represented implicitly. In contrast, we have shown that credulous acceptance under preferred semantics in CPAFs is $\Sigma_2^P$-complete (cf. Table 4). Thus, assuming that the polynomial hierarchy does not collapse, a polynomial translation from CPAFs to VAFs analogous to our Proposition 26 (resp. Proposition 27) is not possible when considering credulously/subjectively accepted arguments.

6.2.Relationship to extended argumentation frameworks

EAFs allow arguments to express preferences between other arguments by permitting attacks themselves to be attacked. While EAFs are related to our CPAFs conceptually, we will see that there are crucial differences in how exactly preferences are handled.

Formally, an EAF is a triple $(A,R,D)$ such that $(A,R)$ is an AF, $D \subseteq A \times R$, and if $(a,(b,c)),(a',(c,b)) \in D$ then $(a,a'),(a',a) \in R$. The definition of admissibility in EAFs is quite involved and requires so-called reinstatement sets. Essentially, a set of arguments $S$ is admissible in an EAF if all arguments $x \in S$ are defended from other arguments $y \in A \setminus S$, and if all attacks $(z,y)$ used for defending $x$ are in turn defended from attacks on attacks $(w,(z,y))$ and thus reinstated. It is possible that a chain of such reinstatements is required, which is formalized via the aforementioned reinstatement sets. Formally defining these concepts is not necessary for our purposes; the corresponding definitions can be found in [34]. Observe that the notion of attacks on attacks in EAFs is similar to Reduction 1 in the sense that attacks between arguments can be unsuccessful, but they are never reversed. Therefore, we will compare EAFs to CPAFs with Reduction 1.

Recall our Sherlock Holmes example from the introduction (Example 1) that we modeled as a CPAF (Example 3). Let us first consider a slimmed-down variation without an argument stating that Person 1 has an alibi. We can model this as an EAF with three arguments c1 (Person 1 is the culprit), c2 (Person 2 is the culprit), and m1 (Person 1 has a motive) in which m1 attacks the attack from c2 to c1. The corresponding EAF is depicted in Fig. 14b. Compare this to the formalization via a CPAF in Fig. 14a. Note that {c1} is admissible in the EAF but {c2} is not since (c2,c1) is used to defend against (c1,c2) but not reinstated against (m1,(c2,c1)). In the CPAF, {c2} is admissible (but not stable).

This simple example highlights a fundamental difference in how preferences are viewed in the two formalisms. In CPAFs, preferences are relevant exactly if the argument that expresses them (e.g. m1) is part of the set under inspection. In EAFs, preferences are relevant even if the argument that expresses them is not accepted. Modgil [34] states that admissibility for EAFs was defined in this way because it was deemed important to satisfy Dung’s Fundamental Lemma [26], which says that if $S$ is admissible and $x$ is acceptable w.r.t. $S$ then $S \cup \{x\}$ is admissible. This Fundamental Lemma is not satisfied in our CPAFs. However, in our opinion, this is no drawback but rather a necessary property of formalisms that can deal with conditional preferences in a flexible way. For example, in Fig. 14a it is clear that {c2} should be admissible since, when considering only admissibility, we are not forced to include the unattacked m1, i.e., we do not have to accept that Person 1 has a motive. The inclusion of unattacked arguments in CPAFs is handled via more restrictive approaches such as stable or preferred semantics, as usual.

Fig. 14.

A simplified version of the Sherlock Holmes example modeled via a CPAF and an EAF.


Another difference between CPAFs and EAFs becomes clear when considering the entire Sherlock Holmes example. Recall our formalization for CPAFs (cf. Fig. 2). In order to express our preference in case Person 1 has an alibi, we extend our EAF from Fig. 14b by adding an attack from a1 to the attack (c1,c2), as shown in Fig. 15a.³ Note that a1 and m1 must attack each other in this EAF by definition since they express conflicting preferences. But this formalization is unsatisfactory since it should be possible for Person 1 to have both a motive and an alibi. The fact that the preference of one argument may change in view of another argument must be modeled indirectly in EAFs. For example, we can introduce an additional argument to express that Person 1 has both a motive and an alibi. This is depicted in Fig. 15b. Thus, we can see that CPAFs allow for more flexibility when combining preferences associated with several arguments.

Fig. 15.

The full Sherlock Holmes example modeled by two different EAFs.


There are also some differences between CPAFs and EAFs when it comes to preferred semantics. For instance, stable extensions in EAFs are not necessarily preferred extensions [28]. In CPAFs, every stable extension is also preferred, except if we use global maximization and Reduction 1 (cf. Proposition 4). Moreover, credulous acceptance under preferred semantics is in NP for EAFs [28], but $\Sigma_2^P$-complete for CPAFs when using local maximization (cf. Table 4).

To summarize, CPAFs are designed to express conditional preferences in abstract argumentation, whereas preferences in EAFs are unconditional in the sense that they may always influence the argumentation process, even if the argument associated with the preference is not accepted. Moreover, since our CPAFs can make use of all four preference reductions, they allow for more flexibility in how preferences are handled compared to EAFs, in which unsuccessful attacks are always deleted. However, the two formalisms are similar in that arguments are capable of reasoning about the argumentation process itself, i.e., they constitute a form of metalevel argumentation [35].

6.3.Lifting preferences over arguments to sets of arguments

In our CPAFs, we deal with preferences by using preference reductions which modify the attack relation (see Definition 5). There exist other approaches to preferences in argumentation, where preference orderings over arguments are lifted to sets of arguments [1,6,22,33], and the most preferred extensions are then selected according to this new preference ordering.

Recently [2], conditional preferences in abstract argumentation have been investigated using the aforementioned preference liftings. We refer to the CPAFs introduced in that work as lifting-based CPAFs. Similarly to our reduction-based CPAFs, a lifting-based CPAF is given as $(A,R,\Gamma)$ where $(A,R)$ is an AF and $\Gamma$ is a set of conditional preference rules of the form $a_1 \succ a_2 \leftarrow b_1 \land \dots \land b_m \land \lnot c_1 \land \dots \land \lnot c_n$ built from arguments $a_1,a_2,b_1,\dots,b_m,c_1,\dots,c_n$. The conditional preferences over arguments given by $\Gamma$ are lifted to preferences over sets of arguments according to one of three criteria (democratic, elitist, KTV), and the ‘best’ extensions are then selected according to this lifted preference ordering.

Note that lifting-based CPAFs, in contrast to our reduction-based CPAFs, satisfy principle P2 (cf. Definition 15) by design, since the ‘best’ extensions selected in a lifting-based CPAF (A,R,Γ) are always extensions of (A,R). We note that, for complete and stable semantics, Reduction 3 satisfies P2 as well and thus selects extensions in the style of preference liftings (cf. Table 3).

The conditional preference rules $\Gamma$ of a lifting-based CPAF $(A,R,\Gamma)$ are usually assumed to be well-formed, which ensures that the arguments $a_1$, $a_2$ occurring in the head of a rule $a_1 \succ a_2 \leftarrow b_1 \land \dots \land b_m \land \lnot c_1 \land \dots \land \lnot c_n$ do not occur in the body of the same rule. This is to prevent counterintuitive results, as explained in [2] via the following example: given $(A,R,\Gamma)$ with extensions $\{\{a,b\},\{a,c\}\}$ and $\Gamma$ given by $c \succ b \leftarrow b$ and $c \succ b \leftarrow c$, one would expect the only ‘best’ extension to be $\{a,c\}$. However, under the semantics of lifting-based CPAFs, both $\{a,b\}$ and $\{a,c\}$ are ‘best’. This problem does not occur with the well-formed $\Gamma=\{c \succ b\}$. In our reduction-based CPAFs we have no analogous assumption of well-formedness. Despite this, the counterintuitive behavior observed above does not necessarily occur in our reduction-based CPAFs. For example, consider $(A,R,\mathit{cond})$ with $A=\{a,b,c\}$, $R=\{(b,c),(c,b)\}$, and $\mathit{cond}$ given by the rules $b \Rightarrow c \succ b$ and $c \Rightarrow c \succ b$. Then $\mathit{prf}((A,R))=\{\{a,b\},\{a,c\}\}$. However, under all four preference reductions, the attack $(b,c)$ is deleted as soon as $b$ or $c$ is in the extension under inspection. Thus, $\mathit{prf}_{cp}^{i}((A,R,\mathit{cond}))=\{\{a,c\}\}$.
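
To see the last point concretely, the following sketch (ours; it assumes Reduction 1, under which an attack is deleted whenever the attacked argument is preferred to its attacker) computes the reducts arising in the example just discussed:

def reduct1(R, pref_S):
    """Reduction 1: delete every attack (x, y) such that y is preferred to x."""
    return {(x, y) for (x, y) in R if (y, x) not in pref_S}

R = {("b", "c"), ("c", "b")}
rules = [(lambda S: "b" in S or "c" in S, ("c", "b"))]   # b => c > b and c => c > b, combined
pref = lambda S: {p for body, p in rules if body(S)}

print(reduct1(R, pref({"a", "b"})))   # {('c', 'b')}: b no longer counter-attacks c
print(reduct1(R, pref({"a", "c"})))   # {('c', 'b')}

In the reduct with respect to {a,b}, the argument b is attacked by c but no longer counter-attacks, which is why {a,b} is not admissible there and only {a,c} survives as a preferred extension.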

Another difference between lifting-based CPAFs and our reduction-based CPAFs lies in their computational complexity, which is higher for lifting-based CPAFs in most cases. For example, verification for stable semantics is coNP-complete in lifting-based CPAFs [2] but remains in P for reduction-based CPAFs (see Table 4). As a result, credulous and skeptical acceptance for stable semantics are $\Sigma_2^P$-complete and $\Pi_2^P$-complete, respectively, in lifting-based CPAFs, while they remain NP-complete and coNP-complete, respectively, in reduction-based CPAFs. Some problems, such as credulous and skeptical acceptance for preferred semantics under the elitist and KTV criteria, may even lie on the third level of the polynomial hierarchy for lifting-based CPAFs (tight bounds for the complexity of these problems have not been established yet). We observe that the increased complexity of lifting-based CPAFs is in many cases not due to the introduction of conditional preferences, but rather due to the preference liftings themselves, as the complexity of lifting-based PAFs (featuring only unconditional preferences) is already considerably higher than that of regular AFs [1].

As pointed out in [2, Example 2], there are lifting-based CPAFs where not every (best) stable extension is also a (best) preferred extension. In contrast, every stable extension in a reduction-based CPAF is also a preferred extension, except when considering Reduction 1 and preferred semantics with global maximization (cf. Proposition 4).

To conclude this comparison, we want to emphasize that there is a conceptual difference between the reduction-based and lifting-based approaches to resolving preferences in argumentation: when using preference reductions, $x \succ y$ expresses that $x$ is stronger than $y$; when using preference liftings, $x \succ y$ expresses that we prefer outcomes containing $x$ over outcomes containing $y$. Which of the two approaches should be chosen depends on the task at hand.

7.Conclusion

In this paper, we introduce Conditional Preference-based AFs (CPAFs), which generalize PAFs and allow for the flexible handling of conditional preferences in abstract argumentation.

We conduct a principle-based analysis for CPAFs and show that complete and stable semantics satisfy the same principles as on PAFs in most cases while grounded semantics no longer satisfies many of the principles. We further investigate the computational complexity of CPAFs and show that this complexity can be influenced by the chosen preference reduction (in case of naive semantics) or by how maximization is handled (in case of naive and preferred semantics). Our results also show that the satisfaction of I-maximality can depend on how maximization is dealt with (in case of preferred semantics) and on which preference-reduction is chosen (in case of stable semantics).

Moreover, we compare CPAFs to related formalisms. On the one hand, we show that CPAFs can be used to capture VAFs via a straightforward translation. On the other hand, we demonstrate that CPAFs exhibit significant differences to EAFs in terms of how preferences are handled. We also discuss a recently introduced alternative approach to conditional preferences in abstract argumentation, where preferences over arguments are lifted to preferences over sets of arguments.

The fact that Reduction 1 results in a higher complexity under naive semantics (cf. Table 4) is not unique to our setting. It has been shown that Reduction 1 causes a higher complexity compared to Reductions 2-4 also in the setting of claim-acceptance in AFs [13], although there the difference between the preference reductions extends to more than just naive semantics.

For future work, the relationship between CPAFs and existing approaches in structured argumentation [27] shall be investigated. Related to this point, it may also be interesting to see whether conditional preferences can be adapted to other formalisms such as bipolar argumentation frameworks [5], in which both attack and support relations are present. As for preference representation, it could be investigated how existing formalisms designed to handle conditional preferences such as CP-nets [19] or various forms of logic programming [20,21,23,24] relate to CPAFs.

Notes

1 A set of atoms $S$ can be seen as an interpretation, with $x$ set to true under $S$ iff $x \in S$.

2 In fact, for our membership results the explicit representation of rules using propositional formulas is not necessary. It suffices to have some representation such that, given $S \subseteq A$, we can determine $\succ_S$ in polynomial time with respect to the size of $F=(A,R,\mathit{cond})$. However, for hardness results, a more concrete representation such as via our rules is necessary.

3 The EAFs of Fig. 14b and Fig. 15a are also used as examples in Modgil’s original paper [34].

Acknowledgements

We thank the anonymous reviewers, including the reviewers of the previous version of this paper, for their valuable feedback. This work was funded by the Austrian Science Fund (FWF) under grant P32830 and by the Vienna Science and Technology Fund (WWTF) under grant ICT19-065.

References

[1] 

G. Alfano, S. Greco, F. Parisi and I. Trubitsyna, On preferences and priority rules in abstract argumentation, in: Proc. IJCAI’22, IJCAI Organization, (2022) , pp. 2517–2524.

[2] 

G. Alfano, S. Greco, F. Parisi, I. Trubitsyna et al., Abstract argumentation framework with conditional preferences, in: Proc. AAAI’23, AAAI Press, (2023) , pp. 6218–6227.

[3] 

L. Amgoud and C. Cayrol, On the acceptability of arguments in preference-based argumentation, in: Proc. UAI’98, Morgan Kaufmann, (1998) , pp. 1–7.

[4] 

L. Amgoud and C. Cayrol, A reasoning model based on the production of acceptable arguments, Ann. Math. Artif. Intell. 34: (1–3) ((2002) ), 197–215. doi:10.1023/A:1014490210693.

[5] 

L. Amgoud, C. Cayrol, M. Lagasquie-Schiex and P. Livet, On bipolarity in argumentation frameworks, Int. J. Intell. Syst. 23: (10) ((2008) ), 1062–1093. doi:10.1002/int.20307.

[6] 

L. Amgoud and S. Vesic, Rich preference-based argumentation frameworks, Int. J. Approx. Reason. 55: (2) ((2014) ), 585–606. doi:10.1016/j.ijar.2013.10.010.

[7] 

S. Arora and B. Barak, Computational Complexity: A Modern Approach, Cambridge University Press, (2009) .

[8] 

K. Atkinson and T.J.M. Bench-Capon, Value-based argumentation, in: Handbook of Formal Argumentation, Vol. 2: , College Publications, (2021) , pp. 299–354. Also appears in IfCoLog Journal of Logics and their Applications 8(6), 1543–1588.

[9] 

P. Baroni, M. Caminada and M. Giacomin, Abstract argumentation frameworks and their semantics, in: Handbook of Formal Argumentation, College Publications, (2018) , pp. 159–236, Chapter 4.

[10] 

P. Baroni, F. Cerutti, M. Giacomin and G. Guida, Encompassing attacks to attacks in abstract argumentation frameworks, in: Proc. ECSQARU’09, LNCS, Vol. 5590: , Springer, (2009) , pp. 83–94.

[11] 

P. Baroni and M. Giacomin, On principle-based evaluation of extension-based argumentation semantics, Artif. Intell. 171: (10–15) ((2007) ), 675–700. doi:10.1016/j.artint.2007.04.004.

[12] 

T.J.M. Bench-Capon, S. Doutre and P.E. Dunne, Audiences in argumentation frameworks, Artif. Intell. 171: (1) ((2007) ), 42–71. doi:10.1016/j.artint.2006.10.013.

[13] 

M. Bernreiter, W. Dvořák, A. Rapberger and S. Woltran, The effect of preferences in abstract argumentation under a claim-centric view, in: Proc. AAAI’23, AAAI Press, (2023) , pp. 6253–6261.

[14] 

M. Bernreiter, W. Dvořák and S. Woltran, Abstract argumentation with conditional preferences, in: Proc. COMMA’22, Frontiers in Artificial Intelligence and Applications, Vol. 353: , IOS Press, (2022) , pp. 92–103. doi:10.3233/FAIA220144.

[15] 

M. Bernreiter, J. Maly and S. Woltran, Choice logics and their computational properties, Artif. Intell. 311: ((2022) ), 103755. doi:10.1016/j.artint.2022.103755.

[16] 

M. Bienvenu, J. Lang and N. Wilson, From preference logics to preference languages, and back, in: Proc. KR’10, AAAI Press, (2010) , pp. 414–424.

[17] 

S. Bistarelli and F. Santini, Weighted argumentation, in: Handbook of Formal Argumentation, Vol. 2: , College Publications, (2021) , pp. 355–395. Also appears in IfCoLog Journal of Logics and their Applications 8(6), 1589–1622.

[18] 

J. Bobadilla, F. Ortega, A. Hernando and A. Gutiérrez, Recommender systems survey, Knowl. Based Syst. 46: ((2013) ), 109–132. doi:10.1016/j.knosys.2013.03.012.

[19] 

C. Boutilier, R.I. Brafman, C. Domshlak, H.H. Hoos and D. Poole, CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements, J. Artif. Intell. Res. 21: ((2004) ), 135–191. doi:10.1613/jair.1234.

[20] 

G. Brewka, J.P. Delgrande, J. Romero and T. Schaub, Asprin: Customizing answer set preferences without a headache, in: Proc. AAAI’15, AAAI Press, (2015) , pp. 1467–1474.

[21] 

G. Brewka, I. Niemelä and T. Syrjänen, Logic programs with ordered disjunction, Comput. Intell. 20: (2) ((2004) ), 335–357. doi:10.1111/j.0824-7935.2004.00241.x.

[22] 

G. Brewka, M. Truszczynski and S. Woltran, Representing preferences among sets, in: Proc. AAAI’10, AAAI Press, (2010) , pp. 273–278.

[23] 

A. Charalambidis, P. Rondogiannis and A. Troumpoukis, A logical characterization of the preferred models of logic programs with ordered disjunction, Theory Pract. Log. Program. 21: (5) ((2021) ), 629–645. doi:10.1017/S1471068421000235.

[24] 

J.P. Delgrande, T. Schaub and H. Tompits, A framework for compiling preferences in logic programs, Theory Pract. Log. Program. 3: (2) ((2003) ), 129–187. doi:10.1017/S1471068402001539.

[25] 

C. Domshlak, E. Hüllermeier, S. Kaci and H. Prade, Preferences in AI: An overview, Artif. Intell. 175: (7–8) ((2011) ), 1037–1052. doi:10.1016/j.artint.2011.03.004.

[26] 

P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77: (2) ((1995) ), 321–358. doi:10.1016/0004-3702(94)00041-X.

[27] 

P.M. Dung, P.M. Thang and T.C. Son, On structured argumentation with conditional preferences, in: Proc. AAAI’19, AAAI Press, (2019) , pp. 2792–2800.

[28] 

P.E. Dunne, S. Modgil and T.J.M. Bench-Capon, Computation in extended argumentation frameworks, in: Proc. ECAI’10, FAIA, Vol. 215: , IOS Press, (2010) , pp. 119–124.

[29] 

W. Dvořák and P.E. Dunne, Computational problems in formal argumentation and their complexity, in: Handbook of Formal Argumentation, College Publications, (2018) , pp. 631–687, Chapter 14. Also appears in IfCoLog Journal of Logics and their Applications 4(8), 2557–2622.

[30] 

W. Dvořák, M. König, M. Ulbricht and S. Woltran, Rediscovering argumentation principles utilizing collective attacks, in: Proc. KR’22, (2022) .

[31] 

U. Endriss (ed.), Trends in Computational Social Choice, AI Access, (2017) .

[32] 

S. Kaci, L.W.N. van der Torre, S. Vesic and S. Villata, Preference in abstract argumentation, in: Handbook of Formal Argumentation, Vol. 2: , College Publications, (2021) , pp. 211–248.

[33] 

S. Kaci, L.W.N. van der Torre and S. Villata, Preference in abstract argumentation, in: Proc. COMMA’18, FAIA, Vol. 305: , IOS Press, (2018) , pp. 405–412.

[34] 

S. Modgil, Reasoning about preferences in argumentation frameworks, Artif. Intell. 173: (9–10) ((2009) ), 901–934. doi:10.1016/j.artint.2009.02.001.

[35] 

S. Modgil and T.J.M. Bench-Capon, Metalevel argumentation, J. Log. Comput. 21: (6) ((2011) ), 959–1003. doi:10.1093/logcom/exq054.

[36] 

S. Modgil and H. Prakken, Reasoning about preferences in structured extended argumentation frameworks, in: Proc. COMMA’10, FAIA, Vol. 216: , IOS Press, (2010) , pp. 347–358.

[37] 

S. Modgil and H. Prakken, A general account of argumentation with preferences, Artif. Intell. 195: ((2013) ), 361–397. doi:10.1016/j.artint.2012.10.008.

[38] 

S. Modgil and H. Prakken, Abstract rule-based argumentation, in: Handbook of Formal Argumentation, College Publications, (2018) , pp. 287–364, Chapter 6. Also appears in IfCoLog Journal of Logics and their Applications 4(8), 2319–2406.

[39] 

G. Pigozzi, A. Tsoukiàs and P. Viappiani, Preferences in artificial intelligence, Ann. Math. Artif. Intell. 77: (3–4) ((2016) ), 361–401. doi:10.1007/s10472-015-9475-5.

[40] 

Y. Shoham, Nonmonotonic logics: Meaning and utility, in: Proc. IJCAI’87, Morgan Kaufmann, (1987) , pp. 388–393.

[41] 

L. van der Torre and S. Vesic, The principle-based approach to abstract argumentation semantics, in: Handbook of Formal Argumentation, College Publications, (2018) , pp. 797–838, Chapter 16. Also appears in IfCoLog Journal of Logics and their Applications 4(8), 2737–2780.