A negotiation process between two agents e1 and e2 can be interleaved with another negotiation process between, say, e1 and e3. The interleaving may alter the resource allocation assumed at the inception of the first negotiation process. Existing proposals for argumentation-based negotiation have focused primarily on two-agent bilateral negotiations, and scarcely on the concurrency of multi-agent negotiations. To fill this gap, we present a novel argumentation theory, basing its development on abstract persuasion argumentation (an abstract argumentation formalism with a dynamic relation). By incorporating numerical information and a mechanism of handshakes among members of the dynamic relation, we show that the extended theory adapts well to concurrent multi-agent negotiations over scarce resources.
The semantics as to which set of arguments in a given argumentation graph may be acceptable (acceptability semantics) can be characterised in a few different ways. Among them, the labelling-based approach allows for concise and flexible determination of the acceptability statuses of arguments by assigning to each argument a label indicating acceptance, rejection, or undecidedness. In this work, we contemplate a way of broadening this approach by accommodating may- and must-conditions for an argument to be accepted or rejected, as determined by the numbers of rejected and accepted attacking arguments. We show that the broadened labelling-based semantics can express indeterminacy milder than inconsistency in acceptability judgements when, for example, it may be the case that an argument is accepted and it may also be the case that it is rejected. We identify that finding which conditions a labelling satisfies for every argument can be an undecidable problem, which has an unfavourable implication for the semantics. We propose to address this problem by requiring a labelling to maximally respect the conditions, while labelling undecided the rest that would necessarily cause non-termination.
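For orientation, the may-/must-conditions above generalise the standard (Caminada-style) labelling conditions, which can be sketched as follows. This is a minimal illustrative sketch of the classical baseline only, with hypothetical names (`attackers`, `is_complete_labelling`); the broadened may-/must-semantics itself is not reproduced here.

```python
# Standard labelling conditions for abstract argumentation:
# an argument is IN iff all of its attackers are OUT,
# OUT iff at least one attacker is IN, and UNDEC otherwise.
IN, OUT, UNDEC = "in", "out", "undec"

def is_complete_labelling(attackers, labelling):
    """Check whether `labelling` satisfies the standard conditions.

    `attackers` maps each argument to the list of arguments attacking it;
    `labelling` maps each argument to IN, OUT, or UNDEC.
    """
    for arg, label in labelling.items():
        atts = [labelling[b] for b in attackers.get(arg, [])]
        if label == IN and not all(l == OUT for l in atts):
            return False  # an IN argument must have only OUT attackers
        if label == OUT and not any(l == IN for l in atts):
            return False  # an OUT argument must have some IN attacker
        if label == UNDEC and (all(l == OUT for l in atts)
                               or any(l == IN for l in atts)):
            return False  # UNDEC is only legal when neither case applies
    return True

# Example graph: a attacks b, b attacks c.
attackers = {"a": [], "b": ["a"], "c": ["b"]}
print(is_complete_labelling(attackers, {"a": IN, "b": OUT, "c": IN}))  # True
print(is_complete_labelling(attackers, {"a": IN, "b": IN, "c": OUT}))  # False
```

The broadened semantics replaces the all/any conditions above with thresholds on the numbers of accepted and rejected attackers, which is where the may/must distinction arises.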
From marketing to politics, exploitation of incomplete information through selective communication of arguments is ubiquitous. In this work, we focus on the development of an argumentation-theoretic model for manipulable multi-agent argumentation, where each agent may transmit deceptive information to others for tactical motives. In particular, we study the characterisation of epistemic states and their roles in deception/honesty detection and (mis)trust-building. To this end, we propose the use of intra-agent preferences to handle deception/honesty detection and inter-agent preferences to determine which agent(s) to believe more. We show how deception/honesty in an agent's argumentation, if detected, would alter that agent's perceived trustworthiness, and how this may affect the other agents' judgement as to which arguments should be acceptable.
In this work we propose an ontology to support automated negotiation in multiagent systems. The ontology can be connected with domain-specific ontologies to facilitate negotiation in different domains, such as Intelligent Transportation Systems (ITS) and e-commerce. The specific negotiation rules for each type of negotiation strategy can also be defined as part of the ontology, reducing the amount of knowledge hardcoded in the agents and ensuring interoperability. The expressiveness of the ontology was demonstrated in a multiagent architecture for an automatic traffic light setting application in ITS.