Theory of Mind + Abduction
A Jason agent combining Theory of Mind with abductive reasoning
tomabd documentation

This package provides a Jason agent that combines Theory of Mind and abductive reasoning to interpret and extract information from the actions of other agents.

The main class of this package is tomabd.agent.TomAbdAgent, as it contains the methods that implement the actual computations. The other classes in the tomabd.agent package are Jason internal actions (IAs) that provide an interface to those public methods of the tomabd.agent.TomAbdAgent class that are most likely to be invoked from the AgentSpeak code.

Other IAs in the tomabd.misc package provide manipulations of AgentSpeak constructs, such as literals, logical formulas and Prolog-like rules. These are particularly handy for writing the Theory of Mind clauses with head believes(Ag, Fact) (see Adopting a different viewpoint).

Author
Nieves Montes

Agent model

For an example to help understand the discussion that follows, the reader is directed to the examples folder.

The agent model that this package implements revolves around the function TomAbductionTask, implemented by tomabd.agent.TomAbdAgent.tomAbductionTask. One execution of this task is composed of the following steps:

  1. Adopt the acting agent's point of view.
  2. Generate abductive explanations.
  3. Refine abductive explanations from the acting agent's point of view.
  4. Adopt the observer agent's point of view.
  5. Refine abductive explanations from the observer agent's point of view.

Steps 1 and 4 are covered in Adopting a different viewpoint. Steps 2, 3 and 5 are covered in Generating and refining abductive explanations.

To set the scene, consider the following:

  • Agent \(i\), operating with logic program \(T_i\). We refer to agent \(i\) as the observer agent; it is the one executing TomAbductionTask.
  • Agent \(j\), operating with logic program \(T_j\). We refer to agent \(j\) as the acting agent.

Agents select their actions according to a set of action selection clauses. This is a set of Prolog-like rules: action(Agent, Action) :- ... . These clauses indicate which action each agent should take given its current perception of the world. They are necessary because the actions of others are the observations that need to be explained using abductive reasoning (see Generating and refining abductive explanations).
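For illustration, here is a minimal sketch of what action selection clauses might look like in a simple card game domain (the predicates holds_card/2, playable/1 and discard_allowed/0 are hypothetical and not part of the package):

// Hypothetical action selection clauses: each rule maps an agent's current
// beliefs to the action that agent should take.
action(Ag, play(Card)) :- holds_card(Ag, Card) & playable(Card).
action(Ag, discard(Card)) :- holds_card(Ag, Card) & not playable(Card) & discard_allowed.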

Adopting a different viewpoint

At some time-step, the acting agent \(j\) selects an action \(a_j\) to be executed. Observer agent \(i\) comes to learn that \(j\) has indeed selected \(a_j\). The particular mechanism by which this happens is left as a domain-specific choice for the developer (see the Usage section).

When \(i\) learns about \(j\)'s action of choice, \(i\) seeks to infer the reasons and motivations behind this decision. To do so, \(i\) embarks on a TomAbductionTask.

First, \(i\) engages in Theory of Mind and substitutes its view of the world by the view that it estimates \(j\) has of the world. By doing so, the observer agent \(i\) is putting itself in the shoes of the acting agent \(j\). Computationally, \(i\) substitutes its program \(T_i\) by the program it estimates that \(j\) is operating with. We denote \(i\)'s estimation of \(j\)'s program by \(T_{i,j}\), which is computed as follows:

\begin{equation} T_{i,j} = \{\phi \mid T_{i} \models \texttt{believes}(j, \phi)\} \label{eq:tom} \end{equation}

Note that eq. \(\eqref{eq:tom}\) makes a reference to a believes/2 predicate. The clauses defining this predicate are known as Theory of Mind (ToM) clauses. They specify what the agent believes about what other agents believe, and hence have head believes(Agent, Fact). ToM clauses are domain-specific and they are queried to build an approximation of other agents' programs. In this sense, they operate as a meta-interpreter on the program \(T_i\) of the agent performing the ToM+Abd task.
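For instance, in a domain with partial observability, a ToM clause might state that an agent believes the position of any object it can see (the predicates at/2 and sees/2 are hypothetical, introduced only for this sketch):

// Hypothetical ToM clause: agent Ag is assumed to believe the location
// of any object placed at a position it can see.
believes(Ag, at(Obj, Pos)) :- at(Obj, Pos) & sees(Ag, Pos).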

Eq. \(\eqref{eq:tom}\) formulates first-order Theory of Mind. This means that agent \(i\) tries to view the world in the way that it thinks \(j\) is perceiving it. However, eq. \(\eqref{eq:tom}\) can be recursively extended to any arbitrary level of Theory of Mind:

\begin{equation} T_{i,k, ..., l, j} = \{\phi \mid T_{i, k, ..., l} \models \texttt{believes}(j, \phi)\} \label{eq:tom-recursion} \end{equation}

For example, \(i\) might want to know how \(j\) is estimating that \(k\) is perceiving the world. This corresponds to \(T_{i,j,k}\), a second-order Theory of Mind substitution. In particular, it might be the case that \(i\) wants to know how \(j\) is estimating its own ( \(i\)'s) view. This corresponds to \(T_{i,j,i}\).
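As a consequence of eqs. \(\eqref{eq:tom}\) and \(\eqref{eq:tom-recursion}\) (assuming that membership of a fact in a program entails its derivability from it), nested believes/2 literals in the original program \(T_i\) give rise to higher-order viewpoints. For the second-order case:

\begin{equation*} T_{i} \models \texttt{believes}(j, \texttt{believes}(k, \phi)) \implies \phi \in T_{i,j,k} \end{equation*}

In practice, this suggests that higher-order viewpoints can be specified by nesting believes/2 terms inside the Fact argument of the ToM clauses.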

For the discussion that follows, we use the following notation in reference to the symbols in eq. \(\eqref{eq:tom-recursion}\):

  • The sequence \([i, k, ..., l, j]\) is the actor viewpoint.
  • The last element of the sequence ( \(j\)) is the acting agent.
  • The sequence excluding the last element (the actor), \([i, k, ..., l]\), is the observer viewpoint.

In order for the observer agent \(i\) to interpret the acting agent \(j\)'s action, \(i\) needs to switch its perspective to that of the actor. This may mean adopting a direct estimation of \(j\)'s program ( \(T_{i,j}\)) in a first-order ToM+Abd task, or an estimation made through several intermediaries ( \(T_{i,k, ..., l, j}\)) in a higher-order execution of TomAbductionTask. Either way, the substitution of the original agent program by a new point of view is implemented by the tomabd.agent.TomAbdAgent.adoptViewpoint method.

Generating and refining abductive explanations

Once observer \(i\) has adopted the acting agent's point of view, the observer is in a position to infer the reasons why the actor selected action \(a_j\), in the hope that this newly derived knowledge will be useful for its own later decision-making. This inference to the best explanation is called abductive reasoning. In order to compute abductive explanations, it is necessary to specify, for the current domain, the set of abducible facts. These are the facts that can possibly complement a belief base. They are specified through a set of clauses with head abducible(Fact).
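For example, in a card game where agents cannot see their own cards, the facts about which cards an agent holds could be declared abducible (holds_card/2, agent/1 and card/1 are hypothetical predicates reused from the sketches above):

// Hypothetical abducible declaration: any "holds_card" fact may be assumed
// as part of an abductive explanation.
abducible(holds_card(Ag, Card)) :- agent(Ag) & card(Card).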

The tomabd.agent.TomAbdAgent class automatically loads an abductive meta-interpreter at initialization time. Given query \(Q = \texttt{action}(j, a_j)\), the abductive meta-interpreter generates a set of raw explanations \(\Phi\), composed of ground abducible facts. This set of raw explanations can be represented in disjunctive normal form (DNF):

\begin{equation} \Phi = (\phi_{11} \; \land \; ... \; \land \; \phi_{1n_1}) \; \lor \; ... \; \lor \; (\phi_{m1} \; \land \; ... \; \land \; \phi_{mn_m}) \label{eq:dnf} \end{equation}

where all \(\phi_{rs}\) are derivable from the current belief base, i.e. \(T_{i, k, ..., l, j} \models \texttt{abducible}(\phi_{rs})\). Within the agent class, this DNF is implemented as a list of lists.
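For instance, a raw explanation \(\Phi = (\phi_{11} \land \phi_{12}) \lor \phi_{21}\) would be handled as an AgentSpeak term of the following shape (reusing the hypothetical holds_card/2 abducible from above):

// Each inner list is one disjunct of the DNF, i.e. a conjunction of
// ground abducible facts.
[
  [holds_card(alice, card(red, 1)), holds_card(alice, card(blue, 2))],
  [holds_card(bob, card(red, 1))]
]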

The raw explanations \(\Phi\) need to be post-processed. First, this is done with respect to the viewpoint of the acting agent. This step provides an opportunity to refine the raw explanations, for example by checking for inconsistencies with respect to the current agent program. This explanation refinement step is implemented by the method tomabd.agent.TomAbdAgent.erf, which can be overridden by the MAS developer depending on the application's needs. The default computation performs these two steps:

  1. First, for every potential explanation (i.e. every disjunct in eq. \(\eqref{eq:dnf}\)), uninformative atoms are removed.
  2. Second, disjuncts that are incompatible with the impossibility constraints in the current program \(T_{i,k,...,l,j}\) (i.e. at the acting agent viewpoint) are removed (see the illustration below).
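As a brief illustration (assuming that an atom is uninformative when it is already derivable from the current program, in line with the explanation update criterion described later): suppose \(\Phi = (\phi_1 \land \phi_2) \lor \phi_3\), that \(\phi_2\) is already derivable from \(T_{i,k,...,l,j}\), and that an impossibility constraint at that viewpoint rules out \(\phi_3\). Step 1 removes \(\phi_2\) from the first disjunct, and step 2 discards the second disjunct altogether, leaving the single refined explanation \(\phi_1\).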

The refined explanations with respect to the acting agent viewpoint are denoted by \(\Phi^{act}\):

\begin{equation} \Phi^{act} = (\phi_{11}' \; \land \; ... \; \land \; \phi_{1n_1'}') \; \lor \; ... \; \lor \; (\phi_{m'1}' \; \land \; ... \; \land \; \phi_{m'n_{m'}'}') \end{equation}

The refined abductive explanation has to hold true, but arbitrary logical formulas cannot be directly added to a Jason belief base. However, since the explanation holds, its negation must be false. We take advantage of this to integrate \(\Phi^{act}\) into the agent's program: from the negation of \(\Phi^{act}\), we build a new impossibility constraint:

\begin{equation} \texttt{imp [source(abduction)] :- } (\sim\phi_{11}' \; \texttt{|} \; ... \; \texttt{|} \; \sim\phi_{1n_1'}') \; \texttt{&} \; ... \; \texttt{&} \; (\sim\phi_{m'1}' \; \texttt{|} \; ... \; \texttt{|} \; \sim\phi_{m'n_{m'}'}'). \label{eq:ic-actor} \end{equation}

Note the source(abduction) annotation, which indicates that this IC is derived from an abductive reasoning process and is not a domain-dependent constraint.
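As an illustration in AgentSpeak syntax, suppose the refined explanation consists of the single conjunction holds_card(alice, card(red, 1)) ∧ holds_card(alice, card(blue, 2)) (hypothetical facts reused from the earlier sketches). Rendering \(\sim\) as Jason's strong negation (whether strong negation or negation as failure is appropriate is a domain-level choice), the generated constraint would look like:

// Abduction-derived impossibility constraint: the negation of the refined
// explanation can never hold.
imp[source(abduction)] :- ~holds_card(alice, card(red, 1)) | ~holds_card(alice, card(blue, 2)).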

However, eq. \(\eqref{eq:ic-actor}\) should not be directly added to the original agent's program \(T_i\). An extra step has to account for the fact that this explanation has been generated from viewpoint \([i, k, ..., l, j]\). To achieve this, the following nested believes/2 literal is generated:

\begin{equation} \texttt{believes($k$, ..., believes($l$, believes($j$, imp [source(abduction)] :- ... )) ... )} \label{eq:ic-observer} \end{equation}

To allow flexibility, the method responsible for TomAbductionTask (tomabd.agent.TomAbdAgent.tomAbductionTask) does not add the generated abductive impossibility constraints in eqs. \(\eqref{eq:ic-actor}\) and \(\eqref{eq:ic-observer}\) to the agent's program. That is left as a domain-dependent choice, to be made by the developer of the application.

This process allows the original agent \(i\) to update its model of the actor's point of view. However, the ultimate goal is to update the model of the world from the perspective of the observer. To do so, the whole process of (i) refining the raw explanations; (ii) building the abductive impossibility constraint; and (iii) building a new believes/2 literal, is repeated, this time from the viewpoint of the observer agent. If the viewpoint of the observer corresponds directly to the original agent \(i\) (which happens when the TomAbductionTask is first-order), the returned literal is directly of the form in eq. \(\eqref{eq:ic-actor}\).

Updating abductive explanations

Abductive explanations, once generated, will not remain valid forever, since the environment is dynamic. Therefore, the tomabd.agent.TomAbdAgent has a custom belief update function (BUF) that includes a call to an explanation update function (EUF). The default implementation of the EUF follows these steps:

  1. Update the belief base according to buf().
  2. For every literal that originated in an abductive reasoning process (annotated with source(abduction)):
      • Extract the viewpoint at which it was generated and the associated explanation.
      • Adopt the viewpoint in question.
      • If the associated explanation can now be derived from the current program, drop the abductive literal from \(T_i\), since it is no longer informative.

This default explanation update function is implemented in the tomabd.agent.TomAbdAgent.euf method, and can be overridden by the MAS developer in a custom subclass.

Action selection

The whole purpose of the ToM+Abd task is to have the observer agent \(i\) in a more informed position when it is \(i\)'s turn to act, thanks to the additional knowledge derived from TomAbductionTask. Hence, this package also implements a basic action selection function that takes into account the impossibility constraints derived from abductive reasoning.

The default implementation uses all constraints (abductive and domain-specific) to reason over all possible worlds. When querying the action selection rules, the interpreter may come across a sub-goal that, according to \(T_i\), is abducible. In that case, the action selection function looks for all possible instantiations of this abducible sub-goal. If all of the possible instantiations lead to the same action being selected, that action is returned. This is a fairly restrictive action selection mechanism; however, it can be overridden by the MAS developer.

This default action selection is implemented in the agent method tomabd.agent.TomAbdAgent.selectAction. The internal action tomabd.agent.select_action is an interface to this method, so that it can be called from the AgentSpeak code (see the following section).

Usage

This package should be used in multi-agent systems developed using Jason or JaCaMo. The recommended way to use this package is to download a copy of the jar file and add it to the project's class-path.

In Jason (.mas2j file):

MAS myMAS {
    agents: ...
    environment: ...
    classpath: "path/to/tomabd.jar";
}

In JaCaMo (.jcm file):

mas myMAS {
    // agents configuration
    ...
    // environment configuration
    ...
    // organization configuration
    ...
    // execution configuration
    class-path: path/to/tomabd.jar
    ...
}

To trigger the execution of TomAbductionTask from the AgentSpeak code, invoke the internal action tomabd.agent.tom_abduction_task:

+!g : c
    <- ...;
       tomabd.agent.tom_abduction_task(
           ObserverViewpoint,              // a list
           ActingAgent,                    // an atom
           Action,                         // an atom
           ActorViewpointExplanation,      // bound by the IA
           ObserverViewpointExplanations,  // bound by the IA
           ActorAbductiveTomRule,          // bound by the IA
           ObserverAbductiveTomRule,       // bound by the IA
           ElapsedTime                     // bound by the IA
       );
       ...

The decision of when to invoke this IA is domain-specific and is left to the MAS developer. For example, in the Hanabi game, a custom KQML performative publicAction is used to announce the selected action. The reception of a message with this performative triggers the execution of TomAbductionTask:

+!kqml_received(Sender, publicAction, Action, KQML_MsgId) : c
    <- ...;
       // first-order Theory of Mind -- abduction task
       tomabd.agent.tom_abduction_task(
           [],
           Sender,
           Action,
           ActExpls,
           ObsExpls,
           ActTomRule,
           ObsTomRule
       );
       ...

If one wishes to use the tomabd.agent.TomAbdAgent.selectAction method to select the next agent action, the IA tomabd.agent.select_action acts as an interface to this method. It is used as follows:

+!g : c
    <- ...;
       tomabd.agent.select_action(
           Action,    // bound by the IA
           Priority   // bound by the IA
       );
       (!)Action;
       ...

Use the ! prefix on Action if it is modelled as an achievement goal.