PT Entailment

A Proof of Concept Implementation of PT Entailment

PT Entailment is a form of entailment for PTL that satisfies Strict Entailment, but not Conditional Rationality and therefore not the Single Model postulate. PT Entailment is based on the minimality of ranked models, and the algorithm for finding minimal models can return multiple models. The idea behind this is to preserve the presumption of typicality, which corresponds to considering only models in which valuations are as far down in the ranking as possible, i.e. as typical as possible. To compare ranked models, a preference relation $⊴_{PT}$ is introduced in terms of a height function. The height of a valuation in a ranked model is the number of the layer it is on, or infinity if the valuation does not appear in the ranked model. A ranked model $\mathscr{R}_1$ is then preferred to a ranked model $\mathscr{R}_2$, written $\mathscr{R}_1 ⊴_{PT} \mathscr{R}_2$, if the height of every valuation in $\mathscr{R}_1$ is no greater than its height in $\mathscr{R}_2$.
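
The following is a minimal Python sketch of the height function and the induced comparison, assuming a ranked model is represented as a list of layers (level 0 first), each layer a set of valuations; the names `height` and `pt_preferred` are illustrative, not the project's actual identifiers:

```python
import math

def height(ranked_model, valuation):
    # The height of a valuation is the index of the layer it appears on,
    # or infinity if it does not occur in the ranked model at all.
    for level, layer in enumerate(ranked_model):
        if valuation in layer:
            return level
    return math.inf

def pt_preferred(r1, r2, all_valuations):
    # r1 ⊴_PT r2: every valuation sits at least as low (is at least as
    # typical) in r1 as it does in r2.
    return all(height(r1, v) <= height(r2, v) for v in all_valuations)
```

Under this reading, a ranked model of a knowledge base is PT-minimal when no other ranked model of that knowledge base is strictly preferred to it.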

The PT Algorithm

PT Entailment requires us to look at different ranked interpretations in order to determine minimal models. The algorithm returns a set of PT-minimal models. In overview, it considers every possible subset of valuations for a given knowledge base and finds the minimal ranked interpretations over that subset that form a model. This breaks down to checking arrangements of valuations in order to find ranked interpretations that are models of the knowledge base, i.e. satisfy it, and to which no other ranked model is strictly $⊴_{PT}$-preferred. The first arrangement checked for each subset of valuations is the one in which every valuation is equally typical, i.e. all on level 0. Each subsequent iteration considers the ranked interpretations obtained by moving one valuation up a level. The following is a formal description of the algorithm:

Once PT-minimal models are found, they are used to check entailment of PTL sentences from a knowledge base. This comes down to checking whether every valuation in every minimal model satisfies the sentence with respect to the ranked structure.
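
A rough sketch of this final check, assuming the layered representation above and a helper `satisfies(ranked_model, valuation, sentence)` that evaluates a PTL sentence at a valuation with respect to the ranked structure (as described in the implementation section below); both names are assumptions for illustration:

```python
def pt_entails(minimal_models, sentence):
    # A PTL sentence is PT-entailed by the knowledge base if it holds at
    # every valuation of every PT-minimal model.
    return all(
        satisfies(model, valuation, sentence)
        for model in minimal_models
        for layer in model
        for valuation in layer
    )
```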

The PT Implementation

The algorithm was implemented in Python, using the PyMiniSolvers module to interface with MiniSAT. The following operators were used, with their input-format representations listed below; a short example follows the list.

  • $\lnot: -$
  • $\lor: |$
  • $\land: \&$
  • $\rightarrow: >$
  • $\bullet: *$
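
For instance, under this notation the PTL sentence $\bullet p \rightarrow f$ ("typical $p$'s are $f$'s") is written as the string *p>f, and a knowledge base is passed as a list of such strings (as in the worked example in the Results section); the strings below are illustrative only:

```python
sentence = "*p>f"                   # • p → f
knowledge_base = ["*p>f", "p|r"]    # an illustrative knowledge base
```
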
In order to pass sentences to the SAT Solver to check satisfiability, they need to be in Conjunctive Normal Form (CNF), i.e. a conjunction of disjunctions of literals. To perform this conversion, sentences are processed as trees, with each operator or atom being a node and the outermost operator as the root. The trees are recursively transformed into CNF using De Morgan's laws, with a base case for literals.
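
One standard way to realise such a recursive conversion, eliminating implications, pushing negations inward with De Morgan's laws and then distributing disjunction over conjunction, is sketched below over a simple tuple-based node encoding. The encoding and function names are assumptions made for illustration, not the project's actual tree classes:

```python
# Sentence trees: an atom is a string; a compound node is a tuple
# ('-', child), ('&', left, right), ('|', left, right) or ('>', left, right).
# Typicality operators are assumed to have been replaced already, as
# described below.

def eliminate_implications(node):
    if isinstance(node, str):
        return node
    if node[0] == '>':                      # a > b  becomes  -a | b
        return ('|', ('-', eliminate_implications(node[1])),
                     eliminate_implications(node[2]))
    return (node[0],) + tuple(eliminate_implications(c) for c in node[1:])

def push_negations(node):
    if isinstance(node, str):
        return node
    if node[0] == '-':
        child = node[1]
        if isinstance(child, str):
            return node                                 # base case: literal
        if child[0] == '-':
            return push_negations(child[1])             # double negation
        if child[0] == '&':                             # De Morgan
            return ('|', push_negations(('-', child[1])),
                         push_negations(('-', child[2])))
        if child[0] == '|':                             # De Morgan
            return ('&', push_negations(('-', child[1])),
                         push_negations(('-', child[2])))
    return (node[0],) + tuple(push_negations(c) for c in node[1:])

def distribute(node):
    if isinstance(node, str) or node[0] == '-':
        return node
    left, right = distribute(node[1]), distribute(node[2])
    if node[0] == '|':
        # Distribute | over & so only a conjunction of clauses remains.
        if not isinstance(left, str) and left[0] == '&':
            return ('&', distribute(('|', left[1], right)),
                         distribute(('|', left[2], right)))
        if not isinstance(right, str) and right[0] == '&':
            return ('&', distribute(('|', left, right[1])),
                         distribute(('|', left, right[2])))
    return (node[0], left, right)

def to_cnf(node):
    return distribute(push_negations(eliminate_implications(node)))
```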

The data structures used included 2D arrays for ranked models, sets and lists of valuations, and tree structures for sentences. A list of atoms is kept in order to match the binary values in each valuation to atoms. To check whether a ranked model satisfies a PTL knowledge base $\mathscr{K}$, each valuation in the ranked model must satisfy every sentence in $\mathscr{K}$. If the sentence is a classical sentence, this is checked classically using the SAT Solver, by testing whether the conjunction of the sentence and the literals of the valuation is satisfiable. If it is a typicality sentence containing $\bullet a$, the check is whether the valuation is among the most typical valuations satisfying $a$. If it is not, then $\bullet a$ is false, since the valuation is not a typical world of $a$; in this case the typicality subformula is replaced by a contradiction, $p \land \lnot p$. Otherwise the valuation is a typical world of $a$, and it is checked whether the valuation classically satisfies $a$. Checking whether a ranked model entails a sentence works similarly.
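
The typicality check described above can be pictured with the following sketch, which evaluates $\bullet a$ directly rather than via the substitution into a classical sentence; `classically_satisfies` stands in for the SAT-Solver call and, like the other names, is an assumption for illustration:

```python
def typical_worlds(ranked_model, sentence, atoms):
    # Most typical worlds of a sentence: the valuations on the lowest layer
    # containing at least one valuation that classically satisfies it.
    for layer in ranked_model:                        # level 0 first
        worlds = {v for v in layer
                  if classically_satisfies(v, sentence, atoms)}
        if worlds:
            return worlds
    return set()

def satisfies_typicality(ranked_model, valuation, sentence, atoms):
    # •a holds at a valuation exactly when that valuation is among the most
    # typical worlds of a in the ranked model.
    return valuation in typical_worlds(ranked_model, sentence, atoms)
```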

The algorithm for finding minimal models first computes the powerset of all valuations for the knowledge base. Starting with the biggest subset, the most typical arrangement is checked first, i.e. the one with every valuation taken as low as possible, on level 0. Each possible arrangement is then checked in turn until one is found whose corresponding ranked interpretation forms a model of the knowledge base. Once any models have been found for a given subset, they are compared to the models found previously, i.e. over bigger subsets: if a previously found model $\mathscr{R}_1$ satisfies $\mathscr{R}_1 ⊴_{PT} \mathscr{R}_2$ for a new model $\mathscr{R}_2$, the new model is not returned. This check only needs to be done in one direction, since if $\mathscr{R}_1$ has more valuations than $\mathscr{R}_2$ it can be preferred to or incomparable with $\mathscr{R}_2$, but it cannot be worse: at least one valuation of $\mathscr{R}_1$ does not appear in $\mathscr{R}_2$ and therefore has infinite height in $\mathscr{R}_2$.
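
A condensed sketch of this search, under the same assumptions as before, might look as follows. Here `arrangements` is an assumed generator yielding the ranked interpretations over a subset of valuations from most to less typical, `is_model` an assumed wrapper around the satisfaction check described above, and `pt_preferred` the comparison sketched earlier; for simplicity the sketch keeps only the first (lowest) model found for each subset:

```python
from itertools import chain, combinations

def pt_minimal_models(kb, valuations):
    # Candidate subsets of valuations, largest first, mirroring the search
    # order described above.
    subsets = chain.from_iterable(
        combinations(valuations, n) for n in range(len(valuations), 0, -1))
    minimal = []
    for subset in subsets:
        for candidate in arrangements(set(subset)):   # most typical first
            if is_model(candidate, kb):
                # Keep the candidate only if no model found earlier (over a
                # larger subset) is already PT-preferred to it.
                if not any(pt_preferred(found, candidate, valuations)
                           for found in minimal):
                    minimal.append(candidate)
                break
    return minimal
```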

Results

Results follow for small knowledge bases with a small number of atoms, usually producing only one minimal model. Testing the results against the LM Entailment implementation gives the same answers for these small problems, which indicates that the implementation is correct for small knowledge bases. However, because the algorithm checks every possible ranked interpretation, its complexity is extremely high. A knowledge base with $n$ atoms has $2^n$ valuations, and every subset of these valuations needs to be checked for possible arrangements. This makes the algorithm very inefficient for large knowledge bases.
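
To make the growth concrete: for $n = 3$ atoms there are $2^{3} = 8$ valuations and hence $2^{2^{3}} = 2^{8} = 256$ subsets of valuations to consider, before counting the arrangements of each subset across levels.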

An example from the Booth 2015 paper is $\mathscr{K} := \{\bullet\top \rightarrow (\lnot p \land \lnot r),\ \bullet p \rightarrow \bullet\lnot f,\ \bullet r \rightarrow \bullet f\}$, in other words "the most typical things are neither penguins nor robins, typical penguins are typical non-flying birds, and typical robins are typical flying birds". Converting the typicality of $\top$ into $p \lor \lnot p$, this example can be modeled in the implementation as: ["(*(p|-p))>(-p&-r)", "*p>*-f", "*r>*f"]. The resulting minimal models match the PT-minimal models found in the paper:

Note that $\mathscr{R}_1$ is the model produced by LM Entailment and that its set of valuations is a subset of that of $\mathscr{R}_3$. An expected result from previous literature is that $\bullet\lnot p \rightarrow \lnot r$ is entailed by the knowledge base, or more precisely by its PT-minimal models. The implementation confirms this entailment, so we can conclude that typical non-penguins are not robins. Two further examples are that neither $\bullet\lnot p \rightarrow \lnot f$ nor $\bullet(\lnot p \land f) \rightarrow \lnot r$ is entailed by the knowledge base; the implementation correctly produces false for both. These tests produced the correct minimal models and the correct entailment results, which means the implementation is indeed a successful proof of concept for PT Entailment, even though it is not highly efficient.
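
Assuming the same input conventions as the knowledge base above, these three queries might be written as follows; the exact strings are an illustration derived from the operator table, not taken from the test suite:

```python
queries = [
    "*-p>-r",       # •¬p → ¬r        (entailed)
    "*-p>-f",       # •¬p → ¬f        (not entailed)
    "*(-p&f)>-r",   # •(¬p ∧ f) → ¬r  (not entailed)
]
```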