We have identified two use cases. First, as detailed in the project overview, a suite of tests can be used to detect errors and regressions, and to assure the quality of an ontology. Second, temporary tests can be used to explore inferences or to predict the consequences of adding a new axiom without the overhead of reclassification.
We aim to develop new ontology testing algorithms which address the shortcomings of existing tools and satisfy these use cases. Further, we aim to prove mathematically that they are correct. To support these proofs, we present a formal model of ontology testing.
We assume that any ontology under test \(O\) is consistent and coherent, and that any axiom under test \(A\) contains only entities declared in \(O\). The result of a test is one of four outcomes, listed here in order of precedence: entailed, inconsistent, incoherent, or absent.
In formal notation:
\[ \text{test}_O(A) = \begin{cases} \text{entailed} & \text{if } O \vdash A \\ \text{inconsistent} & \text{if } O \cup \{A\} \vdash \bot \\ \text{incoherent} & \text{if } (\exists C)\; O \cup \{A\} \vdash C \sqsubseteq \bot \\ \text{absent} & \text{otherwise} \end{cases} \]
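To make the four outcomes concrete, the following is a minimal sketch that reads the definition directly, using the Java OWL API and any `OWLReasoner` implementation (HermiT, for example). The `TestResult` enum and the `testAxiom` method are illustrative names of ours, not part of the OWL API, of any existing tool, or of the algorithm collection described here; the sketch assumes the chosen reasoner supports entailment checking for the axiom type under test, and it temporarily adds \(A\) to \(O\) for the consistency and coherence checks.

```java
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

enum TestResult { ENTAILED, INCONSISTENT, INCOHERENT, ABSENT }

final class OntologyTester {

    /**
     * Evaluates test_O(A) by checking the four outcomes in order of
     * precedence, temporarily adding A to O for the last three checks.
     */
    static TestResult testAxiom(OWLOntology ontology, OWLAxiom axiom,
                                OWLReasoner reasoner,
                                OWLReasonerFactory reasonerFactory) {
        // Case 1: O entails A.
        if (reasoner.isEntailed(axiom)) {
            return TestResult.ENTAILED;
        }

        // Temporarily add A to O and create a reasoner over the extended ontology.
        OWLOntologyManager manager = ontology.getOWLOntologyManager();
        manager.addAxiom(ontology, axiom);
        OWLReasoner extended = reasonerFactory.createReasoner(ontology);
        try {
            // Case 2: O ∪ {A} is inconsistent.
            if (!extended.isConsistent()) {
                return TestResult.INCONSISTENT;
            }
            // Case 3: O ∪ {A} makes some named class unsatisfiable.
            if (!extended.getUnsatisfiableClasses()
                         .getEntitiesMinusBottom().isEmpty()) {
                return TestResult.INCOHERENT;
            }
            // Case 4: none of the above.
            return TestResult.ABSENT;
        } finally {
            // Restore O to its original state.
            extended.dispose();
            manager.removeAxiom(ontology, axiom);
        }
    }
}
```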
We describe algorithms which cover TBox, ABox, and some RBox axioms in their general forms. For example, we can test the axiom \(\exists \texttt{eats} . \texttt{Animal} \sqsubseteq \texttt{Herbivore}\) by calling \(\text{testSubClassOf}(\exists\texttt{eats}.\texttt{Animal}, \texttt{Herbivore})\); a sketch of such a call appears below. No existing tool can run this test, because existing tools permit only a named class on the left-hand side of a subclass axiom.
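As a usage illustration, the sketch below constructs the axiom \(\exists\texttt{eats}.\texttt{Animal} \sqsubseteq \texttt{Herbivore}\) with the OWL API data factory, including the complex class expression on the left-hand side, and passes it to the `testAxiom` sketch above. The ontology file name, namespace IRI, and entity names are hypothetical, and HermiT is only one possible choice of reasoner.

```java
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

final class SubClassOfExample {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new java.io.File("diet.owl"));              // hypothetical ontology file
        OWLDataFactory df = manager.getOWLDataFactory();

        String ns = "http://example.org/diet#";              // hypothetical namespace
        OWLObjectProperty eats = df.getOWLObjectProperty(IRI.create(ns + "eats"));
        OWLClass animal = df.getOWLClass(IRI.create(ns + "Animal"));
        OWLClass herbivore = df.getOWLClass(IRI.create(ns + "Herbivore"));

        // Build the complex left-hand side ∃eats.Animal and the axiom under test.
        OWLClassExpression eatsAnimal = df.getOWLObjectSomeValuesFrom(eats, animal);
        OWLAxiom axiom = df.getOWLSubClassOfAxiom(eatsAnimal, herbivore);

        OWLReasonerFactory factory = new ReasonerFactory();  // HermiT
        OWLReasoner reasoner = factory.createReasoner(ontology);
        System.out.println(OntologyTester.testAxiom(ontology, axiom, reasoner, factory));
        reasoner.dispose();
    }
}
```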
We have presented a novel model of ontology testing and a comprehensive collection of algorithms which meet all stated aims. The algorithms were proven correct.
These algorithms can easily be implemented in new or existing tools.