TDDON: Towards Test-Driven Development of Ontologies

By Kieren Davies and Ameerah Allie

Supervised by Maria Keet

Introduction

Ontologies, and ontology engineering, have become increasingly relevant in the past decade. They are regarded as a critical component of the Semantic Web, and have been employed successfully in fields ranging from genetics to news and broadcasting.

Despite this, ontologies have not seen widespread adoption within business and industry. We postulate that one of the contributing factors is the state of ontology engineering methodologies, which lag behind software engineering methodologies in terms of both maturity and adoption. In particular, there are no published methodologies which explicitly incorporate automated testing, which has become a staple of software engineering.

Test-Driven Development (TDD) is a software engineering methodology based on two rules:

• Write new code only if an automated test has failed.
• Eliminate duplication.

Tests thereby serve to define desired functionality. The process is usually facilitated with a test harness which runs tests automatically and generates reports. TDD has been shown to improve code quality, and it is widely believed to improve productivity and morale.
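The two rules above can be sketched in miniature. The following is an illustrative sketch only, using a hypothetical requirement (a `normalise_name` helper, not from the project itself) to show the write-test-first cycle that a harness automates:

```python
# A miniature sketch of the TDD cycle for a hypothetical requirement:
# a function that normalises a phrase into a CamelCase class name.

# Step 1 (red): write the test first. Running it at this point fails,
# because normalise_name does not exist yet.
def test_normalise_name():
    assert normalise_name("giraffe herbivore") == "GiraffeHerbivore"

# Step 2 (green): write only enough code to make the failing test pass.
def normalise_name(name):
    return "".join(word.capitalize() for word in name.split())

# Step 3: run the test again; it now passes. A real test harness
# automates this loop and reports the results.
test_normalise_name()
```

In practice the harness discovers and runs all such tests on every change, so a regression anywhere in the codebase is reported immediately.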

There exist some tools for testing ontologies, including TDDonto, Tawny-OWL, and SCONE, all of which make use of standard OWL reasoners such as HermiT. All are in the early stages of development: they still lack desirable features and are limited in the forms of axioms they can test. They report the result of testing an axiom as pass or fail, but do not offer any insight into the potential consequences of adding the axiom to the ontology.

This project aims to work towards test-driven development of ontologies in two ways:

• Develop new algorithms which enable all possible OWL 2 axioms to be tested and which report the consequences of adding an axiom to the ontology, and mathematically prove these algorithms to be correct. Research conducted by Kieren Davies.

• Benchmark the performance of reasoners when used to evaluate tests on an ontology. Research conducted by Ameerah Allie.

Why test?

Ontologies, like computer programs, can become complex to the point that it is difficult for a human author to predict the consequences of changes, especially if the author is inexperienced. Automated tests are therefore useful to detect unintended consequences. As an illustrative example, suppose an author creates the following classes:

$\texttt{Giraffe} \sqsubseteq \texttt{Herbivore} \sqsubseteq \texttt{Mammal} \sqsubseteq \texttt{Animal}$

The author then realises that not all herbivores are mammals, so changes $\texttt{Herbivore}$ to be a subclass only of $\texttt{Animal}$. But now $\texttt{Giraffe}$ is no longer a derived subclass of $\texttt{Mammal}$, and an application which uses this ontology to enumerate mammals would erroneously exclude giraffes. This mistake could be caught by a simple test which declares that $\texttt{Giraffe}$ should be a subclass of $\texttt{Mammal}$.
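The scenario above can be sketched concretely. This is a toy model only: subclass axioms are represented as (sub, super) pairs and entailment is checked by a simple reachability search, a stand-in for a full OWL reasoner such as HermiT:

```python
# Toy subsumption check: does sub ⊑ sup follow from the subclass axioms
# by transitivity? (A stand-in for calling an OWL reasoner.)
def entails_subclass(axioms, sub, sup):
    frontier, seen = [sub], set()
    while frontier:
        cls = frontier.pop()
        if cls == sup:
            return True
        if cls in seen:
            continue
        seen.add(cls)
        # Follow every asserted superclass of cls.
        frontier.extend(s for (c, s) in axioms if c == cls)
    return False

# Original hierarchy: Giraffe ⊑ Herbivore ⊑ Mammal ⊑ Animal
axioms = {("Giraffe", "Herbivore"), ("Herbivore", "Mammal"), ("Mammal", "Animal")}
assert entails_subclass(axioms, "Giraffe", "Mammal")  # the test passes

# The author changes Herbivore to be a subclass only of Animal...
axioms = {("Giraffe", "Herbivore"), ("Herbivore", "Animal"), ("Mammal", "Animal")}
# ...and the test now catches the unintended consequence:
assert not entails_subclass(axioms, "Giraffe", "Mammal")
```

The assertion that $\texttt{Giraffe} \sqsubseteq \texttt{Mammal}$ plays the role of the automated test: it holds before the edit and fails after it, flagging the regression without the author having to anticipate it.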

Superficially, it might seem that this problem could be solved by adding the desired axioms directly to the ontology rather than writing tests for them. However, adding such axioms introduces redundancy, which makes the ontology harder to modify and can, in some circumstances, increase the complexity of reasoning. Adding only a test instead ensures correctness without bloating the ontology.