A logic-based reasoning system is a software system that generates conclusions that are consistent with a knowledge base (KB).
This KB consists of a set of propositions that are interpreted to be true.
This section outlines the objectives of our project. The project comprises a theoretical component and a practical component.
With the overarching goal of improved readability and more effective explanations, two methods are considered. The first method allows the creator of an ontology to define a custom explanation for an axiom, which is then displayed whenever that axiom is presented to the user. The second method expands the keywords used in the axiom into more natural language.
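The first method can be sketched as a simple lookup: a creator-supplied explanation is shown when one exists, and the axiom itself is shown otherwise. This is a minimal illustration, not the project's implementation; the axiom strings and explanation text are hypothetical.

```python
# Hypothetical sketch of method one: the ontology creator attaches a
# hand-written explanation to an axiom, and the system displays it
# whenever that axiom needs to be explained.
custom_explanations = {
    "Dog SubClassOf Animal": "Every dog is an animal by definition.",
}

def explain(axiom: str) -> str:
    # Fall back to displaying the raw axiom when no custom text exists.
    return custom_explanations.get(axiom, axiom)
```

In practice such explanations could be stored as annotations on the axioms themselves, so they travel with the ontology.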
The theoretical component of this project develops a framework for generating explanations, as shown in the following figure.
For the practical part of the project, the Explanation Workbench plugin of the Protégé OWL Ontology Editor was extended to provide more readable and effective explanations. This was achieved by allowing ontology creators to define an explanation for a specific axiom, and by expanding the Manchester Syntax keywords (as used by Protégé) into more natural language.
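The keyword-expansion idea amounts to substituting each Manchester Syntax keyword with a natural-language phrase. The sketch below illustrates this with a small, assumed substitution table; the actual phrasings chosen by the project may differ.

```python
# Illustrative mapping from Manchester Syntax keywords to more
# natural English phrases. These expansions are assumptions for
# demonstration, not the project's actual substitution table.
KEYWORD_EXPANSIONS = {
    "SubClassOf": "is a kind of",
    "EquivalentTo": "is the same as",
    "DisjointWith": "has nothing in common with",
    "some": "at least one",
    "only": "nothing other than",
}

def expand_keywords(axiom: str) -> str:
    """Replace each Manchester Syntax keyword in the axiom with a
    natural-language phrase, leaving class and property names intact."""
    return " ".join(KEYWORD_EXPANSIONS.get(token, token)
                    for token in axiom.split())
```

A call such as `expand_keywords("Dog SubClassOf Animal")` would then yield "Dog is a kind of Animal". A real implementation would operate on the parsed axiom rather than on whitespace-separated tokens, to avoid rewriting entity names that happen to match keywords.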
We defined and provided a framework for generating explanations for entailments in ALC DL ontologies. This framework has limitations: the algorithms presented here are not optimised, converting explanations to English remains difficult given the complexity and ambiguity of natural language, and the framework does not assess the user's understanding of the explanations. However, we curate the state of the art in explanation generation and provide a thorough summary thereof.
On the practical side, we provide a framework for ontology creators to define detailed explanations for particular axioms. We also improve the understandability and effectiveness of the default explanations by expanding the Manchester Syntax keywords into more natural language. This serves as a solid baseline for future work on improving explanations.