Reasoning is an integral part of human life. It is well documented that human reasoning fails to conform to the prescriptions of classical logics such as propositional logic. An important question in the artificial intelligence community is how to represent human reasoning accurately. Non-classical, non-monotonic logics are flexible by nature, and non-monotonic reasoning approaches have been developed to model human reasoning. The problem, however, is that non-monotonic reasoning schemes have been developed for and tested on computers, not on humans. We propose to determine the extent to which forms of non-monotonic reasoning correspond with human reasoning.
A reasoning agent may draw an inference based on the information at hand; that inference, however, is not absolute. When presented with additional information, the agent may strengthen or withdraw the original inference.
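As a concrete illustration, here is a minimal Python sketch of this behaviour using the classic "Tweety" example. The knowledge-base encoding (predicate-individual pairs in a set) is our own toy construction, not the API of any particular reasoner.

    def flies(kb, animal):
        # Default rule: birds typically fly, unless we know of an exception.
        if ("penguin", animal) in kb:   # an exception defeats the default
            return False
        return ("bird", animal) in kb   # tentative, defeasible conclusion

    kb = {("bird", "tweety")}
    print(flies(kb, "tweety"))          # True: inferred by default

    kb.add(("penguin", "tweety"))       # additional information arrives
    print(flies(kb, "tweety"))          # False: the earlier inference is withdrawn

The key non-monotonic feature is that adding a fact to the knowledge base can remove a conclusion, which is impossible in classical logic.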
When an agent learns information that conflicts with its beliefs, its prior knowledge must have been flawed. The agent retracts the affected conclusions and draws new ones from what it explicitly knows, aiming for a minimal change to its beliefs.
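The sketch below illustrates this for the simplest possible case, a belief base of literals. Retracting only the directly conflicting literal is a deliberately naive stand-in for the minimal-change principle of full AGM-style revision; the string encoding of literals is our own.

    def negate(lit):
        # "p" and "~p" are each other's negations.
        return lit[1:] if lit.startswith("~") else "~" + lit

    def revise(beliefs, new):
        # Retract only the belief that directly conflicts with the new
        # information (minimal change), then accept the new information.
        return (beliefs - {negate(new)}) | {new}

    beliefs = {"bird(tweety)", "flies(tweety)"}
    beliefs = revise(beliefs, "~flies(tweety)")   # a conflicting observation
    print(sorted(beliefs))    # ['bird(tweety)', '~flies(tweety)']

Note that the unrelated belief bird(tweety) survives the revision untouched; only the conclusion contradicted by the new information is given up.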
Belief update is a variant of belief revision. The distinction is that belief revision handles conflicting information about an unchanging world, whereas belief update handles information about changes in the world itself.
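The contrast can be made precise with a standard possible-worlds semantics in the style of Katsuno and Mendelzon: revision keeps the models of the new information closest to the belief set as a whole, while update moves each belief world individually. The following Python sketch, with worlds encoded as bit tuples and Hamming distance as the measure of closeness, is our own illustrative construction.

    from itertools import product

    # Worlds assign truth values to (book_on_table, magazine_on_table).
    WORLDS = list(product([0, 1], repeat=2))

    def dist(w, v):
        # Hamming distance: how many atoms differ between two worlds.
        return sum(a != b for a, b in zip(w, v))

    def revise(belief, phi):
        # Revision (Dalal-style): keep the phi-worlds globally closest
        # to ANY world in the current belief set.
        models = [w for w in WORLDS if phi(w)]
        best = min(min(dist(w, v) for v in belief) for w in models)
        return {w for w in models if min(dist(w, v) for v in belief) == best}

    def update(belief, phi):
        # Update (Katsuno-Mendelzon style): move EACH belief world to
        # its own closest phi-worlds, then take the union of the results.
        models = [w for w in WORLDS if phi(w)]
        result = set()
        for v in belief:
            best = min(dist(w, v) for w in models)
            result |= {w for w in models if dist(w, v) == best}
        return result

    def book_on_table(w):
        return w[0] == 1

    # We believe exactly one of the book and the magazine is on the table.
    belief = {(1, 0), (0, 1)}

    print(revise(belief, book_on_table))   # {(1, 0)}: the world did not change,
                                           # so the magazine cannot be on the table
    print(update(belief, book_on_table))   # {(1, 0), (1, 1)}: the world changed,
                                           # so the magazine's status is now open

The two operators disagree on the same input: revision treats "the book is on the table" as a correction about a static world, while update treats it as the result of an action, so it no longer commits to where the magazine is.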