Belief Update

Outline of Belief Update

Say that you initially hold a certain set of beliefs, but later come into contact with new information that contradicts one or more of those beliefs. Propositional logic, a paradigm way of modelling such beliefs, provides no guidance as to what you should now believe. To answer this question, propositional logic must be extended. The AGM approach to belief revision and the KM approach to belief update are two separate approaches to such an extension. The distinction between the two is that belief revision is taken to be appropriate when learning new information about an unchanging world, while belief update is appropriate when learning that the world itself has changed.
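The distinction can be made concrete with a small sketch. The code below is a hypothetical illustration, not part of the project: it implements one standard instance of a KM update operator, the pointwise operator that moves each model of the old beliefs to its Hamming-closest models of the new information. The atom names and example formulas are assumptions chosen for illustration.

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments over `atoms` that satisfy `formula` (a predicate)."""
    worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]
    return [w for w in worlds if formula(w)]

def hamming(w1, w2):
    """Number of atoms on which two assignments disagree."""
    return sum(w1[a] != w2[a] for a in w1)

def update(old_models, new_models):
    """Pointwise (KM-style) update: each old world moves to its closest new worlds."""
    result = []
    for w in old_models:
        closest = min(hamming(w, v) for v in new_models)
        for v in new_models:
            if hamming(w, v) == closest and v not in result:
                result.append(v)
    return result

# Old belief: exactly one of b, m is true.  New information: b is true.
atoms = ["b", "m"]
old = models(lambda w: w["b"] != w["m"], atoms)
new = models(lambda w: w["b"], atoms)
updated = update(old, new)
# Every updated world satisfies the new information, but m stays undetermined:
# the world with (not b, m) changed minimally into (b, m).
```

By contrast, revising with b here (minimizing distance over the belief set as a whole) would keep only the single overall-closest world, (b, not m); update also retains (b, m), reflecting that the world itself may have changed.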

Project Purpose

In the AI community, the relevance of non-classical extensions to propositional logic, such as the KM approach to belief update, is often held to be that they model a certain flexibility found in human reasoning, a flexibility that the AI community would like to incorporate into its own work. However, the rules governing such extensions are often proposed simply on the grounds that they seem reasonable. Such is the case with the KM approach to belief update, which proposes eight properties (postulates) governing the non-classical part of the logic. The purpose of this part of the project was to test the extent to which these postulates conform to human reasoning.
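For orientation, the eight postulates (U1–U8), as standardly formulated by Katsuno and Mendelzon, can be sketched as below, writing φ ⋄ μ for the update of belief base φ by new information μ. The formal notation is supplied here for reference only; the survey itself used non-formal renditions.

```latex
% The eight KM update postulates (U1-U8), where \varphi \diamond \mu
% denotes the update of belief base \varphi by formula \mu.
\begin{align*}
&\text{(U1)}\quad \varphi \diamond \mu \models \mu\\
&\text{(U2)}\quad \text{if } \varphi \models \mu \text{, then } \varphi \diamond \mu \equiv \varphi\\
&\text{(U3)}\quad \text{if } \varphi \text{ and } \mu \text{ are both satisfiable, then } \varphi \diamond \mu \text{ is satisfiable}\\
&\text{(U4)}\quad \text{if } \varphi_1 \equiv \varphi_2 \text{ and } \mu_1 \equiv \mu_2 \text{, then } \varphi_1 \diamond \mu_1 \equiv \varphi_2 \diamond \mu_2\\
&\text{(U5)}\quad (\varphi \diamond \mu) \wedge \psi \models \varphi \diamond (\mu \wedge \psi)\\
&\text{(U6)}\quad \text{if } \varphi \diamond \mu_1 \models \mu_2 \text{ and } \varphi \diamond \mu_2 \models \mu_1 \text{, then } \varphi \diamond \mu_1 \equiv \varphi \diamond \mu_2\\
&\text{(U7)}\quad \text{if } \varphi \text{ is complete, then } (\varphi \diamond \mu_1) \wedge (\varphi \diamond \mu_2) \models \varphi \diamond (\mu_1 \vee \mu_2)\\
&\text{(U8)}\quad (\varphi_1 \vee \varphi_2) \diamond \mu \equiv (\varphi_1 \diamond \mu) \vee (\varphi_2 \diamond \mu)
\end{align*}
```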

Methodology

A survey was developed using Google Forms and completed by workers on Amazon's Mechanical Turk. Respondents were asked to complete three tasks:

  1. Rate their agreement with a non-formal rendition of each of the eight postulates on a five-point Likert scale.
  2. Indicate whether they agreed with examples that modelled each of the postulates, and provide a reason for their choice.
  3. Indicate whether they agreed with counter-examples to the postulates, and provide a reason for their choice.

Results and Conclusions

  1. Six of the eight postulates received a median agreement rating above neutral.
  2. All of the confirming examples of the postulates saw agreement rates over 50%. On the qualitative side, participants generally reasoned as if they had the postulates in mind.
  3. Four of the counter-examples to the postulates saw agreement rates over 50%. Qualitatively, participants reasoned as the theory would predict.

The results provide some indication that the postulates are a good fit with human reasoning. Importantly, the reasons given for agreement with the confirming examples provide some evidence of a more general trend in reasoning, beyond the specific examples alone. However, the results for the third task indicate that exceptions to the postulates exist.