To develop artificial agents that understand social interactions at a near-human level, these agents must be equipped with an artificial Theory of Mind: the ability to infer the mental states of others. Developing such an artificial Theory of Mind, however, is highly difficult, because Theory of Mind is an ambiguous and multifaceted concept: several distinct mechanisms are associated with it, and it is assessed using many different tasks. In this thesis, we formalize which mechanisms constitute Theory of Mind and establish how these mechanisms can be represented using artificial intelligence. Furthermore, we evaluate whether current artificial Theory of Mind models can reason effectively about these mechanisms. We do this by creating Theory of Mind tasks for artificial models and evaluating the models' performance on them, which allows us to provide recommendations for the development of future artificial Theory of Mind models.