Trust and Cooperation

Kuipers, Benjamin (2022) Trust and Cooperation. Frontiers in Robotics and AI, 9. ISSN 2296-9144

Text: pubmed-zip/versions/1/package-entries/frobt-09-676767/frobt-09-676767.pdf - Published Version (1 MB)

Abstract

We AI researchers are concerned about the potential impact of artificially intelligent systems on humanity. In the first half of this essay, I argue that ethics is an evolved body of cultural knowledge that (among other things) encourages individual behavior that promotes the welfare of the society (which in turn promotes the welfare of its individual members). The causal paths involved suggest that trust and cooperation play key roles in this process. In the second half of the essay, I consider whether the key role of trust exposes our society to existential threats. This possibility arises because decision-making agents (humans, AIs, and others) necessarily rely on simplified models to cope with the unbounded complexity of our physical and social world. By selecting actions to maximize a utility measure, a well-formulated game theory model can be a powerful and valuable tool. However, a poorly formulated game theory model may be uniquely harmful, in cases where the action it recommends deliberately exploits the vulnerability and violates the trust of cooperative partners. Widespread use of such models can erode the overall levels of trust in the society. Cooperation is reduced, resources are constrained, and there is less ability to meet challenges or take advantage of opportunities. Loss of trust will affect humanity’s ability to respond to existential threats such as climate change.
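
The abstract's point about utility-maximizing game theory models can be made concrete with a minimal sketch (not taken from the paper; the payoff values, action names, and function are hypothetical illustrations): a one-shot Prisoner's Dilemma in Python, in which an agent that maximizes only its own utility against a trusting, cooperative partner is led to defect, even though mutual cooperation yields the higher joint payoff.

# Illustrative sketch only; payoffs are hypothetical, not from the paper.
# PAYOFFS[(my_action, partner_action)] = (my_utility, partner_utility)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # mutual trust honored
    ("cooperate", "defect"):    (0, 5),   # my trust exploited
    ("defect",    "cooperate"): (5, 0),   # I exploit my partner's trust
    ("defect",    "defect"):    (1, 1),   # trust has eroded on both sides
}

def best_response(partner_action):
    """Pick the action maximizing my own utility, ignoring the partner's welfare."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, partner_action)][0])

if __name__ == "__main__":
    # Against a cooperative (trusting) partner, narrow utility maximization
    # recommends defection -- the kind of exploitation the abstract warns about.
    print(best_response("cooperate"))                  # -> "defect"
    # Yet joint utility is highest under mutual cooperation:
    print(sum(PAYOFFS[("cooperate", "cooperate")]))    # 6, versus 5 or 2

The gap between the individually "rational" recommendation (defect) and the socially better outcome (mutual cooperation) is one simple way to picture how widespread use of poorly formulated models could erode trust and cooperation.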

Item Type: Article
Subjects: Library Keep > Mathematical Science
Depositing User: Unnamed user with email support@librarykeep.com
Date Deposited: 22 Jun 2023 08:44
Last Modified: 20 Nov 2023 05:19
URI: http://archive.jibiology.com/id/eprint/1221
