Philosophy Faculty Publications and Presentations

Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency

Document Type

Book Chapter

Publication Date

10-27-2017

Abstract

This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for the AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” The AAMA is able to read the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns discovered therein. The present model is grounded in several analogies among artificial cognition, human cognition, and moral action. It is premised on the idea that moral agents should not be built on rule-following procedures, but should instead learn patterns from data. This idea is rarely implemented in AAMA models, although it has been suggested in the machine ethics literature (W. Wallach, C. Allen, J. Gips, and especially M. Guarini). As an agent-based model, this AAMA constitutes an alternative to the mainstream action-centric models proposed by K. Abney, M. Anderson and S. Anderson, R. Arkin, T. Powers, W. Wallach, i.a. Moral learning and the moral development of dispositional traits play a fundamental role in cognition here. By using a combination of neural networks and evolutionary computation, called “soft computing” (H. Adeli, N. Siddique, S. Mitra, L. Zadeh), the present model attains a level of autonomy and complexity that illustrates “moral particularism” and a form of virtue ethics for machines, grounded in active learning. An example derived from the “lifeboat metaphor” (G. Hardin) and the extension of this model to the NEAT architecture (K. Stanley, R. Miikkulainen, i.a.) are briefly assessed.

Comments

© 2017 Springer International Publishing AG

https://rdcu.be/eAU0o

Publication Title

Philosophy and Computing

DOI

10.1007/978-3-319-61043-6_7
