Moral Judgments in the Age of Artificial Intelligence

  • Original Paper
  • Journal of Business Ethics

Abstract

The current research aims to answer the following question: "Who will be held responsible for harm involving an artificial intelligence (AI) system?" Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to the different entities involved in the AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the theory of mind perception, we hypothesized that two dimensions of mind mediate the relationship between perceived intentional harm and blame judgments toward AI: perceived agency (attributing intention, reasoning, goal pursuit, and communication to AI) and perceived experience (attributing emotional states, such as the capacity to feel pain and pleasure, personality, and consciousness to AI). We also predicted that people attribute more mind characteristics to AI when harm is perceived to be directed at humans than when it is perceived to be directed at non-humans. We tested our research model in three experiments. In all three, perceived intentional harm led to blame judgments toward AI. In two experiments, perceived experience, not perceived agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not increase attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.
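To make the mediation claim concrete for readers less familiar with the statistics, the sketch below shows, on simulated data, how an indirect effect of the kind hypothesized here (perceived intentional harm acting on blame toward AI through perceived experience) could be estimated with ordinary least squares and a percentile bootstrap. All variable names, effect sizes, and data in the sketch are illustrative assumptions; it is not the authors' code or analysis.

```python
# A minimal, hypothetical sketch of the kind of mediation test described in the
# abstract: does perceived experience carry the effect of perceived intentional
# harm on blame judgments toward AI? Simulated data; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
harm = rng.integers(0, 2, n).astype(float)                  # 0 = unintentional, 1 = intentional (manipulated)
experience = 0.5 * harm + rng.normal(size=n)                # perceived experience (proposed mediator)
blame = 0.4 * experience + 0.2 * harm + rng.normal(size=n)  # blame judgment toward the AI (outcome)

def indirect_effect(x, m, y):
    """a*b: path a (x -> mediator) times path b (mediator -> outcome, controlling for x)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = [indirect_effect(harm[i], experience[i], blame[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(harm, experience, blame):.3f}, "
      f"95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A confidence interval excluding zero would be consistent with mediation through perceived experience; in the paper's design, the same logic is applied separately to perceived agency and perceived experience as competing mediators.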



Author information

Corresponding author

Correspondence to Samuel Fosso Wamba.

Ethics declarations

Conflict of interest

We have no conflicts of interest to disclose.

Ethical Approval

The three studies were granted exemption by the first author’s home institution according to federal regulation 45 CFR 46.104(d)(2): Research involving the use of educational tests, survey procedures, interview procedures, or observation of public behavior. We certify that all procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the studies.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sullivan, Y., Fosso Wamba, S. Moral Judgments in the Age of Artificial Intelligence. J Bus Ethics 178, 917–943 (2022). https://doi.org/10.1007/s10551-022-05053-w
