A new study suggests that users may not be aware of just how much their personal moral judgements are influenced by statements written by the AI chatbot ChatGPT.
In the peer-reviewed study, published in Scientific Reports, researchers tested how the chatbot handles one of humanity's classic moral dilemmas. The scientists repeatedly asked ChatGPT whether it is ethical to sacrifice one life in order to save five others, with unexpected results.
The researchers found that ChatGPT produced statements arguing both for and against sacrificing a life, indicating that it holds no consistent stance on the question. The authors then presented 767 US-based participants, with an average age of 39, with the same moral dilemma and asked each to state their own position.
How were moral responses of participants measured in this study?
Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. The statements were attributed either to a human moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answer, to shed light on their thought processes.
The researchers found that the statements participants read did influence their answers. This held even when the statements were explicitly attributed to ChatGPT rather than to a human advisor, meaning that knowing the advice came from a chatbot did not weaken its effect.
Although 80% of participants reported that their answers were not influenced by the statements they read, the researchers were skeptical of this self-assessment. The data showed that participants were still more likely to adopt the stance of the statement they had read than the opposing one, suggesting that they underestimated the influence of ChatGPT's statements on their own moral judgement.
The power of chatbots to subtly influence human moral judgement makes one thing clear: more education is needed to help people understand the capabilities and limits of artificial intelligence. The researchers hope that continued work on the problem will eventually make it possible to design chatbots that decline to answer questions requiring moral judgement.