On the Effectiveness and Fragility of Artificial Apologies
- Date: May 12, 2025
- Time: 11:00 AM (Local Time Germany)
- Speaker: Anne-Marie Nußberger (MPI Bildungsforschung)
- Room: Basement
Narratives play a crucial role in shaping human experiences and social interactions. We examine how Large Language Models (LLMs) influence the generation and reception of social narratives, particularly apologies. Comparing human-generated and LLM-generated apologies for fictitious situations (Study 1A; N_human = 18; N_LLM = 6, corresponding to six prompt iterations with GPT-4) using text analysis and dimensionality-reduction techniques, we observe that human and LLM apologies form dissociable clusters and that LLM apologies more frequently deploy effective apology strategies. These differences in apology content translate into differences in effectiveness: incentivized evaluations by N = 1,772 participants show that LLM apologies tend to elicit greater forgiveness than human ones, but only when their origin is undisclosed. When their origin is disclosed, their effectiveness falls below that of human apologies. Our findings suggest that involving LLMs in the creation of social narratives like apologies may lead to a generalized devaluation of these narratives, potentially affecting the dynamics of social trust.