The integration of generative AI (genAI) into daily life raises questions about its impact on mental health, particularly for vulnerable populations. While genAI systems are generally seen as supportive and therapeutic, there are growing concerns about potential links to psychotic symptoms in individuals with mental disorders.
“AI psychosis” is a term used to describe psychotic symptoms influenced by interactions with AI, such as beliefs that genAI is sentient or is controlling one's thoughts. As with other delusional themes, the content of these beliefs draws on the surrounding culture, and interactions with genAI may exacerbate delusions, particularly in individuals with impaired reality testing.
Because conversational AI is designed to personalize and adapt its responses, it may inadvertently reinforce distorted beliefs in susceptible users. Moreover, while genAI can help alleviate loneliness, it may come to replace genuine human interaction, especially for people who are already socially isolated.
Currently, there is no definitive evidence that AI directly causes psychosis; rather, it may act as a contributing factor. Most developers do not account for severe mental illness in AI design, focusing instead on self-harm prevention. This gap raises ethical questions about responsibility when AI interactions reinforce delusions.
Mental health experts emphasize the need to approach AI cautiously, recognizing its unique risks for mental health. Collaboration among clinicians, researchers, and engineers is crucial to ensure that AI does not inadvertently harm vulnerable users, and to address the psychosocial implications of increasingly human-like AI interactions.